Tarantool development patches archive
* [tarantool-patches] [PATCH v2 0/2] SWIM log TTD, and TTL
@ 2019-06-30 19:58 Vladislav Shpilevoy
  2019-06-30 19:58 ` [tarantool-patches] [PATCH v2 1/2] swim: disseminate event for log(cluster_size) steps Vladislav Shpilevoy
  2019-06-30 19:58 ` [tarantool-patches] [PATCH v2 2/2] [RAW] swim: send 'left' and 'dead' to limbo before deletion Vladislav Shpilevoy
  0 siblings, 2 replies; 3+ messages in thread
From: Vladislav Shpilevoy @ 2019-06-30 19:58 UTC (permalink / raw)
  To: tarantool-patches

This is the first patchset aimed at making SWIM more usable in big clusters. Most
of the problems arise from an incorrect interpretation of the original SWIM paper
and from the adaptation of its extensions.

The first commit makes SWIM conform to the original paper and disseminate each
event log(cluster_size) times instead of cluster_size times. After it a problem
surfaces: dead and left members are deleted too early when GC is on. Strictly
speaking, the problem existed even before the patch, but now it is more visible.

The second commit tries to solve it by keeping dead and left members in a limbo
queue until the whole cluster is aware of their death. It solves the problem
partially, but doesn't work when heartbeat_rate is too small. See the commit
message for an explanation.

However, there is another solution that avoids struggling with TTL and the
limbo queue altogether. I've sent it in another thread.

V1: https://www.freelists.org/post/tarantool-patches/PATCH-02-SWIM-big-cluster-improvements-part-1
Changes in V2:
- Drop commit about preference of new events over old ones;
- Split logarithmic TTD and TTL + limbo queue in separate commits;

Branch: http://github.com/tarantool/tarantool/tree/gerold103/gh-4253-dissemination-speed
Issue: https://github.com/tarantool/tarantool/issues/4253

Vladislav Shpilevoy (2):
  swim: disseminate event for log(cluster_size) steps
  [RAW] swim: send 'left' and 'dead' to limbo before deletion

 src/lib/swim/swim.c   | 117 ++++++++++++++++++++++++++++++++++++++++--
 test/unit/swim.c      |  74 ++++++++++++++++++++++++--
 test/unit/swim.result |   8 ++-
 3 files changed, 190 insertions(+), 9 deletions(-)

-- 
2.20.1 (Apple Git-117)

* [tarantool-patches] [PATCH v2 1/2] swim: disseminate event for log(cluster_size) steps
  2019-06-30 19:58 [tarantool-patches] [PATCH v2 0/2] SWIM log TTD, and TTL Vladislav Shpilevoy
@ 2019-06-30 19:58 ` Vladislav Shpilevoy
  2019-06-30 19:58 ` [tarantool-patches] [PATCH v2 2/2] [RAW] swim: send 'left' and 'dead' to limbo before deletion Vladislav Shpilevoy
  1 sibling, 0 replies; 3+ messages in thread
From: Vladislav Shpilevoy @ 2019-06-30 19:58 UTC (permalink / raw)
  To: tarantool-patches

Before the patch there was a problem of event and anti-entropy
starvation: a cluster could generate so many events that they
consumed the whole UDP packet. A packet fits up to 26 events. If
something important happened during such an event storm, that
event was likely to be lost and not disseminated until the storm
was over.

Sadly, there is no way to prevent a storm, but it can be made
much shorter. For that the patch makes the event TTD logarithmic
instead of linear in the cluster size.

According to the SWIM paper and to experiments, a logarithm is
really enough. Linear TTD was redundant overkill.
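
To make the difference concrete, here is a tiny standalone sketch
(mine, not part of the patch) comparing the two formulas; the
logarithmic one mirrors the ceil(log2(...)) expression from the
diff below:

#include <math.h>
#include <stdio.h>

int
main(void)
{
	/* Event TTD for a few cluster sizes: linear vs. logarithmic. */
	const int sizes[] = {10, 100, 1000};
	for (int i = 0; i < 3; ++i) {
		int n = sizes[i];
		printf("size %4d: linear TTD = %4d, log TTD = %d\n",
		       n, n, (int) ceil(log2(n)));
	}
	return 0;
}

For 100 members it prints a TTD of 7 instead of 100, which is why
a packet full of events drains an order of magnitude faster.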

Shorter-living events do not solve the starvation problem
completely - some of them can still be lost in case of a storm.
But it frees some space for anti-entropy, which can finish the
dissemination of the lost events.

Experiments in a simulation of a cluster with 100 nodes showed
that failure dissemination took ~110 steps when there was a
storm. Basically, no dissemination at all.

After the patch it takes ~20 steps. So it is logarithmic, as it
should be, although with a bigger constant than without a storm.

However, the patch can't be pushed as is. Several tests are
broken now, because due to the short TTD 'dead' members are
dropped from member tables too early. It creates a possibility
that another node resurrects them via anti-entropy.

Part of #4253
---
 src/lib/swim/swim.c | 19 ++++++++++++-
 test/unit/swim.c    | 68 ++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 85 insertions(+), 2 deletions(-)

diff --git a/src/lib/swim/swim.c b/src/lib/swim/swim.c
index bb9e9f519..9647094f5 100644
--- a/src/lib/swim/swim.c
+++ b/src/lib/swim/swim.c
@@ -616,7 +616,24 @@ swim_register_event(struct swim *swim, struct swim_member *member)
 		rlist_add_tail_entry(&swim->dissemination_queue, member,
 				     in_dissemination_queue);
 	}
-	member->status_ttd = mh_size(swim->members);
+	/*
+	 * A logarithm is the perfect number of disseminations
+	 * of an event.
+	 *
+	 * Firstly, it matches the dissemination speed.
+	 *
+	 * Secondly, a bigger number of disseminations (for
+	 * example, linear) causes event and anti-entropy
+	 * starvation in big clusters, when lots of events
+	 * occupy the whole UDP packet and effectively the same
+	 * content is sent for a long time. No randomness.
+	 * Anti-entropy gets no chance to disseminate something
+	 * new and random. Bigger orders are redundant and harmful.
+	 *
+	 * Thirdly, the original SWIM paper proves the
+	 * logarithm to be the best option.
+	 */
+	member->status_ttd = ceil(log2(mh_size(swim->members)));
 	swim_cached_round_msg_invalidate(swim);
 }
 
diff --git a/test/unit/swim.c b/test/unit/swim.c
index 3486d3f73..d776a8231 100644
--- a/test/unit/swim.c
+++ b/test/unit/swim.c
@@ -43,6 +43,7 @@
 #include "swim_test_utils.h"
 #include "trigger.h"
 #include <fcntl.h>
+#include <math.h>
 
 /**
  * Test result is a real returned value of main_f. Fiber_join can
@@ -1152,10 +1153,74 @@ swim_test_generation(void)
 	swim_finish_test();
 }
 
+static void
+swim_test_dissemination_speed(void)
+{
+	swim_start_test(2);
+
+	int size = 100;
+	double ack_timeout = 0.1;
+	struct swim_cluster *cluster = swim_cluster_new(size);
+	swim_cluster_set_ack_timeout(cluster, ack_timeout);
+	swim_cluster_set_gc(cluster, SWIM_GC_OFF);
+	for (int i = 0; i < size; ++i) {
+		for (int j = i + 1; j < size; ++j)
+			swim_cluster_interconnect(cluster, i, j);
+	}
+	swim_cluster_set_drop(cluster, 0, 100);
+	fail_if(swim_cluster_wait_status_anywhere(cluster, 0,
+						  MEMBER_DEAD, size) != 0);
+	/*
+	 * Not a trivial problem - at the start of a cluster
+	 * there are so many events that they occupy a UDP packet
+	 * fully. All these events are 'a new member is added'.
+	 * Because of that, other much more important events
+	 * starve. In this concrete test the new event 'member is
+	 * dead' starves. To beat that problem SWIM says that
+	 * events should be disseminated for no longer than
+	 * O(log) round steps. In such a case all the events
+	 * expire quite fast, and anti-entropy swiftly finishes
+	 * the job. Usually this test works in log * 2, log * 3
+	 * steps. Here it is log * 6 to avoid flakiness in some
+	 * extra rare and slow random cases, while still checking
+	 * for O(log) speed.
+	 */
+	is(swim_cluster_wait_status_everywhere(cluster, 0, MEMBER_DEAD,
+					       log2(size) * 6), 0,
+	   "dissemination works in log time even at the very start of a cluster");
+	swim_cluster_set_drop(cluster, 0, 0);
+	fail_if(swim_cluster_wait_status_everywhere(cluster, 0,
+						    MEMBER_ALIVE, size) != 0);
+	/*
+	/*
+	 * Another big-cluster case. Assume that something
+	 * happened and all the members generated an event - for
+	 * example, changed their payload. It creates a storm of
+	 * events, among which some important ones can be lost,
+	 * such as a failure detection. The only solution again is
+	 * to make the events as short-living as possible in order
+	 * to free space in a UDP packet faster for other events
+	 * and for anti-entropy. The test below proves that even
+	 * during an event storm, failure dissemination still
+	 * works in O(log) time.
+	 */
+	swim_cluster_set_drop(cluster, 0, 100);
+	fail_if(swim_cluster_wait_status_anywhere(cluster, 0,
+						  MEMBER_DEAD, size) != 0);
+	for (int i = 0; i < size; ++i)
+		swim_cluster_member_set_payload(cluster, i, "", 0);
+	is(swim_cluster_wait_status_everywhere(cluster, 0, MEMBER_DEAD,
+					       log2(size) * 6), 0,
+	   "dissemination can withstand an event storm");
+
+	swim_cluster_delete(cluster);
+
+	swim_finish_test();
+}
+
 static int
 main_f(va_list ap)
 {
-	swim_start_test(22);
+	swim_start_test(23);
 
 	(void) ap;
 	swim_test_ev_init();
@@ -1183,6 +1248,7 @@ main_f(va_list ap)
 	swim_test_slow_net();
 	swim_test_triggers();
 	swim_test_generation();
+	swim_test_dissemination_speed();
 
 	swim_test_transport_free();
 	swim_test_ev_free();
-- 
2.20.1 (Apple Git-117)

* [tarantool-patches] [PATCH v2 2/2] [RAW] swim: send 'left' and 'dead' to limbo before deletion
  2019-06-30 19:58 [tarantool-patches] [PATCH v2 0/2] SWIM log TTD, and TTL Vladislav Shpilevoy
  2019-06-30 19:58 ` [tarantool-patches] [PATCH v2 1/2] swim: disseminate event for log(cluster_size) steps Vladislav Shpilevoy
@ 2019-06-30 19:58 ` Vladislav Shpilevoy
  1 sibling, 0 replies; 3+ messages in thread
From: Vladislav Shpilevoy @ 2019-06-30 19:58 UTC (permalink / raw)
  To: tarantool-patches

The previous commit solves one important problem: too long event
dissemination. Events could occupy the whole UDP packet for too
long. Now they live log() time, but deletion of 'dead' and 'left'
members was bound to TTD. Such members were deleted as soon as
their TTD reached 0.

Now they are deleted too early. Cluster nodes forget about dead
ones too soon, and nodes not aware of their death can
accidentally resurrect them via anti-entropy. There should be a
way to keep dead and left members until the whole cluster knows
about their death.

This patch decouples the TTL and TTD concepts. When a member is
learned to be dead or left, the event is disseminated log()
times, but the member is kept in the member table for C * log()
round steps. There the members wait until everyone knows about
their death with a very high probability.

C is chosen as 10 because, according to experiments, it is the
minimal value covering all the cases when a death event is
disseminated for too long, given that heartbeat_rate is 1 second.
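
For illustration, a minimal sketch (mine, not from the patch) of
the resulting TTL values; the MIN() keeps small clusters on the
old linear behavior, matching the formula in the diff below:

#include <math.h>
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

int
main(void)
{
	/* TTL as in the patch: linear for small clusters,
	 * 10 * log2(size) round steps for big ones. */
	const int sizes[] = {4, 16, 100, 250};
	for (int i = 0; i < 4; ++i) {
		int n = sizes[i];
		int ttl = MIN(n, (int) (10 * log2(n)));
		printf("size %4d -> TTL = %3d round steps\n", n, ttl);
	}
	return 0;
}

For a cluster of 250 it gives a TTL of ~79 round steps, well
above the worst observed dissemination time of ~30-40 steps.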

Unfortunately, even a TTL, even a big one, does not help when a
user sets the heartbeat rate to a too small value. For example,
if heartbeat_rate is 0.01 and the cluster is of size 100, then a
member can never be deleted automatically. Dead and left members
will live no longer than 1 second - not enough to disseminate
their death to everyone. A possible solution is to express TTL
in time instead of round steps. Then it wouldn't depend on the
heartbeat rate. But it still wouldn't eliminate the resurrection
probability.
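
A back-of-the-envelope check of that claim (the helper name is
mine, purely illustrative): the wall-clock TTL is the number of
round steps multiplied by the step duration, i.e. heartbeat_rate.

#include <math.h>

/* How long a dead member stays in the limbo queue, in seconds. */
static double
limbo_seconds(int cluster_size, double heartbeat_rate)
{
	double ttl_steps = fmin(cluster_size, 10 * log2(cluster_size));
	return ttl_steps * heartbeat_rate;
}

/*
 * limbo_seconds(100, 1.0)  ~= 66 s   - enough to spread the death;
 * limbo_seconds(100, 0.01) ~= 0.66 s - the death can't reach
 * everyone before the member is dropped.
 */

With heartbeat_rate = 1 the limbo lasts about a minute; with 0.01
it shrinks below a second, which matches the estimate above.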

A third solution could be the removal of automatic GC - it would
simplify many things. Besides, it seems like no one is going to
use it.

Part of #4253
---
 src/lib/swim/swim.c   | 98 +++++++++++++++++++++++++++++++++++++++++--
 test/unit/swim.c      |  6 +--
 test/unit/swim.result |  8 +++-
 3 files changed, 105 insertions(+), 7 deletions(-)

diff --git a/src/lib/swim/swim.c b/src/lib/swim/swim.c
index 9647094f5..04bf5f414 100644
--- a/src/lib/swim/swim.c
+++ b/src/lib/swim/swim.c
@@ -327,6 +327,19 @@ struct swim_member {
 	 * allow other members to learn the dead status.
 	 */
 	int status_ttd;
+	/**
+	 * Time To Live. When a member is dead, it can't be
+	 * deleted from the member table immediately. Otherwise
+	 * there is a real possibility that other members, not
+	 * aware of the death, will disseminate its alive state
+	 * back via anti-entropy. TTL is a way to keep a dead
+	 * member in the member table until the whole cluster
+	 * knows about the death with a very high probability,
+	 * almost 100%. Such members are stored in a limbo queue
+	 * for TTL round steps and deleted afterwards. The same
+	 * holds for 'left' members as well.
+	 */
+	int ttl;
 	/** Arbitrary user data, disseminated on each change. */
 	char *payload;
 	/** Payload size, in bytes. */
@@ -370,6 +383,13 @@ struct swim_member {
 	 * time.
 	 */
 	struct rlist in_dissemination_queue;
+	/**
+	 * Members sentenced for deletion are stored in a limbo
+	 * queue. They are kept there in order to wait until the
+	 * whole cluster knows about their death, so that nobody
+	 * tries to resurrect them via anti-entropy.
+	 */
+	struct rlist in_limbo_queue;
 	/**
 	 * Each time a member is updated, or created, or dropped,
 	 * it is added to an event queue. Members from this queue
@@ -513,6 +533,17 @@ struct swim {
 	 * as long as the event TTD is non-zero.
 	 */
 	struct rlist dissemination_queue;
+	/**
+	 * Queue of members waiting for their deletion. The
+	 * 'dead' and 'left' members can't be deleted immediately
+	 * after their new status is discovered and the event
+	 * about it is disseminated; otherwise they may be
+	 * resurrected back, via anti-entropy, by those who didn't
+	 * know about their death. Here the members wait until
+	 * the whole cluster is likely to know about their death,
+	 * and only then are deleted.
+	 */
+	struct rlist limbo_queue;
 	/**
 	 * Queue of updated, new, and dropped members to deliver
 	 * the events to triggers. Dropped members are also kept
@@ -612,6 +643,14 @@ swim_wait_ack(struct swim *swim, struct swim_member *member,
 static inline void
 swim_register_event(struct swim *swim, struct swim_member *member)
 {
+	/*
+	 * If a member scheduled for deletion got an event, then
+	 * it is alive. Somebody noticed an ack/ping from it, or
+	 * got its new payload, or something else. Deletion of
+	 * such members shall be canceled.
+	 */
+	if (! rlist_empty(&member->in_limbo_queue))
+		rlist_del_entry(member, in_limbo_queue);
 	if (rlist_empty(&member->in_dissemination_queue)) {
 		rlist_add_tail_entry(&swim->dissemination_queue, member,
 				     in_dissemination_queue);
@@ -815,6 +854,7 @@ swim_member_delete(struct swim_member *member)
 
 	/* Dissemination component. */
 	assert(rlist_empty(&member->in_dissemination_queue));
+	assert(rlist_empty(&member->in_limbo_queue));
 
 	swim_member_unref(member);
 }
@@ -847,10 +887,40 @@ swim_member_new(const struct sockaddr_in *addr, const struct tt_uuid *uuid,
 
 	/* Dissemination component. */
 	rlist_create(&member->in_dissemination_queue);
+	rlist_create(&member->in_limbo_queue);
 
 	return member;
 }
 
+/** Schedule member deletion after some number of steps. */
+static void
+swim_send_member_to_limbo(struct swim *swim, struct swim_member *member)
+{
+	assert(rlist_empty(&member->in_dissemination_queue));
+	assert(rlist_empty(&member->in_limbo_queue));
+	rlist_add_entry(&swim->limbo_queue, member, in_limbo_queue);
+	int size = mh_size(swim->members);
+	/*
+	 * A delay of a linear number of steps would always be
+	 * perfect in terms of protection against sudden
+	 * resurrection of dead members via anti-entropy. Even
+	 * more perfect is when GC is off. But these values lead
+	 * to a problem: too many round steps are spent on sending
+	 * pings to already 100% dead members, and they occupy too
+	 * big a part of the anti-entropy section. As a compromise,
+	 * a linear TTL is used here for small clusters, and for
+	 * big clusters - a value of the same order as the event
+	 * dissemination time, but with a much bigger constant.
+	 *
+	 * According to simulations, in a cluster of size 250 a
+	 * death event is disseminated in ~13 steps, rarely in
+	 * ~15 steps, and extremely rarely in ~30-40 steps.
+	 * Log2(250) is ~8, so 10 * log2(N) covers all these
+	 * cases, even the super rare ones.
+	 */
+	member->ttl = MIN(size, 10 * log2(size));
+}
+
 /**
  * Remove the member from all queues, hashes, destroy it and free
  * the memory.
@@ -873,6 +943,7 @@ swim_delete_member(struct swim *swim, struct swim_member *member)
 	/* Dissemination component. */
 	swim_on_member_update(swim, member, SWIM_EV_DROP);
 	rlist_del_entry(member, in_dissemination_queue);
+	rlist_del_entry(member, in_limbo_queue);
 
 	swim_member_delete(member);
 }
@@ -1164,6 +1235,22 @@ swim_encode_round_msg(struct swim *swim)
 	swim->is_round_packet_valid = true;
 }
 
+/**
+ * Decrement the TTL of limbo members and drop those whose TTL
+ * has expired. It should be done once after each round step.
+ */
+static void
+swim_decrease_member_ttl(struct swim *swim)
+{
+	struct swim_member *member, *tmp;
+	rlist_foreach_entry_safe(member, &swim->limbo_queue,
+				 in_limbo_queue, tmp) {
+		assert(member->ttl > 0);
+		if (--member->ttl == 0)
+			swim_delete_member(swim, member);
+	}
+}
+
 /**
  * Decrement TTDs of all events. It is done after each round step.
  * Note, since we decrement TTD of all events, even those which
@@ -1191,7 +1278,7 @@ swim_decrease_event_ttd(struct swim *swim)
 			rlist_del_entry(member, in_dissemination_queue);
 			swim_cached_round_msg_invalidate(swim);
 			if (member->status == MEMBER_LEFT)
-				swim_delete_member(swim, member);
+				swim_send_member_to_limbo(swim, member);
 		}
 	}
 }
@@ -1272,6 +1359,7 @@ swim_complete_step(struct swim_task *task,
 			 * sections.
 			 */
 			swim_wait_ack(swim, m, false);
+			swim_decrease_member_ttl(swim);
 			swim_decrease_event_ttd(swim);
 		}
 	}
@@ -1443,8 +1531,10 @@ swim_check_acks(struct ev_loop *loop, struct ev_timer *t, int events)
 			break;
 		case MEMBER_DEAD:
 			if (m->unacknowledged_pings >= NO_ACKS_TO_GC &&
-			    swim->gc_mode == SWIM_GC_ON && m->status_ttd == 0) {
-				swim_delete_member(swim, m);
+			    swim->gc_mode == SWIM_GC_ON &&
+			    m->status_ttd == 0 &&
+			    rlist_empty(&m->in_limbo_queue)) {
+				swim_send_member_to_limbo(swim, m);
 				continue;
 			}
 			break;
@@ -1902,6 +1992,7 @@ swim_new(uint64_t generation)
 
 	/* Dissemination component. */
 	rlist_create(&swim->dissemination_queue);
+	rlist_create(&swim->limbo_queue);
 	rlist_create(&swim->on_member_event);
 	stailq_create(&swim->event_queue);
 	swim->event_handler = fiber_new("SWIM event handler",
@@ -2215,6 +2306,7 @@ swim_delete(struct swim *swim)
 		if (! heap_node_is_stray(&m->in_wait_ack_heap))
 			wait_ack_heap_delete(&swim->wait_ack_heap, m);
 		rlist_del_entry(m, in_dissemination_queue);
+		rlist_del_entry(m, in_limbo_queue);
 		swim_member_delete(m);
 	}
 	/*
diff --git a/test/unit/swim.c b/test/unit/swim.c
index d776a8231..e5ea931a9 100644
--- a/test/unit/swim.c
+++ b/test/unit/swim.c
@@ -115,7 +115,7 @@ swim_test_uuid_update(void)
 	is(swim_cluster_update_uuid(cluster, 0, &new_uuid), 0, "UUID update");
 	is(swim_member_status(swim_member_by_uuid(s, &old_uuid)), MEMBER_LEFT,
 	   "old UUID is marked as 'left'");
-	swim_run_for(5);
+	swim_run_for(7);
 	is(swim_member_by_uuid(s, &old_uuid), NULL,
 	   "old UUID is dropped after a while");
 	ok(swim_cluster_is_fullmesh(cluster), "dropped everywhere");
@@ -589,7 +589,7 @@ swim_test_quit(void)
 	   "S3 sees S1 as left");
 	is(swim_cluster_member_status(cluster, 1, 0), swim_member_status_MAX,
 	   "S2 does not see S1 at all");
-	swim_run_for(2);
+	swim_run_for(3);
 	is(swim_cluster_member_status(cluster, 2, 0), swim_member_status_MAX,
 	   "after more time S1 is dropped from S3");
 	is(swim_cluster_member_status(cluster, 1, 0), swim_member_status_MAX,
@@ -1061,7 +1061,7 @@ swim_test_triggers(void)
 	is(tctx.ctx.events, SWIM_EV_NEW_STATUS, "status dead");
 
 	fail_if(swim_cluster_wait_status(cluster, 0, 1,
-					 swim_member_status_MAX, 2) != 0);
+					 swim_member_status_MAX, 4) != 0);
 	swim_cluster_run_triggers(cluster);
 	is(tctx.counter, 6, "drop fired a trigger");
 	is(tctx.ctx.events, SWIM_EV_DROP, "status dropped");
diff --git a/test/unit/swim.result b/test/unit/swim.result
index 04a2778e6..5f66200b0 100644
--- a/test/unit/swim.result
+++ b/test/unit/swim.result
@@ -1,5 +1,5 @@
 	*** main_f ***
-1..22
+1..23
 	*** swim_test_one_link ***
     1..6
     ok 1 - no rounds - no fullmesh
@@ -237,4 +237,10 @@ ok 21 - subtests
     ok 3 - S2 sees new generation of S1
 ok 22 - subtests
 	*** swim_test_generation: done ***
+	*** swim_test_dissemination_speed ***
+    1..2
+    ok 1 - dissemination works in log time even at the very start of a cluster
+    ok 2 - dissemination can withstand an event storm
+ok 23 - subtests
+	*** swim_test_dissemination_speed: done ***
 	*** main_f: done ***
-- 
2.20.1 (Apple Git-117)
