Tarantool development patches archive
* [PATCH 0/2] replication: don't stop syncing on configuration errors
@ 2018-09-23 15:31 Vladimir Davydov
  2018-09-23 15:31 ` [PATCH 1/2] replication: fix recoverable error reporting Vladimir Davydov
  2018-09-23 15:31 ` [PATCH 2/2] replication: don't stop syncing on configuration errors Vladimir Davydov
  0 siblings, 2 replies; 6+ messages in thread
From: Vladimir Davydov @ 2018-09-23 15:31 UTC (permalink / raw)
  To: kostja; +Cc: tarantool-patches

This patch set aims to resolve the issue described in #3636, where
restarting replication with the same replica set configuration leads
to an ER_CFG error and a transient orphan mode:

  replication = box.cfg.replication
  box.cfg{replication = {}}
  box.cfg{replication = replication} -- success
  box.info.status -- orphan! wait a sec and it will change
                  -- back to running
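
Until the fix lands, the only way around the transient orphan state is
to poll box.info.status, e.g. (a sketch; the sync test touched by this
series contains essentially the same loop):

  fiber = require('fiber')
  repeat fiber.sleep(0.01) until box.info.status ~= 'orphan'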

This issue also results in spurious replication-py/multi test failures
(see #3692):

  replication-py/multi.test.py                                [ fail ]
  
  Test failed! Result content mismatch:
  --- replication-py/multi.result Mon Sep 18 13:55:15 2017
  +++ replication-py/multi.reject Wed Sep 19 13:32:54 2018
  @@ -60,9 +60,9 @@
   done
  
   Check data
  -server 1 is ok
  -server 2 is ok
  -server 3 is ok
  +server 1 is not ok
  +server 2 is not ok
  +server 3 is not ok
   Done

The first patch of the series cleans up the error messages printed in
the above-mentioned case, which turned out to be very confusing, while
the second patch fixes the issue itself.

https://github.com/tarantool/tarantool/issues/3636
https://github.com/tarantool/tarantool/issues/3692
https://github.com/tarantool/tarantool/tree/dv/gh-3636-replication-dont-stop-sync-on-cfg-error

Vladimir Davydov (2):
  replication: fix recoverable error reporting
  replication: don't stop syncing on configuration errors

 src/box/applier.cc                | 69 ++++++++++++++++----------------
 src/box/box.cc                    |  6 +++
 src/box/relay.cc                  |  6 +--
 test/replication/sync.result      | 82 +++++++++++++++++++++++++++++++++++++--
 test/replication/sync.test.lua    | 35 ++++++++++++++++-
 test/replication/wal_off.result   |  9 +++++
 test/replication/wal_off.test.lua |  3 ++
 7 files changed, 167 insertions(+), 43 deletions(-)

-- 
2.11.0

* [PATCH 1/2] replication: fix recoverable error reporting
  2018-09-23 15:31 [PATCH 0/2] replication: don't stop syncing on configuration errors Vladimir Davydov
@ 2018-09-23 15:31 ` Vladimir Davydov
  2018-09-25 22:24   ` [tarantool-patches] " Konstantin Osipov
  2018-09-23 15:31 ` [PATCH 2/2] replication: don't stop syncing on configuration errors Vladimir Davydov
  1 sibling, 1 reply; 6+ messages in thread
From: Vladimir Davydov @ 2018-09-23 15:31 UTC (permalink / raw)
  To: kostja; +Cc: tarantool-patches

First, we print "will retry every XX second" to the log after an error
message only for socket and system errors, even though we keep trying to
establish a replication connection after configuration errors as well.
Let's print this message for those errors too to avoid confusion.

Second, when we receive an error in reply to the SUBSCRIBE command, we
log "can't read row" instead of "can't join/subscribe". This happens
because we switch the applier to the SYNC/FOLLOW state before receiving
a reply to the SUBSCRIBE command. Fix this by updating the applier state
only after successfully subscribing.

Third, we detect duplicate connections coming from the same replica on
the master only after sending a reply to the SUBSCRIBE command, that is
in relay_subscribe rather than in box_process_subscribe. This results in
"can't read row" being printed to the replica's log even though it's
actually a SUBSCRIBE error. Fix this by moving the check to where it
actually belongs.
---
 src/box/applier.cc | 67 ++++++++++++++++++++++++++++--------------------------
 src/box/box.cc     |  6 +++++
 src/box/relay.cc   |  6 +----
 3 files changed, 42 insertions(+), 37 deletions(-)

diff --git a/src/box/applier.cc b/src/box/applier.cc
index 604119e9..7304e83f 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -91,9 +91,18 @@ applier_log_error(struct applier *applier, struct error *e)
 		break;
 	}
 	error_log(e);
-	if (type_cast(SocketError, e) || type_cast(SystemError, e))
+	switch (errcode) {
+	case ER_LOADING:
+	case ER_CFG:
+	case ER_ACCESS_DENIED:
+	case ER_NO_SUCH_USER:
+	case ER_SYSTEM:
 		say_info("will retry every %.2lf second",
 			 replication_reconnect_interval());
+		break;
+	default:
+		break;
+	}
 	applier->last_logged_errcode = errcode;
 }
 
@@ -382,30 +391,7 @@ applier_subscribe(struct applier *applier)
 				 &replicaset.vclock);
 	coio_write_xrow(coio, &row);
 
-	if (applier->state == APPLIER_READY) {
-		/*
-		 * Tarantool < 1.7.7 does not send periodic heartbeat
-		 * messages so we cannot enable applier synchronization
-		 * for it without risking getting stuck in the 'orphan'
-		 * mode until a DML operation happens on the master.
-		 */
-		if (applier->version_id >= version_id(1, 7, 7))
-			applier_set_state(applier, APPLIER_SYNC);
-		else
-			applier_set_state(applier, APPLIER_FOLLOW);
-	} else {
-		/*
-		 * Tarantool < 1.7.0 sends replica id during
-		 * "subscribe" stage. We can't finish bootstrap
-		 * until it is received.
-		 */
-		assert(applier->state == APPLIER_FINAL_JOIN);
-		assert(applier->version_id < version_id(1, 7, 0));
-	}
-
-	/*
-	 * Read SUBSCRIBE response
-	 */
+	/* Read SUBSCRIBE response */
 	if (applier->version_id >= version_id(1, 6, 7)) {
 		coio_read_xrow(coio, ibuf, &row);
 		if (iproto_type_is_error(row.type)) {
@@ -421,7 +407,7 @@ applier_subscribe(struct applier *applier)
 		vclock_create(&remote_vclock_at_subscribe);
 		xrow_decode_vclock_xc(&row, &remote_vclock_at_subscribe);
 	}
-	/**
+	/*
 	 * Tarantool < 1.6.7:
 	 * If there is an error in subscribe, it's sent directly
 	 * in response to subscribe.  If subscribe is successful,
@@ -429,6 +415,27 @@ applier_subscribe(struct applier *applier)
 	 * the binary log.
 	 */
 
+	if (applier->state == APPLIER_READY) {
+		/*
+		 * Tarantool < 1.7.7 does not send periodic heartbeat
+		 * messages so we cannot enable applier synchronization
+		 * for it without risking getting stuck in the 'orphan'
+		 * mode until a DML operation happens on the master.
+		 */
+		if (applier->version_id >= version_id(1, 7, 7))
+			applier_set_state(applier, APPLIER_SYNC);
+		else
+			applier_set_state(applier, APPLIER_FOLLOW);
+	} else {
+		/*
+		 * Tarantool < 1.7.0 sends replica id during
+		 * "subscribe" stage. We can't finish bootstrap
+		 * until it is received.
+		 */
+		assert(applier->state == APPLIER_FINAL_JOIN);
+		assert(applier->version_id < version_id(1, 7, 0));
+	}
+
 	/* Re-enable warnings after successful execution of SUBSCRIBE */
 	applier->last_logged_errcode = 0;
 	if (applier->version_id >= version_id(1, 7, 4)) {
@@ -607,7 +614,8 @@ applier_f(va_list ap)
 				applier_log_error(applier, e);
 				applier_disconnect(applier, APPLIER_LOADING);
 				goto reconnect;
-			} else if (e->errcode() == ER_ACCESS_DENIED ||
+			} else if (e->errcode() == ER_CFG ||
+				   e->errcode() == ER_ACCESS_DENIED ||
 				   e->errcode() == ER_NO_SUCH_USER) {
 				/* Invalid configuration */
 				applier_log_error(applier, e);
@@ -618,11 +626,6 @@ applier_f(va_list ap)
 				applier_log_error(applier, e);
 				applier_disconnect(applier, APPLIER_DISCONNECTED);
 				goto reconnect;
-			} else if (e->errcode() == ER_CFG) {
-				/* Invalid configuration */
-				applier_log_error(applier, e);
-				applier_disconnect(applier, APPLIER_DISCONNECTED);
-				goto reconnect;
 			} else {
 				/* Unrecoverable errors */
 				applier_log_error(applier, e);
diff --git a/src/box/box.cc b/src/box/box.cc
index f25146d0..804fc00e 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1560,6 +1560,12 @@ box_process_subscribe(struct ev_io *io, struct xrow_header *header)
 			  tt_uuid_str(&REPLICASET_UUID));
 	}
 
+	/* Don't allow multiple relays for the same replica */
+	if (relay_get_state(replica->relay) == RELAY_FOLLOW) {
+		tnt_raise(ClientError, ER_CFG, "replication",
+			  "duplicate connection with the same replica UUID");
+	}
+
 	/* Forbid replication with disabled WAL */
 	if (wal_mode() == WAL_NONE) {
 		tnt_raise(ClientError, ER_UNSUPPORTED, "Replication",
diff --git a/src/box/relay.cc b/src/box/relay.cc
index b0bc03ef..c90383d4 100644
--- a/src/box/relay.cc
+++ b/src/box/relay.cc
@@ -585,11 +585,7 @@ relay_subscribe(struct replica *replica, int fd, uint64_t sync,
 {
 	assert(replica->id != REPLICA_ID_NIL);
 	struct relay *relay = replica->relay;
-	/* Don't allow multiple relays for the same replica */
-	if (relay->state == RELAY_FOLLOW) {
-		tnt_raise(ClientError, ER_CFG, "replication",
-			  "duplicate connection with the same replica UUID");
-	}
+	assert(relay->state != RELAY_FOLLOW);
 	/*
 	 * Register the replica with the garbage collector
 	 * unless it has already been registered by initial
-- 
2.11.0

* [PATCH 2/2] replication: don't stop syncing on configuration errors
  2018-09-23 15:31 [PATCH 0/2] replication: don't stop syncing on configuration errors Vladimir Davydov
  2018-09-23 15:31 ` [PATCH 1/2] replication: fix recoverable error reporting Vladimir Davydov
@ 2018-09-23 15:31 ` Vladimir Davydov
  2018-09-25 22:29   ` [tarantool-patches] " Konstantin Osipov
  1 sibling, 1 reply; 6+ messages in thread
From: Vladimir Davydov @ 2018-09-23 15:31 UTC (permalink / raw)
  To: kostja; +Cc: tarantool-patches

When replication is restarted with the same replica set configuration
(i.e. box.cfg{replication = box.cfg.replication}), there's a chance that
an old relay will still be running on the master at the time when a new
applier tries to subscribe. In this case the applier will get an error:

  main/152/applier/localhost:62649 I> can't join/subscribe
  main/152/applier/localhost:62649 xrow.c:891 E> ER_CFG: Incorrect value for
      option 'replication': duplicate connection with the same replica UUID

Such an error won't stop the applier - it will keep trying to reconnect:

  main/152/applier/localhost:62649 I> will retry every 1.00 second

However, it will stop synchronization, so box.cfg() will return
without an error but leave the replica in orphan mode:

  main/151/console/::1:42606 C> failed to synchronize with 1 out of 1 replicas
  main/151/console/::1:42606 C> entering orphan mode
  main/151/console/::1:42606 I> set 'replication' configuration option to
    "localhost:62649"

In a second, the stray relay on the master will probably exit and the
applier will manage to subscribe so that the replica will leave the
orphan mode:

  main/152/applier/localhost:62649 C> leaving orphan mode

This is very annoying, because there's no need to enter orphan mode
in this case - we could just as well keep trying to synchronize until
the applier finally manages to subscribe or replication_sync_timeout
is triggered.
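
For what it's worth, the sync wait can already be bounded by temporarily
lowering replication_sync_timeout around the reconfiguration - the same
trick the wal_off test hunk below uses. A sketch, assuming 'replication'
holds the desired URI list:

  old_timeout = box.cfg.replication_sync_timeout
  box.cfg{replication_sync_timeout = 0.01}
  box.cfg{replication = replication} -- returns quickly even if sync stalls
  box.cfg{replication_sync_timeout = old_timeout}

But that only shortens the wait; it doesn't avoid the needless orphan
state.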

So this patch makes appliers enter "loading" state on configuration
errors, the same state they enter if they detect that bootstrap hasn't
finished yet. This guarantees that configuration errors, like the one
above, won't break synchronization and leave the user gaping at the
unprovoked orphan mode.
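
With the patch applied, the reproduction from the cover letter is
expected to go roughly like this (a sketch of the intended behavior,
mirroring the sync test below rather than literal console output):

  replication = box.cfg.replication
  box.cfg{replication = {}}
  box.cfg{replication = replication} -- may log ER_CFG, but keeps syncing
  box.info.status                    -- 'running', not 'orphan'
  box.info.replication[1].upstream.status -- 'follow'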

Apart from the issue in question (#3636), this patch also fixes spurious
replication-py/multi test failures that happened for exactly the same
reason (#3692).

Closes #3636
Closes #3692
---
 src/box/applier.cc                |  2 +-
 test/replication/sync.result      | 82 +++++++++++++++++++++++++++++++++++++--
 test/replication/sync.test.lua    | 35 ++++++++++++++++-
 test/replication/wal_off.result   |  9 +++++
 test/replication/wal_off.test.lua |  3 ++
 5 files changed, 125 insertions(+), 6 deletions(-)

diff --git a/src/box/applier.cc b/src/box/applier.cc
index 7304e83f..7da278e6 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -619,7 +619,7 @@ applier_f(va_list ap)
 				   e->errcode() == ER_NO_SUCH_USER) {
 				/* Invalid configuration */
 				applier_log_error(applier, e);
-				applier_disconnect(applier, APPLIER_DISCONNECTED);
+				applier_disconnect(applier, APPLIER_LOADING);
 				goto reconnect;
 			} else if (e->errcode() == ER_SYSTEM) {
 				/* System error from master instance. */
diff --git a/test/replication/sync.result b/test/replication/sync.result
index 23978e48..81de6075 100644
--- a/test/replication/sync.result
+++ b/test/replication/sync.result
@@ -213,21 +213,95 @@ box.info.ro -- false
 ---
 - false
 ...
+-- gh-3636: Check that replica set sync doesn't stop on cfg errors.
+-- To do that we inject an error on the master to temporarily block
+-- the relay thread from exiting, then reconfigure replication on
+-- the slave using the same configuration. Since the relay is still
+-- running when replication is reconfigured, the replica will get
+-- ER_CFG "duplicate connection with the same replica UUID" error.
+-- It should print it to the log, but keep trying to synchronize.
+-- Eventually, it should leave box.cfg() following the master.
+box.cfg{replication_timeout = 0.1}
+---
+...
+box.cfg{replication_sync_lag = 1}
+---
+...
+box.cfg{replication_sync_timeout = 10}
+---
+...
 test_run:cmd("switch default")
 ---
 - true
 ...
-test_run:cmd("stop server replica")
+box.error.injection.set("ERRINJ_RELAY_TIMEOUT", 0)
+---
+- ok
+...
+box.error.injection.set('ERRINJ_WAL_DELAY', true)
+---
+- ok
+...
+test_run:cmd("setopt delimiter ';'")
 ---
 - true
 ...
-test_run:cmd("cleanup server replica")
+_ = fiber.create(function()
+    box.space.test:replace{123456789}
+end);
+---
+...
+_ = fiber.create(function()
+    fiber.sleep(0.1)
+    box.error.injection.set('ERRINJ_WAL_DELAY', false)
+end);
+---
+...
+test_run:cmd("setopt delimiter ''");
 ---
 - true
 ...
-box.error.injection.set("ERRINJ_RELAY_TIMEOUT", 0)
+test_run:cmd("switch replica")
 ---
-- ok
+- true
+...
+replication = box.cfg.replication
+---
+...
+box.cfg{replication = {}}
+---
+...
+box.cfg{replication = replication}
+---
+...
+box.info.status -- running
+---
+- running
+...
+box.info.ro -- false
+---
+- false
+...
+box.info.replication[1].upstream.status -- follow
+---
+- follow
+...
+test_run:grep_log('replica', 'ER_CFG.*')
+---
+- 'ER_CFG: Incorrect value for option ''replication'': duplicate connection with the
+  same replica UUID'
+...
+test_run:cmd("switch default")
+---
+- true
+...
+test_run:cmd("stop server replica")
+---
+- true
+...
+test_run:cmd("cleanup server replica")
+---
+- true
 ...
 box.space.test:drop()
 ---
diff --git a/test/replication/sync.test.lua b/test/replication/sync.test.lua
index 7d7ee757..a5cfab8d 100644
--- a/test/replication/sync.test.lua
+++ b/test/replication/sync.test.lua
@@ -108,10 +108,43 @@ repeat fiber.sleep(0.01) until box.info.status ~= 'orphan'
 box.info.status -- running
 box.info.ro -- false
 
+-- gh-3636: Check that replica set sync doesn't stop on cfg errors.
+-- To do that we inject an error on the master to temporarily block
+-- the relay thread from exiting, then reconfigure replication on
+-- the slave using the same configuration. Since the relay is still
+-- running when replication is reconfigured, the replica will get
+-- ER_CFG "duplicate connection with the same replica UUID" error.
+-- It should print it to the log, but keep trying to synchronize.
+-- Eventually, it should leave box.cfg() following the master.
+box.cfg{replication_timeout = 0.1}
+box.cfg{replication_sync_lag = 1}
+box.cfg{replication_sync_timeout = 10}
+
+test_run:cmd("switch default")
+box.error.injection.set("ERRINJ_RELAY_TIMEOUT", 0)
+box.error.injection.set('ERRINJ_WAL_DELAY', true)
+test_run:cmd("setopt delimiter ';'")
+_ = fiber.create(function()
+    box.space.test:replace{123456789}
+end);
+_ = fiber.create(function()
+    fiber.sleep(0.1)
+    box.error.injection.set('ERRINJ_WAL_DELAY', false)
+end);
+test_run:cmd("setopt delimiter ''");
+test_run:cmd("switch replica")
+
+replication = box.cfg.replication
+box.cfg{replication = {}}
+box.cfg{replication = replication}
+box.info.status -- running
+box.info.ro -- false
+box.info.replication[1].upstream.status -- follow
+test_run:grep_log('replica', 'ER_CFG.*')
+
 test_run:cmd("switch default")
 test_run:cmd("stop server replica")
 test_run:cmd("cleanup server replica")
 
-box.error.injection.set("ERRINJ_RELAY_TIMEOUT", 0)
 box.space.test:drop()
 box.schema.user.revoke('guest', 'replication')
diff --git a/test/replication/wal_off.result b/test/replication/wal_off.result
index f50bfc2f..e3b5709e 100644
--- a/test/replication/wal_off.result
+++ b/test/replication/wal_off.result
@@ -78,9 +78,18 @@ test_run:cmd('switch default')
 ---
 - true
 ...
+replication_sync_timeout = box.cfg.replication_sync_timeout
+---
+...
+box.cfg { replication_sync_timeout = 0.01 }
+---
+...
 box.cfg { replication = wal_off_uri }
 ---
 ...
+box.cfg { replication_sync_timeout = replication_sync_timeout }
+---
+...
 check = "Read access to universe"
 ---
 ...
diff --git a/test/replication/wal_off.test.lua b/test/replication/wal_off.test.lua
index 43cd0aed..81fcf0b3 100644
--- a/test/replication/wal_off.test.lua
+++ b/test/replication/wal_off.test.lua
@@ -27,7 +27,10 @@ test_run:cmd('switch wal_off')
 box.schema.user.revoke('guest', 'replication')
 test_run:cmd('switch default')
 
+replication_sync_timeout = box.cfg.replication_sync_timeout
+box.cfg { replication_sync_timeout = 0.01 }
 box.cfg { replication = wal_off_uri }
+box.cfg { replication_sync_timeout = replication_sync_timeout }
 check = "Read access to universe"
 while string.find(box.info.replication[wal_off_id].upstream.message, check) == nil do fiber.sleep(0.01) end
 box.cfg { replication = "" }
-- 
2.11.0

* [tarantool-patches] Re: [PATCH 1/2] replication: fix recoverable error reporting
  2018-09-23 15:31 ` [PATCH 1/2] replication: fix recoverable error reporting Vladimir Davydov
@ 2018-09-25 22:24   ` Konstantin Osipov
  0 siblings, 0 replies; 6+ messages in thread
From: Konstantin Osipov @ 2018-09-25 22:24 UTC (permalink / raw)
  To: tarantool-patches

* Vladimir Davydov <vdavydov.dev@gmail.com> [18/09/24 14:07]:
> First, we print "will retry every XX second" to the log after an error
> message only for socket and system errors, even though we keep trying to
> establish a replication connection after configuration errors as well.
> Let's print this message for those errors too to avoid confusion.
> 
> Second, when we receive an error in reply to the SUBSCRIBE command, we
> log "can't read row" instead of "can't join/subscribe". This happens
> because we switch the applier to the SYNC/FOLLOW state before receiving
> a reply to the SUBSCRIBE command. Fix this by updating the applier state
> only after successfully subscribing.
> 
> Third, we detect duplicate connections coming from the same replica on
> the master only after sending a reply to the SUBSCRIBE command, that is
> in relay_subscribe rather than in box_process_subscribe. This results in
> "can't read row" being printed to the replica's log even though it's
> actually a SUBSCRIBE error. Fix this by moving the check to where it
> actually belongs.

OK to push.

 

-- 
Konstantin Osipov, Moscow, Russia, +7 903 626 22 32
http://tarantool.io - www.twitter.com/kostja_osipov

* [tarantool-patches] Re: [PATCH 2/2] replication: don't stop syncing on configuration errors
  2018-09-23 15:31 ` [PATCH 2/2] replication: don't stop syncing on configuration errors Vladimir Davydov
@ 2018-09-25 22:29   ` Konstantin Osipov
  2018-09-26  8:26     ` Vladimir Davydov
  0 siblings, 1 reply; 6+ messages in thread
From: Konstantin Osipov @ 2018-09-25 22:29 UTC (permalink / raw)
  To: tarantool-patches

* Vladimir Davydov <vdavydov.dev@gmail.com> [18/09/24 14:07]:
> When replication is restarted with the same replica set configuration
> (i.e. box.cfg{replication = box.cfg.replication}), there's a chance that
> an old relay will still be running on the master at the time when a new
> applier tries to subscribe. In this case the applier will get an error:

On the flip side, once this change is made, a really invalid
configuration (duplicate connection with the same replica id)
won't lead to an immediate disconnect. But since we continue
logging the error before applier_disconnect(), the user will still see:

>   main/152/applier/localhost:62649 I> can't join/subscribe
>   main/152/applier/localhost:62649 xrow.c:891 E> ER_CFG: Incorrect value for
>       option 'replication': duplicate connection with the same replica UUID

Assuming this message is still printed, it's OK to push.


-- 
Konstantin Osipov, Moscow, Russia, +7 903 626 22 32
http://tarantool.io - www.twitter.com/kostja_osipov

* Re: [tarantool-patches] Re: [PATCH 2/2] replication: don't stop syncing on configuration errors
  2018-09-25 22:29   ` [tarantool-patches] " Konstantin Osipov
@ 2018-09-26  8:26     ` Vladimir Davydov
  0 siblings, 0 replies; 6+ messages in thread
From: Vladimir Davydov @ 2018-09-26  8:26 UTC (permalink / raw)
  To: Konstantin Osipov; +Cc: tarantool-patches

On Wed, Sep 26, 2018 at 01:29:44AM +0300, Konstantin Osipov wrote:
> * Vladimir Davydov <vdavydov.dev@gmail.com> [18/09/24 14:07]:
> > When replication is restarted with the same replica set configuration
> > (i.e. box.cfg{replication = box.cfg.replication}), there's a chance that
> > an old relay will still be running on the master at the time when a new
> > applier tries to subscribe. In this case the applier will get an error:
> 
> On the flip side, once this change is made, a really invalid
> configuration (duplicate connection with the same replica id)
> won't lead to an immediate disconnect. But since we continue
> logging the error before applier_disconnect(), the user will still see:
> 
> >   main/152/applier/localhost:62649 I> can't join/subscribe
> >   main/152/applier/localhost:62649 xrow.c:891 E> ER_CFG: Incorrect value for
> >       option 'replication': duplicate connection with the same replica UUID
> 
> Assuming this message is still printed, it's OK to push.

The error message is still printed.

Pushed both patches to 1.10.
