Tarantool development patches archive
* [Tarantool-patches] [PATCH v2 0/4] fix box_set_replication issues & stabilize quorum test
@ 2019-11-23 21:53 Ilya Kosarev
  2019-11-23 21:53 ` [Tarantool-patches] [PATCH v2 1/4] test: update test-run Ilya Kosarev
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Ilya Kosarev @ 2019-11-23 21:53 UTC (permalink / raw)
  To: tarantool-patches; +Cc: v.shpilevoy

This patchset fixes appliers pruning in replicaset_update and anon
replicas iteration issues in replicaset_follow, and stabilizes the
quorum test. It also stabilizes tcp_connect in test_run:cmd().

Ilya Kosarev (4):
  test: update test-run
  replication: fix appliers pruning
  replication: make anon replicas iteration safe
  test: stabilize quorum test conditions

 src/box/replication.cc                        | 17 +++---
 test-run                                      |  2 +-
 .../box_set_replication_stress.result         | 38 +++++++++++++
 .../box_set_replication_stress.test.lua       | 17 ++++++
 test/replication/quorum.result                | 54 ++++++++++++++-----
 test/replication/quorum.test.lua              | 24 +++++----
 6 files changed, 121 insertions(+), 31 deletions(-)
 create mode 100644 test/replication/box_set_replication_stress.result
 create mode 100644 test/replication/box_set_replication_stress.test.lua

-- 
2.17.1

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [Tarantool-patches] [PATCH v2 1/4] test: update test-run
  2019-11-23 21:53 [Tarantool-patches] [PATCH v2 0/4] fix box_set_replication issues & stabilize quorum test Ilya Kosarev
@ 2019-11-23 21:53 ` Ilya Kosarev
  2019-11-23 21:53 ` [Tarantool-patches] [PATCH v2 2/4] replication: fix appliers pruning Ilya Kosarev
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Ilya Kosarev @ 2019-11-23 21:53 UTC (permalink / raw)
  To: tarantool-patches; +Cc: v.shpilevoy

Stabilize tcp_connect in test_run:cmd() (tarantool/test-run#193)
---
 test-run | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/test-run b/test-run
index e665a6601..b9dafc8ca 160000
--- a/test-run
+++ b/test-run
@@ -1 +1 @@
-Subproject commit e665a66013dc1d2edf11f53a35fbdd1128b2274b
+Subproject commit b9dafc8cadf9edf3c76146d5a289ea90055962ee
-- 
2.17.1


* [Tarantool-patches] [PATCH v2 2/4] replication: fix appliers pruning
  2019-11-23 21:53 [Tarantool-patches] [PATCH v2 0/4] fix box_set_replication issues & stabilize quorum test Ilya Kosarev
  2019-11-23 21:53 ` [Tarantool-patches] [PATCH v2 1/4] test: update test-run Ilya Kosarev
@ 2019-11-23 21:53 ` Ilya Kosarev
  2019-11-24 15:54   ` Vladislav Shpilevoy
  2019-11-23 21:53 ` [Tarantool-patches] [PATCH v2 3/4] replication: make anon replicas iteration safe Ilya Kosarev
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Ilya Kosarev @ 2019-11-23 21:53 UTC (permalink / raw)
  To: tarantool-patches; +Cc: v.shpilevoy

While replicaset appliers are being pruned, some anonymous replicas
might connect from replicaset_follow, which runs in another fiber.
Therefore the appliers of anonymous replicas need to be pruned first.

Closes #4643
---
 src/box/replication.cc | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/src/box/replication.cc b/src/box/replication.cc
index 509187cd3..3ccfa8b33 100644
--- a/src/box/replication.cc
+++ b/src/box/replication.cc
@@ -516,23 +516,23 @@ replicaset_update(struct applier **appliers, int count)
 	 */
 
 	/* Prune old appliers */
-	replicaset_foreach(replica) {
-		if (replica->applier == NULL)
-			continue;
+	rlist_foreach_entry_safe(replica, &replicaset.anon, in_anon, next) {
 		applier = replica->applier;
 		replica_clear_applier(replica);
-		replica->applier_sync_state = APPLIER_DISCONNECTED;
+		replica_delete(replica);
 		applier_stop(applier);
 		applier_delete(applier);
 	}
-	rlist_foreach_entry_safe(replica, &replicaset.anon, in_anon, next) {
+	rlist_create(&replicaset.anon);
+	replicaset_foreach(replica) {
+		if (replica->applier == NULL)
+			continue;
 		applier = replica->applier;
 		replica_clear_applier(replica);
-		replica_delete(replica);
+		replica->applier_sync_state = APPLIER_DISCONNECTED;
 		applier_stop(applier);
 		applier_delete(applier);
 	}
-	rlist_create(&replicaset.anon);
 
 	/* Save new appliers */
 	replicaset.applier.total = count;
-- 
2.17.1


* [Tarantool-patches] [PATCH v2 3/4] replication: make anon replicas iteration safe
  2019-11-23 21:53 [Tarantool-patches] [PATCH v2 0/4] fix box_set_replication issues & stabilize quorum test Ilya Kosarev
  2019-11-23 21:53 ` [Tarantool-patches] [PATCH v2 1/4] test: update test-run Ilya Kosarev
  2019-11-23 21:53 ` [Tarantool-patches] [PATCH v2 2/4] replication: fix appliers pruning Ilya Kosarev
@ 2019-11-23 21:53 ` Ilya Kosarev
  2019-11-23 21:53 ` [Tarantool-patches] [PATCH v2 4/4] test: stabilize quorum test conditions Ilya Kosarev
  2019-11-24 15:53 ` [Tarantool-patches] [PATCH v2 0/4] fix box_set_replication issues & stabilize quorum test Vladislav Shpilevoy
  4 siblings, 0 replies; 9+ messages in thread
From: Ilya Kosarev @ 2019-11-23 21:53 UTC (permalink / raw)
  To: tarantool-patches; +Cc: v.shpilevoy

In replicaset_follow we iterate over the list of anonymous replicas:
replicas that haven't received a UUID yet. On a successful connect the
replica link is removed from the anon list. If that happens
immediately, without a yield in the applier, the iteration breaks.
This is now fixed by using rlist_foreach_entry_safe instead of the
plain rlist_foreach_entry. A relevant test case is added.

Part of #4586
Closes #4576
---
 src/box/replication.cc                        |  3 +-
 .../box_set_replication_stress.result         | 38 +++++++++++++++++++
 .../box_set_replication_stress.test.lua       | 17 +++++++++
 3 files changed, 57 insertions(+), 1 deletion(-)
 create mode 100644 test/replication/box_set_replication_stress.result
 create mode 100644 test/replication/box_set_replication_stress.test.lua

diff --git a/src/box/replication.cc b/src/box/replication.cc
index 3ccfa8b33..2970d61cf 100644
--- a/src/box/replication.cc
+++ b/src/box/replication.cc
@@ -800,7 +800,8 @@ replicaset_follow(void)
 		if (replica->applier != NULL)
 			applier_resume(replica->applier);
 	}
-	rlist_foreach_entry(replica, &replicaset.anon, in_anon) {
+	struct replica *tmp;
+	rlist_foreach_entry_safe(replica, &replicaset.anon, in_anon, tmp) {
 		/* Restart appliers that failed to connect. */
 		applier_start(replica->applier);
 	}
diff --git a/test/replication/box_set_replication_stress.result b/test/replication/box_set_replication_stress.result
new file mode 100644
index 000000000..e683c0643
--- /dev/null
+++ b/test/replication/box_set_replication_stress.result
@@ -0,0 +1,38 @@
+-- test-run result file version 2
+test_run = require('test_run').new()
+ | ---
+ | ...
+SERVERS = {'master_quorum1', 'master_quorum2'}
+ | ---
+ | ...
+
+-- Deploy a cluster.
+test_run:create_cluster(SERVERS)
+ | ---
+ | ...
+test_run:wait_fullmesh(SERVERS)
+ | ---
+ | ...
+
+test_run:cmd("switch master_quorum1")
+ | ---
+ | - true
+ | ...
+repl = box.cfg.replication
+ | ---
+ | ...
+for i = 1, 1000 do              \
+    box.cfg{replication = ""}   \
+    box.cfg{replication = repl} \
+end
+ | ---
+ | ...
+test_run:cmd("switch default")
+ | ---
+ | - true
+ | ...
+
+-- Cleanup.
+test_run:drop_cluster(SERVERS)
+ | ---
+ | ...
diff --git a/test/replication/box_set_replication_stress.test.lua b/test/replication/box_set_replication_stress.test.lua
new file mode 100644
index 000000000..407e91e0f
--- /dev/null
+++ b/test/replication/box_set_replication_stress.test.lua
@@ -0,0 +1,17 @@
+test_run = require('test_run').new()
+SERVERS = {'master_quorum1', 'master_quorum2'}
+
+-- Deploy a cluster.
+test_run:create_cluster(SERVERS)
+test_run:wait_fullmesh(SERVERS)
+
+test_run:cmd("switch master_quorum1")
+repl = box.cfg.replication
+for i = 1, 1000 do              \
+    box.cfg{replication = ""}   \
+    box.cfg{replication = repl} \
+end
+test_run:cmd("switch default")
+
+-- Cleanup.
+test_run:drop_cluster(SERVERS)
-- 
2.17.1


* [Tarantool-patches] [PATCH v2 4/4] test: stabilize quorum test conditions
  2019-11-23 21:53 [Tarantool-patches] [PATCH v2 0/4] fix box_set_replication issues & stabilize quorum test Ilya Kosarev
                   ` (2 preceding siblings ...)
  2019-11-23 21:53 ` [Tarantool-patches] [PATCH v2 3/4] replication: make anon replicas iteration safe Ilya Kosarev
@ 2019-11-23 21:53 ` Ilya Kosarev
  2019-11-24 15:54   ` Vladislav Shpilevoy
  2019-11-24 15:53 ` [Tarantool-patches] [PATCH v2 0/4] fix box_set_replication issues & stabilize quorum test Vladislav Shpilevoy
  4 siblings, 1 reply; 9+ messages in thread
From: Ilya Kosarev @ 2019-11-23 21:53 UTC (permalink / raw)
  To: tarantool-patches; +Cc: v.shpilevoy

Some pass conditions in the quorum test could take time to be
satisfied. They are now wrapped in test_run:wait_cond() to make the
test stable.

Closes #4586
---
 test/replication/quorum.result   | 54 ++++++++++++++++++++++++--------
 test/replication/quorum.test.lua | 24 ++++++++------
 2 files changed, 56 insertions(+), 22 deletions(-)

diff --git a/test/replication/quorum.result b/test/replication/quorum.result
index ff5fa0150..939ce1e00 100644
--- a/test/replication/quorum.result
+++ b/test/replication/quorum.result
@@ -40,6 +40,10 @@ box.info.ro -- true
 ---
 - true
 ...
+test_run:wait_cond(function() return box.space.test ~= nil end, 20)
+---
+- true
+...
 box.space.test:replace{100} -- error
 ---
 - error: Can't modify data because this instance is in read-only mode.
@@ -64,6 +68,10 @@ box.info.ro -- true
 ---
 - true
 ...
+test_run:wait_cond(function() return box.space.test ~= nil end, 20)
+---
+- true
+...
 box.space.test:replace{100} -- error
 ---
 - error: Can't modify data because this instance is in read-only mode.
@@ -95,6 +103,10 @@ box.info.ro -- true
 ---
 - true
 ...
+test_run:wait_cond(function() return box.space.test ~= nil end, 20)
+---
+- true
+...
 box.space.test:replace{100} -- error
 ---
 - error: Can't modify data because this instance is in read-only mode.
@@ -115,15 +127,15 @@ box.info.status -- running
 - running
 ...
 -- Check that the replica follows all masters.
-box.info.id == 1 or box.info.replication[1].upstream.status == 'follow'
+box.info.id == 1 or test_run:wait_cond(function() return box.info.replication[1].upstream.status == 'follow' end, 20)
 ---
 - true
 ...
-box.info.id == 2 or box.info.replication[2].upstream.status == 'follow'
+box.info.id == 2 or test_run:wait_cond(function() return box.info.replication[2].upstream.status == 'follow' end, 20)
 ---
 - true
 ...
-box.info.id == 3 or box.info.replication[3].upstream.status == 'follow'
+box.info.id == 3 or test_run:wait_cond(function() return box.info.replication[3].upstream.status == 'follow' end, 20)
 ---
 - true
 ...
@@ -149,6 +161,14 @@ test_run:cmd('stop server quorum1')
 ---
 - true
 ...
+test_run:wait_cond(function() return box.space.test ~= nil end, 20)
+---
+- true
+...
+test_run:wait_cond(function() return box.space.test.index.primary ~= nil end, 20)
+---
+- true
+...
 for i = 1, 100 do box.space.test:insert{i} end
 ---
 ...
@@ -166,9 +186,13 @@ test_run:cmd('switch quorum1')
 ---
 - true
 ...
-box.space.test:count() -- 100
+test_run:wait_cond(function() return box.space.test ~= nil end, 20)
 ---
-- 100
+- true
+...
+test_run:wait_cond(function() return box.space.test:count() == 100 end, 20)
+---
+- true
 ...
 -- Rebootstrap one node of the cluster and check that others follow.
 -- Note, due to ERRINJ_RELAY_TIMEOUT there is a substantial delay
@@ -197,9 +221,13 @@ test_run:cmd('switch quorum1')
 - true
 ...
 test_run:cmd('restart server quorum1 with cleanup=1, args="0.1 0.5"')
-box.space.test:count() -- 100
+test_run:wait_cond(function() return box.space.test ~= nil end, 20)
 ---
-- 100
+- true
+...
+test_run:wait_cond(function() return box.space.test:count() == 100 end, 20)
+---
+- true
 ...
 -- The rebootstrapped replica will be assigned id = 4,
 -- because ids 1..3 are busy.
@@ -207,11 +235,9 @@ test_run:cmd('switch quorum2')
 ---
 - true
 ...
-fiber = require('fiber')
----
-...
-while box.info.replication[4].upstream.status ~= 'follow' do fiber.sleep(0.001) end
+test_run:wait_cond(function() return box.info.replication[4].upstream.status == 'follow' end, 20)
 ---
+- true
 ...
 box.info.replication[4].upstream.status
 ---
@@ -221,11 +247,13 @@ test_run:cmd('switch quorum3')
 ---
 - true
 ...
-fiber = require('fiber')
+test_run:wait_cond(function() return box.info.replication ~= nil end, 20)
 ---
+- true
 ...
-while box.info.replication[4].upstream.status ~= 'follow' do fiber.sleep(0.001) end
+test_run:wait_cond(function() return box.info.replication[4].upstream.status == 'follow' end, 20)
 ---
+- true
 ...
 box.info.replication[4].upstream.status
 ---
diff --git a/test/replication/quorum.test.lua b/test/replication/quorum.test.lua
index 98febb367..e20572344 100644
--- a/test/replication/quorum.test.lua
+++ b/test/replication/quorum.test.lua
@@ -22,6 +22,7 @@ test_run:cmd('restart server quorum2 with args="0.1 0.5"')
 box.info.status -- orphan
 box.ctl.wait_rw(0.001) -- timeout
 box.info.ro -- true
+test_run:wait_cond(function() return box.space.test ~= nil end, 20)
 box.space.test:replace{100} -- error
 box.cfg{replication={}}
 box.info.status -- running
@@ -30,6 +31,7 @@ test_run:cmd('restart server quorum2 with args="0.1 0.5"')
 box.info.status -- orphan
 box.ctl.wait_rw(0.001) -- timeout
 box.info.ro -- true
+test_run:wait_cond(function() return box.space.test ~= nil end, 20)
 box.space.test:replace{100} -- error
 box.cfg{replication_connect_quorum = 2}
 box.ctl.wait_rw()
@@ -40,6 +42,7 @@ test_run:cmd('restart server quorum2 with args="0.1 0.5"')
 box.info.status -- orphan
 box.ctl.wait_rw(0.001) -- timeout
 box.info.ro -- true
+test_run:wait_cond(function() return box.space.test ~= nil end, 20)
 box.space.test:replace{100} -- error
 test_run:cmd('start server quorum1 with args="0.1 0.5"')
 box.ctl.wait_rw()
@@ -47,9 +50,9 @@ box.info.ro -- false
 box.info.status -- running
 
 -- Check that the replica follows all masters.
-box.info.id == 1 or box.info.replication[1].upstream.status == 'follow'
-box.info.id == 2 or box.info.replication[2].upstream.status == 'follow'
-box.info.id == 3 or box.info.replication[3].upstream.status == 'follow'
+box.info.id == 1 or test_run:wait_cond(function() return box.info.replication[1].upstream.status == 'follow' end, 20)
+box.info.id == 2 or test_run:wait_cond(function() return box.info.replication[2].upstream.status == 'follow' end, 20)
+box.info.id == 3 or test_run:wait_cond(function() return box.info.replication[3].upstream.status == 'follow' end, 20)
 
 -- Check that box.cfg() doesn't return until the instance
 -- catches up with all configured replicas.
@@ -59,13 +62,16 @@ test_run:cmd('switch quorum2')
 box.error.injection.set("ERRINJ_RELAY_TIMEOUT", 0.001)
 test_run:cmd('stop server quorum1')
 
+test_run:wait_cond(function() return box.space.test ~= nil end, 20)
+test_run:wait_cond(function() return box.space.test.index.primary ~= nil end, 20)
 for i = 1, 100 do box.space.test:insert{i} end
 fiber = require('fiber')
 fiber.sleep(0.1)
 
 test_run:cmd('start server quorum1 with args="0.1  0.5"')
 test_run:cmd('switch quorum1')
-box.space.test:count() -- 100
+test_run:wait_cond(function() return box.space.test ~= nil end, 20)
+test_run:wait_cond(function() return box.space.test:count() == 100 end, 20)
 
 -- Rebootstrap one node of the cluster and check that others follow.
 -- Note, due to ERRINJ_RELAY_TIMEOUT there is a substantial delay
@@ -81,17 +87,17 @@ box.snapshot()
 test_run:cmd('switch quorum1')
 test_run:cmd('restart server quorum1 with cleanup=1, args="0.1 0.5"')
 
-box.space.test:count() -- 100
+test_run:wait_cond(function() return box.space.test ~= nil end, 20)
+test_run:wait_cond(function() return box.space.test:count() == 100 end, 20)
 
 -- The rebootstrapped replica will be assigned id = 4,
 -- because ids 1..3 are busy.
 test_run:cmd('switch quorum2')
-fiber = require('fiber')
-while box.info.replication[4].upstream.status ~= 'follow' do fiber.sleep(0.001) end
+test_run:wait_cond(function() return box.info.replication[4].upstream.status == 'follow' end, 20)
 box.info.replication[4].upstream.status
 test_run:cmd('switch quorum3')
-fiber = require('fiber')
-while box.info.replication[4].upstream.status ~= 'follow' do fiber.sleep(0.001) end
+test_run:wait_cond(function() return box.info.replication ~= nil end, 20)
+test_run:wait_cond(function() return box.info.replication[4].upstream.status == 'follow' end, 20)
 box.info.replication[4].upstream.status
 
 -- Cleanup.
-- 
2.17.1


* Re: [Tarantool-patches] [PATCH v2 0/4] fix box_set_replication issues & stabilize quorum test
  2019-11-23 21:53 [Tarantool-patches] [PATCH v2 0/4] fix box_set_replication issues & stabilize quorum test Ilya Kosarev
                   ` (3 preceding siblings ...)
  2019-11-23 21:53 ` [Tarantool-patches] [PATCH v2 4/4] test: stabilize quorum test conditions Ilya Kosarev
@ 2019-11-24 15:53 ` Vladislav Shpilevoy
  2019-11-25 11:50   ` Ilya Kosarev
  4 siblings, 1 reply; 9+ messages in thread
From: Vladislav Shpilevoy @ 2019-11-24 15:53 UTC (permalink / raw)
  To: Ilya Kosarev, tarantool-patches

Hi!

Please, specify branch and issue links.

Issue: https://github.com/tarantool/tarantool/issues/4586
Branch: https://github.com/tarantool/tarantool/commits/i.kosarev/gh-4586-fix-quorum-test

Add 'Changes in V2' section, when you send a new version.

On 23/11/2019 22:53, Ilya Kosarev wrote:
> This patchset fixes appliers pruning in replicaset_update and anon
> replicas iteration issues in replicaset_follow, and stabilizes the
> quorum test. It also stabilizes tcp_connect in test_run:cmd().
> 
> Ilya Kosarev (4):
>   test: update test-run
>   replication: fix appliers pruning
>   replication: make anon replicas iteration safe
>   test: stabilize quorum test conditions
> 
>  src/box/replication.cc                        | 17 +++---
>  test-run                                      |  2 +-
>  .../box_set_replication_stress.result         | 38 +++++++++++++
>  .../box_set_replication_stress.test.lua       | 17 ++++++
>  test/replication/quorum.result                | 54 ++++++++++++++-----
>  test/replication/quorum.test.lua              | 24 +++++----
>  6 files changed, 121 insertions(+), 31 deletions(-)
>  create mode 100644 test/replication/box_set_replication_stress.result
>  create mode 100644 test/replication/box_set_replication_stress.test.lua
> 


* Re: [Tarantool-patches] [PATCH v2 2/4] replication: fix appliers pruning
  2019-11-23 21:53 ` [Tarantool-patches] [PATCH v2 2/4] replication: fix appliers pruning Ilya Kosarev
@ 2019-11-24 15:54   ` Vladislav Shpilevoy
  0 siblings, 0 replies; 9+ messages in thread
From: Vladislav Shpilevoy @ 2019-11-24 15:54 UTC (permalink / raw)
  To: Ilya Kosarev, tarantool-patches

Thanks for the patch!

On 23/11/2019 22:53, Ilya Kosarev wrote:
> While replicaset appliers are being pruned, some anonymous replicas
> might connect from replicaset_follow, which runs in another fiber.
> Therefore the appliers of anonymous replicas need to be pruned first.
> 
> Closes #4643
> ---
>  src/box/replication.cc | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/src/box/replication.cc b/src/box/replication.cc
> index 509187cd3..3ccfa8b33 100644
> --- a/src/box/replication.cc
> +++ b/src/box/replication.cc
> @@ -516,23 +516,23 @@ replicaset_update(struct applier **appliers, int count)
>  	 */
>  
>  	/* Prune old appliers */
> -	replicaset_foreach(replica) {
> -		if (replica->applier == NULL)
> -			continue;
> +	rlist_foreach_entry_safe(replica, &replicaset.anon, in_anon, next) {

'safe' rlist iteration won't help if the next anon replica connects
during the yield in applier_stop(). Safe iteration only copes with
deletion of the current element.

You can try

    while (!rlist_empty(...)) {
        replica = rlist_first(...);
        ...
    }

The same applies to the loop below.

And I am not really sure you need two
loops now. Just keep one, but iterate as
shown above.

>  		applier = replica->applier;
>  		replica_clear_applier(replica);
> -		replica->applier_sync_state = APPLIER_DISCONNECTED;
> +		replica_delete(replica);
>  		applier_stop(applier);
>  		applier_delete(applier);
>  	}
> -	rlist_foreach_entry_safe(replica, &replicaset.anon, in_anon, next) {
> +	rlist_create(&replicaset.anon);
> +	replicaset_foreach(replica) {
> +		if (replica->applier == NULL)
> +			continue;
>  		applier = replica->applier;
>  		replica_clear_applier(replica);
> -		replica_delete(replica);
> +		replica->applier_sync_state = APPLIER_DISCONNECTED;
>  		applier_stop(applier);
>  		applier_delete(applier);
>  	}
> -	rlist_create(&replicaset.anon);
>  
>  	/* Save new appliers */
>  	replicaset.applier.total = count;
> 


* Re: [Tarantool-patches] [PATCH v2 4/4] test: stabilize quorum test conditions
  2019-11-23 21:53 ` [Tarantool-patches] [PATCH v2 4/4] test: stabilize quorum test conditions Ilya Kosarev
@ 2019-11-24 15:54   ` Vladislav Shpilevoy
  0 siblings, 0 replies; 9+ messages in thread
From: Vladislav Shpilevoy @ 2019-11-24 15:54 UTC (permalink / raw)
  To: Ilya Kosarev, tarantool-patches

Thanks for the patch!

> diff --git a/test/replication/quorum.result b/test/replication/quorum.result
> index ff5fa0150..939ce1e00 100644
> --- a/test/replication/quorum.result
> +++ b/test/replication/quorum.result
> @@ -64,6 +68,10 @@ box.info.ro -- true
>  ---
>  - true
>  ...
> +test_run:wait_cond(function() return box.space.test ~= nil end, 20)

Why do you need the second wait for the same condition?
box.space.test was not dropped before this line, it should
still exist.

The same applies to all the similar checks for space.test.


* Re: [Tarantool-patches] [PATCH v2 0/4] fix box_set_replication issues & stabilize quorum test
  2019-11-24 15:53 ` [Tarantool-patches] [PATCH v2 0/4] fix box_set_replication issues & stabilize quorum test Vladislav Shpilevoy
@ 2019-11-25 11:50   ` Ilya Kosarev
  0 siblings, 0 replies; 9+ messages in thread
From: Ilya Kosarev @ 2019-11-25 11:50 UTC (permalink / raw)
  To: Vladislav Shpilevoy; +Cc: tarantool-patches

[-- Attachment #1: Type: text/plain, Size: 1520 bytes --]

Hi!

Thanks for your review.

Sent v3 of the patchset addressing the mentioned drawbacks.


>Sunday, 24 November 2019, 18:53 +03:00 from Vladislav Shpilevoy <v.shpilevoy@tarantool.org>:
>
>Hi!
>
>Please, specify branch and issue links.
>
>Issue:  https://github.com/tarantool/tarantool/issues/4586
>Branch:  https://github.com/tarantool/tarantool/commits/i.kosarev/gh-4586-fix-quorum-test
>
>Add 'Changes in V2' section, when you send a new version.
>
>On 23/11/2019 22:53, Ilya Kosarev wrote:
>> This patchset fixes appliers pruning in replicaset_update and anon
>> replicas iteration issues in replicaset_follow, and stabilizes the
>> quorum test. It also stabilizes tcp_connect in test_run:cmd().
>> 
>> Ilya Kosarev (4):
>>   test: update test-run
>>   replication: fix appliers pruning
>>   replication: make anon replicas iteration safe
>>   test: stabilize quorum test conditions
>> 
>>  src/box/replication.cc                        | 17 +++---
>>  test-run                                      |  2 +-
>>  .../box_set_replication_stress.result         | 38 +++++++++++++
>>  .../box_set_replication_stress.test.lua       | 17 ++++++
>>  test/replication/quorum.result                | 54 ++++++++++++++-----
>>  test/replication/quorum.test.lua              | 24 +++++----
>>  6 files changed, 121 insertions(+), 31 deletions(-)
>>  create mode 100644 test/replication/box_set_replication_stress.result
>>  create mode 100644 test/replication/box_set_replication_stress.test.lua
>> 


-- 
Ilya Kosarev


