Tarantool development patches archive
* [Tarantool-patches] [PATCH 1/1] election: fix box.ctl.promote() crash on 0 LSN
From: Vladislav Shpilevoy via Tarantool-patches @ 2021-07-22 23:31 UTC
  To: tarantool-patches, gorcunov, sergepetrenko

box.ctl.promote(), when the limbo was not empty, tried to wait for
a quorum of instances confirming that they have received the
latest data.

The quorum is collected by walking the relays and filling a vclock
object via vclock_follow() with the old leader's LSN as it is seen
by each replica.

Some replicas might have vclock[old_leader_id] == 0 if they
haven't received anything from the old leader. vclock_follow()
crashed on an attempt to follow these 0 LSNs: the collector's
vclock starts from zeros, and vclock_follow() asserts that
new_lsn > old_lsn, which turned into 0 > 0 and the crash.

The patch makes the quorum collector skip these 0 LSNs.

Part of #5430
---
Branch: http://github.com/tarantool/tarantool/tree/gerold103/gh-5430-crash-in-wait_quorum
Issue: https://github.com/tarantool/tarantool/issues/5430
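
For reviewers, below is a toy Lua model of the vclock collection done by
box_wait_quorum(), just to illustrate the failure mode. All the Lua names
here are made up for illustration; the real code is the C hunk in the diff.

-- A simplified model of the quorum collection. vclock_follow() here mimics
-- the real C function's assertion that LSNs only grow.
local function vclock_follow(vclock, id, lsn)
    assert(lsn > (vclock[id] or 0))
    vclock[id] = lsn
end

local function collect_quorum(relays, lead_id, target_lsn)
    local vclock, ack_count = {}, 0
    for replica_id, relay_vclock in pairs(relays) do
        local lsn = relay_vclock[lead_id] or 0
        -- The fix: a replica which received nothing from the old leader
        -- reports lsn == 0, and following it would hit the 0 > 0 assertion.
        if lsn ~= 0 then
            vclock_follow(vclock, replica_id, lsn)
            if lsn >= target_lsn then
                ack_count = ack_count + 1
            end
        end
    end
    return ack_count
end

-- Replica 3 received nothing from leader 1; before the fix this asserted.
print(collect_quorum({[2] = {[1] = 10}, [3] = {}}, 1, 10))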

 .../gh-5430-promote-quorum-crash.md           |   6 +
 src/box/box.cc                                |   8 +
 test/replication/gh-5430-master1.lua          |  15 ++
 test/replication/gh-5430-master2.lua          |  14 ++
 test/replication/gh-5430-master3.lua          |  12 ++
 .../gh-5430-qsync-promote-crash.result        | 159 ++++++++++++++++++
 .../gh-5430-qsync-promote-crash.test.lua      |  76 +++++++++
 test/replication/suite.cfg                    |   1 +
 test/replication/suite.ini                    |   2 +-
 9 files changed, 292 insertions(+), 1 deletion(-)
 create mode 100644 changelogs/unreleased/gh-5430-promote-quorum-crash.md
 create mode 100644 test/replication/gh-5430-master1.lua
 create mode 100644 test/replication/gh-5430-master2.lua
 create mode 100644 test/replication/gh-5430-master3.lua
 create mode 100644 test/replication/gh-5430-qsync-promote-crash.result
 create mode 100644 test/replication/gh-5430-qsync-promote-crash.test.lua

diff --git a/changelogs/unreleased/gh-5430-promote-quorum-crash.md b/changelogs/unreleased/gh-5430-promote-quorum-crash.md
new file mode 100644
index 000000000..09c7efd5c
--- /dev/null
+++ b/changelogs/unreleased/gh-5430-promote-quorum-crash.md
@@ -0,0 +1,6 @@
+## bugfix/replication
+
+* Fixed a possible crash when `box.ctl.promote()` was called in a cluster with
+  3 or more instances. The crash happened in debug builds; in release builds
+  it could lead to undefined behaviour. It was likely to happen if a new node
+  was added shortly before the promotion (gh-5430).
diff --git a/src/box/box.cc b/src/box/box.cc
index 8c10a99dd..21fb38e35 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1488,6 +1488,14 @@ box_wait_quorum(uint32_t lead_id, int64_t target_lsn, int quorum,
 		assert(!tt_uuid_is_equal(&INSTANCE_UUID, &replica->uuid));
 
 		int64_t lsn = vclock_get(relay_vclock(replica->relay), lead_id);
+		/*
+		 * The replica might not have received anything from the
+		 * old leader yet. It easily happens with a newly added
+		 * replica. The vclock can't be followed then, because it
+		 * would assert on lsn > old lsn while they are both 0.
+		 */
+		if (lsn == 0)
+			continue;
 		vclock_follow(&t.vclock, replica->id, lsn);
 		if (lsn >= target_lsn) {
 			ack_count++;
diff --git a/test/replication/gh-5430-master1.lua b/test/replication/gh-5430-master1.lua
new file mode 100644
index 000000000..0f57c87c1
--- /dev/null
+++ b/test/replication/gh-5430-master1.lua
@@ -0,0 +1,15 @@
+#!/usr/bin/env tarantool
+
+require('console').listen(os.getenv('ADMIN'))
+
+box.cfg({
+    listen = 'unix/:./master1.sock',
+    replication = {
+        'unix/:./master1.sock',
+        'unix/:./master2.sock',
+    },
+    replication_synchro_quorum = 2,
+    replication_timeout = 0.5,
+})
+
+box.schema.user.grant('guest', 'super')
diff --git a/test/replication/gh-5430-master2.lua b/test/replication/gh-5430-master2.lua
new file mode 100644
index 000000000..b1aeccfe0
--- /dev/null
+++ b/test/replication/gh-5430-master2.lua
@@ -0,0 +1,14 @@
+#!/usr/bin/env tarantool
+
+require('console').listen(os.getenv('ADMIN'))
+
+box.cfg({
+    listen = 'unix/:./master2.sock',
+    replication = {
+        'unix/:./master1.sock',
+        'unix/:./master2.sock',
+    },
+    read_only = true,
+    replication_synchro_quorum = 2,
+    replication_timeout = 0.5,
+})
diff --git a/test/replication/gh-5430-master3.lua b/test/replication/gh-5430-master3.lua
new file mode 100644
index 000000000..eff479a68
--- /dev/null
+++ b/test/replication/gh-5430-master3.lua
@@ -0,0 +1,12 @@
+#!/usr/bin/env tarantool
+
+require('console').listen(os.getenv('ADMIN'))
+
+box.cfg({
+    listen = 'unix/:./master3.sock',
+    replication = {
+        'unix/:./master1.sock',
+    },
+    replication_synchro_quorum = 3,
+    replication_timeout = 0.5,
+})
diff --git a/test/replication/gh-5430-qsync-promote-crash.result b/test/replication/gh-5430-qsync-promote-crash.result
new file mode 100644
index 000000000..1204c625a
--- /dev/null
+++ b/test/replication/gh-5430-qsync-promote-crash.result
@@ -0,0 +1,159 @@
+-- test-run result file version 2
+--
+-- gh-5430: box.ctl.promote() could assert if one of the replicas hadn't
+-- received anything from the old leader. Internally box.ctl.promote()
+-- collected a quorum by calling vclock_follow() on all connected relays with
+-- the old leader's ID, and for such a replica that meant vclock_follow(0),
+-- which is always a crash.
+--
+test_run = require('test_run').new()
+ | ---
+ | ...
+
+--
+-- Start 2 fullmesh nodes working normally.
+--
+test_run:cmd('create server master1 with '..                                    \
+             'script="replication/gh-5430-master1.lua"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('start server master1 with wait=False')
+ | ---
+ | - true
+ | ...
+
+test_run:cmd('create server master2 with '..                                    \
+             'script="replication/gh-5430-master2.lua"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('start server master2 with wait=True')
+ | ---
+ | - true
+ | ...
+
+--
+-- One of them won't write anything to WAL from now on. If a new instance is
+-- added and writes something, master2 won't apply it.
+--
+test_run:switch('master2')
+ | ---
+ | - true
+ | ...
+box.error.injection.set('ERRINJ_WAL_DELAY', true)
+ | ---
+ | - ok
+ | ...
+
+--
+-- The third node is the future 'old leader', for which master2 has
+-- vclock[master3] == 0.
+--
+test_run:cmd('create server master3 with '..                                    \
+             'script="replication/gh-5430-master3.lua"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('start server master3 with wait=True')
+ | ---
+ | - true
+ | ...
+
+--
+-- Make master1 fetch data from master3 so that it can receive the sync data
+-- and confirm it later.
+--
+test_run:switch('master1')
+ | ---
+ | - true
+ | ...
+-- Can't keep master2 in there because it hangs with ER_CFG about a duplicate
+-- connection. Even a reset via replication = {} does not always help. But for
+-- the test it does not matter.
+box.cfg{replication = {test_run:eval('master3', 'return box.cfg.listen')[1]}}
+ | ---
+ | ...
+
+--
+-- Master3 fills the limbo and dies.
+--
+test_run:switch('master3')
+ | ---
+ | - true
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+s = box.schema.create_space('test', {is_sync = true})
+ | ---
+ | ...
+_ = s:create_index('pk')
+ | ---
+ | ...
+_ = require('fiber').create(s.replace, s, {1})
+ | ---
+ | ...
+test_run:wait_lsn('master1', 'master3')
+ | ---
+ | ...
+
+test_run:switch('master1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server master3')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server master3')
+ | ---
+ | - true
+ | ...
+
+--
+-- Master1 tries to promote itself. In the meantime master2 still has
+-- vclock[master3] == 0 - it is blocked in the WAL thread. Master1 should be
+-- ready to see a 0 LSN in the old leader's component on some replicas.
+--
+box.cfg{replication_synchro_timeout = 0.1}
+ | ---
+ | ...
+assert(box.info.synchro.queue.len > 0)
+ | ---
+ | - true
+ | ...
+assert(not pcall(box.ctl.promote))
+ | ---
+ | - true
+ | ...
+
+test_run:switch('master2')
+ | ---
+ | - true
+ | ...
+box.error.injection.set('ERRINJ_WAL_DELAY', false)
+ | ---
+ | - ok
+ | ...
+
+test_run:switch('default')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server master2')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server master2')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server master1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server master1')
+ | ---
+ | - true
+ | ...
diff --git a/test/replication/gh-5430-qsync-promote-crash.test.lua b/test/replication/gh-5430-qsync-promote-crash.test.lua
new file mode 100644
index 000000000..7ef8860e7
--- /dev/null
+++ b/test/replication/gh-5430-qsync-promote-crash.test.lua
@@ -0,0 +1,76 @@
+--
+-- gh-5430: box.ctl.promote() could assert if one of the replicas hadn't
+-- received anything from the old leader. Internally box.ctl.promote()
+-- collected a quorum by calling vclock_follow() on all connected relays with
+-- the old leader's ID, and for such a replica that meant vclock_follow(0),
+-- which is always a crash.
+--
+test_run = require('test_run').new()
+
+--
+-- Start 2 fullmesh nodes working normally.
+--
+test_run:cmd('create server master1 with '..                                    \
+             'script="replication/gh-5430-master1.lua"')
+test_run:cmd('start server master1 with wait=False')
+
+test_run:cmd('create server master2 with '..                                    \
+             'script="replication/gh-5430-master2.lua"')
+test_run:cmd('start server master2 with wait=True')
+
+--
+-- One of them won't write anything to WAL from now on. If a new instance is
+-- added and writes something, master2 won't apply it.
+--
+test_run:switch('master2')
+box.error.injection.set('ERRINJ_WAL_DELAY', true)
+
+--
+-- The third node is the future 'old leader', for which master2 has
+-- vclock[master3] == 0.
+--
+test_run:cmd('create server master3 with '..                                    \
+             'script="replication/gh-5430-master3.lua"')
+test_run:cmd('start server master3 with wait=True')
+
+--
+-- Make master1 fetch data from master3 so that it can receive the sync data
+-- and confirm it later.
+--
+test_run:switch('master1')
+-- Can't keep master2 in there because it hangs with ER_CFG about a duplicate
+-- connection. Even a reset via replication = {} does not always help. But for
+-- the test it does not matter.
+box.cfg{replication = {test_run:eval('master3', 'return box.cfg.listen')[1]}}
+
+--
+-- Master3 fills the limbo and dies.
+--
+test_run:switch('master3')
+box.ctl.promote()
+s = box.schema.create_space('test', {is_sync = true})
+_ = s:create_index('pk')
+_ = require('fiber').create(s.replace, s, {1})
+test_run:wait_lsn('master1', 'master3')
+
+test_run:switch('master1')
+test_run:cmd('stop server master3')
+test_run:cmd('delete server master3')
+
+--
+-- Master1 tries to promote itself. In the meantime master2 still has
+-- vclock[master3] == 0 - it is blocked in the WAL thread. Master1 should be
+-- ready to see a 0 LSN in the old leader's component on some replicas.
+--
+box.cfg{replication_synchro_timeout = 0.1}
+assert(box.info.synchro.queue.len > 0)
+assert(not pcall(box.ctl.promote))
+
+test_run:switch('master2')
+box.error.injection.set('ERRINJ_WAL_DELAY', false)
+
+test_run:switch('default')
+test_run:cmd('stop server master2')
+test_run:cmd('delete server master2')
+test_run:cmd('stop server master1')
+test_run:cmd('delete server master1')
diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg
index e0bbe2676..5a4506e73 100644
--- a/test/replication/suite.cfg
+++ b/test/replication/suite.cfg
@@ -41,6 +41,7 @@
     "gh-4739-vclock-assert.test.lua": {},
     "gh-4730-applier-rollback.test.lua": {},
     "gh-4928-tx-boundaries.test.lua": {},
+    "gh-5430-qsync-promote-crash.test.lua": {},
     "gh-5440-qsync-ro.test.lua": {},
     "gh-5435-qsync-clear-synchro-queue-commit-all.test.lua": {},
     "gh-5536-wal-limit.test.lua": {},
diff --git a/test/replication/suite.ini b/test/replication/suite.ini
index 18981996d..34b289bc8 100644
--- a/test/replication/suite.ini
+++ b/test/replication/suite.ini
@@ -3,7 +3,7 @@ core = tarantool
 script =  master.lua
 description = tarantool/box, replication
 disabled = consistent.test.lua
-release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua
+release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua
 config = suite.cfg
 lua_libs = lua/fast_replica.lua lua/rlimit.lua
 use_unix_sockets = True
-- 
2.24.3 (Apple Git-128)



* Re: [Tarantool-patches] [PATCH 1/1] election: fix box.ctl.promote() crash on 0 LSN
From: Sergey Petrenko via Tarantool-patches @ 2021-07-25 17:41 UTC
  To: Vladislav Shpilevoy, tarantool-patches, gorcunov


23.07.2021 02:31, Vladislav Shpilevoy wrote:
> box.ctl.promote(), when the limbo was not empty, tried to wait for
> a quorum of instances confirming that they have received the
> latest data.
>
> The quorum is collected by walking the relays and filling a vclock
> object via vclock_follow() with the old leader's LSN as it is seen
> by each replica.
>
> Some replicas might have vclock[old_leader_id] == 0 if they
> haven't received anything from the old leader. vclock_follow()
> crashed on an attempt to follow these 0 LSNs: the collector's
> vclock starts from zeros, and vclock_follow() asserts that
> new_lsn > old_lsn, which turned into 0 > 0 and the crash.
>
> The patch makes the quorum collector skip these 0 LSNs.
>
> Part of #5430


Hi! Thanks for the patch!


...

> diff --git a/test/replication/gh-5430-master1.lua b/test/replication/gh-5430-master1.lua
> new file mode 100644
> index 000000000..0f57c87c1
> --- /dev/null
> +++ b/test/replication/gh-5430-master1.lua
> @@ -0,0 +1,15 @@
> +#!/usr/bin/env tarantool
> +
> +require('console').listen(os.getenv('ADMIN'))
> +
> +box.cfg({
> +    listen = 'unix/:./master1.sock',
> +    replication = {
> +        'unix/:./master1.sock',
> +        'unix/:./master2.sock',
> +    },
> +    replication_synchro_quorum = 2,
> +    replication_timeout = 0.5,
> +})
> +
> +box.schema.user.grant('guest', 'super')
> diff --git a/test/replication/gh-5430-master2.lua b/test/replication/gh-5430-master2.lua
> new file mode 100644
> index 000000000..b1aeccfe0
> --- /dev/null
> +++ b/test/replication/gh-5430-master2.lua
> @@ -0,0 +1,14 @@
> +#!/usr/bin/env tarantool
> +
> +require('console').listen(os.getenv('ADMIN'))
> +
> +box.cfg({
> +    listen = 'unix/:./master2.sock',
> +    replication = {
> +        'unix/:./master1.sock',
> +        'unix/:./master2.sock',
> +    },
> +    read_only = true,
> +    replication_synchro_quorum = 2,
> +    replication_timeout = 0.5,
> +})
> diff --git a/test/replication/gh-5430-master3.lua b/test/replication/gh-5430-master3.lua
> new file mode 100644
> index 000000000..eff479a68
> --- /dev/null
> +++ b/test/replication/gh-5430-master3.lua
> @@ -0,0 +1,12 @@
> +#!/usr/bin/env tarantool
> +
> +require('console').listen(os.getenv('ADMIN'))
> +
> +box.cfg({
> +    listen = 'unix/:./master3.sock',
> +    replication = {
> +        'unix/:./master1.sock',
> +    },
> +    replication_synchro_quorum = 3,
> +    replication_timeout = 0.5,
> +})


I'd create a single lua file, gh-5430-master.lua, and 3 symlinks.
The listening socket may be deduced from the filename, and
replication_synchro_quorum might be passed as a parameter.
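
Something like this, for example - just a rough sketch to show the idea; the
filename parsing and the arg[1]-based quorum are illustrative assumptions,
not tested:

#!/usr/bin/env tarantool
-- gh-5430-master.lua; gh-5430-master{1,2,3}.lua would be symlinks to it.

-- Deduce the instance id from the name the symlink was started under.
local id = tonumber(arg[0]:match('master(%d+)%.lua$'))
-- The quorum could be passed via test-run's start arguments.
local quorum = tonumber(arg[1]) or 2

require('console').listen(os.getenv('ADMIN'))

local replication = {'unix/:./master1.sock'}
if id ~= 3 then
    table.insert(replication, 'unix/:./master2.sock')
end

box.cfg({
    listen = 'unix/:./master'..id..'.sock',
    replication = replication,
    read_only = id == 2,
    replication_synchro_quorum = quorum,
    replication_timeout = 0.5,
})

if id == 1 then
    box.schema.user.grant('guest', 'super')
end

The per-instance differences would then live in one file instead of three.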


I don't insist, LGTM anyways.



* Re: [Tarantool-patches] [PATCH 1/1] election: fix box.ctl.promote() crash on 0 LSN
From: Vladislav Shpilevoy via Tarantool-patches @ 2021-07-26 21:54 UTC
  To: Sergey Petrenko, tarantool-patches, gorcunov

Thanks for the review!

>> diff --git a/test/replication/gh-5430-master3.lua b/test/replication/gh-5430-master3.lua
>> new file mode 100644
>> index 000000000..eff479a68
>> --- /dev/null
>> +++ b/test/replication/gh-5430-master3.lua
>> @@ -0,0 +1,12 @@
>> +#!/usr/bin/env tarantool
>> +
>> +require('console').listen(os.getenv('ADMIN'))
>> +
>> +box.cfg({
>> +    listen = 'unix/:./master3.sock',
>> +    replication = {
>> +        'unix/:./master1.sock',
>> +    },
>> +    replication_synchro_quorum = 3,
>> +    replication_timeout = 0.5,
>> +})
> 
> 
> I'd create a single lua file, gh-5430-master.lua, and 3 symlinks.
> 
> listening socket may be deduced from filename, and
> 
> replication_synchro_quorum might be passed as a parameter.

Too many differences. I need a grant in one master, read_only in another
master, and a different quorum and replication in the third master. I don't
like how monstrous such files look in the end with all these symbolic links
pointing at them. Here there is a bit of code duplication, but you can always
easily tell how each node is configured.

> I don't insist, LGTM anyways.

Pushed to master, 2.8, 2.7.
