From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vladislav Shpilevoy via Tarantool-patches
Reply-To: Vladislav Shpilevoy
To: tarantool-patches@dev.tarantool.org, gorcunov@gmail.com, sergepetrenko@tarantool.org
Date: Fri, 23 Jul 2021 01:31:02 +0200
Message-Id: <71a67913af750aa487571ee14aafde4e7981b0ee.1626996633.git.v.shpilevoy@tarantool.org>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [Tarantool-patches] [PATCH 1/1] election: fix box.ctl.promote() crash on 0 LSN

box.ctl.promote(), in case of a non-empty limbo, tried to wait for a
quorum of instances confirming that they have received the latest data.
The quorum is collected by walking the relays and filling a vclock
object using vclock_follow() with the old leader's LSN as it is seen by
each replica.

Some replicas might have vclock[old_leader_id] == 0 if they didn't
receive anything from the old leader. vclock_follow() crashed at the
attempt to follow these 0 LSNs: the collector's vclock is initialized
with zeros, and vclock_follow() has the assert new_lsn > old_lsn, which
turned into 0 > 0 and the crash.

The patch makes the quorum collector skip these 0 LSNs.

Part of #5430
---
Branch: http://github.com/tarantool/tarantool/tree/gerold103/gh-5430-crash-in-wait_quorum
Issue: https://github.com/tarantool/tarantool/issues/5430
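
For reviewers: below is a tiny self-contained sketch of the failure mode. It
is not the real src/box/vclock.h code; struct toy_vclock and
toy_vclock_follow() are simplified stand-ins. It only illustrates why
following a 0 LSN into a zero-initialized vclock trips the strictly-greater
assertion, and why skipping zero LSNs avoids it.

#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the quorum collector's vclock, zero-initialized. */
struct toy_vclock {
	int64_t lsn[32];
};

/* Same invariant as the real vclock_follow(): the LSN must strictly grow. */
static void
toy_vclock_follow(struct toy_vclock *vclock, uint32_t id, int64_t lsn)
{
	assert(lsn > vclock->lsn[id]);
	vclock->lsn[id] = lsn;
}

int
main(void)
{
	struct toy_vclock quorum = {{0}};
	/*
	 * A replica which never received anything from the old leader
	 * reports lsn == 0. Following it would evaluate 0 > 0 and fire
	 * the assertion, so such LSNs have to be skipped.
	 */
	int64_t reported_lsn = 0;
	if (reported_lsn != 0)
		toy_vclock_follow(&quorum, /*id=*/3, reported_lsn);
	return 0;
}

Skipping the zero LSN also means the replica does not bump ack_count, which is
exactly what the `continue` in box_wait_quorum() below achieves.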

 .../gh-5430-promote-quorum-crash.md           |   6 +
 src/box/box.cc                                |   8 +
 test/replication/gh-5430-master1.lua          |  15 ++
 test/replication/gh-5430-master2.lua          |  14 ++
 test/replication/gh-5430-master3.lua          |  12 ++
 .../gh-5430-qsync-promote-crash.result        | 159 ++++++++++++++++++
 .../gh-5430-qsync-promote-crash.test.lua      |  76 +++++++++
 test/replication/suite.cfg                    |   1 +
 test/replication/suite.ini                    |   2 +-
 9 files changed, 292 insertions(+), 1 deletion(-)
 create mode 100644 changelogs/unreleased/gh-5430-promote-quorum-crash.md
 create mode 100644 test/replication/gh-5430-master1.lua
 create mode 100644 test/replication/gh-5430-master2.lua
 create mode 100644 test/replication/gh-5430-master3.lua
 create mode 100644 test/replication/gh-5430-qsync-promote-crash.result
 create mode 100644 test/replication/gh-5430-qsync-promote-crash.test.lua

diff --git a/changelogs/unreleased/gh-5430-promote-quorum-crash.md b/changelogs/unreleased/gh-5430-promote-quorum-crash.md
new file mode 100644
index 000000000..09c7efd5c
--- /dev/null
+++ b/changelogs/unreleased/gh-5430-promote-quorum-crash.md
@@ -0,0 +1,6 @@
+## bugfix/replication
+
+* Fixed a possible crash when `box.ctl.promote()` was called in a cluster of
+  3 or more instances. The crash happened in debug builds; in release builds
+  it could lead to undefined behaviour. It was most likely to happen if a new
+  node was added shortly before the promotion (gh-5430).
diff --git a/src/box/box.cc b/src/box/box.cc
index 8c10a99dd..21fb38e35 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1488,6 +1488,14 @@ box_wait_quorum(uint32_t lead_id, int64_t target_lsn, int quorum,
 		assert(!tt_uuid_is_equal(&INSTANCE_UUID, &replica->uuid));
 		int64_t lsn = vclock_get(relay_vclock(replica->relay),
 					 lead_id);
+		/*
+		 * The replica might not have received anything from the old
+		 * leader yet. It easily happens with a newly added replica.
+		 * Its vclock can't be followed then: vclock_follow() would
+		 * assert on lsn > old lsn while both are 0.
+		 */
+		if (lsn == 0)
+			continue;
 		vclock_follow(&t.vclock, replica->id, lsn);
 		if (lsn >= target_lsn) {
 			ack_count++;
diff --git a/test/replication/gh-5430-master1.lua b/test/replication/gh-5430-master1.lua
new file mode 100644
index 000000000..0f57c87c1
--- /dev/null
+++ b/test/replication/gh-5430-master1.lua
@@ -0,0 +1,15 @@
+#!/usr/bin/env tarantool
+
+require('console').listen(os.getenv('ADMIN'))
+
+box.cfg({
+    listen = 'unix/:./master1.sock',
+    replication = {
+        'unix/:./master1.sock',
+        'unix/:./master2.sock',
+    },
+    replication_synchro_quorum = 2,
+    replication_timeout = 0.5,
+})
+
+box.schema.user.grant('guest', 'super')
diff --git a/test/replication/gh-5430-master2.lua b/test/replication/gh-5430-master2.lua
new file mode 100644
index 000000000..b1aeccfe0
--- /dev/null
+++ b/test/replication/gh-5430-master2.lua
@@ -0,0 +1,14 @@
+#!/usr/bin/env tarantool
+
+require('console').listen(os.getenv('ADMIN'))
+
+box.cfg({
+    listen = 'unix/:./master2.sock',
+    replication = {
+        'unix/:./master1.sock',
+        'unix/:./master2.sock',
+    },
+    read_only = true,
+    replication_synchro_quorum = 2,
+    replication_timeout = 0.5,
+})
diff --git a/test/replication/gh-5430-master3.lua b/test/replication/gh-5430-master3.lua
new file mode 100644
index 000000000..eff479a68
--- /dev/null
+++ b/test/replication/gh-5430-master3.lua
@@ -0,0 +1,12 @@
+#!/usr/bin/env tarantool
+
+require('console').listen(os.getenv('ADMIN'))
+
+box.cfg({
+    listen = 'unix/:./master3.sock',
+    replication = {
+        'unix/:./master1.sock',
+    },
+    replication_synchro_quorum = 3,
+    replication_timeout = 0.5,
+})
diff --git a/test/replication/gh-5430-qsync-promote-crash.result b/test/replication/gh-5430-qsync-promote-crash.result
new file mode 100644
index 000000000..1204c625a
--- /dev/null
+++ b/test/replication/gh-5430-qsync-promote-crash.result
@@ -0,0 +1,159 @@
+-- test-run result file version 2
+--
+-- gh-5430: box.ctl.promote() could assert if one of the replicas didn't
+-- receive anything from the old leader. Internally box.ctl.promote() collected
+-- a quorum using vclock_follow() from all connected relays by the old leader
+-- ID, and in that case such a replica led to vclock_follow(0), which is always
+-- a crash.
+--
+test_run = require('test_run').new()
+ | ---
+ | ...
+
+--
+-- Start 2 full-mesh nodes working normally.
+--
+test_run:cmd('create server master1 with '.. \
+             'script="replication/gh-5430-master1.lua"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('start server master1 with wait=False')
+ | ---
+ | - true
+ | ...
+
+test_run:cmd('create server master2 with '.. \
+             'script="replication/gh-5430-master2.lua"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('start server master2 with wait=True')
+ | ---
+ | - true
+ | ...
+
+--
+-- One of them won't write anything to the WAL from now on. If a new instance
+-- is added and writes something, master2 won't apply it.
+--
+test_run:switch('master2')
+ | ---
+ | - true
+ | ...
+box.error.injection.set('ERRINJ_WAL_DELAY', true)
+ | ---
+ | - ok
+ | ...
+
+--
+-- The third node is the future 'old leader', for which master2 has
+-- vclock[master3] == 0.
+--
+test_run:cmd('create server master3 with '.. \
+             'script="replication/gh-5430-master3.lua"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('start server master3 with wait=True')
+ | ---
+ | - true
+ | ...
+
+--
+-- Make master1 fetch data from master3 so it can receive the sync data and
+-- confirm it later.
+--
+test_run:switch('master1')
+ | ---
+ | - true
+ | ...
+-- Can't keep master2 in here, because it hangs with ER_CFG about a duplicate
+-- connection. Even a reset via replication = {} does not always help. But for
+-- the test it does not matter.
+box.cfg{replication = {test_run:eval('master3', 'return box.cfg.listen')[1]}}
+ | ---
+ | ...
+
+--
+-- Master3 fills the limbo and dies.
+--
+test_run:switch('master3')
+ | ---
+ | - true
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+s = box.schema.create_space('test', {is_sync = true})
+ | ---
+ | ...
+_ = s:create_index('pk')
+ | ---
+ | ...
+_ = require('fiber').create(s.replace, s, {1})
+ | ---
+ | ...
+test_run:wait_lsn('master1', 'master3')
+ | ---
+ | ...
+
+test_run:switch('master1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server master3')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server master3')
+ | ---
+ | - true
+ | ...
+
+--
+-- Master1 tries to promote itself. In the meantime master2 still has
+-- vclock[master3] == 0, because it is blocked in the WAL thread. Master1 must
+-- be ready to see a 0 LSN in the old leader's component on some replicas.
+--
+box.cfg{replication_synchro_timeout = 0.1}
+ | ---
+ | ...
+assert(box.info.synchro.queue.len > 0)
+ | ---
+ | - true
+ | ...
+assert(not pcall(box.ctl.promote))
+ | ---
+ | - true
+ | ...
+
+test_run:switch('master2')
+ | ---
+ | - true
+ | ...
+box.error.injection.set('ERRINJ_WAL_DELAY', false)
+ | ---
+ | - ok
+ | ...
+
+test_run:switch('default')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server master2')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server master2')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server master1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server master1')
+ | ---
+ | - true
+ | ...
diff --git a/test/replication/gh-5430-qsync-promote-crash.test.lua b/test/replication/gh-5430-qsync-promote-crash.test.lua
new file mode 100644
index 000000000..7ef8860e7
--- /dev/null
+++ b/test/replication/gh-5430-qsync-promote-crash.test.lua
@@ -0,0 +1,76 @@
+--
+-- gh-5430: box.ctl.promote() could assert if one of the replicas didn't
+-- receive anything from the old leader. Internally box.ctl.promote() collected
+-- a quorum using vclock_follow() from all connected relays by the old leader
+-- ID, and in that case such a replica led to vclock_follow(0), which is always
+-- a crash.
+--
+test_run = require('test_run').new()
+
+--
+-- Start 2 full-mesh nodes working normally.
+--
+test_run:cmd('create server master1 with '.. \
+             'script="replication/gh-5430-master1.lua"')
+test_run:cmd('start server master1 with wait=False')
+
+test_run:cmd('create server master2 with '.. \
+             'script="replication/gh-5430-master2.lua"')
+test_run:cmd('start server master2 with wait=True')
+
+--
+-- One of them won't write anything to the WAL from now on. If a new instance
+-- is added and writes something, master2 won't apply it.
+--
+test_run:switch('master2')
+box.error.injection.set('ERRINJ_WAL_DELAY', true)
+
+--
+-- The third node is the future 'old leader', for which master2 has
+-- vclock[master3] == 0.
+--
+test_run:cmd('create server master3 with '.. \
+             'script="replication/gh-5430-master3.lua"')
+test_run:cmd('start server master3 with wait=True')
+
+--
+-- Make master1 fetch data from master3 so it can receive the sync data and
+-- confirm it later.
+--
+test_run:switch('master1')
+-- Can't keep master2 in here, because it hangs with ER_CFG about a duplicate
+-- connection. Even a reset via replication = {} does not always help. But for
+-- the test it does not matter.
+box.cfg{replication = {test_run:eval('master3', 'return box.cfg.listen')[1]}}
+
+--
+-- Master3 fills the limbo and dies.
+--
+test_run:switch('master3')
+box.ctl.promote()
+s = box.schema.create_space('test', {is_sync = true})
+_ = s:create_index('pk')
+_ = require('fiber').create(s.replace, s, {1})
+test_run:wait_lsn('master1', 'master3')
+
+test_run:switch('master1')
+test_run:cmd('stop server master3')
+test_run:cmd('delete server master3')
+
+--
+-- Master1 tries to promote itself. In the meantime master2 still has
+-- vclock[master3] == 0, because it is blocked in the WAL thread. Master1 must
+-- be ready to see a 0 LSN in the old leader's component on some replicas.
+--
+box.cfg{replication_synchro_timeout = 0.1}
+assert(box.info.synchro.queue.len > 0)
+assert(not pcall(box.ctl.promote))
+
+test_run:switch('master2')
+box.error.injection.set('ERRINJ_WAL_DELAY', false)
+
+test_run:switch('default')
+test_run:cmd('stop server master2')
+test_run:cmd('delete server master2')
+test_run:cmd('stop server master1')
+test_run:cmd('delete server master1')
diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg
index e0bbe2676..5a4506e73 100644
--- a/test/replication/suite.cfg
+++ b/test/replication/suite.cfg
@@ -41,6 +41,7 @@
     "gh-4739-vclock-assert.test.lua": {},
     "gh-4730-applier-rollback.test.lua": {},
     "gh-4928-tx-boundaries.test.lua": {},
+    "gh-5430-qsync-promote-crash.test.lua": {},
     "gh-5440-qsync-ro.test.lua": {},
     "gh-5435-qsync-clear-synchro-queue-commit-all.test.lua": {},
     "gh-5536-wal-limit.test.lua": {},
diff --git a/test/replication/suite.ini b/test/replication/suite.ini
index 18981996d..34b289bc8 100644
--- a/test/replication/suite.ini
+++ b/test/replication/suite.ini
@@ -3,7 +3,7 @@ core = tarantool
 script = master.lua
 description = tarantool/box, replication
 disabled = consistent.test.lua
-release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua
+release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua
 config = suite.cfg
 lua_libs = lua/fast_replica.lua lua/rlimit.lua
 use_unix_sockets = True
-- 
2.24.3 (Apple Git-128)