From: Serge Petrenko via Tarantool-patches <tarantool-patches@dev.tarantool.org>
To: Cyrill Gorcunov <gorcunov@gmail.com>, tml <tarantool-patches@dev.tarantool.org>
Cc: Vladislav Shpilevoy <v.shpilevoy@tarantool.org>
Subject: Re: [Tarantool-patches] [PATCH v21 3/3] test: add gh-6036-qsync-order test
Date: Mon, 11 Oct 2021 16:38:43 +0300
Message-ID: <96c8d51f-3861-d323-b2f2-136f64fdcfd8@tarantool.org>
In-Reply-To: <20211008175809.349501-4-gorcunov@gmail.com>

On 08.10.2021 20:58, Cyrill Gorcunov wrote:

Thanks for the fixes! There are some minor comments left. Please address
them, and after that, LGTM.

> To test that promotion requests are handled only when appropriate
> write to WAL completes, because we update memory data before the
> write finishes.
>
> Note that without the patch this test fires assertion

Please mention the patch you're talking about. I know it's in the previous
commit, but still. Something like: without the patch "qsync: order access
to the limbo terms" this test fires an assertion.

>> tarantool: src/box/txn_limbo.c:481: txn_limbo_read_rollback: Assertion `e->txn->signature >= 0' failed.
> Part-of #6036
>
> Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
> ---
>  test/replication/gh-6036-qsync-order.result   | 204 ++++++++++++++++++
>  test/replication/gh-6036-qsync-order.test.lua |  97 +++++++++
>  test/replication/suite.cfg                    |   1 +
>  test/replication/suite.ini                    |   2 +-
>  4 files changed, 303 insertions(+), 1 deletion(-)
>  create mode 100644 test/replication/gh-6036-qsync-order.result
>  create mode 100644 test/replication/gh-6036-qsync-order.test.lua
>
> diff --git a/test/replication/gh-6036-qsync-order.result b/test/replication/gh-6036-qsync-order.result
> new file mode 100644
> index 000000000..eb3e808cb
> --- /dev/null
> +++ b/test/replication/gh-6036-qsync-order.result
> @@ -0,0 +1,204 @@
> +-- test-run result file version 2
> +--
> +-- gh-6036: verify that terms are locked when we're inside journal
> +-- write routine, because parallel appliers may ignore the fact that
> +-- the term is updated already but not yet written leading to data
> +-- inconsistency.
> +--
> +test_run = require('test_run').new()
> + | ---
> + | ...
> +
> +SERVERS={"election_replica1", "election_replica2", "election_replica3"}
> + | ---
> + | ...
> +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'})
> + | ---
> + | ...
> +test_run:wait_fullmesh(SERVERS)
> + | ---
> + | ...
> +
> +--
> +-- Create a synchro space on the master node and make
> +-- sure the write processed just fine.
> +test_run:switch("election_replica1")
> + | ---
> + | - true
> + | ...
> +box.ctl.promote()
> + | ---
> + | ...
> +s = box.schema.create_space('test', {is_sync = true})
> + | ---
> + | ...
> +_ = s:create_index('pk')
> + | ---
> + | ...
> +s:insert{1}
> + | ---
> + | - [1]
> + | ...
> +
> +test_run:switch("election_replica2")
> + | ---
> + | - true
> + | ...
> +test_run:wait_lsn('election_replica2', 'election_replica1')
> + | ---
> + | ...
> +
> +test_run:switch("election_replica3")
> + | ---
> + | - true
> + | ...
> +test_run:wait_lsn('election_replica3', 'election_replica1')
> + | ---
> + | ...
> +
> +--
> +-- Drop connection between election_replica1 and election_replica2.
> +test_run:switch("election_replica1")
> + | ---
> + | - true
> + | ...
> +box.cfg({                                       \
> +    replication = {                             \
> +        "unix/:./election_replica1.sock",       \
> +        "unix/:./election_replica3.sock",       \
> +    },                                          \
> +})
> + | ---
> + | ...
> +--
> +-- Drop connection between election_replica2 and election_replica1.
> +test_run:switch("election_replica2")
> + | ---
> + | - true
> + | ...
> +test_run:wait_cond(function() return box.space.test:get{1} ~= nil end)
> + | ---
> + | - true
> + | ...

This particular check is extraneous. You've already done a wait_lsn above.

> +box.cfg({                                       \
> +    replication = {                             \
> +        "unix/:./election_replica2.sock",       \
> +        "unix/:./election_replica3.sock",       \
> +    },                                          \
> +})
> + | ---
> + | ...
> +
> +--
> +-- Here we have the following scheme
> +--
> +--              election_replica3 (will be delayed)
> +--             /                \
> +--    election_replica1    election_replica2
> +
> +--
> +-- Initiate disk delay in a bit tricky way: the next write will
> +-- fall into forever sleep.
> +test_run:switch("election_replica3")
> + | ---
> + | - true
> + | ...
> +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT")
> + | ---
> + | ...
> +--
> +-- Make election_replica2 been a leader and start writting data,
> +-- the PROMOTE request get queued on election_replica3 and not
> +-- yet processed, same time INSERT won't complete either
> +-- waiting for PROMOTE completion first. Note that we
> +-- enter election_replica3 as well just to be sure the PROMOTE
> +-- reached it.
> +test_run:switch("election_replica2")
> + | ---
> + | - true
> + | ...
> +box.ctl.promote()
> + | ---
> + | ...
> +test_run:switch("election_replica3")
> + | ---
> + | - true
> + | ...
> +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end)
> + | ---
> + | - true
> + | ...
> +box.error.injection.set("ERRINJ_WAL_DELAY", true)
> + | ---
> + | - ok

This errinj should be set before 'election_replica2' writes promote, like
it used to be in previous patch versions. Let's set it right where you get
'ERRINJ_WAL_WRITE_COUNT'.

> + | ...
> +test_run:switch("election_replica2")
> + | ---
> + | - true
> + | ...
> +_ = require('fiber').create(function() box.space.test:insert{2} end)
> + | ---
> + | ...
> +
> +--
> +-- The election_replica1 node has no clue that there is a new leader
> +-- and continue writing data with obsolete term. Since election_replica3
> +-- is delayed now the INSERT won't proceed yet but get queued.
> +test_run:switch("election_replica1")
> + | ---
> + | - true
> + | ...
> +_ = require('fiber').create(function() box.space.test:insert{3} end)
> + | ---
> + | ...
> +
> +--
> +-- Finally enable election_replica3 back. Make sure the data from new election_replica2
> +-- leader get writing while old leader's data ignored.
> +test_run:switch("election_replica3")
> + | ---
> + | - true
> + | ...
> +box.error.injection.set('ERRINJ_WAL_DELAY', false)
> + | ---
> + | - ok
> + | ...
> +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end)
> + | ---
> + | - true
> + | ...
> +box.space.test:select{}
> + | ---
> + | - - [1]
> + |   - [2]
> + | ...
> +
> +test_run:switch("default")
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server election_replica1')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server election_replica2')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server election_replica3')
> + | ---
> + | - true
> + | ...
> +
> +test_run:cmd('delete server election_replica1')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('delete server election_replica2')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('delete server election_replica3')
> + | ---
> + | - true
> + | ...
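For clarity, the reordering suggested in the comment above (arming the WAL
delay on election_replica3 right where 'ERRINJ_WAL_WRITE_COUNT' is sampled,
before election_replica2 writes PROMOTE) might look roughly like this in the
.test.lua source. This is only a sketch of the suggestion, not the applied
patch, and it assumes the same error-injection names and test_run helpers
already used in this test:

```lua
-- Sketch of the suggested ordering (hypothetical, not the applied patch):
-- arm the WAL delay on election_replica3 *before* election_replica2
-- writes PROMOTE, so the PROMOTE is guaranteed to be the write that
-- blocks in WAL.
test_run:switch("election_replica3")
write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT")
box.error.injection.set("ERRINJ_WAL_DELAY", true)
-- Now make election_replica2 the leader; its PROMOTE reaches
-- election_replica3 and hangs in the delayed WAL write.
test_run:switch("election_replica2")
box.ctl.promote()
-- Back on election_replica3, wait until the blocked write is attempted.
test_run:switch("election_replica3")
test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end)
```

This way there is no window in which election_replica3 could finish writing
the PROMOTE before the delay is armed.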
> diff --git a/test/replication/gh-6036-qsync-order.test.lua b/test/replication/gh-6036-qsync-order.test.lua
> new file mode 100644
> index 000000000..b8df170b8
> --- /dev/null
> +++ b/test/replication/gh-6036-qsync-order.test.lua
> @@ -0,0 +1,97 @@
> +--
> +-- gh-6036: verify that terms are locked when we're inside journal
> +-- write routine, because parallel appliers may ignore the fact that
> +-- the term is updated already but not yet written leading to data
> +-- inconsistency.
> +--
> +test_run = require('test_run').new()
> +
> +SERVERS={"election_replica1", "election_replica2", "election_replica3"}
> +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'})
> +test_run:wait_fullmesh(SERVERS)
> +
> +--
> +-- Create a synchro space on the master node and make
> +-- sure the write processed just fine.
> +test_run:switch("election_replica1")
> +box.ctl.promote()
> +s = box.schema.create_space('test', {is_sync = true})
> +_ = s:create_index('pk')
> +s:insert{1}
> +
> +test_run:switch("election_replica2")
> +test_run:wait_lsn('election_replica2', 'election_replica1')
> +
> +test_run:switch("election_replica3")
> +test_run:wait_lsn('election_replica3', 'election_replica1')
> +
> +--
> +-- Drop connection between election_replica1 and election_replica2.
> +test_run:switch("election_replica1")
> +box.cfg({                                       \
> +    replication = {                             \
> +        "unix/:./election_replica1.sock",       \
> +        "unix/:./election_replica3.sock",       \
> +    },                                          \
> +})
> +--
> +-- Drop connection between election_replica2 and election_replica1.
> +test_run:switch("election_replica2")
> +test_run:wait_cond(function() return box.space.test:get{1} ~= nil end)
> +box.cfg({                                       \
> +    replication = {                             \
> +        "unix/:./election_replica2.sock",       \
> +        "unix/:./election_replica3.sock",       \
> +    },                                          \
> +})
> +
> +--
> +-- Here we have the following scheme
> +--
> +--              election_replica3 (will be delayed)
> +--             /                \
> +--    election_replica1    election_replica2
> +
> +--
> +-- Initiate disk delay in a bit tricky way: the next write will
> +-- fall into forever sleep.
> +test_run:switch("election_replica3")
> +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT")
> +--
> +-- Make election_replica2 been a leader and start writting data,
> +-- the PROMOTE request get queued on election_replica3 and not
> +-- yet processed, same time INSERT won't complete either
> +-- waiting for PROMOTE completion first. Note that we
> +-- enter election_replica3 as well just to be sure the PROMOTE
> +-- reached it.
> +test_run:switch("election_replica2")
> +box.ctl.promote()
> +test_run:switch("election_replica3")
> +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end)
> +box.error.injection.set("ERRINJ_WAL_DELAY", true)
> +test_run:switch("election_replica2")
> +_ = require('fiber').create(function() box.space.test:insert{2} end)
> +
> +--
> +-- The election_replica1 node has no clue that there is a new leader
> +-- and continue writing data with obsolete term. Since election_replica3
> +-- is delayed now the INSERT won't proceed yet but get queued.
> +test_run:switch("election_replica1")
> +_ = require('fiber').create(function() box.space.test:insert{3} end)
> +
> +--
> +-- Finally enable election_replica3 back. Make sure the data from new election_replica2
> +-- leader get writing while old leader's data ignored.
> +test_run:switch("election_replica3")
> +box.error.injection.set('ERRINJ_WAL_DELAY', false)
> +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end)
> +box.space.test:select{}
> +
> +test_run:switch("default")
> +test_run:cmd('stop server election_replica1')
> +test_run:cmd('stop server election_replica2')
> +test_run:cmd('stop server election_replica3')
> +
> +test_run:cmd('delete server election_replica1')
> +test_run:cmd('delete server election_replica2')
> +test_run:cmd('delete server election_replica3')
> diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg
> index 3eee0803c..ed09b2087 100644
> --- a/test/replication/suite.cfg
> +++ b/test/replication/suite.cfg
> @@ -59,6 +59,7 @@
>      "gh-6094-rs-uuid-mismatch.test.lua": {},
>      "gh-6127-election-join-new.test.lua": {},
>      "gh-6035-applier-filter.test.lua": {},
> +    "gh-6036-qsync-order.test.lua": {},
>      "election-candidate-promote.test.lua": {},
>      "*": {
>          "memtx": {"engine": "memtx"},
> diff --git a/test/replication/suite.ini b/test/replication/suite.ini
> index 77eb95f49..080e4fbf4 100644
> --- a/test/replication/suite.ini
> +++ b/test/replication/suite.ini
> @@ -3,7 +3,7 @@
>  core = tarantool
>  script =  master.lua
>  description = tarantool/box, replication
>  disabled = consistent.test.lua
> -release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua
> +release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua gh-6036-qsync-order.test.lua
>  config = suite.cfg
>  lua_libs = lua/fast_replica.lua lua/rlimit.lua
>  use_unix_sockets = True

-- 
Serge Petrenko