From: Serge Petrenko via Tarantool-patches
Reply-To: Serge Petrenko
To: Cyrill Gorcunov, tml
Cc: Vladislav Shpilevoy
Subject: Re: [Tarantool-patches] [PATCH v22 3/3] test: add gh-6036-qsync-order test
Date: Tue, 12 Oct 2021 12:47:06 +0300
Message-ID: <93bf8e06-afd7-49b1-4924-df2b49ca082d@tarantool.org>
In-Reply-To: <20211011191635.573685-4-gorcunov@gmail.com>
References: <20211011191635.573685-1-gorcunov@gmail.com> <20211011191635.573685-4-gorcunov@gmail.com>
List-Id: Tarantool development patches

On 11.10.2021 22:16, Cyrill Gorcunov wrote:
> Test that promotion requests are handled only when the appropriate
> WAL write completes, because we update the in-memory data before the
> write finishes.
>
> Note that without the patch "qsync: order access to the limbo terms"
> this test fires the assertion
>
>> tarantool: src/box/txn_limbo.c:481: txn_limbo_read_rollback: Assertion `e->txn->signature >= 0' failed.

Thanks for the changes! Sorry for the nitpicking; there are just a couple
of minor comments left.

> Part-of #6036
>
> Signed-off-by: Cyrill Gorcunov
> ---
>  test/replication/gh-6036-qsync-order.result   | 200 ++++++++++++++++++
>  test/replication/gh-6036-qsync-order.test.lua |  96 +++++++++
>  test/replication/suite.cfg                    |   1 +
>  test/replication/suite.ini                    |   2 +-
>  4 files changed, 298 insertions(+), 1 deletion(-)
>  create mode 100644 test/replication/gh-6036-qsync-order.result
>  create mode 100644 test/replication/gh-6036-qsync-order.test.lua
>
> diff --git a/test/replication/gh-6036-qsync-order.result b/test/replication/gh-6036-qsync-order.result
> new file mode 100644
> index 000000000..464a131a4
> --- /dev/null
> +++ b/test/replication/gh-6036-qsync-order.result
> @@ -0,0 +1,200 @@
> +-- test-run result file version 2
> +--
> +-- gh-6036: verify that terms are locked when we're inside the journal
> +-- write routine, because parallel appliers may ignore the fact that
> +-- the term is already updated but not yet written, leading to data
> +-- inconsistency.
> +--
> +test_run = require('test_run').new()
> + | ---
> + | ...
> +
> +SERVERS={"election_replica1", "election_replica2", "election_replica3"}
> + | ---
> + | ...
> +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'})
> + | ---
> + | ...
> +test_run:wait_fullmesh(SERVERS)
> + | ---
> + | ...
> +
> +--
> +-- Create a synchro space on the master node and make
> +-- sure the write is processed just fine.
> +test_run:switch("election_replica1")
> + | ---
> + | - true
> + | ...
> +box.ctl.promote()
> + | ---
> + | ...
> +s = box.schema.create_space('test', {is_sync = true})
> + | ---
> + | ...
> +_ = s:create_index('pk')
> + | ---
> + | ...
> +s:insert{1}
> + | ---
> + | - [1]
> + | ...
> +
> +test_run:switch("election_replica2")
> + | ---
> + | - true
> + | ...
> +test_run:wait_lsn('election_replica2', 'election_replica1')
> + | ---
> + | ...
> +
> +test_run:switch("election_replica3")
> + | ---
> + | - true
> + | ...
> +test_run:wait_lsn('election_replica3', 'election_replica1')
> + | ---
> + | ...
> +

No need to switch to 'election_replica2' or 'election_replica3' before
doing a 'wait_lsn'. You may remain on 'election_replica1' and drop the
2 switches.

> +--
> +-- Drop connection between election_replica1 and election_replica2.
> +test_run:switch("election_replica1")

This switch may be dropped as well after you drop the previous 2.
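Roughly like this (an untested sketch of what I mean), staying on
election_replica1 the whole time:

    test_run:wait_lsn('election_replica2', 'election_replica1')
    test_run:wait_lsn('election_replica3', 'election_replica1')
    -- no switch needed here, we are on election_replica1 already
    box.cfg({ \
        replication = { \
            "unix/:./election_replica1.sock", \
            "unix/:./election_replica3.sock", \
        }, \
    })
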
> +test_run:switch("election_replica2") > + | --- > + | - true > + | ... > +box.cfg({ \ > + replication = { \ > + "unix/:./election_replica2.sock", \ > + "unix/:./election_replica3.sock", \ > + }, \ > +}) > + | --- > + | ... > + > +-- > +-- Here we have the following scheme > +-- > +-- election_replica3 (will be delayed) > +-- / \ > +-- election_replica1 election_replica2 > + > +-- > +-- Initiate disk delay in a bit tricky way: the next write will > +-- fall into forever sleep. > +test_run:switch("election_replica3") > + | --- > + | - true > + | ... > +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > + | --- > + | ... > +box.error.injection.set("ERRINJ_WAL_DELAY", true) > + | --- > + | - ok > + | ... > +-- > +-- Make election_replica2 been a leader and start writting data, > +-- the PROMOTE request get queued on election_replica3 and not > +-- yet processed, same time INSERT won't complete either > +-- waiting for PROMOTE completion first. Note that we > +-- enter election_replica3 as well just to be sure the PROMOTE > +-- reached it. > +test_run:switch("election_replica2") > + | --- > + | - true > + | ... > +box.ctl.promote() > + | --- > + | ... > +test_run:switch("election_replica3") > + | --- > + | - true > + | ... > +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end) > + | --- > + | - true > + | ... > +test_run:switch("election_replica2") > + | --- > + | - true > + | ... > +_ = require('fiber').create(function() box.space.test:insert{2} end) By the way, why do you need a fiber for that? synchro_quorum is 1, as I remember, so the insert shouldn't block even without the fiber. > + | --- > + | ... > + > +-- > +-- The election_replica1 node has no clue that there is a new leader > +-- and continue writing data with obsolete term. Since election_replica3 > +-- is delayed now the INSERT won't proceed yet but get queued. > +test_run:switch("election_replica1") > + | --- > + | - true > + | ... > +_ = require('fiber').create(function() box.space.test:insert{3} end) > + | --- > + | ... > + Same as above, looks like you don't need a fiber here. Am I wrong? > +-- > +-- Finally enable election_replica3 back. Make sure the data from new election_replica2 > +-- leader get writing while old leader's data ignored. > +test_run:switch("election_replica3") > + | --- > + | - true > + | ... Please, prior to enabling replica3, make sure it's received everything: ERRINJ_WAL_WRITE_COUNT == write_cnt + 3 > +box.error.injection.set('ERRINJ_WAL_DELAY', false) > + | --- > + | - ok > + | ... > +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end) > + | --- > + | - true > + | ... > +box.space.test:select{} > + | --- > + | - - [1] > + | - [2] > + | ... > + > +test_run:switch("default") > + | --- > + | - true > + | ... > +test_run:cmd('stop server election_replica1') > + | --- > + | - true > + | ... > +test_run:cmd('stop server election_replica2') > + | --- > + | - true > + | ... > +test_run:cmd('stop server election_replica3') > + | --- > + | - true > + | ... > + > +test_run:cmd('delete server election_replica1') > + | --- > + | - true > + | ... > +test_run:cmd('delete server election_replica2') > + | --- > + | - true > + | ... > +test_run:cmd('delete server election_replica3') > + | --- > + | - true > + | ... 
> diff --git a/test/replication/gh-6036-qsync-order.test.lua b/test/replication/gh-6036-qsync-order.test.lua
> new file mode 100644
> index 000000000..6350e9303
> --- /dev/null
> +++ b/test/replication/gh-6036-qsync-order.test.lua
> @@ -0,0 +1,96 @@
> +--
> +-- gh-6036: verify that terms are locked when we're inside the journal
> +-- write routine, because parallel appliers may ignore the fact that
> +-- the term is already updated but not yet written, leading to data
> +-- inconsistency.
> +--
> +test_run = require('test_run').new()
> +
> +SERVERS={"election_replica1", "election_replica2", "election_replica3"}
> +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'})
> +test_run:wait_fullmesh(SERVERS)
> +
> +--
> +-- Create a synchro space on the master node and make
> +-- sure the write is processed just fine.
> +test_run:switch("election_replica1")
> +box.ctl.promote()
> +s = box.schema.create_space('test', {is_sync = true})
> +_ = s:create_index('pk')
> +s:insert{1}
> +
> +test_run:switch("election_replica2")
> +test_run:wait_lsn('election_replica2', 'election_replica1')
> +
> +test_run:switch("election_replica3")
> +test_run:wait_lsn('election_replica3', 'election_replica1')
> +
> +--
> +-- Drop connection between election_replica1 and election_replica2.
> +test_run:switch("election_replica1")
> +box.cfg({ \
> +    replication = { \
> +        "unix/:./election_replica1.sock", \
> +        "unix/:./election_replica3.sock", \
> +    }, \
> +})
> +--
> +-- Drop connection between election_replica2 and election_replica1.
> +test_run:switch("election_replica2")
> +box.cfg({ \
> +    replication = { \
> +        "unix/:./election_replica2.sock", \
> +        "unix/:./election_replica3.sock", \
> +    }, \
> +})
> +
> +--
> +-- Here we have the following scheme
> +--
> +--          election_replica3 (will be delayed)
> +--         /                  \
> +-- election_replica1    election_replica2
> +
> +--
> +-- Initiate a disk delay in a slightly tricky way: the next write will
> +-- fall into an infinite sleep.
> +test_run:switch("election_replica3")
> +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT")
> +box.error.injection.set("ERRINJ_WAL_DELAY", true)
> +--
> +-- Make election_replica2 a leader and start writing data.
> +-- The PROMOTE request gets queued on election_replica3 and is not
> +-- yet processed; at the same time the INSERT won't complete either,
> +-- waiting for the PROMOTE completion first. Note that we
> +-- enter election_replica3 as well just to make sure the PROMOTE
> +-- has reached it.
> +test_run:switch("election_replica2")
> +box.ctl.promote()
> +test_run:switch("election_replica3")
> +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end)
> +test_run:switch("election_replica2")
> +_ = require('fiber').create(function() box.space.test:insert{2} end)
> +
> +--
> +-- The election_replica1 node has no clue that there is a new leader
> +-- and continues writing data with an obsolete term. Since election_replica3
> +-- is delayed now, the INSERT won't proceed yet but gets queued.
> +test_run:switch("election_replica1")
> +_ = require('fiber').create(function() box.space.test:insert{3} end)
> +
> +--
> +-- Finally enable election_replica3 back. Make sure the data from the new
> +-- election_replica2 leader gets written while the old leader's data is ignored.
> +test_run:switch("election_replica3") > +box.error.injection.set('ERRINJ_WAL_DELAY', false) > +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end) > +box.space.test:select{} > + > +test_run:switch("default") > +test_run:cmd('stop server election_replica1') > +test_run:cmd('stop server election_replica2') > +test_run:cmd('stop server election_replica3') > + > +test_run:cmd('delete server election_replica1') > +test_run:cmd('delete server election_replica2') > +test_run:cmd('delete server election_replica3') > diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg > index 3eee0803c..ed09b2087 100644 > --- a/test/replication/suite.cfg > +++ b/test/replication/suite.cfg > @@ -59,6 +59,7 @@ > "gh-6094-rs-uuid-mismatch.test.lua": {}, > "gh-6127-election-join-new.test.lua": {}, > "gh-6035-applier-filter.test.lua": {}, > + "gh-6036-qsync-order.test.lua": {}, > "election-candidate-promote.test.lua": {}, > "*": { > "memtx": {"engine": "memtx"}, > diff --git a/test/replication/suite.ini b/test/replication/suite.ini > index 77eb95f49..080e4fbf4 100644 > --- a/test/replication/suite.ini > +++ b/test/replication/suite.ini > @@ -3,7 +3,7 @@ core = tarantool > script = master.lua > description = tarantool/box, replication > disabled = consistent.test.lua > -release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua > +release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua gh-6036-qsync-order.test.lua > config = suite.cfg > lua_libs = lua/fast_replica.lua lua/rlimit.lua > use_unix_sockets = True -- Serge Petrenko