To: Cyrill Gorcunov, tml
References: <20211014215622.49732-1-gorcunov@gmail.com> <20211014215622.49732-4-gorcunov@gmail.com>
Message-ID: <5ce44467-f9a9-0743-3394-c2a40cff463c@tarantool.org>
Date: Tue, 19 Oct 2021 18:09:50 +0300
In-Reply-To: <20211014215622.49732-4-gorcunov@gmail.com>
Subject: Re: [Tarantool-patches] [PATCH v23 3/3] test: add gh-6036-qsync-order test
From: Serge Petrenko via Tarantool-patches
Reply-To: Serge Petrenko

15.10.2021 00:56, Cyrill Gorcunov writes:
> To test that promotion requests are handled only when appropriate
> write to WAL completes, because we update memory data before the
> write finishes.
>
> Note that without the patch "qsync: order access to the limbo terms"
> this test fires the assertion
>
>> tarantool: src/box/txn_limbo.c:481: txn_limbo_read_rollback: Assertion `e->txn->signature >= 0' failed.
> Part-of #6036
>
> Signed-off-by: Cyrill Gorcunov
> ---
>  test/replication/gh-6036-qsync-order.result   | 190 ++++++++++++++++++
>  test/replication/gh-6036-qsync-order.test.lua |  93 +++++++++
>  test/replication/suite.cfg                    |   1 +
>  test/replication/suite.ini                    |   2 +-
>  4 files changed, 285 insertions(+), 1 deletion(-)
>  create mode 100644 test/replication/gh-6036-qsync-order.result
>  create mode 100644 test/replication/gh-6036-qsync-order.test.lua
>
> diff --git a/test/replication/gh-6036-qsync-order.result b/test/replication/gh-6036-qsync-order.result
> new file mode 100644
> index 000000000..1c16e19b4
> --- /dev/null
> +++ b/test/replication/gh-6036-qsync-order.result
> @@ -0,0 +1,190 @@
> +-- test-run result file version 2
> +--
> +-- gh-6036: verify that terms are locked when we're inside journal
> +-- write routine, because parallel appliers may ignore the fact that
> +-- the term is updated already but not yet written leading to data
> +-- inconsistency.
> +--
> +test_run = require('test_run').new()
> + | ---
> + | ...
> +
> +SERVERS={"election_replica1", "election_replica2", "election_replica3"}
> + | ---
> + | ...
> +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'})
> + | ---
> + | ...
> +test_run:wait_fullmesh(SERVERS)
> + | ---
> + | ...
> +
> +--
> +-- Create a synchro space on the master node and make
> +-- sure the write processed just fine.
> +test_run:switch("election_replica1")
> + | ---
> + | - true
> + | ...
> +box.ctl.promote()
> + | ---
> + | ...
> +s = box.schema.create_space('test', {is_sync = true})
> + | ---
> + | ...
> +_ = s:create_index('pk')
> + | ---
> + | ...
> +s:insert{1}
> + | ---
> + | - [1]
> + | ...
> +
> +test_run:wait_lsn('election_replica2', 'election_replica1')
> + | ---
> + | ...
> +test_run:wait_lsn('election_replica3', 'election_replica1')
> + | ---
> + | ...
> +
> +--
> +-- Drop connection between election_replica1 and election_replica2.
> +box.cfg({                                  \
> +    replication = {                        \
> +        "unix/:./election_replica1.sock",  \
> +        "unix/:./election_replica3.sock",  \
> +    },                                     \
> +})
> + | ---
> + | ...
> +
> +--
> +-- Drop connection between election_replica2 and election_replica1.
> +test_run:switch("election_replica2")
> + | ---
> + | - true
> + | ...
> +box.cfg({                                  \
> +    replication = {                        \
> +        "unix/:./election_replica2.sock",  \
> +        "unix/:./election_replica3.sock",  \
> +    },                                     \
> +})
> + | ---
> + | ...
> +
> +--
> +-- Here we have the following scheme
> +--
> +--        election_replica3 (will be delayed)
> +--       /                  \
> +-- election_replica1    election_replica2
> +
> +--
> +-- Initiate disk delay in a bit tricky way: the next write will
> +-- fall into forever sleep.
> +test_run:switch("election_replica3")
> + | ---
> + | - true
> + | ...
> +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT")
> + | ---
> + | ...
> +box.error.injection.set("ERRINJ_WAL_DELAY", true)
> + | ---
> + | - ok
> + | ...
> +--
> +-- Make election_replica2 been a leader and start writting data,
> +-- the PROMOTE request get queued on election_replica3 and not
> +-- yet processed, same time INSERT won't complete either
> +-- waiting for PROMOTE completion first. Note that we
> +-- enter election_replica3 as well just to be sure the PROMOTE
> +-- reached it.
> +test_run:switch("election_replica2")
> + | ---
> + | - true
> + | ...
> +box.ctl.promote()
> + | ---
> + | ...
> +test_run:switch("election_replica3")
> + | ---
> + | - true
> + | ...
> +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end)
> + | ---
> + | - true
> + | ...
> +test_run:switch("election_replica2")
> + | ---
> + | - true
> + | ...
> +box.space.test:insert{2}
> + | ---
> + | - [2]
> + | ...
> +
> +--
> +-- The election_replica1 node has no clue that there is a new leader
> +-- and continue writing data with obsolete term. Since election_replica3
> +-- is delayed now the INSERT won't proceed yet but get queued.
> +test_run:switch("election_replica1")
> + | ---
> + | - true
> + | ...
> +box.space.test:insert{3}
> + | ---
> + | - [3]
> + | ...
> +
> +--
> +-- Finally enable election_replica3 back. Make sure the data from new election_replica2
> +-- leader get writing while old leader's data ignored.
> +test_run:switch("election_replica3")
> + | ---
> + | - true
> + | ...

Hi, and thanks for the fixes! I have only one comment left.

Actually, you do need to count writes here. The wait_cond for
ERRINJ_WAL_WRITE_COUNT == write_cnt + 3 is needed to make sure you
receive (and thus try to process) insert{3} **before** the replica is
re-enabled. Otherwise we can't be sure that the test is correct: the
final select may simply run before insert{3} has reached the replica at
all. (A rough sketch of what I mean is at the end of this mail.)

> +box.error.injection.set('ERRINJ_WAL_DELAY', false)
> + | ---
> + | - ok
> + | ...
> +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end)
> + | ---
> + | - true
> + | ...
> +box.space.test:select{}
> + | ---
> + | - - [1]
> + |   - [2]
> + | ...
> +
> +test_run:switch("default")
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server election_replica1')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server election_replica2')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server election_replica3')
> + | ---
> + | - true
> + | ...
> +
> +test_run:cmd('delete server election_replica1')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('delete server election_replica2')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('delete server election_replica3')
> + | ---
> + | - true
> + | ...
> diff --git a/test/replication/gh-6036-qsync-order.test.lua b/test/replication/gh-6036-qsync-order.test.lua
> new file mode 100644
> index 000000000..5fcd316d8
> --- /dev/null
> +++ b/test/replication/gh-6036-qsync-order.test.lua
> @@ -0,0 +1,93 @@
> +--
> +-- gh-6036: verify that terms are locked when we're inside journal
> +-- write routine, because parallel appliers may ignore the fact that
> +-- the term is updated already but not yet written leading to data
> +-- inconsistency.
> +--
> +test_run = require('test_run').new()
> +
> +SERVERS={"election_replica1", "election_replica2", "election_replica3"}
> +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'})
> +test_run:wait_fullmesh(SERVERS)
> +
> +--
> +-- Create a synchro space on the master node and make
> +-- sure the write processed just fine.
> +test_run:switch("election_replica1")
> +box.ctl.promote()
> +s = box.schema.create_space('test', {is_sync = true})
> +_ = s:create_index('pk')
> +s:insert{1}
> +
> +test_run:wait_lsn('election_replica2', 'election_replica1')
> +test_run:wait_lsn('election_replica3', 'election_replica1')
> +
> +--
> +-- Drop connection between election_replica1 and election_replica2.
> +box.cfg({                                  \
> +    replication = {                        \
> +        "unix/:./election_replica1.sock",  \
> +        "unix/:./election_replica3.sock",  \
> +    },                                     \
> +})
> +
> +--
> +-- Drop connection between election_replica2 and election_replica1.
> +test_run:switch("election_replica2")
> +box.cfg({                                  \
> +    replication = {                        \
> +        "unix/:./election_replica2.sock",  \
> +        "unix/:./election_replica3.sock",  \
> +    },                                     \
> +})
> +
> +--
> +-- Here we have the following scheme
> +--
> +--        election_replica3 (will be delayed)
> +--       /                  \
> +-- election_replica1    election_replica2
> +
> +--
> +-- Initiate disk delay in a bit tricky way: the next write will
> +-- fall into forever sleep.
> +test_run:switch("election_replica3")
> +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT")
> +box.error.injection.set("ERRINJ_WAL_DELAY", true)
> +--
> +-- Make election_replica2 been a leader and start writting data,
> +-- the PROMOTE request get queued on election_replica3 and not
> +-- yet processed, same time INSERT won't complete either
> +-- waiting for PROMOTE completion first. Note that we
> +-- enter election_replica3 as well just to be sure the PROMOTE
> +-- reached it.
> +test_run:switch("election_replica2")
> +box.ctl.promote()
> +test_run:switch("election_replica3")
> +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end)
> +test_run:switch("election_replica2")
> +box.space.test:insert{2}
> +
> +--
> +-- The election_replica1 node has no clue that there is a new leader
> +-- and continue writing data with obsolete term. Since election_replica3
> +-- is delayed now the INSERT won't proceed yet but get queued.
> +test_run:switch("election_replica1")
> +box.space.test:insert{3}
> +
> +--
> +-- Finally enable election_replica3 back. Make sure the data from new election_replica2
> +-- leader get writing while old leader's data ignored.
> +test_run:switch("election_replica3")
> +box.error.injection.set('ERRINJ_WAL_DELAY', false)
> +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end)
> +box.space.test:select{}
> +
> +test_run:switch("default")
> +test_run:cmd('stop server election_replica1')
> +test_run:cmd('stop server election_replica2')
> +test_run:cmd('stop server election_replica3')
> +
> +test_run:cmd('delete server election_replica1')
> +test_run:cmd('delete server election_replica2')
> +test_run:cmd('delete server election_replica3')
> diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg
> index 3eee0803c..ed09b2087 100644
> --- a/test/replication/suite.cfg
> +++ b/test/replication/suite.cfg
> @@ -59,6 +59,7 @@
>      "gh-6094-rs-uuid-mismatch.test.lua": {},
>      "gh-6127-election-join-new.test.lua": {},
>      "gh-6035-applier-filter.test.lua": {},
> +    "gh-6036-qsync-order.test.lua": {},
>      "election-candidate-promote.test.lua": {},
>      "*": {
>          "memtx": {"engine": "memtx"},
> diff --git a/test/replication/suite.ini b/test/replication/suite.ini
> index 77eb95f49..080e4fbf4 100644
> --- a/test/replication/suite.ini
> +++ b/test/replication/suite.ini
> @@ -3,7 +3,7 @@ core = tarantool
>  script = master.lua
>  description = tarantool/box, replication
>  disabled = consistent.test.lua
> -release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua
> +release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua gh-6036-qsync-order.test.lua
>  config = suite.cfg
>  lua_libs = lua/fast_replica.lua lua/rlimit.lua
>  use_unix_sockets = True

-- 
Serge Petrenko
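A rough sketch of the check I have in mind, for the tail of
gh-6036-qsync-order.test.lua (untested; it assumes write_cnt still holds
the value saved before ERRINJ_WAL_DELAY was set, and that PROMOTE,
insert{2} and insert{3} are the three writes expected to pile up on
election_replica3):

test_run:switch("election_replica3")
-- Each attempted write bumps ERRINJ_WAL_WRITE_COUNT even while
-- ERRINJ_WAL_DELAY keeps it parked, so this proves insert{3} has reached
-- the replica before the delay is released. ">=" rather than "==" only
-- to tolerate possible extra service writes.
test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") >= write_cnt + 3 end)
box.error.injection.set('ERRINJ_WAL_DELAY', false)
test_run:wait_cond(function() return box.space.test:get{2} ~= nil end)
box.space.test:select{}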