To: Cyrill Gorcunov, tml
Cc: Vladislav Shpilevoy
References: <20211008175809.349501-1-gorcunov@gmail.com>
 <20211008175809.349501-4-gorcunov@gmail.com>
Message-ID: <96c8d51f-3861-d323-b2f2-136f64fdcfd8@tarantool.org>
Date: Mon, 11 Oct 2021 16:38:43 +0300
In-Reply-To: <20211008175809.349501-4-gorcunov@gmail.com>
Subject: Re: [Tarantool-patches] [PATCH v21 3/3] test: add
 gh-6036-qsync-order test
From: Serge Petrenko via Tarantool-patches
Reply-To: Serge Petrenko

08.10.2021 20:58, Cyrill Gorcunov wrote:

Thanks for the fixes! There are some minor comments left.
Please address them, and after that - LGTM.

> To test that promotion requests are handled only when appropriate
> write to WAL completes, because we update memory data before the
> write finishes.
>
> Note that without the patch this test fires assertion

Please mention the patch you're talking about. I know it's in the
previous commit, but still. Something like: without the patch
"qsync: order access to the limbo terms" this test fires an assertion.

>
>> tarantool: src/box/txn_limbo.c:481: txn_limbo_read_rollback: Assertion `e->txn->signature >= 0' failed.
>
> Part-of #6036
>
> Signed-off-by: Cyrill Gorcunov
> ---
>  test/replication/gh-6036-qsync-order.result   | 204 ++++++++++++++++++
>  test/replication/gh-6036-qsync-order.test.lua |  97 +++++++++
>  test/replication/suite.cfg                    |   1 +
>  test/replication/suite.ini                    |   2 +-
>  4 files changed, 303 insertions(+), 1 deletion(-)
>  create mode 100644 test/replication/gh-6036-qsync-order.result
>  create mode 100644 test/replication/gh-6036-qsync-order.test.lua
>
> diff --git a/test/replication/gh-6036-qsync-order.result b/test/replication/gh-6036-qsync-order.result
> new file mode 100644
> index 000000000..eb3e808cb
> --- /dev/null
> +++ b/test/replication/gh-6036-qsync-order.result
> @@ -0,0 +1,204 @@
> +-- test-run result file version 2
> +--
> +-- gh-6036: verify that terms are locked when we're inside journal
> +-- write routine, because parallel appliers may ignore the fact that
> +-- the term is updated already but not yet written leading to data
> +-- inconsistency.
> +--
> +test_run = require('test_run').new()
> + | ---
> + | ...
> +
> +SERVERS={"election_replica1", "election_replica2", "election_replica3"}
> + | ---
> + | ...
> +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'})
> + | ---
> + | ...
> +test_run:wait_fullmesh(SERVERS)
> + | ---
> + | ...
> +
> +--
> +-- Create a synchro space on the master node and make
> +-- sure the write processed just fine.
> +test_run:switch("election_replica1")
> + | ---
> + | - true
> + | ...
> +box.ctl.promote()
> + | ---
> + | ...
> +s = box.schema.create_space('test', {is_sync = true})
> + | ---
> + | ...
> +_ = s:create_index('pk')
> + | ---
> + | ...
> +s:insert{1}
> + | ---
> + | - [1]
> + | ...
> +
> +test_run:switch("election_replica2")
> + | ---
> + | - true
> + | ...
> +test_run:wait_lsn('election_replica2', 'election_replica1')
> + | ---
> + | ...
> +
> +test_run:switch("election_replica3")
> + | ---
> + | - true
> + | ...
> +test_run:wait_lsn('election_replica3', 'election_replica1')
> + | ---
> + | ...
> +
> +--
> +-- Drop connection between election_replica1 and election_replica2.
> +test_run:switch("election_replica1")
> + | ---
> + | - true
> + | ...
> +box.cfg({ \
> +    replication = { \
> +        "unix/:./election_replica1.sock", \
> +        "unix/:./election_replica3.sock", \
> +    }, \
> +})
> + | ---
> + | ...
> +--
> +-- Drop connection between election_replica2 and election_replica1.
> +test_run:switch("election_replica2")
> + | ---
> + | - true
> + | ...
> +test_run:wait_cond(function() return box.space.test:get{1} ~= nil end)
> + | ---
> + | - true
> + | ...

This particular check is extraneous. You've already done a wait_lsn above.

> +box.cfg({ \
> +    replication = { \
> +        "unix/:./election_replica2.sock", \
> +        "unix/:./election_replica3.sock", \
> +    }, \
> +})
> + | ---
> + | ...
> +
> +--
> +-- Here we have the following scheme
> +--
> +--        election_replica3 (will be delayed)
> +--          /                \
> +-- election_replica1    election_replica2
> +
> +--
> +-- Initiate disk delay in a bit tricky way: the next write will
> +-- fall into forever sleep.
> +test_run:switch("election_replica3")
> + | ---
> + | - true
> + | ...
> +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT")
> + | ---
> + | ...
> +--
> +-- Make election_replica2 been a leader and start writting data,
> +-- the PROMOTE request get queued on election_replica3 and not
> +-- yet processed, same time INSERT won't complete either
> +-- waiting for PROMOTE completion first. Note that we
> +-- enter election_replica3 as well just to be sure the PROMOTE
> +-- reached it.
> +test_run:switch("election_replica2")
> + | ---
> + | - true
> + | ...
> +box.ctl.promote()
> + | ---
> + | ...
> +test_run:switch("election_replica3")
> + | ---
> + | - true
> + | ...
> +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end)
> + | ---
> + | - true
> + | ...
> +box.error.injection.set("ERRINJ_WAL_DELAY", true)
> + | ---
> + | - ok

This errinj should be set before 'election_replica2' writes promote, like it
used to be in previous patch versions.
Let's set it right where you get 'ERRINJ_WAL_WRITE_COUNT'.
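For illustration, the fragment with that reordering applied could look
roughly like this. It only reuses the calls already present in the test and
assumes, as the comment above implies about earlier patch versions, that
ERRINJ_WAL_WRITE_COUNT keeps growing while ERRINJ_WAL_DELAY is armed:

    -- Remember the current WAL write count and arm the delay on
    -- election_replica3 before the new leader issues its PROMOTE.
    -- (Assumes the write counter is bumped even for delayed writes.)
    test_run:switch("election_replica3")
    write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT")
    box.error.injection.set("ERRINJ_WAL_DELAY", true)

    -- Make election_replica2 the leader; its PROMOTE is relayed to
    -- election_replica3 but gets stuck in the delayed WAL.
    test_run:switch("election_replica2")
    box.ctl.promote()

    -- Make sure the PROMOTE has at least reached election_replica3's WAL.
    test_run:switch("election_replica3")
    test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end)

This is only a sketch of the suggested ordering, but it guarantees the delay
is in place before the PROMOTE has any chance to hit the disk.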
> + | ...
> +test_run:switch("election_replica2")
> + | ---
> + | - true
> + | ...
> +_ = require('fiber').create(function() box.space.test:insert{2} end)
> + | ---
> + | ...
> +
> +--
> +-- The election_replica1 node has no clue that there is a new leader
> +-- and continue writing data with obsolete term. Since election_replica3
> +-- is delayed now the INSERT won't proceed yet but get queued.
> +test_run:switch("election_replica1")
> + | ---
> + | - true
> + | ...
> +_ = require('fiber').create(function() box.space.test:insert{3} end)
> + | ---
> + | ...
> +
> +--
> +-- Finally enable election_replica3 back. Make sure the data from new election_replica2
> +-- leader get writing while old leader's data ignored.
> +test_run:switch("election_replica3")
> + | ---
> + | - true
> + | ...
> +box.error.injection.set('ERRINJ_WAL_DELAY', false)
> + | ---
> + | - ok
> + | ...
> +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end)
> + | ---
> + | - true
> + | ...
> +box.space.test:select{}
> + | ---
> + | - - [1]
> + |   - [2]
> + | ...
> +
> +test_run:switch("default")
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server election_replica1')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server election_replica2')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server election_replica3')
> + | ---
> + | - true
> + | ...
> +
> +test_run:cmd('delete server election_replica1')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('delete server election_replica2')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('delete server election_replica3')
> + | ---
> + | - true
> + | ...
> diff --git a/test/replication/gh-6036-qsync-order.test.lua b/test/replication/gh-6036-qsync-order.test.lua
> new file mode 100644
> index 000000000..b8df170b8
> --- /dev/null
> +++ b/test/replication/gh-6036-qsync-order.test.lua
> @@ -0,0 +1,97 @@
> +--
> +-- gh-6036: verify that terms are locked when we're inside journal
> +-- write routine, because parallel appliers may ignore the fact that
> +-- the term is updated already but not yet written leading to data
> +-- inconsistency.
> +--
> +test_run = require('test_run').new()
> +
> +SERVERS={"election_replica1", "election_replica2", "election_replica3"}
> +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'})
> +test_run:wait_fullmesh(SERVERS)
> +
> +--
> +-- Create a synchro space on the master node and make
> +-- sure the write processed just fine.
> +test_run:switch("election_replica1")
> +box.ctl.promote()
> +s = box.schema.create_space('test', {is_sync = true})
> +_ = s:create_index('pk')
> +s:insert{1}
> +
> +test_run:switch("election_replica2")
> +test_run:wait_lsn('election_replica2', 'election_replica1')
> +
> +test_run:switch("election_replica3")
> +test_run:wait_lsn('election_replica3', 'election_replica1')
> +
> +--
> +-- Drop connection between election_replica1 and election_replica2.
> +test_run:switch("election_replica1")
> +box.cfg({ \
> +    replication = { \
> +        "unix/:./election_replica1.sock", \
> +        "unix/:./election_replica3.sock", \
> +    }, \
> +})
> +--
> +-- Drop connection between election_replica2 and election_replica1.
> +test_run:switch("election_replica2")
> +test_run:wait_cond(function() return box.space.test:get{1} ~= nil end)
> +box.cfg({ \
> +    replication = { \
> +        "unix/:./election_replica2.sock", \
> +        "unix/:./election_replica3.sock", \
> +    }, \
> +})
> +
> +--
> +-- Here we have the following scheme
> +--
> +--        election_replica3 (will be delayed)
> +--          /                \
> +-- election_replica1    election_replica2
> +
> +--
> +-- Initiate disk delay in a bit tricky way: the next write will
> +-- fall into forever sleep.
> +test_run:switch("election_replica3")
> +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT")
> +--
> +-- Make election_replica2 been a leader and start writting data,
> +-- the PROMOTE request get queued on election_replica3 and not
> +-- yet processed, same time INSERT won't complete either
> +-- waiting for PROMOTE completion first. Note that we
> +-- enter election_replica3 as well just to be sure the PROMOTE
> +-- reached it.
> +test_run:switch("election_replica2")
> +box.ctl.promote()
> +test_run:switch("election_replica3")
> +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end)
> +box.error.injection.set("ERRINJ_WAL_DELAY", true)
> +test_run:switch("election_replica2")
> +_ = require('fiber').create(function() box.space.test:insert{2} end)
> +
> +--
> +-- The election_replica1 node has no clue that there is a new leader
> +-- and continue writing data with obsolete term. Since election_replica3
> +-- is delayed now the INSERT won't proceed yet but get queued.
> +test_run:switch("election_replica1")
> +_ = require('fiber').create(function() box.space.test:insert{3} end)
> +
> +--
> +-- Finally enable election_replica3 back. Make sure the data from new election_replica2
> +-- leader get writing while old leader's data ignored.
> +test_run:switch("election_replica3")
> +box.error.injection.set('ERRINJ_WAL_DELAY', false)
> +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end)
> +box.space.test:select{}
> +
> +test_run:switch("default")
> +test_run:cmd('stop server election_replica1')
> +test_run:cmd('stop server election_replica2')
> +test_run:cmd('stop server election_replica3')
> +
> +test_run:cmd('delete server election_replica1')
> +test_run:cmd('delete server election_replica2')
> +test_run:cmd('delete server election_replica3')
> diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg
> index 3eee0803c..ed09b2087 100644
> --- a/test/replication/suite.cfg
> +++ b/test/replication/suite.cfg
> @@ -59,6 +59,7 @@
>      "gh-6094-rs-uuid-mismatch.test.lua": {},
>      "gh-6127-election-join-new.test.lua": {},
>      "gh-6035-applier-filter.test.lua": {},
> +    "gh-6036-qsync-order.test.lua": {},
>      "election-candidate-promote.test.lua": {},
>      "*": {
>          "memtx": {"engine": "memtx"},
> diff --git a/test/replication/suite.ini b/test/replication/suite.ini
> index 77eb95f49..080e4fbf4 100644
> --- a/test/replication/suite.ini
> +++ b/test/replication/suite.ini
> @@ -3,7 +3,7 @@ core = tarantool
>  script = master.lua
>  description = tarantool/box, replication
>  disabled = consistent.test.lua
> -release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua
> +release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua gh-6036-qsync-order.test.lua
>  config = suite.cfg
>  lua_libs = lua/fast_replica.lua lua/rlimit.lua
>  use_unix_sockets = True

--
Serge Petrenko