From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <8c04975a-ec56-e1bd-e4d4-3066e6e00439@tarantool.org>
Date: Mon, 28 Feb 2022 11:13:06 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0) Gecko/20100101 Thunderbird/91.6.1
Content-Language: en-GB
To: Cyrill Gorcunov, tml
References: <20220224201841.412565-1-gorcunov@gmail.com>
 <20220224201841.412565-4-gorcunov@gmail.com>
In-Reply-To: <20220224201841.412565-4-gorcunov@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Subject: Re: [Tarantool-patches] [PATCH v30 3/3] test: add gh-6036-qsync-order test
From: Serge Petrenko via Tarantool-patches
Reply-To: Serge Petrenko
Cc: Vladislav Shpilevoy
Errors-To: tarantool-patches-bounces@dev.tarantool.org
Sender: "Tarantool-patches"

24.02.2022 23:18, Cyrill Gorcunov wrote:
> To test that promotion requests are handled only when appropriate
> write to WAL completes, because we update memory data before the
> write finishes.
>
> Part-of #6036
>
> Signed-off-by: Cyrill Gorcunov

Thanks for the patch and the fixes overall! The test finally works
fine on my machine.

I've experienced some flakiness, but I was able to fix it with the
following diff. Please consider:

======================
diff --git a/test/replication-luatest/gh_6036_qsync_order_test.lua b/test/replication-luatest/gh_6036_qsync_order_test.lua
index 95ed3a517..d71739dcc 100644
--- a/test/replication-luatest/gh_6036_qsync_order_test.lua
+++ b/test/replication-luatest/gh_6036_qsync_order_test.lua
@@ -142,10 +142,19 @@ g.test_qsync_order = function(cg)
     cg.r2:wait_vclock(vclock)
     cg.r3:wait_vclock(vclock)

+    -- Drop connection between r1 and the rest of the cluster.
+    -- Otherwise r1 might become Raft follower before attempting insert{4}.
+    cg.r1:exec(function() box.cfg{replication=""} end)
     cg.r3:exec(function()
         box.error.injection.set('ERRINJ_WAL_DELAY_COUNTDOWN', 2)
         require('fiber').create(function() box.ctl.promote() end)
     end)
+    t.helpers.retrying({}, function()
+        t.assert(cg.r3:exec(function()
+            return box.info.synchro.queue.latched
+        end))
+    end)
+    t.assert(cg.r1:exec(function() return box.info.ro == false end))
     cg.r1:eval("box.space.test:insert{4}")
     cg.r3:exec(function()
         assert(box.info.synchro.queue.latched == true)
=======================

Also please address a couple of style-related comments below:

> ---
>  .../gh_6036_qsync_order_test.lua              | 157 ++++++++++++++++++
>  test/replication-luatest/suite.ini            |   1 +
>  2 files changed, 158 insertions(+)
>  create mode 100644 test/replication-luatest/gh_6036_qsync_order_test.lua
>
> diff --git a/test/replication-luatest/gh_6036_qsync_order_test.lua b/test/replication-luatest/gh_6036_qsync_order_test.lua
> new file mode 100644
> index 000000000..95ed3a517
> --- /dev/null
> +++ b/test/replication-luatest/gh_6036_qsync_order_test.lua
> @@ -0,0 +1,157 @@
> +local t = require('luatest')
> +local cluster = require('test.luatest_helpers.cluster')
> +local server = require('test.luatest_helpers.server')
> +local fiber = require('fiber')
> +
> +local g = t.group('gh-6036')
> +
> +g.before_each(function(cg)
> +    cg.cluster = cluster:new({})
> +
> +    local box_cfg = {
> +        replication = {
> +            server.build_instance_uri('r1'),
> +            server.build_instance_uri('r2'),
> +            server.build_instance_uri('r3'),
> +        },
> +        replication_timeout = 0.1,
> +        replication_connect_quorum = 1,
> +        election_mode = 'manual',
> +        election_timeout = 0.1,
> +        replication_synchro_quorum = 1,
> +        replication_synchro_timeout = 0.1,
> +        log_level = 6,
> +    }
> +
> +    cg.r1 = cg.cluster:build_server({ alias = 'r1', box_cfg = box_cfg })
> +    cg.r2 = cg.cluster:build_server({ alias = 'r2', box_cfg = box_cfg })
> +    cg.r3 = cg.cluster:build_server({ alias = 'r3', box_cfg = box_cfg })
> +
> +    cg.cluster:add_server(cg.r1)
> +    cg.cluster:add_server(cg.r2)
> +    cg.cluster:add_server(cg.r3)
> +    cg.cluster:start()
> +end)
> +
> +g.after_each(function(cg)
> +    cg.cluster:drop()
> +    cg.cluster.servers = nil
> +end)
> +
> +g.test_qsync_order = function(cg)
> +    cg.cluster:wait_fullmesh()
> +
> +    --
> +    -- Create a synchro space on the r1 node and make
> +    -- sure the write processed just fine.
> +    cg.r1:exec(function()
> +        box.ctl.promote()
> +        box.ctl.wait_rw()
> +        local s = box.schema.create_space('test', {is_sync = true})
> +        s:create_index('pk')
> +        s:insert{1}
> +    end)
> +
> +    local vclock = cg.r1:get_vclock()
> +    vclock[0] = nil
> +    cg.r2:wait_vclock(vclock)
> +    cg.r3:wait_vclock(vclock)
> +
> +    t.assert_equals(cg.r1:eval("return box.space.test:select()"), {{1}})
> +    t.assert_equals(cg.r2:eval("return box.space.test:select()"), {{1}})
> +    t.assert_equals(cg.r3:eval("return box.space.test:select()"), {{1}})
> +
> +    local function update_replication(...)
> +        return (box.cfg{ replication = { ... } })
> +    end
> +
> +    --
> +    -- Drop connection between r1 and r2.
> +    cg.r1:exec(update_replication, {
> +        server.build_instance_uri("r1"),
> +        server.build_instance_uri("r3"),
> +    })
> +
> +    --
> +    -- Drop connection between r2 and r1.
> +    cg.r2:exec(update_replication, {
> +        server.build_instance_uri("r2"),
> +        server.build_instance_uri("r3"),
> +    })
> +
> +    --
> +    -- Here we have the following scheme
> +    --
> +    --        r3 (WAL delay)
> +    --       /   \
> +    --      r1    r2
> +    --
> +
> +    --
> +    -- Initiate disk delay in a bit tricky way: the next write will
> +    -- fall into forever sleep.
> +    cg.r3:eval("box.error.injection.set('ERRINJ_WAL_DELAY', true)")

1. Sometimes you use 'eval' and sometimes you use 'exec', and I don't see
   a pattern behind that. Please check every case with 'eval' and replace
   it with 'exec' when possible.
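
To illustrate the point, here is a sketch of the 'eval'-to-'exec'
conversion for one of the calls above (same injection name as in the
quoted patch; this snippet is not part of the original submission and
needs a running luatest cluster to actually execute):

    -- String form: the chunk is parsed and run on the remote instance,
    -- so mistakes surface only at runtime on the server.
    cg.r3:eval("box.error.injection.set('ERRINJ_WAL_DELAY', true)")

    -- Function form: the body is compiled together with the rest of the
    -- test file, so syntax errors are caught up front, which is why
    -- 'exec' is preferred when no string interpolation is needed.
    cg.r3:exec(function()
        box.error.injection.set('ERRINJ_WAL_DELAY', true)
    end)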
> +
> +    --
> +    -- Make r2 been a leader and start writting data, the PROMOTE
> +    -- request get queued on r3 and not yet processed, same time
> +    -- the INSERT won't complete either waiting for the PROMOTE
> +    -- completion first. Note that we enter r3 as well just to be
> +    -- sure the PROMOTE has reached it via queue state test.
> +    cg.r2:exec(function()
> +        box.ctl.promote()
> +        box.ctl.wait_rw()
> +    end)
> +    t.helpers.retrying({}, function()
> +        assert(cg.r3:exec(function()
> +            return box.info.synchro.queue.latched == true
> +        end))

2. Here you use a plain 'assert' instead of 't.assert'. Please avoid
   plain assertions in luatest tests.

> +    end)
> +    cg.r2:eval("box.space.test:insert{2}")

3. Like I already mentioned above, could you wrap that into an 'exec'
   instead?

> +
> +    --
> +    -- The r1 node has no clue that there is a new leader and continue
> +    -- writing data with obsolete term. Since r3 is delayed now
> +    -- the INSERT won't proceed yet but get queued.
> +    cg.r1:eval("box.space.test:insert{3}")
> +
> +    --
> +    -- Finally enable r3 back. Make sure the data from new r2 leader get
> +    -- writing while old leader's data ignored.
> +    cg.r3:eval("box.error.injection.set('ERRINJ_WAL_DELAY', false)")
> +    t.helpers.retrying({}, function()
> +        assert(cg.r3:exec(function()
> +            return box.space.test:get{2} ~= nil
> +        end))
> +    end)
> +
> +    t.assert_equals(cg.r3:eval("return box.space.test:select()"), {{1},{2}})
> +

4. You group two tests in one function. Better extract the test below
   into a separate function, e.g. g.test_promote_order. First of all,
   you may get rid of the 3rd instance in this test (you only need 2 of
   them); secondly, you now enter the test with a dirty config left over
   from the previous test: r1 <-> r2 <-> r3 (no connection between r1
   and r3).

> +    --
> +    -- Make sure that while we're processing PROMOTE no other records
> +    -- get sneaked in via applier code from other replicas. For this
> +    -- sake initiate voting and stop inside wal thread just before
> +    -- PROMOTE get written. Another replica sends us new record and
> +    -- it should be dropped.
> +    cg.r1:exec(function()
> +        box.ctl.promote()
> +        box.ctl.wait_rw()
> +    end)
> +    vclock = cg.r1:get_vclock()
> +    vclock[0] = nil
> +    cg.r2:wait_vclock(vclock)
> +    cg.r3:wait_vclock(vclock)
> +
> +    cg.r3:exec(function()
> +        box.error.injection.set('ERRINJ_WAL_DELAY_COUNTDOWN', 2)
> +        require('fiber').create(function() box.ctl.promote() end)
> +    end)
> +    cg.r1:eval("box.space.test:insert{4}")
> +    cg.r3:exec(function()
> +        assert(box.info.synchro.queue.latched == true)
> +        box.error.injection.set('ERRINJ_WAL_DELAY', false)
> +        box.ctl.wait_rw()
> +    end)
> +
> +    t.assert_equals(cg.r3:eval("return box.space.test:select()"), {{1},{2}})
> +end
> diff --git a/test/replication-luatest/suite.ini b/test/replication-luatest/suite.ini
> index 374f1b87a..07ec93a52 100644
> --- a/test/replication-luatest/suite.ini
> +++ b/test/replication-luatest/suite.ini
> @@ -2,3 +2,4 @@
>  core = luatest
>  description = replication luatests
>  is_parallel = True
> +release_disabled = gh_6036_qsync_order_test.lua

--
Serge Petrenko