To: Vladislav Shpilevoy, tarantool-patches@dev.tarantool.org, yaroslav.dynnikov@tarantool.org
Message-ID: <4436bd47-7656-340d-31c3-de4146ab6aa6@tarantool.org>
Date: Wed, 10 Feb 2021 11:57:56 +0300
Subject: Re: [Tarantool-patches] [PATCH 3/9] test: introduce a helper to wait for bucket GC
From: Oleg Babin via Tarantool-patches
Reply-To: Oleg Babin

Hi! Thanks for your patch! LGTM, but I have one question.

Maybe it's reasonable to add a timeout to this function? AFAIK, test-run
terminates a test after 120 seconds of inactivity, which seems too long for
such a simple case. But anyway, it's up to you.

On 10/02/2021 02:46, Vladislav Shpilevoy wrote:
> In the tests, to wait for bucket deletion by GC, it was necessary
> to have a long loop expression which checks the _bucket space and
> wakes up the GC fiber if the bucket is not deleted yet.
>
> Soon the GC wakeup won't be necessary, as the GC algorithm will
> become reactive instead of proactive.
>
> In order not to remove the wakeup from all places in the main
> patch, and to simplify the waiting, the patch introduces a function
> wait_bucket_is_collected().
>
> The reactive GC will delete the GC wakeup from this function and
> all the tests will still pass in time.
> ---
>  test/lua_libs/storage_template.lua        | 10 ++++++++++
>  test/rebalancer/bucket_ref.result         |  7 ++-----
>  test/rebalancer/bucket_ref.test.lua       |  5 ++---
>  test/rebalancer/errinj.result             | 13 +++++--------
>  test/rebalancer/errinj.test.lua           |  7 +++----
>  test/rebalancer/rebalancer.result         |  5 +----
>  test/rebalancer/rebalancer.test.lua       |  3 +--
>  test/rebalancer/receiving_bucket.result   |  2 +-
>  test/rebalancer/receiving_bucket.test.lua |  2 +-
>  test/reload_evolution/storage.result      |  5 +----
>  test/reload_evolution/storage.test.lua    |  3 +--
>  11 files changed, 28 insertions(+), 34 deletions(-)
>
> diff --git a/test/lua_libs/storage_template.lua b/test/lua_libs/storage_template.lua
> index 84e4180..21409bd 100644
> --- a/test/lua_libs/storage_template.lua
> +++ b/test/lua_libs/storage_template.lua
> @@ -165,3 +165,13 @@ function wait_rebalancer_state(state, test_run)
>          vshard.storage.rebalancer_wakeup()
>      end
>  end
> +
> +function wait_bucket_is_collected(id)
> +    test_run:wait_cond(function()
> +        if not box.space._bucket:get{id} then
> +            return true
> +        end
> +        vshard.storage.recovery_wakeup()
> +        vshard.storage.garbage_collector_wakeup()
> +    end)
> +end
> diff --git a/test/rebalancer/bucket_ref.result b/test/rebalancer/bucket_ref.result
> index b66e449..b8fc7ff 100644
> --- a/test/rebalancer/bucket_ref.result
> +++ b/test/rebalancer/bucket_ref.result
> @@ -243,7 +243,7 @@ vshard.storage.buckets_info(1)
>      destination:
>      id: 1
>  ...
> -while box.space._bucket:get{1} do vshard.storage.garbage_collector_wakeup() fiber.sleep(0.01) end
> +wait_bucket_is_collected(1)
>  ---
>  ...
>  _ = test_run:switch('box_2_a')
> @@ -292,10 +292,7 @@ vshard.storage.buckets_info(1)
>  finish_refs = true
>  ---
>  ...
> -while vshard.storage.buckets_info(1)[1].rw_lock do fiber.sleep(0.01) end
> ----
> -...
> -while box.space._bucket:get{1} do fiber.sleep(0.01) end
> +wait_bucket_is_collected(1)
>  ---
>  ...
>  _ = test_run:switch('box_1_a')
> diff --git a/test/rebalancer/bucket_ref.test.lua b/test/rebalancer/bucket_ref.test.lua
> index 49ba583..213ced3 100644
> --- a/test/rebalancer/bucket_ref.test.lua
> +++ b/test/rebalancer/bucket_ref.test.lua
> @@ -73,7 +73,7 @@ vshard.storage.bucket_refro(1)
>  finish_refs = true
>  while f1:status() ~= 'dead' do fiber.sleep(0.01) end
>  vshard.storage.buckets_info(1)
> -while box.space._bucket:get{1} do vshard.storage.garbage_collector_wakeup() fiber.sleep(0.01) end
> +wait_bucket_is_collected(1)
>  _ = test_run:switch('box_2_a')
>  vshard.storage.buckets_info(1)
>  vshard.storage.internal.errinj.ERRINJ_LONG_RECEIVE = false
> @@ -89,8 +89,7 @@ while not vshard.storage.buckets_info(1)[1].rw_lock do fiber.sleep(0.01) end
>  fiber.sleep(0.2)
>  vshard.storage.buckets_info(1)
>  finish_refs = true
> -while vshard.storage.buckets_info(1)[1].rw_lock do fiber.sleep(0.01) end
> -while box.space._bucket:get{1} do fiber.sleep(0.01) end
> +wait_bucket_is_collected(1)
>  _ = test_run:switch('box_1_a')
>  vshard.storage.buckets_info(1)
>
> diff --git a/test/rebalancer/errinj.result b/test/rebalancer/errinj.result
> index 214e7d8..e50eb72 100644
> --- a/test/rebalancer/errinj.result
> +++ b/test/rebalancer/errinj.result
> @@ -237,7 +237,10 @@ _bucket:get{36}
>  -- Buckets became 'active' on box_2_a, but still are sending on
>  -- box_1_a. Wait until it is marked as garbage on box_1_a by the
>  -- recovery fiber.
> -while _bucket:get{35} ~= nil or _bucket:get{36} ~= nil do vshard.storage.recovery_wakeup() fiber.sleep(0.001) end
> +wait_bucket_is_collected(35)
> +---
> +...
> +wait_bucket_is_collected(36)
>  ---
>  ...
>  _ = test_run:switch('box_2_a')
> @@ -278,7 +281,7 @@ while not _bucket:get{36} do fiber.sleep(0.0001) end
>  _ = test_run:switch('box_1_a')
>  ---
>  ...
> -while _bucket:get{36} do vshard.storage.recovery_wakeup() vshard.storage.garbage_collector_wakeup() fiber.sleep(0.001) end
> +wait_bucket_is_collected(36)
>  ---
>  ...
>  _bucket:get{36}
> @@ -295,12 +298,6 @@ box.error.injection.set('ERRINJ_WAL_DELAY', false)
>  ---
>  - ok
>  ...
> -_ = test_run:switch('box_1_a')
> ----
> -...
> -while _bucket:get{36} and _bucket:get{36}.status == vshard.consts.BUCKET.ACTIVE do fiber.sleep(0.001) end
> ----
> -...
>  test_run:switch('default')
>  ---
>  - true
> diff --git a/test/rebalancer/errinj.test.lua b/test/rebalancer/errinj.test.lua
> index 66fbe5e..2cc4a69 100644
> --- a/test/rebalancer/errinj.test.lua
> +++ b/test/rebalancer/errinj.test.lua
> @@ -107,7 +107,8 @@ _bucket:get{36}
>  -- Buckets became 'active' on box_2_a, but still are sending on
>  -- box_1_a. Wait until it is marked as garbage on box_1_a by the
>  -- recovery fiber.
> -while _bucket:get{35} ~= nil or _bucket:get{36} ~= nil do vshard.storage.recovery_wakeup() fiber.sleep(0.001) end
> +wait_bucket_is_collected(35)
> +wait_bucket_is_collected(36)
>  _ = test_run:switch('box_2_a')
>  _bucket:get{35}
>  _bucket:get{36}
> @@ -124,13 +125,11 @@ f1 = fiber.create(function() ret1, err1 = vshard.storage.bucket_send(36, util.re
>  _ = test_run:switch('box_2_a')
>  while not _bucket:get{36} do fiber.sleep(0.0001) end
>  _ = test_run:switch('box_1_a')
> -while _bucket:get{36} do vshard.storage.recovery_wakeup() vshard.storage.garbage_collector_wakeup() fiber.sleep(0.001) end
> +wait_bucket_is_collected(36)
>  _bucket:get{36}
>  _ = test_run:switch('box_2_a')
>  _bucket:get{36}
>  box.error.injection.set('ERRINJ_WAL_DELAY', false)
> -_ = test_run:switch('box_1_a')
> -while _bucket:get{36} and _bucket:get{36}.status == vshard.consts.BUCKET.ACTIVE do fiber.sleep(0.001) end
>
>  test_run:switch('default')
>  test_run:drop_cluster(REPLICASET_2)
> diff --git a/test/rebalancer/rebalancer.result b/test/rebalancer/rebalancer.result
> index 3607e93..098b845 100644
> --- a/test/rebalancer/rebalancer.result
> +++ b/test/rebalancer/rebalancer.result
> @@ -334,10 +334,7 @@ vshard.storage.rebalancer_wakeup()
>  -- Now rebalancer makes a bucket SENT. After it the garbage
>  -- collector cleans it and deletes after a timeout.
>  --
> -while _bucket:get{91}.status ~= vshard.consts.BUCKET.SENT do fiber.sleep(0.01) end
> ----
> -...
> -while _bucket:get{91} ~= nil do fiber.sleep(0.1) end
> +wait_bucket_is_collected(91)
>  ---
>  ...
>  wait_rebalancer_state("The cluster is balanced ok", test_run)
> diff --git a/test/rebalancer/rebalancer.test.lua b/test/rebalancer/rebalancer.test.lua
> index 63e690f..308e66d 100644
> --- a/test/rebalancer/rebalancer.test.lua
> +++ b/test/rebalancer/rebalancer.test.lua
> @@ -162,8 +162,7 @@ vshard.storage.rebalancer_wakeup()
>  -- Now rebalancer makes a bucket SENT. After it the garbage
>  -- collector cleans it and deletes after a timeout.
>  --
> -while _bucket:get{91}.status ~= vshard.consts.BUCKET.SENT do fiber.sleep(0.01) end
> -while _bucket:get{91} ~= nil do fiber.sleep(0.1) end
> +wait_bucket_is_collected(91)
>  wait_rebalancer_state("The cluster is balanced ok", test_run)
>  _bucket.index.status:count({vshard.consts.BUCKET.ACTIVE})
>  _bucket.index.status:min({vshard.consts.BUCKET.ACTIVE})
> diff --git a/test/rebalancer/receiving_bucket.result b/test/rebalancer/receiving_bucket.result
> index db6a67f..7d3612b 100644
> --- a/test/rebalancer/receiving_bucket.result
> +++ b/test/rebalancer/receiving_bucket.result
> @@ -374,7 +374,7 @@ vshard.storage.buckets_info(1)
>      destination:
>      id: 1
>  ...
> -while box.space._bucket:get{1} do vshard.storage.garbage_collector_wakeup() fiber.sleep(0.01) end
> +wait_bucket_is_collected(1)
>  ---
>  ...
>  vshard.storage.buckets_info(1)
> diff --git a/test/rebalancer/receiving_bucket.test.lua b/test/rebalancer/receiving_bucket.test.lua
> index 1819cbb..24534b3 100644
> --- a/test/rebalancer/receiving_bucket.test.lua
> +++ b/test/rebalancer/receiving_bucket.test.lua
> @@ -137,7 +137,7 @@ box.space.test3:select{100}
>  _ = test_run:switch('box_2_a')
>  vshard.storage.bucket_send(1, util.replicasets[1], {timeout = 0.3})
>  vshard.storage.buckets_info(1)
> -while box.space._bucket:get{1} do vshard.storage.garbage_collector_wakeup() fiber.sleep(0.01) end
> +wait_bucket_is_collected(1)
>  vshard.storage.buckets_info(1)
>  _ = test_run:switch('box_1_a')
>  box.space._bucket:get{1}
> diff --git a/test/reload_evolution/storage.result b/test/reload_evolution/storage.result
> index 4652c4f..753687f 100644
> --- a/test/reload_evolution/storage.result
> +++ b/test/reload_evolution/storage.result
> @@ -129,10 +129,7 @@ vshard.storage.bucket_send(bucket_id_to_move, util.replicasets[1])
>  ---
>  - true
>  ...
> -vshard.storage.garbage_collector_wakeup()
> ----
> -...
> -while box.space._bucket:get({bucket_id_to_move}) do fiber.sleep(0.01) end
> +wait_bucket_is_collected(bucket_id_to_move)
>  ---
>  ...
>  test_run:switch('storage_1_a')
> diff --git a/test/reload_evolution/storage.test.lua b/test/reload_evolution/storage.test.lua
> index 06f7117..639553e 100644
> --- a/test/reload_evolution/storage.test.lua
> +++ b/test/reload_evolution/storage.test.lua
> @@ -51,8 +51,7 @@ vshard.storage.bucket_force_create(2000)
>  vshard.storage.buckets_info()[2000]
>  vshard.storage.call(bucket_id_to_move, 'read', 'do_select', {42})
>  vshard.storage.bucket_send(bucket_id_to_move, util.replicasets[1])
> -vshard.storage.garbage_collector_wakeup()
> -while box.space._bucket:get({bucket_id_to_move}) do fiber.sleep(0.01) end
> +wait_bucket_is_collected(bucket_id_to_move)
>  test_run:switch('storage_1_a')
>  while box.space._bucket:get{bucket_id_to_move}.status ~= vshard.consts.BUCKET.ACTIVE do vshard.storage.recovery_wakeup() fiber.sleep(0.01) end
>  vshard.storage.bucket_send(bucket_id_to_move, util.replicasets[2])
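
For illustration, the timeout question raised above could be addressed roughly like this. This is only a sketch, not part of the patch: it assumes test_run:wait_cond() accepts a timeout as its second argument, and the 5-second default is an arbitrary value chosen for the example.

```lua
-- Hypothetical variant of wait_bucket_is_collected() with an explicit
-- timeout, so a stuck GC fails the wait quickly instead of hitting
-- test-run's 120-second inactivity limit.
-- Assumes wait_cond(cond, timeout); timeout value here is illustrative.
function wait_bucket_is_collected(id, timeout)
    test_run:wait_cond(function()
        if not box.space._bucket:get{id} then
            return true
        end
        vshard.storage.recovery_wakeup()
        vshard.storage.garbage_collector_wakeup()
    end, timeout or 5)
end
```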