From mboxrd@z Thu Jan 1 00:00:00 1970
From: Cyrill Gorcunov via Tarantool-patches
Reply-To: Cyrill Gorcunov
To: tml
Cc: Vladislav Shpilevoy
Date: Wed, 24 Mar 2021 19:37:57 +0300
Message-Id: <20210324163759.1900553-2-gorcunov@gmail.com>
In-Reply-To: <20210324163759.1900553-1-gorcunov@gmail.com>
References: <20210324163759.1900553-1-gorcunov@gmail.com>
X-Mailer: git-send-email 2.30.2
List-Id: Tarantool development patches
Subject: [Tarantool-patches] [PATCH v4 1/3] gc/xlog: delay xlog cleanup until relays are subscribed

If a replica falls far behind the master node (so that a number of xlog
files have piled up since the master's last snapshot), then once the
master node is restarted it may clean up the xlogs the replica needs for
a fast subscribe, and the replica will instead have to rejoin,
re-reading a large amount of data.
Let's address this by delaying xlog file cleanup until the replicas have
subscribed and the relays are up and running. For this purpose the
cleanup fiber starts out spinning in a nop cycle ("paused" mode) and a
delay counter is used to wait until the relays decrement it.

This implies that if the `_cluster` system space is not empty upon
restart and a registered replica has vanished completely and will never
come back, the node administrator has to drop this replica from
`_cluster` manually.

Note that the delayed cleanup start does not prevent the WAL engine from
removing old files if there is no space left on the storage device; the
WAL will simply drop old data without asking.

Since some administrators might not need this functionality at all, the
"wal_cleanup_delay" configuration option is introduced to enable or
disable the delay.

Closes #5806

Signed-off-by: Cyrill Gorcunov

@TarantoolBot document
Title: Add wal_cleanup_delay configuration parameter

The `wal_cleanup_delay` option defines a delay in seconds before
write-ahead log files (`*.xlog`) start being pruned after a node
restart.

This option is ignored if the node is running as an anonymous replica
(`replication_anon = true`). Similarly, if replication is not used or
there are no plans to use it at all, this option is irrelevant.

The problem the option addresses is a node operating so fast that its
replicas cannot keep up with its state; if the node is restarted at such
a moment (for various reasons, for example a power outage), `*.xlog`
files might be pruned during the restart. As a result the replicas will
not find these files on the main node and will have to re-fetch all the
data, which is a very expensive procedure.

Since replicas are tracked via the `_cluster` system space, its content
is used to count subscribed replicas, and once all of them are up and
running the cleanup procedure is enabled automatically even if
`wal_cleanup_delay` has not expired yet.

`wal_cleanup_delay` should be set to:

 - `0` to disable the cleanup delay;
 - `> 0` to wait for the specified number of seconds.

By default it is set to `14400` seconds (i.e. `4` hours).

If a registered replica is lost forever and the timeout is effectively
infinite, the preferred way to re-enable the cleanup procedure is not to
set a small timeout value but rather to delete this replica from the
`_cluster` space manually.

Note that the option does *not* prevent the WAL engine from removing old
`*.xlog` files if there is no space left on the storage device; in that
case the WAL engine removes them forcibly.

The current state of the `*.xlog` garbage collector can be seen in the
`box.info.gc()` output. For example:

``` Lua
tarantool> box.info.gc()
---
...
is_paused: false
```

The `is_paused` field shows whether the cleanup fiber is paused or not.
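As a usage illustration (an editor's sketch, not part of the patch
itself), the new option and the `box.info.gc()` field could be exercised
as follows; the `3600` value and the replica id `2` are arbitrary
examples, not defaults:

``` Lua
-- Delay xlog cleanup for one hour after restart (example value;
-- 0 disables the delay, the default is 14400 seconds).
box.cfg{wal_cleanup_delay = 3600}

-- Check whether the cleanup fiber is still paused.
box.info.gc().is_paused

-- If a registered replica is gone forever, drop it from _cluster
-- manually so that cleanup can resume; 2 is a hypothetical replica id.
box.space._cluster:delete{2}
```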
---
 .../unreleased/add-wal_cleanup_delay.md   |  5 ++
 src/box/box.cc                            | 41 +++++++++
 src/box/box.h                             |  1 +
 src/box/gc.c                              | 85 ++++++++++++++++++-
 src/box/gc.h                              | 36 ++++++++
 src/box/lua/cfg.cc                        |  9 ++
 src/box/lua/info.c                        |  4 +
 src/box/lua/load_cfg.lua                  |  5 ++
 src/box/relay.cc                          |  1 +
 src/box/replication.cc                    |  2 +
 test/app-tap/init_script.result           |  1 +
 test/box/admin.result                     |  2 +
 test/box/cfg.result                       |  4 +
 test/replication/replica_rejoin.result    | 16 ++++
 test/replication/replica_rejoin.test.lua  |  9 ++
 15 files changed, 217 insertions(+), 4 deletions(-)
 create mode 100644 changelogs/unreleased/add-wal_cleanup_delay.md

diff --git a/changelogs/unreleased/add-wal_cleanup_delay.md b/changelogs/unreleased/add-wal_cleanup_delay.md
new file mode 100644
index 000000000..2e67f0a4e
--- /dev/null
+++ b/changelogs/unreleased/add-wal_cleanup_delay.md
@@ -0,0 +1,5 @@
+## bugfix/core
+
+* Introduce `wal_cleanup_delay` option to prevent early cleanup
+  of `*.xlog` files which are needed by replicas and lead to
+  `XlogGapError` (gh-5806).
diff --git a/src/box/box.cc b/src/box/box.cc
index cc59564e1..a269f7357 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -771,6 +771,19 @@ box_check_wal_queue_max_size(void)
 	return size;
 }
 
+static double
+box_check_wal_cleanup_delay(void)
+{
+	double value = cfg_getd("wal_cleanup_delay");
+	if (value < 0) {
+		diag_set(ClientError, ER_CFG, "wal_cleanup_delay",
+			 "the value must be >= 0");
+		return -1;
+	}
+
+	return value;
+}
+
 static void
 box_check_readahead(int readahead)
 {
@@ -918,6 +931,8 @@ box_check_config(void)
 	box_check_wal_mode(cfg_gets("wal_mode"));
 	if (box_check_wal_queue_max_size() < 0)
 		diag_raise();
+	if (box_check_wal_cleanup_delay() < 0)
+		diag_raise();
 	if (box_check_memory_quota("memtx_memory") < 0)
 		diag_raise();
 	box_check_memtx_min_tuple_size(cfg_geti64("memtx_min_tuple_size"));
@@ -1465,6 +1480,23 @@ box_set_wal_queue_max_size(void)
 	return 0;
 }
 
+int
+box_set_wal_cleanup_delay(void)
+{
+	double delay = box_check_wal_cleanup_delay();
+	if (delay < 0)
+		return -1;
+	/*
+	 * Anonymous replicas do not require
+	 * delay the cleanup procedure since they
+	 * are read only.
+	 */
+	if (replication_anon)
+		delay = 0;
+	gc_set_wal_cleanup_delay(delay);
+	return 0;
+}
+
 void
 box_set_vinyl_memory(void)
 {
@@ -3076,6 +3108,15 @@ box_cfg_xc(void)
 	}
 	fiber_gc();
 
+	/*
+	 * Exclude self from GC delay because we care
+	 * about remote replicas only, still for ref/unref
+	 * balance we do reference self node initially and
+	 * downgrade it to zero when there is no replication
+	 * set at all.
+	 */
+	gc_delay_unref();
+
 	bootstrap_journal_guard.is_active = false;
 	assert(current_journal != &bootstrap_journal);
diff --git a/src/box/box.h b/src/box/box.h
index 65215b087..e2321b9b0 100644
--- a/src/box/box.h
+++ b/src/box/box.h
@@ -243,6 +243,7 @@ void box_set_checkpoint_count(void);
 void box_set_checkpoint_interval(void);
 void box_set_checkpoint_wal_threshold(void);
 int box_set_wal_queue_max_size(void);
+int box_set_wal_cleanup_delay(void);
 void box_set_memtx_memory(void);
 void box_set_memtx_max_tuple_size(void);
 void box_set_vinyl_memory(void);
diff --git a/src/box/gc.c b/src/box/gc.c
index 9af4ef958..8dbcbcede 100644
--- a/src/box/gc.c
+++ b/src/box/gc.c
@@ -107,6 +107,11 @@ gc_init(void)
 	/* Don't delete any files until recovery is complete. */
 	gc.min_checkpoint_count = INT_MAX;
+	gc.wal_cleanup_delay = TIMEOUT_INFINITY;
+	gc.delay_ref = 0;
+	gc.is_paused = true;
+	say_info("wal/engine cleanup is paused");
+
 	vclock_create(&gc.vclock);
 	rlist_create(&gc.checkpoints);
 	gc_tree_new(&gc.consumers);
@@ -238,6 +243,41 @@ static int
 gc_cleanup_fiber_f(va_list ap)
 {
 	(void)ap;
+
+	/*
+	 * Stage 1 (optional): in case if we're booting
+	 * up with cleanup disabled lets do wait in a
+	 * separate cycle to minimize branching on stage 2.
+	 */
+	if (gc.is_paused) {
+		double start_time = fiber_clock();
+		while (!fiber_is_cancelled()) {
+			double deadline = start_time + gc.wal_cleanup_delay;
+			double timeout = gc.wal_cleanup_delay;
+
+			if (fiber_clock() >= deadline ||
+			    fiber_yield_timeout(timeout)) {
+				say_info("wal/engine cleanup is resumed "
+					 "due to timeout expiration");
+				gc.is_paused = false;
+				gc.delay_ref = 0;
+				break;
+			}
+
+			/*
+			 * If a last reference is dropped
+			 * we can exit out early.
+			 */
+			if (!gc.is_paused) {
+				say_info("wal/engine cleanup is resumed");
+				break;
+			}
+		}
+	}
+
+	/*
+	 * Stage 2: a regular cleanup cycle.
+	 */
 	while (!fiber_is_cancelled()) {
 		int64_t delta = gc.cleanup_scheduled - gc.cleanup_completed;
 		if (delta == 0) {
@@ -253,6 +293,42 @@ gc_cleanup_fiber_f(va_list ap)
 	return 0;
 }
 
+void
+gc_set_wal_cleanup_delay(double wal_cleanup_delay)
+{
+	gc.wal_cleanup_delay = wal_cleanup_delay;
+	/*
+	 * This routine may be called at arbitrary
+	 * moment thus we must be sure the cleanup
+	 * fiber is paused to not wake up it when
+	 * it is already in a regular cleanup stage.
+	 */
+	if (gc.is_paused)
+		fiber_wakeup(gc.cleanup_fiber);
+}
+
+void
+gc_delay_ref(void)
+{
+	if (gc.is_paused) {
+		assert(gc.delay_ref >= 0);
+		gc.delay_ref++;
+	}
+}
+
+void
+gc_delay_unref(void)
+{
+	if (gc.is_paused) {
+		assert(gc.delay_ref > 0);
+		gc.delay_ref--;
+		if (gc.delay_ref == 0) {
+			gc.is_paused = false;
+			fiber_wakeup(gc.cleanup_fiber);
+		}
+	}
+}
+
 /**
  * Trigger asynchronous garbage collection.
  */
@@ -462,11 +538,12 @@ gc_checkpoint(void)
 	 * Wait for background garbage collection that might
	 * have been triggered by this checkpoint to complete.
	 * Strictly speaking, it isn't necessary, but it
-	 * simplifies testing as it guarantees that by the
-	 * time box.snapshot() returns, all outdated checkpoint
-	 * files have been removed.
+	 * simplifies testing. Same time if GC is paused and
+	 * waiting for old XLOGs to be read by replicas the
+	 * cleanup won't happen immediately after the checkpoint.
 	 */
-	gc_wait_cleanup();
+	if (!gc.is_paused)
+		gc_wait_cleanup();
 	return 0;
 }
diff --git a/src/box/gc.h b/src/box/gc.h
index 2a568c5f9..f5bc26e87 100644
--- a/src/box/gc.h
+++ b/src/box/gc.h
@@ -147,6 +147,24 @@ struct gc_state {
 	 * taken at that moment of time.
 	 */
 	int64_t cleanup_completed, cleanup_scheduled;
+	/**
+	 * A counter to wait until all replicas are managed to
+	 * subscribe so that we can enable cleanup fiber to
+	 * remove old XLOGs. Otherwise some replicas might be
+	 * far behind the master node and after the master
+	 * node been restarted they will have to reread all
+	 * data back due to XlogGapError, ie too early deleted
+	 * XLOGs.
+	 */
+	int64_t delay_ref;
+	/**
+	 * Delay timeout in seconds.
+	 */
+	double wal_cleanup_delay;
+	/**
+	 * When set the cleanup fiber is paused.
+	 */
+	bool is_paused;
 	/**
 	 * Set if there's a fiber making a checkpoint right now.
 	 */
@@ -206,6 +224,24 @@ gc_init(void);
 void
 gc_free(void);
 
+/**
+ * Set a new delay value.
+ */
+void
+gc_set_wal_cleanup_delay(double wal_cleanup_delay);
+
+/**
+ * Increment a reference to delay counter.
+ */
+void
+gc_delay_ref(void);
+
+/**
+ * Decrement a reference from the delay counter.
+ */
+void
+gc_delay_unref(void);
+
 /**
  * Advance the garbage collector vclock to the given position.
  * Deactivate WAL consumers that need older data.
diff --git a/src/box/lua/cfg.cc b/src/box/lua/cfg.cc
index b37a93ed8..1142e2726 100644
--- a/src/box/lua/cfg.cc
+++ b/src/box/lua/cfg.cc
@@ -172,6 +172,14 @@ lbox_cfg_set_wal_queue_max_size(struct lua_State *L)
 	return 0;
 }
 
+static int
+lbox_cfg_set_wal_cleanup_delay(struct lua_State *L)
+{
+	if (box_set_wal_cleanup_delay() < 0)
+		luaT_error(L);
+	return 0;
+}
+
 static int
 lbox_cfg_set_read_only(struct lua_State *L)
 {
@@ -408,6 +416,7 @@ box_lua_cfg_init(struct lua_State *L)
 		{"cfg_set_checkpoint_interval", lbox_cfg_set_checkpoint_interval},
 		{"cfg_set_checkpoint_wal_threshold", lbox_cfg_set_checkpoint_wal_threshold},
 		{"cfg_set_wal_queue_max_size", lbox_cfg_set_wal_queue_max_size},
+		{"cfg_set_wal_cleanup_delay", lbox_cfg_set_wal_cleanup_delay},
 		{"cfg_set_read_only", lbox_cfg_set_read_only},
 		{"cfg_set_memtx_memory", lbox_cfg_set_memtx_memory},
 		{"cfg_set_memtx_max_tuple_size", lbox_cfg_set_memtx_max_tuple_size},
diff --git a/src/box/lua/info.c b/src/box/lua/info.c
index c4c9fa0a0..fb39799a1 100644
--- a/src/box/lua/info.c
+++ b/src/box/lua/info.c
@@ -445,6 +445,10 @@ lbox_info_gc_call(struct lua_State *L)
 	lua_pushboolean(L, gc.checkpoint_is_in_progress);
 	lua_settable(L, -3);
 
+	lua_pushstring(L, "is_paused");
+	lua_pushboolean(L, gc.is_paused);
+	lua_settable(L, -3);
+
 	lua_pushstring(L, "checkpoints");
 	lua_newtable(L);
diff --git a/src/box/lua/load_cfg.lua b/src/box/lua/load_cfg.lua
index 9967f992e..44bb95ed1 100644
--- a/src/box/lua/load_cfg.lua
+++ b/src/box/lua/load_cfg.lua
@@ -73,6 +73,7 @@ local default_cfg = {
     wal_max_size = 256 * 1024 * 1024,
     wal_dir_rescan_delay= 2,
     wal_queue_max_size = 16 * 1024 * 1024,
+    wal_cleanup_delay = 4 * 3600,
     force_recovery = false,
     replication = nil,
     instance_uuid = nil,
@@ -155,6 +156,7 @@ local template_cfg = {
     wal_mode = 'string',
     wal_max_size = 'number',
     wal_dir_rescan_delay= 'number',
+    wal_cleanup_delay = 'number',
     force_recovery = 'boolean',
     replication = 'string, number, table',
     instance_uuid = 'string',
@@ -289,6 +291,7 @@ local dynamic_cfg = {
     feedback_interval = ifdef_feedback_set_params,
     -- do nothing, affects new replicas, which query this value on start
     wal_dir_rescan_delay = function() end,
+    wal_cleanup_delay = private.cfg_set_wal_cleanup_delay,
     custom_proc_title = function()
         require('title').update(box.cfg.custom_proc_title)
     end,
@@ -349,6 +352,8 @@ local dynamic_cfg_order = {
     -- the new one. This should be fixed when box.cfg is able to
     -- apply some parameters together and atomically.
     replication_anon = 250,
+    -- Cleanup delay should be ignored if replication_anon is set.
+    wal_cleanup_delay = 260,
     election_mode = 300,
     election_timeout = 320,
 }
diff --git a/src/box/relay.cc b/src/box/relay.cc
index 41f949e8e..6edee86bf 100644
--- a/src/box/relay.cc
+++ b/src/box/relay.cc
@@ -824,6 +824,7 @@ relay_subscribe(struct replica *replica, int fd, uint64_t sync,
 					   tt_uuid_str(&replica->uuid));
 		if (replica->gc == NULL)
 			diag_raise();
+		gc_delay_unref();
 	}
 
 	relay_start(relay, fd, sync, relay_send_row);
diff --git a/src/box/replication.cc b/src/box/replication.cc
index 1fa8843e7..aefb812b3 100644
--- a/src/box/replication.cc
+++ b/src/box/replication.cc
@@ -250,6 +250,7 @@ replica_set_id(struct replica *replica, uint32_t replica_id)
 			  tt_uuid_str(&replica->uuid));
 	}
 	replicaset.replica_by_id[replica_id] = replica;
+	gc_delay_ref();
 	++replicaset.registered_count;
 	say_info("assigned id %d to replica %s", replica->id,
 		 tt_uuid_str(&replica->uuid));
@@ -273,6 +274,7 @@ replica_clear_id(struct replica *replica)
 	replicaset.replica_by_id[replica->id] = NULL;
 	assert(replicaset.registered_count > 0);
 	--replicaset.registered_count;
+	gc_delay_unref();
 	if (replica->id == instance_id) {
 		/* See replica_check_id(). */
 		assert(replicaset.is_joining);
diff --git a/test/app-tap/init_script.result b/test/app-tap/init_script.result
index 74cdb3efb..1fdd9a227 100644
--- a/test/app-tap/init_script.result
+++ b/test/app-tap/init_script.result
@@ -53,6 +53,7 @@ vinyl_run_count_per_level:2
 vinyl_run_size_ratio:3.5
 vinyl_timeout:60
 vinyl_write_threads:4
+wal_cleanup_delay:14400
 wal_dir:.
 wal_dir_rescan_delay:2
 wal_max_size:268435456
diff --git a/test/box/admin.result b/test/box/admin.result
index b619ae6cb..f8e8808e3 100644
--- a/test/box/admin.result
+++ b/test/box/admin.result
@@ -127,6 +127,8 @@ cfg_filter(box.cfg)
     - 60
   - - vinyl_write_threads
     - 4
+  - - wal_cleanup_delay
+    - 14400
   - - wal_dir
     -
   - - wal_dir_rescan_delay
diff --git a/test/box/cfg.result b/test/box/cfg.result
index 2c3d5a981..693c1b521 100644
--- a/test/box/cfg.result
+++ b/test/box/cfg.result
@@ -115,6 +115,8 @@ cfg_filter(box.cfg)
 |     - 60
 |   - - vinyl_write_threads
 |     - 4
+ |   - - wal_cleanup_delay
+ |     - 14400
 |   - - wal_dir
 |     -
 |   - - wal_dir_rescan_delay
@@ -234,6 +236,8 @@ cfg_filter(box.cfg)
 |     - 60
 |   - - vinyl_write_threads
 |     - 4
+ |   - - wal_cleanup_delay
+ |     - 14400
 |   - - wal_dir
 |     -
 |   - - wal_dir_rescan_delay
diff --git a/test/replication/replica_rejoin.result b/test/replication/replica_rejoin.result
index 4bc3df2ca..8ec803af8 100644
--- a/test/replication/replica_rejoin.result
+++ b/test/replication/replica_rejoin.result
@@ -14,6 +14,13 @@ test_run:cleanup_cluster()
 ---
 ...
 --
+-- gh-5806: this replica_rejoin test relies on the wal cleanup fiber
+-- been disabled thus lets turn it off explicitly every time we restart
+-- the main node.
+box.cfg{wal_cleanup_delay = 0}
+---
+...
+--
 -- gh-461: check that a replica refetches the last checkpoint
 -- in case it fell behind the master.
 --
@@ -69,6 +76,9 @@ test_run:cmd("stop server replica")
 -- Restart the server to purge the replica from
 -- the garbage collection state.
 test_run:cmd("restart server default")
+box.cfg{wal_cleanup_delay = 0}
+---
+...
 -- Make some checkpoints to remove old xlogs.
 checkpoint_count = box.cfg.checkpoint_count
 ---
 ...
@@ -190,6 +200,9 @@ test_run:cmd("stop server replica")
 - true
 ...
 test_run:cmd("restart server default")
+box.cfg{wal_cleanup_delay = 0}
+---
+...
 checkpoint_count = box.cfg.checkpoint_count
 ---
 ...
@@ -313,6 +326,9 @@ test_run:cmd("switch default")
 - true
 ...
 test_run:cmd("restart server default")
+box.cfg{wal_cleanup_delay = 0}
+---
+...
 replica_listen = test_run:cmd("eval replica 'return box.cfg.listen'")
 ---
 ...
diff --git a/test/replication/replica_rejoin.test.lua b/test/replication/replica_rejoin.test.lua
index 494cbbcb7..3d407d1c8 100644
--- a/test/replication/replica_rejoin.test.lua
+++ b/test/replication/replica_rejoin.test.lua
@@ -5,6 +5,12 @@ engine = test_run:get_cfg('engine')
 
 test_run:cleanup_cluster()
 
+--
+-- gh-5806: this replica_rejoin test relies on the wal cleanup fiber
+-- been disabled thus lets turn it off explicitly every time we restart
+-- the main node.
+box.cfg{wal_cleanup_delay = 0}
+
 --
 -- gh-461: check that a replica refetches the last checkpoint
 -- in case it fell behind the master.
@@ -28,6 +34,7 @@ test_run:cmd("stop server replica")
 -- Restart the server to purge the replica from
 -- the garbage collection state.
 test_run:cmd("restart server default")
+box.cfg{wal_cleanup_delay = 0}
 
 -- Make some checkpoints to remove old xlogs.
 checkpoint_count = box.cfg.checkpoint_count
@@ -72,6 +79,7 @@ box.space.test:replace{1, 2, 3} -- bumps LSN on the replica
 test_run:cmd("switch default")
 test_run:cmd("stop server replica")
 test_run:cmd("restart server default")
+box.cfg{wal_cleanup_delay = 0}
 checkpoint_count = box.cfg.checkpoint_count
 box.cfg{checkpoint_count = 1}
 for i = 1, 3 do box.space.test:delete{i * 10} end
@@ -116,6 +124,7 @@ _ = test_run:wait_vclock('default', vclock)
 -- Restart the master and force garbage collection.
 test_run:cmd("switch default")
 test_run:cmd("restart server default")
+box.cfg{wal_cleanup_delay = 0}
 replica_listen = test_run:cmd("eval replica 'return box.cfg.listen'")
 replica_listen ~= nil
 box.cfg{replication = replica_listen}
-- 
2.30.2