From mboxrd@z Thu Jan  1 00:00:00 1970
From: Cyrill Gorcunov via Tarantool-patches
Reply-To: Cyrill Gorcunov
To: tml
Cc: Vladislav Shpilevoy
Date: Mon, 31 May 2021 20:05:25 +0300
Message-Id: <20210531170526.675346-1-gorcunov@gmail.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [Tarantool-patches] [PATCH] test/replication: add testcase for synchro filtering
List-Id: Tarantool development patches

In the test we play a kind of ping-pong with PROMOTE records to trigger the case in applier_synchro_filter_tx where a promote record reaches a node from a proper source instance but with obsolete 'term' content bound to the row's replica_id. The main idea is to reproduce this situation: we have not yet figured out which filtering strategy should be chosen. Thus the patch is not for merging but for further investigation.
issue https://github.com/tarantool/tarantool/issues/6035
branch gorcunov/gh-6035-applier-filter-3-notest

Signed-off-by: Cyrill Gorcunov
---
 src/box/applier.cc                            |  42 ++-
 .../replication/gh-6035-applier-filter.result | 331 ++++++++++++++++++
 .../gh-6035-applier-filter.test.lua           | 161 +++++++++
 test/replication/gh6035master.lua             |  27 ++
 test/replication/gh6035replica1.lua           |  27 ++
 test/replication/gh6035replica2.lua           |  27 ++
 test/replication/gh6035replica3.lua           |  27 ++
 7 files changed, 640 insertions(+), 2 deletions(-)
 create mode 100644 test/replication/gh-6035-applier-filter.result
 create mode 100644 test/replication/gh-6035-applier-filter.test.lua
 create mode 100644 test/replication/gh6035master.lua
 create mode 100644 test/replication/gh6035replica1.lua
 create mode 100644 test/replication/gh6035replica2.lua
 create mode 100644 test/replication/gh6035replica3.lua

diff --git a/src/box/applier.cc b/src/box/applier.cc
index 33181fdbf..d89d0833a 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -965,6 +965,38 @@ apply_final_join_tx(struct stailq *rows)
 	return rc;
 }
 
+static void
+decode_obsolete_rows(uint32_t replica_id, struct stailq *rows)
+{
+	struct applier_tx_row *item;
+
+	say_info("XXX: %s: %d", __func__, replica_id);
+	stailq_foreach_entry(item, rows, next) {
+		struct xrow_header *r = &item->row;
+
+		say_info("XXX: %s: type %d replica_id %d group_id %d "
+			 "sync %llu lsn %lld tsn %lld flags %#x",
+			 iproto_type_name(r->type),
+			 r->type, r->replica_id, r->group_id,
+			 (long long)r->sync, (long long)r->lsn,
+			 (long long)r->tsn, (int)r->flags);
+
+		if (iproto_type_is_synchro_request(r->type)) {
+			struct synchro_request req;
+			if (xrow_decode_synchro(r, &req) != 0)
+				continue;
+
+			say_info("XXX: %s: type %d replica_id %d "
+				 "origin_id %d lsn %lld term %llu",
+				 iproto_type_name(req.type),
+				 req.type, req.replica_id,
+				 req.origin_id,
+				 (long long)req.lsn,
+				 (long long)req.term);
+		}
+	}
+}
+
 /**
  * When elections are enabled we must filter out synchronous rows coming
 * from an instance that fell behind the current leader. This includes
@@ -973,7 +1005,7 @@ apply_final_join_tx(struct stailq *rows)
  * The rows are replaced with NOPs to preserve the vclock consistency.
  */
 static void
-applier_synchro_filter_tx(struct stailq *rows)
+applier_synchro_filter_tx(uint32_t replica_id, struct stailq *rows)
 {
 	/*
 	 * XXX: in case raft is disabled, synchronous replication still works
@@ -989,6 +1021,12 @@ applier_synchro_filter_tx(struct stailq *rows)
 	 * node, so cannot check for applier->instance_id here.
 	 */
 	row = &stailq_first_entry(rows, struct applier_tx_row, next)->row;
+
+	if (txn_limbo_is_replica_outdated(&txn_limbo, replica_id) !=
+	    txn_limbo_is_replica_outdated(&txn_limbo, row->replica_id)) {
+		decode_obsolete_rows(replica_id, rows);
+	}
+
 	if (!txn_limbo_is_replica_outdated(&txn_limbo, row->replica_id))
 		return;
@@ -1080,7 +1118,7 @@ applier_apply_tx(struct applier *applier, struct stailq *rows)
 			}
 		}
 	}
-	applier_synchro_filter_tx(rows);
+	applier_synchro_filter_tx(applier->instance_id, rows);
 	if (unlikely(iproto_type_is_synchro_request(first_row->type))) {
 		/*
 		 * Synchro messages are not transactions, in terms
diff --git a/test/replication/gh-6035-applier-filter.result b/test/replication/gh-6035-applier-filter.result
new file mode 100644
index 000000000..9507dfcd1
--- /dev/null
+++ b/test/replication/gh-6035-applier-filter.result
@@ -0,0 +1,331 @@
+-- test-run result file version 2
+--
+-- gh-6035: Investigate applier_synchro_filter_tx filtration; the
+-- filter may operate in two modes: filter by the source of data
+-- (i.e. applier->instance_id) or filter by data contents (row->replica_id).
+--
+-- Intuitively, filtering by data contents looks more flexible,
+-- while filtering by data source is much faster.
+--
+test_run = require('test_run').new()
+ | ---
+ | ...
+ +-- +-- Basic workflow: +-- +-- - start 4 nodes, master, replica1, replica2, replica3 +-- - stop replica 1 and replica2 +-- - insert new record into sync space on master +-- - wait record propagation on replica3 +-- - drop master node +-- - promote replica3 to be RAFT leader +-- - start replica2 and wait new record replicated +-- (the initial source of this record is master node) +-- - promote replica2 to be RAFT leader +-- - promote replica3 to be RAFT leader +-- - promote replica2 to be RAFT leader +-- - promote replica3 to be RAFT leader +-- - drop replica2 +-- - start replica1 +-- + +SERVERS = { \ + 'gh6035master', \ + 'gh6035replica1', \ + 'gh6035replica2', \ + 'gh6035replica3' \ +} + | --- + | ... + +test_run:create_cluster(SERVERS, "replication") + | --- + | ... +test_run:wait_fullmesh(SERVERS) + | --- + | ... + +-- +-- Make sure master node is a RAFT leader. +test_run:switch('gh6035master') + | --- + | - true + | ... +test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100) + | --- + | - true + | ... + +-- +-- Create spaces needed. +_ = box.schema.create_space('sync', {is_sync = true}) + | --- + | ... +_ = box.space.sync:create_index('pk') + | --- + | ... + +-- +-- Make sure the space is replicated. +test_run:switch('gh6035replica1') + | --- + | - true + | ... +test_run:wait_cond(function() return box.space.sync ~= nil end, 100) + | --- + | - true + | ... + +test_run:switch('gh6035replica2') + | --- + | - true + | ... +test_run:wait_cond(function() return box.space.sync ~= nil end, 100) + | --- + | - true + | ... + +test_run:switch('gh6035replica3') + | --- + | - true + | ... +test_run:wait_cond(function() return box.space.sync ~= nil end, 100) + | --- + | - true + | ... + +-- +-- On the master node insert a new record but with r1 and r2 stopped. +test_run:switch('gh6035master') + | --- + | - true + | ... +test_run:cmd('stop server gh6035replica1') + | --- + | - true + | ... 
+test_run:cmd('stop server gh6035replica2') + | --- + | - true + | ... + +box.space.sync:insert{1} + | --- + | - [1] + | ... + +test_run:switch('default') + | --- + | - true + | ... +test_run:cmd('stop server gh6035master') + | --- + | - true + | ... + +-- +-- Now we have only r3 up and running with +-- a new record replicated. +-- + +-- +-- Make r3 being a RAFT leader. +test_run:switch('gh6035replica3') + | --- + | - true + | ... +box.space.sync:select{} -- new record must be replicated already + | --- + | - - [1] + | ... +box.cfg{election_mode = 'manual', replication_synchro_quorum = 1} + | --- + | ... +box.ctl.promote() + | --- + | ... +test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100) + | --- + | - true + | ... + +-- +-- Start r2 (so r2 and r3 are only nodes running), make sure it has a new +-- record replicated and make it a RAFT leader. +test_run:cmd('start server gh6035replica2') + | --- + | - true + | ... +test_run:switch('gh6035replica2') + | --- + | - true + | ... +test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100) + | --- + | - true + | ... +test_run:wait_cond(function() return box.space.sync:select{}[1] ~= nil \ + and box.space.sync:select{}[1][1] == 1 end, 100) + | --- + | - true + | ... + +box.cfg{election_mode = 'manual', replication_synchro_quorum = 1} + | --- + | ... +box.ctl.promote() + | --- + | ... +test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100) + | --- + | - true + | ... + +-- +-- Now change the RAFT leadership and transfer it back +-- to the r3 which has a follower state. +test_run:switch('gh6035replica3') + | --- + | - true + | ... +test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100) + | --- + | - true + | ... +box.cfg{election_mode = 'manual', replication_synchro_quorum = 1} + | --- + | ... +box.ctl.promote() + | --- + | ... 
+test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100) + | --- + | - true + | ... + +-- +-- Next round of RAFT leadership migration: to r2 and back to r3, +-- this is needed to make existing terms being rusty, thus +-- we will have obsolete row->replica_id associated term. +test_run:switch('gh6035replica2') + | --- + | - true + | ... +test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100) + | --- + | - true + | ... +box.cfg{election_mode = 'manual', replication_synchro_quorum = 1} + | --- + | ... +box.ctl.promote() + | --- + | ... +test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100) + | --- + | - true + | ... + +test_run:switch('gh6035replica3') + | --- + | - true + | ... +test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100) + | --- + | - true + | ... +box.cfg{election_mode = 'manual', replication_synchro_quorum = 1} + | --- + | ... +box.ctl.promote() + | --- + | ... +test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100) + | --- + | - true + | ... + +-- +-- r2 is no longer needed, so delete it. +test_run:switch('default') + | --- + | - true + | ... +test_run:cmd('stop server gh6035replica2') + | --- + | - true + | ... + +-- +-- Finally start r1: at this moment only r3 is running and +-- it has data from the master node and r2's promote records. +test_run:cmd('start server gh6035replica1') + | --- + | - true + | ... +test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100) + | --- + | - true + | ... + +-- +-- Wait the former record been replicated. +test_run:switch('gh6035replica1') + | --- + | - true + | ... +test_run:wait_cond(function() return box.space.sync:select{}[1] ~= nil \ + and box.space.sync:select{}[1][1] == 1 end, 100) + | --- + | - true + | ... +test_run:wait_lsn('gh6035replica1', 'gh6035replica3') + | --- + | ... 
+
+--
+-- At this point the r1 will receive the XXX record.
+--
+-- XXX: decode_obsolete_rows: 4
+-- XXX: PROMOTE: type 31 replica_id 3 group_id 0 sync 0 lsn 1 tsn 1 flags 0x1
+-- XXX: PROMOTE: type 31 replica_id 4 origin_id 3 lsn 0 term 4
+--
+-- XXX: decode_obsolete_rows: 4
+-- XXX: PROMOTE: type 31 replica_id 3 group_id 0 sync 0 lsn 2 tsn 2 flags 0x1
+-- XXX: PROMOTE: type 31 replica_id 4 origin_id 3 lsn 0 term 6
+assert(test_run:grep_log('gh6035replica1', 'XXX: .*') ~= nil)
+ | ---
+ | - true
+ | ...
+
+--
+-- Cleanup
+test_run:switch('default')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server gh6035master')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server gh6035replica1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server gh6035replica1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server gh6035replica2')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server gh6035replica3')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server gh6035replica3')
+ | ---
+ | - true
+ | ...
diff --git a/test/replication/gh-6035-applier-filter.test.lua b/test/replication/gh-6035-applier-filter.test.lua
new file mode 100644
index 000000000..e01349935
--- /dev/null
+++ b/test/replication/gh-6035-applier-filter.test.lua
@@ -0,0 +1,161 @@
+--
+-- gh-6035: Investigate applier_synchro_filter_tx filtration; the
+-- filter may operate in two modes: filter by the source of data
+-- (i.e. applier->instance_id) or filter by data contents (row->replica_id).
+--
+-- Intuitively, filtering by data contents looks more flexible,
+-- while filtering by data source is much faster.
+-- +test_run = require('test_run').new() + +-- +-- Basic workflow: +-- +-- - start 4 nodes, master, replica1, replica2, replica3 +-- - stop replica 1 and replica2 +-- - insert new record into sync space on master +-- - wait record propagation on replica3 +-- - drop master node +-- - promote replica3 to be RAFT leader +-- - start replica2 and wait new record replicated +-- (the initial source of this record is master node) +-- - promote replica2 to be RAFT leader +-- - promote replica3 to be RAFT leader +-- - promote replica2 to be RAFT leader +-- - promote replica3 to be RAFT leader +-- - drop replica2 +-- - start replica1 +-- + +SERVERS = { \ + 'gh6035master', \ + 'gh6035replica1', \ + 'gh6035replica2', \ + 'gh6035replica3' \ +} + +test_run:create_cluster(SERVERS, "replication") +test_run:wait_fullmesh(SERVERS) + +-- +-- Make sure master node is a RAFT leader. +test_run:switch('gh6035master') +test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100) + +-- +-- Create spaces needed. +_ = box.schema.create_space('sync', {is_sync = true}) +_ = box.space.sync:create_index('pk') + +-- +-- Make sure the space is replicated. +test_run:switch('gh6035replica1') +test_run:wait_cond(function() return box.space.sync ~= nil end, 100) + +test_run:switch('gh6035replica2') +test_run:wait_cond(function() return box.space.sync ~= nil end, 100) + +test_run:switch('gh6035replica3') +test_run:wait_cond(function() return box.space.sync ~= nil end, 100) + +-- +-- On the master node insert a new record but with r1 and r2 stopped. +test_run:switch('gh6035master') +test_run:cmd('stop server gh6035replica1') +test_run:cmd('stop server gh6035replica2') + +box.space.sync:insert{1} + +test_run:switch('default') +test_run:cmd('stop server gh6035master') + +-- +-- Now we have only r3 up and running with +-- a new record replicated. +-- + +-- +-- Make r3 being a RAFT leader. 
+test_run:switch('gh6035replica3') +box.space.sync:select{} -- new record must be replicated already +box.cfg{election_mode = 'manual', replication_synchro_quorum = 1} +box.ctl.promote() +test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100) + +-- +-- Start r2 (so r2 and r3 are only nodes running), make sure it has a new +-- record replicated and make it a RAFT leader. +test_run:cmd('start server gh6035replica2') +test_run:switch('gh6035replica2') +test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100) +test_run:wait_cond(function() return box.space.sync:select{}[1] ~= nil \ + and box.space.sync:select{}[1][1] == 1 end, 100) + +box.cfg{election_mode = 'manual', replication_synchro_quorum = 1} +box.ctl.promote() +test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100) + +-- +-- Now change the RAFT leadership and transfer it back +-- to the r3 which has a follower state. +test_run:switch('gh6035replica3') +test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100) +box.cfg{election_mode = 'manual', replication_synchro_quorum = 1} +box.ctl.promote() +test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100) + +-- +-- Next round of RAFT leadership migration: to r2 and back to r3, +-- this is needed to make existing terms being rusty, thus +-- we will have obsolete row->replica_id associated term. 
+test_run:switch('gh6035replica2') +test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100) +box.cfg{election_mode = 'manual', replication_synchro_quorum = 1} +box.ctl.promote() +test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100) + +test_run:switch('gh6035replica3') +test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100) +box.cfg{election_mode = 'manual', replication_synchro_quorum = 1} +box.ctl.promote() +test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100) + +-- +-- r2 is no longer needed, so delete it. +test_run:switch('default') +test_run:cmd('stop server gh6035replica2') + +-- +-- Finally start r1: at this moment only r3 is running and +-- it has data from the master node and r2's promote records. +test_run:cmd('start server gh6035replica1') +test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100) + +-- +-- Wait the former record been replicated. +test_run:switch('gh6035replica1') +test_run:wait_cond(function() return box.space.sync:select{}[1] ~= nil \ + and box.space.sync:select{}[1][1] == 1 end, 100) +test_run:wait_lsn('gh6035replica1', 'gh6035replica3') + +-- +-- At this point the r1 will receive the XXX record. 
+-- +-- XXX: decode_obsolete_rows: 4 +-- XXX: PROMOTE: type 31 replica_id 3 group_id 0 sync 0 lsn 1 tsn 1 flags 0x1 +-- XXX: PROMOTE: type 31 replica_id 4 origin_id 3 lsn 0 term 4 +-- +-- XXX: decode_obsolete_rows: 4 +-- XXX: PROMOTE: type 31 replica_id 3 group_id 0 sync 0 lsn 2 tsn 2 flags 0x1 +-- XXX: PROMOTE: type 31 replica_id 4 origin_id 3 lsn 0 term 6 +assert(test_run:grep_log('gh6035replica1', 'XXX: .*') ~= nil) + +-- +-- Cleanup +test_run:switch('default') +test_run:cmd('delete server gh6035master') +test_run:cmd('stop server gh6035replica1') +test_run:cmd('delete server gh6035replica1') +test_run:cmd('delete server gh6035replica2') +test_run:cmd('stop server gh6035replica3') +test_run:cmd('delete server gh6035replica3') diff --git a/test/replication/gh6035master.lua b/test/replication/gh6035master.lua new file mode 100644 index 000000000..b269f4732 --- /dev/null +++ b/test/replication/gh6035master.lua @@ -0,0 +1,27 @@ +local SOCKET_DIR = require('fio').cwd() + +local function unix_socket(name) + return SOCKET_DIR .. "/" .. name .. '.sock'; +end + +require('console').listen(os.getenv('ADMIN')) + +box.cfg({ + listen = unix_socket("gh6035master"), + replication = { + unix_socket("gh6035master"), + unix_socket("gh6035replica1"), + unix_socket("gh6035replica2"), + unix_socket("gh6035replica3"), + }, + replication_connect_quorum = 1, + replication_synchro_quorum = 2, + replication_synchro_timeout = 10000, + replication_sync_timeout = 5, + read_only = false, + election_mode = 'candidate', +}) + +box.once("bootstrap", function() + box.schema.user.grant('guest', 'super') +end) diff --git a/test/replication/gh6035replica1.lua b/test/replication/gh6035replica1.lua new file mode 100644 index 000000000..0f8f17e57 --- /dev/null +++ b/test/replication/gh6035replica1.lua @@ -0,0 +1,27 @@ +local SOCKET_DIR = require('fio').cwd() + +local function unix_socket(name) + return SOCKET_DIR .. "/" .. name .. 
'.sock'; +end + +require('console').listen(os.getenv('ADMIN')) + +box.cfg({ + listen = unix_socket("gh6035replica1"), + replication = { + unix_socket("gh6035master"), + unix_socket("gh6035replica1"), + unix_socket("gh6035replica2"), + unix_socket("gh6035replica3"), + }, + replication_connect_quorum = 1, + replication_synchro_quorum = 2, + replication_synchro_timeout = 10000, + replication_sync_timeout = 5, + read_only = true, + election_mode = 'voter', +}) + +box.once("bootstrap", function() + box.schema.user.grant('guest', 'super') +end) diff --git a/test/replication/gh6035replica2.lua b/test/replication/gh6035replica2.lua new file mode 100644 index 000000000..263a20390 --- /dev/null +++ b/test/replication/gh6035replica2.lua @@ -0,0 +1,27 @@ +local SOCKET_DIR = require('fio').cwd() + +local function unix_socket(name) + return SOCKET_DIR .. "/" .. name .. '.sock'; +end + +require('console').listen(os.getenv('ADMIN')) + +box.cfg({ + listen = unix_socket("gh6035replica2"), + replication = { + unix_socket("gh6035master"), + unix_socket("gh6035replica1"), + unix_socket("gh6035replica2"), + unix_socket("gh6035replica3"), + }, + replication_connect_quorum = 1, + replication_synchro_quorum = 2, + replication_synchro_timeout = 10000, + replication_sync_timeout = 5, + read_only = true, + election_mode = 'voter', +}) + +box.once("bootstrap", function() + box.schema.user.grant('guest', 'super') +end) diff --git a/test/replication/gh6035replica3.lua b/test/replication/gh6035replica3.lua new file mode 100644 index 000000000..7c7c3233b --- /dev/null +++ b/test/replication/gh6035replica3.lua @@ -0,0 +1,27 @@ +local SOCKET_DIR = require('fio').cwd() + +local function unix_socket(name) + return SOCKET_DIR .. "/" .. name .. 
'.sock'; +end + +require('console').listen(os.getenv('ADMIN')) + +box.cfg({ + listen = unix_socket("gh6035replica3"), + replication = { + unix_socket("gh6035master"), + unix_socket("gh6035replica1"), + unix_socket("gh6035replica2"), + unix_socket("gh6035replica3"), + }, + replication_connect_quorum = 1, + replication_synchro_quorum = 2, + replication_synchro_timeout = 10000, + replication_sync_timeout = 5, + read_only = true, + election_mode = 'voter', +}) + +box.once("bootstrap", function() + box.schema.user.grant('guest', 'super') +end) -- 2.31.1