Tarantool development patches archive
* [Tarantool-patches] [PATCH] test/replication: add testcase for synchro filtering
@ 2021-05-31 17:05 Cyrill Gorcunov via Tarantool-patches
From: Cyrill Gorcunov via Tarantool-patches @ 2021-05-31 17:05 UTC (permalink / raw)
  To: tml; +Cc: Vladislav Shpilevoy

In the test we do a kind of ping-pong with PROMOTE records in order
to reach a state in applier_synchro_filter_tx where a promote record
arrives from an up-to-date source instance while the term associated
with the row's replica_id is already obsolete.

The main idea is to reproduce this situation, since we have not yet
figured out which filtering strategy should be chosen.
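
For reference, the two strategies differ only in which id is passed to
the outdated-replica check. Below is a minimal sketch assembled from the
names used in the patch; the actual hunk is slightly different and the
NOP-ification of outdated rows is elided:

    static void
    applier_synchro_filter_tx(uint32_t replica_id, struct stailq *rows)
    {
        struct xrow_header *row =
            &stailq_first_entry(rows, struct applier_tx_row, next)->row;
        /* Filter by the data source: the applier's instance id. */
        bool by_source =
            txn_limbo_is_replica_outdated(&txn_limbo, replica_id);
        /* Filter by the data contents: the replica id carried in the row. */
        bool by_contents =
            txn_limbo_is_replica_outdated(&txn_limbo, row->replica_id);
        /*
         * The test below is built to make these two verdicts diverge;
         * when they do, the rows are dumped to the log for inspection.
         */
        if (by_source != by_contents)
            decode_obsolete_rows(replica_id, rows);
        /* ... NOP-ification of outdated rows would follow here ... */
    }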

Thus the patch is not for merging but for further investigation.

issue https://github.com/tarantool/tarantool/issues/6035
branch gorcunov/gh-6035-applier-filter-3-notest
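
The test itself can presumably be run through the usual test harness,
e.g. (the exact invocation may differ depending on the checkout layout):

    cd test && ./test-run.py replication/gh-6035-applier-filter.test.lua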

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
 src/box/applier.cc                            |  42 ++-
 .../replication/gh-6035-applier-filter.result | 331 ++++++++++++++++++
 .../gh-6035-applier-filter.test.lua           | 161 +++++++++
 test/replication/gh6035master.lua             |  27 ++
 test/replication/gh6035replica1.lua           |  27 ++
 test/replication/gh6035replica2.lua           |  27 ++
 test/replication/gh6035replica3.lua           |  27 ++
 7 files changed, 640 insertions(+), 2 deletions(-)
 create mode 100644 test/replication/gh-6035-applier-filter.result
 create mode 100644 test/replication/gh-6035-applier-filter.test.lua
 create mode 100644 test/replication/gh6035master.lua
 create mode 100644 test/replication/gh6035replica1.lua
 create mode 100644 test/replication/gh6035replica2.lua
 create mode 100644 test/replication/gh6035replica3.lua

diff --git a/src/box/applier.cc b/src/box/applier.cc
index 33181fdbf..d89d0833a 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -965,6 +965,38 @@ apply_final_join_tx(struct stailq *rows)
 	return rc;
 }
 
+static void
+decode_obsolete_rows(uint32_t replica_id, struct stailq *rows)
+{
+	struct applier_tx_row *item;
+
+	say_info("XXX: %s: %d", __func__, replica_id);
+	stailq_foreach_entry(item, rows, next) {
+		struct xrow_header *r = &item->row;
+
+		say_info("XXX: %s: type %d replica_id %d group_id %d "
+			 "sync %llu lsn %lld tsn %lld flags %#x",
+			 iproto_type_name(r->type),
+			 r->type, r->replica_id, r->group_id,
+			 (long long)r->sync, (long long)r->lsn,
+			 (long long)r->tsn, (int)r->flags);
+
+		if (iproto_type_is_synchro_request(r->type)) {
+			struct synchro_request req;
+			if (xrow_decode_synchro(r, &req) != 0)
+				continue;
+
+			say_info("XXX: %s: type %d replica_id %d "
+				 "origin_id %d lsn %lld term %llu",
+				 iproto_type_name(req.type),
+				 req.type, req.replica_id,
+				 req.origin_id,
+				 (long long)req.lsn,
+				 (long long)req.term);
+		}
+	}
+}
+
 /**
  * When elections are enabled we must filter out synchronous rows coming
  * from an instance that fell behind the current leader. This includes
@@ -973,7 +1005,7 @@ apply_final_join_tx(struct stailq *rows)
  * The rows are replaced with NOPs to preserve the vclock consistency.
  */
 static void
-applier_synchro_filter_tx(struct stailq *rows)
+applier_synchro_filter_tx(uint32_t replica_id, struct stailq *rows)
 {
 	/*
 	 * XXX: in case raft is disabled, synchronous replication still works
@@ -989,6 +1021,12 @@ applier_synchro_filter_tx(struct stailq *rows)
 	 * node, so cannot check for applier->instance_id here.
 	 */
 	row = &stailq_first_entry(rows, struct applier_tx_row, next)->row;
+
+	if (txn_limbo_is_replica_outdated(&txn_limbo, replica_id) !=
+	    txn_limbo_is_replica_outdated(&txn_limbo, row->replica_id)) {
+		decode_obsolete_rows(replica_id, rows);
+	}
+
 	if (!txn_limbo_is_replica_outdated(&txn_limbo, row->replica_id))
 		return;
 
@@ -1080,7 +1118,7 @@ applier_apply_tx(struct applier *applier, struct stailq *rows)
 			}
 		}
 	}
-	applier_synchro_filter_tx(rows);
+	applier_synchro_filter_tx(applier->instance_id, rows);
 	if (unlikely(iproto_type_is_synchro_request(first_row->type))) {
 		/*
 		 * Synchro messages are not transactions, in terms
diff --git a/test/replication/gh-6035-applier-filter.result b/test/replication/gh-6035-applier-filter.result
new file mode 100644
index 000000000..9507dfcd1
--- /dev/null
+++ b/test/replication/gh-6035-applier-filter.result
@@ -0,0 +1,331 @@
+-- test-run result file version 2
+--
+-- gh-6035: Investigate applier_synchro_filter_tx filtering; the
+-- filter may operate in two modes: filter by the source of the data
+-- (i.e. applier->instance_id) or by the data contents (row->replica_id).
+--
+-- Intuitively, filtering by the data contents looks more flexible,
+-- while filtering by the data source is considerably faster.
+--
+test_run = require('test_run').new()
+ | ---
+ | ...
+
+--
+-- Basic workflow:
+--
+--  - start 4 nodes, master, replica1, replica2, replica3
+--  - stop replica 1 and replica2
+--  - insert new record into sync space on master
+--  - wait record propagation on replica3
+--  - drop master node
+--  - promote replica3 to be RAFT leader
+--  - start replica2 and wait new record replicated
+--    (the initial source of this record is master node)
+--  - promote replica2 to be RAFT leader
+--  - promote replica3 to be RAFT leader
+--  - promote replica2 to be RAFT leader
+--  - promote replica3 to be RAFT leader
+--  - drop replica2
+--  - start replica1
+--
+
+SERVERS = {                 \
+    'gh6035master',         \
+    'gh6035replica1',       \
+    'gh6035replica2',       \
+    'gh6035replica3'        \
+}
+ | ---
+ | ...
+
+test_run:create_cluster(SERVERS, "replication")
+ | ---
+ | ...
+test_run:wait_fullmesh(SERVERS)
+ | ---
+ | ...
+
+--
+-- Make sure the master node is a RAFT leader.
+test_run:switch('gh6035master')
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100)
+ | ---
+ | - true
+ | ...
+
+--
+-- Create spaces needed.
+_ = box.schema.create_space('sync', {is_sync = true})
+ | ---
+ | ...
+_ = box.space.sync:create_index('pk')
+ | ---
+ | ...
+
+--
+-- Make sure the space is replicated.
+test_run:switch('gh6035replica1')
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return box.space.sync ~= nil end, 100)
+ | ---
+ | - true
+ | ...
+
+test_run:switch('gh6035replica2')
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return box.space.sync ~= nil end, 100)
+ | ---
+ | - true
+ | ...
+
+test_run:switch('gh6035replica3')
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return box.space.sync ~= nil end, 100)
+ | ---
+ | - true
+ | ...
+
+--
+-- On the master node insert a new record while r1 and r2 are stopped.
+test_run:switch('gh6035master')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server gh6035replica1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server gh6035replica2')
+ | ---
+ | - true
+ | ...
+
+box.space.sync:insert{1}
+ | ---
+ | - [1]
+ | ...
+
+test_run:switch('default')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server gh6035master')
+ | ---
+ | - true
+ | ...
+
+--
+-- Now only r3 is up and running, with
+-- the new record replicated.
+--
+
+--
+-- Make r3 a RAFT leader.
+test_run:switch('gh6035replica3')
+ | ---
+ | - true
+ | ...
+box.space.sync:select{} -- new record must be replicated already
+ | ---
+ | - - [1]
+ | ...
+box.cfg{election_mode = 'manual', replication_synchro_quorum = 1}
+ | ---
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100)
+ | ---
+ | - true
+ | ...
+
+--
+-- Start r2 (so r2 and r3 are the only nodes running), make sure it has
+-- the new record replicated, and make it a RAFT leader.
+test_run:cmd('start server gh6035replica2')
+ | ---
+ | - true
+ | ...
+test_run:switch('gh6035replica2')
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100)
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return box.space.sync:select{}[1] ~= nil \
+                   and box.space.sync:select{}[1][1] == 1 end, 100)
+ | ---
+ | - true
+ | ...
+
+box.cfg{election_mode = 'manual', replication_synchro_quorum = 1}
+ | ---
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100)
+ | ---
+ | - true
+ | ...
+
+--
+-- Now change the RAFT leadership and transfer it back
+-- to r3, which is currently a follower.
+test_run:switch('gh6035replica3')
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100)
+ | ---
+ | - true
+ | ...
+box.cfg{election_mode = 'manual', replication_synchro_quorum = 1}
+ | ---
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100)
+ | ---
+ | - true
+ | ...
+
+--
+-- Next round of RAFT leadership migration: to r2 and back to r3.
+-- This is needed to make the existing terms stale, so that the term
+-- associated with row->replica_id becomes obsolete.
+test_run:switch('gh6035replica2')
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100)
+ | ---
+ | - true
+ | ...
+box.cfg{election_mode = 'manual', replication_synchro_quorum = 1}
+ | ---
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100)
+ | ---
+ | - true
+ | ...
+
+test_run:switch('gh6035replica3')
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100)
+ | ---
+ | - true
+ | ...
+box.cfg{election_mode = 'manual', replication_synchro_quorum = 1}
+ | ---
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100)
+ | ---
+ | - true
+ | ...
+
+--
+-- r2 is no longer needed, so stop it.
+test_run:switch('default')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server gh6035replica2')
+ | ---
+ | - true
+ | ...
+
+--
+-- Finally start r1: at this moment only r3 is running and
+-- it has data from the master node and r2's promote records.
+test_run:cmd('start server gh6035replica1')
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100)
+ | ---
+ | - true
+ | ...
+
+--
+-- Wait for the record inserted earlier to be replicated.
+test_run:switch('gh6035replica1')
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return box.space.sync:select{}[1] ~= nil \
+                   and box.space.sync:select{}[1][1] == 1 end, 100)
+ | ---
+ | - true
+ | ...
+test_run:wait_lsn('gh6035replica1', 'gh6035replica3')
+ | ---
+ | ...
+
+--
+-- At this point r1 should have logged the XXX records shown below.
+--
+-- XXX: decode_obsolete_rows: 4
+-- XXX: PROMOTE: type 31 replica_id 3 group_id 0 sync 0 lsn 1 tsn 1 flags 0x1
+-- XXX: PROMOTE: type 31 replica_id 4 origin_id 3 lsn 0 term 4
+--
+-- XXX: decode_obsolete_rows: 4
+-- XXX: PROMOTE: type 31 replica_id 3 group_id 0 sync 0 lsn 2 tsn 2 flags 0x1
+-- XXX: PROMOTE: type 31 replica_id 4 origin_id 3 lsn 0 term 6
+assert(test_run:grep_log('gh6035replica1', 'XXX: .*') ~= nil)
+ | ---
+ | - true
+ | ...
+
+--
+-- Cleanup
+test_run:switch('default')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server gh6035master')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server gh6035replica1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server gh6035replica1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server gh6035replica2')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server gh6035replica3')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server gh6035replica3')
+ | ---
+ | - true
+ | ...
diff --git a/test/replication/gh-6035-applier-filter.test.lua b/test/replication/gh-6035-applier-filter.test.lua
new file mode 100644
index 000000000..e01349935
--- /dev/null
+++ b/test/replication/gh-6035-applier-filter.test.lua
@@ -0,0 +1,161 @@
+--
+-- gh-6035: Investigate applier_synchro_filter_tx filtering; the
+-- filter may operate in two modes: filter by the source of the data
+-- (i.e. applier->instance_id) or by the data contents (row->replica_id).
+--
+-- Intuitively, filtering by the data contents looks more flexible,
+-- while filtering by the data source is considerably faster.
+--
+test_run = require('test_run').new()
+
+--
+-- Basic workflow:
+--
+--  - start 4 nodes, master, replica1, replica2, replica3
+--  - stop replica 1 and replica2
+--  - insert new record into sync space on master
+--  - wait record propagation on replica3
+--  - drop master node
+--  - promote replica3 to be RAFT leader
+--  - start replica2 and wait new record replicated
+--    (the initial source of this record is master node)
+--  - promote replica2 to be RAFT leader
+--  - promote replica3 to be RAFT leader
+--  - promote replica2 to be RAFT leader
+--  - promote replica3 to be RAFT leader
+--  - drop replica2
+--  - start replica1
+--
+
+SERVERS = {                 \
+    'gh6035master',         \
+    'gh6035replica1',       \
+    'gh6035replica2',       \
+    'gh6035replica3'        \
+}
+
+test_run:create_cluster(SERVERS, "replication")
+test_run:wait_fullmesh(SERVERS)
+
+--
+-- Make sure the master node is a RAFT leader.
+test_run:switch('gh6035master')
+test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100)
+
+--
+-- Create spaces needed.
+_ = box.schema.create_space('sync', {is_sync = true})
+_ = box.space.sync:create_index('pk')
+
+--
+-- Make sure the space is replicated.
+test_run:switch('gh6035replica1')
+test_run:wait_cond(function() return box.space.sync ~= nil end, 100)
+
+test_run:switch('gh6035replica2')
+test_run:wait_cond(function() return box.space.sync ~= nil end, 100)
+
+test_run:switch('gh6035replica3')
+test_run:wait_cond(function() return box.space.sync ~= nil end, 100)
+
+--
+-- On the master node insert a new record while r1 and r2 are stopped.
+test_run:switch('gh6035master')
+test_run:cmd('stop server gh6035replica1')
+test_run:cmd('stop server gh6035replica2')
+
+box.space.sync:insert{1}
+
+test_run:switch('default')
+test_run:cmd('stop server gh6035master')
+
+--
+-- Now only r3 is up and running, with
+-- the new record replicated.
+--
+
+--
+-- Make r3 a RAFT leader.
+test_run:switch('gh6035replica3')
+box.space.sync:select{} -- new record must be replicated already
+box.cfg{election_mode = 'manual', replication_synchro_quorum = 1}
+box.ctl.promote()
+test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100)
+
+--
+-- Start r2 (so r2 and r3 are the only nodes running), make sure it has
+-- the new record replicated, and make it a RAFT leader.
+test_run:cmd('start server gh6035replica2')
+test_run:switch('gh6035replica2')
+test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100)
+test_run:wait_cond(function() return box.space.sync:select{}[1] ~= nil \
+                   and box.space.sync:select{}[1][1] == 1 end, 100)
+
+box.cfg{election_mode = 'manual', replication_synchro_quorum = 1}
+box.ctl.promote()
+test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100)
+
+--
+-- Now change the RAFT leadership and transfer it back
+-- to r3, which is currently a follower.
+test_run:switch('gh6035replica3')
+test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100)
+box.cfg{election_mode = 'manual', replication_synchro_quorum = 1}
+box.ctl.promote()
+test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100)
+
+--
+-- Next round of RAFT leadership migration: to r2 and back to r3.
+-- This is needed to make the existing terms stale, so that the term
+-- associated with row->replica_id becomes obsolete.
+test_run:switch('gh6035replica2')
+test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100)
+box.cfg{election_mode = 'manual', replication_synchro_quorum = 1}
+box.ctl.promote()
+test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100)
+
+test_run:switch('gh6035replica3')
+test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100)
+box.cfg{election_mode = 'manual', replication_synchro_quorum = 1}
+box.ctl.promote()
+test_run:wait_cond(function() return box.info().election.state == 'leader' end, 100)
+
+--
+-- r2 is no longer needed, so stop it.
+test_run:switch('default')
+test_run:cmd('stop server gh6035replica2')
+
+--
+-- Finally start r1: at this moment only r3 is running and
+-- it has data from the master node and r2's promote records.
+test_run:cmd('start server gh6035replica1')
+test_run:wait_cond(function() return box.info().election.state == 'follower' end, 100)
+
+--
+-- Wait for the record inserted earlier to be replicated.
+test_run:switch('gh6035replica1')
+test_run:wait_cond(function() return box.space.sync:select{}[1] ~= nil \
+                   and box.space.sync:select{}[1][1] == 1 end, 100)
+test_run:wait_lsn('gh6035replica1', 'gh6035replica3')
+
+--
+-- At this point r1 should have logged the XXX records shown below.
+--
+-- XXX: decode_obsolete_rows: 4
+-- XXX: PROMOTE: type 31 replica_id 3 group_id 0 sync 0 lsn 1 tsn 1 flags 0x1
+-- XXX: PROMOTE: type 31 replica_id 4 origin_id 3 lsn 0 term 4
+--
+-- XXX: decode_obsolete_rows: 4
+-- XXX: PROMOTE: type 31 replica_id 3 group_id 0 sync 0 lsn 2 tsn 2 flags 0x1
+-- XXX: PROMOTE: type 31 replica_id 4 origin_id 3 lsn 0 term 6
+assert(test_run:grep_log('gh6035replica1', 'XXX: .*') ~= nil)
+
+--
+-- Cleanup
+test_run:switch('default')
+test_run:cmd('delete server gh6035master')
+test_run:cmd('stop server gh6035replica1')
+test_run:cmd('delete server gh6035replica1')
+test_run:cmd('delete server gh6035replica2')
+test_run:cmd('stop server gh6035replica3')
+test_run:cmd('delete server gh6035replica3')
diff --git a/test/replication/gh6035master.lua b/test/replication/gh6035master.lua
new file mode 100644
index 000000000..b269f4732
--- /dev/null
+++ b/test/replication/gh6035master.lua
@@ -0,0 +1,27 @@
+local SOCKET_DIR = require('fio').cwd()
+
+local function unix_socket(name)
+    return SOCKET_DIR .. "/" .. name .. '.sock';
+end
+
+require('console').listen(os.getenv('ADMIN'))
+
+box.cfg({
+    listen                      = unix_socket("gh6035master"),
+    replication                 = {
+        unix_socket("gh6035master"),
+        unix_socket("gh6035replica1"),
+        unix_socket("gh6035replica2"),
+        unix_socket("gh6035replica3"),
+    },
+    replication_connect_quorum  = 1,
+    replication_synchro_quorum  = 2,
+    replication_synchro_timeout = 10000,
+    replication_sync_timeout    = 5,
+    read_only                   = false,
+    election_mode               = 'candidate',
+})
+
+box.once("bootstrap", function()
+    box.schema.user.grant('guest', 'super')
+end)
diff --git a/test/replication/gh6035replica1.lua b/test/replication/gh6035replica1.lua
new file mode 100644
index 000000000..0f8f17e57
--- /dev/null
+++ b/test/replication/gh6035replica1.lua
@@ -0,0 +1,27 @@
+local SOCKET_DIR = require('fio').cwd()
+
+local function unix_socket(name)
+    return SOCKET_DIR .. "/" .. name .. '.sock';
+end
+
+require('console').listen(os.getenv('ADMIN'))
+
+box.cfg({
+    listen                      = unix_socket("gh6035replica1"),
+    replication                 = {
+        unix_socket("gh6035master"),
+        unix_socket("gh6035replica1"),
+        unix_socket("gh6035replica2"),
+        unix_socket("gh6035replica3"),
+    },
+    replication_connect_quorum  = 1,
+    replication_synchro_quorum  = 2,
+    replication_synchro_timeout = 10000,
+    replication_sync_timeout    = 5,
+    read_only                   = true,
+    election_mode               = 'voter',
+})
+
+box.once("bootstrap", function()
+    box.schema.user.grant('guest', 'super')
+end)
diff --git a/test/replication/gh6035replica2.lua b/test/replication/gh6035replica2.lua
new file mode 100644
index 000000000..263a20390
--- /dev/null
+++ b/test/replication/gh6035replica2.lua
@@ -0,0 +1,27 @@
+local SOCKET_DIR = require('fio').cwd()
+
+local function unix_socket(name)
+    return SOCKET_DIR .. "/" .. name .. '.sock';
+end
+
+require('console').listen(os.getenv('ADMIN'))
+
+box.cfg({
+    listen                      = unix_socket("gh6035replica2"),
+    replication                 = {
+        unix_socket("gh6035master"),
+        unix_socket("gh6035replica1"),
+        unix_socket("gh6035replica2"),
+        unix_socket("gh6035replica3"),
+    },
+    replication_connect_quorum  = 1,
+    replication_synchro_quorum  = 2,
+    replication_synchro_timeout = 10000,
+    replication_sync_timeout    = 5,
+    read_only                   = true,
+    election_mode               = 'voter',
+})
+
+box.once("bootstrap", function()
+    box.schema.user.grant('guest', 'super')
+end)
diff --git a/test/replication/gh6035replica3.lua b/test/replication/gh6035replica3.lua
new file mode 100644
index 000000000..7c7c3233b
--- /dev/null
+++ b/test/replication/gh6035replica3.lua
@@ -0,0 +1,27 @@
+local SOCKET_DIR = require('fio').cwd()
+
+local function unix_socket(name)
+    return SOCKET_DIR .. "/" .. name .. '.sock';
+end
+
+require('console').listen(os.getenv('ADMIN'))
+
+box.cfg({
+    listen                      = unix_socket("gh6035replica3"),
+    replication                 = {
+        unix_socket("gh6035master"),
+        unix_socket("gh6035replica1"),
+        unix_socket("gh6035replica2"),
+        unix_socket("gh6035replica3"),
+    },
+    replication_connect_quorum  = 1,
+    replication_synchro_quorum  = 2,
+    replication_synchro_timeout = 10000,
+    replication_sync_timeout    = 5,
+    read_only                   = true,
+    election_mode               = 'voter',
+})
+
+box.once("bootstrap", function()
+    box.schema.user.grant('guest', 'super')
+end)
-- 
2.31.1

