Tarantool development patches archive
From: Cyrill Gorcunov via Tarantool-patches <tarantool-patches@dev.tarantool.org>
To: Serge Petrenko <sergepetrenko@tarantool.org>
Cc: tml <tarantool-patches@dev.tarantool.org>,
	Vladislav Shpilevoy <v.shpilevoy@tarantool.org>
Subject: Re: [Tarantool-patches] [PATCH v19 3/3] test: add gh-6036-qsync-order test
Date: Wed, 6 Oct 2021 01:26:16 +0300	[thread overview]
Message-ID: <YVzRCOrUb+9Q1eJK@grain> (raw)
In-Reply-To: <1c992b36-7ad4-c68b-5252-69a6f1a7b67a@tarantool.org>

On Tue, Oct 05, 2021 at 04:55:24PM +0300, Serge Petrenko wrote:
> There is a way to detect a blocked promote:
> There's ERRINJ_WAL_WRITE_COUNT.
> It's incremented each time you call wal_write. Even before the write is
> blocked.
> 
> So you need to save ERRINJ_WAL_WRITE_COUNT, then do promote on
> replica1, then return to replica2 and wait until ERRINJ_WAL_WRITE_COUNT
> gets bigger than your saved value.
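
For the record, the suggested wait would look roughly like this (just a
minimal sketch, assuming the ERRINJ_WAL_WRITE_COUNT counter is readable
via box.error.injection.get(); the final patch below went with
ERRINJ_WAL_DELAY_COUNTDOWN instead):

    -- Remember the current WAL write counter on replica2.
    test_run:switch('replica2')
    write_cnt = box.error.injection.get('ERRINJ_WAL_WRITE_COUNT')
    -- Promote replica1 to become the new leader.
    test_run:switch('replica1')
    box.ctl.promote()
    -- Wait until replica2 has at least entered the WAL write for PROMOTE.
    test_run:switch('replica2')
    test_run:wait_cond(function() return box.error.injection.get('ERRINJ_WAL_WRITE_COUNT') > write_cnt end)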

Here is the final version (which I haven't pushed anywhere yet, since I
started a new v20 branch); sharing it early anyway.
---
From: Cyrill Gorcunov <gorcunov@gmail.com>
Subject: [PATCH] test: add gh-6036-qsync-order test

Test that promotion requests are handled only once the corresponding
WAL write completes, because we update the in-memory data before the
write finishes.

Note that without the patch this test fires the following assertion:

> tarantool: src/box/txn_limbo.c:481: txn_limbo_read_rollback: Assertion `e->txn->signature >= 0' failed.

Part-of #6036

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
 test/replication/election_replica.lua         |   3 +-
 test/replication/gh-6036-qsync-order.result   | 222 ++++++++++++++++++
 test/replication/gh-6036-qsync-order.test.lua | 103 ++++++++
 test/replication/suite.cfg                    |   1 +
 test/replication/suite.ini                    |   2 +-
 5 files changed, 329 insertions(+), 2 deletions(-)
 create mode 100644 test/replication/gh-6036-qsync-order.result
 create mode 100644 test/replication/gh-6036-qsync-order.test.lua

diff --git a/test/replication/election_replica.lua b/test/replication/election_replica.lua
index 3b4d9a123..1dbfa96dc 100644
--- a/test/replication/election_replica.lua
+++ b/test/replication/election_replica.lua
@@ -6,6 +6,7 @@ local SYNCHRO_QUORUM = arg[1] and tonumber(arg[1]) or 3
 local ELECTION_TIMEOUT = arg[2] and tonumber(arg[2]) or 0.1
 local ELECTION_MODE = arg[3] or 'candidate'
 local CONNECT_QUORUM = arg[4] and tonumber(arg[4]) or 3
+local SYNCHRO_TIMEOUT = arg[5] and tonumber(arg[5]) or 0.1
 
 local function instance_uri(instance_id)
     return SOCKET_DIR..'/election_replica'..instance_id..'.sock';
@@ -25,7 +26,7 @@ box.cfg({
     election_mode = ELECTION_MODE,
     election_timeout = ELECTION_TIMEOUT,
     replication_synchro_quorum = SYNCHRO_QUORUM,
-    replication_synchro_timeout = 0.1,
+    replication_synchro_timeout = SYNCHRO_TIMEOUT,
     -- To reveal more election logs.
     log_level = 6,
 })
diff --git a/test/replication/gh-6036-qsync-order.result b/test/replication/gh-6036-qsync-order.result
new file mode 100644
index 000000000..b0d8c126e
--- /dev/null
+++ b/test/replication/gh-6036-qsync-order.result
@@ -0,0 +1,222 @@
+-- test-run result file version 2
+--
+-- gh-6036: verify that terms are locked while we're inside the journal
+-- write routine, because parallel appliers may ignore the fact that
+-- the term is already updated but not yet written, leading to data
+-- inconsistency.
+--
+test_run = require('test_run').new()
+ | ---
+ | ...
+
+test_run:cmd('create server master with script="replication/election_replica1.lua"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('create server replica1 with script="replication/election_replica2.lua"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('create server replica2 with script="replication/election_replica3.lua"')
+ | ---
+ | - true
+ | ...
+
+test_run:cmd("start server master with wait=False, args='1 nil manual 1 10000'")
+ | ---
+ | - true
+ | ...
+test_run:cmd("start server replica1 with wait=False, args='1 nil manual 1 10000'")
+ | ---
+ | - true
+ | ...
+test_run:cmd("start server replica2 with wait=False, args='1 nil manual 1 10000'")
+ | ---
+ | - true
+ | ...
+
+test_run:wait_fullmesh({"master", "replica1", "replica2"})
+ | ---
+ | ...
+
+--
+-- Create a synchro space on the master node and make
+-- sure the write is processed just fine.
+test_run:switch("master")
+ | ---
+ | - true
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+s = box.schema.create_space('test', {is_sync = true})
+ | ---
+ | ...
+_ = s:create_index('pk')
+ | ---
+ | ...
+s:insert{1}
+ | ---
+ | - [1]
+ | ...
+
+test_run:switch("replica1")
+ | ---
+ | - true
+ | ...
+test_run:wait_lsn('replica1', 'master')
+ | ---
+ | ...
+
+test_run:switch("replica2")
+ | ---
+ | - true
+ | ...
+test_run:wait_lsn('replica2', 'master')
+ | ---
+ | ...
+
+--
+-- Drop the connection to replica1 on the master side.
+test_run:switch("master")
+ | ---
+ | - true
+ | ...
+box.cfg({                                   \
+    replication = {                         \
+        "unix/:./election_replica1.sock",   \
+        "unix/:./election_replica3.sock",   \
+    },                                      \
+})
+ | ---
+ | ...
+--
+-- Drop the connection to the master on the replica1 side.
+test_run:switch("replica1")
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return box.space.test:get{1} ~= nil end)
+ | ---
+ | - true
+ | ...
+box.cfg({                                   \
+    replication = {                         \
+        "unix/:./election_replica2.sock",   \
+        "unix/:./election_replica3.sock",   \
+    },                                      \
+})
+ | ---
+ | ...
+
+--
+-- Here we have the following replication topology:
+--
+--              replica2 (will be delayed)
+--              /     \
+--          master    replica1
+
+--
+-- Initiate the disk delay in a slightly tricky way: the next write
+-- will be blocked until we explicitly remove the delay.
+test_run:switch("replica2")
+ | ---
+ | - true
+ | ...
+box.error.injection.set('ERRINJ_WAL_DELAY_COUNTDOWN', 1)
+ | ---
+ | - ok
+ | ...
+
+--
+-- Make replica1 become the leader and start writing data:
+-- the PROMOTE request gets queued on replica2 and is not
+-- yet processed; meanwhile the INSERT won't complete either,
+-- since it waits for the PROMOTE to finish first. Note that
+-- we enter replica2 as well just to be sure the PROMOTE has
+-- reached it.
+test_run:switch("replica1")
+ | ---
+ | - true
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+test_run:switch("replica2")
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return box.error.injection.get('ERRINJ_WAL_DELAY_COUNTDOWN') ~= 0 end)
+ | ---
+ | - true
+ | ...
+test_run:switch("replica1")
+ | ---
+ | - true
+ | ...
+_ = require('fiber').create(function() box.space.test:insert{2} end)
+ | ---
+ | ...
+
+--
+-- The master node has no clue that there is a new leader
+-- and continues writing data with an obsolete term. Since
+-- replica2 is delayed now, the INSERT won't proceed yet but gets queued.
+test_run:switch("master")
+ | ---
+ | - true
+ | ...
+_ = require('fiber').create(function() box.space.test:insert{3} end)
+ | ---
+ | ...
+
+--
+-- Finally re-enable replica2's WAL. Make sure the data from the new
+-- leader replica1 gets written while the old leader's data is ignored.
+test_run:switch("replica2")
+ | ---
+ | - true
+ | ...
+box.error.injection.set('ERRINJ_WAL_DELAY', false)
+ | ---
+ | - ok
+ | ...
+test_run:wait_cond(function() return box.space.test:get{2} ~= nil end)
+ | ---
+ | - true
+ | ...
+box.space.test:select{}
+ | ---
+ | - - [1]
+ |   - [2]
+ | ...
+
+test_run:switch("default")
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server master')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server replica1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server replica2')
+ | ---
+ | - true
+ | ...
+
+test_run:cmd('delete server master')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server replica1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server replica2')
+ | ---
+ | - true
+ | ...
diff --git a/test/replication/gh-6036-qsync-order.test.lua b/test/replication/gh-6036-qsync-order.test.lua
new file mode 100644
index 000000000..4dac9c59a
--- /dev/null
+++ b/test/replication/gh-6036-qsync-order.test.lua
@@ -0,0 +1,103 @@
+--
+-- gh-6036: verify that terms are locked while we're inside the journal
+-- write routine, because parallel appliers may ignore the fact that
+-- the term is already updated but not yet written, leading to data
+-- inconsistency.
+--
+test_run = require('test_run').new()
+
+test_run:cmd('create server master with script="replication/election_replica1.lua"')
+test_run:cmd('create server replica1 with script="replication/election_replica2.lua"')
+test_run:cmd('create server replica2 with script="replication/election_replica3.lua"')
+
+test_run:cmd("start server master with wait=False, args='1 nil manual 1 10000'")
+test_run:cmd("start server replica1 with wait=False, args='1 nil manual 1 10000'")
+test_run:cmd("start server replica2 with wait=False, args='1 nil manual 1 10000'")
+
+test_run:wait_fullmesh({"master", "replica1", "replica2"})
+
+--
+-- Create a synchro space on the master node and make
+-- sure the write is processed just fine.
+test_run:switch("master")
+box.ctl.promote()
+s = box.schema.create_space('test', {is_sync = true})
+_ = s:create_index('pk')
+s:insert{1}
+
+test_run:switch("replica1")
+test_run:wait_lsn('replica1', 'master')
+
+test_run:switch("replica2")
+test_run:wait_lsn('replica2', 'master')
+
+--
+-- Drop the connection to replica1 on the master side.
+test_run:switch("master")
+box.cfg({                                   \
+    replication = {                         \
+        "unix/:./election_replica1.sock",   \
+        "unix/:./election_replica3.sock",   \
+    },                                      \
+})
+--
+-- Drop the connection to the master on the replica1 side.
+test_run:switch("replica1")
+test_run:wait_cond(function() return box.space.test:get{1} ~= nil end)
+box.cfg({                                   \
+    replication = {                         \
+        "unix/:./election_replica2.sock",   \
+        "unix/:./election_replica3.sock",   \
+    },                                      \
+})
+
+--
+-- Here we have the following replication topology:
+--
+--              replica2 (will be delayed)
+--              /     \
+--          master    replica1
+
+--
+-- Initiate the disk delay in a slightly tricky way: the next write
+-- will be blocked until we explicitly remove the delay.
+test_run:switch("replica2")
+box.error.injection.set('ERRINJ_WAL_DELAY_COUNTDOWN', 1)
+
+--
+-- Make replica1 become the leader and start writing data:
+-- the PROMOTE request gets queued on replica2 and is not
+-- yet processed; meanwhile the INSERT won't complete either,
+-- since it waits for the PROMOTE to finish first. Note that
+-- we enter replica2 as well just to be sure the PROMOTE has
+-- reached it.
+test_run:switch("replica1")
+box.ctl.promote()
+test_run:switch("replica2")
+test_run:wait_cond(function() return box.error.injection.get('ERRINJ_WAL_DELAY_COUNTDOWN') ~= 0 end)
+test_run:switch("replica1")
+_ = require('fiber').create(function() box.space.test:insert{2} end)
+
+--
+-- The master node has no clue that there is a new leader
+-- and continues writing data with an obsolete term. Since
+-- replica2 is delayed now, the INSERT won't proceed yet but gets queued.
+test_run:switch("master")
+_ = require('fiber').create(function() box.space.test:insert{3} end)
+
+--
+-- Finally re-enable replica2's WAL. Make sure the data from the new
+-- leader replica1 gets written while the old leader's data is ignored.
+test_run:switch("replica2")
+box.error.injection.set('ERRINJ_WAL_DELAY', false)
+test_run:wait_cond(function() return box.space.test:get{2} ~= nil end)
+box.space.test:select{}
+
+test_run:switch("default")
+test_run:cmd('stop server master')
+test_run:cmd('stop server replica1')
+test_run:cmd('stop server replica2')
+
+test_run:cmd('delete server master')
+test_run:cmd('delete server replica1')
+test_run:cmd('delete server replica2')
diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg
index 3eee0803c..ed09b2087 100644
--- a/test/replication/suite.cfg
+++ b/test/replication/suite.cfg
@@ -59,6 +59,7 @@
     "gh-6094-rs-uuid-mismatch.test.lua": {},
     "gh-6127-election-join-new.test.lua": {},
     "gh-6035-applier-filter.test.lua": {},
+    "gh-6036-qsync-order.test.lua": {},
     "election-candidate-promote.test.lua": {},
     "*": {
         "memtx": {"engine": "memtx"},
diff --git a/test/replication/suite.ini b/test/replication/suite.ini
index 77eb95f49..080e4fbf4 100644
--- a/test/replication/suite.ini
+++ b/test/replication/suite.ini
@@ -3,7 +3,7 @@ core = tarantool
 script =  master.lua
 description = tarantool/box, replication
 disabled = consistent.test.lua
-release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua  gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua
+release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua  gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua gh-6036-qsync-order.test.lua
 config = suite.cfg
 lua_libs = lua/fast_replica.lua lua/rlimit.lua
 use_unix_sockets = True
-- 
2.31.1

