Tarantool development patches archive
* [Tarantool-patches] [RFC v29 0/3] qsync: implement packet filtering (part 1)
@ 2022-01-31 21:55 Cyrill Gorcunov via Tarantool-patches
  2022-01-31 21:55 ` [Tarantool-patches] [RFC v29 1/3] latch: add latch_is_locked helper Cyrill Gorcunov via Tarantool-patches
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2022-01-31 21:55 UTC (permalink / raw)
  To: tml; +Cc: Vladislav Shpilevoy

Guys, here is an updated version, where the test is converted into
luatest form and box_issue_promote/box_issue_demote are fenced with
a lock. Take a look please. It is still an RFC since I have not yet
managed to enhance the test for promote/demote requests as Serge
suggested. Still, I would like to make sure that, apart from the test,
there is nothing I forgot to address from the previous comments.

v22 (by SergeP):
 - use the limbo emptiness test _after_ the owner_id test
 - drop the redundant assert in limbo commit/rollback,
   since we're unlocking a latch anyway and it carries
   its own assertion
 - in test: drop the excessive wait_cond and set up the
   WAL delay earlier

v23 (by SergeP):
 - fix a problem with the owner test in case of a recovery journal
 - update the test

v27:
 - simplify the code and cover only the race between the journal
   write and access to the terms
 - I spent a few months trying to use a fine-grained locking scheme
   for the limbo, but it ended up in very complex code, so eventually
   I dropped the idea (which is why v27 comes after v23)

v29:
 - rework the test into luatest form
 - drop the fine-grained locks idea since it requires too much code
   churn; instead, fence off the big code parts

branch gorcunov/gh-6036-rollback-confirm-29-notest
issue https://github.com/tarantool/tarantool/issues/6036
previous series https://lists.tarantool.org/tarantool-patches/20211230202347.353494-1-gorcunov@gmail.com/#r


Cyrill Gorcunov (3):
  latch: add latch_is_locked helper
  qsync: order access to the limbo terms
  test: add gh-6036-qsync-order test

 src/box/applier.cc                            |   6 +-
 src/box/box.cc                                |   8 +-
 src/box/lua/info.c                            |   4 +-
 src/box/txn_limbo.c                           |  18 ++-
 src/box/txn_limbo.h                           |  52 ++++++-
 src/lib/core/latch.h                          |  11 ++
 .../gh_6036_qsync_order_test.lua              | 137 ++++++++++++++++++
 test/replication-luatest/suite.ini            |   1 +
 8 files changed, 226 insertions(+), 11 deletions(-)
 create mode 100644 test/replication-luatest/gh_6036_qsync_order_test.lua


base-commit: b72a5c6a1b66d983ca58ac73ecc147e5ab8dd5b3
-- 
2.34.1



* [Tarantool-patches] [RFC v29 1/3] latch: add latch_is_locked helper
  2022-01-31 21:55 [Tarantool-patches] [RFC v29 0/3] qsync: implement packet filtering (part 1) Cyrill Gorcunov via Tarantool-patches
@ 2022-01-31 21:55 ` Cyrill Gorcunov via Tarantool-patches
  2022-01-31 21:55 ` [Tarantool-patches] [RFC v29 2/3] qsync: order access to the limbo terms Cyrill Gorcunov via Tarantool-patches
  2022-01-31 21:55 ` [Tarantool-patches] [RFC v29 3/3] test: add gh-6036-qsync-order test Cyrill Gorcunov via Tarantool-patches
  2 siblings, 0 replies; 6+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2022-01-31 21:55 UTC (permalink / raw)
  To: tml; +Cc: Vladislav Shpilevoy

Add a helper to test whether a latch is currently locked.

Part-of #6036

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
 src/lib/core/latch.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/src/lib/core/latch.h b/src/lib/core/latch.h
index 49c59cf63..0aaa8b634 100644
--- a/src/lib/core/latch.h
+++ b/src/lib/core/latch.h
@@ -95,6 +95,17 @@ latch_owner(struct latch *l)
 	return l->owner;
 }
 
+/**
+ * Return true if the latch is locked.
+ *
+ * @param l - latch to be tested.
+ */
+static inline bool
+latch_is_locked(const struct latch *l)
+{
+	return l->owner != NULL;
+}
+
 /**
  * Lock a latch. If the latch is already locked by another fiber,
  * waits for timeout.
-- 
2.34.1



* [Tarantool-patches] [RFC v29 2/3] qsync: order access to the limbo terms
  2022-01-31 21:55 [Tarantool-patches] [RFC v29 0/3] qsync: implement packet filtering (part 1) Cyrill Gorcunov via Tarantool-patches
  2022-01-31 21:55 ` [Tarantool-patches] [RFC v29 1/3] latch: add latch_is_locked helper Cyrill Gorcunov via Tarantool-patches
@ 2022-01-31 21:55 ` Cyrill Gorcunov via Tarantool-patches
  2022-02-09  9:10   ` Serge Petrenko via Tarantool-patches
  2022-01-31 21:55 ` [Tarantool-patches] [RFC v29 3/3] test: add gh-6036-qsync-order test Cyrill Gorcunov via Tarantool-patches
  2 siblings, 1 reply; 6+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2022-01-31 21:55 UTC (permalink / raw)
  To: tml; +Cc: Vladislav Shpilevoy

Limbo terms tracking is shared between appliers: while one applier
is waiting for a write to complete inside the journal_write() routine,
another one may need to read the term value to figure out whether a
promote request is valid to apply. Due to cooperative multitasking,
access to the terms is not consistent, so we need to be sure that
other fibers read up-to-date terms (i.e. already written to the WAL).

For this sake we use a latching mechanism: when one fiber takes the
lock for an update, other readers wait until the operation is complete.

For example, here is a call graph of two appliers:

applier 1
---------
applier_apply_tx
  (promote term = 3
   current max term = 2)
  applier_synchro_filter_tx
  apply_synchro_row
    journal_write
      (sleeping)

at this moment another applier comes in with obsolete
data and term 2

                              applier 2
                              ---------
                              applier_apply_tx
                                (term 2)
                                applier_synchro_filter_tx
                                  txn_limbo_is_replica_outdated -> false
                                journal_write (sleep)

applier 1
---------
journal wakes up
  apply_synchro_row_cb
    set max term to 3

So applier 2 did not notice that term 3 had already been seen and
wrote obsolete data. With locking, applier 2 will wait until applier 1
has finished its write.

We introduce the following helpers:

1) txn_limbo_begin: takes the lock
2) txn_limbo_commit and txn_limbo_rollback: simply release the lock,
   but have different names for better semantics
3) txn_limbo_process: a general function which uses the begin and
   commit helpers internally
4) txn_limbo_apply: does the real job of processing the request;
   it implies that txn_limbo_begin has already been called

Testing such an in-flight condition won't be easy, so we introduce the
"box.info.synchro.queue.latched" field to report whether the limbo is
currently latched and processing a synchro request.

@TarantoolBot document
Title: synchronous replication changes

`box.info.synchro.queue` gets a new `latched` field. It is set to
`true` while a synchronous request is being processed but is not yet
complete. Thus any other incoming synchronous requests will be delayed
until the active one is finished.
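
For example, a test can poll the field on a replica to detect that a
synchro request is being applied. A minimal luatest-style sketch (here
`server` stands for any luatest server handle; it mirrors the check
used in the test added later in this series):

    local t = require('luatest')
    t.helpers.retrying({}, function()
        assert(server:exec(function()
            return box.info.synchro.queue.latched == true
        end))
    end)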

Part-of #6036

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
 src/box/applier.cc  |  6 +++++-
 src/box/box.cc      |  8 +++++--
 src/box/lua/info.c  |  4 +++-
 src/box/txn_limbo.c | 18 ++++++++++++++--
 src/box/txn_limbo.h | 52 ++++++++++++++++++++++++++++++++++++++++-----
 5 files changed, 77 insertions(+), 11 deletions(-)

diff --git a/src/box/applier.cc b/src/box/applier.cc
index 11a2a3adb..f3d251a4d 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -964,7 +964,7 @@ apply_synchro_req_cb(struct journal_entry *entry)
 		applier_rollback_by_wal_io(entry->res);
 	} else {
 		replica_txn_wal_write_cb(synchro_entry->rcb);
-		txn_limbo_process(&txn_limbo, synchro_entry->req);
+		txn_limbo_apply(&txn_limbo, synchro_entry->req);
 		trigger_run(&replicaset.applier.on_wal_write, NULL);
 	}
 	fiber_wakeup(synchro_entry->owner);
@@ -1009,14 +1009,18 @@ apply_synchro_req(uint32_t replica_id, struct xrow_header *row, struct synchro_r
 	 * before trying to commit. But that requires extra steps from the
 	 * transactions side, including the async ones.
 	 */
+	txn_limbo_begin(&txn_limbo);
 	if (journal_write(&entry.base) != 0)
 		goto err;
 	if (entry.base.res < 0) {
 		diag_set_journal_res(entry.base.res);
 		goto err;
 	}
+	txn_limbo_commit(&txn_limbo);
 	return 0;
+
 err:
+	txn_limbo_rollback(&txn_limbo);
 	diag_log();
 	return -1;
 }
diff --git a/src/box/box.cc b/src/box/box.cc
index 450fa2357..cc7dcedde 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1780,6 +1780,7 @@ box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn)
 	struct raft *raft = box_raft();
 	assert(raft->volatile_term == raft->term);
 	assert(promote_lsn >= 0);
+	txn_limbo_begin(&txn_limbo);
 	txn_limbo_write_promote(&txn_limbo, promote_lsn,
 				raft->term);
 	struct synchro_request req = {
@@ -1789,7 +1790,8 @@ box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn)
 		.lsn = promote_lsn,
 		.term = raft->term,
 	};
-	txn_limbo_process(&txn_limbo, &req);
+	txn_limbo_apply(&txn_limbo, &req);
+	txn_limbo_commit(&txn_limbo);
 	assert(txn_limbo_is_empty(&txn_limbo));
 }
 
@@ -1802,6 +1804,7 @@ box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn)
 {
 	assert(box_raft()->volatile_term == box_raft()->term);
 	assert(promote_lsn >= 0);
+	txn_limbo_begin(&txn_limbo);
 	txn_limbo_write_demote(&txn_limbo, promote_lsn,
 				box_raft()->term);
 	struct synchro_request req = {
@@ -1811,7 +1814,8 @@ box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn)
 		.lsn = promote_lsn,
 		.term = box_raft()->term,
 	};
-	txn_limbo_process(&txn_limbo, &req);
+	txn_limbo_apply(&txn_limbo, &req);
+	txn_limbo_commit(&txn_limbo);
 	assert(txn_limbo_is_empty(&txn_limbo));
 }
 
diff --git a/src/box/lua/info.c b/src/box/lua/info.c
index 8e02f6594..f0a9e5339 100644
--- a/src/box/lua/info.c
+++ b/src/box/lua/info.c
@@ -637,11 +637,13 @@ lbox_info_synchro(struct lua_State *L)
 
 	/* Queue information. */
 	struct txn_limbo *queue = &txn_limbo;
-	lua_createtable(L, 0, 2);
+	lua_createtable(L, 0, 3);
 	lua_pushnumber(L, queue->len);
 	lua_setfield(L, -2, "len");
 	lua_pushnumber(L, queue->owner_id);
 	lua_setfield(L, -2, "owner");
+	lua_pushboolean(L, queue->promote_is_latched);
+	lua_setfield(L, -2, "latched");
 	lua_setfield(L, -2, "queue");
 
 	return 1;
diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 70447caaf..3363e2f9a 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -47,6 +47,8 @@ txn_limbo_create(struct txn_limbo *limbo)
 	vclock_create(&limbo->vclock);
 	vclock_create(&limbo->promote_term_map);
 	limbo->promote_greatest_term = 0;
+	latch_create(&limbo->promote_latch);
+	limbo->promote_is_latched = false;
 	limbo->confirmed_lsn = 0;
 	limbo->rollback_count = 0;
 	limbo->is_in_rollback = false;
@@ -724,11 +726,14 @@ txn_limbo_wait_empty(struct txn_limbo *limbo, double timeout)
 }
 
 void
-txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
+txn_limbo_apply(struct txn_limbo *limbo,
+		const struct synchro_request *req)
 {
+	assert(latch_is_locked(&limbo->promote_latch));
+
 	uint64_t term = req->term;
 	uint32_t origin = req->origin_id;
-	if (txn_limbo_replica_term(limbo, origin) < term) {
+	if (vclock_get(&limbo->promote_term_map, origin) < (int64_t)term) {
 		vclock_follow(&limbo->promote_term_map, origin, term);
 		if (term > limbo->promote_greatest_term)
 			limbo->promote_greatest_term = term;
@@ -786,6 +791,15 @@ txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
 	return;
 }
 
+void
+txn_limbo_process(struct txn_limbo *limbo,
+		  const struct synchro_request *req)
+{
+	txn_limbo_begin(limbo);
+	txn_limbo_apply(limbo, req);
+	txn_limbo_commit(limbo);
+}
+
 void
 txn_limbo_on_parameters_change(struct txn_limbo *limbo)
 {
diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h
index 53e52f676..b9dddda77 100644
--- a/src/box/txn_limbo.h
+++ b/src/box/txn_limbo.h
@@ -31,6 +31,7 @@
  */
 #include "small/rlist.h"
 #include "vclock/vclock.h"
+#include "latch.h"
 
 #include <stdint.h>
 
@@ -147,6 +148,14 @@ struct txn_limbo {
 	 * limbo and raft are in sync and the terms are the same.
 	 */
 	uint64_t promote_greatest_term;
+	/**
+	 * To order access to the promote data.
+	 */
+	struct latch promote_latch;
+	/**
+	 * A flag to inform if limbo is locked (for tests mostly).
+	 */
+	bool promote_is_latched;
 	/**
 	 * Maximal LSN gathered quorum and either already confirmed in WAL, or
 	 * whose confirmation is in progress right now. Any attempt to confirm
@@ -216,7 +225,7 @@ txn_limbo_last_entry(struct txn_limbo *limbo)
  * @a replica_id.
  */
 static inline uint64_t
-txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
+txn_limbo_replica_term(struct txn_limbo *limbo, uint32_t replica_id)
 {
 	return vclock_get(&limbo->promote_term_map, replica_id);
 }
@@ -226,11 +235,14 @@ txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
  * data from it. The check is only valid when elections are enabled.
  */
 static inline bool
-txn_limbo_is_replica_outdated(const struct txn_limbo *limbo,
+txn_limbo_is_replica_outdated(struct txn_limbo *limbo,
 			      uint32_t replica_id)
 {
-	return txn_limbo_replica_term(limbo, replica_id) <
-	       limbo->promote_greatest_term;
+	latch_lock(&limbo->promote_latch);
+	uint64_t v = vclock_get(&limbo->promote_term_map, replica_id);
+	bool res = v < limbo->promote_greatest_term;
+	latch_unlock(&limbo->promote_latch);
+	return res;
 }
 
 /**
@@ -300,7 +312,37 @@ txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn);
 int
 txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry);
 
-/** Execute a synchronous replication request. */
+/**
+ * Initiate execution of a synchronous replication request.
+ */
+static inline void
+txn_limbo_begin(struct txn_limbo *limbo)
+{
+	latch_lock(&limbo->promote_latch);
+	limbo->promote_is_latched = true;
+}
+
+/** Commit a synchronous replication request. */
+static inline void
+txn_limbo_commit(struct txn_limbo *limbo)
+{
+	latch_unlock(&limbo->promote_latch);
+	limbo->promote_is_latched = false;
+}
+
+/** Rollback a synchronous replication request. */
+static inline void
+txn_limbo_rollback(struct txn_limbo *limbo)
+{
+	latch_unlock(&limbo->promote_latch);
+}
+
+/** Apply a synchronous replication request after processing stage. */
+void
+txn_limbo_apply(struct txn_limbo *limbo,
+		const struct synchro_request *req);
+
+/** Process a synchronous replication request. */
 void
 txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req);
 
-- 
2.34.1



* [Tarantool-patches] [RFC v29 3/3] test: add gh-6036-qsync-order test
  2022-01-31 21:55 [Tarantool-patches] [RFC v29 0/3] qsync: implement packet filtering (part 1) Cyrill Gorcunov via Tarantool-patches
  2022-01-31 21:55 ` [Tarantool-patches] [RFC v29 1/3] latch: add latch_is_locked helper Cyrill Gorcunov via Tarantool-patches
  2022-01-31 21:55 ` [Tarantool-patches] [RFC v29 2/3] qsync: order access to the limbo terms Cyrill Gorcunov via Tarantool-patches
@ 2022-01-31 21:55 ` Cyrill Gorcunov via Tarantool-patches
  2022-02-09  9:11   ` Serge Petrenko via Tarantool-patches
  2 siblings, 1 reply; 6+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2022-01-31 21:55 UTC (permalink / raw)
  To: tml; +Cc: Vladislav Shpilevoy

Test that promotion requests are handled only once the corresponding
write to the WAL completes, because we update the in-memory data before
the write finishes.

Part-of #6036

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
 .../gh_6036_qsync_order_test.lua              | 137 ++++++++++++++++++
 test/replication-luatest/suite.ini            |   1 +
 2 files changed, 138 insertions(+)
 create mode 100644 test/replication-luatest/gh_6036_qsync_order_test.lua

diff --git a/test/replication-luatest/gh_6036_qsync_order_test.lua b/test/replication-luatest/gh_6036_qsync_order_test.lua
new file mode 100644
index 000000000..4c0059764
--- /dev/null
+++ b/test/replication-luatest/gh_6036_qsync_order_test.lua
@@ -0,0 +1,137 @@
+local t = require('luatest')
+local cluster = require('test.luatest_helpers.cluster')
+local asserts = require('test.luatest_helpers.asserts')
+local helpers = require('test.luatest_helpers')
+local log = require('log')
+
+local g = t.group('gh-6036', {{engine = 'memtx'}, {engine = 'vinyl'}})
+
+g.before_each(function(cg)
+    pcall(log.cfg, {level = 6})
+
+    local engine = cg.params.engine
+
+    cg.cluster = cluster:new({})
+
+    local box_cfg = {
+        replication = {
+            helpers.instance_uri('r1'),
+            helpers.instance_uri('r2'),
+            helpers.instance_uri('r3'),
+        },
+        replication_timeout         = 0.1,
+        replication_connect_quorum  = 1,
+        election_mode               = 'manual',
+        election_timeout            = 0.1,
+        replication_synchro_quorum  = 1,
+        replication_synchro_timeout = 0.1,
+        log_level                   = 6,
+    }
+
+    cg.r1 = cg.cluster:build_server({ alias = 'r1',
+        engine = engine, box_cfg = box_cfg })
+    cg.r2 = cg.cluster:build_server({ alias = 'r2',
+        engine = engine, box_cfg = box_cfg })
+    cg.r3 = cg.cluster:build_server({ alias = 'r3',
+        engine = engine, box_cfg = box_cfg })
+
+    cg.cluster:add_server(cg.r1)
+    cg.cluster:add_server(cg.r2)
+    cg.cluster:add_server(cg.r3)
+    cg.cluster:start()
+end)
+
+g.after_each(function(cg)
+    cg.cluster:drop()
+    cg.cluster.servers = nil
+end)
+
+g.test_qsync_order = function(cg)
+    asserts:wait_fullmesh({cg.r1, cg.r2, cg.r3})
+
+    --
+    -- Create a synchro space on the r1 node and make
+    -- sure the write is processed just fine.
+    cg.r1:exec(function()
+        box.ctl.promote()
+        box.ctl.wait_rw()
+        local s = box.schema.create_space('test', {is_sync = true})
+        s:create_index('pk')
+        s:insert{1}
+    end)
+
+    local vclock = cg.r1:eval("return box.info.vclock")
+    vclock[0] = nil
+    helpers:wait_vclock(cg.r2, vclock)
+    helpers:wait_vclock(cg.r3, vclock)
+
+    t.assert_equals(cg.r1:eval("return box.space.test:select()"), {{1}})
+    t.assert_equals(cg.r2:eval("return box.space.test:select()"), {{1}})
+    t.assert_equals(cg.r3:eval("return box.space.test:select()"), {{1}})
+
+    local function update_replication(...)
+        return (box.cfg{ replication = { ... } })
+    end
+
+    --
+    -- Drop connection between r1 and r2.
+    cg.r1:exec(update_replication, {
+            helpers.instance_uri("r1"),
+            helpers.instance_uri("r3"),
+        })
+
+    --
+    -- Drop connection between r2 and r1.
+    cg.r2:exec(update_replication, {
+        helpers.instance_uri("r2"),
+        helpers.instance_uri("r3"),
+    })
+
+    --
+    -- Here we have the following scheme
+    --
+    --      r3 (WAL delay)
+    --      /            \
+    --    r1              r2
+    --
+
+    --
+    -- Initiate a disk delay in a slightly tricky way: the next write
+    -- will fall into an endless sleep.
+    cg.r3:eval("box.error.injection.set('ERRINJ_WAL_DELAY', true)")
+
+    --
+    -- Make r2 the leader and start writing data. The PROMOTE request
+    -- gets queued on r3 and is not yet processed; at the same time the
+    -- INSERT won't complete either, since it waits for the PROMOTE to
+    -- finish first. Note that we also poll r3 just to be sure the
+    -- PROMOTE has reached it, via the queue state check.
+    cg.r2:exec(function()
+        box.ctl.promote()
+        box.ctl.wait_rw()
+    end)
+    t.helpers.retrying({}, function()
+        assert(cg.r3:exec(function()
+            return box.info.synchro.queue.latched == true
+        end))
+    end)
+    cg.r2:eval("box.space.test:insert{2}")
+
+    --
+    -- The r1 node has no clue that there is a new leader and keeps
+    -- writing data with an obsolete term. Since r3 is delayed, the
+    -- INSERT won't proceed yet but gets queued.
+    cg.r1:eval("box.space.test:insert{3}")
+
+    --
+    -- Finally enable r3 back. Make sure the data from the new r2
+    -- leader gets written while the old leader's data is ignored.
+    cg.r3:eval("box.error.injection.set('ERRINJ_WAL_DELAY', false)")
+    t.helpers.retrying({}, function()
+        assert(cg.r3:exec(function()
+            return box.space.test:get{2} ~= nil
+        end))
+    end)
+
+    t.assert_equals(cg.r3:eval("return box.space.test:select()"), {{1},{2}})
+end
diff --git a/test/replication-luatest/suite.ini b/test/replication-luatest/suite.ini
index 374f1b87a..07ec93a52 100644
--- a/test/replication-luatest/suite.ini
+++ b/test/replication-luatest/suite.ini
@@ -2,3 +2,4 @@
 core = luatest
 description = replication luatests
 is_parallel = True
+release_disabled = gh_6036_qsync_order_test.lua
-- 
2.34.1



* Re: [Tarantool-patches] [RFC v29 2/3] qsync: order access to the limbo terms
  2022-01-31 21:55 ` [Tarantool-patches] [RFC v29 2/3] qsync: order access to the limbo terms Cyrill Gorcunov via Tarantool-patches
@ 2022-02-09  9:10   ` Serge Petrenko via Tarantool-patches
  0 siblings, 0 replies; 6+ messages in thread
From: Serge Petrenko via Tarantool-patches @ 2022-02-09  9:10 UTC (permalink / raw)
  To: Cyrill Gorcunov, Vladislav Shpilevoy; +Cc: tml


01.02.2022 00:55, Cyrill Gorcunov wrote:

Hi! Thanks for working on this!
Please find a couple of comments below.

There are only a few minor places to fix. Otherwise I think
the patch is almost ready for merging (once the test for promote/demote
is introduced).

Don't bother too much with my comments here, let's first
implement the test for promote/demote locking.

Overall the patch is in very good condition.
Check out my suggestion for the promote() test in the next letter.


> diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
> index 70447caaf..3363e2f9a 100644
> --- a/src/box/txn_limbo.c
> +++ b/src/box/txn_limbo.c
> @@ -47,6 +47,8 @@ txn_limbo_create(struct txn_limbo *limbo)
>   	vclock_create(&limbo->vclock);
>   	vclock_create(&limbo->promote_term_map);
>   	limbo->promote_greatest_term = 0;
> +	latch_create(&limbo->promote_latch);
> +	limbo->promote_is_latched = false;
>   	limbo->confirmed_lsn = 0;
>   	limbo->rollback_count = 0;
>   	limbo->is_in_rollback = false;
> @@ -724,11 +726,14 @@ txn_limbo_wait_empty(struct txn_limbo *limbo, double timeout)
>   }
>   
>   void
> -txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
> +txn_limbo_apply(struct txn_limbo *limbo,
> +		const struct synchro_request *req)
>   {
> +	assert(latch_is_locked(&limbo->promote_latch));
> +
>   	uint64_t term = req->term;
>   	uint32_t origin = req->origin_id;
> -	if (txn_limbo_replica_term(limbo, origin) < term) {
> +	if (vclock_get(&limbo->promote_term_map, origin) < (int64_t)term) {


Extraneous change: you don't lock txn_limbo_replica_term() anyways.


>   		vclock_follow(&limbo->promote_term_map, origin, term);
>   		if (term > limbo->promote_greatest_term)
>   			limbo->promote_greatest_term = term;
> @@ -786,6 +791,15 @@ txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
>   	return;
>   }
>   
> +void
> +txn_limbo_process(struct txn_limbo *limbo,
> +		  const struct synchro_request *req)
> +{
> +	txn_limbo_begin(limbo);
> +	txn_limbo_apply(limbo, req);
> +	txn_limbo_commit(limbo);
> +}
> +
>   void
>   txn_limbo_on_parameters_change(struct txn_limbo *limbo)
>   {
> diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h
> index 53e52f676..b9dddda77 100644
> --- a/src/box/txn_limbo.h
> +++ b/src/box/txn_limbo.h
> @@ -31,6 +31,7 @@
>    */
>   #include "small/rlist.h"
>   #include "vclock/vclock.h"
> +#include "latch.h"
>   
>   #include <stdint.h>
>   
> @@ -147,6 +148,14 @@ struct txn_limbo {
>   	 * limbo and raft are in sync and the terms are the same.
>   	 */
>   	uint64_t promote_greatest_term;
> +	/**
> +	 * To order access to the promote data.
> +	 */
> +	struct latch promote_latch;
> +	/**
> +	 * A flag to inform if limbo is locked (for tests mostly).
> +	 */
> +	bool promote_is_latched;


TBH, I liked 'waiter_count' more. First of all,
`promote_is_latched` duplicates `latch_is_locked(&promote_latch)`;
secondly, `waiter_count` gives more useful info.
When `waiter_count > 0`, promote is latched, but additionally you
know the count of blocked fibers.

I wasn't against `waiter_count`.

My only suggestion was to count `waiter_count` like this:

txn_limbo_begin(...) {
	waiter_count++;
	latch_lock();
	waiter_count--;
}

>   	/**
>   	 * Maximal LSN gathered quorum and either already confirmed in WAL, or
>   	 * whose confirmation is in progress right now. Any attempt to confirm
> @@ -216,7 +225,7 @@ txn_limbo_last_entry(struct txn_limbo *limbo)
>    * @a replica_id.
>    */
>   static inline uint64_t
> -txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
> +txn_limbo_replica_term(struct txn_limbo *limbo, uint32_t replica_id)


Extraneous change.



* Re: [Tarantool-patches] [RFC v29 3/3] test: add gh-6036-qsync-order test
  2022-01-31 21:55 ` [Tarantool-patches] [RFC v29 3/3] test: add gh-6036-qsync-order test Cyrill Gorcunov via Tarantool-patches
@ 2022-02-09  9:11   ` Serge Petrenko via Tarantool-patches
  0 siblings, 0 replies; 6+ messages in thread
From: Serge Petrenko via Tarantool-patches @ 2022-02-09  9:11 UTC (permalink / raw)
  To: Cyrill Gorcunov, Vladislav Shpilevoy; +Cc: tml


01.02.2022 00:55, Cyrill Gorcunov wrote:
> Test that promotion requests are handled only once the corresponding
> write to the WAL completes, because we update the in-memory data before
> the write finishes.
>
> Part-of #6036
>
> Signed-off-by: Cyrill Gorcunov<gorcunov@gmail.com>
> ---
>   .../gh_6036_qsync_order_test.lua              | 137 ++++++++++++++++++
>   test/replication-luatest/suite.ini            |   1 +
>   2 files changed, 138 insertions(+)
>   create mode 100644 test/replication-luatest/gh_6036_qsync_order_test.lua
>
> diff --git a/test/replication-luatest/gh_6036_qsync_order_test.lua b/test/replication-luatest/gh_6036_qsync_order_test.lua
> new file mode 100644
> index 000000000..4c0059764
> --- /dev/null
> +++ b/test/replication-luatest/gh_6036_qsync_order_test.lua
> @@ -0,0 +1,137 @@
> +local t = require('luatest')
> +local cluster = require('test.luatest_helpers.cluster')
> +local asserts = require('test.luatest_helpers.asserts')
> +local helpers = require('test.luatest_helpers')
> +local log = require('log')
> +
> +local g = t.group('gh-6036', {{engine = 'memtx'}, {engine = 'vinyl'}})


You don't need to test engines here.
You always create a memtx space anyway.

> +
> +g.before_each(function(cg)
> +    pcall(log.cfg, {level = 6})


You don't use log on the default instance. Why do you need that?
Does it help luatest output somehow?


> +
> +    local engine = cg.params.engine
> +
> +    cg.cluster = cluster:new({})
> +


Otherwise the test looks fine. Congratulations on writing your first
luatest test!


Now on the promote/demote test. It should be fairly easy to implement.
You only need 2 servers with election mode off and synchro quorum = 1,
and a sync space. The steps are (a rough sketch follows the list):

1. promote server1
2. wait until the promotion is replicated
3. initiate wal delay on server2
4. issue box.ctl.promote() on server2 (it will block on box_issue_promote())
5. while server2 is blocked on writing a promote, do insert{1} on server1
6. wait for replication to server2 (wal is blocked, so rely on
   ERRINJ_WAL_WRITE_COUNT)
7. unblock wal on server2
8. make sure insert{1} wasn't applied on server2.
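
Not a final implementation, just a rough luatest sketch of the steps
above. It assumes a before_each() similar to the one in
gh_6036_qsync_order_test.lua, but with two servers (server1, server2),
election_mode = 'off', replication_synchro_quorum = 1, the same
top-level requires (t, helpers, g), and a sync space 'test' already
created on server1; the exact waits are illustrative:

g.test_promote_order = function(cg)
    -- 1-2. Promote server1 and wait for the PROMOTE to reach server2.
    cg.server1:exec(function()
        box.ctl.promote()
        box.ctl.wait_rw()
    end)
    local vclock = cg.server1:eval("return box.info.vclock")
    vclock[0] = nil
    helpers:wait_vclock(cg.server2, vclock)

    -- 3-4. Block the WAL on server2 and issue a concurrent promote
    -- there; it blocks on the journal write inside box_issue_promote().
    cg.server2:eval("box.error.injection.set('ERRINJ_WAL_DELAY', true)")
    cg.server2:exec(function()
        require('fiber').new(box.ctl.promote)
    end)

    -- 5-6. While server2 is stuck writing the PROMOTE, insert a row on
    -- server1 and wait until it reaches server2 (its WAL is blocked, so
    -- watch the attempted-write counter instead of the vclock).
    local cnt = cg.server2:eval(
        "return box.error.injection.get('ERRINJ_WAL_WRITE_COUNT')")
    cg.server1:eval("box.space.test:insert{1}")
    t.helpers.retrying({}, function()
        assert(cg.server2:eval(
            "return box.error.injection.get('ERRINJ_WAL_WRITE_COUNT')") > cnt)
    end)

    -- 7-8. Unblock the WAL, let the PROMOTE finish, and make sure the
    -- stale insert{1} from the old term was filtered out on server2.
    cg.server2:eval("box.error.injection.set('ERRINJ_WAL_DELAY', false)")
    t.helpers.retrying({}, function()
        assert(cg.server2:exec(function()
            return box.info.synchro.queue.owner == box.info.id
        end))
    end)
    t.assert_equals(cg.server2:eval("return box.space.test:select{1}"), {})
end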


Speaking of the demote test: demotion only works on the queue owner,
so no one should be able to write obsolete data during the DEMOTE WAL
write. I can't think of a test here and I think we may leave it
untested.


