* [Tarantool-patches] [PATCH v23 0/3] qsync: implement packet filtering (part 1)
@ 2021-10-14 21:56 Cyrill Gorcunov via Tarantool-patches
2021-10-14 21:56 ` [Tarantool-patches] [PATCH v23 1/3] latch: add latch_is_locked helper Cyrill Gorcunov via Tarantool-patches
` (2 more replies)
0 siblings, 3 replies; 14+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-10-14 21:56 UTC (permalink / raw)
To: tml
Guys, please take a look once time permits; any comments are highly appreciated!
Serge, here are some important changes:
- as discussed verbally regarding the limbo owner test, I think the most
  straightforward solution is to allow nested txn_limbo_is_owner calls
- in the test I left the snippet
| -- Finally enable election_replica3 back. Make sure the data from new election_replica2
| -- leader get writing while old leader's data ignored.
| test_run:switch("election_replica3")
| box.error.injection.set('ERRINJ_WAL_DELAY', false)
| test_run:wait_cond(function() return box.space.test:get{2} ~= nil end)
| box.space.test:select{}
untouched, because the final select{} checks for the needed results, so I
don't need to count writes.
The tests pass locally, but let's wait for complete CI results.
v22 (by SergeP):
- use the limbo emptiness test _after_ the owner_id test
- drop the redundant assert in limbo commit/rollback,
  since we're unlocking a latch anyway which has its
  own assertion
- in the test: drop an excessive wait_cond and set up the WAL
  delay earlier
v23 (by SergeP):
- fix a problem with the owner test in the case of the recovery journal
- update the test
branch gorcunov/gh-6036-rollback-confirm-23
issue https://github.com/tarantool/tarantool/issues/6036
previous series https://lists.tarantool.org/tarantool-patches/20211011191635.573685-1-gorcunov@gmail.com/
Cyrill Gorcunov (3):
latch: add latch_is_locked helper
qsync: order access to the limbo terms
test: add gh-6036-qsync-order test
src/box/applier.cc | 12 +-
src/box/box.cc | 15 +-
src/box/relay.cc | 11 +-
src/box/txn.c | 2 +-
src/box/txn_limbo.c | 49 ++++-
src/box/txn_limbo.h | 87 +++++++-
src/lib/core/latch.h | 11 +
test/replication/gh-6036-qsync-order.result | 190 ++++++++++++++++++
test/replication/gh-6036-qsync-order.test.lua | 93 +++++++++
test/replication/suite.cfg | 1 +
test/replication/suite.ini | 2 +-
test/unit/snap_quorum_delay.cc | 5 +-
12 files changed, 447 insertions(+), 31 deletions(-)
create mode 100644 test/replication/gh-6036-qsync-order.result
create mode 100644 test/replication/gh-6036-qsync-order.test.lua
base-commit: 4bca48611d7ae772ea095181349194c0c31f9a9b
--
2.31.1
^ permalink raw reply [flat|nested] 14+ messages in thread
* [Tarantool-patches] [PATCH v23 1/3] latch: add latch_is_locked helper
2021-10-14 21:56 [Tarantool-patches] [PATCH v23 0/3] qsync: implement packet filtering (part 1) Cyrill Gorcunov via Tarantool-patches
@ 2021-10-14 21:56 ` Cyrill Gorcunov via Tarantool-patches
2021-10-14 21:56 ` [Tarantool-patches] [PATCH v23 2/3] qsync: order access to the limbo terms Cyrill Gorcunov via Tarantool-patches
2021-10-14 21:56 ` [Tarantool-patches] [PATCH v23 3/3] test: add gh-6036-qsync-order test Cyrill Gorcunov via Tarantool-patches
2 siblings, 0 replies; 14+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-10-14 21:56 UTC (permalink / raw)
To: tml
Add a helper to test whether a latch is currently locked.
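For example, the next patch in the series uses it to assert that the
limbo promote latch is held when entering the request processing core:
    assert(latch_is_locked(&limbo->promote_latch));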
Part-of #6036
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
src/lib/core/latch.h | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/src/lib/core/latch.h b/src/lib/core/latch.h
index 49c59cf63..0aaa8b634 100644
--- a/src/lib/core/latch.h
+++ b/src/lib/core/latch.h
@@ -95,6 +95,17 @@ latch_owner(struct latch *l)
return l->owner;
}
+/**
+ * Return true if the latch is locked.
+ *
+ * @param l - latch to be tested.
+ */
+static inline bool
+latch_is_locked(const struct latch *l)
+{
+ return l->owner != NULL;
+}
+
/**
* Lock a latch. If the latch is already locked by another fiber,
* waits for timeout.
--
2.31.1
^ permalink raw reply [flat|nested] 14+ messages in thread
* [Tarantool-patches] [PATCH v23 2/3] qsync: order access to the limbo terms
2021-10-14 21:56 [Tarantool-patches] [PATCH v23 0/3] qsync: implement packet filtering (part 1) Cyrill Gorcunov via Tarantool-patches
2021-10-14 21:56 ` [Tarantool-patches] [PATCH v23 1/3] latch: add latch_is_locked helper Cyrill Gorcunov via Tarantool-patches
@ 2021-10-14 21:56 ` Cyrill Gorcunov via Tarantool-patches
2021-10-21 22:06 ` Vladislav Shpilevoy via Tarantool-patches
2021-10-14 21:56 ` [Tarantool-patches] [PATCH v23 3/3] test: add gh-6036-qsync-order test Cyrill Gorcunov via Tarantool-patches
2 siblings, 1 reply; 14+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-10-14 21:56 UTC (permalink / raw)
To: tml
Limbo term tracking is shared between appliers: while one of them
is waiting for a write to complete inside the journal_write()
routine, another may need to read the term value to figure out
whether a promote request is valid to apply. Due to cooperative
multitasking, access to the terms is not consistent, so we need to
be sure that other fibers read up-to-date terms (i.e. ones already
written to the WAL).
For this we use a latching mechanism: when one fiber takes the
lock for an update, other readers wait until the operation is
complete.
For example, here is a call graph of two appliers:
applier 1
---------
applier_apply_tx
  (promote term = 3
   current max term = 2)
  applier_synchro_filter_tx
  apply_synchro_row
    journal_write
      (sleeping)
At this moment another applier comes in with obsolete
data and term 2:
applier 2
---------
applier_apply_tx
  (term 2)
  applier_synchro_filter_tx
    txn_limbo_is_replica_outdated -> false
  journal_write (sleep)
applier 1
---------
journal wakes up
  apply_synchro_row_cb
    set max term to 3
So applier 2 did not notice that term 3 had already been seen
and wrote obsolete data. With locking, applier 2 will wait
until applier 1 has finished its write.
Also, Serge Petrenko pointed out that we have a somewhat similar
situation with txn_limbo_ack() [we might try to write a confirm for
an entry while a new promote is in flight and not yet applied, so
the confirm might be invalid] and txn_limbo_on_parameters_change()
[where we might confirm entries after reducing the quorum number
while we are not even the limbo owner]. Thus we need to fix these
problems as well.
We introduce the following helpers:
1) txn_limbo_process_begin: takes the lock;
2) txn_limbo_process_commit and txn_limbo_process_rollback:
   simply release the lock, but have different names for
   better semantics;
3) txn_limbo_process: a general function which uses the x_begin
   and x_commit helpers internally;
4) txn_limbo_process_core: does the real job of processing the
   request; it implies that txn_limbo_process_begin has been called;
5) txn_limbo_ack() and txn_limbo_on_parameters_change(): both
   respect the current limbo owner via the promote latch.
A condensed usage sketch of the new helpers follows.
Part-of #6036
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
src/box/applier.cc | 12 +++--
src/box/box.cc | 15 +++---
src/box/relay.cc | 11 ++---
src/box/txn.c | 2 +-
src/box/txn_limbo.c | 49 +++++++++++++++++--
src/box/txn_limbo.h | 87 +++++++++++++++++++++++++++++++---
test/unit/snap_quorum_delay.cc | 5 +-
7 files changed, 151 insertions(+), 30 deletions(-)
diff --git a/src/box/applier.cc b/src/box/applier.cc
index b981bd436..46c36e259 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -857,7 +857,7 @@ apply_synchro_row_cb(struct journal_entry *entry)
applier_rollback_by_wal_io(entry->res);
} else {
replica_txn_wal_write_cb(synchro_entry->rcb);
- txn_limbo_process(&txn_limbo, synchro_entry->req);
+ txn_limbo_process_core(&txn_limbo, synchro_entry->req);
trigger_run(&replicaset.applier.on_wal_write, NULL);
}
fiber_wakeup(synchro_entry->owner);
@@ -873,6 +873,8 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
if (xrow_decode_synchro(row, &req) != 0)
goto err;
+ txn_limbo_process_begin(&txn_limbo);
+
struct replica_cb_data rcb_data;
struct synchro_entry entry;
/*
@@ -910,12 +912,16 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
* transactions side, including the async ones.
*/
if (journal_write(&entry.base) != 0)
- goto err;
+ goto err_rollback;
if (entry.base.res < 0) {
diag_set_journal_res(entry.base.res);
- goto err;
+ goto err_rollback;
}
+ txn_limbo_process_commit(&txn_limbo);
return 0;
+
+err_rollback:
+ txn_limbo_process_rollback(&txn_limbo);
err:
diag_log();
return -1;
diff --git a/src/box/box.cc b/src/box/box.cc
index e082e1a3d..6a9be745a 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1677,8 +1677,6 @@ box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn)
struct raft *raft = box_raft();
assert(raft->volatile_term == raft->term);
assert(promote_lsn >= 0);
- txn_limbo_write_promote(&txn_limbo, promote_lsn,
- raft->term);
struct synchro_request req = {
.type = IPROTO_RAFT_PROMOTE,
.replica_id = prev_leader_id,
@@ -1686,8 +1684,11 @@ box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn)
.lsn = promote_lsn,
.term = raft->term,
};
- txn_limbo_process(&txn_limbo, &req);
+ txn_limbo_process_begin(&txn_limbo);
+ txn_limbo_write_promote(&txn_limbo, req.lsn, req.term);
+ txn_limbo_process_core(&txn_limbo, &req);
assert(txn_limbo_is_empty(&txn_limbo));
+ txn_limbo_process_commit(&txn_limbo);
}
/** A guard to block multiple simultaneous promote()/demote() invocations. */
@@ -1699,8 +1700,6 @@ box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn)
{
assert(box_raft()->volatile_term == box_raft()->term);
assert(promote_lsn >= 0);
- txn_limbo_write_demote(&txn_limbo, promote_lsn,
- box_raft()->term);
struct synchro_request req = {
.type = IPROTO_RAFT_DEMOTE,
.replica_id = prev_leader_id,
@@ -1708,8 +1707,12 @@ box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn)
.lsn = promote_lsn,
.term = box_raft()->term,
};
- txn_limbo_process(&txn_limbo, &req);
+ txn_limbo_process_begin(&txn_limbo);
+ txn_limbo_write_demote(&txn_limbo, promote_lsn,
+ box_raft()->term);
+ txn_limbo_process_core(&txn_limbo, &req);
assert(txn_limbo_is_empty(&txn_limbo));
+ txn_limbo_process_commit(&txn_limbo);
}
int
diff --git a/src/box/relay.cc b/src/box/relay.cc
index f5852df7b..61ef1e3a5 100644
--- a/src/box/relay.cc
+++ b/src/box/relay.cc
@@ -545,15 +545,10 @@ tx_status_update(struct cmsg *msg)
ack.vclock = &status->vclock;
/*
* Let pending synchronous transactions know, which of
- * them were successfully sent to the replica. Acks are
- * collected only by the transactions originator (which is
- * the single master in 100% so far). Other instances wait
- * for master's CONFIRM message instead.
+ * them were successfully sent to the replica.
*/
- if (txn_limbo.owner_id == instance_id) {
- txn_limbo_ack(&txn_limbo, ack.source,
- vclock_get(ack.vclock, instance_id));
- }
+ txn_limbo_ack(&txn_limbo, instance_id, ack.source,
+ vclock_get(ack.vclock, instance_id));
trigger_run(&replicaset.on_ack, &ack);
static const struct cmsg_hop route[] = {
diff --git a/src/box/txn.c b/src/box/txn.c
index e7fc81683..06bb85a09 100644
--- a/src/box/txn.c
+++ b/src/box/txn.c
@@ -939,7 +939,7 @@ txn_commit(struct txn *txn)
txn_limbo_assign_local_lsn(&txn_limbo, limbo_entry,
lsn);
/* Local WAL write is a first 'ACK'. */
- txn_limbo_ack(&txn_limbo, txn_limbo.owner_id, lsn);
+ txn_limbo_ack_self(&txn_limbo, lsn);
}
if (txn_limbo_wait_complete(&txn_limbo, limbo_entry) < 0)
goto rollback;
diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 70447caaf..9b643072a 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -47,6 +47,7 @@ txn_limbo_create(struct txn_limbo *limbo)
vclock_create(&limbo->vclock);
vclock_create(&limbo->promote_term_map);
limbo->promote_greatest_term = 0;
+ latch_create(&limbo->promote_latch);
limbo->confirmed_lsn = 0;
limbo->rollback_count = 0;
limbo->is_in_rollback = false;
@@ -542,10 +543,30 @@ txn_limbo_read_demote(struct txn_limbo *limbo, int64_t lsn)
}
void
-txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn)
+txn_limbo_ack(struct txn_limbo *limbo, uint32_t owner_id,
+ uint32_t replica_id, int64_t lsn)
{
+ /*
+ * ACKs are collected only by the transactions originator
+ * (which is the single master in 100% so far). Other instances
+ * wait for master's CONFIRM message instead.
+ *
+ * Due to cooperative multitasking there might be a limbo owner
+ * migration in flight (while writing data to the journal), so for
+ * simplicity's sake the test for the owner is done here instead
+ * of putting this check in the callers.
+ */
+ if (!txn_limbo_is_owner(limbo, owner_id))
+ return;
+
+ /*
+ * The test for an empty queue is done _after_ the txn_limbo_is_owner
+ * call because we need to be sure that the limbo has not been
+ * changed under our feet while we're reading it.
+ */
if (rlist_empty(&limbo->queue))
return;
+
/*
* If limbo is currently writing a rollback, it means that the whole
* queue will be rolled back. Because rollback is written only for
@@ -724,11 +745,14 @@ txn_limbo_wait_empty(struct txn_limbo *limbo, double timeout)
}
void
-txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
+txn_limbo_process_core(struct txn_limbo *limbo,
+ const struct synchro_request *req)
{
+ assert(latch_is_locked(&limbo->promote_latch));
+
uint64_t term = req->term;
uint32_t origin = req->origin_id;
- if (txn_limbo_replica_term(limbo, origin) < term) {
+ if (vclock_get(&limbo->promote_term_map, origin) < (int64_t)term) {
vclock_follow(&limbo->promote_term_map, origin, term);
if (term > limbo->promote_greatest_term)
limbo->promote_greatest_term = term;
@@ -786,11 +810,30 @@ txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
return;
}
+void
+txn_limbo_process(struct txn_limbo *limbo,
+ const struct synchro_request *req)
+{
+ txn_limbo_process_begin(limbo);
+ txn_limbo_process_core(limbo, req);
+ txn_limbo_process_commit(limbo);
+}
+
void
txn_limbo_on_parameters_change(struct txn_limbo *limbo)
{
+ /*
+ * In case we're not the current leader (i.e. not owning the
+ * limbo), we should not confirm anything; otherwise
+ * we could reduce the quorum number and start writing CONFIRM
+ * while the leader node carries its own, maybe bigger, quorum value.
+ */
+ if (!txn_limbo_is_owner(limbo, instance_id))
+ return;
+
if (rlist_empty(&limbo->queue))
return;
+
struct txn_limbo_entry *e;
int64_t confirm_lsn = -1;
rlist_foreach_entry(e, &limbo->queue, in_queue) {
diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h
index 53e52f676..0bbd7a1c3 100644
--- a/src/box/txn_limbo.h
+++ b/src/box/txn_limbo.h
@@ -31,6 +31,7 @@
*/
#include "small/rlist.h"
#include "vclock/vclock.h"
+#include "latch.h"
#include <stdint.h>
@@ -147,6 +148,10 @@ struct txn_limbo {
* limbo and raft are in sync and the terms are the same.
*/
uint64_t promote_greatest_term;
+ /**
+ * To order access to the promote data.
+ */
+ struct latch promote_latch;
/**
* Maximal LSN gathered quorum and either already confirmed in WAL, or
* whose confirmation is in progress right now. Any attempt to confirm
@@ -194,6 +199,32 @@ txn_limbo_is_empty(struct txn_limbo *limbo)
return rlist_empty(&limbo->queue);
}
+/**
+ * Test if \a owner_id is the current limbo owner.
+ */
+static inline bool
+txn_limbo_is_owner(struct txn_limbo *limbo, uint32_t owner_id)
+{
+ /*
+ * A guard is needed to prevent a race with in-flight promote
+ * packets which are sitting inside the journal but are not
+ * yet written.
+ *
+ * Note that this test supports nested calls, where
+ * the same fiber does the test on an already taken lock
+ * (for example, the recovery journal engine does so when
+ * it rolls back a transaction and updates the replication
+ * number, causing a nested test for limbo ownership).
+ */
+ if (latch_owner(&limbo->promote_latch) == fiber())
+ return limbo->owner_id == owner_id;
+
+ latch_lock(&limbo->promote_latch);
+ bool v = limbo->owner_id == owner_id;
+ latch_unlock(&limbo->promote_latch);
+ return v;
+}
+
bool
txn_limbo_is_ro(struct txn_limbo *limbo);
@@ -216,9 +247,12 @@ txn_limbo_last_entry(struct txn_limbo *limbo)
* @a replica_id.
*/
static inline uint64_t
-txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
+txn_limbo_replica_term(struct txn_limbo *limbo, uint32_t replica_id)
{
- return vclock_get(&limbo->promote_term_map, replica_id);
+ latch_lock(&limbo->promote_latch);
+ uint64_t v = vclock_get(&limbo->promote_term_map, replica_id);
+ latch_unlock(&limbo->promote_latch);
+ return v;
}
/**
@@ -226,11 +260,14 @@ txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
* data from it. The check is only valid when elections are enabled.
*/
static inline bool
-txn_limbo_is_replica_outdated(const struct txn_limbo *limbo,
+txn_limbo_is_replica_outdated(struct txn_limbo *limbo,
uint32_t replica_id)
{
- return txn_limbo_replica_term(limbo, replica_id) <
- limbo->promote_greatest_term;
+ latch_lock(&limbo->promote_latch);
+ uint64_t v = vclock_get(&limbo->promote_term_map, replica_id);
+ bool res = v < limbo->promote_greatest_term;
+ latch_unlock(&limbo->promote_latch);
+ return res;
}
/**
@@ -287,7 +324,15 @@ txn_limbo_assign_lsn(struct txn_limbo *limbo, struct txn_limbo_entry *entry,
* replica with the specified ID.
*/
void
-txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn);
+txn_limbo_ack(struct txn_limbo *limbo, uint32_t owner_id,
+ uint32_t replica_id, int64_t lsn);
+
+static inline void
+txn_limbo_ack_self(struct txn_limbo *limbo, int64_t lsn)
+{
+ return txn_limbo_ack(limbo, limbo->owner_id,
+ limbo->owner_id, lsn);
+}
/**
* Block the current fiber until the transaction in the limbo
@@ -300,7 +345,35 @@ txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn);
int
txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry);
-/** Execute a synchronous replication request. */
+/**
+ * Initiate execution of a synchronous replication request.
+ */
+static inline void
+txn_limbo_process_begin(struct txn_limbo *limbo)
+{
+ latch_lock(&limbo->promote_latch);
+}
+
+/** Commit a synchronous replication request. */
+static inline void
+txn_limbo_process_commit(struct txn_limbo *limbo)
+{
+ latch_unlock(&limbo->promote_latch);
+}
+
+/** Rollback a synchronous replication request. */
+static inline void
+txn_limbo_process_rollback(struct txn_limbo *limbo)
+{
+ latch_unlock(&limbo->promote_latch);
+}
+
+/** Core of processing synchronous replication request. */
+void
+txn_limbo_process_core(struct txn_limbo *limbo,
+ const struct synchro_request *req);
+
+/** Process a synchronous replication request. */
void
txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req);
diff --git a/test/unit/snap_quorum_delay.cc b/test/unit/snap_quorum_delay.cc
index 803bbbea8..d43b4cd2c 100644
--- a/test/unit/snap_quorum_delay.cc
+++ b/test/unit/snap_quorum_delay.cc
@@ -130,7 +130,7 @@ txn_process_func(va_list ap)
}
txn_limbo_assign_local_lsn(&txn_limbo, entry, fake_lsn);
- txn_limbo_ack(&txn_limbo, txn_limbo.owner_id, fake_lsn);
+ txn_limbo_ack_self(&txn_limbo, fake_lsn);
txn_limbo_wait_complete(&txn_limbo, entry);
switch (process_type) {
@@ -157,7 +157,8 @@ txn_confirm_func(va_list ap)
* inside gc_checkpoint().
*/
fiber_sleep(0);
- txn_limbo_ack(&txn_limbo, relay_id, fake_lsn);
+ txn_limbo_ack(&txn_limbo, txn_limbo.owner_id,
+ relay_id, fake_lsn);
return 0;
}
--
2.31.1
^ permalink raw reply [flat|nested] 14+ messages in thread
* [Tarantool-patches] [PATCH v23 3/3] test: add gh-6036-qsync-order test
2021-10-14 21:56 [Tarantool-patches] [PATCH v23 0/3] qsync: implement packet filtering (part 1) Cyrill Gorcunov via Tarantool-patches
2021-10-14 21:56 ` [Tarantool-patches] [PATCH v23 1/3] latch: add latch_is_locked helper Cyrill Gorcunov via Tarantool-patches
2021-10-14 21:56 ` [Tarantool-patches] [PATCH v23 2/3] qsync: order access to the limbo terms Cyrill Gorcunov via Tarantool-patches
@ 2021-10-14 21:56 ` Cyrill Gorcunov via Tarantool-patches
2021-10-19 15:09 ` Serge Petrenko via Tarantool-patches
2021-10-21 22:06 ` Vladislav Shpilevoy via Tarantool-patches
2 siblings, 2 replies; 14+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-10-14 21:56 UTC (permalink / raw)
To: tml
Test that promotion requests are handled only when the corresponding
write to the WAL completes, because we update in-memory data before
the write finishes.
Note that without the patch "qsync: order access to the limbo terms"
this test fires the assertion
> tarantool: src/box/txn_limbo.c:481: txn_limbo_read_rollback: Assertion `e->txn->signature >= 0' failed.
Part-of #6036
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
test/replication/gh-6036-qsync-order.result | 190 ++++++++++++++++++
test/replication/gh-6036-qsync-order.test.lua | 93 +++++++++
test/replication/suite.cfg | 1 +
test/replication/suite.ini | 2 +-
4 files changed, 285 insertions(+), 1 deletion(-)
create mode 100644 test/replication/gh-6036-qsync-order.result
create mode 100644 test/replication/gh-6036-qsync-order.test.lua
diff --git a/test/replication/gh-6036-qsync-order.result b/test/replication/gh-6036-qsync-order.result
new file mode 100644
index 000000000..1c16e19b4
--- /dev/null
+++ b/test/replication/gh-6036-qsync-order.result
@@ -0,0 +1,190 @@
+-- test-run result file version 2
+--
+-- gh-6036: verify that terms are locked when we're inside journal
+-- write routine, because parallel appliers may ignore the fact that
+-- the term is updated already but not yet written leading to data
+-- inconsistency.
+--
+test_run = require('test_run').new()
+ | ---
+ | ...
+
+SERVERS={"election_replica1", "election_replica2", "election_replica3"}
+ | ---
+ | ...
+test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'})
+ | ---
+ | ...
+test_run:wait_fullmesh(SERVERS)
+ | ---
+ | ...
+
+--
+-- Create a synchro space on the master node and make
+-- sure the write is processed just fine.
+test_run:switch("election_replica1")
+ | ---
+ | - true
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+s = box.schema.create_space('test', {is_sync = true})
+ | ---
+ | ...
+_ = s:create_index('pk')
+ | ---
+ | ...
+s:insert{1}
+ | ---
+ | - [1]
+ | ...
+
+test_run:wait_lsn('election_replica2', 'election_replica1')
+ | ---
+ | ...
+test_run:wait_lsn('election_replica3', 'election_replica1')
+ | ---
+ | ...
+
+--
+-- Drop connection between election_replica1 and election_replica2.
+box.cfg({ \
+ replication = { \
+ "unix/:./election_replica1.sock", \
+ "unix/:./election_replica3.sock", \
+ }, \
+})
+ | ---
+ | ...
+
+--
+-- Drop connection between election_replica2 and election_replica1.
+test_run:switch("election_replica2")
+ | ---
+ | - true
+ | ...
+box.cfg({ \
+ replication = { \
+ "unix/:./election_replica2.sock", \
+ "unix/:./election_replica3.sock", \
+ }, \
+})
+ | ---
+ | ...
+
+--
+-- Here we have the following scheme
+--
+-- election_replica3 (will be delayed)
+-- / \
+-- election_replica1 election_replica2
+
+--
+-- Initiate disk delay in a bit tricky way: the next write will
+-- fall into forever sleep.
+test_run:switch("election_replica3")
+ | ---
+ | - true
+ | ...
+write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT")
+ | ---
+ | ...
+box.error.injection.set("ERRINJ_WAL_DELAY", true)
+ | ---
+ | - ok
+ | ...
+--
+-- Make election_replica2 become a leader and start writing data:
+-- the PROMOTE request gets queued on election_replica3 and is not
+-- yet processed; at the same time the INSERT won't complete either,
+-- waiting for PROMOTE completion first. Note that we
+-- enter election_replica3 as well just to be sure the PROMOTE
+-- reached it.
+test_run:switch("election_replica2")
+ | ---
+ | - true
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+test_run:switch("election_replica3")
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end)
+ | ---
+ | - true
+ | ...
+test_run:switch("election_replica2")
+ | ---
+ | - true
+ | ...
+box.space.test:insert{2}
+ | ---
+ | - [2]
+ | ...
+
+--
+-- The election_replica1 node has no clue that there is a new leader
+-- and continues writing data with an obsolete term. Since election_replica3
+-- is delayed now, the INSERT won't proceed yet but gets queued.
+test_run:switch("election_replica1")
+ | ---
+ | - true
+ | ...
+box.space.test:insert{3}
+ | ---
+ | - [3]
+ | ...
+
+--
+-- Finally re-enable election_replica3. Make sure the data from the new election_replica2
+-- leader gets written while the old leader's data is ignored.
+test_run:switch("election_replica3")
+ | ---
+ | - true
+ | ...
+box.error.injection.set('ERRINJ_WAL_DELAY', false)
+ | ---
+ | - ok
+ | ...
+test_run:wait_cond(function() return box.space.test:get{2} ~= nil end)
+ | ---
+ | - true
+ | ...
+box.space.test:select{}
+ | ---
+ | - - [1]
+ | - [2]
+ | ...
+
+test_run:switch("default")
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server election_replica1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server election_replica2')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server election_replica3')
+ | ---
+ | - true
+ | ...
+
+test_run:cmd('delete server election_replica1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server election_replica2')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server election_replica3')
+ | ---
+ | - true
+ | ...
diff --git a/test/replication/gh-6036-qsync-order.test.lua b/test/replication/gh-6036-qsync-order.test.lua
new file mode 100644
index 000000000..5fcd316d8
--- /dev/null
+++ b/test/replication/gh-6036-qsync-order.test.lua
@@ -0,0 +1,93 @@
+--
+-- gh-6036: verify that terms are locked when we're inside journal
+-- write routine, because parallel appliers may ignore the fact that
+-- the term is updated already but not yet written leading to data
+-- inconsistency.
+--
+test_run = require('test_run').new()
+
+SERVERS={"election_replica1", "election_replica2", "election_replica3"}
+test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'})
+test_run:wait_fullmesh(SERVERS)
+
+--
+-- Create a synchro space on the master node and make
+-- sure the write is processed just fine.
+test_run:switch("election_replica1")
+box.ctl.promote()
+s = box.schema.create_space('test', {is_sync = true})
+_ = s:create_index('pk')
+s:insert{1}
+
+test_run:wait_lsn('election_replica2', 'election_replica1')
+test_run:wait_lsn('election_replica3', 'election_replica1')
+
+--
+-- Drop connection between election_replica1 and election_replica2.
+box.cfg({ \
+ replication = { \
+ "unix/:./election_replica1.sock", \
+ "unix/:./election_replica3.sock", \
+ }, \
+})
+
+--
+-- Drop connection between election_replica2 and election_replica1.
+test_run:switch("election_replica2")
+box.cfg({ \
+ replication = { \
+ "unix/:./election_replica2.sock", \
+ "unix/:./election_replica3.sock", \
+ }, \
+})
+
+--
+-- Here we have the following scheme
+--
+-- election_replica3 (will be delayed)
+-- / \
+-- election_replica1 election_replica2
+
+--
+-- Initiate disk delay in a bit tricky way: the next write will
+-- fall into forever sleep.
+test_run:switch("election_replica3")
+write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT")
+box.error.injection.set("ERRINJ_WAL_DELAY", true)
+--
+-- Make election_replica2 become a leader and start writing data:
+-- the PROMOTE request gets queued on election_replica3 and is not
+-- yet processed; at the same time the INSERT won't complete either,
+-- waiting for PROMOTE completion first. Note that we
+-- enter election_replica3 as well just to be sure the PROMOTE
+-- reached it.
+test_run:switch("election_replica2")
+box.ctl.promote()
+test_run:switch("election_replica3")
+test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end)
+test_run:switch("election_replica2")
+box.space.test:insert{2}
+
+--
+-- The election_replica1 node has no clue that there is a new leader
+-- and continues writing data with an obsolete term. Since election_replica3
+-- is delayed now, the INSERT won't proceed yet but gets queued.
+test_run:switch("election_replica1")
+box.space.test:insert{3}
+
+--
+-- Finally re-enable election_replica3. Make sure the data from the new election_replica2
+-- leader gets written while the old leader's data is ignored.
+test_run:switch("election_replica3")
+box.error.injection.set('ERRINJ_WAL_DELAY', false)
+test_run:wait_cond(function() return box.space.test:get{2} ~= nil end)
+box.space.test:select{}
+
+test_run:switch("default")
+test_run:cmd('stop server election_replica1')
+test_run:cmd('stop server election_replica2')
+test_run:cmd('stop server election_replica3')
+
+test_run:cmd('delete server election_replica1')
+test_run:cmd('delete server election_replica2')
+test_run:cmd('delete server election_replica3')
diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg
index 3eee0803c..ed09b2087 100644
--- a/test/replication/suite.cfg
+++ b/test/replication/suite.cfg
@@ -59,6 +59,7 @@
"gh-6094-rs-uuid-mismatch.test.lua": {},
"gh-6127-election-join-new.test.lua": {},
"gh-6035-applier-filter.test.lua": {},
+ "gh-6036-qsync-order.test.lua": {},
"election-candidate-promote.test.lua": {},
"*": {
"memtx": {"engine": "memtx"},
diff --git a/test/replication/suite.ini b/test/replication/suite.ini
index 77eb95f49..080e4fbf4 100644
--- a/test/replication/suite.ini
+++ b/test/replication/suite.ini
@@ -3,7 +3,7 @@ core = tarantool
script = master.lua
description = tarantool/box, replication
disabled = consistent.test.lua
-release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua
+release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua gh-6036-qsync-order.test.lua
config = suite.cfg
lua_libs = lua/fast_replica.lua lua/rlimit.lua
use_unix_sockets = True
--
2.31.1
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Tarantool-patches] [PATCH v23 3/3] test: add gh-6036-qsync-order test
2021-10-14 21:56 ` [Tarantool-patches] [PATCH v23 3/3] test: add gh-6036-qsync-order test Cyrill Gorcunov via Tarantool-patches
@ 2021-10-19 15:09 ` Serge Petrenko via Tarantool-patches
2021-10-19 22:26 ` Cyrill Gorcunov via Tarantool-patches
2021-10-21 22:06 ` Vladislav Shpilevoy via Tarantool-patches
1 sibling, 1 reply; 14+ messages in thread
From: Serge Petrenko via Tarantool-patches @ 2021-10-19 15:09 UTC (permalink / raw)
To: Cyrill Gorcunov, tml
15.10.2021 00:56, Cyrill Gorcunov wrote:
> To test that promotion requests are handled only when appropriate
> write to WAL completes, because we update memory data before the
> write finishes.
>
> Note that without the patch "qsync: order access to the limbo terms"
> this test fires the assertion
>
>> tarantool: src/box/txn_limbo.c:481: txn_limbo_read_rollback: Assertion `e->txn->signature >= 0' failed.
> Part-of #6036
>
> Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
> ---
> test/replication/gh-6036-qsync-order.result | 190 ++++++++++++++++++
> test/replication/gh-6036-qsync-order.test.lua | 93 +++++++++
> test/replication/suite.cfg | 1 +
> test/replication/suite.ini | 2 +-
> 4 files changed, 285 insertions(+), 1 deletion(-)
> create mode 100644 test/replication/gh-6036-qsync-order.result
> create mode 100644 test/replication/gh-6036-qsync-order.test.lua
>
> diff --git a/test/replication/gh-6036-qsync-order.result b/test/replication/gh-6036-qsync-order.result
> new file mode 100644
> index 000000000..1c16e19b4
> --- /dev/null
> +++ b/test/replication/gh-6036-qsync-order.result
> @@ -0,0 +1,190 @@
> +-- test-run result file version 2
> +--
> +-- gh-6036: verify that terms are locked when we're inside journal
> +-- write routine, because parallel appliers may ignore the fact that
> +-- the term is updated already but not yet written leading to data
> +-- inconsistency.
> +--
> +test_run = require('test_run').new()
> + | ---
> + | ...
> +
> +SERVERS={"election_replica1", "election_replica2", "election_replica3"}
> + | ---
> + | ...
> +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'})
> + | ---
> + | ...
> +test_run:wait_fullmesh(SERVERS)
> + | ---
> + | ...
> +
> +--
> +-- Create a synchro space on the master node and make
> +-- sure the write processed just fine.
> +test_run:switch("election_replica1")
> + | ---
> + | - true
> + | ...
> +box.ctl.promote()
> + | ---
> + | ...
> +s = box.schema.create_space('test', {is_sync = true})
> + | ---
> + | ...
> +_ = s:create_index('pk')
> + | ---
> + | ...
> +s:insert{1}
> + | ---
> + | - [1]
> + | ...
> +
> +test_run:wait_lsn('election_replica2', 'election_replica1')
> + | ---
> + | ...
> +test_run:wait_lsn('election_replica3', 'election_replica1')
> + | ---
> + | ...
> +
> +--
> +-- Drop connection between election_replica1 and election_replica2.
> +box.cfg({ \
> + replication = { \
> + "unix/:./election_replica1.sock", \
> + "unix/:./election_replica3.sock", \
> + }, \
> +})
> + | ---
> + | ...
> +
> +--
> +-- Drop connection between election_replica2 and election_replica1.
> +test_run:switch("election_replica2")
> + | ---
> + | - true
> + | ...
> +box.cfg({ \
> + replication = { \
> + "unix/:./election_replica2.sock", \
> + "unix/:./election_replica3.sock", \
> + }, \
> +})
> + | ---
> + | ...
> +
> +--
> +-- Here we have the following scheme
> +--
> +-- election_replica3 (will be delayed)
> +-- / \
> +-- election_replica1 election_replica2
> +
> +--
> +-- Initiate disk delay in a bit tricky way: the next write will
> +-- fall into forever sleep.
> +test_run:switch("election_replica3")
> + | ---
> + | - true
> + | ...
> +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT")
> + | ---
> + | ...
> +box.error.injection.set("ERRINJ_WAL_DELAY", true)
> + | ---
> + | - ok
> + | ...
> +--
> +-- Make election_replica2 been a leader and start writting data,
> +-- the PROMOTE request get queued on election_replica3 and not
> +-- yet processed, same time INSERT won't complete either
> +-- waiting for PROMOTE completion first. Note that we
> +-- enter election_replica3 as well just to be sure the PROMOTE
> +-- reached it.
> +test_run:switch("election_replica2")
> + | ---
> + | - true
> + | ...
> +box.ctl.promote()
> + | ---
> + | ...
> +test_run:switch("election_replica3")
> + | ---
> + | - true
> + | ...
> +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end)
> + | ---
> + | - true
> + | ...
> +test_run:switch("election_replica2")
> + | ---
> + | - true
> + | ...
> +box.space.test:insert{2}
> + | ---
> + | - [2]
> + | ...
> +
> +--
> +-- The election_replica1 node has no clue that there is a new leader
> +-- and continue writing data with obsolete term. Since election_replica3
> +-- is delayed now the INSERT won't proceed yet but get queued.
> +test_run:switch("election_replica1")
> + | ---
> + | - true
> + | ...
> +box.space.test:insert{3}
> + | ---
> + | - [3]
> + | ...
> +
> +--
> +-- Finally enable election_replica3 back. Make sure the data from new election_replica2
> +-- leader get writing while old leader's data ignored.
> +test_run:switch("election_replica3")
> + | ---
> + | - true
> + | ...
Hi and thanks for the fixes!
I have only one comment left.
Actually you do need to count writes here.
The wait_cond for ERRINJ_WAL_WRITE_COUNT == write_cnt + 3
is needed to make sure you receive (and thus try to process)
insert {3} **before** the replica is re-enabled.
Otherwise we can't be sure that the test is correct. You may simply
perform a select before insert{3} has reached the replica.
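For example, something like this right before re-enabling the WAL
(just a sketch of what I mean; the expected increment of 3 - PROMOTE,
insert{2} and the nopified insert{3} - is an assumption):
-- On election_replica3, before releasing the WAL delay:
test_run:wait_cond(function()                                                  \
    return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") >= write_cnt + 3 \
end)
box.error.injection.set('ERRINJ_WAL_DELAY', false)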
> +box.error.injection.set('ERRINJ_WAL_DELAY', false)
> + | ---
> + | - ok
> + | ...
> +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end)
> + | ---
> + | - true
> + | ...
> +box.space.test:select{}
> + | ---
> + | - - [1]
> + | - [2]
> + | ...
> +
> +test_run:switch("default")
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server election_replica1')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server election_replica2')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server election_replica3')
> + | ---
> + | - true
> + | ...
> +
> +test_run:cmd('delete server election_replica1')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('delete server election_replica2')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('delete server election_replica3')
> + | ---
> + | - true
> + | ...
> diff --git a/test/replication/gh-6036-qsync-order.test.lua b/test/replication/gh-6036-qsync-order.test.lua
> new file mode 100644
> index 000000000..5fcd316d8
> --- /dev/null
> +++ b/test/replication/gh-6036-qsync-order.test.lua
> @@ -0,0 +1,93 @@
> +--
> +-- gh-6036: verify that terms are locked when we're inside journal
> +-- write routine, because parallel appliers may ignore the fact that
> +-- the term is updated already but not yet written leading to data
> +-- inconsistency.
> +--
> +test_run = require('test_run').new()
> +
> +SERVERS={"election_replica1", "election_replica2", "election_replica3"}
> +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'})
> +test_run:wait_fullmesh(SERVERS)
> +
> +--
> +-- Create a synchro space on the master node and make
> +-- sure the write processed just fine.
> +test_run:switch("election_replica1")
> +box.ctl.promote()
> +s = box.schema.create_space('test', {is_sync = true})
> +_ = s:create_index('pk')
> +s:insert{1}
> +
> +test_run:wait_lsn('election_replica2', 'election_replica1')
> +test_run:wait_lsn('election_replica3', 'election_replica1')
> +
> +--
> +-- Drop connection between election_replica1 and election_replica2.
> +box.cfg({ \
> + replication = { \
> + "unix/:./election_replica1.sock", \
> + "unix/:./election_replica3.sock", \
> + }, \
> +})
> +
> +--
> +-- Drop connection between election_replica2 and election_replica1.
> +test_run:switch("election_replica2")
> +box.cfg({ \
> + replication = { \
> + "unix/:./election_replica2.sock", \
> + "unix/:./election_replica3.sock", \
> + }, \
> +})
> +
> +--
> +-- Here we have the following scheme
> +--
> +-- election_replica3 (will be delayed)
> +-- / \
> +-- election_replica1 election_replica2
> +
> +--
> +-- Initiate disk delay in a bit tricky way: the next write will
> +-- fall into forever sleep.
> +test_run:switch("election_replica3")
> +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT")
> +box.error.injection.set("ERRINJ_WAL_DELAY", true)
> +--
> +-- Make election_replica2 been a leader and start writting data,
> +-- the PROMOTE request get queued on election_replica3 and not
> +-- yet processed, same time INSERT won't complete either
> +-- waiting for PROMOTE completion first. Note that we
> +-- enter election_replica3 as well just to be sure the PROMOTE
> +-- reached it.
> +test_run:switch("election_replica2")
> +box.ctl.promote()
> +test_run:switch("election_replica3")
> +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end)
> +test_run:switch("election_replica2")
> +box.space.test:insert{2}
> +
> +--
> +-- The election_replica1 node has no clue that there is a new leader
> +-- and continue writing data with obsolete term. Since election_replica3
> +-- is delayed now the INSERT won't proceed yet but get queued.
> +test_run:switch("election_replica1")
> +box.space.test:insert{3}
> +
> +--
> +-- Finally enable election_replica3 back. Make sure the data from new election_replica2
> +-- leader get writing while old leader's data ignored.
> +test_run:switch("election_replica3")
> +box.error.injection.set('ERRINJ_WAL_DELAY', false)
> +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end)
> +box.space.test:select{}
> +
> +test_run:switch("default")
> +test_run:cmd('stop server election_replica1')
> +test_run:cmd('stop server election_replica2')
> +test_run:cmd('stop server election_replica3')
> +
> +test_run:cmd('delete server election_replica1')
> +test_run:cmd('delete server election_replica2')
> +test_run:cmd('delete server election_replica3')
> diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg
> index 3eee0803c..ed09b2087 100644
> --- a/test/replication/suite.cfg
> +++ b/test/replication/suite.cfg
> @@ -59,6 +59,7 @@
> "gh-6094-rs-uuid-mismatch.test.lua": {},
> "gh-6127-election-join-new.test.lua": {},
> "gh-6035-applier-filter.test.lua": {},
> + "gh-6036-qsync-order.test.lua": {},
> "election-candidate-promote.test.lua": {},
> "*": {
> "memtx": {"engine": "memtx"},
> diff --git a/test/replication/suite.ini b/test/replication/suite.ini
> index 77eb95f49..080e4fbf4 100644
> --- a/test/replication/suite.ini
> +++ b/test/replication/suite.ini
> @@ -3,7 +3,7 @@ core = tarantool
> script = master.lua
> description = tarantool/box, replication
> disabled = consistent.test.lua
> -release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua
> +release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua gh-6036-qsync-order.test.lua
> config = suite.cfg
> lua_libs = lua/fast_replica.lua lua/rlimit.lua
> use_unix_sockets = True
--
Serge Petrenko
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Tarantool-patches] [PATCH v23 3/3] test: add gh-6036-qsync-order test
2021-10-19 15:09 ` Serge Petrenko via Tarantool-patches
@ 2021-10-19 22:26 ` Cyrill Gorcunov via Tarantool-patches
2021-10-20 6:35 ` Serge Petrenko via Tarantool-patches
0 siblings, 1 reply; 14+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-10-19 22:26 UTC (permalink / raw)
To: Serge Petrenko; +Cc: tml
On Tue, Oct 19, 2021 at 06:09:50PM +0300, Serge Petrenko wrote:
> > +--
> > +-- The election_replica1 node has no clue that there is a new leader
> > +-- and continue writing data with obsolete term. Since election_replica3
> > +-- is delayed now the INSERT won't proceed yet but get queued.
> > +test_run:switch("election_replica1")
> > + | ---
> > + | - true
> > + | ...
> > +box.space.test:insert{3}
> > + | ---
> > + | - [3]
> > + | ...
> > +
> > +--
> > +-- Finally enable election_replica3 back. Make sure the data from new election_replica2
> > +-- leader get writing while old leader's data ignored.
> > +test_run:switch("election_replica3")
> > + | ---
> > + | - true
> > + | ...
>
> Hi and thanks for the fixes!
>
> I have only one comment left.
>
> Actually you do need to count writes here.
> The wait_cond for ERRINJ_WAL_WRITE_COUNT == write_cnt + 3
> is needed to make sure you receive (and thus try to process)
> insert {3} **before** the replica is re-enabled.
>
> Otherwise we can't be sure that the test is correct. You may simply
> perform a select before insert{3} has reached the replica.
You know, I spent a few hours trying to make the test pass while waiting
for ERRINJ_WAL_WRITE_COUNT == write_cnt + 3 and finally realized that
this seems to be what happens: replica1 is no longer a leader, and
when this record reaches our replica3 node we NOPify it, then
we run
  apply_row
      if (request.type == IPROTO_NOP)
          return process_nop()
thus this record does not even reach the journal at all, and that is
why waiting for write_cnt + 3 lasts forever. Unless I missed
something obvious.
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Tarantool-patches] [PATCH v23 3/3] test: add gh-6036-qsync-order test
2021-10-19 22:26 ` Cyrill Gorcunov via Tarantool-patches
@ 2021-10-20 6:35 ` Serge Petrenko via Tarantool-patches
2021-10-21 22:06 ` Vladislav Shpilevoy via Tarantool-patches
0 siblings, 1 reply; 14+ messages in thread
From: Serge Petrenko via Tarantool-patches @ 2021-10-20 6:35 UTC (permalink / raw)
To: Cyrill Gorcunov, Vladislav Shpilevoy; +Cc: tml
20.10.2021 01:26, Cyrill Gorcunov wrote:
> On Tue, Oct 19, 2021 at 06:09:50PM +0300, Serge Petrenko wrote:
>>> +--
>>> +-- The election_replica1 node has no clue that there is a new leader
>>> +-- and continue writing data with obsolete term. Since election_replica3
>>> +-- is delayed now the INSERT won't proceed yet but get queued.
>>> +test_run:switch("election_replica1")
>>> + | ---
>>> + | - true
>>> + | ...
>>> +box.space.test:insert{3}
>>> + | ---
>>> + | - [3]
>>> + | ...
>>> +
>>> +--
>>> +-- Finally enable election_replica3 back. Make sure the data from new election_replica2
>>> +-- leader get writing while old leader's data ignored.
>>> +test_run:switch("election_replica3")
>>> + | ---
>>> + | - true
>>> + | ...
>> Hi and thanks for the fixes!
>>
>> I have only one comment left.
>>
>> Actually you do need to count writes here.
>> The wait_cond for ERRINJ_WAL_WRITE_COUNT == write_cnt + 3
>> is needed to make sure you receive (and thus try to process)
>> insert {3} **before** the replica is re-enabled.
>>
>> Otherwise we can't be sure that the test is correct. You may simply
>> perform a select before insert{3} has reached the replica.
> You know, I spent a few hours trying to pass the test waiting for
> ERRINJ_WAL_WRITE_COUNT == write_cnt + 3 and finally realized that
> it seems that is what happens: the replica1 is not longer a leader
> and when this record reach our replica3 node we NOPify it then
> we run
>
> apply_row
> if (request.type == IPROTO_NOP)
> return process_nop()
>
> thus this record even not reaching the journal at all and that is
> why waiting for write_cnt + 3 lasts forever. If only I didn't miss
> something obvious.
Unfortunately, this is not the case. A NOP entry still reaches the WAL.
That's why we need NOP entries: they reside in the WAL but do nothing;
that's for the sake of the vclock bump. Otherwise we could skip such
entries completely, without nopifying them.
So, even if the entry is nopified, it will enter the WAL sooner or later.
I just realised what the problem is: the entry is waiting on a limbo latch
inside the NOPify procedure. That's why it never reaches the journal
(until we re-enable replica3, at least).
I don't know how to wait for this entry's arrival then.
The current test version looks OK to me.
Vlad, do you have any ideas here?
--
Serge Petrenko
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Tarantool-patches] [PATCH v23 2/3] qsync: order access to the limbo terms
2021-10-14 21:56 ` [Tarantool-patches] [PATCH v23 2/3] qsync: order access to the limbo terms Cyrill Gorcunov via Tarantool-patches
@ 2021-10-21 22:06 ` Vladislav Shpilevoy via Tarantool-patches
0 siblings, 0 replies; 14+ messages in thread
From: Vladislav Shpilevoy via Tarantool-patches @ 2021-10-21 22:06 UTC (permalink / raw)
To: Cyrill Gorcunov, tml
Hi! Thanks for the patch!
See 4 comments below.
> diff --git a/src/box/box.cc b/src/box/box.cc
> index e082e1a3d..6a9be745a 100644
> --- a/src/box/box.cc
> +++ b/src/box/box.cc
> @@ -1686,8 +1684,11 @@ box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn)
> .lsn = promote_lsn,
> .term = raft->term,
> };
> - txn_limbo_process(&txn_limbo, &req);
> + txn_limbo_process_begin(&txn_limbo);
> + txn_limbo_write_promote(&txn_limbo, req.lsn, req.term);
> + txn_limbo_process_core(&txn_limbo, &req);
> assert(txn_limbo_is_empty(&txn_limbo));
> + txn_limbo_process_commit(&txn_limbo);
1. What was wrong with txn_limbo_begin/commit/rollback?
I mean the `txn_limbo` prefix, without the `_process_` suffix. From
this hunk we can see that you call `process_begin`, but then
you call txn_limbo_write_promote - it is not 'process'.
Using 'process' because of that looks inconsistent. Also, if
you dropped it from the begin/commit/rollback names, then you
could leave txn_limbo_process as is, without this _core suffix.
> @@ -1708,8 +1707,12 @@ box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn)
> .lsn = promote_lsn,
> .term = box_raft()->term,
> };
> - txn_limbo_process(&txn_limbo, &req);
> + txn_limbo_process_begin(&txn_limbo);
> + txn_limbo_write_demote(&txn_limbo, promote_lsn,
> + box_raft()->term);
2. This expression fits into one line (< 80 symbols) just fine.
Maybe let's write it as one line?
> + txn_limbo_process_core(&txn_limbo, &req);
> assert(txn_limbo_is_empty(&txn_limbo));
> + txn_limbo_process_commit(&txn_limbo);
> }
> diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
> index 70447caaf..9b643072a 100644
> --- a/src/box/txn_limbo.c
> +++ b/src/box/txn_limbo.c
> @@ -542,10 +543,30 @@ txn_limbo_read_demote(struct txn_limbo *limbo, int64_t lsn)
> }
>
> void
> -txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn)
> +txn_limbo_ack(struct txn_limbo *limbo, uint32_t owner_id,
> + uint32_t replica_id, int64_t lsn)
> {
> + /*
> + * ACKs are collected only by the transactions originator
> + * (which is the single master in 100% so far). Other instances
> + * wait for master's CONFIRM message instead.
> + *
> + * Due to cooperative multitasking there might be limbo owner
> + * migration in-fly (while writing data to journal), so for
> + * simplicity sake the test for owner is done here instead
> + * of putting this check to the callers.
> + */
3. This does not improve simplicity at all TBH, rather vice versa.
This code looks very suspicious and counter-intuitive. But let's wait for
more tests. See the comments on the last commit.
4. Why doesn't txn_limbo_ack() keep the lock for the duration of
txn_limbo_write_confirm() and txn_limbo_read_confirm()?
I think that probably all the limbo functions, literally all of them,
must take the latch, from the beginning to the end. And only after
careful testing could we try to drop some of the locks, or at least
reduce their critical sections. The box.ctl functions related to the
limbo/raft should do so too. But I do not insist on that; this is just
how I would maybe do it. A rough sketch of the idea is below.
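Roughly something like this (just a sketch of the idea, with the
middle of the function elided):
void
txn_limbo_ack(struct txn_limbo *limbo, uint32_t owner_id,
	      uint32_t replica_id, int64_t lsn)
{
	latch_lock(&limbo->promote_latch);
	if (limbo->owner_id != owner_id || rlist_empty(&limbo->queue)) {
		latch_unlock(&limbo->promote_latch);
		return;
	}
	/*
	 * ... collect the ACK and, once the quorum is gathered, do
	 * txn_limbo_write_confirm() + txn_limbo_read_confirm()
	 * while still holding the latch ...
	 */
	latch_unlock(&limbo->promote_latch);
}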
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Tarantool-patches] [PATCH v23 3/3] test: add gh-6036-qsync-order test
2021-10-20 6:35 ` Serge Petrenko via Tarantool-patches
@ 2021-10-21 22:06 ` Vladislav Shpilevoy via Tarantool-patches
2021-10-22 6:36 ` Serge Petrenko via Tarantool-patches
0 siblings, 1 reply; 14+ messages in thread
From: Vladislav Shpilevoy via Tarantool-patches @ 2021-10-21 22:06 UTC (permalink / raw)
To: Serge Petrenko, Cyrill Gorcunov; +Cc: tml
>>> Actually you do need to count writes here.
>>> The wait_cond for ERRINJ_WAL_WRITE_COUNT == write_cnt + 3
>>> is needed to make sure you receive (and thus try to process)
>>> insert {3} **before** the replica is re-enabled.
>>>
>>> Otherwise we can't be sure that the test is correct. You may simply
>>> perform a select before insert{3} has reached the replica.
>> You know, I spent a few hours trying to pass the test waiting for
>> ERRINJ_WAL_WRITE_COUNT == write_cnt + 3 and finally realized that
>> it seems that is what happens: the replica1 is not longer a leader
>> and when this record reach our replica3 node we NOPify it then
>> we run
>>
>> apply_row
>> if (request.type == IPROTO_NOP)
>> return process_nop()
>>
>> thus this record even not reaching the journal at all and that is
>> why waiting for write_cnt + 3 lasts forever. If only I didn't miss
>> something obvious.
>
> Unfortunately, this is not the case. A NOP entry still reaches WAL.
> That's why we need NOP entries: they reside in WAL but do nothing.
> That's for vclock bump sake. Otherwise we could skip such entries
> completely, without nopifying them.
>
> So, even if the entry is nopified, it would enter WAL sooner or later.
>
> I just realised what the problem is: the entry is waiting on a limbo latch
> inside the NOPify procedure. That's why it never reaches the journal
> (until we re-enable replica3, at least).
>
> I don't know how to wait for this entry's arrival then.
> The current test version looks OK to me.
>
> Vlad, do you have any ideas here?
I think it might be worth adding an errinj for the number of blocked
fibers waiting on the limbo latch. We could even expose that in
box.info.qsync; it seems like useful info and would help to measure
contention. Something along the lines of the sketch below.
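For example (a sketch; promote_latch_cnt is a made-up field name):
	/* In struct txn_limbo: how many latch takes had to wait. */
	int64_t promote_latch_cnt;

	static inline void
	txn_limbo_process_begin(struct txn_limbo *limbo)
	{
		/* Count fibers that have to wait for the latch. */
		if (latch_is_locked(&limbo->promote_latch))
			limbo->promote_latch_cnt++;
		latch_lock(&limbo->promote_latch);
	}
Then box.info.qsync (or an errinj getter) could report
limbo->promote_latch_cnt, and the test could wait for it to grow
instead of counting WAL writes.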
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Tarantool-patches] [PATCH v23 3/3] test: add gh-6036-qsync-order test
2021-10-14 21:56 ` [Tarantool-patches] [PATCH v23 3/3] test: add gh-6036-qsync-order test Cyrill Gorcunov via Tarantool-patches
2021-10-19 15:09 ` Serge Petrenko via Tarantool-patches
@ 2021-10-21 22:06 ` Vladislav Shpilevoy via Tarantool-patches
2021-10-22 22:03 ` Cyrill Gorcunov via Tarantool-patches
1 sibling, 1 reply; 14+ messages in thread
From: Vladislav Shpilevoy via Tarantool-patches @ 2021-10-21 22:06 UTC (permalink / raw)
To: Cyrill Gorcunov, tml
Thanks for the patch!
> +-- Make election_replica2 been a leader and start writting data,
> +-- the PROMOTE request get queued on election_replica3 and not
> +-- yet processed, same time INSERT won't complete either
> +-- waiting for PROMOTE completion first. Note that we
> +-- enter election_replica3 as well just to be sure the PROMOTE
> +-- reached it.
> +test_run:switch("election_replica2")
> + | ---
> + | - true
> + | ...
> +box.ctl.promote()
> + | ---
> + | ...
> +test_run:switch("election_replica3")
> + | ---
> + | - true
> + | ...
> +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end)
Sorry, keep the code within the 80-symbol line width.
I added this diff and all the tests passed. That means the only
tested parts of the patchset are apply_synchro_row() and
txn_limbo_is_replica_outdated().
Please, let's try not to submit more untested code. It is enough that
we already have one ticket which simply adds assert(false) in a few
places and the tests still pass. It does not make the code better; it
just adds more uncertainty about how it works and whether it works at all.
====================
diff --git a/src/box/box.cc b/src/box/box.cc
index 6a9be745a..011738409 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1684,11 +1684,11 @@ box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn)
.lsn = promote_lsn,
.term = raft->term,
};
- txn_limbo_process_begin(&txn_limbo);
+ //txn_limbo_process_begin(&txn_limbo);
txn_limbo_write_promote(&txn_limbo, req.lsn, req.term);
txn_limbo_process_core(&txn_limbo, &req);
assert(txn_limbo_is_empty(&txn_limbo));
- txn_limbo_process_commit(&txn_limbo);
+ //txn_limbo_process_commit(&txn_limbo);
}
/** A guard to block multiple simultaneous promote()/demote() invocations. */
@@ -1707,12 +1707,12 @@ box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn)
.lsn = promote_lsn,
.term = box_raft()->term,
};
- txn_limbo_process_begin(&txn_limbo);
+ //txn_limbo_process_begin(&txn_limbo);
txn_limbo_write_demote(&txn_limbo, promote_lsn,
box_raft()->term);
txn_limbo_process_core(&txn_limbo, &req);
assert(txn_limbo_is_empty(&txn_limbo));
- txn_limbo_process_commit(&txn_limbo);
+ //txn_limbo_process_commit(&txn_limbo);
}
int
diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 9b643072a..ba11443af 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -748,7 +748,7 @@ void
txn_limbo_process_core(struct txn_limbo *limbo,
const struct synchro_request *req)
{
- assert(latch_is_locked(&limbo->promote_latch));
+ //assert(latch_is_locked(&limbo->promote_latch));
uint64_t term = req->term;
uint32_t origin = req->origin_id;
@@ -814,9 +814,9 @@ void
txn_limbo_process(struct txn_limbo *limbo,
const struct synchro_request *req)
{
- txn_limbo_process_begin(limbo);
+ //txn_limbo_process_begin(limbo);
txn_limbo_process_core(limbo, req);
- txn_limbo_process_commit(limbo);
+ //txn_limbo_process_commit(limbo);
}
void
diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h
index 0bbd7a1c3..10d20e956 100644
--- a/src/box/txn_limbo.h
+++ b/src/box/txn_limbo.h
@@ -216,12 +216,12 @@ txn_limbo_is_owner(struct txn_limbo *limbo, uint32_t owner_id)
* it rolls back a transaction and updates replication
* number causing a nested test for limbo ownership).
*/
- if (latch_owner(&limbo->promote_latch) == fiber())
- return limbo->owner_id == owner_id;
+ //if (latch_owner(&limbo->promote_latch) == fiber())
+ // return limbo->owner_id == owner_id;
- latch_lock(&limbo->promote_latch);
+ //latch_lock(&limbo->promote_latch);
bool v = limbo->owner_id == owner_id;
- latch_unlock(&limbo->promote_latch);
+ //latch_unlock(&limbo->promote_latch);
return v;
}
@@ -249,9 +249,9 @@ txn_limbo_last_entry(struct txn_limbo *limbo)
static inline uint64_t
txn_limbo_replica_term(struct txn_limbo *limbo, uint32_t replica_id)
{
- latch_lock(&limbo->promote_latch);
+ //latch_lock(&limbo->promote_latch);
uint64_t v = vclock_get(&limbo->promote_term_map, replica_id);
- latch_unlock(&limbo->promote_latch);
+ //latch_unlock(&limbo->promote_latch);
return v;
}
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Tarantool-patches] [PATCH v23 3/3] test: add gh-6036-qsync-order test
2021-10-21 22:06 ` Vladislav Shpilevoy via Tarantool-patches
@ 2021-10-22 6:36 ` Serge Petrenko via Tarantool-patches
0 siblings, 0 replies; 14+ messages in thread
From: Serge Petrenko via Tarantool-patches @ 2021-10-22 6:36 UTC (permalink / raw)
To: Vladislav Shpilevoy, Cyrill Gorcunov; +Cc: tml
22.10.2021 01:06, Vladislav Shpilevoy wrote:
>>>> Actually you do need to count writes here.
>>>> The wait_cond for ERRINJ_WAL_WRITE_COUNT == write_cnt + 3
>>>> is needed to make sure you receive (and thus try to process)
>>>> insert {3} **before** the replica is re-enabled.
>>>>
>>>> Otherwise we can't be sure that the test is correct. You may simply
>>>> perform a select before insert{3} has reached the replica.
>>> You know, I spent a few hours trying to pass the test waiting for
>>> ERRINJ_WAL_WRITE_COUNT == write_cnt + 3 and finally realized that
>>> it seems that is what happens: replica1 is no longer the leader,
>>> and when this record reaches our replica3 node we NOPify it and then
>>> we run
>>>
>>> apply_row
>>> if (request.type == IPROTO_NOP)
>>> return process_nop()
>>>
>>> thus this record does not even reach the journal at all, and that is
>>> why waiting for write_cnt + 3 lasts forever. Unless I missed
>>> something obvious.
>> Unfortunately, this is not the case. A NOP entry still reaches WAL.
>> That's why we need NOP entries: they reside in WAL but do nothing.
>> That's for vclock bump sake. Otherwise we could skip such entries
>> completely, without nopifying them.
>>
>> So, even if the entry is nopified, it would enter WAL sooner or later.
>>
>> I just realised what the problem is: the entry is waiting on a limbo latch
>> inside the NOPify procedure. That's why it never reaches the journal
>> (until we re-enable replica3, at least).
>>
>> I don't know how to wait for this entry's arrival then.
>> The current test version looks OK to me.
>>
>> Vlad, do you have any ideas here?
> I think it might be worth adding an errinj for the number of blocked
> fibers waiting on the limbo latch. We could even expose that in
> box.info.qsync; it seems like useful info and would help to measure
> contention.
This might be useful, indeed.
Cyrill, let's implement `box.info.synchro.queue.waiters` then and use it
in the test.
Or any other suitable name if you guys come up with one.
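Just to illustrate the idea, here is a minimal sketch of how such a counter
could look on the limbo side, as if it lived in txn_limbo.h. Everything below
is hypothetical and not part of the current series: the promote_latch_waiters
field (a new member of struct txn_limbo), the wrapper names, and the box.info
wiring are only one possible shape of the change.

static inline void
txn_limbo_promote_lock(struct txn_limbo *limbo)
{
	/*
	 * Count fibers that have to wait for the promote latch: bump the
	 * hypothetical promote_latch_waiters counter before blocking and
	 * drop it once the latch is acquired. Fibers are cooperative, so
	 * there is no yield between the check and the lock and the counter
	 * stays consistent.
	 */
	if (latch_is_locked(&limbo->promote_latch)) {
		limbo->promote_latch_waiters++;
		latch_lock(&limbo->promote_latch);
		limbo->promote_latch_waiters--;
	} else {
		latch_lock(&limbo->promote_latch);
	}
}

static inline void
txn_limbo_promote_unlock(struct txn_limbo *limbo)
{
	latch_unlock(&limbo->promote_latch);
}

If the counter is reported as box.info.synchro.queue.waiters, the test could
wait_cond() on it becoming non-zero instead of counting WAL writes.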
--
Serge Petrenko
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Tarantool-patches] [PATCH v23 3/3] test: add gh-6036-qsync-order test
2021-10-21 22:06 ` Vladislav Shpilevoy via Tarantool-patches
@ 2021-10-22 22:03 ` Cyrill Gorcunov via Tarantool-patches
2021-10-24 15:39 ` Vladislav Shpilevoy via Tarantool-patches
0 siblings, 1 reply; 14+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-10-22 22:03 UTC (permalink / raw)
To: Vladislav Shpilevoy; +Cc: tml
On Fri, Oct 22, 2021 at 12:06:09AM +0200, Vladislav Shpilevoy wrote:
> > + | ...
> > +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end)
>
> Sorry, please keep the code within the 80-symbol line width.
OK
>
> I added this diff and all the tests passed. That means the only
> tested parts of the patchset are apply_synchro_row() and
> txn_limbo_is_replica_outdated().
>
Without the rest of the locking the overall picture becomes incomplete:
here we introduce the first part of split-brain detection, and in the second
part we're about to introduce filtering into the begin()/commit() pair.
> Please, let's try not to submit more untested code. It is enough that
> we already have one ticket which simply adds assert(false) into a few
> places and the tests still pass. It does not make the code better, it just
> adds more uncertainty about how it works and whether it works at all.
To me this is the correct way to guard access to some particular data.
In this case we're guarding @promote_term_map and @promote_greatest_term.
What you propose is to guard them on "some path" but leave them untouched
on all others. So a person who reads this code a few years from now is
going to be scratching their head trying to figure out why access to these
members is sometimes done under the lock and sometimes not. I completely
disagree here. Either we lock all accesses or none of them; no partial
locking, please.
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Tarantool-patches] [PATCH v23 3/3] test: add gh-6036-qsync-order test
2021-10-22 22:03 ` Cyrill Gorcunov via Tarantool-patches
@ 2021-10-24 15:39 ` Vladislav Shpilevoy via Tarantool-patches
2021-10-24 16:01 ` Cyrill Gorcunov via Tarantool-patches
0 siblings, 1 reply; 14+ messages in thread
From: Vladislav Shpilevoy via Tarantool-patches @ 2021-10-24 15:39 UTC (permalink / raw)
To: Cyrill Gorcunov; +Cc: tml
>> I added this diff and all the tests passed. That means the only
>> tested parts of the patchset are apply_synchro_row() and
>> txn_limbo_is_replica_outdated().
>>
>
> Without the rest of the locking the overall picture becomes incomplete:
> here we introduce the first part of split-brain detection, and in the second
> part we're about to introduce filtering into the begin()/commit() pair.
Sorry, but how is my comment above related to filtering? You introduced
some code which is dead - it is not tested at all. That was my only
point here.
It is not related to filtering - the ordering issue is separate as
you yourself noted when this was all a single patchset with the
split-brain detection. This is even proved by the test you added
here. I just need more tests for the other places which use the locking
now.
>> Please, let's try not to submit more untested code. It is enough that
>> we already have one ticket which simply adds assert(false) into a few
>> places and the tests still pass. It does not make the code better, it just
>> adds more uncertainty about how it works and whether it works at all.
>
> To me this is the correct way to guard access to some particular data.
> In this case we're guarding @promote_term_map and @promote_greatest_term.
> What you propose is to guard them on "some path" but leave them untouched
> on all others. So a person who reads this code a few years from now is
> going to be scratching their head trying to figure out why access to these
> members is sometimes done under the lock and sometimes not. I completely
> disagree here. Either we lock all accesses or none of them; no partial
> locking, please.
Well, the only thing I am worried about right now is that if that person,
in a few years, drops almost the entirety of your patchset wondering
whether it is really needed, all the tests will still pass.
Also you seem to misunderstand my proposal. I didn't propose to do less
locking, I only proposed to increase it. I thought it would be easier to do
with full blocking of certain functions because it might be just easier to
get something locked in the tests. The bigger the critical section is - the
easier to hit it. Or maybe fine-grained locking works better, if you say so.
Don't know.
It will only affect performance. The perf of qsync/raft now is 0, because
it is not used at all, being unsafe due to the reordering and the
split-brain issues. This patch can't claim it makes the reordering stuff
much better, because there are no tests except for a single case of
reordering.
If from my diff you understood that I proposed to drop this code - sorry,
I didn't mean that. I meant that your patch should test these places, not
drop them. By commenting this code out I tried to prove that this code is
dead now, and that it shouldn't be so.
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Tarantool-patches] [PATCH v23 3/3] test: add gh-6036-qsync-order test
2021-10-24 15:39 ` Vladislav Shpilevoy via Tarantool-patches
@ 2021-10-24 16:01 ` Cyrill Gorcunov via Tarantool-patches
0 siblings, 0 replies; 14+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-10-24 16:01 UTC (permalink / raw)
To: Vladislav Shpilevoy; +Cc: tml
On Sun, Oct 24, 2021 at 05:39:34PM +0200, Vladislav Shpilevoy wrote:
> >> I added this diff and all the tests passed. That means the only
> >> tested parts of the patchset are apply_synchro_row() and
> >> txn_limbo_is_replica_outdated().
> >>
> >
> > Without the rest of the locking the overall picture becomes incomplete:
> > here we introduce the first part of split-brain detection, and in the second
> > part we're about to introduce filtering into the begin()/commit() pair.
>
> Sorry, but how is my comment above related to filtering? You introduced
> some code which is dead - it is not tested at all. That was my only
> point here.
Vlad, these locking calls are not dead; they are not covered by tests yet
because they will be covered in the next part, where we introduce the
filtering. But these locks are taken and exercised at runtime. At least
that is what I had in mind.
But I see your point, thanks!
>
> It is not related to filtering - the ordering issue is separate as
> you yourself noted when this was all a single patchset with the
> split-brain detection. This is even proved by the test you added
> here. I just need more tests for the other places which use the locking
> now.
>
> >> Please, let's try not to submit more untested code. It is enough that
> >> we already have one ticket which simply adds assert(false) into a few
> >> places and the tests still pass. It does not make the code better, it just
> >> adds more uncertainty about how it works and whether it works at all.
> >
> > To me this is the correct way to guard access to some particular data.
> > In this case we're guarding @promote_term_map and @promote_greatest_term.
> > What you propose is to guard them on "some path" but leave them untouched
> > on all others. So a person who reads this code a few years from now is
> > going to be scratching their head trying to figure out why access to these
> > members is sometimes done under the lock and sometimes not. I completely
> > disagree here. Either we lock all accesses or none of them; no partial
> > locking, please.
>
> Well, the only thing I am worried about right now is that if that person,
> in a few years, drops almost the entirety of your patchset wondering
> whether it is really needed, all the tests will still pass.
OK, I understand your worry. Let's see if we manage to settle it.
>
> Also you seem to misunderstand my proposal. I didn't propose to do less
> locking, I only proposed to increase it. I thought it would be easier to do
> with full blocking of certain functions because it might be just easier to
> get something locked in the tests. The bigger the critical section is - the
> easier to hit it. Or maybe fine-grained locking works better, if you say so.
> Don't know.
>
> It will only affect performance. The perf of qsync/raft now is 0, because
> it is not used at all, being unsafe due to the reordering and the
> split-brain issues. This patch can't claim it makes the reordering stuff
> much better, because there are no tests except for a single case of
> reordering.
>
> If from my diff you understood that I proposed to drop this code - sorry,
> I didn't mean that. I meant that your patch should test these places, not
> drop them. By commenting this code out I tried to prove that this code is
> dead now, and that it shouldn't be so.
OK, I see. Gimme some time, I'll try various ways. Thanks for comments, Vlad!
Cyrill
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2021-10-24 16:01 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-10-14 21:56 [Tarantool-patches] [PATCH v23 0/3] qsync: implement packet filtering (part 1) Cyrill Gorcunov via Tarantool-patches
2021-10-14 21:56 ` [Tarantool-patches] [PATCH v23 1/3] latch: add latch_is_locked helper Cyrill Gorcunov via Tarantool-patches
2021-10-14 21:56 ` [Tarantool-patches] [PATCH v23 2/3] qsync: order access to the limbo terms Cyrill Gorcunov via Tarantool-patches
2021-10-21 22:06 ` Vladislav Shpilevoy via Tarantool-patches
2021-10-14 21:56 ` [Tarantool-patches] [PATCH v23 3/3] test: add gh-6036-qsync-order test Cyrill Gorcunov via Tarantool-patches
2021-10-19 15:09 ` Serge Petrenko via Tarantool-patches
2021-10-19 22:26 ` Cyrill Gorcunov via Tarantool-patches
2021-10-20 6:35 ` Serge Petrenko via Tarantool-patches
2021-10-21 22:06 ` Vladislav Shpilevoy via Tarantool-patches
2021-10-22 6:36 ` Serge Petrenko via Tarantool-patches
2021-10-21 22:06 ` Vladislav Shpilevoy via Tarantool-patches
2021-10-22 22:03 ` Cyrill Gorcunov via Tarantool-patches
2021-10-24 15:39 ` Vladislav Shpilevoy via Tarantool-patches
2021-10-24 16:01 ` Cyrill Gorcunov via Tarantool-patches