* [Tarantool-patches] [PATCH v22 0/3] qsync: implement packet filtering (part 1) @ 2021-10-11 19:16 Cyrill Gorcunov via Tarantool-patches 2021-10-11 19:16 ` [Tarantool-patches] [PATCH v22 1/3] latch: add latch_is_locked helper Cyrill Gorcunov via Tarantool-patches ` (2 more replies) 0 siblings, 3 replies; 10+ messages in thread From: Cyrill Gorcunov via Tarantool-patches @ 2021-10-11 19:16 UTC (permalink / raw) To: tml; +Cc: Vladislav Shpilevoy Guys, please take a look once time permit, any comments are highly appreciated! v19 (by Vlad): - do not modify box_issue_promote and demote (while they are still simply utter code duplication but whatever) - make txn_limbo_process being void - make txn_limbo_process_begin/commit/rollback being void - the real processing of request under the lock named as txn_limbo_process_core - testcase completely reworked (kudos to SergeP) - note that if we import test to the master branch without ordering pass it will fire assertion - dropped off debug info from box.info interface v20 (by SergeP): - use guard for ACK processing and parameters change - rework test v21 (by SergeP): - drop warning from txn_limbo_ack - rework test to use cluster helpers and ERRINJ_WAL_WRITE_COUNT error injection, same time drop modification of election_replica script v22 (by SergeP): - use limbo emptiness test _after_ owner_id test - drop redundant assert in limbo commit/rollback since we're unlocking a latch anyway where own assertion present - in test: drop excessive wait_cond and setup wal delay earlier branch gorcunov/gh-6036-rollback-confirm-22 issue https://github.com/tarantool/tarantool/issues/6036 previous series https://lists.tarantool.org/tarantool-patches/20211008175809.349501-1-gorcunov@gmail.com/ Cyrill Gorcunov (3): latch: add latch_is_locked helper qsync: order access to the limbo terms test: add gh-6036-qsync-order test src/box/applier.cc | 12 +- src/box/box.cc | 15 +- src/box/relay.cc | 11 +- src/box/txn.c | 2 +- src/box/txn_limbo.c | 49 ++++- src/box/txn_limbo.h | 78 ++++++- src/lib/core/latch.h | 11 + test/replication/gh-6036-qsync-order.result | 200 ++++++++++++++++++ test/replication/gh-6036-qsync-order.test.lua | 96 +++++++++ test/replication/suite.cfg | 1 + test/replication/suite.ini | 2 +- test/unit/snap_quorum_delay.cc | 5 +- 12 files changed, 451 insertions(+), 31 deletions(-) create mode 100644 test/replication/gh-6036-qsync-order.result create mode 100644 test/replication/gh-6036-qsync-order.test.lua base-commit: ce5752ce235324fcefd5a3d0503fd3f8a1800d38 -- 2.31.1 -- Summary patch against v21 -- diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c index 176a83cb5..8f9bc11c7 100644 --- a/src/box/txn_limbo.c +++ b/src/box/txn_limbo.c @@ -546,8 +546,6 @@ void txn_limbo_ack(struct txn_limbo *limbo, uint32_t owner_id, uint32_t replica_id, int64_t lsn) { - if (rlist_empty(&limbo->queue)) - return; /* * ACKs are collected only by the transactions originator * (which is the single master in 100% so far). Other instances @@ -560,6 +558,15 @@ txn_limbo_ack(struct txn_limbo *limbo, uint32_t owner_id, */ if (!txn_limbo_is_owner(limbo, owner_id)) return; + + /* + * Test for empty queue is done _after_ txn_limbo_is_owner + * call because we need to be sure that limbo is not been + * changed under our feets while we're reading it. + */ + if (rlist_empty(&limbo->queue)) + return; + /* * If limbo is currently writing a rollback, it means that the whole * queue will be rolled back. 
Because rollback is written only for @@ -815,8 +822,6 @@ txn_limbo_process(struct txn_limbo *limbo, void txn_limbo_on_parameters_change(struct txn_limbo *limbo) { - if (rlist_empty(&limbo->queue)) - return; /* * In case if we're not current leader (ie not owning the * limbo) then we should not confirm anything, otherwise @@ -826,6 +831,9 @@ txn_limbo_on_parameters_change(struct txn_limbo *limbo) if (!txn_limbo_is_owner(limbo, instance_id)) return; + if (rlist_empty(&limbo->queue)) + return; + struct txn_limbo_entry *e; int64_t confirm_lsn = -1; rlist_foreach_entry(e, &limbo->queue, in_queue) { diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h index aaff444e4..33cacef8f 100644 --- a/src/box/txn_limbo.h +++ b/src/box/txn_limbo.h @@ -349,7 +349,6 @@ txn_limbo_process_begin(struct txn_limbo *limbo) static inline void txn_limbo_process_commit(struct txn_limbo *limbo) { - assert(latch_is_locked(&limbo->promote_latch)); latch_unlock(&limbo->promote_latch); } @@ -357,7 +356,6 @@ txn_limbo_process_commit(struct txn_limbo *limbo) static inline void txn_limbo_process_rollback(struct txn_limbo *limbo) { - assert(latch_is_locked(&limbo->promote_latch)); latch_unlock(&limbo->promote_latch); } diff --git a/test/replication/gh-6036-qsync-order.result b/test/replication/gh-6036-qsync-order.result index eb3e808cb..464a131a4 100644 --- a/test/replication/gh-6036-qsync-order.result +++ b/test/replication/gh-6036-qsync-order.result @@ -76,10 +76,6 @@ test_run:switch("election_replica2") | --- | - true | ... -test_run:wait_cond(function() return box.space.test:get{1} ~= nil end) - | --- - | - true - | ... box.cfg({ \ replication = { \ "unix/:./election_replica2.sock", \ @@ -106,6 +102,10 @@ test_run:switch("election_replica3") write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") | --- | ... +box.error.injection.set("ERRINJ_WAL_DELAY", true) + | --- + | - ok + | ... -- -- Make election_replica2 been a leader and start writting data, -- the PROMOTE request get queued on election_replica3 and not @@ -128,10 +128,6 @@ test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_C | --- | - true | ... -box.error.injection.set("ERRINJ_WAL_DELAY", true) - | --- - | - ok - | ... test_run:switch("election_replica2") | --- | - true diff --git a/test/replication/gh-6036-qsync-order.test.lua b/test/replication/gh-6036-qsync-order.test.lua index b8df170b8..6350e9303 100644 --- a/test/replication/gh-6036-qsync-order.test.lua +++ b/test/replication/gh-6036-qsync-order.test.lua @@ -37,7 +37,6 @@ box.cfg({ \ -- -- Drop connection between election_replica2 and election_replica1. test_run:switch("election_replica2") -test_run:wait_cond(function() return box.space.test:get{1} ~= nil end) box.cfg({ \ replication = { \ "unix/:./election_replica2.sock", \ @@ -57,6 +56,7 @@ box.cfg({ \ -- fall into forever sleep. 
test_run:switch("election_replica3") write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") +box.error.injection.set("ERRINJ_WAL_DELAY", true) -- -- Make election_replica2 been a leader and start writting data, -- the PROMOTE request get queued on election_replica3 and not @@ -68,7 +68,6 @@ test_run:switch("election_replica2") box.ctl.promote() test_run:switch("election_replica3") test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end) -box.error.injection.set("ERRINJ_WAL_DELAY", true) test_run:switch("election_replica2") _ = require('fiber').create(function() box.space.test:insert{2} end) ^ permalink raw reply [flat|nested] 10+ messages in thread
* [Tarantool-patches] [PATCH v22 1/3] latch: add latch_is_locked helper
  2021-10-11 19:16 [Tarantool-patches] [PATCH v22 0/3] qsync: implement packet filtering (part 1) Cyrill Gorcunov via Tarantool-patches
@ 2021-10-11 19:16 ` Cyrill Gorcunov via Tarantool-patches
  2021-10-11 19:16 ` [Tarantool-patches] [PATCH v22 2/3] qsync: order access to the limbo terms Cyrill Gorcunov via Tarantool-patches
  2021-10-11 19:16 ` [Tarantool-patches] [PATCH v22 3/3] test: add gh-6036-qsync-order test Cyrill Gorcunov via Tarantool-patches
  2 siblings, 0 replies; 10+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-10-11 19:16 UTC (permalink / raw)
  To: tml; +Cc: Vladislav Shpilevoy

Add a helper to test whether a latch is currently locked.

Part-of #6036

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
 src/lib/core/latch.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/src/lib/core/latch.h b/src/lib/core/latch.h
index 49c59cf63..0aaa8b634 100644
--- a/src/lib/core/latch.h
+++ b/src/lib/core/latch.h
@@ -95,6 +95,17 @@ latch_owner(struct latch *l)
 	return l->owner;
 }
 
+/**
+ * Return true if the latch is locked.
+ *
+ * @param l - latch to be tested.
+ */
+static inline bool
+latch_is_locked(const struct latch *l)
+{
+	return l->owner != NULL;
+}
+
 /**
  * Lock a latch. If the latch is already locked by another fiber,
  * waits for timeout.
-- 
2.31.1

^ permalink raw reply	[flat|nested] 10+ messages in thread
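Patch 2/3 below uses the new helper exactly for this purpose: txn_limbo_process_core() asserts latch_is_locked(&limbo->promote_latch) to document that the caller must already hold the promote latch. A minimal self-contained sketch of that idiom follows; the struct fiber/struct latch definitions here are simplified stand-ins for illustration, not the real ones from latch.h:

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the real fiber/latch types. */
struct fiber {
	int id;
};

struct latch {
	struct fiber *owner;
};

static inline bool
latch_is_locked(const struct latch *l)
{
	return l->owner != NULL;
}

/*
 * A function whose contract is "the caller must hold the latch" can
 * verify the precondition cheaply instead of relying on comments only.
 */
static void
update_guarded_state(struct latch *guard, int *state, int value)
{
	assert(latch_is_locked(guard));
	*state = value;
}

int
main(void)
{
	struct fiber self = { .id = 1 };
	struct latch guard = { .owner = &self };
	int state = 0;

	update_guarded_state(&guard, &state, 42);
	return state == 42 ? 0 : 1;
}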
* [Tarantool-patches] [PATCH v22 2/3] qsync: order access to the limbo terms
  2021-10-11 19:16 [Tarantool-patches] [PATCH v22 0/3] qsync: implement packet filtering (part 1) Cyrill Gorcunov via Tarantool-patches
  2021-10-11 19:16 ` [Tarantool-patches] [PATCH v22 1/3] latch: add latch_is_locked helper Cyrill Gorcunov via Tarantool-patches
@ 2021-10-11 19:16 ` Cyrill Gorcunov via Tarantool-patches
  2021-10-12  9:40 ` Serge Petrenko via Tarantool-patches
  2021-10-11 19:16 ` [Tarantool-patches] [PATCH v22 3/3] test: add gh-6036-qsync-order test Cyrill Gorcunov via Tarantool-patches
  2 siblings, 1 reply; 10+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-10-11 19:16 UTC (permalink / raw)
  To: tml; +Cc: Vladislav Shpilevoy

Limbo terms tracking is shared between appliers: while one applier is
waiting for a write to complete inside the journal_write() routine,
another may need to read the term value to figure out whether a promote
request is valid to apply. Due to cooperative multitasking access to
the terms is not consistent, so we need to be sure that other fibers
read up-to-date terms (i.e. already written to the WAL).

For this we use a latching mechanism: when one fiber takes the lock for
an update, other readers wait until the operation is complete.

For example, here is a call graph of two appliers:

applier 1
---------
applier_apply_tx
  (promote term = 3
   current max term = 2)
  applier_synchro_filter_tx
  apply_synchro_row
    journal_write
      (sleeping)

at this moment another applier comes in with obsolete data and term 2

applier 2
---------
applier_apply_tx
  (term 2)
  applier_synchro_filter_tx
    txn_limbo_is_replica_outdated -> false
  journal_write
    (sleep)

applier 1
---------
journal wakes up
  apply_synchro_row_cb
    set max term to 3

So applier 2 did not notice that term 3 has already been seen and wrote
obsolete data. With locking, applier 2 waits until applier 1 has
finished its write.

Also Serge Petrenko pointed out that we have a somewhat similar
situation with txn_limbo_ack() [we might try to write a confirm for an
entry while a new promote is in flight and not yet applied, so the
confirm might be invalid] and txn_limbo_on_parameters_change() [where
we might confirm entries with a reduced quorum number while we are not
even the limbo owner]. Thus we need to fix these problems as well.

We introduce the following helpers:

1) txn_limbo_process_begin: takes the lock
2) txn_limbo_process_commit and txn_limbo_process_rollback: simply
   release the lock but have different names for better semantics
3) txn_limbo_process: a general function which uses the x_begin and
   x_commit helpers internally
4) txn_limbo_process_core: does the real job of processing the request;
   it implies that txn_limbo_process_begin has been called
5) txn_limbo_ack() and txn_limbo_on_parameters_change(): both respect
   the current limbo owner via the promote latch
Part-of #6036 Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com> --- src/box/applier.cc | 12 ++++-- src/box/box.cc | 15 ++++--- src/box/relay.cc | 11 ++--- src/box/txn.c | 2 +- src/box/txn_limbo.c | 49 +++++++++++++++++++-- src/box/txn_limbo.h | 78 +++++++++++++++++++++++++++++++--- test/unit/snap_quorum_delay.cc | 5 ++- 7 files changed, 142 insertions(+), 30 deletions(-) diff --git a/src/box/applier.cc b/src/box/applier.cc index b981bd436..46c36e259 100644 --- a/src/box/applier.cc +++ b/src/box/applier.cc @@ -857,7 +857,7 @@ apply_synchro_row_cb(struct journal_entry *entry) applier_rollback_by_wal_io(entry->res); } else { replica_txn_wal_write_cb(synchro_entry->rcb); - txn_limbo_process(&txn_limbo, synchro_entry->req); + txn_limbo_process_core(&txn_limbo, synchro_entry->req); trigger_run(&replicaset.applier.on_wal_write, NULL); } fiber_wakeup(synchro_entry->owner); @@ -873,6 +873,8 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row) if (xrow_decode_synchro(row, &req) != 0) goto err; + txn_limbo_process_begin(&txn_limbo); + struct replica_cb_data rcb_data; struct synchro_entry entry; /* @@ -910,12 +912,16 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row) * transactions side, including the async ones. */ if (journal_write(&entry.base) != 0) - goto err; + goto err_rollback; if (entry.base.res < 0) { diag_set_journal_res(entry.base.res); - goto err; + goto err_rollback; } + txn_limbo_process_commit(&txn_limbo); return 0; + +err_rollback: + txn_limbo_process_rollback(&txn_limbo); err: diag_log(); return -1; diff --git a/src/box/box.cc b/src/box/box.cc index e082e1a3d..6a9be745a 100644 --- a/src/box/box.cc +++ b/src/box/box.cc @@ -1677,8 +1677,6 @@ box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn) struct raft *raft = box_raft(); assert(raft->volatile_term == raft->term); assert(promote_lsn >= 0); - txn_limbo_write_promote(&txn_limbo, promote_lsn, - raft->term); struct synchro_request req = { .type = IPROTO_RAFT_PROMOTE, .replica_id = prev_leader_id, @@ -1686,8 +1684,11 @@ box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn) .lsn = promote_lsn, .term = raft->term, }; - txn_limbo_process(&txn_limbo, &req); + txn_limbo_process_begin(&txn_limbo); + txn_limbo_write_promote(&txn_limbo, req.lsn, req.term); + txn_limbo_process_core(&txn_limbo, &req); assert(txn_limbo_is_empty(&txn_limbo)); + txn_limbo_process_commit(&txn_limbo); } /** A guard to block multiple simultaneous promote()/demote() invocations. 
*/ @@ -1699,8 +1700,6 @@ box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn) { assert(box_raft()->volatile_term == box_raft()->term); assert(promote_lsn >= 0); - txn_limbo_write_demote(&txn_limbo, promote_lsn, - box_raft()->term); struct synchro_request req = { .type = IPROTO_RAFT_DEMOTE, .replica_id = prev_leader_id, @@ -1708,8 +1707,12 @@ box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn) .lsn = promote_lsn, .term = box_raft()->term, }; - txn_limbo_process(&txn_limbo, &req); + txn_limbo_process_begin(&txn_limbo); + txn_limbo_write_demote(&txn_limbo, promote_lsn, + box_raft()->term); + txn_limbo_process_core(&txn_limbo, &req); assert(txn_limbo_is_empty(&txn_limbo)); + txn_limbo_process_commit(&txn_limbo); } int diff --git a/src/box/relay.cc b/src/box/relay.cc index f5852df7b..61ef1e3a5 100644 --- a/src/box/relay.cc +++ b/src/box/relay.cc @@ -545,15 +545,10 @@ tx_status_update(struct cmsg *msg) ack.vclock = &status->vclock; /* * Let pending synchronous transactions know, which of - * them were successfully sent to the replica. Acks are - * collected only by the transactions originator (which is - * the single master in 100% so far). Other instances wait - * for master's CONFIRM message instead. + * them were successfully sent to the replica. */ - if (txn_limbo.owner_id == instance_id) { - txn_limbo_ack(&txn_limbo, ack.source, - vclock_get(ack.vclock, instance_id)); - } + txn_limbo_ack(&txn_limbo, instance_id, ack.source, + vclock_get(ack.vclock, instance_id)); trigger_run(&replicaset.on_ack, &ack); static const struct cmsg_hop route[] = { diff --git a/src/box/txn.c b/src/box/txn.c index e7fc81683..06bb85a09 100644 --- a/src/box/txn.c +++ b/src/box/txn.c @@ -939,7 +939,7 @@ txn_commit(struct txn *txn) txn_limbo_assign_local_lsn(&txn_limbo, limbo_entry, lsn); /* Local WAL write is a first 'ACK'. */ - txn_limbo_ack(&txn_limbo, txn_limbo.owner_id, lsn); + txn_limbo_ack_self(&txn_limbo, lsn); } if (txn_limbo_wait_complete(&txn_limbo, limbo_entry) < 0) goto rollback; diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c index 70447caaf..8f9bc11c7 100644 --- a/src/box/txn_limbo.c +++ b/src/box/txn_limbo.c @@ -47,6 +47,7 @@ txn_limbo_create(struct txn_limbo *limbo) vclock_create(&limbo->vclock); vclock_create(&limbo->promote_term_map); limbo->promote_greatest_term = 0; + latch_create(&limbo->promote_latch); limbo->confirmed_lsn = 0; limbo->rollback_count = 0; limbo->is_in_rollback = false; @@ -542,10 +543,30 @@ txn_limbo_read_demote(struct txn_limbo *limbo, int64_t lsn) } void -txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn) +txn_limbo_ack(struct txn_limbo *limbo, uint32_t owner_id, + uint32_t replica_id, int64_t lsn) { + /* + * ACKs are collected only by the transactions originator + * (which is the single master in 100% so far). Other instances + * wait for master's CONFIRM message instead. + * + * Due to cooperative multitasking there might be limbo owner + * migration in-fly (while writing data to journal), so for + * simplicity sake the test for owner is done here instead + * of putting this check to the callers. + */ + if (!txn_limbo_is_owner(limbo, owner_id)) + return; + + /* + * Test for empty queue is done _after_ txn_limbo_is_owner + * call because we need to be sure that limbo is not been + * changed under our feets while we're reading it. + */ if (rlist_empty(&limbo->queue)) return; + /* * If limbo is currently writing a rollback, it means that the whole * queue will be rolled back. 
Because rollback is written only for @@ -724,11 +745,14 @@ txn_limbo_wait_empty(struct txn_limbo *limbo, double timeout) } void -txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req) +txn_limbo_process_core(struct txn_limbo *limbo, + const struct synchro_request *req) { + assert(latch_is_locked(&limbo->promote_latch)); + uint64_t term = req->term; uint32_t origin = req->origin_id; - if (txn_limbo_replica_term(limbo, origin) < term) { + if (vclock_get(&limbo->promote_term_map, origin) < (int64_t)term) { vclock_follow(&limbo->promote_term_map, origin, term); if (term > limbo->promote_greatest_term) limbo->promote_greatest_term = term; @@ -786,11 +810,30 @@ txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req) return; } +void +txn_limbo_process(struct txn_limbo *limbo, + const struct synchro_request *req) +{ + txn_limbo_process_begin(limbo); + txn_limbo_process_core(limbo, req); + txn_limbo_process_commit(limbo); +} + void txn_limbo_on_parameters_change(struct txn_limbo *limbo) { + /* + * In case if we're not current leader (ie not owning the + * limbo) then we should not confirm anything, otherwise + * we could reduce quorum number and start writing CONFIRM + * while leader node carries own maybe bigger quorum value. + */ + if (!txn_limbo_is_owner(limbo, instance_id)) + return; + if (rlist_empty(&limbo->queue)) return; + struct txn_limbo_entry *e; int64_t confirm_lsn = -1; rlist_foreach_entry(e, &limbo->queue, in_queue) { diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h index 53e52f676..33cacef8f 100644 --- a/src/box/txn_limbo.h +++ b/src/box/txn_limbo.h @@ -31,6 +31,7 @@ */ #include "small/rlist.h" #include "vclock/vclock.h" +#include "latch.h" #include <stdint.h> @@ -147,6 +148,10 @@ struct txn_limbo { * limbo and raft are in sync and the terms are the same. */ uint64_t promote_greatest_term; + /** + * To order access to the promote data. + */ + struct latch promote_latch; /** * Maximal LSN gathered quorum and either already confirmed in WAL, or * whose confirmation is in progress right now. Any attempt to confirm @@ -194,6 +199,23 @@ txn_limbo_is_empty(struct txn_limbo *limbo) return rlist_empty(&limbo->queue); } +/** + * Test if the \a owner_id is current limbo owner. + */ +static inline bool +txn_limbo_is_owner(struct txn_limbo *limbo, uint32_t owner_id) +{ + /* + * A guard needed to prevent race with in-fly promote + * packets which are sitting inside journal but not yet + * written. + */ + latch_lock(&limbo->promote_latch); + bool v = limbo->owner_id == owner_id; + latch_unlock(&limbo->promote_latch); + return v; +} + bool txn_limbo_is_ro(struct txn_limbo *limbo); @@ -216,9 +238,12 @@ txn_limbo_last_entry(struct txn_limbo *limbo) * @a replica_id. */ static inline uint64_t -txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id) +txn_limbo_replica_term(struct txn_limbo *limbo, uint32_t replica_id) { - return vclock_get(&limbo->promote_term_map, replica_id); + latch_lock(&limbo->promote_latch); + uint64_t v = vclock_get(&limbo->promote_term_map, replica_id); + latch_unlock(&limbo->promote_latch); + return v; } /** @@ -226,11 +251,14 @@ txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id) * data from it. The check is only valid when elections are enabled. 
*/ static inline bool -txn_limbo_is_replica_outdated(const struct txn_limbo *limbo, +txn_limbo_is_replica_outdated(struct txn_limbo *limbo, uint32_t replica_id) { - return txn_limbo_replica_term(limbo, replica_id) < - limbo->promote_greatest_term; + latch_lock(&limbo->promote_latch); + uint64_t v = vclock_get(&limbo->promote_term_map, replica_id); + bool res = v < limbo->promote_greatest_term; + latch_unlock(&limbo->promote_latch); + return res; } /** @@ -287,7 +315,15 @@ txn_limbo_assign_lsn(struct txn_limbo *limbo, struct txn_limbo_entry *entry, * replica with the specified ID. */ void -txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn); +txn_limbo_ack(struct txn_limbo *limbo, uint32_t owner_id, + uint32_t replica_id, int64_t lsn); + +static inline void +txn_limbo_ack_self(struct txn_limbo *limbo, int64_t lsn) +{ + return txn_limbo_ack(limbo, limbo->owner_id, + limbo->owner_id, lsn); +} /** * Block the current fiber until the transaction in the limbo @@ -300,7 +336,35 @@ txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn); int txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry); -/** Execute a synchronous replication request. */ +/** + * Initiate execution of a synchronous replication request. + */ +static inline void +txn_limbo_process_begin(struct txn_limbo *limbo) +{ + latch_lock(&limbo->promote_latch); +} + +/** Commit a synchronous replication request. */ +static inline void +txn_limbo_process_commit(struct txn_limbo *limbo) +{ + latch_unlock(&limbo->promote_latch); +} + +/** Rollback a synchronous replication request. */ +static inline void +txn_limbo_process_rollback(struct txn_limbo *limbo) +{ + latch_unlock(&limbo->promote_latch); +} + +/** Core of processing synchronous replication request. */ +void +txn_limbo_process_core(struct txn_limbo *limbo, + const struct synchro_request *req); + +/** Process a synchronous replication request. */ void txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req); diff --git a/test/unit/snap_quorum_delay.cc b/test/unit/snap_quorum_delay.cc index 803bbbea8..d43b4cd2c 100644 --- a/test/unit/snap_quorum_delay.cc +++ b/test/unit/snap_quorum_delay.cc @@ -130,7 +130,7 @@ txn_process_func(va_list ap) } txn_limbo_assign_local_lsn(&txn_limbo, entry, fake_lsn); - txn_limbo_ack(&txn_limbo, txn_limbo.owner_id, fake_lsn); + txn_limbo_ack_self(&txn_limbo, fake_lsn); txn_limbo_wait_complete(&txn_limbo, entry); switch (process_type) { @@ -157,7 +157,8 @@ txn_confirm_func(va_list ap) * inside gc_checkpoint(). */ fiber_sleep(0); - txn_limbo_ack(&txn_limbo, relay_id, fake_lsn); + txn_limbo_ack(&txn_limbo, txn_limbo.owner_id, + relay_id, fake_lsn); return 0; } -- 2.31.1 ^ permalink raw reply [flat|nested] 10+ messages in thread
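To summarize the life cycle this patch establishes around a synchro request: take the latch, write the request to the WAL, then release the latch through the commit or rollback helper depending on the write result. The following is a condensed sketch of the apply_synchro_row() hunk above, assuming the declarations from txn_limbo.h and journal.h are in scope; the synchro_entry setup and diagnostics are omitted:

static int
apply_synchro_request_sketch(struct txn_limbo *limbo,
			     struct journal_entry *entry)
{
	/*
	 * Take promote_latch. From now on readers such as
	 * txn_limbo_replica_term() and txn_limbo_is_replica_outdated()
	 * block until the request is fully processed.
	 */
	txn_limbo_process_begin(limbo);

	if (journal_write(entry) != 0 || entry->res < 0) {
		/*
		 * The WAL write failed, the limbo terms were not
		 * updated: just release the latch.
		 */
		txn_limbo_process_rollback(limbo);
		return -1;
	}

	/*
	 * On success the journal completion callback has already
	 * updated the terms via txn_limbo_process_core(), so the
	 * commit step only releases the latch.
	 */
	txn_limbo_process_commit(limbo);
	return 0;
}

The same begin/core/commit sequence is what box_issue_promote() and box_issue_demote() now follow, which is why the txn_limbo_write_promote()/txn_limbo_write_demote() calls moved under the latch in the box.cc hunks above.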
* Re: [Tarantool-patches] [PATCH v22 2/3] qsync: order access to the limbo terms 2021-10-11 19:16 ` [Tarantool-patches] [PATCH v22 2/3] qsync: order access to the limbo terms Cyrill Gorcunov via Tarantool-patches @ 2021-10-12 9:40 ` Serge Petrenko via Tarantool-patches 2021-10-12 20:26 ` Cyrill Gorcunov via Tarantool-patches 0 siblings, 1 reply; 10+ messages in thread From: Serge Petrenko via Tarantool-patches @ 2021-10-12 9:40 UTC (permalink / raw) To: Cyrill Gorcunov, tml; +Cc: Vladislav Shpilevoy 11.10.2021 22:16, Cyrill Gorcunov пишет: > Limbo terms tracking is shared between appliers and when > one of appliers is waiting for write to complete inside > journal_write() routine, an other may need to access read > term value to figure out if promote request is valid to > apply. Due to cooperative multitasking access to the terms > is not consistent so we need to be sure that other fibers > read up to date terms (ie written to the WAL). > > For this sake we use a latching mechanism, when one fiber > takes a lock for updating other readers are waiting until > the operation is complete. > > For example here is a call graph of two appliers Thanks for the changes! One final nit and we're good to go. > > applier 1 > --------- > applier_apply_tx > (promote term = 3 > current max term = 2) > applier_synchro_filter_tx > apply_synchro_row > journal_write > (sleeping) > > at this moment another applier comes in with obsolete > data and term 2 > > applier 2 > --------- > applier_apply_tx > (term 2) > applier_synchro_filter_tx > txn_limbo_is_replica_outdated -> false > journal_write (sleep) > > applier 1 > --------- > journal wakes up > apply_synchro_row_cb > set max term to 3 > > So the applier 2 didn't notice that term 3 is already seen > and wrote obsolete data. With locking the applier 2 will > wait until applier 1 has finished its write. > > Also Serge Petrenko pointed that we have somewhat similar situation > with txn_limbo_ack()[we might try to write confirm on entry while > new promote is in fly and not yet applied, so confirm might be invalid] > and txn_limbo_on_parameters_change() [where we might confirm entries > reducing quorum number while we even not a limbo owner]. Thus we need > to fix these problems as well. > > We introduce the following helpers: > > 1) txn_limbo_process_begin: which takes a lock > 2) txn_limbo_process_commit and txn_limbo_process_rollback > which simply release the lock but have different names > for better semantics > 3) txn_limbo_process is a general function which uses x_begin > and x_commit helper internally > 4) txn_limbo_process_core to do a real job over processing the > request, it implies that txn_limbo_process_begin been called > 5) txn_limbo_ack() and txn_limbo_on_parameters_change() both > respect current limbo owner via promote latch. 
> > Part-of #6036 > > Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com> > --- > src/box/applier.cc | 12 ++++-- > src/box/box.cc | 15 ++++--- > src/box/relay.cc | 11 ++--- > src/box/txn.c | 2 +- > src/box/txn_limbo.c | 49 +++++++++++++++++++-- > src/box/txn_limbo.h | 78 +++++++++++++++++++++++++++++++--- > test/unit/snap_quorum_delay.cc | 5 ++- > 7 files changed, 142 insertions(+), 30 deletions(-) > > diff --git a/src/box/applier.cc b/src/box/applier.cc > index b981bd436..46c36e259 100644 > --- a/src/box/applier.cc > +++ b/src/box/applier.cc > @@ -857,7 +857,7 @@ apply_synchro_row_cb(struct journal_entry *entry) > applier_rollback_by_wal_io(entry->res); > } else { > replica_txn_wal_write_cb(synchro_entry->rcb); > - txn_limbo_process(&txn_limbo, synchro_entry->req); > + txn_limbo_process_core(&txn_limbo, synchro_entry->req); > trigger_run(&replicaset.applier.on_wal_write, NULL); > } > fiber_wakeup(synchro_entry->owner); > @@ -873,6 +873,8 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row) > if (xrow_decode_synchro(row, &req) != 0) > goto err; > > + txn_limbo_process_begin(&txn_limbo); > + > struct replica_cb_data rcb_data; > struct synchro_entry entry; > /* > @@ -910,12 +912,16 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row) > * transactions side, including the async ones. > */ > if (journal_write(&entry.base) != 0) > - goto err; > + goto err_rollback; > if (entry.base.res < 0) { > diag_set_journal_res(entry.base.res); > - goto err; > + goto err_rollback; > } > + txn_limbo_process_commit(&txn_limbo); > return 0; > + > +err_rollback: > + txn_limbo_process_rollback(&txn_limbo); > err: > diag_log(); > return -1; > diff --git a/src/box/box.cc b/src/box/box.cc > index e082e1a3d..6a9be745a 100644 > --- a/src/box/box.cc > +++ b/src/box/box.cc > @@ -1677,8 +1677,6 @@ box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn) > struct raft *raft = box_raft(); > assert(raft->volatile_term == raft->term); > assert(promote_lsn >= 0); > - txn_limbo_write_promote(&txn_limbo, promote_lsn, > - raft->term); > struct synchro_request req = { > .type = IPROTO_RAFT_PROMOTE, > .replica_id = prev_leader_id, > @@ -1686,8 +1684,11 @@ box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn) > .lsn = promote_lsn, > .term = raft->term, > }; > - txn_limbo_process(&txn_limbo, &req); > + txn_limbo_process_begin(&txn_limbo); > + txn_limbo_write_promote(&txn_limbo, req.lsn, req.term); > + txn_limbo_process_core(&txn_limbo, &req); > assert(txn_limbo_is_empty(&txn_limbo)); > + txn_limbo_process_commit(&txn_limbo); > } > > /** A guard to block multiple simultaneous promote()/demote() invocations. 
*/ > @@ -1699,8 +1700,6 @@ box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn) > { > assert(box_raft()->volatile_term == box_raft()->term); > assert(promote_lsn >= 0); > - txn_limbo_write_demote(&txn_limbo, promote_lsn, > - box_raft()->term); > struct synchro_request req = { > .type = IPROTO_RAFT_DEMOTE, > .replica_id = prev_leader_id, > @@ -1708,8 +1707,12 @@ box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn) > .lsn = promote_lsn, > .term = box_raft()->term, > }; > - txn_limbo_process(&txn_limbo, &req); > + txn_limbo_process_begin(&txn_limbo); > + txn_limbo_write_demote(&txn_limbo, promote_lsn, > + box_raft()->term); > + txn_limbo_process_core(&txn_limbo, &req); > assert(txn_limbo_is_empty(&txn_limbo)); > + txn_limbo_process_commit(&txn_limbo); > } > > int > diff --git a/src/box/relay.cc b/src/box/relay.cc > index f5852df7b..61ef1e3a5 100644 > --- a/src/box/relay.cc > +++ b/src/box/relay.cc > @@ -545,15 +545,10 @@ tx_status_update(struct cmsg *msg) > ack.vclock = &status->vclock; > /* > * Let pending synchronous transactions know, which of > - * them were successfully sent to the replica. Acks are > - * collected only by the transactions originator (which is > - * the single master in 100% so far). Other instances wait > - * for master's CONFIRM message instead. > + * them were successfully sent to the replica. > */ > - if (txn_limbo.owner_id == instance_id) { > - txn_limbo_ack(&txn_limbo, ack.source, > - vclock_get(ack.vclock, instance_id)); > - } > + txn_limbo_ack(&txn_limbo, instance_id, ack.source, > + vclock_get(ack.vclock, instance_id)); > trigger_run(&replicaset.on_ack, &ack); > > static const struct cmsg_hop route[] = { > diff --git a/src/box/txn.c b/src/box/txn.c > index e7fc81683..06bb85a09 100644 > --- a/src/box/txn.c > +++ b/src/box/txn.c > @@ -939,7 +939,7 @@ txn_commit(struct txn *txn) > txn_limbo_assign_local_lsn(&txn_limbo, limbo_entry, > lsn); > /* Local WAL write is a first 'ACK'. */ > - txn_limbo_ack(&txn_limbo, txn_limbo.owner_id, lsn); > + txn_limbo_ack_self(&txn_limbo, lsn); > } > if (txn_limbo_wait_complete(&txn_limbo, limbo_entry) < 0) > goto rollback; > diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c > index 70447caaf..8f9bc11c7 100644 > --- a/src/box/txn_limbo.c > +++ b/src/box/txn_limbo.c > @@ -47,6 +47,7 @@ txn_limbo_create(struct txn_limbo *limbo) > vclock_create(&limbo->vclock); > vclock_create(&limbo->promote_term_map); > limbo->promote_greatest_term = 0; > + latch_create(&limbo->promote_latch); > limbo->confirmed_lsn = 0; > limbo->rollback_count = 0; > limbo->is_in_rollback = false; > @@ -542,10 +543,30 @@ txn_limbo_read_demote(struct txn_limbo *limbo, int64_t lsn) > } > > void > -txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn) > +txn_limbo_ack(struct txn_limbo *limbo, uint32_t owner_id, > + uint32_t replica_id, int64_t lsn) > { > + /* > + * ACKs are collected only by the transactions originator > + * (which is the single master in 100% so far). Other instances > + * wait for master's CONFIRM message instead. > + * > + * Due to cooperative multitasking there might be limbo owner > + * migration in-fly (while writing data to journal), so for > + * simplicity sake the test for owner is done here instead > + * of putting this check to the callers. > + */ > + if (!txn_limbo_is_owner(limbo, owner_id)) > + return; > + > + /* > + * Test for empty queue is done _after_ txn_limbo_is_owner > + * call because we need to be sure that limbo is not been > + * changed under our feets while we're reading it. 
feets -> feet. > + */ > if (rlist_empty(&limbo->queue)) > return; > + > /* > * If limbo is currently writing a rollback, it means that the whole > * queue will be rolled back. Because rollback is written only for > @@ -724,11 +745,14 @@ txn_limbo_wait_empty(struct txn_limbo *limbo, double timeout) > } > > void > -txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req) > +txn_limbo_process_core(struct txn_limbo *limbo, > + const struct synchro_request *req) > { > + assert(latch_is_locked(&limbo->promote_latch)); > + > uint64_t term = req->term; > uint32_t origin = req->origin_id; > - if (txn_limbo_replica_term(limbo, origin) < term) { > + if (vclock_get(&limbo->promote_term_map, origin) < (int64_t)term) { > vclock_follow(&limbo->promote_term_map, origin, term); > if (term > limbo->promote_greatest_term) > limbo->promote_greatest_term = term; > @@ -786,11 +810,30 @@ txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req) > return; > } > > +void > +txn_limbo_process(struct txn_limbo *limbo, > + const struct synchro_request *req) > +{ > + txn_limbo_process_begin(limbo); > + txn_limbo_process_core(limbo, req); > + txn_limbo_process_commit(limbo); > +} > + > void > txn_limbo_on_parameters_change(struct txn_limbo *limbo) > { > + /* > + * In case if we're not current leader (ie not owning the > + * limbo) then we should not confirm anything, otherwise > + * we could reduce quorum number and start writing CONFIRM > + * while leader node carries own maybe bigger quorum value. > + */ > + if (!txn_limbo_is_owner(limbo, instance_id)) > + return; > + > if (rlist_empty(&limbo->queue)) > return; > + > struct txn_limbo_entry *e; > int64_t confirm_lsn = -1; > rlist_foreach_entry(e, &limbo->queue, in_queue) { > diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h > index 53e52f676..33cacef8f 100644 > --- a/src/box/txn_limbo.h > +++ b/src/box/txn_limbo.h > @@ -31,6 +31,7 @@ > */ > #include "small/rlist.h" > #include "vclock/vclock.h" > +#include "latch.h" > > #include <stdint.h> > > @@ -147,6 +148,10 @@ struct txn_limbo { > * limbo and raft are in sync and the terms are the same. > */ > uint64_t promote_greatest_term; > + /** > + * To order access to the promote data. > + */ > + struct latch promote_latch; > /** > * Maximal LSN gathered quorum and either already confirmed in WAL, or > * whose confirmation is in progress right now. Any attempt to confirm > @@ -194,6 +199,23 @@ txn_limbo_is_empty(struct txn_limbo *limbo) > return rlist_empty(&limbo->queue); > } > > +/** > + * Test if the \a owner_id is current limbo owner. > + */ > +static inline bool > +txn_limbo_is_owner(struct txn_limbo *limbo, uint32_t owner_id) > +{ > + /* > + * A guard needed to prevent race with in-fly promote > + * packets which are sitting inside journal but not yet > + * written. > + */ > + latch_lock(&limbo->promote_latch); > + bool v = limbo->owner_id == owner_id; > + latch_unlock(&limbo->promote_latch); > + return v; > +} > + > bool > txn_limbo_is_ro(struct txn_limbo *limbo); > > @@ -216,9 +238,12 @@ txn_limbo_last_entry(struct txn_limbo *limbo) > * @a replica_id. 
> */ > static inline uint64_t > -txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id) > +txn_limbo_replica_term(struct txn_limbo *limbo, uint32_t replica_id) > { > - return vclock_get(&limbo->promote_term_map, replica_id); > + latch_lock(&limbo->promote_latch); > + uint64_t v = vclock_get(&limbo->promote_term_map, replica_id); > + latch_unlock(&limbo->promote_latch); > + return v; > } > > /** > @@ -226,11 +251,14 @@ txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id) > * data from it. The check is only valid when elections are enabled. > */ > static inline bool > -txn_limbo_is_replica_outdated(const struct txn_limbo *limbo, > +txn_limbo_is_replica_outdated(struct txn_limbo *limbo, > uint32_t replica_id) > { > - return txn_limbo_replica_term(limbo, replica_id) < > - limbo->promote_greatest_term; > + latch_lock(&limbo->promote_latch); > + uint64_t v = vclock_get(&limbo->promote_term_map, replica_id); > + bool res = v < limbo->promote_greatest_term; > + latch_unlock(&limbo->promote_latch); > + return res; > } > > /** > @@ -287,7 +315,15 @@ txn_limbo_assign_lsn(struct txn_limbo *limbo, struct txn_limbo_entry *entry, > * replica with the specified ID. > */ > void > -txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn); > +txn_limbo_ack(struct txn_limbo *limbo, uint32_t owner_id, > + uint32_t replica_id, int64_t lsn); > + > +static inline void > +txn_limbo_ack_self(struct txn_limbo *limbo, int64_t lsn) > +{ > + return txn_limbo_ack(limbo, limbo->owner_id, > + limbo->owner_id, lsn); > +} > > /** > * Block the current fiber until the transaction in the limbo > @@ -300,7 +336,35 @@ txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn); > int > txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry); > > -/** Execute a synchronous replication request. */ > +/** > + * Initiate execution of a synchronous replication request. > + */ > +static inline void > +txn_limbo_process_begin(struct txn_limbo *limbo) > +{ > + latch_lock(&limbo->promote_latch); > +} > + > +/** Commit a synchronous replication request. */ > +static inline void > +txn_limbo_process_commit(struct txn_limbo *limbo) > +{ > + latch_unlock(&limbo->promote_latch); > +} > + > +/** Rollback a synchronous replication request. */ > +static inline void > +txn_limbo_process_rollback(struct txn_limbo *limbo) > +{ > + latch_unlock(&limbo->promote_latch); > +} > + > +/** Core of processing synchronous replication request. */ > +void > +txn_limbo_process_core(struct txn_limbo *limbo, > + const struct synchro_request *req); > + > +/** Process a synchronous replication request. */ > void > txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req); > > diff --git a/test/unit/snap_quorum_delay.cc b/test/unit/snap_quorum_delay.cc > index 803bbbea8..d43b4cd2c 100644 > --- a/test/unit/snap_quorum_delay.cc > +++ b/test/unit/snap_quorum_delay.cc > @@ -130,7 +130,7 @@ txn_process_func(va_list ap) > } > > txn_limbo_assign_local_lsn(&txn_limbo, entry, fake_lsn); > - txn_limbo_ack(&txn_limbo, txn_limbo.owner_id, fake_lsn); > + txn_limbo_ack_self(&txn_limbo, fake_lsn); > txn_limbo_wait_complete(&txn_limbo, entry); > > switch (process_type) { > @@ -157,7 +157,8 @@ txn_confirm_func(va_list ap) > * inside gc_checkpoint(). 
> */ > fiber_sleep(0); > - txn_limbo_ack(&txn_limbo, relay_id, fake_lsn); > + txn_limbo_ack(&txn_limbo, txn_limbo.owner_id, > + relay_id, fake_lsn); > return 0; > } > -- Serge Petrenko ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [Tarantool-patches] [PATCH v22 2/3] qsync: order access to the limbo terms
  2021-10-12  9:40 ` Serge Petrenko via Tarantool-patches
@ 2021-10-12 20:26 ` Cyrill Gorcunov via Tarantool-patches
  2021-10-13  7:56 ` Serge Petrenko via Tarantool-patches
  0 siblings, 1 reply; 10+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-10-12 20:26 UTC (permalink / raw)
  To: Serge Petrenko; +Cc: tml, Vladislav Shpilevoy

On Tue, Oct 12, 2021 at 12:40:43PM +0300, Serge Petrenko wrote:
> > +	/*
> > +	 * Test for empty queue is done _after_ txn_limbo_is_owner
> > +	 * call because we need to be sure that limbo is not been
> > +	 * changed under our feets while we're reading it.
>
> feets -> feet.

Thanks! Force pushed an update

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: [Tarantool-patches] [PATCH v22 2/3] qsync: order access to the limbo terms
  2021-10-12 20:26 ` Cyrill Gorcunov via Tarantool-patches
@ 2021-10-13  7:56 ` Serge Petrenko via Tarantool-patches
  0 siblings, 0 replies; 10+ messages in thread
From: Serge Petrenko via Tarantool-patches @ 2021-10-13  7:56 UTC (permalink / raw)
  To: Cyrill Gorcunov; +Cc: tml, Vladislav Shpilevoy

12.10.2021 23:26, Cyrill Gorcunov wrote:
> On Tue, Oct 12, 2021 at 12:40:43PM +0300, Serge Petrenko wrote:
>>> +	/*
>>> +	 * Test for empty queue is done _after_ txn_limbo_is_owner
>>> +	 * call because we need to be sure that limbo is not been
>>> +	 * changed under our feets while we're reading it.
>> feets -> feet.
> Thanks! Force pushed an update

LGTM!

-- 
Serge Petrenko

^ permalink raw reply	[flat|nested] 10+ messages in thread
* [Tarantool-patches] [PATCH v22 3/3] test: add gh-6036-qsync-order test 2021-10-11 19:16 [Tarantool-patches] [PATCH v22 0/3] qsync: implement packet filtering (part 1) Cyrill Gorcunov via Tarantool-patches 2021-10-11 19:16 ` [Tarantool-patches] [PATCH v22 1/3] latch: add latch_is_locked helper Cyrill Gorcunov via Tarantool-patches 2021-10-11 19:16 ` [Tarantool-patches] [PATCH v22 2/3] qsync: order access to the limbo terms Cyrill Gorcunov via Tarantool-patches @ 2021-10-11 19:16 ` Cyrill Gorcunov via Tarantool-patches 2021-10-12 9:47 ` Serge Petrenko via Tarantool-patches 2 siblings, 1 reply; 10+ messages in thread From: Cyrill Gorcunov via Tarantool-patches @ 2021-10-11 19:16 UTC (permalink / raw) To: tml; +Cc: Vladislav Shpilevoy To test that promotion requests are handled only when appropriate write to WAL completes, because we update memory data before the write finishes. Note that without the patch "qsync: order access to the limbo terms" this test fires the assertion > tarantool: src/box/txn_limbo.c:481: txn_limbo_read_rollback: Assertion `e->txn->signature >= 0' failed. Part-of #6036 Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com> --- test/replication/gh-6036-qsync-order.result | 200 ++++++++++++++++++ test/replication/gh-6036-qsync-order.test.lua | 96 +++++++++ test/replication/suite.cfg | 1 + test/replication/suite.ini | 2 +- 4 files changed, 298 insertions(+), 1 deletion(-) create mode 100644 test/replication/gh-6036-qsync-order.result create mode 100644 test/replication/gh-6036-qsync-order.test.lua diff --git a/test/replication/gh-6036-qsync-order.result b/test/replication/gh-6036-qsync-order.result new file mode 100644 index 000000000..464a131a4 --- /dev/null +++ b/test/replication/gh-6036-qsync-order.result @@ -0,0 +1,200 @@ +-- test-run result file version 2 +-- +-- gh-6036: verify that terms are locked when we're inside journal +-- write routine, because parallel appliers may ignore the fact that +-- the term is updated already but not yet written leading to data +-- inconsistency. +-- +test_run = require('test_run').new() + | --- + | ... + +SERVERS={"election_replica1", "election_replica2", "election_replica3"} + | --- + | ... +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'}) + | --- + | ... +test_run:wait_fullmesh(SERVERS) + | --- + | ... + +-- +-- Create a synchro space on the master node and make +-- sure the write processed just fine. +test_run:switch("election_replica1") + | --- + | - true + | ... +box.ctl.promote() + | --- + | ... +s = box.schema.create_space('test', {is_sync = true}) + | --- + | ... +_ = s:create_index('pk') + | --- + | ... +s:insert{1} + | --- + | - [1] + | ... + +test_run:switch("election_replica2") + | --- + | - true + | ... +test_run:wait_lsn('election_replica2', 'election_replica1') + | --- + | ... + +test_run:switch("election_replica3") + | --- + | - true + | ... +test_run:wait_lsn('election_replica3', 'election_replica1') + | --- + | ... + +-- +-- Drop connection between election_replica1 and election_replica2. +test_run:switch("election_replica1") + | --- + | - true + | ... +box.cfg({ \ + replication = { \ + "unix/:./election_replica1.sock", \ + "unix/:./election_replica3.sock", \ + }, \ +}) + | --- + | ... +-- +-- Drop connection between election_replica2 and election_replica1. +test_run:switch("election_replica2") + | --- + | - true + | ... +box.cfg({ \ + replication = { \ + "unix/:./election_replica2.sock", \ + "unix/:./election_replica3.sock", \ + }, \ +}) + | --- + | ... 
+ +-- +-- Here we have the following scheme +-- +-- election_replica3 (will be delayed) +-- / \ +-- election_replica1 election_replica2 + +-- +-- Initiate disk delay in a bit tricky way: the next write will +-- fall into forever sleep. +test_run:switch("election_replica3") + | --- + | - true + | ... +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") + | --- + | ... +box.error.injection.set("ERRINJ_WAL_DELAY", true) + | --- + | - ok + | ... +-- +-- Make election_replica2 been a leader and start writting data, +-- the PROMOTE request get queued on election_replica3 and not +-- yet processed, same time INSERT won't complete either +-- waiting for PROMOTE completion first. Note that we +-- enter election_replica3 as well just to be sure the PROMOTE +-- reached it. +test_run:switch("election_replica2") + | --- + | - true + | ... +box.ctl.promote() + | --- + | ... +test_run:switch("election_replica3") + | --- + | - true + | ... +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end) + | --- + | - true + | ... +test_run:switch("election_replica2") + | --- + | - true + | ... +_ = require('fiber').create(function() box.space.test:insert{2} end) + | --- + | ... + +-- +-- The election_replica1 node has no clue that there is a new leader +-- and continue writing data with obsolete term. Since election_replica3 +-- is delayed now the INSERT won't proceed yet but get queued. +test_run:switch("election_replica1") + | --- + | - true + | ... +_ = require('fiber').create(function() box.space.test:insert{3} end) + | --- + | ... + +-- +-- Finally enable election_replica3 back. Make sure the data from new election_replica2 +-- leader get writing while old leader's data ignored. +test_run:switch("election_replica3") + | --- + | - true + | ... +box.error.injection.set('ERRINJ_WAL_DELAY', false) + | --- + | - ok + | ... +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end) + | --- + | - true + | ... +box.space.test:select{} + | --- + | - - [1] + | - [2] + | ... + +test_run:switch("default") + | --- + | - true + | ... +test_run:cmd('stop server election_replica1') + | --- + | - true + | ... +test_run:cmd('stop server election_replica2') + | --- + | - true + | ... +test_run:cmd('stop server election_replica3') + | --- + | - true + | ... + +test_run:cmd('delete server election_replica1') + | --- + | - true + | ... +test_run:cmd('delete server election_replica2') + | --- + | - true + | ... +test_run:cmd('delete server election_replica3') + | --- + | - true + | ... diff --git a/test/replication/gh-6036-qsync-order.test.lua b/test/replication/gh-6036-qsync-order.test.lua new file mode 100644 index 000000000..6350e9303 --- /dev/null +++ b/test/replication/gh-6036-qsync-order.test.lua @@ -0,0 +1,96 @@ +-- +-- gh-6036: verify that terms are locked when we're inside journal +-- write routine, because parallel appliers may ignore the fact that +-- the term is updated already but not yet written leading to data +-- inconsistency. +-- +test_run = require('test_run').new() + +SERVERS={"election_replica1", "election_replica2", "election_replica3"} +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'}) +test_run:wait_fullmesh(SERVERS) + +-- +-- Create a synchro space on the master node and make +-- sure the write processed just fine. 
+test_run:switch("election_replica1") +box.ctl.promote() +s = box.schema.create_space('test', {is_sync = true}) +_ = s:create_index('pk') +s:insert{1} + +test_run:switch("election_replica2") +test_run:wait_lsn('election_replica2', 'election_replica1') + +test_run:switch("election_replica3") +test_run:wait_lsn('election_replica3', 'election_replica1') + +-- +-- Drop connection between election_replica1 and election_replica2. +test_run:switch("election_replica1") +box.cfg({ \ + replication = { \ + "unix/:./election_replica1.sock", \ + "unix/:./election_replica3.sock", \ + }, \ +}) +-- +-- Drop connection between election_replica2 and election_replica1. +test_run:switch("election_replica2") +box.cfg({ \ + replication = { \ + "unix/:./election_replica2.sock", \ + "unix/:./election_replica3.sock", \ + }, \ +}) + +-- +-- Here we have the following scheme +-- +-- election_replica3 (will be delayed) +-- / \ +-- election_replica1 election_replica2 + +-- +-- Initiate disk delay in a bit tricky way: the next write will +-- fall into forever sleep. +test_run:switch("election_replica3") +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") +box.error.injection.set("ERRINJ_WAL_DELAY", true) +-- +-- Make election_replica2 been a leader and start writting data, +-- the PROMOTE request get queued on election_replica3 and not +-- yet processed, same time INSERT won't complete either +-- waiting for PROMOTE completion first. Note that we +-- enter election_replica3 as well just to be sure the PROMOTE +-- reached it. +test_run:switch("election_replica2") +box.ctl.promote() +test_run:switch("election_replica3") +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end) +test_run:switch("election_replica2") +_ = require('fiber').create(function() box.space.test:insert{2} end) + +-- +-- The election_replica1 node has no clue that there is a new leader +-- and continue writing data with obsolete term. Since election_replica3 +-- is delayed now the INSERT won't proceed yet but get queued. +test_run:switch("election_replica1") +_ = require('fiber').create(function() box.space.test:insert{3} end) + +-- +-- Finally enable election_replica3 back. Make sure the data from new election_replica2 +-- leader get writing while old leader's data ignored. 
+test_run:switch("election_replica3") +box.error.injection.set('ERRINJ_WAL_DELAY', false) +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end) +box.space.test:select{} + +test_run:switch("default") +test_run:cmd('stop server election_replica1') +test_run:cmd('stop server election_replica2') +test_run:cmd('stop server election_replica3') + +test_run:cmd('delete server election_replica1') +test_run:cmd('delete server election_replica2') +test_run:cmd('delete server election_replica3') diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg index 3eee0803c..ed09b2087 100644 --- a/test/replication/suite.cfg +++ b/test/replication/suite.cfg @@ -59,6 +59,7 @@ "gh-6094-rs-uuid-mismatch.test.lua": {}, "gh-6127-election-join-new.test.lua": {}, "gh-6035-applier-filter.test.lua": {}, + "gh-6036-qsync-order.test.lua": {}, "election-candidate-promote.test.lua": {}, "*": { "memtx": {"engine": "memtx"}, diff --git a/test/replication/suite.ini b/test/replication/suite.ini index 77eb95f49..080e4fbf4 100644 --- a/test/replication/suite.ini +++ b/test/replication/suite.ini @@ -3,7 +3,7 @@ core = tarantool script = master.lua description = tarantool/box, replication disabled = consistent.test.lua -release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua +release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua gh-6036-qsync-order.test.lua config = suite.cfg lua_libs = lua/fast_replica.lua lua/rlimit.lua use_unix_sockets = True -- 2.31.1 ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [Tarantool-patches] [PATCH v22 3/3] test: add gh-6036-qsync-order test 2021-10-11 19:16 ` [Tarantool-patches] [PATCH v22 3/3] test: add gh-6036-qsync-order test Cyrill Gorcunov via Tarantool-patches @ 2021-10-12 9:47 ` Serge Petrenko via Tarantool-patches 2021-10-12 20:28 ` Cyrill Gorcunov via Tarantool-patches 0 siblings, 1 reply; 10+ messages in thread From: Serge Petrenko via Tarantool-patches @ 2021-10-12 9:47 UTC (permalink / raw) To: Cyrill Gorcunov, tml; +Cc: Vladislav Shpilevoy 11.10.2021 22:16, Cyrill Gorcunov пишет: > To test that promotion requests are handled only when appropriate > write to WAL completes, because we update memory data before the > write finishes. > > Note that without the patch "qsync: order access to the limbo terms" > this test fires the assertion > >> tarantool: src/box/txn_limbo.c:481: txn_limbo_read_rollback: Assertion `e->txn->signature >= 0' failed. Thanks for the changes! Sorry for the nitpicking, there are just a couple of minor comments left. > Part-of #6036 > > Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com> > --- > test/replication/gh-6036-qsync-order.result | 200 ++++++++++++++++++ > test/replication/gh-6036-qsync-order.test.lua | 96 +++++++++ > test/replication/suite.cfg | 1 + > test/replication/suite.ini | 2 +- > 4 files changed, 298 insertions(+), 1 deletion(-) > create mode 100644 test/replication/gh-6036-qsync-order.result > create mode 100644 test/replication/gh-6036-qsync-order.test.lua > > diff --git a/test/replication/gh-6036-qsync-order.result b/test/replication/gh-6036-qsync-order.result > new file mode 100644 > index 000000000..464a131a4 > --- /dev/null > +++ b/test/replication/gh-6036-qsync-order.result > @@ -0,0 +1,200 @@ > +-- test-run result file version 2 > +-- > +-- gh-6036: verify that terms are locked when we're inside journal > +-- write routine, because parallel appliers may ignore the fact that > +-- the term is updated already but not yet written leading to data > +-- inconsistency. > +-- > +test_run = require('test_run').new() > + | --- > + | ... > + > +SERVERS={"election_replica1", "election_replica2", "election_replica3"} > + | --- > + | ... > +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'}) > + | --- > + | ... > +test_run:wait_fullmesh(SERVERS) > + | --- > + | ... > + > +-- > +-- Create a synchro space on the master node and make > +-- sure the write processed just fine. > +test_run:switch("election_replica1") > + | --- > + | - true > + | ... > +box.ctl.promote() > + | --- > + | ... > +s = box.schema.create_space('test', {is_sync = true}) > + | --- > + | ... > +_ = s:create_index('pk') > + | --- > + | ... > +s:insert{1} > + | --- > + | - [1] > + | ... > + > +test_run:switch("election_replica2") > + | --- > + | - true > + | ... > +test_run:wait_lsn('election_replica2', 'election_replica1') > + | --- > + | ... > + > +test_run:switch("election_replica3") > + | --- > + | - true > + | ... > +test_run:wait_lsn('election_replica3', 'election_replica1') > + | --- > + | ... > + No need to switch to 'election_replica2' or 'election_replica3' before doing a 'wait_lsn'. You may remain on 'election_replica1' and drop the 2 switches. > +-- > +-- Drop connection between election_replica1 and election_replica2. > +test_run:switch("election_replica1") This switch may be dropped as well after you drop the previous 2. > + | --- > + | - true > + | ... > +box.cfg({ \ > + replication = { \ > + "unix/:./election_replica1.sock", \ > + "unix/:./election_replica3.sock", \ > + }, \ > +}) > + | --- > + | ... 
> +-- > +-- Drop connection between election_replica2 and election_replica1. > +test_run:switch("election_replica2") > + | --- > + | - true > + | ... > +box.cfg({ \ > + replication = { \ > + "unix/:./election_replica2.sock", \ > + "unix/:./election_replica3.sock", \ > + }, \ > +}) > + | --- > + | ... > + > +-- > +-- Here we have the following scheme > +-- > +-- election_replica3 (will be delayed) > +-- / \ > +-- election_replica1 election_replica2 > + > +-- > +-- Initiate disk delay in a bit tricky way: the next write will > +-- fall into forever sleep. > +test_run:switch("election_replica3") > + | --- > + | - true > + | ... > +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > + | --- > + | ... > +box.error.injection.set("ERRINJ_WAL_DELAY", true) > + | --- > + | - ok > + | ... > +-- > +-- Make election_replica2 been a leader and start writting data, > +-- the PROMOTE request get queued on election_replica3 and not > +-- yet processed, same time INSERT won't complete either > +-- waiting for PROMOTE completion first. Note that we > +-- enter election_replica3 as well just to be sure the PROMOTE > +-- reached it. > +test_run:switch("election_replica2") > + | --- > + | - true > + | ... > +box.ctl.promote() > + | --- > + | ... > +test_run:switch("election_replica3") > + | --- > + | - true > + | ... > +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end) > + | --- > + | - true > + | ... > +test_run:switch("election_replica2") > + | --- > + | - true > + | ... > +_ = require('fiber').create(function() box.space.test:insert{2} end) By the way, why do you need a fiber for that? synchro_quorum is 1, as I remember, so the insert shouldn't block even without the fiber. > + | --- > + | ... > + > +-- > +-- The election_replica1 node has no clue that there is a new leader > +-- and continue writing data with obsolete term. Since election_replica3 > +-- is delayed now the INSERT won't proceed yet but get queued. > +test_run:switch("election_replica1") > + | --- > + | - true > + | ... > +_ = require('fiber').create(function() box.space.test:insert{3} end) > + | --- > + | ... > + Same as above, looks like you don't need a fiber here. Am I wrong? > +-- > +-- Finally enable election_replica3 back. Make sure the data from new election_replica2 > +-- leader get writing while old leader's data ignored. > +test_run:switch("election_replica3") > + | --- > + | - true > + | ... Please, prior to enabling replica3, make sure it's received everything: ERRINJ_WAL_WRITE_COUNT == write_cnt + 3 > +box.error.injection.set('ERRINJ_WAL_DELAY', false) > + | --- > + | - ok > + | ... > +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end) > + | --- > + | - true > + | ... > +box.space.test:select{} > + | --- > + | - - [1] > + | - [2] > + | ... > + > +test_run:switch("default") > + | --- > + | - true > + | ... > +test_run:cmd('stop server election_replica1') > + | --- > + | - true > + | ... > +test_run:cmd('stop server election_replica2') > + | --- > + | - true > + | ... > +test_run:cmd('stop server election_replica3') > + | --- > + | - true > + | ... > + > +test_run:cmd('delete server election_replica1') > + | --- > + | - true > + | ... > +test_run:cmd('delete server election_replica2') > + | --- > + | - true > + | ... > +test_run:cmd('delete server election_replica3') > + | --- > + | - true > + | ... 
> diff --git a/test/replication/gh-6036-qsync-order.test.lua b/test/replication/gh-6036-qsync-order.test.lua > new file mode 100644 > index 000000000..6350e9303 > --- /dev/null > +++ b/test/replication/gh-6036-qsync-order.test.lua > @@ -0,0 +1,96 @@ > +-- > +-- gh-6036: verify that terms are locked when we're inside journal > +-- write routine, because parallel appliers may ignore the fact that > +-- the term is updated already but not yet written leading to data > +-- inconsistency. > +-- > +test_run = require('test_run').new() > + > +SERVERS={"election_replica1", "election_replica2", "election_replica3"} > +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'}) > +test_run:wait_fullmesh(SERVERS) > + > +-- > +-- Create a synchro space on the master node and make > +-- sure the write processed just fine. > +test_run:switch("election_replica1") > +box.ctl.promote() > +s = box.schema.create_space('test', {is_sync = true}) > +_ = s:create_index('pk') > +s:insert{1} > + > +test_run:switch("election_replica2") > +test_run:wait_lsn('election_replica2', 'election_replica1') > + > +test_run:switch("election_replica3") > +test_run:wait_lsn('election_replica3', 'election_replica1') > + > +-- > +-- Drop connection between election_replica1 and election_replica2. > +test_run:switch("election_replica1") > +box.cfg({ \ > + replication = { \ > + "unix/:./election_replica1.sock", \ > + "unix/:./election_replica3.sock", \ > + }, \ > +}) > +-- > +-- Drop connection between election_replica2 and election_replica1. > +test_run:switch("election_replica2") > +box.cfg({ \ > + replication = { \ > + "unix/:./election_replica2.sock", \ > + "unix/:./election_replica3.sock", \ > + }, \ > +}) > + > +-- > +-- Here we have the following scheme > +-- > +-- election_replica3 (will be delayed) > +-- / \ > +-- election_replica1 election_replica2 > + > +-- > +-- Initiate disk delay in a bit tricky way: the next write will > +-- fall into forever sleep. > +test_run:switch("election_replica3") > +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > +box.error.injection.set("ERRINJ_WAL_DELAY", true) > +-- > +-- Make election_replica2 been a leader and start writting data, > +-- the PROMOTE request get queued on election_replica3 and not > +-- yet processed, same time INSERT won't complete either > +-- waiting for PROMOTE completion first. Note that we > +-- enter election_replica3 as well just to be sure the PROMOTE > +-- reached it. > +test_run:switch("election_replica2") > +box.ctl.promote() > +test_run:switch("election_replica3") > +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end) > +test_run:switch("election_replica2") > +_ = require('fiber').create(function() box.space.test:insert{2} end) > + > +-- > +-- The election_replica1 node has no clue that there is a new leader > +-- and continue writing data with obsolete term. Since election_replica3 > +-- is delayed now the INSERT won't proceed yet but get queued. > +test_run:switch("election_replica1") > +_ = require('fiber').create(function() box.space.test:insert{3} end) > + > +-- > +-- Finally enable election_replica3 back. Make sure the data from new election_replica2 > +-- leader get writing while old leader's data ignored. 
> +test_run:switch("election_replica3") > +box.error.injection.set('ERRINJ_WAL_DELAY', false) > +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end) > +box.space.test:select{} > + > +test_run:switch("default") > +test_run:cmd('stop server election_replica1') > +test_run:cmd('stop server election_replica2') > +test_run:cmd('stop server election_replica3') > + > +test_run:cmd('delete server election_replica1') > +test_run:cmd('delete server election_replica2') > +test_run:cmd('delete server election_replica3') > diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg > index 3eee0803c..ed09b2087 100644 > --- a/test/replication/suite.cfg > +++ b/test/replication/suite.cfg > @@ -59,6 +59,7 @@ > "gh-6094-rs-uuid-mismatch.test.lua": {}, > "gh-6127-election-join-new.test.lua": {}, > "gh-6035-applier-filter.test.lua": {}, > + "gh-6036-qsync-order.test.lua": {}, > "election-candidate-promote.test.lua": {}, > "*": { > "memtx": {"engine": "memtx"}, > diff --git a/test/replication/suite.ini b/test/replication/suite.ini > index 77eb95f49..080e4fbf4 100644 > --- a/test/replication/suite.ini > +++ b/test/replication/suite.ini > @@ -3,7 +3,7 @@ core = tarantool > script = master.lua > description = tarantool/box, replication > disabled = consistent.test.lua > -release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua > +release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua gh-6036-qsync-order.test.lua > config = suite.cfg > lua_libs = lua/fast_replica.lua lua/rlimit.lua > use_unix_sockets = True -- Serge Petrenko ^ permalink raw reply [flat|nested] 10+ messages in thread
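A note on the fiber question raised above, with a minimal sketch (not part of the patch): assuming replication_synchro_quorum is 1 for this cluster, as the review says, a synchronous insert is confirmed by the originator's own WAL write, so it returns even while election_replica3 keeps its WAL delayed and only queues the replicated row. No wrapper fiber should be needed:

-- Sketch only; assumes synchro_quorum == 1, so the call below is not
-- blocked by the delayed election_replica3.
test_run:switch("election_replica2")
box.space.test:insert{2}  -- returns [2] right away

The same reasoning would apply to the insert{3} issued from election_replica1 under the old term.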
* Re: [Tarantool-patches] [PATCH v22 3/3] test: add gh-6036-qsync-order test 2021-10-12 9:47 ` Serge Petrenko via Tarantool-patches @ 2021-10-12 20:28 ` Cyrill Gorcunov via Tarantool-patches 2021-10-13 8:20 ` Serge Petrenko via Tarantool-patches 0 siblings, 1 reply; 10+ messages in thread From: Cyrill Gorcunov via Tarantool-patches @ 2021-10-12 20:28 UTC (permalink / raw) To: Serge Petrenko; +Cc: tml, Vladislav Shpilevoy On Tue, Oct 12, 2021 at 12:47:06PM +0300, Serge Petrenko wrote: > > > 11.10.2021 22:16, Cyrill Gorcunov пишет: > > To test that promotion requests are handled only when appropriate > > write to WAL completes, because we update memory data before the > > write finishes. > > > > Note that without the patch "qsync: order access to the limbo terms" > > this test fires the assertion > > > > > tarantool: src/box/txn_limbo.c:481: txn_limbo_read_rollback: Assertion `e->txn->signature >= 0' failed. > > Thanks for the changes! > Sorry for the nitpicking, there are just a couple of minor comments left. Thanks! Here is force-pushed variant. Please take a look, hopefully I counted number of writes correctly, lets wait for CI results. --- From 49f05ca2b31512b6555aecf1bb4d3ac1ce59729a Mon Sep 17 00:00:00 2001 From: Cyrill Gorcunov <gorcunov@gmail.com> Date: Mon, 20 Sep 2021 17:22:38 +0300 Subject: [PATCH v22 3/3] test: add gh-6036-qsync-order test To test that promotion requests are handled only when appropriate write to WAL completes, because we update memory data before the write finishes. Note that without the patch "qsync: order access to the limbo terms" this test fires the assertion > tarantool: src/box/txn_limbo.c:481: txn_limbo_read_rollback: Assertion `e->txn->signature >= 0' failed. Part-of #6036 Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com> --- test/replication/gh-6036-qsync-order.result | 194 ++++++++++++++++++ test/replication/gh-6036-qsync-order.test.lua | 94 +++++++++ test/replication/suite.cfg | 1 + test/replication/suite.ini | 2 +- 4 files changed, 290 insertions(+), 1 deletion(-) create mode 100644 test/replication/gh-6036-qsync-order.result create mode 100644 test/replication/gh-6036-qsync-order.test.lua diff --git a/test/replication/gh-6036-qsync-order.result b/test/replication/gh-6036-qsync-order.result new file mode 100644 index 000000000..0e93d429b --- /dev/null +++ b/test/replication/gh-6036-qsync-order.result @@ -0,0 +1,194 @@ +-- test-run result file version 2 +-- +-- gh-6036: verify that terms are locked when we're inside journal +-- write routine, because parallel appliers may ignore the fact that +-- the term is updated already but not yet written leading to data +-- inconsistency. +-- +test_run = require('test_run').new() + | --- + | ... + +SERVERS={"election_replica1", "election_replica2", "election_replica3"} + | --- + | ... +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'}) + | --- + | ... +test_run:wait_fullmesh(SERVERS) + | --- + | ... + +-- +-- Create a synchro space on the master node and make +-- sure the write processed just fine. +test_run:switch("election_replica1") + | --- + | - true + | ... +box.ctl.promote() + | --- + | ... +s = box.schema.create_space('test', {is_sync = true}) + | --- + | ... +_ = s:create_index('pk') + | --- + | ... +s:insert{1} + | --- + | - [1] + | ... + +test_run:wait_lsn('election_replica2', 'election_replica1') + | --- + | ... +test_run:wait_lsn('election_replica3', 'election_replica1') + | --- + | ... + +-- +-- Drop connection between election_replica1 and election_replica2. 
+box.cfg({ \ + replication = { \ + "unix/:./election_replica1.sock", \ + "unix/:./election_replica3.sock", \ + }, \ +}) + | --- + | ... + +-- +-- Drop connection between election_replica2 and election_replica1. +test_run:switch("election_replica2") + | --- + | - true + | ... +box.cfg({ \ + replication = { \ + "unix/:./election_replica2.sock", \ + "unix/:./election_replica3.sock", \ + }, \ +}) + | --- + | ... + +-- +-- Here we have the following scheme +-- +-- election_replica3 (will be delayed) +-- / \ +-- election_replica1 election_replica2 + +-- +-- Initiate disk delay in a bit tricky way: the next write will +-- fall into forever sleep. +test_run:switch("election_replica3") + | --- + | - true + | ... +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") + | --- + | ... +box.error.injection.set("ERRINJ_WAL_DELAY", true) + | --- + | - ok + | ... +-- +-- Make election_replica2 been a leader and start writting data, +-- the PROMOTE request get queued on election_replica3 and not +-- yet processed, same time INSERT won't complete either +-- waiting for PROMOTE completion first. Note that we +-- enter election_replica3 as well just to be sure the PROMOTE +-- reached it. +test_run:switch("election_replica2") + | --- + | - true + | ... +box.ctl.promote() + | --- + | ... +test_run:switch("election_replica3") + | --- + | - true + | ... +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end) + | --- + | - true + | ... +test_run:switch("election_replica2") + | --- + | - true + | ... +box.space.test:insert{2} + | --- + | - [2] + | ... + +-- +-- The election_replica1 node has no clue that there is a new leader +-- and continue writing data with obsolete term. Since election_replica3 +-- is delayed now the INSERT won't proceed yet but get queued. +test_run:switch("election_replica1") + | --- + | - true + | ... +box.space.test:insert{3} + | --- + | - [3] + | ... + +-- +-- Finally enable election_replica3 back. Make sure the data from new election_replica2 +-- leader get writing while old leader's data ignored. +test_run:switch("election_replica3") + | --- + | - true + | ... +assert(box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") == write_cnt + 2) + | --- + | - true + | ... +box.error.injection.set('ERRINJ_WAL_DELAY', false) + | --- + | - ok + | ... +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end) + | --- + | - true + | ... +box.space.test:select{} + | --- + | - - [1] + | - [2] + | ... + +test_run:switch("default") + | --- + | - true + | ... +test_run:cmd('stop server election_replica1') + | --- + | - true + | ... +test_run:cmd('stop server election_replica2') + | --- + | - true + | ... +test_run:cmd('stop server election_replica3') + | --- + | - true + | ... + +test_run:cmd('delete server election_replica1') + | --- + | - true + | ... +test_run:cmd('delete server election_replica2') + | --- + | - true + | ... +test_run:cmd('delete server election_replica3') + | --- + | - true + | ... diff --git a/test/replication/gh-6036-qsync-order.test.lua b/test/replication/gh-6036-qsync-order.test.lua new file mode 100644 index 000000000..20030161e --- /dev/null +++ b/test/replication/gh-6036-qsync-order.test.lua @@ -0,0 +1,94 @@ +-- +-- gh-6036: verify that terms are locked when we're inside journal +-- write routine, because parallel appliers may ignore the fact that +-- the term is updated already but not yet written leading to data +-- inconsistency. 
+-- +test_run = require('test_run').new() + +SERVERS={"election_replica1", "election_replica2", "election_replica3"} +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'}) +test_run:wait_fullmesh(SERVERS) + +-- +-- Create a synchro space on the master node and make +-- sure the write processed just fine. +test_run:switch("election_replica1") +box.ctl.promote() +s = box.schema.create_space('test', {is_sync = true}) +_ = s:create_index('pk') +s:insert{1} + +test_run:wait_lsn('election_replica2', 'election_replica1') +test_run:wait_lsn('election_replica3', 'election_replica1') + +-- +-- Drop connection between election_replica1 and election_replica2. +box.cfg({ \ + replication = { \ + "unix/:./election_replica1.sock", \ + "unix/:./election_replica3.sock", \ + }, \ +}) + +-- +-- Drop connection between election_replica2 and election_replica1. +test_run:switch("election_replica2") +box.cfg({ \ + replication = { \ + "unix/:./election_replica2.sock", \ + "unix/:./election_replica3.sock", \ + }, \ +}) + +-- +-- Here we have the following scheme +-- +-- election_replica3 (will be delayed) +-- / \ +-- election_replica1 election_replica2 + +-- +-- Initiate disk delay in a bit tricky way: the next write will +-- fall into forever sleep. +test_run:switch("election_replica3") +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") +box.error.injection.set("ERRINJ_WAL_DELAY", true) +-- +-- Make election_replica2 been a leader and start writting data, +-- the PROMOTE request get queued on election_replica3 and not +-- yet processed, same time INSERT won't complete either +-- waiting for PROMOTE completion first. Note that we +-- enter election_replica3 as well just to be sure the PROMOTE +-- reached it. +test_run:switch("election_replica2") +box.ctl.promote() +test_run:switch("election_replica3") +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end) +test_run:switch("election_replica2") +box.space.test:insert{2} + +-- +-- The election_replica1 node has no clue that there is a new leader +-- and continue writing data with obsolete term. Since election_replica3 +-- is delayed now the INSERT won't proceed yet but get queued. +test_run:switch("election_replica1") +box.space.test:insert{3} + +-- +-- Finally enable election_replica3 back. Make sure the data from new election_replica2 +-- leader get writing while old leader's data ignored. 
+test_run:switch("election_replica3") +assert(box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") == write_cnt + 2) +box.error.injection.set('ERRINJ_WAL_DELAY', false) +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end) +box.space.test:select{} + +test_run:switch("default") +test_run:cmd('stop server election_replica1') +test_run:cmd('stop server election_replica2') +test_run:cmd('stop server election_replica3') + +test_run:cmd('delete server election_replica1') +test_run:cmd('delete server election_replica2') +test_run:cmd('delete server election_replica3') diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg index 3eee0803c..ed09b2087 100644 --- a/test/replication/suite.cfg +++ b/test/replication/suite.cfg @@ -59,6 +59,7 @@ "gh-6094-rs-uuid-mismatch.test.lua": {}, "gh-6127-election-join-new.test.lua": {}, "gh-6035-applier-filter.test.lua": {}, + "gh-6036-qsync-order.test.lua": {}, "election-candidate-promote.test.lua": {}, "*": { "memtx": {"engine": "memtx"}, diff --git a/test/replication/suite.ini b/test/replication/suite.ini index 77eb95f49..080e4fbf4 100644 --- a/test/replication/suite.ini +++ b/test/replication/suite.ini @@ -3,7 +3,7 @@ core = tarantool script = master.lua description = tarantool/box, replication disabled = consistent.test.lua -release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua +release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua gh-6036-qsync-order.test.lua config = suite.cfg lua_libs = lua/fast_replica.lua lua/rlimit.lua use_unix_sockets = True -- 2.31.1 ^ permalink raw reply [flat|nested] 10+ messages in thread
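The test above leans on one error-injection idiom that is worth spelling out: the WAL write counter is sampled before ERRINJ_WAL_DELAY is installed, and the test then waits for the counter to grow, which is how it detects that the PROMOTE reached election_replica3 even though the delay keeps the row from actually being written. A sketch of that pattern with hypothetical helper names (plain Lua; inside a .test.lua console script the multi-line bodies would need the usual backslash continuations):

-- Hypothetical helpers, a sketch only; these names are not in the patch.
function wal_freeze()
    -- Remember the current write counter, then put the WAL to sleep.
    local cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT")
    box.error.injection.set("ERRINJ_WAL_DELAY", true)
    return cnt
end

function wal_saw_new_write(base)
    -- True once at least one new row reached this instance's WAL after
    -- wal_freeze(); the same condition the test polls for the queued PROMOTE.
    return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > base
end

function wal_thaw()
    box.error.injection.set("ERRINJ_WAL_DELAY", false)
end

With these, the election_replica3 steps would read: write_cnt = wal_freeze() before the promote, test_run:wait_cond(function() return wal_saw_new_write(write_cnt) end) after it, and wal_thaw() once the final counter value has been checked.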
* Re: [Tarantool-patches] [PATCH v22 3/3] test: add gh-6036-qsync-order test 2021-10-12 20:28 ` Cyrill Gorcunov via Tarantool-patches @ 2021-10-13 8:20 ` Serge Petrenko via Tarantool-patches 0 siblings, 0 replies; 10+ messages in thread From: Serge Petrenko via Tarantool-patches @ 2021-10-13 8:20 UTC (permalink / raw) To: Cyrill Gorcunov; +Cc: tml, Vladislav Shpilevoy 12.10.2021 23:28, Cyrill Gorcunov пишет: > On Tue, Oct 12, 2021 at 12:47:06PM +0300, Serge Petrenko wrote: >> >> 11.10.2021 22:16, Cyrill Gorcunov пишет: >>> To test that promotion requests are handled only when appropriate >>> write to WAL completes, because we update memory data before the >>> write finishes. >>> >>> Note that without the patch "qsync: order access to the limbo terms" >>> this test fires the assertion >>> >>>> tarantool: src/box/txn_limbo.c:481: txn_limbo_read_rollback: Assertion `e->txn->signature >= 0' failed. >> Thanks for the changes! >> Sorry for the nitpicking, there are just a couple of minor comments left. > Thanks! Here is force-pushed variant. Please take a look, hopefully I counted > number of writes correctly, lets wait for CI results. Thanks! Looks like you've updated the branch for v21 instead of v22. Please make sure everything's consistent and repush to v22. Or maybe to a branch with no postfix so that no one is confused. The test itself looks good now, but I see a lot of CI failures, like this: https://github.com/tarantool/tarantool/runs/3875211570#step:4:9709 https://github.com/tarantool/tarantool/runs/3875211924#step:5:6039 https://github.com/tarantool/tarantool/runs/3875211332#step:5:6192 and so on. When I run the gh-5566 test locally, with debug build I get the following assertion failure: [001] replication/gh-5566-final-join-synchro.test.lua [001] [001] [Instance "replica" killed by signal: 6 (SIGABRT)] [001] [001] Last 15 lines of Tarantool Log file [Instance "replica"][/Users/s.petrenko/Source/tarantool/test/var/001_replication/replica.log]: <stripped> [001] Assertion failed: (l->owner != fiber()), function latch_lock_timeout, file /Users/s.petrenko/Source/tarantool/src/lib/core/latch.h, line 122. [001] [ fail ] > --- > From 49f05ca2b31512b6555aecf1bb4d3ac1ce59729a Mon Sep 17 00:00:00 2001 > From: Cyrill Gorcunov <gorcunov@gmail.com> > Date: Mon, 20 Sep 2021 17:22:38 +0300 > Subject: [PATCH v22 3/3] test: add gh-6036-qsync-order test > > To test that promotion requests are handled only when appropriate > write to WAL completes, because we update memory data before the > write finishes. > > Note that without the patch "qsync: order access to the limbo terms" > this test fires the assertion > >> tarantool: src/box/txn_limbo.c:481: txn_limbo_read_rollback: Assertion `e->txn->signature >= 0' failed. 
> Part-of #6036 > > Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com> > --- > test/replication/gh-6036-qsync-order.result | 194 ++++++++++++++++++ > test/replication/gh-6036-qsync-order.test.lua | 94 +++++++++ > test/replication/suite.cfg | 1 + > test/replication/suite.ini | 2 +- > 4 files changed, 290 insertions(+), 1 deletion(-) > create mode 100644 test/replication/gh-6036-qsync-order.result > create mode 100644 test/replication/gh-6036-qsync-order.test.lua > > diff --git a/test/replication/gh-6036-qsync-order.result b/test/replication/gh-6036-qsync-order.result > new file mode 100644 > index 000000000..0e93d429b > --- /dev/null > +++ b/test/replication/gh-6036-qsync-order.result > @@ -0,0 +1,194 @@ > +-- test-run result file version 2 > +-- > +-- gh-6036: verify that terms are locked when we're inside journal > +-- write routine, because parallel appliers may ignore the fact that > +-- the term is updated already but not yet written leading to data > +-- inconsistency. > +-- > +test_run = require('test_run').new() > + | --- > + | ... > + > +SERVERS={"election_replica1", "election_replica2", "election_replica3"} > + | --- > + | ... > +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'}) > + | --- > + | ... > +test_run:wait_fullmesh(SERVERS) > + | --- > + | ... > + > +-- > +-- Create a synchro space on the master node and make > +-- sure the write processed just fine. > +test_run:switch("election_replica1") > + | --- > + | - true > + | ... > +box.ctl.promote() > + | --- > + | ... > +s = box.schema.create_space('test', {is_sync = true}) > + | --- > + | ... > +_ = s:create_index('pk') > + | --- > + | ... > +s:insert{1} > + | --- > + | - [1] > + | ... > + > +test_run:wait_lsn('election_replica2', 'election_replica1') > + | --- > + | ... > +test_run:wait_lsn('election_replica3', 'election_replica1') > + | --- > + | ... > + > +-- > +-- Drop connection between election_replica1 and election_replica2. > +box.cfg({ \ > + replication = { \ > + "unix/:./election_replica1.sock", \ > + "unix/:./election_replica3.sock", \ > + }, \ > +}) > + | --- > + | ... > + > +-- > +-- Drop connection between election_replica2 and election_replica1. > +test_run:switch("election_replica2") > + | --- > + | - true > + | ... > +box.cfg({ \ > + replication = { \ > + "unix/:./election_replica2.sock", \ > + "unix/:./election_replica3.sock", \ > + }, \ > +}) > + | --- > + | ... > + > +-- > +-- Here we have the following scheme > +-- > +-- election_replica3 (will be delayed) > +-- / \ > +-- election_replica1 election_replica2 > + > +-- > +-- Initiate disk delay in a bit tricky way: the next write will > +-- fall into forever sleep. > +test_run:switch("election_replica3") > + | --- > + | - true > + | ... > +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > + | --- > + | ... > +box.error.injection.set("ERRINJ_WAL_DELAY", true) > + | --- > + | - ok > + | ... > +-- > +-- Make election_replica2 been a leader and start writting data, > +-- the PROMOTE request get queued on election_replica3 and not > +-- yet processed, same time INSERT won't complete either > +-- waiting for PROMOTE completion first. Note that we > +-- enter election_replica3 as well just to be sure the PROMOTE > +-- reached it. > +test_run:switch("election_replica2") > + | --- > + | - true > + | ... > +box.ctl.promote() > + | --- > + | ... > +test_run:switch("election_replica3") > + | --- > + | - true > + | ... 
> +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end) > + | --- > + | - true > + | ... > +test_run:switch("election_replica2") > + | --- > + | - true > + | ... > +box.space.test:insert{2} > + | --- > + | - [2] > + | ... > + > +-- > +-- The election_replica1 node has no clue that there is a new leader > +-- and continue writing data with obsolete term. Since election_replica3 > +-- is delayed now the INSERT won't proceed yet but get queued. > +test_run:switch("election_replica1") > + | --- > + | - true > + | ... > +box.space.test:insert{3} > + | --- > + | - [3] > + | ... > + > +-- > +-- Finally enable election_replica3 back. Make sure the data from new election_replica2 > +-- leader get writing while old leader's data ignored. > +test_run:switch("election_replica3") > + | --- > + | - true > + | ... > +assert(box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") == write_cnt + 2) > + | --- > + | - true > + | ... > +box.error.injection.set('ERRINJ_WAL_DELAY', false) > + | --- > + | - ok > + | ... > +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end) > + | --- > + | - true > + | ... > +box.space.test:select{} > + | --- > + | - - [1] > + | - [2] > + | ... > + > +test_run:switch("default") > + | --- > + | - true > + | ... > +test_run:cmd('stop server election_replica1') > + | --- > + | - true > + | ... > +test_run:cmd('stop server election_replica2') > + | --- > + | - true > + | ... > +test_run:cmd('stop server election_replica3') > + | --- > + | - true > + | ... > + > +test_run:cmd('delete server election_replica1') > + | --- > + | - true > + | ... > +test_run:cmd('delete server election_replica2') > + | --- > + | - true > + | ... > +test_run:cmd('delete server election_replica3') > + | --- > + | - true > + | ... > diff --git a/test/replication/gh-6036-qsync-order.test.lua b/test/replication/gh-6036-qsync-order.test.lua > new file mode 100644 > index 000000000..20030161e > --- /dev/null > +++ b/test/replication/gh-6036-qsync-order.test.lua > @@ -0,0 +1,94 @@ > +-- > +-- gh-6036: verify that terms are locked when we're inside journal > +-- write routine, because parallel appliers may ignore the fact that > +-- the term is updated already but not yet written leading to data > +-- inconsistency. > +-- > +test_run = require('test_run').new() > + > +SERVERS={"election_replica1", "election_replica2", "election_replica3"} > +test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'}) > +test_run:wait_fullmesh(SERVERS) > + > +-- > +-- Create a synchro space on the master node and make > +-- sure the write processed just fine. > +test_run:switch("election_replica1") > +box.ctl.promote() > +s = box.schema.create_space('test', {is_sync = true}) > +_ = s:create_index('pk') > +s:insert{1} > + > +test_run:wait_lsn('election_replica2', 'election_replica1') > +test_run:wait_lsn('election_replica3', 'election_replica1') > + > +-- > +-- Drop connection between election_replica1 and election_replica2. > +box.cfg({ \ > + replication = { \ > + "unix/:./election_replica1.sock", \ > + "unix/:./election_replica3.sock", \ > + }, \ > +}) > + > +-- > +-- Drop connection between election_replica2 and election_replica1. 
> +test_run:switch("election_replica2") > +box.cfg({ \ > + replication = { \ > + "unix/:./election_replica2.sock", \ > + "unix/:./election_replica3.sock", \ > + }, \ > +}) > + > +-- > +-- Here we have the following scheme > +-- > +-- election_replica3 (will be delayed) > +-- / \ > +-- election_replica1 election_replica2 > + > +-- > +-- Initiate disk delay in a bit tricky way: the next write will > +-- fall into forever sleep. > +test_run:switch("election_replica3") > +write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > +box.error.injection.set("ERRINJ_WAL_DELAY", true) > +-- > +-- Make election_replica2 been a leader and start writting data, > +-- the PROMOTE request get queued on election_replica3 and not > +-- yet processed, same time INSERT won't complete either > +-- waiting for PROMOTE completion first. Note that we > +-- enter election_replica3 as well just to be sure the PROMOTE > +-- reached it. > +test_run:switch("election_replica2") > +box.ctl.promote() > +test_run:switch("election_replica3") > +test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end) > +test_run:switch("election_replica2") > +box.space.test:insert{2} > + > +-- > +-- The election_replica1 node has no clue that there is a new leader > +-- and continue writing data with obsolete term. Since election_replica3 > +-- is delayed now the INSERT won't proceed yet but get queued. > +test_run:switch("election_replica1") > +box.space.test:insert{3} > + > +-- > +-- Finally enable election_replica3 back. Make sure the data from new election_replica2 > +-- leader get writing while old leader's data ignored. > +test_run:switch("election_replica3") > +assert(box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") == write_cnt + 2) > +box.error.injection.set('ERRINJ_WAL_DELAY', false) > +test_run:wait_cond(function() return box.space.test:get{2} ~= nil end) > +box.space.test:select{} > + > +test_run:switch("default") > +test_run:cmd('stop server election_replica1') > +test_run:cmd('stop server election_replica2') > +test_run:cmd('stop server election_replica3') > + > +test_run:cmd('delete server election_replica1') > +test_run:cmd('delete server election_replica2') > +test_run:cmd('delete server election_replica3') > diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg > index 3eee0803c..ed09b2087 100644 > --- a/test/replication/suite.cfg > +++ b/test/replication/suite.cfg > @@ -59,6 +59,7 @@ > "gh-6094-rs-uuid-mismatch.test.lua": {}, > "gh-6127-election-join-new.test.lua": {}, > "gh-6035-applier-filter.test.lua": {}, > + "gh-6036-qsync-order.test.lua": {}, > "election-candidate-promote.test.lua": {}, > "*": { > "memtx": {"engine": "memtx"}, > diff --git a/test/replication/suite.ini b/test/replication/suite.ini > index 77eb95f49..080e4fbf4 100644 > --- a/test/replication/suite.ini > +++ b/test/replication/suite.ini > @@ -3,7 +3,7 @@ core = tarantool > script = master.lua > description = tarantool/box, replication > disabled = consistent.test.lua > -release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua 
gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua > +release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua gh-6036-qsync-order.test.lua > config = suite.cfg > lua_libs = lua/fast_replica.lua lua/rlimit.lua > use_unix_sockets = True -- Serge Petrenko ^ permalink raw reply [flat|nested] 10+ messages in thread