Packet filtering is taking more and more time to implement, and I keep resending similar patches. The complete split-brain detection is still not done, so I thought it might be worth splitting the work into several series. This part addresses a race in terms manipulation and I think can be considered on its own.

I would really appreciate comments here, mostly on function naming, code comments, and the like. My primary aim was to make sure that this series doesn't break anything existing, but a test case for this ordering issue would be brilliant; I haven't come up with a simple one yet.

branch gorcunov/gh-6036-rollback-confirm-16
issue https://github.com/tarantool/tarantool/issues/6036
previous series https://lists.tarantool.org/tarantool-patches/20210910152910.607398-1-gorcunov@gmail.com/

Cyrill Gorcunov (3):
  latch: add latch_is_locked helper
  qsync: order access to the limbo terms
  qsync: track confirmed_lsn upon CONFIRM packet

 src/box/applier.cc      | 16 +++++++---
 src/box/box.cc          | 71 +++++++++++++++++++----------------------
 src/box/memtx_engine.cc |  3 +-
 src/box/txn_limbo.c     | 48 ++++++++++++++++++++++++++--
 src/box/txn_limbo.h     | 49 +++++++++++++++++++++++++---
 src/lib/core/latch.h    | 11 +++++++
 6 files changed, 146 insertions(+), 52 deletions(-)

base-commit: 8461d67ee4deb5ceff5bc8076bc40883ad4e022b
--
2.31.1
Add a helper to test whether a latch is locked.

In-scope-of #6036

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
 src/lib/core/latch.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/src/lib/core/latch.h b/src/lib/core/latch.h
index 49c59cf63..0aaa8b634 100644
--- a/src/lib/core/latch.h
+++ b/src/lib/core/latch.h
@@ -95,6 +95,17 @@ latch_owner(struct latch *l)
 	return l->owner;
 }
 
+/**
+ * Return true if the latch is locked.
+ *
+ * @param l - latch to be tested.
+ */
+static inline bool
+latch_is_locked(const struct latch *l)
+{
+	return l->owner != NULL;
+}
+
 /**
  * Lock a latch. If the latch is already locked by another fiber,
  * waits for timeout.
--
2.31.1
Limbo terms tracking is shared between appliers, and when one applier is waiting for a write to complete inside the journal_write() routine, another may need to read the term value to figure out whether a promote request is valid to apply. Due to cooperative multitasking, access to the terms is not consistent, so we need to be sure that other fibers read up-to-date terms (ie ones already written to the WAL).

For this sake we use a latching mechanism: when one fiber takes the lock for an update, other readers wait until the operation is complete.

For example, here is a call graph of two appliers:

applier 1
---------
applier_apply_tx
 (promote term = 3
  current max term = 2)
 applier_synchro_filter_tx
 apply_synchro_row
  journal_write
   (sleeping)

At this moment another applier comes in with obsolete data and term 2:

applier 2
---------
applier_apply_tx
 (term 2)
 applier_synchro_filter_tx
  txn_limbo_is_replica_outdated -> false
 journal_write (sleep)

applier 1
---------
journal wakes up
 apply_synchro_row_cb
  set max term to 3

So applier 2 did not notice that term 3 had already been seen, and wrote obsolete data. With locking, applier 2 will wait until applier 1 has finished its write.

We introduce the following helpers:

1) txn_limbo_process_begin: takes the lock (and in the future will validate the request for split-brain detection);

2) txn_limbo_process_commit and txn_limbo_process_rollback: simply release the lock, but have different names for better semantics;

3) txn_limbo_process: a general function which uses the _begin and _commit helpers internally.

Since txn_limbo_process_begin() can fail, we had to change all callers to return an error. In particular, the box.ctl.promote() and box.ctl.demote() commands may now fail. Internally they write a promote or demote packet to the WAL and process the synchro queue, so to eliminate code duplication we use the new box_issue_synchro() helper, which respects the locking mechanism.
Part-of #6036

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
 src/box/applier.cc      | 16 +++++++---
 src/box/box.cc          | 71 +++++++++++++++++++----------------------
 src/box/memtx_engine.cc |  3 +-
 src/box/txn_limbo.c     | 34 ++++++++++++++++++--
 src/box/txn_limbo.h     | 49 +++++++++++++++++++++++++---
 5 files changed, 121 insertions(+), 52 deletions(-)

diff --git a/src/box/applier.cc b/src/box/applier.cc
index b981bd436..f0751b68a 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -458,7 +458,8 @@ applier_wait_snapshot(struct applier *applier)
 			struct synchro_request req;
 			if (xrow_decode_synchro(&row, &req) != 0)
 				diag_raise();
-			txn_limbo_process(&txn_limbo, &req);
+			if (txn_limbo_process(&txn_limbo, &req) != 0)
+				diag_raise();
 		} else if (iproto_type_is_raft_request(row.type)) {
 			struct raft_request req;
 			if (xrow_decode_raft(&row, &req, NULL) != 0)
@@ -857,7 +858,7 @@ apply_synchro_row_cb(struct journal_entry *entry)
 		applier_rollback_by_wal_io(entry->res);
 	} else {
 		replica_txn_wal_write_cb(synchro_entry->rcb);
-		txn_limbo_process(&txn_limbo, synchro_entry->req);
+		txn_limbo_process_run(&txn_limbo, synchro_entry->req);
 		trigger_run(&replicaset.applier.on_wal_write, NULL);
 	}
 	fiber_wakeup(synchro_entry->owner);
@@ -873,6 +874,9 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
 	if (xrow_decode_synchro(row, &req) != 0)
 		goto err;
 
+	if (txn_limbo_process_begin(&txn_limbo, &req) != 0)
+		goto err;
+
 	struct replica_cb_data rcb_data;
 	struct synchro_entry entry;
 	/*
@@ -910,12 +914,16 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
 	 * transactions side, including the async ones.
 	 */
 	if (journal_write(&entry.base) != 0)
-		goto err;
+		goto err_rollback;
 	if (entry.base.res < 0) {
 		diag_set_journal_res(entry.base.res);
-		goto err;
+		goto err_rollback;
 	}
+	txn_limbo_process_commit(&txn_limbo);
 	return 0;
+
+err_rollback:
+	txn_limbo_process_rollback(&txn_limbo);
 err:
 	diag_log();
 	return -1;
diff --git a/src/box/box.cc b/src/box/box.cc
index 7b11d56d6..19e67b186 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -424,8 +424,7 @@ wal_stream_apply_synchro_row(struct wal_stream *stream, struct xrow_header *row)
 		say_error("couldn't decode a synchro request");
 		return -1;
 	}
-	txn_limbo_process(&txn_limbo, &syn_req);
-	return 0;
+	return txn_limbo_process(&txn_limbo, &syn_req);
 }
 
 static int
@@ -1670,48 +1669,43 @@ box_wait_limbo_acked(double timeout)
 	return wait_lsn;
 }
 
-/** Write and process a PROMOTE request. */
-static void
-box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn)
+/** Write and process PROMOTE or DEMOTE request. */
+static int
+box_issue_synchro(uint16_t type, uint32_t prev_leader_id, int64_t promote_lsn)
 {
 	struct raft *raft = box_raft();
+
+	assert(type == IPROTO_RAFT_PROMOTE ||
+	       type == IPROTO_RAFT_DEMOTE);
 	assert(raft->volatile_term == raft->term);
 	assert(promote_lsn >= 0);
-	txn_limbo_write_promote(&txn_limbo, promote_lsn,
-				raft->term);
+
 	struct synchro_request req = {
-		.type = IPROTO_RAFT_PROMOTE,
-		.replica_id = prev_leader_id,
-		.origin_id = instance_id,
-		.lsn = promote_lsn,
-		.term = raft->term,
+		.type		= type,
+		.replica_id	= prev_leader_id,
+		.origin_id	= instance_id,
+		.lsn		= promote_lsn,
+		.term		= raft->term,
 	};
-	txn_limbo_process(&txn_limbo, &req);
+
+	if (txn_limbo_process_begin(&txn_limbo, &req) != 0)
+		return -1;
+
+	if (type == IPROTO_RAFT_PROMOTE)
+		txn_limbo_write_promote(&txn_limbo, req.lsn, req.term);
+	else
+		txn_limbo_write_demote(&txn_limbo, req.lsn, req.term);
+
+	txn_limbo_process_run(&txn_limbo, &req);
 	assert(txn_limbo_is_empty(&txn_limbo));
+
+	txn_limbo_process_commit(&txn_limbo);
+	return 0;
 }
 /** A guard to block multiple simultaneous promote()/demote() invocations. */
 static bool is_in_box_promote = false;
 
-/** Write and process a DEMOTE request. */
-static void
-box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn)
-{
-	assert(box_raft()->volatile_term == box_raft()->term);
-	assert(promote_lsn >= 0);
-	txn_limbo_write_demote(&txn_limbo, promote_lsn,
-			       box_raft()->term);
-	struct synchro_request req = {
-		.type = IPROTO_RAFT_DEMOTE,
-		.replica_id = prev_leader_id,
-		.origin_id = instance_id,
-		.lsn = promote_lsn,
-		.term = box_raft()->term,
-	};
-	txn_limbo_process(&txn_limbo, &req);
-	assert(txn_limbo_is_empty(&txn_limbo));
-}
-
 int
 box_promote_qsync(void)
 {
@@ -1732,8 +1726,8 @@ box_promote_qsync(void)
 		diag_set(ClientError, ER_NOT_LEADER, raft->leader);
 		return -1;
 	}
-	box_issue_promote(txn_limbo.owner_id, wait_lsn);
-	return 0;
+	return box_issue_synchro(IPROTO_RAFT_PROMOTE,
+				 txn_limbo.owner_id, wait_lsn);
 }
 
 int
@@ -1789,9 +1783,8 @@ box_promote(void)
 	if (wait_lsn < 0)
 		return -1;
 
-	box_issue_promote(txn_limbo.owner_id, wait_lsn);
-
-	return 0;
+	return box_issue_synchro(IPROTO_RAFT_PROMOTE,
+				 txn_limbo.owner_id, wait_lsn);
 }
 
 int
@@ -1826,8 +1819,8 @@ box_demote(void)
 	int64_t wait_lsn = box_wait_limbo_acked(replication_synchro_timeout);
 	if (wait_lsn < 0)
 		return -1;
-	box_issue_demote(txn_limbo.owner_id, wait_lsn);
-	return 0;
+	return box_issue_synchro(IPROTO_RAFT_DEMOTE,
+				 txn_limbo.owner_id, wait_lsn);
 }
 
 int
diff --git a/src/box/memtx_engine.cc b/src/box/memtx_engine.cc
index de918c335..09f1d671c 100644
--- a/src/box/memtx_engine.cc
+++ b/src/box/memtx_engine.cc
@@ -238,8 +238,7 @@ memtx_engine_recover_synchro(const struct xrow_header *row)
 	 * because all its rows have a zero replica_id.
 	 */
 	req.origin_id = req.replica_id;
-	txn_limbo_process(&txn_limbo, &req);
-	return 0;
+	return txn_limbo_process(&txn_limbo, &req);
 }
 
 static int
diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 70447caaf..eb9aa7780 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -47,6 +47,7 @@ txn_limbo_create(struct txn_limbo *limbo)
 	vclock_create(&limbo->vclock);
 	vclock_create(&limbo->promote_term_map);
 	limbo->promote_greatest_term = 0;
+	latch_create(&limbo->promote_latch);
 	limbo->confirmed_lsn = 0;
 	limbo->rollback_count = 0;
 	limbo->is_in_rollback = false;
@@ -724,11 +725,14 @@ txn_limbo_wait_empty(struct txn_limbo *limbo, double timeout)
 }
 
 void
-txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
+txn_limbo_process_run(struct txn_limbo *limbo,
+		      const struct synchro_request *req)
 {
+	assert(latch_is_locked(&limbo->promote_latch));
+
 	uint64_t term = req->term;
 	uint32_t origin = req->origin_id;
-	if (txn_limbo_replica_term(limbo, origin) < term) {
+	if (vclock_get(&limbo->promote_term_map, origin) < (int64_t)term) {
 		vclock_follow(&limbo->promote_term_map, origin, term);
 		if (term > limbo->promote_greatest_term)
 			limbo->promote_greatest_term = term;
@@ -786,6 +790,32 @@ txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
 	return;
 }
 
+int
+txn_limbo_process_begin(struct txn_limbo *limbo,
+			const struct synchro_request *req)
+{
+	latch_lock(&limbo->promote_latch);
+	/*
+	 * FIXME: For now we take a lock only but idea
+	 * is to verify incoming requests to detect split
+	 * brain situation. Thus we need to change the code
+	 * semantics in advance.
+	 */
+	(void)req;
+	return 0;
+}
+
+int
+txn_limbo_process(struct txn_limbo *limbo,
+		  const struct synchro_request *req)
+{
+	if (txn_limbo_process_begin(limbo, req) != 0)
+		return -1;
+	txn_limbo_process_run(limbo, req);
+	txn_limbo_process_commit(limbo);
+	return 0;
+}
+
 void
 txn_limbo_on_parameters_change(struct txn_limbo *limbo)
 {
diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h
index 53e52f676..afe9b429f 100644
--- a/src/box/txn_limbo.h
+++ b/src/box/txn_limbo.h
@@ -31,6 +31,7 @@
  */
 #include "small/rlist.h"
 #include "vclock/vclock.h"
+#include "latch.h"
 
 #include <stdint.h>
 
@@ -147,6 +148,10 @@ struct txn_limbo {
 	 * limbo and raft are in sync and the terms are the same.
 	 */
 	uint64_t promote_greatest_term;
+	/**
+	 * To order access to the promote data.
+	 */
+	struct latch promote_latch;
 	/**
 	 * Maximal LSN gathered quorum and either already confirmed in WAL, or
 	 * whose confirmation is in progress right now. Any attempt to confirm
@@ -216,9 +221,12 @@ txn_limbo_last_entry(struct txn_limbo *limbo)
  * @a replica_id.
  */
 static inline uint64_t
-txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
+txn_limbo_replica_term(struct txn_limbo *limbo, uint32_t replica_id)
 {
-	return vclock_get(&limbo->promote_term_map, replica_id);
+	latch_lock(&limbo->promote_latch);
+	uint64_t v = vclock_get(&limbo->promote_term_map, replica_id);
+	latch_unlock(&limbo->promote_latch);
+	return v;
 }
 
 /**
@@ -226,11 +234,14 @@ txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
  * data from it. The check is only valid when elections are enabled.
  */
 static inline bool
-txn_limbo_is_replica_outdated(const struct txn_limbo *limbo,
+txn_limbo_is_replica_outdated(struct txn_limbo *limbo,
 			      uint32_t replica_id)
 {
-	return txn_limbo_replica_term(limbo, replica_id) <
-	       limbo->promote_greatest_term;
+	latch_lock(&limbo->promote_latch);
+	uint64_t v = vclock_get(&limbo->promote_term_map, replica_id);
+	bool res = v < limbo->promote_greatest_term;
+	latch_unlock(&limbo->promote_latch);
+	return res;
 }
 
 /**
@@ -300,8 +311,36 @@ txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn);
 int
 txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry);
 
+/**
+ * Initiate execution of a synchronous replication request.
+ */
+int
+txn_limbo_process_begin(struct txn_limbo *limbo,
+			const struct synchro_request *req);
+
+/** Commit a synchronous replication request. */
+static inline void
+txn_limbo_process_commit(struct txn_limbo *limbo)
+{
+	assert(latch_is_locked(&limbo->promote_latch));
+	latch_unlock(&limbo->promote_latch);
+}
+
+/** Rollback a synchronous replication request. */
+static inline void
+txn_limbo_process_rollback(struct txn_limbo *limbo)
+{
+	assert(latch_is_locked(&limbo->promote_latch));
+	latch_unlock(&limbo->promote_latch);
+}
+
 /** Execute a synchronous replication request. */
 void
+txn_limbo_process_run(struct txn_limbo *limbo,
+		      const struct synchro_request *req);
+
+/** Process a synchronous replication request. */
+int
 txn_limbo_process(struct txn_limbo *limbo,
 		  const struct synchro_request *req);
 
 /**
--
2.31.1
While investigating various cluster split-brain scenarios and trying to work out the valid domains and ranges of incoming synchro requests, we discovered that limbo::confirmed_lsn is not updated densely enough to cover our needs. In particular, this variable is always updated by the limbo owner upon the write of a synchro entry (to the journal), while a replica just reads such a record without updating confirmed_lsn. So when the replica becomes the cluster leader, it sends a PROMOTE request back to the former leader, and the packet carries a zero LSN instead of the previous confirmed_lsn, so validation of such a packet won't pass.

Note that the packet validation is not yet implemented in this patch, so this is rather preparatory work for the future.

Part-of #6036

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
 src/box/txn_limbo.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index eb9aa7780..959811309 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -774,6 +774,20 @@ txn_limbo_process_run(struct txn_limbo *limbo,
 	switch (req->type) {
 	case IPROTO_RAFT_CONFIRM:
 		txn_limbo_read_confirm(limbo, lsn);
+		/*
+		 * We have to adjust confirmed_lsn according
+		 * to the LSN coming from the request, because
+		 * we will need to report it as the old limbo
+		 * owner's LSN inside PROMOTE requests (if the
+		 * administrator or the election engine makes us so).
+		 *
+		 * We could update confirmed_lsn on every
+		 * txn_limbo_read_confirm call, but that function
+		 * is usually called in tandem with
+		 * txn_limbo_write_confirm, so to eliminate redundant
+		 * updates we do it once here, explicitly.
+		 */
+		limbo->confirmed_lsn = req->lsn;
 		break;
 	case IPROTO_RAFT_ROLLBACK:
 		txn_limbo_read_rollback(limbo, lsn);
--
2.31.1
On 15.09.2021 16:50, Cyrill Gorcunov via Tarantool-patches wrote:
> Packet filtering takes more and more time to implement and I send
> similar patches again multiple times. Still the complete split brain
> detection is not yet done, so I thought it might be worth to split the
> work into several series. This part addresses a race with terms manipulation
> and I think can be considered as its own.
>
> I would really appreciate comments here mostly due to function naming,
> comments and etc. My primary aim was to make sure that this series doesn't
> break anything existing but I think if we would have a test case for this
> ordering issue it would be brilliant, though I didn't come with some simple
> one here yet.
Committing a part of the series won't speed up the other patches, unfortunately.
If you think something can be done separately, it is a good idea. But still
such changes must be independent and have their own tests then.
Just pushing some first commits won't help with the next ones.
On Thu, Sep 16, 2021 at 12:51:37AM +0200, Vladislav Shpilevoy wrote:
> On 15.09.2021 16:50, Cyrill Gorcunov via Tarantool-patches wrote:
> > Packet filtering takes more and more time to implement and I send
> > similar patches again multiple times. Still the complete split brain
> > detection is not yet done, so I thought it might be worth to split the
> > work into several series. This part addresses a race with terms manipulation
> > and I think can be considered as its own.
> >
> > I would really appreciate comments here mostly due to function naming,
> > comments and etc. My primary aim was to make sure that this series doesn't
> > break anything existing but I think if we would have a test case for this
> > ordering issue it would be brilliant, though I didn't come with some simple
> > one here yet.
>
> Committing a part of the series won't speed up the other patches, unfortunately.
> If you think something can be done separately, it is a good idea. But still
> such changes must be independent and have their own tests then.
>
> Just pushing some first commits won't help with the next ones.
Could you please at least comment on the function naming, the comments in the code, and so on,
so that the lack of a test would be the only remaining issue?
In-scope-of #6036

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
 test/replication/gh-6036-order-master.lua    |   1 +
 test/replication/gh-6036-order-node.lua      |  60 +++++
 test/replication/gh-6036-order-replica1.lua  |   1 +
 test/replication/gh-6036-order-replica2.lua  |   1 +
 test/replication/gh-6036-term-order.result   | 222 +++++++++++++++++++
 test/replication/gh-6036-term-order.test.lua |  89 ++++++++
 test/replication/suite.cfg                   |   1 +
 7 files changed, 375 insertions(+)
 create mode 120000 test/replication/gh-6036-order-master.lua
 create mode 100644 test/replication/gh-6036-order-node.lua
 create mode 120000 test/replication/gh-6036-order-replica1.lua
 create mode 120000 test/replication/gh-6036-order-replica2.lua
 create mode 100644 test/replication/gh-6036-term-order.result
 create mode 100644 test/replication/gh-6036-term-order.test.lua

diff --git a/test/replication/gh-6036-order-master.lua b/test/replication/gh-6036-order-master.lua
new file mode 120000
index 000000000..82a6073a1
--- /dev/null
+++ b/test/replication/gh-6036-order-master.lua
@@ -0,0 +1 @@
+gh-6036-order-node.lua
\ No newline at end of file
diff --git a/test/replication/gh-6036-order-node.lua b/test/replication/gh-6036-order-node.lua
new file mode 100644
index 000000000..b22a7cb4c
--- /dev/null
+++ b/test/replication/gh-6036-order-node.lua
@@ -0,0 +1,60 @@
+local INSTANCE_ID = string.match(arg[0], "gh%-6036%-order%-(.+)%.lua")
+
+local function unix_socket(name)
+    return "unix/:./" .. name .. '.sock';
+end
+
+require('console').listen(os.getenv('ADMIN'))
+
+if INSTANCE_ID == "master" then
+    box.cfg({
+        listen = unix_socket(INSTANCE_ID),
+        replication = {
+            unix_socket(INSTANCE_ID),
+            unix_socket("replica1"),
+            unix_socket("replica2"),
+        },
+        replication_connect_quorum = 1,
+        replication_synchro_quorum = 1,
+        replication_synchro_timeout = 10000,
+        replication_sync_timeout = 5,
+        read_only = false,
+        election_mode = "off",
+    })
+elseif INSTANCE_ID == "replica1" then
+    box.cfg({
+        listen = unix_socket(INSTANCE_ID),
+        replication = {
+            unix_socket("master"),
+            unix_socket(INSTANCE_ID),
+            unix_socket("replica2"),
+        },
+        replication_connect_quorum = 1,
+        replication_synchro_quorum = 1,
+        replication_synchro_timeout = 10000,
+        replication_sync_timeout = 5,
+        read_only = false,
+        election_mode = "off",
+    })
+else
+    assert(INSTANCE_ID == "replica2")
+    box.cfg({
+        listen = unix_socket(INSTANCE_ID),
+        replication = {
+            unix_socket("master"),
+            unix_socket("replica1"),
+            unix_socket(INSTANCE_ID),
+        },
+        replication_connect_quorum = 1,
+        replication_synchro_quorum = 1,
+        replication_synchro_timeout = 10000,
+        replication_sync_timeout = 5,
+        read_only = true,
+        election_mode = "off",
+    })
+end
+
+--box.ctl.wait_rw()
+box.once("bootstrap", function()
+    box.schema.user.grant('guest', 'super')
+end)
diff --git a/test/replication/gh-6036-order-replica1.lua b/test/replication/gh-6036-order-replica1.lua
new file mode 120000
index 000000000..82a6073a1
--- /dev/null
+++ b/test/replication/gh-6036-order-replica1.lua
@@ -0,0 +1 @@
+gh-6036-order-node.lua
\ No newline at end of file
diff --git a/test/replication/gh-6036-order-replica2.lua b/test/replication/gh-6036-order-replica2.lua
new file mode 120000
index 000000000..82a6073a1
--- /dev/null
+++ b/test/replication/gh-6036-order-replica2.lua
@@ -0,0 +1 @@
+gh-6036-order-node.lua
\ No newline at end of file
diff --git a/test/replication/gh-6036-term-order.result b/test/replication/gh-6036-term-order.result
new file mode 100644
index 000000000..3a5f8bfb2
--- /dev/null
+++ b/test/replication/gh-6036-term-order.result
@@ -0,0 +1,222 @@
+-- test-run result file version 2
+--
+-- gh-6036: verify that terms are locked when we're inside journal
+-- write routine, because parallel appliers may ignore the fact that
+-- the term is updated already but not yet written leading to data
+-- inconsistency.
+--
+test_run = require('test_run').new()
+ | ---
+ | ...
+
+test_run:cmd('create server master with script="replication/gh-6036-order-master.lua"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('create server replica1 with script="replication/gh-6036-order-replica1.lua"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('create server replica2 with script="replication/gh-6036-order-replica2.lua"')
+ | ---
+ | - true
+ | ...
+
+test_run:cmd('start server master with wait=False')
+ | ---
+ | - true
+ | ...
+test_run:cmd('start server replica1 with wait=False')
+ | ---
+ | - true
+ | ...
+test_run:cmd('start server replica2 with wait=False')
+ | ---
+ | - true
+ | ...
+
+test_run:wait_fullmesh({"master", "replica1", "replica2"})
+ | ---
+ | ...
+
+test_run:switch("master")
+ | ---
+ | - true
+ | ...
+box.ctl.demote()
+ | ---
+ | ...
+assert(box.info.election.term == 1)
+ | ---
+ | - true
+ | ...
+
+test_run:switch("replica1")
+ | ---
+ | - true
+ | ...
+box.ctl.demote()
+ | ---
+ | ...
+assert(box.info.election.term == 1)
+ | ---
+ | - true
+ | ...
+
+test_run:switch("replica2")
+ | ---
+ | - true
+ | ...
+box.ctl.demote()
+ | ---
+ | ...
+assert(box.info.election.term == 1)
+ | ---
+ | - true
+ | ...
+
+--
+-- Drop connection between master and replica1
+test_run:switch("master")
+ | ---
+ | - true
+ | ...
+box.cfg({                        \
+    replication = {              \
+        "unix/:./master.sock",   \
+        "unix/:./replica2.sock", \
+    },                           \
+})
+ | ---
+ | ...
+test_run:switch("replica1")
+ | ---
+ | - true
+ | ...
+box.cfg({                        \
+    replication = {              \
+        "unix/:./replica1.sock", \
+        "unix/:./replica2.sock", \
+    },                           \
+})
+ | ---
+ | ...
+
+--
+-- Initiate disk delay on replica2
+test_run:switch("replica2")
+ | ---
+ | - true
+ | ...
+assert(box.info.election.term == 1)
+ | ---
+ | - true
+ | ...
+box.error.injection.set('ERRINJ_WAL_DELAY', true)
+ | ---
+ | - ok
+ | ...
+
+--
+-- Ping-pong the promote action between master and
+-- replica1 nodes, the term updates get queued on
+-- replica2 because of disk being disabled.
+test_run:switch("master")
+ | ---
+ | - true
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+assert(box.info.election.term == 2)
+ | ---
+ | - true
+ | ...
+box.ctl.demote()
+ | ---
+ | ...
+assert(box.info.election.term == 3)
+ | ---
+ | - true
+ | ...
+
+test_run:switch("replica1")
+ | ---
+ | - true
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+assert(box.info.election.term == 2)
+ | ---
+ | - true
+ | ...
+box.ctl.demote()
+ | ---
+ | ...
+assert(box.info.election.term == 3)
+ | ---
+ | - true
+ | ...
+
+test_run:switch("master")
+ | ---
+ | - true
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+assert(box.info.election.term == 4)
+ | ---
+ | - true
+ | ...
+
+--
+-- Finally turn back disk on replica2 so the
+-- terms get sequenced.
+test_run:switch("replica2")
+ | ---
+ | - true
+ | ...
+assert(box.info.election.term == 2)
+ | ---
+ | - true
+ | ...
+box.error.injection.set('ERRINJ_WAL_DELAY', false)
+ | ---
+ | - ok
+ | ...
+test_run:wait_cond(function() return box.info.election.term == 4 end, 100)
+ | ---
+ | - true
+ | ...
+
+test_run:switch("default")
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server master')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server replica1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server replica2')
+ | ---
+ | - true
+ | ...
+
+test_run:cmd('delete server master')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server replica1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server replica2')
+ | ---
+ | - true
+ | ...
diff --git a/test/replication/gh-6036-term-order.test.lua b/test/replication/gh-6036-term-order.test.lua
new file mode 100644
index 000000000..79dd8efe4
--- /dev/null
+++ b/test/replication/gh-6036-term-order.test.lua
@@ -0,0 +1,89 @@
+--
+-- gh-6036: verify that terms are locked when we're inside journal
+-- write routine, because parallel appliers may ignore the fact that
+-- the term is updated already but not yet written leading to data
+-- inconsistency.
+--
+test_run = require('test_run').new()
+
+test_run:cmd('create server master with script="replication/gh-6036-order-master.lua"')
+test_run:cmd('create server replica1 with script="replication/gh-6036-order-replica1.lua"')
+test_run:cmd('create server replica2 with script="replication/gh-6036-order-replica2.lua"')
+
+test_run:cmd('start server master with wait=False')
+test_run:cmd('start server replica1 with wait=False')
+test_run:cmd('start server replica2 with wait=False')
+
+test_run:wait_fullmesh({"master", "replica1", "replica2"})
+
+test_run:switch("master")
+box.ctl.demote()
+assert(box.info.election.term == 1)
+
+test_run:switch("replica1")
+box.ctl.demote()
+assert(box.info.election.term == 1)
+
+test_run:switch("replica2")
+box.ctl.demote()
+assert(box.info.election.term == 1)
+
+--
+-- Drop connection between master and replica1
+test_run:switch("master")
+box.cfg({                        \
+    replication = {              \
+        "unix/:./master.sock",   \
+        "unix/:./replica2.sock", \
+    },                           \
+})
+test_run:switch("replica1")
+box.cfg({                        \
+    replication = {              \
+        "unix/:./replica1.sock", \
+        "unix/:./replica2.sock", \
+    },                           \
+})
+
+--
+-- Initiate disk delay on replica2
+test_run:switch("replica2")
+assert(box.info.election.term == 1)
+box.error.injection.set('ERRINJ_WAL_DELAY', true)
+
+--
+-- Ping-pong the promote action between master and
+-- replica1 nodes, the term updates get queued on
+-- replica2 because of disk being disabled.
+test_run:switch("master")
+box.ctl.promote()
+assert(box.info.election.term == 2)
+box.ctl.demote()
+assert(box.info.election.term == 3)
+
+test_run:switch("replica1")
+box.ctl.promote()
+assert(box.info.election.term == 2)
+box.ctl.demote()
+assert(box.info.election.term == 3)
+
+test_run:switch("master")
+box.ctl.promote()
+assert(box.info.election.term == 4)
+
+--
+-- Finally turn back disk on replica2 so the
+-- terms get sequenced.
+test_run:switch("replica2")
+assert(box.info.election.term == 2)
+box.error.injection.set('ERRINJ_WAL_DELAY', false)
+test_run:wait_cond(function() return box.info.election.term == 4 end, 100)
+
+test_run:switch("default")
+test_run:cmd('stop server master')
+test_run:cmd('stop server replica1')
+test_run:cmd('stop server replica2')
+
+test_run:cmd('delete server master')
+test_run:cmd('delete server replica1')
+test_run:cmd('delete server replica2')
diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg
index 3eee0803c..ac2bedfd9 100644
--- a/test/replication/suite.cfg
+++ b/test/replication/suite.cfg
@@ -59,6 +59,7 @@
     "gh-6094-rs-uuid-mismatch.test.lua": {},
     "gh-6127-election-join-new.test.lua": {},
     "gh-6035-applier-filter.test.lua": {},
+    "gh-6036-term-order.test.lua": {},
     "election-candidate-promote.test.lua": {},
     "*": {
         "memtx": {"engine": "memtx"},
--
2.31.1
On Mon, Sep 20, 2021 at 06:22:48PM +0300, Cyrill Gorcunov wrote:
> In-scope-of #6036
>
Please ignore it; the test turned out to be flaky, I'm reworking it.
On Wed, Sep 15, 2021 at 05:50:43PM +0300, Cyrill Gorcunov wrote:
> Packet filtering takes more and more time to implement and I send
> similar patches again multiple times. Still the complete split brain
> detection is not yet done, so I thought it might be worth to split the
> work into several series. This part addresses a race with terms manipulation
> and I think can be considered as its own.
>
> I would really appreciate comments here mostly due to function naming,
> comments and etc. My primary aim was to make sure that this series doesn't
> break anything existing but I think if we would have a test case for this
> ordering issue it would be brilliant, though I didn't come with some simple
> one here yet.
>
> branch gorcunov/gh-6036-rollback-confirm-16
> issue https://github.com/tarantool/tarantool/issues/6036
> previous series https://lists.tarantool.org/tarantool-patches/20210910152910.607398-1-gorcunov@gmail.com/
Please ignore this series; I'll send a new one, because testing the
promotion map updates requires exposing some debug info, and more
patches are needed.
patches are needed.