* [Tarantool-patches] [PATCH v8 0/6] limbo: implement packets filtering
@ 2021-07-26 15:34 Cyrill Gorcunov via Tarantool-patches
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 1/6] latch: add latch_is_locked helper Cyrill Gorcunov via Tarantool-patches
` (5 more replies)
0 siblings, 6 replies; 11+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-07-26 15:34 UTC (permalink / raw)
To: tml; +Cc: Vladislav Shpilevoy
Guys, please take a look once time permits. Comments are highly
appreciated. Replication tests are passing but the GitHub tests are
not: the series is on top of Serge's branch
`sp/gh-6034-empty-limbo-transition`, which is failing. Still, I think
the series is in good shape for review.
branch gorcunov/gh-6036-rollback-confirm-08
issue https://github.com/tarantool/tarantool/issues/6036
v6:
- use txn_limbo_terms name for structure
- rebase on fresh sp/gh-6034-empty-limbo-transition branch
- rework filtering chains
v8:
- add ability to disable filtering for local recovery
and join stages
- update tests
Cyrill Gorcunov (6):
latch: add latch_is_locked helper
say: introduce panic_on helper
limbo: gather promote tracking into a separate structure
limbo: order access to the limbo terms
limbo: filter incoming synchro requests
test: replication -- add gh-6036-rollback-confirm
src/box/applier.cc | 31 +-
src/box/box.cc | 23 +-
src/box/memtx_engine.c | 3 +-
src/box/txn_limbo.c | 336 ++++++++++++++++--
src/box/txn_limbo.h | 150 ++++++--
src/lib/core/latch.h | 11 +
src/lib/core/say.h | 1 +
test/replication/gh-6036-master.lua | 1 +
test/replication/gh-6036-node.lua | 33 ++
test/replication/gh-6036-replica.lua | 1 +
.../gh-6036-rollback-confirm.result | 180 ++++++++++
.../gh-6036-rollback-confirm.test.lua | 88 +++++
12 files changed, 797 insertions(+), 61 deletions(-)
create mode 120000 test/replication/gh-6036-master.lua
create mode 100644 test/replication/gh-6036-node.lua
create mode 120000 test/replication/gh-6036-replica.lua
create mode 100644 test/replication/gh-6036-rollback-confirm.result
create mode 100644 test/replication/gh-6036-rollback-confirm.test.lua
base-commit: 228a83447fbe1ca2b9358a6a98d70ecc602d4c35
--
2.31.1
* [Tarantool-patches] [PATCH v8 1/6] latch: add latch_is_locked helper
2021-07-26 15:34 [Tarantool-patches] [PATCH v8 0/6] limbo: implement packets filtering Cyrill Gorcunov via Tarantool-patches
@ 2021-07-26 15:34 ` Cyrill Gorcunov via Tarantool-patches
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 2/6] say: introduce panic_on helper Cyrill Gorcunov via Tarantool-patches
` (4 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-07-26 15:34 UTC (permalink / raw)
To: tml; +Cc: Vladislav Shpilevoy
Add a helper to test whether a latch is locked.
In-scope-of #6036
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
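A minimal usage sketch (the guarded_counter struct below is hypothetical,
made up for illustration; the pattern mirrors how patch 4 of this series
uses the helper for lock-held assertions):

    #include <assert.h>
    #include "latch.h"

    /*
     * Hypothetical latch-protected state, initialized elsewhere
     * with latch_create(&c->latch).
     */
    struct guarded_counter {
            struct latch latch;
            uint64_t value;
    };

    static uint64_t
    guarded_counter_read_locked(struct guarded_counter *c)
    {
            /* The caller must already hold c->latch. */
            assert(latch_is_locked(&c->latch));
            return c->value;
    }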
src/lib/core/latch.h | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/src/lib/core/latch.h b/src/lib/core/latch.h
index 49c59cf63..0aaa8b634 100644
--- a/src/lib/core/latch.h
+++ b/src/lib/core/latch.h
@@ -95,6 +95,17 @@ latch_owner(struct latch *l)
return l->owner;
}
+/**
+ * Return true if the latch is locked.
+ *
+ * @param l - latch to be tested.
+ */
+static inline bool
+latch_is_locked(const struct latch *l)
+{
+ return l->owner != NULL;
+}
+
/**
* Lock a latch. If the latch is already locked by another fiber,
* waits for timeout.
--
2.31.1
* [Tarantool-patches] [PATCH v8 2/6] say: introduce panic_on helper
2021-07-26 15:34 [Tarantool-patches] [PATCH v8 0/6] limbo: implement packets filtering Cyrill Gorcunov via Tarantool-patches
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 1/6] latch: add latch_is_locked helper Cyrill Gorcunov via Tarantool-patches
@ 2021-07-26 15:34 ` Cyrill Gorcunov via Tarantool-patches
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 3/6] limbo: gather promote tracking into a separate structure Cyrill Gorcunov via Tarantool-patches
` (3 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-07-26 15:34 UTC (permalink / raw)
To: tml; +Cc: Vladislav Shpilevoy
In-scope-of #6036
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
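Since the commit message is terse, here is the kind of call site the
helper is meant for; the example is lifted from patch 4 of this series
rather than invented here:

    /* Before: an open-coded conditional panic. */
    if (!txn_limbo_terms_is_locked(limbo))
            panic("limbo: unlocked term read for replica %u", replica_id);

    /* After: the same check through the new helper. */
    panic_on(!txn_limbo_terms_is_locked(limbo),
             "limbo: unlocked term read for replica %u",
             replica_id);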
src/lib/core/say.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/lib/core/say.h b/src/lib/core/say.h
index e1fec8c60..4bb1645fd 100644
--- a/src/lib/core/say.h
+++ b/src/lib/core/say.h
@@ -348,6 +348,7 @@ CFORMAT(printf, 5, 6) extern sayfunc_t _say;
#define panic_status(status, ...) ({ say(S_FATAL, NULL, __VA_ARGS__); exit(status); })
#define panic(...) panic_status(EXIT_FAILURE, __VA_ARGS__)
+#define panic_on(cond, ...) if (cond) panic(__VA_ARGS__)
#define panic_syserror(...) ({ say(S_FATAL, strerror(errno), __VA_ARGS__); exit(EXIT_FAILURE); })
enum {
--
2.31.1
* [Tarantool-patches] [PATCH v8 3/6] limbo: gather promote tracking into a separate structure
2021-07-26 15:34 [Tarantool-patches] [PATCH v8 0/6] limbo: implement packets filtering Cyrill Gorcunov via Tarantool-patches
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 1/6] latch: add latch_is_locked helper Cyrill Gorcunov via Tarantool-patches
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 2/6] say: introduce panic_on helper Cyrill Gorcunov via Tarantool-patches
@ 2021-07-26 15:34 ` Cyrill Gorcunov via Tarantool-patches
2021-07-28 21:34 ` Vladislav Shpilevoy via Tarantool-patches
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 4/6] limbo: order access to the limbo terms Cyrill Gorcunov via Tarantool-patches
` (2 subsequent siblings)
5 siblings, 1 reply; 11+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-07-26 15:34 UTC (permalink / raw)
To: tml; +Cc: Vladislav Shpilevoy
It is needed to introduce ordered modifications of the
promote-related data in the next patch.
Part-of #6036
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
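In short, promote_term_map and promote_greatest_term move into the new
struct txn_limbo_terms (as terms_map and terms_max) and callers switch
to the nested member; a before/after sketch of a typical access,
matching the box.cc hunks below:

    /* Before this patch. */
    uint64_t promote_term = txn_limbo.promote_greatest_term;

    /* After this patch. */
    const struct txn_limbo_terms *tr = &txn_limbo.terms;
    uint64_t promote_term = tr->terms_max;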
src/box/box.cc | 12 +++++++----
src/box/txn_limbo.c | 24 ++++++++++++++--------
src/box/txn_limbo.h | 49 ++++++++++++++++++++++++++++-----------------
3 files changed, 55 insertions(+), 30 deletions(-)
diff --git a/src/box/box.cc b/src/box/box.cc
index fb58f981d..b356508f0 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1565,7 +1565,8 @@ box_run_elections(void)
static int
box_check_promote_term_intact(uint64_t promote_term)
{
- if (txn_limbo.promote_greatest_term != promote_term) {
+ const struct txn_limbo_terms *tr = &txn_limbo.terms;
+ if (tr->terms_max != promote_term) {
diag_set(ClientError, ER_INTERFERING_PROMOTE,
txn_limbo.owner_id);
return -1;
@@ -1577,7 +1578,8 @@ box_check_promote_term_intact(uint64_t promote_term)
static int
box_trigger_elections(void)
{
- uint64_t promote_term = txn_limbo.promote_greatest_term;
+ const struct txn_limbo_terms *tr = &txn_limbo.terms;
+ uint64_t promote_term = tr->terms_max;
raft_new_term(box_raft());
if (box_raft_wait_term_persisted() < 0)
return -1;
@@ -1588,7 +1590,8 @@ box_trigger_elections(void)
static int
box_try_wait_confirm(double timeout)
{
- uint64_t promote_term = txn_limbo.promote_greatest_term;
+ const struct txn_limbo_terms *tr = &txn_limbo.terms;
+ uint64_t promote_term = tr->terms_max;
txn_limbo_wait_empty(&txn_limbo, timeout);
return box_check_promote_term_intact(promote_term);
}
@@ -1604,7 +1607,8 @@ box_wait_limbo_acked(void)
if (txn_limbo_is_empty(&txn_limbo))
return txn_limbo.confirmed_lsn;
- uint64_t promote_term = txn_limbo.promote_greatest_term;
+ const struct txn_limbo_terms *tr = &txn_limbo.terms;
+ uint64_t promote_term = tr->terms_max;
int quorum = replication_synchro_quorum;
struct txn_limbo_entry *last_entry;
last_entry = txn_limbo_last_synchro_entry(&txn_limbo);
diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 570f77c46..53c86f34e 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -37,6 +37,13 @@
struct txn_limbo txn_limbo;
+static void
+txn_limbo_terms_create(struct txn_limbo_terms *tr)
+{
+ vclock_create(&tr->terms_map);
+ tr->terms_max = 0;
+}
+
static inline void
txn_limbo_create(struct txn_limbo *limbo)
{
@@ -45,8 +52,7 @@ txn_limbo_create(struct txn_limbo *limbo)
limbo->owner_id = REPLICA_ID_NIL;
fiber_cond_create(&limbo->wait_cond);
vclock_create(&limbo->vclock);
- vclock_create(&limbo->promote_term_map);
- limbo->promote_greatest_term = 0;
+ txn_limbo_terms_create(&limbo->terms);
limbo->confirmed_lsn = 0;
limbo->rollback_count = 0;
limbo->is_in_rollback = false;
@@ -305,10 +311,11 @@ void
txn_limbo_checkpoint(const struct txn_limbo *limbo,
struct synchro_request *req)
{
+ const struct txn_limbo_terms *tr = &limbo->terms;
req->type = IPROTO_PROMOTE;
req->replica_id = limbo->owner_id;
req->lsn = limbo->confirmed_lsn;
- req->term = limbo->promote_greatest_term;
+ req->term = tr->terms_max;
}
static void
@@ -726,20 +733,21 @@ txn_limbo_wait_empty(struct txn_limbo *limbo, double timeout)
void
txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
{
+ struct txn_limbo_terms *tr = &limbo->terms;
uint64_t term = req->term;
uint32_t origin = req->origin_id;
if (txn_limbo_replica_term(limbo, origin) < term) {
- vclock_follow(&limbo->promote_term_map, origin, term);
- if (term > limbo->promote_greatest_term)
- limbo->promote_greatest_term = term;
+ vclock_follow(&tr->terms_map, origin, term);
+ if (term > tr->terms_max)
+ tr->terms_max = term;
} else if (iproto_type_is_promote_request(req->type) &&
- limbo->promote_greatest_term > 1) {
+ tr->terms_max > 1) {
/* PROMOTE for outdated term. Ignore. */
say_info("RAFT: ignoring %s request from instance "
"id %u for term %llu. Greatest term seen "
"before (%llu) is bigger.",
iproto_type_name(req->type), origin, (long long)term,
- (long long)limbo->promote_greatest_term);
+ (long long)tr->terms_max);
return;
}
diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h
index 53e52f676..dc980bf7c 100644
--- a/src/box/txn_limbo.h
+++ b/src/box/txn_limbo.h
@@ -75,6 +75,31 @@ txn_limbo_entry_is_complete(const struct txn_limbo_entry *e)
return e->is_commit || e->is_rollback;
}
+/**
+ * Keep state of promote requests to handle split-brain
+ * situation and other errors.
+ */
+struct txn_limbo_terms {
+ /**
+ * Latest terms received with PROMOTE entries from remote instances.
+ * Limbo uses them to filter out the transactions coming not from the
+ * limbo owner, but so outdated that they are rolled back everywhere
+ * except outdated nodes.
+ */
+ struct vclock terms_map;
+ /**
+ * The biggest PROMOTE term seen by the instance and persisted in WAL.
+ * It is related to raft term, but not the same. Synchronous replication
+ * represented by the limbo is interested only in the won elections
+ * ended with PROMOTE request.
+ * It means the limbo's term might be smaller than the raft term, while
+ * there are ongoing elections, or the leader is already known and this
+ * instance hasn't read its PROMOTE request yet. During other times the
+ * limbo and raft are in sync and the terms are the same.
+ */
+ uint64_t terms_max;
+};
+
/**
* Limbo is a place where transactions are stored, which are
* finished, but not committed nor rolled back. These are
@@ -130,23 +155,9 @@ struct txn_limbo {
*/
struct vclock vclock;
/**
- * Latest terms received with PROMOTE entries from remote instances.
- * Limbo uses them to filter out the transactions coming not from the
- * limbo owner, but so outdated that they are rolled back everywhere
- * except outdated nodes.
- */
- struct vclock promote_term_map;
- /**
- * The biggest PROMOTE term seen by the instance and persisted in WAL.
- * It is related to raft term, but not the same. Synchronous replication
- * represented by the limbo is interested only in the won elections
- * ended with PROMOTE request.
- * It means the limbo's term might be smaller than the raft term, while
- * there are ongoing elections, or the leader is already known and this
- * instance hasn't read its PROMOTE request yet. During other times the
- * limbo and raft are in sync and the terms are the same.
+ * Track promote requests.
*/
- uint64_t promote_greatest_term;
+ struct txn_limbo_terms terms;
/**
* Maximal LSN gathered quorum and either already confirmed in WAL, or
* whose confirmation is in progress right now. Any attempt to confirm
@@ -218,7 +229,8 @@ txn_limbo_last_entry(struct txn_limbo *limbo)
static inline uint64_t
txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
{
- return vclock_get(&limbo->promote_term_map, replica_id);
+ const struct txn_limbo_terms *tr = &limbo->terms;
+ return vclock_get(&tr->terms_map, replica_id);
}
/**
@@ -229,8 +241,9 @@ static inline bool
txn_limbo_is_replica_outdated(const struct txn_limbo *limbo,
uint32_t replica_id)
{
+ const struct txn_limbo_terms *tr = &limbo->terms;
return txn_limbo_replica_term(limbo, replica_id) <
- limbo->promote_greatest_term;
+ tr->terms_max;
}
/**
--
2.31.1
* [Tarantool-patches] [PATCH v8 4/6] limbo: order access to the limbo terms
2021-07-26 15:34 [Tarantool-patches] [PATCH v8 0/6] limbo: implement packets filtering Cyrill Gorcunov via Tarantool-patches
` (2 preceding siblings ...)
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 3/6] limbo: gather promote tracking into a separate structure Cyrill Gorcunov via Tarantool-patches
@ 2021-07-26 15:34 ` Cyrill Gorcunov via Tarantool-patches
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 5/6] limbo: filter incoming synchro requests Cyrill Gorcunov via Tarantool-patches
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 6/6] test: replication -- add gh-6036-rollback-confirm Cyrill Gorcunov via Tarantool-patches
5 siblings, 0 replies; 11+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-07-26 15:34 UTC (permalink / raw)
To: tml; +Cc: Vladislav Shpilevoy
Limbo terms tracking is shared between appliers: while one
applier is waiting for a write to complete inside the
journal_write() routine, another one may need to read the
term value to figure out whether a promote request is valid
to apply. Due to cooperative multitasking such access to the
terms is not consistent, so we have to make sure that other
fibers read only up-to-date terms (i.e. terms already written
to the WAL).
For this we use a latch: while one fiber holds the terms
lock for an update, other readers wait until the operation
is complete.
For example, here is a call graph of two appliers:
applier 1
---------
applier_apply_tx
(promote term = 3
current max term = 2)
applier_synchro_filter_tx
apply_synchro_row
journal_write
(sleeping)
At this moment another applier comes in with obsolete
data and term 2:
applier 2
---------
applier_apply_tx
(term 2)
applier_synchro_filter_tx
txn_limbo_is_replica_outdated -> false
journal_write (sleep)
applier 1
---------
journal wakes up
apply_synchro_row_cb
set max term to 3
So applier 2 did not notice that term 3 has already been
seen and wrote obsolete data. With locking, applier 2 will
wait until applier 1 has finished its write.
Part-of #6036
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
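For example, the reader path which went wrong in the scenario above
(applier 2 observing a stale term) now goes through the latch; this is
a condensed extract of txn_limbo_is_replica_outdated() from the diff
below, not new code:

    static inline bool
    txn_limbo_is_replica_outdated(struct txn_limbo *limbo,
                                  uint32_t replica_id)
    {
            const struct txn_limbo_terms *tr = &limbo->terms;
            /*
             * Waits while another fiber holds the latch inside
             * apply_synchro_row(), i.e. while a PROMOTE is being
             * written to the WAL.
             */
            txn_limbo_terms_lock(limbo);
            bool res = txn_limbo_term_locked(limbo, replica_id) <
                       tr->terms_max;
            txn_limbo_terms_unlock(limbo);
            return res;
    }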
src/box/applier.cc | 10 ++++--
src/box/box.cc | 16 ++++------
src/box/txn_limbo.c | 16 ++++++++--
src/box/txn_limbo.h | 74 ++++++++++++++++++++++++++++++++++++++++++---
4 files changed, 96 insertions(+), 20 deletions(-)
diff --git a/src/box/applier.cc b/src/box/applier.cc
index f621fa657..b5c3a7b67 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -856,7 +856,7 @@ apply_synchro_row_cb(struct journal_entry *entry)
applier_rollback_by_wal_io(entry->res);
} else {
replica_txn_wal_write_cb(synchro_entry->rcb);
- txn_limbo_process(&txn_limbo, synchro_entry->req);
+ txn_limbo_process_locked(&txn_limbo, synchro_entry->req);
trigger_run(&replicaset.applier.on_wal_write, NULL);
}
fiber_wakeup(synchro_entry->owner);
@@ -872,6 +872,7 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
if (xrow_decode_synchro(row, &req) != 0)
goto err;
+ txn_limbo_terms_lock(&txn_limbo);
struct replica_cb_data rcb_data;
struct synchro_entry entry;
/*
@@ -909,12 +910,15 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
* transactions side, including the async ones.
*/
if (journal_write(&entry.base) != 0)
- goto err;
+ goto err_unlock;
if (entry.base.res < 0) {
diag_set_journal_res(entry.base.res);
- goto err;
+ goto err_unlock;
}
+ txn_limbo_terms_unlock(&txn_limbo);
return 0;
+err_unlock:
+ txn_limbo_terms_unlock(&txn_limbo);
err:
diag_log();
return -1;
diff --git a/src/box/box.cc b/src/box/box.cc
index b356508f0..395f0d9ef 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1565,8 +1565,7 @@ box_run_elections(void)
static int
box_check_promote_term_intact(uint64_t promote_term)
{
- const struct txn_limbo_terms *tr = &txn_limbo.terms;
- if (tr->terms_max != promote_term) {
+ if (txn_limbo_terms_max_raw(&txn_limbo) != promote_term) {
diag_set(ClientError, ER_INTERFERING_PROMOTE,
txn_limbo.owner_id);
return -1;
@@ -1578,8 +1577,7 @@ box_check_promote_term_intact(uint64_t promote_term)
static int
box_trigger_elections(void)
{
- const struct txn_limbo_terms *tr = &txn_limbo.terms;
- uint64_t promote_term = tr->terms_max;
+ uint64_t promote_term = txn_limbo_terms_max_raw(&txn_limbo);
raft_new_term(box_raft());
if (box_raft_wait_term_persisted() < 0)
return -1;
@@ -1590,8 +1588,7 @@ box_trigger_elections(void)
static int
box_try_wait_confirm(double timeout)
{
- const struct txn_limbo_terms *tr = &txn_limbo.terms;
- uint64_t promote_term = tr->terms_max;
+ uint64_t promote_term = txn_limbo_terms_max_raw(&txn_limbo);
txn_limbo_wait_empty(&txn_limbo, timeout);
return box_check_promote_term_intact(promote_term);
}
@@ -1607,8 +1604,7 @@ box_wait_limbo_acked(void)
if (txn_limbo_is_empty(&txn_limbo))
return txn_limbo.confirmed_lsn;
- const struct txn_limbo_terms *tr = &txn_limbo.terms;
- uint64_t promote_term = tr->terms_max;
+ uint64_t promote_term = txn_limbo_terms_max_raw(&txn_limbo);
int quorum = replication_synchro_quorum;
struct txn_limbo_entry *last_entry;
last_entry = txn_limbo_last_synchro_entry(&txn_limbo);
@@ -1724,7 +1720,7 @@ box_promote(void)
* Currently active leader (the instance that is seen as leader by both
* raft and txn_limbo) can't issue another PROMOTE.
*/
- bool is_leader = txn_limbo_replica_term(&txn_limbo, instance_id) ==
+ bool is_leader = txn_limbo_term(&txn_limbo, instance_id) ==
raft->term && txn_limbo.owner_id == instance_id;
if (box_election_mode != ELECTION_MODE_OFF)
is_leader = is_leader && raft->state == RAFT_STATE_LEADER;
@@ -1780,7 +1776,7 @@ box_demote(void)
return 0;
/* Currently active leader is the only one who can issue a DEMOTE. */
- bool is_leader = txn_limbo_replica_term(&txn_limbo, instance_id) ==
+ bool is_leader = txn_limbo_term(&txn_limbo, instance_id) ==
box_raft()->term && txn_limbo.owner_id == instance_id;
if (box_election_mode != ELECTION_MODE_OFF)
is_leader = is_leader && box_raft()->state == RAFT_STATE_LEADER;
diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 53c86f34e..5f43f575c 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -40,6 +40,7 @@ struct txn_limbo txn_limbo;
static void
txn_limbo_terms_create(struct txn_limbo_terms *tr)
{
+ latch_create(&tr->latch);
vclock_create(&tr->terms_map);
tr->terms_max = 0;
}
@@ -731,12 +732,14 @@ txn_limbo_wait_empty(struct txn_limbo *limbo, double timeout)
}
void
-txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
+txn_limbo_process_locked(struct txn_limbo *limbo,
+ const struct synchro_request *req)
{
struct txn_limbo_terms *tr = &limbo->terms;
uint64_t term = req->term;
uint32_t origin = req->origin_id;
- if (txn_limbo_replica_term(limbo, origin) < term) {
+
+ if (txn_limbo_term_locked(limbo, origin) < term) {
vclock_follow(&tr->terms_map, origin, term);
if (term > tr->terms_max)
tr->terms_max = term;
@@ -794,6 +797,15 @@ txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
return;
}
+void
+txn_limbo_process(struct txn_limbo *limbo,
+ const struct synchro_request *req)
+{
+ txn_limbo_terms_lock(limbo);
+ txn_limbo_process_locked(limbo, req);
+ txn_limbo_terms_unlock(limbo);
+}
+
void
txn_limbo_on_parameters_change(struct txn_limbo *limbo)
{
diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h
index dc980bf7c..45687381f 100644
--- a/src/box/txn_limbo.h
+++ b/src/box/txn_limbo.h
@@ -31,6 +31,7 @@
*/
#include "small/rlist.h"
#include "vclock/vclock.h"
+#include "latch.h"
#include <stdint.h>
@@ -80,6 +81,10 @@ txn_limbo_entry_is_complete(const struct txn_limbo_entry *e)
* situation and other errors.
*/
struct txn_limbo_terms {
+ /**
+ * To order access to the promote data.
+ */
+ struct latch latch;
/**
* Latest terms received with PROMOTE entries from remote instances.
* Limbo uses them to filter out the transactions coming not from the
@@ -222,15 +227,66 @@ txn_limbo_last_entry(struct txn_limbo *limbo)
in_queue);
}
+/** Lock promote data. */
+static inline void
+txn_limbo_terms_lock(struct txn_limbo *limbo)
+{
+ struct txn_limbo_terms *tr = &limbo->terms;
+ latch_lock(&tr->latch);
+}
+
+/** Unlock promote data. */
+static void
+txn_limbo_terms_unlock(struct txn_limbo *limbo)
+{
+ struct txn_limbo_terms *tr = &limbo->terms;
+ latch_unlock(&tr->latch);
+}
+
+/** Test if promote data is locked. */
+static inline bool
+txn_limbo_terms_is_locked(const struct txn_limbo *limbo)
+{
+ const struct txn_limbo_terms *tr = &limbo->terms;
+ return latch_is_locked(&tr->latch);
+}
+
+/** Fetch replica's term with lock taken. */
+static inline uint64_t
+txn_limbo_term_locked(struct txn_limbo *limbo, uint32_t replica_id)
+{
+ const struct txn_limbo_terms *tr = &limbo->terms;
+ panic_on(!txn_limbo_terms_is_locked(limbo),
+ "limbo: unlocked term read for replica %u",
+ replica_id);
+ return vclock_get(&tr->terms_map, replica_id);
+}
+
/**
* Return the latest term as seen in PROMOTE requests from instance with id
* @a replica_id.
*/
static inline uint64_t
-txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
+txn_limbo_term(struct txn_limbo *limbo, uint32_t replica_id)
+{
+ txn_limbo_terms_lock(limbo);
+ uint64_t v = txn_limbo_term_locked(limbo, replica_id);
+ txn_limbo_terms_unlock(limbo);
+ return v;
+}
+
+/**
+ * A read of @a terms_max without taking the terms lock.
+ *
+ * Use it only if you are interested in the current value
+ * and are prepared for it to be updated when a yield
+ * happens after the read.
+ */
+static inline uint64_t
+txn_limbo_terms_max_raw(struct txn_limbo *limbo)
{
const struct txn_limbo_terms *tr = &limbo->terms;
- return vclock_get(&tr->terms_map, replica_id);
+ return tr->terms_max;
}
/**
@@ -238,12 +294,15 @@ txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
* data from it. The check is only valid when elections are enabled.
*/
static inline bool
-txn_limbo_is_replica_outdated(const struct txn_limbo *limbo,
+txn_limbo_is_replica_outdated(struct txn_limbo *limbo,
uint32_t replica_id)
{
const struct txn_limbo_terms *tr = &limbo->terms;
- return txn_limbo_replica_term(limbo, replica_id) <
- tr->terms_max;
+ txn_limbo_terms_lock(limbo);
+ bool res = txn_limbo_term_locked(limbo, replica_id) <
+ tr->terms_max;
+ txn_limbo_terms_unlock(limbo);
+ return res;
}
/**
@@ -315,6 +374,11 @@ txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry);
/** Execute a synchronous replication request. */
void
+txn_limbo_process_locked(struct txn_limbo *limbo,
+ const struct synchro_request *req);
+
+/** Lock limbo terms and execute a synchronous replication request. */
+void
txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req);
/**
--
2.31.1
* [Tarantool-patches] [PATCH v8 5/6] limbo: filter incoming synchro requests
2021-07-26 15:34 [Tarantool-patches] [PATCH v8 0/6] limbo: implement packets filtering Cyrill Gorcunov via Tarantool-patches
` (3 preceding siblings ...)
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 4/6] limbo: order access to the limbo terms Cyrill Gorcunov via Tarantool-patches
@ 2021-07-26 15:34 ` Cyrill Gorcunov via Tarantool-patches
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 6/6] test: replication -- add gh-6036-rollback-confirm Cyrill Gorcunov via Tarantool-patches
5 siblings, 0 replies; 11+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-07-26 15:34 UTC (permalink / raw)
To: tml; +Cc: Vladislav Shpilevoy
When we receive synchro requests we can't just apply
them blindly because in the worst case they may come from
a split-brain configuration (where a cluster has split into
several subclusters, each one has elected its own leader,
and the subclusters then try to merge back into the original
cluster). We need to do our best to detect such configurations
and force these nodes to rejoin from scratch for the sake of
data consistency.
Thus, when processing requests, we first pass them to a packet
filter which validates their contents and refuses to apply
them if they do not match the expected state.
Depending on the request type each packet traverses the
appropriate chain(s):
FILTER_IN
- Common chain for any synchro packet. We verify
that if replica_id is nil then it must be a
PROMOTE request with lsn 0 to migrate the limbo owner
FILTER_CONFIRM
FILTER_ROLLBACK
- Neither a confirm nor a rollback request may come
with an empty limbo, since that would mean the synchro
queue has already been processed and the peer didn't
notice it
FILTER_PROMOTE
- A promote request should come in with a new term only,
otherwise it means the peer didn't notice the election
- If the limbo's confirmed_lsn is equal to the promote LSN
then it is a valid request to process
- If the limbo's confirmed_lsn is bigger than the requested
one then it is valid in one case only -- limbo owner
migration, so the queue must be empty
- If the limbo's confirmed_lsn is less than the promote LSN then
- If the queue is empty then the transactions have
already been rolled back and the request is invalid
- If the queue is not empty then its first entry might be
greater than the promote LSN, which means the old data
has already been committed or rolled back and the
request is invalid
FILTER_DEMOTE
- NOP, reserved for future use
Closes #6036
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
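The FILTER_PROMOTE rules above, condensed into a sketch; it mirrors the
filter_promote() function in the diff below (logging and error details
omitted) and is not a separate implementation:

    /* Returns 0 to accept the PROMOTE request, -1 to reject it. */
    static int
    filter_promote_sketch(struct txn_limbo *limbo,
                          const struct synchro_request *req)
    {
            struct txn_limbo_terms *tr = &limbo->terms;
            if (tr->terms_max > 1 && tr->terms_max > req->term)
                    return -1;      /* the peer missed the elections */
            if (limbo->confirmed_lsn == req->lsn)
                    return 0;       /* nothing to re-decide */
            if (limbo->confirmed_lsn > req->lsn) {
                    /* Valid only as limbo owner migration. */
                    return txn_limbo_is_empty(limbo) ? 0 : -1;
            }
            /* Here confirmed_lsn < req->lsn. */
            if (txn_limbo_is_empty(limbo))
                    return -1;      /* data was already rolled back */
            return txn_limbo_first_entry(limbo)->lsn > req->lsn ? -1 : 0;
    }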
src/box/applier.cc | 21 ++-
src/box/box.cc | 11 +-
src/box/memtx_engine.c | 3 +-
src/box/txn_limbo.c | 304 ++++++++++++++++++++++++++++++++++++++---
src/box/txn_limbo.h | 33 ++++-
5 files changed, 346 insertions(+), 26 deletions(-)
diff --git a/src/box/applier.cc b/src/box/applier.cc
index b5c3a7b67..cf51dc8fb 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -458,7 +458,8 @@ applier_wait_snapshot(struct applier *applier)
struct synchro_request req;
if (xrow_decode_synchro(&row, &req) != 0)
diag_raise();
- txn_limbo_process(&txn_limbo, &req);
+ if (txn_limbo_process(&txn_limbo, &req) != 0)
+ diag_raise();
} else if (iproto_type_is_raft_request(row.type)) {
struct raft_request req;
if (xrow_decode_raft(&row, &req, NULL) != 0)
@@ -514,6 +515,11 @@ applier_fetch_snapshot(struct applier *applier)
struct ev_io *coio = &applier->io;
struct xrow_header row;
+ txn_limbo_filter_disable(&txn_limbo);
+ auto filter_guard = make_scoped_guard([&]{
+ txn_limbo_filter_enable(&txn_limbo);
+ });
+
memset(&row, 0, sizeof(row));
row.type = IPROTO_FETCH_SNAPSHOT;
coio_write_xrow(coio, &row);
@@ -587,6 +593,11 @@ applier_register(struct applier *applier, bool was_anon)
struct ev_io *coio = &applier->io;
struct xrow_header row;
+ txn_limbo_filter_disable(&txn_limbo);
+ auto filter_guard = make_scoped_guard([&]{
+ txn_limbo_filter_enable(&txn_limbo);
+ });
+
memset(&row, 0, sizeof(row));
/*
* Send this instance's current vclock together
@@ -620,6 +631,11 @@ applier_join(struct applier *applier)
struct xrow_header row;
uint64_t row_count;
+ txn_limbo_filter_disable(&txn_limbo);
+ auto filter_guard = make_scoped_guard([&]{
+ txn_limbo_filter_enable(&txn_limbo);
+ });
+
xrow_encode_join_xc(&row, &INSTANCE_UUID);
coio_write_xrow(coio, &row);
@@ -873,6 +889,9 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
goto err;
txn_limbo_terms_lock(&txn_limbo);
+ if (txn_limbo_filter_locked(&txn_limbo, &req) != 0)
+ goto err_unlock;
+
struct replica_cb_data rcb_data;
struct synchro_entry entry;
/*
diff --git a/src/box/box.cc b/src/box/box.cc
index 395f0d9ef..617dae39c 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1668,7 +1668,8 @@ box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn)
.lsn = promote_lsn,
.term = raft->term,
};
- txn_limbo_process(&txn_limbo, &req);
+ if (txn_limbo_process(&txn_limbo, &req) != 0)
+ diag_raise();
assert(txn_limbo_is_empty(&txn_limbo));
}
@@ -1695,7 +1696,8 @@ box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn)
.lsn = promote_lsn,
.term = box_raft()->term,
};
- txn_limbo_process(&txn_limbo, &req);
+ if (txn_limbo_process(&txn_limbo, &req) != 0)
+ diag_raise();
assert(txn_limbo_is_empty(&txn_limbo));
}
@@ -3273,6 +3275,11 @@ local_recovery(const struct tt_uuid *instance_uuid,
say_info("instance uuid %s", tt_uuid_str(&INSTANCE_UUID));
+ txn_limbo_filter_disable(&txn_limbo);
+ auto filter_guard = make_scoped_guard([&]{
+ txn_limbo_filter_enable(&txn_limbo);
+ });
+
struct wal_stream wal_stream;
wal_stream_create(&wal_stream);
auto stream_guard = make_scoped_guard([&]{
diff --git a/src/box/memtx_engine.c b/src/box/memtx_engine.c
index 0b06e5e63..4aed24fe3 100644
--- a/src/box/memtx_engine.c
+++ b/src/box/memtx_engine.c
@@ -238,7 +238,8 @@ memtx_engine_recover_synchro(const struct xrow_header *row)
* because all its rows have a zero replica_id.
*/
req.origin_id = req.replica_id;
- txn_limbo_process(&txn_limbo, &req);
+ if (txn_limbo_process(&txn_limbo, &req) != 0)
+ return -1;
return 0;
}
diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 5f43f575c..2f901b4ef 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -57,6 +57,7 @@ txn_limbo_create(struct txn_limbo *limbo)
limbo->confirmed_lsn = 0;
limbo->rollback_count = 0;
limbo->is_in_rollback = false;
+ limbo->is_filtering = true;
}
bool
@@ -731,37 +732,291 @@ txn_limbo_wait_empty(struct txn_limbo *limbo, double timeout)
return 0;
}
+enum filter_chain {
+ FILTER_IN,
+ FILTER_CONFIRM,
+ FILTER_ROLLBACK,
+ FILTER_PROMOTE,
+ FILTER_DEMOTE,
+ FILTER_MAX,
+};
+
+/**
+ * Common chain for any incoming packet.
+ */
+static int
+filter_in(struct txn_limbo *limbo, const struct synchro_request *req)
+{
+ (void)limbo;
+
+ if (req->replica_id == REPLICA_ID_NIL) {
+ /*
+ * The limbo was empty on the instance issuing
+ * the request. This means this instance must
+ * empty its limbo as well.
+ */
+ if (req->lsn != 0 ||
+ !iproto_type_is_promote_request(req->type)) {
+ say_info("RAFT: rejecting %s request from "
+ "instance id %u for term %llu. "
+ "req->replica_id = 0 but lsn %lld.",
+ iproto_type_name(req->type),
+ req->origin_id, (long long)req->term,
+ (long long)req->lsn);
+
+ diag_set(ClientError, ER_UNSUPPORTED,
+ "Replication",
+ "empty replica_id with nonzero LSN");
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * Filter CONFIRM and ROLLBACK packets.
+ */
+static int
+filter_confirm_rollback(struct txn_limbo *limbo,
+ const struct synchro_request *req)
+{
+ /*
+ * When limbo is empty we have nothing to
+ * confirm/commit and if this request comes
+ * in it means the split brain has happened.
+ */
+ if (!txn_limbo_is_empty(limbo))
+ return 0;
+
+ say_info("RAFT: rejecting %s request from "
+ "instance id %u for term %llu. "
+ "Empty limbo detected.",
+ iproto_type_name(req->type),
+ req->origin_id,
+ (long long)req->term);
+
+ diag_set(ClientError, ER_UNSUPPORTED,
+ "Replication",
+ "confirm/rollback with empty limbo");
+ return -1;
+}
+
+/**
+ * Filter PROMOTE packets.
+ */
+static int
+filter_promote(struct txn_limbo *limbo, const struct synchro_request *req)
+{
+ struct txn_limbo_terms *tr = &limbo->terms;
+ int64_t promote_lsn = req->lsn;
+
+ /*
+	 * If the term has already been seen it means the request
+	 * comes from a node which didn't notice the new elections,
+	 * thus it has been living in its own subcluster and its
+	 * data is no longer consistent.
+ */
+ if (tr->terms_max > 1 && tr->terms_max > req->term) {
+ say_info("RAFT: rejecting %s request from "
+ "instance id %u for term %llu. "
+ "Max term seen is %llu.",
+ iproto_type_name(req->type),
+ req->origin_id,
+ (long long)req->term,
+ (long long)tr->terms_max);
+
+ diag_set(ClientError, ER_UNSUPPORTED,
+ "Replication", "obsolete terms");
+ return -1;
+ }
+
+ /*
+	 * Either the limbo is empty or the new promote will
+	 * roll back all waiting transactions, which
+	 * is fine.
+ */
+ if (limbo->confirmed_lsn == promote_lsn)
+ return 0;
+
+ /*
+ * Explicit split brain situation. Promote
+ * comes in with an old LSN which we've already
+ * processed.
+ */
+ if (limbo->confirmed_lsn > promote_lsn) {
+ /*
+ * If limbo is empty we're migrating
+ * the owner.
+ */
+ if (txn_limbo_is_empty(limbo))
+ return 0;
+
+ say_info("RAFT: rejecting %s request from "
+ "instance id %u for term %llu. "
+ "confirmed_lsn %lld > promote_lsn %lld.",
+ iproto_type_name(req->type),
+ req->origin_id, (long long)req->term,
+ (long long)limbo->confirmed_lsn,
+ (long long)promote_lsn);
+
+ diag_set(ClientError, ER_UNSUPPORTED,
+ "Replication",
+ "backward promote LSN (split brain)");
+ return -1;
+ }
+
+ /*
+ * The last case requires a few subcases.
+ */
+ assert(limbo->confirmed_lsn < promote_lsn);
+
+ if (txn_limbo_is_empty(limbo)) {
+ /*
+ * Transactions are already rolled back
+ * since the limbo is empty.
+ */
+ say_info("RAFT: rejecting %s request from "
+ "instance id %u for term %llu. "
+ "confirmed_lsn %lld < promote_lsn %lld "
+ "and empty limbo.",
+ iproto_type_name(req->type),
+ req->origin_id, (long long)req->term,
+ (long long)limbo->confirmed_lsn,
+ (long long)promote_lsn);
+
+ diag_set(ClientError, ER_UNSUPPORTED,
+ "Replication",
+ "forward promote LSN "
+ "(empty limbo, split brain)");
+ return -1;
+ } else {
+ /*
+ * Some entries are present in the limbo,
+	 * and if the first entry's LSN is greater than
+	 * the requested one then the old data has either been
+	 * committed or rolled back, so we can't continue.
+ */
+ struct txn_limbo_entry *first;
+
+ first = txn_limbo_first_entry(limbo);
+ if (first->lsn > promote_lsn) {
+ say_info("RAFT: rejecting %s request from "
+ "instance id %u for term %llu. "
+ "confirmed_lsn %lld < promote_lsn %lld "
+ "and limbo first lsn %lld.",
+ iproto_type_name(req->type),
+ req->origin_id, (long long)req->term,
+ (long long)limbo->confirmed_lsn,
+ (long long)promote_lsn,
+ (long long)first->lsn);
+
+ diag_set(ClientError, ER_UNSUPPORTED,
+ "Replication",
+				 "promote LSN conflict "
+ "(limbo LSN ahead, split brain)");
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * Filter DEMOTE packets.
+ */
+static int
+filter_demote(struct txn_limbo *limbo, const struct synchro_request *req)
+{
+ (void)limbo;
+ (void)req;
+ return 0;
+}
+
+static int (*filter_req[FILTER_MAX])
+(struct txn_limbo *limbo, const struct synchro_request *req) = {
+ [FILTER_IN] = filter_in,
+ [FILTER_CONFIRM] = filter_confirm_rollback,
+ [FILTER_ROLLBACK] = filter_confirm_rollback,
+ [FILTER_PROMOTE] = filter_promote,
+ [FILTER_DEMOTE] = filter_demote,
+};
+
+int
+txn_limbo_filter_locked(struct txn_limbo *limbo,
+ const struct synchro_request *req)
+{
+ unsigned int mask = (1u << FILTER_IN);
+ unsigned int pos = 0;
+
+ if (!limbo->is_filtering)
+ return 0;
+
+#ifndef NDEBUG
+ say_info("limbo: filter %s replica_id %u origin_id %u "
+ "term %lld lsn %lld, queue owner_id %u len %lld "
+ "confirmed_lsn %lld",
+ iproto_type_name(req->type),
+ req->replica_id, req->origin_id,
+ (long long)req->term, (long long)req->lsn,
+ limbo->owner_id, (long long)limbo->len,
+ (long long)limbo->confirmed_lsn);
+#endif
+
+ switch (req->type) {
+ case IPROTO_CONFIRM:
+ mask |= (1u << FILTER_CONFIRM);
+ break;
+ case IPROTO_ROLLBACK:
+ mask |= (1u << FILTER_ROLLBACK);
+ break;
+ case IPROTO_PROMOTE:
+ mask |= (1u << FILTER_PROMOTE);
+ break;
+ case IPROTO_DEMOTE:
+ mask |= (1u << FILTER_DEMOTE);
+ break;
+ default:
+ say_info("RAFT: rejecting unexpected %d "
+ "request from instance id %u "
+ "for term %llu.",
+ req->type, req->origin_id,
+ (long long)req->term);
+ diag_set(ClientError, ER_UNSUPPORTED,
+ "Replication",
+ "unexpected request type");
+ return -1;
+ }
+
+ while (mask != 0) {
+ if ((mask & 1) != 0) {
+ assert(pos < lengthof(filter_req));
+ if (filter_req[pos](limbo, req) != 0)
+ return -1;
+ }
+ pos++;
+ mask >>= 1;
+ };
+
+ return 0;
+}
+
void
txn_limbo_process_locked(struct txn_limbo *limbo,
const struct synchro_request *req)
{
struct txn_limbo_terms *tr = &limbo->terms;
- uint64_t term = req->term;
uint32_t origin = req->origin_id;
+ uint64_t term = req->term;
if (txn_limbo_term_locked(limbo, origin) < term) {
vclock_follow(&tr->terms_map, origin, term);
if (term > tr->terms_max)
tr->terms_max = term;
- } else if (iproto_type_is_promote_request(req->type) &&
- tr->terms_max > 1) {
- /* PROMOTE for outdated term. Ignore. */
- say_info("RAFT: ignoring %s request from instance "
- "id %u for term %llu. Greatest term seen "
- "before (%llu) is bigger.",
- iproto_type_name(req->type), origin, (long long)term,
- (long long)tr->terms_max);
- return;
}
int64_t lsn = req->lsn;
- if (req->replica_id == REPLICA_ID_NIL) {
- /*
- * The limbo was empty on the instance issuing the request.
- * This means this instance must empty its limbo as well.
- */
- assert(lsn == 0 && iproto_type_is_promote_request(req->type));
- } else if (req->replica_id != limbo->owner_id) {
+ if (req->replica_id != limbo->owner_id) {
/*
* Ignore CONFIRM/ROLLBACK messages for a foreign master.
* These are most likely outdated messages for already confirmed
@@ -792,18 +1047,25 @@ txn_limbo_process_locked(struct txn_limbo *limbo,
txn_limbo_read_demote(limbo, lsn);
break;
default:
- unreachable();
+ panic("limbo: unexpected request type %d",
+ req->type);
+ break;
}
- return;
}
-void
+int
txn_limbo_process(struct txn_limbo *limbo,
const struct synchro_request *req)
{
+ int rc;
+
txn_limbo_terms_lock(limbo);
- txn_limbo_process_locked(limbo, req);
+ rc = txn_limbo_filter_locked(limbo, req);
+ if (rc == 0)
+ txn_limbo_process_locked(limbo, req);
txn_limbo_terms_unlock(limbo);
+
+ return rc;
}
void
diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h
index 45687381f..97de6943b 100644
--- a/src/box/txn_limbo.h
+++ b/src/box/txn_limbo.h
@@ -195,6 +195,14 @@ struct txn_limbo {
* by the 'reversed rollback order' rule - contradiction.
*/
bool is_in_rollback;
+ /**
+ * Whether the limbo should filter incoming requests.
+	 * During local recovery from the WAL file and during the
+	 * applier's join phase we fully trust the incoming data
+	 * because it forms the initial limbo state, so such
+	 * requests should not be filtered out.
+ */
+ bool is_filtering;
};
/**
@@ -372,15 +380,38 @@ txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn);
int
txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry);
+/**
+ * Verify if the request is valid for processing.
+ */
+int
+txn_limbo_filter_locked(struct txn_limbo *limbo,
+ const struct synchro_request *req);
+
/** Execute a synchronous replication request. */
void
txn_limbo_process_locked(struct txn_limbo *limbo,
const struct synchro_request *req);
/** Lock limbo terms and execute a synchronous replication request. */
-void
+int
txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req);
+/** Enable filtering of synchro requests. */
+static inline void
+txn_limbo_filter_enable(struct txn_limbo *limbo)
+{
+ limbo->is_filtering = true;
+ say_info("limbo: filter enabled");
+}
+
+/** Disable filtering of synchro requests. */
+static inline void
+txn_limbo_filter_disable(struct txn_limbo *limbo)
+{
+ limbo->is_filtering = false;
+ say_info("limbo: filter disabled");
+}
+
/**
* Waiting for confirmation of all "sync" transactions
* during confirm timeout or fail.
--
2.31.1
* [Tarantool-patches] [PATCH v8 6/6] test: replication -- add gh-6036-rollback-confirm
2021-07-26 15:34 [Tarantool-patches] [PATCH v8 0/6] limbo: implement packets filtering Cyrill Gorcunov via Tarantool-patches
` (4 preceding siblings ...)
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 5/6] limbo: filter incoming synchro requests Cyrill Gorcunov via Tarantool-patches
@ 2021-07-26 15:34 ` Cyrill Gorcunov via Tarantool-patches
5 siblings, 0 replies; 11+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-07-26 15:34 UTC (permalink / raw)
To: tml; +Cc: Vladislav Shpilevoy
Follow-up #6036
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
test/replication/gh-6036-master.lua | 1 +
test/replication/gh-6036-node.lua | 33 ++++
test/replication/gh-6036-replica.lua | 1 +
.../gh-6036-rollback-confirm.result | 180 ++++++++++++++++++
.../gh-6036-rollback-confirm.test.lua | 88 +++++++++
5 files changed, 303 insertions(+)
create mode 120000 test/replication/gh-6036-master.lua
create mode 100644 test/replication/gh-6036-node.lua
create mode 120000 test/replication/gh-6036-replica.lua
create mode 100644 test/replication/gh-6036-rollback-confirm.result
create mode 100644 test/replication/gh-6036-rollback-confirm.test.lua
diff --git a/test/replication/gh-6036-master.lua b/test/replication/gh-6036-master.lua
new file mode 120000
index 000000000..65baed5de
--- /dev/null
+++ b/test/replication/gh-6036-master.lua
@@ -0,0 +1 @@
+gh-6036-node.lua
\ No newline at end of file
diff --git a/test/replication/gh-6036-node.lua b/test/replication/gh-6036-node.lua
new file mode 100644
index 000000000..ac701b7a2
--- /dev/null
+++ b/test/replication/gh-6036-node.lua
@@ -0,0 +1,33 @@
+local INSTANCE_ID = string.match(arg[0], "gh%-6036%-(.+)%.lua")
+
+local function unix_socket(name)
+ return "unix/:./" .. name .. '.sock';
+end
+
+require('console').listen(os.getenv('ADMIN'))
+
+if INSTANCE_ID == "master" then
+ box.cfg({
+ listen = unix_socket("master"),
+ replication_connect_quorum = 0,
+ election_mode = 'candidate',
+ replication_synchro_quorum = 3,
+ replication_synchro_timeout = 1000,
+ })
+elseif INSTANCE_ID == "replica" then
+ box.cfg({
+ listen = unix_socket("replica"),
+ replication = {
+ unix_socket("master"),
+ unix_socket("replica")
+ },
+ read_only = true,
+ election_mode = 'voter',
+ replication_synchro_quorum = 2,
+ replication_synchro_timeout = 1000,
+ })
+end
+
+box.once("bootstrap", function()
+ box.schema.user.grant('guest', 'super')
+end)
diff --git a/test/replication/gh-6036-replica.lua b/test/replication/gh-6036-replica.lua
new file mode 120000
index 000000000..65baed5de
--- /dev/null
+++ b/test/replication/gh-6036-replica.lua
@@ -0,0 +1 @@
+gh-6036-node.lua
\ No newline at end of file
diff --git a/test/replication/gh-6036-rollback-confirm.result b/test/replication/gh-6036-rollback-confirm.result
new file mode 100644
index 000000000..ec5403d5c
--- /dev/null
+++ b/test/replication/gh-6036-rollback-confirm.result
@@ -0,0 +1,180 @@
+-- test-run result file version 2
+--
+-- gh-6036: Test for record collision detection. We have a cluster
+-- of two nodes: master and replica. The master initiates a synchro
+-- write but fails to gather a quorum. Before it rolls back the record
+-- a network breakage occurs and the replica keeps the dirty data while
+-- the master node goes offline. The replica becomes the new raft leader
+-- and commits the dirty data; at the same time the master rolls back
+-- this record and tries to connect back to the new raft leader. Such
+-- a connection should be refused because the old master node is no
+-- longer consistent.
+--
+test_run = require('test_run').new()
+ | ---
+ | ...
+
+test_run:cmd('create server master with script="replication/gh-6036-master.lua"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('create server replica with script="replication/gh-6036-replica.lua"')
+ | ---
+ | - true
+ | ...
+
+test_run:cmd('start server master')
+ | ---
+ | - true
+ | ...
+test_run:cmd('start server replica')
+ | ---
+ | - true
+ | ...
+
+--
+-- Connect master to the replica and write a record. Since the quorum
+-- value is bigger than the number of nodes in the cluster it will
+-- be rolled back later.
+test_run:switch('master')
+ | ---
+ | - true
+ | ...
+box.cfg({ \
+ replication = { \
+ "unix/:./master.sock", \
+ "unix/:./replica.sock", \
+ }, \
+})
+ | ---
+ | ...
+_ = box.schema.create_space('sync', {is_sync = true})
+ | ---
+ | ...
+_ = box.space.sync:create_index('pk')
+ | ---
+ | ...
+
+--
+-- Wait for the record to appear on the master.
+f = require('fiber').create(function() box.space.sync:replace{1} end)
+ | ---
+ | ...
+test_run:wait_cond(function() return box.space.sync:get({1}) ~= nil end, 100)
+ | ---
+ | - true
+ | ...
+box.space.sync:select{}
+ | ---
+ | - - [1]
+ | ...
+
+--
+-- Wait for the record from the master to get written and then
+-- drop the replication.
+test_run:switch('replica')
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return box.space.sync:get({1}) ~= nil end, 100)
+ | ---
+ | - true
+ | ...
+box.space.sync:select{}
+ | ---
+ | - - [1]
+ | ...
+box.cfg{replication = {}}
+ | ---
+ | ...
+
+--
+-- Then we jump back to the master and drop the replication,
+-- thus the unconfirmed record gets rolled back.
+test_run:switch('master')
+ | ---
+ | - true
+ | ...
+box.cfg({ \
+ replication = {}, \
+ replication_synchro_timeout = 0.001, \
+ election_mode = 'manual', \
+})
+ | ---
+ | ...
+while f:status() ~= 'dead' do require('fiber').sleep(0.1) end
+ | ---
+ | ...
+test_run:wait_cond(function() return box.space.sync:get({1}) == nil end, 100)
+ | ---
+ | - true
+ | ...
+
+--
+-- Force the replica to become a RAFT leader and
+-- commit this new record.
+test_run:switch('replica')
+ | ---
+ | - true
+ | ...
+box.cfg({ \
+ replication_synchro_quorum = 1, \
+ election_mode = 'manual' \
+})
+ | ---
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+box.space.sync:select{}
+ | ---
+ | - - [1]
+ | ...
+
+--
+-- Connect the master back to the replica; it should
+-- be refused.
+test_run:switch('master')
+ | ---
+ | - true
+ | ...
+box.cfg({ \
+ replication = { \
+ "unix/:./replica.sock", \
+ }, \
+})
+ | ---
+ | ...
+box.space.sync:select{}
+ | ---
+ | - []
+ | ...
+assert(test_run:grep_log('master', 'rejecting PROMOTE') ~= nil);
+ | ---
+ | - true
+ | ...
+assert(test_run:grep_log('master', 'ER_UNSUPPORTED') ~= nil);
+ | ---
+ | - true
+ | ...
+
+test_run:switch('default')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server master')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server master')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server replica')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server replica')
+ | ---
+ | - true
+ | ...
diff --git a/test/replication/gh-6036-rollback-confirm.test.lua b/test/replication/gh-6036-rollback-confirm.test.lua
new file mode 100644
index 000000000..dbeb9f496
--- /dev/null
+++ b/test/replication/gh-6036-rollback-confirm.test.lua
@@ -0,0 +1,88 @@
+--
+-- gh-6036: Test for record collision detection. We have a cluster
+-- of two nodes: master and replica. The master initiates a synchro
+-- write but fails to gather a quorum. Before it rolls back the record
+-- a network breakage occurs and the replica keeps the dirty data while
+-- the master node goes offline. The replica becomes the new raft leader
+-- and commits the dirty data; at the same time the master rolls back
+-- this record and tries to connect back to the new raft leader. Such
+-- a connection should be refused because the old master node is no
+-- longer consistent.
+--
+test_run = require('test_run').new()
+
+test_run:cmd('create server master with script="replication/gh-6036-master.lua"')
+test_run:cmd('create server replica with script="replication/gh-6036-replica.lua"')
+
+test_run:cmd('start server master')
+test_run:cmd('start server replica')
+
+--
+-- Connect master to the replica and write a record. Since the quorum
+-- value is bigger than the number of nodes in the cluster it will
+-- be rolled back later.
+test_run:switch('master')
+box.cfg({ \
+ replication = { \
+ "unix/:./master.sock", \
+ "unix/:./replica.sock", \
+ }, \
+})
+_ = box.schema.create_space('sync', {is_sync = true})
+_ = box.space.sync:create_index('pk')
+
+--
+-- Wait for the record to appear on the master.
+f = require('fiber').create(function() box.space.sync:replace{1} end)
+test_run:wait_cond(function() return box.space.sync:get({1}) ~= nil end, 100)
+box.space.sync:select{}
+
+--
+-- Wait for the record from the master to get written and then
+-- drop the replication.
+test_run:switch('replica')
+test_run:wait_cond(function() return box.space.sync:get({1}) ~= nil end, 100)
+box.space.sync:select{}
+box.cfg{replication = {}}
+
+--
+-- Then we jump back to the master and drop the replication,
+-- thus the unconfirmed record gets rolled back.
+test_run:switch('master')
+box.cfg({ \
+ replication = {}, \
+ replication_synchro_timeout = 0.001, \
+ election_mode = 'manual', \
+})
+while f:status() ~= 'dead' do require('fiber').sleep(0.1) end
+test_run:wait_cond(function() return box.space.sync:get({1}) == nil end, 100)
+
+--
+-- Force the replica to become a RAFT leader and
+-- commit this new record.
+test_run:switch('replica')
+box.cfg({ \
+ replication_synchro_quorum = 1, \
+ election_mode = 'manual' \
+})
+box.ctl.promote()
+box.space.sync:select{}
+
+--
+-- Connect the master back to the replica; it should
+-- be refused.
+test_run:switch('master')
+box.cfg({ \
+ replication = { \
+ "unix/:./replica.sock", \
+ }, \
+})
+box.space.sync:select{}
+assert(test_run:grep_log('master', 'rejecting PROMOTE') ~= nil);
+assert(test_run:grep_log('master', 'ER_UNSUPPORTED') ~= nil);
+
+test_run:switch('default')
+test_run:cmd('stop server master')
+test_run:cmd('delete server master')
+test_run:cmd('stop server replica')
+test_run:cmd('delete server replica')
--
2.31.1
* Re: [Tarantool-patches] [PATCH v8 3/6] limbo: gather promote tracking into a separate structure
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 3/6] limbo: gather promote tracking into a separate structure Cyrill Gorcunov via Tarantool-patches
@ 2021-07-28 21:34 ` Vladislav Shpilevoy via Tarantool-patches
2021-07-28 21:57 ` Cyrill Gorcunov via Tarantool-patches
0 siblings, 1 reply; 11+ messages in thread
From: Vladislav Shpilevoy via Tarantool-patches @ 2021-07-28 21:34 UTC (permalink / raw)
To: Cyrill Gorcunov, tml
Hi! Thanks for the patch!
On 26.07.2021 17:34, Cyrill Gorcunov via Tarantool-patches wrote:
> It is needed to introduce ordered modifications of the
> promote-related data in the next patch.
Why do you need this new struct? While I like the new names
(the old promote_greatest_term was super long), I don't see
any motivation to extract the members into a new struct.
It is still used only inside of the limbo as a member.
I suppose this might be a leftover from the old versions of
the patchset. Please, try to keep these members inside of
the limbo like they were.
* Re: [Tarantool-patches] [PATCH v8 3/6] limbo: gather promote tracking into a separate structure
2021-07-28 21:34 ` Vladislav Shpilevoy via Tarantool-patches
@ 2021-07-28 21:57 ` Cyrill Gorcunov via Tarantool-patches
2021-07-28 22:07 ` Vladislav Shpilevoy via Tarantool-patches
0 siblings, 1 reply; 11+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-07-28 21:57 UTC (permalink / raw)
To: Vladislav Shpilevoy; +Cc: tml
On Wed, Jul 28, 2021 at 11:34:43PM +0200, Vladislav Shpilevoy wrote:
> Hi! Thanks for the patch!
>
> On 26.07.2021 17:34, Cyrill Gorcunov via Tarantool-patches wrote:
> > It is needed to introduce ordered modifications of the
> > promote-related data in the next patch.
>
> Why do you need this new struct? While I like the new names
> (the old promote_greatest_term was super long), I don't see
> any motivation to extract the members into a new struct.
> It is still used only inside of the limbo as a member.
>
> I suppose this might be a leftover from the old versions of
> the patchset. Please, try to keep these members inside of
> the limbo like they were.
The key moment here is the locking we use for terms tracking:
note that the locking does not cover the whole limbo, so I think
it is better to keep the lock itself inside the structure it protects.
Cyrill
* Re: [Tarantool-patches] [PATCH v8 3/6] limbo: gather promote tracking into a separate structure
2021-07-28 21:57 ` Cyrill Gorcunov via Tarantool-patches
@ 2021-07-28 22:07 ` Vladislav Shpilevoy via Tarantool-patches
2021-07-29 6:40 ` Cyrill Gorcunov via Tarantool-patches
0 siblings, 1 reply; 11+ messages in thread
From: Vladislav Shpilevoy via Tarantool-patches @ 2021-07-28 22:07 UTC (permalink / raw)
To: Cyrill Gorcunov; +Cc: tml
On 28.07.2021 23:57, Cyrill Gorcunov wrote:
> On Wed, Jul 28, 2021 at 11:34:43PM +0200, Vladislav Shpilevoy wrote:
>> Hi! Thanks for the patch!
>>
>> On 26.07.2021 17:34, Cyrill Gorcunov via Tarantool-patches wrote:
> > > It is needed to introduce ordered modifications of the
> > > promote-related data in the next patch.
>>
>> Why do you need this new struct? While I like the new names
>> (the old promote_greatest_term was super long), I don't see
>> any motivation to extract the members into a new struct.
>> It is still used only inside of the limbo as a member.
>>
>> I suppose this might be a leftover from the old versions of
>> the patchset. Please, try to keep these members inside of
>> the limbo like they were.
>
> The key moment here is the locking we use for terms tracking:
> note that the locking does not cover the whole limbo, so I think
> it is better to keep the lock itself inside the structure it protects.
Yes, the struct it protects is the limbo. Part of. I see no sense
to extract the members you protect into a new struct just because
only they are protected.
Take cbus_endpoint for instance. It has a mutex, and it is locked
to push/pop new messages into a queue. It is taken to touch the
queue. But it does not mean you need struct cbus_endpoint_queue
for that. The struct you created is very artificial, it has no
purpose outside of the limbo and there can't be multiple of these
structs. Which makes it pointless. It only complicates the patch.
It has no methods of its own except 'create()', has no 'destroy()',
all its access goes through txn_limbo methods, and even the lock is
stored in the limbo, not in this struct, which contradicts
what you said above.
* Re: [Tarantool-patches] [PATCH v8 3/6] limbo: gather promote tracking into a separate structure
2021-07-28 22:07 ` Vladislav Shpilevoy via Tarantool-patches
@ 2021-07-29 6:40 ` Cyrill Gorcunov via Tarantool-patches
0 siblings, 0 replies; 11+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-07-29 6:40 UTC (permalink / raw)
To: Vladislav Shpilevoy; +Cc: tml
On Thu, Jul 29, 2021 at 12:07:44AM +0200, Vladislav Shpilevoy wrote:
> >
> > The key moment here is the locking we use for terms tracking:
> > note that the locking does not cover the whole limbo, so I think
> > it is better to keep the lock itself inside the structure it protects.
>
> Yes, the struct it protects is the limbo. Part of. I see no sense
> to extract the members you protect into a new struct just because
> only they are protected.
It doesn't protect the limbo itself: the entries are added to the queue
without taking a lock (in contrast with cbus), so I completely
disagree with your arguments. Still, I'm fine with moving these members
back to the limbo structure as you prefer. Will do.
Thread overview: 11+ messages
2021-07-26 15:34 [Tarantool-patches] [PATCH v8 0/6] limbo: implement packets filtering Cyrill Gorcunov via Tarantool-patches
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 1/6] latch: add latch_is_locked helper Cyrill Gorcunov via Tarantool-patches
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 2/6] say: introduce panic_on helper Cyrill Gorcunov via Tarantool-patches
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 3/6] limbo: gather promote tracking into a separate structure Cyrill Gorcunov via Tarantool-patches
2021-07-28 21:34 ` Vladislav Shpilevoy via Tarantool-patches
2021-07-28 21:57 ` Cyrill Gorcunov via Tarantool-patches
2021-07-28 22:07 ` Vladislav Shpilevoy via Tarantool-patches
2021-07-29 6:40 ` Cyrill Gorcunov via Tarantool-patches
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 4/6] limbo: order access to the limbo terms Cyrill Gorcunov via Tarantool-patches
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 5/6] limbo: filter incoming synchro requests Cyrill Gorcunov via Tarantool-patches
2021-07-26 15:34 ` [Tarantool-patches] [PATCH v8 6/6] test: replication -- add gh-6036-rollback-confirm Cyrill Gorcunov via Tarantool-patches