* [Tarantool-patches] [PATCH 0/5] Qsync: local rows fixes
@ 2020-07-02 23:40 Vladislav Shpilevoy
2020-07-02 23:40 ` [Tarantool-patches] [PATCH 1/5] [tosquash] replication: fix multiple rollbacks Vladislav Shpilevoy
` (4 more replies)
0 siblings, 5 replies; 14+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-02 23:40 UTC (permalink / raw)
To: tarantool-patches, sergepetrenko
The whole patchset is supposed to be squashed into the previous
commits, split into parts.
It is provided on top of the branch to simplify the review and
the review fixes.
Branch: http://github.com/tarantool/tarantool/tree/gh-4842-sync-replication
Issue: https://github.com/tarantool/tarantool/issues/4842
Vladislav Shpilevoy (5):
[tosquash] replication: fix multiple rollbacks
[tosquash] applier: remove unnecessary fiber name check
[tosquash] txn_limbo: fix release build
[tosquash] replication: rework how local transactions wait sync
[tosquash] replication: add test on local row in the end of sync txn
src/box/applier.cc | 1 -
src/box/txn.c | 18 ++--
src/box/txn_limbo.c | 86 +++++++++++-----
test/replication/qsync_basic.result | 142 +++++++++++++++++++++++++-
test/replication/qsync_basic.test.lua | 65 ++++++++++++
5 files changed, 275 insertions(+), 37 deletions(-)
--
2.21.1 (Apple Git-122.3)
^ permalink raw reply [flat|nested] 14+ messages in thread
* [Tarantool-patches] [PATCH 1/5] [tosquash] replication: fix multiple rollbacks
2020-07-02 23:40 [Tarantool-patches] [PATCH 0/5] Qsync: local rows fixes Vladislav Shpilevoy
@ 2020-07-02 23:40 ` Vladislav Shpilevoy
2020-07-05 9:34 ` Serge Petrenko
2020-07-02 23:40 ` [Tarantool-patches] [PATCH 2/5] [tosquash] applier: remove unnecessary fiber name check Vladislav Shpilevoy
` (3 subsequent siblings)
4 siblings, 1 reply; 14+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-02 23:40 UTC (permalink / raw)
To: tarantool-patches, sergepetrenko
The problem was that if several transactions time out in one
event loop iteration, they all will write a rollback. Moreover,
they will do that in a weird order, starting from the oldest
instead of the newest.
This patch makes the limbo write only one rollback at a time.
---
src/box/txn_limbo.c | 25 +++++++++++++++++++++++++
test/replication/qsync_basic.result | 2 +-
2 files changed, 26 insertions(+), 1 deletion(-)
diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 0402664cb..2cb687f4d 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -44,6 +44,13 @@ txn_limbo_create(struct txn_limbo *limbo)
limbo->got_rollback = false;
}
+static inline struct txn_limbo_entry *
+txn_limbo_first_entry(struct txn_limbo *limbo)
+{
+ return rlist_first_entry(&limbo->queue, struct txn_limbo_entry,
+ in_queue);
+}
+
struct txn_limbo_entry *
txn_limbo_append(struct txn_limbo *limbo, uint32_t id, struct txn *txn)
{
@@ -150,6 +157,24 @@ txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry)
bool timed_out = fiber_yield_timeout(txn_limbo_confirm_timeout(limbo));
fiber_set_cancellable(cancellable);
if (timed_out) {
+ assert(!txn_limbo_is_empty(limbo));
+ if (txn_limbo_first_entry(limbo) != entry) {
+			/*
+			 * If this is not the first entry in the
+			 * limbo, it is definitely not the first
+			 * timed out entry. And since it managed
+			 * to time out too, it means there is
+			 * currently another fiber writing a
+			 * rollback. Wait until it finishes and
+			 * wakes us up.
+			 */
+ bool cancellable = fiber_set_cancellable(false);
+ fiber_yield();
+ fiber_set_cancellable(cancellable);
+ assert(txn_limbo_entry_is_complete(entry));
+ goto complete;
+ }
+
txn_limbo_write_rollback(limbo, entry);
struct txn_limbo_entry *e, *tmp;
rlist_foreach_entry_safe_reverse(e, &limbo->queue,
diff --git a/test/replication/qsync_basic.result b/test/replication/qsync_basic.result
index cdecf00e8..32deb2ac3 100644
--- a/test/replication/qsync_basic.result
+++ b/test/replication/qsync_basic.result
@@ -272,7 +272,7 @@ box.cfg{replication_synchro_timeout = 0.001, replication_synchro_quorum = 3}
| ...
f = fiber.create(box.space.sync.replace, box.space.sync, {6}) s:replace{6}
| ---
- | - error: Quorum collection for a synchronous transaction is timed out
+ | - error: A rollback for a synchronous transaction is received
| ...
f:status()
| ---
--
2.21.1 (Apple Git-122.3)
^ permalink raw reply [flat|nested] 14+ messages in thread
* [Tarantool-patches] [PATCH 2/5] [tosquash] applier: remove unnecessary fiber name check
2020-07-02 23:40 [Tarantool-patches] [PATCH 0/5] Qsync: local rows fixes Vladislav Shpilevoy
2020-07-02 23:40 ` [Tarantool-patches] [PATCH 1/5] [tosquash] replication: fix multiple rollbacks Vladislav Shpilevoy
@ 2020-07-02 23:40 ` Vladislav Shpilevoy
2020-07-05 8:40 ` Serge Petrenko
2020-07-02 23:40 ` [Tarantool-patches] [PATCH 3/5] [tosquash] txn_limbo: fix release build Vladislav Shpilevoy
` (2 subsequent siblings)
4 siblings, 1 reply; 14+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-02 23:40 UTC (permalink / raw)
To: tarantool-patches, sergepetrenko
This was an artifact from the first version of the patch.
Of course it is not valid, because the final commit of this
transaction can be done by any fiber.
---
src/box/applier.cc | 1 -
1 file changed, 1 deletion(-)
diff --git a/src/box/applier.cc b/src/box/applier.cc
index 7e70211b7..9a9ec1dac 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -806,7 +806,6 @@ applier_txn_commit_cb(struct trigger *trigger, void *event)
(void) trigger;
struct txn *txn = (struct txn *)event;
assert(txn->fiber != NULL);
- assert(strncmp(txn->fiber->name, "applierw", 8) == 0);
/*
* Let the txn module free the transaction object. It is
* not needed for anything else.
--
2.21.1 (Apple Git-122.3)
^ permalink raw reply [flat|nested] 14+ messages in thread
* [Tarantool-patches] [PATCH 3/5] [tosquash] txn_limbo: fix release build
2020-07-02 23:40 [Tarantool-patches] [PATCH 0/5] Qsync: local rows fixes Vladislav Shpilevoy
2020-07-02 23:40 ` [Tarantool-patches] [PATCH 1/5] [tosquash] replication: fix multiple rollbacks Vladislav Shpilevoy
2020-07-02 23:40 ` [Tarantool-patches] [PATCH 2/5] [tosquash] applier: remove unnecessary fiber name check Vladislav Shpilevoy
@ 2020-07-02 23:40 ` Vladislav Shpilevoy
2020-07-05 8:41 ` Serge Petrenko
2020-07-02 23:40 ` [Tarantool-patches] [PATCH 4/5] [tosquash] replication: rework how local transactions wait sync Vladislav Shpilevoy
2020-07-02 23:40 ` [Tarantool-patches] [PATCH 5/5] [tosquash] replication: add test on local row in the end of sync txn Vladislav Shpilevoy
4 siblings, 1 reply; 14+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-02 23:40 UTC (permalink / raw)
To: tarantool-patches, sergepetrenko
---
src/box/txn_limbo.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 2cb687f4d..387cfd337 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -120,6 +120,7 @@ txn_limbo_assign_lsn(struct txn_limbo *limbo, struct txn_limbo_entry *entry,
assert(limbo->instance_id != REPLICA_ID_NIL);
assert(entry->lsn == -1);
assert(lsn > 0);
+ (void) limbo;
entry->lsn = lsn;
}
--
2.21.1 (Apple Git-122.3)
^ permalink raw reply [flat|nested] 14+ messages in thread
* [Tarantool-patches] [PATCH 4/5] [tosquash] replication: rework how local transactions wait sync
2020-07-02 23:40 [Tarantool-patches] [PATCH 0/5] Qsync: local rows fixes Vladislav Shpilevoy
` (2 preceding siblings ...)
2020-07-02 23:40 ` [Tarantool-patches] [PATCH 3/5] [tosquash] txn_limbo: fix release build Vladislav Shpilevoy
@ 2020-07-02 23:40 ` Vladislav Shpilevoy
2020-07-05 9:04 ` Serge Petrenko
2020-07-02 23:40 ` [Tarantool-patches] [PATCH 5/5] [tosquash] replication: add test on local row in the end of sync txn Vladislav Shpilevoy
4 siblings, 1 reply; 14+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-02 23:40 UTC (permalink / raw)
To: tarantool-patches, sergepetrenko
There was a bug in how async transactions were blocked by a
non-empty limbo. This concerned fully local transactions. The
problem is that they don't have an LSN in the needed vclock
component - it is always 0. It means that their limbo entry
can't get a valid LSN by any means. Even a copy of the previous
sync transaction's LSN won't work, because the latter may still
not be written to WAL.
This patch makes async transactions always have lsn -1 in their
limbo entry, because the LSN does not matter anyway. It is not
needed to collect ACKs, nor to propagate the limbo's vclock.
Now even a fully local transaction can be blocked by a pending
sync transaction.
Note, this does not cover the case when the transaction is not
fully local, and only its last row is local.
---
src/box/txn.c | 18 ++--
src/box/txn_limbo.c | 60 +++++++------
test/replication/qsync_basic.result | 120 ++++++++++++++++++++++++++
test/replication/qsync_basic.test.lua | 45 ++++++++++
4 files changed, 208 insertions(+), 35 deletions(-)
diff --git a/src/box/txn.c b/src/box/txn.c
index 6c333cbed..fe0591197 100644
--- a/src/box/txn.c
+++ b/src/box/txn.c
@@ -688,15 +688,15 @@ txn_commit_async(struct txn *txn)
/* See txn_commit(). */
uint32_t origin_id = req->rows[0]->replica_id;
- int64_t lsn = req->rows[txn->n_applier_rows - 1]->lsn;
limbo_entry = txn_limbo_append(&txn_limbo, origin_id, txn);
if (limbo_entry == NULL) {
txn_rollback(txn);
return -1;
}
- assert(lsn > 0);
- txn_limbo_assign_lsn(&txn_limbo, limbo_entry, lsn);
-
+ if (txn_has_flag(txn, TXN_WAIT_ACK)) {
+ int64_t lsn = req->rows[txn->n_applier_rows - 1]->lsn;
+ txn_limbo_assign_lsn(&txn_limbo, limbo_entry, lsn);
+ }
/*
* Set a trigger to abort waiting for confirm on
* WAL write failure.
@@ -779,10 +779,12 @@ txn_commit(struct txn *txn)
return -1;
}
if (is_sync) {
- int64_t lsn = req->rows[req->n_rows - 1]->lsn;
- txn_limbo_assign_lsn(&txn_limbo, limbo_entry, lsn);
- /* Local WAL write is a first 'ACK'. */
- txn_limbo_ack(&txn_limbo, txn_limbo.instance_id, lsn);
+ if (txn_has_flag(txn, TXN_WAIT_ACK)) {
+ int64_t lsn = req->rows[req->n_rows - 1]->lsn;
+ txn_limbo_assign_lsn(&txn_limbo, limbo_entry, lsn);
+ /* Local WAL write is a first 'ACK'. */
+ txn_limbo_ack(&txn_limbo, txn_limbo.instance_id, lsn);
+ }
if (txn_limbo_wait_complete(&txn_limbo, limbo_entry) < 0) {
txn_free(txn);
return -1;
diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 387cfd337..44a0c7273 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -120,6 +120,7 @@ txn_limbo_assign_lsn(struct txn_limbo *limbo, struct txn_limbo_entry *entry,
assert(limbo->instance_id != REPLICA_ID_NIL);
assert(entry->lsn == -1);
assert(lsn > 0);
+ assert(txn_has_flag(entry->txn, TXN_WAIT_ACK));
(void) limbo;
entry->lsn = lsn;
}
@@ -129,6 +130,12 @@ txn_limbo_check_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry)
{
if (txn_limbo_entry_is_complete(entry))
return true;
+ /*
+ * Async transaction can't complete itself. It is always
+ * completed by a previous sync transaction.
+ */
+ if (!txn_has_flag(entry->txn, TXN_WAIT_ACK))
+ return false;
struct vclock_iterator iter;
vclock_iterator_init(&iter, &limbo->vclock);
int ack_count = 0;
@@ -142,14 +149,13 @@ txn_limbo_check_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry)
}
static int
-txn_limbo_write_rollback(struct txn_limbo *limbo,
- struct txn_limbo_entry *entry);
+txn_limbo_write_rollback(struct txn_limbo *limbo, int64_t lsn);
int
txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry)
{
struct txn *txn = entry->txn;
- assert(entry->lsn > 0);
+ assert(entry->lsn > 0 || !txn_has_flag(entry->txn, TXN_WAIT_ACK));
assert(!txn_has_flag(txn, TXN_IS_DONE));
assert(txn_has_flag(txn, TXN_WAIT_SYNC));
if (txn_limbo_check_complete(limbo, entry))
@@ -176,7 +182,7 @@ txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry)
goto complete;
}
- txn_limbo_write_rollback(limbo, entry);
+ txn_limbo_write_rollback(limbo, entry->lsn);
struct txn_limbo_entry *e, *tmp;
rlist_foreach_entry_safe_reverse(e, &limbo->queue,
in_queue, tmp) {
@@ -210,10 +216,11 @@ complete:
}
static int
-txn_limbo_write_confirm_rollback(struct txn_limbo *limbo,
- struct txn_limbo_entry *entry,
+txn_limbo_write_confirm_rollback(struct txn_limbo *limbo, int64_t lsn,
bool is_confirm)
{
+ assert(lsn > 0);
+
struct xrow_header row;
struct request request = {
.header = &row,
@@ -221,14 +228,13 @@ txn_limbo_write_confirm_rollback(struct txn_limbo *limbo,
int res = 0;
if (is_confirm) {
- res = xrow_encode_confirm(&row, limbo->instance_id, entry->lsn);
+ res = xrow_encode_confirm(&row, limbo->instance_id, lsn);
} else {
/*
* This entry is the first to be rolled back, so
- * the last "safe" lsn is entry->lsn - 1.
+ * the last "safe" lsn is lsn - 1.
*/
- res = xrow_encode_rollback(&row, limbo->instance_id,
- entry->lsn - 1);
+ res = xrow_encode_rollback(&row, limbo->instance_id, lsn - 1);
}
if (res == -1)
return -1;
@@ -260,10 +266,9 @@ rollback:
* transactions waiting for confirmation may be finished.
*/
static int
-txn_limbo_write_confirm(struct txn_limbo *limbo,
- struct txn_limbo_entry *entry)
+txn_limbo_write_confirm(struct txn_limbo *limbo, int64_t lsn)
{
- return txn_limbo_write_confirm_rollback(limbo, entry, true);
+ return txn_limbo_write_confirm_rollback(limbo, lsn, true);
}
void
@@ -300,14 +305,13 @@ txn_limbo_read_confirm(struct txn_limbo *limbo, int64_t lsn)
/**
* Write a rollback message to WAL. After it's written
- * all the tarnsactions following the current one and waiting
+ * all the transactions following the current one and waiting
* for confirmation must be rolled back.
*/
static int
-txn_limbo_write_rollback(struct txn_limbo *limbo,
- struct txn_limbo_entry *entry)
+txn_limbo_write_rollback(struct txn_limbo *limbo, int64_t lsn)
{
- return txn_limbo_write_confirm_rollback(limbo, entry, false);
+ return txn_limbo_write_confirm_rollback(limbo, lsn, false);
}
void
@@ -316,7 +320,7 @@ txn_limbo_read_rollback(struct txn_limbo *limbo, int64_t lsn)
assert(limbo->instance_id != REPLICA_ID_NIL);
struct txn_limbo_entry *e, *tmp;
rlist_foreach_entry_safe_reverse(e, &limbo->queue, in_queue, tmp) {
- if (e->lsn <= lsn)
+ if (e->lsn <= lsn && txn_has_flag(e->txn, TXN_WAIT_ACK))
break;
e->is_rollback = true;
txn_limbo_pop(limbo, e);
@@ -350,31 +354,33 @@ txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn)
int64_t prev_lsn = vclock_get(&limbo->vclock, replica_id);
vclock_follow(&limbo->vclock, replica_id, lsn);
struct txn_limbo_entry *e, *last_quorum = NULL;
+ int64_t confirm_lsn = -1;
rlist_foreach_entry(e, &limbo->queue, in_queue) {
+ assert(e->ack_count <= VCLOCK_MAX);
if (e->lsn > lsn)
break;
- if (e->lsn <= prev_lsn)
- continue;
- assert(e->ack_count <= VCLOCK_MAX);
/*
* Sync transactions need to collect acks. Async
* transactions are automatically committed right
* after all the previous sync transactions are.
*/
- if (txn_has_flag(e->txn, TXN_WAIT_ACK)) {
- if (++e->ack_count < replication_synchro_quorum)
- continue;
- } else {
- assert(txn_has_flag(e->txn, TXN_WAIT_SYNC));
+ if (!txn_has_flag(e->txn, TXN_WAIT_ACK)) {
+ assert(e->lsn == -1);
if (last_quorum == NULL)
continue;
+ } else if (e->lsn <= prev_lsn) {
+ continue;
+ } else if (++e->ack_count < replication_synchro_quorum) {
+ continue;
+ } else {
+ confirm_lsn = e->lsn;
}
e->is_commit = true;
last_quorum = e;
}
if (last_quorum == NULL)
return;
- if (txn_limbo_write_confirm(limbo, last_quorum) != 0) {
+ if (txn_limbo_write_confirm(limbo, confirm_lsn) != 0) {
// TODO: what to do here?.
// We already failed writing the CONFIRM
// message. What are the chances we'll be
diff --git a/test/replication/qsync_basic.result b/test/replication/qsync_basic.result
index 32deb2ac3..339fc0e33 100644
--- a/test/replication/qsync_basic.result
+++ b/test/replication/qsync_basic.result
@@ -299,6 +299,126 @@ box.space.sync:select{6}
| - []
| ...
+--
+-- Fully local async transaction also waits for existing sync txn.
+--
+test_run:switch('default')
+ | ---
+ | - true
+ | ...
+box.cfg{replication_synchro_timeout = 1000, replication_synchro_quorum = 2}
+ | ---
+ | ...
+_ = box.schema.create_space('locallocal', {is_local = true})
+ | ---
+ | ...
+_ = _:create_index('pk')
+ | ---
+ | ...
+-- Propagate local vclock to some insane value to ensure it won't
+-- affect anything.
+box.begin() for i = 1, 2000 do box.space.locallocal:replace{1} end box.commit()
+ | ---
+ | ...
+do \
+ f1 = fiber.create(box.space.sync.replace, box.space.sync, {8}) \
+ f2 = fiber.create(box.space.locallocal.replace, box.space.locallocal, {8}) \
+ box.space.test:replace{8} \
+end
+ | ---
+ | ...
+f1:status()
+ | ---
+ | - dead
+ | ...
+f2:status()
+ | ---
+ | - dead
+ | ...
+box.space.sync:select{8}
+ | ---
+ | - - [8]
+ | ...
+box.space.locallocal:select{8}
+ | ---
+ | - - [8]
+ | ...
+box.space.test:select{8}
+ | ---
+ | - - [8]
+ | ...
+
+test_run:switch('replica')
+ | ---
+ | - true
+ | ...
+box.space.sync:select{8}
+ | ---
+ | - - [8]
+ | ...
+box.space.locallocal:select{8}
+ | ---
+ | - []
+ | ...
+box.space.test:select{8}
+ | ---
+ | - - [8]
+ | ...
+
+-- Ensure sync rollback will affect all pending fully local async
+-- transactions too.
+test_run:switch('default')
+ | ---
+ | - true
+ | ...
+box.cfg{replication_synchro_timeout = 0.001, replication_synchro_quorum = 3}
+ | ---
+ | ...
+do \
+ f1 = fiber.create(box.space.sync.replace, box.space.sync, {9}) \
+ f2 = fiber.create(box.space.locallocal.replace, box.space.locallocal, {9}) \
+ box.space.test:replace{9} \
+end
+ | ---
+ | - error: A rollback for a synchronous transaction is received
+ | ...
+f1:status()
+ | ---
+ | - dead
+ | ...
+f2:status()
+ | ---
+ | - dead
+ | ...
+box.space.sync:select{9}
+ | ---
+ | - []
+ | ...
+box.space.locallocal:select{9}
+ | ---
+ | - []
+ | ...
+box.space.test:select{9}
+ | ---
+ | - []
+ | ...
+test_run:switch('replica')
+ | ---
+ | - true
+ | ...
+box.space.sync:select{9}
+ | ---
+ | - []
+ | ...
+box.space.locallocal:select{9}
+ | ---
+ | - []
+ | ...
+box.space.test:select{9}
+ | ---
+ | - []
+ | ...
+
--
-- gh-5123: quorum 1 still should write CONFIRM.
--
diff --git a/test/replication/qsync_basic.test.lua b/test/replication/qsync_basic.test.lua
index 361f22bc3..6e40131bf 100644
--- a/test/replication/qsync_basic.test.lua
+++ b/test/replication/qsync_basic.test.lua
@@ -118,6 +118,51 @@ test_run:switch('replica')
box.space.test:select{6}
box.space.sync:select{6}
+--
+-- Fully local async transaction also waits for existing sync txn.
+--
+test_run:switch('default')
+box.cfg{replication_synchro_timeout = 1000, replication_synchro_quorum = 2}
+_ = box.schema.create_space('locallocal', {is_local = true})
+_ = _:create_index('pk')
+-- Propagate local vclock to some insane value to ensure it won't
+-- affect anything.
+box.begin() for i = 1, 2000 do box.space.locallocal:replace{1} end box.commit()
+do \
+ f1 = fiber.create(box.space.sync.replace, box.space.sync, {8}) \
+ f2 = fiber.create(box.space.locallocal.replace, box.space.locallocal, {8}) \
+ box.space.test:replace{8} \
+end
+f1:status()
+f2:status()
+box.space.sync:select{8}
+box.space.locallocal:select{8}
+box.space.test:select{8}
+
+test_run:switch('replica')
+box.space.sync:select{8}
+box.space.locallocal:select{8}
+box.space.test:select{8}
+
+-- Ensure sync rollback will affect all pending fully local async
+-- transactions too.
+test_run:switch('default')
+box.cfg{replication_synchro_timeout = 0.001, replication_synchro_quorum = 3}
+do \
+ f1 = fiber.create(box.space.sync.replace, box.space.sync, {9}) \
+ f2 = fiber.create(box.space.locallocal.replace, box.space.locallocal, {9}) \
+ box.space.test:replace{9} \
+end
+f1:status()
+f2:status()
+box.space.sync:select{9}
+box.space.locallocal:select{9}
+box.space.test:select{9}
+test_run:switch('replica')
+box.space.sync:select{9}
+box.space.locallocal:select{9}
+box.space.test:select{9}
+
--
-- gh-5123: quorum 1 still should write CONFIRM.
--
--
2.21.1 (Apple Git-122.3)
^ permalink raw reply [flat|nested] 14+ messages in thread
* [Tarantool-patches] [PATCH 5/5] [tosquash] replication: add test on local row in the end of sync txn
2020-07-02 23:40 [Tarantool-patches] [PATCH 0/5] Qsync: local rows fixes Vladislav Shpilevoy
` (3 preceding siblings ...)
2020-07-02 23:40 ` [Tarantool-patches] [PATCH 4/5] [tosquash] replication: rework how local transactions wait sync Vladislav Shpilevoy
@ 2020-07-02 23:40 ` Vladislav Shpilevoy
2020-07-05 9:11 ` Serge Petrenko
4 siblings, 1 reply; 14+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-02 23:40 UTC (permalink / raw)
To: tarantool-patches, sergepetrenko
Currently disabled, because it should start working without
any changes once #4928 is fixed.
---
test/replication/qsync_basic.result | 20 ++++++++++++++++++++
test/replication/qsync_basic.test.lua | 20 ++++++++++++++++++++
2 files changed, 40 insertions(+)
diff --git a/test/replication/qsync_basic.result b/test/replication/qsync_basic.result
index 339fc0e33..83ff7d3d1 100644
--- a/test/replication/qsync_basic.result
+++ b/test/replication/qsync_basic.result
@@ -419,6 +419,26 @@ box.space.test:select{9}
| - []
| ...
+--
+-- gh-4928: test that a sync transaction works fine with local
+-- rows in the end.
+--
+
+-- test_run:switch('default')
+-- box.cfg{replication_synchro_timeout = 1000, replication_synchro_quorum = 2}
+-- do \
+-- box.begin() \
+-- box.space.sync:replace{10} \
+-- box.space.locallocal:replace({10}) \
+-- box.commit() \
+-- end
+-- box.space.sync:select{10}
+-- box.space.locallocal:select{10}
+
+-- test_run:switch('replica')
+-- box.space.sync:select{10}
+-- box.space.locallocal:select{10}
+
--
-- gh-5123: quorum 1 still should write CONFIRM.
--
diff --git a/test/replication/qsync_basic.test.lua b/test/replication/qsync_basic.test.lua
index 6e40131bf..74083a0b9 100644
--- a/test/replication/qsync_basic.test.lua
+++ b/test/replication/qsync_basic.test.lua
@@ -163,6 +163,26 @@ box.space.sync:select{9}
box.space.locallocal:select{9}
box.space.test:select{9}
+--
+-- gh-4928: test that a sync transaction works fine with local
+-- rows in the end.
+--
+
+-- test_run:switch('default')
+-- box.cfg{replication_synchro_timeout = 1000, replication_synchro_quorum = 2}
+-- do \
+-- box.begin() \
+-- box.space.sync:replace{10} \
+-- box.space.locallocal:replace({10}) \
+-- box.commit() \
+-- end
+-- box.space.sync:select{10}
+-- box.space.locallocal:select{10}
+
+-- test_run:switch('replica')
+-- box.space.sync:select{10}
+-- box.space.locallocal:select{10}
+
--
-- gh-5123: quorum 1 still should write CONFIRM.
--
--
2.21.1 (Apple Git-122.3)
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Tarantool-patches] [PATCH 2/5] [tosquash] applier: remove unnecessary fiber name check
2020-07-02 23:40 ` [Tarantool-patches] [PATCH 2/5] [tosquash] applier: remove unnecessary fiber name check Vladislav Shpilevoy
@ 2020-07-05 8:40 ` Serge Petrenko
0 siblings, 0 replies; 14+ messages in thread
From: Serge Petrenko @ 2020-07-05 8:40 UTC (permalink / raw)
To: Vladislav Shpilevoy, tarantool-patches
03.07.2020 02:40, Vladislav Shpilevoy пишет:
> This was an artifact from the first version of the patch.
> Of course it is not valid, because the final commit of this
> transaction can be done by any fiber.
Hi! Thanks for the patch! LGTM.
> ---
> src/box/applier.cc | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/src/box/applier.cc b/src/box/applier.cc
> index 7e70211b7..9a9ec1dac 100644
> --- a/src/box/applier.cc
> +++ b/src/box/applier.cc
> @@ -806,7 +806,6 @@ applier_txn_commit_cb(struct trigger *trigger, void *event)
> (void) trigger;
> struct txn *txn = (struct txn *)event;
> assert(txn->fiber != NULL);
> - assert(strncmp(txn->fiber->name, "applierw", 8) == 0);
> /*
> * Let the txn module free the transaction object. It is
> * not needed for anything else.
--
Serge Petrenko
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Tarantool-patches] [PATCH 3/5] [tosquash] txn_limbo: fix release build
2020-07-02 23:40 ` [Tarantool-patches] [PATCH 3/5] [tosquash] txn_limbo: fix release build Vladislav Shpilevoy
@ 2020-07-05 8:41 ` Serge Petrenko
0 siblings, 0 replies; 14+ messages in thread
From: Serge Petrenko @ 2020-07-05 8:41 UTC (permalink / raw)
To: Vladislav Shpilevoy, tarantool-patches
03.07.2020 02:40, Vladislav Shpilevoy пишет:
> ---
> src/box/txn_limbo.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
> index 2cb687f4d..387cfd337 100644
> --- a/src/box/txn_limbo.c
> +++ b/src/box/txn_limbo.c
> @@ -120,6 +120,7 @@ txn_limbo_assign_lsn(struct txn_limbo *limbo, struct txn_limbo_entry *entry,
> assert(limbo->instance_id != REPLICA_ID_NIL);
> assert(entry->lsn == -1);
> assert(lsn > 0);
> + (void) limbo;
> entry->lsn = lsn;
> }
>
LGTM.
--
Serge Petrenko
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Tarantool-patches] [PATCH 4/5] [tosquash] replication: rework how local transactions wait sync
2020-07-02 23:40 ` [Tarantool-patches] [PATCH 4/5] [tosquash] replication: rework how local transactions wait sync Vladislav Shpilevoy
@ 2020-07-05 9:04 ` Serge Petrenko
2020-07-05 15:13 ` Vladislav Shpilevoy
0 siblings, 1 reply; 14+ messages in thread
From: Serge Petrenko @ 2020-07-05 9:04 UTC (permalink / raw)
To: Vladislav Shpilevoy, tarantool-patches
03.07.2020 02:40, Vladislav Shpilevoy пишет:
> There was a bug in how async transactions were blocked by a
> non-empty limbo. This concerned fully local transactions. The
> problem is that they don't have an LSN in the needed vclock
> component - it is always 0. It means that their limbo entry
> can't get a valid LSN by any means. Even a copy of the previous
> sync transaction's LSN won't work, because the latter may still
> not be written to WAL.
>
> This patch makes async transactions always have lsn -1 in their
> limbo entry, because the LSN does not matter anyway. It is not
> needed to collect ACKs, nor to propagate the limbo's vclock.
>
> Now even a fully local transaction can be blocked by a pending
> sync transaction.
>
> Note, this does not cover the case when the transaction is not
> fully local, and only its last row is local.
Hi! Thanks for the patch! Looks good, with one comment below.
> ---
> src/box/txn.c | 18 ++--
> src/box/txn_limbo.c | 60 +++++++------
> test/replication/qsync_basic.result | 120 ++++++++++++++++++++++++++
> test/replication/qsync_basic.test.lua | 45 ++++++++++
> 4 files changed, 208 insertions(+), 35 deletions(-)
>
> diff --git a/src/box/txn.c b/src/box/txn.c
> index 6c333cbed..fe0591197 100644
> --- a/src/box/txn.c
> +++ b/src/box/txn.c
> @@ -688,15 +688,15 @@ txn_commit_async(struct txn *txn)
>
> /* See txn_commit(). */
> uint32_t origin_id = req->rows[0]->replica_id;
> - int64_t lsn = req->rows[txn->n_applier_rows - 1]->lsn;
> limbo_entry = txn_limbo_append(&txn_limbo, origin_id, txn);
> if (limbo_entry == NULL) {
> txn_rollback(txn);
> return -1;
> }
> - assert(lsn > 0);
> - txn_limbo_assign_lsn(&txn_limbo, limbo_entry, lsn);
> -
> + if (txn_has_flag(txn, TXN_WAIT_ACK)) {
> + int64_t lsn = req->rows[txn->n_applier_rows - 1]->lsn;
> + txn_limbo_assign_lsn(&txn_limbo, limbo_entry, lsn);
> + }
> /*
> * Set a trigger to abort waiting for confirm on
> * WAL write failure.
> @@ -779,10 +779,12 @@ txn_commit(struct txn *txn)
> return -1;
> }
> if (is_sync) {
> - int64_t lsn = req->rows[req->n_rows - 1]->lsn;
> - txn_limbo_assign_lsn(&txn_limbo, limbo_entry, lsn);
> - /* Local WAL write is a first 'ACK'. */
> - txn_limbo_ack(&txn_limbo, txn_limbo.instance_id, lsn);
> + if (txn_has_flag(txn, TXN_WAIT_ACK)) {
> + int64_t lsn = req->rows[req->n_rows - 1]->lsn;
> + txn_limbo_assign_lsn(&txn_limbo, limbo_entry, lsn);
> + /* Local WAL write is a first 'ACK'. */
> + txn_limbo_ack(&txn_limbo, txn_limbo.instance_id, lsn);
> + }
> if (txn_limbo_wait_complete(&txn_limbo, limbo_entry) < 0) {
> txn_free(txn);
> return -1;
> diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
> index 387cfd337..44a0c7273 100644
> --- a/src/box/txn_limbo.c
> +++ b/src/box/txn_limbo.c
> @@ -120,6 +120,7 @@ txn_limbo_assign_lsn(struct txn_limbo *limbo, struct txn_limbo_entry *entry,
> assert(limbo->instance_id != REPLICA_ID_NIL);
> assert(entry->lsn == -1);
> assert(lsn > 0);
> + assert(txn_has_flag(entry->txn, TXN_WAIT_ACK));
> (void) limbo;
> entry->lsn = lsn;
> }
> @@ -129,6 +130,12 @@ txn_limbo_check_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry)
> {
> if (txn_limbo_entry_is_complete(entry))
> return true;
> + /*
> + * Async transaction can't complete itself. It is always
> + * completed by a previous sync transaction.
> + */
> + if (!txn_has_flag(entry->txn, TXN_WAIT_ACK))
> + return false;
> struct vclock_iterator iter;
> vclock_iterator_init(&iter, &limbo->vclock);
> int ack_count = 0;
> @@ -142,14 +149,13 @@ txn_limbo_check_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry)
> }
>
> static int
> -txn_limbo_write_rollback(struct txn_limbo *limbo,
> - struct txn_limbo_entry *entry);
> +txn_limbo_write_rollback(struct txn_limbo *limbo, int64_t lsn);
>
> int
> txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry)
> {
> struct txn *txn = entry->txn;
> - assert(entry->lsn > 0);
> + assert(entry->lsn > 0 || !txn_has_flag(entry->txn, TXN_WAIT_ACK));
> assert(!txn_has_flag(txn, TXN_IS_DONE));
> assert(txn_has_flag(txn, TXN_WAIT_SYNC));
> if (txn_limbo_check_complete(limbo, entry))
> @@ -176,7 +182,7 @@ txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry)
> goto complete;
> }
>
> - txn_limbo_write_rollback(limbo, entry);
> + txn_limbo_write_rollback(limbo, entry->lsn);
> struct txn_limbo_entry *e, *tmp;
> rlist_foreach_entry_safe_reverse(e, &limbo->queue,
> in_queue, tmp) {
> @@ -210,10 +216,11 @@ complete:
> }
>
> static int
> -txn_limbo_write_confirm_rollback(struct txn_limbo *limbo,
> - struct txn_limbo_entry *entry,
> +txn_limbo_write_confirm_rollback(struct txn_limbo *limbo, int64_t lsn,
> bool is_confirm)
> {
> + assert(lsn > 0);
> +
> struct xrow_header row;
> struct request request = {
> .header = &row,
> @@ -221,14 +228,13 @@ txn_limbo_write_confirm_rollback(struct txn_limbo *limbo,
>
> int res = 0;
> if (is_confirm) {
> - res = xrow_encode_confirm(&row, limbo->instance_id, entry->lsn);
> + res = xrow_encode_confirm(&row, limbo->instance_id, lsn);
> } else {
> /*
> * This entry is the first to be rolled back, so
> - * the last "safe" lsn is entry->lsn - 1.
> + * the last "safe" lsn is lsn - 1.
> */
> - res = xrow_encode_rollback(&row, limbo->instance_id,
> - entry->lsn - 1);
> + res = xrow_encode_rollback(&row, limbo->instance_id, lsn - 1);
> }
> if (res == -1)
> return -1;
> @@ -260,10 +266,9 @@ rollback:
> * transactions waiting for confirmation may be finished.
> */
> static int
> -txn_limbo_write_confirm(struct txn_limbo *limbo,
> - struct txn_limbo_entry *entry)
> +txn_limbo_write_confirm(struct txn_limbo *limbo, int64_t lsn)
> {
> - return txn_limbo_write_confirm_rollback(limbo, entry, true);
> + return txn_limbo_write_confirm_rollback(limbo, lsn, true);
> }
>
> void
> @@ -300,14 +305,13 @@ txn_limbo_read_confirm(struct txn_limbo *limbo, int64_t lsn)
>
> /**
> * Write a rollback message to WAL. After it's written
> - * all the tarnsactions following the current one and waiting
> + * all the transactions following the current one and waiting
> * for confirmation must be rolled back.
> */
> static int
> -txn_limbo_write_rollback(struct txn_limbo *limbo,
> - struct txn_limbo_entry *entry)
> +txn_limbo_write_rollback(struct txn_limbo *limbo, int64_t lsn)
> {
> - return txn_limbo_write_confirm_rollback(limbo, entry, false);
> + return txn_limbo_write_confirm_rollback(limbo, lsn, false);
> }
>
> void
> @@ -316,7 +320,7 @@ txn_limbo_read_rollback(struct txn_limbo *limbo, int64_t lsn)
> assert(limbo->instance_id != REPLICA_ID_NIL);
> struct txn_limbo_entry *e, *tmp;
> rlist_foreach_entry_safe_reverse(e, &limbo->queue, in_queue, tmp) {
> - if (e->lsn <= lsn)
> + if (e->lsn <= lsn && txn_has_flag(e->txn, TXN_WAIT_ACK))
> break;
Are you rolling back the async transactions that are before the last sync
transaction to be rolled back? Why?
Shouldn't this condition stay the same?
> e->is_rollback = true;
> txn_limbo_pop(limbo, e);
> @@ -350,31 +354,33 @@ txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn)
> int64_t prev_lsn = vclock_get(&limbo->vclock, replica_id);
> vclock_follow(&limbo->vclock, replica_id, lsn);
> struct txn_limbo_entry *e, *last_quorum = NULL;
> + int64_t confirm_lsn = -1;
> rlist_foreach_entry(e, &limbo->queue, in_queue) {
> + assert(e->ack_count <= VCLOCK_MAX);
> if (e->lsn > lsn)
> break;
> - if (e->lsn <= prev_lsn)
> - continue;
> - assert(e->ack_count <= VCLOCK_MAX);
> /*
> * Sync transactions need to collect acks. Async
> * transactions are automatically committed right
> * after all the previous sync transactions are.
> */
> - if (txn_has_flag(e->txn, TXN_WAIT_ACK)) {
> - if (++e->ack_count < replication_synchro_quorum)
> - continue;
> - } else {
> - assert(txn_has_flag(e->txn, TXN_WAIT_SYNC));
> + if (!txn_has_flag(e->txn, TXN_WAIT_ACK)) {
> + assert(e->lsn == -1);
> if (last_quorum == NULL)
> continue;
> + } else if (e->lsn <= prev_lsn) {
> + continue;
> + } else if (++e->ack_count < replication_synchro_quorum) {
> + continue;
> + } else {
> + confirm_lsn = e->lsn;
> }
> e->is_commit = true;
> last_quorum = e;
> }
> if (last_quorum == NULL)
> return;
> - if (txn_limbo_write_confirm(limbo, last_quorum) != 0) {
> + if (txn_limbo_write_confirm(limbo, confirm_lsn) != 0) {
> // TODO: what to do here?.
> // We already failed writing the CONFIRM
> // message. What are the chances we'll be
> diff --git a/test/replication/qsync_basic.result b/test/replication/qsync_basic.result
> index 32deb2ac3..339fc0e33 100644
> --- a/test/replication/qsync_basic.result
> +++ b/test/replication/qsync_basic.result
> @@ -299,6 +299,126 @@ box.space.sync:select{6}
> | - []
> | ...
>
> +--
> +-- Fully local async transaction also waits for existing sync txn.
> +--
> +test_run:switch('default')
> + | ---
> + | - true
> + | ...
> +box.cfg{replication_synchro_timeout = 1000, replication_synchro_quorum = 2}
> + | ---
> + | ...
> +_ = box.schema.create_space('locallocal', {is_local = true})
> + | ---
> + | ...
> +_ = _:create_index('pk')
> + | ---
> + | ...
> +-- Propagate local vclock to some insane value to ensure it won't
> +-- affect anything.
> +box.begin() for i = 1, 2000 do box.space.locallocal:replace{1} end box.commit()
> + | ---
> + | ...
> +do \
> + f1 = fiber.create(box.space.sync.replace, box.space.sync, {8}) \
> + f2 = fiber.create(box.space.locallocal.replace, box.space.locallocal, {8}) \
> + box.space.test:replace{8} \
> +end
> + | ---
> + | ...
> +f1:status()
> + | ---
> + | - dead
> + | ...
> +f2:status()
> + | ---
> + | - dead
> + | ...
> +box.space.sync:select{8}
> + | ---
> + | - - [8]
> + | ...
> +box.space.locallocal:select{8}
> + | ---
> + | - - [8]
> + | ...
> +box.space.test:select{8}
> + | ---
> + | - - [8]
> + | ...
> +
> +test_run:switch('replica')
> + | ---
> + | - true
> + | ...
> +box.space.sync:select{8}
> + | ---
> + | - - [8]
> + | ...
> +box.space.locallocal:select{8}
> + | ---
> + | - []
> + | ...
> +box.space.test:select{8}
> + | ---
> + | - - [8]
> + | ...
> +
> +-- Ensure sync rollback will affect all pending fully local async
> +-- transactions too.
> +test_run:switch('default')
> + | ---
> + | - true
> + | ...
> +box.cfg{replication_synchro_timeout = 0.001, replication_synchro_quorum = 3}
> + | ---
> + | ...
> +do \
> + f1 = fiber.create(box.space.sync.replace, box.space.sync, {9}) \
> + f2 = fiber.create(box.space.locallocal.replace, box.space.locallocal, {9}) \
> + box.space.test:replace{9} \
> +end
> + | ---
> + | - error: A rollback for a synchronous transaction is received
> + | ...
> +f1:status()
> + | ---
> + | - dead
> + | ...
> +f2:status()
> + | ---
> + | - dead
> + | ...
> +box.space.sync:select{9}
> + | ---
> + | - []
> + | ...
> +box.space.locallocal:select{9}
> + | ---
> + | - []
> + | ...
> +box.space.test:select{9}
> + | ---
> + | - []
> + | ...
> +test_run:switch('replica')
> + | ---
> + | - true
> + | ...
> +box.space.sync:select{9}
> + | ---
> + | - []
> + | ...
> +box.space.locallocal:select{9}
> + | ---
> + | - []
> + | ...
> +box.space.test:select{9}
> + | ---
> + | - []
> + | ...
> +
> --
> -- gh-5123: quorum 1 still should write CONFIRM.
> --
> diff --git a/test/replication/qsync_basic.test.lua b/test/replication/qsync_basic.test.lua
> index 361f22bc3..6e40131bf 100644
> --- a/test/replication/qsync_basic.test.lua
> +++ b/test/replication/qsync_basic.test.lua
> @@ -118,6 +118,51 @@ test_run:switch('replica')
> box.space.test:select{6}
> box.space.sync:select{6}
>
> +--
> +-- Fully local async transaction also waits for existing sync txn.
> +--
> +test_run:switch('default')
> +box.cfg{replication_synchro_timeout = 1000, replication_synchro_quorum = 2}
> +_ = box.schema.create_space('locallocal', {is_local = true})
> +_ = _:create_index('pk')
> +-- Propagate local vclock to some insane value to ensure it won't
> +-- affect anything.
> +box.begin() for i = 1, 2000 do box.space.locallocal:replace{1} end box.commit()
> +do \
> + f1 = fiber.create(box.space.sync.replace, box.space.sync, {8}) \
> + f2 = fiber.create(box.space.locallocal.replace, box.space.locallocal, {8}) \
> + box.space.test:replace{8} \
> +end
> +f1:status()
> +f2:status()
> +box.space.sync:select{8}
> +box.space.locallocal:select{8}
> +box.space.test:select{8}
> +
> +test_run:switch('replica')
> +box.space.sync:select{8}
> +box.space.locallocal:select{8}
> +box.space.test:select{8}
> +
> +-- Ensure sync rollback will affect all pending fully local async
> +-- transactions too.
> +test_run:switch('default')
> +box.cfg{replication_synchro_timeout = 0.001, replication_synchro_quorum = 3}
> +do \
> + f1 = fiber.create(box.space.sync.replace, box.space.sync, {9}) \
> + f2 = fiber.create(box.space.locallocal.replace, box.space.locallocal, {9}) \
> + box.space.test:replace{9} \
> +end
> +f1:status()
> +f2:status()
> +box.space.sync:select{9}
> +box.space.locallocal:select{9}
> +box.space.test:select{9}
> +test_run:switch('replica')
> +box.space.sync:select{9}
> +box.space.locallocal:select{9}
> +box.space.test:select{9}
> +
> --
> -- gh-5123: quorum 1 still should write CONFIRM.
> --
--
Serge Petrenko
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Tarantool-patches] [PATCH 5/5] [tosquash] replication: add test on local row in the end of sync txn
2020-07-02 23:40 ` [Tarantool-patches] [PATCH 5/5] [tosquash] replication: add test on local row in the end of sync txn Vladislav Shpilevoy
@ 2020-07-05 9:11 ` Serge Petrenko
2020-07-05 15:13 ` Vladislav Shpilevoy
0 siblings, 1 reply; 14+ messages in thread
From: Serge Petrenko @ 2020-07-05 9:11 UTC (permalink / raw)
To: Vladislav Shpilevoy, tarantool-patches
On 03.07.2020 02:40, Vladislav Shpilevoy wrote:
> Currently disabled, because soon should start working without
> any changes when #4928 is fixed.
> ---
> test/replication/qsync_basic.result | 20 ++++++++++++++++++++
> test/replication/qsync_basic.test.lua | 20 ++++++++++++++++++++
> 2 files changed, 40 insertions(+)
>
> diff --git a/test/replication/qsync_basic.result b/test/replication/qsync_basic.result
> index 339fc0e33..83ff7d3d1 100644
> --- a/test/replication/qsync_basic.result
> +++ b/test/replication/qsync_basic.result
> @@ -419,6 +419,26 @@ box.space.test:select{9}
> | - []
> | ...
>
> +--
> +-- gh-4928: test that a sync transaction works fine with local
> +-- rows in the end.
> +--
> +
> +-- test_run:switch('default')
> +-- box.cfg{replication_synchro_timeout = 1000, replication_synchro_quorum = 2}
> +-- do \
> +-- box.begin() \
> +-- box.space.sync:replace{10} \
> +-- box.space.locallocal:replace({10}) \
> +-- box.commit() \
> +-- end
> +-- box.space.sync:select{10}
> +-- box.space.locallocal:select{10}
> +
> +-- test_run:switch('replica')
> +-- box.space.sync:select{10}
> +-- box.space.locallocal:select{10}
> +
> --
> -- gh-5123: quorum 1 still should write CONFIRM.
> --
> diff --git a/test/replication/qsync_basic.test.lua b/test/replication/qsync_basic.test.lua
> index 6e40131bf..74083a0b9 100644
> --- a/test/replication/qsync_basic.test.lua
> +++ b/test/replication/qsync_basic.test.lua
> @@ -163,6 +163,26 @@ box.space.sync:select{9}
> box.space.locallocal:select{9}
> box.space.test:select{9}
>
> +--
> +-- gh-4928: test that a sync transaction works fine with local
> +-- rows in the end.
> +--
> +
> +-- test_run:switch('default')
> +-- box.cfg{replication_synchro_timeout = 1000, replication_synchro_quorum = 2}
> +-- do \
> +-- box.begin() \
> +-- box.space.sync:replace{10} \
> +-- box.space.locallocal:replace({10}) \
> +-- box.commit() \
> +-- end
> +-- box.space.sync:select{10}
> +-- box.space.locallocal:select{10}
> +
> +-- test_run:switch('replica')
> +-- box.space.sync:select{10}
> +-- box.space.locallocal:select{10}
> +
> --
> -- gh-5123: quorum 1 still should write CONFIRM.
> --
Hi! Thanks for the test!
LGTM with one comment.
Looks like it relies on the vclock[0] increase you performed in
the previous test (patch 4/5).
If it wasn't for this increase, both correctly and incorrectly
working tarantools would complete the transaction. (The incorrect
one would take lsn = 1 from vclock[0].)
Maybe add the same increase here, to make sure the test stays
valid if someone changes the previous test case?
I mean the `for i = 1, 2000 do box.space.locallocal:replace{1} end` part.
--
Serge Petrenko
* Re: [Tarantool-patches] [PATCH 1/5] [tosquash] replication: fix multiple rollbacks
2020-07-02 23:40 ` [Tarantool-patches] [PATCH 1/5] [tosquash] replication: fix multiple rollbacks Vladislav Shpilevoy
@ 2020-07-05 9:34 ` Serge Petrenko
2020-07-05 15:13 ` Vladislav Shpilevoy
0 siblings, 1 reply; 14+ messages in thread
From: Serge Petrenko @ 2020-07-05 9:34 UTC (permalink / raw)
To: Vladislav Shpilevoy, tarantool-patches
On 03.07.2020 02:40, Vladislav Shpilevoy wrote:
> The problem was that if several transactions time out in one
> event loop iteration, they all will write a rollback. Moreover, they
> will do that in a weird order, starting from the oldest, not in
> a reversed order.
>
> This patch makes limbo write only one rollback at once.
> ---
> src/box/txn_limbo.c | 25 +++++++++++++++++++++++++
> test/replication/qsync_basic.result | 2 +-
> 2 files changed, 26 insertions(+), 1 deletion(-)
>
> diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
> index 0402664cb..2cb687f4d 100644
> --- a/src/box/txn_limbo.c
> +++ b/src/box/txn_limbo.c
> @@ -44,6 +44,13 @@ txn_limbo_create(struct txn_limbo *limbo)
> limbo->got_rollback = false;
> }
>
> +static inline struct txn_limbo_entry *
> +txn_limbo_first_entry(struct txn_limbo *limbo)
> +{
> + return rlist_first_entry(&limbo->queue, struct txn_limbo_entry,
> + in_queue);
> +}
> +
> struct txn_limbo_entry *
> txn_limbo_append(struct txn_limbo *limbo, uint32_t id, struct txn *txn)
> {
> @@ -150,6 +157,24 @@ txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry)
> bool timed_out = fiber_yield_timeout(txn_limbo_confirm_timeout(limbo));
> fiber_set_cancellable(cancellable);
> if (timed_out) {
> + assert(!txn_limbo_is_empty(limbo));
> + if (txn_limbo_first_entry(limbo) != entry) {
> + /*
> + * If this is not a first entry in the
> + * limbo, it is definitely not a first
> + * timed out entry. And since it managed
> + * to time out too, it means there is
> + * currently another fiber writing
> + * rollback. Wait when it will finish and
> + * wake us up.
> + */
Why isn't it the first timed-out entry? Is it because once the previous
entry was confirmed, it is removed from the queue immediately?
Looks fragile.
> + bool cancellable = fiber_set_cancellable(false);
> + fiber_yield();
> + fiber_set_cancellable(cancellable);
> + assert(txn_limbo_entry_is_complete(entry));
> + goto complete;
> + }
> +
> txn_limbo_write_rollback(limbo, entry);
> struct txn_limbo_entry *e, *tmp;
> rlist_foreach_entry_safe_reverse(e, &limbo->queue,
> diff --git a/test/replication/qsync_basic.result b/test/replication/qsync_basic.result
> index cdecf00e8..32deb2ac3 100644
> --- a/test/replication/qsync_basic.result
> +++ b/test/replication/qsync_basic.result
> @@ -272,7 +272,7 @@ box.cfg{replication_synchro_timeout = 0.001, replication_synchro_quorum = 3}
> | ...
> f = fiber.create(box.space.sync.replace, box.space.sync, {6}) s:replace{6}
> | ---
> - | - error: Quorum collection for a synchronous transaction is timed out
> + | - error: A rollback for a synchronous transaction is received
> | ...
> f:status()
> | ---
--
Serge Petrenko
* Re: [Tarantool-patches] [PATCH 5/5] [tosquash] replication: add test on local row in the end of sync txn
2020-07-05 9:11 ` Serge Petrenko
@ 2020-07-05 15:13 ` Vladislav Shpilevoy
0 siblings, 0 replies; 14+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-05 15:13 UTC (permalink / raw)
To: Serge Petrenko, tarantool-patches
>> diff --git a/test/replication/qsync_basic.test.lua b/test/replication/qsync_basic.test.lua
>> index 6e40131bf..74083a0b9 100644
>> --- a/test/replication/qsync_basic.test.lua
>> +++ b/test/replication/qsync_basic.test.lua
>> @@ -163,6 +163,26 @@ box.space.sync:select{9}
>> box.space.locallocal:select{9}
>> box.space.test:select{9}
>> +--
>> +-- gh-4928: test that a sync transaction works fine with local
>> +-- rows in the end.
>> +--
>> +
>> +-- test_run:switch('default')
>> +-- box.cfg{replication_synchro_timeout = 1000, replication_synchro_quorum = 2}
>> +-- do \
>> +-- box.begin() \
>> +-- box.space.sync:replace{10} \
>> +-- box.space.locallocal:replace({10}) \
>> +-- box.commit() \
>> +-- end
>> +-- box.space.sync:select{10}
>> +-- box.space.locallocal:select{10}
>> +
>> +-- test_run:switch('replica')
>> +-- box.space.sync:select{10}
>> +-- box.space.locallocal:select{10}
>> +
>> --
>> -- gh-5123: quorum 1 still should write CONFIRM.
>> --
> Hi! Thanks for the test!
> LGTM with one comment.
>
> Looks like it relies on the vclock[0] increase you performed in
> the previous test (patch 4/5).
Yup.
> If it wasn't for this increase, both correctly and incorrectly
> working tarantools would complete the transaction. (The incorrect
> one would take lsn = 1 from vclock[0].)
>
> Maybe add the same increase here, to make sure the test stays
> valid if someone changes the previous test case?
Ok, fair. I also reduced the first bump count to speed up the
test a bit.
====================
diff --git a/test/replication/qsync_basic.result b/test/replication/qsync_basic.result
index 83ff7d3d1..3e28607b0 100644
--- a/test/replication/qsync_basic.result
+++ b/test/replication/qsync_basic.result
@@ -317,7 +317,7 @@ _ = _:create_index('pk')
| ...
-- Propagate local vclock to some insane value to ensure it won't
-- affect anything.
-box.begin() for i = 1, 2000 do box.space.locallocal:replace{1} end box.commit()
+box.begin() for i = 1, 500 do box.space.locallocal:replace{1} end box.commit()
| ---
| ...
do \
@@ -426,6 +426,9 @@ box.space.test:select{9}
-- test_run:switch('default')
-- box.cfg{replication_synchro_timeout = 1000, replication_synchro_quorum = 2}
+-- -- Propagate local vclock to some insane value to ensure it won't
+-- -- affect anything.
+-- box.begin() for i = 1, 500 do box.space.locallocal:replace{1} end box.commit()
-- do \
-- box.begin() \
-- box.space.sync:replace{10} \
diff --git a/test/replication/qsync_basic.test.lua b/test/replication/qsync_basic.test.lua
index 74083a0b9..860d6d6c4 100644
--- a/test/replication/qsync_basic.test.lua
+++ b/test/replication/qsync_basic.test.lua
@@ -127,7 +127,7 @@ _ = box.schema.create_space('locallocal', {is_local = true})
_ = _:create_index('pk')
-- Propagate local vclock to some insane value to ensure it won't
-- affect anything.
-box.begin() for i = 1, 2000 do box.space.locallocal:replace{1} end box.commit()
+box.begin() for i = 1, 500 do box.space.locallocal:replace{1} end box.commit()
do \
f1 = fiber.create(box.space.sync.replace, box.space.sync, {8}) \
f2 = fiber.create(box.space.locallocal.replace, box.space.locallocal, {8}) \
@@ -170,6 +170,9 @@ box.space.test:select{9}
-- test_run:switch('default')
-- box.cfg{replication_synchro_timeout = 1000, replication_synchro_quorum = 2}
+-- -- Propagate local vclock to some insane value to ensure it won't
+-- -- affect anything.
+-- box.begin() for i = 1, 500 do box.space.locallocal:replace{1} end box.commit()
-- do \
-- box.begin() \
-- box.space.sync:replace{10}
* Re: [Tarantool-patches] [PATCH 1/5] [tosquash] replication: fix multiple rollbacks
2020-07-05 9:34 ` Serge Petrenko
@ 2020-07-05 15:13 ` Vladislav Shpilevoy
0 siblings, 0 replies; 14+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-05 15:13 UTC (permalink / raw)
To: Serge Petrenko, tarantool-patches
>> src/box/txn_limbo.c | 25 +++++++++++++++++++++++++
>> test/replication/qsync_basic.result | 2 +-
>> 2 files changed, 26 insertions(+), 1 deletion(-)
>>
>> diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
>> index 0402664cb..2cb687f4d 100644
>> --- a/src/box/txn_limbo.c
>> +++ b/src/box/txn_limbo.c
>> @@ -150,6 +157,24 @@ txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry)
>> bool timed_out = fiber_yield_timeout(txn_limbo_confirm_timeout(limbo));
>> fiber_set_cancellable(cancellable);
>> if (timed_out) {
>> + assert(!txn_limbo_is_empty(limbo));
>> + if (txn_limbo_first_entry(limbo) != entry) {
>> + /*
>> + * If this is not a first entry in the
>> + * limbo, it is definitely not a first
>> + * timed out entry. And since it managed
>> + * to time out too, it means there is
>> + * currently another fiber writing
>> + * rollback. Wait when it will finish and
>> + * wake us up.
>> + */
>
> Why isn't it the first timed-out entry? Is it because once the previous
> entry was confirmed, it is removed from the queue immediately?
> Looks fragile.
No, this is not related to confirms. If two entries were added to the
limbo and the second one timed out, then, since the first one has been
sitting in the limbo even longer, it must have timed out too. So it is
enough to write a rollback for the first entry, and that will roll back
all the following ones.
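The gating logic described above can be sketched in a few lines of
standalone C (hypothetical minimal structs, not the real Tarantool
`txn_limbo` types): the limbo is a FIFO, so on timeout only the queue
head is responsible for writing ROLLBACK; any later timed-out waiter
knows an older entry is already doing it and just yields.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical minimal model of a limbo entry and the limbo queue. */
struct entry {
	int64_t lsn;
	bool is_rollback;
};

struct limbo {
	struct entry *queue[16];	/* queue[0] is the oldest entry */
	int len;
};

/*
 * Returns true when this timed-out entry must write the ROLLBACK
 * record itself. If the entry is not the queue head, an older entry
 * has been waiting at least as long, so it timed out first and is
 * already writing the rollback; this fiber should just yield and
 * wait to be woken up.
 */
static bool
must_write_rollback(const struct limbo *l, const struct entry *e)
{
	assert(l->len > 0);
	return l->queue[0] == e;
}
```

With two queued entries, only the head reports that it must write the
rollback, which is exactly why a single ROLLBACK record per event loop
iteration suffices.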
* Re: [Tarantool-patches] [PATCH 4/5] [tosquash] replication: rework how local transactions wait sync
2020-07-05 9:04 ` Serge Petrenko
@ 2020-07-05 15:13 ` Vladislav Shpilevoy
0 siblings, 0 replies; 14+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-05 15:13 UTC (permalink / raw)
To: Serge Petrenko, tarantool-patches
>> diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
>> index 387cfd337..44a0c7273 100644
>> --- a/src/box/txn_limbo.c
>> +++ b/src/box/txn_limbo.c
>> @@ -316,7 +320,7 @@ txn_limbo_read_rollback(struct txn_limbo *limbo, int64_t lsn)
>> assert(limbo->instance_id != REPLICA_ID_NIL);
>> struct txn_limbo_entry *e, *tmp;
>> rlist_foreach_entry_safe_reverse(e, &limbo->queue, in_queue, tmp) {
>> - if (e->lsn <= lsn)
>> + if (e->lsn <= lsn && txn_has_flag(e->txn, TXN_WAIT_ACK))
>> break;
>
> Are you rolling back the async transactions that are before the last sync
> transaction to be rolled back? Why?
> Shouldn't this condition stay the same?
Yes, looks like a bug. Keeping the condition unchanged won't work -
async transactions have lsn -1 in the limbo, so the rollback would stop
at the first async transaction even when it needs to go further. In the
end I did it the same way as in confirm: first find how far the
rollback should go, then do the rollback.
I haven't come up with a test yet. Right now the master rolls back
either everything or nothing, because a rollback happens only on
timeout, and therefore always starts from the very first entry.
====================
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -319,9 +319,17 @@ txn_limbo_read_rollback(struct txn_limbo *limbo, int64_t lsn)
{
assert(limbo->instance_id != REPLICA_ID_NIL);
struct txn_limbo_entry *e, *tmp;
- rlist_foreach_entry_safe_reverse(e, &limbo->queue, in_queue, tmp) {
- if (e->lsn <= lsn && txn_has_flag(e->txn, TXN_WAIT_ACK))
+ struct txn_limbo_entry *last_rollback = NULL;
+ rlist_foreach_entry_reverse(e, &limbo->queue, in_queue) {
+ if (!txn_has_flag(e->txn, TXN_WAIT_ACK))
+ continue;
+ if (e->lsn <= lsn)
break;
+ last_rollback = e;
+ }
+ if (last_rollback == NULL)
+ return;
+ rlist_foreach_entry_safe_reverse(e, &limbo->queue, in_queue, tmp) {
e->is_rollback = true;
txn_limbo_pop(limbo, e);
txn_clear_flag(e->txn, TXN_WAIT_SYNC);
@@ -342,6 +350,8 @@ txn_limbo_read_rollback(struct txn_limbo *limbo, int64_t lsn)
txn_complete(e->txn);
e->txn->fiber = fiber;
}
+ if (e == last_rollback)
+ break;
}
}
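The two-pass scan in the diff above can be modeled with a standalone C
sketch (hypothetical array-based queue instead of the real rlist, and a
plain struct instead of txn_limbo_entry): pass 1 walks newest-to-oldest
over sync entries only (async ones carry lsn == -1 and are skipped) to
find the oldest entry that must be rolled back; pass 2 rolls back
everything from the tail down to and including it.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical limbo entry: wait_ack marks a sync transaction. */
struct lentry {
	int64_t lsn;
	bool wait_ack;
	bool rolled_back;
};

/*
 * Roll back every entry newer than the last confirmed `lsn`.
 * queue[0] is the oldest entry. Returns how many were rolled back.
 */
static int
limbo_read_rollback(struct lentry *queue, int len, int64_t lsn)
{
	int last = -1;
	/*
	 * Pass 1: find the oldest sync entry to roll back. Async
	 * entries are skipped - their lsn is -1, so testing it
	 * against `lsn` would stop the scan too early.
	 */
	for (int i = len - 1; i >= 0; i--) {
		if (!queue[i].wait_ack)
			continue;
		if (queue[i].lsn <= lsn)
			break;		/* confirmed range reached */
		last = i;
	}
	if (last < 0)
		return 0;
	/* Pass 2: roll back from the tail down to that entry. */
	int count = 0;
	for (int i = len - 1; i >= last; i--) {
		queue[i].rolled_back = true;
		count++;
	}
	return count;
}
```

Note that an async entry sitting between a confirmed sync entry and the
first rolled-back sync entry survives, which is the behavior Serge's
review comment asked for.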