* [Tarantool-patches] [PATCH v2 0/2] fix an assertion failure in box.ctl.promote()
@ 2021-05-25 10:39 Serge Petrenko via Tarantool-patches
2021-05-25 10:39 ` [Tarantool-patches] [PATCH v2 1/2] box: refactor in_promote using a guard Serge Petrenko via Tarantool-patches
` (2 more replies)
0 siblings, 3 replies; 13+ messages in thread
From: Serge Petrenko via Tarantool-patches @ 2021-05-25 10:39 UTC (permalink / raw)
To: v.shpilevoy, gorcunov; +Cc: tarantool-patches
https://github.com/tarantool/tarantool/tree/sp/gh-6032-promote-wal-write-v2
https://github.com/tarantool/tarantool/issues/6032
Changes in v2:
- replaced custom waiting for a WAL write with
a single wal_sync() call. This simplified the
patch quite a bit.
- shortened the test. It doesn't test promote()
cancellation now, because wal_sync() isn't
cancellable.
Serge Petrenko (2):
box: refactor in_promote using a guard
box: fix an assertion failure in box.ctl.promote()
src/box/box.cc | 38 ++++++----
.../gh-6032-promote-wal-write.result | 69 +++++++++++++++++++
.../gh-6032-promote-wal-write.test.lua | 28 ++++++++
test/replication/suite.cfg | 1 +
test/replication/suite.ini | 2 +-
5 files changed, 124 insertions(+), 14 deletions(-)
create mode 100644 test/replication/gh-6032-promote-wal-write.result
create mode 100644 test/replication/gh-6032-promote-wal-write.test.lua
--
2.30.1 (Apple Git-130)
^ permalink raw reply [flat|nested] 13+ messages in thread
* [Tarantool-patches] [PATCH v2 1/2] box: refactor in_promote using a guard
2021-05-25 10:39 [Tarantool-patches] [PATCH v2 0/2] fix an assertion failure in box.ctl.promote() Serge Petrenko via Tarantool-patches
@ 2021-05-25 10:39 ` Serge Petrenko via Tarantool-patches
2021-05-26 7:25 ` Cyrill Gorcunov via Tarantool-patches
2021-05-25 10:39 ` [Tarantool-patches] [PATCH v2 2/2] box: fix an assertion failure in box.ctl.promote() Serge Petrenko via Tarantool-patches
2021-06-01 12:20 ` [Tarantool-patches] [PATCH v2 0/2] " Kirill Yukhin via Tarantool-patches
2 siblings, 1 reply; 13+ messages in thread
From: Serge Petrenko via Tarantool-patches @ 2021-05-25 10:39 UTC (permalink / raw)
To: v.shpilevoy, gorcunov; +Cc: tarantool-patches
---
src/box/box.cc | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/src/box/box.cc b/src/box/box.cc
index c10e0d8bf..894e3d0f4 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1562,6 +1562,9 @@ box_promote(void)
int rc = 0;
int quorum = replication_synchro_quorum;
in_promote = true;
+ auto promote_guard = make_scoped_guard([&] {
+ in_promote = false;
+ });
if (run_elections) {
/*
@@ -1581,19 +1584,15 @@ box_promote(void)
*/
if (box_election_mode == ELECTION_MODE_MANUAL)
raft_stop_candidate(box_raft(), false);
- if (rc != 0) {
- in_promote = false;
+ if (rc != 0)
return -1;
- }
if (!box_raft()->is_enabled) {
diag_set(ClientError, ER_RAFT_DISABLED);
- in_promote = false;
return -1;
}
if (box_raft()->state != RAFT_STATE_LEADER) {
diag_set(ClientError, ER_INTERFERING_PROMOTE,
box_raft()->leader);
- in_promote = false;
return -1;
}
}
@@ -1617,7 +1616,6 @@ box_promote(void)
if (former_leader_id != txn_limbo.owner_id) {
diag_set(ClientError, ER_INTERFERING_PROMOTE,
txn_limbo.owner_id);
- in_promote = false;
return -1;
}
}
@@ -1658,7 +1656,6 @@ promote:
assert(txn_limbo_is_empty(&txn_limbo));
}
}
- in_promote = false;
return rc;
}
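The guard introduced above can be illustrated in isolation. This is a simplified stand-in for make_scoped_guard(), not Tarantool's real implementation, and box_promote_sketch() is a made-up function that only demonstrates how every return path resets the flag without manual cleanup:

```cpp
#include <functional>
#include <utility>

// Minimal scoped guard: runs the stored callback when it leaves scope.
struct ScopedGuard {
	std::function<void()> fn;
	explicit ScopedGuard(std::function<void()> f) : fn(std::move(f)) {}
	~ScopedGuard() { fn(); }
	ScopedGuard(const ScopedGuard &) = delete;
	ScopedGuard &operator=(const ScopedGuard &) = delete;
};

bool in_promote = false;

// Every return path leaves in_promote reset, with no repeated
// "in_promote = false;" lines before each return.
int box_promote_sketch(bool fail_early)
{
	if (in_promote)
		return -1;	/* simultaneous invocation */
	in_promote = true;
	ScopedGuard guard([] { in_promote = false; });
	if (fail_early)
		return -1;	/* guard still resets the flag */
	/* ... elections, quorum wait, etc. would go here ... */
	return 0;
}
```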
--
2.30.1 (Apple Git-130)
* [Tarantool-patches] [PATCH v2 2/2] box: fix an assertion failure in box.ctl.promote()
2021-05-25 10:39 [Tarantool-patches] [PATCH v2 0/2] fix an assertion failure in box.ctl.promote() Serge Petrenko via Tarantool-patches
2021-05-25 10:39 ` [Tarantool-patches] [PATCH v2 1/2] box: refactor in_promote using a guard Serge Petrenko via Tarantool-patches
@ 2021-05-25 10:39 ` Serge Petrenko via Tarantool-patches
2021-05-26 6:14 ` Cyrill Gorcunov via Tarantool-patches
2021-06-01 12:20 ` [Tarantool-patches] [PATCH v2 0/2] " Kirill Yukhin via Tarantool-patches
2 siblings, 1 reply; 13+ messages in thread
From: Serge Petrenko via Tarantool-patches @ 2021-05-25 10:39 UTC (permalink / raw)
To: v.shpilevoy, gorcunov; +Cc: tarantool-patches
box.ctl.promote() used to assume that the last synchronous entry is
already written to WAL by the time it's called. This is not the case
when promote is executed on the limbo owner. The last synchronous entry
might still be en route to WAL.
In order to fix the issue, wait until all the limbo entries are written
to disk via wal_sync(). After this happens, it's safe to proceed to
gathering quorum in promote.
Closes #6032
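The ordering described above — make sure the limbo's last synchronous entry has actually reached the WAL before taking its lsn — can be modeled in miniature. This is a simplified sketch with the limbo and WAL mocked out; none of the names below are Tarantool's real API:

```cpp
#include <cstdint>
#include <deque>

// Mock limbo entry: lsn < 0 means the WAL write hasn't finished yet.
struct Entry { int64_t lsn; };

struct Limbo {
	std::deque<Entry> entries;
	int64_t confirmed_lsn = 0;
	bool empty() const { return entries.empty(); }
	Entry *last() { return &entries.back(); }
};

// Mock wal_sync(): completes all pending WAL writes, assigning LSNs.
void wal_sync(Limbo &l, int64_t &next_lsn)
{
	for (Entry &e : l.entries)
		if (e.lsn < 0)
			e.lsn = next_lsn++;
}

// The gist of the fix: if the last entry is still en route to WAL,
// sync first. Previously the code assumed last()->lsn was already
// positive and the assertion fired on the limbo owner.
int64_t promote_wait_lsn(Limbo &l, int64_t &next_lsn)
{
	if (l.empty())
		return l.confirmed_lsn;
	Entry *last = l.last();
	if (last->lsn < 0)
		wal_sync(l, next_lsn);
	return last->lsn;	/* now guaranteed positive */
}
```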
---
src/box/box.cc | 27 ++++++--
.../gh-6032-promote-wal-write.result | 69 +++++++++++++++++++
.../gh-6032-promote-wal-write.test.lua | 28 ++++++++
test/replication/suite.cfg | 1 +
test/replication/suite.ini | 2 +-
5 files changed, 120 insertions(+), 7 deletions(-)
create mode 100644 test/replication/gh-6032-promote-wal-write.result
create mode 100644 test/replication/gh-6032-promote-wal-write.test.lua
diff --git a/src/box/box.cc b/src/box/box.cc
index 894e3d0f4..3d9cd0e57 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1618,14 +1618,29 @@ box_promote(void)
txn_limbo.owner_id);
return -1;
}
+ if (txn_limbo_is_empty(&txn_limbo)) {
+ wait_lsn = txn_limbo.confirmed_lsn;
+ goto promote;
+ }
}
- /*
- * promote() is a no-op on the limbo owner, so all the rows
- * in the limbo must've come through the applier meaning they already
- * have an lsn assigned, even if their WAL write hasn't finished yet.
- */
- wait_lsn = txn_limbo_last_synchro_entry(&txn_limbo)->lsn;
+ struct txn_limbo_entry *last_entry;
+ last_entry = txn_limbo_last_synchro_entry(&txn_limbo);
+ /* Wait for the last entry's WAL write. */
+ if (last_entry->lsn < 0) {
+ if (wal_sync(NULL) < 0)
+ return -1;
+ if (txn_limbo_is_empty(&txn_limbo)) {
+ wait_lsn = txn_limbo.confirmed_lsn;
+ goto promote;
+ }
+ if (last_entry != txn_limbo_last_synchro_entry(&txn_limbo)) {
+ diag_set(ClientError, ER_QUORUM_WAIT, quorum,
+ "new synchronous transactions appeared");
+ return -1;
+ }
+ }
+ wait_lsn = last_entry->lsn;
assert(wait_lsn > 0);
rc = box_wait_quorum(former_leader_id, wait_lsn, quorum,
diff --git a/test/replication/gh-6032-promote-wal-write.result b/test/replication/gh-6032-promote-wal-write.result
new file mode 100644
index 000000000..246c7974f
--- /dev/null
+++ b/test/replication/gh-6032-promote-wal-write.result
@@ -0,0 +1,69 @@
+-- test-run result file version 2
+test_run = require('test_run').new()
+ | ---
+ | ...
+fiber = require('fiber')
+ | ---
+ | ...
+
+replication_synchro_timeout = box.cfg.replication_synchro_timeout
+ | ---
+ | ...
+box.cfg{\
+ replication_synchro_timeout = 0.001,\
+}
+ | ---
+ | ...
+
+_ = box.schema.create_space('sync', {is_sync = true}):create_index('pk')
+ | ---
+ | ...
+
+box.error.injection.set('ERRINJ_WAL_DELAY', true)
+ | ---
+ | - ok
+ | ...
+_ = fiber.create(function() box.space.sync:replace{1} end)
+ | ---
+ | ...
+ok, err = nil, nil
+ | ---
+ | ...
+
+-- Test that the fiber actually waits for a WAL write to happen.
+f = fiber.create(function() ok, err = pcall(box.ctl.promote) end)
+ | ---
+ | ...
+fiber.sleep(0.1)
+ | ---
+ | ...
+f:status()
+ | ---
+ | - suspended
+ | ...
+box.error.injection.set('ERRINJ_WAL_DELAY', false)
+ | ---
+ | - ok
+ | ...
+test_run:wait_cond(function() return f:status() == 'dead' end)
+ | ---
+ | - true
+ | ...
+ok
+ | ---
+ | - true
+ | ...
+err
+ | ---
+ | - null
+ | ...
+
+-- Cleanup.
+box.cfg{\
+ replication_synchro_timeout = replication_synchro_timeout,\
+}
+ | ---
+ | ...
+box.space.sync:drop()
+ | ---
+ | ...
diff --git a/test/replication/gh-6032-promote-wal-write.test.lua b/test/replication/gh-6032-promote-wal-write.test.lua
new file mode 100644
index 000000000..8c1859083
--- /dev/null
+++ b/test/replication/gh-6032-promote-wal-write.test.lua
@@ -0,0 +1,28 @@
+test_run = require('test_run').new()
+fiber = require('fiber')
+
+replication_synchro_timeout = box.cfg.replication_synchro_timeout
+box.cfg{\
+ replication_synchro_timeout = 0.001,\
+}
+
+_ = box.schema.create_space('sync', {is_sync = true}):create_index('pk')
+
+box.error.injection.set('ERRINJ_WAL_DELAY', true)
+_ = fiber.create(function() box.space.sync:replace{1} end)
+ok, err = nil, nil
+
+-- Test that the fiber actually waits for a WAL write to happen.
+f = fiber.create(function() ok, err = pcall(box.ctl.promote) end)
+fiber.sleep(0.1)
+f:status()
+box.error.injection.set('ERRINJ_WAL_DELAY', false)
+test_run:wait_cond(function() return f:status() == 'dead' end)
+ok
+err
+
+-- Cleanup.
+box.cfg{\
+ replication_synchro_timeout = replication_synchro_timeout,\
+}
+box.space.sync:drop()
diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg
index dc39e2f74..dfe4be9ae 100644
--- a/test/replication/suite.cfg
+++ b/test/replication/suite.cfg
@@ -45,6 +45,7 @@
"gh-5435-qsync-clear-synchro-queue-commit-all.test.lua": {},
"gh-5536-wal-limit.test.lua": {},
"gh-5566-final-join-synchro.test.lua": {},
+ "gh-6032-promote-wal-write.test.lua": {},
"*": {
"memtx": {"engine": "memtx"},
"vinyl": {"engine": "vinyl"}
diff --git a/test/replication/suite.ini b/test/replication/suite.ini
index 1d9c0a4ae..2625c5eea 100644
--- a/test/replication/suite.ini
+++ b/test/replication/suite.ini
@@ -3,7 +3,7 @@ core = tarantool
script = master.lua
description = tarantool/box, replication
disabled = consistent.test.lua
-release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua
+release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6032-promote-wal-write.test.lua
config = suite.cfg
lua_libs = lua/fast_replica.lua lua/rlimit.lua
use_unix_sockets = True
--
2.30.1 (Apple Git-130)
* Re: [Tarantool-patches] [PATCH v2 2/2] box: fix an assertion failure in box.ctl.promote()
2021-05-25 10:39 ` [Tarantool-patches] [PATCH v2 2/2] box: fix an assertion failure in box.ctl.promote() Serge Petrenko via Tarantool-patches
@ 2021-05-26 6:14 ` Cyrill Gorcunov via Tarantool-patches
2021-05-26 8:25 ` Serge Petrenko via Tarantool-patches
0 siblings, 1 reply; 13+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-05-26 6:14 UTC (permalink / raw)
To: Serge Petrenko; +Cc: v.shpilevoy, tarantool-patches
On Tue, May 25, 2021 at 01:39:29PM +0300, Serge Petrenko wrote:
> box.ctl.promote() used to assume that the last synchronous entry is
> already written to WAL by the time it's called. This is not the case
> when promote is executed on the limbo owner. The last synchronous entry
> might still be en route to WAL.
Typo "en" -> "in".
> @@ -1618,14 +1618,29 @@ box_promote(void)
> txn_limbo.owner_id);
> return -1;
> }
> + if (txn_limbo_is_empty(&txn_limbo)) {
> + wait_lsn = txn_limbo.confirmed_lsn;
> + goto promote;
> + }
> }
>
> - /*
> - * promote() is a no-op on the limbo owner, so all the rows
> - * in the limbo must've come through the applier meaning they already
> - * have an lsn assigned, even if their WAL write hasn't finished yet.
> - */
> - wait_lsn = txn_limbo_last_synchro_entry(&txn_limbo)->lsn;
> + struct txn_limbo_entry *last_entry;
> + last_entry = txn_limbo_last_synchro_entry(&txn_limbo);
> + /* Wait for the last entry's WAL write. */
> + if (last_entry->lsn < 0) {
> + if (wal_sync(NULL) < 0)
> + return -1;
> + if (txn_limbo_is_empty(&txn_limbo)) {
> + wait_lsn = txn_limbo.confirmed_lsn;
> + goto promote;
> + }
> + if (last_entry != txn_limbo_last_synchro_entry(&txn_limbo)) {
This is a bit dangerous. We cache a pointer and then go to fiber_yield(),
which switches context; at this moment the pointer becomes dangling,
and we simply can't be sure whether it _was_ reused. IOW, Serge, are we
100% sure that the same pointer, with the same address but new data,
won't appear here as the last entry in the limbo?
> + diag_set(ClientError, ER_QUORUM_WAIT, quorum,
> + "new synchronous transactions appeared");
> + return -1;
> + }
> + }
> + wait_lsn = last_entry->lsn;
> assert(wait_lsn > 0);
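The hazard raised above is the classic ABA problem: after a yield, a freed entry's slot can be handed out again at the same address. A one-slot mock pool makes it concrete (illustrative only; the real limbo entries come from an allocator that can behave analogously):

```cpp
#include <cstdint>

struct Entry { int64_t id; };

// One-slot "pool": every allocation hands back the same address,
// the way an allocator may reuse a just-freed slot.
Entry pool_slot;

Entry *pool_alloc(int64_t txn_id)
{
	pool_slot.id = txn_id;
	return &pool_slot;
}

// Cache a pointer, "yield" (old entry freed, a new one allocated at
// the same address): comparing pointers cannot see the replacement...
bool pointer_check_sees_replacement()
{
	Entry *cached = pool_alloc(1);
	Entry *now = pool_alloc(2);	/* slot reused for a new entry */
	return cached != now;		/* false: same address! */
}

// ...while comparing unique ids (cf. the txn->id suggestion later in
// this thread) detects it reliably.
bool id_check_sees_replacement()
{
	int64_t cached_id = pool_alloc(1)->id;
	Entry *now = pool_alloc(2);
	return cached_id != now->id;	/* true */
}
```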
* Re: [Tarantool-patches] [PATCH v2 1/2] box: refactor in_promote using a guard
2021-05-25 10:39 ` [Tarantool-patches] [PATCH v2 1/2] box: refactor in_promote using a guard Serge Petrenko via Tarantool-patches
@ 2021-05-26 7:25 ` Cyrill Gorcunov via Tarantool-patches
2021-05-27 10:57 ` Serge Petrenko via Tarantool-patches
0 siblings, 1 reply; 13+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-05-26 7:25 UTC (permalink / raw)
To: Serge Petrenko; +Cc: v.shpilevoy, tarantool-patches
On Tue, May 25, 2021 at 01:39:28PM +0300, Serge Petrenko wrote:
> ---
> src/box/box.cc | 11 ++++-------
> 1 file changed, 4 insertions(+), 7 deletions(-)
>
> diff --git a/src/box/box.cc b/src/box/box.cc
> index c10e0d8bf..894e3d0f4 100644
> --- a/src/box/box.cc
> +++ b/src/box/box.cc
> @@ -1562,6 +1562,9 @@ box_promote(void)
> int rc = 0;
> int quorum = replication_synchro_quorum;
> in_promote = true;
> + auto promote_guard = make_scoped_guard([&] {
> + in_promote = false;
> + });
Looks ok to me, though I must confess I always find such
flags spread all over the code somewhat clumsy. Since this
is a common pattern in our cpp code, let's keep it, but still, in
my humble opinion, we could rather move all the box_promote code
into some box_do_promote helper, and we would have
int
box_promote(void)
{
static bool in_promote = false;
if (in_promote) {
diag_set(ClientError, ER_UNSUPPORTED, "box.ctl.promote",
"simultaneous invocations");
return -1;
}
in_promote = true;
int rc = box_do_promote();
in_promote = false;
return rc;
}
but surely this is not a request for code refactoring, current form
is ok as well ;)
Ack.
Serge, while you're at this code anyway, could you please use
switch (box_election_mode) {
case ELECTION_MODE_OFF:
try_wait = true;
break;
...
default:
panic("unexpected box_election_mode");
break;
}
instead of the unreachable() call? We should stop using unreachable()
as much as we can.
* Re: [Tarantool-patches] [PATCH v2 2/2] box: fix an assertion failure in box.ctl.promote()
2021-05-26 6:14 ` Cyrill Gorcunov via Tarantool-patches
@ 2021-05-26 8:25 ` Serge Petrenko via Tarantool-patches
2021-05-26 18:46 ` Vladislav Shpilevoy via Tarantool-patches
0 siblings, 1 reply; 13+ messages in thread
From: Serge Petrenko via Tarantool-patches @ 2021-05-26 8:25 UTC (permalink / raw)
To: Cyrill Gorcunov; +Cc: v.shpilevoy, tarantool-patches
26.05.2021 09:14, Cyrill Gorcunov wrote:
> On Tue, May 25, 2021 at 01:39:29PM +0300, Serge Petrenko wrote:
Hi! Thanks for the review!
>> box.ctl.promote() used to assume that the last synchronous entry is
>> already written to WAL by the time it's called. This is not the case
>> when promote is executed on the limbo owner. The last synchronous entry
>> might still be en route to WAL.
> Typo "en" -> "in".
No, "en route" means "on the way"/"along the way".
>
>> @@ -1618,14 +1618,29 @@ box_promote(void)
>> txn_limbo.owner_id);
>> return -1;
>> }
>> + if (txn_limbo_is_empty(&txn_limbo)) {
>> + wait_lsn = txn_limbo.confirmed_lsn;
>> + goto promote;
>> + }
>> }
>>
>> - /*
>> - * promote() is a no-op on the limbo owner, so all the rows
>> - * in the limbo must've come through the applier meaning they already
>> - * have an lsn assigned, even if their WAL write hasn't finished yet.
>> - */
>> - wait_lsn = txn_limbo_last_synchro_entry(&txn_limbo)->lsn;
>> + struct txn_limbo_entry *last_entry;
>> + last_entry = txn_limbo_last_synchro_entry(&txn_limbo);
>> + /* Wait for the last entry's WAL write. */
>> + if (last_entry->lsn < 0) {
>> + if (wal_sync(NULL) < 0)
>> + return -1;
>> + if (txn_limbo_is_empty(&txn_limbo)) {
>> + wait_lsn = txn_limbo.confirmed_lsn;
>> + goto promote;
>> + }
>> + if (last_entry != txn_limbo_last_synchro_entry(&txn_limbo)) {
> This is a bit dangerous. We cache a pointer and then go to fiber_yield(),
> which switches context; at this moment the pointer becomes dangling,
> and we simply can't be sure whether it _was_ reused. IOW, Serge, are we
> 100% sure that the same pointer, with the same address but new data,
> won't appear here as the last entry in the limbo?
I agree this solution is not perfect.
An alternative would be to do the following:
1) Check that the limbo owner hasn't changed.
2) Check that the last entry has a positive lsn (i.e. it's not a new entry
which wasn't yet written to WAL), and that this lsn is equal to the lsn
of our entry.
But what if our entry was confirmed and destroyed during wal_sync()? We
can't compare other entries' lsns with this one's.
>
>> + diag_set(ClientError, ER_QUORUM_WAIT, quorum,
>> + "new synchronous transactions appeared");
>> + return -1;
>> + }
>> + }
>> + wait_lsn = last_entry->lsn;
>> assert(wait_lsn > 0);
--
Serge Petrenko
* Re: [Tarantool-patches] [PATCH v2 2/2] box: fix an assertion failure in box.ctl.promote()
2021-05-26 8:25 ` Serge Petrenko via Tarantool-patches
@ 2021-05-26 18:46 ` Vladislav Shpilevoy via Tarantool-patches
2021-05-27 10:53 ` Serge Petrenko via Tarantool-patches
0 siblings, 1 reply; 13+ messages in thread
From: Vladislav Shpilevoy via Tarantool-patches @ 2021-05-26 18:46 UTC (permalink / raw)
To: Serge Petrenko, Cyrill Gorcunov; +Cc: tarantool-patches
>>> @@ -1618,14 +1618,29 @@ box_promote(void)
>>> txn_limbo.owner_id);
>>> return -1;
>>> }
>>> + if (txn_limbo_is_empty(&txn_limbo)) {
>>> + wait_lsn = txn_limbo.confirmed_lsn;
>>> + goto promote;
>>> + }
>>> }
>>> - /*
>>> - * promote() is a no-op on the limbo owner, so all the rows
>>> - * in the limbo must've come through the applier meaning they already
>>> - * have an lsn assigned, even if their WAL write hasn't finished yet.
>>> - */
>>> - wait_lsn = txn_limbo_last_synchro_entry(&txn_limbo)->lsn;
>>> + struct txn_limbo_entry *last_entry;
>>> + last_entry = txn_limbo_last_synchro_entry(&txn_limbo);
>>> + /* Wait for the last entry's WAL write. */
>>> + if (last_entry->lsn < 0) {
>>> + if (wal_sync(NULL) < 0)
>>> + return -1;
>>> + if (txn_limbo_is_empty(&txn_limbo)) {
>>> + wait_lsn = txn_limbo.confirmed_lsn;
>>> + goto promote;
>>> + }
>>> + if (last_entry != txn_limbo_last_synchro_entry(&txn_limbo)) {
>> This is a bit dangerous. We cache a pointer and then go to fiber_yield(),
>> which switches context; at this moment the pointer becomes dangling,
>> and we simply can't be sure whether it _was_ reused. IOW, Serge, are we
>> 100% sure that the same pointer, with the same address but new data,
>> won't appear here as the last entry in the limbo?
>
> I agree this solution is not perfect.
>
> An alternative would be to do the following:
> 1) Check that the limbo owner hasn't changed.
> 2) Check that the last entry has a positive lsn (i.e. it's not a new entry which
> wasn't yet written to WAL), and that this lsn is equal to the lsn of our entry.
>
> But what if our entry was confirmed and destroyed during wal_sync()? We can't compare
> other entries' lsns with this one's.
As decided in the chat, you can use txn->id. It is unique until
restart and should help to detect if the last transaction has
changed.
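The resulting snapshot-and-revalidate pattern can be modeled in a few lines. Everything below is a mock except the idea itself: remember a monotonically growing id before yielding, and compare it afterwards instead of comparing addresses:

```cpp
#include <cstdint>

struct Txn { int64_t id; };

// Ids come from a monotonic counter, so they are unique until restart
// and stay meaningful across yields, unlike a heap address.
int64_t next_txn_id = 1;

Txn begin_txn() { return Txn{next_txn_id++}; }

// Re-validation step after a yield (e.g. wal_sync): is the limbo's
// last entry still the transaction we snapshotted before yielding?
bool last_txn_unchanged(int64_t snapshot_id, const Txn &last_now)
{
	return snapshot_id == last_now.id;
}
```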
* Re: [Tarantool-patches] [PATCH v2 2/2] box: fix an assertion failure in box.ctl.promote()
2021-05-26 18:46 ` Vladislav Shpilevoy via Tarantool-patches
@ 2021-05-27 10:53 ` Serge Petrenko via Tarantool-patches
2021-05-27 11:03 ` Cyrill Gorcunov via Tarantool-patches
2021-05-27 19:30 ` Vladislav Shpilevoy via Tarantool-patches
0 siblings, 2 replies; 13+ messages in thread
From: Serge Petrenko via Tarantool-patches @ 2021-05-27 10:53 UTC (permalink / raw)
To: Vladislav Shpilevoy, Cyrill Gorcunov; +Cc: tarantool-patches
26.05.2021 21:46, Vladislav Shpilevoy wrote:
>>>> @@ -1618,14 +1618,29 @@ box_promote(void)
>>>> txn_limbo.owner_id);
>>>> return -1;
>>>> }
>>>> + if (txn_limbo_is_empty(&txn_limbo)) {
>>>> + wait_lsn = txn_limbo.confirmed_lsn;
>>>> + goto promote;
>>>> + }
>>>> }
>>>> - /*
>>>> - * promote() is a no-op on the limbo owner, so all the rows
>>>> - * in the limbo must've come through the applier meaning they already
>>>> - * have an lsn assigned, even if their WAL write hasn't finished yet.
>>>> - */
>>>> - wait_lsn = txn_limbo_last_synchro_entry(&txn_limbo)->lsn;
>>>> + struct txn_limbo_entry *last_entry;
>>>> + last_entry = txn_limbo_last_synchro_entry(&txn_limbo);
>>>> + /* Wait for the last entry's WAL write. */
>>>> + if (last_entry->lsn < 0) {
>>>> + if (wal_sync(NULL) < 0)
>>>> + return -1;
>>>> + if (txn_limbo_is_empty(&txn_limbo)) {
>>>> + wait_lsn = txn_limbo.confirmed_lsn;
>>>> + goto promote;
>>>> + }
>>>> + if (last_entry != txn_limbo_last_synchro_entry(&txn_limbo)) {
>>> This is a bit dangerous. We cache a pointer and then go to fiber_yield(),
>>> which switches context; at this moment the pointer becomes dangling,
>>> and we simply can't be sure whether it _was_ reused. IOW, Serge, are we
>>> 100% sure that the same pointer, with the same address but new data,
>>> won't appear here as the last entry in the limbo?
>> I agree this solution is not perfect.
>>
>> An alternative would be to do the following:
>> 1) Check that the limbo owner hasn't changed.
>> 2) Check that the last entry has a positive lsn (i.e. it's not a new entry which
>> wasn't yet written to WAL), and that this lsn is equal to the lsn of our entry.
>>
>> But what if our entry was confirmed and destroyed during wal_sync()? We can't compare
>> other entries' lsns with this one's.
> As decided in the chat, you can use txn->id. It is unique until
> restart and should help to detect if the last transaction has
> changed.
Yep, thanks for the suggestion!
Here's the diff:
=============================================================
diff --git a/src/box/box.cc b/src/box/box.cc
index 3d9cd0e57..3baae6afe 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1628,13 +1628,19 @@ box_promote(void)
last_entry = txn_limbo_last_synchro_entry(&txn_limbo);
 /* Wait for the last entry's WAL write. */
if (last_entry->lsn < 0) {
+ int64_t tid = last_entry->txn->id;
if (wal_sync(NULL) < 0)
return -1;
+ if (former_leader_id != txn_limbo.owner_id) {
+ diag_set(ClientError, ER_INTERFERING_PROMOTE,
+ txn_limbo.owner_id);
+ return -1;
+ }
if (txn_limbo_is_empty(&txn_limbo)) {
wait_lsn = txn_limbo.confirmed_lsn;
goto promote;
}
- if (last_entry != txn_limbo_last_synchro_entry(&txn_limbo)) {
+ if (tid != txn_limbo_last_synchro_entry(&txn_limbo)->txn->id) {
diag_set(ClientError, ER_QUORUM_WAIT, quorum,
"new synchronous transactions appeared");
return -1;
=============================================================
--
Serge Petrenko
* Re: [Tarantool-patches] [PATCH v2 1/2] box: refactor in_promote using a guard
2021-05-26 7:25 ` Cyrill Gorcunov via Tarantool-patches
@ 2021-05-27 10:57 ` Serge Petrenko via Tarantool-patches
2021-05-27 11:02 ` Cyrill Gorcunov via Tarantool-patches
0 siblings, 1 reply; 13+ messages in thread
From: Serge Petrenko via Tarantool-patches @ 2021-05-27 10:57 UTC (permalink / raw)
To: Cyrill Gorcunov; +Cc: v.shpilevoy, tarantool-patches
26.05.2021 10:25, Cyrill Gorcunov wrote:
> On Tue, May 25, 2021 at 01:39:28PM +0300, Serge Petrenko wrote:
>> ---
>> src/box/box.cc | 11 ++++-------
>> 1 file changed, 4 insertions(+), 7 deletions(-)
>>
>> diff --git a/src/box/box.cc b/src/box/box.cc
>> index c10e0d8bf..894e3d0f4 100644
>> --- a/src/box/box.cc
>> +++ b/src/box/box.cc
>> @@ -1562,6 +1562,9 @@ box_promote(void)
>> int rc = 0;
>> int quorum = replication_synchro_quorum;
>> in_promote = true;
>> + auto promote_guard = make_scoped_guard([&] {
>> + in_promote = false;
>> + });
> Looks ok to me, though I must confess I always find such
> flags spread all over the code somewhat clumsy. Since this
> is a common pattern in our cpp code, let's keep it, but still, in
> my humble opinion, we could rather move all the box_promote code
> into some box_do_promote helper, and we would have
>
> int
> box_promote(void)
> {
> static bool in_promote = false;
> if (in_promote) {
> diag_set(ClientError, ER_UNSUPPORTED, "box.ctl.promote",
> "simultaneous invocations");
> return -1;
> }
>
> in_promote = true;
> int rc = box_do_promote();
> in_promote = false;
>
> return rc;
> }
>
> but surely this is not a request for code refactoring, current form
> is ok as well ;)
>
> Ack.
>
> Serge, while you're at this code anyway, could you please use
>
> switch (box_election_mode) {
> case ELECTION_MODE_OFF:
> try_wait = true;
> break;
> ...
> default:
> panic("unexpected box_election_mode");
> break;
> }
>
> instead of the unreachable() call? We should stop using unreachable()
> as much as we can.
Thanks for the review!
I think this piece of code is unrelated to what I'm changing.
This commit was introduced to avoid multiple additions
of `in_promote = false;` in the next commit.
The change you propose is good, but it's out of scope of this patchset IMO.
I may apply it, though, if you insist.
--
Serge Petrenko
* Re: [Tarantool-patches] [PATCH v2 1/2] box: refactor in_promote using a guard
2021-05-27 10:57 ` Serge Petrenko via Tarantool-patches
@ 2021-05-27 11:02 ` Cyrill Gorcunov via Tarantool-patches
0 siblings, 0 replies; 13+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-05-27 11:02 UTC (permalink / raw)
To: Serge Petrenko; +Cc: v.shpilevoy, tarantool-patches
On Thu, May 27, 2021 at 01:57:19PM +0300, Serge Petrenko wrote:
>
> Thanks for the review!
> I think this piece of code is unrelated to what I'm changing.
> This commit was introduced to avoid multiple additions
> of `in_promote = false;` in the next commit.
>
> The change you propose is good, but it's out of scope of this patchset IMO.
> I may change apply it though if you insist.
OK, let's leave it as is. No need for more patching.
* Re: [Tarantool-patches] [PATCH v2 2/2] box: fix an assertion failure in box.ctl.promote()
2021-05-27 10:53 ` Serge Petrenko via Tarantool-patches
@ 2021-05-27 11:03 ` Cyrill Gorcunov via Tarantool-patches
2021-05-27 19:30 ` Vladislav Shpilevoy via Tarantool-patches
1 sibling, 0 replies; 13+ messages in thread
From: Cyrill Gorcunov via Tarantool-patches @ 2021-05-27 11:03 UTC (permalink / raw)
To: Serge Petrenko; +Cc: Vladislav Shpilevoy, tarantool-patches
On Thu, May 27, 2021 at 01:53:54PM +0300, Serge Petrenko wrote:
> > As decided in the chat, you can use txn->id. It is unique until
> > restart and should help to detect if the last transaction has
> > changed.
>
> Yep, thanks for the suggestion!
>
> Here's the diff:
>
Thanks! Looks OK to me. Let's wait for Vlad.
* Re: [Tarantool-patches] [PATCH v2 2/2] box: fix an assertion failure in box.ctl.promote()
2021-05-27 10:53 ` Serge Petrenko via Tarantool-patches
2021-05-27 11:03 ` Cyrill Gorcunov via Tarantool-patches
@ 2021-05-27 19:30 ` Vladislav Shpilevoy via Tarantool-patches
1 sibling, 0 replies; 13+ messages in thread
From: Vladislav Shpilevoy via Tarantool-patches @ 2021-05-27 19:30 UTC (permalink / raw)
To: Serge Petrenko, Cyrill Gorcunov; +Cc: tarantool-patches
Hi! Thanks for the fixes!
LGTM.
* Re: [Tarantool-patches] [PATCH v2 0/2] fix an assertion failure in box.ctl.promote()
2021-05-25 10:39 [Tarantool-patches] [PATCH v2 0/2] fix an assertion failure in box.ctl.promote() Serge Petrenko via Tarantool-patches
2021-05-25 10:39 ` [Tarantool-patches] [PATCH v2 1/2] box: refactor in_promote using a guard Serge Petrenko via Tarantool-patches
2021-05-25 10:39 ` [Tarantool-patches] [PATCH v2 2/2] box: fix an assertion failure in box.ctl.promote() Serge Petrenko via Tarantool-patches
@ 2021-06-01 12:20 ` Kirill Yukhin via Tarantool-patches
2 siblings, 0 replies; 13+ messages in thread
From: Kirill Yukhin via Tarantool-patches @ 2021-06-01 12:20 UTC (permalink / raw)
To: Serge Petrenko; +Cc: tarantool-patches, v.shpilevoy
Hello,
On 25 May 13:39, Serge Petrenko via Tarantool-patches wrote:
> https://github.com/tarantool/tarantool/tree/sp/gh-6032-promote-wal-write-v2
> https://github.com/tarantool/tarantool/issues/6032
>
> Changes in v2:
> - replaced custom waiting for a WAL write with
> a single wal_sync() call. This simplified the
> patch quite a bit.
> - shortened the test. It doesn't test promote()
> cancellation now, because wal_sync() isn't
> cancellable.
>
> Serge Petrenko (2):
> box: refactor in_promote using a guard
> box: fix an assertion failure in box.ctl.promote()
I've checked your patch set into 2.7, 2.8 and master.
--
Regards, Kirill Yukhin