From: Serge Petrenko via Tarantool-patches <tarantool-patches@dev.tarantool.org> To: v.shpilevoy@tarantool.org, gorcunov@gmail.com Cc: tarantool-patches@dev.tarantool.org Subject: [Tarantool-patches] [PATCH v2 3.5/7] applier: fix not releasing the latch on apply_synchro_row() fail Date: Sat, 27 Mar 2021 21:30:01 +0300 Message-ID: <5c6f77b1-5407-f0c9-e600-fef52862c0b4@tarantool.org> In-Reply-To: <b27076fa5d648a7bb40041dc3976ad1e63cdfb0c.1616588119.git.sergepetrenko@tarantool.org> Once apply_synchro_row() failed, applier_apply_tx() would simply raise an error without unlocking the replica latch. This led to all the appliers hanging indefinitely while trying to lock the latch for this replica. In scope of #5566 --- src/box/applier.cc | 4 +- test/replication/hang_on_synchro_fail.result | 130 ++++++++++++++++++ .../replication/hang_on_synchro_fail.test.lua | 57 ++++++++ test/replication/suite.cfg | 1 + test/replication/suite.ini | 2 +- 5 files changed, 191 insertions(+), 3 deletions(-) create mode 100644 test/replication/hang_on_synchro_fail.result create mode 100644 test/replication/hang_on_synchro_fail.test.lua diff --git a/src/box/applier.cc b/src/box/applier.cc index e6d9673dd..41abe64f9 100644 --- a/src/box/applier.cc +++ b/src/box/applier.cc @@ -1055,8 +1055,8 @@ applier_apply_tx(struct applier *applier, struct stailq *rows) * each other. */ assert(first_row == last_row); - if (apply_synchro_row(first_row) != 0) - diag_raise(); + if ((rc = apply_synchro_row(first_row)) != 0) + goto finish; } else if ((rc = apply_plain_tx(rows, replication_skip_conflict, true)) != 0) { goto finish; diff --git a/test/replication/hang_on_synchro_fail.result b/test/replication/hang_on_synchro_fail.result new file mode 100644 index 000000000..9f6fac00b --- /dev/null +++ b/test/replication/hang_on_synchro_fail.result @@ -0,0 +1,130 @@ +-- test-run result file version 2 +test_run = require('test_run').new() + | --- + | ... +fiber = require('fiber') + | --- + | ... 
+-- +-- All appliers could hang after failing to apply a synchronous message: either +-- CONFIRM or ROLLBACK. +-- +box.schema.user.grant('guest', 'replication') + | --- + | ... + +_ = box.schema.space.create('sync', {is_sync=true}) + | --- + | ... +_ = box.space.sync:create_index('pk') + | --- + | ... + +old_synchro_quorum = box.cfg.replication_synchro_quorum + | --- + | ... +box.cfg{replication_synchro_quorum=3} + | --- + | ... +-- A huge timeout so that we can perform some actions on a replica before +-- writing ROLLBACK. +old_synchro_timeout = box.cfg.replication_synchro_timeout + | --- + | ... +box.cfg{replication_synchro_timeout=1000} + | --- + | ... + +test_run:cmd('create server replica with rpl_master=default,\ + script="replication/replica.lua"') + | --- + | - true + | ... +test_run:cmd('start server replica') + | --- + | - true + | ... + +_ = fiber.new(box.space.sync.insert, box.space.sync, {1}) + | --- + | ... +test_run:wait_lsn('replica', 'default') + | --- + | ... + +test_run:switch('replica') + | --- + | - true + | ... + +box.error.injection.set('ERRINJ_WAL_IO', true) + | --- + | - ok + | ... + +test_run:switch('default') + | --- + | - true + | ... + +box.cfg{replication_synchro_timeout=0.01} + | --- + | ... + +test_run:switch('replica') + | --- + | - true + | ... + +test_run:wait_upstream(1, {status='stopped',\ + message_re='Failed to write to disk'}) + | --- + | - true + | ... +box.error.injection.set('ERRINJ_WAL_IO', false) + | --- + | - ok + | ... + +-- Applier is killed due to a failed WAL write, so restart replication to +-- check whether it hangs or not. Actually this single applier would fail an +-- assertion rather than hang, but all the other appliers, if any, would hang. +old_repl = box.cfg.replication + | --- + | ... +box.cfg{replication=""} + | --- + | ... +box.cfg{replication=old_repl} + | --- + | ... + +test_run:wait_upstream(1, {status='follow'}) + | --- + | - true + | ... + +-- Cleanup. 
+test_run:switch('default') + | --- + | - true + | ... +test_run:cmd('stop server replica') + | --- + | - true + | ... +test_run:cmd('delete server replica') + | --- + | - true + | ... +box.cfg{replication_synchro_quorum=old_synchro_quorum,\ + replication_synchro_timeout=old_synchro_timeout} + | --- + | ... +box.space.sync:drop() + | --- + | ... +box.schema.user.revoke('guest', 'replication') + | --- + | ... + diff --git a/test/replication/hang_on_synchro_fail.test.lua b/test/replication/hang_on_synchro_fail.test.lua new file mode 100644 index 000000000..6c3b09fab --- /dev/null +++ b/test/replication/hang_on_synchro_fail.test.lua @@ -0,0 +1,57 @@ +test_run = require('test_run').new() +fiber = require('fiber') +-- +-- All appliers could hang after failing to apply a synchronous message: either +-- CONFIRM or ROLLBACK. +-- +box.schema.user.grant('guest', 'replication') + +_ = box.schema.space.create('sync', {is_sync=true}) +_ = box.space.sync:create_index('pk') + +old_synchro_quorum = box.cfg.replication_synchro_quorum +box.cfg{replication_synchro_quorum=3} +-- A huge timeout so that we can perform some actions on a replica before +-- writing ROLLBACK. +old_synchro_timeout = box.cfg.replication_synchro_timeout +box.cfg{replication_synchro_timeout=1000} + +test_run:cmd('create server replica with rpl_master=default,\ + script="replication/replica.lua"') +test_run:cmd('start server replica') + +_ = fiber.new(box.space.sync.insert, box.space.sync, {1}) +test_run:wait_lsn('replica', 'default') + +test_run:switch('replica') + +box.error.injection.set('ERRINJ_WAL_IO', true) + +test_run:switch('default') + +box.cfg{replication_synchro_timeout=0.01} + +test_run:switch('replica') + +test_run:wait_upstream(1, {status='stopped',\ + message_re='Failed to write to disk'}) +box.error.injection.set('ERRINJ_WAL_IO', false) + +-- Applier is killed due to a failed WAL write, so restart replication to +-- check whether it hangs or not. 
Actually this single applier would fail an +-- assertion rather than hang, but all the other appliers, if any, would hang. +old_repl = box.cfg.replication +box.cfg{replication=""} +box.cfg{replication=old_repl} + +test_run:wait_upstream(1, {status='follow'}) + +-- Cleanup. +test_run:switch('default') +test_run:cmd('stop server replica') +test_run:cmd('delete server replica') +box.cfg{replication_synchro_quorum=old_synchro_quorum,\ + replication_synchro_timeout=old_synchro_timeout} +box.space.sync:drop() +box.schema.user.revoke('guest', 'replication') + diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg index 7e7004592..c1c329438 100644 --- a/test/replication/suite.cfg +++ b/test/replication/suite.cfg @@ -22,6 +22,7 @@ "status.test.lua": {}, "wal_off.test.lua": {}, "hot_standby.test.lua": {}, + "hang_on_synchro_fail.test.lua": {}, "rebootstrap.test.lua": {}, "wal_rw_stress.test.lua": {}, "force_recovery.test.lua": {}, diff --git a/test/replication/suite.ini b/test/replication/suite.ini index dcd711a2a..fc161700a 100644 --- a/test/replication/suite.ini +++ b/test/replication/suite.ini @@ -3,7 +3,7 @@ core = tarantool script = master.lua description = tarantool/box, replication disabled = consistent.test.lua -release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua +release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua 
gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua config = suite.cfg lua_libs = lua/fast_replica.lua lua/rlimit.lua use_unix_sockets = True -- 2.24.3 (Apple Git-128)
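
For clarity, the control flow the one-line fix establishes can be sketched as follows. This is an illustrative stand-alone sketch, not the real Tarantool code: a plain flag plays the role of the per-replica latch, and the stub names (latch_locked, apply_synchro_row_stub, applier_apply_tx_sketch) are hypothetical. The point is that the synchro-row failure now funnels through the same finish label as apply_plain_tx() failures, so the latch is released on every path instead of being leaked by a direct diag_raise().

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the per-replica latch: a simple flag. */
static bool latch_locked = false;

static void latch_lock(void)   { latch_locked = true; }
static void latch_unlock(void) { latch_locked = false; }

/* Stub for apply_synchro_row(): fails when 'fail' is set, like a
 * WAL write error on a CONFIRM/ROLLBACK entry. */
static int apply_synchro_row_stub(bool fail)
{
    return fail ? -1 : 0;
}

/* Sketch of the fixed control flow in applier_apply_tx(): every
 * error path goes through 'finish', so the latch taken at the top
 * is always released before the function returns. */
static int applier_apply_tx_sketch(bool synchro_fails)
{
    int rc = 0;
    latch_lock();
    if ((rc = apply_synchro_row_stub(synchro_fails)) != 0)
        goto finish; /* was: diag_raise(), which skipped the unlock */
    /* ... apply the rest of the transaction ... */
finish:
    latch_unlock();
    return rc;
}
```

Before the fix, the failing branch raised directly out of the function while the latch was still held, which is exactly the state the other appliers then blocked on.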