From: Cyrill Gorcunov via Tarantool-patches
Reply-To: Cyrill Gorcunov
To: tml
Cc: Vladislav Shpilevoy
Date: Fri, 4 Mar 2022 09:01:05 +0300
Subject: [Tarantool-patches] [PATCH v32 2/2] qsync: order access to the limbo terms
Message-Id: <20220304060105.1186867-3-gorcunov@gmail.com>
In-Reply-To: <20220304060105.1186867-1-gorcunov@gmail.com>
References: <20220304060105.1186867-1-gorcunov@gmail.com>
X-Mailer: git-send-email 2.35.1

Limbo terms tracking is shared between appliers: while one applier
is waiting for a write to complete inside the journal_write()
routine, another may need to read the term value to figure out
whether a promote request is valid to apply. Due to cooperative
multitasking, access to the terms is not consistent, so we need to
make sure that other fibers read up-to-date terms (i.e. ones
already written to the WAL).

For this we use a latching mechanism: when one fiber takes the lock
for an update, other readers wait until the operation is complete.

For example, here is a call graph of two appliers:

 applier 1
 ---------
 applier_apply_tx
 (promote term = 3
  current max term = 2)
 applier_synchro_filter_tx
 apply_synchro_row
 journal_write (sleeping)

At this moment another applier comes in with obsolete data and
term 2:

 applier 2
 ---------
 applier_apply_tx (term 2)
 applier_synchro_filter_tx
 txn_limbo_is_replica_outdated -> false
 journal_write (sleep)

 applier 1
 ---------
 journal wakes up
 apply_synchro_row_cb
 set max term to 3

So applier 2 did not notice that term 3 had already been seen and
wrote obsolete data. With locking, applier 2 will wait until
applier 1 has finished its write.

We introduce the following helpers:

1) txn_limbo_begin, which takes the lock;
2) txn_limbo_commit and txn_limbo_rollback, which simply release
   the lock but have different names for better semantics;
3) txn_limbo_process, a general function which uses the x_begin
   and x_commit helpers internally;
4) txn_limbo_apply, which does the real job of processing the
   request and implies that txn_limbo_begin has been called.

Testing such an in-flight condition is not easy, so we introduce
the "box.info.synchro.queue.busy" field, which reports whether the
limbo is currently latched and processing a synchronous request.
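To make the intended nesting of these helpers easier to see, here
is a minimal sketch of the write path (illustrative only:
write_synchro_request() is a hypothetical name, and in the actual
patch below txn_limbo_apply() runs from the journal completion
callback, see apply_synchro_req_cb()):

/*
 * Sketch: serialize term updates against term readers. Any fiber
 * calling txn_limbo_is_replica_outdated() blocks on the same
 * promote_latch until the WAL write below has finished.
 */
static int
write_synchro_request(struct txn_limbo *limbo,
		      const struct synchro_request *req,
		      struct journal_entry *entry)
{
	txn_limbo_begin(limbo);		/* take promote_latch */
	if (journal_write(entry) != 0 || entry->res < 0) {
		/* WAL write failed: unlock, apply nothing. */
		txn_limbo_rollback(limbo);
		return -1;
	}
	txn_limbo_apply(limbo, req);	/* update terms under the latch */
	txn_limbo_commit(limbo);	/* unlock */
	return 0;
}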
@TarantoolBot document
Title: synchronous replication changes

`box.info.synchro.queue` gets a new `busy` field. It is set to
`true` when a synchronous transaction is being processed but is
not yet complete. Any other incoming synchronous transactions
will be delayed until the active one is finished.
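For illustration, a console session or a test could poll the new
field like this (a hypothetical snippet; it relies only on the
`busy` field added by this patch and the standard `fiber` module):

local fiber = require('fiber')

-- Wait until the currently latched synchro request (e.g. a
-- PROMOTE being written to the WAL) has been fully processed.
while box.info.synchro.queue.busy do
    fiber.sleep(0.01)
end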
Part-of #6036

Acked-by: Serge Petrenko
Signed-off-by: Cyrill Gorcunov
---
 .../unreleased/gh-6036-applier-fix-race.md    |   5 +
 src/box/applier.cc                            |   6 +-
 src/box/box.cc                                |   8 +-
 src/box/lua/info.c                            |   4 +-
 src/box/txn_limbo.c                           |  13 +-
 src/box/txn_limbo.h                           |  44 +++-
 .../gh_6036_qsync_order_test.lua              | 194 ++++++++++++++++++
 test/replication-luatest/suite.ini            |   1 +
 8 files changed, 265 insertions(+), 10 deletions(-)
 create mode 100644 changelogs/unreleased/gh-6036-applier-fix-race.md
 create mode 100644 test/replication-luatest/gh_6036_qsync_order_test.lua

diff --git a/changelogs/unreleased/gh-6036-applier-fix-race.md b/changelogs/unreleased/gh-6036-applier-fix-race.md
new file mode 100644
index 000000000..98f65447c
--- /dev/null
+++ b/changelogs/unreleased/gh-6036-applier-fix-race.md
@@ -0,0 +1,5 @@
+## bugfix/replication
+
+* Fixed a potential write of obsolete data in synchronous
+  replication due to a race in accessing terms while a disk write
+  operation is in progress and not yet completed.
diff --git a/src/box/applier.cc b/src/box/applier.cc
index 0ac7ac3c7..99484eea2 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -959,7 +959,7 @@ apply_synchro_req_cb(struct journal_entry *entry)
 		applier_rollback_by_wal_io(entry->res);
 	} else {
 		replica_txn_wal_write_cb(synchro_entry->rcb);
-		txn_limbo_process(&txn_limbo, synchro_entry->req);
+		txn_limbo_apply(&txn_limbo, synchro_entry->req);
 		trigger_run(&replicaset.applier.on_wal_write, NULL);
 	}
 	fiber_wakeup(synchro_entry->owner);
@@ -1004,14 +1004,18 @@ apply_synchro_req(uint32_t replica_id, struct xrow_header *row, struct synchro_r
 	 * before trying to commit. But that requires extra steps from the
 	 * transactions side, including the async ones.
 	 */
+	txn_limbo_begin(&txn_limbo);
 	if (journal_write(&entry.base) != 0)
 		goto err;
 	if (entry.base.res < 0) {
 		diag_set_journal_res(entry.base.res);
 		goto err;
 	}
+	txn_limbo_commit(&txn_limbo);
 	return 0;
+
 err:
+	txn_limbo_rollback(&txn_limbo);
 	diag_log();
 	return -1;
 }
diff --git a/src/box/box.cc b/src/box/box.cc
index 6a33203df..fd34ddb0c 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1780,6 +1780,7 @@ box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn)
 	struct raft *raft = box_raft();
 	assert(raft->volatile_term == raft->term);
 	assert(promote_lsn >= 0);
+	txn_limbo_begin(&txn_limbo);
 	txn_limbo_write_promote(&txn_limbo, promote_lsn,
 				raft->term);
 	struct synchro_request req = {
@@ -1789,7 +1790,8 @@ box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn)
 		.lsn = promote_lsn,
 		.term = raft->term,
 	};
-	txn_limbo_process(&txn_limbo, &req);
+	txn_limbo_apply(&txn_limbo, &req);
+	txn_limbo_commit(&txn_limbo);
 	assert(txn_limbo_is_empty(&txn_limbo));
 }
 
@@ -1802,6 +1804,7 @@ box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn)
 {
 	assert(box_raft()->volatile_term == box_raft()->term);
 	assert(promote_lsn >= 0);
+	txn_limbo_begin(&txn_limbo);
 	txn_limbo_write_demote(&txn_limbo, promote_lsn,
 			       box_raft()->term);
 	struct synchro_request req = {
@@ -1811,7 +1814,8 @@ box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn)
 		.lsn = promote_lsn,
 		.term = box_raft()->term,
 	};
-	txn_limbo_process(&txn_limbo, &req);
+	txn_limbo_apply(&txn_limbo, &req);
+	txn_limbo_commit(&txn_limbo);
 	assert(txn_limbo_is_empty(&txn_limbo));
 }
 
diff --git a/src/box/lua/info.c b/src/box/lua/info.c
index b75a301eb..8d7bb1d4a 100644
--- a/src/box/lua/info.c
+++ b/src/box/lua/info.c
@@ -637,11 +637,13 @@ lbox_info_synchro(struct lua_State *L)
 
 	/* Queue information. */
 	struct txn_limbo *queue = &txn_limbo;
-	lua_createtable(L, 0, 2);
+	lua_createtable(L, 0, 3);
 	lua_pushnumber(L, queue->len);
 	lua_setfield(L, -2, "len");
 	lua_pushnumber(L, queue->owner_id);
 	lua_setfield(L, -2, "owner");
+	lua_pushboolean(L, latch_is_locked(&queue->promote_latch));
+	lua_setfield(L, -2, "busy");
 	lua_setfield(L, -2, "queue");
 
 	return 1;
diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 70447caaf..5e708a951 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -47,6 +47,7 @@ txn_limbo_create(struct txn_limbo *limbo)
 	vclock_create(&limbo->vclock);
 	vclock_create(&limbo->promote_term_map);
 	limbo->promote_greatest_term = 0;
+	latch_create(&limbo->promote_latch);
 	limbo->confirmed_lsn = 0;
 	limbo->rollback_count = 0;
 	limbo->is_in_rollback = false;
@@ -724,8 +725,10 @@ txn_limbo_wait_empty(struct txn_limbo *limbo, double timeout)
 }
 
 void
-txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
+txn_limbo_apply(struct txn_limbo *limbo, const struct synchro_request *req)
 {
+	assert(latch_is_locked(&limbo->promote_latch));
+
 	uint64_t term = req->term;
 	uint32_t origin = req->origin_id;
 	if (txn_limbo_replica_term(limbo, origin) < term) {
@@ -786,6 +789,14 @@ txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
 	return;
 }
 
+void
+txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
+{
+	txn_limbo_begin(limbo);
+	txn_limbo_apply(limbo, req);
+	txn_limbo_commit(limbo);
+}
+
 void
 txn_limbo_on_parameters_change(struct txn_limbo *limbo)
 {
diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h
index 53e52f676..e2ac75c08 100644
--- a/src/box/txn_limbo.h
+++ b/src/box/txn_limbo.h
@@ -31,6 +31,7 @@
  */
 #include "small/rlist.h"
 #include "vclock/vclock.h"
+#include "latch.h"
 
 #include
 
@@ -147,6 +148,10 @@ struct txn_limbo {
 	 * limbo and raft are in sync and the terms are the same.
	 */
 	uint64_t promote_greatest_term;
+	/**
+	 * To order access to the promote data.
+	 */
+	struct latch promote_latch;
 	/**
 	 * Maximal LSN gathered quorum and either already confirmed in WAL, or
 	 * whose confirmation is in progress right now. Any attempt to confirm
@@ -226,11 +231,13 @@ txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
  * data from it. The check is only valid when elections are enabled.
  */
 static inline bool
-txn_limbo_is_replica_outdated(const struct txn_limbo *limbo,
-			      uint32_t replica_id)
+txn_limbo_is_replica_outdated(struct txn_limbo *limbo, uint32_t replica_id)
 {
-	return txn_limbo_replica_term(limbo, replica_id) <
-	       limbo->promote_greatest_term;
+	latch_lock(&limbo->promote_latch);
+	uint64_t v = vclock_get(&limbo->promote_term_map, replica_id);
+	bool res = v < limbo->promote_greatest_term;
+	latch_unlock(&limbo->promote_latch);
+	return res;
 }
 
 /**
@@ -300,7 +307,34 @@ txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn);
 int
 txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry);
 
-/** Execute a synchronous replication request. */
+/**
+ * Initiate execution of a synchronous replication request.
+ */
+static inline void
+txn_limbo_begin(struct txn_limbo *limbo)
+{
+	latch_lock(&limbo->promote_latch);
+}
+
+/** Commit a synchronous replication request. */
+static inline void
+txn_limbo_commit(struct txn_limbo *limbo)
+{
+	latch_unlock(&limbo->promote_latch);
+}
+
+/** Rollback a synchronous replication request. */
+static inline void
+txn_limbo_rollback(struct txn_limbo *limbo)
+{
+	latch_unlock(&limbo->promote_latch);
+}
+
+/** Apply a synchronous replication request after the processing stage. */
+void
+txn_limbo_apply(struct txn_limbo *limbo, const struct synchro_request *req);
+
+/** Process a synchronous replication request. */
 void
 txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req);
 
diff --git a/test/replication-luatest/gh_6036_qsync_order_test.lua b/test/replication-luatest/gh_6036_qsync_order_test.lua
new file mode 100644
index 000000000..8f0bb204f
--- /dev/null
+++ b/test/replication-luatest/gh_6036_qsync_order_test.lua
@@ -0,0 +1,194 @@
+local t = require('luatest')
+local cluster = require('test.luatest_helpers.cluster')
+local server = require('test.luatest_helpers.server')
+local fiber = require('fiber')
+
+local g = t.group('gh-6036')
+
+g.before_all(function(cg)
+    cg.cluster = cluster:new({})
+
+    cg.box_cfg = {
+        replication = {
+            server.build_instance_uri('r1'),
+            server.build_instance_uri('r2'),
+        },
+        replication_timeout = 0.1,
+        replication_connect_quorum = 1,
+        election_mode = 'manual',
+        election_timeout = 0.1,
+        replication_synchro_quorum = 1,
+        replication_synchro_timeout = 0.1,
+        log_level = 6,
+    }
+
+    cg.r1 = cg.cluster:build_and_add_server({ alias = 'r1', box_cfg = cg.box_cfg })
+    cg.r2 = cg.cluster:build_and_add_server({ alias = 'r2', box_cfg = cg.box_cfg })
+    cg.cluster:start()
+end)
+
+g.after_all(function(cg)
+    cg.cluster:drop()
+    cg.cluster.servers = nil
+end)
+
+local function update_replication(...)
+    return (box.cfg{ replication = { ... } })
+end
+
+--
+-- The test requires a 3rd replica to join in.
+g.before_test("test_qsync_order", function(cg)
+    cg.box_cfg.replication[3] = server.build_instance_uri("r3")
+    cg.r3 = cg.cluster:build_and_add_server({ alias = 'r3', box_cfg = cg.box_cfg })
+    cg.r3:start()
+    cg.r1:exec(update_replication, cg.box_cfg.replication)
+    cg.r2:exec(update_replication, cg.box_cfg.replication)
+end)
+
+g.test_qsync_order = function(cg)
+    cg.cluster:wait_fullmesh()
+
+    --
+    -- Create a synchro space on the r1 node and make
+    -- sure the write is processed just fine.
+    cg.r1:exec(function()
+        box.ctl.promote()
+        box.ctl.wait_rw()
+        local s = box.schema.create_space('test', {is_sync = true})
+        s:create_index('pk')
+        s:insert{1}
+    end)
+
+    local vclock = cg.r1:get_vclock()
+    vclock[0] = nil
+    cg.r2:wait_vclock(vclock)
+    cg.r3:wait_vclock(vclock)
+
+    t.assert_equals(cg.r1:exec(function() return box.space.test:select() end), {{1}})
+    t.assert_equals(cg.r2:exec(function() return box.space.test:select() end), {{1}})
+    t.assert_equals(cg.r3:exec(function() return box.space.test:select() end), {{1}})
+
+    --
+    -- Drop connection between r1 and r2.
+    cg.r1:exec(update_replication, {
+        server.build_instance_uri("r1"),
+        server.build_instance_uri("r3"),
+    })
+
+    --
+    -- Drop connection between r2 and r1.
+    cg.r2:exec(update_replication, {
+        server.build_instance_uri("r2"),
+        server.build_instance_uri("r3"),
+    })
+
+    --
+    -- Here we have the following scheme:
+    --
+    --      r3 (WAL delay)
+    --      /            \
+    --    r1              r2
+    --
+
+    --
+    -- Initiate the disk delay in a slightly tricky way: the next
+    -- write will fall asleep forever.
+    cg.r3:exec(function()
+        box.error.injection.set('ERRINJ_WAL_DELAY', true)
+    end)
+
+    --
+    -- Make r2 the leader and start writing data. The PROMOTE
+    -- request gets queued on r3 and is not yet processed; at the
+    -- same time the INSERT won't complete either, since it waits
+    -- for the PROMOTE completion first. Note that we also query
Note that we enter r3 as well just to be + -- sure the PROMOTE has reached it via queue state test. + cg.r2:exec(function() + box.ctl.promote() + box.ctl.wait_rw() + end) + t.helpers.retrying({}, function() + t.assert(cg.r3:exec(function() + return box.info.synchro.queue.busy == true + end)) + end) + cg.r2:exec(function() + box.space.test:insert{2} + end) + + -- + -- The r1 node has no clue that there is a new leader and continue + -- writing data with obsolete term. Since r3 is delayed now + -- the INSERT won't proceed yet but get queued. + cg.r1:exec(function() + box.space.test:insert{3} + end) + + -- + -- Finally enable r3 back. Make sure the data from new r2 leader get + -- writing while old leader's data ignored. + cg.r3:exec(function() + box.error.injection.set('ERRINJ_WAL_DELAY', false) + end) + t.helpers.retrying({}, function() + t.assert(cg.r3:exec(function() + return box.space.test:get{2} ~= nil + end)) + end) + + t.assert_equals(cg.r3:exec(function() return box.space.test:select() end), {{1},{2}}) +end + +-- +-- Drop the r3 replica, since it is no longer needed for this test. +g.after_test("test_qsync_order", function(cg) + cg.box_cfg.replication[3] = nil + cg.r1:exec(update_replication, cg.box_cfg.replication) + cg.r2:exec(update_replication, cg.box_cfg.replication) + cg.r3:stop() + cg.r3:cleanup() + cg.r3 = nil +end) + +g.test_promote_order = function(cg) + -- + -- Make sure that while we're processing PROMOTE no other records + -- get sneaked in via applier code from other replicas. For this + -- sake initiate voting and stop inside wal thread just before + -- PROMOTE get written. Another replica sends us new record and + -- it should be dropped. + cg.r1:exec(function() + box.ctl.promote() + box.ctl.wait_rw() + end) + local vclock = cg.r1:get_vclock() + vclock[0] = nil + cg.r2:wait_vclock(vclock) + + -- + -- Drop connection between r1 and the rest of the cluster. + -- Otherwise r1 might become Raft follower before attempting + -- insert{4}. + cg.r1:exec(function() box.cfg{replication=""} end) + cg.r2:exec(function() + box.error.injection.set('ERRINJ_WAL_DELAY_COUNTDOWN', 2) + require('fiber').create(function() box.ctl.promote() end) + end) + t.helpers.retrying({}, function() + t.assert(cg.r2:exec(function() + return box.info.synchro.queue.busy + end)) + end) + t.assert(cg.r1:exec(function() return box.info.ro == false end)) + cg.r1:exec(function() + box.space.test:insert{4} + end) + cg.r2:exec(function() + require('luatest').assert(box.info.synchro.queue.busy == true) + box.error.injection.set('ERRINJ_WAL_DELAY', false) + box.ctl.wait_rw() + end) + + t.assert_equals(cg.r2:exec(function() return box.space.test:select() end), {{1},{2}}) +end diff --git a/test/replication-luatest/suite.ini b/test/replication-luatest/suite.ini index 374f1b87a..07ec93a52 100644 --- a/test/replication-luatest/suite.ini +++ b/test/replication-luatest/suite.ini @@ -2,3 +2,4 @@ core = luatest description = replication luatests is_parallel = True +release_disabled = gh_6036_qsync_order_test.lua -- 2.35.1