To: tml
Date: Fri, 15 Oct 2021 00:56:21 +0300
Message-Id: <20211014215622.49732-3-gorcunov@gmail.com>
In-Reply-To: <20211014215622.49732-1-gorcunov@gmail.com>
References: <20211014215622.49732-1-gorcunov@gmail.com>
Subject: [Tarantool-patches] [PATCH v23 2/3] qsync: order access to the limbo terms
From: Cyrill Gorcunov via Tarantool-patches
Reply-To: Cyrill Gorcunov

Limbo terms tracking is shared between appliers: while one applier is
waiting for a write to complete inside the journal_write() routine,
another may need to read the term value to figure out whether a
promote request is valid to apply.
Due to cooperative multitasking, access to the terms is not serialized,
so we need to be sure that other fibers read up-to-date terms (i.e.
terms already written to the WAL). For this sake we use a latching
mechanism: when one fiber takes the lock for an update, other readers
wait until the operation is complete. For example, here is a call graph
of two appliers:

 applier 1
 ---------
 applier_apply_tx
   (promote term = 3
    current max term = 2)
   applier_synchro_filter_tx
   apply_synchro_row
     journal_write
     (sleeping)

 at this moment another applier comes in with obsolete data and term 2

 applier 2
 ---------
 applier_apply_tx
   (term 2)
   applier_synchro_filter_tx
     txn_limbo_is_replica_outdated -> false
   journal_write (sleep)

 applier 1
 ---------
 journal wakes up
   apply_synchro_row_cb
     set max term to 3

So applier 2 did not notice that term 3 had already been seen and wrote
obsolete data. With locking, applier 2 waits until applier 1 has
finished its write.

Also Serge Petrenko pointed out that we have a somewhat similar
situation with txn_limbo_ack() [we might try to write a confirm on an
entry while a new promote is in flight and not yet applied, so the
confirm might be invalid] and txn_limbo_on_parameters_change() [where
we might confirm entries after reducing the quorum number while we are
not even the limbo owner]. Thus we need to fix these problems as well.

We introduce the following helpers:

1) txn_limbo_process_begin: takes the lock;
2) txn_limbo_process_commit and txn_limbo_process_rollback: simply
   release the lock but have different names for better semantics;
3) txn_limbo_process: a general function which uses the x_begin and
   x_commit helpers internally;
4) txn_limbo_process_core: does the real job of processing the request;
   it assumes that txn_limbo_process_begin has been called;
5) txn_limbo_ack() and txn_limbo_on_parameters_change(): both respect
   the current limbo owner via the promote latch.

Part-of #6036

Signed-off-by: Cyrill Gorcunov
---
 src/box/applier.cc             | 12 +++--
 src/box/box.cc                 | 15 +++---
 src/box/relay.cc               | 11 ++---
 src/box/txn.c                  |  2 +-
 src/box/txn_limbo.c            | 49 +++++++++++++++++--
 src/box/txn_limbo.h            | 87 +++++++++++++++++++++++++++++++---
 test/unit/snap_quorum_delay.cc |  5 +-
 7 files changed, 151 insertions(+), 30 deletions(-)

diff --git a/src/box/applier.cc b/src/box/applier.cc
index b981bd436..46c36e259 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -857,7 +857,7 @@ apply_synchro_row_cb(struct journal_entry *entry)
 		applier_rollback_by_wal_io(entry->res);
 	} else {
 		replica_txn_wal_write_cb(synchro_entry->rcb);
-		txn_limbo_process(&txn_limbo, synchro_entry->req);
+		txn_limbo_process_core(&txn_limbo, synchro_entry->req);
 		trigger_run(&replicaset.applier.on_wal_write, NULL);
 	}
 	fiber_wakeup(synchro_entry->owner);
@@ -873,6 +873,8 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
 	if (xrow_decode_synchro(row, &req) != 0)
 		goto err;
 
+	txn_limbo_process_begin(&txn_limbo);
+
 	struct replica_cb_data rcb_data;
 	struct synchro_entry entry;
 	/*
@@ -910,12 +912,16 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
 	 * transactions side, including the async ones.
 	 */
 	if (journal_write(&entry.base) != 0)
-		goto err;
+		goto err_rollback;
 	if (entry.base.res < 0) {
 		diag_set_journal_res(entry.base.res);
-		goto err;
+		goto err_rollback;
 	}
+	txn_limbo_process_commit(&txn_limbo);
 	return 0;
+
+err_rollback:
+	txn_limbo_process_rollback(&txn_limbo);
 err:
 	diag_log();
 	return -1;
diff --git a/src/box/box.cc b/src/box/box.cc
index e082e1a3d..6a9be745a 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1677,8 +1677,6 @@ box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn)
 	struct raft *raft = box_raft();
 	assert(raft->volatile_term == raft->term);
 	assert(promote_lsn >= 0);
-	txn_limbo_write_promote(&txn_limbo, promote_lsn,
-				raft->term);
 	struct synchro_request req = {
 		.type = IPROTO_RAFT_PROMOTE,
 		.replica_id = prev_leader_id,
@@ -1686,8 +1684,11 @@ box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn)
 		.lsn = promote_lsn,
 		.term = raft->term,
 	};
-	txn_limbo_process(&txn_limbo, &req);
+	txn_limbo_process_begin(&txn_limbo);
+	txn_limbo_write_promote(&txn_limbo, req.lsn, req.term);
+	txn_limbo_process_core(&txn_limbo, &req);
 	assert(txn_limbo_is_empty(&txn_limbo));
+	txn_limbo_process_commit(&txn_limbo);
 }
 
 /** A guard to block multiple simultaneous promote()/demote() invocations. */
@@ -1699,8 +1700,6 @@ box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn)
 {
 	assert(box_raft()->volatile_term == box_raft()->term);
 	assert(promote_lsn >= 0);
-	txn_limbo_write_demote(&txn_limbo, promote_lsn,
-			       box_raft()->term);
 	struct synchro_request req = {
 		.type = IPROTO_RAFT_DEMOTE,
 		.replica_id = prev_leader_id,
@@ -1708,8 +1707,12 @@ box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn)
 		.lsn = promote_lsn,
 		.term = box_raft()->term,
 	};
-	txn_limbo_process(&txn_limbo, &req);
+	txn_limbo_process_begin(&txn_limbo);
+	txn_limbo_write_demote(&txn_limbo, promote_lsn,
+			       box_raft()->term);
+	txn_limbo_process_core(&txn_limbo, &req);
 	assert(txn_limbo_is_empty(&txn_limbo));
+	txn_limbo_process_commit(&txn_limbo);
 }
 
 int
diff --git a/src/box/relay.cc b/src/box/relay.cc
index f5852df7b..61ef1e3a5 100644
--- a/src/box/relay.cc
+++ b/src/box/relay.cc
@@ -545,15 +545,10 @@ tx_status_update(struct cmsg *msg)
 	ack.vclock = &status->vclock;
 	/*
 	 * Let pending synchronous transactions know, which of
-	 * them were successfully sent to the replica. Acks are
-	 * collected only by the transactions originator (which is
-	 * the single master in 100% so far). Other instances wait
-	 * for master's CONFIRM message instead.
+	 * them were successfully sent to the replica.
 	 */
-	if (txn_limbo.owner_id == instance_id) {
-		txn_limbo_ack(&txn_limbo, ack.source,
-			      vclock_get(ack.vclock, instance_id));
-	}
+	txn_limbo_ack(&txn_limbo, instance_id, ack.source,
+		      vclock_get(ack.vclock, instance_id));
 	trigger_run(&replicaset.on_ack, &ack);
 
 	static const struct cmsg_hop route[] = {
diff --git a/src/box/txn.c b/src/box/txn.c
index e7fc81683..06bb85a09 100644
--- a/src/box/txn.c
+++ b/src/box/txn.c
@@ -939,7 +939,7 @@ txn_commit(struct txn *txn)
 		txn_limbo_assign_local_lsn(&txn_limbo, limbo_entry,
 					   lsn);
 		/* Local WAL write is a first 'ACK'. */
-		txn_limbo_ack(&txn_limbo, txn_limbo.owner_id, lsn);
+		txn_limbo_ack_self(&txn_limbo, lsn);
 	}
 	if (txn_limbo_wait_complete(&txn_limbo, limbo_entry) < 0)
 		goto rollback;
diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 70447caaf..9b643072a 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -47,6 +47,7 @@ txn_limbo_create(struct txn_limbo *limbo)
 	vclock_create(&limbo->vclock);
 	vclock_create(&limbo->promote_term_map);
 	limbo->promote_greatest_term = 0;
+	latch_create(&limbo->promote_latch);
 	limbo->confirmed_lsn = 0;
 	limbo->rollback_count = 0;
 	limbo->is_in_rollback = false;
@@ -542,10 +543,30 @@ txn_limbo_read_demote(struct txn_limbo *limbo, int64_t lsn)
 }
 
 void
-txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn)
+txn_limbo_ack(struct txn_limbo *limbo, uint32_t owner_id,
+	      uint32_t replica_id, int64_t lsn)
 {
+	/*
+	 * ACKs are collected only by the transactions originator
+	 * (which is the single master in 100% so far). Other instances
+	 * wait for master's CONFIRM message instead.
+	 *
+	 * Due to cooperative multitasking there might be limbo owner
+	 * migration in-fly (while writing data to journal), so for
+	 * simplicity sake the test for owner is done here instead
+	 * of putting this check to the callers.
+	 */
+	if (!txn_limbo_is_owner(limbo, owner_id))
+		return;
+
+	/*
+	 * Test for empty queue is done _after_ txn_limbo_is_owner
+	 * call because we need to be sure that limbo is not been
+	 * changed under our feet while we're reading it.
+	 */
 	if (rlist_empty(&limbo->queue))
 		return;
+
 	/*
 	 * If limbo is currently writing a rollback, it means that the whole
 	 * queue will be rolled back. Because rollback is written only for
@@ -724,11 +745,14 @@ txn_limbo_wait_empty(struct txn_limbo *limbo, double timeout)
 }
 
 void
-txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
+txn_limbo_process_core(struct txn_limbo *limbo,
+		       const struct synchro_request *req)
 {
+	assert(latch_is_locked(&limbo->promote_latch));
+
 	uint64_t term = req->term;
 	uint32_t origin = req->origin_id;
-	if (txn_limbo_replica_term(limbo, origin) < term) {
+	if (vclock_get(&limbo->promote_term_map, origin) < (int64_t)term) {
 		vclock_follow(&limbo->promote_term_map, origin, term);
 		if (term > limbo->promote_greatest_term)
 			limbo->promote_greatest_term = term;
@@ -786,11 +810,30 @@ txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
 	return;
 }
 
+void
+txn_limbo_process(struct txn_limbo *limbo,
+		  const struct synchro_request *req)
+{
+	txn_limbo_process_begin(limbo);
+	txn_limbo_process_core(limbo, req);
+	txn_limbo_process_commit(limbo);
+}
+
 void
 txn_limbo_on_parameters_change(struct txn_limbo *limbo)
 {
+	/*
+	 * In case if we're not current leader (ie not owning the
+	 * limbo) then we should not confirm anything, otherwise
+	 * we could reduce quorum number and start writing CONFIRM
+	 * while leader node carries own maybe bigger quorum value.
+	 */
+	if (!txn_limbo_is_owner(limbo, instance_id))
+		return;
+
 	if (rlist_empty(&limbo->queue))
 		return;
+
 	struct txn_limbo_entry *e;
 	int64_t confirm_lsn = -1;
 	rlist_foreach_entry(e, &limbo->queue, in_queue) {
diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h
index 53e52f676..0bbd7a1c3 100644
--- a/src/box/txn_limbo.h
+++ b/src/box/txn_limbo.h
@@ -31,6 +31,7 @@
  */
 #include "small/rlist.h"
 #include "vclock/vclock.h"
+#include "latch.h"
 
 #include 
 
@@ -147,6 +148,10 @@ struct txn_limbo {
 	 * limbo and raft are in sync and the terms are the same.
 	 */
 	uint64_t promote_greatest_term;
+	/**
+	 * To order access to the promote data.
+	 */
+	struct latch promote_latch;
 	/**
 	 * Maximal LSN gathered quorum and either already confirmed in WAL, or
 	 * whose confirmation is in progress right now. Any attempt to confirm
@@ -194,6 +199,32 @@ txn_limbo_is_empty(struct txn_limbo *limbo)
 	return rlist_empty(&limbo->queue);
 }
 
+/**
+ * Test if the \a owner_id is current limbo owner.
+ */
+static inline bool
+txn_limbo_is_owner(struct txn_limbo *limbo, uint32_t owner_id)
+{
+	/*
+	 * A guard needed to prevent race with in-fly promote
+	 * packets which are sitting inside journal but not yet
+	 * written.
+	 *
+	 * Note that this test supports nesting calling, where
+	 * the same fiber does the test on already taken lock
+	 * (for example the recovery journal engine does so when
+	 * it rolls back a transaction and updates replication
+	 * number causing a nested test for limbo ownership).
+	 */
+	if (latch_owner(&limbo->promote_latch) == fiber())
+		return limbo->owner_id == owner_id;
+
+	latch_lock(&limbo->promote_latch);
+	bool v = limbo->owner_id == owner_id;
+	latch_unlock(&limbo->promote_latch);
+	return v;
+}
+
 bool
 txn_limbo_is_ro(struct txn_limbo *limbo);
 
@@ -216,9 +247,12 @@ txn_limbo_last_entry(struct txn_limbo *limbo)
  * @a replica_id.
  */
 static inline uint64_t
-txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
+txn_limbo_replica_term(struct txn_limbo *limbo, uint32_t replica_id)
 {
-	return vclock_get(&limbo->promote_term_map, replica_id);
+	latch_lock(&limbo->promote_latch);
+	uint64_t v = vclock_get(&limbo->promote_term_map, replica_id);
+	latch_unlock(&limbo->promote_latch);
+	return v;
 }
 
 /**
@@ -226,11 +260,14 @@ txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
  * data from it. The check is only valid when elections are enabled.
 */
 static inline bool
-txn_limbo_is_replica_outdated(const struct txn_limbo *limbo,
+txn_limbo_is_replica_outdated(struct txn_limbo *limbo,
 			      uint32_t replica_id)
 {
-	return txn_limbo_replica_term(limbo, replica_id) <
-	       limbo->promote_greatest_term;
+	latch_lock(&limbo->promote_latch);
+	uint64_t v = vclock_get(&limbo->promote_term_map, replica_id);
+	bool res = v < limbo->promote_greatest_term;
+	latch_unlock(&limbo->promote_latch);
+	return res;
 }
 
 /**
@@ -287,7 +324,15 @@ txn_limbo_assign_lsn(struct txn_limbo *limbo, struct txn_limbo_entry *entry,
  * replica with the specified ID.
 */
 void
-txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn);
+txn_limbo_ack(struct txn_limbo *limbo, uint32_t owner_id,
+	      uint32_t replica_id, int64_t lsn);
+
+static inline void
+txn_limbo_ack_self(struct txn_limbo *limbo, int64_t lsn)
+{
+	return txn_limbo_ack(limbo, limbo->owner_id,
+			     limbo->owner_id, lsn);
+}
 
 /**
  * Block the current fiber until the transaction in the limbo
@@ -300,7 +345,35 @@ txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn);
 int
 txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry);
 
-/** Execute a synchronous replication request. */
+/**
+ * Initiate execution of a synchronous replication request.
+ */
+static inline void
+txn_limbo_process_begin(struct txn_limbo *limbo)
+{
+	latch_lock(&limbo->promote_latch);
+}
+
+/** Commit a synchronous replication request. */
+static inline void
+txn_limbo_process_commit(struct txn_limbo *limbo)
+{
+	latch_unlock(&limbo->promote_latch);
+}
+
+/** Rollback a synchronous replication request. */
+static inline void
+txn_limbo_process_rollback(struct txn_limbo *limbo)
+{
+	latch_unlock(&limbo->promote_latch);
+}
+
+/** Core of processing synchronous replication request. */
+void
+txn_limbo_process_core(struct txn_limbo *limbo,
+		       const struct synchro_request *req);
+
+/** Process a synchronous replication request. */
 void
 txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req);
 
diff --git a/test/unit/snap_quorum_delay.cc b/test/unit/snap_quorum_delay.cc
index 803bbbea8..d43b4cd2c 100644
--- a/test/unit/snap_quorum_delay.cc
+++ b/test/unit/snap_quorum_delay.cc
@@ -130,7 +130,7 @@ txn_process_func(va_list ap)
 	}
 
 	txn_limbo_assign_local_lsn(&txn_limbo, entry, fake_lsn);
-	txn_limbo_ack(&txn_limbo, txn_limbo.owner_id, fake_lsn);
+	txn_limbo_ack_self(&txn_limbo, fake_lsn);
 	txn_limbo_wait_complete(&txn_limbo, entry);
 
 	switch (process_type) {
@@ -157,7 +157,8 @@ txn_confirm_func(va_list ap)
 	 * inside gc_checkpoint().
 	 */
 	fiber_sleep(0);
-	txn_limbo_ack(&txn_limbo, relay_id, fake_lsn);
+	txn_limbo_ack(&txn_limbo, txn_limbo.owner_id,
+		      relay_id, fake_lsn);
 	return 0;
 }
 
-- 
2.31.1
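
[Editor's note] For reference, here is a condensed sketch of the calling
pattern the new helpers are meant to establish. It is modeled on the
apply_synchro_row() changes above; the function name and the simplified
error handling are illustrative only (the synchro_entry setup and the
replica callback wiring are omitted), so treat it as a reading aid for
the patch, not as code from it:

	/* Sketch: ordering a synchro request against concurrent readers. */
	static int
	apply_synchro_request_sketch(struct journal_entry *entry)
	{
		/* Take the promote latch; term readers now wait on it. */
		txn_limbo_process_begin(&txn_limbo);
		/*
		 * journal_write() may yield. Other appliers calling
		 * txn_limbo_is_replica_outdated() block on the latch
		 * instead of reading a stale, not-yet-written term.
		 */
		if (journal_write(entry) != 0 || entry->res < 0) {
			/* Nothing reached the WAL: just drop the latch. */
			txn_limbo_process_rollback(&txn_limbo);
			return -1;
		}
		/*
		 * The journal completion callback has already applied the
		 * request via txn_limbo_process_core() under the latch.
		 */
		txn_limbo_process_commit(&txn_limbo);
		return 0;
	}

The same begin/commit pairing is used by box_issue_promote() and
box_issue_demote(), while txn_limbo_process() wraps the three steps for
callers that do not need to write to the journal in between.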