From: Cyrill Gorcunov via Tarantool-patches
Reply-To: Cyrill Gorcunov
To: tml
Cc: Vladislav Shpilevoy
Date: Mon, 26 Jul 2021 18:34:50 +0300
Message-Id: <20210726153452.113897-5-gorcunov@gmail.com>
In-Reply-To: <20210726153452.113897-1-gorcunov@gmail.com>
References: <20210726153452.113897-1-gorcunov@gmail.com>
Subject: [Tarantool-patches] [PATCH v8 4/6] limbo: order access to the limbo terms
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Limbo terms tracking is shared between appliers: while one applier is
waiting for a write to complete inside the journal_write() routine,
another may need to read the term value to figure out whether a promote
request is valid to apply.

Due to cooperative multitasking, access to the terms is not serialized,
so we have to make sure that other fibers read only up-to-date terms,
i.e. ones already written to the WAL. For this we use a latch: while one
fiber holds the terms lock for an update, other readers wait until the
operation is complete. For example, here is a call graph of two
appliers:

applier 1
---------
applier_apply_tx
  (promote term = 3
   current max term = 2)
  applier_synchro_filter_tx
  apply_synchro_row
    journal_write (sleeping)

At this moment another applier comes in with obsolete data and term 2:

applier 2
---------
applier_apply_tx
  (term 2)
  applier_synchro_filter_tx
  txn_limbo_is_replica_outdated -> false
  journal_write (sleep)

applier 1
---------
journal wakes up
  apply_synchro_row_cb
  set max term to 3

So applier 2 did not notice that term 3 had already been seen and wrote
obsolete data. With locking, applier 2 waits until applier 1 has
finished its write.
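
To make the intended ordering easier to follow, here is a condensed
sketch of how the helpers introduced by this patch are meant to
interact (txn_limbo_terms_lock/unlock, txn_limbo_term and
txn_limbo_process_locked are defined in the diff below; the snippet is
an illustration only, not part of the change):

    /* Writer (apply_synchro_row): hold the latch across the WAL write. */
    txn_limbo_terms_lock(&txn_limbo);
    if (journal_write(&entry.base) != 0)
            goto err_unlock;        /* drop the latch on the error path */
    /*
     * The journal completion callback calls txn_limbo_process_locked()
     * while the latch is still held, updating terms_map and terms_max.
     */
    txn_limbo_terms_unlock(&txn_limbo);

    /*
     * Reader (another applier filtering a request): waits on the same
     * latch until the writer above is done.
     */
    uint64_t term = txn_limbo_term(&txn_limbo, replica_id);

This way a reader cannot run its outdated-replica check against a stale
maximum term while a newer PROMOTE is still being written: it simply
waits for the writer to finish.
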
Part-of #6036

Signed-off-by: Cyrill Gorcunov
---
 src/box/applier.cc  | 10 ++++--
 src/box/box.cc      | 16 ++++------
 src/box/txn_limbo.c | 16 ++++++++--
 src/box/txn_limbo.h | 74 ++++++++++++++++++++++++++++++++++++++++++---
 4 files changed, 96 insertions(+), 20 deletions(-)

diff --git a/src/box/applier.cc b/src/box/applier.cc
index f621fa657..b5c3a7b67 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -856,7 +856,7 @@ apply_synchro_row_cb(struct journal_entry *entry)
 		applier_rollback_by_wal_io(entry->res);
 	} else {
 		replica_txn_wal_write_cb(synchro_entry->rcb);
-		txn_limbo_process(&txn_limbo, synchro_entry->req);
+		txn_limbo_process_locked(&txn_limbo, synchro_entry->req);
 		trigger_run(&replicaset.applier.on_wal_write, NULL);
 	}
 	fiber_wakeup(synchro_entry->owner);
@@ -872,6 +872,7 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
 	if (xrow_decode_synchro(row, &req) != 0)
 		goto err;
 
+	txn_limbo_terms_lock(&txn_limbo);
 	struct replica_cb_data rcb_data;
 	struct synchro_entry entry;
 	/*
@@ -909,12 +910,15 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
 	 * transactions side, including the async ones.
 	 */
 	if (journal_write(&entry.base) != 0)
-		goto err;
+		goto err_unlock;
 	if (entry.base.res < 0) {
 		diag_set_journal_res(entry.base.res);
-		goto err;
+		goto err_unlock;
 	}
+	txn_limbo_terms_unlock(&txn_limbo);
 	return 0;
+err_unlock:
+	txn_limbo_terms_unlock(&txn_limbo);
 err:
 	diag_log();
 	return -1;
diff --git a/src/box/box.cc b/src/box/box.cc
index b356508f0..395f0d9ef 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1565,8 +1565,7 @@ box_run_elections(void)
 static int
 box_check_promote_term_intact(uint64_t promote_term)
 {
-	const struct txn_limbo_terms *tr = &txn_limbo.terms;
-	if (tr->terms_max != promote_term) {
+	if (txn_limbo_terms_max_raw(&txn_limbo) != promote_term) {
 		diag_set(ClientError, ER_INTERFERING_PROMOTE,
 			 txn_limbo.owner_id);
 		return -1;
@@ -1578,8 +1577,7 @@ box_check_promote_term_intact(uint64_t promote_term)
 static int
 box_trigger_elections(void)
 {
-	const struct txn_limbo_terms *tr = &txn_limbo.terms;
-	uint64_t promote_term = tr->terms_max;
+	uint64_t promote_term = txn_limbo_terms_max_raw(&txn_limbo);
 	raft_new_term(box_raft());
 	if (box_raft_wait_term_persisted() < 0)
 		return -1;
@@ -1590,8 +1588,7 @@ box_trigger_elections(void)
 static int
 box_try_wait_confirm(double timeout)
 {
-	const struct txn_limbo_terms *tr = &txn_limbo.terms;
-	uint64_t promote_term = tr->terms_max;
+	uint64_t promote_term = txn_limbo_terms_max_raw(&txn_limbo);
 	txn_limbo_wait_empty(&txn_limbo, timeout);
 	return box_check_promote_term_intact(promote_term);
 }
@@ -1607,8 +1604,7 @@ box_wait_limbo_acked(void)
 	if (txn_limbo_is_empty(&txn_limbo))
 		return txn_limbo.confirmed_lsn;
 
-	const struct txn_limbo_terms *tr = &txn_limbo.terms;
-	uint64_t promote_term = tr->terms_max;
+	uint64_t promote_term = txn_limbo_terms_max_raw(&txn_limbo);
 	int quorum = replication_synchro_quorum;
 	struct txn_limbo_entry *last_entry;
 	last_entry = txn_limbo_last_synchro_entry(&txn_limbo);
@@ -1724,7 +1720,7 @@ box_promote(void)
 	 * Currently active leader (the instance that is seen as leader by both
 	 * raft and txn_limbo) can't issue another PROMOTE.
 	 */
-	bool is_leader = txn_limbo_replica_term(&txn_limbo, instance_id) ==
+	bool is_leader = txn_limbo_term(&txn_limbo, instance_id) ==
 			 raft->term && txn_limbo.owner_id == instance_id;
 	if (box_election_mode != ELECTION_MODE_OFF)
 		is_leader = is_leader && raft->state == RAFT_STATE_LEADER;
@@ -1780,7 +1776,7 @@ box_demote(void)
 		return 0;
 
 	/* Currently active leader is the only one who can issue a DEMOTE. */
-	bool is_leader = txn_limbo_replica_term(&txn_limbo, instance_id) ==
+	bool is_leader = txn_limbo_term(&txn_limbo, instance_id) ==
 			 box_raft()->term && txn_limbo.owner_id == instance_id;
 	if (box_election_mode != ELECTION_MODE_OFF)
 		is_leader = is_leader && box_raft()->state == RAFT_STATE_LEADER;
diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 53c86f34e..5f43f575c 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -40,6 +40,7 @@ struct txn_limbo txn_limbo;
 static void
 txn_limbo_terms_create(struct txn_limbo_terms *tr)
 {
+	latch_create(&tr->latch);
 	vclock_create(&tr->terms_map);
 	tr->terms_max = 0;
 }
@@ -731,12 +732,14 @@ txn_limbo_wait_empty(struct txn_limbo *limbo, double timeout)
 }
 
 void
-txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
+txn_limbo_process_locked(struct txn_limbo *limbo,
+			 const struct synchro_request *req)
 {
 	struct txn_limbo_terms *tr = &limbo->terms;
 	uint64_t term = req->term;
 	uint32_t origin = req->origin_id;
-	if (txn_limbo_replica_term(limbo, origin) < term) {
+
+	if (txn_limbo_term_locked(limbo, origin) < term) {
 		vclock_follow(&tr->terms_map, origin, term);
 		if (term > tr->terms_max)
 			tr->terms_max = term;
@@ -794,6 +797,15 @@ txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
 	return;
 }
 
+void
+txn_limbo_process(struct txn_limbo *limbo,
+		  const struct synchro_request *req)
+{
+	txn_limbo_terms_lock(limbo);
+	txn_limbo_process_locked(limbo, req);
+	txn_limbo_terms_unlock(limbo);
+}
+
 void
 txn_limbo_on_parameters_change(struct txn_limbo *limbo)
 {
diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h
index dc980bf7c..45687381f 100644
--- a/src/box/txn_limbo.h
+++ b/src/box/txn_limbo.h
@@ -31,6 +31,7 @@
  */
 #include "small/rlist.h"
 #include "vclock/vclock.h"
+#include "latch.h"
 
 #include <stdint.h>
 
@@ -80,6 +81,10 @@ txn_limbo_entry_is_complete(const struct txn_limbo_entry *e)
  * situation and other errors.
  */
 struct txn_limbo_terms {
+	/**
+	 * To order access to the promote data.
+	 */
+	struct latch latch;
 	/**
 	 * Latest terms received with PROMOTE entries from remote instances.
 	 * Limbo uses them to filter out the transactions coming not from the
@@ -222,15 +227,66 @@ txn_limbo_last_entry(struct txn_limbo *limbo)
 			in_queue);
 }
 
+/** Lock promote data. */
+static inline void
+txn_limbo_terms_lock(struct txn_limbo *limbo)
+{
+	struct txn_limbo_terms *tr = &limbo->terms;
+	latch_lock(&tr->latch);
+}
+
+/** Unlock promote data. */
+static inline void
+txn_limbo_terms_unlock(struct txn_limbo *limbo)
+{
+	struct txn_limbo_terms *tr = &limbo->terms;
+	latch_unlock(&tr->latch);
+}
+
+/** Test if promote data is locked. */
+static inline bool
+txn_limbo_terms_is_locked(const struct txn_limbo *limbo)
+{
+	const struct txn_limbo_terms *tr = &limbo->terms;
+	return latch_is_locked(&tr->latch);
+}
+
+/** Fetch replica's term with lock taken. */
+static inline uint64_t
+txn_limbo_term_locked(struct txn_limbo *limbo, uint32_t replica_id)
+{
+	const struct txn_limbo_terms *tr = &limbo->terms;
+	panic_on(!txn_limbo_terms_is_locked(limbo),
+		 "limbo: unlocked term read for replica %u",
+		 replica_id);
+	return vclock_get(&tr->terms_map, replica_id);
+}
+
 /**
  * Return the latest term as seen in PROMOTE requests from instance with id
  * @a replica_id.
  */
 static inline uint64_t
-txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
+txn_limbo_term(struct txn_limbo *limbo, uint32_t replica_id)
+{
+	txn_limbo_terms_lock(limbo);
+	uint64_t v = txn_limbo_term_locked(limbo, replica_id);
+	txn_limbo_terms_unlock(limbo);
+	return v;
+}
+
+/**
+ * Fiber-yield unsafe read of @a terms_max.
+ *
+ * Use it only if you need the current value and are
+ * prepared for it to be updated by another fiber if
+ * a yield happens after the read.
+ */
+static inline uint64_t
+txn_limbo_terms_max_raw(struct txn_limbo *limbo)
 {
 	const struct txn_limbo_terms *tr = &limbo->terms;
-	return vclock_get(&tr->terms_map, replica_id);
+	return tr->terms_max;
 }
 
 /**
@@ -238,12 +294,15 @@ txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
  * data from it. The check is only valid when elections are enabled.
  */
 static inline bool
-txn_limbo_is_replica_outdated(const struct txn_limbo *limbo,
+txn_limbo_is_replica_outdated(struct txn_limbo *limbo,
 			      uint32_t replica_id)
 {
 	const struct txn_limbo_terms *tr = &limbo->terms;
-	return txn_limbo_replica_term(limbo, replica_id) <
-	       tr->terms_max;
+	txn_limbo_terms_lock(limbo);
+	bool res = txn_limbo_term_locked(limbo, replica_id) <
+		   tr->terms_max;
+	txn_limbo_terms_unlock(limbo);
+	return res;
 }
 
 /**
@@ -315,6 +374,11 @@ txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry);
 
 /** Execute a synchronous replication request. */
 void
+txn_limbo_process_locked(struct txn_limbo *limbo,
+			 const struct synchro_request *req);
+
+/** Lock limbo terms and execute a synchronous replication request. */
+void
 txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req);
 
 /**
-- 
2.31.1