To: tml
Date: Wed, 4 Aug 2021 22:07:50 +0300
Message-Id: <20210804190752.488147-3-gorcunov@gmail.com>
In-Reply-To: <20210804190752.488147-1-gorcunov@gmail.com>
References: <20210804190752.488147-1-gorcunov@gmail.com>
Subject: [Tarantool-patches] [PATCH v10 2/4] limbo: order access to the limbo terms
From: Cyrill Gorcunov via Tarantool-patches
Reply-To: Cyrill Gorcunov
Cc: Vladislav Shpilevoy

Limbo terms tracking is shared between appliers: while one applier is
waiting for a write to complete inside the journal_write() routine,
another may need to read the term value to figure out whether a promote
request is valid to apply. Due to cooperative multitasking, access to
the terms is not serialized, so we must make sure that other fibers
read only up-to-date terms (i.e. ones already written to the WAL). For
this sake we use a latching mechanism: while one fiber holds the terms
lock for updating, other readers wait until the operation is complete.
For example, here is a call graph of two appliers:

applier 1
---------
applier_apply_tx
  (promote term = 3
   current max term = 2)
  applier_synchro_filter_tx
  apply_synchro_row
    journal_write (sleeping)

At this moment another applier comes in with obsolete data and term 2:

applier 2
---------
applier_apply_tx
  (term 2)
  applier_synchro_filter_tx
    txn_limbo_is_replica_outdated -> false
  journal_write (sleep)

applier 1
---------
journal wakes up
  apply_synchro_row_cb
    set max term to 3

So applier 2 did not notice that term 3 had already been seen and wrote
obsolete data. With locking, applier 2 will wait until applier 1 has
finished its write.

Part-of #6036

Signed-off-by: Cyrill Gorcunov
---
 src/box/applier.cc  |  8 ++++++--
 src/box/txn_limbo.c | 17 ++++++++++++++--
 src/box/txn_limbo.h | 48 ++++++++++++++++++++++++++++++++++++++++-----
 3 files changed, 64 insertions(+), 9 deletions(-)

diff --git a/src/box/applier.cc b/src/box/applier.cc
index f621fa657..9db286ae2 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -856,7 +856,7 @@ apply_synchro_row_cb(struct journal_entry *entry)
 		applier_rollback_by_wal_io(entry->res);
 	} else {
 		replica_txn_wal_write_cb(synchro_entry->rcb);
-		txn_limbo_process(&txn_limbo, synchro_entry->req);
+		txn_limbo_process_locked(&txn_limbo, synchro_entry->req);
 		trigger_run(&replicaset.applier.on_wal_write, NULL);
 	}
 	fiber_wakeup(synchro_entry->owner);
@@ -867,11 +867,13 @@ static int
 apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
 {
 	assert(iproto_type_is_synchro_request(row->type));
+	int rc = 0;
 
 	struct synchro_request req;
 	if (xrow_decode_synchro(row, &req) != 0)
 		goto err;
 
+	txn_limbo_term_lock(&txn_limbo);
 	struct replica_cb_data rcb_data;
 	struct synchro_entry entry;
 	/*
@@ -908,7 +910,9 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
 	 * before trying to commit. But that requires extra steps from the
 	 * transactions side, including the async ones.
 	 */
-	if (journal_write(&entry.base) != 0)
+	rc = journal_write(&entry.base);
+	txn_limbo_term_unlock(&txn_limbo);
+	if (rc != 0)
 		goto err;
 
 	if (entry.base.res < 0) {
 		diag_set_journal_res(entry.base.res);
diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 570f77c46..a718c55a2 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -47,6 +47,7 @@ txn_limbo_create(struct txn_limbo *limbo)
 	vclock_create(&limbo->vclock);
 	vclock_create(&limbo->promote_term_map);
 	limbo->promote_greatest_term = 0;
+	latch_create(&limbo->promote_latch);
 	limbo->confirmed_lsn = 0;
 	limbo->rollback_count = 0;
 	limbo->is_in_rollback = false;
@@ -724,11 +725,14 @@ txn_limbo_wait_empty(struct txn_limbo *limbo, double timeout)
 }
 
 void
-txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
+txn_limbo_process_locked(struct txn_limbo *limbo,
+			 const struct synchro_request *req)
 {
+	assert(latch_is_locked(&limbo->promote_latch));
+
 	uint64_t term = req->term;
 	uint32_t origin = req->origin_id;
-	if (txn_limbo_replica_term(limbo, origin) < term) {
+	if (txn_limbo_replica_term_locked(limbo, origin) < term) {
 		vclock_follow(&limbo->promote_term_map, origin, term);
 		if (term > limbo->promote_greatest_term)
 			limbo->promote_greatest_term = term;
@@ -786,6 +790,15 @@ txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
 	return;
 }
 
+void
+txn_limbo_process(struct txn_limbo *limbo,
+		  const struct synchro_request *req)
+{
+	txn_limbo_term_lock(limbo);
+	txn_limbo_process_locked(limbo, req);
+	txn_limbo_term_unlock(limbo);
+}
+
 void
 txn_limbo_on_parameters_change(struct txn_limbo *limbo)
 {
diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h
index 53e52f676..c77c501e9 100644
--- a/src/box/txn_limbo.h
+++ b/src/box/txn_limbo.h
@@ -31,6 +31,7 @@
  */
 #include "small/rlist.h"
 #include "vclock/vclock.h"
+#include "latch.h"
 
 #include
 
@@ -147,6 +148,10 @@ struct txn_limbo {
 	 * limbo and raft are in sync and the terms are the same.
 	 */
 	uint64_t promote_greatest_term;
+	/**
+	 * To order access to the promote data.
+	 */
+	struct latch promote_latch;
 	/**
 	 * Maximal LSN gathered quorum and either already confirmed in WAL, or
 	 * whose confirmation is in progress right now. Any attempt to confirm
@@ -211,14 +216,39 @@ txn_limbo_last_entry(struct txn_limbo *limbo)
 			     in_queue);
 }
 
+/** Lock promote data. */
+static inline void
+txn_limbo_term_lock(struct txn_limbo *limbo)
+{
+	latch_lock(&limbo->promote_latch);
+}
+
+/** Unlock promote data. */
+static inline void
+txn_limbo_term_unlock(struct txn_limbo *limbo)
+{
+	latch_unlock(&limbo->promote_latch);
+}
+
+/** Fetch replica's term with lock taken. */
+static inline uint64_t
+txn_limbo_replica_term_locked(struct txn_limbo *limbo, uint32_t replica_id)
+{
+	assert(latch_is_locked(&limbo->promote_latch));
+	return vclock_get(&limbo->promote_term_map, replica_id);
+}
+
 /**
  * Return the latest term as seen in PROMOTE requests from instance with id
  * @a replica_id.
  */
 static inline uint64_t
-txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
+txn_limbo_replica_term(struct txn_limbo *limbo, uint32_t replica_id)
 {
-	return vclock_get(&limbo->promote_term_map, replica_id);
+	txn_limbo_term_lock(limbo);
+	uint64_t v = txn_limbo_replica_term_locked(limbo, replica_id);
+	txn_limbo_term_unlock(limbo);
+	return v;
 }
 
 /**
@@ -226,11 +256,14 @@ txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
  * data from it. The check is only valid when elections are enabled.
  */
 static inline bool
-txn_limbo_is_replica_outdated(const struct txn_limbo *limbo,
+txn_limbo_is_replica_outdated(struct txn_limbo *limbo,
 			      uint32_t replica_id)
 {
-	return txn_limbo_replica_term(limbo, replica_id) <
-	       limbo->promote_greatest_term;
+	txn_limbo_term_lock(limbo);
+	bool res = txn_limbo_replica_term_locked(limbo, replica_id) <
+		   limbo->promote_greatest_term;
+	txn_limbo_term_unlock(limbo);
+	return res;
 }
 
 /**
@@ -302,6 +335,11 @@ txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry);
 
 /** Execute a synchronous replication request. */
 void
+txn_limbo_process_locked(struct txn_limbo *limbo,
+			 const struct synchro_request *req);
+
+/** Lock limbo terms and execute a synchronous replication request. */
+void
 txn_limbo_process(struct txn_limbo *limbo,
 		  const struct synchro_request *req);
 
 /**
-- 
2.31.1