From: Cyrill Gorcunov via Tarantool-patches
Reply-To: Cyrill Gorcunov
To: tml
Cc: Vladislav Shpilevoy
Date: Thu, 30 Dec 2021 23:23:46 +0300
Message-Id: <20211230202347.353494-3-gorcunov@gmail.com>
In-Reply-To: <20211230202347.353494-1-gorcunov@gmail.com>
References: <20211230202347.353494-1-gorcunov@gmail.com>
Subject: [Tarantool-patches] [PATCH v27 2/3] qsync: order access to the limbo terms

Limbo term tracking is shared between appliers: while one applier
is waiting for a write to complete inside the journal_write()
routine, another may need to read the term value to figure out
whether a promote request is valid to apply.
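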
Due to cooperative multitasking, access to the terms is not
consistent, so we have to make sure that other fibers read
up-to-date terms (i.e. ones already written to the WAL).

For this we use a latch: when one fiber takes the lock for an
update, other readers wait until the operation is complete.

For example, here is a call graph of two appliers:

applier 1
---------
applier_apply_tx
  (promote term = 3
   current max term = 2)
  applier_synchro_filter_tx
  apply_synchro_row
    journal_write (sleeping)

At this moment another applier comes in with obsolete data and
term 2:

                        applier 2
                        ---------
                        applier_apply_tx
                          (term 2)
                          applier_synchro_filter_tx
                            txn_limbo_is_replica_outdated -> false
                          journal_write (sleep)

applier 1
---------
journal wakes up
  apply_synchro_row_cb
    set max term to 3

So applier 2 did not notice that term 3 had already been seen and
wrote obsolete data. With locking, applier 2 waits until applier 1
has finished its write.

We introduce the following helpers:

1) txn_limbo_begin: takes the lock;
2) txn_limbo_commit and txn_limbo_rollback: simply release the
   lock, but carry different names for better semantics;
3) txn_limbo_process: a general function which uses the begin and
   commit helpers internally;
4) txn_limbo_apply: does the real job of processing the request;
   it implies that txn_limbo_begin has been called.

The intended use of the helpers is sketched after the commit
message below.

Testing such an in-flight condition is not easy, so we introduce
the "box.info.synchro.queue.waiters" field, which reports the
current number of fibers waiting for the limbo to finish
processing a request.

@TarantoolBot document
Title: synchronous replication changes

`box.info.synchro.queue` gets a new field: `waiters`. It reports
the current number of fibers waiting for the synchronous
transaction processing to complete.

Part-of #6036

Signed-off-by: Cyrill Gorcunov
---
A rough sketch (not the literal patch code: the hypothetical name
apply_synchro_row_sketch, the trimmed entry setup, and the
condensed error handling are all simplifications) of how the new
helpers wrap the synchro row WAL write in apply_synchro_row():

	/* Hypothetical condensed version of apply_synchro_row(). */
	static int
	apply_synchro_row_sketch(struct journal_entry *entry)
	{
		/* Take promote_latch; readers now wait for us. */
		txn_limbo_begin(&txn_limbo);
		if (journal_write(entry) != 0 || entry->res < 0) {
			/* WAL write failed: unlock, terms untouched. */
			txn_limbo_rollback(&txn_limbo);
			return -1;
		}
		/*
		 * On success the journal completion callback has
		 * already run txn_limbo_apply() under this latch.
		 */
		txn_limbo_commit(&txn_limbo);
		return 0;
	}

The point of the split is that the term update (txn_limbo_apply)
happens strictly between begin and commit, so a reader can never
observe a term that has not reached the WAL yet.

 src/box/applier.cc  | 12 ++++++++---
 src/box/lua/info.c  |  4 +++-
 src/box/txn_limbo.c | 18 ++++++++++++++--
 src/box/txn_limbo.h | 52 ++++++++++++++++++++++++++++++++++++++++-----
 4 files changed, 75 insertions(+), 11 deletions(-)

diff --git a/src/box/applier.cc b/src/box/applier.cc
index dd66ef3fa..f09b346a0 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -866,7 +866,7 @@ apply_synchro_row_cb(struct journal_entry *entry)
 		applier_rollback_by_wal_io(entry->res);
 	} else {
 		replica_txn_wal_write_cb(synchro_entry->rcb);
-		txn_limbo_process(&txn_limbo, synchro_entry->req);
+		txn_limbo_apply(&txn_limbo, synchro_entry->req);
 		trigger_run(&replicaset.applier.on_wal_write, NULL);
 	}
 	fiber_wakeup(synchro_entry->owner);
@@ -882,6 +882,8 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
 	if (xrow_decode_synchro(row, &req) != 0)
 		goto err;
 
+	txn_limbo_begin(&txn_limbo);
+
 	struct replica_cb_data rcb_data;
 	struct synchro_entry entry;
 	/*
@@ -919,12 +921,16 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
 	 * transactions side, including the async ones.
 	 */
 	if (journal_write(&entry.base) != 0)
-		goto err;
+		goto err_rollback;
 	if (entry.base.res < 0) {
 		diag_set_journal_res(entry.base.res);
-		goto err;
+		goto err_rollback;
 	}
+	txn_limbo_commit(&txn_limbo);
 	return 0;
+
+err_rollback:
+	txn_limbo_rollback(&txn_limbo);
 err:
 	diag_log();
 	return -1;

diff --git a/src/box/lua/info.c b/src/box/lua/info.c
index 8e02f6594..274b0b047 100644
--- a/src/box/lua/info.c
+++ b/src/box/lua/info.c
@@ -637,11 +637,13 @@ lbox_info_synchro(struct lua_State *L)
 
 	/* Queue information. */
 	struct txn_limbo *queue = &txn_limbo;
-	lua_createtable(L, 0, 2);
+	lua_createtable(L, 0, 3);
 	lua_pushnumber(L, queue->len);
 	lua_setfield(L, -2, "len");
 	lua_pushnumber(L, queue->owner_id);
 	lua_setfield(L, -2, "owner");
+	lua_pushnumber(L, queue->promote_latch_cnt);
+	lua_setfield(L, -2, "waiters");
 	lua_setfield(L, -2, "queue");
 
 	return 1;

diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 70447caaf..5eb6ee61d 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -47,6 +47,8 @@ txn_limbo_create(struct txn_limbo *limbo)
 	vclock_create(&limbo->vclock);
 	vclock_create(&limbo->promote_term_map);
 	limbo->promote_greatest_term = 0;
+	latch_create(&limbo->promote_latch);
+	limbo->promote_latch_cnt = 0;
 	limbo->confirmed_lsn = 0;
 	limbo->rollback_count = 0;
 	limbo->is_in_rollback = false;
@@ -724,11 +726,14 @@ txn_limbo_wait_empty(struct txn_limbo *limbo, double timeout)
 }
 
 void
-txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
+txn_limbo_apply(struct txn_limbo *limbo,
+		const struct synchro_request *req)
 {
+	assert(latch_is_locked(&limbo->promote_latch));
+
 	uint64_t term = req->term;
 	uint32_t origin = req->origin_id;
-	if (txn_limbo_replica_term(limbo, origin) < term) {
+	if (vclock_get(&limbo->promote_term_map, origin) < (int64_t)term) {
 		vclock_follow(&limbo->promote_term_map, origin, term);
 		if (term > limbo->promote_greatest_term)
 			limbo->promote_greatest_term = term;
@@ -786,6 +791,15 @@ txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
 	return;
 }
 
+void
+txn_limbo_process(struct txn_limbo *limbo,
+		  const struct synchro_request *req)
+{
+	txn_limbo_begin(limbo);
+	txn_limbo_apply(limbo, req);
+	txn_limbo_commit(limbo);
+}
+
 void
 txn_limbo_on_parameters_change(struct txn_limbo *limbo)
 {

diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h
index 53e52f676..42d572595 100644
--- a/src/box/txn_limbo.h
+++ b/src/box/txn_limbo.h
@@ -31,6 +31,7 @@
  */
 #include "small/rlist.h"
 #include "vclock/vclock.h"
+#include "latch.h"
 
 #include <stdint.h>
 
@@ -147,6 +148,14 @@ struct txn_limbo {
 	 * limbo and raft are in sync and the terms are the same.
 	 */
 	uint64_t promote_greatest_term;
+	/**
+	 * To order access to the promote data.
+	 */
+	struct latch promote_latch;
+	/**
+	 * Latch owners/waiters stat.
+	 */
+	uint64_t promote_latch_cnt;
 	/**
 	 * Maximal LSN gathered quorum and either already confirmed in WAL, or
 	 * whose confirmation is in progress right now. Any attempt to confirm
@@ -216,7 +225,7 @@ txn_limbo_last_entry(struct txn_limbo *limbo)
  * @a replica_id.
  */
 static inline uint64_t
-txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
+txn_limbo_replica_term(struct txn_limbo *limbo, uint32_t replica_id)
 {
 	return vclock_get(&limbo->promote_term_map, replica_id);
 }
@@ -226,11 +235,14 @@ txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
  * data from it. The check is only valid when elections are enabled.
  */
 static inline bool
-txn_limbo_is_replica_outdated(const struct txn_limbo *limbo,
+txn_limbo_is_replica_outdated(struct txn_limbo *limbo,
 			      uint32_t replica_id)
 {
-	return txn_limbo_replica_term(limbo, replica_id) <
-	       limbo->promote_greatest_term;
+	latch_lock(&limbo->promote_latch);
+	uint64_t v = vclock_get(&limbo->promote_term_map, replica_id);
+	bool res = v < limbo->promote_greatest_term;
+	latch_unlock(&limbo->promote_latch);
+	return res;
 }
 
 /**
@@ -300,7 +312,37 @@ txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn);
 int
 txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry);
 
-/** Execute a synchronous replication request. */
+/**
+ * Initiate execution of a synchronous replication request.
+ */
+static inline void
+txn_limbo_begin(struct txn_limbo *limbo)
+{
+	limbo->promote_latch_cnt++;
+	latch_lock(&limbo->promote_latch);
+}
+
+/** Commit a synchronous replication request. */
+static inline void
+txn_limbo_commit(struct txn_limbo *limbo)
+{
+	latch_unlock(&limbo->promote_latch);
+	limbo->promote_latch_cnt--;
+}
+
+/** Rollback a synchronous replication request. */
+static inline void
+txn_limbo_rollback(struct txn_limbo *limbo)
+{
+	latch_unlock(&limbo->promote_latch);
+}
+
+/** Apply a synchronous replication request after the processing stage. */
+void
+txn_limbo_apply(struct txn_limbo *limbo,
+		const struct synchro_request *req);
+
+/** Process a synchronous replication request. */
 void
 txn_limbo_process(struct txn_limbo *limbo,
 		  const struct synchro_request *req);
-- 
2.31.1
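
A note on the waiters counter semantics (an illustration, not part
of the patch): txn_limbo_begin() bumps promote_latch_cnt before
latch_lock() may yield, and txn_limbo_commit() drops it only after
latch_unlock(), so the counter covers the current latch owner as
well as the fibers queued behind it. A hypothetical demo fiber
(the name demo_writer_f and the sleep are made up for the example)
showing the pattern under the cooperative scheduler:

	/* Hypothetical demo fiber, not in the patch. */
	static int
	demo_writer_f(va_list ap)
	{
		(void)ap;
		/* Increments waiters; may yield inside latch_lock(). */
		txn_limbo_begin(&txn_limbo);
		/* Stands in for the journal_write() sleep. */
		fiber_sleep(0.1);
		/* Unlocks the latch, then decrements waiters. */
		txn_limbo_commit(&txn_limbo);
		return 0;
	}

Two such fibers started back to back would briefly report
box.info.synchro.queue.waiters = 2: one sleeping while holding the
latch, one parked inside latch_lock().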