From: Cyrill Gorcunov via Tarantool-patches
Reply-To: Cyrill Gorcunov
To: tml
Cc: Vladislav Shpilevoy
Subject: [Tarantool-patches] [PATCH] txn: convert flags to explicit bitfield
Date: Mon, 15 Feb 2021 17:01:49 +0300
Message-Id: <20210215140149.194727-1-gorcunov@gmail.com>
X-Mailer: git-send-email 2.29.2

Instead of shifting flags inside the txn_x_flag() helpers, let's define the flag values explicitly. Moreover, let's drop the helpers completely: it is better to operate on the bitfield directly, which allows testing, setting, and clearing a set of bits at once instead of one by one.
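For example, with explicit values the sync flags can be set, tested, and cleared as a group. Below is a minimal standalone sketch of the pattern; struct txn is deliberately reduced to a bare flags field for illustration and is not the real definition:

#include <assert.h>
#include <stdint.h>

enum txn_flag {
	TXN_IS_DONE			= 0x1,
	TXN_IS_ABORTED_BY_YIELD		= 0x2,
	TXN_CAN_YIELD			= 0x4,
	TXN_HAS_TRIGGERS		= 0x8,
	TXN_WAIT_SYNC			= 0x10,
	TXN_WAIT_ACK			= 0x20,
	TXN_FORCE_ASYNC			= 0x40,
};

/* Reduced model for illustration only. */
struct txn {
	uint32_t flags;
};

int
main(void)
{
	struct txn txn = { .flags = 0 };

	/* Set two flags with a single statement. */
	txn.flags |= TXN_WAIT_SYNC | TXN_WAIT_ACK;
	/* Test both flags at once. */
	assert((txn.flags & (TXN_WAIT_SYNC | TXN_WAIT_ACK)) ==
	       (TXN_WAIT_SYNC | TXN_WAIT_ACK));
	/* Clear both at once instead of one helper call per flag. */
	txn.flags &= ~(TXN_WAIT_SYNC | TXN_WAIT_ACK);
	assert((txn.flags & (TXN_WAIT_SYNC | TXN_WAIT_ACK)) == 0);
	return 0;
}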
Closes #5128 Suggested-by: Vladislav Shpilevoy Signed-off-by: Cyrill Gorcunov --- issue https://github.com/tarantool/tarantool/issues/5128 branch gorcunov/gh-5128-txn-flags-4 Guys, take a look, please, once time permits. The helpers are gone now; I hope the code style fits. src/box/applier.cc | 4 +-- src/box/memtx_engine.c | 2 +- src/box/txn.c | 54 ++++++++++++++++------------------ src/box/txn.h | 36 ++++++----------------- src/box/txn_limbo.c | 35 ++++++++++------------ test/unit/snap_quorum_delay.cc | 3 +- 6 files changed, 55 insertions(+), 79 deletions(-) diff --git a/src/box/applier.cc b/src/box/applier.cc index 553db76fc..c7c85d329 100644 --- a/src/box/applier.cc +++ b/src/box/applier.cc @@ -237,7 +237,7 @@ apply_snapshot_row(struct xrow_header *row) * Do not wait for confirmation when fetching a snapshot. * Master only sends confirmed rows during join. */ - txn_set_flag(txn, TXN_FORCE_ASYNC); + txn->flags |= TXN_FORCE_ASYNC; if (txn_begin_stmt(txn, space) != 0) goto rollback; /* no access checks here - applier always works with admin privs */ @@ -308,7 +308,7 @@ apply_final_join_row(struct xrow_header *row) * Do not wait for confirmation while processing final * join rows. See apply_snapshot_row(). */ - txn_set_flag(txn, TXN_FORCE_ASYNC); + txn->flags |= TXN_FORCE_ASYNC; if (apply_row(row) != 0) { txn_rollback(txn); fiber_gc(); diff --git a/src/box/memtx_engine.c b/src/box/memtx_engine.c index 1174234d2..8d3cf5bbd 100644 --- a/src/box/memtx_engine.c +++ b/src/box/memtx_engine.c @@ -265,7 +265,7 @@ memtx_engine_recover_snapshot_row(struct memtx_engine *memtx, * Snapshot rows are confirmed by definition. They don't need to go to * the synchronous transactions limbo. */ - txn_set_flag(txn, TXN_FORCE_ASYNC); + txn->flags |= TXN_FORCE_ASYNC; rc = txn_commit(txn); /* * Don't let gc pool grow too much. Yet to diff --git a/src/box/txn.c b/src/box/txn.c index a5edbfc60..1958b079c 100644 --- a/src/box/txn.c +++ b/src/box/txn.c @@ -282,7 +282,7 @@ txn_begin(void) * It's a responsibility of an engine to disable yields * if they are not supported. */ - txn_set_flag(txn, TXN_CAN_YIELD); + txn->flags |= TXN_CAN_YIELD; return txn; } @@ -516,7 +516,7 @@ txn_free_or_wakeup(struct txn *txn) if (txn->fiber == NULL) txn_free(txn); else { - txn_set_flag(txn, TXN_IS_DONE); + txn->flags |= TXN_IS_DONE; if (txn->fiber != fiber()) /* Wake a waiting fiber up.
*/ fiber_wakeup(txn->fiber); @@ -526,7 +526,7 @@ txn_free_or_wakeup(struct txn *txn) void txn_complete_fail(struct txn *txn) { - assert(!txn_has_flag(txn, TXN_IS_DONE)); + assert((txn->flags & TXN_IS_DONE) == 0); assert(txn->signature < 0); txn->status = TXN_ABORTED; struct txn_stmt *stmt; @@ -535,7 +535,7 @@ txn_complete_fail(struct txn *txn) txn_rollback_one_stmt(txn, stmt); if (txn->engine != NULL) engine_rollback(txn->engine, txn); - if (txn_has_flag(txn, TXN_HAS_TRIGGERS)) + if (txn->flags & TXN_HAS_TRIGGERS) txn_run_rollback_triggers(txn, &txn->on_rollback); txn_free_or_wakeup(txn); } @@ -543,13 +543,12 @@ txn_complete_fail(struct txn *txn) void txn_complete_success(struct txn *txn) { - assert(!txn_has_flag(txn, TXN_IS_DONE)); - assert(!txn_has_flag(txn, TXN_WAIT_SYNC)); + assert((txn->flags & (TXN_IS_DONE | TXN_WAIT_SYNC)) == 0); assert(txn->signature >= 0); txn->status = TXN_COMMITTED; if (txn->engine != NULL) engine_commit(txn->engine, txn); - if (txn_has_flag(txn, TXN_HAS_TRIGGERS)) + if (txn->flags & TXN_HAS_TRIGGERS) txn_run_commit_triggers(txn, &txn->on_commit); txn_free_or_wakeup(txn); } @@ -587,9 +586,9 @@ txn_on_journal_write(struct journal_entry *entry) "%.3f sec", n_rows, txn->signature - n_rows + 1, delta); } - if (txn_has_flag(txn, TXN_HAS_TRIGGERS)) + if (txn->flags & TXN_HAS_TRIGGERS) txn_run_wal_write_triggers(txn); - if (!txn_has_flag(txn, TXN_WAIT_SYNC)) + if ((txn->flags & TXN_WAIT_SYNC) == 0) txn_complete_success(txn); else if (txn->fiber != NULL && txn->fiber != fiber()) fiber_wakeup(txn->fiber); @@ -640,10 +639,9 @@ txn_journal_entry_new(struct txn *txn) * space can't be synchronous. So if there is at least one * synchronous space, the transaction is not local. */ - if (!txn_has_flag(txn, TXN_FORCE_ASYNC)) { + if ((txn->flags & TXN_FORCE_ASYNC) == 0) { if (is_sync) { - txn_set_flag(txn, TXN_WAIT_SYNC); - txn_set_flag(txn, TXN_WAIT_ACK); + txn->flags |= TXN_WAIT_SYNC | TXN_WAIT_ACK; } else if (!txn_limbo_is_empty(&txn_limbo)) { /* * There some sync entries on the @@ -652,7 +650,7 @@ txn_journal_entry_new(struct txn *txn) * doesn't touch sync space (each sync txn * should be considered as a barrier). 
*/ - txn_set_flag(txn, TXN_WAIT_SYNC); + txn->flags |= TXN_WAIT_SYNC; } } @@ -693,8 +691,8 @@ txn_prepare(struct txn *txn) { txn->psn = ++txn_last_psn; - if (txn_has_flag(txn, TXN_IS_ABORTED_BY_YIELD)) { - assert(!txn_has_flag(txn, TXN_CAN_YIELD)); + if (txn->flags & TXN_IS_ABORTED_BY_YIELD) { + assert((txn->flags & TXN_CAN_YIELD) == 0); diag_set(ClientError, ER_TRANSACTION_YIELD); diag_log(); return -1; @@ -748,7 +746,7 @@ txn_prepare(struct txn *txn) assert(rlist_empty(&txn->conflicted_by_list)); trigger_clear(&txn->fiber_on_stop); - if (!txn_has_flag(txn, TXN_CAN_YIELD)) + if ((txn->flags & TXN_CAN_YIELD) == 0) trigger_clear(&txn->fiber_on_yield); txn->start_tm = ev_monotonic_now(loop()); @@ -815,7 +813,7 @@ txn_commit_async(struct txn *txn) if (req == NULL) goto rollback; - bool is_sync = txn_has_flag(txn, TXN_WAIT_SYNC); + bool is_sync = txn->flags & TXN_WAIT_SYNC; struct txn_limbo_entry *limbo_entry; if (is_sync) { /* @@ -838,7 +836,7 @@ txn_commit_async(struct txn *txn) if (limbo_entry == NULL) goto rollback; - if (txn_has_flag(txn, TXN_WAIT_ACK)) { + if (txn->flags & TXN_WAIT_ACK) { int64_t lsn = req->rows[txn->n_applier_rows - 1]->lsn; /* * Can't tell whether it is local or not - @@ -895,7 +893,7 @@ txn_commit(struct txn *txn) if (req == NULL) goto rollback; - bool is_sync = txn_has_flag(txn, TXN_WAIT_SYNC); + bool is_sync = txn->flags & TXN_WAIT_SYNC; if (is_sync) { /* * Remote rows, if any, come before local rows, so @@ -922,7 +920,7 @@ txn_commit(struct txn *txn) goto rollback; } if (is_sync) { - if (txn_has_flag(txn, TXN_WAIT_ACK)) { + if (txn->flags & TXN_WAIT_ACK) { int64_t lsn = req->rows[req->n_rows - 1]->lsn; /* * Use local LSN assignment. Because @@ -937,7 +935,7 @@ txn_commit(struct txn *txn) if (txn_limbo_wait_complete(&txn_limbo, limbo_entry) < 0) goto rollback; } - assert(txn_has_flag(txn, TXN_IS_DONE)); + assert(txn->flags & TXN_IS_DONE); assert(txn->signature >= 0); /* Synchronous transactions are freed by the calling fiber. 
*/ @@ -946,7 +944,7 @@ txn_commit(struct txn *txn) rollback: assert(txn->fiber != NULL); - if (!txn_has_flag(txn, TXN_IS_DONE)) { + if ((txn->flags & TXN_IS_DONE) == 0) { fiber_set_txn(fiber(), txn); txn_rollback(txn); } else { @@ -971,7 +969,7 @@ txn_rollback(struct txn *txn) assert(txn == in_txn()); txn->status = TXN_ABORTED; trigger_clear(&txn->fiber_on_stop); - if (!txn_has_flag(txn, TXN_CAN_YIELD)) + if ((txn->flags & TXN_CAN_YIELD) == 0) trigger_clear(&txn->fiber_on_yield); txn->signature = TXN_SIGNATURE_ROLLBACK; txn_complete_fail(txn); @@ -992,12 +990,12 @@ bool txn_can_yield(struct txn *txn, bool set) { assert(txn == in_txn()); - bool could = txn_has_flag(txn, TXN_CAN_YIELD); + bool could = txn->flags & TXN_CAN_YIELD; if (set && !could) { - txn_set_flag(txn, TXN_CAN_YIELD); + txn->flags |= TXN_CAN_YIELD; trigger_clear(&txn->fiber_on_yield); } else if (!set && could) { - txn_clear_flag(txn, TXN_CAN_YIELD); + txn->flags &= ~TXN_CAN_YIELD; trigger_create(&txn->fiber_on_yield, txn_on_yield, NULL, NULL); trigger_add(&fiber()->on_yield, &txn->fiber_on_yield); } @@ -1227,8 +1225,8 @@ txn_on_yield(struct trigger *trigger, void *event) (void) event; struct txn *txn = in_txn(); assert(txn != NULL); - assert(!txn_has_flag(txn, TXN_CAN_YIELD)); + assert((txn->flags & TXN_CAN_YIELD) == 0); txn_rollback_to_svp(txn, NULL); - txn_set_flag(txn, TXN_IS_ABORTED_BY_YIELD); + txn->flags |= TXN_IS_ABORTED_BY_YIELD; return 0; } diff --git a/src/box/txn.h b/src/box/txn.h index fca9bc1d0..fecb98e93 100644 --- a/src/box/txn.h +++ b/src/box/txn.h @@ -56,25 +56,25 @@ struct Vdbe; enum txn_flag { /** Transaction has been processed. */ - TXN_IS_DONE, + TXN_IS_DONE = 0x1, /** * Transaction has been aborted by fiber yield so * should be rolled back at commit. */ - TXN_IS_ABORTED_BY_YIELD, + TXN_IS_ABORTED_BY_YIELD = 0x2, /** * fiber_yield() is allowed inside the transaction. * See txn_can_yield() for more details. */ - TXN_CAN_YIELD, + TXN_CAN_YIELD = 0x4, /** on_commit and/or on_rollback list is not empty. */ - TXN_HAS_TRIGGERS, + TXN_HAS_TRIGGERS = 0x8, /** * Synchronous transaction touched sync spaces, or an * asynchronous transaction blocked by a sync one until it * is confirmed. */ - TXN_WAIT_SYNC, + TXN_WAIT_SYNC = 0x10, /** * Synchronous transaction 'waiting for ACKs' state before * commit. In this state it waits until it is replicated @@ -82,14 +82,14 @@ enum txn_flag { * commit and returns success to a user. * TXN_WAIT_SYNC is always set, if TXN_WAIT_ACK is set. */ - TXN_WAIT_ACK, + TXN_WAIT_ACK = 0x20, /** * A transaction may be forced to be asynchronous, not * wait for any ACKs, and not depend on prepending sync * transactions. This happens in a few special cases. For * example, when applier receives snapshot from master. 
*/ - TXN_FORCE_ASYNC, + TXN_FORCE_ASYNC = 0x40, }; enum { @@ -393,24 +393,6 @@ struct txn { struct rlist read_set; }; -static inline bool -txn_has_flag(struct txn *txn, enum txn_flag flag) -{ - return (txn->flags & (1 << flag)) != 0; -} - -static inline void -txn_set_flag(struct txn *txn, enum txn_flag flag) -{ - txn->flags |= 1 << flag; -} - -static inline void -txn_clear_flag(struct txn *txn, enum txn_flag flag) -{ - txn->flags &= ~(1 << flag); -} - /* Pointer to the current transaction (if any) */ static inline struct txn * in_txn(void) @@ -487,11 +469,11 @@ txn_commit_async(struct txn *txn); static inline void txn_init_triggers(struct txn *txn) { - if (!txn_has_flag(txn, TXN_HAS_TRIGGERS)) { + if ((txn->flags & TXN_HAS_TRIGGERS) == 0) { rlist_create(&txn->on_commit); rlist_create(&txn->on_rollback); rlist_create(&txn->on_wal_write); - txn_set_flag(txn, TXN_HAS_TRIGGERS); + txn->flags |= TXN_HAS_TRIGGERS; } } diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c index 9c4c3cdf1..0ac59a3a2 100644 --- a/src/box/txn_limbo.c +++ b/src/box/txn_limbo.c @@ -60,7 +60,7 @@ txn_limbo_last_synchro_entry(struct txn_limbo *limbo) { struct txn_limbo_entry *entry; rlist_foreach_entry_reverse(entry, &limbo->queue, in_queue) { - if (txn_has_flag(entry->txn, TXN_WAIT_ACK)) + if (entry->txn->flags & TXN_WAIT_ACK) return entry; } return NULL; @@ -69,7 +69,7 @@ txn_limbo_last_synchro_entry(struct txn_limbo *limbo) struct txn_limbo_entry * txn_limbo_append(struct txn_limbo *limbo, uint32_t id, struct txn *txn) { - assert(txn_has_flag(txn, TXN_WAIT_SYNC)); + assert(txn->flags & TXN_WAIT_SYNC); assert(limbo == &txn_limbo); /* * Transactions should be added to the limbo before WAL write. Limbo @@ -168,7 +168,7 @@ txn_limbo_assign_remote_lsn(struct txn_limbo *limbo, assert(limbo->owner_id != instance_id); assert(entry->lsn == -1); assert(lsn > 0); - assert(txn_has_flag(entry->txn, TXN_WAIT_ACK)); + assert(entry->txn->flags & TXN_WAIT_ACK); (void) limbo; entry->lsn = lsn; } @@ -181,7 +181,7 @@ txn_limbo_assign_local_lsn(struct txn_limbo *limbo, assert(limbo->owner_id == instance_id); assert(entry->lsn == -1); assert(lsn > 0); - assert(txn_has_flag(entry->txn, TXN_WAIT_ACK)); + assert(entry->txn->flags & TXN_WAIT_ACK); entry->lsn = lsn; /* @@ -215,14 +215,14 @@ txn_limbo_write_rollback(struct txn_limbo *limbo, int64_t lsn); int txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry) { - assert(entry->lsn > 0 || !txn_has_flag(entry->txn, TXN_WAIT_ACK)); + assert(entry->lsn > 0 || (entry->txn->flags & TXN_WAIT_ACK) == 0); bool cancellable = fiber_set_cancellable(false); if (txn_limbo_entry_is_complete(entry)) goto complete; - assert(!txn_has_flag(entry->txn, TXN_IS_DONE)); - assert(txn_has_flag(entry->txn, TXN_WAIT_SYNC)); + assert((entry->txn->flags & TXN_IS_DONE) == 0); + assert(entry->txn->flags & TXN_WAIT_SYNC); double start_time = fiber_clock(); while (true) { double deadline = start_time + replication_synchro_timeout; @@ -266,8 +266,7 @@ txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry) in_queue, tmp) { e->txn->signature = TXN_SIGNATURE_QUORUM_TIMEOUT; txn_limbo_abort(limbo, e); - txn_clear_flag(e->txn, TXN_WAIT_SYNC); - txn_clear_flag(e->txn, TXN_WAIT_ACK); + e->txn->flags &= ~(TXN_WAIT_SYNC | TXN_WAIT_ACK); txn_complete_fail(e->txn); if (e == entry) break; @@ -289,7 +288,7 @@ txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry) * installed the commit/rollback flag. 
*/ assert(rlist_empty(&entry->in_queue)); - assert(txn_has_flag(entry->txn, TXN_IS_DONE)); + assert(entry->txn->flags & TXN_IS_DONE); fiber_set_cancellable(cancellable); /* * The first tx to be rolled back already performed all @@ -385,7 +384,7 @@ txn_limbo_read_confirm(struct txn_limbo *limbo, int64_t lsn) * it is last, it does not depend on a not finished sync * transaction anymore and can be confirmed right away. */ - if (txn_has_flag(e->txn, TXN_WAIT_ACK)) { + if (e->txn->flags & TXN_WAIT_ACK) { /* Sync transaction not covered by the confirmation. */ if (e->lsn > lsn) break; @@ -399,8 +398,7 @@ txn_limbo_read_confirm(struct txn_limbo *limbo, int64_t lsn) } e->is_commit = true; txn_limbo_remove(limbo, e); - txn_clear_flag(e->txn, TXN_WAIT_SYNC); - txn_clear_flag(e->txn, TXN_WAIT_ACK); + e->txn->flags &= ~(TXN_WAIT_SYNC | TXN_WAIT_ACK); /* * If already written to WAL by now, finish tx processing. * Otherwise just clear the sync flags. Tx procesing will finish @@ -446,7 +444,7 @@ txn_limbo_read_rollback(struct txn_limbo *limbo, int64_t lsn) struct txn_limbo_entry *e, *tmp; struct txn_limbo_entry *last_rollback = NULL; rlist_foreach_entry_reverse(e, &limbo->queue, in_queue) { - if (!txn_has_flag(e->txn, TXN_WAIT_ACK)) + if ((e->txn->flags & TXN_WAIT_ACK) == 0) continue; if (e->lsn < lsn) break; @@ -456,8 +454,7 @@ txn_limbo_read_rollback(struct txn_limbo *limbo, int64_t lsn) return; rlist_foreach_entry_safe_reverse(e, &limbo->queue, in_queue, tmp) { txn_limbo_abort(limbo, e); - txn_clear_flag(e->txn, TXN_WAIT_SYNC); - txn_clear_flag(e->txn, TXN_WAIT_ACK); + e->txn->flags &= ~(TXN_WAIT_SYNC | TXN_WAIT_ACK); if (e->txn->signature >= 0) { /* Rollback the transaction. */ e->txn->signature = TXN_SIGNATURE_SYNC_ROLLBACK; @@ -530,7 +527,7 @@ txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn) * transactions are automatically committed right * after all the previous sync transactions are. */ - if (!txn_has_flag(e->txn, TXN_WAIT_ACK)) { + if ((e->txn->flags & TXN_WAIT_ACK) == 0) { assert(e->lsn == -1); if (confirm_lsn == -1) continue; @@ -664,7 +661,7 @@ txn_limbo_force_empty(struct txn_limbo *limbo, int64_t confirm_lsn) struct txn_limbo_entry *e, *last_quorum = NULL; struct txn_limbo_entry *rollback = NULL; rlist_foreach_entry(e, &limbo->queue, in_queue) { - if (txn_has_flag(e->txn, TXN_WAIT_ACK)) { + if (e->txn->flags & TXN_WAIT_ACK) { if (e->lsn <= confirm_lsn) { last_quorum = e; } else { @@ -693,7 +690,7 @@ txn_limbo_on_parameters_change(struct txn_limbo *limbo) int64_t confirm_lsn = -1; rlist_foreach_entry(e, &limbo->queue, in_queue) { assert(e->ack_count <= VCLOCK_MAX); - if (!txn_has_flag(e->txn, TXN_WAIT_ACK)) { + if ((e->txn->flags & TXN_WAIT_ACK) == 0) { assert(e->lsn == -1); if (confirm_lsn == -1) continue; diff --git a/test/unit/snap_quorum_delay.cc b/test/unit/snap_quorum_delay.cc index b9d4cc6c4..ec78f837d 100644 --- a/test/unit/snap_quorum_delay.cc +++ b/test/unit/snap_quorum_delay.cc @@ -98,8 +98,7 @@ txn_process_func(va_list ap) struct txn *txn = txn_begin(); txn->fiber = fiber(); /* Simulate a sync transaction. */ - txn_set_flag(txn, TXN_WAIT_SYNC); - txn_set_flag(txn, TXN_WAIT_ACK); + txn->flags |= TXN_WAIT_SYNC | TXN_WAIT_ACK; /* * The true way to push the transaction to limbo is to call * txn_commit() for sync transaction. But, if txn_commit() -- 2.29.2