From mboxrd@z Thu Jan 1 00:00:00 1970
From: Cyrill Gorcunov via Tarantool-patches
Reply-To: Cyrill Gorcunov
To: tml
Cc: Vladislav Shpilevoy
Date: Mon, 7 Jun 2021 18:55:18 +0300
Subject: [Tarantool-patches] [PATCH v8 1/2] applier: send transaction's first row WAL time in the applier_writer_f
Message-Id: <20210607155519.109626-2-gorcunov@gmail.com>
In-Reply-To: <20210607155519.109626-1-gorcunov@gmail.com>
References: <20210607155519.109626-1-gorcunov@gmail.com>
List-Id: Tarantool development patches
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The applier fiber sends the node's current vclock to the remote relay
reader, indicating the current state of the fetched WAL data, so that
the relay knows which new data should be sent.
The packet the applier sends carries the xrow_header::tm field as zero,
but we can reuse it to report the timestamp of the first row of the
transaction we wrote to our WAL. Since old instances of Tarantool
simply ignore this field, such an extension won't cause any problems.
The timestamp will be needed to account for the lag of downstream
replicas, which is useful for informational purposes and cluster
health monitoring.

We update the applier statistics in WAL callbacks, but since both
apply_synchro_row and apply_plain_tx are used not only for real data
application but also in the final join stage (where we are not writing
the data yet), apply_synchro_row is extended with a replica_id argument
which is non-zero when the applier is subscribed.

The calculation of the downstream lag itself will be addressed in the
next patch, because sending the timestamp and observing it are
independent actions.

Part-of #5447

Signed-off-by: Cyrill Gorcunov
---
 src/box/applier.cc     | 90 +++++++++++++++++++++++++++++++++++-------
 src/box/replication.cc |  1 +
 src/box/replication.h  |  5 +++
 3 files changed, 81 insertions(+), 15 deletions(-)

diff --git a/src/box/applier.cc b/src/box/applier.cc
index 33181fdbf..38695a54f 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -163,6 +163,9 @@ applier_writer_f(va_list ap)
 	struct ev_io io;
 	coio_create(&io, applier->io.fd);
 
+	/* ID is permanent while applier is alive */
+	uint32_t replica_id = applier->instance_id;
+
 	while (!fiber_is_cancelled()) {
 		/*
 		 * Tarantool >= 1.7.7 sends periodic heartbeat
@@ -193,6 +196,16 @@ applier_writer_f(va_list ap)
 		applier->has_acks_to_send = false;
 		struct xrow_header xrow;
 		xrow_encode_vclock(&xrow, &replicaset.vclock);
+		/*
+		 * For relay lag statistics we report last
+		 * written transaction timestamp in tm field.
+		 *
+		 * Replica might be dead already so we have to
+		 * test on each iteration.
+		 */
+		struct replica *r = replica_by_id(replica_id);
+		if (likely(r != NULL))
+			xrow.tm = r->applier_txn_start_tm;
 		coio_write_xrow(&io, &xrow);
 		ERROR_INJECT(ERRINJ_APPLIER_SLOW_ACK, {
 			fiber_sleep(0.01);
@@ -490,7 +503,7 @@ static uint64_t
 applier_read_tx(struct applier *applier, struct stailq *rows,
 		double timeout);
 static int
-apply_final_join_tx(struct stailq *rows);
+apply_final_join_tx(uint32_t replica_id, struct stailq *rows);
 
 /**
  * A helper struct to link xrow objects in a list.
@@ -535,7 +548,7 @@ applier_wait_register(struct applier *applier, uint64_t row_count)
 						  next)->row);
 			break;
 		}
-		if (apply_final_join_tx(&rows) != 0)
+		if (apply_final_join_tx(applier->instance_id, &rows) != 0)
 			diag_raise();
 	}
 
@@ -751,11 +764,35 @@ applier_txn_rollback_cb(struct trigger *trigger, void *event)
 	return 0;
 }
 
+struct replica_cb_data {
+	/** Replica ID the data belongs to. */
+	uint32_t replica_id;
+	/**
+	 * Timestamp of a transaction to be accounted
+	 * for relay lag. Usually it is a first row in
+	 * a transaction.
+	 */
+	double txn_start_tm;
+};
+
+/** Update replica associated data once write is complete. */
+static void
+replica_txn_wal_write_cb(struct replica_cb_data *rcb)
+{
+	struct replica *r = replica_by_id(rcb->replica_id);
+	if (likely(r != NULL))
+		r->applier_txn_start_tm = rcb->txn_start_tm;
+}
+
 static int
 applier_txn_wal_write_cb(struct trigger *trigger, void *event)
 {
-	(void) trigger;
 	(void) event;
+
+	struct replica_cb_data *rcb =
+		(struct replica_cb_data *)trigger->data;
+	replica_txn_wal_write_cb(rcb);
+
 	/* Broadcast the WAL write across all appliers. */
 	trigger_run(&replicaset.applier.on_wal_write, NULL);
 	return 0;
@@ -766,6 +803,8 @@ struct synchro_entry {
 	struct synchro_request *req;
 	/** Fiber created the entry. To wakeup when WAL write is done. */
 	struct fiber *owner;
+	/** Replica associated data. */
+	struct replica_cb_data *rcb;
 	/**
 	 * The base journal entry. It has unsized array and then must be the
 	 * last entry in the structure. But can workaround it via a union
@@ -789,6 +828,7 @@ apply_synchro_row_cb(struct journal_entry *entry)
 	if (entry->res < 0) {
 		applier_rollback_by_wal_io();
 	} else {
+		replica_txn_wal_write_cb(synchro_entry->rcb);
 		txn_limbo_process(&txn_limbo, synchro_entry->req);
 		trigger_run(&replicaset.applier.on_wal_write, NULL);
 	}
@@ -797,7 +837,7 @@ apply_synchro_row_cb(struct journal_entry *entry)
 
 /** Process a synchro request. */
 static int
-apply_synchro_row(struct xrow_header *row)
+apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
 {
 	assert(iproto_type_is_synchro_request(row->type));
 
@@ -805,6 +845,7 @@
 	if (xrow_decode_synchro(row, &req) != 0)
 		goto err;
 
+	struct replica_cb_data rcb_data;
 	struct synchro_entry entry;
 	/*
 	 * Rows array is cast from *[] to **, because otherwise g++ complains
@@ -817,6 +858,11 @@ apply_synchro_row(struct xrow_header *row)
 			     apply_synchro_row_cb, &entry);
 	entry.req = &req;
 	entry.owner = fiber();
+
+	rcb_data.replica_id = replica_id;
+	rcb_data.txn_start_tm = row->tm;
+	entry.rcb = &rcb_data;
+
 	/*
 	 * The WAL write is blocking. Otherwise it might happen that a CONFIRM
 	 * or ROLLBACK is sent to WAL, and it would empty the limbo, but before
@@ -862,8 +908,9 @@ applier_handle_raft(struct applier *applier, struct xrow_header *row)
 	return box_raft_process(&req, applier->instance_id);
 }
 
-static inline int
-apply_plain_tx(struct stailq *rows, bool skip_conflict, bool use_triggers)
+static int
+apply_plain_tx(uint32_t replica_id, struct stailq *rows,
+	       bool skip_conflict, bool use_triggers)
 {
 	/*
 	 * Explicitly begin the transaction so that we can
@@ -931,10 +978,21 @@ apply_plain_tx(struct stailq *rows, bool skip_conflict, bool use_triggers)
 			goto fail;
 		}
 
+		struct replica_cb_data *rcb;
+		rcb = region_alloc_object(&txn->region, typeof(*rcb), &size);
+		if (rcb == NULL) {
+			diag_set(OutOfMemory, size, "region_alloc_object", "rcb");
+			goto fail;
+		}
+
 		trigger_create(on_rollback, applier_txn_rollback_cb, NULL, NULL);
 		txn_on_rollback(txn, on_rollback);
 
-		trigger_create(on_wal_write, applier_txn_wal_write_cb, NULL, NULL);
+		item = stailq_first_entry(rows, struct applier_tx_row, next);
+		rcb->replica_id = replica_id;
+		rcb->txn_start_tm = item->row.tm;
+
+		trigger_create(on_wal_write, applier_txn_wal_write_cb, rcb, NULL);
 		txn_on_wal_write(txn, on_wal_write);
 	}
 
@@ -946,7 +1004,7 @@ apply_plain_tx(struct stailq *rows, bool skip_conflict, bool use_triggers)
 
 /** A simpler version of applier_apply_tx() for final join stage. */
 static int
-apply_final_join_tx(struct stailq *rows)
+apply_final_join_tx(uint32_t replica_id, struct stailq *rows)
 {
 	struct xrow_header *first_row = &stailq_first_entry(rows,
 					struct applier_tx_row, next)->row;
@@ -957,9 +1015,9 @@ apply_final_join_tx(struct stailq *rows)
 	vclock_follow_xrow(&replicaset.vclock, last_row);
 	if (unlikely(iproto_type_is_synchro_request(first_row->type))) {
 		assert(first_row == last_row);
-		rc = apply_synchro_row(first_row);
+		rc = apply_synchro_row(replica_id, first_row);
 	} else {
-		rc = apply_plain_tx(rows, false, false);
+		rc = apply_plain_tx(replica_id, rows, false, false);
 	}
 	fiber_gc();
 	return rc;
@@ -1088,12 +1146,14 @@ applier_apply_tx(struct applier *applier, struct stailq *rows)
 		 * each other.
 		 */
 		assert(first_row == last_row);
-		if ((rc = apply_synchro_row(first_row)) != 0)
-			goto finish;
-	} else if ((rc = apply_plain_tx(rows, replication_skip_conflict,
-					true)) != 0) {
-		goto finish;
+		rc = apply_synchro_row(applier->instance_id, first_row);
+	} else {
+		rc = apply_plain_tx(applier->instance_id, rows,
+				    replication_skip_conflict, true);
 	}
+	if (rc != 0)
+		goto finish;
+
 	vclock_follow(&replicaset.applier.vclock, last_row->replica_id,
 		      last_row->lsn);
 finish:
diff --git a/src/box/replication.cc b/src/box/replication.cc
index aefb812b3..c97c1fc04 100644
--- a/src/box/replication.cc
+++ b/src/box/replication.cc
@@ -184,6 +184,7 @@ replica_new(void)
 	trigger_create(&replica->on_applier_state,
 		       replica_on_applier_state_f, NULL, NULL);
 	replica->applier_sync_state = APPLIER_DISCONNECTED;
+	replica->applier_txn_start_tm = 0;
 	latch_create(&replica->order_latch);
 	return replica;
 }
diff --git a/src/box/replication.h b/src/box/replication.h
index 2ad1cbf66..d9817d4ff 100644
--- a/src/box/replication.h
+++ b/src/box/replication.h
@@ -331,6 +331,11 @@ struct replica {
 	 * separate from applier.
 	 */
 	enum applier_state applier_sync_state;
+	/**
+	 * Applier's last written to WAL transaction timestamp.
+	 * Needed for relay lagging statistics.
+	 */
+	double applier_txn_start_tm;
 	/* The latch is used to order replication requests. */
 	struct latch order_latch;
 };
-- 
2.31.1
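
For illustration only, beyond the scope of this patch: the commit message
defers the actual lag computation to the next patch. Below is a minimal
sketch of how the relay side could consume the timestamp carried back in
the ACK's tm field; the struct relay member txn_lag and the helper name
are assumptions for the sketch, not existing API.

	/*
	 * Sketch (assumed names): the applier reports the first-row WAL
	 * timestamp of the last written transaction in xrow_header::tm
	 * of its ACK packet, so the relay can estimate the downstream
	 * lag as the difference between its current wall-clock time and
	 * that value.
	 */
	static void
	relay_note_ack_tm(struct relay *relay, const struct xrow_header *xrow)
	{
		/* Old replicas keep tm == 0, which means "no data". */
		if (xrow->tm != 0)
			relay->txn_lag = ev_now(loop()) - xrow->tm;
	}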