From mboxrd@z Thu Jan 1 00:00:00 1970
From: Cyrill Gorcunov via Tarantool-patches
Reply-To: Cyrill Gorcunov
To: tml
Cc: Vladislav Shpilevoy
Date: Fri, 4 Jun 2021 20:06:07 +0300
Message-Id: <20210604170607.1127177-3-gorcunov@gmail.com>
In-Reply-To: <20210604170607.1127177-1-gorcunov@gmail.com>
References: <20210604170607.1127177-1-gorcunov@gmail.com>
Subject: [Tarantool-patches] [RFC v7 2/2] relay: provide information about downstream lag
List-Id: Tarantool development patches
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

We already have the `box.info.replication[n].upstream.lag` entry for
monitoring purposes. At the same time, in synchronous replication
timeouts are key properties of the quorum gathering procedure.
Thus we would like to know how long it takes a transaction to traverse
the `initiator WAL -> network -> remote applier -> ACK` path. Typical
output is:

 | tarantool> box.info.replication[2].downstream
 | ---
 | - status: follow
 |   idle: 0.61753897101153
 |   vclock: {1: 147}
 |   lag: 0
 | ...
 | tarantool> box.space.sync:insert{69}
 | ---
 | - [69]
 | ...
 |
 | tarantool> box.info.replication[2].downstream
 | ---
 | - status: follow
 |   idle: 0.75324084801832
 |   vclock: {1: 151}
 |   lag: 0.0011014938354492
 | ...

Closes #5447

Signed-off-by: Cyrill Gorcunov

@TarantoolBot document
Title: Add `box.info.replication[n].downstream.lag` entry

`replication[n].downstream.lag` is the time difference between the
moment the last transaction was written to the WAL journal of the
transaction initiator and the moment it was written to the WAL on
replica `n`. In other words, this is the lag in seconds between the
main node writing data to its own WAL and replica `n` getting this
data replicated to its own WAL journal.
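For illustration only (this snippet is not part of the patch), a
minimal monitoring sketch on top of the new field could look as
follows; the replica id `2` and the 0.5-second threshold are
assumptions made for the example:

-- A minimal sketch: poll the new downstream.lag field in a
-- background fiber and warn when it exceeds a hypothetical
-- threshold. Assumes a replica with id 2 is configured.
local fiber = require('fiber')
local log = require('log')

local LAG_THRESHOLD = 0.5

fiber.create(function()
    while true do
        local replica = box.info.replication[2]
        if replica ~= nil and replica.downstream ~= nil and
           replica.downstream.lag ~= nil and
           replica.downstream.lag > LAG_THRESHOLD then
            log.warn('downstream lag of replica 2 is ' ..
                     replica.downstream.lag .. ' seconds')
        end
        fiber.sleep(1)
    end
end)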
---
 .../unreleased/gh-5447-downstream-lag.md      |  6 ++
 src/box/lua/info.c                            |  3 +
 src/box/relay.cc                              | 51 ++++++++++
 src/box/relay.h                               |  6 ++
 .../replication/gh-5447-downstream-lag.result | 93 +++++++++++++++++++
 .../gh-5447-downstream-lag.test.lua           | 41 ++++++++
 6 files changed, 200 insertions(+)
 create mode 100644 changelogs/unreleased/gh-5447-downstream-lag.md
 create mode 100644 test/replication/gh-5447-downstream-lag.result
 create mode 100644 test/replication/gh-5447-downstream-lag.test.lua

diff --git a/changelogs/unreleased/gh-5447-downstream-lag.md b/changelogs/unreleased/gh-5447-downstream-lag.md
new file mode 100644
index 000000000..726175c6c
--- /dev/null
+++ b/changelogs/unreleased/gh-5447-downstream-lag.md
@@ -0,0 +1,6 @@
+#feature/replication
+
+ * Introduced the `box.info.replication[n].downstream.lag` field to
+   monitor the state of replication. This member represents the time
+   spent between a transaction being written to the initiator's WAL
+   file and it reaching the WAL file of a replica (gh-5447).
diff --git a/src/box/lua/info.c b/src/box/lua/info.c
index 0eb48b823..f201b25e3 100644
--- a/src/box/lua/info.c
+++ b/src/box/lua/info.c
@@ -143,6 +143,9 @@ lbox_pushrelay(lua_State *L, struct relay *relay)
 		lua_pushnumber(L, ev_monotonic_now(loop()) -
 			       relay_last_row_time(relay));
 		lua_settable(L, -3);
+		lua_pushstring(L, "lag");
+		lua_pushnumber(L, relay_txn_lag(relay));
+		lua_settable(L, -3);
 		break;
 	case RELAY_STOPPED:
 	{
diff --git a/src/box/relay.cc b/src/box/relay.cc
index b1571b361..18c1ac06b 100644
--- a/src/box/relay.cc
+++ b/src/box/relay.cc
@@ -158,6 +158,19 @@ struct relay {
 	struct stailq pending_gc;
 	/** Time when last row was sent to peer. */
 	double last_row_time;
+	/**
+	 * The time difference between the moment when we
+	 * wrote a transaction to the local WAL and when
+	 * this transaction was replicated to the remote
+	 * node (i.e. written to the node's WAL).
+	 */
+	double txn_lag;
+	/**
+	 * The last timestamp observed from the remote node,
+	 * used to maintain the @a txn_lag value.
+	 */
+	double txn_acked_tm;
+
 	/** Relay sync state.
 	 */
 	enum relay_state state;
@@ -217,6 +230,12 @@ relay_last_row_time(const struct relay *relay)
 	return relay->last_row_time;
 }
 
+double
+relay_txn_lag(const struct relay *relay)
+{
+	return relay->txn_lag;
+}
+
 static void
 relay_send(struct relay *relay, struct xrow_header *packet);
 static void
@@ -284,6 +303,16 @@ relay_start(struct relay *relay, int fd, uint64_t sync,
 	relay->state = RELAY_FOLLOW;
 	relay->row_count = 0;
 	relay->last_row_time = ev_monotonic_now(loop());
+	relay->txn_lag = 0;
+	/*
+	 * We assume that rows previously written to the WAL
+	 * are older than the current node's real time, which
+	 * simplifies the @a txn_lag calculation. In the worst
+	 * case, when the clock has been adjusted backwards
+	 * between restarts, we simply get some big value in
+	 * @a txn_lag until the next transaction gets replicated.
+	 */
+	relay->txn_acked_tm = ev_now(loop());
 }
 
 void
@@ -336,6 +365,8 @@ relay_stop(struct relay *relay)
 	 * upon cord_create().
 	 */
 	relay->cord.id = 0;
+	relay->txn_lag = 0;
+	relay->txn_acked_tm = ev_now(loop());
 }
 
 void
@@ -629,6 +660,26 @@ relay_reader_f(va_list ap)
 			/* vclock is followed while decoding, zeroing it. */
 			vclock_create(&relay->recv_vclock);
 			xrow_decode_vclock_xc(&xrow, &relay->recv_vclock);
+			/*
+			 * The replica sends us the timestamp of the last
+			 * replicated transaction, which is needed for relay
+			 * lag monitoring. Note that this transaction has
+			 * been written to the WAL with our current realtime
+			 * clock value, thus when it gets reported back we
+			 * can compute the time spent regardless of the
+			 * clock value on the remote replica.
+			 *
+			 * An interesting moment is replica restart - it will
+			 * send us the value 0 after that, but we can preserve
+			 * the old reported value here since we *assume* that
+			 * the timestamp does not go backwards on properly
+			 * set up nodes; otherwise the lag gets raised.
+			 * After all, this is not a tamper-proof value.
+			 */
+			if (relay->txn_acked_tm < xrow.tm) {
+				relay->txn_acked_tm = xrow.tm;
+				relay->txn_lag = ev_now(loop()) - xrow.tm;
+			}
 			fiber_cond_signal(&relay->reader_cond);
 		}
 	} catch (Exception *e) {
diff --git a/src/box/relay.h b/src/box/relay.h
index b32e2ea2a..615ffb75d 100644
--- a/src/box/relay.h
+++ b/src/box/relay.h
@@ -93,6 +93,12 @@ relay_vclock(const struct relay *relay);
 double
 relay_last_row_time(const struct relay *relay);
 
+/**
+ * Returns the relay's transaction lag.
+ */
+double
+relay_txn_lag(const struct relay *relay);
+
 /**
  * Send a Raft update request to the relay channel. It is not
  * guaranteed that it will be delivered. The connection may break.
diff --git a/test/replication/gh-5447-downstream-lag.result b/test/replication/gh-5447-downstream-lag.result
new file mode 100644
index 000000000..8586d0ed3
--- /dev/null
+++ b/test/replication/gh-5447-downstream-lag.result
@@ -0,0 +1,93 @@
+-- test-run result file version 2
+--
+-- gh-5447: Test for box.info.replication[n].downstream.lag.
+-- We need to be sure that when a replica falls behind the
+-- master node it reports its own lag, so that a cluster
+-- admin is able to detect such a situation.
+--
+
+fiber = require('fiber')
+ | ---
+ | ...
+test_run = require('test_run').new()
+ | ---
+ | ...
+engine = test_run:get_cfg('engine')
+ | ---
+ | ...
+
+box.schema.user.grant('guest', 'replication')
+ | ---
+ | ...
+
+test_run:cmd('create server replica with rpl_master=default, \
+             script="replication/replica.lua"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('start server replica')
+ | ---
+ | - true
+ | ...
+
+s = box.schema.space.create('test', {engine = engine})
+ | ---
+ | ...
+_ = s:create_index('pk')
+ | ---
+ | ...
+
+--
+-- The replica should wait for some time (the WAL delay is 1 second
+-- by default) so that we are able to detect the lag, since on
+-- local instances the lag is minimal and transactions are usually
+-- handled instantly.
+test_run:switch('replica')
+ | ---
+ | - true
+ | ...
+box.error.injection.set("ERRINJ_WAL_DELAY", true)
+ | ---
+ | - ok
+ | ...
+
+test_run:switch('default')
+ | ---
+ | - true
+ | ...
+box.space.test:insert({1})
+ | ---
+ | - [1]
+ | ...
+test_run:wait_cond(function() return box.info.replication[2].downstream.lag ~= 0 end, 10)
+ | ---
+ | - true
+ | ...
+
+test_run:switch('replica')
+ | ---
+ | - true
+ | ...
+box.error.injection.set("ERRINJ_WAL_DELAY", false)
+ | ---
+ | - ok
+ | ...
+--
+-- Clean up everything.
+test_run:switch('default')
+ | ---
+ | - true
+ | ...
+
+test_run:cmd('stop server replica')
+ | ---
+ | - true
+ | ...
+test_run:cmd('cleanup server replica')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server replica')
+ | ---
+ | - true
+ | ...
diff --git a/test/replication/gh-5447-downstream-lag.test.lua b/test/replication/gh-5447-downstream-lag.test.lua
new file mode 100644
index 000000000..650e8e215
--- /dev/null
+++ b/test/replication/gh-5447-downstream-lag.test.lua
@@ -0,0 +1,41 @@
+--
+-- gh-5447: Test for box.info.replication[n].downstream.lag.
+-- We need to be sure that when a replica falls behind the
+-- master node it reports its own lag, so that a cluster
+-- admin is able to detect such a situation.
+--
+
+fiber = require('fiber')
+test_run = require('test_run').new()
+engine = test_run:get_cfg('engine')
+
+box.schema.user.grant('guest', 'replication')
+
+test_run:cmd('create server replica with rpl_master=default, \
+             script="replication/replica.lua"')
+test_run:cmd('start server replica')
+
+s = box.schema.space.create('test', {engine = engine})
+_ = s:create_index('pk')
+
+--
+-- The replica should wait for some time (the WAL delay is 1 second
+-- by default) so that we are able to detect the lag, since on
+-- local instances the lag is minimal and transactions are usually
+-- handled instantly.
+test_run:switch('replica')
+box.error.injection.set("ERRINJ_WAL_DELAY", true)
+
+test_run:switch('default')
+box.space.test:insert({1})
+test_run:wait_cond(function() return box.info.replication[2].downstream.lag ~= 0 end, 10)
+
+test_run:switch('replica')
+box.error.injection.set("ERRINJ_WAL_DELAY", false)
+--
+-- Clean up everything.
+test_run:switch('default')
+
+test_run:cmd('stop server replica')
+test_run:cmd('cleanup server replica')
+test_run:cmd('delete server replica')
-- 
2.31.1