From: Cyrill Gorcunov via Tarantool-patches
Reply-To: Cyrill Gorcunov
To: tml
Cc: Vladislav Shpilevoy
Date: Mon, 7 Jun 2021 18:55:19 +0300
Message-Id: <20210607155519.109626-3-gorcunov@gmail.com>
In-Reply-To: <20210607155519.109626-1-gorcunov@gmail.com>
References: <20210607155519.109626-1-gorcunov@gmail.com>
X-Mailer: git-send-email 2.31.1
List-Id: Tarantool development patches
Subject: [Tarantool-patches] [PATCH v8 2/2] relay: provide information about downstream lag

We already have the `box.info.replication[n].upstream.lag` entry for
monitoring purposes. At the same time, in synchronous replication
timeouts are key properties of the quorum gathering procedure, so we
would like to know how long it takes a transaction to traverse the
`initiator WAL -> network -> remote applier -> ACK` path.

Typical output is

 | tarantool> box.info.replication[2].downstream
 | ---
 | - status: follow
 |   idle: 0.61753897101153
 |   vclock: {1: 147}
 |   lag: 0
 | ...
 | tarantool> box.space.sync:insert{69}
 | ---
 | - [69]
 | ...
 |
 | tarantool> box.info.replication[2].downstream
 | ---
 | - status: follow
 |   idle: 0.75324084801832
 |   vclock: {1: 151}
 |   lag: 0.0011014938354492
 | ...
Closes #5447

Signed-off-by: Cyrill Gorcunov

@TarantoolBot document
Title: Add `box.info.replication[n].downstream.lag` entry

`replication[n].downstream.lag` is the time difference between the
moment the last transaction was written to the WAL of the transaction
initiator and the moment it was written to the WAL of replica `n`. In
other words, it is the lag in seconds between the main node writing
data to its own WAL and replica `n` getting this data replicated to
its own WAL journal.

If a transaction fails to replicate, the lag value is not modified,
because only successfully applied transactions are accounted. Likewise,
if the main node or a replica gets restarted, the lag value stays zero
until the next successful transaction.
---
 .../unreleased/gh-5447-downstream-lag.md      |  6 ++
 src/box/lua/info.c                            |  3 +
 src/box/relay.cc                              | 50 ++++++++++
 src/box/relay.h                               |  6 ++
 .../replication/gh-5447-downstream-lag.result | 93 +++++++++++++++++++
 .../gh-5447-downstream-lag.test.lua           | 41 ++++++++
 6 files changed, 199 insertions(+)
 create mode 100644 changelogs/unreleased/gh-5447-downstream-lag.md
 create mode 100644 test/replication/gh-5447-downstream-lag.result
 create mode 100644 test/replication/gh-5447-downstream-lag.test.lua

diff --git a/changelogs/unreleased/gh-5447-downstream-lag.md b/changelogs/unreleased/gh-5447-downstream-lag.md
new file mode 100644
index 000000000..726175c6c
--- /dev/null
+++ b/changelogs/unreleased/gh-5447-downstream-lag.md
@@ -0,0 +1,6 @@
+## feature/replication
+
+ * Introduced the `box.info.replication[n].downstream.lag` field to
+   monitor the state of replication. It represents the time spent
+   between a transaction being written to the initiator's WAL file
+   and reaching the WAL file of a replica (gh-5447).
diff --git a/src/box/lua/info.c b/src/box/lua/info.c
index 0eb48b823..f201b25e3 100644
--- a/src/box/lua/info.c
+++ b/src/box/lua/info.c
@@ -143,6 +143,9 @@ lbox_pushrelay(lua_State *L, struct relay *relay)
 	lua_pushnumber(L, ev_monotonic_now(loop()) -
 		       relay_last_row_time(relay));
 	lua_settable(L, -3);
+	lua_pushstring(L, "lag");
+	lua_pushnumber(L, relay_txn_lag(relay));
+	lua_settable(L, -3);
 	break;
 case RELAY_STOPPED:
 {
diff --git a/src/box/relay.cc b/src/box/relay.cc
index b1571b361..cdd1383e8 100644
--- a/src/box/relay.cc
+++ b/src/box/relay.cc
@@ -158,6 +158,18 @@ struct relay {
 	struct stailq pending_gc;
 	/** Time when last row was sent to peer. */
 	double last_row_time;
+	/**
+	 * The time difference between the moment when we
+	 * wrote a transaction to the local WAL and when
+	 * this transaction was replicated to the remote
+	 * node (i.e. written to the node's WAL).
+	 */
+	double txn_lag;
+	/**
+	 * The last timestamp observed from the remote node,
+	 * used to persist the @a txn_lag value.
+	 */
+	double txn_acked_tm;
 	/** Relay sync state. */
 	enum relay_state state;
 
@@ -217,6 +229,12 @@ relay_last_row_time(const struct relay *relay)
 	return relay->last_row_time;
 }
 
+double
+relay_txn_lag(const struct relay *relay)
+{
+	return relay->txn_lag;
+}
+
 static void
 relay_send(struct relay *relay, struct xrow_header *packet);
 
 static void
@@ -284,6 +302,16 @@ relay_start(struct relay *relay, int fd, uint64_t sync,
 	relay->state = RELAY_FOLLOW;
 	relay->row_count = 0;
 	relay->last_row_time = ev_monotonic_now(loop());
+	relay->txn_lag = 0;
+	/*
+	 * We assume that rows previously written to the WAL are
+	 * older than the current node's real time, which allows
+	 * us to simplify the @a txn_lag calculation. In the worst
+	 * case, when the clock has been adjusted backwards
+	 * between restarts, we simply get some big value in
+	 * @a txn_lag until the next transaction gets replicated.
+	 */
+	relay->txn_acked_tm = ev_now(loop());
 }
 
 void
@@ -336,6 +364,8 @@ relay_stop(struct relay *relay)
 	 * upon cord_create().
 	 */
 	relay->cord.id = 0;
+	relay->txn_lag = 0;
+	relay->txn_acked_tm = ev_now(loop());
 }
 
 void
@@ -629,6 +659,26 @@ relay_reader_f(va_list ap)
 			/* vclock is followed while decoding, zeroing it. */
 			vclock_create(&relay->recv_vclock);
 			xrow_decode_vclock_xc(&xrow, &relay->recv_vclock);
+			/*
+			 * The replica sends us the timestamp of the last
+			 * replicated transaction, which is needed for
+			 * relay lag monitoring. Note that this transaction
+			 * was written to the WAL with our current realtime
+			 * clock value, so when it gets reported back we
+			 * can compute the time spent regardless of the
+			 * clock value on the remote replica.
+			 *
+			 * An interesting moment is replica restart: it
+			 * sends us the value 0 afterwards, but we keep
+			 * the old reported value, since we *assume* that
+			 * the timestamp does not go backwards on properly
+			 * set up nodes; otherwise the lag gets raised.
+			 * After all, this is not a tamper-proof value.
+			 */
+			if (relay->txn_acked_tm < xrow.tm) {
+				relay->txn_acked_tm = xrow.tm;
+				relay->txn_lag = ev_now(loop()) - xrow.tm;
+			}
 			fiber_cond_signal(&relay->reader_cond);
 		}
 	} catch (Exception *e) {
diff --git a/src/box/relay.h b/src/box/relay.h
index b32e2ea2a..615ffb75d 100644
--- a/src/box/relay.h
+++ b/src/box/relay.h
@@ -93,6 +93,12 @@ relay_vclock(const struct relay *relay);
 double
 relay_last_row_time(const struct relay *relay);
 
+/**
+ * Returns the relay's transaction lag.
+ */
+double
+relay_txn_lag(const struct relay *relay);
+
 /**
  * Send a Raft update request to the relay channel. It is not
  * guaranteed that it will be delivered. The connection may break.
diff --git a/test/replication/gh-5447-downstream-lag.result b/test/replication/gh-5447-downstream-lag.result
new file mode 100644
index 000000000..8586d0ed3
--- /dev/null
+++ b/test/replication/gh-5447-downstream-lag.result
@@ -0,0 +1,93 @@
+-- test-run result file version 2
+--
+-- gh-5447: Test for box.info.replication[n].downstream.lag.
+-- We need to be sure that if a replica falls behind the
+-- master node its lag is reported, so that a cluster
+-- admin is able to detect such a situation.
+--
+
+fiber = require('fiber')
+ | ---
+ | ...
+test_run = require('test_run').new()
+ | ---
+ | ...
+engine = test_run:get_cfg('engine')
+ | ---
+ | ...
+
+box.schema.user.grant('guest', 'replication')
+ | ---
+ | ...
+
+test_run:cmd('create server replica with rpl_master=default, \
+              script="replication/replica.lua"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('start server replica')
+ | ---
+ | - true
+ | ...
+
+s = box.schema.space.create('test', {engine = engine})
+ | ---
+ | ...
+_ = s:create_index('pk')
+ | ---
+ | ...
+
+--
+-- The replica should wait for some time (the WAL delay is
+-- 1 second by default) so that we are able to detect the
+-- lag, since on local instances the lag is minimal and
+-- transactions are usually handled instantly.
+test_run:switch('replica')
+ | ---
+ | - true
+ | ...
+box.error.injection.set("ERRINJ_WAL_DELAY", true)
+ | ---
+ | - ok
+ | ...
+
+test_run:switch('default')
+ | ---
+ | - true
+ | ...
+box.space.test:insert({1})
+ | ---
+ | - [1]
+ | ...
+test_run:wait_cond(function() return box.info.replication[2].downstream.lag ~= 0 end, 10)
+ | ---
+ | - true
+ | ...
+
+test_run:switch('replica')
+ | ---
+ | - true
+ | ...
+box.error.injection.set("ERRINJ_WAL_DELAY", false)
+ | ---
+ | - ok
+ | ...
+--
+-- Cleanup everything.
+test_run:switch('default')
+ | ---
+ | - true
+ | ...
+
+test_run:cmd('stop server replica')
+ | ---
+ | - true
+ | ...
+test_run:cmd('cleanup server replica')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server replica')
+ | ---
+ | - true
+ | ...
diff --git a/test/replication/gh-5447-downstream-lag.test.lua b/test/replication/gh-5447-downstream-lag.test.lua
new file mode 100644
index 000000000..650e8e215
--- /dev/null
+++ b/test/replication/gh-5447-downstream-lag.test.lua
@@ -0,0 +1,41 @@
+--
+-- gh-5447: Test for box.info.replication[n].downstream.lag.
+-- We need to be sure that if a replica falls behind the
+-- master node its lag is reported, so that a cluster
+-- admin is able to detect such a situation.
+--
+
+fiber = require('fiber')
+test_run = require('test_run').new()
+engine = test_run:get_cfg('engine')
+
+box.schema.user.grant('guest', 'replication')
+
+test_run:cmd('create server replica with rpl_master=default, \
+              script="replication/replica.lua"')
+test_run:cmd('start server replica')
+
+s = box.schema.space.create('test', {engine = engine})
+_ = s:create_index('pk')
+
+--
+-- The replica should wait for some time (the WAL delay is
+-- 1 second by default) so that we are able to detect the
+-- lag, since on local instances the lag is minimal and
+-- transactions are usually handled instantly.
+test_run:switch('replica')
+box.error.injection.set("ERRINJ_WAL_DELAY", true)
+
+test_run:switch('default')
+box.space.test:insert({1})
+test_run:wait_cond(function() return box.info.replication[2].downstream.lag ~= 0 end, 10)
+
+test_run:switch('replica')
+box.error.injection.set("ERRINJ_WAL_DELAY", false)
+--
+-- Cleanup everything.
+test_run:switch('default')
+
+test_run:cmd('stop server replica')
+test_run:cmd('cleanup server replica')
+test_run:cmd('delete server replica')
-- 
2.31.1