From: Cyrill Gorcunov <gorcunov@gmail.com>
To: tml <tarantool-patches@dev.tarantool.org>
Cc: Vladislav Shpilevoy
Date: Wed, 22 Sep 2021 16:05:35 +0300
Message-Id: <20210922130535.79479-6-gorcunov@gmail.com>
In-Reply-To: <20210922130535.79479-1-gorcunov@gmail.com>
References: <20210922130535.79479-1-gorcunov@gmail.com>
X-Mailer: git-send-email 2.31.1
Subject: [Tarantool-patches] [PATCH v17 5/5] test: add gh-6036-term-order test

Test that promotion requests are handled only once the corresponding
write to the WAL completes, because we update the in-memory data before
the write finishes.

Part-of #6036

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
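[Not part of the commit message: a condensed sketch of the check this
test automates, for reviewers skimming the diff. It compresses
test-run's per-server switching into one flow and assumes a debug build
where the ERRINJ_WAL_DELAY injection is available; `seen` is just an
illustrative name. See the actual test files below for the real thing.]

    -- Freeze WAL writes on replica2 and remember the max promote term.
    test_run:switch('replica2')
    box.error.injection.set('ERRINJ_WAL_DELAY', true)
    seen = box.info.synchro.promote.term_max

    -- Bump the term elsewhere; the PROMOTE row gets queued on replica2
    -- because its WAL thread is sleeping.
    test_run:switch('master')
    box.ctl.promote()

    -- The row is queued but not yet written: term_max must not move.
    -- If it did, memory was updated ahead of the WAL write (the bug).
    test_run:switch('replica2')
    assert(box.info.synchro.promote.term_max == seen)

    -- Release the WAL and wait for the new term to land.
    box.error.injection.set('ERRINJ_WAL_DELAY', false)
    test_run:wait_cond(function() return box.info.synchro.promote.term_max > seen end)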
 test/replication/gh-6036-order-master.lua    |   1 +
 test/replication/gh-6036-order-node.lua      |  60 ++++++
 test/replication/gh-6036-order-replica1.lua  |   1 +
 test/replication/gh-6036-order-replica2.lua  |   1 +
 test/replication/gh-6036-term-order.result   | 216 +++++++++++++++++++
 test/replication/gh-6036-term-order.test.lua |  94 ++++++++
 test/replication/suite.cfg                   |   1 +
 test/replication/suite.ini                   |   2 +-
 8 files changed, 375 insertions(+), 1 deletion(-)
 create mode 120000 test/replication/gh-6036-order-master.lua
 create mode 100644 test/replication/gh-6036-order-node.lua
 create mode 120000 test/replication/gh-6036-order-replica1.lua
 create mode 120000 test/replication/gh-6036-order-replica2.lua
 create mode 100644 test/replication/gh-6036-term-order.result
 create mode 100644 test/replication/gh-6036-term-order.test.lua

diff --git a/test/replication/gh-6036-order-master.lua b/test/replication/gh-6036-order-master.lua
new file mode 120000
index 000000000..82a6073a1
--- /dev/null
+++ b/test/replication/gh-6036-order-master.lua
@@ -0,0 +1 @@
+gh-6036-order-node.lua
\ No newline at end of file
diff --git a/test/replication/gh-6036-order-node.lua b/test/replication/gh-6036-order-node.lua
new file mode 100644
index 000000000..b22a7cb4c
--- /dev/null
+++ b/test/replication/gh-6036-order-node.lua
@@ -0,0 +1,60 @@
+local INSTANCE_ID = string.match(arg[0], "gh%-6036%-order%-(.+)%.lua")
+
+local function unix_socket(name)
+    return "unix/:./" .. name .. '.sock';
+end
+
+require('console').listen(os.getenv('ADMIN'))
+
+if INSTANCE_ID == "master" then
+    box.cfg({
+        listen = unix_socket(INSTANCE_ID),
+        replication = {
+            unix_socket(INSTANCE_ID),
+            unix_socket("replica1"),
+            unix_socket("replica2"),
+        },
+        replication_connect_quorum = 1,
+        replication_synchro_quorum = 1,
+        replication_synchro_timeout = 10000,
+        replication_sync_timeout = 5,
+        read_only = false,
+        election_mode = "off",
+    })
+elseif INSTANCE_ID == "replica1" then
+    box.cfg({
+        listen = unix_socket(INSTANCE_ID),
+        replication = {
+            unix_socket("master"),
+            unix_socket(INSTANCE_ID),
+            unix_socket("replica2"),
+        },
+        replication_connect_quorum = 1,
+        replication_synchro_quorum = 1,
+        replication_synchro_timeout = 10000,
+        replication_sync_timeout = 5,
+        read_only = false,
+        election_mode = "off",
+    })
+else
+    assert(INSTANCE_ID == "replica2")
+    box.cfg({
+        listen = unix_socket(INSTANCE_ID),
+        replication = {
+            unix_socket("master"),
+            unix_socket("replica1"),
+            unix_socket(INSTANCE_ID),
+        },
+        replication_connect_quorum = 1,
+        replication_synchro_quorum = 1,
+        replication_synchro_timeout = 10000,
+        replication_sync_timeout = 5,
+        read_only = true,
+        election_mode = "off",
+    })
+end
+
+--box.ctl.wait_rw()
+box.once("bootstrap", function()
+    box.schema.user.grant('guest', 'super')
+end)
diff --git a/test/replication/gh-6036-order-replica1.lua b/test/replication/gh-6036-order-replica1.lua
new file mode 120000
index 000000000..82a6073a1
--- /dev/null
+++ b/test/replication/gh-6036-order-replica1.lua
@@ -0,0 +1 @@
+gh-6036-order-node.lua
\ No newline at end of file
diff --git a/test/replication/gh-6036-order-replica2.lua b/test/replication/gh-6036-order-replica2.lua
new file mode 120000
index 000000000..82a6073a1
--- /dev/null
+++ b/test/replication/gh-6036-order-replica2.lua
@@ -0,0 +1 @@
+gh-6036-order-node.lua
\ No newline at end of file
diff --git a/test/replication/gh-6036-term-order.result b/test/replication/gh-6036-term-order.result
new file mode 100644
index 000000000..6b19fc2c8
--- /dev/null
+++ b/test/replication/gh-6036-term-order.result
@@ -0,0 +1,216 @@
+-- test-run result file version 2
+--
+-- gh-6036: verify that terms are locked while we're inside the journal
+-- write routine, because parallel appliers may ignore the fact that
+-- the term is already updated but not yet written, leading to data
+-- inconsistency.
+--
+test_run = require('test_run').new()
+ | ---
+ | ...
+
+test_run:cmd('create server master with script="replication/gh-6036-order-master.lua"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('create server replica1 with script="replication/gh-6036-order-replica1.lua"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('create server replica2 with script="replication/gh-6036-order-replica2.lua"')
+ | ---
+ | - true
+ | ...
+
+test_run:cmd('start server master with wait=False')
+ | ---
+ | - true
+ | ...
+test_run:cmd('start server replica1 with wait=False')
+ | ---
+ | - true
+ | ...
+test_run:cmd('start server replica2 with wait=False')
+ | ---
+ | - true
+ | ...
+
+test_run:wait_fullmesh({"master", "replica1", "replica2"})
+ | ---
+ | ...
+
+test_run:switch("master")
+ | ---
+ | - true
+ | ...
+box.ctl.demote()
+ | ---
+ | ...
+
+test_run:switch("replica1")
+ | ---
+ | - true
+ | ...
+box.ctl.demote()
+ | ---
+ | ...
+
+test_run:switch("replica2")
+ | ---
+ | - true
+ | ...
+box.ctl.demote()
+ | ---
+ | ...
+
+--
+-- Drop the connection between the master and replica1.
+test_run:switch("master")
+ | ---
+ | - true
+ | ...
+box.cfg({                              \
+    replication = {                    \
+        "unix/:./master.sock",         \
+        "unix/:./replica2.sock",       \
+    },                                 \
+})
+ | ---
+ | ...
+test_run:switch("replica1")
+ | ---
+ | - true
+ | ...
+box.cfg({                              \
+    replication = {                    \
+        "unix/:./replica1.sock",       \
+        "unix/:./replica2.sock",       \
+    },                                 \
+})
+ | ---
+ | ...
+
+--
+-- Initiate a disk delay and remember the max term seen so far.
+test_run:switch("replica2")
+ | ---
+ | - true
+ | ...
+box.error.injection.set('ERRINJ_WAL_DELAY', true)
+ | ---
+ | - ok
+ | ...
+term_max_replica2 = box.info.synchro.promote.term_max
+ | ---
+ | ...
+
+--
+-- Ping-pong the promote action between the master and
+-- replica1 nodes; the term updates get queued on
+-- replica2 because its disk is disabled.
+test_run:switch("master")
+ | ---
+ | - true
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+box.ctl.demote()
+ | ---
+ | ...
+
+test_run:switch("replica1")
+ | ---
+ | - true
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+box.ctl.demote()
+ | ---
+ | ...
+
+test_run:switch("master")
+ | ---
+ | - true
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+
+--
+-- Since we're guarding the max promote term, make sure that:
+-- 1) The max term has not been updated yet, because the WAL
+--    is sleeping.
+-- 2) The max terms on the master and replica1 nodes are greater
+--    than ours, because the term update is locked.
+-- 3) Once the WAL is unlocked, make sure the new terms have
+--    reached us.
+test_run:switch("replica2")
+ | ---
+ | - true
+ | ...
+assert(term_max_replica2 == box.info.synchro.promote.term_max)
+ | ---
+ | - true
+ | ...
+
+term_max_master = test_run:eval('master', 'box.info.synchro.promote.term_max')[1]
+ | ---
+ | ...
+term_max_replica1 = test_run:eval('replica1', 'box.info.synchro.promote.term_max')[1]
+ | ---
+ | ...
+assert(term_max_master > term_max_replica2)
+ | ---
+ | - true
+ | ...
+assert(term_max_replica1 > term_max_replica2)
+ | ---
+ | - true
+ | ...
+term_max_wait4 = term_max_master
+ | ---
+ | ...
+if term_max_wait4 < term_max_replica1 then term_max_wait4 = term_max_replica1 end
+ | ---
+ | ...
+
+box.error.injection.set('ERRINJ_WAL_DELAY', false)
+ | ---
+ | - ok
+ | ...
+test_run:wait_cond(function() return box.info.synchro.promote.term_max == term_max_wait4 end)
+ | ---
+ | - true
+ | ...
+
+test_run:switch("default")
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server master')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server replica1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server replica2')
+ | ---
+ | - true
+ | ...
+
+test_run:cmd('delete server master')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server replica1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server replica2')
+ | ---
+ | - true
+ | ...
diff --git a/test/replication/gh-6036-term-order.test.lua b/test/replication/gh-6036-term-order.test.lua
new file mode 100644
index 000000000..01d73ac55
--- /dev/null
+++ b/test/replication/gh-6036-term-order.test.lua
@@ -0,0 +1,94 @@
+--
+-- gh-6036: verify that terms are locked while we're inside the journal
+-- write routine, because parallel appliers may ignore the fact that
+-- the term is already updated but not yet written, leading to data
+-- inconsistency.
+--
+test_run = require('test_run').new()
+
+test_run:cmd('create server master with script="replication/gh-6036-order-master.lua"')
+test_run:cmd('create server replica1 with script="replication/gh-6036-order-replica1.lua"')
+test_run:cmd('create server replica2 with script="replication/gh-6036-order-replica2.lua"')
+
+test_run:cmd('start server master with wait=False')
+test_run:cmd('start server replica1 with wait=False')
+test_run:cmd('start server replica2 with wait=False')
+
+test_run:wait_fullmesh({"master", "replica1", "replica2"})
+
+test_run:switch("master")
+box.ctl.demote()
+
+test_run:switch("replica1")
+box.ctl.demote()
+
+test_run:switch("replica2")
+box.ctl.demote()
+
+--
+-- Drop the connection between the master and replica1.
+test_run:switch("master")
+box.cfg({                              \
+    replication = {                    \
+        "unix/:./master.sock",         \
+        "unix/:./replica2.sock",       \
+    },                                 \
+})
+test_run:switch("replica1")
+box.cfg({                              \
+    replication = {                    \
+        "unix/:./replica1.sock",       \
+        "unix/:./replica2.sock",       \
+    },                                 \
+})
+
+--
+-- Initiate a disk delay and remember the max term seen so far.
+test_run:switch("replica2")
+box.error.injection.set('ERRINJ_WAL_DELAY', true)
+term_max_replica2 = box.info.synchro.promote.term_max
+
+--
+-- Ping-pong the promote action between the master and
+-- replica1 nodes; the term updates get queued on
+-- replica2 because its disk is disabled.
+test_run:switch("master")
+box.ctl.promote()
+box.ctl.demote()
+
+test_run:switch("replica1")
+box.ctl.promote()
+box.ctl.demote()
+
+test_run:switch("master")
+box.ctl.promote()
+
+--
+-- Since we're guarding the max promote term, make sure that:
+-- 1) The max term has not been updated yet, because the WAL
+--    is sleeping.
+-- 2) The max terms on the master and replica1 nodes are greater
+--    than ours, because the term update is locked.
+-- 3) Once the WAL is unlocked, make sure the new terms have
+--    reached us.
+test_run:switch("replica2")
+assert(term_max_replica2 == box.info.synchro.promote.term_max)
+
+term_max_master = test_run:eval('master', 'box.info.synchro.promote.term_max')[1]
+term_max_replica1 = test_run:eval('replica1', 'box.info.synchro.promote.term_max')[1]
+assert(term_max_master > term_max_replica2)
+assert(term_max_replica1 > term_max_replica2)
+term_max_wait4 = term_max_master
+if term_max_wait4 < term_max_replica1 then term_max_wait4 = term_max_replica1 end
+
+box.error.injection.set('ERRINJ_WAL_DELAY', false)
+test_run:wait_cond(function() return box.info.synchro.promote.term_max == term_max_wait4 end)
+
+test_run:switch("default")
+test_run:cmd('stop server master')
+test_run:cmd('stop server replica1')
+test_run:cmd('stop server replica2')
+
+test_run:cmd('delete server master')
+test_run:cmd('delete server replica1')
+test_run:cmd('delete server replica2')
diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg
index 3eee0803c..ac2bedfd9 100644
--- a/test/replication/suite.cfg
+++ b/test/replication/suite.cfg
@@ -59,6 +59,7 @@
     "gh-6094-rs-uuid-mismatch.test.lua": {},
     "gh-6127-election-join-new.test.lua": {},
     "gh-6035-applier-filter.test.lua": {},
+    "gh-6036-term-order.test.lua": {},
     "election-candidate-promote.test.lua": {},
     "*": {
         "memtx": {"engine": "memtx"},
diff --git a/test/replication/suite.ini b/test/replication/suite.ini
index 77eb95f49..16840e01f 100644
--- a/test/replication/suite.ini
+++ b/test/replication/suite.ini
@@ -3,7 +3,7 @@ core = tarantool
 script = master.lua
 description = tarantool/box, replication
 disabled = consistent.test.lua
-release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua
+release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua gh-6036-term-order.test.lua
 config = suite.cfg
 lua_libs = lua/fast_replica.lua lua/rlimit.lua
 use_unix_sockets = True
-- 
2.31.1