From mboxrd@z Thu Jan  1 00:00:00 1970
To: tml
Date: Mon, 26 Jul 2021 18:34:52 +0300
Message-Id: <20210726153452.113897-7-gorcunov@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210726153452.113897-1-gorcunov@gmail.com>
References: <20210726153452.113897-1-gorcunov@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [Tarantool-patches] [PATCH v8 6/6] test: replication -- add gh-6036-rollback-confirm
List-Id: Tarantool development patches
From: Cyrill Gorcunov via Tarantool-patches
Reply-To: Cyrill Gorcunov
Cc: Vladislav Shpilevoy

Follow-up #6036

Signed-off-by: Cyrill Gorcunov
---
 test/replication/gh-6036-master.lua           |   1 +
 test/replication/gh-6036-node.lua             |  33 ++++
 test/replication/gh-6036-replica.lua          |   1 +
 .../gh-6036-rollback-confirm.result           | 180 ++++++++++++++++++
 .../gh-6036-rollback-confirm.test.lua         |  88 +++++++++
 5 files changed, 303 insertions(+)
 create mode 120000 test/replication/gh-6036-master.lua
 create mode 100644 test/replication/gh-6036-node.lua
 create mode 120000 test/replication/gh-6036-replica.lua
 create mode 100644 test/replication/gh-6036-rollback-confirm.result
 create mode 100644 test/replication/gh-6036-rollback-confirm.test.lua

diff --git a/test/replication/gh-6036-master.lua b/test/replication/gh-6036-master.lua
new file mode 120000
index 000000000..65baed5de
--- /dev/null
+++ b/test/replication/gh-6036-master.lua
@@ -0,0 +1 @@
+gh-6036-node.lua
\ No newline at end of file
diff --git a/test/replication/gh-6036-node.lua b/test/replication/gh-6036-node.lua
new file mode 100644
index 000000000..ac701b7a2
--- /dev/null
+++ b/test/replication/gh-6036-node.lua
@@ -0,0 +1,33 @@
+local INSTANCE_ID = string.match(arg[0], "gh%-6036%-(.+)%.lua")
+
+local function unix_socket(name)
+    return "unix/:./" .. name .. '.sock';
+end
+
+require('console').listen(os.getenv('ADMIN'))
+
+if INSTANCE_ID == "master" then
+    box.cfg({
+        listen = unix_socket("master"),
+        replication_connect_quorum = 0,
+        election_mode = 'candidate',
+        replication_synchro_quorum = 3,
+        replication_synchro_timeout = 1000,
+    })
+elseif INSTANCE_ID == "replica" then
+    box.cfg({
+        listen = unix_socket("replica"),
+        replication = {
+            unix_socket("master"),
+            unix_socket("replica")
+        },
+        read_only = true,
+        election_mode = 'voter',
+        replication_synchro_quorum = 2,
+        replication_synchro_timeout = 1000,
+    })
+end
+
+box.once("bootstrap", function()
+    box.schema.user.grant('guest', 'super')
+end)
diff --git a/test/replication/gh-6036-replica.lua b/test/replication/gh-6036-replica.lua
new file mode 120000
index 000000000..65baed5de
--- /dev/null
+++ b/test/replication/gh-6036-replica.lua
@@ -0,0 +1 @@
+gh-6036-node.lua
\ No newline at end of file
diff --git a/test/replication/gh-6036-rollback-confirm.result b/test/replication/gh-6036-rollback-confirm.result
new file mode 100644
index 000000000..ec5403d5c
--- /dev/null
+++ b/test/replication/gh-6036-rollback-confirm.result
@@ -0,0 +1,180 @@
+-- test-run result file version 2
+--
+-- gh-6036: Test for record collision detection. We have a cluster
+-- of two nodes: master and replica. The master initiates a synchro
+-- write but fails to gather a quorum. Before it rolls back the
+-- record, a network breakage occurs and the replica lives with dirty
+-- data while the master node goes offline. The replica becomes the
+-- new raft leader and commits the dirty data; at the same time the
+-- master rolls back this record and tries to reconnect to the new
+-- raft leader. Such a connection should be refused because the old
+-- master node is no longer consistent.
+--
+test_run = require('test_run').new()
+ | ---
+ | ...
+
+test_run:cmd('create server master with script="replication/gh-6036-master.lua"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('create server replica with script="replication/gh-6036-replica.lua"')
+ | ---
+ | - true
+ | ...
+
+test_run:cmd('start server master')
+ | ---
+ | - true
+ | ...
+test_run:cmd('start server replica')
+ | ---
+ | - true
+ | ...
+
+--
+-- Connect the master to the replica and write a record. Since the
+-- quorum value is bigger than the number of nodes in the cluster,
+-- the record will be rolled back later.
+test_run:switch('master')
+ | ---
+ | - true
+ | ...
+box.cfg({                                       \
+    replication = {                             \
+        "unix/:./master.sock",                  \
+        "unix/:./replica.sock",                 \
+    },                                          \
+})
+ | ---
+ | ...
+_ = box.schema.create_space('sync', {is_sync = true})
+ | ---
+ | ...
+_ = box.space.sync:create_index('pk')
+ | ---
+ | ...
+
+--
+-- Wait for the record to appear on the master.
+f = require('fiber').create(function() box.space.sync:replace{1} end)
+ | ---
+ | ...
+test_run:wait_cond(function() return box.space.sync:get({1}) ~= nil end, 100)
+ | ---
+ | - true
+ | ...
+box.space.sync:select{}
+ | ---
+ | - - [1]
+ | ...
+
+--
+-- Wait until the record from the master gets written and then
+-- drop the replication.
+test_run:switch('replica')
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return box.space.sync:get({1}) ~= nil end, 100)
+ | ---
+ | - true
+ | ...
+box.space.sync:select{}
+ | ---
+ | - - [1]
+ | ...
+box.cfg{replication = {}}
+ | ---
+ | ...
+
+--
+-- Then jump back to the master and drop the replication,
+-- so the unconfirmed record gets rolled back.
+test_run:switch('master')
+ | ---
+ | - true
+ | ...
+box.cfg({                                       \
+    replication = {},                           \
+    replication_synchro_timeout = 0.001,        \
+    election_mode = 'manual',                   \
+})
+ | ---
+ | ...
+while f:status() ~= 'dead' do require('fiber').sleep(0.1) end
+ | ---
+ | ...
+test_run:wait_cond(function() return box.space.sync:get({1}) == nil end, 100)
+ | ---
+ | - true
+ | ...
+
+--
+-- Force the replica to become the RAFT leader and
+-- commit this new record.
+test_run:switch('replica')
+ | ---
+ | - true
+ | ...
+box.cfg({                                       \
+    replication_synchro_quorum = 1,             \
+    election_mode = 'manual'                    \
+})
+ | ---
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+box.space.sync:select{}
+ | ---
+ | - - [1]
+ | ...
+
+--
+-- Connect the master back to the replica; the connection should
+-- be refused.
+test_run:switch('master')
+ | ---
+ | - true
+ | ...
+box.cfg({                                       \
+    replication = {                             \
+        "unix/:./replica.sock",                 \
+    },                                          \
+})
+ | ---
+ | ...
+box.space.sync:select{}
+ | ---
+ | - []
+ | ...
+assert(test_run:grep_log('master', 'rejecting PROMOTE') ~= nil);
+ | ---
+ | - true
+ | ...
+assert(test_run:grep_log('master', 'ER_UNSUPPORTED') ~= nil);
+ | ---
+ | - true
+ | ...
+
+test_run:switch('default')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server master')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server master')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server replica')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server replica')
+ | ---
+ | - true
+ | ...
diff --git a/test/replication/gh-6036-rollback-confirm.test.lua b/test/replication/gh-6036-rollback-confirm.test.lua
new file mode 100644
index 000000000..dbeb9f496
--- /dev/null
+++ b/test/replication/gh-6036-rollback-confirm.test.lua
@@ -0,0 +1,88 @@
+--
+-- gh-6036: Test for record collision detection. We have a cluster
+-- of two nodes: master and replica. The master initiates a synchro
+-- write but fails to gather a quorum. Before it rolls back the
+-- record, a network breakage occurs and the replica lives with dirty
+-- data while the master node goes offline. The replica becomes the
+-- new raft leader and commits the dirty data; at the same time the
+-- master rolls back this record and tries to reconnect to the new
+-- raft leader. Such a connection should be refused because the old
+-- master node is no longer consistent.
+--
+test_run = require('test_run').new()
+
+test_run:cmd('create server master with script="replication/gh-6036-master.lua"')
+test_run:cmd('create server replica with script="replication/gh-6036-replica.lua"')
+
+test_run:cmd('start server master')
+test_run:cmd('start server replica')
+
+--
+-- Connect the master to the replica and write a record. Since the
+-- quorum value is bigger than the number of nodes in the cluster,
+-- the record will be rolled back later.
+test_run:switch('master')
+box.cfg({                                       \
+    replication = {                             \
+        "unix/:./master.sock",                  \
+        "unix/:./replica.sock",                 \
+    },                                          \
+})
+_ = box.schema.create_space('sync', {is_sync = true})
+_ = box.space.sync:create_index('pk')
+
+--
+-- Wait for the record to appear on the master.
+f = require('fiber').create(function() box.space.sync:replace{1} end)
+test_run:wait_cond(function() return box.space.sync:get({1}) ~= nil end, 100)
+box.space.sync:select{}
+
+--
+-- Wait until the record from the master gets written and then
+-- drop the replication.
+test_run:switch('replica')
+test_run:wait_cond(function() return box.space.sync:get({1}) ~= nil end, 100)
+box.space.sync:select{}
+box.cfg{replication = {}}
+
+--
+-- Then jump back to the master and drop the replication,
+-- so the unconfirmed record gets rolled back.
+test_run:switch('master')
+box.cfg({                                       \
+    replication = {},                           \
+    replication_synchro_timeout = 0.001,        \
+    election_mode = 'manual',                   \
+})
+while f:status() ~= 'dead' do require('fiber').sleep(0.1) end
+test_run:wait_cond(function() return box.space.sync:get({1}) == nil end, 100)
+
+--
+-- Force the replica to become the RAFT leader and
+-- commit this new record.
+test_run:switch('replica')
+box.cfg({                                       \
+    replication_synchro_quorum = 1,             \
+    election_mode = 'manual'                    \
+})
+box.ctl.promote()
+box.space.sync:select{}
+
+--
+-- Connect the master back to the replica; the connection should
+-- be refused.
+test_run:switch('master')
+box.cfg({                                       \
+    replication = {                             \
+        "unix/:./replica.sock",                 \
+    },                                          \
+})
+box.space.sync:select{}
+assert(test_run:grep_log('master', 'rejecting PROMOTE') ~= nil);
+assert(test_run:grep_log('master', 'ER_UNSUPPORTED') ~= nil);
+
+test_run:switch('default')
+test_run:cmd('stop server master')
+test_run:cmd('delete server master')
+test_run:cmd('stop server replica')
+test_run:cmd('delete server replica')
-- 
2.31.1