From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 12 Oct 2021 23:28:29 +0300
From: Cyrill Gorcunov via Tarantool-patches
Reply-To: Cyrill Gorcunov
To: Serge Petrenko
Cc: tml, Vladislav Shpilevoy
Subject: Re: [Tarantool-patches] [PATCH v22 3/3] test: add gh-6036-qsync-order test
References: <20211011191635.573685-1-gorcunov@gmail.com>
 <20211011191635.573685-4-gorcunov@gmail.com>
 <93bf8e06-afd7-49b1-4924-df2b49ca082d@tarantool.org>
In-Reply-To: <93bf8e06-afd7-49b1-4924-df2b49ca082d@tarantool.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
User-Agent: Mutt/2.0.7 (2021-05-04)
List-Id: Tarantool development patches

On Tue, Oct 12, 2021 at 12:47:06PM +0300, Serge Petrenko wrote:
> On 11.10.2021 22:16, Cyrill Gorcunov wrote:
> > Test that promotion requests are handled only when the corresponding
> > write to the WAL completes, because we update the in-memory data
> > before the write finishes.
> >
> > Note that without the patch "qsync: order access to the limbo terms"
> > this test fires the assertion
> >
> > > tarantool: src/box/txn_limbo.c:481: txn_limbo_read_rollback: Assertion `e->txn->signature >= 0' failed.
>
> Thanks for the changes!
> Sorry for the nitpicking, there are just a couple of minor comments left.

Thanks! Here is the force-pushed variant. Please take a look; hopefully
I counted the number of writes correctly. Let's wait for the CI results.
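
By the way, the counting boils down to the pattern below -- a condensed
sketch reusing the very same error-injection calls as the test (error
injection exists only in debug builds, hence the suite.ini hunk below);
which rows end up pending is my reading of the scenario:

    local errinj = box.error.injection
    -- Remember how many WAL writes this replica has performed so far.
    local write_cnt = errinj.get("ERRINJ_WAL_WRITE_COUNT")
    -- Freeze the WAL: subsequent writes get submitted but never finish.
    errinj.set("ERRINJ_WAL_DELAY", true)
    -- ...a PROMOTE and a replicated INSERT arrive and are queued...
    -- Exactly two submissions are expected to be pending by now.
    assert(errinj.get("ERRINJ_WAL_WRITE_COUNT") == write_cnt + 2)
    -- Unfreeze the WAL and let the queued writes land.
    errinj.set("ERRINJ_WAL_DELAY", false)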
---
>From 49f05ca2b31512b6555aecf1bb4d3ac1ce59729a Mon Sep 17 00:00:00 2001
From: Cyrill Gorcunov
Date: Mon, 20 Sep 2021 17:22:38 +0300
Subject: [PATCH v22 3/3] test: add gh-6036-qsync-order test

Test that promotion requests are handled only when the corresponding
write to the WAL completes, because we update the in-memory data
before the write finishes.

Note that without the patch "qsync: order access to the limbo terms"
this test fires the assertion

> tarantool: src/box/txn_limbo.c:481: txn_limbo_read_rollback: Assertion `e->txn->signature >= 0' failed.

Part-of #6036

Signed-off-by: Cyrill Gorcunov
---
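A note on the hazard being provoked: the volatile term is bumped before
the WAL write confirming it completes. A toy model of that ordering
follows (an editorial illustration only -- the names `limbo`, `term`
and `written_term` are made up, this is not Tarantool internals):

    local fiber = require('fiber')

    -- Toy state: the volatile term vs the term that reached the WAL.
    local limbo = { term = 1, written_term = 1 }

    local function apply_promote(new_term)
        limbo.term = new_term    -- memory is updated first...
        fiber.sleep(0.1)         -- ...while the "WAL write" is in flight
        limbo.written_term = new_term
    end

    -- fiber.create() runs the new fiber up to its first yield, so the
    -- volatile term is already bumped when the call returns.
    fiber.create(apply_promote, 2)

    -- A parallel applier now observes term == 2 although nothing durable
    -- confirms it yet -- the inconsistency gh-6036 is about.
    assert(limbo.term == 2 and limbo.written_term == 1)
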
 test/replication/gh-6036-qsync-order.result   | 194 ++++++++++++++++++
 test/replication/gh-6036-qsync-order.test.lua |  94 +++++++++
 test/replication/suite.cfg                    |   1 +
 test/replication/suite.ini                    |   2 +-
 4 files changed, 290 insertions(+), 1 deletion(-)
 create mode 100644 test/replication/gh-6036-qsync-order.result
 create mode 100644 test/replication/gh-6036-qsync-order.test.lua

diff --git a/test/replication/gh-6036-qsync-order.result b/test/replication/gh-6036-qsync-order.result
new file mode 100644
index 000000000..0e93d429b
--- /dev/null
+++ b/test/replication/gh-6036-qsync-order.result
@@ -0,0 +1,194 @@
+-- test-run result file version 2
+--
+-- gh-6036: verify that terms are locked while we're inside the journal
+-- write routine, because parallel appliers may ignore the fact that
+-- the term is already updated but not yet written, leading to data
+-- inconsistency.
+--
+test_run = require('test_run').new()
+ | ---
+ | ...
+
+SERVERS={"election_replica1", "election_replica2", "election_replica3"}
+ | ---
+ | ...
+test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'})
+ | ---
+ | ...
+test_run:wait_fullmesh(SERVERS)
+ | ---
+ | ...
+
+--
+-- Create a synchro space on the master node and make
+-- sure the write is processed just fine.
+test_run:switch("election_replica1")
+ | ---
+ | - true
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+s = box.schema.create_space('test', {is_sync = true})
+ | ---
+ | ...
+_ = s:create_index('pk')
+ | ---
+ | ...
+s:insert{1}
+ | ---
+ | - [1]
+ | ...
+
+test_run:wait_lsn('election_replica2', 'election_replica1')
+ | ---
+ | ...
+test_run:wait_lsn('election_replica3', 'election_replica1')
+ | ---
+ | ...
+
+--
+-- Drop connection between election_replica1 and election_replica2.
+box.cfg({                                       \
+    replication = {                             \
+        "unix/:./election_replica1.sock",       \
+        "unix/:./election_replica3.sock",       \
+    },                                          \
+})
+ | ---
+ | ...
+
+--
+-- Drop connection between election_replica2 and election_replica1.
+test_run:switch("election_replica2")
+ | ---
+ | - true
+ | ...
+box.cfg({                                       \
+    replication = {                             \
+        "unix/:./election_replica2.sock",       \
+        "unix/:./election_replica3.sock",       \
+    },                                          \
+})
+ | ---
+ | ...
+
+--
+-- Here we have the following scheme:
+--
+--      election_replica3 (will be delayed)
+--       /                        \
+-- election_replica1      election_replica2
+
+--
+-- Initiate the disk delay in a bit tricky way: the next write will
+-- fall asleep forever.
+test_run:switch("election_replica3")
+ | ---
+ | - true
+ | ...
+write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT")
+ | ---
+ | ...
+box.error.injection.set("ERRINJ_WAL_DELAY", true)
+ | ---
+ | - ok
+ | ...
+--
+-- Make election_replica2 a leader and start writing data.
+-- The PROMOTE request gets queued on election_replica3 and is
+-- not yet processed; at the same time the INSERT won't complete
+-- either, since it waits for PROMOTE completion first. Note that
+-- we enter election_replica3 as well just to make sure the
+-- PROMOTE reached it.
+test_run:switch("election_replica2")
+ | ---
+ | - true
+ | ...
+box.ctl.promote()
+ | ---
+ | ...
+test_run:switch("election_replica3")
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end)
+ | ---
+ | - true
+ | ...
+test_run:switch("election_replica2")
+ | ---
+ | - true
+ | ...
+box.space.test:insert{2}
+ | ---
+ | - [2]
+ | ...
+
+--
+-- The election_replica1 node has no clue that there is a new leader
+-- and continues writing data with an obsolete term. Since
+-- election_replica3 is delayed, the INSERT won't proceed but gets queued.
+test_run:switch("election_replica1")
+ | ---
+ | - true
+ | ...
+box.space.test:insert{3}
+ | ---
+ | - [3]
+ | ...
+
+--
+-- Finally enable election_replica3 back. Make sure the data from the new
+-- election_replica2 leader gets written while the old leader's data is ignored.
+test_run:switch("election_replica3")
+ | ---
+ | - true
+ | ...
+assert(box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") == write_cnt + 2)
+ | ---
+ | - true
+ | ...
+box.error.injection.set('ERRINJ_WAL_DELAY', false)
+ | ---
+ | - ok
+ | ...
+test_run:wait_cond(function() return box.space.test:get{2} ~= nil end)
+ | ---
+ | - true
+ | ...
+box.space.test:select{}
+ | ---
+ | - - [1]
+ |   - [2]
+ | ...
+
+test_run:switch("default")
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server election_replica1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server election_replica2')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server election_replica3')
+ | ---
+ | - true
+ | ...
+
+test_run:cmd('delete server election_replica1')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server election_replica2')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server election_replica3')
+ | ---
+ | - true
+ | ...
diff --git a/test/replication/gh-6036-qsync-order.test.lua b/test/replication/gh-6036-qsync-order.test.lua
new file mode 100644
index 000000000..20030161e
--- /dev/null
+++ b/test/replication/gh-6036-qsync-order.test.lua
@@ -0,0 +1,94 @@
+--
+-- gh-6036: verify that terms are locked while we're inside the journal
+-- write routine, because parallel appliers may ignore the fact that
+-- the term is already updated but not yet written, leading to data
+-- inconsistency.
+--
+test_run = require('test_run').new()
+
+SERVERS={"election_replica1", "election_replica2", "election_replica3"}
+test_run:create_cluster(SERVERS, "replication", {args='1 nil manual 1'})
+test_run:wait_fullmesh(SERVERS)
+
+--
+-- Create a synchro space on the master node and make
+-- sure the write is processed just fine.
+test_run:switch("election_replica1")
+box.ctl.promote()
+s = box.schema.create_space('test', {is_sync = true})
+_ = s:create_index('pk')
+s:insert{1}
+
+test_run:wait_lsn('election_replica2', 'election_replica1')
+test_run:wait_lsn('election_replica3', 'election_replica1')
+
+--
+-- Drop connection between election_replica1 and election_replica2.
+box.cfg({                                       \
+    replication = {                             \
+        "unix/:./election_replica1.sock",       \
+        "unix/:./election_replica3.sock",       \
+    },                                          \
+})
+
+--
+-- Drop connection between election_replica2 and election_replica1.
+test_run:switch("election_replica2")
+box.cfg({                                       \
+    replication = {                             \
+        "unix/:./election_replica2.sock",       \
+        "unix/:./election_replica3.sock",       \
+    },                                          \
+})
+
+--
+-- Here we have the following scheme:
+--
+--      election_replica3 (will be delayed)
+--       /                        \
+-- election_replica1      election_replica2
+
+--
+-- Initiate the disk delay in a bit tricky way: the next write will
+-- fall asleep forever.
+test_run:switch("election_replica3")
+write_cnt = box.error.injection.get("ERRINJ_WAL_WRITE_COUNT")
+box.error.injection.set("ERRINJ_WAL_DELAY", true)
+--
+-- Make election_replica2 a leader and start writing data.
+-- The PROMOTE request gets queued on election_replica3 and is
+-- not yet processed; at the same time the INSERT won't complete
+-- either, since it waits for PROMOTE completion first. Note that
+-- we enter election_replica3 as well just to make sure the
+-- PROMOTE reached it.
+test_run:switch("election_replica2")
+box.ctl.promote()
+test_run:switch("election_replica3")
+test_run:wait_cond(function() return box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") > write_cnt end)
+test_run:switch("election_replica2")
+box.space.test:insert{2}
+
+--
+-- The election_replica1 node has no clue that there is a new leader
+-- and continues writing data with an obsolete term. Since
+-- election_replica3 is delayed, the INSERT won't proceed but gets queued.
+test_run:switch("election_replica1")
+box.space.test:insert{3}
+
+--
+-- Finally enable election_replica3 back. Make sure the data from the new
+-- election_replica2 leader gets written while the old leader's data is ignored.
+test_run:switch("election_replica3")
+assert(box.error.injection.get("ERRINJ_WAL_WRITE_COUNT") == write_cnt + 2)
+box.error.injection.set('ERRINJ_WAL_DELAY', false)
+test_run:wait_cond(function() return box.space.test:get{2} ~= nil end)
+box.space.test:select{}
+
+test_run:switch("default")
+test_run:cmd('stop server election_replica1')
+test_run:cmd('stop server election_replica2')
+test_run:cmd('stop server election_replica3')
+
+test_run:cmd('delete server election_replica1')
+test_run:cmd('delete server election_replica2')
+test_run:cmd('delete server election_replica3')
diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg
index 3eee0803c..ed09b2087 100644
--- a/test/replication/suite.cfg
+++ b/test/replication/suite.cfg
@@ -59,6 +59,7 @@
     "gh-6094-rs-uuid-mismatch.test.lua": {},
     "gh-6127-election-join-new.test.lua": {},
     "gh-6035-applier-filter.test.lua": {},
+    "gh-6036-qsync-order.test.lua": {},
     "election-candidate-promote.test.lua": {},
     "*": {
         "memtx": {"engine": "memtx"},
diff --git a/test/replication/suite.ini b/test/replication/suite.ini
index 77eb95f49..080e4fbf4 100644
--- a/test/replication/suite.ini
+++ b/test/replication/suite.ini
@@ -3,7 +3,7 @@ core = tarantool
 script = master.lua
 description = tarantool/box, replication
 disabled = consistent.test.lua
-release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua
+release_disabled = catch.test.lua errinj.test.lua gc.test.lua gc_no_space.test.lua before_replace.test.lua qsync_advanced.test.lua qsync_errinj.test.lua quorum.test.lua recover_missing_xlog.test.lua sync.test.lua long_row_timeout.test.lua gh-4739-vclock-assert.test.lua gh-4730-applier-rollback.test.lua gh-5140-qsync-casc-rollback.test.lua gh-5144-qsync-dup-confirm.test.lua gh-5167-qsync-rollback-snap.test.lua gh-5430-qsync-promote-crash.test.lua gh-5430-cluster-mvcc.test.lua gh-5506-election-on-off.test.lua gh-5536-wal-limit.test.lua hang_on_synchro_fail.test.lua anon_register_gap.test.lua gh-5213-qsync-applier-order.test.lua gh-5213-qsync-applier-order-3.test.lua gh-6027-applier-error-show.test.lua gh-6032-promote-wal-write.test.lua gh-6057-qsync-confirm-async-no-wal.test.lua gh-5447-downstream-lag.test.lua gh-4040-invalid-msgpack.test.lua gh-6036-qsync-order.test.lua
 config = suite.cfg
 lua_libs = lua/fast_replica.lua lua/rlimit.lua
 use_unix_sockets = True
-- 
2.31.1