Tarantool development patches archive
From: Serge Petrenko via Tarantool-patches <tarantool-patches@dev.tarantool.org>
To: Cyrill Gorcunov <gorcunov@gmail.com>,
	tml <tarantool-patches@dev.tarantool.org>
Cc: Vladislav Shpilevoy <v.shpilevoy@tarantool.org>
Subject: Re: [Tarantool-patches] [PATCH v5 2/3] test: add a test for wal_cleanup_delay option
Date: Fri, 26 Mar 2021 16:37:26 +0300	[thread overview]
Message-ID: <7d84bf90-7306-ad25-9f0f-74867c100fad@tarantool.org>
In-Reply-To: <20210326120605.2160131-3-gorcunov@gmail.com>



26.03.2021 15:06, Cyrill Gorcunov writes:
> Part-of #5806
>
> Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
> ---
>   test/replication/gh-5806-master.lua           |   8 +
>   test/replication/gh-5806-slave.lua            |   8 +
>   test/replication/gh-5806-xlog-cleanup.result  | 435 ++++++++++++++++++
>   .../replication/gh-5806-xlog-cleanup.test.lua | 188 ++++++++
>   4 files changed, 639 insertions(+)
>   create mode 100644 test/replication/gh-5806-master.lua
>   create mode 100644 test/replication/gh-5806-slave.lua
>   create mode 100644 test/replication/gh-5806-xlog-cleanup.result
>   create mode 100644 test/replication/gh-5806-xlog-cleanup.test.lua
>
> diff --git a/test/replication/gh-5806-master.lua b/test/replication/gh-5806-master.lua
> new file mode 100644
> index 000000000..bc15dab67
> --- /dev/null
> +++ b/test/replication/gh-5806-master.lua
> @@ -0,0 +1,8 @@
> +#!/usr/bin/env tarantool
> +
> +require('console').listen(os.getenv('ADMIN'))
> +
> +box.cfg({
> +    listen              = os.getenv("LISTEN"),
> +    wal_cleanup_delay   = tonumber(arg[1]) or 0,
> +})
> diff --git a/test/replication/gh-5806-slave.lua b/test/replication/gh-5806-slave.lua
> new file mode 100644
> index 000000000..3abb3e035
> --- /dev/null
> +++ b/test/replication/gh-5806-slave.lua
> @@ -0,0 +1,8 @@
> +#!/usr/bin/env tarantool
> +
> +require('console').listen(os.getenv('ADMIN'))
> +
> +box.cfg({
> +    listen              = os.getenv("LISTEN"),
> +    replication         = os.getenv("MASTER"),
> +})

Hi! Thanks for the fixes!

You may use `replica.lua` here freely. It takes the same parameters and
makes the same box.cfg() call, except that it also sets
`replication_timeout` and `memtx_memory`. These two options are not a
problem. In fact, `replica.lua` is used in almost all replication tests.

So there's no reason to introduce a new file for your needs.
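For illustration, a minimal sketch of how Case 1 could spawn the replica
from the shared script instead of gh-5806-slave.lua (assuming
`replica.lua` keeps its usual MASTER/LISTEN environment handling; the
exact defaults it sets may differ slightly):

    -- Reuse the stock replica script: with rpl_master=master test-run
    -- exports MASTER for it, so box.cfg{replication = ...} inside
    -- replica.lua points at the master automatically.
    test_run:cmd('create server replica with rpl_master=master,\
                  script="replication/replica.lua"')
    test_run:cmd('start server replica with wait=True, wait_load=True')

The rest of the test would stay unchanged.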

> diff --git a/test/replication/gh-5806-xlog-cleanup.result b/test/replication/gh-5806-xlog-cleanup.result
> new file mode 100644
> index 000000000..e20784bcc
> --- /dev/null
> +++ b/test/replication/gh-5806-xlog-cleanup.result
> @@ -0,0 +1,435 @@
> +-- test-run result file version 2
> +--
> +-- gh-5806: defer xlog cleanup to keep xlogs until
> +-- the replicas present in "_cluster" are connected.
> +-- Otherwise we get an XlogGapError, since the master
> +-- may advance far ahead of the replica, and the
> +-- replica won't be able to reconnect without a full
> +-- rebootstrap.
> +--
> +
> +fiber = require('fiber')
> + | ---
> + | ...
> +test_run = require('test_run').new()
> + | ---
> + | ...
> +engine = test_run:get_cfg('engine')
> + | ---
> + | ...
> +
> +--
> +-- Case 1.
> +--
> +-- First let's make sure we get an XlogGapError
> +-- when wal_cleanup_delay is not used.
> +--
> +
> +test_run:cmd('create server master with script="replication/gh-5806-master.lua"')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('start server master with wait=True, wait_load=True')
> + | ---
> + | - true
> + | ...
> +
> +test_run:switch('master')
> + | ---
> + | - true
> + | ...
> +box.schema.user.grant('guest', 'replication')
> + | ---
> + | ...
> +
> +--
> +-- Keep a small number of snapshots to make the
> +-- cleanup procedure more aggressive.
> +box.cfg{checkpoint_count = 1}
> + | ---
> + | ...
> +
> +engine = test_run:get_cfg('engine')
> + | ---
> + | ...
> +s = box.schema.space.create('test', {engine = engine})
> + | ---
> + | ...
> +_ = s:create_index('pk')
> + | ---
> + | ...
> +
> +test_run:switch('default')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('create server replica with rpl_master=master,\
> +              script="replication/gh-5806-slave.lua"')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('start server replica with wait=True, wait_load=True')
> + | ---
> + | - true
> + | ...
> +
> +--
> +-- On the replica we create a space of its own, which allows a
> +-- more complex scenario and prevents the replica from rejoining
> +-- automatically (auto-rejoin is impossible when it would mean
> +-- losing the replica's own data). This allows us to
> +-- trigger an XlogGapError in the log.
> +test_run:switch('replica')
> + | ---
> + | - true
> + | ...
> +box.cfg{checkpoint_count = 1}
> + | ---
> + | ...
> +s = box.schema.space.create('testreplica')
> + | ---
> + | ...
> +_ = s:create_index('pk')
> + | ---
> + | ...
> +box.space.testreplica:insert({1})
> + | ---
> + | - [1]
> + | ...
> +box.snapshot()
> + | ---
> + | - ok
> + | ...
> +
> +--
> +-- Stop the replica node and generate
> +-- xlogs on the master.
> +test_run:switch('master')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server replica')
> + | ---
> + | - true
> + | ...
> +
> +box.space.test:insert({1})
> + | ---
> + | - [1]
> + | ...
> +box.snapshot()
> + | ---
> + | - ok
> + | ...
> +
> +--
> +-- We need to restart the master node: otherwise the replica
> +-- would prevent us from removing the old xlog, because it is
> +-- tracked by a gc consumer kept in memory while the master
> +-- node is running.
> +--
> +-- Once restarted, we write a new record into the master's
> +-- space and make a snapshot, which removes the old xlog the
> +-- replica needs to subscribe, leading to the XlogGapError
> +-- we want to test.
> +test_run:cmd('restart server master with wait_load=True')
> + |
> +box.space.test:insert({2})
> + | ---
> + | - [2]
> + | ...
> +box.snapshot()
> + | ---
> + | - ok
> + | ...
> +assert(box.info.gc().is_paused == false)
> + | ---
> + | - true
> + | ...
> +
> +--
> +-- Start replica and wait for error.
> +test_run:cmd('start server replica with wait=False, wait_load=False')
> + | ---
> + | - true
> + | ...
> +
> +--
> +-- Wait for the error to appear; 60 seconds should be more than
> +-- enough, usually it happens within a couple of seconds.
> +test_run:switch('default')
> + | ---
> + | - true
> + | ...
> +test_run:wait_log('master', 'XlogGapError', nil, 60) ~= nil
> + | ---
> + | - true
> + | ...
> +
> +--
> +-- Cleanup.
> +test_run:cmd('stop server master')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('cleanup server master')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('delete server master')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server replica')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('cleanup server replica')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('delete server replica')
> + | ---
> + | - true
> + | ...
> +
> +--
> +-- Case 2.
> +--
> +-- Let's make sure we don't get an XlogGapError when
> +-- wal_cleanup_delay is used. The code is almost the same as
> +-- in Case 1, except that we don't disable the cleanup fiber
> +-- but delay it for up to an hour until the replica is up
> +-- and running.
> +--
> +
> +test_run:cmd('create server master with script="replication/gh-5806-master.lua"')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('start server master with args="3600", wait=True, wait_load=True')
> + | ---
> + | - true
> + | ...
> +
> +test_run:switch('master')
> + | ---
> + | - true
> + | ...
> +box.schema.user.grant('guest', 'replication')
> + | ---
> + | ...
> +
> +box.cfg{checkpoint_count = 1}
> + | ---
> + | ...
> +
> +engine = test_run:get_cfg('engine')
> + | ---
> + | ...
> +s = box.schema.space.create('test', {engine = engine})
> + | ---
> + | ...
> +_ = s:create_index('pk')
> + | ---
> + | ...
> +
> +test_run:switch('default')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('create server replica with rpl_master=master,\
> +              script="replication/gh-5806-slave.lua"')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('start server replica with wait=True, wait_load=True')
> + | ---
> + | - true
> + | ...
> +
> +test_run:switch('replica')
> + | ---
> + | - true
> + | ...
> +box.cfg{checkpoint_count = 1}
> + | ---
> + | ...
> +s = box.schema.space.create('testreplica')
> + | ---
> + | ...
> +_ = s:create_index('pk')
> + | ---
> + | ...
> +box.space.testreplica:insert({1})
> + | ---
> + | - [1]
> + | ...
> +box.snapshot()
> + | ---
> + | - ok
> + | ...
> +
> +test_run:switch('master')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server replica')
> + | ---
> + | - true
> + | ...
> +
> +box.space.test:insert({1})
> + | ---
> + | - [1]
> + | ...
> +box.snapshot()
> + | ---
> + | - ok
> + | ...
> +
> +test_run:cmd('restart server master with args="3600", wait=True, wait_load=True')
> + |
> +box.space.test:insert({2})
> + | ---
> + | - [2]
> + | ...
> +box.snapshot()
> + | ---
> + | - ok
> + | ...
> +assert(box.info.gc().is_paused == true)
> + | ---
> + | - true
> + | ...
> +
> +test_run:cmd('start server replica with wait=True, wait_load=True')
> + | ---
> + | - true
> + | ...
> +
> +--
> +-- Make sure no error happened.
> +test_run:switch('default')
> + | ---
> + | - true
> + | ...
> +assert(test_run:grep_log("master", "XlogGapError") == nil)
> + | ---
> + | - true
> + | ...
> +
> +test_run:cmd('stop server master')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('cleanup server master')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('delete server master')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server replica')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('cleanup server replica')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('delete server replica')
> + | ---
> + | - true
> + | ...
> +--
> +--
> +-- Case 3: fill _cluster with a replica, then delete the
> +-- replica so that the master's cleanup is left in the "paused"
> +-- state, and then simply decrease the timeout to make the
> +-- cleanup fiber work again.
> +--
> +test_run:cmd('create server master with script="replication/gh-5806-master.lua"')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('start server master with args="3600", wait=True, wait_load=True')
> + | ---
> + | - true
> + | ...
> +
> +test_run:switch('master')
> + | ---
> + | - true
> + | ...
> +box.schema.user.grant('guest', 'replication')
> + | ---
> + | ...
> +
> +test_run:switch('default')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('create server replica with rpl_master=master,\
> +              script="replication/gh-5806-slave.lua"')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('start server replica with wait=True, wait_load=True')
> + | ---
> + | - true
> + | ...
> +
> +test_run:switch('master')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server replica')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('cleanup server replica')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('delete server replica')
> + | ---
> + | - true
> + | ...
> +
> +test_run:cmd('restart server master with args="3600", wait=True, wait_load=True')
> + |
> +assert(box.info.gc().is_paused == true)
> + | ---
> + | - true
> + | ...
> +
> +test_run:switch('master')
> + | ---
> + | - true
> + | ...
> +box.cfg{wal_cleanup_delay = 0.01}
> + | ---
> + | ...
> +test_run:wait_cond(function() return box.info.gc().is_paused == false end)
> + | ---
> + | - true
> + | ...
> +
> +test_run:switch('default')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('stop server master')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('cleanup server master')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('delete server master')
> + | ---
> + | - true
> + | ...
> diff --git a/test/replication/gh-5806-xlog-cleanup.test.lua b/test/replication/gh-5806-xlog-cleanup.test.lua
> new file mode 100644
> index 000000000..ea3a35294
> --- /dev/null
> +++ b/test/replication/gh-5806-xlog-cleanup.test.lua
> @@ -0,0 +1,188 @@
> +--
> +-- gh-5806: defer xlog cleanup to keep xlogs until
> +-- the replicas present in "_cluster" are connected.
> +-- Otherwise we get an XlogGapError, since the master
> +-- may advance far ahead of the replica, and the
> +-- replica won't be able to reconnect without a full
> +-- rebootstrap.
> +--
> +
> +fiber = require('fiber')
> +test_run = require('test_run').new()
> +engine = test_run:get_cfg('engine')
> +
> +--
> +-- Case 1.
> +--
> +-- First let's make sure we get an XlogGapError
> +-- when wal_cleanup_delay is not used.
> +--
> +
> +test_run:cmd('create server master with script="replication/gh-5806-master.lua"')
> +test_run:cmd('start server master with wait=True, wait_load=True')
> +
> +test_run:switch('master')
> +box.schema.user.grant('guest', 'replication')
> +
> +--
> +-- Keep a small number of snapshots to make the
> +-- cleanup procedure more aggressive.
> +box.cfg{checkpoint_count = 1}
> +
> +engine = test_run:get_cfg('engine')
> +s = box.schema.space.create('test', {engine = engine})
> +_ = s:create_index('pk')
> +
> +test_run:switch('default')
> +test_run:cmd('create server replica with rpl_master=master,\
> +              script="replication/gh-5806-slave.lua"')
> +test_run:cmd('start server replica with wait=True, wait_load=True')
> +
> +--
> +-- On the replica we create a space of its own, which allows a
> +-- more complex scenario and prevents the replica from rejoining
> +-- automatically (auto-rejoin is impossible when it would mean
> +-- losing the replica's own data). This allows us to
> +-- trigger an XlogGapError in the log.
> +test_run:switch('replica')
> +box.cfg{checkpoint_count = 1}
> +s = box.schema.space.create('testreplica')
> +_ = s:create_index('pk')
> +box.space.testreplica:insert({1})
> +box.snapshot()
> +
> +--
> +-- Stop the replica node and generate
> +-- xlogs on the master.
> +test_run:switch('master')
> +test_run:cmd('stop server replica')
> +
> +box.space.test:insert({1})
> +box.snapshot()
> +
> +--
> +-- We need to restart the master node: otherwise the replica
> +-- would prevent us from removing the old xlog, because it is
> +-- tracked by a gc consumer kept in memory while the master
> +-- node is running.
> +--
> +-- Once restarted, we write a new record into the master's
> +-- space and make a snapshot, which removes the old xlog the
> +-- replica needs to subscribe, leading to the XlogGapError
> +-- we want to test.
> +test_run:cmd('restart server master with wait_load=True')
> +box.space.test:insert({2})
> +box.snapshot()
> +assert(box.info.gc().is_paused == false)
> +
> +--
> +-- Start replica and wait for error.
> +test_run:cmd('start server replica with wait=False, wait_load=False')
> +
> +--
> +-- Wait for the error to appear; 60 seconds should be more than
> +-- enough, usually it happens within a couple of seconds.
> +test_run:switch('default')
> +test_run:wait_log('master', 'XlogGapError', nil, 60) ~= nil
> +
> +--
> +-- Cleanup.
> +test_run:cmd('stop server master')
> +test_run:cmd('cleanup server master')
> +test_run:cmd('delete server master')
> +test_run:cmd('stop server replica')
> +test_run:cmd('cleanup server replica')
> +test_run:cmd('delete server replica')
> +
> +--
> +-- Case 2.
> +--
> +-- Let's make sure we don't get an XlogGapError when
> +-- wal_cleanup_delay is used. The code is almost the same as
> +-- in Case 1, except that we don't disable the cleanup fiber
> +-- but delay it for up to an hour until the replica is up
> +-- and running.
> +--
> +
> +test_run:cmd('create server master with script="replication/gh-5806-master.lua"')
> +test_run:cmd('start server master with args="3600", wait=True, wait_load=True')
> +
> +test_run:switch('master')
> +box.schema.user.grant('guest', 'replication')
> +
> +box.cfg{checkpoint_count = 1}
> +
> +engine = test_run:get_cfg('engine')
> +s = box.schema.space.create('test', {engine = engine})
> +_ = s:create_index('pk')
> +
> +test_run:switch('default')
> +test_run:cmd('create server replica with rpl_master=master,\
> +              script="replication/gh-5806-slave.lua"')
> +test_run:cmd('start server replica with wait=True, wait_load=True')
> +
> +test_run:switch('replica')
> +box.cfg{checkpoint_count = 1}
> +s = box.schema.space.create('testreplica')
> +_ = s:create_index('pk')
> +box.space.testreplica:insert({1})
> +box.snapshot()
> +
> +test_run:switch('master')
> +test_run:cmd('stop server replica')
> +
> +box.space.test:insert({1})
> +box.snapshot()
> +
> +test_run:cmd('restart server master with args="3600", wait=True, wait_load=True')
> +box.space.test:insert({2})
> +box.snapshot()
> +assert(box.info.gc().is_paused == true)
> +
> +test_run:cmd('start server replica with wait=True, wait_load=True')
> +
> +--
> +-- Make sure no error happened.
> +test_run:switch('default')
> +assert(test_run:grep_log("master", "XlogGapError") == nil)
> +
> +test_run:cmd('stop server master')
> +test_run:cmd('cleanup server master')
> +test_run:cmd('delete server master')
> +test_run:cmd('stop server replica')
> +test_run:cmd('cleanup server replica')
> +test_run:cmd('delete server replica')
> +--
> +--
> +-- Case 3: fill _cluster with a replica, then delete the
> +-- replica so that the master's cleanup is left in the "paused"
> +-- state, and then simply decrease the timeout to make the
> +-- cleanup fiber work again.
> +--
> +test_run:cmd('create server master with script="replication/gh-5806-master.lua"')
> +test_run:cmd('start server master with args="3600", wait=True, wait_load=True')
> +
> +test_run:switch('master')
> +box.schema.user.grant('guest', 'replication')
> +
> +test_run:switch('default')
> +test_run:cmd('create server replica with rpl_master=master,\
> +              script="replication/gh-5806-slave.lua"')
> +test_run:cmd('start server replica with wait=True, wait_load=True')
> +
> +test_run:switch('master')
> +test_run:cmd('stop server replica')
> +test_run:cmd('cleanup server replica')
> +test_run:cmd('delete server replica')
> +
> +test_run:cmd('restart server master with args="3600", wait=True, wait_load=True')
> +assert(box.info.gc().is_paused == true)
> +
> +test_run:switch('master')
> +box.cfg{wal_cleanup_delay = 0.01}
> +test_run:wait_cond(function() return box.info.gc().is_paused == false end)
> +
> +test_run:switch('default')
> +test_run:cmd('stop server master')
> +test_run:cmd('cleanup server master')
> +test_run:cmd('delete server master')

-- 
Serge Petrenko


Thread overview: 12+ messages
2021-03-26 12:06 [Tarantool-patches] [PATCH v5 0/3] gc/xlog: delay xlog cleanup until relays are subscribed Cyrill Gorcunov via Tarantool-patches
2021-03-26 12:06 ` [Tarantool-patches] [PATCH v5 1/3] " Cyrill Gorcunov via Tarantool-patches
2021-03-26 13:42   ` Serge Petrenko via Tarantool-patches
2021-03-26 19:45   ` Vladislav Shpilevoy via Tarantool-patches
2021-03-26 20:57     ` Cyrill Gorcunov via Tarantool-patches
2021-03-26 21:59     ` Cyrill Gorcunov via Tarantool-patches
2021-03-26 12:06 ` [Tarantool-patches] [PATCH v5 2/3] test: add a test for wal_cleanup_delay option Cyrill Gorcunov via Tarantool-patches
2021-03-26 13:37   ` Serge Petrenko via Tarantool-patches [this message]
2021-03-26 13:57     ` Cyrill Gorcunov via Tarantool-patches
2021-03-26 19:45   ` Vladislav Shpilevoy via Tarantool-patches
2021-03-26 12:06 ` [Tarantool-patches] [PATCH v5 3/3] test: box-tap/gc -- add test for is_paused field Cyrill Gorcunov via Tarantool-patches
2021-03-26 12:08 ` [Tarantool-patches] [PATCH v5 0/3] gc/xlog: delay xlog cleanup until relays are subscribed Cyrill Gorcunov via Tarantool-patches
