Tarantool development patches archive
From: Sergey Ostanevich via Tarantool-patches <tarantool-patches@dev.tarantool.org>
To: Yan Shtunder <ya.shtunder@gmail.com>
Cc: tarantool-patches@dev.tarantool.org
Subject: Re: [Tarantool-patches] [PATCH] replication: fill replicaset.applier.vclock after local recovery
Date: Mon, 9 Aug 2021 16:04:27 +0300	[thread overview]
Message-ID: <7F04672A-9416-4445-AC8A-96EC8457DF5C@tarantool.org> (raw)
In-Reply-To: <20210809100931.14367-1-ya.shtunder@gmail.com>

Hi! Thanks for the patch!

Just some minor updates to the message.

LGTM with changes applied.

Regards,
Sergos

> On 9 Aug 2021, at 13:09, Yan Shtunder <ya.shtunder@gmail.com> wrote:
> 
> replicaset.applier.vclock is initialized in replication_init(),
> which happens before local recovery. If some changes are come
> frome some instance via applier that applier.vclock will equal 0.
  ^^^   ^^^               ^^^^^   ^^^^                   ^^
  from   an          replication   the                   be

> This means that if some wild master send this node already applied
                                     ^
                                    will

> data, the node will apply the same data twice.
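
To make the failure mode concrete: the applier decides whether to apply
an incoming row by comparing its LSN with what replicaset.applier.vclock
already records for the row's origin, so an all-zero vclock makes every
old row look new. A self-contained toy sketch of that idea (plain arrays
instead of the real struct vclock, all toy names invented for
illustration):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy stand-in for struct vclock: one confirmed LSN per replica id. */
struct toy_vclock {
	int64_t lsn[32];
};

/* A row should be applied only if it is newer than what the applier
 * already tracks for its origin; otherwise it must be filtered out. */
static bool
row_is_new(const struct toy_vclock *applier_vclock,
	   uint32_t replica_id, int64_t lsn)
{
	return lsn > applier_vclock->lsn[replica_id];
}

int
main(void)
{
	/* Pre-fix state: replication_init() ran before local recovery,
	 * so the vclock is all zeros even though LSNs 1..10 from
	 * replica id 1 were already applied locally. */
	struct toy_vclock applier_vclock = {{0}};

	/* A stale master resends LSN 5 from replica id 1: the filter
	 * wrongly treats it as new, so it would be applied twice. */
	assert(row_is_new(&applier_vclock, 1, 5));
	return 0;
}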
> 
> Closes #6028
> ---
> Issue: https://github.com/tarantool/tarantool/issues/6028
> Patch: https://github.com/tarantool/tarantool/tree/yshtunder/gh-6028-applier-vclock
> 
> src/box/applier.cc                            |  5 ++
> src/box/box.cc                                |  7 +++
> src/lib/core/errinj.h                         |  1 +
> test/box/errinj.result                        |  1 +
> test/replication/gh-6028-replica.lua          | 13 ++++
> .../gh-6028-vclock-is-empty.result            | 60 +++++++++++++++++++
> .../gh-6028-vclock-is-empty.test.lua          | 24 ++++++++
> 7 files changed, 111 insertions(+)
> create mode 100644 test/replication/gh-6028-replica.lua
> create mode 100644 test/replication/gh-6028-vclock-is-empty.result
> create mode 100644 test/replication/gh-6028-vclock-is-empty.test.lua
> 
> diff --git a/src/box/applier.cc b/src/box/applier.cc
> index 07fe7f5c7..9855b8d37 100644
> --- a/src/box/applier.cc
> +++ b/src/box/applier.cc
> @@ -1230,6 +1230,11 @@ applier_subscribe(struct applier *applier)
> 	struct vclock vclock;
> 	vclock_create(&vclock);
> 	vclock_copy(&vclock, &replicaset.vclock);
> +
> +	ERROR_INJECT(ERRINJ_REPLICASET_VCLOCK, {
> +		vclock_create(&vclock);
> +	});
> +
> 	/*
> 	 * Stop accepting local rows coming from a remote
> 	 * instance as soon as local WAL starts accepting writes.
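
A side note for readers who have not used the errinj machinery: the
block under ERROR_INJECT() runs only when a test enables the named flag
(the replica script further down toggles it through
box.error.injection.set). Here it re-creates the local vclock, i.e.
zeroes the copy just taken from replicaset.vclock, so the subscription
is issued as if the replica had applied nothing and the master resends
already-applied rows, which is the "wild master" case from the commit
message. Roughly, and without claiming to match the real macro in
errinj.h, the pattern behaves like this toy stand-in:

#include <stdbool.h>
#include <string.h>

struct toy_vclock { long long lsn[32]; };

/* Toy flag standing in for ERRINJ_REPLICASET_VCLOCK's bparam. */
static bool toy_errinj_replicaset_vclock = false;

/* Counterpart of the injected block: forget whatever local recovery
 * put into the vclock, bringing back the broken state under test. */
static void
toy_maybe_drop_vclock(struct toy_vclock *vclock)
{
	if (toy_errinj_replicaset_vclock)
		memset(vclock, 0, sizeof(*vclock));
}

int
main(void)
{
	struct toy_vclock vclock = { .lsn = { [1] = 10 } };
	toy_errinj_replicaset_vclock = true;  /* what the test enables */
	toy_maybe_drop_vclock(&vclock);
	return vclock.lsn[1] == 0 ? 0 : 1;    /* vclock now looks empty */
}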
> diff --git a/src/box/box.cc b/src/box/box.cc
> index ab7d983c9..f5cd63c9e 100644
> --- a/src/box/box.cc
> +++ b/src/box/box.cc
> @@ -3451,6 +3451,13 @@ box_cfg_xc(void)
> 		bootstrap(&instance_uuid, &replicaset_uuid,
> 			  &is_bootstrap_leader);
> 	}
> +
> +	/*
> +	 * replicaset.applier.vclock is filled with the real
> +	 * value now that local recovery has already completed.
> +	 */
> +	vclock_copy(&replicaset.applier.vclock, &replicaset.vclock);
> +
> 	fiber_gc();
> 
> 	/*
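
To spell out why this single vclock_copy() is enough: by the time
box_cfg_xc() gets here, local recovery has finished and
replicaset.vclock reflects everything already in the local WAL, so
seeding the applier bookkeeping from it lets the duplicate filter drop
rows the node has already applied. Continuing the toy model from the
earlier sketch, as a self-contained snippet (memcpy stands in for
vclock_copy(), all toy names invented):

#include <assert.h>
#include <string.h>

struct toy_vclock { long long lsn[32]; };

int
main(void)
{
	/* State right after local recovery: the node already holds
	 * LSNs up to 10 originating from replica id 1. */
	struct toy_vclock replicaset_vclock = { .lsn = { [1] = 10 } };
	struct toy_vclock applier_vclock;

	/* The patch's one-liner, modelled with a plain copy: seed the
	 * applier bookkeeping from the recovered state. */
	memcpy(&applier_vclock, &replicaset_vclock, sizeof(applier_vclock));

	/* An already-applied row (LSN 5 from replica id 1) is no longer
	 * treated as new, so it cannot be applied a second time. */
	assert(!(5 > applier_vclock.lsn[1]));
	return 0;
}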
> diff --git a/src/lib/core/errinj.h b/src/lib/core/errinj.h
> index 359174b16..fcd856fbb 100644
> --- a/src/lib/core/errinj.h
> +++ b/src/lib/core/errinj.h
> @@ -152,6 +152,7 @@ struct errinj {
> 	_(ERRINJ_STDIN_ISATTY, ERRINJ_INT, {.iparam = -1}) \
> 	_(ERRINJ_SNAP_COMMIT_FAIL, ERRINJ_BOOL, {.bparam = false}) \
> 	_(ERRINJ_IPROTO_SINGLE_THREAD_STAT, ERRINJ_INT, {.iparam = -1}) \
> +	_(ERRINJ_REPLICASET_VCLOCK, ERRINJ_BOOL, {.bparam = false}) \
> 
> ENUM0(errinj_id, ERRINJ_LIST);
> extern struct errinj errinjs[];
> diff --git a/test/box/errinj.result b/test/box/errinj.result
> index 43daf5f0f..62e37bbdd 100644
> --- a/test/box/errinj.result
> +++ b/test/box/errinj.result
> @@ -70,6 +70,7 @@ evals
>   - ERRINJ_RELAY_REPORT_INTERVAL: 0
>   - ERRINJ_RELAY_SEND_DELAY: false
>   - ERRINJ_RELAY_TIMEOUT: 0
> +  - ERRINJ_REPLICASET_VCLOCK: false
>   - ERRINJ_REPLICA_JOIN_DELAY: false
>   - ERRINJ_SIO_READ_MAX: -1
>   - ERRINJ_SNAP_COMMIT_DELAY: false
> diff --git a/test/replication/gh-6028-replica.lua b/test/replication/gh-6028-replica.lua
> new file mode 100644
> index 000000000..5669cc4e9
> --- /dev/null
> +++ b/test/replication/gh-6028-replica.lua
> @@ -0,0 +1,13 @@
> +#!/usr/bin/env tarantool
> +
> +require('console').listen(os.getenv('ADMIN'))
> +
> +box.error.injection.set("ERRINJ_REPLICASET_VCLOCK", true)
> +
> +box.cfg({
> +    listen              = os.getenv("LISTEN"),
> +    replication         = {os.getenv("MASTER"), os.getenv("LISTEN")},
> +    memtx_memory        = 107374182,
> +})
> +
> +box.error.injection.set("ERRINJ_REPLICASET_VCLOCK", false)
> diff --git a/test/replication/gh-6028-vclock-is-empty.result b/test/replication/gh-6028-vclock-is-empty.result
> new file mode 100644
> index 000000000..0b103bb6e
> --- /dev/null
> +++ b/test/replication/gh-6028-vclock-is-empty.result
> @@ -0,0 +1,60 @@
> +-- test-run result file version 2
> +test_run = require('test_run').new()
> + | ---
> + | ...
> +
> +box.schema.user.grant('guest', 'replication')
> + | ---
> + | ...
> +s = box.schema.create_space('test')
> + | ---
> + | ...
> +_ = s:create_index('pk')
> + | ---
> + | ...
> +
> +
> +-- Case 1
> +test_run:cmd('create server replica with rpl_master=default,\
> +              script="replication/gh-6028-replica.lua"')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('start server replica')
> + | ---
> + | - true
> + | ...
> +
> +test_run:cmd('stop server replica')
> + | ---
> + | - true
> + | ...
> +s:insert{1}
> + | ---
> + | - [1]
> + | ...
> +
> +
> +-- Case 2
> +test_run:cmd('start server replica')
> + | ---
> + | - true
> + | ...
> +s:insert{2}
> + | ---
> + | - [2]
> + | ...
> +
> +
> +test_run:cmd('stop server replica')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('cleanup server replica')
> + | ---
> + | - true
> + | ...
> +test_run:cmd('delete server replica')
> + | ---
> + | - true
> + | ...
> diff --git a/test/replication/gh-6028-vclock-is-empty.test.lua b/test/replication/gh-6028-vclock-is-empty.test.lua
> new file mode 100644
> index 000000000..ba14eca55
> --- /dev/null
> +++ b/test/replication/gh-6028-vclock-is-empty.test.lua
> @@ -0,0 +1,24 @@
> +test_run = require('test_run').new()
> +
> +box.schema.user.grant('guest', 'replication')
> +s = box.schema.create_space('test')
> +_ = s:create_index('pk')
> +
> +
> +-- Case 1
> +test_run:cmd('create server replica with rpl_master=default,\
> +              script="replication/gh-6028-replica.lua"')
> +test_run:cmd('start server replica')
> +
> +test_run:cmd('stop server replica')
> +s:insert{1}
> +
> +
> +-- Case 2
> +test_run:cmd('start server replica')
> +s:insert{2}
> +
> +
> +test_run:cmd('stop server replica')
> +test_run:cmd('cleanup server replica')
> +test_run:cmd('delete server replica')
> --
> 2.25.1
> 

