On Tue, Aug 28, 2018 at 02:43:28PM +0300, Olga Arkhangelskaia wrote:
> When a replica reconnects to the replica set not for the first time,
> we suffer from the absence of synchronization. Such behavior leads to
> giving away outdated data.
>
> Closes #3427
Please write a documentation request.
Ok
> diff --git a/src/box/box.cc b/src/box/box.cc
> index be5077da8..aaae4219f 100644
> --- a/src/box/box.cc
> +++ b/src/box/box.cc
> @@ -634,6 +634,11 @@ box_set_replication(void)
> box_sync_replication(true);
> /* Follow replica */
> replicaset_follow();
> + /* Sync replica up to quorum */
> + if (!replicaset_sync()) {
> + tnt_raise(ClientError, ER_CFG, "replication",
> + "failed to connect to one or more replicas");
> + }
Throwing ER_CFG error from box.cfg() and still applying the new
replication configuration looks weird. We should either revert the
configuration back to what we had before box.cfg() was called or not
throw exceptions.
Reverting configuration seems to be unreasonable, because we could've
applied some rows from the new replicas.
We discussed the matter with Georgy and Kostja and agreed that instead
an instance should enter the orphan mode, just like it does on initial
configuration.
Just curious, why? How can we have applied changes if box.cfg throws an error? Or am I missing something?
Ok
Sorry, we didn't come to an agreement earlier.
Please rework and add a test case.
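For clarity, here is a rough sketch of the agreed behavior from the user's point of view. This is only an illustration of the intent, not committed code; `unreachable_uri` is a hypothetical placeholder, and the status strings assume the usual box.info conventions:

```lua
-- Hypothetical sketch: after the rework, a failed sync should leave the
-- instance in orphan (read-only) mode instead of raising ER_CFG from
-- box.cfg{}.
box.cfg{replication = unreachable_uri, replication_sync_lag = 0.1}
box.info.status  -- expected to report 'orphan' until the sync succeeds
box.info.ro      -- true: writes are refused while the instance is orphan
```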
> diff --git a/test/replication/sync.test.lua b/test/replication/sync.test.lua
> new file mode 100644
> index 000000000..4c2b55af8
> --- /dev/null
> +++ b/test/replication/sync.test.lua
> @@ -0,0 +1,38 @@
> +--
> +-- gh-3427: no sync after configuration update
> +--
> +
> +env = require('test_run')
> +test_run = env.new()
> +engine = test_run:get_cfg('engine')
> +
> +box.schema.user.grant('guest', 'replication')
> +
> +test_run:cmd("create server replica with rpl_master=default, script='replication/replica.lua'")
> +test_run:cmd("start server replica")
> +
> +s = box.schema.space.create('test', {engine = engine})
> +index = s:create_index('primary')
> +
> +-- change replica configuration
> +test_run:cmd("switch replica")
> +box.cfg{replication_sync_lag = 0.1}
> +replication = box.cfg.replication
> +box.cfg{replication={}}
> +
> +test_run:cmd("switch default")
> +-- insert values on the master while replica is unconfigured
> +a = 3000 box.begin() while a > 0 do a = a-1 box.space.test:insert{a,a} end box.commit()
Nit: for i = 1, 100 do ... end
Anyway, why 3000? When I change it to 1000 or even 100 the test still
passes with this patch and fails without it.
I used 3000 because without the patch, when I make the replica sleep for the replication sync lag (0.1), nearly 2500 tuples manage to arrive anyway.
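In case it helps, the loop form the reviewer suggested could look like this (a sketch against the test's own `box.space.test`, keeping the row count at 3000 per the discussion; the rows end up the same, just inserted counting up):

```lua
-- Neater form of the bulk insert: one transaction, a numeric for loop.
box.begin()
for i = 1, 3000 do box.space.test:insert{i, i} end
box.commit()
```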
Also, I'd like to see a test case that checks that in case
box.cfg.replication_sync_lag is big, not all records arrive
by the time box.cfg{replication} returns.
You mean, see a difference in tuple count when the replicas are considered synced because of the lag rather than because all the data has arrived?
And a test case that checks that tarantool enters the orphan mode
if it fails to sync.
Please add.
Ok
> +
> +test_run:cmd("switch replica")
> +box.cfg{replication = replication}
> +
> +box.space.test:count() == 3000
Nit: better do
box.space.test:count() -- 3000
The reject file will be more informative in case of error then.
So I need 3 test cases:
1. Test that we are synced.
2. Test with sync and a big lag.
3. Test with failed sync (orphan mode)?
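A rough outline of those three cases in test-run style. All lag values are illustrative, `replication` is the saved configuration from the test above, and `unreachable_uri` is a hypothetical dead address, not an existing helper:

```lua
-- 1) Plain sync: after reconfiguration the replica must have caught up
--    by the time box.cfg{} returns.
box.cfg{replication = replication, replication_sync_lag = 0.1}
box.space.test:count() -- expected: everything the master has

-- 2) Big lag: with a large replication_sync_lag the replica is considered
--    synced early, so box.cfg{} may return before all rows have arrived.
box.cfg{replication = {}}
box.cfg{replication = replication, replication_sync_lag = 1}
box.space.test:count() -- may be smaller than the master's count

-- 3) Failed sync: if quorum cannot be reached, the instance should enter
--    orphan mode rather than raise an error from box.cfg{}.
box.cfg{replication = unreachable_uri}
box.info.status -- expected: 'orphan'
```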
> +
> +test_run:cmd("switch default")
> +
> +-- cleanup
> +test_run:cmd("stop server replica")
> +test_run:cmd("cleanup server replica")
> +box.space.test:drop()
> +box.schema.user.revoke('guest', 'replication')