[Tarantool-patches] [PATCH 2/2] replication: fix replica disconnect upon reconfiguration

Serge Petrenko sergepetrenko at tarantool.org
Fri Oct 1 14:31:06 MSK 2021



01.10.2021 01:15, Vladislav Shpilevoy wrote:
> Hi! Thanks for the patch!
>
> See 6 comments below.
>
>> diff --git a/src/box/box.cc b/src/box/box.cc
>> index 219ffa38d..89cda5599 100644
>> --- a/src/box/box.cc
>> +++ b/src/box/box.cc
>> @@ -1261,7 +1261,9 @@ box_sync_replication(bool connect_quorum)
>>   			applier_delete(appliers[i]); /* doesn't affect diag */
>>   	});
>>   
>> -	replicaset_connect(appliers, count, connect_quorum);
>> +	bool connect_quorum = strict;
>> +	bool keep_connect = !strict;
>> +	replicaset_connect(appliers, count, connect_quorum, keep_connect);
> 1. How about passing both these parameters explicitly to box_sync_replication?
> I don't understand the link between them so that they could be one.
>
> It seems the only case when you need to drop the old connections is when
> you turn anon to normal. Why should they be fully reset otherwise?

Yes, that's true: anon to normal is the only place where the existing
connections should be reset.

For both bootstrap and local recovery (the first-ever box.cfg) keep_connect
doesn't make sense at all, because there are no previous connections to
keep.

So the only two (out of 5) box_sync_replication() calls that need
keep_connect are replication reconfiguration (keep_connect = true) and
anon replica reconfiguration (keep_connect = false).

Speaking of the relation between keep_connect and connect_quorum:
we don't care about keep_connect in 3 calls (bootstrap and recovery),
and when keep_connect is important, it's equal to !connect_quorum.
I thought it might be nice to replace them with a single parameter.

I tried to pass both parameters to box_sync_replication() at first.
This looked rather ugly IMO:
box_sync_replication(true, false), box_sync_replication(false, true);
Two boolean parameters which are responsible for God knows what are
worse than one.

I'm not 100% happy with my solution, but it at least hides the second
parameter. And IMO box_sync_replication(strict) is rather easy to
understand: when strict = true, you want to connect to a quorum and
reset the existing connections, and vice versa when strict = false.
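
To illustrate the mapping at the user level (a sketch with made-up URIs;
the flag values are the ones described above, they are not visible to
the user):

    -- Plain replication reconfiguration maps to
    -- box_sync_replication(strict = false), that is connect_quorum = false
    -- and keep_connect = true: healthy connections to already known
    -- replicas are kept.
    box.cfg{replication = {'replica1:3301', 'replica2:3301'}}

    -- An anonymous replica registering as a normal one maps to
    -- box_sync_replication(strict = true), that is connect_quorum = true
    -- and keep_connect = false: the existing connections are reset.
    box.cfg{replication_anon = false}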

>
>> diff --git a/src/box/replication.cc b/src/box/replication.cc
>> index 1288bc9b1..e5fce6c8c 100644
>> --- a/src/box/replication.cc
>> +++ b/src/box/replication.cc
>> @@ -664,11 +681,11 @@ applier_on_connect_f(struct trigger *trigger, void *event)
>>   
>>   void
>>   replicaset_connect(struct applier **appliers, int count,
>> -		   bool connect_quorum)
>> +		   bool connect_quorum, bool keep_connect)
>>   {
>>   	if (count == 0) {
>>   		/* Cleanup the replica set. */
>> -		replicaset_update(appliers, count);
>> +		replicaset_update(appliers, count, keep_connect);
> 2. In case of count 0 it means all the appliers must be terminated,
> mustn't they? So you could pass always false here. Up to you.

Ok, let's change that. I'll change "count" to 0 as well.
It has always bothered me here.
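
For reference, at the user level the count == 0 branch corresponds to an
empty replication list (illustrative):

    -- An empty list terminates all appliers, so there is nothing to keep
    -- and keep_connect can simply be false on this path.
    box.cfg{replication = {}}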


=================================================

diff --git a/src/box/replication.cc b/src/box/replication.cc
index e5fce6c8c..10b4ac915 100644
--- a/src/box/replication.cc
+++ b/src/box/replication.cc
@@ -685,7 +685,7 @@ replicaset_connect(struct applier **appliers, int count,
  {
         if (count == 0) {
                 /* Cleanup the replica set. */
-               replicaset_update(appliers, count, keep_connect);
+               replicaset_update(appliers, 0, false);
                 return;
         }

=================================================
>
>>   		return;
>>   	}
> 3. A few lines below I see that all the appliers are started via
> applier_start and gather connections. Even if you have them already.
> Wasn't the point of this patchset not to create connections when you
> already have them? You could find matches by URI even before you
> try to create a connection.

It's true, but I decided it'd be simpler to find matches by replica UUID.
It gives us a bonus: you won't drop an existing connection to a replica
even when its URI has changed, say, from "localhost:3303" to "127.0.0.1:3303".
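
For instance (a sketch, the URIs are illustrative):

    -- Both URIs point to the same replica, so the second call reuses the
    -- existing applier instead of reconnecting.
    box.cfg{replication = {'localhost:3303'}}
    box.cfg{replication = {'127.0.0.1:3303'}}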

The point of the patchset is "don't restart existing connections if they
are OK". If you restart them, the old relay doesn't exit in time and the
replica receives a "duplicate connection with same replica UUID" error.

Here's how the patch works:
1. Get a list of appliers (one per box.cfg.replication entry).
2. Connect each new applier.
3. The new appliers receive the master UUIDs.
4. Find matches between new and old appliers and remove the duplicate
   new appliers (only when the old applier is functional).

Here's how I thought I'd implement it at first:
1. Get a list of appliers.
2. Check the new applier list against the existing appliers.
3. When a match is found, remove the new applier.
4. Everything else works as before.

The problem with the second approach is replicaset_update().
It always replaces all of the existing appliers with the new ones.

How do we keep the old (matching) appliers then? Add them to the new
applier list?

TBH I just decided that my approach would be simpler than this one.
So there might be no problem at all.

>
> Otherwise when I do this:
>
> 	box.cfg{replication = {3313, 3314}}
> 	box.cfg{replication = {3313, 3314, 3315}}
>
> you create 3 connections in the second box.cfg. While you
> could create just 1.



>
>> diff --git a/src/box/replication.h b/src/box/replication.h
>> index 4c82cd839..a8fed45e8 100644
>> --- a/src/box/replication.h
>> +++ b/src/box/replication.h
>> @@ -439,10 +439,12 @@ replicaset_add_anon(const struct tt_uuid *replica_uuid);
>>    * \param connect_quorum if this flag is set, fail unless at
>>    *                       least replication_connect_quorum
>>    *                       appliers have successfully connected.
>> + * \param keep_connect   if this flag is set do not force a reconnect if the
>> + *                       old connection to the replica is fine.
> 4. Why do you need to touch it even if the replica is not fine?
> Shouldn't it reconnect automatically anyway? You could maybe force
> it to reconnect right now if you don't want to wait until its
> reconnect timeout expires.

I don't understand this comment. Is it clear now after my other answers?

P.S. I think I understand now. What if replication with the
replica is permanently broken (by, say, ER_INVALID_MSGPACK)?
I guess another `box.cfg{replication=...}` call should revive it.

And yes, when replication_timeout is huge and the applier is waiting to
retry for some reason, you might want to speed things up with another
box.cfg{replication = ...} call.
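
For example (illustrative, hypothetical URIs), based on the behaviour
described above:

    -- Re-issuing the replication list triggers box_sync_replication()
    -- again, so a broken or waiting applier gets a new connection attempt
    -- right away instead of sleeping until its retry timeout expires.
    box.cfg{replication = {'replica1:3301', 'replica2:3301'}}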

>
>>    */
>>   void
>>   replicaset_connect(struct applier **appliers, int count,
>> -		   bool connect_quorum);
>> +		   bool connect_quorum, bool keep_connect);
>>   
>> diff --git a/test/replication-luatest/gh_4669_applier_reconnect_test.lua b/test/replication-luatest/gh_4669_applier_reconnect_test.lua
>> new file mode 100644
>> index 000000000..62adff716
>> --- /dev/null
>> +++ b/test/replication-luatest/gh_4669_applier_reconnect_test.lua
>> @@ -0,0 +1,42 @@
>> +local t = require('luatest')
>> +local fio = require('fio')
>> +local Server = t.Server
>> +local Cluster = require('test.luatest_helpers.cluster')
> 5. Are we using first capital letters for variable names now? Maybe
> stick to the guidelines and use lower case letters?

I tried to stick to other luatest tests' style
(replication-luatest on Vitaliya's branch).

Sure, let's change that.

>
>> +
>> +local g = t.group('gh-4669-applier-reconnect')
>> +
>> +local function check_follow_master(server)
>> +    return t.assert_equals(
>> +        server:eval('return box.info.replication[1].upstream.status'), 'follow')
>> +end
>> +
>> +g.before_each(function()
>> +    g.cluster = Cluster:new({})
>> +    g.master = g.cluster:build_server(
>> +        {}, {alias = 'master'}, 'base_instance.lua')
>> +    g.replica = g.cluster:build_server(
>> +        {args={'master'}}, {alias = 'replica'}, 'replica.lua')
>> +
>> +    g.cluster:join_server(g.master)
>> +    g.cluster:join_server(g.replica)
>> +    g.cluster:start()
>> +    check_follow_master(g.replica)
>> +end)
>> +
>> +g.after_each(function()
>> +    g.cluster:stop()
>> +end)
>> +
>> +-- Test that appliers aren't recreated upon replication reconfiguration.
>> +g.test_applier_connection_on_reconfig = function(g)
>> +    g.replica:eval(
>> +        'box.cfg{'..
>> +            'replication = {'..
>> +                'os.getenv("TARANTOOL_LISTEN"),'..
>> +                'box.cfg.replication[1],'..
>> +            '}'..
>> +        '}'
>> +    )
> 6. Are we really supposed to write Lua code as Lua strings now?
> You could use here [[ ... ]] btw, but the question still remains.
>
> Not a comment for your patch. But it looks unusable. What if the
> test would be a bit more complicated? Look at qsync tests for instance.
> Imagine writing all of them as Lua strings with quotes and all.

>
> Especially if you want to pass into the strings some parameters not
> known in advance.

I agree we need some way to access the server's console directly.
I don't know a good solution for this with luatest, though.

>
> Could you maybe make it on top of master as a normal .test.lua test?

I think not. Both Sergos and KirillY asked me to make this a luatest test.

Besides, this particular test looks OK in luatest. There's a single eval
there.

>
>> +    check_follow_master(g.replica)
>> +    t.assert_equals(g.master:grep_log("exiting the relay loop"), nil)
>> +end
>>

Please check out the diff.
(I've renamed the test to `gh-4669-applier-reconnect_test.lua`.
I tried `gh-4669-applier-reconnect.test.lua` first, but couldn't
make luatest understand *.test.lua test names. I've asked Vitaliya
to fix this.)


=======================================

diff --git a/src/box/replication.cc b/src/box/replication.cc
index e5fce6c8c..10b4ac915 100644
--- a/src/box/replication.cc
+++ b/src/box/replication.cc
@@ -685,7 +685,7 @@ replicaset_connect(struct applier **appliers, int count,
  {
         if (count == 0) {
                 /* Cleanup the replica set. */
-               replicaset_update(appliers, count, keep_connect);
+               replicaset_update(appliers, 0, false);
                 return;
         }

diff --git a/test/replication-luatest/gh_4669_applier_reconnect_test.lua b/test/replication-luatest/gh-4669-applier-reconnect_test.lua
similarity index 74%
rename from test/replication-luatest/gh_4669_applier_reconnect_test.lua
rename to test/replication-luatest/gh-4669-applier-reconnect_test.lua
index 62adff716..a4a138714 100644
--- a/test/replication-luatest/gh_4669_applier_reconnect_test.lua
+++ b/test/replication-luatest/gh-4669-applier-reconnect_test.lua
@@ -1,7 +1,7 @@
  local t = require('luatest')
  local fio = require('fio')
-local Server = t.Server
-local Cluster = require('test.luatest_helpers.cluster')
+local server = t.Server
+local cluster = require('test.luatest_helpers.cluster')

  local g = t.group('gh-4669-applier-reconnect')

@@ -11,7 +11,7 @@ local function check_follow_master(server)
  end

  g.before_each(function()
-    g.cluster = Cluster:new({})
+    g.cluster = cluster:new({})
      g.master = g.cluster:build_server(
          {}, {alias = 'master'}, 'base_instance.lua')
      g.replica = g.cluster:build_server(
@@ -29,14 +29,14 @@ end)

  -- Test that appliers aren't recreated upon replication reconfiguration.
  g.test_applier_connection_on_reconfig = function(g)
-    g.replica:eval(
-        'box.cfg{'..
-            'replication = {'..
-                'os.getenv("TARANTOOL_LISTEN"),'..
-                'box.cfg.replication[1],'..
-            '}'..
-        '}'
-    )
+    g.replica:eval([[
+        box.cfg{
+            replication = {
+                os.getenv("TARANTOOL_LISTEN"),
+                box.cfg.replication[1],
+            }
+        }
+    ]])
      check_follow_master(g.replica)
      t.assert_equals(g.master:grep_log("exiting the relay loop"), nil)
  end


=======================================

-- 
Serge Petrenko


