Tarantool development patches archive
From: "Serge Petrenko" <sergepetrenko@tarantool.org>
To: "Vladislav Shpilevoy" <v.shpilevoy@tarantool.org>
Cc: kirichenkoga@gmail.com, tarantool-patches@dev.tarantool.org
Subject: Re: [Tarantool-patches] [PATCH] replication: fix rebootstrap in case the instance is listed in box.cfg.replication
Date: Mon, 02 Mar 2020 20:18:33 +0300	[thread overview]
Message-ID: <1583169513.32666235@f144.i.mail.ru> (raw)
In-Reply-To: <41560ae6-e2fb-1b7e-32d0-d596e73db8c5@tarantool.org>



  
>Saturday, 29 February 2020, 17:19 +03:00, from Vladislav Shpilevoy <v.shpilevoy@tarantool.org>:
> 
>On 29/02/2020 10:52, Serge Petrenko wrote:
>> Hi! 
>>
>> Saturday, 29 February 2020, 2:43 +03:00, from Vladislav Shpilevoy < v.shpilevoy@tarantool.org >:
>>  
>> Thanks for the patch!
>>
>> On 28/02/2020 18:01, Serge Petrenko wrote:
>> > When checking whether a rejoin is needed, the replica loops through all
>> > the instances in box.cfg.replication, which makes it believe that there
>> > is a master holding the files it needs, since it counts itself just like
>> > all the other instances.
>> > So make the replica skip itself when looking for an instance that holds
>> > the files it needs and when determining whether a rebootstrap is needed.
>> >
>> > We already have a test for the issue, but it missed the bug due to the
>> > replica.lua replication settings. Fix replica.lua to optionally include
>> > the instance itself in box.cfg.replication, so that the corresponding
>> > test works correctly.
>> >
>> > Closes #4759
>> > ---
>> >  https://github.com/tarantool/tarantool/issues/4759
>> >  https://github.com/tarantool/tarantool/tree/sp/gh-4759-rebootstrap-fix
>> >
>> > @ChangeLog
>> > - fix the rebootstrap procedure not working when the replica itself
>> > is listed in `box.cfg.replication`
>> >
>> > src/box/replication.cc | 13 ++++++++++++-
>> > test/replication/replica.lua | 11 ++++++++++-
>> > test/replication/replica_rejoin.result | 12 ++++++------
>> > test/replication/replica_rejoin.test.lua | 12 ++++++------
>> > 4 files changed, 34 insertions(+), 14 deletions(-)
>> >
>> > diff --git a/src/box/replication.cc b/src/box/replication.cc
>> > index e7bfa22ab..01edc0fb2 100644
>> > --- a/src/box/replication.cc
>> > +++ b/src/box/replication.cc
>> > @@ -768,8 +768,19 @@ replicaset_needs_rejoin(struct replica **master)
>> > struct replica *leader = NULL;
>> > replicaset_foreach(replica) {
>> > struct applier *applier = replica->applier;
>> > - if (applier == NULL)
>> > + /*
>> > + * The function is called right after
>> > + * box_sync_replication(), which in turn calls
>> > + * replicaset_connect(), which ensures that
>> > + * appliers are either stopped (APPLIER OFF) or
>> > + * connected.
>> > + * Also ignore self, as self applier might not
>> > + * have disconnected yet.
>> > + */
>> > + if (applier == NULL || applier->state == APPLIER_OFF ||
>> > + tt_uuid_is_equal(&replica->uuid, &INSTANCE_UUID))
>> > continue;
>> > + assert(applier->state == APPLIER_CONNECTED);
>> >
>>
>> Could you please understand one thing? Below I see this:
>>
>> > const struct ballot *ballot = &applier->ballot;
>> > if (vclock_compare(&ballot->gc_vclock,
>> > &replicaset.vclock) <= 0) {
>> > /*
>> > * There's at least one master that still stores
>> > * WALs needed by this instance. Proceed to local
>> > * recovery.
>> > */
>> > return false;
>> > }
>>
>> The question is: why do we need a rebootstrap if some remote node's
>> vclock is bigger than ours? That alone does not mean anything. It doesn't
>> say whether that remote instance still keeps any xlogs. All it tells us
>> is that the remote instance committed some more data since our restart.
>>
>>  
>> In the piece above we check the master's gc vclock, which is the vclock of
>> the oldest WAL the instance still has. If the master's gc vclock is less
>> than or equal to our vclock, no rebootstrap is needed: the master will be
>> able to send us all the fresh data.
>> Later we check whether our vclock is greater than or incompatible with the
>> remote instance's vclock. If it is, we cannot rebootstrap from that
>> instance, since we have some rows that the instance doesn't have.
>Thanks for the explanation. Then I would change the comment to say
>more about the issue. Currently it only talks about self:
>
>    "Also ignore self, as self applier might not have disconnected yet."
>
>This does not look related to the actual problem, which is that we just
>should not look at our own gc vclock.
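
For clarity, here is a minimal standalone sketch of the decision discussed
above. It is only an illustration under simplified assumptions: the
two-component vclock, the comparison function, and rejoin_decision() are
hypothetical stand-ins, not the actual structs and helpers from
src/box/vclock.h and src/box/replication.cc.

/*
 * Sketch of the rejoin decision: compare the remote gc vclock and the
 * remote vclock against the local vclock.
 */
#include <stdbool.h>
#include <stdio.h>

#define VCLOCK_MAX 2	/* two components are enough for the illustration */

struct vclock {
	long long lsn[VCLOCK_MAX];
};

/* -1: a < b, 0: a == b, 1: a > b, 2: incomparable (diverged). */
static int
vclock_compare(const struct vclock *a, const struct vclock *b)
{
	bool le = true, ge = true;
	for (int i = 0; i < VCLOCK_MAX; i++) {
		if (a->lsn[i] > b->lsn[i])
			le = false;
		if (a->lsn[i] < b->lsn[i])
			ge = false;
	}
	if (le && ge)
		return 0;
	if (le)
		return -1;
	if (ge)
		return 1;
	return 2;
}

/*
 * 1. If the master's gc vclock is <= our vclock, the master still has
 *    all the WALs we need, so local recovery is enough.
 * 2. Otherwise, if our vclock is greater than or incomparable with the
 *    master's vclock, we have rows the master lacks and cannot
 *    rebootstrap from it either.
 * 3. Otherwise this master is a valid rebootstrap source.
 */
static const char *
rejoin_decision(const struct vclock *master_gc, const struct vclock *master,
		const struct vclock *self)
{
	if (vclock_compare(master_gc, self) <= 0)
		return "local recovery: master still has the needed WALs";
	if (vclock_compare(self, master) > 0)
		return "cannot rebootstrap: we have rows the master lacks";
	return "rebootstrap from this master is possible";
}

int
main(void)
{
	struct vclock self = { { 10, 5 } };
	struct vclock master = { { 12, 5 } };
	struct vclock master_gc = { { 8, 0 } };
	printf("%s\n", rejoin_decision(&master_gc, &master, &self));
	return 0;
}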
 
Ok, I changed the comment. The diff is below.
 
>
>Also, why do we need the 'applier->state == APPLIER_OFF' check? How is it
>related to the issue? If this is a separate bug, is it possible to
>make it a separate patch?
 
We don't need this check after all. replicaset_update() at the end of
replicaset_connect() ensures that all the replicas in the replicaset have
appliers in the APPLIER_CONNECTED state.
Thanks for pointing this out.
 
diff --git a/src/box/replication.cc b/src/box/replication.cc
index 01edc0fb2..1345f189b 100644
--- a/src/box/replication.cc
+++ b/src/box/replication.cc
@@ -769,18 +769,12 @@ replicaset_needs_rejoin(struct replica **master)
  replicaset_foreach(replica) {
  struct applier *applier = replica->applier;
  /*
- * The function is called right after
- * box_sync_replication(), which in turn calls
- * replicaset_connect(), which ensures that
- * appliers are either stopped (APPLIER OFF) or
- * connected.
- * Also ignore self, as self applier might not
- * have disconnected yet.
+ * Skip the local instance, we shouldn't perform a
+ * check against our own gc vclock.
  */
- if (applier == NULL || applier->state == APPLIER_OFF ||
-     tt_uuid_is_equal(&replica->uuid, &INSTANCE_UUID))
+ if (applier == NULL || tt_uuid_is_equal(&replica->uuid,
+ &INSTANCE_UUID))
  continue;
- assert(applier->state == APPLIER_CONNECTED);
 
  const struct ballot *ballot = &applier->ballot;
 
--
Serge Petrenko
 


Thread overview: 9+ messages
2020-02-28 17:01 Serge Petrenko
2020-02-28 23:43 ` Vladislav Shpilevoy
2020-02-29  8:54   ` Konstantin Osipov
2020-02-29  9:52   ` Serge Petrenko
2020-02-29 14:19     ` Vladislav Shpilevoy
2020-03-02 17:18       ` Serge Petrenko [this message]
2020-03-02 21:26         ` Vladislav Shpilevoy
2020-03-03  8:25 ` Kirill Yukhin
2020-03-05  4:52   ` Kirill Yukhin
