From: Vladislav Shpilevoy
Date: Sat, 29 Feb 2020 00:43:01 +0100
Subject: Re: [Tarantool-patches] [PATCH] replication: fix rebootstrap in case the instance is listed in box.cfg.replication
To: Serge Petrenko, kostja.osipov@gmail.com, kirichenkoga@gmail.com
Cc: tarantool-patches@dev.tarantool.org
Message-ID: <468ada69-df04-0564-4c01-1ccab99b535c@tarantool.org>
In-Reply-To: <20200228170130.81713-1-sergepetrenko@tarantool.org>
References: <20200228170130.81713-1-sergepetrenko@tarantool.org>
List-Id: Tarantool development patches

Thanks for the patch!

On 28/02/2020 18:01, Serge Petrenko wrote:
> When checking whether rejoin is needed, replica loops through all the
> instances in box.cfg.replication, which makes it believe that there is a
> master holding files needed by it, since it accounts itself just like
> all other instances.
> So make replica skip itself when finding an instance which holds files
> needed by it, and determining whether rebootstrap is needed.
>
> We already have a working test for the issue, it missed the issue due to
> replica.lua replication settings. Fix replica.lua to optionally include
> itself in box.cfg.replication, so that the corresponding test works
> correctly.
>
> Closes #4759
> ---
> https://github.com/tarantool/tarantool/issues/4759
> https://github.com/tarantool/tarantool/tree/sp/gh-4759-rebootstrap-fix
>
> @ChangeLog
> - fix rebootstrap procedure not working in case replica itself
>   is listed in `box.cfg.replication`
>
>  src/box/replication.cc                   | 13 ++++++++++++-
>  test/replication/replica.lua             | 11 ++++++++++-
>  test/replication/replica_rejoin.result   | 12 ++++++------
>  test/replication/replica_rejoin.test.lua | 12 ++++++------
>  4 files changed, 34 insertions(+), 14 deletions(-)
>
> diff --git a/src/box/replication.cc b/src/box/replication.cc
> index e7bfa22ab..01edc0fb2 100644
> --- a/src/box/replication.cc
> +++ b/src/box/replication.cc
> @@ -768,8 +768,19 @@ replicaset_needs_rejoin(struct replica **master)
>  	struct replica *leader = NULL;
>  	replicaset_foreach(replica) {
>  		struct applier *applier = replica->applier;
> -		if (applier == NULL)
> +		/*
> +		 * The function is called right after
> +		 * box_sync_replication(), which in turn calls
> +		 * replicaset_connect(), which ensures that
> +		 * appliers are either stopped (APPLIER_OFF) or
> +		 * connected.
> +		 * Also ignore self, as self applier might not
> +		 * have disconnected yet.
> +		 */
> +		if (applier == NULL || applier->state == APPLIER_OFF ||
> +		    tt_uuid_is_equal(&replica->uuid, &INSTANCE_UUID))
>  			continue;
> +		assert(applier->state == APPLIER_CONNECTED);

Could you please help me understand one thing? Below I see this:

> 		const struct ballot *ballot = &applier->ballot;
> 		if (vclock_compare(&ballot->gc_vclock,
> 				   &replicaset.vclock) <= 0) {
> 			/*
> 			 * There's at least one master that still stores
> 			 * WALs needed by this instance. Proceed to local
> 			 * recovery.
> 			 */
> 			return false;
> 		}

The question is: why do we need rebootstrap if some remote node's vclock is
bigger than ours? It does not mean anything.
It doesn't say whether that remote instance still keeps any xlogs. All it tells us is that the remote instance has committed some more data since our restart.
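
Just to pin down what the quoted check does mechanically, here is a minimal
self-contained sketch of how I read it. The Vclock type, vclock_le() and the
sample numbers are made up purely for illustration; this is not the real
struct vclock API.

	#include <cstdint>
	#include <map>

	/*
	 * Toy stand-in for a vclock: replica id -> confirmed LSN.
	 * Not the real struct vclock, just an illustration.
	 */
	using Vclock = std::map<uint32_t, int64_t>;

	/* True when no component of @a is ahead of the matching one in @b. */
	static bool
	vclock_le(const Vclock &a, const Vclock &b)
	{
		for (const auto &kv : a) {
			auto it = b.find(kv.first);
			int64_t other = (it == b.end()) ? 0 : it->second;
			if (kv.second > other)
				return false;
		}
		return true;
	}

	int
	main()
	{
		/* What we recovered from our local snapshot and xlogs. */
		Vclock local = {{1, 100}, {2, 50}};
		/* Oldest rows the remote still keeps in its xlogs (gc vclock). */
		Vclock remote_gc = {{1, 90}, {2, 40}};
		/* Where the remote is now; may be arbitrarily far ahead. */
		Vclock remote_now = {{1, 500}, {2, 200}};
		(void)remote_now;

		/*
		 * Mirror of the quoted check: if the remote's gc vclock is
		 * not ahead of us, it still stores every row we are missing,
		 * so we can follow it without rebootstrap.
		 */
		bool skip_rebootstrap = vclock_le(remote_gc, local);
		return skip_rebootstrap ? 0 : 1;
	}

In this toy model the decision depends only on the remote's gc vclock
relative to our recovered vclock; the remote's current vclock does not
participate at all, and that is exactly the semantics I would like us to
double-check here.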