From: Serge Petrenko
Date: Mon, 19 Oct 2020 12:36:06 +0300
Subject: Re: [Tarantool-patches] [PATCH 0/3] Raft on leader election recovery restart
To: Vladislav Shpilevoy, tarantool-patches@dev.tarantool.org

On 17.10.2020 20:17, Vladislav Shpilevoy wrote:
> There were 2 issues with the relay restarting the recovery cursor when the node
> is elected as a leader. They are fixed in the last 2 commits. The first was
> about the local LSN not being set, the second about GC not being propagated.
>
> The first patch is not related to the bugs above directly. It was just found
> while working on this. In theory, without the first patch we can get flakiness
> in the tests changed in this commit, but only if a replication connection
> breaks without a reason.
>
> Additionally, the new test - gh-5433-election-restart-recovery - hangs on my
> machine when I launch dozens of copies of it. All workers, after executing it
> several times, hang. But!!! not in anything related to Raft - they hang in the
> first box.snapshot(), where the election is not even enabled yet. From some
> debug prints I see it hangs somewhere in engine_begin_checkpoint() and consumes
> ~80% of the CPU. But it may just be a consequence of memory corruption on Mac
> due to libeio being broken. I don't know what to do with that now.

Hi! Thanks for the patchset!

Patches 2 and 3 LGTM.

Patch 1 looks OK, but I have one question: what happens when a user accidentally
enables Raft during a cluster upgrade, when some of the instances support Raft
and some don't? It looks like that would lead to even more inconvenience. In my
opinion it's fine if the leader just disappears without further notice. We have
an election timeout set up for this anyway.

> Branch: http://github.com/tarantool/tarantool/tree/gerold103/gh-5433-raft-leader-recovery-restart
> Issue: https://github.com/tarantool/tarantool/issues/5433
>
> Vladislav Shpilevoy (3):
>   raft: send state to new subscribers if Raft worked
>   raft: use local LSN in relay recovery restart
>   raft: don't drop GC when restart relay recovery
>
>  src/box/box.cc                                |  14 +-
>  src/box/raft.h                                |  10 +
>  src/box/relay.cc                              |  22 ++-
>  .../gh-5426-election-on-off.result            |  59 ++++--
>  .../gh-5426-election-on-off.test.lua          |  26 ++-
>  .../gh-5433-election-restart-recovery.result  | 174 ++++++++++++++++++
>  ...gh-5433-election-restart-recovery.test.lua |  87 +++++++++
>  test/replication/suite.cfg                    |   1 +
>  8 files changed, 367 insertions(+), 26 deletions(-)
>  create mode 100644 test/replication/gh-5433-election-restart-recovery.result
>  create mode 100644 test/replication/gh-5433-election-restart-recovery.test.lua

--
Serge Petrenko
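
P.S. A quick illustration of the election settings mentioned above ("enables
Raft", "election timeout"). This is a minimal sketch only: the option names are
Tarantool's box.cfg election options, but the listen port, replication URIs,
quorum, and timeout values are made-up examples and are not taken from the
patchset:

    -- Illustrative only: enable Raft-based leader election on one instance.
    box.cfg{
        listen = 3301,                   -- example port (assumption)
        replication = {                  -- example peer URIs (assumptions)
            'replicator:pass@host1:3301',
            'replicator:pass@host2:3301',
        },
        election_mode = 'candidate',     -- this instance may vote and become leader
        election_timeout = 5,            -- seconds before starting a new election round
        replication_synchro_quorum = 2,  -- votes needed to win an election
    }

If such an instance is a leader and then disappears, the remaining candidates
simply start a new election round once the timeout expires, which is the
behaviour referred to above.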