From mboxrd@z Thu Jan 1 00:00:00 1970
To: tarantool-patches@dev.tarantool.org, gorcunov@gmail.com, sergepetrenko@tarantool.org
Date: Sun, 18 Jul 2021 18:53:28 +0200
Message-Id: <97e77a3158ef83db3c93f247297297874d7465cb.1626627097.git.v.shpilevoy@tarantool.org>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [Tarantool-patches] [PATCH v2 4/5] election: during bootstrap prefer candidates
List-Id: Tarantool development patches
From: Vladislav Shpilevoy via Tarantool-patches
Reply-To: Vladislav Shpilevoy

During cluster bootstrap the boot master election algorithm didn't take
the election modes of the instances into account. It could happen that
all nodes had box.cfg.read_only = false, none was booted yet, and all
were read-only at the moment. Then the node with the smallest UUID was
chosen even if it was a box.cfg.election_mode = 'voter' node. Such a
node could neither boot itself nor register the other nodes, so the
cluster couldn't start.

The patch makes the boot master election prefer the instances which can
become a Raft leader, as a tie-breaker when all the other criteria are
equal.

Part of #6018
---
 src/box/replication.cc                         | 11 ++-
 .../gh-6018-election-boot-voter.result         | 70 +++++++++++++++++++
 .../gh-6018-election-boot-voter.test.lua       | 38 ++++++++++
 test/replication/gh-6018-master.lua            | 17 +++++
 test/replication/gh-6018-replica.lua           | 15 ++++
 test/replication/suite.cfg                     |  1 +
 6 files changed, 150 insertions(+), 2 deletions(-)
 create mode 100644 test/replication/gh-6018-election-boot-voter.result
 create mode 100644 test/replication/gh-6018-election-boot-voter.test.lua
 create mode 100644 test/replication/gh-6018-master.lua
 create mode 100644 test/replication/gh-6018-replica.lua

diff --git a/src/box/replication.cc b/src/box/replication.cc
index a0b3e0186..45ad03dfd 100644
--- a/src/box/replication.cc
+++ b/src/box/replication.cc
@@ -978,12 +978,19 @@ replicaset_find_join_master(void)
 	 * config is stronger because if it is configured as read-only,
 	 * it is in read-only state for sure, until the config is
 	 * changed.
+	 *
+	 * In a cluster with leader election enabled all instances might
+	 * look equal by the scores above. Then we must prefer the ones
+	 * which can be elected as a leader, because only they would be
+	 * able to boot themselves and register the others.
 	 */
 	if (ballot->is_booted)
-		score += 10;
+		score += 1000;
 	if (!ballot->is_ro_cfg)
-		score += 5;
+		score += 100;
 	if (!ballot->is_ro)
+		score += 10;
+	if (ballot->can_lead)
 		score += 1;
 	if (leader_score < score)
 		goto elect;
diff --git a/test/replication/gh-6018-election-boot-voter.result b/test/replication/gh-6018-election-boot-voter.result
new file mode 100644
index 000000000..6b05f0825
--- /dev/null
+++ b/test/replication/gh-6018-election-boot-voter.result
@@ -0,0 +1,70 @@
+-- test-run result file version 2
+--
+-- gh-6018: in an auto-election cluster nodes with voter state could be selected
+-- as bootstrap leaders. They should not, because a voter can't ever be writable
+-- and it can neither boot itself nor register other nodes.
+--
+test_run = require('test_run').new()
+ | ---
+ | ...
+
+function boot_with_master_election_mode(mode) \
+    test_run:cmd('create server master with '.. \
+        'script="replication/gh-6018-master.lua"') \
+    test_run:cmd('start server master with wait=False, args="'..mode..'"') \
+    test_run:cmd('create server replica with '.. \
+        'script="replication/gh-6018-replica.lua"') \
+    test_run:cmd('start server replica') \
+end
+ | ---
+ | ...
+
+function stop_cluster() \
+    test_run:cmd('stop server replica') \
+    test_run:cmd('stop server master') \
+    test_run:cmd('delete server replica') \
+    test_run:cmd('delete server master') \
+end
+ | ---
+ | ...
+
+--
+-- Candidate leader.
+--
+boot_with_master_election_mode('candidate')
+ | ---
+ | ...
+
+test_run:switch('master')
+ | ---
+ | - true
+ | ...
+test_run:wait_cond(function() return not box.info.ro end)
+ | ---
+ | - true
+ | ...
+assert(box.info.election.state == 'leader')
+ | ---
+ | - true
+ | ...
+
+test_run:switch('replica')
+ | ---
+ | - true
+ | ...
+assert(box.info.ro)
+ | ---
+ | - true
+ | ...
+assert(box.info.election.state == 'follower')
+ | ---
+ | - true
+ | ...
+
+test_run:switch('default')
+ | ---
+ | - true
+ | ...
+stop_cluster()
+ | ---
+ | ...
diff --git a/test/replication/gh-6018-election-boot-voter.test.lua b/test/replication/gh-6018-election-boot-voter.test.lua
new file mode 100644
index 000000000..7222beb19
--- /dev/null
+++ b/test/replication/gh-6018-election-boot-voter.test.lua
@@ -0,0 +1,38 @@
+--
+-- gh-6018: in an auto-election cluster nodes with voter state could be selected
+-- as bootstrap leaders. They should not, because a voter can't ever be writable
+-- and it can neither boot itself nor register other nodes.
+--
+test_run = require('test_run').new()
+
+function boot_with_master_election_mode(mode) \
+    test_run:cmd('create server master with '.. \
+        'script="replication/gh-6018-master.lua"') \
+    test_run:cmd('start server master with wait=False, args="'..mode..'"') \
+    test_run:cmd('create server replica with '.. \
+        'script="replication/gh-6018-replica.lua"') \
+    test_run:cmd('start server replica') \
+end
+
+function stop_cluster() \
+    test_run:cmd('stop server replica') \
+    test_run:cmd('stop server master') \
+    test_run:cmd('delete server replica') \
+    test_run:cmd('delete server master') \
+end
+
+--
+-- Candidate leader.
+--
+boot_with_master_election_mode('candidate')
+
+test_run:switch('master')
+test_run:wait_cond(function() return not box.info.ro end)
+assert(box.info.election.state == 'leader')
+
+test_run:switch('replica')
+assert(box.info.ro)
+assert(box.info.election.state == 'follower')
+
+test_run:switch('default')
+stop_cluster()
diff --git a/test/replication/gh-6018-master.lua b/test/replication/gh-6018-master.lua
new file mode 100644
index 000000000..1192204ff
--- /dev/null
+++ b/test/replication/gh-6018-master.lua
@@ -0,0 +1,17 @@
+#!/usr/bin/env tarantool
+
+require('console').listen(os.getenv('ADMIN'))
+
+box.cfg({
+    listen = 'unix/:./gh-6018-master.sock',
+    replication = {
+        'unix/:./gh-6018-master.sock',
+        'unix/:./gh-6018-replica.sock',
+    },
+    election_mode = arg[1],
+    instance_uuid = 'cbf06940-0790-498b-948d-042b62cf3d29',
+    replication_timeout = 0.1,
+})
+
+box.ctl.wait_rw()
+box.schema.user.grant('guest', 'super')
diff --git a/test/replication/gh-6018-replica.lua b/test/replication/gh-6018-replica.lua
new file mode 100644
index 000000000..71e669141
--- /dev/null
+++ b/test/replication/gh-6018-replica.lua
@@ -0,0 +1,15 @@
+#!/usr/bin/env tarantool
+
+require('console').listen(os.getenv('ADMIN'))
+
+box.cfg({
+    listen = 'unix/:./gh-6018-replica.sock',
+    replication = {
+        'unix/:./gh-6018-master.sock',
+        'unix/:./gh-6018-replica.sock',
+    },
+    election_mode = 'voter',
+    -- Smaller than master UUID.
+    instance_uuid = 'cbf06940-0790-498b-948d-042b62cf3d28',
+    replication_timeout = 0.1,
+})
diff --git a/test/replication/suite.cfg b/test/replication/suite.cfg
index 69f2f3511..2bfc3b845 100644
--- a/test/replication/suite.cfg
+++ b/test/replication/suite.cfg
@@ -45,6 +45,7 @@
     "gh-5536-wal-limit.test.lua": {},
     "gh-5566-final-join-synchro.test.lua": {},
     "gh-5613-bootstrap-prefer-booted.test.lua": {},
+    "gh-6018-election-boot-voter.test.lua": {},
     "gh-6027-applier-error-show.test.lua": {},
     "gh-6032-promote-wal-write.test.lua": {},
     "gh-6057-qsync-confirm-async-no-wal.test.lua": {},
-- 
2.24.3 (Apple Git-128)