From: Vladislav Shpilevoy
To: Sergey Ostanevich
Cc: tarantool-patches@dev.tarantool.org
Date: Tue, 26 May 2020 01:41:27 +0200
Subject: Re: [Tarantool-patches] [RFC] Quorum-based synchronous replication
Message-ID: <887a0ec5-3a01-565a-0c31-f7fab619af8f@tarantool.org>
In-Reply-To: <20200520205925.GA58@tarantool.org>

Hi! Thanks for the changes!

>>>>>>>> As soon as leader appears in a situation it has not enough
>>>>>>>> replicas
>>>>>>>> to achieve quorum, the cluster should stop accepting any
>>>>>>>> requests - both
>>>>>>>> write and read.
>>>>
>>>> So it will not serve.
>>>
>>> This breaks compatibility, since now an orphan node is perfectly able
>>> to serve reads. The cluster can't just stop doing everything, if the
>>> quorum is lost. Stop writes - yes, since the quorum is lost anyway. But
>>> reads do not need a quorum.
>>>
>>> If you say reads need a quorum, then they would need to go through WAL,
>>> collect confirmations, and all.
>>
>> The reads should not be inconsistent - so that cluster will keep
>> answering A or B for the same request. And in case we lost quorum we
>> can't say for sure that all instances will answer the same.
>>
>> As we discussed it before, if leader appears in minor part of the
>> cluster it can't issue rollback for all unconfirmed txns, since the
>> majority will re-elect leader who will collect quorum for them. Means,
>> we will appear is a state that cluster split in two. So the minor part
>> should stop. Am I wrong here?

Yeah, kinda. As long as you allow reading from replicas, you will
*always* have a time window when you can read different data for the
same key on different replicas, even with reads going through a quorum.
It is physically impossible to make nodes A and B start answering with
the same data at the same moment: to notify them about a confirm you
send network messages, which have different delays, are not processed
at the same moment, and some of which probably won't be delivered at
all.

The only way to always read the same data is to read from one node
only - from the leader. And since this is not our way, it means we
can't beat the 'inconsistent' reads problem, and I don't think we
should. If somebody needs 'consistent' reads, they should read from the
leader only.

In other words, the concept of 'consistency' is highly application
dependent here. If we provide a way to read from replicas, we give the
flexibility to choose: read from the leader only and always see the
same data, or read from all nodes and accept that requests may
sometimes see different data on different replicas.
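
To make that application-level choice concrete, here is a tiny sketch
of what 'consistent reads go to the leader, relaxed reads go anywhere'
could look like on the client side. Only net.box is a real module; the
URIs and the space name are made up for the illustration:

```
local netbox = require('net.box')

-- Hypothetical endpoints: the leader and one replica of the same set.
local leader  = netbox.connect('leader.example:3301')
local replica = netbox.connect('replica1.example:3301')

-- Reads that must observe the latest confirmed state go to the leader only.
local strict = leader.space.accounts:get{42}

-- Reads that tolerate replication lag can be spread over replicas and may
-- occasionally return older data than the leader would.
local relaxed = replica.space.accounts:get{42}
```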
> ## Detailed design
>
> ### Quorum commit
>
> The main idea behind the proposal is to reuse existent machinery as much
> as possible. It will ensure the well-tested and proven functionality
> across many instances in MRG and beyond is used. The transaction rollback
> mechanism is in place and works for WAL write failure. If we substitute
> the WAL success with a new situation which is named 'quorum' later in
> this document then no changes to the machinery is needed. The same is
> true for snapshot machinery that allows to create a copy of the database
> in memory for the whole period of snapshot file write. Adding quorum here
> also minimizes changes.
>
> Currently replication represented by the following scheme:
> ```
> Customer         Leader         WAL(L)        Replica        WAL(R)
>     |------TXN----->|              |             |              |
>     |               |              |             |              |
>     |         [TXN undo log        |             |              |
>     |            created]          |             |              |
>     |               |              |             |              |
>     |               |-----TXN----->|             |              |
>     |               |              |             |              |
>     |               |<---WAL Ok----|             |              |
>     |               |              |             |              |
>     |         [TXN undo log        |             |              |
>     |          destroyed]          |             |              |
>     |               |              |             |              |
>     |<----TXN Ok----|              |             |              |
>     |               |-------Replicate TXN------->|              |
>     |               |              |             |              |
>     |               |              |       [TXN undo log        |
>     |               |              |          created]          |
>     |               |              |             |              |
>     |               |              |             |-----TXN----->|
>     |               |              |             |              |
>     |               |              |             |<---WAL Ok----|
>     |               |              |             |              |
>     |               |              |       [TXN undo log        |
>     |               |              |        destroyed]          |
>     |               |              |             |              |
> ```
>
> To introduce the 'quorum' we have to receive confirmation from replicas
> to make a decision on whether the quorum is actually present. Leader
> collects necessary amount of replicas confirmation plus its own WAL
> success. This state is named 'quorum' and gives leader the right to
> complete the customers' request. So the picture will change to:
> ```
> Customer         Leader         WAL(L)        Replica        WAL(R)
>     |------TXN----->|              |             |              |
>     |               |              |             |              |
>     |         [TXN undo log        |             |              |
>     |            created]          |             |              |
>     |               |              |             |              |
>     |               |-----TXN----->|             |              |
>     |               |              |             |              |
>     |               |-------Replicate TXN------->|              |
>     |               |              |             |              |
>     |               |              |       [TXN undo log        |
>     |               |<---WAL Ok----|         created]           |
>     |               |              |             |              |
>     |           [Waiting           |             |-----TXN----->|
>     |         of a quorum]         |             |              |
>     |               |              |             |<---WAL Ok----|
>     |               |              |             |              |
>     |               |<------Replication Ok-------|              |
>     |               |              |             |              |
>     |            [Quorum           |             |              |
>     |           achieved]          |             |              |
>     |               |              |             |              |
>     |               |---Confirm--->|             |              |
>     |               |              |             |              |
>     |               |----------Confirm---------->|              |
>     |               |              |             |              |
>     |<---TXN Ok-----|              |             |---Confirm--->|
>     |               |              |             |              |
>     |         [TXN undo log        |       [TXN undo log        |
>     |          destroyed]          |        destroyed]          |
>     |               |              |             |              |
> ```
>
> The quorum should be collected as a table for a list of transactions
> waiting for quorum. The latest transaction that collects the quorum is
> considered as complete, as well as all transactions prior to it, since
> all transactions should be applied in order. Leader writes a 'confirm'
> message to the WAL that refers to the transaction's [LEADER_ID, LSN] and
> the confirm has its own LSN. This confirm message is delivered to all
> replicas through the existing replication mechanism.
>
> Replica should report a TXN application success to the leader via the
> IPROTO explicitly to allow leader to collect the quorum for the TXN.
> In case of application failure the replica has to disconnect from the
> replication the same way as it is done now. The replica also has to
> report its disconnection to the orchestrator. Further actions require
> human intervention, since failure means either technical problem (such
> as not enough space for WAL) that has to be resolved or an inconsistent
> state that requires rejoin.

I don't think a replica should report its disconnection. The problem
with a disconnection is exactly that the connection is lost, so the
replica may not be able to reach the orchestrator either.
Also it would be strange for Tarantool to depend on some external
service to which it should report. How to determine connectivity looks
like the orchestrator's business; the replica has nothing to do with it
from its side.

> As soon as leader appears in a situation it has not enough replicas
> to achieve quorum, the cluster should stop accepting any requests - both
> write and read.

The moment when there are not enough replicas can't be determined
reliably. You may lose the connection to replicas (they could be
powered off), but TCP won't notice that, and the node will keep
working. The failure will be discovered only when a write request tries
to collect a quorum, or after heartbeats are not delivered for a
timeout. During this time reads will be served, and there is no way to
prevent that except by collecting a quorum for reads too. See my first
comment in this email for more details.

In summary: we can't stop accepting read requests.

Btw, what to do with reads which were *in progress* when the quorum was
lost? Such as long vinyl reads.

> The reason for this is that replication of transactions
> can achieve quorum on replicas not visible to the leader. On the other
> hand, leader can't achieve quorum with available minority. Leader has to
> report the state and wait for human intervention.

Yeah, but if the leader couldn't achieve a quorum on some transactions,
they are not visible (assuming MVCC works properly), so they can't be
read anyway. And if the leader answered with an error, it does not mean
the transaction wasn't replicated to the majority, as we discussed at
some meeting (I don't remember which one anymore). So allowing reads
works fine here too: not seeing some data and getting an error for a
sync transaction does not mean it is not committed. A user should be
aware of that.

> There's an option to
> ask leader to rollback to the latest transaction that has quorum: leader
> issues a 'rollback' message referring to the [LEADER_ID, LSN] where LSN
> is of the first transaction in the leader's undo log. The rollback
> message replicated to the available cluster will put it in a consistent
> state. After that configuration of the cluster can be updated to
> available quorum and leader can be switched back to write mode.
>
> ### Leader role assignment.
>
> Be it a user-initiated assignment or an algorithmic one, it should use
> a common interface to assign the leader role. By now we implement a
> simplified machinery, still it should be feasible in the future to fit
> the algorithms, such as RAFT or proposed before box.ctl.promote.
>
> A system space \_voting can be used to replicate the voting among the
> cluster, this space should be writable even for a read-only instance.
> This space should contain a CURRENT_LEADER_ID at any time - means the
> current leader, can be a zero value at the start. This is needed to
> compare the appropriate vclock component below.
>
> All replicas should be subscribed to changes in the space and react as
> described below.
>
> promote(ID) - should be called from a replica with it's own ID.
>   Writes an entry in the voting space about this ID is waiting for
>   votes from cluster. The entry should also contain the current
>   vclock[CURRENT_LEADER_ID] of the nominee.
>
> Upon changes in the space each replica should compare its appropriate
> vclock component with submitted one and append its vote to the space:
> AYE in case nominee's vclock is bigger or equal to the replica's one,
> NAY otherwise.
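
(Just to check that I read the vote rule right, here is how I would
sketch it on a replica. This is only an illustration: the \_voting
space and the field names come from the RFC text above, the function
itself is not an existing API, and only box.info.vclock is real.)

```
-- How a replica could compute its vote, as I understand the rule above:
-- AYE if the nominee's vclock component for the current leader is not
-- behind the local one, NAY otherwise.
local function vote(current_leader_id, nominee_vclock_component)
    -- Missing vclock components are absent from the table, hence 'or 0'.
    local my_component = box.info.vclock[current_leader_id] or 0
    if nominee_vclock_component >= my_component then
        return 'AYE'
    end
    return 'NAY'
end
```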
>
> As soon as nominee collects the quorum for being elected, it claims
> himself a Leader by switching in rw mode, writes CURRENT_LEADER_ID as
> a FORMER_LEADER_ID in the \_voting space and put its ID as a
> CURRENT_LEADER_ID. In case a NAY is appeared in the \_voting or a
> timeout predefined in box.cfg is reached, the nominee should remove
> it's entry from the space.
>
> The leader should assure that number of available instances in the
> cluster is enough to achieve the quorum and proceed to step 3, otherwise
> the leader should report the situation of incomplete quorum, as
> described in the last paragraph of previous section.
>
> The new Leader has to take the responsibility to replicate former Leader's
> entries from its WAL, obtain quorum and commit confirm messages referring
> to [FORMER_LEADER_ID, LSN] in its WAL, replicating to the cluster, after
> that it can start adding its own entries into the WAL.
>
> demote(ID) - should be called from the Leader instance.
>   The Leader has to switch in ro mode and wait for its' undo log is
>   empty. This effectively means all transactions are committed in the
>   cluster and it is safe pass the leadership. Then it should write
>   CURRENT_LEADER_ID as a FORMER_LEADER_ID and put CURRENT_LEADER_ID
>   into 0.

This looks like the box.ctl.promote() algorithm. Although I thought we
decided not to implement any kind of auto election here, no?
Box.ctl.promote() assumed that it does all the steps automatically,
except choosing on which node to call this function. That is why it was
so complicated. It was basically Raft.

But yeah, as discussed verbally, this is a subject for improvement.

The way I see it is that we need to provide a vclock-based algorithm of
choosing a new leader, tell how to stop replication from the old
leader, and allow to read the vclock from replicas (basically, let the
external service read box.info).

Since you said you think we should not provide an API to roll back all
pending sync transactions, it looks like there is no need for a special
new API. But if we still want to allow rolling back all pending
transactions of the old leader on a new leader (like Mons wants), then
yeah, it seems we would need a new function. For example,
box.ctl.sync_rollback() to roll back all pending transactions, and
box.ctl.sync_confirm() to confirm all pending ones. Perhaps we could
add more admin-level parameters, such as the replica_id with which to
write the 'confirm'/'rollback' message.

> ### Recovery and failover.
>
> Tarantool instance during reading WAL should postpone the undo log
> deletion until the 'confirm' is read. In case the WAL eof is achieved,
> the instance should keep undo log for all transactions that are waiting
> for a confirm entry until the role of the instance is set.
>
> If this instance will be assigned a leader role then all transactions
> that have no corresponding confirm message should be confirmed (see the
> leader role assignment).
>
> In case there's not enough replicas to set up a quorum the cluster can
> be switched into a read-only mode. Note, this can't be done by default
> since some of transactions can have confirmed state. It is up to human
> intervention to force rollback of all transactions that have no confirm
> and to put the cluster into a consistent state.

Above you said:

>> As soon as leader appears in a situation it has not enough replicas
>> to achieve quorum, the cluster should stop accepting any requests - both
>> write and read.

But here I see that the cluster is "switched into a read-only mode", so
there is a contradiction. I think it should be resolved in favor of the
read-only mode; I explained why in the previous comments.
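
As a tiny illustration of what the read-only fallback could look like
from the outside (not an API proposal - box.cfg{read_only} and net.box
exist, the URIs and the decision logic are made up): once the
orchestrator decides the quorum is gone, it flips the reachable
instances of the minority into read-only mode, so writes stop while
reads keep being served.

```
local netbox = require('net.box')

-- Hypothetical list of instances in the minority part of the cluster.
local minority = {'replica1.example:3301', 'replica2.example:3301'}

for _, uri in ipairs(minority) do
    local c = netbox.connect(uri)
    -- Stop accepting writes on the instance; reads are still possible.
    c:eval('box.cfg{read_only = true}')
    c:close()
end
```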
> In case the instance will be assigned a replica role, it may appear in
> a state that it has conflicting WAL entries, in case it recovered from a
> leader role and some of transactions didn't replicated to the current
> leader. This situation should be resolved through rejoin of the instance.
>
> Consider an example below. Originally instance with ID1 was assigned a
> Leader role and the cluster had 2 replicas with quorum set to 2.
>
> ```
> +---------------------+---------------------+---------------------+
> | ID1                 | ID2                 | ID3                 |
> | Leader              | Replica 1           | Replica 2           |
> +---------------------+---------------------+---------------------+
> | ID1 Tx1             | ID1 Tx1             | ID1 Tx1             |
> +---------------------+---------------------+---------------------+
> | ID1 Tx2             | ID1 Tx2             | ID1 Tx2             |
> +---------------------+---------------------+---------------------+
> | ID1 Tx3             | ID1 Tx3             | ID1 Tx3             |
> +---------------------+---------------------+---------------------+
> | ID1 Conf [ID1, Tx1] | ID1 Conf [ID1, Tx1] |                     |
> +---------------------+---------------------+---------------------+
> | ID1 Tx4             | ID1 Tx4             |                     |
> +---------------------+---------------------+---------------------+
> | ID1 Tx5             | ID1 Tx5             |                     |
> +---------------------+---------------------+---------------------+
> | ID1 Conf [ID1, Tx2] |                     |                     |
> +---------------------+---------------------+---------------------+
> | Tx6                 |                     |                     |
> +---------------------+---------------------+---------------------+
> | Tx7                 |                     |                     |
> +---------------------+---------------------+---------------------+
> ```
> Suppose at this moment the ID1 instance crashes. Then the ID2 instance
> should be assigned a leader role since its ID1 LSN is the biggest.
> Then this new leader will deliver its WAL to all replicas.
>
> As soon as quorum for Tx4 and Tx5 will be obtained, it should write the
> corresponding Confirms to its WAL. Note that Tx are still uses ID1.
> ```
> +---------------------+---------------------+---------------------+
> | ID1                 | ID2                 | ID3                 |
> | (dead)              | Leader              | Replica 2           |
> +---------------------+---------------------+---------------------+
> | ID1 Tx1             | ID1 Tx1             | ID1 Tx1             |
> +---------------------+---------------------+---------------------+
> | ID1 Tx2             | ID1 Tx2             | ID1 Tx2             |
> +---------------------+---------------------+---------------------+
> | ID1 Tx3             | ID1 Tx3             | ID1 Tx3             |
> +---------------------+---------------------+---------------------+
> | ID1 Conf [ID1, Tx1] | ID1 Conf [ID1, Tx1] | ID1 Conf [ID1, Tx1] |
> +---------------------+---------------------+---------------------+
> | ID1 Tx4             | ID1 Tx4             | ID1 Tx4             |
> +---------------------+---------------------+---------------------+
> | ID1 Tx5             | ID1 Tx5             | ID1 Tx5             |
> +---------------------+---------------------+---------------------+
> | ID1 Conf [ID1, Tx2] | ID2 Conf [Id1, Tx5] | ID2 Conf [Id1, Tx5] |

Id1 -> ID1 (typo)

> +---------------------+---------------------+---------------------+
> | ID1 Tx6             |                     |                     |
> +---------------------+---------------------+---------------------+
> | ID1 Tx7             |                     |                     |
> +---------------------+---------------------+---------------------+
> ```
> After rejoining ID1 will figure out the inconsistency of its WAL: the
> last WAL entry it has is corresponding to Tx7, while in Leader's log the
> last entry with ID1 is Tx5. Confirm for a Tx can only be issued after
> appearance of the Tx on the majoirty of replicas, hence there's a good
> chances that ID1 will have inconsistency in its WAL covered with undo
> log. So, by rolling back all excessive Txs (in the example they are Tx6
> and Tx7) the ID1 can put its memtx and vynil in consistent state.

Yeah, but the problem is that node1 has vclock[ID1] == 'Conf [ID1, Tx2]'.
This row can't be rolled back. So it looks like node1 needs a rejoin.

> At this point a snapshot can be created at ID1 with appropriate WAL
> rotation. The old WAL should be renamed so it will not be reused in the
> future and can be kept for postmortem.
> ```
> +---------------------+---------------------+---------------------+
> | ID1                 | ID2                 | ID3                 |
> | Replica 1           | Leader              | Replica 2           |
> +---------------------+---------------------+---------------------+
> | ID1 Tx1             | ID1 Tx1             | ID1 Tx1             |
> +---------------------+---------------------+---------------------+
> | ID1 Tx2             | ID1 Tx2             | ID1 Tx2             |
> +---------------------+---------------------+---------------------+
> | ID1 Tx3             | ID1 Tx3             | ID1 Tx3             |
> +---------------------+---------------------+---------------------+
> | ID1 Conf [ID1, Tx1] | ID1 Conf [ID1, Tx1] | ID1 Conf [ID1, Tx1] |
> +---------------------+---------------------+---------------------+
> | ID1 Tx4             | ID1 Tx4             | ID1 Tx4             |
> +---------------------+---------------------+---------------------+
> | ID1 Tx5             | ID1 Tx5             | ID1 Tx5             |
> +---------------------+---------------------+---------------------+
> |                     | ID2 Conf [Id1, Tx5] | ID2 Conf [Id1, Tx5] |
> +---------------------+---------------------+---------------------+
> |                     | ID2 Tx1             | ID2 Tx1             |
> +---------------------+---------------------+---------------------+
> |                     | ID2 Tx2             | ID2 Tx2             |
> +---------------------+---------------------+---------------------+
> ```
> Although, in case undo log is not enough to cover the WAL inconsistence
> with the new leader, the ID1 needs a complete rejoin.
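
To make that last point a bit more concrete, this is roughly the check
I would expect an external tool (or the rejoin logic) to do before
deciding between rollback and a complete rejoin. It only reads vclocks
via box.info; the function name, URIs, and how the result is acted on
are my assumptions, not an existing API.

```
local netbox = require('net.box')

-- Compare the returning instance's own LSN with what the new leader has
-- for that instance id. A positive difference means the old leader has
-- rows the rest of the cluster never saw; if any of them is a locally
-- written Confirm (as in the example above), undo can't help and a full
-- rejoin is needed.
local function excess_rows(old_leader_uri, new_leader_uri, old_id)
    local old_node = netbox.connect(old_leader_uri)
    local new_node = netbox.connect(new_leader_uri)
    local code = 'local id = ... return box.info.vclock[id] or 0'
    local local_lsn  = old_node:eval(code, {old_id})
    local leader_lsn = new_node:eval(code, {old_id})
    old_node:close()
    new_node:close()
    return local_lsn - leader_lsn
end
```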