From mboxrd@z Thu Jan  1 00:00:00 1970
From: Cyrill Gorcunov via Tarantool-patches
Reply-To: Cyrill Gorcunov
To: tml
Cc: Vladislav Shpilevoy
Date: Sat, 17 Jul 2021 00:19:46 +0300
Message-Id: <20210716211946.23247-4-gorcunov@gmail.com>
In-Reply-To: <20210716211946.23247-1-gorcunov@gmail.com>
References: <20210716211946.23247-1-gorcunov@gmail.com>
Subject: [Tarantool-patches] [RFC v6 3/3] limbo: filter incoming synchro requests

When we receive synchro requests we can't just apply them blindly,
because in the worst case they may come from a split-brain
configuration (where a cluster has split into several subclusters,
each one has elected its own leader, and then the subclusters try
to merge back into the original cluster). We need to do our best to
detect such configurations and force the affected nodes to rejoin
from scratch for the sake of data consistency.

Thus when we process a request we pass it to a packet filter first,
which validates its contents and refuses to apply the request if it
does not match the local state.

Closes #6036

Signed-off-by: Cyrill Gorcunov
---
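Not part of the patch, just an illustrative note for reviewers: below is
a minimal, self-contained sketch of the filter-then-apply idea. The names
here (req_type, limbo_state, filter_request) are invented for the example
and do not match the real Tarantool structures. The point is that a
request is validated against the locally known state (pending entries,
greatest term seen, confirmed LSN) before it is allowed to change
anything, so a rejected request leaves the limbo untouched.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum req_type { REQ_CONFIRM, REQ_ROLLBACK, REQ_PROMOTE };

struct request {
	enum req_type type;
	uint64_t term;
	int64_t lsn;
};

struct limbo_state {
	uint64_t max_term_seen;	/* greatest term observed so far */
	int64_t confirmed_lsn;	/* last confirmed LSN */
	bool empty;		/* no pending synchronous transactions */
};

/* Return 0 if the request is safe to apply, -1 if it looks like split brain. */
static int
filter_request(const struct limbo_state *s, const struct request *r)
{
	switch (r->type) {
	case REQ_CONFIRM:
	case REQ_ROLLBACK:
		/* With an empty limbo there is nothing to confirm or roll back. */
		return s->empty ? -1 : 0;
	case REQ_PROMOTE:
		/* A promote from an older term: the sender missed the elections. */
		if (r->term < s->max_term_seen)
			return -1;
		/* A promote for an LSN we already passed: histories diverged. */
		if (r->lsn < s->confirmed_lsn)
			return -1;
		return 0;
	}
	return -1;
}

int
main(void)
{
	struct limbo_state s = {
		.max_term_seen = 5,
		.confirmed_lsn = 100,
		.empty = true,
	};
	struct request stale = { .type = REQ_PROMOTE, .term = 3, .lsn = 90 };

	if (filter_request(&s, &stale) != 0)
		printf("rejected: possible split brain\n");
	return 0;
}

The real filter in the patch additionally splits the checks per request
type via the filter_req[] dispatch table and runs under the terms lock.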

 src/box/applier.cc     |   6 +-
 src/box/box.cc         |   6 +-
 src/box/memtx_engine.c |   3 +-
 src/box/txn_limbo.c    | 209 ++++++++++++++++++++++++++++++++++++++---
 src/box/txn_limbo.h    |   9 +-
 5 files changed, 216 insertions(+), 17 deletions(-)

diff --git a/src/box/applier.cc b/src/box/applier.cc
index 765ffc670..f07d4c7b0 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -458,7 +458,8 @@ applier_wait_snapshot(struct applier *applier)
 			struct synchro_request req;
 			if (xrow_decode_synchro(&row, &req) != 0)
 				diag_raise();
-			txn_limbo_process(&txn_limbo, &req);
+			if (txn_limbo_process(&txn_limbo, &req) != 0)
+				diag_raise();
 		} else if (iproto_type_is_raft_request(row.type)) {
 			struct raft_request req;
 			if (xrow_decode_raft(&row, &req, NULL) != 0)
@@ -871,6 +872,9 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
 		goto err;
 
 	txn_limbo_terms_lock(&txn_limbo);
+	if (txn_limbo_filter_locked(&txn_limbo, &req) != 0)
+		goto err_unlock;
+
 	struct replica_cb_data rcb_data;
 	struct synchro_entry entry;
 	/*
diff --git a/src/box/box.cc b/src/box/box.cc
index e590df425..6bef10219 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1675,7 +1675,8 @@ box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn)
 		.lsn = promote_lsn,
 		.term = box_raft()->term,
 	};
-	txn_limbo_process(&txn_limbo, &req);
+	if (txn_limbo_process(&txn_limbo, &req) != 0)
+		diag_raise();
 	assert(txn_limbo_is_empty(&txn_limbo));
 }
 
@@ -1694,7 +1695,8 @@ box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn)
 		.lsn = promote_lsn,
 		.term = box_raft()->term,
 	};
-	txn_limbo_process(&txn_limbo, &req);
+	if (txn_limbo_process(&txn_limbo, &req) != 0)
+		diag_raise();
 	assert(txn_limbo_is_empty(&txn_limbo));
 }
 
diff --git a/src/box/memtx_engine.c b/src/box/memtx_engine.c
index 0b06e5e63..4aed24fe3 100644
--- a/src/box/memtx_engine.c
+++ b/src/box/memtx_engine.c
@@ -238,7 +238,8 @@ memtx_engine_recover_synchro(const struct xrow_header *row)
 	 * because all its rows have a zero replica_id.
 	 */
 	req.origin_id = req.replica_id;
-	txn_limbo_process(&txn_limbo, &req);
+	if (txn_limbo_process(&txn_limbo, &req) != 0)
+		return -1;
 	return 0;
 }
 
diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 437cf199b..8a34f3151 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -731,27 +731,211 @@ txn_limbo_wait_empty(struct txn_limbo *limbo, double timeout)
 	return 0;
 }
 
+enum filter_chain {
+	FILTER_CONFIRM,
+	FILTER_ROLLBACK,
+	FILTER_PROMOTE,
+	FILTER_MAX,
+};
+
+/**
+ * Filter CONFIRM and ROLLBACK packets.
+ */
+static int
+filter_confirm_rollback(struct txn_limbo *limbo,
+			const struct synchro_request *req)
+{
+	/*
+	 * When the limbo is empty we have nothing to
+	 * confirm or roll back, so if such a request
+	 * comes in it means a split brain has happened.
+	 */
+	if (!txn_limbo_is_empty(limbo))
+		return 0;
+
+	say_info("RAFT: rejecting %s request from "
+		 "instance id %u for term %llu. "
+		 "Empty limbo detected.",
+		 iproto_type_name(req->type),
+		 req->origin_id,
+		 (long long)req->term);
+
+	diag_set(ClientError, ER_UNSUPPORTED,
+		 "Replication",
+		 "confirm/rollback with empty limbo");
+	return -1;
+}
+
+/**
+ * Filter PROMOTE packets.
+ */
+static int
+filter_promote(struct txn_limbo *limbo, const struct synchro_request *req)
+{
+	struct txn_limbo_terms *tr = &limbo->terms;
+	int64_t promote_lsn = req->lsn;
+
+	/*
+	 * If the term has already been seen it means the
+	 * request comes from a node which didn't notice the
+	 * new elections, thus it has been living in its own
+	 * subcluster and its data is no longer consistent.
+	 */
+	if (tr->terms_max > 1 && tr->terms_max > req->term) {
+		say_info("RAFT: rejecting %s request from "
+			 "instance id %u for term %llu. "
+			 "Max term seen is %llu.",
+			 iproto_type_name(req->type),
+			 req->origin_id,
+			 (long long)req->term,
+			 (long long)tr->terms_max);
+
+		diag_set(ClientError, ER_UNSUPPORTED,
+			 "Replication", "obsolete terms");
+		return -1;
+	}
+
+	/*
+	 * Either the limbo is empty or the new promote will
+	 * roll back all waiting transactions, which is fine.
+	 */
+	if (limbo->confirmed_lsn == promote_lsn)
+		return 0;
+
+	/*
+	 * An explicit split-brain situation: the promote
+	 * comes in with an old LSN which we have already
+	 * processed.
+	 */
+	if (limbo->confirmed_lsn > promote_lsn) {
+		say_info("RAFT: rejecting %s request from "
+			 "instance id %u for term %llu. "
+			 "confirmed_lsn %lld > promote_lsn %lld.",
+			 iproto_type_name(req->type),
+			 req->origin_id, (long long)req->term,
+			 (long long)limbo->confirmed_lsn,
+			 (long long)promote_lsn);
+
+		diag_set(ClientError, ER_UNSUPPORTED,
+			 "Replication",
+			 "backward promote LSN (split brain)");
+		return -1;
+	}
+
+	/*
+	 * The last case splits into a few subcases.
+	 */
+	assert(limbo->confirmed_lsn < promote_lsn);
+
+	if (txn_limbo_is_empty(limbo)) {
+		/*
+		 * Transactions are already rolled back
+		 * since the limbo is empty.
+		 */
+		say_info("RAFT: rejecting %s request from "
+			 "instance id %u for term %llu. "
+			 "confirmed_lsn %lld < promote_lsn %lld "
+			 "and empty limbo.",
+			 iproto_type_name(req->type),
+			 req->origin_id, (long long)req->term,
+			 (long long)limbo->confirmed_lsn,
+			 (long long)promote_lsn);
+
+		diag_set(ClientError, ER_UNSUPPORTED,
+			 "Replication",
+			 "forward promote LSN "
+			 "(empty limbo, split brain)");
+		return -1;
+	} else {
+		/*
+		 * Some entries are present in the limbo,
+		 * and if the first entry's LSN is greater
+		 * than the requested one then the old data
+		 * was either committed or rolled back, so
+		 * we can't continue.
+		 */
+		struct txn_limbo_entry *first;
+
+		first = txn_limbo_first_entry(limbo);
+		if (first->lsn > promote_lsn) {
+			say_info("RAFT: rejecting %s request from "
+				 "instance id %u for term %llu. "
" + "confirmed_lsn %lld < promote_lsn %lld " + "and limbo first lsn %lld.", + iproto_type_name(req->type), + req->origin_id, (long long)req->term, + (long long)limbo->confirmed_lsn, + (long long)promote_lsn, + (long long)first->lsn); + + diag_set(ClientError, ER_UNSUPPORTED, + "Replication", + "promote LSN confilict " + "(limbo LSN ahead, split brain)"); + return -1; + } + } + + return 0; +} + +static int (*filter_req[FILTER_MAX]) +(struct txn_limbo *limbo, const struct synchro_request *req) = { + [FILTER_CONFIRM] = filter_confirm_rollback, + [FILTER_ROLLBACK] = filter_confirm_rollback, + [FILTER_PROMOTE] = filter_promote, +}; + +int +txn_limbo_filter_locked(struct txn_limbo *limbo, + const struct synchro_request *req) +{ + unsigned int mask = 0; + unsigned int pos = 0; + + switch (req->type) { + case IPROTO_CONFIRM: + mask |= (1u << FILTER_CONFIRM); + break; + case IPROTO_ROLLBACK: + mask |= (1u << FILTER_ROLLBACK); + break; + case IPROTO_PROMOTE: + mask |= (1u << FILTER_PROMOTE); + break; + case IPROTO_DEMOTE: + /* Do nothing for a while */ + break; + default: + panic("limbo: unexpected request %u", + req->type); + } + + while (mask != 0) { + if ((mask & 1) != 0) { + assert(pos < lengthof(filter_req)); + if (filter_req[pos](limbo, req) != 0) + return -1; + } + pos++; + mask >>= 1; + }; + + return 0; +} + void txn_limbo_process_locked(struct txn_limbo *limbo, const struct synchro_request *req) { struct txn_limbo_terms *tr = &limbo->terms; - uint64_t term = req->term; uint32_t origin = req->origin_id; + uint64_t term = req->term; if (txn_limbo_term_locked(limbo, origin) < term) { vclock_follow(&tr->terms_map, origin, term); if (term > tr->terms_max) tr->terms_max = term; - } else if (iproto_type_is_promote_request(req->type) && - tr->terms_max > 1) { - /* PROMOTE for outdated term. Ignore. */ - say_info("RAFT: ignoring %s request from instance " - "id %u for term %llu. Greatest term seen " - "before (%llu) is bigger.", - iproto_type_name(req->type), origin, (long long)term, - (long long)tr->terms_max); - return; } int64_t lsn = req->lsn; @@ -794,14 +978,15 @@ txn_limbo_process_locked(struct txn_limbo *limbo, default: unreachable(); } - return; } -void +int txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req) { txn_limbo_terms_lock(limbo); + if (txn_limbo_filter_locked(limbo, req) != 0) + return -1; txn_limbo_process_locked(limbo, req); txn_limbo_terms_unlock(limbo); return 0; diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h index 45687381f..e89cf6e79 100644 --- a/src/box/txn_limbo.h +++ b/src/box/txn_limbo.h @@ -372,13 +372,20 @@ txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn); int txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry); +/** + * Verify if the request is valid for processing. + */ +int +txn_limbo_filter_locked(struct txn_limbo *limbo, + const struct synchro_request *req); + /** Execute a synchronous replication request. */ void txn_limbo_process_locked(struct txn_limbo *limbo, const struct synchro_request *req); /** Lock limbo terms and execute a synchronous replication request. */ -void +int txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req); /** -- 2.31.1