To: tml
Date: Mon, 26 Jul 2021 18:34:51 +0300
Message-Id: <20210726153452.113897-6-gorcunov@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210726153452.113897-1-gorcunov@gmail.com>
References: <20210726153452.113897-1-gorcunov@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [Tarantool-patches] [PATCH v8 5/6] limbo: filter incoming synchro requests
From: Cyrill Gorcunov via Tarantool-patches
Reply-To: Cyrill Gorcunov
Cc: Vladislav Shpilevoy

When we receive synchro requests we can't apply them blindly, because in
the worst case they may come from a split-brain configuration (where a
cluster has split into several subclusters, each with its own elected
leader, and the subclusters then try to merge back into the original
cluster). We need to do our best to detect such configurations and force
those nodes to rejoin from scratch for the sake of data consistency.

Thus, when we process requests we pass them through a packet filter
first, which validates their contents and refuses to apply them if they
do not match the expected state.
Depending on the request type each packet traverses the appropriate chain(s):

FILTER_IN
 - Common chain for any synchro packet. We verify that if replica_id
   is nil then it shall be a PROMOTE request with lsn 0 to migrate the
   limbo owner

FILTER_CONFIRM
FILTER_ROLLBACK
 - Both confirm and rollback requests shall not come with an empty
   limbo, since that means the synchro queue is already processed and
   the peer didn't notice

FILTER_PROMOTE
 - A promote request should come in with new terms only, otherwise it
   means the peer didn't notice the election
 - If the limbo's confirmed_lsn is equal to the promote LSN then it is
   a valid request to process
 - If the limbo's confirmed_lsn is bigger than the requested one then
   it is valid in one case only -- limbo migration, so the queue shall
   be empty
 - If the limbo's confirmed_lsn is less than the promote LSN then
   - If the queue is empty, the transactions are already rolled back
     and the request is invalid
   - If the queue is not empty, its first entry might be greater than
     the promote LSN, which means the old data is either committed or
     rolled back already and the request is invalid

FILTER_DEMOTE
 - NOP, reserved for future use

Closes #6036

Signed-off-by: Cyrill Gorcunov
---
 src/box/applier.cc     |  21 ++-
 src/box/box.cc         |  11 +-
 src/box/memtx_engine.c |   3 +-
 src/box/txn_limbo.c    | 304 ++++++++++++++++++++++++++++++++++++++---
 src/box/txn_limbo.h    |  33 ++++-
 5 files changed, 346 insertions(+), 26 deletions(-)

diff --git a/src/box/applier.cc b/src/box/applier.cc
index b5c3a7b67..cf51dc8fb 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -458,7 +458,8 @@ applier_wait_snapshot(struct applier *applier)
 			struct synchro_request req;
 			if (xrow_decode_synchro(&row, &req) != 0)
 				diag_raise();
-			txn_limbo_process(&txn_limbo, &req);
+			if (txn_limbo_process(&txn_limbo, &req) != 0)
+				diag_raise();
 		} else if (iproto_type_is_raft_request(row.type)) {
 			struct raft_request req;
 			if (xrow_decode_raft(&row, &req, NULL) != 0)
@@ -514,6 +515,11 @@ applier_fetch_snapshot(struct applier *applier)
 	struct ev_io *coio = &applier->io;
 	struct xrow_header row;
 
+	txn_limbo_filter_disable(&txn_limbo);
+	auto filter_guard = make_scoped_guard([&]{
+		txn_limbo_filter_enable(&txn_limbo);
+	});
+
 	memset(&row, 0, sizeof(row));
 	row.type = IPROTO_FETCH_SNAPSHOT;
 	coio_write_xrow(coio, &row);
@@ -587,6 +593,11 @@ applier_register(struct applier *applier, bool was_anon)
 	struct ev_io *coio = &applier->io;
 	struct xrow_header row;
 
+	txn_limbo_filter_disable(&txn_limbo);
+	auto filter_guard = make_scoped_guard([&]{
+		txn_limbo_filter_enable(&txn_limbo);
+	});
+
 	memset(&row, 0, sizeof(row));
 	/*
 	 * Send this instance's current vclock together
@@ -620,6 +631,11 @@ applier_join(struct applier *applier)
 	struct xrow_header row;
 	uint64_t row_count;
 
+	txn_limbo_filter_disable(&txn_limbo);
+	auto filter_guard = make_scoped_guard([&]{
+		txn_limbo_filter_enable(&txn_limbo);
+	});
+
 	xrow_encode_join_xc(&row, &INSTANCE_UUID);
 	coio_write_xrow(coio, &row);
@@ -873,6 +889,9 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
 		goto err;
 
 	txn_limbo_terms_lock(&txn_limbo);
+	if (txn_limbo_filter_locked(&txn_limbo, &req) != 0)
+		goto err_unlock;
+
 	struct replica_cb_data rcb_data;
 	struct synchro_entry entry;
 	/*
diff --git a/src/box/box.cc b/src/box/box.cc
index 395f0d9ef..617dae39c 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1668,7 +1668,8 @@ box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn)
 		.lsn = promote_lsn,
 		.term = raft->term,
 	};
-	txn_limbo_process(&txn_limbo, &req);
+	if (txn_limbo_process(&txn_limbo, &req) != 0)
+		diag_raise();
 	assert(txn_limbo_is_empty(&txn_limbo));
 }
 
@@ -1695,7 +1696,8 @@ box_issue_demote(uint32_t prev_leader_id, int64_t promote_lsn)
 		.lsn = promote_lsn,
 		.term = box_raft()->term,
 	};
-	txn_limbo_process(&txn_limbo, &req);
+	if (txn_limbo_process(&txn_limbo, &req) != 0)
+		diag_raise();
 	assert(txn_limbo_is_empty(&txn_limbo));
 }
 
@@ -3273,6 +3275,11 @@ local_recovery(const struct tt_uuid *instance_uuid,
 	say_info("instance uuid %s", tt_uuid_str(&INSTANCE_UUID));
 
+	txn_limbo_filter_disable(&txn_limbo);
+	auto filter_guard = make_scoped_guard([&]{
+		txn_limbo_filter_enable(&txn_limbo);
+	});
+
 	struct wal_stream wal_stream;
 	wal_stream_create(&wal_stream);
 	auto stream_guard = make_scoped_guard([&]{
diff --git a/src/box/memtx_engine.c b/src/box/memtx_engine.c
index 0b06e5e63..4aed24fe3 100644
--- a/src/box/memtx_engine.c
+++ b/src/box/memtx_engine.c
@@ -238,7 +238,8 @@ memtx_engine_recover_synchro(const struct xrow_header *row)
 	 * because all its rows have a zero replica_id.
 	 */
 	req.origin_id = req.replica_id;
-	txn_limbo_process(&txn_limbo, &req);
+	if (txn_limbo_process(&txn_limbo, &req) != 0)
+		return -1;
 	return 0;
 }
 
diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 5f43f575c..2f901b4ef 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -57,6 +57,7 @@ txn_limbo_create(struct txn_limbo *limbo)
 	limbo->confirmed_lsn = 0;
 	limbo->rollback_count = 0;
 	limbo->is_in_rollback = false;
+	limbo->is_filtering = true;
 }
 
 bool
@@ -731,37 +732,291 @@ txn_limbo_wait_empty(struct txn_limbo *limbo, double timeout)
 	return 0;
 }
 
+enum filter_chain {
+	FILTER_IN,
+	FILTER_CONFIRM,
+	FILTER_ROLLBACK,
+	FILTER_PROMOTE,
+	FILTER_DEMOTE,
+	FILTER_MAX,
+};
+
+/**
+ * Common chain for any incoming packet.
+ */
+static int
+filter_in(struct txn_limbo *limbo, const struct synchro_request *req)
+{
+	(void)limbo;
+
+	if (req->replica_id == REPLICA_ID_NIL) {
+		/*
+		 * The limbo was empty on the instance issuing
+		 * the request. This means this instance must
+		 * empty its limbo as well.
+		 */
+		if (req->lsn != 0 ||
+		    !iproto_type_is_promote_request(req->type)) {
+			say_info("RAFT: rejecting %s request from "
+				 "instance id %u for term %llu. "
+				 "req->replica_id = 0 but lsn %lld.",
+				 iproto_type_name(req->type),
+				 req->origin_id, (long long)req->term,
+				 (long long)req->lsn);
+
+			diag_set(ClientError, ER_UNSUPPORTED,
+				 "Replication",
+				 "empty replica_id with nonzero LSN");
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * Filter CONFIRM and ROLLBACK packets.
+ */
+static int
+filter_confirm_rollback(struct txn_limbo *limbo,
+			const struct synchro_request *req)
+{
+	/*
+	 * When the limbo is empty we have nothing to
+	 * confirm or roll back, so if such a request
+	 * comes in it means a split brain has happened.
+	 */
+	if (!txn_limbo_is_empty(limbo))
+		return 0;
+
+	say_info("RAFT: rejecting %s request from "
+		 "instance id %u for term %llu. "
+		 "Empty limbo detected.",
+		 iproto_type_name(req->type),
+		 req->origin_id,
+		 (long long)req->term);
+
+	diag_set(ClientError, ER_UNSUPPORTED,
+		 "Replication",
+		 "confirm/rollback with empty limbo");
+	return -1;
+}
+
+/**
+ * Filter PROMOTE packets.
+ */
+static int
+filter_promote(struct txn_limbo *limbo, const struct synchro_request *req)
+{
+	struct txn_limbo_terms *tr = &limbo->terms;
+	int64_t promote_lsn = req->lsn;
+
+	/*
+	 * If the term is already seen it means it comes
+	 * from a node which didn't notice new elections,
+	 * thus it has been living in its own subdomain
+	 * and its data is no longer consistent.
+	 */
+	if (tr->terms_max > 1 && tr->terms_max > req->term) {
+		say_info("RAFT: rejecting %s request from "
+			 "instance id %u for term %llu. "
+			 "Max term seen is %llu.",
+			 iproto_type_name(req->type),
+			 req->origin_id,
+			 (long long)req->term,
+			 (long long)tr->terms_max);
+
+		diag_set(ClientError, ER_UNSUPPORTED,
+			 "Replication", "obsolete terms");
+		return -1;
+	}
+
+	/*
+	 * Either the limbo is empty or the new promote
+	 * will roll back all waiting transactions, which
+	 * is fine.
+	 */
+	if (limbo->confirmed_lsn == promote_lsn)
+		return 0;
+
+	/*
+	 * Explicit split brain situation. The promote
+	 * comes in with an old LSN which we've already
+	 * processed.
+	 */
+	if (limbo->confirmed_lsn > promote_lsn) {
+		/*
+		 * If the limbo is empty we're migrating
+		 * the owner.
+		 */
+		if (txn_limbo_is_empty(limbo))
+			return 0;
+
+		say_info("RAFT: rejecting %s request from "
+			 "instance id %u for term %llu. "
+			 "confirmed_lsn %lld > promote_lsn %lld.",
+			 iproto_type_name(req->type),
+			 req->origin_id, (long long)req->term,
+			 (long long)limbo->confirmed_lsn,
+			 (long long)promote_lsn);
+
+		diag_set(ClientError, ER_UNSUPPORTED,
+			 "Replication",
+			 "backward promote LSN (split brain)");
+		return -1;
+	}
+
+	/*
+	 * The last case requires a few subcases.
+	 */
+	assert(limbo->confirmed_lsn < promote_lsn);
+
+	if (txn_limbo_is_empty(limbo)) {
+		/*
+		 * Transactions are already rolled back
+		 * since the limbo is empty.
+		 */
+		say_info("RAFT: rejecting %s request from "
+			 "instance id %u for term %llu. "
+			 "confirmed_lsn %lld < promote_lsn %lld "
+			 "and empty limbo.",
+			 iproto_type_name(req->type),
+			 req->origin_id, (long long)req->term,
+			 (long long)limbo->confirmed_lsn,
+			 (long long)promote_lsn);
+
+		diag_set(ClientError, ER_UNSUPPORTED,
+			 "Replication",
+			 "forward promote LSN "
+			 "(empty limbo, split brain)");
+		return -1;
+	} else {
+		/*
+		 * Some entries are present in the limbo,
+		 * and if the first entry's LSN is greater
+		 * than requested then the old data is either
+		 * committed or rolled back already, so we
+		 * can't continue.
+		 */
+		struct txn_limbo_entry *first;
+
+		first = txn_limbo_first_entry(limbo);
+		if (first->lsn > promote_lsn) {
+			say_info("RAFT: rejecting %s request from "
+				 "instance id %u for term %llu. "
+				 "confirmed_lsn %lld < promote_lsn %lld "
+				 "and limbo first lsn %lld.",
+				 iproto_type_name(req->type),
+				 req->origin_id, (long long)req->term,
+				 (long long)limbo->confirmed_lsn,
+				 (long long)promote_lsn,
+				 (long long)first->lsn);
+
+			diag_set(ClientError, ER_UNSUPPORTED,
+				 "Replication",
+				 "promote LSN conflict "
+				 "(limbo LSN ahead, split brain)");
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * Filter DEMOTE packets.
+ */
+static int
+filter_demote(struct txn_limbo *limbo, const struct synchro_request *req)
+{
+	(void)limbo;
+	(void)req;
+	return 0;
+}
+
+static int (*filter_req[FILTER_MAX])
+(struct txn_limbo *limbo, const struct synchro_request *req) = {
+	[FILTER_IN] = filter_in,
+	[FILTER_CONFIRM] = filter_confirm_rollback,
+	[FILTER_ROLLBACK] = filter_confirm_rollback,
+	[FILTER_PROMOTE] = filter_promote,
+	[FILTER_DEMOTE] = filter_demote,
+};
+
+int
+txn_limbo_filter_locked(struct txn_limbo *limbo,
+			const struct synchro_request *req)
+{
+	unsigned int mask = (1u << FILTER_IN);
+	unsigned int pos = 0;
+
+	if (!limbo->is_filtering)
+		return 0;
+
+#ifndef NDEBUG
+	say_info("limbo: filter %s replica_id %u origin_id %u "
+		 "term %lld lsn %lld, queue owner_id %u len %lld "
+		 "confirmed_lsn %lld",
+		 iproto_type_name(req->type),
+		 req->replica_id, req->origin_id,
+		 (long long)req->term, (long long)req->lsn,
+		 limbo->owner_id, (long long)limbo->len,
+		 (long long)limbo->confirmed_lsn);
+#endif
+
+	switch (req->type) {
+	case IPROTO_CONFIRM:
+		mask |= (1u << FILTER_CONFIRM);
+		break;
+	case IPROTO_ROLLBACK:
+		mask |= (1u << FILTER_ROLLBACK);
+		break;
+	case IPROTO_PROMOTE:
+		mask |= (1u << FILTER_PROMOTE);
+		break;
+	case IPROTO_DEMOTE:
+		mask |= (1u << FILTER_DEMOTE);
+		break;
+	default:
+		say_info("RAFT: rejecting unexpected %d "
+			 "request from instance id %u "
+			 "for term %llu.",
+			 req->type, req->origin_id,
+			 (long long)req->term);
+		diag_set(ClientError, ER_UNSUPPORTED,
+			 "Replication",
+			 "unexpected request type");
+		return -1;
+	}
+
+	while (mask != 0) {
+		if ((mask & 1) != 0) {
+			assert(pos < lengthof(filter_req));
+			if (filter_req[pos](limbo, req) != 0)
+				return -1;
+		}
+		pos++;
+		mask >>= 1;
+	};
+
+	return 0;
+}
+
 void
 txn_limbo_process_locked(struct txn_limbo *limbo,
 			 const struct synchro_request *req)
 {
 	struct txn_limbo_terms *tr = &limbo->terms;
-	uint64_t term = req->term;
 	uint32_t origin = req->origin_id;
+	uint64_t term = req->term;
 
 	if (txn_limbo_term_locked(limbo, origin) < term) {
 		vclock_follow(&tr->terms_map, origin, term);
 		if (term > tr->terms_max)
 			tr->terms_max = term;
-	} else if (iproto_type_is_promote_request(req->type) &&
-		   tr->terms_max > 1) {
-		/* PROMOTE for outdated term. Ignore. */
-		say_info("RAFT: ignoring %s request from instance "
-			 "id %u for term %llu. Greatest term seen "
-			 "before (%llu) is bigger.",
-			 iproto_type_name(req->type), origin, (long long)term,
-			 (long long)tr->terms_max);
-		return;
 	}
 
 	int64_t lsn = req->lsn;
-	if (req->replica_id == REPLICA_ID_NIL) {
-		/*
-		 * The limbo was empty on the instance issuing the request.
-		 * This means this instance must empty its limbo as well.
-		 */
-		assert(lsn == 0 && iproto_type_is_promote_request(req->type));
-	} else if (req->replica_id != limbo->owner_id) {
+	if (req->replica_id != limbo->owner_id) {
 		/*
 		 * Ignore CONFIRM/ROLLBACK messages for a foreign master.
 		 * These are most likely outdated messages for already confirmed
@@ -792,18 +1047,25 @@ txn_limbo_process_locked(struct txn_limbo *limbo,
 		txn_limbo_read_demote(limbo, lsn);
 		break;
 	default:
-		unreachable();
+		panic("limbo: unexpected request type %d",
+		      req->type);
+		break;
 	}
-	return;
 }
 
-void
+int
 txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
 {
+	int rc;
+
 	txn_limbo_terms_lock(limbo);
-	txn_limbo_process_locked(limbo, req);
+	rc = txn_limbo_filter_locked(limbo, req);
+	if (rc == 0)
+		txn_limbo_process_locked(limbo, req);
 	txn_limbo_terms_unlock(limbo);
+
+	return rc;
 }
 
 void
diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h
index 45687381f..97de6943b 100644
--- a/src/box/txn_limbo.h
+++ b/src/box/txn_limbo.h
@@ -195,6 +195,14 @@ struct txn_limbo {
 	 * by the 'reversed rollback order' rule - contradiction.
 	 */
 	bool is_in_rollback;
+	/**
+	 * Whether the limbo should filter incoming requests.
+	 * During local recovery from the WAL file and during the
+	 * applier's join phase we fully trust the incoming data,
+	 * because it forms the initial limbo state, so requests
+	 * should not be filtered out.
+	 */
+	bool is_filtering;
 };
 
 /**
@@ -372,15 +380,38 @@ txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn);
 int
 txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry);
 
+/**
+ * Verify if the request is valid for processing.
+ */
+int
+txn_limbo_filter_locked(struct txn_limbo *limbo,
+			const struct synchro_request *req);
+
 /** Execute a synchronous replication request. */
 void
 txn_limbo_process_locked(struct txn_limbo *limbo,
 			 const struct synchro_request *req);
 
 /** Lock limbo terms and execute a synchronous replication request. */
-void
+int
 txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req);
 
+/** Enable filtering of synchro requests. */
+static inline void
+txn_limbo_filter_enable(struct txn_limbo *limbo)
+{
+	limbo->is_filtering = true;
+	say_info("limbo: filter enabled");
+}
+
+/** Disable filtering of synchro requests. */
+static inline void
+txn_limbo_filter_disable(struct txn_limbo *limbo)
+{
+	limbo->is_filtering = false;
+	say_info("limbo: filter disabled");
+}
+
 /**
  * Waiting for confirmation of all "sync" transactions
  * during confirm timeout or fail.
-- 
2.31.1