From mboxrd@z Thu Jan 1 00:00:00 1970
To: Cyrill Gorcunov, tml
Cc: Vladislav Shpilevoy
References: <20210710222803.253251-1-gorcunov@gmail.com>
Date: Mon, 12 Jul 2021 10:54:02 +0300
In-Reply-To: <20210710222803.253251-1-gorcunov@gmail.com>
Subject: Re: [Tarantool-patches] [PATCH] limbo: introduce request processing hooks
List-Id: Tarantool development patches
From: Serge Petrenko via Tarantool-patches
Reply-To: Serge Petrenko
On 11.07.2021 01:28, Cyrill Gorcunov wrote:
> Guys, this is an early RFC since I would like to discuss the
> design first before going further. Currently we don't interrupt
> incoming syncro requests, which doesn't allow us to detect a
> cluster split-brain situation. As we were discussing verbally,
> there are a number of signs to detect it, and we need to stop
> receiving data from obsolete nodes.
>
> The main problem though is that such filtering of incoming packets
> should happen at the moment when we still can take a step back and
> inform the peer that the data has been declined, but currently our
> applier code processes syncro requests inside a WAL trigger, i.e.
> when the data is already applied or rolling back.
>
> Thus we need to separate the "filter" and "apply" stages of
> processing. What is more interesting is that we filter incoming
> requests via an in-memory vclock and update it immediately. Thus
> the following situation is possible: a promote request comes in,
> we remember it inside promote_term_map, but then the write to WAL
> fails and we never revert the promote_term_map variable. Thus the
> other peer won't be able to resend us this promote request,
> because now we think that we've already applied the promote.
>
> To solve this I split the processing routine into stages:
>
> - filter stage: we investigate in-fly packets and remember
>   their terms in the @terms_infly vclock
> - apply stage: the data is verified and we try to apply the
>   data and write it to disk; once everything is fine
>   we update the @terms_applied vclock, which serves as a backup
>   of @terms_infly
> - error stage: data application failed and we should revert
>   the @terms_infly vclock to the previous value
>
> The stages are processed by the txn_limbo_apply routine, which
> takes a mask of stages it should execute. The old txn_limbo_process
> is simply an inline wrapper with the appropriate flags.
>
> Please note that I _didn't_ put complete conditions into
> limbo_op_filter yet, only moved the current code there.
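To make sure I read the staged design above correctly, here is a minimal sketch of how I understand it. The names follow the patch (limbo_apply, LIMBO_OP_*), but the vclocks are modeled as plain arrays instead of Tarantool's struct vclock, so this is an illustration only, not the actual implementation:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Hypothetical, simplified sketch of the staged design described
 * above. terms_infly holds terms seen at the filter stage; the
 * apply stage backs them up into terms_applied after a successful
 * WAL write; the error stage rolls terms_infly back so the peer
 * can resend the request.
 */
enum {
	LIMBO_OP_FILTER = 1 << 0,
	LIMBO_OP_APPLY  = 1 << 1,
	LIMBO_OP_ERROR  = 1 << 2,
};

#define MAX_REPLICAS 32

struct limbo_promote {
	uint64_t terms_infly[MAX_REPLICAS];   /* filtered, maybe not on disk */
	uint64_t terms_applied[MAX_REPLICAS]; /* confirmed by a WAL write */
};

/* Returns -1 when the term is stale and the request must be declined. */
static int
limbo_apply(struct limbo_promote *pmt, uint32_t origin_id, uint64_t term,
	    unsigned ops)
{
	if (ops & LIMBO_OP_FILTER) {
		if (term <= pmt->terms_infly[origin_id])
			return -1; /* obsolete request, drop it */
		pmt->terms_infly[origin_id] = term;
	}
	if (ops & LIMBO_OP_APPLY) {
		/* WAL write succeeded: back up the in-fly value. */
		pmt->terms_applied[origin_id] = pmt->terms_infly[origin_id];
	}
	if (ops & LIMBO_OP_ERROR) {
		/* WAL write failed: roll the in-fly value back. */
		pmt->terms_infly[origin_id] = pmt->terms_applied[origin_id];
	}
	return 0;
}
```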
>
> So take a look once time permits; maybe there is some easier
> approach in code structure.
>
> branch gorcunov/gh-6036-rollback-confirm-notest
>
> Signed-off-by: Cyrill Gorcunov
> ---

Hi! Thanks for the patch!
The general idea looks good to me.
Please find a couple of comments below.

>  src/box/applier.cc  |  13 +++-
>  src/box/box.cc      |   6 +-
>  src/box/txn_limbo.c | 149 ++++++++++++++++++++++++++++++++++++++------
>  src/box/txn_limbo.h |  97 ++++++++++++++++++++++------
>  4 files changed, 223 insertions(+), 42 deletions(-)
>
> diff --git a/src/box/applier.cc b/src/box/applier.cc
> index 978383e64..8a44bf1b2 100644
> --- a/src/box/applier.cc
> +++ b/src/box/applier.cc
> @@ -458,7 +458,8 @@ applier_wait_snapshot(struct applier *applier)
>  		struct synchro_request req;
>  		if (xrow_decode_synchro(&row, &req) != 0)
>  			diag_raise();
> -		txn_limbo_process(&txn_limbo, &req);
> +		if (txn_limbo_process(&txn_limbo, &req) != 0)
> +			diag_raise();
>  	} else if (iproto_type_is_raft_request(row.type)) {
>  		struct raft_request req;
>  		if (xrow_decode_raft(&row, &req, NULL) != 0)
> @@ -850,11 +851,16 @@ apply_synchro_row_cb(struct journal_entry *entry)
>  	assert(entry->complete_data != NULL);
>  	struct synchro_entry *synchro_entry =
>  		(struct synchro_entry *)entry->complete_data;
> +	struct synchro_request *req = synchro_entry->req;
> +
>  	if (entry->res < 0) {
> +		txn_limbo_apply(&txn_limbo, req,
> +				LIMBO_OP_ERROR | LIMBO_OP_ATTR_PANIC);

I don't like that you pass LIMBO_OP_ATTR_PANIC in a bitfield. We usually
don't do such things. It would be nice if you could get rid of this flag,
or maybe pass it as a bool. Your approach might be totally fine though;
that's just my POV.
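To show what I mean, a sketch of the alternative signature (hypothetical names and stubbed types, just to illustrate the shape):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Hypothetical sketch of the suggestion: dedicated entry points per
 * stage instead of a LIMBO_OP_* mask, with the panic behaviour passed
 * as a plain bool rather than a bitfield attribute. txn_limbo and
 * synchro_request are stubbed here; only the call shape matters.
 */
struct txn_limbo { int dummy; };
struct synchro_request { int dummy; };

static int
txn_limbo_filter(struct txn_limbo *limbo, const struct synchro_request *req)
{
	(void)limbo; (void)req;
	return 0; /* accept the request */
}

static void
txn_limbo_apply(struct txn_limbo *limbo, const struct synchro_request *req,
		bool panic_on_error)
{
	(void)limbo; (void)req;
	/* On an internal failure the caller's choice would be honoured: */
	if (panic_on_error && /* failure condition */ false)
		abort();
}

/* The caller always knows the stage at compile time: */
static int
process(struct txn_limbo *limbo, struct synchro_request *req)
{
	if (txn_limbo_filter(limbo, req) != 0)
		return -1;
	txn_limbo_apply(limbo, req, /*panic_on_error=*/true);
	return 0;
}
```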
>  		applier_rollback_by_wal_io(entry->res);
>  	} else {
>  		replica_txn_wal_write_cb(synchro_entry->rcb);
> -		txn_limbo_process(&txn_limbo, synchro_entry->req);
> +		txn_limbo_apply(&txn_limbo, synchro_entry->req,
> +				LIMBO_OP_APPLY | LIMBO_OP_ATTR_PANIC);
>  		trigger_run(&replicaset.applier.on_wal_write, NULL);
>  	}
>  	fiber_wakeup(synchro_entry->owner);
> @@ -870,6 +876,9 @@ apply_synchro_row(uint32_t replica_id, struct xrow_header *row)
>  	if (xrow_decode_synchro(row, &req) != 0)
>  		goto err;
>
> +	if (txn_limbo_apply(&txn_limbo, &req, LIMBO_OP_FILTER) != 0)
> +		goto err;
> +

Maybe simply call txn_limbo_filter() here? You always know which
function to call at compile time, so there's no need to pass OP_FILTER,
OP_APPLY, etc. as a parameter, IMO. Same in other places.

>  	struct replica_cb_data rcb_data;
>  	struct synchro_entry entry;
>  	/* ...
> +
> +static int
> +limbo_op_error(struct txn_limbo *limbo, const struct synchro_request *req)
> +{
> +	struct txn_limbo_promote *pmt = &limbo->promote;
> +	uint32_t replica_id = req->origin_id;
> +	/*
> +	 * Restore to the applied value in case of error;
> +	 * this will allow to reapply the entry when the remote
> +	 * node gets an error and tries to resend the data.
> +	 */
> +	uint64_t v = vclock_get(&pmt->terms_applied, replica_id);
> +	vclock_reset(&pmt->terms_infly, replica_id, v);

Looks like the following situation is possible:

The same instance writes two consecutive promotes for terms i and i+1.
When some replica receives the promotes, they both pass the filter
stage, so vclock_infly[instance_id] = i+1 and
vclock_applied[instance_id] = i-1. Then the first promote succeeds and
the second one fails. Isn't it possible that vclock_infly would be
reset to i-1 instead of i?

AFAICS in the tx_complete_batch() code in wal.c, rollbacks are
processed earlier than commits, so the situation above is possible.
Am I wrong?

If not, we need to handle this somehow. Probably remember the previous
term to roll back to in each request after the filter() stage.
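The per-request fix could look roughly like this (a hypothetical sketch with the vclock modeled as an array, not the actual patch code): the filter stage records in each request the term it should roll back to, so completing the two promotes in either order cannot rewind terms_infly too far:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Hypothetical sketch of the suggested fix: each request remembers,
 * at filter time, the term to restore on error, instead of rolling
 * back to the globally last applied term.
 */
#define MAX_REPLICAS 32

struct promote_state {
	uint64_t terms_infly[MAX_REPLICAS];
};

struct request {
	uint32_t origin_id;
	uint64_t term;
	uint64_t prev_term; /* filled in by the filter stage */
};

static void
filter(struct promote_state *st, struct request *req)
{
	req->prev_term = st->terms_infly[req->origin_id];
	st->terms_infly[req->origin_id] = req->term;
}

static void
on_error(struct promote_state *st, const struct request *req)
{
	/* Roll back only if nobody bumped the term past this request. */
	if (st->terms_infly[req->origin_id] == req->term)
		st->terms_infly[req->origin_id] = req->prev_term;
}
```

In the scenario above (promotes for terms i = 5 and i+1 = 6, with term 4 already applied), a failure of the second promote then rewinds terms_infly to 5, not to 4.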
> +
> +	/*
> +	 * The max value has to be recalculated in a linear
> +	 * form; the errors should not happen frequently, so
> +	 * this is not a hot path.
> +	 */
> +	int64_t maxv = 0;
> +	struct vclock_iterator it;
> +	vclock_iterator_init(&it, &pmt->terms_infly);
> +	vclock_foreach(&it, r)
> +		maxv = r.lsn > maxv ? r.lsn : maxv;
> +	pmt->term_max_infly = maxv;
> +	return 0;
> +}
> +
> ...

-- 
Serge Petrenko