Tarantool development patches archive
From: Vladislav Shpilevoy via Tarantool-patches <tarantool-patches@dev.tarantool.org>
To: Serge Petrenko <sergepetrenko@tarantool.org>,
	Cyrill Gorcunov <gorcunov@gmail.com>,
	tml <tarantool-patches@dev.tarantool.org>
Subject: Re: [Tarantool-patches] [PATCH] limbo: introduce request processing hooks
Date: Mon, 12 Jul 2021 10:04:56 +0200
Message-ID: <36639f07-5c56-5c94-d563-0862135d0ea9@tarantool.org> (raw)
In-Reply-To: <187d1ae2-99cb-50d4-d5b4-18aa6c5f5546@tarantool.org>

On 12.07.2021 10:01, Serge Petrenko via Tarantool-patches wrote:
> 
> 
> 11.07.2021 17:00, Vladislav Shpilevoy пишет:
>> Hi! Thanks for the patch!
>>
>> On 11.07.2021 00:28, Cyrill Gorcunov wrote:
>>> Guys, this is an early RFC, since I would like to discuss the
>>> design first before going further. Currently we don't interrupt
>>> incoming synchro requests, which doesn't allow us to detect a
>>> cluster split-brain situation. As we discussed verbally, there
>>> are a number of signs to detect it, and we need to stop receiving
>>> data from obsolete nodes.
>>>
>>> The main problem, though, is that such filtering of incoming packets
>>> should happen at a point where we can still take a step back and
>>> inform the peer that the data has been declined; but currently our
>>> applier code processes synchro requests inside a WAL trigger, i.e.
>>> when the data is already applied or being rolled back.
>>>
>>> Thus we need to separate the "filter" and "apply" stages of processing.
>>> What is more interesting is that we filter incoming requests via an
>>> in-memory vclock and update it immediately. Thus the following situation
>>> is possible -- a promote request comes in, we remember it inside
>>> promote_term_map, but then the write to WAL fails and we never revert
>>> the promote_term_map variable; thus the other peer won't be able to
>>> resend us this promote request, because now we think that we've
>>> already applied the promote.
>> Well, I still don't understand what the issue is. We discussed it
>> privately already. You simply should not apply anything until WAL
>> write is done. And it is not happening now on the master. The
>> terms vclock is updated only **after** WAL write.
>>
>> Why do you need all these new vclocks if you should not apply
>> anything before WAL write in the first place?
> 
> If I understand correctly, the issue is that if we filter (and check for
> the split brain) after the WAL write, we will end up with a conflicting
> PROMOTE in our WAL. Cyrill is trying to avoid this, that's why
> he's separating the filter stage. This way the error will reach
> the remote peer before any WAL write, and the WAL write won't happen.
> 
> And if we filter before the WAL write, we need the second vclock, which
> Cyrill has introduced.

Why do you need a second vclock? Why can't you just filter by the
existing vclock and update it after WAL write like now?
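To make the two orderings under discussion concrete, here is a minimal sketch in Python pseudocode. This is not Tarantool's actual C code: `promote_term_map` follows the name used in the thread, while `Limbo`, `wal_write`, and the method names are illustrative stand-ins. It contrasts the ordering Cyrill describes (remember the term, then write to WAL) with the one Vladislav suggests (filter by the existing state, update it only after a successful WAL write).

```python
# Illustrative sketch only; all names besides promote_term_map are invented.
class Limbo:
    def __init__(self, wal_write):
        self.promote_term_map = {}   # replica_id -> greatest applied term
        self.wal_write = wal_write   # stand-in for the WAL write; may raise

    def apply_promote_buggy(self, replica_id, term):
        """Ordering described in the RFC: remember the term before the write."""
        if self.promote_term_map.get(replica_id, 0) >= term:
            return False                       # looks like a duplicate
        self.promote_term_map[replica_id] = term
        try:
            self.wal_write(replica_id, term)
        except IOError:
            # The map is never reverted, so a resent PROMOTE with the same
            # term will now be wrongly rejected as already applied.
            pass
        return True

    def apply_promote_fixed(self, replica_id, term):
        """Filter by the existing map; update it only after the WAL write."""
        if self.promote_term_map.get(replica_id, 0) >= term:
            return False                       # genuine duplicate, decline
        try:
            self.wal_write(replica_id, term)
        except IOError:
            return False                       # state untouched, retry works
        self.promote_term_map[replica_id] = term
        return True
```

With a `wal_write` that fails once and then succeeds, the first variant rejects the peer's retry (the failure mode described in the RFC), while the second accepts it, because the in-memory state only ever reflects terms that actually reached the WAL.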


Thread overview: 21+ messages
2021-07-10 22:28 Cyrill Gorcunov via Tarantool-patches
2021-07-11 14:00 ` Vladislav Shpilevoy via Tarantool-patches
2021-07-11 18:22   ` Cyrill Gorcunov via Tarantool-patches
2021-07-12  8:03     ` Vladislav Shpilevoy via Tarantool-patches
2021-07-12  8:09       ` Cyrill Gorcunov via Tarantool-patches
2021-07-12 21:20         ` Vladislav Shpilevoy via Tarantool-patches
2021-07-12 22:32           ` Cyrill Gorcunov via Tarantool-patches
2021-07-13 19:32             ` Vladislav Shpilevoy via Tarantool-patches
2021-07-12  8:01   ` Serge Petrenko via Tarantool-patches
2021-07-12  8:04     ` Vladislav Shpilevoy via Tarantool-patches [this message]
2021-07-12  8:12       ` Cyrill Gorcunov via Tarantool-patches
2021-07-12  8:23         ` Cyrill Gorcunov via Tarantool-patches
2021-07-12 21:20         ` Vladislav Shpilevoy via Tarantool-patches
2021-07-12 22:34           ` Cyrill Gorcunov via Tarantool-patches
2021-07-12  9:43     ` Cyrill Gorcunov via Tarantool-patches
2021-07-12 15:48       ` Cyrill Gorcunov via Tarantool-patches
2021-07-12 16:49         ` Serge Petrenko via Tarantool-patches
2021-07-12 17:04           ` Cyrill Gorcunov via Tarantool-patches
2021-07-12 21:20             ` Vladislav Shpilevoy via Tarantool-patches
2021-07-12 21:52               ` Cyrill Gorcunov via Tarantool-patches
2021-07-12  7:54 ` Serge Petrenko via Tarantool-patches
