[Tarantool-patches] [PATCH v3 0/4] replication: fix applying of rows originating from local instance

Георгий Кириченко kirichenkoga at gmail.com
Mon Feb 24 15:31:34 MSK 2020


Please read messages before answering. I never said that:
> You've been suggesting that filtering on the master is safer.
I said it is safer to do it on the replica side, and that a replica should
not rely on master correctness.
> I pointed out it's not, there is no way to guarantee
> (even in theory) correctness/safety on the replica if the master is
> malfunctioning.
Excuse me, but this is demagogy: we are talking about which approach is
safer, not about absolute safety.
>The situation is symmetrical. Both peers do not have the whole
>picture. You can make either of the peers responsible for the
>decision, then the other peer will need to supply the missing
>bits.
No, you are wrong. A master has only one source of information about the
stream it should send to a replica, whereas a replica could connect to many
masters and fetch the proper data from one or many of them. We have already
implemented similar logic (the voting protocol), and you should know about
it. Additionally, my approach allows collecting all the corresponding logic
in one module on the replica side: filtering of concurrent streams, vclock
following, subscriptions and replication groups (which are not implemented
yet), registration, and whatever else.
>I do not think the scope of this issue has ever been protecting
>against hacked masters. It has never been a goal of the protocol
>either.
A hacked master could simply be a master with an implementation error, and
we should be able to detect such an error as soon as possible. But if a
replica does not check the incoming stream, there is no way to prevent
fatal data loss.
>This was added for specific reasons. There is no known reason the
>master should send unnecessary data to replica or replica fast
>path should get slower.
I am afraid you did not understand me. I never said that I am against any
optimization which could make replication faster. What I am completely
against is relying on that optimization logic for correctness. If a master
is able to skip unrequired rows, the replica still should not rely on the
correctness of that code. In other words, if some input stream could break
a replica, the replica should protect itself against such data; this is not
the responsibility of the replica's master.
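
To make this concrete, here is a minimal sketch of the kind of check a
replica could perform on every incoming row before applying it. It is plain
C with simplified stand-ins for Tarantool's vclock and row header;
applier_filter_row is a hypothetical name, not the actual applier API:

#include <stdint.h>

#define VCLOCK_MAX 32

/* Simplified stand-in for Tarantool's vclock: one LSN per replica id. */
struct vclock {
        int64_t lsn[VCLOCK_MAX];
};

/* Simplified stand-in for a replicated row header. */
struct xrow {
        uint32_t replica_id;    /* instance that originated the row */
        int64_t lsn;            /* its LSN in that vclock component */
};

enum row_verdict {
        ROW_APPLY,      /* new row: apply it and advance the vclock */
        ROW_SKIP,       /* already applied or originated locally: ignore */
        ROW_ERROR,      /* gap or garbage in the stream: stop replication */
};

/*
 * The replica does not trust the master to have filtered the stream;
 * it decides itself what to do with each row.
 */
static enum row_verdict
applier_filter_row(const struct vclock *local, uint32_t instance_id,
                   const struct xrow *row)
{
        if (row->replica_id >= VCLOCK_MAX)
                return ROW_ERROR;       /* malformed row */
        int64_t known = local->lsn[row->replica_id];
        if (row->replica_id == instance_id || row->lsn <= known)
                return ROW_SKIP;        /* our own row echoed back, or old data */
        if (row->lsn != known + 1)
                return ROW_ERROR;       /* the master skipped something */
        return ROW_APPLY;
}

Whether a row is applied, skipped, or treated as an error is decided on the
replica, whatever the master chose to send.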

On Mon, 24 Feb 2020 at 13:18, Konstantin Osipov <kostja.osipov at gmail.com> wrote:

> * Georgy Kirichenko <kirichenkoga at gmail.com> [20/02/23 12:21]:
>
> > Please do not think you are the only person who knows about Byzantine
> > faults. Also there is little relevance between Byzantine faults and my
> > suggestion to enforce replica-side checking.
>
> You've been suggesting that filtering on the master is safer. I
> pointed out it's not, there is no way to guarantee (even in theory)
> correctness/safety on the replica if the master is malfunctioning.
>
> I merely pointed out that your safety argument has no merit.
>
> There are no other practical advantages of filtering on the replica
> either; there is a disadvantage: more traffic and more filtering work
> to do inside the tx thread (as opposed to the relay/WAL thread if done
> on the master).
>
> It is also against the current responsibilities of IPROTO_SUBSCRIBE: the
> concept of a subscription is that the replica specifies what it is
> interested in. Specifically, it specifies the vclock components it is
> interested in. You suggest making the replica responsible for submitting
> its vclock, but the master decides what to do with it; this splits the
> decision-making logic between the two, making the whole thing harder to
> understand.
>
> IPROTO_SUBSCRIBE responsibility layout today is typical for a
> request-response protocol: the master, being the server, executes
> the command as specified by the client (the replica), and the
> replica runs the logic to decide what command to issue.
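
For reference, the layout described here is roughly: the replica puts its
vclock into the SUBSCRIBE request, and the master's relay sends only the
rows that vclock does not already cover. A minimal sketch of the
master-side part (simplified vclock/xrow stand-ins; relay_row_is_needed is
a hypothetical name, not the actual relay code):

#include <stdbool.h>
#include <stdint.h>

#define VCLOCK_MAX 32

struct vclock { int64_t lsn[VCLOCK_MAX]; };     /* LSN per replica id */
struct xrow { uint32_t replica_id; int64_t lsn; };

/*
 * Send a row only if the subscriber's vclock, received in the
 * SUBSCRIBE request, does not already cover it.
 */
static bool
relay_row_is_needed(const struct vclock *subscriber, const struct xrow *row)
{
        if (row->replica_id >= VCLOCK_MAX)
                return false;
        return row->lsn > subscriber->lsn[row->replica_id];
}

The dispute is not about whether the master does this, but about whether
the replica may assume it was done correctly.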
>
> You suggest changing it because of some theoretical concerns you
> have.
>
> > In any case, filtering on the master side is the worst thing we could
> > do. In this case the master has only one peer and has no chance to
> > make a proper decision if the replica is broken. And we have no chance
> > to know about it (except asserts, which are excluded from release
> > builds, or panic messages). For instance, if the master skipped some
> > rows, there would be no trace of the situation that we could detect.
>
> The situation is symmetrical. Both peers do not have the whole
> picture. You can make either of the peers responsible for the
> decision, then the other peer will need to supply the missing
> bits. There is no way you can make it safer by changing who makes
> the decision, but you can certainly make it more messed up by
> splitting this logic or going against an established layout.
>
> If you have a specific example of why things would improve if done
> otherwise, in the number of packets, or traffic, or some other
> measurable way, you should point it out.
>
> > In the opposite case a replica could connect to as many masters as it
> > needs to filter out all invalid data or hacked masters. At least we
> > could enforce replication stream metadata checking.
>
> I do not think the scope of this issue has ever been protecting
> against hacked masters. It has never been a goal of the protocol
> either.
>
> > Two major points I would like to mention are:
> > 1. The replica could consistently follow all vclock members and apply
> > all transactions without gaps (I already got rid of them, I hope you
> > remember).
> > 2. The replica could protect itself against concurrent local writes
> > (one was made locally, the second one is returned from the master).
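
For illustration, these two points map onto the replica-side sketch earlier
in this message (reusing its struct vclock, struct xrow and the hypothetical
applier_filter_row, with instance_id = 1 and local vclock components 1 and 2
equal to 5 and 3):

struct vclock local = { .lsn = { [1] = 5, [2] = 3 } };

struct xrow echoed = { .replica_id = 1, .lsn = 4 }; /* our own row, echoed back */
struct xrow gap    = { .replica_id = 2, .lsn = 7 }; /* master skipped 4..6 */
struct xrow next   = { .replica_id = 2, .lsn = 4 }; /* the next expected row */

applier_filter_row(&local, 1, &echoed); /* -> ROW_SKIP  (point 2) */
applier_filter_row(&local, 1, &gap);    /* -> ROW_ERROR (point 1) */
applier_filter_row(&local, 1, &next);   /* -> ROW_APPLY */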
>
> This was added for specific reasons. There is no known reason why the
> master should send unnecessary data to the replica, or why the replica
> fast path should get slower.
>
> --
> Konstantin Osipov, Moscow, Russia
> https://scylladb.com
>

