[Tarantool-patches] [PATCH 1/2] gc/xlog: delay xlog cleanup until relays are subscribed

Serge Petrenko sergepetrenko at tarantool.org
Fri Mar 19 16:40:05 MSK 2021



19.03.2021 02:04, Vladislav Shpilevoy wrote:
> Hi! Thanks for the patch!
>
> Generally looks fine except details.
>
>
> AFAIU, we go for the timeout option for the only reason that
> there might be non-fullmesh topologies, where a replica is in
> _cluster but does not have relays to other nodes.
>
> If we knew the topology, we could tell which nodes will relay
> to whom, and would be able to detect when a replica needs to
> keep the logs and when it doesn't. Not even the entire topology.
> Just the info from the adjacent nodes: from our own box.cfg.replication,
> and from the box.cfg.replication of those nodes.
>
> I still don't understand what was wrong with implementing some kind
> of topology discovery for this 2-level topology tree. For instance,
> when the applier on a replica connects, the remote node sends us a
> flag saying whether it is going to connect back. If the flag is false,
> we don't keep the logs for that instance.

IMO this topology discovery is not so easy to implement, and we've chosen
this particular fix approach over a 'persistent GC state' for its simplicity.
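
Just to illustrate what the chosen approach means for the user: it boils
down to a single box.cfg knob, roughly like this (the exact option name is
beside the point here):

    box.cfg{
        -- Hypothetical name for the option added by this series: keep old
        -- xlogs for up to this many seconds after startup, so that the
        -- replicas registered in _cluster have a chance to resubscribe
        -- before GC is allowed to remove the files they may still need.
        wal_cleanup_delay = 3600,
    }

No topology knowledge is required: if some registered replica never shows
up, the delay simply expires and cleanup proceeds.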

How do you know whether a node is going to connect back? It has a
corresponding entry in box.cfg.replication, sure, but how does it understand
that this entry (a URI) corresponds to the replica that has just connected?
IIRC we had similar problems with the inability to tell who's who judging
solely by the URI in some other fix. I don't remember which one exactly.
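
To illustrate with a made-up setup (the URIs below are purely hypothetical):

    -- Node A only knows its peers by the URIs in its own configuration:
    box.cfg{
        listen = 3301,
        replication = {'replica_b.example.com:3302'},
    }
    -- When some instance subscribes to A as a replica, A sees the peer's
    -- instance UUID and the ephemeral source address of the incoming
    -- connection. Neither maps back to 'replica_b.example.com:3302' in a
    -- reliable way, so A can't easily answer "is this the very instance I
    -- am (or will be) connecting to myself?"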

Moreover, you may have something unusual like a cascade topology.
Say, there are servers 1 <- 2 <- 3, with the arrows showing the
replica-to-master connections. When 2 comes up, how can it know that it
should wait for 3?
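
In config terms (again, hypothetical URIs) the cascade looks like this:

    -- node 1: top of the cascade, replicates from nobody
    box.cfg{listen = 3301}

    -- node 2: replicates from node 1 only
    box.cfg{listen = 3302, replication = {'node1.example.com:3301'}}

    -- node 3: replicates from node 2
    box.cfg{listen = 3303, replication = {'node2.example.com:3302'}}

Nothing in node 2's own configuration even mentions node 3, so after a
restart node 2 has no local information telling it to keep its xlogs until
3 resubscribes. That's exactly the case the timeout covers.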

-- 
Serge Petrenko


