[tarantool-patches] Re: [tarantool-patches] Re: [PATCH v5 2/2] replication: force gc to clean xdir on ENOSPC err

Konstantin Belyavskiy k.belyavskiy at tarantool.org
Tue Jul 10 18:10:32 MSK 2018




>Friday, July 6, 2018, 20:00 +03:00 from Konstantin Osipov <kostja at tarantool.org>:
>
>* Konstantin Belyavskiy < k.belyavskiy at tarantool.org > [18/07/05 22:56]:
>> The garbage collector does not delete xlogs until the replica
>> notifies the master with a newer vclock. This can lead to an
>> out-of-disk-space error, which is the wrong behaviour, since it
>> will stop the master.
>> Fix it by forcing gc to clean xlogs for the replica with the highest lag.
>> Add an error injection and a test.
>
>Please rebase this patch to the latest 1.10
>
>Please use relay_stop() as a callback to unregister the consumer. 
Rebase to 1.10 - ok.

Using relay_stop() makes sense only together with replica_on_relay_stop(),
since relay_stop() itself does nothing with consumers.
As for replica_on_relay_stop(), the replica should be in "orphan" mode
to avoid an assertion in replica_delete(). There is also a problem with
monitoring: since the replica will leave the replication cluster, the error
will be silenced.

On the other hand, with an implementation based on removing the consumer,
the replica, if it becomes active again, will hit an LSN gap and we will see
an error.

1. Please give feedback on this section.
2. If we do not use relay_stop(), which branch should I use as a base: 1.9 or 1.10?

>
>> +void
>> +gc_xdir_clean_notify()
>> +{
>> +	/*
>> +	 * Compare the current time with the time of the last run.
>> +	 * This is needed in case of multiple failures, to avoid
>> +	 * deleting all replicas.
>
>
>> +	 */
>> +	static double prev_time = 0.;
>> +	double cur_time = ev_monotonic_time();
>> +	if (cur_time - prev_time < 1.)
>> +		return;
>
>This throttles gc, which is good. But we would still get a lot of messages
>from WAL thread. Maybe we should move the throttling to the WAL
>side? This would spare us from creating the message as well.
>Ideally we should use a single statically allocated message from
>the WAL for this purpose (but still throttle it as well).
>
>Plus, eventually you're going to reach a state when kicking off
>replicas doesn't help with space. In this case you're going to
>have a lot of messages, and they are going to be all useless.
>This also suggests that throttling should be done on the WAL side.
Done.
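For reference, the throttling pattern being moved to the WAL side can be
sketched roughly like this (a minimal standalone sketch, not the actual
Tarantool WAL code; now_seconds() is a hypothetical stand-in for
ev_monotonic_time(), and the caller/message names are assumptions):

```c
#define _POSIX_C_SOURCE 199309L
#include <stdbool.h>
#include <time.h>

/* Stand-in for ev_monotonic_time(); returns seconds. */
static double
now_seconds(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/*
 * Return true at most once per second of the passed-in clock.
 * Because prev_time is static, the caller (here, the WAL thread
 * hitting ENOSPC) can check it before constructing a notification
 * message at all, so repeated failures cost almost nothing.
 */
static bool
throttle_once_per_second(double now)
{
	static double prev_time = 0.;
	if (now - prev_time < 1.)
		return false;
	prev_time = now;
	return true;
}
```

The WAL thread would then call throttle_once_per_second(now_seconds()) on
each ENOSPC and send the single statically allocated message only when it
returns true.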
>
>> +	prev_time = cur_time;
>> +	struct gc_consumer *leftmost =
>> +	    gc_tree_first(&gc.consumers);
>> +	/*
>> +	 * Exit if no consumers are left or if this consumer is
>> +	 * not associated with a replica (a backup, for example).
>> +	 */
>
>> +	if (leftmost == NULL || leftmost->replica == NULL)
>> +		return;
>
>
>> +	/*
>> +	 * We have to maintain @checkpoint_count oldest snapshots,
>> +	 * plus we can't remove snapshots that are still in use.
>> +	 * So if the leftmost replica has a signature greater than
>> +	 * or equal to the oldest checkpoint that must be preserved,
>> +	 * there is nothing to do.
>> +	 */
>
>This comment is useful, but the search in checkpoint array is not.
>What about possible other types of consumers which are not
>dispensable with anyway, e.g. backups? What if they are holding a
>reference as well? 
>
>Apparently this check is taking care of the problem:
You already mentioned this problem in the previous review: searching the
checkpoint array helps in the case when the last stored checkpoint has
exactly the same signature.
But in general, the check below already prevents replicas from being deleted.
Should I keep the search in the checkpoint array?
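To make the check under discussion concrete, here is a simplified sketch
(made-up struct and function names, not the real gc_consumer/gc_tree API):
kicking off the leftmost, most lagging consumer only helps if it still pins
xlogs older than the oldest checkpoint that must be preserved anyway.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for the real gc consumer state. */
struct consumer {
	int64_t signature; /* vclock signature the consumer has reached */
	bool is_replica;   /* false for, e.g., a backup consumer */
};

/*
 * Decide whether deleting the leftmost (most lagging) consumer can
 * actually free any xlogs: if it has already reached the oldest
 * checkpoint that must be kept, removing it gains nothing, and
 * non-replica consumers (backups) must never be kicked off.
 */
static bool
should_kick_leftmost(const struct consumer *leftmost,
		     int64_t oldest_preserved_signature)
{
	if (leftmost == NULL || !leftmost->is_replica)
		return false;
	return leftmost->signature < oldest_preserved_signature;
}
```

This is only the "check below" from the discussion; the extra search in the
checkpoint array would additionally handle the boundary case where the last
stored checkpoint has exactly the leftmost consumer's signature.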
>
>
>> +	if (leftmost == NULL || leftmost->replica == NULL)
>> +		return;
>
>Could you write a test with two 
>"abandoned" replicas, each holding an xlog file? 
Which xlog: the same one, or a different one for each replica?
>
>
>-- 
>Konstantin Osipov, Moscow, Russia,  +7 903 626 22 32
>http://tarantool.io -  www.twitter.com/kostja_osipov
>


Best regards,
Konstantin Belyavskiy
k.belyavskiy at tarantool.org
