[Tarantool-patches] [PATCH] recovery: make it yield when positioning in a WAL

Vladislav Shpilevoy v.shpilevoy at tarantool.org
Wed Apr 28 23:50:43 MSK 2021


>>> We had various places in box.cc and relay.cc which counted processed
>>> rows and yielded every now and then. These yields didn't cover cases,
>>> when recovery has to position inside a long WAL file:
>>>
>>> For example, when tarantool exits without leaving an empty WAL file
>>> which'll be used to recover instance vclock on restart. In this case
>>> the instance freezes while processing the last available WAL in order
>>> to recover the vclock.
>>>
>>> Another issue is with replication. If a replica connects and needs data
>>> from the end of a really long WAL, recovery will read up to the needed
>>> position without yields, making relay disconnect by timeout.
>>>
>>> In order to fix the issue, make recovery decide when a yield should
>>> happen. Introduce a new callback: schedule_yield, which is called by
>>> recovery once it processes (no matter how, either simply skips or calls
>>> xstream_write) enough rows (WAL_ROWS_PER_YIELD).
>>>
>>> schedule_yield either yields right away, in case of relay, or saves the
>>> yield for later, in case of local recovery, because it might be in the
>>> middle of a transaction.
>> 1. Did you consider an option to yield explicitly in the recovery code
>> when it skips rows? If the rows are being skipped, it does not matter
>> what their transaction borders are.
> 
> I did consider that. It is possible to do so, but then we'll have yet another
> place (in addition to relay and wal_stream) which counts rows and yields
> every now and then.
> 
> I thought it would be better to unify all these places. Actually, it
> could have been done this way from the very beginning.
> I think it's not recovery's business whether to yield or not once
> some rows are processed.
> 
> Anyway, I can do it this way if you insist.

The current solution is also fine.

>>> +
>>> +/**
>>> + * Plan a yield in recovery stream. Wal stream will execute it as soon as it's
>>> + * ready.
>>> + */
>>> +static void
>>> +wal_stream_schedule_yield(void)
>>> +{
>>> +    wal_stream.has_yield = true;
>>> +    wal_stream_try_yield(&wal_stream);
>>> +}
>>> diff --git a/src/box/recovery.cc b/src/box/recovery.cc
>>> index cd33e7635..5351d8524 100644
>>> --- a/src/box/recovery.cc
>>> +++ b/src/box/recovery.cc
>>> @@ -241,10 +248,16 @@ static void
>>>   recover_xlog(struct recovery *r, struct xstream *stream,
>>>            const struct vclock *stop_vclock)
>>>   {
>>> +    /* Imitate old behaviour. Rows are counted separately for each xlog. */
>>> +    r->row_count = 0;
>> 3. But why do you need to imitate it? Does it mean if the files are
>> too small to yield even once in each, but in total their number is
>> huge, there won't be yields?
> 
> Yes, that's true.

Doesn't this look wrong to you? If wal_max_size is small enough, each
xlog file might not contain enough rows to trigger even a single yield,
and then the same issue remains: no yields at all.

>> Also, does it mean "1M rows processed" was never printed in that
>> case?
> 
> Yes, when WALs are not big enough.
> Recovery starts over with '0.1M rows processed' on every new WAL file.

Doesn't this look wrong to you too? At the very least, the row count
should not drop to 0 on each new xlog file.
