[Tarantool-patches] [PATCH] recovery: make it yield when positioning in a WAL
Serge Petrenko
sergepetrenko at tarantool.org
Thu Apr 29 11:55:28 MSK 2021
28.04.2021 23:50, Vladislav Shpilevoy wrote:
>>>> We had various places in box.cc and relay.cc which counted processed
>>>> rows and yielded every now and then. These yields didn't cover cases
>>>> when recovery has to position inside a long WAL file:
>>>>
>>>> For example, when tarantool exits without leaving an empty WAL file,
>>>> which would be used to recover the instance vclock on restart. In this
>>>> case the instance freezes while processing the last available WAL in
>>>> order to recover the vclock.
>>>>
>>>> Another issue is with replication. If a replica connects and needs data
>>>> from the end of a really long WAL, recovery will read up to the needed
>>>> position without yields, making the relay disconnect on timeout.
>>>>
>>>> In order to fix the issue, make recovery decide when a yield should
>>>> happen. Introduce a new callback: schedule_yield, which is called by
>>>> recovery once it processes enough rows (WAL_ROWS_PER_YIELD), no matter
>>>> how: either by simply skipping them or by calling xstream_write.
>>>>
>>>> schedule_yield either yields right away, in case of relay, or saves the
>>>> yield for later, in case of local recovery, because it might be in the
>>>> middle of a transaction.
>>> 1. Did you consider an option to yield explicitly in recovery code when
>>> it skips rows? If they are being skipped, it does not matter what
>>> their transaction borders are.
>> I did consider that. It is possible to do so, but then we'll have yet another
>> place (in addition to relay and wal_stream) which counts rows and yields
>> every now and then.
>>
>> I thought it would be better to unify all these places. Actually, it
>> could have been done this way from the very beginning.
>> I think it's not recovery's business whether to yield or not once
>> some rows are processed.
>>
>> Anyway, I can make it this way, if you insist.
> The current solution is also fine.
>
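To illustrate the agreed design for readers of the thread: a minimal
sketch of the wiring described above. Only schedule_yield and
WAL_ROWS_PER_YIELD come from the patch; the helper name, the struct
layout and the period value here are made up for illustration.

    #include <stddef.h>

    /* Assumed value; the patch defines its own constant. */
    enum { WAL_ROWS_PER_YIELD = 1 << 15 };

    struct recovery {
            /* ... */
            /* Rows processed so far, whether skipped or written. */
            size_t row_count;
            /* Set by the owner: yield now (relay) or defer (local recovery). */
            void (*schedule_yield)(void);
    };

    /* Hypothetical helper invoked for every row recovery touches. */
    static inline void
    recovery_account_row(struct recovery *r)
    {
            if (++r->row_count % WAL_ROWS_PER_YIELD == 0)
                    r->schedule_yield();
    }
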
>>>> +
>>>> +/**
>>>> + * Plan a yield in recovery stream. Wal stream will execute it as soon as it's
>>>> + * ready.
>>>> + */
>>>> +static void
>>>> +wal_stream_schedule_yield(void)
>>>> +{
>>>> +	wal_stream.has_yield = true;
>>>> +	wal_stream_try_yield(&wal_stream);
>>>> +}
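For context, the deferred half could look roughly as follows.
wal_stream_try_yield() is not shown in the hunk above, so this is only
a sketch: the transaction check (wal_stream_has_tx) is an assumption
based on the commit message, and fiber_sleep(0) is the usual way to
yield a fiber.

    /*
     * Sketch under assumptions: only has_yield appears in the hunk
     * above; wal_stream_has_tx() is hypothetical.
     */
    static void
    wal_stream_try_yield(struct wal_stream *stream)
    {
            /* Never yield in the middle of a multi-statement transaction. */
            if (wal_stream_has_tx(stream) || !stream->has_yield)
                    return;
            stream->has_yield = false;
            fiber_sleep(0);
    }
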
>>>> diff --git a/src/box/recovery.cc b/src/box/recovery.cc
>>>> index cd33e7635..5351d8524 100644
>>>> --- a/src/box/recovery.cc
>>>> +++ b/src/box/recovery.cc
>>>> @@ -241,10 +248,16 @@ static void
>>>>  recover_xlog(struct recovery *r, struct xstream *stream,
>>>>  	     const struct vclock *stop_vclock)
>>>>  {
>>>> +	/* Imitate old behaviour. Rows are counted separately for each xlog. */
>>>> +	r->row_count = 0;
>>> 3. But why do you need to imitate it? Does it mean that if each file
>>> is too small to yield even once, but their total number is huge,
>>> there won't be any yields?
>> Yes, that's true.
> Doesn't this look wrong to you? The xlog files might not contain enough
> rows if wal_max_size is small enough, and then the same issue still
> exists: no yields.
>
>>> Also, does it mean "1M rows processed" was never printed in that
>>> case?
>> Yes, when WALs are not big enough.
>> Recovery starts over with '0.1M rows processed' on every new WAL file.
> Doesn't this look wrong to you too? At the very least, the number of
> rows should not drop to 0 on each next xlog file.
Yep, let's change it then. I thought we had to preserve log output.
Fixed and added a changelog entry.
=================================
diff --git a/changelogs/unreleased/gh-5979-recovery-ligs.md b/changelogs/unreleased/gh-5979-recovery-ligs.md
new file mode 100644
index 000000000..86abfd66a
--- /dev/null
+++ b/changelogs/unreleased/gh-5979-recovery-ligs.md
@@ -0,0 +1,11 @@
+# bugfix/core
+
+* Now tarantool yields when scanning `.xlog` files for the latest applied vclock
+  and when finding the right place in `.xlog`s to start recovering. This means
+  that the instance is responsive right after `box.cfg` call even when an empty
+  `.xlog` was not created on previous exit.
+  Also this prevents relay from timing out when a freshly subscribed replica
+  needs rows from the end of a relatively long (hundreds of MBs) `.xlog`
+  (gh-5979).
+* The counter in `x.yM rows processed` log messages does not reset on each new
+  recovered `xlog` anymore.
diff --git a/src/box/recovery.cc b/src/box/recovery.cc
index 5351d8524..8359f216d 100644
--- a/src/box/recovery.cc
+++ b/src/box/recovery.cc
@@ -149,6 +149,13 @@ recovery_scan(struct recovery *r, struct vclock *end_vclock,
 		}
 	}
 	xlog_cursor_close(&cursor, false);
+
+	/*
+	 * Do not show scanned rows in log output and yield just in case
+	 * row_count was less than WAL_ROWS_PER_YIELD when we reset it.
+	 */
+	r->row_count = 0;
+	r->schedule_yield();
 }
 
 static inline void
@@ -248,8 +255,6 @@ static void
 recover_xlog(struct recovery *r, struct xstream *stream,
 	     const struct vclock *stop_vclock)
 {
-	/* Imitate old behaviour. Rows are counted separately for each xlog. */
-	r->row_count = 0;
 	struct xrow_header row;
 	while (xlog_cursor_next_xc(&r->cursor, &row,
 				   r->wal_dir.force_recovery) == 0) {
=================================
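To spell out the resulting behaviour: row_count now lives for the whole
recovery and is only reset at the end of recovery_scan(), so yields
happen every WAL_ROWS_PER_YIELD rows no matter how small wal_max_size
is. Roughly, the accounting in the recovery loop now behaves like the
sketch below; the logging period and the say_info() call are
illustrative, not the exact code.

    /* Sketch of per-recovery accounting; periods are illustrative. */
    while (xlog_cursor_next_xc(&r->cursor, &row,
                               r->wal_dir.force_recovery) == 0) {
            if (++r->row_count % WAL_ROWS_PER_YIELD == 0)
                    r->schedule_yield();
            if (r->row_count % 100000 == 0)
                    say_info("%.1fM rows processed", r->row_count / 1000000.);
            /* ... filter by stop_vclock and pass the row to the stream ... */
    }
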
--
Serge Petrenko