[Tarantool-patches] [PATCH] recovery: make it yield when positioning in a WAL
Vladislav Shpilevoy
v.shpilevoy at tarantool.org
Tue Apr 27 00:20:30 MSK 2021
Hi! Thanks for the patch!
See 2 questions, 1 comment.
On 26.04.2021 18:59, Serge Petrenko wrote:
> We had various places in box.cc and relay.cc which counted processed
> rows and yielded every now and then. These yields didn't cover cases
> when recovery has to position inside a long WAL file:
>
> For example, when tarantool exits without leaving an empty WAL file
> which'll be used to recover instance vclock on restart. In this case
> the instance freezes while processing the last available WAL in order
> to recover the vclock.
>
> Another issue is with replication. If a replica connects and needs data
> from the end of a really long WAL, recovery will read up to the needed
> position without yields, making relay disconnect by timeout.
>
> In order to fix the issue, make recovery decide when a yield should
> happen. Introduce a new callback: schedule_yield, which is called by
> recovery once it processes (no matter how, either simply skips or calls
> xstream_write) enough rows (WAL_ROWS_PER_YIELD).
>
> schedule_yield either yields right away, in case of relay, or saves the
> yield for later, in case of local recovery, because it might be in the
> middle of a transaction.
1. Did you consider an option to yield explicitly in recovery code when
it skips rows? If they are being skipped, it does not matter what their
transaction borders are.
Then the whole patch would be to add the yield once per WAL_ROWS_PER_YIELD
to recovery_scan(), correct?
> The only place with explicit row counting and manual yielding is now in
> relay_initial_join, since its row sources are engines rather than recovery
> with its WAL files.
>
> Closes #5979
> ---
> https://github.com/tarantool/tarantool/tree/sp/gh-5979-recovery-yield
> https://github.com/tarantool/tarantool/issues/5979
>
> diff --git a/src/box/box.cc b/src/box/box.cc
> index 59925962d..69a8f87eb 100644
> --- a/src/box/box.cc
> +++ b/src/box/box.cc
> @@ -3101,6 +3087,19 @@ bootstrap(const struct tt_uuid *instance_uuid,
> }
> }
>
> +struct wal_stream wal_stream;
2. This must be static.
> +
> +/**
> + * Plan a yield in recovery stream. Wal stream will execute it as soon as it's
> + * ready.
> + */
> +static void
> +wal_stream_schedule_yield(void)
> +{
> + wal_stream.has_yield = true;
> + wal_stream_try_yield(&wal_stream);
> +}
> diff --git a/src/box/recovery.cc b/src/box/recovery.cc
> index cd33e7635..5351d8524 100644
> --- a/src/box/recovery.cc
> +++ b/src/box/recovery.cc
> @@ -241,10 +248,16 @@ static void
> recover_xlog(struct recovery *r, struct xstream *stream,
> const struct vclock *stop_vclock)
> {
> + /* Imitate old behaviour. Rows are counted separately for each xlog. */
> + r->row_count = 0;
3. But why do you need to imitate it? Does it mean that if each file is
too small to yield even once, but the total number of files is huge,
there won't be any yields?
Also, does it mean "1M rows processed" was never printed in that
case?
> struct xrow_header row;
> - uint64_t row_count = 0;
> while (xlog_cursor_next_xc(&r->cursor, &row,
> r->wal_dir.force_recovery) == 0) {
> + if (++r->row_count % WAL_ROWS_PER_YIELD == 0) {
> + r->schedule_yield();
> + }
> + if (r->row_count % 100000 == 0)
> + say_info("%.1fM rows processed", r->row_count / 1000000.);
> /*
> * Read the next row from xlog file.
> *
> @@ -273,12 +286,7 @@ recover_xlog(struct recovery *r, struct xstream *stream,
> * failed row anyway.
> */
> vclock_follow_xrow(&r->vclock, &row);
> - if (xstream_write(stream, &row) == 0) {
> - ++row_count;
> - if (row_count % 100000 == 0)
> - say_info("%.1fM rows processed",
> - row_count / 1000000.);
> - } else {
> + if (xstream_write(stream, &row) != 0) {
> if (!r->wal_dir.force_recovery)
> diag_raise();
>