[Tarantool-patches] [PATCH 04/13] wal: refactor wal_write_to_disk()

Cyrill Gorcunov gorcunov at gmail.com
Tue Jun 15 23:46:12 MSK 2021


On Fri, Jun 11, 2021 at 11:56:12PM +0200, Vladislav Shpilevoy wrote:
> It didn't have a single fail path. That led to some amount of code
> duplication, and it complicated future patches where the journal
> entries are going to get a proper error reason instead of default
> -1 without any details.
> 
> The patch is a preparation for #6027 where it is wanted to have
> more detailed errors on journal entry/transaction fail instead
> of ER_WAL_IO for everything. Sometimes it can override a real
> error like a cascade rollback, or a transaction conflict.
> 
> Part of #6027
> ---
> @@ -1038,7 +1036,10 @@ wal_write_to_disk(struct cmsg *msg)
>  {
>  	struct wal_writer *writer = &wal_writer_singleton;
>  	struct wal_msg *wal_msg = (struct wal_msg *) msg;
>  	struct error *error;
> +	assert(!stailq_empty(&wal_msg->commit));

Hi Vlad, I don't understand why we need this assert...

>  	/*
>  	 * Track all vclock changes made by this batch into
> @@ -1058,23 +1059,17 @@ wal_write_to_disk(struct cmsg *msg)
>  
>  	if (writer->is_in_rollback) {
>  		/* We're rolling back a failed write. */
> -		stailq_concat(&wal_msg->rollback, &wal_msg->commit);
> -		vclock_copy(&wal_msg->vclock, &writer->vclock);
> -		return;
> +		goto done;

Jumping to the "done" label changes the code logic. Before the patch, if we
reached the write and, say, wal_opt_rotate() failed, we set the is_in_rollback
flag and exited early. After the patch we start notifying watchers that a
"write" happened, which means the relay code will be woken up even though
there is no new data on disk at all, so the watchers get notified for no
reason, no? Or am I missing something obvious?

