[tarantool-patches] [PATCH 2/3] Enforce applier out of order protection
Vladimir Davydov
vdavydov.dev at gmail.com
Wed Feb 6 17:13:33 MSK 2019
On Wed, Feb 06, 2019 at 11:29:58AM +0300, Georgy Kirichenko wrote:
> Do not skip row until the row is not processed.
^^^
Redundant 'not'.
I think this patch should be squashed with patch 3, because on its own
it doesn't seem to accomplish much.
>
> Prerequisite #2283
> ---
> src/box/applier.cc | 48 ++++++++++++++++++++++------------------------
> 1 file changed, 23 insertions(+), 25 deletions(-)
>
> diff --git a/src/box/applier.cc b/src/box/applier.cc
> index 21d2e6bcb..d87b247e2 100644
> --- a/src/box/applier.cc
> +++ b/src/box/applier.cc
> @@ -512,31 +512,25 @@ applier_subscribe(struct applier *applier)
>
> applier->lag = ev_now(loop()) - row.tm;
> applier->last_row_time = ev_monotonic_now(loop());
> -
> - if (vclock_get(&replicaset.vclock, row.replica_id) < row.lsn) {
> - /**
> - * Promote the replica set vclock before
> - * applying the row. If there is an
> - * exception (conflict) applying the row,
> - * the row is skipped when the replication
> - * is resumed.
> - */
> + struct replica *replica = replica_by_id(row.replica_id);
> + struct latch *latch = (replica ? &replica->order_latch :
> + &replicaset.applier.order_latch);
> + /*
> + * In a full mesh topology, the same set
> + * of changes may arrive via two
> + * concurrently running appliers. Thanks
> + * to vclock_follow() above, the first row
^^^^^
Above? It's below now.
> + * in the set will be skipped - but the
> + * remaining may execute out of order,
> + * when the following xstream_write()
> + * yields on WAL. Hence we need a latch to
> + * strictly order all changes which belong
> + * to the same server id.
> + */
> + latch_lock(latch);
> + if (vclock_get(&replicaset.vclock,
> + row.replica_id) < row.lsn) {
> vclock_follow_xrow(&replicaset.vclock, &row);
> - struct replica *replica = replica_by_id(row.replica_id);
> - struct latch *latch = (replica ? &replica->order_latch :
> - &replicaset.applier.order_latch);
> - /*
> - * In a full mesh topology, the same set
> - * of changes may arrive via two
> - * concurrently running appliers. Thanks
> - * to vclock_follow() above, the first row
> - * in the set will be skipped - but the
> - * remaining may execute out of order,
> - * when the following xstream_write()
> - * yields on WAL. Hence we need a latch to
> - * strictly order all changes which belong
> - * to the same server id.
> - */
> latch_lock(latch);
Double lock...
> int res = xstream_write(applier->subscribe_stream, &row);
> latch_unlock(latch);
> @@ -550,10 +544,14 @@ applier_subscribe(struct applier *applier)
> box_error_code(e) == ER_TUPLE_FOUND &&
> replication_skip_conflict)
> diag_clear(diag_get());
> - else
> + else {
> + latch_unlock(latch);
> diag_raise();
> + }
> }
> }
> + latch_unlock(latch);
> +
> if (applier->state == APPLIER_SYNC ||
> applier->state == APPLIER_FOLLOW)
> fiber_cond_signal(&applier->writer_cond);