Date: Tue, 29 Jan 2019 15:00:43 +0300
From: Vladimir Davydov
Subject: Re: [tarantool-patches] Re: [PATCH v2 3/5] Enforce applier out of order protection
Message-ID: <20190129120042.vwboca44w4ffxryj@esperanza>
References: <4c39bbbfcd12c47b9b14fc1a0a0484331939ed63.1548152776.git.georgy@tarantool.org>
 <20190128120901.spkitg7kyrfjp6xz@esperanza>
 <774394922.0SCKpy4CPs@home.lan>
In-Reply-To: <774394922.0SCKpy4CPs@home.lan>
To: Георгий Кириченко
Cc: tarantool-patches@freelists.org

On Tue, Jan 29, 2019 at 01:30:40PM +0300, Георгий Кириченко wrote:
> On Monday, January 28, 2019 3:09:01 PM MSK Vladimir Davydov wrote:
> > On Tue, Jan 22, 2019 at 01:31:11PM +0300, Georgy Kirichenko wrote:
> > > Do not skip a row until it has been processed by the other appliers.
> >
> > Looks like a fix for
> >
> > https://github.com/tarantool/tarantool/issues/3568
> >
> > Worth adding a test?
> >
> > > Prerequisite #980
> > > ---
> > >  src/box/applier.cc | 35 ++++++++++++++++++-----------------
> > >  1 file changed, 18 insertions(+), 17 deletions(-)
> > >
> > > diff --git a/src/box/applier.cc b/src/box/applier.cc
> > > index 87873e970..148c8ce5a 100644
> > > --- a/src/box/applier.cc
> > > +++ b/src/box/applier.cc
> > > @@ -504,6 +504,22 @@ applier_subscribe(struct applier *applier)
> > >  		applier->lag = ev_now(loop()) - row.tm;
> > >  		applier->last_row_time = ev_monotonic_now(loop());
> > >
> > > +		struct replica *replica = replica_by_id(row.replica_id);
> > > +		struct latch *latch = (replica ? &replica->order_latch :
> > > +				       &replicaset.applier.order_latch);
> > > +		/*
> > > +		 * In a full mesh topology, the same set
> > > +		 * of changes may arrive via two
> > > +		 * concurrently running appliers. Thanks
> > > +		 * to vclock_follow() above, the first row
> >
> > I don't see any vclock_follow() above. Please fix the comment.
> >
> > > +		 * in the set will be skipped - but the
> > > +		 * remaining may execute out of order,
> > > +		 * when the following xstream_write()
> > > +		 * yields on WAL. Hence we need a latch to
> > > +		 * strictly order all changes which belong
> > > +		 * to the same server id.
> > > +		 */
> > > +		latch_lock(latch);
> > >  		if (vclock_get(&replicaset.applier.vclock,
> > >  			       row.replica_id) < row.lsn) {
> > >  			if (row.replica_id == instance_id &&
> >
> > AFAIU this patch makes replicaset.applier.vclock, introduced by the
> > previous patch, useless.
> You are right for now, but I plan to release this latch just before commit
> in the case of the parallel applier.

Then let's please introduce applier.vclock when we get to implementing the
parallel applier, because right now I can't say for sure whether we really
need it or not.
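
For readers following along outside the patch context, below is a minimal
standalone sketch of the ordering hazard the latch closes. It deliberately
does not use the Tarantool internals quoted above (struct latch,
replica_by_id(), the applier vclock); std::mutex stands in for the
per-replica order_latch, a plain counter stands in for the vclock component,
and a sleep simulates the fiber yield on the WAL write. All names here are
illustrative, not real applier APIs.

/*
 * Standalone sketch (not Tarantool code): two "appliers" receive the
 * same rows for one replica id, as happens in a full mesh topology.
 * A per-replica mutex plays the role of replica->order_latch and makes
 * the check-then-apply step atomic, so each row is either applied
 * exactly once, in LSN order, or skipped.
 */
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

struct Replica {
	std::mutex order_latch;   /* stands in for struct latch */
	int64_t applied_lsn = 0;  /* stands in for vclock[replica_id] */
};

static void
applier(Replica &r, const std::vector<int64_t> &rows)
{
	for (int64_t lsn : rows) {
		/*
		 * Without this lock both appliers could pass the LSN
		 * check, then yield on the (simulated) WAL write and
		 * apply the same row twice or out of order.
		 */
		std::lock_guard<std::mutex> guard(r.order_latch);
		if (r.applied_lsn < lsn) {
			/* Simulate the yield inside xstream_write()/WAL. */
			std::this_thread::sleep_for(std::chrono::milliseconds(1));
			r.applied_lsn = lsn;
			std::printf("applied lsn %lld\n", (long long)lsn);
		}
		/* Rows already applied by the other applier are skipped. */
	}
}

int
main()
{
	Replica r;
	std::vector<int64_t> rows = {1, 2, 3, 4, 5};
	std::thread a(applier, std::ref(r), std::cref(rows));
	std::thread b(applier, std::ref(r), std::cref(rows));
	a.join();
	b.join();
	return 0;
}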