From: Georgy Kirichenko <georgy@tarantool.org>
To: tarantool-patches@freelists.org
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Subject: Re: [tarantool-patches] Re: [PATCH v2 3/5] Enforce applier out of order protection
Date: Tue, 29 Jan 2019 13:30:40 +0300	[thread overview]
Message-ID: <774394922.0SCKpy4CPs@home.lan> (raw)
In-Reply-To: <20190128120901.spkitg7kyrfjp6xz@esperanza>

On Monday, January 28, 2019 3:09:01 PM MSK Vladimir Davydov wrote:
> On Tue, Jan 22, 2019 at 01:31:11PM +0300, Georgy Kirichenko wrote:
> > Do not skip a row until it has been processed by the other appliers.
> 
> Looks like a fix for
> 
>   https://github.com/tarantool/tarantool/issues/3568
> 
> Worth adding a test?
> 
> > Prerequisite #980
> > ---
> >  src/box/applier.cc | 35 ++++++++++++++++++-----------------
> >  1 file changed, 18 insertions(+), 17 deletions(-)
> > 
> > diff --git a/src/box/applier.cc b/src/box/applier.cc
> > index 87873e970..148c8ce5a 100644
> > --- a/src/box/applier.cc
> > +++ b/src/box/applier.cc
> > @@ -504,6 +504,22 @@ applier_subscribe(struct applier *applier)
> >  		applier->lag = ev_now(loop()) - row.tm;
> >  		applier->last_row_time = ev_monotonic_now(loop());
> > +		struct replica *replica = replica_by_id(row.replica_id);
> > +		struct latch *latch = (replica ? &replica->order_latch :
> > +				       &replicaset.applier.order_latch);
> > +		/*
> > +		 * In a full mesh topology, the same set
> > +		 * of changes may arrive via two
> > +		 * concurrently running appliers. Thanks
> > +		 * to vclock_follow() above, the first row
> 
> I don't see any vclock_follow() above. Please fix the comment.
> 
> > +		 * in the set will be skipped - but the
> > +		 * remaining may execute out of order,
> > +		 * when the following xstream_write()
> > +		 * yields on WAL. Hence we need a latch to
> > +		 * strictly order all changes which belong
> > +		 * to the same server id.
> > +		 */
> > +		latch_lock(latch);
> >  		if (vclock_get(&replicaset.applier.vclock,
> >  			       row.replica_id) < row.lsn) {
> >  			if (row.replica_id == instance_id &&
> 
> AFAIU this patch makes replicaset.applier.vclock, introduced by the
> previous patch, useless.

You are right for now, but I plan to release this latch just before
commit in the parallel applier case.

> > @@ -516,24 +532,7 @@ applier_subscribe(struct applier *applier)
> >  			int64_t old_lsn = vclock_get(&replicaset.applier.vclock,
> >  						     row.replica_id);
> >  			vclock_follow_xrow(&replicaset.applier.vclock, &row);
> > -			struct replica *replica = replica_by_id(row.replica_id);
> > -			struct latch *latch = (replica ? &replica->order_latch :
> > -					       &replicaset.applier.order_latch);
> > -			/*
> > -			 * In a full mesh topology, the same set
> > -			 * of changes may arrive via two
> > -			 * concurrently running appliers. Thanks
> > -			 * to vclock_follow() above, the first row
> > -			 * in the set will be skipped - but the
> > -			 * remaining may execute out of order,
> > -			 * when the following xstream_write()
> > -			 * yields on WAL. Hence we need a latch to
> > -			 * strictly order all changes which belong
> > -			 * to the same server id.
> > -			 */
> > -			latch_lock(latch);
> >  			int res = xstream_write(applier->subscribe_stream, &row);
> > -			latch_unlock(latch);
> >  			if (res != 0) {
> >  				struct error *e = diag_last_error(diag_get());
> >  				/**
> > @@ -548,11 +547,13 @@ applier_subscribe(struct applier *applier)
> >  				/* Rollback lsn to have a chance for a retry. */
> >  				vclock_set(&replicaset.applier.vclock,
> >  					   row.replica_id, old_lsn);
> > +				latch_unlock(latch);
> >  				diag_raise();
> >  			}
> >  		}
> >  	}
> > done:
> > +	latch_unlock(latch);
> >  	/*
> >  	 * Stay 'orphan' until appliers catch up with
> >  	 * the remote vclock at the time of SUBSCRIBE
Thread overview (18+ messages):

2019-01-22 10:31 [tarantool-patches] [PATCH v2 0/5] Strong sequentially LSN in journal — Georgy Kirichenko
  2019-01-22 10:31 ` [tarantool-patches] [PATCH v2 1/5] Do not promote wal vclock for failed writes — Georgy Kirichenko
    2019-01-28 11:20   ` Vladimir Davydov
      2019-01-29 10:22     ` Georgy Kirichenko
        2019-01-29 11:58       ` Vladimir Davydov
  2019-01-22 10:31 ` [tarantool-patches] [PATCH v2 2/5] Update replicaset vclock from wal — Georgy Kirichenko
    2019-01-28 11:59   ` Vladimir Davydov
      2019-01-29 10:33     ` [tarantool-patches] " Georgy Kirichenko
  2019-01-22 10:31 ` [tarantool-patches] [PATCH v2 3/5] Enforce applier out of order protection — Georgy Kirichenko
    2019-01-28 12:09   ` Vladimir Davydov
      2019-01-29 10:30     ` Georgy Kirichenko [this message]
        2019-01-29 12:00       ` [tarantool-patches] " Vladimir Davydov
  2019-01-22 10:31 ` [tarantool-patches] [PATCH v2 4/5] Emit NOP if an applier skips row — Georgy Kirichenko
    2019-01-28 12:15   ` Vladimir Davydov
      2019-02-08 16:50     ` [tarantool-patches] " Konstantin Osipov
  2019-01-22 10:31 ` [tarantool-patches] [PATCH v2 5/5] Disallow lsn gaps while vclock following — Georgy Kirichenko
    2019-01-28 12:18   ` Vladimir Davydov
  2019-01-28 11:15 ` [tarantool-patches] [PATCH v2 0/5] Strong sequentially LSN in journal — Vladimir Davydov