Date: Tue, 29 Jan 2019 14:58:43 +0300
From: Vladimir Davydov
Subject: Re: [tarantool-patches] [PATCH v2 1/5] Do not promote wal vclock for failed writes
Message-ID: <20190129115843.763xi7h3eou7ekbl@esperanza>
In-Reply-To: <2238035.WnxgNYYFuH@home.lan>
To: Georgy Kirichenko
Cc: tarantool-patches@freelists.org
List-ID: tarantool-patches@freelists.org

On Tue, Jan 29, 2019 at 01:22:21PM +0300, Georgy Kirichenko wrote:
> On Monday, January 28, 2019 2:20:18 PM MSK Vladimir Davydov wrote:
> > On Tue, Jan 22, 2019 at 01:31:09PM +0300, Georgy Kirichenko wrote:
> > > Increase the replica lsn only if the row was successfully written to
> > > disk. This prevents the wal from having lsn gaps in case of IO errors
> > > and enforces wal consistency.
> > >
> > > Needed for #980
> > > ---
> > >  src/box/wal.c                     | 19 ++++++---
> > >  test/xlog/errinj.result           |  1 -
> > >  test/xlog/panic_on_lsn_gap.result | 65 +++++++++++++++----------------
> > >  3 files changed, 45 insertions(+), 40 deletions(-)
> > >
> > > diff --git a/src/box/wal.c b/src/box/wal.c
> > > index 3b50d3629..a55b544aa 100644
> > > --- a/src/box/wal.c
> > > +++ b/src/box/wal.c
> > > @@ -901,16 +901,16 @@ wal_writer_begin_rollback(struct wal_writer *writer)
> > >  }
> > >
> > >  static void
> > > -wal_assign_lsn(struct wal_writer *writer, struct xrow_header **row,
> > > +wal_assign_lsn(struct vclock *vclock, struct xrow_header **row,
> > >  	       struct xrow_header **end)
> > >  {
> > >  	/** Assign LSN to all local rows. */
> > >  	for ( ; row < end; row++) {
> > >  		if ((*row)->replica_id == 0) {
> > > -			(*row)->lsn = vclock_inc(&writer->vclock, instance_id);
> > > +			(*row)->lsn = vclock_inc(vclock, instance_id);
> > >  			(*row)->replica_id = instance_id;
> > >  		} else {
> > > -			vclock_follow_xrow(&writer->vclock, *row);
> > > +			vclock_follow_xrow(vclock, *row);
> > >  		}
> > >  	}
> > >  }
> > >
> > > @@ -922,6 +922,11 @@ wal_write_to_disk(struct cmsg *msg)
> > >  	struct wal_msg *wal_msg = (struct wal_msg *) msg;
> > >  	struct error *error;
> > >
> > > +	/* Local vclock copy. */
> > > +	struct vclock vclock;
> > > +	vclock_create(&vclock);
> > > +	vclock_copy(&vclock, &writer->vclock);
> > > +
> > >  	struct errinj *inj = errinj(ERRINJ_WAL_DELAY, ERRINJ_BOOL);
> > >  	while (inj != NULL && inj->bparam)
> > >  		usleep(10);
> > >
> > > @@ -974,14 +979,15 @@ wal_write_to_disk(struct cmsg *msg)
> > >  	struct journal_entry *entry;
> > >  	struct stailq_entry *last_committed = NULL;
> > >  	stailq_foreach_entry(entry, &wal_msg->commit, fifo) {
> > > -		wal_assign_lsn(writer, entry->rows, entry->rows + entry->n_rows);
> > > -		entry->res = vclock_sum(&writer->vclock);
> > > +		wal_assign_lsn(&vclock, entry->rows, entry->rows + entry->n_rows);
> > > +		entry->res = vclock_sum(&vclock);
> > >  		rc = xlog_write_entry(l, entry);
> > >  		if (rc < 0)
> > >  			goto done;
> > >  		if (rc > 0) {
> > >  			writer->checkpoint_wal_size += rc;
> > >  			last_committed = &entry->fifo;
> > > +			vclock_copy(&writer->vclock, &vclock);
> >
> > I don't like that you copy a vclock after applying each entry.
> > Currently, it should be pretty cheap, but in the future, when we make
> > vclock store any number of ids, this might get pretty heavy.
> > Can we minimize the number of memcpys somehow, ideally doing it only
> > on the rollback path?
>
> In that case we would have to preserve the vclock for rollback, but that
> can only be done with vclock_copy too. vclock_copy is used only for the
> whole batch with all its entries. When we introduce an unlimited vclock,
> we should also introduce vclock_diff and then use them.

Fair enough.