[Tarantool-patches] [PATCH v2 6/7] qsync: implement direct write of CONFIRM/ROLLBACK into a journal

Cyrill Gorcunov gorcunov at gmail.com
Fri Jul 24 21:16:40 MSK 2020


On Fri, Jul 24, 2020 at 12:10:58AM +0200, Vladislav Shpilevoy wrote:
> 
> 1. arhitectural -> architectural.

Thanks!

> > +static void
> > +txn_limbo_write_cb(struct journal_entry *entry)
> > +{
> > +	/*
> > +	 * It is safe to call fiber_wakeup on
> > +	 * active fiber but we need to call wakeup
> > +	 * in case if we're sitting in WAL thread.
> 
> 2. Actually it is not safe to call wakeup on the active fiber. It
> leads to something being broken in the scheduler. This is why we
> have so many checks like
> 
> 	if (txn->fiber != fiber()) fiber_wakeup(txn->fiber);
> 
> But yeah, in this case this is fine - the fiber is sleeping for
> sure.

As we've been talking f2f, it is safe.

> > +txn_limbo_write(uint32_t replica_id, int64_t lsn, int type)
> >  {
> > +	assert(replica_id != REPLICA_ID_NIL);
> > +	assert(type == IPROTO_CONFIRM || type == IPROTO_ROLLBACK);
> >  	assert(lsn > 0);
> >  
> > -	struct xrow_header row;
> > -	struct request request = {
> > -		.header = &row,
> > -	};
> > +	char buf[sizeof(struct journal_entry) +
> > +		 sizeof(struct xrow_header *) +
> > +		 sizeof(struct xrow_header)];
> 
> 3. The last addition is a bit of an overkill. Why can't
> you declare {struct xrow_header row;} on the stack separately? I
> understand
> 
> 	sizeof(struct journal_entry) + sizeof(struct xrow_header *)
> 
> - they should be one object. But the third addition does not
> necessarily need to be done in the same buffer variable.

In the new series (which I've not sent yet) I've done the following:

static int
txn_limbo_write(uint32_t replica_id, int64_t lsn, int type)
{
	assert(replica_id != REPLICA_ID_NIL);
	assert(type == IPROTO_CONFIRM || type == IPROTO_ROLLBACK);
	assert(lsn > 0);

	/*
	 * When allocated statically, some compilers (such as
	 * clang + asan) require the journal_entry::rows to
	 * be last in a container structure. So it is simpler
	 * just to create a cumulative buffer.
	 */
	char buf[sizeof(struct journal_entry) +
		 sizeof(struct xrow_header *)];

	struct synchro_body_bin body_bin;
	struct xrow_header row;

	struct journal_entry *entry = (struct journal_entry *)buf;
	entry->rows[0] = &row;

	xrow_encode_synchro(&row, &body_bin, replica_id, lsn, type);

	journal_entry_create(entry, 1, xrow_approx_len(&row),
			     txn_limbo_write_cb, fiber());

...
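For reference, the stack-buffer trick above can be shown in a minimal, self-contained sketch. Note this is an illustration only: journal_entry and xrow_header here are simplified stand-ins for the real Tarantool structs, and demo_lsn is a hypothetical helper, not code from the patch.

```c
#include <assert.h>
#include <stdalign.h>
#include <stddef.h>

/* Simplified stand-in for Tarantool's xrow_header. */
struct xrow_header {
	int type;
	long lsn;
};

/* Simplified stand-in for Tarantool's journal_entry. */
struct journal_entry {
	size_t approx_len;
	int n_rows;
	/* Flexible array member: must be the last field. */
	struct xrow_header *rows[];
};

/*
 * Lay out a journal_entry plus one row pointer in a single
 * stack buffer, so entry->rows[0] stays inside one allocation
 * (which keeps clang + asan happy), then read the lsn back
 * through it.
 */
long
demo_lsn(void)
{
	alignas(struct journal_entry) char buf[
		sizeof(struct journal_entry) +
		sizeof(struct xrow_header *)];

	struct xrow_header row = { .type = 1, .lsn = 42 };
	struct journal_entry *entry = (struct journal_entry *)buf;

	entry->n_rows = 1;
	entry->rows[0] = &row;
	assert(entry->rows[0]->lsn == 42);
	return entry->rows[0]->lsn;
}
```

The alignas qualifier is needed because a bare char array carries no alignment guarantee for the struct it is reinterpreted as.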
> 
> 
> 4. I think all these changes are going to interfere with
> https://github.com/tarantool/tarantool/issues/5151.
> 
> What are you going to do with request_synchro_body to implement 5151? Will they be
> two separate structs? Or will the struct synchro_request contain the body? - this
> would lead to data duplication though, because the point of synchro_request is to
> provide the synchro fields neither encoded nor byte-swapped.
> 
> Would be good to think through how these changes will live with 5151.


I thought of declaring a new structure like

struct synchro_request {
	uint32_t replica_id;
	int64_t lsn;
};

The conversion to binary form has to be done anyway, but once
we have a _bin structure we can encode it quickly and use the
stack instead of dynamically allocated memory with the mp_()
helpers used to encode entries. (The current mp_ helpers use
too many if's internally and we really should consider the
performance loss due to them; I think not compressing msgpack
would gain a significant speedup. Keep in mind that later we
encode the data a second time when pushing it down to the
journal's binary records.)

Back to synchro_request -- instead of the open-coded replica_id
and lsn pair we could switch to the synchro_request structure
in arguments. This will address 5151. I hope to implement
it on Monday and show you the result.
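A sketch of what that argument change could look like -- the struct is taken from the proposal above, while synchro_request_check is a hypothetical helper used only to illustrate passing the pair as one object, not an actual Tarantool function:

```c
#include <assert.h>
#include <stdint.h>

/* Proposed struct: one object instead of (replica_id, lsn) args. */
struct synchro_request {
	uint32_t replica_id;
	int64_t lsn;
};

/*
 * Hypothetical validation helper: the assertions from
 * txn_limbo_write expressed against the request struct.
 * Returns 1 if the request looks valid, 0 otherwise.
 */
int
synchro_request_check(const struct synchro_request *req)
{
	/* REPLICA_ID_NIL is 0 in Tarantool. */
	return req->replica_id != 0 && req->lsn > 0;
}
```

Callers would then build one struct and pass a pointer, which also gives 5151 a single place to attach the encoded body later.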
> 
> 5. It looks like you could make things simpler, if you would create a
> struct containing the journal_entry, body, *xrow_header[1], and
> xrow_header. Would simplify things IMO. You wouldn't need the stack
> manipulations like you did above. 

Above I've posted the new approach I use.

