[Tarantool-patches] [PATCH v5 3/5] box/applier: fix nil dereference in applier rollback

Cyrill Gorcunov gorcunov at gmail.com
Wed Feb 5 11:27:21 MSK 2020


On Wed, Feb 05, 2020 at 01:11:10AM +0300, Konstantin Osipov wrote:
> * Cyrill Gorcunov <gorcunov at gmail.com> [20/01/28 10:16]:
> > +
> > +	/*
> > +	 * We must not lose the original error; instead
> > +	 * let's keep it in the replicaset diag instance.
> > +	 *
> > +	 * FIXME: We need to revisit this code and
> > +	 * figure out if we can reconnect and retry
> > +	 * the replication process instead of cancelling
> > +	 * the applier with FiberIsCancelled.
> 
> First of all, we're dealing with a regression introduced by the 
> parallel applier patch. 
> 
> Could you please describe what is triggering the error?

Kostya, it is a vague area; all we managed to narrow down is that
the issue shows up once the rollback procedure has started (I think
the real trigger was somewhere inside vinyl processing, for example
a B+ tree failure).

> 
> > +		/*
> > +		 * If the information is already lost
> > +		 * (say, xlog cleared the diag instance)
> 
> I don't understand this comment. How can it be lost exactly?

Hmmm, I think you're right. Actually, unweaving all the possible
call traces by hand (which I had to do) is quite an exhausting task,
so I might be wrong here.

> 
> > +		 * set up a generic ClientError. Seriously,
> > +		 * we need to unweave this mess: if an error
> > +		 * happened it must never be cleared until
> > +		 * error handling in rollback.
> 
> :-/
> 
> > +		 */
> > +		diag_set(ClientError, ER_WAL_IO);
> > +		e = diag_last_error(diag_get());
> > +	}
> > +	diag_add_error(&replicaset.applier.diag, e);
> > +
> >  	/* Broadcast the rollback event across all appliers. */
> >  	trigger_run(&replicaset.applier.on_rollback, event);
> >  	/* Rollback applier vclock to the committed one. */
> > @@ -849,8 +871,20 @@ applier_on_rollback(struct trigger *trigger, void *event)
> >  		diag_add_error(&applier->diag,
> >  			       diag_last_error(&replicaset.applier.diag));
> >  	}
> > -	/* Stop the applier fiber. */
> > +
> > +	/*
> > +	 * Something really bad happened, we can't proceed,
> > +	 * thus stop the applier and throw a FiberIsCancelled
> > +	 * exception, which will be caught by the caller
> > +	 * so the fiber finishes gracefully.
> > +	 *
> > +	 * FIXME: Need to make sure that this is really a
> > +	 * final error after which we can no longer proceed
> > +	 * and should zap the applier; perhaps we could
> > +	 * reconnect and retry instead?
> > +	 */
> >  	fiber_cancel(applier->reader);
> 
> Let's begin by explaining why we need to cancel the reader fiber here.

This fiber_cancel was already here; I only added diag_set(FiberIsCancelled)
to throw an exception so that the caller would zap this applier fiber.
Actually, I think we could retry instead of reaping the fiber off
completely, but that requires a deeper understanding of how the
applier works, so I left it as a FIXME comment.

> 
> > +	diag_set(FiberIsCancelled);
> 
> This is clearly a crutch: first you make an effort to set
> replicaset.applier.diag and then it is not used by diag_raise().

Not exactly. If I understand the initial logic of this applier
try/catch branch correctly, we need to set up replicaset.applier.diag
and then, on FiberIsCancelled, move it from replicaset.applier.diag
back to the current fiber->diag.
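
On the caller side I mean something along these lines (a hypothetical
sketch of the applier fiber's catch branch written from memory; the
exact shape in applier.cc may differ):

	try {
		applier_subscribe(applier);
	} catch (FiberIsCancelled *e) {
		/*
		 * The real cause was stashed in replicaset.applier.diag
		 * by the rollback trigger; move it back into the fiber's
		 * own diag so that it is what gets reported, not the
		 * bare FiberIsCancelled.
		 */
		if (!diag_is_empty(&replicaset.applier.diag))
			diag_move(&replicaset.applier.diag, &fiber()->diag);
		diag_raise();
	}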

