From: Cyrill Gorcunov <gorcunov@gmail.com>
To: Konstantin Osipov <kostja.osipov@gmail.com>
Cc: tml <tarantool-patches@dev.tarantool.org>
Subject: Re: [Tarantool-patches] [PATCH v2 2/2] fiber: exit with panic if we unable to revert guard page
Date: Tue, 4 Feb 2020 19:08:00 +0300
Message-ID: <20200204160800.GF12445@uranus>
In-Reply-To: <20200204154742.GA32754@atlas>
On Tue, Feb 04, 2020 at 06:47:42PM +0300, Konstantin Osipov wrote:
> > --- a/src/lib/core/fiber.c
> > +++ b/src/lib/core/fiber.c
> > @@ -1041,13 +1041,17 @@ fiber_stack_destroy(struct fiber *fiber, struct slab_cache *slabc)
> > * to setup the original protection back in
> > * background.
> > *
> > + * For now lets exit with panic: if mprotect
> > + * failed we must not allow to reuse such slab
> > + * with PROT_NONE'ed page somewhere inside.
> > + *
>
> somewhere inside its stack area would be more clear.
Thanks!
> > * Note that in case if we're called from
> > * fiber_stack_create() the @mprotect_flags is
> > * the same as the slab been created with, so
> > * calling mprotect for VMA with same flags
> > * won't fail.
> > */
> > - diag_log();
> > + panic_syserror("fiber: Can't put guard page to slab");
>
> While the patch itself is LGTM (we need to nail down the cause
> of the failure even at the cost of a crash), I suspect what
> we're getting here is ENOMEM from the kernel: we have too many
> mprotect'ed regions and the kernel runs out of some internal
> resources for them.
Yes, it seems we might run out of VMAs when there are too many
fibers and the kernel doesn't have enough memory to keep
splitting them.
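To make that failure mode concrete, here is a minimal standalone
program (not Tarantool code; Linux assumed) showing how a single
interior PROT_NONE page splits one mapping into three VMAs, so N
fiber stacks cost roughly 3N VMAs and mprotect() starts failing
with ENOMEM once vm.max_map_count (or kernel memory for VMA
structures) is exhausted:

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	size_t stack_size = 16 * (size_t)page;

	/* One anonymous mapping: a single VMA in the kernel. */
	char *stack = mmap(NULL, stack_size, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (stack == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/*
	 * Protecting one interior page splits the VMA into three:
	 * [rw][none][rw]. With one guard page per fiber stack the
	 * VMA count grows ~3x per fiber.
	 */
	if (mprotect(stack + 4 * page, page, PROT_NONE) != 0) {
		perror("mprotect");
		return 1;
	}
	printf("guard page at %p, see /proc/self/maps for the split\n",
	       (void *)(stack + 4 * page));
	return 0;
}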
> I think adding better diagnostics could help us identify the
> issue: the failed address, its slab, and the total number of
> fibers (and hence of mprotect'ed pages).
>
> I have also discussed the issue with @xemul, and he suggests that
> wrong slab alignment could be causing this.
True, and we need to investigate it. Once we have the "fix"
merged in, I'll file a bug to track these ideas. Thanks a lot,
Kostya!
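For the richer diagnostics suggested above, something along these
lines could extend the panic. This is only a sketch:
panic_syserror() is assumed to take printf-style arguments as it
does elsewhere in the tree, and the helper with its guard
address/stack size/fiber count parameters is illustrative, not an
existing fiber.c interface:

static void
fiber_stack_guard_panic(void *guard, size_t stack_size, int alive_fibers)
{
	/*
	 * The three values are the ones asked for above; the
	 * callers would pass them down from fiber_stack_destroy().
	 */
	panic_syserror("fiber: Can't put guard page to slab: "
		       "guard addr %p, stack size %zu, alive fibers %d",
		       guard, stack_size, alive_fibers);
}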
>
> Finally, we could try to revert the mprotect() first and, if it
> fails, avoid destroying the fiber and keep it cached for a while
> longer. We could retry destroying it when the kernel has more
> memory.
Yes. I've put a FIXME for such an intelligent exit (though I was
thinking about stack slabs only, not complete fibers).
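A rough sketch of that deferred destroy, assuming fiber.c's
existing includes and its page_size variable; the list type and
helper names are made up here, while rlist and slab_put() are the
small-library primitives fiber.c already uses:

struct stack_to_retire {
	struct rlist link;		/* entry in the pending list */
	struct slab *stack_slab;	/* slab we failed to unprotect */
	void *guard;			/* the PROT_NONE'ed guard page */
};

static RLIST_HEAD(pending_stacks);

/*
 * Instead of panicking, park the slab: it must not be reused
 * while a PROT_NONE page sits inside its stack area.
 */
static void
fiber_stack_retire_later(struct slab *stack_slab, void *guard)
{
	struct stack_to_retire *r = malloc(sizeof(*r));
	if (r == NULL)
		panic("fiber: out of memory parking a stack slab");
	r->stack_slab = stack_slab;
	r->guard = guard;
	rlist_add_tail_entry(&pending_stacks, r, link);
}

/* Retry once in a while, e.g. from the fiber cache GC path. */
static void
fiber_stack_retire_flush(struct slab_cache *slabc)
{
	struct stack_to_retire *r, *tmp;
	rlist_foreach_entry_safe(r, &pending_stacks, link, tmp) {
		if (mprotect(r->guard, page_size,
			     PROT_READ | PROT_WRITE) != 0)
			continue; /* kernel still short on VMAs */
		rlist_del_entry(r, link);
		slab_put(slabc, r->stack_slab);
		free(r);
	}
}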
>
> These are, of course, ideas for follow-ups.
Thread overview (9+ messages):
2020-01-15 17:05 [Tarantool-patches] [PATCH v2 0/2] fiber: Handle stack madvise/mprotect errors Cyrill Gorcunov
2020-01-15 17:05 ` [Tarantool-patches] [PATCH v2 1/2] fiber: use diag_ logger in fiber_madvise/mprotect failures Cyrill Gorcunov
2020-02-03 21:56 ` Alexander Turenko
2020-02-03 22:05 ` Cyrill Gorcunov
2020-02-03 22:10 ` Alexander Turenko
2020-01-15 17:05 ` [Tarantool-patches] [PATCH v2 2/2] fiber: exit with panic if we unable to revert guard page Cyrill Gorcunov
2020-02-03 22:03 ` Alexander Turenko
2020-02-04 15:47 ` Konstantin Osipov
2020-02-04 16:08 ` Cyrill Gorcunov [this message]