Tarantool development patches archive
From: Konstantin Osipov <kostja.osipov@gmail.com>
To: Alexander Turenko <alexander.turenko@tarantool.org>
Cc: tarantool-patches@dev.tarantool.org
Subject: Re: [Tarantool-patches] [small] Revert "Free all slabs on region reset"
Date: Thu, 30 Jan 2020 15:16:04 +0300
Message-ID: <20200130121604.GA11018@atlas>
In-Reply-To: <20200130110315.gzdi3q37ey2ytvct@tkn_work_nb>

* Alexander Turenko <alexander.turenko@tarantool.org> [20/01/30 14:06]:
> > > Georgy's fix for small keeps the linear traversal over allocated
> > > slabs in region_reset(). The fixed region_reserve() releases at
> > > most one slab per allocation, so this linear traversal will
> > > persist across several region_reset() calls in a row.
> > 
> > Why do you think region_reset() should traverse over empty slabs?
> > Empty slabs should always remain at the end of the list; let's
> > stick to that invariant.
> 
> I don't get the point. Do you propose to stop this traversal at the
> first unused slab? AFAIU, there may be slabs with zero slab->used in
> the middle, so we cannot stop based on the `slab->used` value.

Let's make sure it's impossible. This is basically what Georgy's
patch is doing already.
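
Roughly, with that invariant in place region_reset() only touches the
non-empty prefix of the list. A minimal sketch of the idea, with
illustrative names rather than the exact small API:

#include <stddef.h>

struct rslab {
        struct rslab *next;
        size_t cap;     /* usable bytes in this slab */
        size_t used;    /* bytes allocated from it so far */
};

struct region {
        struct rslab *slabs;    /* head; slabs being filled first */
        size_t used;            /* total bytes allocated */
};

void
region_reset(struct region *r)
{
        for (struct rslab *s = r->slabs; s != NULL; s = s->next) {
                /*
                 * Invariant: empty slabs stay at the tail, so the
                 * first slab with used == 0 means the rest of the
                 * list is already clean.
                 */
                if (s->used == 0)
                        break;
                s->used = 0;
        }
        r->used = 0;
}

New slabs are pushed at the head and filled there, so non-empty slabs
form a prefix of the list and the early break is safe.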

> > region_free()/region_reset() should be (and is in most cases)
> > bounded, since it is called as soon as region_used() reaches 128KB.
> > So yes, it's a list traversal, but it is done only once in every
> > dozen or so requests and is bounded by a list of a few dozen
> > slabs.
> 
> A list traversal on **each** region_reset(), not once. 20k slabs,
> not a few dozen. It does not look okay. It is even visible in the
> benchmark in #4736 (if you uncomment the per-batch RPS prints and
> run it on the patched version).

The 20k figure is caused by the bug; it's not worth considering.
Typically it's 128KB / 4KB = 32 elements.
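
The caller-side pattern I mean is roughly this (an assumed threshold
and function name, not the actual fiber code; it reuses the structures
from the sketch above):

enum {
        REGION_GC_THRESHOLD = 128 * 1024,       /* 128KB */
};

static void
fiber_gc_sketch(struct region *r)
{
        /* With 4KB slabs this resets ~128KB / 4KB = 32 slabs. */
        if (r->used >= REGION_GC_THRESHOLD)
                region_reset(r);
}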

> > The fix that Georgy made in May was not only about tests. It also
> > improved region memory usage (and thus, indirectly, performance).
> 
> 1. This needs to be verified.
> 2. Even if it increases performance on some workloads, it should not
>    significantly degrade others.
> 
> Small deallocations during allocation are a good idea. Let's file an
> issue for this.
> 
> A linear traversal in region_reset() is a bad idea, IMHO. Especially
> considering the memory access pattern: it is a list traversal, not an
> array scan. Especially if we need to traverse a slab N/2 times on
> average (N is the list size). At least until the upper limit of the
> list size is proven to be small on every workload.
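
For the record, the "release at most one slab per allocation"
behaviour mentioned above is roughly this (illustrative names again,
minimal error handling, not the actual small code):

#include <stdlib.h>

void *
region_reserve_sketch(struct region *r, size_t size)
{
        struct rslab *head = r->slabs;
        if (head != NULL && head->cap - head->used < size) {
                if (head->used == 0) {
                        /*
                         * Release at most one empty slab per call:
                         * deallocation cost is spread across
                         * allocations instead of piling up until
                         * region_reset().
                         */
                        r->slabs = head->next;
                        free(head);
                }
                head = NULL;
        }
        if (head == NULL) {
                size_t cap = size > 4096 ? size : 4096;
                head = malloc(sizeof(*head) + cap);
                if (head == NULL)
                        return NULL;
                head->next = r->slabs;
                head->cap = cap;
                head->used = 0;
                r->slabs = head;
        }
        /* The caller commits the reservation separately. */
        return (char *)(head + 1) + head->used;
}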

Well, OK. I guess you need time to sort this out.

-- 
Konstantin Osipov, Moscow, Russia
