From: Aleksandr Lyapunov <alyapunov@tarantool.org>
To: Konstantin Osipov <kostja.osipov@gmail.com>
Cc: tarantool-patches@dev.tarantool.org
Subject: Re: [Tarantool-patches] [PATCH] small: unite the oscillation cache of all mempools
Date: Fri, 10 Apr 2020 15:44:15 +0300
Message-ID: <8938a5ca-a4af-3275-96d0-1848b98fb683@tarantool.org>
In-Reply-To: <CAPZPwLrNbdCzX7LR+_P8xrZ+unDpw-k2EP1Jj5mAsGOTWz2frA@mail.gmail.com>
Yes, there is a case where we waste a lot of memory. But I think that
the case of split/merge hammering is not so unlikely either. I guess
there are workloads where tuple lifetime is very short, for example a queue.
If we add tuples of different sizes to such a workload, spread over dozens
of small slabs, we get a case where small mempools very frequently
switch between empty and non-empty states. And if the average number
of non-empty small mempools is around some magic number (a power
of two, if all slabs have the same size), there could be a significant
performance difference.
This case is not frequent, but we should not degrade the worst case.
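
To make the scenario concrete, below is a toy model of that workload
(plain C, not the real small/ API; all names are made up). A pool whose
live-object count oscillates around zero goes to the slab cache on every
alloc/free pair unless it keeps one spare slab:

    /*
     * Toy model: count trips to the slab cache when a pool's
     * live-object count oscillates around zero, with and without
     * one cached "spare" slab.
     */
    #include <stdio.h>
    #include <stdbool.h>

    struct toy_pool {
            int live_objects;   /* objects currently allocated from the pool */
            bool has_slab;      /* the pool currently owns a slab */
            bool keep_spare;    /* keep the last slab instead of returning it */
            int slab_requests;  /* trips to the slab cache (split/merge work) */
    };

    static void toy_alloc(struct toy_pool *p)
    {
            if (!p->has_slab) {
                    p->slab_requests++;  /* get a slab, possibly splitting a bigger one */
                    p->has_slab = true;
            }
            p->live_objects++;
    }

    static void toy_free(struct toy_pool *p)
    {
            if (--p->live_objects == 0 && !p->keep_spare) {
                    p->slab_requests++;  /* return the slab, possibly merging buddies */
                    p->has_slab = false;
            }
    }

    int main(void)
    {
            /* Queue-like workload: every tuple dies right after it is created. */
            for (int keep = 0; keep <= 1; keep++) {
                    struct toy_pool p = { .keep_spare = keep != 0 };
                    for (int i = 0; i < 1000000; i++) {
                            toy_alloc(&p);
                            toy_free(&p);
                    }
                    printf("keep_spare=%d -> %d slab cache requests\n",
                           keep, p.slab_requests);
            }
            return 0;
    }

Without the spare slab the loop does two slab cache requests per
iteration (a potential split on get, a potential merge on put); with
the spare slab it does one in total.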
On 4/10/20 11:46 AM, Konstantin Osipov wrote:
> Aleksandr,
>
> just weigh the chances and the costs. This optimization locks up up
> to a few hundred megs of memory. The chances that
> slab_get_with_order() is going to oscillate exactly the way you
> describe are minimal: *all* pools must release their fragments of a
> larger slab back to the slab cache. What are the chances of this
> happening at the same time? Even if this happens, the cost of a loop
> which performs a slab split/slab merge is minimal. The optimization is
> simply not worth it in terms of price/performance, and causes pain in
> all small installs.
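
For scale, with purely illustrative numbers (not from this thread): if 64
small mempools each keep one spare 4 MiB slab, about 64 * 4 MiB = 256 MiB
stays pinned even when the pools are otherwise empty, which is the kind of
"few hundred megs" being weighed against the split/merge savings above.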
Thread overview: 13+ messages
2020-04-10 8:36 Aleksandr Lyapunov
[not found] ` <CAPZPwLrNbdCzX7LR+_P8xrZ+unDpw-k2EP1Jj5mAsGOTWz2frA@mail.gmail.com>
2020-04-10 12:44 ` Aleksandr Lyapunov [this message]
-- strict thread matches above, loose matches on Subject: below --
2020-01-29 1:48 Maksim Kulis
2020-01-29 10:28 ` Kirill Yukhin
2020-01-29 21:46 ` Konstantin Osipov
2020-01-30 8:02 ` Kirill Yukhin
2020-01-30 8:34 ` Konstantin Osipov
2020-01-30 11:18 ` Alexander Turenko
2020-01-30 12:23 ` Konstantin Osipov
2020-01-30 13:05 ` Alexander Turenko
2020-01-30 14:47 ` Konstantin Osipov
2020-01-30 12:20 ` Kirill Yukhin
2020-01-30 12:36 ` Konstantin Osipov