Tarantool development patches archive
* Re: [Tarantool-patches] [PATCH] small: unite the oscillation cache of all mempools
@ 2020-04-10  8:36 Aleksandr Lyapunov
       [not found] ` <CAPZPwLrNbdCzX7LR+_P8xrZ+unDpw-k2EP1Jj5mAsGOTWz2frA@mail.gmail.com>
  0 siblings, 1 reply; 13+ messages in thread
From: Aleksandr Lyapunov @ 2020-04-10  8:36 UTC (permalink / raw)
  To: bokuno, tarantool-patches


Thanks for the patch!

As I see it, mempool::spare was implemented in order to prevent
a dangerous oscillation that is different from the oscillation of
the largest-size slab in slab_cache.
Allocating a slab from slab_cache for a mempool can trigger the
slab-splitting routine, which may not be as fast as one expects.
In the edge case the largest-size slab is split in two parts,
one of the parts is split again and so on, up to 16 iterations.
Releasing such a slab causes the reverse procedure - merging
all those slabs back into the largest one. And if we don't keep
a spare slab in the mempool, a simple allocation/deallocation of
one and only one item can be deadly slow. Imagine a loop with
that workload.
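
A minimal sketch of such a loop (assuming the public mempool API from
small/mempool.h, and no spare-slab caching at all):

struct mempool pool;
/* ... mempool_create(&pool, &cache, objsize) ... */
for (int i = 0; i < 1000000; i++) {
	/* The pool is empty: mempool_alloc() has to take a slab from
	 * slab_cache, possibly splitting a larger slab several times. */
	void *p = mempool_alloc(&pool);
	/* The slab becomes empty again: without a spare it goes back to
	 * slab_cache, possibly merging all the way back up. */
	mempool_free(&pool, p);
}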

So I'm sure that we need the mempool::spare optimization in order
to make mempool allocation performance more predictable.

Sorry, thanks btw.

> The task is to unite the oscillation cache of all mempools.
> In the function slab_put_with_order() we check if there is any
> slab of the largest size except the current one, in order to
> avoid oscillation.
> https://github.com/tarantool/small/blob/master/small/slab_cache.c#L349
>
> ---
>   small/mempool.c | 22 ++++------------------
>   small/mempool.h |  6 ------
>   2 files changed, 4 insertions(+), 24 deletions(-)
>
> diff --git a/small/mempool.c b/small/mempool.c
> index b20a416..54bd58d 100644
> --- a/small/mempool.c
> +++ b/small/mempool.c
> @@ -126,18 +126,9 @@ mslab_free(struct mempool *pool, struct mslab *slab, void *ptr)
>   		}
>   		mslab_tree_remove(&pool->hot_slabs, slab);
>   		slab->in_hot_slabs = false;
> -		if (pool->spare > slab) {
> -			slab_list_del(&pool->slabs, &pool->spare->slab,
> -				      next_in_list);
> -			slab_put_with_order(pool->cache, &pool->spare->slab);
> -			pool->spare = slab;
> -		 } else if (pool->spare) {
> -			 slab_list_del(&pool->slabs, &slab->slab,
> -				       next_in_list);
> -			 slab_put_with_order(pool->cache, &slab->slab);
> -		 } else {
> -			 pool->spare = slab;
> -		 }
> +		slab_list_del(&pool->slabs, &slab->slab,
> +			      next_in_list);
> +		slab_put_with_order(pool->cache, &slab->slab);
>   	}
>   }
>   
> @@ -153,7 +144,6 @@ mempool_create_with_order(struct mempool *pool, struct slab_cache *cache,
>   	mslab_tree_new(&pool->hot_slabs);
>   	pool->first_hot_slab = NULL;
>   	rlist_create(&pool->cold_slabs);
> -	pool->spare = NULL;
>   	pool->objsize = objsize;
>   	pool->slab_order = order;
>   	/* Total size of slab */
> @@ -179,11 +169,7 @@ mempool_alloc(struct mempool *pool)
>   {
>   	struct mslab *slab = pool->first_hot_slab;
>   	if (slab == NULL) {
> -		if (pool->spare) {
> -			slab = pool->spare;
> -			pool->spare = NULL;
> -
> -		} else if ((slab = (struct mslab *)
> +		if ((slab = (struct mslab *)
>   			    slab_get_with_order(pool->cache,
>   						pool->slab_order))) {
>   			mslab_create(slab, pool);
> diff --git a/small/mempool.h b/small/mempool.h
> index 15d85a4..9d0e162 100644
> --- a/small/mempool.h
> +++ b/small/mempool.h
> @@ -149,12 +149,6 @@ struct mempool
>   	 * tree is empty or the allocator runs out of memory.
>   	 */
>   	struct rlist cold_slabs;
> -	/**
> -	 * A completely empty slab which is not freed only to
> -	 * avoid the overhead of slab_cache oscillation around
> -	 * a single element allocation.
> -	 */
> -	struct mslab *spare;
>   	/**
>   	 * The size of an individual object. All objects
>   	 * allocated on the pool have the same size.
> -- 
> 2.17.1



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Tarantool-patches] [PATCH] small: unite the oscillation cache of all mempools
       [not found] ` <CAPZPwLrNbdCzX7LR+_P8xrZ+unDpw-k2EP1Jj5mAsGOTWz2frA@mail.gmail.com>
@ 2020-04-10 12:44   ` Aleksandr Lyapunov
  0 siblings, 0 replies; 13+ messages in thread
From: Aleksandr Lyapunov @ 2020-04-10 12:44 UTC (permalink / raw)
  To: Konstantin Osipov; +Cc: tarantool-patches

Yes, there is a case where we waste lots of memory. But I think that
the case of split/merge hammering is not so impossible either. I guess
there are workloads where tuple lifetime is very short, perhaps a queue.
If we add tuples of different sizes to that workload, served from dozens
of small slabs, we'll get a case where small mempools very frequently
switch between empty and non-empty states. And if the average number
of non-empty small mempools is around some magic number (a power
of two if all slabs have the same size), then there could be a significant
performance difference.
This case is not frequent, but we should not degrade the worst case.
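
A rough sketch of such a workload (assuming smalloc()/smfree() with the
signatures from small/small.h):

void *ptrs[64];
size_t sizes[64];
for (long iter = 0; iter < 1000000; iter++) {
	/* One short-lived object per size class: every touched mempool
	 * flips to non-empty... */
	for (int i = 0; i < 64; i++) {
		sizes[i] = 32 + i * 16;
		ptrs[i] = smalloc(&alloc, sizes[i]);
	}
	/* ...and, queue-like, right back to empty, returning its slab
	 * to slab_cache on every iteration when there is no spare. */
	for (int i = 0; i < 64; i++)
		smfree(&alloc, ptrs[i], sizes[i]);
}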

On 4/10/20 11:46 AM, Konstantin Osipov wrote:
> Aleksandr,
>
> just weigh the chances and the costs. This optimization locks up up
> to a few hundred megs of memory. The chances that
> slab_get_with_order() is going to oscillate exactly the way you
> describe are minimal: *all* pools must release their fragments of a
> larger slab back to the slab cache. What are the chances of this
> happening at the same time? Even if this happens, the cost of a loop
> which performs slab split/slab merge is minimal. The optimization is
> simply not worth it in terms of price/performance, and causes pain in
> all small installs.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Tarantool-patches] [PATCH] small: unite the oscillation cache of all mempools
  2020-01-30 13:05             ` Alexander Turenko
@ 2020-01-30 14:47               ` Konstantin Osipov
  0 siblings, 0 replies; 13+ messages in thread
From: Konstantin Osipov @ 2020-01-30 14:47 UTC (permalink / raw)
  To: Alexander Turenko; +Cc: tarantool-patches

* Alexander Turenko <alexander.turenko@tarantool.org> [20/01/30 17:38]:
> The processes were not better (at least) when you were a manager here. Now
> you don't sit near us and have even less of the necessary context. Yep, I
> personally think that we should ignore your suggestions in this area.
> This is not a viewpoint of the company, it is my personal opinion.

This logic is flawed. If something was wrong before, that is no reason
to keep going like that now.

> 
> And since DDL tests are entirely unrelated to this thread, it does not
> look like feedback whose goal is to suggest that we raise the priority of
> this issue. It is either emotions or an attempt to present us in an
> unfavourable light (because of, again, emotions or the business needs of
> your tarantool consulting, but likely the former).
> 
> Sorry if it sounds offensive.

It's not about "business" or "emotions" at all. I have been
suggesting before that decisions like that should be taken by
maintainers, not managers.

This is a governance issue, and it is in the interest of everyone
if it is solved. Right now whatever decision
Kirill may take is by definition subjective, especially since
there is no public policy and the expertise in Mail.Ru is scarce.

> (I propose that we don't continue this particular discussion, at least not here.)

Well, I definitely don't want to discuss emotions. But I have been
repeatedly saying there should be a policy open to external
contributions that serves in the best interest of the project.

> > As I said, I don't think this tiny patch justifies it. 
> > The patch fixes a real issue that multiple users have hit.
> 
> This is one example of the bad processes that we had when you worked
> with us: 'Kostja said LGTM, so proper testing is not necessary'.

Not necessarily. You could do an internal review, like you do
with other patches that I LGTM.

> > If bench.tarantool.org was operational, it would be easy to see if
> > the patch has any impact on performance. This is the only remotely
> > relevant concern here.
> 
> Maybe. Black box testing is not always enough.
> 
> > 
> > Meanwhile, if you wish to ignore it because it doesn't come with a
> > testing infrastructure - it's up to you.
> 
> I had asked for a workload that can show gains at least under a manual
> run, not a testing infrastructure.

Obviously that wasn't clear. Such a workload is easy to build.

-- 
Konstantin Osipov, Moscow, Russia

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Tarantool-patches] [PATCH] small: unite the oscillation cache of all mempools
  2020-01-30 12:23           ` Konstantin Osipov
@ 2020-01-30 13:05             ` Alexander Turenko
  2020-01-30 14:47               ` Konstantin Osipov
  0 siblings, 1 reply; 13+ messages in thread
From: Alexander Turenko @ 2020-01-30 13:05 UTC (permalink / raw)
  To: Konstantin Osipov; +Cc: tarantool-patches

On Thu, Jan 30, 2020 at 03:23:08PM +0300, Konstantin Osipov wrote:
> * Alexander Turenko <alexander.turenko@tarantool.org> [20/01/30 14:21]:
> This mailing list is not about your (or somebody else's) emotions around
> tarantool. Please keep them private or send them to some other place.
> 
> Is it OK to laugh, or are you going to ban me for a joke?
> 
> And anyway, it was not about emotions. I've given you
> feedback about the quality of the process - you can try to
> address it or dismiss it, like you've just done, and like you did
> a few times before; it's of course up to you.

The processes were not better (at least) when you were a manager here. Now
you don't sit near us and have even less of the necessary context. Yep, I
personally think that we should ignore your suggestions in this area.
This is not a viewpoint of the company, it is my personal opinion.

And since DDL tests are entirely unrelated to this thread, it does not
look like feedback whose goal is to suggest that we raise the priority of
this issue. It is either emotions or an attempt to present us in an
unfavourable light (because of, again, emotions or the business needs of
your tarantool consulting, but likely the former).

Sorry if it sounds offensive.

(I propose that we don't continue this particular discussion, at least not here.)

> 
> > > PS It is of course possible to show that memory fragmentation has
> > > decreased with this patch by allocating a few objects and looking
> > at memory stats. But such a test will be fragile and thus bring more
> > > harm than good. 
> > 
> > You may LD_PRELOAD your own mmap() / munmap() / malloc() / free() (see
> > [1]) and count the number of calls. This way you can look at the behaviour
> > before and after the patch manually or (maybe) automatically.
> > 
> > Just strace may be okay too, but there should be a case which shows
> > that the number of mmap/munmap/malloc/free calls decreases.
> > 
> > [1]: https://tbrindus.ca/correct-ld-preload-hooking-libc/
> 
> As I said, I don't think this tiny patch justifies it. 
> The patch fixes a real issue that multiple users have hit.

This is one example of the bad processes that we had when you worked
with us: 'Kostja said LGTM, so proper testing is not necessary'.

> 
> If bench.tarantool.org was operational, it would be easy to see if
> the patch has any impact on performance. This is the only remotely
> relevant concern here.

Maybe. Black box testing is not always enough.

> 
> Meanwhile, if you wish to ignore it because it doesn't come with a
> testing infrastructure - it's up to you.

I had asked for a workload that can show gains at least under a manual
run, not a testing infrastructure.

> 
> 
> 
> it's worth it, neither for this patch 
> 
> -- 
> Konstantin Osipov, Moscow, Russia

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Tarantool-patches] [PATCH] small: unite the oscillation cache of all mempools
  2020-01-30 12:20         ` Kirill Yukhin
@ 2020-01-30 12:36           ` Konstantin Osipov
  0 siblings, 0 replies; 13+ messages in thread
From: Konstantin Osipov @ 2020-01-30 12:36 UTC (permalink / raw)
  To: Kirill Yukhin; +Cc: tarantool-patches

* Kirill Yukhin <kyukhin@tarantool.org> [20/01/30 15:23]:
> > PS It is of course possible to show that memory fragmentation has
> > decreased with this patch by allocating a few objects and looking
> > at memory stats. But such a test will be fragile and thus bring more
> > harm than good.
> 
> The small library is the core of the core; it is one of the essential parts.
> Our team @ MRG doesn't have a guy (or girl) who has enough
> expertise to take responsibility and accept such a change
> without a test.
> 
> The patch looks outstanding, but to start off the review process,
> we need a test, sorry.

OK, I get it, this is a fair response at last. 

We can let the patch cook on a fork and re-submit it after we
learn it's safe.

You can also run it through bench.tarantool.org as soon as it's
functioning.

The patch only removes code, so it should be pretty safe.
It affects some real people from the @tarantoolru chat.

The test you suggest will not catch any side effects of the
patch (if there are any), since it will only allocate a few bytes
and then free them. So I really think such a test is useless.

I guess a good test would be a randomized longevity test
with some integral quality metrics, like allocations/second or
average memory fragmentation. We don't have such a test yet.
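
Such a test could look roughly like this (only a sketch of the idea,
assuming smalloc()/smfree() with the signatures from small/small.h):

/* Keep a table of live objects with random sizes and random lifetimes;
 * report allocations/second as the integral metric. */
enum { SLOTS = 100000 };
static void *obj[SLOTS];
static size_t len[SLOTS];
uint64_t allocs = 0;
time_t duration = 600, start = time(NULL);
while (time(NULL) - start < duration) {
	int i = rand() % SLOTS;            /* random lifetime */
	if (obj[i] != NULL)
		smfree(&alloc, obj[i], len[i]);
	len[i] = 32 + rand() % 4096;       /* random size */
	obj[i] = smalloc(&alloc, len[i]);
	allocs++;
}
printf("allocations/second: %llu\n", (unsigned long long)(allocs / duration));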

If things go as planned, Maksim (with my help) will work on a
log-structured allocator for small, and we might construct such a test
in the scope of this work.

-- 
Konstantin Osipov, Moscow, Russia

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Tarantool-patches] [PATCH] small: unite the oscillation cache of all mempools
  2020-01-30 11:18         ` Alexander Turenko
@ 2020-01-30 12:23           ` Konstantin Osipov
  2020-01-30 13:05             ` Alexander Turenko
  0 siblings, 1 reply; 13+ messages in thread
From: Konstantin Osipov @ 2020-01-30 12:23 UTC (permalink / raw)
  To: Alexander Turenko; +Cc: tarantool-patches

* Alexander Turenko <alexander.turenko@tarantool.org> [20/01/30 14:21]:
> This mailing list is not about your (or somebody else's) emotions around
> tarantool. Please keep them private or send them to some other place.

Is it OK to laugh, or are you going to ban me for a joke?

And anyway, it was not about emotions. I've given you
feedback about the quality of the process - you can try to
address it or dismiss it, like you've just done, and like you did
a few times before; it's of course up to you.

> > PS It is of course possible to show that memory fragmentation has
> > decreased with this patch by allocating a few objects and looking
> > at memory stats. But such a test will be fragile and thus bring more
> > harm than good. 
> 
> You may LD_PRELOAD your own mmap() / munmap() / malloc() / free() (see
> [1]) and count the number of calls. This way you can look at the behaviour
> before and after the patch manually or (maybe) automatically.
> 
> Just strace may be okay too, but there should be a case which shows
> that the number of mmap/munmap/malloc/free calls decreases.
> 
> [1]: https://tbrindus.ca/correct-ld-preload-hooking-libc/

As I said, I don't think this tiny patch justifies it. 
The patch fixes a real issue that multiple users have hit.

If bench.tarantool.org was operational, it would be easy to see if
the patch has any impact on performance. This is the only remotely
relevant concern here.

Meanwhile, if you wish to ignore it because it doesn't come with a
testing infrastructure - it's up to you.



it's worth it, neither for this patch 

-- 
Konstantin Osipov, Moscow, Russia

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Tarantool-patches] [PATCH] small: unite the oscillation cache of all mempools
  2020-01-30  8:34       ` Konstantin Osipov
  2020-01-30 11:18         ` Alexander Turenko
@ 2020-01-30 12:20         ` Kirill Yukhin
  2020-01-30 12:36           ` Konstantin Osipov
  1 sibling, 1 reply; 13+ messages in thread
From: Kirill Yukhin @ 2020-01-30 12:20 UTC (permalink / raw)
  To: Konstantin Osipov, Maksim Kulis, tarantool-patches

Hello,

Thanks a lot for the extensive comment.

On 30 Jan 11:34, Konstantin Osipov wrote:
> PS It is of course possible to show that memory fragmentation has
> decreased with this patch by allocating a few objects and looking
> at memory stats. But such a test will be fragile and thus bring more
> harm than good.

The small library is the core of the core; it is one of the essential parts.
Our team @ MRG doesn't have a guy (or girl) who has enough
expertise to take responsibility and accept such a change
without a test.

The patch looks outstanding, but to start off the review process,
we need a test, sorry.

> 
> -- 
> Konstantin Osipov, Moscow, Russia

--
Regards, Kirill Yukhin

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Tarantool-patches] [PATCH] small: unite the oscillation cache of all mempools
  2020-01-30  8:34       ` Konstantin Osipov
@ 2020-01-30 11:18         ` Alexander Turenko
  2020-01-30 12:23           ` Konstantin Osipov
  2020-01-30 12:20         ` Kirill Yukhin
  1 sibling, 1 reply; 13+ messages in thread
From: Alexander Turenko @ 2020-01-30 11:18 UTC (permalink / raw)
  To: Konstantin Osipov; +Cc: tarantool-patches

> <...>

This mailing list is not about your (or somebody else's) emotions around
tarantool. Please keep them private or send them to some other place.

> PS It is of course possible to show that memory fragmentation has
> decreased with this patch by allocating a few objects and looking
> at memory stats. But such a test will be fragile and thus bring more
> harm than good. 

You may LD_PRELOAD your own mmap() / munmap() / malloc() / free() (see
[1]) and count the number of calls. This way you can look at the behaviour
before and after the patch manually or (maybe) automatically.

Just strace may be okay too, but there should be a case which shows
that the number of mmap/munmap/malloc/free calls decreases.

[1]: https://tbrindus.ca/correct-ld-preload-hooking-libc/
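
A bare-bones sketch of such a hook (build it as a shared object, e.g.
"gcc -shared -fPIC count_mmap.c -o count_mmap.so -ldl", then run the
workload with LD_PRELOAD=./count_mmap.so; the counters are not thread-safe,
which is good enough for a rough before/after comparison):

/* count_mmap.c: count mmap()/munmap() calls and report them at exit. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

static unsigned long mmap_calls, munmap_calls;

static void
report(void)
{
	fprintf(stderr, "mmap: %lu, munmap: %lu\n", mmap_calls, munmap_calls);
}

void *
mmap(void *addr, size_t len, int prot, int flags, int fd, off_t off)
{
	static void *(*real)(void *, size_t, int, int, int, off_t);
	if (real == NULL) {
		real = (void *(*)(void *, size_t, int, int, int, off_t))
			dlsym(RTLD_NEXT, "mmap");
		atexit(report);
	}
	mmap_calls++;
	return real(addr, len, prot, flags, fd, off);
}

int
munmap(void *addr, size_t len)
{
	static int (*real)(void *, size_t);
	if (real == NULL)
		real = (int (*)(void *, size_t))dlsym(RTLD_NEXT, "munmap");
	munmap_calls++;
	return real(addr, len);
}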

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Tarantool-patches] [PATCH] small: unite the oscillation cache of all mempools
  2020-01-30  8:02     ` Kirill Yukhin
@ 2020-01-30  8:34       ` Konstantin Osipov
  2020-01-30 11:18         ` Alexander Turenko
  2020-01-30 12:20         ` Kirill Yukhin
  0 siblings, 2 replies; 13+ messages in thread
From: Konstantin Osipov @ 2020-01-30  8:34 UTC (permalink / raw)
  To: Kirill Yukhin; +Cc: tarantool-patches

* Kirill Yukhin <kyukhin@tarantool.org> [20/01/30 11:02]:
> Hello,
> 
> Thanks a lot for your answer!
> 
> > On 30 Jan 00:46, Konstantin Osipov wrote:
> > * Kirill Yukhin <kyukhin@tarantool.org> [20/01/29 13:30]:
> > > On 29 Jan 04:48, Maksim Kulis wrote:
> > > > The task is to unite the oscillation cache of all mempools.
> > > > In the function slab_put_with_order() we check if there is any 
> > > > slab of the largest size except the current one, in order to 
> > > > avoid oscillation. 
> > > > https://github.com/tarantool/small/blob/master/small/slab_cache.c#L349
> > > 
> > > Thanks a lot for your patch. Since the change is performance
> > > related, before we start off the review process, please provide a
> > > testcase (for tarantool or whatever) which will clearly show
> > > that there are cases where performance actually increases.
> > > 
> > > We won't accept "nice looking" perf patches.
> > 
> > It's not about performance :)
> 
> Okay.
> 
> > The patch reduces memory fragmentation quite a bit on small-arena
> > systems. Imagine a 1G system with slab_alloc_factor = 1.06
> > (default). It can easily have up to 50-100 mempools in the slab arena,
> > each holding on to one mslab, up to 100-400MB in total.
> > 
> > The whole point of holding on to such an mslab is to avoid an mmap()
> > system call (which is costly) in cases like 
> > 
> > for (i = 1..10000)
> >     mslab_alloc()
> >     mslab_free()
> > 
> > This allocation scenario is called "oscillation".
> > 
> > There is no point in it, because there is already a cached mslab in
> > slab_cache, and it will be used just fine.
> > 
> > Maksim should update the CS comment.
> 
> The text is fine. But we need a test which will clearly show that
> the change improves something user-visible. If it does not,
> then it is just refactoring, and I doubt we'll take it.

Is this why you accepted transactional DDL without tests and
haven't done any till now?

Nice to see https://github.com/tarantool/tarantool/issues/4349
being shifted around again and again.

It didn't stop you from proclaiming 2.2 stable either...

In the first response you show you don't know what the patch is
about, and in the second you top it with double standards for
accepting Mail.Ru and community patches.

Good stewardship indeed...

PS It is of course possible to show that memory fragmentation has
decreased with this patch by allocating a few objects and looking
at memory stats. But such a test will be fragile and thus bring more
harm than good. 

-- 
Konstantin Osipov, Moscow, Russia

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Tarantool-patches] [PATCH] small: unite the oscillation cache of all mempools
  2020-01-29 21:46   ` Konstantin Osipov
@ 2020-01-30  8:02     ` Kirill Yukhin
  2020-01-30  8:34       ` Konstantin Osipov
  0 siblings, 1 reply; 13+ messages in thread
From: Kirill Yukhin @ 2020-01-30  8:02 UTC (permalink / raw)
  To: Konstantin Osipov, Maksim Kulis, tarantool-patches

Hello,

Thanks a lot for your answer!

> On 30 Jan 00:46, Konstantin Osipov wrote:
> * Kirill Yukhin <kyukhin@tarantool.org> [20/01/29 13:30]:
> > On 29 Jan 04:48, Maksim Kulis wrote:
> > > The task is to unite the oscillation cache of all mempools.
> > > In the function slab_put_with_order() we check if there is any 
> > > slab of the largest size except the current one, in order to 
> > > avoid oscillation. 
> > > https://github.com/tarantool/small/blob/master/small/slab_cache.c#L349
> > 
> > Thanks a lot for your patch. Since the change is performance
> > related, before we start off the review process, please provide a
> > testcase (for tarantool or whatever) which will clearly show
> > that there are cases where performance actually increases.
> > 
> > We won't accept "nice looking" perf patches.
> 
> It's not about performance :)

Okay.

> The patch reduces memory fragmentation quite a bit on small-arena
> systems. Imagine a 1G system with slab_alloc_factor = 1.06
> (default). It can easily have up to 50-100 mempools in the slab arena,
> each holding on to one mslab, up to 100-400MB in total.
> 
> The whole point of holding on to such an mslab is to avoid an mmap()
> system call (which is costly) in cases like 
> 
> for (i = 1..10000)
>     mslab_alloc()
>     mslab_free()
> 
> This allocation scenario is called "oscillation".
> 
> There is no point in it, because there is already a cached mslab in
> slab_cache, and it will be used just fine.
> 
> Maksim should update the CS comment.

The text is fine. But we need a test which will clearly show that
the change improves something user-visible. If it does not,
then it is just refactoring, and I doubt we'll take it.

--
Regards, Kirill Yukhin

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Tarantool-patches] [PATCH] small: unite the oscillation cache of all mempools
  2020-01-29 10:28 ` Kirill Yukhin
@ 2020-01-29 21:46   ` Konstantin Osipov
  2020-01-30  8:02     ` Kirill Yukhin
  0 siblings, 1 reply; 13+ messages in thread
From: Konstantin Osipov @ 2020-01-29 21:46 UTC (permalink / raw)
  To: Kirill Yukhin; +Cc: tarantool-patches

* Kirill Yukhin <kyukhin@tarantool.org> [20/01/29 13:30]:
> On 29 Jan 04:48, Maksim Kulis wrote:
> > The task is to unite the oscillation cache of all mempools.
> > In the function slab_put_with_order() we check if there is any 
> > slab of the largest size except the current one, in order to 
> > avoid oscillation. 
> > https://github.com/tarantool/small/blob/master/small/slab_cache.c#L349
> 
> Thanks a lot for your patch. Since the change is performance
> related, before we start off the review process, please provide a
> testcase (for tarantool or whatever) which will clearly show
> that there are cases where performance actually increases.
> 
> We won't accept "nice looking" perf patches.

It's not about performance :)

The patch reduces memory fragmentation quite a bit on small-arena
systems. Imagine a 1G system with slab_alloc_factor = 1.06
(default). It can easily have up to 50-100 mempools in the slab arena,
each holding on to one mslab, up to 100-400MB in total.
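
(For a sense of scale, assuming a slab size of roughly 2-4MB per mempool -
a guess, not a number from this thread: 50 pools * 2MB = 100MB and
100 pools * 4MB = 400MB, i.e. up to 10-40% of a 1G arena pinned by
spare slabs alone.)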

The whole point of holding on to such an mslab is to avoid an mmap()
system call (which is costly) in cases like 

for (i = 1..10000)
    mslab_alloc()
    mslab_free()

This allocation scenario is called "oscillation".

There is no point in it, because there is already a cached mslab in
slab_cache, and it will be used just fine.

Maksim should update the CS comment.

-- 
Konstantin Osipov, Moscow, Russia

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Tarantool-patches] [PATCH] small: unite the oscillation cache of all mempools
  2020-01-29  1:48 Maksim Kulis
@ 2020-01-29 10:28 ` Kirill Yukhin
  2020-01-29 21:46   ` Konstantin Osipov
  0 siblings, 1 reply; 13+ messages in thread
From: Kirill Yukhin @ 2020-01-29 10:28 UTC (permalink / raw)
  To: Maksim Kulis; +Cc: tarantool-patches

Hello,

On 29 Jan 04:48, Maksim Kulis wrote:
> The task is to unite the oscillation cache of all mempools.
> In the function slab_put_with_order() we check if there is any 
> slab of the largest size except the current one, in order to 
> avoid oscillation. 
> https://github.com/tarantool/small/blob/master/small/slab_cache.c#L349

Thanks a lot for your patch. Since the change is performance
related, before we start off the review process, please provide a
testcase (for tarantool or whatever) which will clearly show
that there are cases where performance actually increases.

We won't accept "nice looking" perf patches.

--
Regards, Kirill Yukhin

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [Tarantool-patches] [PATCH] small: unite the oscillation cache of all mempools
@ 2020-01-29  1:48 Maksim Kulis
  2020-01-29 10:28 ` Kirill Yukhin
  0 siblings, 1 reply; 13+ messages in thread
From: Maksim Kulis @ 2020-01-29  1:48 UTC (permalink / raw)
  To: tarantool-patches

The task is to unite the oscillation cache of all mempools.
In the function slab_put_with_order() we check if there is any 
slab of the largest size except the current one, in order to 
avoid oscillation. 
https://github.com/tarantool/small/blob/master/small/slab_cache.c#L349

---
 small/mempool.c | 22 ++++------------------
 small/mempool.h |  6 ------
 2 files changed, 4 insertions(+), 24 deletions(-)

diff --git a/small/mempool.c b/small/mempool.c
index b20a416..54bd58d 100644
--- a/small/mempool.c
+++ b/small/mempool.c
@@ -126,18 +126,9 @@ mslab_free(struct mempool *pool, struct mslab *slab, void *ptr)
 		}
 		mslab_tree_remove(&pool->hot_slabs, slab);
 		slab->in_hot_slabs = false;
-		if (pool->spare > slab) {
-			slab_list_del(&pool->slabs, &pool->spare->slab,
-				      next_in_list);
-			slab_put_with_order(pool->cache, &pool->spare->slab);
-			pool->spare = slab;
-		 } else if (pool->spare) {
-			 slab_list_del(&pool->slabs, &slab->slab,
-				       next_in_list);
-			 slab_put_with_order(pool->cache, &slab->slab);
-		 } else {
-			 pool->spare = slab;
-		 }
+		slab_list_del(&pool->slabs, &slab->slab,
+			      next_in_list);
+		slab_put_with_order(pool->cache, &slab->slab);
 	}
 }
 
@@ -153,7 +144,6 @@ mempool_create_with_order(struct mempool *pool, struct slab_cache *cache,
 	mslab_tree_new(&pool->hot_slabs);
 	pool->first_hot_slab = NULL;
 	rlist_create(&pool->cold_slabs);
-	pool->spare = NULL;
 	pool->objsize = objsize;
 	pool->slab_order = order;
 	/* Total size of slab */
@@ -179,11 +169,7 @@ mempool_alloc(struct mempool *pool)
 {
 	struct mslab *slab = pool->first_hot_slab;
 	if (slab == NULL) {
-		if (pool->spare) {
-			slab = pool->spare;
-			pool->spare = NULL;
-
-		} else if ((slab = (struct mslab *)
+		if ((slab = (struct mslab *)
 			    slab_get_with_order(pool->cache,
 						pool->slab_order))) {
 			mslab_create(slab, pool);
diff --git a/small/mempool.h b/small/mempool.h
index 15d85a4..9d0e162 100644
--- a/small/mempool.h
+++ b/small/mempool.h
@@ -149,12 +149,6 @@ struct mempool
 	 * tree is empty or the allocator runs out of memory.
 	 */
 	struct rlist cold_slabs;
-	/**
-	 * A completely empty slab which is not freed only to
-	 * avoid the overhead of slab_cache oscillation around
-	 * a single element allocation.
-	 */
-	struct mslab *spare;
 	/**
 	 * The size of an individual object. All objects
 	 * allocated on the pool have the same size.
-- 
2.17.1

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2020-04-10 12:44 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-04-10  8:36 [Tarantool-patches] [PATCH] small: unite the oscillation cache of all mempools Aleksandr Lyapunov
     [not found] ` <CAPZPwLrNbdCzX7LR+_P8xrZ+unDpw-k2EP1Jj5mAsGOTWz2frA@mail.gmail.com>
2020-04-10 12:44   ` Aleksandr Lyapunov
  -- strict thread matches above, loose matches on Subject: below --
2020-01-29  1:48 Maksim Kulis
2020-01-29 10:28 ` Kirill Yukhin
2020-01-29 21:46   ` Konstantin Osipov
2020-01-30  8:02     ` Kirill Yukhin
2020-01-30  8:34       ` Konstantin Osipov
2020-01-30 11:18         ` Alexander Turenko
2020-01-30 12:23           ` Konstantin Osipov
2020-01-30 13:05             ` Alexander Turenko
2020-01-30 14:47               ` Konstantin Osipov
2020-01-30 12:20         ` Kirill Yukhin
2020-01-30 12:36           ` Konstantin Osipov
