From: Aleksandr Lyapunov
Date: Fri, 10 Apr 2020 11:36:34 +0300
To: bokuno@picodata.io, tarantool-patches@dev.tarantool.org
Subject: Re: [Tarantool-patches] [PATCH] small: unite the oscillation cache of all mempools

Thanks for the patch!

As I see it, mempool::spare was introduced to prevent a dangerous
oscillation that is different from the largest-slab oscillation
handled inside slab_cache.
Allocating a slab from slab_cache for a mempool can trigger the
slab splitting routine, which may be slower than one would expect.
In the worst case the largest slab is split in two parts, one of
the parts is split again, and so on, up to 16 iterations.
Releasing such a slab triggers the reverse procedure: all those
slabs are merged back into the largest one. So if we don't keep a
spare slab in the mempool, a simple allocation and deallocation of
a single item can be painfully slow. Imagine a loop doing exactly
that workload over and over (sketched below).
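
To make the degenerate case concrete, here is a minimal sketch of
such a loop (the function name and iteration count are made up for
illustration; mempool_alloc() and mempool_free() are the public
mempool API from small/mempool.h):

#include "small/mempool.h"

/*
 * Hypothetical benchmark loop: the pool holds no live objects
 * between iterations, so without mempool::spare every mempool_free()
 * may return the pool's only slab to slab_cache (merging buddies up
 * to the largest order), and every mempool_alloc() takes a slab back
 * (splitting the largest slab all over again).
 */
static void
alloc_free_loop(struct mempool *pool)
{
	for (int i = 0; i < 1000000; i++) {
		void *item = mempool_alloc(pool);
		if (item == NULL)
			break;
		mempool_free(pool, item);
	}
}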

So I'm sure we need the mempool::spare optimization to make mempool
allocation performance more predictable.

Sorry, but thanks anyway for the patch.

The task is to unite the oscillation cache of all mempools.
In the function slab_put_with_order() we check if there is any 
slab of the largest size except the current one, in order to 
avoid oscillation. 
https://github.com/tarantool/small/blob/master/small/slab_cache.c#L349

---
 small/mempool.c | 22 ++++------------------
 small/mempool.h |  6 ------
 2 files changed, 4 insertions(+), 24 deletions(-)

diff --git a/small/mempool.c b/small/mempool.c
index b20a416..54bd58d 100644
--- a/small/mempool.c
+++ b/small/mempool.c
@@ -126,18 +126,9 @@ mslab_free(struct mempool *pool, struct mslab *slab, void *ptr)
 		}
 		mslab_tree_remove(&pool->hot_slabs, slab);
 		slab->in_hot_slabs = false;
-		if (pool->spare > slab) {
-			slab_list_del(&pool->slabs, &pool->spare->slab,
-				      next_in_list);
-			slab_put_with_order(pool->cache, &pool->spare->slab);
-			pool->spare = slab;
-		 } else if (pool->spare) {
-			 slab_list_del(&pool->slabs, &slab->slab,
-				       next_in_list);
-			 slab_put_with_order(pool->cache, &slab->slab);
-		 } else {
-			 pool->spare = slab;
-		 }
+		slab_list_del(&pool->slabs, &slab->slab,
+			      next_in_list);
+		slab_put_with_order(pool->cache, &slab->slab);
 	}
 }
 
@@ -153,7 +144,6 @@ mempool_create_with_order(struct mempool *pool, struct slab_cache *cache,
 	mslab_tree_new(&pool->hot_slabs);
 	pool->first_hot_slab = NULL;
 	rlist_create(&pool->cold_slabs);
-	pool->spare = NULL;
 	pool->objsize = objsize;
 	pool->slab_order = order;
 	/* Total size of slab */
@@ -179,11 +169,7 @@ mempool_alloc(struct mempool *pool)
 {
 	struct mslab *slab = pool->first_hot_slab;
 	if (slab == NULL) {
-		if (pool->spare) {
-			slab = pool->spare;
-			pool->spare = NULL;
-
-		} else if ((slab = (struct mslab *)
+		if ((slab = (struct mslab *)
 			    slab_get_with_order(pool->cache,
 						pool->slab_order))) {
 			mslab_create(slab, pool);
diff --git a/small/mempool.h b/small/mempool.h
index 15d85a4..9d0e162 100644
--- a/small/mempool.h
+++ b/small/mempool.h
@@ -149,12 +149,6 @@ struct mempool
 	 * tree is empty or the allocator runs out of memory.
 	 */
 	struct rlist cold_slabs;
-	/**
-	 * A completely empty slab which is not freed only to
-	 * avoid the overhead of slab_cache oscillation around
-	 * a single element allocation.
-	 */
-	struct mslab *spare;
 	/**
 	 * The size of an individual object. All objects
 	 * allocated on the pool have the same size.
-- 
2.17.1
