Thanks for the patch!
As I see it, mempool::spare was implemented to prevent a dangerous
oscillation, one that is different from the oscillation of the
largest-size slab in slab_cache.
Allocating a slab from slab_cache for a mempool can trigger the
slab-splitting routine, which is not as fast as one might expect:
in the worst case the largest-size slab is split in two parts,
one of the parts is split again, and so on, up to 16 iterations.
Releasing such a slab triggers the reverse procedure, merging all
of those slabs back into the largest one. So if we don't keep a
spare slab in the mempool, a simple allocation and deallocation
of one single item can be painfully slow. Imagine a loop doing
exactly that workload.
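
To illustrate, here is a minimal sketch of such a loop against
small's public API (the include paths, arena parameters, object
size and iteration count are made up for the example; with this
patch applied, every iteration would round-trip the pool's only
slab through slab_cache):

#include <sys/mman.h>
#include "small/quota.h"
#include "small/slab_arena.h"
#include "small/slab_cache.h"
#include "small/mempool.h"

int
main(void)
{
	struct quota quota;
	struct slab_arena arena;
	struct slab_cache cache;
	struct mempool pool;

	quota_init(&quota, QUOTA_MAX);
	slab_arena_create(&arena, &quota, 0, 4 * 1024 * 1024, MAP_PRIVATE);
	slab_cache_create(&cache, &arena);
	/* 64-byte objects: an arbitrary size for the illustration. */
	mempool_create(&pool, &cache, 64);

	/*
	 * Without mempool::spare each free below returns the pool's
	 * only slab to slab_cache (merging it up towards the largest
	 * order) and the next alloc re-acquires it (splitting back
	 * down), i.e. up to ~16 merge/split steps per iteration
	 * instead of an O(1) reuse of the cached spare slab.
	 */
	for (int i = 0; i < 1000000; i++) {
		void *item = mempool_alloc(&pool);
		mempool_free(&pool, item);
	}

	mempool_destroy(&pool);
	slab_cache_destroy(&cache);
	slab_arena_destroy(&arena);
	return 0;
}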
So I'm sure we need the mempool::spare optimization to keep
mempool allocation performance predictable.
Sorry, and thanks again for the patch.
The task is to unify the oscillation cache of all mempools.
In slab_put_with_order() we already check whether there is any
slab of the largest size other than the current one, in order
to avoid oscillation:
https://github.com/tarantool/small/blob/master/small/slab_cache.c#L349
---
small/mempool.c | 22 ++++------------------
small/mempool.h | 6 ------
2 files changed, 4 insertions(+), 24 deletions(-)
diff --git a/small/mempool.c b/small/mempool.c
index b20a416..54bd58d 100644
--- a/small/mempool.c
+++ b/small/mempool.c
@@ -126,18 +126,9 @@ mslab_free(struct mempool *pool, struct mslab *slab, void *ptr)
}
mslab_tree_remove(&pool->hot_slabs, slab);
slab->in_hot_slabs = false;
- if (pool->spare > slab) {
- slab_list_del(&pool->slabs, &pool->spare->slab,
- next_in_list);
- slab_put_with_order(pool->cache, &pool->spare->slab);
- pool->spare = slab;
- } else if (pool->spare) {
- slab_list_del(&pool->slabs, &slab->slab,
- next_in_list);
- slab_put_with_order(pool->cache, &slab->slab);
- } else {
- pool->spare = slab;
- }
+ slab_list_del(&pool->slabs, &slab->slab,
+ next_in_list);
+ slab_put_with_order(pool->cache, &slab->slab);
}
}
@@ -153,7 +144,6 @@ mempool_create_with_order(struct mempool *pool, struct slab_cache *cache,
mslab_tree_new(&pool->hot_slabs);
pool->first_hot_slab = NULL;
rlist_create(&pool->cold_slabs);
- pool->spare = NULL;
pool->objsize = objsize;
pool->slab_order = order;
/* Total size of slab */
@@ -179,11 +169,7 @@ mempool_alloc(struct mempool *pool)
{
struct mslab *slab = pool->first_hot_slab;
if (slab == NULL) {
- if (pool->spare) {
- slab = pool->spare;
- pool->spare = NULL;
-
- } else if ((slab = (struct mslab *)
+ if ((slab = (struct mslab *)
slab_get_with_order(pool->cache,
pool->slab_order))) {
mslab_create(slab, pool);
diff --git a/small/mempool.h b/small/mempool.h
index 15d85a4..9d0e162 100644
--- a/small/mempool.h
+++ b/small/mempool.h
@@ -149,12 +149,6 @@ struct mempool
* tree is empty or the allocator runs out of memory.
*/
struct rlist cold_slabs;
- /**
- * A completely empty slab which is not freed only to
- * avoid the overhead of slab_cache oscillation around
- * a single element allocation.
- */
- struct mslab *spare;
/**
* The size of an individual object. All objects
* allocated on the pool have the same size.
--
2.17.1