From mboxrd@z Thu Jan  1 00:00:00 1970
From: Alexander Turenko
Date: Wed, 29 Jan 2020 11:06:45 +0300
Message-Id: <4e734e626aba336b27ec85790747c657d29c0338.1580284383.git.alexander.turenko@tarantool.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [Tarantool-patches] [small] Revert "Free all slabs on region reset"
List-Id: Tarantool development patches
To: Kirill Yukhin
Cc: tarantool-patches@dev.tarantool.org

This reverts commit 67d7ab44ab09df3356929e3692a03321b31f3ebb.

The goal of the reverted commit was to fix flaky failures of tarantool
tests that check the amount of memory used by a fiber:

 | fiber.info()[fiber.self().id()].memory.used

It also attempted to overcome the situation when a fiber holds an
amount of memory that is not used in any way. The upper limit of such
memory is controlled by a threshold in tarantool's fiber_gc() function
(128 KiB at the moment):

 | void
 | fiber_gc(void)
 | {
 |         if (region_used(&fiber()->gc) < 128 * 1024) {
 |                 region_reset(&fiber()->gc);
 |                 return;
 |         }
 |
 |         region_free(&fiber()->gc);
 | }

The reverted commit, however, leads to a significant performance
degradation on certain workloads (see #4736). So the reversion fixes
the performance degradation and reopens the problem with the tests,
which is tracked in #4750.

Related to #12
Related to https://github.com/tarantool/tarantool/issues/4750
Fixes https://github.com/tarantool/tarantool/issues/4736
---

https://github.com/tarantool/small/tree/Totktonada/gh-4736-revert-region-reset

 small/region.h | 18 ++++--------------
 1 file changed, 4 insertions(+), 14 deletions(-)

diff --git a/small/region.h b/small/region.h
index d9be176..bea88c6 100644
--- a/small/region.h
+++ b/small/region.h
@@ -156,16 +156,6 @@ region_reserve(struct region *region, size_t size)
                                         slab.next_in_list);
                if (size <= rslab_unused(slab))
                        return (char *) rslab_data(slab) + slab->used;
-               /* Try to get a slab from the region cache. */
-               slab = rlist_last_entry(&region->slabs.slabs,
-                                       struct rslab,
-                                       slab.next_in_list);
-               if (slab->used == 0 && size <= rslab_unused(slab)) {
-                       /* Move this slab to the head. */
-                       slab_list_del(&region->slabs, &slab->slab, next_in_list);
-                       slab_list_add(&region->slabs, &slab->slab, next_in_list);
-                       return (char *) rslab_data(slab);
-               }
        }
        return region_reserve_slow(region, size);
 }
@@ -222,14 +212,14 @@ region_aligned_alloc(struct region *region, size_t size, size_t alignment)
 
 /**
  * Mark region as empty, but keep the blocks.
- * Do not change the first slab and use previous slabs as a cache to
- * use for future allocations.
  */
 static inline void
 region_reset(struct region *region)
 {
-       struct rslab *slab;
-       rlist_foreach_entry(slab, &region->slabs.slabs, slab.next_in_list) {
+       if (! rlist_empty(&region->slabs.slabs)) {
+               struct rslab *slab = rlist_first_entry(&region->slabs.slabs,
+                                                      struct rslab,
+                                                      slab.next_in_list);
                region->slabs.stats.used -= slab->used;
                slab->used = 0;
        }
-- 
2.22.0
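
P.S. For anyone who wants to poke the affected region_reset() hot path
outside of tarantool, below is a minimal standalone sketch on top of the
public small API. It is not part of the patch; the quota, the slab size
and the iteration counts are arbitrary, and the inner loop merely mimics
the fiber_gc() policy quoted in the commit message.

#include <sys/mman.h>

#include <small/quota.h>
#include <small/slab_arena.h>
#include <small/slab_cache.h>
#include <small/region.h>

int
main(void)
{
        struct quota quota;
        struct slab_arena arena;
        struct slab_cache cache;
        struct region region;

        /* The usual small bootstrap: quota -> arena -> cache -> region. */
        quota_init(&quota, 16 * 1024 * 1024);
        if (slab_arena_create(&arena, &quota, 0, 65536, MAP_PRIVATE) != 0)
                return 1;
        slab_cache_create(&cache, &arena);
        region_create(&region, &cache);

        for (int i = 0; i < 1000000; i++) {
                /* Fill the region with 16 * 4 KiB allocations. */
                for (int j = 0; j < 16; j++) {
                        if (region_alloc(&region, 4096) == NULL)
                                return 1;
                }
                /* The fiber_gc() policy from the commit message. */
                if (region_used(&region) < 128 * 1024)
                        region_reset(&region);
                else
                        region_free(&region);
        }

        region_destroy(&region);
        slab_cache_destroy(&cache);
        slab_arena_destroy(&arena);
        return 0;
}

Building it against libsmall (e.g. `cc sketch.c -lsmall`) and timing the
loop with the library before and after the revert is one way to compare
the region_reset() costs; the exact numbers, of course, depend on the
workload and the machine.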