From mboxrd@z Thu Jan  1 00:00:00 1970
From: Vladislav Shpilevoy
Date: Fri, 15 May 2020 01:31:04 +0200
Message-Id: <90f1c60ad69d7e577467ad0d1664bbfee3816df7.1589498880.git.v.shpilevoy@tarantool.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [Tarantool-patches] [PATCH small 2/2] lsregion: provide aligned version of alloc
List-Id: Tarantool development patches
To: tarantool-patches@dev.tarantool.org, korablev@tarantool.org, tsafin@tarantool.org, gorcunov@gmail.com

lsregion_alloc() can return an arbitrary address, even an odd one.
This is a problem when the allocation is used for a type which
requires alignment, such as a number, or a struct containing
numbers. Unaligned access to objects stored in the returned memory
is either slower than aligned access, or can even lead to a crash
on certain architectures. It also drives clang's 'alignment'
sanitizer crazy.

The patch provides a function to make aligned allocations on
lsregion.

Part of https://github.com/tarantool/tarantool/issues/4609
---
 small/lsregion.h | 37 ++++++++++++++++++++++++++++++++----
 test/lsregion.c  | 49 +++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 81 insertions(+), 5 deletions(-)

diff --git a/small/lsregion.h b/small/lsregion.h
index 52431e7..b4212b3 100644
--- a/small/lsregion.h
+++ b/small/lsregion.h
@@ -127,6 +127,18 @@ lslab_unused(const struct lslab *slab)
 	return slab->slab_size - slab->slab_used;
 }
 
+/**
+ * Update slab statistics and meta according to new allocation.
+ */
+static inline void
+lslab_use(struct lslab *slab, size_t size, int64_t id)
+{
+	assert(size <= lslab_unused(slab));
+	assert(slab->max_id <= id);
+	slab->slab_used += size;
+	slab->max_id = id;
+}
+
 /**
  * Pointer to the end of the used part of the slab.
  * @param slab Slab container.
@@ -199,10 +211,27 @@ lsregion_alloc(struct lsregion *lsregion, size_t size, int64_t id)
 		return NULL;
 	struct lslab *slab = rlist_last_entry(&lsregion->slabs.slabs,
 					      struct lslab, next_in_list);
-	assert(size <= lslab_unused(slab));
-	assert(slab->max_id <= id);
-	slab->slab_used += size;
-	slab->max_id = id;
+	lslab_use(slab, size, id);
+	lsregion->slabs.stats.used += size;
+	return res;
+}
+
+/**
+ * The same as normal alloc, but the resulting pointer is aligned
+ * by @a alignment.
+ */
+static inline void *
+lsregion_aligned_alloc(struct lsregion *lsregion, size_t size, size_t alignment,
+		       int64_t id)
+{
+	void *unaligned = lsregion_reserve(lsregion, size + alignment - 1);
+	if (unaligned == NULL)
+		return NULL;
+	void *res = (void *)small_align((uintptr_t)unaligned, alignment);
+	struct lslab *slab = rlist_last_entry(&lsregion->slabs.slabs,
+					      struct lslab, next_in_list);
+	size += res - unaligned;
+	lslab_use(slab, size, id);
 	lsregion->slabs.stats.used += size;
 	return res;
 }
diff --git a/test/lsregion.c b/test/lsregion.c
index 8a8ff56..0232f8a 100644
--- a/test/lsregion.c
+++ b/test/lsregion.c
@@ -396,16 +396,63 @@ test_reserve(void)
 	check_plan();
 }
 
+static void
+test_aligned(void)
+{
+	plan(11);
+	header();
+
+	struct quota quota;
+	struct slab_arena arena;
+	struct lsregion allocator;
+	quota_init(&quota, 16 * SLAB_MIN_SIZE);
+	is(slab_arena_create(&arena, &quota, 0, 0, MAP_PRIVATE), 0, "init");
+	lsregion_create(&allocator, &arena);
+	int id = 0;
+
+	++id;
+	void *p1 = lsregion_aligned_alloc(&allocator, 8, 8, id);
+	ok((unsigned long)p1 % 8 == 0, "trivial aligned");
+	is(lsregion_used(&allocator), 8, "'used'");
+
+	++id;
+	void *p2 = lsregion_aligned_alloc(&allocator, 1, 16, id);
+	ok((unsigned long)p2 % 16 == 0, "16 byte alignment for 1 byte");
+	void *p3 = lsregion_aligned_alloc(&allocator, 1, 16, id);
+	ok(p3 == (char *)p2 + 16, "second 16 aligned alloc of 1 byte is far "
+	   "from first");
+	ok((unsigned long)p3 % 16 == 0, "aligned by 16 too");
+
+	is(lsregion_used(&allocator), (size_t)(p3 + 1 - p1), "'used'");
+
+	++id;
+	void *p4 = lsregion_aligned_alloc(&allocator, 3, 4, id);
+	ok(p4 == (char *)p3 + 4, "align next by 4 bytes, should be closer to "
+	   "previous");
+	ok((unsigned long)p4 % 4 == 0, "aligned by 4");
+	is(lsregion_used(&allocator), (size_t)(p4 + 3 - p1), "'used'");
+
+	lsregion_gc(&allocator, id);
+	is(lsregion_used(&allocator), 0, "gc deleted all");
+
+	lsregion_destroy(&allocator);
+	slab_arena_destroy(&arena);
+
+	footer();
+	check_plan();
+}
+
 int
 main()
 {
-	plan(5);
+	plan(6);
 	test_basic();
 	test_many_allocs_one_slab();
 	test_many_allocs_many_slabs();
 	test_big_data_small_slabs();
 	test_reserve();
+	test_aligned();
 	return check_plan();
 }
-- 
2.21.1 (Apple Git-122.3)