From: Aleksandr Lyapunov
Date: Fri, 15 May 2020 16:03:48 +0300
Subject: Re: [Tarantool-patches] [PATCH small 2/2] lsregion: provide aligned version of alloc
To: Vladislav Shpilevoy, tarantool-patches@dev.tarantool.org, korablev@tarantool.org, tsafin@tarantool.org, gorcunov@gmail.com
On 5/15/20 2:31 AM, Vladislav Shpilevoy wrote:
+static inline void *
+lsregion_aligned_alloc(struct lsregion *lsregion, size_t size, size_t alignment,
+		       int64_t id)
+{
+	void *unaligned = lsregion_reserve(lsregion, size + alignment - 1);
I do not like this change. It will work, but it is not optimal:
we do not need to reserve more than size when lslab_pos(slab) is already aligned.
As written, it can allocate a new slab even when the current one has enough aligned space.

If we really need an aligned alloc, I'm sure we should implement it more carefully.
I think it should start with something like:
 static inline size_t
-lslab_unused(const struct lslab *slab)
+lslab_unused(const struct lslab *slab, size_t align)
 {
-       assert(slab->slab_size >= slab->slab_used);
-       return slab->slab_size - slab->slab_used;
+       assert(slab->slab_size >= small_align(slab->slab_used, align));
+       return slab->slab_size - small_align(slab->slab_used, align);
 }

Note also that we should then remove the small_align() call from the lslab_sizeof() method.