From: Vladislav Shpilevoy
Date: Sat, 16 May 2020 01:24:05 +0200
Message-ID: <099effc3-4bfb-dba7-e479-29b4187ce7b9@tarantool.org>
References: <90f1c60ad69d7e577467ad0d1664bbfee3816df7.1589498880.git.v.shpilevoy@tarantool.org>
Subject: Re: [Tarantool-patches] [PATCH small 2/2] lsregion: provide aligned version of alloc
List-Id: Tarantool development patches
To: Aleksandr Lyapunov, tarantool-patches@dev.tarantool.org, korablev@tarantool.org, tsafin@tarantool.org, gorcunov@gmail.com

Thanks for the review!

On 15/05/2020 15:03, Aleksandr Lyapunov wrote:
> On 5/15/20 2:31 AM, Vladislav Shpilevoy wrote:
>> +static inline void *
>> +lsregion_aligned_alloc(struct lsregion *lsregion, size_t size, size_t alignment,
>> +                       int64_t id)
>> +{
>> +	void *unaligned = lsregion_reserve(lsregion, size + alignment - 1);
> I do not like this change. It will work, but it's not optimal.

It is optimal in terms of code simplicity and speed. The worst that can happen is that the slab effectively becomes about 7 bytes smaller. But taking into account the minimal slab size of 65 KB, this tail does not matter. However, yes, strictly speaking it is not optimal in terms of space.

At first I wanted to keep the current implementation, but then out of curiosity I tried your way, and I think it looks fine. Not too complex. But the patchset has changed significantly.
Now there is only one commit (since the aligned allocs rewrote the entire diff of the first patch, about the reserve() function), and many more changes.

> We don't need to reserve more than size if lslab_pos(slab) is already aligned.
> It could lead to a new slab allocation in cases where we actually have enough aligned space in the current one.
>
> If we really need aligned alloc, I'm sure we should implement it more carefully.
> I think it should start with something like:
>
>  static inline size_t
> -lslab_unused(const struct lslab *slab)
> +lslab_unused(const struct lslab *slab, size_t align)
>  {
> -       assert(slab->slab_size >= slab->slab_used);
> -       return slab->slab_size - slab->slab_used;
> +       assert(slab->slab_size >= small_align(slab->slab_used, align));
> +       return slab->slab_size - small_align(slab->slab_used, align);
>  }
>
> Note also we should remove the small_align call from the lslab_sizeof method.

Eww. This looks really bad. lslab_unused() becomes too complex. Moreover, it can overflow: small_align() can return something bigger than slab->slab_size, so the unsigned subtraction wraps around. Also, in the general case this does not return the aligned unused space, because the slab itself may not be aligned well enough for the requested alignment. I wrote a test for that.

But I got the idea. I sent v2 in a new thread.