Tarantool development patches archive
* [Tarantool-patches] [PATCH small 0/2] Aligned lsregion
From: Vladislav Shpilevoy @ 2020-05-14 23:31 UTC (permalink / raw)
  To: tarantool-patches, korablev, tsafin, gorcunov

The patchset introduces a new function: lsregion_aligned_alloc().
This work is part of the bigger effort to eliminate the many
'undefined behaviour' cases in the Tarantool sources, the biggest
of which is unaligned memory access.

Currently lsregion is used in vinyl for statements and BPS tree
blocks. None of them are aligned: the addresses are completely
arbitrary, sometimes even odd, ending with 1, 9, etc.

Besides being UB, unaligned access is also slower, sometimes much slower.
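
For illustration (not part of the patchset), this is the kind of
access that is UB on an arbitrary address, together with the
portable workaround that aligned allocations make unnecessary:

#include <stdint.h>
#include <string.h>

uint64_t
read_u64(const char *buf)
{
        /* UB when buf is not 8-byte aligned:
         * return *(const uint64_t *)buf; */
        uint64_t v;
        memcpy(&v, buf, sizeof(v)); /* fine for any alignment */
        return v;
}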

Branch: http://github.com/tarantool/small/tree/gerold103/aligned-lsregion
Issue: https://github.com/tarantool/tarantool/issues/4609

Vladislav Shpilevoy (2):
  lsregion: introduce lsregion_reserve()
  lsregion: provide aligned version of alloc

 small/lsregion.c | 12 ++-----
 small/lsregion.h | 78 +++++++++++++++++++++++++++++++++---------
 test/lsregion.c  | 88 +++++++++++++++++++++++++++++++++++++++++++++++-
 3 files changed, 152 insertions(+), 26 deletions(-)

-- 
2.21.1 (Apple Git-122.3)


* [Tarantool-patches] [PATCH small 1/2] lsregion: introduce lsregion_reserve()
From: Vladislav Shpilevoy @ 2020-05-14 23:31 UTC (permalink / raw)
  To: tarantool-patches, korablev, tsafin, gorcunov

So far lsregion provided only lsregion_alloc(), even though all
the other allocators provide both alloc() and reserve().

Reservation is useful when you need to reserve more and allocate
less. That way the unused tail has a chance of becoming part of
the next allocation. This will be the case for the upcoming
lsregion_aligned_alloc(), which will reserve more than requested
and may use only part of it, depending on the alignment of the
resulting address.
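
A usage sketch of the new pair (fill_data() here is a made-up
placeholder, and the numbers are arbitrary):

char *buf = lsregion_reserve(&lsregion, 64);
if (buf == NULL)
        return NULL;
/* Hypothetical writer, returns how many bytes it used, <= 64. */
size_t used = fill_data(buf, 64);
/* Guaranteed not to fail and to return the same buf. */
buf = lsregion_alloc(&lsregion, used, id);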

Needed for https://github.com/tarantool/tarantool/issues/4609
---
 small/lsregion.c | 12 ++----------
 small/lsregion.h | 49 +++++++++++++++++++++++++++++++++---------------
 test/lsregion.c  | 41 +++++++++++++++++++++++++++++++++++++++-
 3 files changed, 76 insertions(+), 26 deletions(-)

diff --git a/small/lsregion.c b/small/lsregion.c
index 6bcc043..ff4a7aa 100644
--- a/small/lsregion.c
+++ b/small/lsregion.c
@@ -32,7 +32,7 @@
 #include "lsregion.h"
 
 void *
-lsregion_alloc_slow(struct lsregion *lsregion, size_t size, int64_t id)
+lsregion_reserve_slow(struct lsregion *lsregion, size_t size)
 {
 	struct lslab *slab = NULL;
 	size_t slab_size = lsregion->arena->slab_size;
@@ -77,14 +77,6 @@ lsregion_alloc_slow(struct lsregion *lsregion, size_t size, int64_t id)
 		}
 	}
 	assert(slab != NULL);
-	assert(slab->max_id <= id);
 	assert(size <= lslab_unused(slab));
-	void *res = lslab_pos(slab);
-	slab->slab_used += size;
-
-	/* Update the memory block meta info. */
-	assert(slab->max_id <= id);
-	slab->max_id = id;
-	lsregion->slabs.stats.used += size;
-	return res;
+	return lslab_pos(slab);
 }
diff --git a/small/lsregion.h b/small/lsregion.h
index 73d68dc..52431e7 100644
--- a/small/lsregion.h
+++ b/small/lsregion.h
@@ -153,22 +153,21 @@ lsregion_create(struct lsregion *lsregion, struct slab_arena *arena)
 	lsregion->cached = NULL;
 }
 
-/** @sa lsregion_alloc().  */
+/** @sa lsregion_reserve().  */
 void *
-lsregion_alloc_slow(struct lsregion *lsregion, size_t size, int64_t id);
+lsregion_reserve_slow(struct lsregion *lsregion, size_t size);
 
 /**
- * Allocate \p size bytes and associate the allocated block
- * with \p id.
+ * Make sure the next allocation of up to @a size bytes will not
+ * fail, and will return the same address as this call.
  * @param lsregion Allocator object.
  * @param size     Size to allocate.
- * @param id       Memory chunk identifier.
  *
  * @retval not NULL Success.
  * @retval NULL     Memory error.
  */
 static inline void *
-lsregion_alloc(struct lsregion *lsregion, size_t size, int64_t id)
+lsregion_reserve(struct lsregion *lsregion, size_t size)
 {
 	/* If there is an existing slab then try to use it. */
 	if (! rlist_empty(&lsregion->slabs.slabs)) {
@@ -176,16 +175,36 @@ lsregion_alloc(struct lsregion *lsregion, size_t size, int64_t id)
 		slab = rlist_last_entry(&lsregion->slabs.slabs, struct lslab,
 					next_in_list);
 		assert(slab != NULL);
-		assert(slab->max_id <= id);
-		if (size <= lslab_unused(slab)) {
-			void *res = lslab_pos(slab);
-			slab->slab_used += size;
-			slab->max_id = id;
-			lsregion->slabs.stats.used += size;
-			return res;
-		}
+		if (size <= lslab_unused(slab))
+			return lslab_pos(slab);
 	}
-	return lsregion_alloc_slow(lsregion, size, id);
+	return lsregion_reserve_slow(lsregion, size);
+}
+
+/**
+ * Allocate @a size bytes and associate the allocated block
+ * with @a id.
+ * @param lsregion Allocator object.
+ * @param size     Size to allocate.
+ * @param id       Memory chunk identifier.
+ *
+ * @retval not NULL Success.
+ * @retval NULL     Memory error.
+ */
+static inline void *
+lsregion_alloc(struct lsregion *lsregion, size_t size, int64_t id)
+{
+	void *res = lsregion_reserve(lsregion, size);
+	if (res == NULL)
+		return NULL;
+	struct lslab *slab = rlist_last_entry(&lsregion->slabs.slabs,
+					      struct lslab, next_in_list);
+	assert(size <= lslab_unused(slab));
+	assert(slab->max_id <= id);
+	slab->slab_used += size;
+	slab->max_id = id;
+	lsregion->slabs.stats.used += size;
+	return res;
 }
 
 /**
diff --git a/test/lsregion.c b/test/lsregion.c
index 90ad060..8a8ff56 100644
--- a/test/lsregion.c
+++ b/test/lsregion.c
@@ -358,15 +358,54 @@ test_big_data_small_slabs()
 	check_plan();
 }
 
+static void
+test_reserve(void)
+{
+	plan(10);
+	header();
+
+	struct quota quota;
+	struct slab_arena arena;
+	struct lsregion allocator;
+	quota_init(&quota, 16 * SLAB_MIN_SIZE);
+	is(slab_arena_create(&arena, &quota, 0, 0, MAP_PRIVATE), 0, "init");
+	lsregion_create(&allocator, &arena);
+
+	void *p1 = lsregion_reserve(&allocator, 100);
+	is(lsregion_used(&allocator), 0, "reserve does not occupy memory");
+	is(lsregion_total(&allocator), arena.slab_size, "reserve creates slabs");
+	void *p2 = lsregion_alloc(&allocator, 80, 1);
+	is(p1, p2, "alloc returns the same as reserve, even if size is less");
+	is(lsregion_used(&allocator), 80, "alloc updated 'used'");
+
+	p1 = lsregion_reserve(&allocator, arena.slab_size - lslab_sizeof());
+	is(lsregion_used(&allocator), 80, "next reserve didn't touch 'used'");
+	is(lsregion_total(&allocator), arena.slab_size * 2, "but changed "
+	   "'total' because second slab is allocated");
+	is(lsregion_slab_count(&allocator), 2, "slab count is 2 now");
+	lsregion_gc(&allocator, 1);
+
+	is(lsregion_used(&allocator), 0, "gc works fine with empty reserved "
+	   "slabs");
+	is(lsregion_slab_count(&allocator), 0, "all slabs are removed");
+
+	lsregion_destroy(&allocator);
+	slab_arena_destroy(&arena);
+
+	footer();
+	check_plan();
+}
+
 int
 main()
 {
-	plan(4);
+	plan(5);
 
 	test_basic();
 	test_many_allocs_one_slab();
 	test_many_allocs_many_slabs();
 	test_big_data_small_slabs();
+	test_reserve();
 
 	return check_plan();
 }
-- 
2.21.1 (Apple Git-122.3)


* [Tarantool-patches] [PATCH small 2/2] lsregion: provide aligned version of alloc
From: Vladislav Shpilevoy @ 2020-05-14 23:31 UTC (permalink / raw)
  To: tarantool-patches, korablev, tsafin, gorcunov

lsregion_alloc() can return any address, even an odd one. This is
not good when the allocation is going to be used for a type which
requires alignment, such as a number, or a struct having numbers
in it.

Unaligned access to objects stored in the returned memory is
either slower than aligned access, or can even lead to a crash on
certain architectures. It also drives clang's 'alignment'
sanitizer crazy.

The patch provides a function to make aligned allocations on
lsregion.
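
A minimal usage sketch (the struct and id here are arbitrary
examples, not taken from the patch):

#include <stdalign.h>
#include <stdint.h>

struct item {
        uint64_t lsn;
        uint32_t bsize;
};

struct item *it = lsregion_aligned_alloc(&lsregion, sizeof(*it),
                                         alignof(struct item), id);
if (it == NULL)
        return NULL;
/* Safe: the returned address is aligned for struct item. */
it->lsn = 0;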

Part of https://github.com/tarantool/tarantool/issues/4609
---
 small/lsregion.h | 37 ++++++++++++++++++++++++++++++++----
 test/lsregion.c  | 49 +++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 81 insertions(+), 5 deletions(-)

diff --git a/small/lsregion.h b/small/lsregion.h
index 52431e7..b4212b3 100644
--- a/small/lsregion.h
+++ b/small/lsregion.h
@@ -127,6 +127,18 @@ lslab_unused(const struct lslab *slab)
 	return slab->slab_size - slab->slab_used;
 }
 
+/**
+ * Update slab statistics and meta according to new allocation.
+ */
+static inline void
+lslab_use(struct lslab *slab, size_t size, int64_t id)
+{
+	assert(size <= lslab_unused(slab));
+	assert(slab->max_id <= id);
+	slab->slab_used += size;
+	slab->max_id = id;
+}
+
 /**
  * Pointer to the end of the used part of the slab.
  * @param slab Slab container.
@@ -199,10 +211,27 @@ lsregion_alloc(struct lsregion *lsregion, size_t size, int64_t id)
 		return NULL;
 	struct lslab *slab = rlist_last_entry(&lsregion->slabs.slabs,
 					      struct lslab, next_in_list);
-	assert(size <= lslab_unused(slab));
-	assert(slab->max_id <= id);
-	slab->slab_used += size;
-	slab->max_id = id;
+	lslab_use(slab, size, id);
+	lsregion->slabs.stats.used += size;
+	return res;
+}
+
+/**
+ * The same as normal alloc, but the resulting pointer is aligned
+ * by @a alignment.
+ */
+static inline void *
+lsregion_aligned_alloc(struct lsregion *lsregion, size_t size, size_t alignment,
+		       int64_t id)
+{
+	void *unaligned = lsregion_reserve(lsregion, size + alignment - 1);
+	if (unaligned == NULL)
+		return NULL;
+	void *res = (void *)small_align((uintptr_t)unaligned, alignment);
+	struct lslab *slab = rlist_last_entry(&lsregion->slabs.slabs,
+					      struct lslab, next_in_list);
+	size += res - unaligned;
+	lslab_use(slab, size, id);
 	lsregion->slabs.stats.used += size;
 	return res;
 }
diff --git a/test/lsregion.c b/test/lsregion.c
index 8a8ff56..0232f8a 100644
--- a/test/lsregion.c
+++ b/test/lsregion.c
@@ -396,16 +396,63 @@ test_reserve(void)
 	check_plan();
 }
 
+static void
+test_aligned(void)
+{
+	plan(11);
+	header();
+
+	struct quota quota;
+	struct slab_arena arena;
+	struct lsregion allocator;
+	quota_init(&quota, 16 * SLAB_MIN_SIZE);
+	is(slab_arena_create(&arena, &quota, 0, 0, MAP_PRIVATE), 0, "init");
+	lsregion_create(&allocator, &arena);
+	int id = 0;
+
+	++id;
+	void *p1 = lsregion_aligned_alloc(&allocator, 8, 8, id);
+	ok((unsigned long)p1 % 8 == 0, "trivial aligned");
+	is(lsregion_used(&allocator), 8, "'used'");
+
+	++id;
+	void *p2 = lsregion_aligned_alloc(&allocator, 1, 16, id);
+	ok((unsigned long)p2 % 16 == 0, "16 byte alignment for 1 byte");
+	void *p3 = lsregion_aligned_alloc(&allocator, 1, 16, id);
+	ok(p3 == (char *)p2 + 16, "second 16 aligned alloc of 1 byte is far "
+	   "from first");
+	ok((unsigned long)p3 % 16 == 0, "aligned by 16 too");
+
+	is(lsregion_used(&allocator), (size_t)(p3 + 1 - p1), "'used'");
+
+	++id;
+	void *p4 = lsregion_aligned_alloc(&allocator, 3, 4, id);
+	ok(p4 == (char *)p3 + 4, "align next by 4 bytes, should be closer to "
+	   "previous");
+	ok((unsigned long)p4 % 4 == 0, "aligned by 4");
+	is(lsregion_used(&allocator), (size_t)(p4 + 3 - p1), "'used'");
+
+	lsregion_gc(&allocator, id);
+	is(lsregion_used(&allocator), 0, "gc deleted all");
+
+	lsregion_destroy(&allocator);
+	slab_arena_destroy(&arena);
+
+	footer();
+	check_plan();
+}
+
 int
 main()
 {
-	plan(5);
+	plan(6);
 
 	test_basic();
 	test_many_allocs_one_slab();
 	test_many_allocs_many_slabs();
 	test_big_data_small_slabs();
 	test_reserve();
+	test_aligned();
 
 	return check_plan();
 }
-- 
2.21.1 (Apple Git-122.3)


* Re: [Tarantool-patches] [PATCH small 0/2] Aligned lsregion
From: Aleksandr Lyapunov @ 2020-05-15 12:26 UTC (permalink / raw)
  To: Vladislav Shpilevoy, tarantool-patches, korablev, tsafin, gorcunov

Thanks for the patch! See comments below.

On 5/15/20 2:31 AM, Vladislav Shpilevoy wrote:
> The patchset introduces a new function - lsregion_aligned_alloc().

Actually you don't need this patch to fix #4609.
If you want to align vy_mem tuples by 4 bytes (due to the uint32_t bsize),
all you need is:

diff --git a/src/box/vy_stmt.c b/src/box/vy_stmt.c
index adc3ba4..2de4a28 100644
--- a/src/box/vy_stmt.c
+++ b/src/box/vy_stmt.c
@@ -228,7 +228,7 @@ vy_stmt_dup_lsregion(struct tuple *stmt, struct lsregion *lsregion,
         if (type == IPROTO_UPSERT)
                 alloc_size++;

-       mem_stmt = lsregion_alloc(lsregion, alloc_size, alloc_id);
+       mem_stmt = lsregion_alloc(lsregion, small_align(alloc_size, 4), alloc_id);
         if (mem_stmt == NULL) {
                 diag_set(OutOfMemory, size, "lsregion_alloc", "mem_stmt");
                 return NULL;

lsregion will allocate aligned addresses as long as you request only
aligned sizes, up to sizeof(uintptr_t) (see lslab_sizeof()).
This property is not documented (maybe it should be!). It is also similar
to the mempool property; see the mempool_create description.
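
To make the property concrete, a sketch (assuming a fresh lsregion
whose slab data start is 8-byte aligned, which lslab_sizeof()
currently guarantees):

/* Each allocation advances the position by a multiple of 8. */
void *a = lsregion_alloc(&lsregion, small_align(13, 8), id); /* 16 bytes */
void *b = lsregion_alloc(&lsregion, small_align(21, 8), id); /* 24 bytes */
assert((uintptr_t)a % 8 == 0 && (uintptr_t)b % 8 == 0);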

I like the one-line change more.

If you still want the _reserve and _aligned_alloc methods for further
purposes, please read my per-commit comments.



* Re: [Tarantool-patches] [PATCH small 1/2] lsregion: introduce lsregion_reserve()
From: Aleksandr Lyapunov @ 2020-05-15 12:35 UTC (permalink / raw)
  To: Vladislav Shpilevoy, tarantool-patches, korablev, tsafin, gorcunov

This particular commit LGTM, except for the idea that we don't actually need it.


* Re: [Tarantool-patches] [PATCH small 2/2] lsregion: provide aligned version of alloc
From: Aleksandr Lyapunov @ 2020-05-15 13:03 UTC (permalink / raw)
  To: Vladislav Shpilevoy, tarantool-patches, korablev, tsafin, gorcunov

On 5/15/20 2:31 AM, Vladislav Shpilevoy wrote:
> +static inline void *
> +lsregion_aligned_alloc(struct lsregion *lsregion, size_t size, size_t alignment,
> +		       int64_t id)
> +{
> +	void *unaligned = lsregion_reserve(lsregion, size + alignment - 1);
I do not like this change. It will work, but it's not optimal.
We don't need to reserve more than size if lslab_pos(slab) is already
aligned. It could lead to a new slab allocation in cases where we actually
have enough aligned space in the current slab.

If we really need aligned alloc, I'm sure we should implement it more
carefully. I think it should start with something like:
  static inline size_t
-lslab_unused(const struct lslab *slab)
+lslab_unused(const struct lslab *slab, size_t align)
  {
-       assert(slab->slab_size >= slab->slab_used);
-       return slab->slab_size - slab->slab_used;
+       assert(slab->slab_size >= small_align(slab->slab_used, align));
+       return slab->slab_size - small_align(slab->slab_used, align);
  }

Note also that we should remove the small_align call from the lslab_sizeof method.
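
Roughly, an aligned reserve on top of that could look like this
(an untested sketch: it skips the empty slab list case, assumes
lslab_pos() is (char *)slab + slab->slab_used, and assumes the
slab base itself is sufficiently aligned):

static inline void *
lsregion_aligned_reserve(struct lsregion *lsregion, size_t size,
                         size_t align)
{
        struct lslab *slab = rlist_last_entry(&lsregion->slabs.slabs,
                                              struct lslab, next_in_list);
        /*
         * Pad the used offset first, so nothing extra is
         * reserved when the position is already aligned. The
         * first check also guards against the padded offset
         * exceeding slab_size.
         */
        size_t used = small_align(slab->slab_used, align);
        if (used <= slab->slab_size && size <= slab->slab_size - used)
                return (char *)slab + used;
        /* Fall back to allocating a new slab. */
        return lsregion_reserve_slow(lsregion, size + align - 1);
}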


* Re: [Tarantool-patches] [PATCH small 0/2] Aligned lsregion
From: Vladislav Shpilevoy @ 2020-05-15 23:22 UTC (permalink / raw)
  To: Aleksandr Lyapunov, tarantool-patches, korablev, tsafin, gorcunov

Hi! Thanks for the review!

On 15/05/2020 14:26, Aleksandr Lyapunov wrote:
> Thanks for the patch! see comments below.
> 
> On 5/15/20 2:31 AM, Vladislav Shpilevoy wrote:
>> The patchset introduces a new function - lsregion_aligned_alloc().
> 
> Actually you don't need the patch for fixing #4609.
> If you want to align vy_mem tuple by 4 bytes (due to uint32_t bsize), all you need is:
> 
> diff --git a/src/box/vy_stmt.c b/src/box/vy_stmt.c
> index adc3ba4..2de4a28 100644
> --- a/src/box/vy_stmt.c
> +++ b/src/box/vy_stmt.c
> @@ -228,7 +228,7 @@ vy_stmt_dup_lsregion(struct tuple *stmt, struct lsregion *lsregion,
>         if (type == IPROTO_UPSERT)
>                 alloc_size++;
>  
> -       mem_stmt = lsregion_alloc(lsregion, alloc_size, alloc_id);
> +       mem_stmt = lsregion_alloc(lsregion, small_align(alloc_size, 4), alloc_id);
>         if (mem_stmt == NULL) {
>                 diag_set(OutOfMemory, size, "lsregion_alloc", "mem_stmt");
>                 return NULL;
> 
> lsregion will allocate aligned addresses as long as you request only aligned sizes,
> up to sizeof(uintptr_t) (see lslab_sizeof()).
> This property is not documented (maybe it should be!). It is also similar to mempool
> property, see mempool_create description.

Yeah, here are 3 problems:
- lsregion is used not only for tuples, but for BPS tree extents too. We can't
  rely on their alignment being the same. The sanitizer revealed that both
  extents and tuples are not aligned;
- the sanitizer revealed that mempool_alloc returns unaligned addresses too.
  I am going to look into that after lsregion;
- a slab's first position may not be aligned enough, so we can't assume
  anything about the first address in a slab, and therefore about any of the
  other addresses either.

> I like one-line change more.
> 
> If you still want _reserve and _aligned_alloc method for further purposes, please
> read my by-commit comments.

I want lsregion to be consistent with the other allocators and provide a
basic API for aligned allocations, like region does.


* Re: [Tarantool-patches] [PATCH small 2/2] lsregion: provide aligned version of alloc
From: Vladislav Shpilevoy @ 2020-05-15 23:24 UTC (permalink / raw)
  To: Aleksandr Lyapunov, tarantool-patches, korablev, tsafin, gorcunov

Thanks for the review!

On 15/05/2020 15:03, Aleksandr Lyapunov wrote:
> On 5/15/20 2:31 AM, Vladislav Shpilevoy wrote:
>> +static inline void *
>> +lsregion_aligned_alloc(struct lsregion *lsregion, size_t size, size_t alignment,
>> +		       int64_t id)
>> +{
>> +	void *unaligned = lsregion_reserve(lsregion, size + alignment - 1);
> I do not like this change. It will work, but it's not optimal.

It is optimal in terms of code simplicity and speed. The worst that can happen
is that the slab effectively becomes 7 bytes smaller. But taking into account
the minimal slab size of 65KB, this tail does not matter.

However yeah, strictly speaking it is not optimal in terms of space. At first I
wanted to keep the current implementation, but then out of curiosity I tried
your way, and it looks fine, I think. Not too complex. But the patchset has
changed significantly. Now there is only one commit (since the aligned allocs
rewrite the whole diff of the first patch, about the reserve() function), and
there are many more changes.
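
To illustrate where those 7 bytes come from (the address is an
arbitrary example):

uintptr_t pos = 0x10001;                 /* 1 byte past an aligned boundary */
uintptr_t aligned = small_align(pos, 8); /* 0x10008 */
size_t waste = aligned - pos;            /* 7 bytes, the maximum for align 8 */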

> We don't need to reserve more than size if lslab_pos(slab) is already aligned.
> It could lead to a new slab allocation in cases where we actually have enough aligned space in the current slab.
> 
> If we really need aligned alloc, I'm sure we should implement it more carefully.
> I think it should start with something like:
>  static inline size_t
> -lslab_unused(const struct lslab *slab)
> +lslab_unused(const struct lslab *slab, size_t align)
>  {
> -       assert(slab->slab_size >= slab->slab_used);
> -       return slab->slab_size - slab->slab_used;
> +       assert(slab->slab_size >= small_align(slab->slab_used, align));
> +       return slab->slab_size - small_align(slab->slab_used, align);
>  }
> 
> Note also that we should remove the small_align call from the lslab_sizeof method.

Eww. This looks really bad: lslab_unused() becomes too complex. Moreover,
it can overflow, because small_align() can return something bigger than
slab->slab_size. Also, in the general case this does not return the aligned
unused size, because the slab itself may not be aligned enough for the
requested alignment. I wrote a test for that.

But I got the idea. I sent v2 in a new thread.


* Re: [Tarantool-patches] [PATCH small 0/2] Aligned lsregion
From: Aleksandr Lyapunov @ 2020-05-16 19:09 UTC (permalink / raw)
  To: Vladislav Shpilevoy, tarantool-patches, korablev, tsafin, gorcunov


On 5/16/20 2:22 AM, Vladislav Shpilevoy wrote:
>
>> lsregion will allocate aligned addresses as long as you request only aligned sizes,
>> up to sizeof(uintptr_t) (see lslab_sizeof()).
>> This property is not documented (maybe it should be!). It is also similar
>> to the mempool property; see the mempool_create description.
> Yeah, here are 3 problems:
> - lsregion is used not only for tuples, but for BPS tree extents too. We can't
>    rely on their alignment being the same. The sanitizer revealed that both
>    extents and tuples are not aligned;
Since you decided to implement _aligned_alloc anyway, it's not very
important, but I want to find the truth. It could help us one day.
The extents' sizes are also (very) aligned; all you need is to align the
tuple sizes, as I proposed. Try it and you will see: if you align the sizes
(up to 8), you'll get aligned addresses.
> - the sanitizer revealed that mempool_alloc returns unaligned addresses too. I am
>    going to look into that after lsregion;
I checked the code and I confirm that if the mempool item size is aligned,
the addresses will also be aligned. Could you suggest how to get those
sanitizer results? I also see that small_alloc aligns all allocations to
8 bytes.
> - a slab's first position may not be aligned enough, so we can't assume
>    anything about the first address in a slab, and therefore about any of
>    the other addresses either.
Yes, that position is aligned to 8 bytes now. Are there any cases where we
need a bigger alignment factor?
>
>> I like the one-line change more.
>>
>> If you still want the _reserve and _aligned_alloc methods for further
>> purposes, please read my per-commit comments.
> I want lsregion to be consistent with the other allocators and provide a
> basic API for aligned allocations, like region does.
Ok, that's a great idea; doesn't it deserve a separate ticket on GitHub?


* Re: [Tarantool-patches] [PATCH small 0/2] Aligned lsregion
From: Vladislav Shpilevoy @ 2020-05-17 13:56 UTC (permalink / raw)
  To: Aleksandr Lyapunov, tarantool-patches, korablev, tsafin, gorcunov

On 16/05/2020 21:09, Aleksandr Lyapunov wrote:
> 
> On 5/16/20 2:22 AM, Vladislav Shpilevoy wrote:
>>
>>> lsregion will allocate aligned addresses as long as you request only aligned sizes,
>>> up to sizeof(uintptr_t) (see lslab_sizeof()).
>>> This property is not documented (maybe it should be!). It is also similar
>>> to the mempool property; see the mempool_create description.
>> Yeah, here are 3 problems:
>> - lsregion is used not only for tuples, but for BPS tree extents too. We can't
>>    rely on their alignment being the same. The sanitizer revealed that both
>>    extents and tuples are not aligned;
> Since you decided to implement _aligned_alloc anyway, it's not very important, but
> I want to find the truth. It could help us one day.
> The extents' sizes are also (very) aligned; all you need is to align the tuple sizes, as I proposed.
> Try it and you will see: if you align the sizes (up to 8), you'll get aligned addresses.

This is based on the assumption that extents and tuples need the same
alignment. I don't like making such obscure assumptions, because they tend
to break when any tiny detail changes.

>> - the sanitizer revealed that mempool_alloc returns unaligned addresses too.
>>    I am going to look into that after lsregion;
> I checked the code and I confirm that if the mempool item size is aligned,
> the addresses will also be aligned. Could you suggest how to get those
> sanitizer results? I also see that small_alloc aligns all allocations to 8 bytes.

If the mempool item size is not aligned, this won't work. I got the unaligned
access crash somewhere in box/port.c, around the port_c_entry structure. I
haven't checked yet whether removing 'PACKED' from this structure helps. But
anyway, this looks like a mempool bug: it should not have problems with any
object size, regardless of its alignment. The crash was inside of mempool,
when it tried to write something into a block passed to mempool_free(). So
mempool assumes that the object size is aligned at least like mempool's
internal entries. And if one day we add a uint64 into a mempool entry, but
use the pool to allocate 4-byte-aligned objects, this will break too.
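
The pattern looks roughly like this (a generic illustration, not
the actual port_c_entry definition):

#include <stdint.h>

struct __attribute__((packed)) entry {
        char tag;
        uint64_t value; /* lands at offset 1, misaligned for uint64_t */
};

void
entry_reset(struct entry *e)
{
        /*
         * The compiler warns here (-Waddress-of-packed-member):
         * the plain pointer loses the packing information, and
         * the store through it is an unaligned access, which is
         * UB and a trap under -fsanitize=alignment.
         */
        uint64_t *p = &e->value;
        *p = 0;
}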

You can try the sanitizer, if you have clang, with this diff:

====================
diff --git a/cmake/compiler.cmake b/cmake/compiler.cmake
index ce3e7e506..373bcd3b0 100644
--- a/cmake/compiler.cmake
+++ b/cmake/compiler.cmake
@@ -238,6 +238,8 @@ endif()
 
 option(ENABLE_WERROR "Make all compiler warnings into errors" OFF)
 
+option(ENABLE_UB_SANITIZER "Make the compiler generate runtime code to perform undefined behaviour checks" OFF)
+
 macro(enable_tnt_compile_flags)
     # Tarantool code is written in GNU C dialect.
     # Additionally, compile it with more strict flags than the rest
@@ -263,6 +265,14 @@ macro(enable_tnt_compile_flags)
         "-Wno-strict-aliasing"
     )
 
+    if (ENABLE_UB_SANITIZER)
+        if (NOT CMAKE_COMPILER_IS_CLANG)
+            message(FATAL_ERROR "Undefined behaviour sanitizer only available for clang")
+        else()
+            add_compile_flags("C;CXX" "-fsanitize=alignment -fno-sanitize-recover=alignment")
+        endif()
+    endif()
+
     if (CMAKE_COMPILER_IS_CLANG AND CC_HAS_WNO_UNUSED_VALUE)
         # False-positive warnings for ({ xx = ...; x; }) macroses
         add_compile_flags("C;CXX" "-Wno-unused-value")

====================

>> - a slab's first position may not be aligned enough, so we can't assume
>>    anything about the first address in a slab, and therefore about any of
>>    the other addresses either.
> Yes, that position is aligned to 8 bytes now. Are there any cases where we
> need a bigger alignment factor?

As I said, I don't like making such assumptions.

>>> I like the one-line change more.
>>>
>>> If you still want the _reserve and _aligned_alloc methods for further
>>> purposes, please read my per-commit comments.
>> I want lsregion to be consistent with the other allocators and provide a
>> basic API for aligned allocations, like region does.
> Ok, that's a great idea; doesn't it deserve a separate ticket on GitHub?

No. I didn't create an issue for that, since I consider it a part of #4609.

