Tarantool development patches archive
* [Tarantool-patches] [PATCH 0/4] Choose allocator for memtx
@ 2020-12-29 11:03 mechanik20051988
  2020-12-29 11:03 ` [Tarantool-patches] [PATCH 1/4] test: add performance test for memtx allocator mechanik20051988
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: mechanik20051988 @ 2020-12-29 11:03 UTC (permalink / raw)
  To: v.shpilevoy, alyapunov; +Cc: tarantool-patches

Branch: https://github.com/tarantool/tarantool/tree/mechanik20051988/gh-5419-choose-allocator-for-memtx-cpp-14
        (Ignore the number 14 in the branch name.)
Issue: https://github.com/tarantool/tarantool/issues/5419
Pull request: https://github.com/tarantool/tarantool/pull/5670
About patches:
	1. The first patch adds a performance test for the memtx allocator. You can copy the perf
	   folder to the master branch and compare performance.
	2. The second patch converts some *.c files to *.cc files.
	   This is a preparation for the patch with the allocator choice.
	3. The third patch implements an API for the allocator choice.
	4. The fourth patch adds a system allocator based on malloc and free.

This is a completely redesigned patch; however, I would like to provide answers
to some questions about the previous version that may still apply:

1. malloc_usable_size / malloc_size have different headers on different OSes, so I use
   TARGET_OS_*** checks in the source file rather than in CMakeLists.
2. I tested snapshots using the checkpoint_interval=5 option; several snapshots
   are created during the test.

mechanik20051988 (4):
  test: add performance test for memtx allocator.
  memtx: changed some memtx files from .c to .cc
  memtx: implement api for memory allocator selection
  Implement system allocator, based on malloc

 perf/allocator_perf.test.lua                |  34 +++
 src/box/CMakeLists.txt                      |   8 +-
 src/box/box.cc                              |   3 +
 src/box/field_map.h                         |   8 +
 src/box/lua/init.c                          |   2 +-
 src/box/lua/load_cfg.lua                    |   2 +
 src/box/lua/slab.c                          | 214 +------------
 src/box/lua/slab.cc                         | 320 ++++++++++++++++++++
 src/box/lua/slab.h                          |   1 +
 src/box/{memtx_engine.c => memtx_engine.cc} | 177 ++++++++---
 src/box/memtx_engine.h                      |  53 ++--
 src/box/{memtx_space.c => memtx_space.cc}   |  94 +++---
 src/box/small_allocator.cc                  |  74 +++++
 src/box/small_allocator.h                   |  58 ++++
 src/box/sysalloc.c                          | 210 +++++++++++++
 src/box/sysalloc.h                          | 145 +++++++++
 src/box/system_allocator.cc                 |  68 +++++
 src/box/system_allocator.h                  |  54 ++++
 test/app-tap/init_script.result             |   1 +
 test/box/admin.result                       |   4 +-
 test/box/cfg.result                         |   8 +-
 test/box/choose_memtx_allocator.lua         |   9 +
 test/box/choose_memtx_allocator.result      | 139 +++++++++
 test/box/choose_memtx_allocator.test.lua    |  44 +++
 24 files changed, 1410 insertions(+), 320 deletions(-)
 create mode 100755 perf/allocator_perf.test.lua
 create mode 100644 src/box/lua/slab.cc
 rename src/box/{memtx_engine.c => memtx_engine.cc} (89%)
 rename src/box/{memtx_space.c => memtx_space.cc} (93%)
 create mode 100644 src/box/small_allocator.cc
 create mode 100644 src/box/small_allocator.h
 create mode 100644 src/box/sysalloc.c
 create mode 100644 src/box/sysalloc.h
 create mode 100644 src/box/system_allocator.cc
 create mode 100644 src/box/system_allocator.h
 create mode 100644 test/box/choose_memtx_allocator.lua
 create mode 100644 test/box/choose_memtx_allocator.result
 create mode 100644 test/box/choose_memtx_allocator.test.lua

-- 
2.20.1

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [Tarantool-patches] [PATCH 1/4] test: add performance test for memtx allocator.
  2020-12-29 11:03 [Tarantool-patches] [PATCH 0/4] Choose allocator for memtx mechanik20051988
@ 2020-12-29 11:03 ` mechanik20051988
  2020-12-29 11:03 ` [Tarantool-patches] [PATCH 2/4] memtx: changed some memtx files from .c to .cc mechanik20051988
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: mechanik20051988 @ 2020-12-29 11:03 UTC (permalink / raw)
  To: v.shpilevoy, alyapunov; +Cc: mechanik20051988, tarantool-patches

From: mechanik20051988 <mechanik20.05.1988@gmail.com>

---
 perf/allocator_perf.test.lua | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)
 create mode 100755 perf/allocator_perf.test.lua

diff --git a/perf/allocator_perf.test.lua b/perf/allocator_perf.test.lua
new file mode 100755
index 000000000..ffd217cdc
--- /dev/null
+++ b/perf/allocator_perf.test.lua
@@ -0,0 +1,34 @@
+#!/usr/bin/env ../src/tarantool
+os.execute('rm -rf *.snap *.xlog')
+local clock = require('clock')
+box.cfg{listen = 3301, wal_mode='none', allocator=arg[1]}
+local space = box.schema.space.create('test')
+space:format({ {name = 'id', type = 'unsigned'}, {name = 'year', type = 'unsigned'} })
+space:create_index('primary', { parts = {'id'} })
+local time_insert = 0
+local time_replace = 0
+local time_delete = 0
+local cnt = 0
+local cnt_max = 20
+local op_max = 2500000
+local nanosec = 1.0e9
+while cnt < cnt_max do
+    cnt = cnt + 1
+    local time_before = clock.monotonic64()
+    for key = 1, op_max do space:insert({key, key + 1000}) end
+    local time_after = clock.monotonic64()
+    time_insert = time_insert + (time_after - time_before)
+    time_before = clock.monotonic64()
+    for key = 1, op_max do space:replace({key, key + 5000}) end
+    time_after = clock.monotonic64()
+    time_replace = time_replace + (time_after - time_before)
+    time_before = clock.monotonic64()
+    for key = 1, op_max do space:delete(key) end
+    time_after = clock.monotonic64()
+    time_delete = time_delete + (time_after - time_before)
+end
+io.write("{\n")
+io.write(string.format("  \"alloc time\": \"%.3f\"\n", tonumber(time_insert) / (nanosec * cnt_max)))
+io.write(string.format("  \"replace time\": \"%.3f\"\n", tonumber(time_replace) / (nanosec * cnt_max)))
+io.write(string.format("  \"delete time\": \"%.3f\"\n}\n", tonumber(time_delete) / (nanosec * cnt_max)))
+os.exit()
-- 
2.20.1


* [Tarantool-patches] [PATCH 2/4] memtx: changed some memtx files from .c to .cc
  2020-12-29 11:03 [Tarantool-patches] [PATCH 0/4] Choose allocator for memtx mechanik20051988
  2020-12-29 11:03 ` [Tarantool-patches] [PATCH 1/4] test: add performance test for memtx allocator mechanik20051988
@ 2020-12-29 11:03 ` mechanik20051988
  2020-12-29 11:03 ` [Tarantool-patches] [PATCH 3/4] memtx: implement api for memory allocator selection mechanik20051988
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: mechanik20051988 @ 2020-12-29 11:03 UTC (permalink / raw)
  To: v.shpilevoy, alyapunov; +Cc: mechanik20051988, tarantool-patches

From: mechanik20051988 <mechanik20.05.1988@gmail.com>

---
 src/box/CMakeLists.txt                      |  4 +--
 src/box/field_map.h                         |  8 +++++
 src/box/{memtx_engine.c => memtx_engine.cc} | 38 ++++++++++++---------
 src/box/{memtx_space.c => memtx_space.cc}   | 15 ++++----
 4 files changed, 40 insertions(+), 25 deletions(-)
 rename src/box/{memtx_engine.c => memtx_engine.cc} (97%)
 rename src/box/{memtx_space.c => memtx_space.cc} (98%)

diff --git a/src/box/CMakeLists.txt b/src/box/CMakeLists.txt
index 19203f770..d2af89d05 100644
--- a/src/box/CMakeLists.txt
+++ b/src/box/CMakeLists.txt
@@ -127,8 +127,8 @@ add_library(box STATIC
     memtx_bitset.c
     memtx_tx.c
     engine.c
-    memtx_engine.c
-    memtx_space.c
+    memtx_engine.cc
+    memtx_space.cc
     sysview.c
     blackhole.c
     service_engine.c
diff --git a/src/box/field_map.h b/src/box/field_map.h
index d8ef726a1..5087d25e5 100644
--- a/src/box/field_map.h
+++ b/src/box/field_map.h
@@ -35,6 +35,10 @@
 #include <stddef.h>
 #include "bit/bit.h"
 
+#if defined(__cplusplus)
+extern "C" {
+#endif /* defined(__cplusplus) */
+
 struct region;
 struct field_map_builder_slot;
 
@@ -257,4 +261,8 @@ field_map_build_size(struct field_map_builder *builder)
 void
 field_map_build(struct field_map_builder *builder, char *buffer);
 
+#if defined(__cplusplus)
+} /* extern "C" */
+#endif /* defined(__cplusplus) */
+
 #endif /* TARANTOOL_BOX_FIELD_MAP_H_INCLUDED */
diff --git a/src/box/memtx_engine.c b/src/box/memtx_engine.cc
similarity index 97%
rename from src/box/memtx_engine.c
rename to src/box/memtx_engine.cc
index f79f14b4f..520a221dd 100644
--- a/src/box/memtx_engine.c
+++ b/src/box/memtx_engine.cc
@@ -552,7 +552,7 @@ struct checkpoint {
 static struct checkpoint *
 checkpoint_new(const char *snap_dirname, uint64_t snap_io_rate_limit)
 {
-	struct checkpoint *ckpt = malloc(sizeof(*ckpt));
+	struct checkpoint *ckpt = (struct checkpoint *)malloc(sizeof(*ckpt));
 	if (ckpt == NULL) {
 		diag_set(OutOfMemory, sizeof(*ckpt), "malloc",
 			 "struct checkpoint");
@@ -621,7 +621,7 @@ checkpoint_add_space(struct space *sp, void *data)
 	if (!pk)
 		return 0;
 	struct checkpoint *ckpt = (struct checkpoint *)data;
-	struct checkpoint_entry *entry = malloc(sizeof(*entry));
+	struct checkpoint_entry *entry = (struct checkpoint_entry *)malloc(sizeof(*entry));
 	if (entry == NULL) {
 		diag_set(OutOfMemory, sizeof(*entry),
 			 "malloc", "struct checkpoint_entry");
@@ -851,7 +851,7 @@ struct memtx_join_ctx {
 static int
 memtx_join_add_space(struct space *space, void *arg)
 {
-	struct memtx_join_ctx *ctx = arg;
+	struct memtx_join_ctx *ctx = (struct memtx_join_ctx *)arg;
 	if (!space_is_memtx(space))
 		return 0;
 	if (space_is_temporary(space))
@@ -861,7 +861,7 @@ memtx_join_add_space(struct space *space, void *arg)
 	struct index *pk = space_index(space, 0);
 	if (pk == NULL)
 		return 0;
-	struct memtx_join_entry *entry = malloc(sizeof(*entry));
+	struct memtx_join_entry *entry = (struct memtx_join_entry *)malloc(sizeof(*entry));
 	if (entry == NULL) {
 		diag_set(OutOfMemory, sizeof(*entry),
 			 "malloc", "struct memtx_join_entry");
@@ -881,7 +881,7 @@ static int
 memtx_engine_prepare_join(struct engine *engine, void **arg)
 {
 	(void)engine;
-	struct memtx_join_ctx *ctx = malloc(sizeof(*ctx));
+	struct memtx_join_ctx *ctx = (struct memtx_join_ctx *)malloc(sizeof(*ctx));
 	if (ctx == NULL) {
 		diag_set(OutOfMemory, sizeof(*ctx),
 			 "malloc", "struct memtx_join_ctx");
@@ -919,7 +919,7 @@ memtx_join_send_tuple(struct xstream *stream, uint32_t space_id,
 static int
 memtx_join_f(va_list ap)
 {
-	struct memtx_join_ctx *ctx = va_arg(ap, struct memtx_join_ctx *);
+	struct memtx_join_ctx *ctx = (struct memtx_join_ctx *)va_arg(ap, struct memtx_join_ctx *);
 	struct memtx_join_entry *entry;
 	rlist_foreach_entry(entry, &ctx->entries, in_ctx) {
 		struct snapshot_iterator *it = entry->iterator;
@@ -941,7 +941,7 @@ static int
 memtx_engine_join(struct engine *engine, void *arg, struct xstream *stream)
 {
 	(void)engine;
-	struct memtx_join_ctx *ctx = arg;
+	struct memtx_join_ctx *ctx = (struct memtx_join_ctx *)arg;
 	ctx->stream = stream;
 	/*
 	 * Memtx snapshot iterators are safe to use from another
@@ -962,7 +962,7 @@ static void
 memtx_engine_complete_join(struct engine *engine, void *arg)
 {
 	(void)engine;
-	struct memtx_join_ctx *ctx = arg;
+	struct memtx_join_ctx *ctx = (struct memtx_join_ctx *)arg;
 	struct memtx_join_entry *entry, *next;
 	rlist_foreach_entry_safe(entry, &ctx->entries, in_ctx, next) {
 		entry->iterator->free(entry->iterator);
@@ -1066,7 +1066,8 @@ memtx_engine_new(const char *snap_dirname, bool force_recovery,
 		 uint64_t tuple_arena_max_size, uint32_t objsize_min,
 		 bool dontdump, float alloc_factor)
 {
-	struct memtx_engine *memtx = calloc(1, sizeof(*memtx));
+	int64_t snap_signature;
+	struct memtx_engine *memtx = (struct memtx_engine *)calloc(1, sizeof(*memtx));
 	if (memtx == NULL) {
 		diag_set(OutOfMemory, sizeof(*memtx),
 			 "malloc", "struct memtx_engine");
@@ -1090,7 +1091,7 @@ memtx_engine_new(const char *snap_dirname, bool force_recovery,
 	 * So if the local directory isn't empty, read the snapshot
 	 * signature right now to initialize the instance UUID.
 	 */
-	int64_t snap_signature = xdir_last_vclock(&memtx->snap_dir, NULL);
+	snap_signature = xdir_last_vclock(&memtx->snap_dir, NULL);
 	if (snap_signature >= 0) {
 		struct xlog_cursor cursor;
 		if (xdir_open_cursor(&memtx->snap_dir,
@@ -1220,15 +1221,19 @@ memtx_tuple_new(struct tuple_format *format, const char *data, const char *end)
 	struct region *region = &fiber()->gc;
 	size_t region_svp = region_used(region);
 	struct field_map_builder builder;
+	uint32_t field_map_size, data_offset;
+	size_t tuple_len, total;
+	char *raw;
+
 	if (tuple_field_map_create(format, data, true, &builder) != 0)
 		goto end;
-	uint32_t field_map_size = field_map_build_size(&builder);
+	field_map_size = field_map_build_size(&builder);
 	/*
 	 * Data offset is calculated from the begin of the struct
 	 * tuple base, not from memtx_tuple, because the struct
 	 * tuple is not the first field of the memtx_tuple.
 	 */
-	uint32_t data_offset = sizeof(struct tuple) + field_map_size;
+	data_offset = sizeof(struct tuple) + field_map_size;
 	if (data_offset > INT16_MAX) {
 		/** tuple->data_offset is 15 bits */
 		diag_set(ClientError, ER_TUPLE_METADATA_IS_TOO_BIG,
@@ -1236,8 +1241,8 @@ memtx_tuple_new(struct tuple_format *format, const char *data, const char *end)
 		goto end;
 	}
 
-	size_t tuple_len = end - data;
-	size_t total = sizeof(struct memtx_tuple) + field_map_size + tuple_len;
+	tuple_len = end - data;
+	total = sizeof(struct memtx_tuple) + field_map_size + tuple_len;
 
 	ERROR_INJECT(ERRINJ_TUPLE_ALLOC, {
 		diag_set(OutOfMemory, total, "slab allocator", "memtx_tuple");
@@ -1250,7 +1255,8 @@ memtx_tuple_new(struct tuple_format *format, const char *data, const char *end)
 	}
 
 	struct memtx_tuple *memtx_tuple;
-	while ((memtx_tuple = smalloc(&memtx->alloc, total)) == NULL) {
+	while ((memtx_tuple = (struct memtx_tuple *)
+		smalloc(&memtx->alloc, total)) == NULL) {
 		bool stop;
 		memtx_engine_run_gc(memtx, &stop);
 		if (stop)
@@ -1269,7 +1275,7 @@ memtx_tuple_new(struct tuple_format *format, const char *data, const char *end)
 	tuple_format_ref(format);
 	tuple->data_offset = data_offset;
 	tuple->is_dirty = false;
-	char *raw = (char *) tuple + tuple->data_offset;
+	raw = (char *) tuple + tuple->data_offset;
 	field_map_build(&builder, raw - field_map_size);
 	memcpy(raw, data, tuple_len);
 	say_debug("%s(%zu) = %p", __func__, tuple_len, memtx_tuple);
diff --git a/src/box/memtx_space.c b/src/box/memtx_space.cc
similarity index 98%
rename from src/box/memtx_space.c
rename to src/box/memtx_space.cc
index 73b4c450e..e46e4eaeb 100644
--- a/src/box/memtx_space.c
+++ b/src/box/memtx_space.cc
@@ -861,8 +861,8 @@ struct memtx_ddl_state {
 static int
 memtx_check_on_replace(struct trigger *trigger, void *event)
 {
-	struct txn *txn = event;
-	struct memtx_ddl_state *state = trigger->data;
+	struct txn *txn = (struct txn *)event;
+	struct memtx_ddl_state *state = (struct memtx_ddl_state *)trigger->data;
 	struct txn_stmt *stmt = txn_current_stmt(txn);
 
 	/* Nothing to check on DELETE. */
@@ -985,8 +985,8 @@ memtx_init_ephemeral_space(struct space *space)
 static int
 memtx_build_on_replace(struct trigger *trigger, void *event)
 {
-	struct txn *txn = event;
-	struct memtx_ddl_state *state = trigger->data;
+	struct txn *txn = (struct txn *)event;
+	struct memtx_ddl_state *state = (struct memtx_ddl_state *)trigger->data;
 	struct txn_stmt *stmt = txn_current_stmt(txn);
 
 	struct tuple *cmp_tuple = stmt->new_tuple != NULL ? stmt->new_tuple :
@@ -1006,12 +1006,12 @@ memtx_build_on_replace(struct trigger *trigger, void *event)
 		return 0;
 	}
 
-	struct tuple *delete = NULL;
+	struct tuple *unused = nullptr;
 	enum dup_replace_mode mode =
 		state->index->def->opts.is_unique ? DUP_INSERT :
 						    DUP_REPLACE_OR_INSERT;
 	state->rc = index_replace(state->index, stmt->old_tuple,
-				  stmt->new_tuple, mode, &delete);
+				  stmt->new_tuple, mode, &unused);
 	if (state->rc != 0) {
 		diag_move(diag_get(), &state->diag);
 		return 0;
@@ -1193,7 +1193,8 @@ struct space *
 memtx_space_new(struct memtx_engine *memtx,
 		struct space_def *def, struct rlist *key_list)
 {
-	struct memtx_space *memtx_space = malloc(sizeof(*memtx_space));
+	struct memtx_space *memtx_space =
+		(struct memtx_space *)malloc(sizeof(*memtx_space));
 	if (memtx_space == NULL) {
 		diag_set(OutOfMemory, sizeof(*memtx_space),
 			 "malloc", "struct memtx_space");
-- 
2.20.1


* [Tarantool-patches] [PATCH 3/4] memtx: implement api for memory allocator selection
  2020-12-29 11:03 [Tarantool-patches] [PATCH 0/4] Choose allocator for memtx mechanik20051988
  2020-12-29 11:03 ` [Tarantool-patches] [PATCH 1/4] test: add performance test for memtx allocator mechanik20051988
  2020-12-29 11:03 ` [Tarantool-patches] [PATCH 2/4] memtx: changed some memtx files from .c to .cc mechanik20051988
@ 2020-12-29 11:03 ` mechanik20051988
  2021-01-10 13:56   ` Vladislav Shpilevoy via Tarantool-patches
  2020-12-29 11:03 ` [Tarantool-patches] [PATCH 4/4] Implement system allocator, based on malloc mechanik20051988
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 9+ messages in thread
From: mechanik20051988 @ 2020-12-29 11:03 UTC (permalink / raw)
  To: v.shpilevoy, alyapunov; +Cc: mechanik20051988, tarantool-patches

From: mechanik20051988 <mechanik20.05.1988@gmail.com>

The slab allocator, which is used for tuple allocation,
has a certain disadvantage: it is prone to unresolvable
fragmentation on certain workloads (size migration).
The new option allows selecting the appropriate
allocator if necessary.
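For illustration, the option would be set at startup like this (a sketch based on this series; 'small' is the existing default, 'system' is the malloc-based allocator added in the fourth patch):

```lua
-- Select the allocator used for memtx tuples.
-- Supported values in this series: 'small' (default) and 'system'.
box.cfg{allocator = 'system'}
```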

@TarantoolBot document
Title: Add new 'allocator' option
Add a new 'allocator' option which allows
selecting the appropriate allocator for memtx
tuples if necessary.

Closes #5419
---
 src/box/CMakeLists.txt          |   2 +
 src/box/box.cc                  |   3 +
 src/box/lua/init.c              |   2 +-
 src/box/lua/load_cfg.lua        |   2 +
 src/box/lua/slab.c              | 214 +----------------------
 src/box/lua/slab.cc             | 292 ++++++++++++++++++++++++++++++++
 src/box/lua/slab.h              |   1 +
 src/box/memtx_engine.cc         | 113 ++++++++----
 src/box/memtx_engine.h          |  53 ++++--
 src/box/memtx_space.cc          |  75 ++++----
 src/box/small_allocator.cc      |  74 ++++++++
 src/box/small_allocator.h       |  58 +++++++
 test/app-tap/init_script.result |   1 +
 test/box/admin.result           |   4 +-
 test/box/cfg.result             |   8 +-
 15 files changed, 606 insertions(+), 296 deletions(-)
 create mode 100644 src/box/lua/slab.cc
 create mode 100644 src/box/small_allocator.cc
 create mode 100644 src/box/small_allocator.h

diff --git a/src/box/CMakeLists.txt b/src/box/CMakeLists.txt
index d2af89d05..aebf76bd4 100644
--- a/src/box/CMakeLists.txt
+++ b/src/box/CMakeLists.txt
@@ -129,6 +129,7 @@ add_library(box STATIC
     engine.c
     memtx_engine.cc
     memtx_space.cc
+    small_allocator.cc
     sysview.c
     blackhole.c
     service_engine.c
@@ -198,6 +199,7 @@ add_library(box STATIC
     lua/serialize_lua.c
     lua/tuple.c
     lua/slab.c
+    lua/slab.cc
     lua/index.c
     lua/space.cc
     lua/sequence.c
diff --git a/src/box/box.cc b/src/box/box.cc
index 26cbe8aab..7e1b9d207 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -33,6 +33,7 @@
 #include "trivia/config.h"
 
 #include "lua/utils.h" /* lua_hash() */
+#include "lua/slab.h" /* box_lua_slab_init */
 #include "fiber_pool.h"
 #include <say.h>
 #include <scoped_guard.h>
@@ -2541,6 +2542,7 @@ engine_init()
 				    cfg_getd("memtx_memory"),
 				    cfg_geti("memtx_min_tuple_size"),
 				    cfg_geti("strip_core"),
+				    cfg_gets("allocator"),
 				    cfg_getd("slab_alloc_factor"));
 	engine_register((struct engine *)memtx);
 	box_set_memtx_max_tuple_size();
@@ -2947,6 +2949,7 @@ box_cfg_xc(void)
 
 	gc_init();
 	engine_init();
+	box_lua_slab_init(tarantool_L);
 	schema_init();
 	replication_init();
 	port_init();
diff --git a/src/box/lua/init.c b/src/box/lua/init.c
index fbcdfb20b..480176f7a 100644
--- a/src/box/lua/init.c
+++ b/src/box/lua/init.c
@@ -465,7 +465,7 @@ box_lua_init(struct lua_State *L)
 	box_lua_tuple_init(L);
 	box_lua_call_init(L);
 	box_lua_cfg_init(L);
-	box_lua_slab_init(L);
+	box_lua_slab_runtime_init(L);
 	box_lua_index_init(L);
 	box_lua_space_init(L);
 	box_lua_sequence_init(L);
diff --git a/src/box/lua/load_cfg.lua b/src/box/lua/load_cfg.lua
index 574c8bef4..2fe8a5b6c 100644
--- a/src/box/lua/load_cfg.lua
+++ b/src/box/lua/load_cfg.lua
@@ -43,6 +43,7 @@ local default_cfg = {
     memtx_min_tuple_size = 16,
     memtx_max_tuple_size = 1024 * 1024,
     slab_alloc_factor   = 1.05,
+    allocator           = "small",
     work_dir            = nil,
     memtx_dir           = ".",
     wal_dir             = ".",
@@ -124,6 +125,7 @@ local template_cfg = {
     memtx_min_tuple_size  = 'number',
     memtx_max_tuple_size  = 'number',
     slab_alloc_factor   = 'number',
+    allocator           = 'string',
     work_dir            = 'string',
     memtx_dir            = 'string',
     wal_dir             = 'string',
diff --git a/src/box/lua/slab.c b/src/box/lua/slab.c
index 9f5e7e95c..b9565e768 100644
--- a/src/box/lua/slab.c
+++ b/src/box/lua/slab.c
@@ -44,193 +44,6 @@
 #include "box/engine.h"
 #include "box/memtx_engine.h"
 
-static int
-small_stats_noop_cb(const struct mempool_stats *stats, void *cb_ctx)
-{
-	(void) stats;
-	(void) cb_ctx;
-	return 0;
-}
-
-static int
-small_stats_lua_cb(const struct mempool_stats *stats, void *cb_ctx)
-{
-	/** Don't publish information about empty slabs. */
-	if (stats->slabcount == 0)
-		return 0;
-
-	struct lua_State *L = (struct lua_State *) cb_ctx;
-
-	/*
-	 * Create a Lua table for every slab class. A class is
-	 * defined by its item size.
-	 */
-	/** Assign next slab size to the next member of an array. */
-	lua_pushnumber(L, lua_objlen(L, -1) + 1);
-	lua_newtable(L);
-	/**
-	 * This is in fact only to force YaML flow "compact" for this
-	 * table.
-	 */
-	luaL_setmaphint(L, -1);
-
-	lua_pushstring(L, "mem_used");
-	luaL_pushuint64(L, stats->totals.used);
-	lua_settable(L, -3);
-
-	lua_pushstring(L, "slab_size");
-	luaL_pushuint64(L, stats->slabsize);
-	lua_settable(L, -3);
-
-	lua_pushstring(L, "mem_free");
-	luaL_pushuint64(L, stats->totals.total - stats->totals.used);
-	lua_settable(L, -3);
-
-	lua_pushstring(L, "item_size");
-	luaL_pushuint64(L, stats->objsize);
-	lua_settable(L, -3);
-
-	lua_pushstring(L, "slab_count");
-	luaL_pushuint64(L, stats->slabcount);
-	lua_settable(L, -3);
-
-	lua_pushstring(L, "item_count");
-	luaL_pushuint64(L, stats->objcount);
-	lua_settable(L, -3);
-
-	lua_settable(L, -3);
-	return 0;
-}
-
-static int
-lbox_slab_stats(struct lua_State *L)
-{
-	struct memtx_engine *memtx;
-	memtx = (struct memtx_engine *)engine_by_name("memtx");
-
-	struct small_stats totals;
-	lua_newtable(L);
-	/*
-	 * List all slabs used for tuples and slabs used for
-	 * indexes, with their stats.
-	 */
-	small_stats(&memtx->alloc, &totals, small_stats_lua_cb, L);
-	struct mempool_stats index_stats;
-	mempool_stats(&memtx->index_extent_pool, &index_stats);
-	small_stats_lua_cb(&index_stats, L);
-
-	return 1;
-}
-
-static int
-lbox_slab_info(struct lua_State *L)
-{
-	struct memtx_engine *memtx;
-	memtx = (struct memtx_engine *)engine_by_name("memtx");
-
-	struct small_stats totals;
-
-	/*
-	 * List all slabs used for tuples and slabs used for
-	 * indexes, with their stats.
-	 */
-	lua_newtable(L);
-	small_stats(&memtx->alloc, &totals, small_stats_noop_cb, L);
-	struct mempool_stats index_stats;
-	mempool_stats(&memtx->index_extent_pool, &index_stats);
-
-	double ratio;
-	char ratio_buf[32];
-
-	ratio = 100 * ((double) totals.used
-		/ ((double) totals.total + 0.0001));
-	snprintf(ratio_buf, sizeof(ratio_buf), "%0.2lf%%", ratio);
-
-	/** How much address space has been already touched */
-	lua_pushstring(L, "items_size");
-	luaL_pushuint64(L, totals.total);
-	lua_settable(L, -3);
-	/**
-	 * How much of this formatted address space is used for
-	 * actual data.
-	 */
-	lua_pushstring(L, "items_used");
-	luaL_pushuint64(L, totals.used);
-	lua_settable(L, -3);
-
-	/*
-	 * Fragmentation factor for tuples. Don't account indexes,
-	 * even if they are fragmented, there is nothing people
-	 * can do about it.
-	 */
-	lua_pushstring(L, "items_used_ratio");
-	lua_pushstring(L, ratio_buf);
-	lua_settable(L, -3);
-
-	/** How much address space has been already touched
-	 * (tuples and indexes) */
-	lua_pushstring(L, "arena_size");
-	/*
-	 * We could use totals.total + index_stats.total here,
-	 * but this would not account for slabs which are sitting
-	 * in slab cache or in the arena, available for reuse.
-	 * Make sure a simple formula:
-	 * items_used_ratio > 0.9 && arena_used_ratio > 0.9 &&
-	 * quota_used_ratio > 0.9 work as an indicator
-	 * for reaching Tarantool memory limit.
-	 */
-	size_t arena_size = memtx->arena.used;
-	luaL_pushuint64(L, arena_size);
-	lua_settable(L, -3);
-	/**
-	 * How much of this formatted address space is used for
-	 * data (tuples and indexes).
-	 */
-	lua_pushstring(L, "arena_used");
-	luaL_pushuint64(L, totals.used + index_stats.totals.used);
-	lua_settable(L, -3);
-
-	ratio = 100 * ((double) (totals.used + index_stats.totals.used)
-		       / (double) arena_size);
-	snprintf(ratio_buf, sizeof(ratio_buf), "%0.1lf%%", ratio);
-
-	lua_pushstring(L, "arena_used_ratio");
-	lua_pushstring(L, ratio_buf);
-	lua_settable(L, -3);
-
-	/*
-	 * This is pretty much the same as
-	 * box.cfg.slab_alloc_arena, but in bytes
-	 */
-	lua_pushstring(L, "quota_size");
-	luaL_pushuint64(L, quota_total(&memtx->quota));
-	lua_settable(L, -3);
-
-	/*
-	 * How much quota has been booked - reflects the total
-	 * size of slabs in various slab caches.
-	 */
-	lua_pushstring(L, "quota_used");
-	luaL_pushuint64(L, quota_used(&memtx->quota));
-	lua_settable(L, -3);
-
-	/**
-	 * This should be the same as arena_size/arena_used, however,
-	 * don't trust totals in the most important monitoring
-	 * factor, it's the quota that give you OOM error in the
-	 * end of the day.
-	 */
-	ratio = 100 * ((double) quota_used(&memtx->quota) /
-		 ((double) quota_total(&memtx->quota) + 0.0001));
-	snprintf(ratio_buf, sizeof(ratio_buf), "%0.2lf%%", ratio);
-
-	lua_pushstring(L, "quota_used_ratio");
-	lua_pushstring(L, ratio_buf);
-	lua_settable(L, -3);
-
-	return 1;
-}
-
 static int
 lbox_runtime_info(struct lua_State *L)
 {
@@ -254,36 +67,11 @@ lbox_runtime_info(struct lua_State *L)
 	return 1;
 }
 
-static int
-lbox_slab_check(MAYBE_UNUSED struct lua_State *L)
-{
-	struct memtx_engine *memtx;
-	memtx = (struct memtx_engine *)engine_by_name("memtx");
-	slab_cache_check(memtx->alloc.cache);
-	return 0;
-}
-
 /** Initialize box.slab package. */
 void
-box_lua_slab_init(struct lua_State *L)
+box_lua_slab_runtime_init(struct lua_State *L)
 {
 	lua_getfield(L, LUA_GLOBALSINDEX, "box");
-	lua_pushstring(L, "slab");
-	lua_newtable(L);
-
-	lua_pushstring(L, "info");
-	lua_pushcfunction(L, lbox_slab_info);
-	lua_settable(L, -3);
-
-	lua_pushstring(L, "stats");
-	lua_pushcfunction(L, lbox_slab_stats);
-	lua_settable(L, -3);
-
-	lua_pushstring(L, "check");
-	lua_pushcfunction(L, lbox_slab_check);
-	lua_settable(L, -3);
-
-	lua_settable(L, -3); /* box.slab */
 
 	lua_pushstring(L, "runtime");
 	lua_newtable(L);
diff --git a/src/box/lua/slab.cc b/src/box/lua/slab.cc
new file mode 100644
index 000000000..4b247885f
--- /dev/null
+++ b/src/box/lua/slab.cc
@@ -0,0 +1,292 @@
+/*
+ * Copyright 2010-2020, Tarantool AUTHORS, please see AUTHORS file.
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above
+ *    copyright notice, this list of conditions and the
+ *    following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above
+ *    copyright notice, this list of conditions and the following
+ *    disclaimer in the documentation and/or other materials
+ *    provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY <COPYRIGHT HOLDER> ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
+ * <COPYRIGHT HOLDER> OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
+ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
+ * THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+#include "trivia/util.h"
+
+#include "box/lua/slab.h"
+#include "lua/utils.h"
+
+#include <lua.h>
+#include <lauxlib.h>
+#include <lualib.h>
+#include <lj_obj.h> /* internals: lua in box.runtime.info() */
+
+#include "small/small.h"
+#include "small/quota.h"
+#include "memory.h"
+#include "box/engine.h"
+#include "box/memtx_engine.h"
+#include "box/small_allocator.h"
+
+static int
+small_stats_noop_cb(const struct mempool_stats *stats, void *cb_ctx)
+{
+	(void) stats;
+	(void) cb_ctx;
+	return 0;
+}
+
+static int
+small_stats_lua_cb(const struct mempool_stats *stats, void *cb_ctx)
+{
+	/** Don't publish information about empty slabs. */
+	if (stats->slabcount == 0)
+		return 0;
+
+	struct lua_State *L = (struct lua_State *) cb_ctx;
+
+	/*
+	 * Create a Lua table for every slab class. A class is
+	 * defined by its item size.
+	 */
+	/** Assign next slab size to the next member of an array. */
+	lua_pushnumber(L, lua_objlen(L, -1) + 1);
+	lua_newtable(L);
+	/**
+	 * This is in fact only to force YaML flow "compact" for this
+	 * table.
+	 */
+	luaL_setmaphint(L, -1);
+
+	lua_pushstring(L, "mem_used");
+	luaL_pushuint64(L, stats->totals.used);
+	lua_settable(L, -3);
+
+	lua_pushstring(L, "slab_size");
+	luaL_pushuint64(L, stats->slabsize);
+	lua_settable(L, -3);
+
+	lua_pushstring(L, "mem_free");
+	luaL_pushuint64(L, stats->totals.total - stats->totals.used);
+	lua_settable(L, -3);
+
+	lua_pushstring(L, "item_size");
+	luaL_pushuint64(L, stats->objsize);
+	lua_settable(L, -3);
+
+	lua_pushstring(L, "slab_count");
+	luaL_pushuint64(L, stats->slabcount);
+	lua_settable(L, -3);
+
+	lua_pushstring(L, "item_count");
+	luaL_pushuint64(L, stats->objcount);
+	lua_settable(L, -3);
+
+	lua_settable(L, -3);
+	return 0;
+}
+
+template <class allocator_stats, class cb_stats, class Allocator,
+	  int (*stats_cb)(const cb_stats *stats, void *cb_ctx)>
+static int
+lbox_slab_stats(struct lua_State *L)
+{
+	struct memtx_engine *memtx;
+	memtx = (struct memtx_engine *)engine_by_name("memtx");
+
+	allocator_stats totals;
+	lua_newtable(L);
+	/*
+	 * List all slabs used for tuples and slabs used for
+	 * indexes, with their stats.
+	 */
+	Allocator::stats(&totals, stats_cb, L);
+	struct mempool_stats index_stats;
+	mempool_stats(&memtx->index_extent_pool, &index_stats);
+	stats_cb(&index_stats, L);
+
+	return 1;
+}
+
+template <class allocator_stats, class cb_stats, class Allocator,
+	  int (*stats_cb)(const cb_stats *stats, void *cb_ctx)>
+static int
+lbox_slab_info(struct lua_State *L)
+{
+	struct memtx_engine *memtx;
+	memtx = (struct memtx_engine *)engine_by_name("memtx");
+
+	allocator_stats totals;
+
+	/*
+	 * List all slabs used for tuples and slabs used for
+	 * indexes, with their stats.
+	 */
+	lua_newtable(L);
+	Allocator::stats(&totals, stats_cb, L);
+	struct mempool_stats index_stats;
+	mempool_stats(&memtx->index_extent_pool, &index_stats);
+
+	double ratio;
+	char ratio_buf[32];
+
+	ratio = 100 * ((double) totals.used
+		/ ((double) totals.total + 0.0001));
+	snprintf(ratio_buf, sizeof(ratio_buf), "%0.2lf%%", ratio);
+
+	/** How much address space has been already touched */
+	lua_pushstring(L, "items_size");
+	luaL_pushuint64(L, totals.total);
+	lua_settable(L, -3);
+	/**
+	 * How much of this formatted address space is used for
+	 * actual data.
+	 */
+	lua_pushstring(L, "items_used");
+	luaL_pushuint64(L, totals.used);
+	lua_settable(L, -3);
+
+	/*
+	 * Fragmentation factor for tuples. Don't account for indexes:
+	 * even if they are fragmented, there is nothing people
+	 * can do about it.
+	 */
+	lua_pushstring(L, "items_used_ratio");
+	lua_pushstring(L, ratio_buf);
+	lua_settable(L, -3);
+
+	/** How much address space has been already touched
+	 * (tuples and indexes) */
+	lua_pushstring(L, "arena_size");
+	/*
+	 * We could use totals.total + index_stats.total here,
+	 * but this would not account for slabs which are sitting
+	 * in slab cache or in the arena, available for reuse.
+	 * Make sure a simple formula:
+	 * items_used_ratio > 0.9 && arena_used_ratio > 0.9 &&
+	 * quota_used_ratio > 0.9 works as an indicator
+	 * of reaching the Tarantool memory limit.
+	 */
+	size_t arena_size = memtx->arena.used;
+	luaL_pushuint64(L, arena_size);
+	lua_settable(L, -3);
+	/**
+	 * How much of this formatted address space is used for
+	 * data (tuples and indexes).
+	 */
+	lua_pushstring(L, "arena_used");
+	luaL_pushuint64(L, totals.used + index_stats.totals.used);
+	lua_settable(L, -3);
+
+	ratio = 100 * ((double) (totals.used + index_stats.totals.used)
+		       / (double) arena_size);
+	snprintf(ratio_buf, sizeof(ratio_buf), "%0.1lf%%", ratio);
+
+	lua_pushstring(L, "arena_used_ratio");
+	lua_pushstring(L, ratio_buf);
+	lua_settable(L, -3);
+
+	/*
+	 * This is pretty much the same as
+	 * box.cfg.slab_alloc_arena, but in bytes
+	 */
+	lua_pushstring(L, "quota_size");
+	luaL_pushuint64(L, quota_total(&memtx->quota));
+	lua_settable(L, -3);
+
+	/*
+	 * How much quota has been booked - reflects the total
+	 * size of slabs in various slab caches.
+	 */
+	lua_pushstring(L, "quota_used");
+	luaL_pushuint64(L, quota_used(&memtx->quota));
+	lua_settable(L, -3);
+
+	/**
+	 * This should be the same as arena_size/arena_used; however,
+	 * don't trust totals in the most important monitoring
+	 * factor: it's the quota that gives you an OOM error at the
+	 * end of the day.
+	 */
+	ratio = 100 * ((double) quota_used(&memtx->quota) /
+		 ((double) quota_total(&memtx->quota) + 0.0001));
+	snprintf(ratio_buf, sizeof(ratio_buf), "%0.2lf%%", ratio);
+
+	lua_pushstring(L, "quota_used_ratio");
+	lua_pushstring(L, ratio_buf);
+	lua_settable(L, -3);
+
+	return 1;
+}
+
+template <class Allocator>
+static int
+lbox_slab_check(MAYBE_UNUSED struct lua_State *L)
+{
+	Allocator::memory_check();
+	return 0;
+}
+
+template <class allocator_stats, class cb_stats, class Allocator,
+	  int (*stats_cb_info)(const cb_stats *stats, void *cb_ctx),
+	  int (*stats_cb_stats)(const cb_stats *stats, void *cb_ctx)>
+static void
+box_lua_slab_init(struct lua_State *L)
+{
+	lua_pushstring(L, "info");
+	lua_pushcfunction(L, (lbox_slab_info<allocator_stats,
+		cb_stats, Allocator, stats_cb_info>));
+	lua_settable(L, -3);
+
+	lua_pushstring(L, "stats");
+	lua_pushcfunction(L, (lbox_slab_stats<allocator_stats,
+		cb_stats, Allocator, stats_cb_stats>));
+	lua_settable(L, -3);
+
+	lua_pushstring(L, "check");
+	lua_pushcfunction(L, lbox_slab_check<Allocator>);
+	lua_settable(L, -3);
+}
+
+/** Initialize box.slab package. */
+void
+box_lua_slab_init(struct lua_State *L)
+{
+	struct memtx_engine *memtx;
+	memtx = (struct memtx_engine *)engine_by_name("memtx");
+
+	lua_getfield(L, LUA_GLOBALSINDEX, "box");
+	lua_pushstring(L, "slab");
+	lua_newtable(L);
+
+	switch(memtx->allocator_type) {
+	case MEMTX_SMALL_ALLOCATOR:
+		box_lua_slab_init<struct small_stats,
+			struct mempool_stats, SmallAllocator,
+			small_stats_noop_cb, small_stats_lua_cb>(L);
+		break;
+	default:
+		;
+	}
+
+	lua_settable(L, -3); /* box.slab */
+
+	lua_pop(L, 1); /* box. */
+}
diff --git a/src/box/lua/slab.h b/src/box/lua/slab.h
index fd4ef8893..41280343f 100644
--- a/src/box/lua/slab.h
+++ b/src/box/lua/slab.h
@@ -35,6 +35,7 @@ extern "C" {
 #endif /* defined(__cplusplus) */
 
 struct lua_State;
+void box_lua_slab_runtime_init(struct lua_State *L);
 void box_lua_slab_init(struct lua_State *L);
 
 #if defined(__cplusplus)
diff --git a/src/box/memtx_engine.cc b/src/box/memtx_engine.cc
index 520a221dd..48c0f13d0 100644
--- a/src/box/memtx_engine.cc
+++ b/src/box/memtx_engine.cc
@@ -50,10 +50,20 @@
 #include "schema.h"
 #include "gc.h"
 #include "raft.h"
+#include "small_allocator.h"
 
 /* sync snapshot every 16MB */
 #define SNAP_SYNC_INTERVAL	(1 << 24)
 
+#define MEMXT_TUPLE_FORMAT_VTAB(Allocator)				\
+memtx_tuple_format_vtab.tuple_delete = memtx_tuple_delete<Allocator>;	\
+memtx_tuple_format_vtab.tuple_new = memtx_tuple_new<Allocator>;		\
+memtx_tuple_format_vtab.tuple_chunk_delete =				\
+	metmx_tuple_chunk_delete<Allocator>;				\
+memtx_tuple_format_vtab.tuple_chunk_new =				\
+	memtx_tuple_chunk_new<Allocator>;
+
+
 static void
 checkpoint_cancel(struct checkpoint *ckpt);
 
@@ -141,8 +151,13 @@ memtx_engine_shutdown(struct engine *engine)
 		mempool_destroy(&memtx->rtree_iterator_pool);
 	mempool_destroy(&memtx->index_extent_pool);
 	slab_cache_destroy(&memtx->index_slab_cache);
-	small_alloc_destroy(&memtx->alloc);
-	slab_cache_destroy(&memtx->slab_cache);
+	switch (memtx->allocator_type) {
+	case MEMTX_SMALL_ALLOCATOR:
+		SmallAllocator::destroy();
+		break;
+	default:
+		;
+	}
 	tuple_arena_destroy(&memtx->arena);
 	xdir_destroy(&memtx->snap_dir);
 	free(memtx);
@@ -979,19 +994,21 @@ small_stats_noop_cb(const struct mempool_stats *stats, void *cb_ctx)
 	return 0;
 }
 
+template <class allocator_stats, class cb_stats, class Allocator,
+	  int (*stats_cb)(const cb_stats *stats, void *cb_ctx)>
 static void
 memtx_engine_memory_stat(struct engine *engine, struct engine_memory_stat *stat)
 {
 	struct memtx_engine *memtx = (struct memtx_engine *)engine;
-	struct small_stats data_stats;
+	allocator_stats data_stats;
 	struct mempool_stats index_stats;
 	mempool_stats(&memtx->index_extent_pool, &index_stats);
-	small_stats(&memtx->alloc, &data_stats, small_stats_noop_cb, NULL);
+	Allocator::stats(&data_stats, stats_cb, NULL);
 	stat->data += data_stats.used;
 	stat->index += index_stats.totals.used;
 }
 
-static const struct engine_vtab memtx_engine_vtab = {
+static struct engine_vtab memtx_engine_vtab = {
 	/* .shutdown = */ memtx_engine_shutdown,
 	/* .create_space = */ memtx_engine_create_space,
 	/* .prepare_join = */ memtx_engine_prepare_join,
@@ -1014,7 +1031,7 @@ static const struct engine_vtab memtx_engine_vtab = {
 	/* .abort_checkpoint = */ memtx_engine_abort_checkpoint,
 	/* .collect_garbage = */ memtx_engine_collect_garbage,
 	/* .backup = */ memtx_engine_backup,
-	/* .memory_stat = */ memtx_engine_memory_stat,
+	/* .memory_stat = */ nullptr,
 	/* .reset_stat = */ generic_engine_reset_stat,
 	/* .check_space_def = */ generic_engine_check_space_def,
 };
@@ -1064,7 +1081,7 @@ memtx_engine_gc_f(va_list va)
 struct memtx_engine *
 memtx_engine_new(const char *snap_dirname, bool force_recovery,
 		 uint64_t tuple_arena_max_size, uint32_t objsize_min,
-		 bool dontdump, float alloc_factor)
+		 bool dontdump, const char *allocator, float alloc_factor)
 {
 	int64_t snap_signature;
 	struct memtx_engine *memtx = (struct memtx_engine *)calloc(1, sizeof(*memtx));
@@ -1074,6 +1091,21 @@ memtx_engine_new(const char *snap_dirname, bool force_recovery,
 		return NULL;
 	}
 
+	assert(allocator != NULL);
+	if (!strcmp(allocator, "small")) {
+		memtx->allocator_type = MEMTX_SMALL_ALLOCATOR;
+		MEMXT_TUPLE_FORMAT_VTAB(SmallAllocator)
+		memtx_engine_vtab.memory_stat =
+			memtx_engine_memory_stat<struct small_stats,
+						 struct mempool_stats,
+						 SmallAllocator,
+						 small_stats_noop_cb>;
+	} else {
+		diag_set(IllegalParams, "Invalid memory allocator name");
+		free(memtx);
+		return NULL;
+	}
+
 	xdir_create(&memtx->snap_dir, snap_dirname, SNAP, &INSTANCE_UUID,
 		    &xlog_opts_default);
 	memtx->snap_dir.force_recovery = force_recovery;
@@ -1131,12 +1163,17 @@ memtx_engine_new(const char *snap_dirname, bool force_recovery,
 	quota_init(&memtx->quota, tuple_arena_max_size);
 	tuple_arena_create(&memtx->arena, &memtx->quota, tuple_arena_max_size,
 			   SLAB_SIZE, dontdump, "memtx");
-	slab_cache_create(&memtx->slab_cache, &memtx->arena);
-	float actual_alloc_factor;
-	small_alloc_create(&memtx->alloc, &memtx->slab_cache,
-			   objsize_min, alloc_factor, &actual_alloc_factor);
-	say_info("Actual slab_alloc_factor calculated on the basis of desired "
-		 "slab_alloc_factor = %f", actual_alloc_factor);
+
+	switch (memtx->allocator_type) {
+	case MEMTX_SMALL_ALLOCATOR:
+		float actual_alloc_factor;
+		SmallAllocator::create(&memtx->arena, objsize_min, alloc_factor, &actual_alloc_factor);
+		say_info("Actual slab_alloc_factor calculated on the basis of desired "
+			 "slab_alloc_factor = %f", actual_alloc_factor);
+		break;
+	default:
+		;
+	}
 
 	/* Initialize index extent allocator. */
 	slab_cache_create(&memtx->index_slab_cache, &memtx->arena);
@@ -1200,18 +1237,33 @@ void
 memtx_enter_delayed_free_mode(struct memtx_engine *memtx)
 {
 	memtx->snapshot_version++;
-	if (memtx->delayed_free_mode++ == 0)
-		small_alloc_setopt(&memtx->alloc, SMALL_DELAYED_FREE_MODE, true);
+	if (memtx->delayed_free_mode++ == 0) {
+		switch (memtx->allocator_type) {
+		case MEMTX_SMALL_ALLOCATOR:
+			SmallAllocator::enter_delayed_free_mode();
+			break;
+		default:
+			;
+		}
+	}
 }
 
 void
 memtx_leave_delayed_free_mode(struct memtx_engine *memtx)
 {
 	assert(memtx->delayed_free_mode > 0);
-	if (--memtx->delayed_free_mode == 0)
-		small_alloc_setopt(&memtx->alloc, SMALL_DELAYED_FREE_MODE, false);
+	if (--memtx->delayed_free_mode == 0) {
+		switch (memtx->allocator_type) {
+		case MEMTX_SMALL_ALLOCATOR:
+			SmallAllocator::leave_delayed_free_mode();
+			break;
+		default:
+			;
+		}
+	}
 }
 
+template <class Allocator>
 struct tuple *
 memtx_tuple_new(struct tuple_format *format, const char *data, const char *end)
 {
@@ -1256,7 +1308,7 @@ memtx_tuple_new(struct tuple_format *format, const char *data, const char *end)
 
 	struct memtx_tuple *memtx_tuple;
 	while ((memtx_tuple = (struct memtx_tuple *)
-		smalloc(&memtx->alloc, total)) == NULL) {
+		Allocator::alloc(total)) == NULL) {
 		bool stop;
 		memtx_engine_run_gc(memtx, &stop);
 		if (stop)
@@ -1284,6 +1336,7 @@ end:
 	return tuple;
 }
 
+template <class Allocator>
 void
 memtx_tuple_delete(struct tuple_format *format, struct tuple *tuple)
 {
@@ -1293,34 +1346,35 @@ memtx_tuple_delete(struct tuple_format *format, struct tuple *tuple)
 	struct memtx_tuple *memtx_tuple =
 		container_of(tuple, struct memtx_tuple, base);
 	size_t total = tuple_size(tuple) + offsetof(struct memtx_tuple, base);
-	if (memtx->alloc.free_mode != SMALL_DELAYED_FREE ||
-	    memtx_tuple->version == memtx->snapshot_version ||
+	if (memtx_tuple->version == memtx->snapshot_version ||
 	    format->is_temporary)
-		smfree(&memtx->alloc, memtx_tuple, total);
+		Allocator::free(memtx_tuple, total);
 	else
-		smfree_delayed(&memtx->alloc, memtx_tuple, total);
+		Allocator::free_delayed(memtx_tuple, total);
 	tuple_format_unref(format);
 }
 
+template <class Allocator>
 void
 metmx_tuple_chunk_delete(struct tuple_format *format, const char *data)
 {
-	struct memtx_engine *memtx = (struct memtx_engine *)format->engine;
+	(void)format;
 	struct tuple_chunk *tuple_chunk =
 		container_of((const char (*)[0])data,
 			     struct tuple_chunk, data);
 	uint32_t sz = tuple_chunk_sz(tuple_chunk->data_sz);
-	smfree(&memtx->alloc, tuple_chunk, sz);
+	Allocator::free(tuple_chunk, sz);
 }
 
+template <class Allocator>
 const char *
 memtx_tuple_chunk_new(struct tuple_format *format, struct tuple *tuple,
 		      const char *data, uint32_t data_sz)
 {
-	struct memtx_engine *memtx = (struct memtx_engine *)format->engine;
+	(void)format;
 	uint32_t sz = tuple_chunk_sz(data_sz);
 	struct tuple_chunk *tuple_chunk =
-		(struct tuple_chunk *) smalloc(&memtx->alloc, sz);
+		(struct tuple_chunk *) Allocator::alloc(sz);
 	if (tuple == NULL) {
 		diag_set(OutOfMemory, sz, "smalloc", "tuple");
 		return NULL;
@@ -1330,12 +1384,7 @@ memtx_tuple_chunk_new(struct tuple_format *format, struct tuple *tuple,
 	return tuple_chunk->data;
 }
 
-struct tuple_format_vtab memtx_tuple_format_vtab = {
-	memtx_tuple_delete,
-	memtx_tuple_new,
-	metmx_tuple_chunk_delete,
-	memtx_tuple_chunk_new,
-};
+struct tuple_format_vtab memtx_tuple_format_vtab;
 
 /**
  * Allocate a block of size MEMTX_EXTENT_SIZE for memtx index
diff --git a/src/box/memtx_engine.h b/src/box/memtx_engine.h
index 8b380bf3c..6edb8b373 100644
--- a/src/box/memtx_engine.h
+++ b/src/box/memtx_engine.h
@@ -99,6 +99,11 @@ enum memtx_reserve_extents_num {
 	RESERVE_EXTENTS_BEFORE_REPLACE = 16
 };
 
+enum memtx_allocator_type {
+	MEMTX_SMALL_ALLOCATOR,
+	MEMTX_SYSTEM_ALLOCATOR,
+};
+
 /**
  * The size of the biggest memtx iterator. Used with
  * mempool_create. This is the size of the block that will be
@@ -133,10 +138,6 @@ struct memtx_engine {
 	 * is reflected in box.slab.info(), @sa lua/slab.c.
 	 */
 	struct slab_arena arena;
-	/** Slab cache for allocating tuples. */
-	struct slab_cache slab_cache;
-	/** Tuple allocator. */
-	struct small_alloc alloc;
 	/** Slab cache for allocating index extents. */
 	struct slab_cache index_slab_cache;
 	/** Index extent allocator. */
@@ -178,6 +179,10 @@ struct memtx_engine {
 	 * memtx_gc_task::link.
 	 */
 	struct stailq gc_queue;
+	/**
+	 * Type of memtx allocator.
+	 */
+	enum memtx_allocator_type allocator_type;
 };
 
 struct memtx_gc_task;
@@ -213,7 +218,7 @@ struct memtx_engine *
 memtx_engine_new(const char *snap_dirname, bool force_recovery,
 		 uint64_t tuple_arena_max_size,
 		 uint32_t objsize_min, bool dontdump,
-		 float alloc_factor);
+		 const char *allocator, float alloc_factor);
 
 int
 memtx_engine_recover_snapshot(struct memtx_engine *memtx,
@@ -238,6 +243,9 @@ memtx_engine_set_max_tuple_size(struct memtx_engine *memtx, size_t max_size);
 void
 memtx_enter_delayed_free_mode(struct memtx_engine *memtx);
 
+/** Tuple format vtab for memtx engine. */
+extern struct tuple_format_vtab memtx_tuple_format_vtab;
+
 /**
  * Leave tuple delayed free mode. This function undoes the effect
  * of memtx_enter_delayed_free_mode().
@@ -245,17 +253,6 @@ memtx_enter_delayed_free_mode(struct memtx_engine *memtx);
 void
 memtx_leave_delayed_free_mode(struct memtx_engine *memtx);
 
-/** Allocate a memtx tuple. @sa tuple_new(). */
-struct tuple *
-memtx_tuple_new(struct tuple_format *format, const char *data, const char *end);
-
-/** Free a memtx tuple. @sa tuple_delete(). */
-void
-memtx_tuple_delete(struct tuple_format *format, struct tuple *tuple);
-
-/** Tuple format vtab for memtx engine. */
-extern struct tuple_format_vtab memtx_tuple_format_vtab;
-
 enum {
 	MEMTX_EXTENT_SIZE = 16 * 1024,
 	MEMTX_SLAB_SIZE = 4 * 1024 * 1024
@@ -294,18 +291,38 @@ memtx_index_def_change_requires_rebuild(struct index *index,
 } /* extern "C" */
 
 #include "diag.h"
+#include "tuple_format.h"
+
+/** Allocate a memtx tuple. @sa tuple_new(). */
+template<class Allocator>
+struct tuple *
+memtx_tuple_new(struct tuple_format *format, const char *data, const char *end);
+
+/** Free a memtx tuple. @sa tuple_delete(). */
+template<class Allocator>
+void
+memtx_tuple_delete(struct tuple_format *format, struct tuple *tuple);
+
+template <class Allocator>
+const char *
+memtx_tuple_chunk_new(MAYBE_UNUSED struct tuple_format *format, struct tuple *tuple,
+		      const char *data, uint32_t data_sz);
+
+template <class Allocator>
+void
+metmx_tuple_chunk_delete(MAYBE_UNUSED struct tuple_format *format, const char *data);
 
 static inline struct memtx_engine *
 memtx_engine_new_xc(const char *snap_dirname, bool force_recovery,
 		    uint64_t tuple_arena_max_size,
 		    uint32_t objsize_min, bool dontdump,
-		    float alloc_factor)
+		    const char *allocator, float alloc_factor)
 {
 	struct memtx_engine *memtx;
 	memtx = memtx_engine_new(snap_dirname, force_recovery,
 				 tuple_arena_max_size,
 				 objsize_min, dontdump,
-				 alloc_factor);
+				 allocator, alloc_factor);
 	if (memtx == NULL)
 		diag_raise();
 	return memtx;
diff --git a/src/box/memtx_space.cc b/src/box/memtx_space.cc
index e46e4eaeb..932b3af16 100644
--- a/src/box/memtx_space.cc
+++ b/src/box/memtx_space.cc
@@ -320,6 +320,7 @@ dup_replace_mode(uint32_t op)
 	return op == IPROTO_INSERT ? DUP_INSERT : DUP_REPLACE_OR_INSERT;
 }
 
+template<class Allocator>
 static int
 memtx_space_execute_replace(struct space *space, struct txn *txn,
 			    struct request *request, struct tuple **result)
@@ -327,8 +328,8 @@ memtx_space_execute_replace(struct space *space, struct txn *txn,
 	struct memtx_space *memtx_space = (struct memtx_space *)space;
 	struct txn_stmt *stmt = txn_current_stmt(txn);
 	enum dup_replace_mode mode = dup_replace_mode(request->type);
-	stmt->new_tuple = memtx_tuple_new(space->format, request->tuple,
-					  request->tuple_end);
+	stmt->new_tuple = memtx_tuple_new<Allocator>(space->format,
+					request->tuple, request->tuple_end);
 	if (stmt->new_tuple == NULL)
 		return -1;
 	tuple_ref(stmt->new_tuple);
@@ -378,6 +379,7 @@ memtx_space_execute_delete(struct space *space, struct txn *txn,
 	return 0;
 }
 
+template<class Allocator>
 static int
 memtx_space_execute_update(struct space *space, struct txn *txn,
 			   struct request *request, struct tuple **result)
@@ -412,7 +414,7 @@ memtx_space_execute_update(struct space *space, struct txn *txn,
 	if (new_data == NULL)
 		return -1;
 
-	stmt->new_tuple = memtx_tuple_new(format, new_data,
+	stmt->new_tuple = memtx_tuple_new<Allocator>(format, new_data,
 					  new_data + new_size);
 	if (stmt->new_tuple == NULL)
 		return -1;
@@ -428,6 +430,7 @@ memtx_space_execute_update(struct space *space, struct txn *txn,
 	return 0;
 }
 
+template<class Allocator>
 static int
 memtx_space_execute_upsert(struct space *space, struct txn *txn,
 			   struct request *request)
@@ -483,7 +486,7 @@ memtx_space_execute_upsert(struct space *space, struct txn *txn,
 					  format, request->index_base) != 0) {
 			return -1;
 		}
-		stmt->new_tuple = memtx_tuple_new(format, request->tuple,
+		stmt->new_tuple = memtx_tuple_new<Allocator>(format, request->tuple,
 						  request->tuple_end);
 		if (stmt->new_tuple == NULL)
 			return -1;
@@ -507,7 +510,7 @@ memtx_space_execute_upsert(struct space *space, struct txn *txn,
 		if (new_data == NULL)
 			return -1;
 
-		stmt->new_tuple = memtx_tuple_new(format, new_data,
+		stmt->new_tuple = memtx_tuple_new<Allocator>(format, new_data,
 						  new_data + new_size);
 		if (stmt->new_tuple == NULL)
 			return -1;
@@ -554,19 +557,20 @@ memtx_space_execute_upsert(struct space *space, struct txn *txn,
  * destroyed space may lead to undefined behaviour. For this reason it
  * doesn't take txn as an argument.
  */
+template<class Allocator>
 static int
 memtx_space_ephemeral_replace(struct space *space, const char *tuple,
 				      const char *tuple_end)
 {
 	struct memtx_space *memtx_space = (struct memtx_space *)space;
-	struct tuple *new_tuple = memtx_tuple_new(space->format, tuple,
+	struct tuple *new_tuple = memtx_tuple_new<Allocator>(space->format, tuple,
 						  tuple_end);
 	if (new_tuple == NULL)
 		return -1;
 	struct tuple *old_tuple;
 	if (memtx_space->replace(space, NULL, new_tuple,
 				 DUP_REPLACE_OR_INSERT, &old_tuple) != 0) {
-		memtx_tuple_delete(space->format, new_tuple);
+		memtx_tuple_delete<Allocator>(space->format, new_tuple);
 		return -1;
 	}
 	if (old_tuple != NULL)
@@ -1166,28 +1170,31 @@ memtx_space_prepare_alter(struct space *old_space, struct space *new_space)
 
 /* }}} DDL */
 
-static const struct space_vtab memtx_space_vtab = {
-	/* .destroy = */ memtx_space_destroy,
-	/* .bsize = */ memtx_space_bsize,
-	/* .execute_replace = */ memtx_space_execute_replace,
-	/* .execute_delete = */ memtx_space_execute_delete,
-	/* .execute_update = */ memtx_space_execute_update,
-	/* .execute_upsert = */ memtx_space_execute_upsert,
-	/* .ephemeral_replace = */ memtx_space_ephemeral_replace,
-	/* .ephemeral_delete = */ memtx_space_ephemeral_delete,
-	/* .ephemeral_rowid_next = */ memtx_space_ephemeral_rowid_next,
-	/* .init_system_space = */ memtx_init_system_space,
-	/* .init_ephemeral_space = */ memtx_init_ephemeral_space,
-	/* .check_index_def = */ memtx_space_check_index_def,
-	/* .create_index = */ memtx_space_create_index,
-	/* .add_primary_key = */ memtx_space_add_primary_key,
-	/* .drop_primary_key = */ memtx_space_drop_primary_key,
-	/* .check_format  = */ memtx_space_check_format,
-	/* .build_index = */ memtx_space_build_index,
-	/* .swap_index = */ generic_space_swap_index,
-	/* .prepare_alter = */ memtx_space_prepare_alter,
-	/* .invalidate = */ generic_space_invalidate,
+struct SmallAllocator;
+#define MEMTX_SPACE_VTAB(Allocator, allocator)					\
+static const struct space_vtab memtx_space_vtab_##allocator = { 		\
+	/* .destroy = */ memtx_space_destroy,					\
+	/* .bsize = */ memtx_space_bsize,					\
+	/* .execute_replace = */ memtx_space_execute_replace<Allocator>,	\
+	/* .execute_delete = */ memtx_space_execute_delete,			\
+	/* .execute_update = */ memtx_space_execute_update<Allocator>,		\
+	/* .execute_upsert = */ memtx_space_execute_upsert<Allocator>,		\
+	/* .ephemeral_replace = */ memtx_space_ephemeral_replace<Allocator>,	\
+	/* .ephemeral_delete = */ memtx_space_ephemeral_delete,			\
+	/* .ephemeral_rowid_next = */ memtx_space_ephemeral_rowid_next,		\
+	/* .init_system_space = */ memtx_init_system_space,			\
+	/* .init_ephemeral_space = */ memtx_init_ephemeral_space,		\
+	/* .check_index_def = */ memtx_space_check_index_def,			\
+	/* .create_index = */ memtx_space_create_index, 			\
+	/* .add_primary_key = */ memtx_space_add_primary_key,   		\
+	/* .drop_primary_key = */ memtx_space_drop_primary_key, 		\
+	/* .check_format  = */ memtx_space_check_format,			\
+	/* .build_index = */ memtx_space_build_index,   			\
+	/* .swap_index = */ generic_space_swap_index,   			\
+	/* .prepare_alter = */ memtx_space_prepare_alter,			\
+	/* .invalidate = */ generic_space_invalidate,   			\
 };
+MEMTX_SPACE_VTAB(SmallAllocator, small)
 
 struct space *
 memtx_space_new(struct memtx_engine *memtx,
@@ -1219,8 +1226,18 @@ memtx_space_new(struct memtx_engine *memtx,
 	}
 	tuple_format_ref(format);
 
+	const struct space_vtab *vtab;
+	switch (memtx->allocator_type) {
+	case MEMTX_SMALL_ALLOCATOR:
+		vtab = &memtx_space_vtab_small;
+		break;
+	default:
+		tuple_format_unref(format);
+		free(memtx_space);
+		return NULL;
+	}
 	if (space_create((struct space *)memtx_space, (struct engine *)memtx,
-			 &memtx_space_vtab, def, key_list, format) != 0) {
+			  vtab, def, key_list, format) != 0) {
 		tuple_format_unref(format);
 		free(memtx_space);
 		return NULL;
diff --git a/src/box/small_allocator.cc b/src/box/small_allocator.cc
new file mode 100644
index 000000000..e6b21c355
--- /dev/null
+++ b/src/box/small_allocator.cc
@@ -0,0 +1,74 @@
+/*
+ * Copyright 2010-2020, Tarantool AUTHORS, please see AUTHORS file.
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above
+ *    copyright notice, this list of conditions and the
+ *    following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above
+ *    copyright notice, this list of conditions and the following
+ *    disclaimer in the documentation and/or other materials
+ *    provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY <COPYRIGHT HOLDER> ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
+ * <COPYRIGHT HOLDER> OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
+ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
+ * THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+#include "small_allocator.h"
+
+void
+SmallAllocator::create(struct slab_arena *arena,
+		uint32_t objsize_min, float alloc_factor, float *actual_alloc_factor)
+{
+	slab_cache_create(&slab_cache, arena);
+	small_alloc_create(&small_alloc, &slab_cache,
+			   objsize_min, alloc_factor, actual_alloc_factor);
+}
+
+void
+SmallAllocator::destroy(void)
+{
+	small_alloc_destroy(&small_alloc);
+	slab_cache_destroy(&slab_cache);
+}
+
+void
+SmallAllocator::enter_delayed_free_mode(void)
+{
+	small_alloc_setopt(&small_alloc, SMALL_DELAYED_FREE_MODE, true);
+}
+
+void
+SmallAllocator::leave_delayed_free_mode(void)
+{
+	small_alloc_setopt(&small_alloc, SMALL_DELAYED_FREE_MODE, false);
+}
+
+void
+SmallAllocator::stats(struct small_stats *stats, mempool_stats_cb cb, void *cb_ctx)
+{
+	small_stats(&small_alloc, stats, cb, cb_ctx);
+}
+
+void
+SmallAllocator::memory_check(void)
+{
+	slab_cache_check(&slab_cache);
+}
+
+struct small_alloc SmallAllocator::small_alloc;
+struct slab_cache SmallAllocator::slab_cache;
diff --git a/src/box/small_allocator.h b/src/box/small_allocator.h
new file mode 100644
index 000000000..f6aa5a069
--- /dev/null
+++ b/src/box/small_allocator.h
@@ -0,0 +1,58 @@
+#pragma once
+/*
+ * Copyright 2010-2020, Tarantool AUTHORS, please see AUTHORS file.
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above
+ *    copyright notice, this list of conditions and the
+ *    following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above
+ *    copyright notice, this list of conditions and the following
+ *    disclaimer in the documentation and/or other materials
+ *    provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY <COPYRIGHT HOLDER> ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
+ * <COPYRIGHT HOLDER> OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
+ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
+ * THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+#include <small/small.h>
+
+struct SmallAllocator
+{
+	static void create(struct slab_arena *arena,
+		uint32_t objsize_min, float alloc_factor,
+		float *actual_alloc_factor);
+	static void destroy(void);
+	static void enter_delayed_free_mode(void);
+	static void leave_delayed_free_mode(void);
+	static void stats(struct small_stats *stats, mempool_stats_cb cb, void *cb_ctx);
+	static void memory_check(void);
+	static inline void *alloc(size_t size) {
+		return smalloc(&small_alloc, size);
+	};
+	static inline void free(void *ptr, size_t size) {
+		smfree(&small_alloc, ptr, size);
+	}
+	static inline void free_delayed(void *ptr, size_t size) {
+		smfree_delayed(&small_alloc, ptr, size);
+	}
+
+	/** Tuple allocator. */
+	static struct small_alloc small_alloc;
+	/** Slab cache for allocating tuples. */
+	static struct slab_cache slab_cache;
+};
diff --git a/test/app-tap/init_script.result b/test/app-tap/init_script.result
index 16c5b01d2..cd5218e61 100644
--- a/test/app-tap/init_script.result
+++ b/test/app-tap/init_script.result
@@ -3,6 +3,7 @@
 --
 
 box.cfg
+allocator:small
 background:false
 checkpoint_count:2
 checkpoint_interval:3600
diff --git a/test/box/admin.result b/test/box/admin.result
index 05debe673..ecea53957 100644
--- a/test/box/admin.result
+++ b/test/box/admin.result
@@ -27,7 +27,9 @@ help()
 ...
 cfg_filter(box.cfg)
 ---
-- - - background
+- - - allocator
+    - small
+  - - background
     - false
   - - checkpoint_count
     - 2
diff --git a/test/box/cfg.result b/test/box/cfg.result
index 22a720c2c..16b321008 100644
--- a/test/box/cfg.result
+++ b/test/box/cfg.result
@@ -15,7 +15,9 @@ box.cfg.nosuchoption = 1
  | ...
 cfg_filter(box.cfg)
  | ---
- | - - - background
+ | - - - allocator
+ |     - small
+ |   - - background
  |     - false
  |   - - checkpoint_count
  |     - 2
@@ -130,7 +132,9 @@ box.cfg()
  | ...
 cfg_filter(box.cfg)
  | ---
- | - - - background
+ | - - - allocator
+ |     - small
+ |   - - background
  |     - false
  |   - - checkpoint_count
  |     - 2
-- 
2.20.1


* [Tarantool-patches] [PATCH 4/4] Implement system allocator, based on malloc
  2020-12-29 11:03 [Tarantool-patches] [PATCH 0/4] Choose allocator mor memtx mechanik20051988
                   ` (2 preceding siblings ...)
  2020-12-29 11:03 ` [Tarantool-patches] [PATCH 3/4] memtx: implement api for memory allocator selection mechanik20051988
@ 2020-12-29 11:03 ` mechanik20051988
  2021-01-10 13:56   ` Vladislav Shpilevoy via Tarantool-patches
  2021-01-10 13:55 ` [Tarantool-patches] [PATCH 0/4] Choose allocator mor memtx Vladislav Shpilevoy via Tarantool-patches
  2021-01-10 14:36 ` Vladislav Shpilevoy via Tarantool-patches
  5 siblings, 1 reply; 9+ messages in thread
From: mechanik20051988 @ 2020-12-29 11:03 UTC (permalink / raw)
  To: v.shpilevoy, alyapunov; +Cc: tarantool-patches

---
 src/box/CMakeLists.txt                   |   2 +
 src/box/lua/slab.cc                      |  44 ++++-
 src/box/memtx_engine.cc                  |  30 +++-
 src/box/memtx_space.cc                   |   6 +-
 src/box/sysalloc.c                       | 210 +++++++++++++++++++++++
 src/box/sysalloc.h                       | 145 ++++++++++++++++
 src/box/system_allocator.cc              |  68 ++++++++
 src/box/system_allocator.h               |  54 ++++++
 test/box/choose_memtx_allocator.lua      |   9 +
 test/box/choose_memtx_allocator.result   | 139 +++++++++++++++
 test/box/choose_memtx_allocator.test.lua |  44 +++++
 11 files changed, 741 insertions(+), 10 deletions(-)
 create mode 100644 src/box/sysalloc.c
 create mode 100644 src/box/sysalloc.h
 create mode 100644 src/box/system_allocator.cc
 create mode 100644 src/box/system_allocator.h
 create mode 100644 test/box/choose_memtx_allocator.lua
 create mode 100644 test/box/choose_memtx_allocator.result
 create mode 100644 test/box/choose_memtx_allocator.test.lua

diff --git a/src/box/CMakeLists.txt b/src/box/CMakeLists.txt
index aebf76bd4..55fb14d0a 100644
--- a/src/box/CMakeLists.txt
+++ b/src/box/CMakeLists.txt
@@ -130,6 +130,8 @@ add_library(box STATIC
     memtx_engine.cc
     memtx_space.cc
     small_allocator.cc
+    system_allocator.cc
+    sysalloc.c
     sysview.c
     blackhole.c
     service_engine.c
diff --git a/src/box/lua/slab.cc b/src/box/lua/slab.cc
index 4b247885f..c4fdd35c4 100644
--- a/src/box/lua/slab.cc
+++ b/src/box/lua/slab.cc
@@ -44,14 +44,18 @@
 #include "box/engine.h"
 #include "box/memtx_engine.h"
 #include "box/small_allocator.h"
-
-static int
-small_stats_noop_cb(const struct mempool_stats *stats, void *cb_ctx)
-{
-	(void) stats;
-	(void) cb_ctx;
-	return 0;
+#include "box/system_allocator.h"
+
+#define STATS_NOOP_CB(allocator, cb_stats)						\
+static int 										\
+allocator##_stats_noop_cb(const struct cb_stats *stats, void *cb_ctx)			\
+{											\
+	(void) stats;									\
+	(void) cb_ctx;									\
+	return 0;									\
 }
+STATS_NOOP_CB(small, mempool_stats)
+STATS_NOOP_CB(system, system_stats)
 
 static int
 small_stats_lua_cb(const struct mempool_stats *stats, void *cb_ctx)
@@ -103,6 +107,25 @@ small_stats_lua_cb(const struct mempool_stats *stats, void *cb_ctx)
 	return 0;
 }
 
+static int
+system_stats_lua_cb(const struct system_stats *stats, void *cb_ctx)
+{
+	struct lua_State *L = (struct lua_State *) cb_ctx;
+	lua_pushnumber(L, lua_objlen(L, -1) + 1);
+	lua_newtable(L);
+	luaL_setmaphint(L, -1);
+	lua_pushstring(L, "mem_used");
+	luaL_pushuint64(L, stats->used);
+	lua_settable(L, -3);
+
+	lua_pushstring(L, "mem_free");
+	luaL_pushuint64(L, stats->total - stats->used);
+	lua_settable(L, -3);
+	lua_settable(L, -3);
+	return 0;
+}
+
+
 template <class allocator_stats, class cb_stats, class Allocator,
 	  int (*stats_cb)(const cb_stats *stats, void *cb_ctx)>
 static int
@@ -120,7 +143,7 @@ lbox_slab_stats(struct lua_State *L)
 	Allocator::stats(&totals, stats_cb, L);
 	struct mempool_stats index_stats;
 	mempool_stats(&memtx->index_extent_pool, &index_stats);
-	stats_cb(&index_stats, L);
+	small_stats_lua_cb(&index_stats, L);
 
 	return 1;
 }
@@ -282,6 +305,11 @@ box_lua_slab_init(struct lua_State *L)
 			struct mempool_stats, SmallAllocator,
 			small_stats_noop_cb, small_stats_lua_cb>(L);
 		break;
+	case MEMTX_SYSTEM_ALLOCATOR:
+		box_lua_slab_init<struct system_stats,
+			struct system_stats, SystemAllocator,
+			system_stats_noop_cb, system_stats_lua_cb>(L);
+		break;
 	default:
 		;
 	}
diff --git a/src/box/memtx_engine.cc b/src/box/memtx_engine.cc
index 48c0f13d0..526187062 100644
--- a/src/box/memtx_engine.cc
+++ b/src/box/memtx_engine.cc
@@ -51,6 +51,7 @@
 #include "gc.h"
 #include "raft.h"
 #include "small_allocator.h"
+#include "system_allocator.h"
 
 /* sync snapshot every 16MB */
 #define SNAP_SYNC_INTERVAL	(1 << 24)
@@ -155,6 +156,9 @@ memtx_engine_shutdown(struct engine *engine)
 	case MEMTX_SMALL_ALLOCATOR:
 		SmallAllocator::destroy();
 		break;
+	case MEMTX_SYSTEM_ALLOCATOR:
+		SystemAllocator::destroy();
+		break;
 	default:
 		;
 	}
@@ -994,6 +997,14 @@ small_stats_noop_cb(const struct mempool_stats *stats, void *cb_ctx)
 	return 0;
 }
 
+static int
+system_stats_noop_cb(const struct system_stats *stats, void *cb_ctx)
+{
+	(void)stats;
+	(void)cb_ctx;
+	return 0;
+}
+
 template <class allocator_stats, class cb_stats, class Allocator,
 	  int (*stats_cb)(const cb_stats *stats, void *cb_ctx)>
 static void
@@ -1100,6 +1111,14 @@ memtx_engine_new(const char *snap_dirname, bool force_recovery,
 						 struct mempool_stats,
 						 SmallAllocator,
 						 small_stats_noop_cb>;
+	} else if (!strcmp(allocator, "system")) {
+		memtx->allocator_type = MEMTX_SYSTEM_ALLOCATOR;
+		MEMXT_TUPLE_FORMAT_VTAB(SystemAllocator)
+		memtx_engine_vtab.memory_stat =
+			memtx_engine_memory_stat<struct system_stats,
+						 struct system_stats,
+						 SystemAllocator,
+						 system_stats_noop_cb>;
 	} else {
 		diag_set(IllegalParams, "Invalid memory allocator name");
 		free(memtx);
@@ -1171,6 +1190,9 @@ memtx_engine_new(const char *snap_dirname, bool force_recovery,
 		say_info("Actual slab_alloc_factor calculated on the basis of desired "
 			 "slab_alloc_factor = %f", actual_alloc_factor);
 		break;
+	case MEMTX_SYSTEM_ALLOCATOR:
+		SystemAllocator::create(&memtx->arena);
+		break;
 	default:
 		;
 	}
@@ -1242,6 +1264,9 @@ memtx_enter_delayed_free_mode(struct memtx_engine *memtx)
 		case MEMTX_SMALL_ALLOCATOR:
 			SmallAllocator::enter_delayed_free_mode();
 			break;
+		case MEMTX_SYSTEM_ALLOCATOR:
+			SystemAllocator::enter_delayed_free_mode();
+			break;
 		default:
 			;
 		}
@@ -1255,7 +1280,10 @@ memtx_leave_delayed_free_mode(struct memtx_engine *memtx)
 	if (--memtx->delayed_free_mode == 0) {
 		switch (memtx->allocator_type) {
 		case MEMTX_SMALL_ALLOCATOR:
-			SmallAllocator::enter_delayed_free_mode();
+			SmallAllocator::leave_delayed_free_mode();
+			break;
+		case MEMTX_SYSTEM_ALLOCATOR:
+			SystemAllocator::leave_delayed_free_mode();
 			break;
 		default:
 			;
diff --git a/src/box/memtx_space.cc b/src/box/memtx_space.cc
index 932b3af16..df0a7021e 100644
--- a/src/box/memtx_space.cc
+++ b/src/box/memtx_space.cc
@@ -1170,8 +1170,8 @@ memtx_space_prepare_alter(struct space *old_space, struct space *new_space)
 
 /* }}} DDL */
 
-struct SmallAllocator;
 #define MEMTX_SPACE_VTAB(Allocator, allocator)					\
+struct Allocator;								\
 static const struct space_vtab memtx_space_vtab_##allocator = { 		\
 	/* .destroy = */ memtx_space_destroy,					\
 	/* .bsize = */ memtx_space_bsize,					\
@@ -1195,6 +1195,7 @@ static const struct space_vtab memtx_space_vtab_##allocator = { 		\
 	/* .invalidate = */ generic_space_invalidate,   			\
 };
 MEMTX_SPACE_VTAB(SmallAllocator, small)
+MEMTX_SPACE_VTAB(SystemAllocator, system)
 
 struct space *
 memtx_space_new(struct memtx_engine *memtx,
@@ -1231,6 +1232,9 @@ memtx_space_new(struct memtx_engine *memtx,
 	case MEMTX_SMALL_ALLOCATOR:
 		vtab = &memtx_space_vtab_small;
 		break;
+	case MEMTX_SYSTEM_ALLOCATOR:
+		vtab = &memtx_space_vtab_system;
+		break;
 	default:
 		tuple_format_unref(format);
 		free(memtx_space);
diff --git a/src/box/sysalloc.c b/src/box/sysalloc.c
new file mode 100644
index 000000000..40dd067ad
--- /dev/null
+++ b/src/box/sysalloc.c
@@ -0,0 +1,210 @@
+/*
+ * Copyright 2010-2020, Tarantool AUTHORS, please see AUTHORS file.
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above
+ *    copyright notice, this list of conditions and the
+ *    following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above
+ *    copyright notice, this list of conditions and the following
+ *    disclaimer in the documentation and/or other materials
+ *    provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY <COPYRIGHT HOLDER> ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
+ * <COPYRIGHT HOLDER> OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
+ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
+ * THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+#include "sysalloc.h"
+
+#include <small/slab_arena.h>
+#include <small/rlist.h>
+
+#if TARGET_OS_DARWIN
+#include <malloc/malloc.h>
+static inline size_t
+portable_malloc_usable_size(void *p)
+{
+	return malloc_size(p);
+}
+#elif (TARGET_OS_FREEBSD || TARGET_OS_NETBSD || TARGET_OS_OPENBSD)
+#include <malloc_np.h>
+static inline size_t
+portable_malloc_usable_size(void *p)
+{
+	return malloc_usable_size(p);
+}
+#elif TARGET_OS_LINUX
+#include <malloc.h>
+static inline size_t
+portable_malloc_usable_size(void *p)
+{
+	return malloc_usable_size(p);
+}
+#else
+#error "Undefined system type"
+#endif
+
+static RLIST_HEAD(alloc_list);
+
+static inline void
+system_collect_garbage(struct system_alloc *alloc)
+{
+	if (alloc->free_mode != SYSTEM_COLLECT_GARBAGE)
+		return;
+
+	const int BATCH = 100;
+	if (!lifo_is_empty(&alloc->delayed)) {
+		for (int i = 0; i < BATCH; i++) {
+			void *item = lifo_pop(&alloc->delayed);
+			if (item == NULL)
+				break;
+			sysfree(alloc, item, 0 /* unused parameter */);
+		}
+	} else {
+		/* Finish garbage collection and switch to regular mode */
+		alloc->free_mode = SYSTEM_FREE;
+	}
+}
+
+void
+system_alloc_setopt(struct system_alloc *alloc, enum system_opt opt, bool val)
+{
+	switch (opt) {
+	case SYSTEM_DELAYED_FREE_MODE:
+		alloc->free_mode = val ? SYSTEM_DELAYED_FREE :
+			SYSTEM_COLLECT_GARBAGE;
+		break;
+	default:
+		assert(false);
+		break;
+	}
+}
+
+void
+system_stats(struct system_alloc *alloc, struct system_stats *totals,
+	     system_stats_cb cb, void *cb_ctx)
+{
+	totals->used = pm_atomic_load_explicit(&alloc->used_bytes,
+		pm_memory_order_relaxed);
+	totals->total = quota_total(alloc->quota);
+	cb(totals, cb_ctx);
+}
+
+void
+system_alloc_create(struct system_alloc *alloc, struct slab_arena *arena)
+{
+	alloc->used_bytes = 0;
+	alloc->arena_bytes = 0;
+	alloc->arena = arena;
+	alloc->quota = arena->quota;
+	lifo_init(&alloc->delayed);
+	alloc->allocator_thread = pthread_self();
+}
+
+void
+system_alloc_destroy(struct system_alloc *alloc)
+{
+	assert(alloc->allocator_thread == pthread_self());
+	struct rlist *item, *tmp;
+	for (item = alloc_list.next; (item != &alloc_list) &&
+	     (tmp = item->next); item = tmp)
+		sysfree(alloc, ((void *)item) + sizeof(struct rlist), (~0lu));
+	assert(alloc->used_bytes == 0);
+	uint32_t units = alloc->arena_bytes / alloc->arena->slab_size;
+	pm_atomic_fetch_sub(&alloc->arena->used,
+				units * alloc->arena->slab_size);
+}
+
+void
+sysfree(struct system_alloc *alloc, void *ptr, size_t bytes)
+{
+	assert(alloc->allocator_thread == pthread_self());
+	ptr -= sizeof(struct rlist);
+	size_t size = portable_malloc_usable_size(ptr);
+	uint32_t s = size % QUOTA_UNIT_SIZE, units = size / QUOTA_UNIT_SIZE;
+	size_t used_bytes = pm_atomic_fetch_sub(&alloc->used_bytes, size);
+	if (small_align(used_bytes, QUOTA_UNIT_SIZE) >
+	    small_align(used_bytes - s, QUOTA_UNIT_SIZE))
+		units++;
+	if (units > 0)
+		quota_release(alloc->quota, units * QUOTA_UNIT_SIZE);
+	pm_atomic_fetch_add(&alloc->arena_bytes, size);
+	if (bytes != (~0lu))
+		rlist_del((struct rlist *)ptr);
+	free(ptr);
+}
+
+void
+sysfree_delayed(struct system_alloc *alloc, void *ptr, size_t bytes)
+{
+	assert(alloc->allocator_thread == pthread_self());
+	if (alloc->free_mode == SYSTEM_DELAYED_FREE && ptr) {
+		lifo_push(&alloc->delayed, ptr);
+	} else {
+		sysfree(alloc, ptr, bytes);
+	}
+}
+
+void *
+sysalloc(struct system_alloc *alloc, size_t bytes)
+{
+	assert(alloc->allocator_thread == pthread_self());
+	system_collect_garbage(alloc);
+
+	void *ptr = malloc(sizeof(struct rlist) + bytes);
+	if (!ptr)
+		return NULL;
+	size_t size = portable_malloc_usable_size(ptr);
+	uint32_t s = size % QUOTA_UNIT_SIZE, units = size / QUOTA_UNIT_SIZE;
+	while (1) {
+		size_t used_bytes = pm_atomic_load(&alloc->used_bytes);
+		if (small_align(used_bytes, QUOTA_UNIT_SIZE) <
+		    small_align(used_bytes + s, QUOTA_UNIT_SIZE))
+			units++;
+		if (units > 0) {
+			if (quota_use(alloc->quota,
+				units * QUOTA_UNIT_SIZE) < 0) {
+				free(ptr);
+				return NULL;
+			}
+		}
+		if (pm_atomic_compare_exchange_strong(&alloc->used_bytes,
+			&used_bytes, used_bytes + size))
+			break;
+		if (units > 0)
+			quota_release(alloc->quota, units * QUOTA_UNIT_SIZE);
+	}
+
+	size_t arena_bytes;
+	do {
+		while (size > (arena_bytes = pm_atomic_load(&alloc->arena_bytes))) {
+			uint32_t units = (size - arena_bytes) /
+				alloc->arena->slab_size + 1;
+			if (!pm_atomic_compare_exchange_strong(&alloc->arena_bytes,
+				&arena_bytes, arena_bytes +
+				units * alloc->arena->slab_size))
+				continue;
+			pm_atomic_fetch_add(&alloc->arena->used,
+				units * alloc->arena->slab_size);
+		}
+	} while (!pm_atomic_compare_exchange_strong(&alloc->arena_bytes,
+		&arena_bytes, arena_bytes - size));
+
+	rlist_add_tail(&alloc_list, (struct rlist *)ptr);
+	return ptr + sizeof(struct rlist);
+}
+
diff --git a/src/box/sysalloc.h b/src/box/sysalloc.h
new file mode 100644
index 000000000..bd906c8cf
--- /dev/null
+++ b/src/box/sysalloc.h
@@ -0,0 +1,145 @@
+#pragma once
+/*
+ * Copyright 2010-2020, Tarantool AUTHORS, please see AUTHORS file.
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above
+ *    copyright notice, this list of conditions and the
+ *    following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above
+ *    copyright notice, this list of conditions and the following
+ *    disclaimer in the documentation and/or other materials
+ *    provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY <COPYRIGHT HOLDER> ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
+ * <COPYRIGHT HOLDER> OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
+ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
+ * THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+#include <pthread.h>
+#include <trivia/util.h>
+#include <trivia/config.h>
+#include <small/slab_arena.h>
+#include <small/quota.h>
+#include <small/lifo.h>
+
+#if defined(__cplusplus)
+extern "C" {
+#endif /* defined(__cplusplus) */
+
+enum system_opt {
+	SYSTEM_DELAYED_FREE_MODE
+};
+
+/**
+ * Free mode
+ */
+enum system_free_mode {
+	/** Free objects immediately. */
+	SYSTEM_FREE,
+	/** Collect garbage after delayed free. */
+	SYSTEM_COLLECT_GARBAGE,
+	/** Postpone deletion of objects. */
+	SYSTEM_DELAYED_FREE,
+};
+
+struct system_alloc {
+	/**
+	 * Bytes allocated by system allocator
+	 */
+	uint64_t used_bytes;
+	/**
+	 * Arena free bytes
+	 */
+	uint64_t arena_bytes;
+	/**
+	 * Allocator arena
+	 */
+	struct slab_arena *arena;
+	/**
+	 * Allocator quota
+	 */
+	struct quota *quota;
+	/**
+	 * Free mode.
+	 */
+	enum system_free_mode free_mode;
+	/**
+	  * List of pointers for delayed free.
+	  */
+	struct lifo delayed;
+	/**
+	  * Allocator thread
+	  */
+	pthread_t allocator_thread;
+};
+
+struct system_stats {
+	size_t used;
+	size_t total;
+};
+
+typedef int (*system_stats_cb)(const struct system_stats *stats,
+				void *cb_ctx);
+
+/** Initialize a system memory allocator. */
+void
+system_alloc_create(struct system_alloc *alloc, struct slab_arena *arena);
+
+/**
+ * Enter or leave delayed mode - in delayed mode sysfree_delayed()
+ * doesn't free memory but puts it into a list for further deletion.
+ */
+void
+system_alloc_setopt(struct system_alloc *alloc, enum system_opt opt, bool val);
+
+/**
+ * Destroy the allocator; freeing all remaining
+ * allocated memory is the user's responsibility.
+ */
+void
+system_alloc_destroy(struct system_alloc *alloc);
+
+/**
+ * Allocate memory in the system allocator, using malloc.
+ */
+void *
+sysalloc(struct system_alloc *alloc, size_t bytes);
+
+/**
+ * Free memory in the system allocator, using free.
+ */
+void
+sysfree(struct system_alloc *alloc, void *ptr, MAYBE_UNUSED size_t bytes);
+
+/**
+ * Free memory allocated by the system allocator
+ * if not in snapshot mode, otherwise put to the delayed
+ * free list.
+ */
+void
+sysfree_delayed(struct system_alloc *alloc, void *ptr, size_t bytes);
+
+/**
+ * Get system allocator statistics.
+ */
+void
+system_stats(struct system_alloc *alloc, struct system_stats *totals,
+	     system_stats_cb cb, void *cb_ctx);
+
+#if defined(__cplusplus)
+} /* extern "C" */
+#endif /* defined(__cplusplus) */
diff --git a/src/box/system_allocator.cc b/src/box/system_allocator.cc
new file mode 100644
index 000000000..8b9e3114b
--- /dev/null
+++ b/src/box/system_allocator.cc
@@ -0,0 +1,68 @@
+/*
+ * Copyright 2010-2020, Tarantool AUTHORS, please see AUTHORS file.
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above
+ *    copyright notice, this list of conditions and the
+ *    following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above
+ *    copyright notice, this list of conditions and the following
+ *    disclaimer in the documentation and/or other materials
+ *    provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY <COPYRIGHT HOLDER> ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
+ * <COPYRIGHT HOLDER> OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
+ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
+ * THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+#include "system_allocator.h"
+
+void
+SystemAllocator::create(struct slab_arena *arena)
+{
+	system_alloc_create(&system_alloc, arena);
+}
+
+void
+SystemAllocator::destroy(void)
+{
+	system_alloc_destroy(&system_alloc);
+}
+
+void
+SystemAllocator::enter_delayed_free_mode(void)
+{
+	system_alloc_setopt(&system_alloc, SYSTEM_DELAYED_FREE_MODE, true);
+}
+
+void
+SystemAllocator::leave_delayed_free_mode(void)
+{
+	system_alloc_setopt(&system_alloc, SYSTEM_DELAYED_FREE_MODE, false);
+}
+
+void
+SystemAllocator::stats(struct system_stats *stats, system_stats_cb cb, void *cb_ctx)
+{
+	system_stats(&system_alloc, stats, cb, cb_ctx);
+}
+
+void
+SystemAllocator::memory_check(void)
+{
+}
+
+struct system_alloc SystemAllocator::system_alloc;
diff --git a/src/box/system_allocator.h b/src/box/system_allocator.h
new file mode 100644
index 000000000..aed4e0fc9
--- /dev/null
+++ b/src/box/system_allocator.h
@@ -0,0 +1,54 @@
+#pragma once
+/*
+ * Copyright 2010-2020, Tarantool AUTHORS, please see AUTHORS file.
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above
+ *    copyright notice, this list of conditions and the
+ *    following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above
+ *    copyright notice, this list of conditions and the following
+ *    disclaimer in the documentation and/or other materials
+ *    provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY <COPYRIGHT HOLDER> ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
+ * <COPYRIGHT HOLDER> OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
+ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
+ * THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+#include "sysalloc.h"
+
+struct SystemAllocator
+{
+	static void create(struct slab_arena *arena);
+	static void destroy(void);
+	static void enter_delayed_free_mode(void);
+	static void leave_delayed_free_mode(void);
+	static void stats(struct system_stats *stats, system_stats_cb cb, void *cb_ctx);
+	static void memory_check(void);
+	static inline void *alloc(size_t size) {
+		return sysalloc(&system_alloc, size);
+	};
+	static inline void free(void *ptr, size_t size) {
+		sysfree(&system_alloc, ptr, size);
+	}
+	static inline void free_delayed(void *ptr, size_t size) {
+		sysfree_delayed(&system_alloc, ptr, size);
+	}
+
+	/** Tuple allocator. */
+	static struct system_alloc system_alloc;
+};
diff --git a/test/box/choose_memtx_allocator.lua b/test/box/choose_memtx_allocator.lua
new file mode 100644
index 000000000..d56f985e1
--- /dev/null
+++ b/test/box/choose_memtx_allocator.lua
@@ -0,0 +1,9 @@
+#!/usr/bin/env tarantool
+
+require('console').listen(os.getenv('ADMIN'))
+
+box.cfg({
+    listen = os.getenv("LISTEN"),
+    allocator=arg[1],
+    checkpoint_interval=5
+})
diff --git a/test/box/choose_memtx_allocator.result b/test/box/choose_memtx_allocator.result
new file mode 100644
index 000000000..23660be6e
--- /dev/null
+++ b/test/box/choose_memtx_allocator.result
@@ -0,0 +1,139 @@
+-- test-run result file version 2
+
+-- write data, recover from the latest snapshot
+env = require('test_run')
+ | ---
+ | ...
+test_run = env.new()
+ | ---
+ | ...
+test_run:cmd('create server test with script="box/choose_memtx_allocator.lua"')
+ | ---
+ | - true
+ | ...
+--test small allocator
+test_run:cmd('start server test with args="small"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('switch test')
+ | ---
+ | - true
+ | ...
+space = box.schema.space.create('test')
+ | ---
+ | ...
+space:format({ {name = 'id', type = 'unsigned'}, {name = 'year', type = 'unsigned'} })
+ | ---
+ | ...
+s = space:create_index('primary', { parts = {'id'} })
+ | ---
+ | ...
+for key = 1, 1000 do space:insert({key, key + 1000}) end
+ | ---
+ | ...
+for key = 1, 1000 do space:replace({key, key + 5000}) end
+ | ---
+ | ...
+box.snapshot()
+ | ---
+ | - ok
+ | ...
+for key = 1, 1000 do space:delete(key) end
+ | ---
+ | ...
+space:drop()
+ | ---
+ | ...
+test_run:cmd('switch default')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server test')
+ | ---
+ | - true
+ | ...
+--test system(malloc) allocator
+test_run:cmd('start server test with args="system"')
+ | ---
+ | - true
+ | ...
+test_run:cmd('switch test')
+ | ---
+ | - true
+ | ...
+space = box.schema.space.create('test')
+ | ---
+ | ...
+space:format({ {name = 'id', type = 'unsigned'}, {name = 'year', type = 'unsigned'} })
+ | ---
+ | ...
+s = space:create_index('primary', { parts = {'id'} })
+ | ---
+ | ...
+for key = 1, 200000 do space:insert({key, key + 1000}) end
+ | ---
+ | ...
+for key = 1, 200000 do space:replace({key, key + 5000}) end
+ | ---
+ | ...
+for key = 1, 200000 do space:delete(key) end
+ | ---
+ | ...
+space:drop()
+ | ---
+ | ...
+test_run:cmd('switch default')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server test')
+ | ---
+ | - true
+ | ...
+--test default (small) allocator
+test_run:cmd('start server test')
+ | ---
+ | - true
+ | ...
+test_run:cmd('switch test')
+ | ---
+ | - true
+ | ...
+space = box.schema.space.create('test')
+ | ---
+ | ...
+space:format({ {name = 'id', type = 'unsigned'}, {name = 'year', type = 'unsigned'} })
+ | ---
+ | ...
+s = space:create_index('primary', { parts = {'id'} })
+ | ---
+ | ...
+for key = 1, 1000 do space:insert({key, key + 1000}) end
+ | ---
+ | ...
+for key = 1, 1000 do space:replace({key, key + 5000}) end
+ | ---
+ | ...
+for key = 1, 1000 do space:delete(key) end
+ | ---
+ | ...
+space:drop()
+ | ---
+ | ...
+test_run:cmd('switch default')
+ | ---
+ | - true
+ | ...
+test_run:cmd('stop server test')
+ | ---
+ | - true
+ | ...
+test_run:cmd('cleanup server test')
+ | ---
+ | - true
+ | ...
+test_run:cmd('delete server test')
+ | ---
+ | - true
+ | ...
diff --git a/test/box/choose_memtx_allocator.test.lua b/test/box/choose_memtx_allocator.test.lua
new file mode 100644
index 000000000..25d43c781
--- /dev/null
+++ b/test/box/choose_memtx_allocator.test.lua
@@ -0,0 +1,44 @@
+
+-- write data, recover from the latest snapshot
+env = require('test_run')
+test_run = env.new()
+test_run:cmd('create server test with script="box/choose_memtx_allocator.lua"')
+--test small allocator
+test_run:cmd('start server test with args="small"')
+test_run:cmd('switch test')
+space = box.schema.space.create('test')
+space:format({ {name = 'id', type = 'unsigned'}, {name = 'year', type = 'unsigned'} })
+s = space:create_index('primary', { parts = {'id'} })
+for key = 1, 1000 do space:insert({key, key + 1000}) end
+for key = 1, 1000 do space:replace({key, key + 5000}) end
+box.snapshot()
+for key = 1, 1000 do space:delete(key) end
+space:drop()
+test_run:cmd('switch default')
+test_run:cmd('stop server test')
+--test system(malloc) allocator
+test_run:cmd('start server test with args="system"')
+test_run:cmd('switch test')
+space = box.schema.space.create('test')
+space:format({ {name = 'id', type = 'unsigned'}, {name = 'year', type = 'unsigned'} })
+s = space:create_index('primary', { parts = {'id'} })
+for key = 1, 200000 do space:insert({key, key + 1000}) end
+for key = 1, 200000 do space:replace({key, key + 5000}) end
+for key = 1, 200000 do space:delete(key) end
+space:drop()
+test_run:cmd('switch default')
+test_run:cmd('stop server test')
+--test default (small) allocator
+test_run:cmd('start server test')
+test_run:cmd('switch test')
+space = box.schema.space.create('test')
+space:format({ {name = 'id', type = 'unsigned'}, {name = 'year', type = 'unsigned'} })
+s = space:create_index('primary', { parts = {'id'} })
+for key = 1, 1000 do space:insert({key, key + 1000}) end
+for key = 1, 1000 do space:replace({key, key + 5000}) end
+for key = 1, 1000 do space:delete(key) end
+space:drop()
+test_run:cmd('switch default')
+test_run:cmd('stop server test')
+test_run:cmd('cleanup server test')
+test_run:cmd('delete server test')
-- 
2.20.1

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Tarantool-patches] [PATCH 0/4] Choose allocator mor memtx
  2020-12-29 11:03 [Tarantool-patches] [PATCH 0/4] Choose allocator mor memtx mechanik20051988
                   ` (3 preceding siblings ...)
  2020-12-29 11:03 ` [Tarantool-patches] [PATCH 4/4] Implement system allocator, based on malloc mechanik20051988
@ 2021-01-10 13:55 ` Vladislav Shpilevoy via Tarantool-patches
  2021-01-10 14:36 ` Vladislav Shpilevoy via Tarantool-patches
  5 siblings, 0 replies; 9+ messages in thread
From: Vladislav Shpilevoy via Tarantool-patches @ 2021-01-10 13:55 UTC (permalink / raw)
  To: mechanik20051988, alyapunov; +Cc: tarantool-patches

Hi! Thanks for the patchset!

I didn't review the code yet. Only the commit messages. But already
can give some comments.

On 29.12.2020 12:03, mechanik20051988 via Tarantool-patches wrote:
> Branch: https://github.com/tarantool/tarantool/tree/mechanik20051988/gh-5419-choose-allocator-for-memtx-cpp-14
>         (Do not pay attention to the number 14 in the branch header)
> Issue: https://github.com/tarantool/tarantool/issues/5419
> Pull request: https://github.com/tarantool/tarantool/pull/5670
> About patches:
> 	1. First patch add performance test for memtx allocator. You can copy perf folder 
> 	to master branch and compare performance.
> 	2. Second patch convert some *.c files to *.cc files. 
> 	   This is the preparation for the patch with allocator choise
> 	3. Third patch implement api for allocator choise
> 	4. Fourth patch add system allocator based on malloc and free
> 
> This is a completely redesigned patch, however I would like to provide answers 
> to some questions from the previous patch that may overlap
> 
> 1. malloc_usable_size / malloc_size has different headers in different OS, so i use
>    TARGET_OS_*** to check this in source file not in CMakeLists
> 2. I test snapshot using checkpoint_interval=5 option, several snapshots
>    are created during my test

As I said in private, this not only makes the test run for at least 5 seconds,
which is way too long, but is also no different from calling box.snapshot()
manually. Please use box.snapshot().
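[A sketch of what that suggestion could look like in the test's instance script — an assumption for illustration, not the committed version:]

```lua
-- choose_memtx_allocator.lua, hypothetical variant: drop
-- checkpoint_interval and let the test trigger checkpoints itself.
box.cfg({
    listen = os.getenv("LISTEN"),
    allocator = arg[1],
})
-- ...and in choose_memtx_allocator.test.lua, after filling the space:
-- box.snapshot()
```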


* Re: [Tarantool-patches] [PATCH 3/4] memtx: implement api for memory allocator selection
  2020-12-29 11:03 ` [Tarantool-patches] [PATCH 3/4] memtx: implement api for memory allocator selection mechanik20051988
@ 2021-01-10 13:56   ` Vladislav Shpilevoy via Tarantool-patches
  0 siblings, 0 replies; 9+ messages in thread
From: Vladislav Shpilevoy via Tarantool-patches @ 2021-01-10 13:56 UTC (permalink / raw)
  To: mechanik20051988, alyapunov; +Cc: mechanik20051988, tarantool-patches

Thanks for the patch!

On 29.12.2020 12:03, mechanik20051988 via Tarantool-patches wrote:
> From: mechanik20051988 <mechanik20.05.1988@gmail.com>
> 
> Slab allocator, which is used for tuples allocation,
> has a certain disadvantage - it tends to unresolvable
> fragmentation on certain workloads (size migration).
> New option allows to select the appropriate
> allocator if necessary.
> 
> @TarantoolBot document
> Title: Add new 'allocator' option
> Add new 'allocator' option which allows to
> select the appropriate allocator for memtx
> tuples if necessary.

- Option for what? box.cfg?

- What are the option values?

- How a user is supposed to choose one? Depending on what?

- Is system allocator restricted by the same memory quota?

- Does system allocator allocate all the memory at start, like
small does?

- The option introduction could be a separate commit. Now you
did refactoring + new 'feature' in one commit.

> Closes #5419

- It does not really 'close' the issue, because at this commit the
new allocator type is not implemented.

- Besides, this line is below the docbot request, which means it
will go to the doc ticket. It shouldn't.

Since some of the comments concern the system allocator, it
seems you should extract the box.cfg change into a separate
commit, move it to the end, and add the docbot request there.
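[For readers of the archive: judging from test/box/choose_memtx_allocator.lua in this series, the option is a box.cfg parameter whose values include 'small' (the default) and 'system'; the review above asks for exactly this to be documented. A hedged usage sketch:]

```lua
-- Assumed usage, inferred from the test script in this patchset;
-- the authoritative semantics are what the docbot request must define.
box.cfg({
    allocator = 'system',  -- or 'small', the default slab allocator
})
```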


* Re: [Tarantool-patches] [PATCH 4/4] Implement system allocator, based on malloc
  2020-12-29 11:03 ` [Tarantool-patches] [PATCH 4/4] Implement system allocator, based on malloc mechanik20051988
@ 2021-01-10 13:56   ` Vladislav Shpilevoy via Tarantool-patches
  0 siblings, 0 replies; 9+ messages in thread
From: Vladislav Shpilevoy via Tarantool-patches @ 2021-01-10 13:56 UTC (permalink / raw)
  To: mechanik20051988, alyapunov; +Cc: tarantool-patches

Thanks for the patch!

It is never appropriate to leave a commit without any message.
Especially such a non-trivial one.


* Re: [Tarantool-patches] [PATCH 0/4] Choose allocator mor memtx
  2020-12-29 11:03 [Tarantool-patches] [PATCH 0/4] Choose allocator mor memtx mechanik20051988
                   ` (4 preceding siblings ...)
  2021-01-10 13:55 ` [Tarantool-patches] [PATCH 0/4] Choose allocator mor memtx Vladislav Shpilevoy via Tarantool-patches
@ 2021-01-10 14:36 ` Vladislav Shpilevoy via Tarantool-patches
  5 siblings, 0 replies; 9+ messages in thread
From: Vladislav Shpilevoy via Tarantool-patches @ 2021-01-10 14:36 UTC (permalink / raw)
  To: mechanik20051988, alyapunov; +Cc: tarantool-patches

Second attempt to send this email.
---------------------------------

Hi! Thanks for the patchset!

I didn't review the code yet. Only the commit messages. But already
can give some comments.

On 29.12.2020 12:03, mechanik20051988 via Tarantool-patches wrote:
> Branch: https://github.com/tarantool/tarantool/tree/mechanik20051988/gh-5419-choose-allocator-for-memtx-cpp-14
>         (Do not pay attention to the number 14 in the branch header)
> Issue: https://github.com/tarantool/tarantool/issues/5419
> Pull request: https://github.com/tarantool/tarantool/pull/5670
> About patches:
> 	1. The first patch adds a performance test for the memtx allocator. You can copy
> 	the perf folder to the master branch and compare performance.
> 	2. The second patch converts some *.c files to *.cc files.
> 	   This is preparation for the patch with the allocator choice.
> 	3. The third patch implements the API for the allocator choice.
> 	4. The fourth patch adds a system allocator based on malloc and free.
>
> This is a completely redesigned patch; however, I would like to provide answers
> to some questions from the previous patch that may overlap.
>
> 1. malloc_usable_size / malloc_size have different headers on different OSes, so I use
>    TARGET_OS_*** to check this in the source file, not in CMakeLists.
> 2. I test snapshots using the checkpoint_interval=5 option; several snapshots
>    are created during my test.

As I said in private, this not only makes the test run for at least
5 seconds, which is way too long, but is also no different from calling
box.snapshot() manually. Please use box.snapshot().
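
In a test, a checkpoint can be forced explicitly instead of waiting for
the interval to elapse (a minimal sketch; it assumes a running Tarantool
instance, so the calls below only work inside one):

```lua
-- Force a checkpoint right now instead of waiting for
-- checkpoint_interval to elapse; this keeps the test fast.
box.cfg{}        -- default configuration, no checkpoint_interval needed
box.snapshot()   -- write a snapshot immediately
```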



end of thread, other threads:[~2021-01-10 15:07 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-29 11:03 [Tarantool-patches] [PATCH 0/4] Choose allocator mor memtx mechanik20051988
2020-12-29 11:03 ` [Tarantool-patches] [PATCH 1/4] test: add performance test for memtx allocator mechanik20051988
2020-12-29 11:03 ` [Tarantool-patches] [PATCH 2/4] memtx: changed some memtx files from .c to .cc mechanik20051988
2020-12-29 11:03 ` [Tarantool-patches] [PATCH 3/4] memtx: implement api for memory allocator selection mechanik20051988
2021-01-10 13:56   ` Vladislav Shpilevoy via Tarantool-patches
2020-12-29 11:03 ` [Tarantool-patches] [PATCH 4/4] Implement system allocator, based on malloc mechanik20051988
2021-01-10 13:56   ` Vladislav Shpilevoy via Tarantool-patches
2021-01-10 13:55 ` [Tarantool-patches] [PATCH 0/4] Choose allocator mor memtx Vladislav Shpilevoy via Tarantool-patches
2021-01-10 14:36 ` Vladislav Shpilevoy via Tarantool-patches
