* [Tarantool-patches] [PATCH v2 0/2] Make tree hint optional
@ 2020-10-21 10:23 Aleksandr Lyapunov
2020-10-21 10:23 ` [Tarantool-patches] [PATCH v2 1/2] memtx: move memtx_tree.c to memtx_tree.cc Aleksandr Lyapunov
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Aleksandr Lyapunov @ 2020-10-21 10:23 UTC (permalink / raw)
To: tarantool-patches, Nikita Pettik, Ilya Kosarev
Add an option that disables hints in tree indexes.
https://github.com/tarantool/tarantool/issues/4927
https://github.com/tarantool/tarantool/tree/alyapunov/gh-4927-optional-hints
v2 changes:
* The int template parameter was replaced by bool.
* Fixed a compilation error; just a couple of casts were added:
@@ -542,7 +542,7 @@ tree_iterator_start(struct iterator *iterator, struct tuple **ret)
struct txn *txn = in_txn();
struct space *space = space_by_id(iterator->space_id);
bool is_rw = txn != NULL;
- uint32_t mk_index = is_multikey ? res->hint : 0;
+ uint32_t mk_index = is_multikey ? (uint32_t)res->hint : 0;
*ret = memtx_tx_tuple_clarify(txn, space, *ret, iid, mk_index, is_rw);
if (*ret == NULL) {
return iterator->next(iterator, ret);
@@ -734,13 +734,14 @@ memtx_tree_index_get(struct index *base, const char *key,
struct txn *txn = in_txn();
struct space *space = space_by_id(base->def->space_id);
bool is_rw = txn != NULL;
- uint32_t mk_index = base->def->key_def->is_multikey ? res->hint : 0;
+ bool is_multikey = base->def->key_def->is_multikey;
+ uint32_t mk_index = is_multikey ? (uint32_t)res->hint : 0;
*result = memtx_tx_tuple_clarify(txn, space, res->tuple, base->def->iid,
mk_index, is_rw);
return 0;
}
Aleksandr Lyapunov (1):
memtx: make tuple compare hints optional
Ilya Kosarev (1):
memtx: move memtx_tree.c to memtx_tree.cc
src/box/CMakeLists.txt | 2 +-
src/box/index_def.c | 2 +
src/box/index_def.h | 6 +
src/box/lua/schema.lua | 53 ++
src/box/lua/space.cc | 7 +
src/box/memtx_engine.c | 2 +
src/box/memtx_tree.c | 1523 -------------------------------
src/box/memtx_tree.cc | 1726 +++++++++++++++++++++++++++++++++++
src/lib/salad/bps_tree.h | 19 +
test/box/alter.result | 103 ++-
test/box/alter.test.lua | 34 +
test/box/errinj.result | 3 +-
test/box/tree_pk.result | 314 +++++++
test/box/tree_pk.test.lua | 115 +++
test/box/tree_pk_multipart.result | 153 ++++
test/box/tree_pk_multipart.test.lua | 64 ++
16 files changed, 2598 insertions(+), 1528 deletions(-)
delete mode 100644 src/box/memtx_tree.c
create mode 100644 src/box/memtx_tree.cc
--
2.7.4
^ permalink raw reply [flat|nested] 7+ messages in thread
* [Tarantool-patches] [PATCH v2 1/2] memtx: move memtx_tree.c to memtx_tree.cc
2020-10-21 10:23 [Tarantool-patches] [PATCH v2 0/2] Make tree hint optional Aleksandr Lyapunov
@ 2020-10-21 10:23 ` Aleksandr Lyapunov
2020-10-21 16:12 ` Cyrill Gorcunov
2020-10-21 10:23 ` [Tarantool-patches] [PATCH v2 2/2] memtx: make tuple compare hints optional Aleksandr Lyapunov
2020-10-21 15:28 ` [Tarantool-patches] [PATCH v2 0/2] Make tree hint optional Nikita Pettik
2 siblings, 1 reply; 7+ messages in thread
From: Aleksandr Lyapunov @ 2020-10-21 10:23 UTC (permalink / raw)
To: tarantool-patches, Nikita Pettik, Ilya Kosarev
From: Ilya Kosarev <i.kosarev@tarantool.org>
It is needed for the further C++ implementation in memtx_tree.cc. To
keep the file diff readable, the file should not be renamed and reworked
in one commit. Some casts that are not C++ compatible were fixed.
Prerequisites: #4927
---
src/box/CMakeLists.txt | 2 +-
src/box/memtx_tree.c | 1523 -----------------------------------------------
src/box/memtx_tree.cc | 1526 ++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 1527 insertions(+), 1524 deletions(-)
delete mode 100644 src/box/memtx_tree.c
create mode 100644 src/box/memtx_tree.cc
diff --git a/src/box/CMakeLists.txt b/src/box/CMakeLists.txt
index fbcfbe4..df243ac 100644
--- a/src/box/CMakeLists.txt
+++ b/src/box/CMakeLists.txt
@@ -126,7 +126,7 @@ add_library(box STATIC
index_def.c
iterator_type.c
memtx_hash.c
- memtx_tree.c
+ memtx_tree.cc
memtx_rtree.c
memtx_bitset.c
memtx_tx.c
diff --git a/src/box/memtx_tree.c b/src/box/memtx_tree.c
deleted file mode 100644
index 5af482f..0000000
--- a/src/box/memtx_tree.c
+++ /dev/null
@@ -1,1523 +0,0 @@
-/*
- * Copyright 2010-2016, Tarantool AUTHORS, please see AUTHORS file.
- *
- * Redistribution and use in source and binary forms, with or
- * without modification, are permitted provided that the following
- * conditions are met:
- *
- * 1. Redistributions of source code must retain the above
- * copyright notice, this list of conditions and the
- * following disclaimer.
- *
- * 2. Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following
- * disclaimer in the documentation and/or other materials
- * provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY <COPYRIGHT HOLDER> ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
- * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
- * <COPYRIGHT HOLDER> OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
- * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
- * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
- * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
- * THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
- */
-#include "memtx_tree.h"
-#include "memtx_engine.h"
-#include "space.h"
-#include "schema.h" /* space_by_id(), space_cache_find() */
-#include "errinj.h"
-#include "memory.h"
-#include "fiber.h"
-#include "key_list.h"
-#include "tuple.h"
-#include "txn.h"
-#include "memtx_tx.h"
-#include <third_party/qsort_arg.h>
-#include <small/mempool.h>
-
-/**
- * Struct that is used as a key in BPS tree definition.
- */
-struct memtx_tree_key_data {
- /** Sequence of msgpacked search fields. */
- const char *key;
- /** Number of msgpacked search fields. */
- uint32_t part_count;
- /** Comparison hint, see tuple_hint(). */
- hint_t hint;
-};
-
-/**
- * Struct that is used as an element in BPS tree definition.
- */
-struct memtx_tree_data {
- /* Tuple that this node represents. */
- struct tuple *tuple;
- /** Comparison hint, see key_hint(). */
- hint_t hint;
-};
-
-/**
- * Test whether BPS tree elements are identical i.e. represent
- * the same tuple at the same position in the tree.
- * @param a - First BPS tree element to compare.
- * @param b - Second BPS tree element to compare.
- * @retval true - When elements a and b are identical.
- * @retval false - Otherwise.
- */
-static bool
-memtx_tree_data_is_equal(const struct memtx_tree_data *a,
- const struct memtx_tree_data *b)
-{
- return a->tuple == b->tuple;
-}
-
-#define BPS_TREE_NAME memtx_tree
-#define BPS_TREE_BLOCK_SIZE (512)
-#define BPS_TREE_EXTENT_SIZE MEMTX_EXTENT_SIZE
-#define BPS_TREE_COMPARE(a, b, arg)\
- tuple_compare((&a)->tuple, (&a)->hint, (&b)->tuple, (&b)->hint, arg)
-#define BPS_TREE_COMPARE_KEY(a, b, arg)\
- tuple_compare_with_key((&a)->tuple, (&a)->hint, (b)->key,\
- (b)->part_count, (b)->hint, arg)
-#define BPS_TREE_IS_IDENTICAL(a, b) memtx_tree_data_is_equal(&a, &b)
-#define BPS_TREE_NO_DEBUG 1
-#define bps_tree_elem_t struct memtx_tree_data
-#define bps_tree_key_t struct memtx_tree_key_data *
-#define bps_tree_arg_t struct key_def *
-
-#include "salad/bps_tree.h"
-
-#undef BPS_TREE_NAME
-#undef BPS_TREE_BLOCK_SIZE
-#undef BPS_TREE_EXTENT_SIZE
-#undef BPS_TREE_COMPARE
-#undef BPS_TREE_COMPARE_KEY
-#undef BPS_TREE_IS_IDENTICAL
-#undef BPS_TREE_NO_DEBUG
-#undef bps_tree_elem_t
-#undef bps_tree_key_t
-#undef bps_tree_arg_t
-
-struct memtx_tree_index {
- struct index base;
- struct memtx_tree tree;
- struct memtx_tree_data *build_array;
- size_t build_array_size, build_array_alloc_size;
- struct memtx_gc_task gc_task;
- struct memtx_tree_iterator gc_iterator;
-};
-
-/* {{{ Utilities. *************************************************/
-
-static inline struct key_def *
-memtx_tree_cmp_def(struct memtx_tree *tree)
-{
- return tree->arg;
-}
-
-static int
-memtx_tree_qcompare(const void* a, const void *b, void *c)
-{
- const struct memtx_tree_data *data_a = a;
- const struct memtx_tree_data *data_b = b;
- struct key_def *key_def = c;
- return tuple_compare(data_a->tuple, data_a->hint, data_b->tuple,
- data_b->hint, key_def);
-}
-
-/* {{{ MemtxTree Iterators ****************************************/
-struct tree_iterator {
- struct iterator base;
- struct memtx_tree_iterator tree_iterator;
- enum iterator_type type;
- struct memtx_tree_key_data key_data;
- struct memtx_tree_data current;
- /** Memory pool the iterator was allocated from. */
- struct mempool *pool;
-};
-
-static_assert(sizeof(struct tree_iterator) <= MEMTX_ITERATOR_SIZE,
- "sizeof(struct tree_iterator) must be less than or equal "
- "to MEMTX_ITERATOR_SIZE");
-
-static void
-tree_iterator_free(struct iterator *iterator);
-
-static inline struct tree_iterator *
-tree_iterator(struct iterator *it)
-{
- assert(it->free == tree_iterator_free);
- return (struct tree_iterator *) it;
-}
-
-static void
-tree_iterator_free(struct iterator *iterator)
-{
- struct tree_iterator *it = tree_iterator(iterator);
- struct tuple *tuple = it->current.tuple;
- if (tuple != NULL)
- tuple_unref(tuple);
- mempool_free(it->pool, it);
-}
-
-static int
-tree_iterator_dummie(struct iterator *iterator, struct tuple **ret)
-{
- (void)iterator;
- *ret = NULL;
- return 0;
-}
-
-static int
-tree_iterator_next_base(struct iterator *iterator, struct tuple **ret)
-{
- struct memtx_tree_index *index =
- (struct memtx_tree_index *)iterator->index;
- struct tree_iterator *it = tree_iterator(iterator);
- assert(it->current.tuple != NULL);
- struct memtx_tree_data *check =
- memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
- if (check == NULL || !memtx_tree_data_is_equal(check, &it->current)) {
- it->tree_iterator = memtx_tree_upper_bound_elem(&index->tree,
- it->current, NULL);
- } else {
- memtx_tree_iterator_next(&index->tree, &it->tree_iterator);
- }
- tuple_unref(it->current.tuple);
- struct memtx_tree_data *res =
- memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
- if (res == NULL) {
- iterator->next = tree_iterator_dummie;
- it->current.tuple = NULL;
- *ret = NULL;
- } else {
- *ret = res->tuple;
- tuple_ref(*ret);
- it->current = *res;
- }
- return 0;
-}
-
-static int
-tree_iterator_prev_base(struct iterator *iterator, struct tuple **ret)
-{
- struct memtx_tree_index *index =
- (struct memtx_tree_index *)iterator->index;
- struct tree_iterator *it = tree_iterator(iterator);
- assert(it->current.tuple != NULL);
- struct memtx_tree_data *check =
- memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
- if (check == NULL || !memtx_tree_data_is_equal(check, &it->current)) {
- it->tree_iterator = memtx_tree_lower_bound_elem(&index->tree,
- it->current, NULL);
- }
- memtx_tree_iterator_prev(&index->tree, &it->tree_iterator);
- tuple_unref(it->current.tuple);
- struct memtx_tree_data *res =
- memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
- if (!res) {
- iterator->next = tree_iterator_dummie;
- it->current.tuple = NULL;
- *ret = NULL;
- } else {
- *ret = res->tuple;
- tuple_ref(*ret);
- it->current = *res;
- }
- return 0;
-}
-
-static int
-tree_iterator_next_equal_base(struct iterator *iterator, struct tuple **ret)
-{
- struct memtx_tree_index *index =
- (struct memtx_tree_index *)iterator->index;
- struct tree_iterator *it = tree_iterator(iterator);
- assert(it->current.tuple != NULL);
- struct memtx_tree_data *check =
- memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
- if (check == NULL || !memtx_tree_data_is_equal(check, &it->current)) {
- it->tree_iterator = memtx_tree_upper_bound_elem(&index->tree,
- it->current, NULL);
- } else {
- memtx_tree_iterator_next(&index->tree, &it->tree_iterator);
- }
- tuple_unref(it->current.tuple);
- struct memtx_tree_data *res =
- memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
- /* Use user key def to save a few loops. */
- if (res == NULL ||
- tuple_compare_with_key(res->tuple, res->hint,
- it->key_data.key,
- it->key_data.part_count,
- it->key_data.hint,
- index->base.def->key_def) != 0) {
- iterator->next = tree_iterator_dummie;
- it->current.tuple = NULL;
- *ret = NULL;
- } else {
- *ret = res->tuple;
- tuple_ref(*ret);
- it->current = *res;
- }
- return 0;
-}
-
-static int
-tree_iterator_prev_equal_base(struct iterator *iterator, struct tuple **ret)
-{
- struct memtx_tree_index *index =
- (struct memtx_tree_index *)iterator->index;
- struct tree_iterator *it = tree_iterator(iterator);
- assert(it->current.tuple != NULL);
- struct memtx_tree_data *check =
- memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
- if (check == NULL || !memtx_tree_data_is_equal(check, &it->current)) {
- it->tree_iterator = memtx_tree_lower_bound_elem(&index->tree,
- it->current, NULL);
- }
- memtx_tree_iterator_prev(&index->tree, &it->tree_iterator);
- tuple_unref(it->current.tuple);
- struct memtx_tree_data *res =
- memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
- /* Use user key def to save a few loops. */
- if (res == NULL ||
- tuple_compare_with_key(res->tuple, res->hint,
- it->key_data.key,
- it->key_data.part_count,
- it->key_data.hint,
- index->base.def->key_def) != 0) {
- iterator->next = tree_iterator_dummie;
- it->current.tuple = NULL;
- *ret = NULL;
- } else {
- *ret = res->tuple;
- tuple_ref(*ret);
- it->current = *res;
- }
- return 0;
-}
-
-#define WRAP_ITERATOR_METHOD(name) \
-static int \
-name(struct iterator *iterator, struct tuple **ret) \
-{ \
- struct memtx_tree *tree = \
- &((struct memtx_tree_index *)iterator->index)->tree; \
- struct tree_iterator *it = tree_iterator(iterator); \
- struct memtx_tree_iterator *ti = &it->tree_iterator; \
- uint32_t iid = iterator->index->def->iid; \
- bool is_multikey = iterator->index->def->key_def->is_multikey; \
- struct txn *txn = in_txn(); \
- struct space *space = space_by_id(iterator->space_id); \
- bool is_rw = txn != NULL; \
- do { \
- int rc = name##_base(iterator, ret); \
- if (rc != 0 || *ret == NULL) \
- return rc; \
- uint32_t mk_index = 0; \
- if (is_multikey) { \
- struct memtx_tree_data *check = \
- memtx_tree_iterator_get_elem(tree, ti); \
- assert(check != NULL); \
- mk_index = check->hint; \
- } \
- *ret = memtx_tx_tuple_clarify(txn, space, *ret, \
- iid, mk_index, is_rw); \
- } while (*ret == NULL); \
- tuple_unref(it->current.tuple); \
- it->current.tuple = *ret; \
- tuple_ref(it->current.tuple); \
- return 0; \
-} \
-struct forgot_to_add_semicolon
-
-WRAP_ITERATOR_METHOD(tree_iterator_next);
-WRAP_ITERATOR_METHOD(tree_iterator_prev);
-WRAP_ITERATOR_METHOD(tree_iterator_next_equal);
-WRAP_ITERATOR_METHOD(tree_iterator_prev_equal);
-
-#undef WRAP_ITERATOR_METHOD
-
-static void
-tree_iterator_set_next_method(struct tree_iterator *it)
-{
- assert(it->current.tuple != NULL);
- switch (it->type) {
- case ITER_EQ:
- it->base.next = tree_iterator_next_equal;
- break;
- case ITER_REQ:
- it->base.next = tree_iterator_prev_equal;
- break;
- case ITER_ALL:
- it->base.next = tree_iterator_next;
- break;
- case ITER_LT:
- case ITER_LE:
- it->base.next = tree_iterator_prev;
- break;
- case ITER_GE:
- case ITER_GT:
- it->base.next = tree_iterator_next;
- break;
- default:
- /* The type was checked in initIterator */
- assert(false);
- }
-}
-
-static int
-tree_iterator_start(struct iterator *iterator, struct tuple **ret)
-{
- *ret = NULL;
- struct memtx_tree_index *index =
- (struct memtx_tree_index *)iterator->index;
- struct tree_iterator *it = tree_iterator(iterator);
- it->base.next = tree_iterator_dummie;
- struct memtx_tree *tree = &index->tree;
- enum iterator_type type = it->type;
- bool exact = false;
- assert(it->current.tuple == NULL);
- if (it->key_data.key == 0) {
- if (iterator_type_is_reverse(it->type))
- it->tree_iterator = memtx_tree_iterator_last(tree);
- else
- it->tree_iterator = memtx_tree_iterator_first(tree);
- } else {
- if (type == ITER_ALL || type == ITER_EQ ||
- type == ITER_GE || type == ITER_LT) {
- it->tree_iterator =
- memtx_tree_lower_bound(tree, &it->key_data,
- &exact);
- if (type == ITER_EQ && !exact)
- return 0;
- } else { // ITER_GT, ITER_REQ, ITER_LE
- it->tree_iterator =
- memtx_tree_upper_bound(tree, &it->key_data,
- &exact);
- if (type == ITER_REQ && !exact)
- return 0;
- }
- if (iterator_type_is_reverse(type)) {
- /*
- * Because of limitations of the tree search API we use
- * lower_bound for LT search and upper_bound for LE
- * and REQ searches. Thus we find a position to the
- * right of the target one. Let's make a step to the
- * left to reach the target position.
- * If we found an invalid iterator, all the elements in
- * the tree are less than (or equal to) the key, and
- * iterator_next call will convert the iterator to the
- * last position in the tree, that's what we need.
- */
- memtx_tree_iterator_prev(tree, &it->tree_iterator);
- }
- }
-
- struct memtx_tree_data *res = memtx_tree_iterator_get_elem(tree,
- &it->tree_iterator);
- if (!res)
- return 0;
- *ret = res->tuple;
- tuple_ref(*ret);
- it->current = *res;
- tree_iterator_set_next_method(it);
-
- uint32_t iid = iterator->index->def->iid;
- bool is_multikey = iterator->index->def->key_def->is_multikey;
- struct txn *txn = in_txn();
- struct space *space = space_by_id(iterator->space_id);
- bool is_rw = txn != NULL;
- uint32_t mk_index = is_multikey ? res->hint : 0;
- *ret = memtx_tx_tuple_clarify(txn, space, *ret, iid, mk_index, is_rw);
- if (*ret == NULL) {
- return iterator->next(iterator, ret);
- } else {
- tuple_unref(it->current.tuple);
- it->current.tuple = *ret;
- tuple_ref(it->current.tuple);
- }
-
- return 0;
-}
-
-/* }}} */
-
-/* {{{ MemtxTree **********************************************************/
-
-static void
-memtx_tree_index_free(struct memtx_tree_index *index)
-{
- memtx_tree_destroy(&index->tree);
- free(index->build_array);
- free(index);
-}
-
-static void
-memtx_tree_index_gc_run(struct memtx_gc_task *task, bool *done)
-{
- /*
- * Yield every 1K tuples to keep latency < 0.1 ms.
- * Yield more often in debug mode.
- */
-#ifdef NDEBUG
- enum { YIELD_LOOPS = 1000 };
-#else
- enum { YIELD_LOOPS = 10 };
-#endif
-
- struct memtx_tree_index *index = container_of(task,
- struct memtx_tree_index, gc_task);
- struct memtx_tree *tree = &index->tree;
- struct memtx_tree_iterator *itr = &index->gc_iterator;
-
- unsigned int loops = 0;
- while (!memtx_tree_iterator_is_invalid(itr)) {
- struct memtx_tree_data *res =
- memtx_tree_iterator_get_elem(tree, itr);
- memtx_tree_iterator_next(tree, itr);
- tuple_unref(res->tuple);
- if (++loops >= YIELD_LOOPS) {
- *done = false;
- return;
- }
- }
- *done = true;
-}
-
-static void
-memtx_tree_index_gc_free(struct memtx_gc_task *task)
-{
- struct memtx_tree_index *index = container_of(task,
- struct memtx_tree_index, gc_task);
- memtx_tree_index_free(index);
-}
-
-static const struct memtx_gc_task_vtab memtx_tree_index_gc_vtab = {
- .run = memtx_tree_index_gc_run,
- .free = memtx_tree_index_gc_free,
-};
-
-static void
-memtx_tree_index_destroy(struct index *base)
-{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- struct memtx_engine *memtx = (struct memtx_engine *)base->engine;
- if (base->def->iid == 0) {
- /*
- * Primary index. We need to free all tuples stored
- * in the index, which may take a while. Schedule a
- * background task in order not to block tx thread.
- */
- index->gc_task.vtab = &memtx_tree_index_gc_vtab;
- index->gc_iterator = memtx_tree_iterator_first(&index->tree);
- memtx_engine_schedule_gc(memtx, &index->gc_task);
- } else {
- /*
- * Secondary index. Destruction is fast, no need to
- * hand over to background fiber.
- */
- memtx_tree_index_free(index);
- }
-}
-
-static void
-memtx_tree_index_update_def(struct index *base)
-{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- struct index_def *def = base->def;
- /*
- * We use extended key def for non-unique and nullable
- * indexes. Unique but nullable index can store multiple
- * NULLs. To correctly compare these NULLs extended key
- * def must be used. For details @sa tuple_compare.cc.
- */
- index->tree.arg = def->opts.is_unique && !def->key_def->is_nullable ?
- def->key_def : def->cmp_def;
-}
-
-static bool
-memtx_tree_index_depends_on_pk(struct index *base)
-{
- struct index_def *def = base->def;
- /* See comment to memtx_tree_index_update_def(). */
- return !def->opts.is_unique || def->key_def->is_nullable;
-}
-
-static ssize_t
-memtx_tree_index_size(struct index *base)
-{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- return memtx_tree_size(&index->tree);
-}
-
-static ssize_t
-memtx_tree_index_bsize(struct index *base)
-{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- return memtx_tree_mem_used(&index->tree);
-}
-
-static int
-memtx_tree_index_random(struct index *base, uint32_t rnd, struct tuple **result)
-{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- struct memtx_tree_data *res = memtx_tree_random(&index->tree, rnd);
- *result = res != NULL ? res->tuple : NULL;
- return 0;
-}
-
-static ssize_t
-memtx_tree_index_count(struct index *base, enum iterator_type type,
- const char *key, uint32_t part_count)
-{
- if (type == ITER_ALL)
- return memtx_tree_index_size(base); /* optimization */
- return generic_index_count(base, type, key, part_count);
-}
-
-static int
-memtx_tree_index_get(struct index *base, const char *key,
- uint32_t part_count, struct tuple **result)
-{
- assert(base->def->opts.is_unique &&
- part_count == base->def->key_def->part_count);
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
- struct memtx_tree_key_data key_data;
- key_data.key = key;
- key_data.part_count = part_count;
- key_data.hint = key_hint(key, part_count, cmp_def);
- struct memtx_tree_data *res = memtx_tree_find(&index->tree, &key_data);
- if (res == NULL) {
- *result = NULL;
- return 0;
- }
- struct txn *txn = in_txn();
- struct space *space = space_by_id(base->def->space_id);
- bool is_rw = txn != NULL;
- uint32_t mk_index = base->def->key_def->is_multikey ? res->hint : 0;
- *result = memtx_tx_tuple_clarify(txn, space, res->tuple, base->def->iid,
- mk_index, is_rw);
- return 0;
-}
-
-static int
-memtx_tree_index_replace(struct index *base, struct tuple *old_tuple,
- struct tuple *new_tuple, enum dup_replace_mode mode,
- struct tuple **result)
-{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
- if (new_tuple) {
- struct memtx_tree_data new_data;
- new_data.tuple = new_tuple;
- new_data.hint = tuple_hint(new_tuple, cmp_def);
- struct memtx_tree_data dup_data;
- dup_data.tuple = NULL;
-
- /* Try to optimistically replace the new_tuple. */
- int tree_res = memtx_tree_insert(&index->tree, new_data,
- &dup_data);
- if (tree_res) {
- diag_set(OutOfMemory, MEMTX_EXTENT_SIZE,
- "memtx_tree_index", "replace");
- return -1;
- }
-
- uint32_t errcode = replace_check_dup(old_tuple,
- dup_data.tuple, mode);
- if (errcode) {
- memtx_tree_delete(&index->tree, new_data);
- if (dup_data.tuple != NULL)
- memtx_tree_insert(&index->tree, dup_data, NULL);
- struct space *sp = space_cache_find(base->def->space_id);
- if (sp != NULL)
- diag_set(ClientError, errcode, base->def->name,
- space_name(sp));
- return -1;
- }
- if (dup_data.tuple != NULL) {
- *result = dup_data.tuple;
- return 0;
- }
- }
- if (old_tuple) {
- struct memtx_tree_data old_data;
- old_data.tuple = old_tuple;
- old_data.hint = tuple_hint(old_tuple, cmp_def);
- memtx_tree_delete(&index->tree, old_data);
- }
- *result = old_tuple;
- return 0;
-}
-
-/**
- * Perform tuple insertion by given multikey index.
- * In case of replacement, all old tuple entries are deleted
- * by all its multikey indexes.
- */
-static int
-memtx_tree_index_replace_multikey_one(struct memtx_tree_index *index,
- struct tuple *old_tuple, struct tuple *new_tuple,
- enum dup_replace_mode mode, hint_t hint,
- struct memtx_tree_data *replaced_data,
- bool *is_multikey_conflict)
-{
- struct memtx_tree_data new_data, dup_data;
- new_data.tuple = new_tuple;
- new_data.hint = hint;
- dup_data.tuple = NULL;
- *is_multikey_conflict = false;
- if (memtx_tree_insert(&index->tree, new_data, &dup_data) != 0) {
- diag_set(OutOfMemory, MEMTX_EXTENT_SIZE, "memtx_tree_index",
- "replace");
- return -1;
- }
- int errcode = 0;
- if (dup_data.tuple == new_tuple) {
- /*
- * When tuple contains the same key multiple
- * times, the previous key occurrence is pushed
- * out of the index.
- */
- *is_multikey_conflict = true;
- } else if ((errcode = replace_check_dup(old_tuple, dup_data.tuple,
- mode)) != 0) {
- /* Rollback replace. */
- memtx_tree_delete(&index->tree, new_data);
- if (dup_data.tuple != NULL)
- memtx_tree_insert(&index->tree, dup_data, NULL);
- struct space *sp = space_cache_find(index->base.def->space_id);
- if (sp != NULL) {
- diag_set(ClientError, errcode, index->base.def->name,
- space_name(sp));
- }
- return -1;
- }
- *replaced_data = dup_data;
- return 0;
-}
-
-/**
- * Rollback the sequence of memtx_tree_index_replace_multikey_one
- * insertions with multikey indexes [0, err_multikey_idx - 1]
- * where err_multikey_idx is the first multikey index where
- * an error was raised.
- *
- * This routine can't fail because all replaced_tuple (when
- * specified) nodes in tree are already allocated (they might be
- * overridden with new_tuple, but they definitely present) and
- * delete operation is fault-tolerant.
- */
-static void
-memtx_tree_index_replace_multikey_rollback(struct memtx_tree_index *index,
- struct tuple *new_tuple, struct tuple *replaced_tuple,
- int err_multikey_idx)
-{
- struct memtx_tree_data data;
- if (replaced_tuple != NULL) {
- /* Restore replaced tuple index occurrences. */
- struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
- data.tuple = replaced_tuple;
- uint32_t multikey_count =
- tuple_multikey_count(replaced_tuple, cmp_def);
- for (int i = 0; (uint32_t) i < multikey_count; i++) {
- data.hint = i;
- memtx_tree_insert(&index->tree, data, NULL);
- }
- }
- /*
- * Rollback new_tuple insertion by multikey index
- * [0, multikey_idx).
- */
- data.tuple = new_tuple;
- for (int i = 0; i < err_multikey_idx; i++) {
- data.hint = i;
- memtx_tree_delete_value(&index->tree, data, NULL);
- }
-}
-
-/**
- * :replace() function for a multikey index: replace old tuple
- * index entries with ones from the new tuple.
- *
- * In a multikey index a single tuple is associated with 0..N keys
- * of the b+*tree. Imagine old tuple key set is called "old_keys"
- * and a new tuple set is called "new_keys". This function must
- * 1) delete all removed keys: (old_keys - new_keys)
- * 2) update tuple pointer in all preserved keys: (old_keys ^ new_keys)
- * 3) insert data for all new keys (new_keys - old_keys).
- *
- * Compare with a standard unique or non-unique index, when a key
- * is present only once, so whenever we encounter a duplicate, it
- * is guaranteed to point at the old tuple (in non-unique indexes
- * we augment the secondary key parts with primary key parts, so
- * b+*tree still contains unique entries only).
- *
- * To reduce the number of insert and delete operations on the
- * tree, this function attempts to optimistically add all keys
- * from the new tuple to the tree first.
- *
- * When this step finds a duplicate, it's either of the following:
- * - for a unique multikey index, it may be the old tuple or
- * some other tuple. Since unique index forbids duplicates,
- * this branch ends with an error unless we found the old tuple.
- * - for a non-unique multikey index, both secondary and primary
- * key parts must match, so it's guaranteed to be the old tuple.
- *
- * In other words, when an optimistic insert finds a duplicate,
- * it's either an error, in which case we roll back all the new
- * keys from the tree and abort the procedure, or the old tuple,
- * which we save to get back to, later.
- *
- * When adding new keys finishes, we have completed steps
- * 2) and 3):
- * - added set (new_keys - old_keys) to the index
- * - updated set (new_keys ^ old_keys) with a new tuple pointer.
- *
- * We now must perform 1), which is remove (old_keys - new_keys).
- *
- * This is done by using the old tuple pointer saved from the
- * previous step. To not accidentally delete the common key
- * set of the old and the new tuple, we don't use key parts alone
- * to compare - we also look at b+* tree value that has the tuple
- * pointer, and delete old tuple entries only.
- */
-static int
-memtx_tree_index_replace_multikey(struct index *base, struct tuple *old_tuple,
- struct tuple *new_tuple, enum dup_replace_mode mode,
- struct tuple **result)
-{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
- *result = NULL;
- if (new_tuple != NULL) {
- int multikey_idx = 0, err = 0;
- uint32_t multikey_count =
- tuple_multikey_count(new_tuple, cmp_def);
- for (; (uint32_t) multikey_idx < multikey_count;
- multikey_idx++) {
- bool is_multikey_conflict;
- struct memtx_tree_data replaced_data;
- err = memtx_tree_index_replace_multikey_one(index,
- old_tuple, new_tuple, mode,
- multikey_idx, &replaced_data,
- &is_multikey_conflict);
- if (err != 0)
- break;
- if (replaced_data.tuple != NULL &&
- !is_multikey_conflict) {
- assert(*result == NULL ||
- *result == replaced_data.tuple);
- *result = replaced_data.tuple;
- }
- }
- if (err != 0) {
- memtx_tree_index_replace_multikey_rollback(index,
- new_tuple, *result, multikey_idx);
- return -1;
- }
- if (*result != NULL) {
- assert(old_tuple == NULL || old_tuple == *result);
- old_tuple = *result;
- }
- }
- if (old_tuple != NULL) {
- struct memtx_tree_data data;
- data.tuple = old_tuple;
- uint32_t multikey_count =
- tuple_multikey_count(old_tuple, cmp_def);
- for (int i = 0; (uint32_t) i < multikey_count; i++) {
- data.hint = i;
- memtx_tree_delete_value(&index->tree, data, NULL);
- }
- }
- return 0;
-}
-
-/** A dummy key allocator used when removing tuples from an index. */
-static const char *
-func_index_key_dummy_alloc(struct tuple *tuple, const char *key,
- uint32_t key_sz)
-{
- (void) tuple;
- (void) key_sz;
- return (void*) key;
-}
-
-/**
- * An undo entry for multikey functional index replace operation.
- * Used to roll back a failed insert/replace and restore the
- * original key_hint(s) and to commit a completed insert/replace
- * and destruct old tuple key_hint(s).
-*/
-struct func_key_undo {
- /** A link to organize entries in list. */
- struct rlist link;
- /** An inserted record copy. */
- struct memtx_tree_data key;
-};
-
-/** Allocate a new func_key_undo on given region. */
-struct func_key_undo *
-func_key_undo_new(struct region *region)
-{
- size_t size;
- struct func_key_undo *undo = region_alloc_object(region, typeof(*undo),
- &size);
- if (undo == NULL) {
- diag_set(OutOfMemory, size, "region_alloc_object", "undo");
- return NULL;
- }
- return undo;
-}
-
-/**
- * Rollback a sequence of memtx_tree_index_replace_multikey_one
- * insertions for functional index. Routine uses given list to
- * return a given index object in its original state.
- */
-static void
-memtx_tree_func_index_replace_rollback(struct memtx_tree_index *index,
- struct rlist *old_keys,
- struct rlist *new_keys)
-{
- struct func_key_undo *entry;
- rlist_foreach_entry(entry, new_keys, link) {
- memtx_tree_delete_value(&index->tree, entry->key, NULL);
- tuple_chunk_delete(entry->key.tuple,
- (const char *)entry->key.hint);
- }
- rlist_foreach_entry(entry, old_keys, link)
- memtx_tree_insert(&index->tree, entry->key, NULL);
-}
-
-/**
- * @sa memtx_tree_index_replace_multikey().
- * Use the functional index function from the key definition
- * to build a key list. Then each returned key is reallocated in
- * engine's memory as key_hint object and is used as comparison
- * hint.
- * To release key_hint memory in case of replace failure
- * we use a list of undo records which are allocated on region.
- * It is used to restore the original b+* entries with their
- * original key_hint(s) pointers in case of failure and release
- * the now useless hints of old items in case of success.
- */
-static int
-memtx_tree_func_index_replace(struct index *base, struct tuple *old_tuple,
- struct tuple *new_tuple, enum dup_replace_mode mode,
- struct tuple **result)
-{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- struct index_def *index_def = index->base.def;
- assert(index_def->key_def->for_func_index);
-
- int rc = -1;
- struct region *region = &fiber()->gc;
- size_t region_svp = region_used(region);
-
- *result = NULL;
- struct key_list_iterator it;
- if (new_tuple != NULL) {
- struct rlist old_keys, new_keys;
- rlist_create(&old_keys);
- rlist_create(&new_keys);
- if (key_list_iterator_create(&it, new_tuple, index_def, true,
- tuple_chunk_new) != 0)
- goto end;
- int err = 0;
- const char *key;
- struct func_key_undo *undo;
- while ((err = key_list_iterator_next(&it, &key)) == 0 &&
- key != NULL) {
- /* Perform insertion, log it in list. */
- undo = func_key_undo_new(region);
- if (undo == NULL) {
- tuple_chunk_delete(new_tuple, key);
- err = -1;
- break;
- }
- undo->key.tuple = new_tuple;
- undo->key.hint = (hint_t)key;
- rlist_add(&new_keys, &undo->link);
- bool is_multikey_conflict;
- struct memtx_tree_data old_data;
- old_data.tuple = NULL;
- err = memtx_tree_index_replace_multikey_one(index,
- old_tuple, new_tuple,
- mode, (hint_t)key, &old_data,
- &is_multikey_conflict);
- if (err != 0)
- break;
- if (old_data.tuple != NULL && !is_multikey_conflict) {
- undo = func_key_undo_new(region);
- if (undo == NULL) {
- /*
- * Can't append this
- * operation in rollback
- * journal. Roll it back
- * manually.
- */
- memtx_tree_insert(&index->tree,
- old_data, NULL);
- err = -1;
- break;
- }
- undo->key = old_data;
- rlist_add(&old_keys, &undo->link);
- *result = old_data.tuple;
- } else if (old_data.tuple != NULL &&
- is_multikey_conflict) {
- /*
- * Remove the replaced tuple undo
- * from undo list.
- */
- tuple_chunk_delete(new_tuple,
- (const char *)old_data.hint);
- rlist_foreach_entry(undo, &new_keys, link) {
- if (undo->key.hint == old_data.hint) {
- rlist_del(&undo->link);
- break;
- }
- }
- }
- }
- if (key != NULL || err != 0) {
- memtx_tree_func_index_replace_rollback(index,
- &old_keys, &new_keys);
- goto end;
- }
- if (*result != NULL) {
- assert(old_tuple == NULL || old_tuple == *result);
- old_tuple = *result;
- }
- /*
- * Commit changes: release hints for
- * replaced entries.
- */
- rlist_foreach_entry(undo, &old_keys, link) {
- tuple_chunk_delete(undo->key.tuple,
- (const char *)undo->key.hint);
- }
- }
- if (old_tuple != NULL) {
- if (key_list_iterator_create(&it, old_tuple, index_def, false,
- func_index_key_dummy_alloc) != 0)
- goto end;
- struct memtx_tree_data data, deleted_data;
- data.tuple = old_tuple;
- const char *key;
- while (key_list_iterator_next(&it, &key) == 0 && key != NULL) {
- data.hint = (hint_t) key;
- deleted_data.tuple = NULL;
- memtx_tree_delete_value(&index->tree, data,
- &deleted_data);
- if (deleted_data.tuple != NULL) {
- /*
- * Release related hint on
- * successful node deletion.
- */
- tuple_chunk_delete(deleted_data.tuple,
- (const char *)deleted_data.hint);
- }
- }
- assert(key == NULL);
- }
- rc = 0;
-end:
- region_truncate(region, region_svp);
- return rc;
-}
-
-static struct iterator *
-memtx_tree_index_create_iterator(struct index *base, enum iterator_type type,
- const char *key, uint32_t part_count)
-{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- struct memtx_engine *memtx = (struct memtx_engine *)base->engine;
- struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
-
- assert(part_count == 0 || key != NULL);
- if (type > ITER_GT) {
- diag_set(UnsupportedIndexFeature, base->def,
- "requested iterator type");
- return NULL;
- }
-
- if (part_count == 0) {
- /*
- * If no key is specified, downgrade equality
- * iterators to a full range.
- */
- type = iterator_type_is_reverse(type) ? ITER_LE : ITER_GE;
- key = NULL;
- }
-
- struct tree_iterator *it = mempool_alloc(&memtx->iterator_pool);
- if (it == NULL) {
- diag_set(OutOfMemory, sizeof(struct tree_iterator),
- "memtx_tree_index", "iterator");
- return NULL;
- }
- iterator_create(&it->base, base);
- it->pool = &memtx->iterator_pool;
- it->base.next = tree_iterator_start;
- it->base.free = tree_iterator_free;
- it->type = type;
- it->key_data.key = key;
- it->key_data.part_count = part_count;
- it->key_data.hint = key_hint(key, part_count, cmp_def);
- it->tree_iterator = memtx_tree_invalid_iterator();
- it->current.tuple = NULL;
- return (struct iterator *)it;
-}
-
-static void
-memtx_tree_index_begin_build(struct index *base)
-{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- assert(memtx_tree_size(&index->tree) == 0);
- (void)index;
-}
-
-static int
-memtx_tree_index_reserve(struct index *base, uint32_t size_hint)
-{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- if (size_hint < index->build_array_alloc_size)
- return 0;
- struct memtx_tree_data *tmp =
- realloc(index->build_array, size_hint * sizeof(*tmp));
- if (tmp == NULL) {
- diag_set(OutOfMemory, size_hint * sizeof(*tmp),
- "memtx_tree_index", "reserve");
- return -1;
- }
- index->build_array = tmp;
- index->build_array_alloc_size = size_hint;
- return 0;
-}
-
-/** Initialize the next element of the index build_array. */
-static int
-memtx_tree_index_build_array_append(struct memtx_tree_index *index,
- struct tuple *tuple, hint_t hint)
-{
- if (index->build_array == NULL) {
- index->build_array = malloc(MEMTX_EXTENT_SIZE);
- if (index->build_array == NULL) {
- diag_set(OutOfMemory, MEMTX_EXTENT_SIZE,
- "memtx_tree_index", "build_next");
- return -1;
- }
- index->build_array_alloc_size =
- MEMTX_EXTENT_SIZE / sizeof(index->build_array[0]);
- }
- assert(index->build_array_size <= index->build_array_alloc_size);
- if (index->build_array_size == index->build_array_alloc_size) {
- index->build_array_alloc_size = index->build_array_alloc_size +
- DIV_ROUND_UP(index->build_array_alloc_size, 2);
- struct memtx_tree_data *tmp =
- realloc(index->build_array,
- index->build_array_alloc_size * sizeof(*tmp));
- if (tmp == NULL) {
- diag_set(OutOfMemory, index->build_array_alloc_size *
- sizeof(*tmp), "memtx_tree_index", "build_next");
- return -1;
- }
- index->build_array = tmp;
- }
- struct memtx_tree_data *elem =
- &index->build_array[index->build_array_size++];
- elem->tuple = tuple;
- elem->hint = hint;
- return 0;
-}
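(As a side note for reviewers: the growth policy in memtx_tree_index_build_array_append() above can be sketched standalone. This is a simplified illustration, not Tarantool code; DIV_ROUND_UP is reproduced so the sketch is self-contained.)

```c
#include <stddef.h>

/* Sketch of the build_array growth policy: the capacity grows by
 * half of itself, rounded up (a ~1.5x factor), which keeps appends
 * amortized O(1) while wasting less memory than doubling. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static size_t
next_capacity(size_t alloc)
{
	return alloc + DIV_ROUND_UP(alloc, 2);
}
```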
-
-static int
-memtx_tree_index_build_next(struct index *base, struct tuple *tuple)
-{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
- return memtx_tree_index_build_array_append(index, tuple,
- tuple_hint(tuple, cmp_def));
-}
-
-static int
-memtx_tree_index_build_next_multikey(struct index *base, struct tuple *tuple)
-{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
- uint32_t multikey_count = tuple_multikey_count(tuple, cmp_def);
- for (uint32_t multikey_idx = 0; multikey_idx < multikey_count;
- multikey_idx++) {
- if (memtx_tree_index_build_array_append(index, tuple,
- multikey_idx) != 0)
- return -1;
- }
- return 0;
-}
-
-static int
-memtx_tree_func_index_build_next(struct index *base, struct tuple *tuple)
-{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- struct index_def *index_def = index->base.def;
- assert(index_def->key_def->for_func_index);
-
- struct region *region = &fiber()->gc;
- size_t region_svp = region_used(region);
-
- struct key_list_iterator it;
- if (key_list_iterator_create(&it, tuple, index_def, false,
- tuple_chunk_new) != 0)
- return -1;
-
- const char *key;
- uint32_t insert_idx = index->build_array_size;
- while (key_list_iterator_next(&it, &key) == 0 && key != NULL) {
- if (memtx_tree_index_build_array_append(index, tuple,
- (hint_t)key) != 0)
- goto error;
- }
- assert(key == NULL);
- region_truncate(region, region_svp);
- return 0;
-error:
- for (uint32_t i = insert_idx; i < index->build_array_size; i++) {
- tuple_chunk_delete(index->build_array[i].tuple,
- (const char *)index->build_array[i].hint);
- }
- region_truncate(region, region_svp);
- return -1;
-}
-
-/**
- * Process the build_array of the specified index and remove
- * duplicate entries, i.e. those that are equal in terms of the
- * index's cmp_def and refer to the same tuple pointer. The
- * build_array is expected to be sorted.
- */
-static void
-memtx_tree_index_build_array_deduplicate(struct memtx_tree_index *index,
- void (*destroy)(struct tuple *tuple, const char *hint))
-{
- if (index->build_array_size == 0)
- return;
- struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
- size_t w_idx = 0, r_idx = 1;
- while (r_idx < index->build_array_size) {
- if (index->build_array[w_idx].tuple !=
- index->build_array[r_idx].tuple ||
- tuple_compare(index->build_array[w_idx].tuple,
- index->build_array[w_idx].hint,
- index->build_array[r_idx].tuple,
- index->build_array[r_idx].hint,
- cmp_def) != 0) {
- /* Do not override the element itself. */
- if (++w_idx == r_idx)
- continue;
- SWAP(index->build_array[w_idx],
- index->build_array[r_idx]);
- }
- r_idx++;
- }
- if (destroy != NULL) {
- /* Destroy deduplicated entries. */
- for (r_idx = w_idx + 1;
- r_idx < index->build_array_size; r_idx++) {
- destroy(index->build_array[r_idx].tuple,
- (const char *)index->build_array[r_idx].hint);
- }
- }
- index->build_array_size = w_idx + 1;
-}
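(Reviewer's note: the write/read two-pointer pass above can be illustrated on a plain sorted int array. This is a simplified standalone sketch, not Tarantool code; w_idx points at the last kept element, r_idx scans ahead, distinct elements are swapped into the write position, and duplicates accumulate past w_idx.)

```c
#include <stddef.h>

/* Deduplicate a sorted array in place; returns the new size.
 * Mirrors the swap-based scheme above so that the duplicates
 * survive past the returned size and can still be destroyed. */
static size_t
dedup_sorted(int *arr, size_t size)
{
	if (size == 0)
		return 0;
	size_t w_idx = 0, r_idx = 1;
	while (r_idx < size) {
		if (arr[w_idx] != arr[r_idx]) {
			/* Do not override the element itself. */
			if (++w_idx != r_idx) {
				int tmp = arr[w_idx];
				arr[w_idx] = arr[r_idx];
				arr[r_idx] = tmp;
			}
		}
		r_idx++;
	}
	return w_idx + 1;
}
```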
-
-static void
-memtx_tree_index_end_build(struct index *base)
-{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
- qsort_arg(index->build_array, index->build_array_size,
- sizeof(index->build_array[0]), memtx_tree_qcompare, cmp_def);
- if (cmp_def->is_multikey) {
- /*
- * Multikey index may have equal (in terms of
- * cmp_def) keys inserted by different multikey
- * offsets. We must deduplicate them because
- * the following memtx_tree_build assumes that
- * all keys are unique.
- */
- memtx_tree_index_build_array_deduplicate(index, NULL);
- } else if (cmp_def->for_func_index) {
- memtx_tree_index_build_array_deduplicate(index,
- tuple_chunk_delete);
- }
- memtx_tree_build(&index->tree, index->build_array,
- index->build_array_size);
-
- free(index->build_array);
- index->build_array = NULL;
- index->build_array_size = 0;
- index->build_array_alloc_size = 0;
-}
-
-struct tree_snapshot_iterator {
- struct snapshot_iterator base;
- struct memtx_tree_index *index;
- struct memtx_tree_iterator tree_iterator;
- struct memtx_tx_snapshot_cleaner cleaner;
-};
-
-static void
-tree_snapshot_iterator_free(struct snapshot_iterator *iterator)
-{
- assert(iterator->free == tree_snapshot_iterator_free);
- struct tree_snapshot_iterator *it =
- (struct tree_snapshot_iterator *)iterator;
- memtx_leave_delayed_free_mode((struct memtx_engine *)
- it->index->base.engine);
- memtx_tree_iterator_destroy(&it->index->tree, &it->tree_iterator);
- index_unref(&it->index->base);
- memtx_tx_snapshot_cleaner_destroy(&it->cleaner);
- free(iterator);
-}
-
-static int
-tree_snapshot_iterator_next(struct snapshot_iterator *iterator,
- const char **data, uint32_t *size)
-{
- assert(iterator->free == tree_snapshot_iterator_free);
- struct tree_snapshot_iterator *it =
- (struct tree_snapshot_iterator *)iterator;
- struct memtx_tree *tree = &it->index->tree;
-
- while (true) {
- struct memtx_tree_data *res =
- memtx_tree_iterator_get_elem(tree, &it->tree_iterator);
-
- if (res == NULL) {
- *data = NULL;
- return 0;
- }
-
- memtx_tree_iterator_next(tree, &it->tree_iterator);
-
- struct tuple *tuple = res->tuple;
- tuple = memtx_tx_snapshot_clarify(&it->cleaner, tuple);
-
- if (tuple != NULL) {
- *data = tuple_data_range(tuple, size);
- return 0;
- }
- }
-
- return 0;
-}
-
-/**
- * Create an ALL iterator with personal read view so further
- * index modifications will not affect the iteration results.
- * Must be destroyed by iterator->free after usage.
- */
-static struct snapshot_iterator *
-memtx_tree_index_create_snapshot_iterator(struct index *base)
-{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- struct tree_snapshot_iterator *it =
- (struct tree_snapshot_iterator *) calloc(1, sizeof(*it));
- if (it == NULL) {
- diag_set(OutOfMemory, sizeof(struct tree_snapshot_iterator),
- "memtx_tree_index", "create_snapshot_iterator");
- return NULL;
- }
-
- struct space *space = space_cache_find(base->def->space_id);
- if (memtx_tx_snapshot_cleaner_create(&it->cleaner, space,
- "memtx_tree_index") != 0) {
- free(it);
- return NULL;
- }
-
- it->base.free = tree_snapshot_iterator_free;
- it->base.next = tree_snapshot_iterator_next;
- it->index = index;
- index_ref(base);
- it->tree_iterator = memtx_tree_iterator_first(&index->tree);
- memtx_tree_iterator_freeze(&index->tree, &it->tree_iterator);
- memtx_enter_delayed_free_mode((struct memtx_engine *)base->engine);
- return (struct snapshot_iterator *) it;
-}
-
-static const struct index_vtab memtx_tree_index_vtab = {
- /* .destroy = */ memtx_tree_index_destroy,
- /* .commit_create = */ generic_index_commit_create,
- /* .abort_create = */ generic_index_abort_create,
- /* .commit_modify = */ generic_index_commit_modify,
- /* .commit_drop = */ generic_index_commit_drop,
- /* .update_def = */ memtx_tree_index_update_def,
- /* .depends_on_pk = */ memtx_tree_index_depends_on_pk,
- /* .def_change_requires_rebuild = */
- memtx_index_def_change_requires_rebuild,
- /* .size = */ memtx_tree_index_size,
- /* .bsize = */ memtx_tree_index_bsize,
- /* .min = */ generic_index_min,
- /* .max = */ generic_index_max,
- /* .random = */ memtx_tree_index_random,
- /* .count = */ memtx_tree_index_count,
- /* .get = */ memtx_tree_index_get,
- /* .replace = */ memtx_tree_index_replace,
- /* .create_iterator = */ memtx_tree_index_create_iterator,
- /* .create_snapshot_iterator = */
- memtx_tree_index_create_snapshot_iterator,
- /* .stat = */ generic_index_stat,
- /* .compact = */ generic_index_compact,
- /* .reset_stat = */ generic_index_reset_stat,
- /* .begin_build = */ memtx_tree_index_begin_build,
- /* .reserve = */ memtx_tree_index_reserve,
- /* .build_next = */ memtx_tree_index_build_next,
- /* .end_build = */ memtx_tree_index_end_build,
-};
-
-static const struct index_vtab memtx_tree_index_multikey_vtab = {
- /* .destroy = */ memtx_tree_index_destroy,
- /* .commit_create = */ generic_index_commit_create,
- /* .abort_create = */ generic_index_abort_create,
- /* .commit_modify = */ generic_index_commit_modify,
- /* .commit_drop = */ generic_index_commit_drop,
- /* .update_def = */ memtx_tree_index_update_def,
- /* .depends_on_pk = */ memtx_tree_index_depends_on_pk,
- /* .def_change_requires_rebuild = */
- memtx_index_def_change_requires_rebuild,
- /* .size = */ memtx_tree_index_size,
- /* .bsize = */ memtx_tree_index_bsize,
- /* .min = */ generic_index_min,
- /* .max = */ generic_index_max,
- /* .random = */ memtx_tree_index_random,
- /* .count = */ memtx_tree_index_count,
- /* .get = */ memtx_tree_index_get,
- /* .replace = */ memtx_tree_index_replace_multikey,
- /* .create_iterator = */ memtx_tree_index_create_iterator,
- /* .create_snapshot_iterator = */
- memtx_tree_index_create_snapshot_iterator,
- /* .stat = */ generic_index_stat,
- /* .compact = */ generic_index_compact,
- /* .reset_stat = */ generic_index_reset_stat,
- /* .begin_build = */ memtx_tree_index_begin_build,
- /* .reserve = */ memtx_tree_index_reserve,
- /* .build_next = */ memtx_tree_index_build_next_multikey,
- /* .end_build = */ memtx_tree_index_end_build,
-};
-
-static const struct index_vtab memtx_tree_func_index_vtab = {
- /* .destroy = */ memtx_tree_index_destroy,
- /* .commit_create = */ generic_index_commit_create,
- /* .abort_create = */ generic_index_abort_create,
- /* .commit_modify = */ generic_index_commit_modify,
- /* .commit_drop = */ generic_index_commit_drop,
- /* .update_def = */ memtx_tree_index_update_def,
- /* .depends_on_pk = */ memtx_tree_index_depends_on_pk,
- /* .def_change_requires_rebuild = */
- memtx_index_def_change_requires_rebuild,
- /* .size = */ memtx_tree_index_size,
- /* .bsize = */ memtx_tree_index_bsize,
- /* .min = */ generic_index_min,
- /* .max = */ generic_index_max,
- /* .random = */ memtx_tree_index_random,
- /* .count = */ memtx_tree_index_count,
- /* .get = */ memtx_tree_index_get,
- /* .replace = */ memtx_tree_func_index_replace,
- /* .create_iterator = */ memtx_tree_index_create_iterator,
- /* .create_snapshot_iterator = */
- memtx_tree_index_create_snapshot_iterator,
- /* .stat = */ generic_index_stat,
- /* .compact = */ generic_index_compact,
- /* .reset_stat = */ generic_index_reset_stat,
- /* .begin_build = */ memtx_tree_index_begin_build,
- /* .reserve = */ memtx_tree_index_reserve,
- /* .build_next = */ memtx_tree_func_index_build_next,
- /* .end_build = */ memtx_tree_index_end_build,
-};
-
-/**
- * A disabled index vtab provides safe dummy methods for
- * 'inactive' index. It is required to perform a fault-tolerant
- * recovery from a snapshot in case of func_index (because the
- * key definition is not completely initialized at that moment).
- */
-static const struct index_vtab memtx_tree_disabled_index_vtab = {
- /* .destroy = */ memtx_tree_index_destroy,
- /* .commit_create = */ generic_index_commit_create,
- /* .abort_create = */ generic_index_abort_create,
- /* .commit_modify = */ generic_index_commit_modify,
- /* .commit_drop = */ generic_index_commit_drop,
- /* .update_def = */ generic_index_update_def,
- /* .depends_on_pk = */ generic_index_depends_on_pk,
- /* .def_change_requires_rebuild = */
- generic_index_def_change_requires_rebuild,
- /* .size = */ generic_index_size,
- /* .bsize = */ generic_index_bsize,
- /* .min = */ generic_index_min,
- /* .max = */ generic_index_max,
- /* .random = */ generic_index_random,
- /* .count = */ generic_index_count,
- /* .get = */ generic_index_get,
- /* .replace = */ disabled_index_replace,
- /* .create_iterator = */ generic_index_create_iterator,
- /* .create_snapshot_iterator = */
- generic_index_create_snapshot_iterator,
- /* .stat = */ generic_index_stat,
- /* .compact = */ generic_index_compact,
- /* .reset_stat = */ generic_index_reset_stat,
- /* .begin_build = */ generic_index_begin_build,
- /* .reserve = */ generic_index_reserve,
- /* .build_next = */ disabled_index_build_next,
- /* .end_build = */ generic_index_end_build,
-};
-
-struct index *
-memtx_tree_index_new(struct memtx_engine *memtx, struct index_def *def)
-{
- struct memtx_tree_index *index =
- (struct memtx_tree_index *)calloc(1, sizeof(*index));
- if (index == NULL) {
- diag_set(OutOfMemory, sizeof(*index),
- "malloc", "struct memtx_tree_index");
- return NULL;
- }
- const struct index_vtab *vtab;
- if (def->key_def->for_func_index) {
- if (def->key_def->func_index_func == NULL)
- vtab = &memtx_tree_disabled_index_vtab;
- else
- vtab = &memtx_tree_func_index_vtab;
- } else if (def->key_def->is_multikey) {
- vtab = &memtx_tree_index_multikey_vtab;
- } else {
- vtab = &memtx_tree_index_vtab;
- }
- if (index_create(&index->base, (struct engine *)memtx,
- vtab, def) != 0) {
- free(index);
- return NULL;
- }
-
- /* See comment to memtx_tree_index_update_def(). */
- struct key_def *cmp_def;
- cmp_def = def->opts.is_unique && !def->key_def->is_nullable ?
- index->base.def->key_def : index->base.def->cmp_def;
-
- memtx_tree_create(&index->tree, cmp_def, memtx_index_extent_alloc,
- memtx_index_extent_free, memtx);
- return &index->base;
-}
diff --git a/src/box/memtx_tree.cc b/src/box/memtx_tree.cc
new file mode 100644
index 0000000..d3b993b
--- /dev/null
+++ b/src/box/memtx_tree.cc
@@ -0,0 +1,1526 @@
+/*
+ * Copyright 2010-2016, Tarantool AUTHORS, please see AUTHORS file.
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the
+ * following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY <COPYRIGHT HOLDER> ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
+ * <COPYRIGHT HOLDER> OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
+ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
+ * THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+#include "memtx_tree.h"
+#include "memtx_engine.h"
+#include "space.h"
+#include "schema.h" /* space_by_id(), space_cache_find() */
+#include "errinj.h"
+#include "memory.h"
+#include "fiber.h"
+#include "key_list.h"
+#include "tuple.h"
+#include "txn.h"
+#include "memtx_tx.h"
+#include <third_party/qsort_arg.h>
+#include <small/mempool.h>
+
+/**
+ * Struct that is used as a key in BPS tree definition.
+ */
+struct memtx_tree_key_data {
+ /** Sequence of msgpacked search fields. */
+ const char *key;
+ /** Number of msgpacked search fields. */
+ uint32_t part_count;
+ /** Comparison hint, see tuple_hint(). */
+ hint_t hint;
+};
+
+/**
+ * Struct that is used as an element in BPS tree definition.
+ */
+struct memtx_tree_data {
+	/** Tuple that this node represents. */
+ struct tuple *tuple;
+ /** Comparison hint, see key_hint(). */
+ hint_t hint;
+};
+
+/**
+ * Test whether BPS tree elements are identical i.e. represent
+ * the same tuple at the same position in the tree.
+ * @param a - First BPS tree element to compare.
+ * @param b - Second BPS tree element to compare.
+ * @retval true - When elements a and b are identical.
+ * @retval false - Otherwise.
+ */
+static bool
+memtx_tree_data_is_equal(const struct memtx_tree_data *a,
+ const struct memtx_tree_data *b)
+{
+ return a->tuple == b->tuple;
+}
+
+#define BPS_TREE_NAME memtx_tree
+#define BPS_TREE_BLOCK_SIZE (512)
+#define BPS_TREE_EXTENT_SIZE MEMTX_EXTENT_SIZE
+#define BPS_TREE_COMPARE(a, b, arg)\
+ tuple_compare((&a)->tuple, (&a)->hint, (&b)->tuple, (&b)->hint, arg)
+#define BPS_TREE_COMPARE_KEY(a, b, arg)\
+ tuple_compare_with_key((&a)->tuple, (&a)->hint, (b)->key,\
+ (b)->part_count, (b)->hint, arg)
+#define BPS_TREE_IS_IDENTICAL(a, b) memtx_tree_data_is_equal(&a, &b)
+#define BPS_TREE_NO_DEBUG 1
+#define bps_tree_elem_t struct memtx_tree_data
+#define bps_tree_key_t struct memtx_tree_key_data *
+#define bps_tree_arg_t struct key_def *
+
+#include "salad/bps_tree.h"
+
+#undef BPS_TREE_NAME
+#undef BPS_TREE_BLOCK_SIZE
+#undef BPS_TREE_EXTENT_SIZE
+#undef BPS_TREE_COMPARE
+#undef BPS_TREE_COMPARE_KEY
+#undef BPS_TREE_IS_IDENTICAL
+#undef BPS_TREE_NO_DEBUG
+#undef bps_tree_elem_t
+#undef bps_tree_key_t
+#undef bps_tree_arg_t
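(Reviewer's note: the define/include/undef block above instantiates the generic BPS tree. A toy standalone illustration of this "template via preprocessor" idiom follows; all names here are hypothetical and this is not the real salad/bps_tree.h machinery.)

```c
/* Configuration macros parameterize generic code, the instantiation
 * token-pastes a prefix into every symbol, and the macros are
 * #undef'ed afterwards so another instantiation with different
 * parameters can follow in the same translation unit. */
#define CONCAT_(a, b) a##_##b
#define CONCAT(a, b) CONCAT_(a, b)

/* "Generic" code: a max() parameterized by name, element type and
 * comparator, mimicking BPS_TREE_NAME / bps_tree_elem_t /
 * BPS_TREE_COMPARE. */
#define DEFINE_MAX(NAME, TYPE, CMP)				\
static TYPE							\
CONCAT(NAME, max)(TYPE a, TYPE b)				\
{								\
	return CMP(a, b) >= 0 ? a : b;				\
}

static int
int_cmp(int a, int b)
{
	return (a > b) - (a < b);
}

/* Instantiate: this generates a function named itree_max(). */
DEFINE_MAX(itree, int, int_cmp)
```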
+
+struct memtx_tree_index {
+ struct index base;
+ struct memtx_tree tree;
+ struct memtx_tree_data *build_array;
+ size_t build_array_size, build_array_alloc_size;
+ struct memtx_gc_task gc_task;
+ struct memtx_tree_iterator gc_iterator;
+};
+
+/* {{{ Utilities. *************************************************/
+
+static inline struct key_def *
+memtx_tree_cmp_def(struct memtx_tree *tree)
+{
+ return tree->arg;
+}
+
+static int
+memtx_tree_qcompare(const void* a, const void *b, void *c)
+{
+ const struct memtx_tree_data *data_a = (struct memtx_tree_data *)a;
+ const struct memtx_tree_data *data_b = (struct memtx_tree_data *)b;
+ struct key_def *key_def = (struct key_def *)c;
+ return tuple_compare(data_a->tuple, data_a->hint, data_b->tuple,
+ data_b->hint, key_def);
+}
+
+/* {{{ MemtxTree Iterators ****************************************/
+struct tree_iterator {
+ struct iterator base;
+ struct memtx_tree_iterator tree_iterator;
+ enum iterator_type type;
+ struct memtx_tree_key_data key_data;
+ struct memtx_tree_data current;
+ /** Memory pool the iterator was allocated from. */
+ struct mempool *pool;
+};
+
+static_assert(sizeof(struct tree_iterator) <= MEMTX_ITERATOR_SIZE,
+ "sizeof(struct tree_iterator) must be less than or equal "
+ "to MEMTX_ITERATOR_SIZE");
+
+static void
+tree_iterator_free(struct iterator *iterator);
+
+static inline struct tree_iterator *
+tree_iterator(struct iterator *it)
+{
+ assert(it->free == tree_iterator_free);
+ return (struct tree_iterator *) it;
+}
+
+static void
+tree_iterator_free(struct iterator *iterator)
+{
+ struct tree_iterator *it = tree_iterator(iterator);
+ struct tuple *tuple = it->current.tuple;
+ if (tuple != NULL)
+ tuple_unref(tuple);
+ mempool_free(it->pool, it);
+}
+
+static int
+tree_iterator_dummie(struct iterator *iterator, struct tuple **ret)
+{
+ (void)iterator;
+ *ret = NULL;
+ return 0;
+}
+
+static int
+tree_iterator_next_base(struct iterator *iterator, struct tuple **ret)
+{
+ struct memtx_tree_index *index =
+ (struct memtx_tree_index *)iterator->index;
+ struct tree_iterator *it = tree_iterator(iterator);
+ assert(it->current.tuple != NULL);
+ struct memtx_tree_data *check =
+ memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
+ if (check == NULL || !memtx_tree_data_is_equal(check, &it->current)) {
+ it->tree_iterator = memtx_tree_upper_bound_elem(&index->tree,
+ it->current, NULL);
+ } else {
+ memtx_tree_iterator_next(&index->tree, &it->tree_iterator);
+ }
+ tuple_unref(it->current.tuple);
+ struct memtx_tree_data *res =
+ memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
+ if (res == NULL) {
+ iterator->next = tree_iterator_dummie;
+ it->current.tuple = NULL;
+ *ret = NULL;
+ } else {
+ *ret = res->tuple;
+ tuple_ref(*ret);
+ it->current = *res;
+ }
+ return 0;
+}
+
+static int
+tree_iterator_prev_base(struct iterator *iterator, struct tuple **ret)
+{
+ struct memtx_tree_index *index =
+ (struct memtx_tree_index *)iterator->index;
+ struct tree_iterator *it = tree_iterator(iterator);
+ assert(it->current.tuple != NULL);
+ struct memtx_tree_data *check =
+ memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
+ if (check == NULL || !memtx_tree_data_is_equal(check, &it->current)) {
+ it->tree_iterator = memtx_tree_lower_bound_elem(&index->tree,
+ it->current, NULL);
+ }
+ memtx_tree_iterator_prev(&index->tree, &it->tree_iterator);
+ tuple_unref(it->current.tuple);
+ struct memtx_tree_data *res =
+ memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
+ if (!res) {
+ iterator->next = tree_iterator_dummie;
+ it->current.tuple = NULL;
+ *ret = NULL;
+ } else {
+ *ret = res->tuple;
+ tuple_ref(*ret);
+ it->current = *res;
+ }
+ return 0;
+}
+
+static int
+tree_iterator_next_equal_base(struct iterator *iterator, struct tuple **ret)
+{
+ struct memtx_tree_index *index =
+ (struct memtx_tree_index *)iterator->index;
+ struct tree_iterator *it = tree_iterator(iterator);
+ assert(it->current.tuple != NULL);
+ struct memtx_tree_data *check =
+ memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
+ if (check == NULL || !memtx_tree_data_is_equal(check, &it->current)) {
+ it->tree_iterator = memtx_tree_upper_bound_elem(&index->tree,
+ it->current, NULL);
+ } else {
+ memtx_tree_iterator_next(&index->tree, &it->tree_iterator);
+ }
+ tuple_unref(it->current.tuple);
+ struct memtx_tree_data *res =
+ memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
+ /* Use user key def to save a few loops. */
+ if (res == NULL ||
+ tuple_compare_with_key(res->tuple, res->hint,
+ it->key_data.key,
+ it->key_data.part_count,
+ it->key_data.hint,
+ index->base.def->key_def) != 0) {
+ iterator->next = tree_iterator_dummie;
+ it->current.tuple = NULL;
+ *ret = NULL;
+ } else {
+ *ret = res->tuple;
+ tuple_ref(*ret);
+ it->current = *res;
+ }
+ return 0;
+}
+
+static int
+tree_iterator_prev_equal_base(struct iterator *iterator, struct tuple **ret)
+{
+ struct memtx_tree_index *index =
+ (struct memtx_tree_index *)iterator->index;
+ struct tree_iterator *it = tree_iterator(iterator);
+ assert(it->current.tuple != NULL);
+ struct memtx_tree_data *check =
+ memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
+ if (check == NULL || !memtx_tree_data_is_equal(check, &it->current)) {
+ it->tree_iterator = memtx_tree_lower_bound_elem(&index->tree,
+ it->current, NULL);
+ }
+ memtx_tree_iterator_prev(&index->tree, &it->tree_iterator);
+ tuple_unref(it->current.tuple);
+ struct memtx_tree_data *res =
+ memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
+ /* Use user key def to save a few loops. */
+ if (res == NULL ||
+ tuple_compare_with_key(res->tuple, res->hint,
+ it->key_data.key,
+ it->key_data.part_count,
+ it->key_data.hint,
+ index->base.def->key_def) != 0) {
+ iterator->next = tree_iterator_dummie;
+ it->current.tuple = NULL;
+ *ret = NULL;
+ } else {
+ *ret = res->tuple;
+ tuple_ref(*ret);
+ it->current = *res;
+ }
+ return 0;
+}
+
+#define WRAP_ITERATOR_METHOD(name) \
+static int \
+name(struct iterator *iterator, struct tuple **ret) \
+{ \
+ struct memtx_tree *tree = \
+ &((struct memtx_tree_index *)iterator->index)->tree; \
+ struct tree_iterator *it = tree_iterator(iterator); \
+ struct memtx_tree_iterator *ti = &it->tree_iterator; \
+ uint32_t iid = iterator->index->def->iid; \
+ bool is_multikey = iterator->index->def->key_def->is_multikey; \
+ struct txn *txn = in_txn(); \
+ struct space *space = space_by_id(iterator->space_id); \
+ bool is_rw = txn != NULL; \
+ do { \
+ int rc = name##_base(iterator, ret); \
+ if (rc != 0 || *ret == NULL) \
+ return rc; \
+ uint32_t mk_index = 0; \
+ if (is_multikey) { \
+ struct memtx_tree_data *check = \
+ memtx_tree_iterator_get_elem(tree, ti); \
+ assert(check != NULL); \
+ mk_index = check->hint; \
+ } \
+ *ret = memtx_tx_tuple_clarify(txn, space, *ret, \
+ iid, mk_index, is_rw); \
+ } while (*ret == NULL); \
+ tuple_unref(it->current.tuple); \
+ it->current.tuple = *ret; \
+ tuple_ref(it->current.tuple); \
+ return 0; \
+} \
+struct forgot_to_add_semicolon
+
+WRAP_ITERATOR_METHOD(tree_iterator_next);
+WRAP_ITERATOR_METHOD(tree_iterator_prev);
+WRAP_ITERATOR_METHOD(tree_iterator_next_equal);
+WRAP_ITERATOR_METHOD(tree_iterator_prev_equal);
+
+#undef WRAP_ITERATOR_METHOD
+
+static void
+tree_iterator_set_next_method(struct tree_iterator *it)
+{
+ assert(it->current.tuple != NULL);
+ switch (it->type) {
+ case ITER_EQ:
+ it->base.next = tree_iterator_next_equal;
+ break;
+ case ITER_REQ:
+ it->base.next = tree_iterator_prev_equal;
+ break;
+ case ITER_ALL:
+ it->base.next = tree_iterator_next;
+ break;
+ case ITER_LT:
+ case ITER_LE:
+ it->base.next = tree_iterator_prev;
+ break;
+ case ITER_GE:
+ case ITER_GT:
+ it->base.next = tree_iterator_next;
+ break;
+ default:
+ /* The type was checked in initIterator */
+ assert(false);
+ }
+}
+
+static int
+tree_iterator_start(struct iterator *iterator, struct tuple **ret)
+{
+ *ret = NULL;
+ struct memtx_tree_index *index =
+ (struct memtx_tree_index *)iterator->index;
+ struct tree_iterator *it = tree_iterator(iterator);
+ it->base.next = tree_iterator_dummie;
+ struct memtx_tree *tree = &index->tree;
+ enum iterator_type type = it->type;
+ bool exact = false;
+ assert(it->current.tuple == NULL);
+	if (it->key_data.key == NULL) {
+ if (iterator_type_is_reverse(it->type))
+ it->tree_iterator = memtx_tree_iterator_last(tree);
+ else
+ it->tree_iterator = memtx_tree_iterator_first(tree);
+ } else {
+ if (type == ITER_ALL || type == ITER_EQ ||
+ type == ITER_GE || type == ITER_LT) {
+ it->tree_iterator =
+ memtx_tree_lower_bound(tree, &it->key_data,
+ &exact);
+ if (type == ITER_EQ && !exact)
+ return 0;
+ } else { // ITER_GT, ITER_REQ, ITER_LE
+ it->tree_iterator =
+ memtx_tree_upper_bound(tree, &it->key_data,
+ &exact);
+ if (type == ITER_REQ && !exact)
+ return 0;
+ }
+ if (iterator_type_is_reverse(type)) {
+ /*
+ * Because of limitations of tree search API we use use
+ * lower_bound for LT search and upper_bound for LE
+ * and REQ searches. Thus we found position to the
+ * right of the target one. Let's make a step to the
+ * left to reach target position.
+ * If we found an invalid iterator all the elements in
+ * the tree are less (less or equal) to the key, and
+ * iterator_next call will convert the iterator to the
+ * last position in the tree, that's what we need.
+ */
+ memtx_tree_iterator_prev(tree, &it->tree_iterator);
+ }
+ }
+
+ struct memtx_tree_data *res = memtx_tree_iterator_get_elem(tree,
+ &it->tree_iterator);
+ if (!res)
+ return 0;
+ *ret = res->tuple;
+ tuple_ref(*ret);
+ it->current = *res;
+ tree_iterator_set_next_method(it);
+
+ uint32_t iid = iterator->index->def->iid;
+ bool is_multikey = iterator->index->def->key_def->is_multikey;
+ struct txn *txn = in_txn();
+ struct space *space = space_by_id(iterator->space_id);
+ bool is_rw = txn != NULL;
+	uint32_t mk_index = is_multikey ? (uint32_t)res->hint : 0;
+ *ret = memtx_tx_tuple_clarify(txn, space, *ret, iid, mk_index, is_rw);
+ if (*ret == NULL) {
+ return iterator->next(iterator, ret);
+ } else {
+ tuple_unref(it->current.tuple);
+ it->current.tuple = *ret;
+ tuple_ref(it->current.tuple);
+ }
+
+ return 0;
+}
+
+/* }}} */
+
+/* {{{ MemtxTree **********************************************************/
+
+static void
+memtx_tree_index_free(struct memtx_tree_index *index)
+{
+ memtx_tree_destroy(&index->tree);
+ free(index->build_array);
+ free(index);
+}
+
+static void
+memtx_tree_index_gc_run(struct memtx_gc_task *task, bool *done)
+{
+ /*
+ * Yield every 1K tuples to keep latency < 0.1 ms.
+ * Yield more often in debug mode.
+ */
+#ifdef NDEBUG
+ enum { YIELD_LOOPS = 1000 };
+#else
+ enum { YIELD_LOOPS = 10 };
+#endif
+
+ struct memtx_tree_index *index = container_of(task,
+ struct memtx_tree_index, gc_task);
+ struct memtx_tree *tree = &index->tree;
+ struct memtx_tree_iterator *itr = &index->gc_iterator;
+
+ unsigned int loops = 0;
+ while (!memtx_tree_iterator_is_invalid(itr)) {
+ struct memtx_tree_data *res =
+ memtx_tree_iterator_get_elem(tree, itr);
+ memtx_tree_iterator_next(tree, itr);
+ tuple_unref(res->tuple);
+ if (++loops >= YIELD_LOOPS) {
+ *done = false;
+ return;
+ }
+ }
+ *done = true;
+}
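The run/done contract used by `memtx_tree_index_gc_run()` above (process a bounded batch per call, report completion through `*done`) can be sketched outside the engine. This is a minimal illustration with invented `toy_*` names, not memtx API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical task state: `next` items already processed out of `total`. */
struct toy_gc_task {
	size_t next;
	size_t total;
};

/*
 * Mirrors the run() contract: process at most `batch` items, then
 * yield with *done = false; report *done = true only once the whole
 * range has been walked.
 */
void
toy_gc_run(struct toy_gc_task *task, size_t batch, bool *done)
{
	size_t loops = 0;
	while (task->next < task->total) {
		task->next++; /* stands in for tuple_unref() */
		if (++loops >= batch) {
			*done = false; /* yield; the caller reschedules */
			return;
		}
	}
	*done = true;
}
```

As in the real code, a run that ends exactly on a batch boundary reports `*done = false`; the next call finds nothing left and finishes immediately.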
+
+static void
+memtx_tree_index_gc_free(struct memtx_gc_task *task)
+{
+ struct memtx_tree_index *index = container_of(task,
+ struct memtx_tree_index, gc_task);
+ memtx_tree_index_free(index);
+}
+
+static const struct memtx_gc_task_vtab memtx_tree_index_gc_vtab = {
+ .run = memtx_tree_index_gc_run,
+ .free = memtx_tree_index_gc_free,
+};
+
+static void
+memtx_tree_index_destroy(struct index *base)
+{
+ struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_engine *memtx = (struct memtx_engine *)base->engine;
+ if (base->def->iid == 0) {
+ /*
+ * Primary index. We need to free all tuples stored
+ * in the index, which may take a while. Schedule a
+ * background task in order not to block tx thread.
+ */
+ index->gc_task.vtab = &memtx_tree_index_gc_vtab;
+ index->gc_iterator = memtx_tree_iterator_first(&index->tree);
+ memtx_engine_schedule_gc(memtx, &index->gc_task);
+ } else {
+ /*
+ * Secondary index. Destruction is fast, no need to
+ * hand over to background fiber.
+ */
+ memtx_tree_index_free(index);
+ }
+}
+
+static void
+memtx_tree_index_update_def(struct index *base)
+{
+ struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct index_def *def = base->def;
+ /*
+ * We use extended key def for non-unique and nullable
+ * indexes. Unique but nullable index can store multiple
+ * NULLs. To correctly compare these NULLs extended key
+ * def must be used. For details @sa tuple_compare.cc.
+ */
+ index->tree.arg = def->opts.is_unique && !def->key_def->is_nullable ?
+ def->key_def : def->cmp_def;
+}
+
+static bool
+memtx_tree_index_depends_on_pk(struct index *base)
+{
+ struct index_def *def = base->def;
+ /* See comment to memtx_tree_index_update_def(). */
+ return !def->opts.is_unique || def->key_def->is_nullable;
+}
+
+static ssize_t
+memtx_tree_index_size(struct index *base)
+{
+ struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ return memtx_tree_size(&index->tree);
+}
+
+static ssize_t
+memtx_tree_index_bsize(struct index *base)
+{
+ struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ return memtx_tree_mem_used(&index->tree);
+}
+
+static int
+memtx_tree_index_random(struct index *base, uint32_t rnd, struct tuple **result)
+{
+ struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_tree_data *res = memtx_tree_random(&index->tree, rnd);
+ *result = res != NULL ? res->tuple : NULL;
+ return 0;
+}
+
+static ssize_t
+memtx_tree_index_count(struct index *base, enum iterator_type type,
+ const char *key, uint32_t part_count)
+{
+ if (type == ITER_ALL)
+ return memtx_tree_index_size(base); /* optimization */
+ return generic_index_count(base, type, key, part_count);
+}
+
+static int
+memtx_tree_index_get(struct index *base, const char *key,
+ uint32_t part_count, struct tuple **result)
+{
+ assert(base->def->opts.is_unique &&
+ part_count == base->def->key_def->part_count);
+ struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
+ struct memtx_tree_key_data key_data;
+ key_data.key = key;
+ key_data.part_count = part_count;
+ key_data.hint = key_hint(key, part_count, cmp_def);
+ struct memtx_tree_data *res = memtx_tree_find(&index->tree, &key_data);
+ if (res == NULL) {
+ *result = NULL;
+ return 0;
+ }
+ struct txn *txn = in_txn();
+ struct space *space = space_by_id(base->def->space_id);
+ bool is_rw = txn != NULL;
+	bool is_multikey = base->def->key_def->is_multikey;
+	uint32_t mk_index = is_multikey ? (uint32_t)res->hint : 0;
+ *result = memtx_tx_tuple_clarify(txn, space, res->tuple, base->def->iid,
+ mk_index, is_rw);
+ return 0;
+}
+
+static int
+memtx_tree_index_replace(struct index *base, struct tuple *old_tuple,
+ struct tuple *new_tuple, enum dup_replace_mode mode,
+ struct tuple **result)
+{
+ struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
+ if (new_tuple) {
+ struct memtx_tree_data new_data;
+ new_data.tuple = new_tuple;
+ new_data.hint = tuple_hint(new_tuple, cmp_def);
+ struct memtx_tree_data dup_data;
+ dup_data.tuple = NULL;
+
+ /* Try to optimistically replace the new_tuple. */
+ int tree_res = memtx_tree_insert(&index->tree, new_data,
+ &dup_data);
+ if (tree_res) {
+ diag_set(OutOfMemory, MEMTX_EXTENT_SIZE,
+ "memtx_tree_index", "replace");
+ return -1;
+ }
+
+ uint32_t errcode = replace_check_dup(old_tuple,
+ dup_data.tuple, mode);
+ if (errcode) {
+ memtx_tree_delete(&index->tree, new_data);
+ if (dup_data.tuple != NULL)
+ memtx_tree_insert(&index->tree, dup_data, NULL);
+ struct space *sp = space_cache_find(base->def->space_id);
+ if (sp != NULL)
+ diag_set(ClientError, errcode, base->def->name,
+ space_name(sp));
+ return -1;
+ }
+ if (dup_data.tuple != NULL) {
+ *result = dup_data.tuple;
+ return 0;
+ }
+ }
+ if (old_tuple) {
+ struct memtx_tree_data old_data;
+ old_data.tuple = old_tuple;
+ old_data.hint = tuple_hint(old_tuple, cmp_def);
+ memtx_tree_delete(&index->tree, old_data);
+ }
+ *result = old_tuple;
+ return 0;
+}
+
+/**
+ * Perform a tuple insertion for the given multikey index slot.
+ * In case of replacement, the old tuple's entries are deleted
+ * for all of its multikey index slots.
+ */
+static int
+memtx_tree_index_replace_multikey_one(struct memtx_tree_index *index,
+ struct tuple *old_tuple, struct tuple *new_tuple,
+ enum dup_replace_mode mode, hint_t hint,
+ struct memtx_tree_data *replaced_data,
+ bool *is_multikey_conflict)
+{
+ struct memtx_tree_data new_data, dup_data;
+ new_data.tuple = new_tuple;
+ new_data.hint = hint;
+ dup_data.tuple = NULL;
+ *is_multikey_conflict = false;
+ if (memtx_tree_insert(&index->tree, new_data, &dup_data) != 0) {
+ diag_set(OutOfMemory, MEMTX_EXTENT_SIZE, "memtx_tree_index",
+ "replace");
+ return -1;
+ }
+ int errcode = 0;
+ if (dup_data.tuple == new_tuple) {
+ /*
+ * When tuple contains the same key multiple
+ * times, the previous key occurrence is pushed
+ * out of the index.
+ */
+ *is_multikey_conflict = true;
+ } else if ((errcode = replace_check_dup(old_tuple, dup_data.tuple,
+ mode)) != 0) {
+ /* Rollback replace. */
+ memtx_tree_delete(&index->tree, new_data);
+ if (dup_data.tuple != NULL)
+ memtx_tree_insert(&index->tree, dup_data, NULL);
+ struct space *sp = space_cache_find(index->base.def->space_id);
+ if (sp != NULL) {
+ diag_set(ClientError, errcode, index->base.def->name,
+ space_name(sp));
+ }
+ return -1;
+ }
+ *replaced_data = dup_data;
+ return 0;
+}
+
+/**
+ * Rollback the sequence of memtx_tree_index_replace_multikey_one
+ * insertions with multikey indexes [0, err_multikey_idx - 1],
+ * where err_multikey_idx is the first multikey index at which
+ * an error was raised.
+ *
+ * This routine can't fail because all replaced_tuple (when
+ * specified) nodes in the tree are already allocated (they might
+ * have been overridden with new_tuple, but they are definitely
+ * present) and the delete operation is fault-tolerant.
+ */
+static void
+memtx_tree_index_replace_multikey_rollback(struct memtx_tree_index *index,
+ struct tuple *new_tuple, struct tuple *replaced_tuple,
+ int err_multikey_idx)
+{
+ struct memtx_tree_data data;
+ if (replaced_tuple != NULL) {
+ /* Restore replaced tuple index occurrences. */
+ struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
+ data.tuple = replaced_tuple;
+ uint32_t multikey_count =
+ tuple_multikey_count(replaced_tuple, cmp_def);
+ for (int i = 0; (uint32_t) i < multikey_count; i++) {
+ data.hint = i;
+ memtx_tree_insert(&index->tree, data, NULL);
+ }
+ }
+ /*
+ * Rollback new_tuple insertion by multikey index
+ * [0, multikey_idx).
+ */
+ data.tuple = new_tuple;
+ for (int i = 0; i < err_multikey_idx; i++) {
+ data.hint = i;
+ memtx_tree_delete_value(&index->tree, data, NULL);
+ }
+}
+
+/**
+ * :replace() function for a multikey index: replace old tuple
+ * index entries with ones from the new tuple.
+ *
+ * In a multikey index a single tuple is associated with 0..N keys
+ * of the b+*tree. Imagine old tuple key set is called "old_keys"
+ * and a new tuple set is called "new_keys". This function must
+ * 1) delete all removed keys: (old_keys - new_keys)
+ * 2) update tuple pointer in all preserved keys: (old_keys ^ new_keys)
+ * 3) insert data for all new keys (new_keys - old_keys).
+ *
+ * Compare this with a standard unique or non-unique index, where
+ * a key is present at most once, so whenever we encounter a
+ * duplicate, it is guaranteed to point at the old tuple (in
+ * non-unique indexes we augment the secondary key parts with
+ * primary key parts, so the b+*tree still contains only unique
+ * entries).
+ *
+ * To reduce the number of insert and delete operations on the
+ * tree, this function attempts to optimistically add all keys
+ * from the new tuple to the tree first.
+ *
+ * When this step finds a duplicate, it's either of the following:
+ * - for a unique multikey index, it may be the old tuple or
+ * some other tuple. Since unique index forbids duplicates,
+ * this branch ends with an error unless we found the old tuple.
+ * - for a non-unique multikey index, both secondary and primary
+ * key parts must match, so it's guaranteed to be the old tuple.
+ *
+ * In other words, when an optimistic insert finds a duplicate,
+ * it's either an error, in which case we roll back all the new
+ * keys from the tree and abort the procedure, or the old tuple,
+ * which we save to get back to, later.
+ *
+ * When adding new keys finishes, we have completed steps
+ * 2) and 3):
+ * - added set (new_keys - old_keys) to the index
+ * - updated set (new_keys ^ old_keys) with a new tuple pointer.
+ *
+ * We now must perform 1), i.e. remove (old_keys - new_keys).
+ *
+ * This is done using the old tuple pointer saved in the previous
+ * step. To avoid accidentally deleting the key set common to the
+ * old and the new tuple, we do not compare by key parts alone -
+ * we also look at the b+*tree value, which holds the tuple
+ * pointer, and delete only the old tuple's entries.
+ */
+static int
+memtx_tree_index_replace_multikey(struct index *base, struct tuple *old_tuple,
+ struct tuple *new_tuple, enum dup_replace_mode mode,
+ struct tuple **result)
+{
+ struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
+ *result = NULL;
+ if (new_tuple != NULL) {
+ int multikey_idx = 0, err = 0;
+ uint32_t multikey_count =
+ tuple_multikey_count(new_tuple, cmp_def);
+ for (; (uint32_t) multikey_idx < multikey_count;
+ multikey_idx++) {
+ bool is_multikey_conflict;
+ struct memtx_tree_data replaced_data;
+ err = memtx_tree_index_replace_multikey_one(index,
+ old_tuple, new_tuple, mode,
+ multikey_idx, &replaced_data,
+ &is_multikey_conflict);
+ if (err != 0)
+ break;
+ if (replaced_data.tuple != NULL &&
+ !is_multikey_conflict) {
+ assert(*result == NULL ||
+ *result == replaced_data.tuple);
+ *result = replaced_data.tuple;
+ }
+ }
+ if (err != 0) {
+ memtx_tree_index_replace_multikey_rollback(index,
+ new_tuple, *result, multikey_idx);
+ return -1;
+ }
+ if (*result != NULL) {
+ assert(old_tuple == NULL || old_tuple == *result);
+ old_tuple = *result;
+ }
+ }
+ if (old_tuple != NULL) {
+ struct memtx_tree_data data;
+ data.tuple = old_tuple;
+ uint32_t multikey_count =
+ tuple_multikey_count(old_tuple, cmp_def);
+ for (int i = 0; (uint32_t) i < multikey_count; i++) {
+ data.hint = i;
+ memtx_tree_delete_value(&index->tree, data, NULL);
+ }
+ }
+ return 0;
+}
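The ordering above — optimistically insert all new keys, then delete only the entries that still point at the old tuple — can be sketched with a toy key-to-owner map standing in for the b+*tree (the owner plays the role of the tuple pointer). All `toy_*` names are illustrative, not memtx API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

enum { TOY_MAX = 16 };

/* A toy b+*tree stand-in: key -> owner (the "tuple pointer"). */
struct toy_entry {
	int key;
	int owner;
	bool used;
};

struct toy_entry *
toy_find(struct toy_entry *m, int key)
{
	for (int i = 0; i < TOY_MAX; i++)
		if (m[i].used && m[i].key == key)
			return &m[i];
	return NULL;
}

/*
 * Optimistic insert: a duplicate key is simply retargeted to the new
 * owner, like a new-tuple entry taking over an old-tuple entry.
 */
void
toy_insert(struct toy_entry *m, int key, int owner)
{
	struct toy_entry *e = toy_find(m, key);
	if (e != NULL) {
		e->owner = owner;
		return;
	}
	for (int i = 0; i < TOY_MAX; i++) {
		if (!m[i].used) {
			m[i].key = key;
			m[i].owner = owner;
			m[i].used = true;
			return;
		}
	}
}

/*
 * Delete-by-value: drop the entry only if it still belongs to `owner`,
 * like deleting with the old tuple - common keys taken over by the
 * new tuple are left untouched.
 */
void
toy_delete_value(struct toy_entry *m, int key, int owner)
{
	struct toy_entry *e = toy_find(m, key);
	if (e != NULL && e->owner == owner)
		e->used = false;
}
```

With old keys {1, 2, 3} (owner 1) and new keys {2, 3, 4} (owner 2), inserting the new set and then delete-by-value'ing the old set leaves exactly {2, 3, 4} owned by the new tuple.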
+
+/** A dummy key allocator used when removing tuples from an index. */
+static const char *
+func_index_key_dummy_alloc(struct tuple *tuple, const char *key,
+ uint32_t key_sz)
+{
+ (void) tuple;
+ (void) key_sz;
+ return key;
+}
+
+/**
+ * An undo entry for a multikey functional index replace
+ * operation. Used to roll back a failed insert/replace by
+ * restoring the original key_hint(s), and to commit a completed
+ * insert/replace by destroying the old tuple's key_hint(s).
+ */
+struct func_key_undo {
+ /** A link to organize entries in list. */
+ struct rlist link;
+ /** An inserted record copy. */
+ struct memtx_tree_data key;
+};
+
+/** Allocate a new func_key_undo on the given region. */
+static struct func_key_undo *
+func_key_undo_new(struct region *region)
+{
+ size_t size;
+ struct func_key_undo *undo = region_alloc_object(region, typeof(*undo),
+ &size);
+ if (undo == NULL) {
+ diag_set(OutOfMemory, size, "region_alloc_object", "undo");
+ return NULL;
+ }
+ return undo;
+}
+
+/**
+ * Rollback a sequence of memtx_tree_index_replace_multikey_one
+ * insertions for a functional index. The routine uses the given
+ * lists to return the index object to its original state.
+ */
+static void
+memtx_tree_func_index_replace_rollback(struct memtx_tree_index *index,
+ struct rlist *old_keys,
+ struct rlist *new_keys)
+{
+ struct func_key_undo *entry;
+ rlist_foreach_entry(entry, new_keys, link) {
+ memtx_tree_delete_value(&index->tree, entry->key, NULL);
+ tuple_chunk_delete(entry->key.tuple,
+ (const char *)entry->key.hint);
+ }
+ rlist_foreach_entry(entry, old_keys, link)
+ memtx_tree_insert(&index->tree, entry->key, NULL);
+}
+
+/**
+ * @sa memtx_tree_index_replace_multikey().
+ * Use the functional index function from the key definition
+ * to build a key list. Each returned key is then reallocated
+ * in the engine's memory as a key_hint object and used as a
+ * comparison hint.
+ * To release key_hint memory in case of a replace failure, we
+ * use a list of undo records allocated on the region. It is
+ * used to restore the original b+*tree entries with their
+ * original key_hint pointers in case of failure, and to release
+ * the now useless hints of the old entries in case of success.
+static int
+memtx_tree_func_index_replace(struct index *base, struct tuple *old_tuple,
+ struct tuple *new_tuple, enum dup_replace_mode mode,
+ struct tuple **result)
+{
+ struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct index_def *index_def = index->base.def;
+ assert(index_def->key_def->for_func_index);
+
+ int rc = -1;
+ struct region *region = &fiber()->gc;
+ size_t region_svp = region_used(region);
+
+ *result = NULL;
+ struct key_list_iterator it;
+ if (new_tuple != NULL) {
+ struct rlist old_keys, new_keys;
+ rlist_create(&old_keys);
+ rlist_create(&new_keys);
+ if (key_list_iterator_create(&it, new_tuple, index_def, true,
+ tuple_chunk_new) != 0)
+ goto end;
+ int err = 0;
+ const char *key;
+ struct func_key_undo *undo;
+ while ((err = key_list_iterator_next(&it, &key)) == 0 &&
+ key != NULL) {
+ /* Perform insertion, log it in list. */
+ undo = func_key_undo_new(region);
+ if (undo == NULL) {
+ tuple_chunk_delete(new_tuple, key);
+ err = -1;
+ break;
+ }
+ undo->key.tuple = new_tuple;
+ undo->key.hint = (hint_t)key;
+ rlist_add(&new_keys, &undo->link);
+ bool is_multikey_conflict;
+ struct memtx_tree_data old_data;
+ old_data.tuple = NULL;
+ err = memtx_tree_index_replace_multikey_one(index,
+ old_tuple, new_tuple,
+ mode, (hint_t)key, &old_data,
+ &is_multikey_conflict);
+ if (err != 0)
+ break;
+ if (old_data.tuple != NULL && !is_multikey_conflict) {
+ undo = func_key_undo_new(region);
+ if (undo == NULL) {
+ /*
+ * Can't append this
+ * operation in rollback
+ * journal. Roll it back
+ * manually.
+ */
+ memtx_tree_insert(&index->tree,
+ old_data, NULL);
+ err = -1;
+ break;
+ }
+ undo->key = old_data;
+ rlist_add(&old_keys, &undo->link);
+ *result = old_data.tuple;
+ } else if (old_data.tuple != NULL &&
+ is_multikey_conflict) {
+ /*
+ * Remove the replaced tuple undo
+ * from undo list.
+ */
+ tuple_chunk_delete(new_tuple,
+ (const char *)old_data.hint);
+ rlist_foreach_entry(undo, &new_keys, link) {
+ if (undo->key.hint == old_data.hint) {
+ rlist_del(&undo->link);
+ break;
+ }
+ }
+ }
+ }
+ if (key != NULL || err != 0) {
+ memtx_tree_func_index_replace_rollback(index,
+ &old_keys, &new_keys);
+ goto end;
+ }
+ if (*result != NULL) {
+ assert(old_tuple == NULL || old_tuple == *result);
+ old_tuple = *result;
+ }
+ /*
+ * Commit changes: release hints for
+ * replaced entries.
+ */
+ rlist_foreach_entry(undo, &old_keys, link) {
+ tuple_chunk_delete(undo->key.tuple,
+ (const char *)undo->key.hint);
+ }
+ }
+ if (old_tuple != NULL) {
+ if (key_list_iterator_create(&it, old_tuple, index_def, false,
+ func_index_key_dummy_alloc) != 0)
+ goto end;
+ struct memtx_tree_data data, deleted_data;
+ data.tuple = old_tuple;
+ const char *key;
+ while (key_list_iterator_next(&it, &key) == 0 && key != NULL) {
+ data.hint = (hint_t) key;
+ deleted_data.tuple = NULL;
+ memtx_tree_delete_value(&index->tree, data,
+ &deleted_data);
+ if (deleted_data.tuple != NULL) {
+ /*
+ * Release related hint on
+ * successful node deletion.
+ */
+ tuple_chunk_delete(deleted_data.tuple,
+ (const char *)deleted_data.hint);
+ }
+ }
+ assert(key == NULL);
+ }
+ rc = 0;
+end:
+ region_truncate(region, region_svp);
+ return rc;
+}
+
+static struct iterator *
+memtx_tree_index_create_iterator(struct index *base, enum iterator_type type,
+ const char *key, uint32_t part_count)
+{
+ struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_engine *memtx = (struct memtx_engine *)base->engine;
+ struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
+
+ assert(part_count == 0 || key != NULL);
+ if (type > ITER_GT) {
+ diag_set(UnsupportedIndexFeature, base->def,
+ "requested iterator type");
+ return NULL;
+ }
+
+ if (part_count == 0) {
+ /*
+ * If no key is specified, downgrade equality
+ * iterators to a full range.
+ */
+ type = iterator_type_is_reverse(type) ? ITER_LE : ITER_GE;
+ key = NULL;
+ }
+
+ struct tree_iterator *it = (struct tree_iterator *)
+ mempool_alloc(&memtx->iterator_pool);
+ if (it == NULL) {
+ diag_set(OutOfMemory, sizeof(struct tree_iterator),
+ "memtx_tree_index", "iterator");
+ return NULL;
+ }
+ iterator_create(&it->base, base);
+ it->pool = &memtx->iterator_pool;
+ it->base.next = tree_iterator_start;
+ it->base.free = tree_iterator_free;
+ it->type = type;
+ it->key_data.key = key;
+ it->key_data.part_count = part_count;
+ it->key_data.hint = key_hint(key, part_count, cmp_def);
+ it->tree_iterator = memtx_tree_invalid_iterator();
+ it->current.tuple = NULL;
+ return (struct iterator *)it;
+}
+
+static void
+memtx_tree_index_begin_build(struct index *base)
+{
+ struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ assert(memtx_tree_size(&index->tree) == 0);
+ (void)index;
+}
+
+static int
+memtx_tree_index_reserve(struct index *base, uint32_t size_hint)
+{
+ struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ if (size_hint < index->build_array_alloc_size)
+ return 0;
+ struct memtx_tree_data *tmp =
+ (struct memtx_tree_data *)
+ realloc(index->build_array, size_hint * sizeof(*tmp));
+ if (tmp == NULL) {
+ diag_set(OutOfMemory, size_hint * sizeof(*tmp),
+ "memtx_tree_index", "reserve");
+ return -1;
+ }
+ index->build_array = tmp;
+ index->build_array_alloc_size = size_hint;
+ return 0;
+}
+
+/** Initialize the next element of the index build_array. */
+static int
+memtx_tree_index_build_array_append(struct memtx_tree_index *index,
+ struct tuple *tuple, hint_t hint)
+{
+ if (index->build_array == NULL) {
+ index->build_array =
+ (struct memtx_tree_data *)malloc(MEMTX_EXTENT_SIZE);
+ if (index->build_array == NULL) {
+ diag_set(OutOfMemory, MEMTX_EXTENT_SIZE,
+ "memtx_tree_index", "build_next");
+ return -1;
+ }
+ index->build_array_alloc_size =
+ MEMTX_EXTENT_SIZE / sizeof(index->build_array[0]);
+ }
+ assert(index->build_array_size <= index->build_array_alloc_size);
+ if (index->build_array_size == index->build_array_alloc_size) {
+ index->build_array_alloc_size = index->build_array_alloc_size +
+ DIV_ROUND_UP(index->build_array_alloc_size, 2);
+ struct memtx_tree_data *tmp =
+ (struct memtx_tree_data *)realloc(index->build_array,
+ index->build_array_alloc_size * sizeof(*tmp));
+ if (tmp == NULL) {
+ diag_set(OutOfMemory, index->build_array_alloc_size *
+ sizeof(*tmp), "memtx_tree_index", "build_next");
+ return -1;
+ }
+ index->build_array = tmp;
+ }
+ struct memtx_tree_data *elem =
+ &index->build_array[index->build_array_size++];
+ elem->tuple = tuple;
+ elem->hint = hint;
+ return 0;
+}
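The growth policy above — a lazily allocated first chunk, then `alloc += DIV_ROUND_UP(alloc, 2)`, i.e. roughly 1.5x amortized growth — can be sketched standalone. The `toy_*` names and the initial capacity of 4 are invented for illustration (the real code derives the first chunk from `MEMTX_EXTENT_SIZE`):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct toy_array {
	int *data;
	size_t size;
	size_t alloc;
};

/*
 * Append with the same amortized-growth shape as
 * memtx_tree_index_build_array_append(): lazy first allocation, then
 * grow by half the current capacity, rounded up.
 */
int
toy_array_append(struct toy_array *a, int value)
{
	if (a->data == NULL) {
		a->alloc = 4; /* stands in for MEMTX_EXTENT_SIZE / elem size */
		a->data = malloc(a->alloc * sizeof(*a->data));
		if (a->data == NULL)
			return -1;
	}
	if (a->size == a->alloc) {
		/* alloc += ceil(alloc / 2), i.e. ~1.5x growth. */
		size_t new_alloc = a->alloc + (a->alloc + 1) / 2;
		int *tmp = realloc(a->data, new_alloc * sizeof(*tmp));
		if (tmp == NULL)
			return -1; /* old buffer stays valid on failure */
		a->data = tmp;
		a->alloc = new_alloc;
	}
	a->data[a->size++] = value;
	return 0;
}
```

Starting from capacity 4, ten appends grow the buffer 4 -> 6 -> 9 -> 14, so only three reallocations are needed.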
+
+static int
+memtx_tree_index_build_next(struct index *base, struct tuple *tuple)
+{
+ struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
+ return memtx_tree_index_build_array_append(index, tuple,
+ tuple_hint(tuple, cmp_def));
+}
+
+static int
+memtx_tree_index_build_next_multikey(struct index *base, struct tuple *tuple)
+{
+ struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
+ uint32_t multikey_count = tuple_multikey_count(tuple, cmp_def);
+ for (uint32_t multikey_idx = 0; multikey_idx < multikey_count;
+ multikey_idx++) {
+ if (memtx_tree_index_build_array_append(index, tuple,
+ multikey_idx) != 0)
+ return -1;
+ }
+ return 0;
+}
+
+static int
+memtx_tree_func_index_build_next(struct index *base, struct tuple *tuple)
+{
+ struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct index_def *index_def = index->base.def;
+ assert(index_def->key_def->for_func_index);
+
+ struct region *region = &fiber()->gc;
+ size_t region_svp = region_used(region);
+
+ struct key_list_iterator it;
+ if (key_list_iterator_create(&it, tuple, index_def, false,
+ tuple_chunk_new) != 0)
+ return -1;
+
+ const char *key;
+ uint32_t insert_idx = index->build_array_size;
+ while (key_list_iterator_next(&it, &key) == 0 && key != NULL) {
+ if (memtx_tree_index_build_array_append(index, tuple,
+ (hint_t)key) != 0)
+ goto error;
+ }
+ assert(key == NULL);
+ region_truncate(region, region_svp);
+ return 0;
+error:
+ for (uint32_t i = insert_idx; i < index->build_array_size; i++) {
+ tuple_chunk_delete(index->build_array[i].tuple,
+ (const char *)index->build_array[i].hint);
+ }
+ region_truncate(region, region_svp);
+ return -1;
+}
+
+/**
+ * Process the build_array of the specified index and remove
+ * duplicates: entries that are equal in terms of the index's
+ * cmp_def and have the same tuple pointer. The build_array is
+ * expected to be sorted.
+ */
+static void
+memtx_tree_index_build_array_deduplicate(struct memtx_tree_index *index,
+ void (*destroy)(struct tuple *tuple, const char *hint))
+{
+ if (index->build_array_size == 0)
+ return;
+ struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
+ size_t w_idx = 0, r_idx = 1;
+ while (r_idx < index->build_array_size) {
+ if (index->build_array[w_idx].tuple !=
+ index->build_array[r_idx].tuple ||
+ tuple_compare(index->build_array[w_idx].tuple,
+ index->build_array[w_idx].hint,
+ index->build_array[r_idx].tuple,
+ index->build_array[r_idx].hint,
+ cmp_def) != 0) {
+ /* Do not override the element itself. */
+ if (++w_idx == r_idx)
+ continue;
+ SWAP(index->build_array[w_idx],
+ index->build_array[r_idx]);
+ }
+ r_idx++;
+ }
+ if (destroy != NULL) {
+ /* Destroy deduplicated entries. */
+ for (r_idx = w_idx + 1;
+ r_idx < index->build_array_size; r_idx++) {
+ destroy(index->build_array[r_idx].tuple,
+ (const char *)index->build_array[r_idx].hint);
+ }
+ }
+ index->build_array_size = w_idx + 1;
+}
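The `w_idx`/`r_idx` pass above is the standard in-place deduplication of a sorted array. A minimal sketch on plain ints (invented `toy_` name) shows the shape; note that the real code uses `SWAP()` instead of overwriting, so the squeezed-out duplicates stay reachable in the tail for the `destroy()` pass:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Keep the first element of every run of equal values in a sorted
 * array; return the new length. Mirrors the write-index/read-index
 * walk of memtx_tree_index_build_array_deduplicate().
 */
size_t
toy_dedup(int *arr, size_t n)
{
	if (n == 0)
		return 0;
	size_t w = 0;
	for (size_t r = 1; r < n; r++) {
		if (arr[w] != arr[r])
			arr[++w] = arr[r];
	}
	return w + 1;
}
```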
+
+static void
+memtx_tree_index_end_build(struct index *base)
+{
+ struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
+ qsort_arg(index->build_array, index->build_array_size,
+ sizeof(index->build_array[0]), memtx_tree_qcompare, cmp_def);
+ if (cmp_def->is_multikey) {
+ /*
+ * Multikey index may have equal(in terms of
+ * cmp_def) keys inserted by different multikey
+ * offsets. We must deduplicate them because
+ * the following memtx_tree_build assumes that
+ * all keys are unique.
+ */
+ memtx_tree_index_build_array_deduplicate(index, NULL);
+ } else if (cmp_def->for_func_index) {
+ memtx_tree_index_build_array_deduplicate(index,
+ tuple_chunk_delete);
+ }
+ memtx_tree_build(&index->tree, index->build_array,
+ index->build_array_size);
+
+ free(index->build_array);
+ index->build_array = NULL;
+ index->build_array_size = 0;
+ index->build_array_alloc_size = 0;
+}
+
+struct tree_snapshot_iterator {
+ struct snapshot_iterator base;
+ struct memtx_tree_index *index;
+ struct memtx_tree_iterator tree_iterator;
+ struct memtx_tx_snapshot_cleaner cleaner;
+};
+
+static void
+tree_snapshot_iterator_free(struct snapshot_iterator *iterator)
+{
+ assert(iterator->free == tree_snapshot_iterator_free);
+ struct tree_snapshot_iterator *it =
+ (struct tree_snapshot_iterator *)iterator;
+ memtx_leave_delayed_free_mode((struct memtx_engine *)
+ it->index->base.engine);
+ memtx_tree_iterator_destroy(&it->index->tree, &it->tree_iterator);
+ index_unref(&it->index->base);
+ memtx_tx_snapshot_cleaner_destroy(&it->cleaner);
+ free(iterator);
+}
+
+static int
+tree_snapshot_iterator_next(struct snapshot_iterator *iterator,
+ const char **data, uint32_t *size)
+{
+ assert(iterator->free == tree_snapshot_iterator_free);
+ struct tree_snapshot_iterator *it =
+ (struct tree_snapshot_iterator *)iterator;
+ struct memtx_tree *tree = &it->index->tree;
+
+ while (true) {
+ struct memtx_tree_data *res =
+ memtx_tree_iterator_get_elem(tree, &it->tree_iterator);
+
+ if (res == NULL) {
+ *data = NULL;
+ return 0;
+ }
+
+ memtx_tree_iterator_next(tree, &it->tree_iterator);
+
+ struct tuple *tuple = res->tuple;
+ tuple = memtx_tx_snapshot_clarify(&it->cleaner, tuple);
+
+ if (tuple != NULL) {
+ *data = tuple_data_range(tuple, size);
+ return 0;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * Create an ALL iterator with a personal read view so further
+ * index modifications will not affect the iteration results.
+ * Must be destroyed by iterator->free after usage.
+ */
+static struct snapshot_iterator *
+memtx_tree_index_create_snapshot_iterator(struct index *base)
+{
+ struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct tree_snapshot_iterator *it =
+ (struct tree_snapshot_iterator *) calloc(1, sizeof(*it));
+ if (it == NULL) {
+ diag_set(OutOfMemory, sizeof(struct tree_snapshot_iterator),
+ "memtx_tree_index", "create_snapshot_iterator");
+ return NULL;
+ }
+
+ struct space *space = space_cache_find(base->def->space_id);
+ if (memtx_tx_snapshot_cleaner_create(&it->cleaner, space,
+ "memtx_tree_index") != 0) {
+ free(it);
+ return NULL;
+ }
+
+ it->base.free = tree_snapshot_iterator_free;
+ it->base.next = tree_snapshot_iterator_next;
+ it->index = index;
+ index_ref(base);
+ it->tree_iterator = memtx_tree_iterator_first(&index->tree);
+ memtx_tree_iterator_freeze(&index->tree, &it->tree_iterator);
+ memtx_enter_delayed_free_mode((struct memtx_engine *)base->engine);
+ return (struct snapshot_iterator *) it;
+}
+
+static const struct index_vtab memtx_tree_index_vtab = {
+ /* .destroy = */ memtx_tree_index_destroy,
+ /* .commit_create = */ generic_index_commit_create,
+ /* .abort_create = */ generic_index_abort_create,
+ /* .commit_modify = */ generic_index_commit_modify,
+ /* .commit_drop = */ generic_index_commit_drop,
+ /* .update_def = */ memtx_tree_index_update_def,
+ /* .depends_on_pk = */ memtx_tree_index_depends_on_pk,
+ /* .def_change_requires_rebuild = */
+ memtx_index_def_change_requires_rebuild,
+ /* .size = */ memtx_tree_index_size,
+ /* .bsize = */ memtx_tree_index_bsize,
+ /* .min = */ generic_index_min,
+ /* .max = */ generic_index_max,
+ /* .random = */ memtx_tree_index_random,
+ /* .count = */ memtx_tree_index_count,
+ /* .get = */ memtx_tree_index_get,
+ /* .replace = */ memtx_tree_index_replace,
+ /* .create_iterator = */ memtx_tree_index_create_iterator,
+ /* .create_snapshot_iterator = */
+ memtx_tree_index_create_snapshot_iterator,
+ /* .stat = */ generic_index_stat,
+ /* .compact = */ generic_index_compact,
+ /* .reset_stat = */ generic_index_reset_stat,
+ /* .begin_build = */ memtx_tree_index_begin_build,
+ /* .reserve = */ memtx_tree_index_reserve,
+ /* .build_next = */ memtx_tree_index_build_next,
+ /* .end_build = */ memtx_tree_index_end_build,
+};
+
+static const struct index_vtab memtx_tree_index_multikey_vtab = {
+ /* .destroy = */ memtx_tree_index_destroy,
+ /* .commit_create = */ generic_index_commit_create,
+ /* .abort_create = */ generic_index_abort_create,
+ /* .commit_modify = */ generic_index_commit_modify,
+ /* .commit_drop = */ generic_index_commit_drop,
+ /* .update_def = */ memtx_tree_index_update_def,
+ /* .depends_on_pk = */ memtx_tree_index_depends_on_pk,
+ /* .def_change_requires_rebuild = */
+ memtx_index_def_change_requires_rebuild,
+ /* .size = */ memtx_tree_index_size,
+ /* .bsize = */ memtx_tree_index_bsize,
+ /* .min = */ generic_index_min,
+ /* .max = */ generic_index_max,
+ /* .random = */ memtx_tree_index_random,
+ /* .count = */ memtx_tree_index_count,
+ /* .get = */ memtx_tree_index_get,
+ /* .replace = */ memtx_tree_index_replace_multikey,
+ /* .create_iterator = */ memtx_tree_index_create_iterator,
+ /* .create_snapshot_iterator = */
+ memtx_tree_index_create_snapshot_iterator,
+ /* .stat = */ generic_index_stat,
+ /* .compact = */ generic_index_compact,
+ /* .reset_stat = */ generic_index_reset_stat,
+ /* .begin_build = */ memtx_tree_index_begin_build,
+ /* .reserve = */ memtx_tree_index_reserve,
+ /* .build_next = */ memtx_tree_index_build_next_multikey,
+ /* .end_build = */ memtx_tree_index_end_build,
+};
+
+static const struct index_vtab memtx_tree_func_index_vtab = {
+ /* .destroy = */ memtx_tree_index_destroy,
+ /* .commit_create = */ generic_index_commit_create,
+ /* .abort_create = */ generic_index_abort_create,
+ /* .commit_modify = */ generic_index_commit_modify,
+ /* .commit_drop = */ generic_index_commit_drop,
+ /* .update_def = */ memtx_tree_index_update_def,
+ /* .depends_on_pk = */ memtx_tree_index_depends_on_pk,
+ /* .def_change_requires_rebuild = */
+ memtx_index_def_change_requires_rebuild,
+ /* .size = */ memtx_tree_index_size,
+ /* .bsize = */ memtx_tree_index_bsize,
+ /* .min = */ generic_index_min,
+ /* .max = */ generic_index_max,
+ /* .random = */ memtx_tree_index_random,
+ /* .count = */ memtx_tree_index_count,
+ /* .get = */ memtx_tree_index_get,
+ /* .replace = */ memtx_tree_func_index_replace,
+ /* .create_iterator = */ memtx_tree_index_create_iterator,
+ /* .create_snapshot_iterator = */
+ memtx_tree_index_create_snapshot_iterator,
+ /* .stat = */ generic_index_stat,
+ /* .compact = */ generic_index_compact,
+ /* .reset_stat = */ generic_index_reset_stat,
+ /* .begin_build = */ memtx_tree_index_begin_build,
+ /* .reserve = */ memtx_tree_index_reserve,
+ /* .build_next = */ memtx_tree_func_index_build_next,
+ /* .end_build = */ memtx_tree_index_end_build,
+};
+
+/**
+ * A disabled index vtab provides safe dummy methods for an
+ * 'inactive' index. It is required to perform a fault-tolerant
+ * recovery from a snapshot in the func_index case (because the
+ * key definition is not completely initialized at that moment).
+ */
+static const struct index_vtab memtx_tree_disabled_index_vtab = {
+ /* .destroy = */ memtx_tree_index_destroy,
+ /* .commit_create = */ generic_index_commit_create,
+ /* .abort_create = */ generic_index_abort_create,
+ /* .commit_modify = */ generic_index_commit_modify,
+ /* .commit_drop = */ generic_index_commit_drop,
+ /* .update_def = */ generic_index_update_def,
+ /* .depends_on_pk = */ generic_index_depends_on_pk,
+ /* .def_change_requires_rebuild = */
+ generic_index_def_change_requires_rebuild,
+ /* .size = */ generic_index_size,
+ /* .bsize = */ generic_index_bsize,
+ /* .min = */ generic_index_min,
+ /* .max = */ generic_index_max,
+ /* .random = */ generic_index_random,
+ /* .count = */ generic_index_count,
+ /* .get = */ generic_index_get,
+ /* .replace = */ disabled_index_replace,
+ /* .create_iterator = */ generic_index_create_iterator,
+ /* .create_snapshot_iterator = */
+ generic_index_create_snapshot_iterator,
+ /* .stat = */ generic_index_stat,
+ /* .compact = */ generic_index_compact,
+ /* .reset_stat = */ generic_index_reset_stat,
+ /* .begin_build = */ generic_index_begin_build,
+ /* .reserve = */ generic_index_reserve,
+ /* .build_next = */ disabled_index_build_next,
+ /* .end_build = */ generic_index_end_build,
+};
+
+struct index *
+memtx_tree_index_new(struct memtx_engine *memtx, struct index_def *def)
+{
+ struct memtx_tree_index *index =
+ (struct memtx_tree_index *)calloc(1, sizeof(*index));
+ if (index == NULL) {
+ diag_set(OutOfMemory, sizeof(*index),
+ "calloc", "struct memtx_tree_index");
+ return NULL;
+ }
+ const struct index_vtab *vtab;
+ if (def->key_def->for_func_index) {
+ if (def->key_def->func_index_func == NULL)
+ vtab = &memtx_tree_disabled_index_vtab;
+ else
+ vtab = &memtx_tree_func_index_vtab;
+ } else if (def->key_def->is_multikey) {
+ vtab = &memtx_tree_index_multikey_vtab;
+ } else {
+ vtab = &memtx_tree_index_vtab;
+ }
+ if (index_create(&index->base, (struct engine *)memtx,
+ vtab, def) != 0) {
+ free(index);
+ return NULL;
+ }
+
+ /* See comment to memtx_tree_index_update_def(). */
+ struct key_def *cmp_def;
+ cmp_def = def->opts.is_unique && !def->key_def->is_nullable ?
+ index->base.def->key_def : index->base.def->cmp_def;
+
+ memtx_tree_create(&index->tree, cmp_def, memtx_index_extent_alloc,
+ memtx_index_extent_free, memtx);
+ return &index->base;
+}
--
2.7.4
* [Tarantool-patches] [PATCH v2 2/2] memtx: make tuple compare hints optional
2020-10-21 10:23 [Tarantool-patches] [PATCH v2 0/2] Make tree hint optional Aleksandr Lyapunov
2020-10-21 10:23 ` [Tarantool-patches] [PATCH v2 1/2] memtx: move memtx_tree.c to memtx_tree.cc Aleksandr Lyapunov
@ 2020-10-21 10:23 ` Aleksandr Lyapunov
2020-10-21 15:28 ` [Tarantool-patches] [PATCH v2 0/2] Make tree hint optional Nikita Pettik
2 siblings, 0 replies; 7+ messages in thread
From: Aleksandr Lyapunov @ 2020-10-21 10:23 UTC (permalink / raw)
To: tarantool-patches, Nikita Pettik, Ilya Kosarev
Since 9fba29abb4e05babb9b23b4413bd8083f0fba933 (memtx: introduce tuple
compare hint), memtx tree key data (index elements) has carried an
extra 8 bytes per element for a comparison hint. This patch makes the
hint optional: it can be turned off in the index options with a
"hint = false" entry.
Closes #4927
@TarantoolBot document
Title: memtx: optional tuple compare hints
Update the index creation documentation to reflect that there is now
an option to turn hints off for an index.
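As a usage sketch for the documentation request above (hypothetical
space and index names; requires a Tarantool build that includes this
patch):

```lua
s = box.schema.space.create('test')
-- Hints are enabled by default for memtx TREE indexes; opting out
-- saves 8 bytes of index memory per indexed tuple at some lookup cost.
s:create_index('pk', {type = 'tree', parts = {1, 'unsigned'},
                      hint = false})
```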
---
src/box/index_def.c | 2 +
src/box/index_def.h | 6 +
src/box/lua/schema.lua | 53 ++++
src/box/lua/space.cc | 7 +
src/box/memtx_engine.c | 2 +
src/box/memtx_tree.cc | 606 ++++++++++++++++++++++++------------
src/lib/salad/bps_tree.h | 19 ++
test/box/alter.result | 103 +++++-
test/box/alter.test.lua | 34 ++
test/box/errinj.result | 3 +-
test/box/tree_pk.result | 314 +++++++++++++++++++
test/box/tree_pk.test.lua | 115 +++++++
test/box/tree_pk_multipart.result | 153 +++++++++
test/box/tree_pk_multipart.test.lua | 64 ++++
14 files changed, 1274 insertions(+), 207 deletions(-)
diff --git a/src/box/index_def.c b/src/box/index_def.c
index 9802961..79394b8 100644
--- a/src/box/index_def.c
+++ b/src/box/index_def.c
@@ -51,6 +51,7 @@ const struct index_opts index_opts_default = {
/* .lsn = */ 0,
/* .stat = */ NULL,
/* .func = */ 0,
+ /* .hint = */ true,
};
const struct opt_def index_opts_reg[] = {
@@ -66,6 +67,7 @@ const struct opt_def index_opts_reg[] = {
OPT_DEF("lsn", OPT_INT64, struct index_opts, lsn),
OPT_DEF("func", OPT_UINT32, struct index_opts, func_id),
OPT_DEF_LEGACY("sql"),
+ OPT_DEF("hint", OPT_BOOL, struct index_opts, hint),
OPT_END,
};
diff --git a/src/box/index_def.h b/src/box/index_def.h
index d928b23..2180a69 100644
--- a/src/box/index_def.h
+++ b/src/box/index_def.h
@@ -165,6 +165,10 @@ struct index_opts {
struct index_stat *stat;
/** Identifier of the functional index function. */
uint32_t func_id;
+ /**
+ * Use the hint optimization for a tree index.
+ */
+ bool hint;
};
extern const struct index_opts index_opts_default;
@@ -211,6 +215,8 @@ index_opts_cmp(const struct index_opts *o1, const struct index_opts *o2)
return o1->bloom_fpr < o2->bloom_fpr ? -1 : 1;
if (o1->func_id != o2->func_id)
return o1->func_id - o2->func_id;
+ if (o1->hint != o2->hint)
+ return o1->hint - o2->hint;
return 0;
}
diff --git a/src/box/lua/schema.lua b/src/box/lua/schema.lua
index 1131af7..9cc1289 100644
--- a/src/box/lua/schema.lua
+++ b/src/box/lua/schema.lua
@@ -1000,8 +1000,31 @@ local index_options = {
page_size = 'number',
bloom_fpr = 'number',
func = 'number, string',
+ hint = 'boolean',
}
+local function jsonpaths_from_idx_parts(parts)
+ local paths = {}
+
+ for _, part in pairs(parts) do
+ if type(part.path) == 'string' then
+ table.insert(paths, part.path)
+ end
+ end
+
+ return paths
+end
+
+local function is_multikey_index(parts)
+ for _, path in pairs(jsonpaths_from_idx_parts(parts)) do
+ if path:find('[*]', 1, true) then
+ return true
+ end
+ end
+
+ return false
+end
+
--
-- check_param_table() template for alter index,
-- includes all index options.
@@ -1076,6 +1099,15 @@ box.schema.index.create = function(space_id, name, options)
options_defaults = {}
end
options = update_param_table(options, options_defaults)
+ if options.hint and
+ (options.type ~= 'tree' or box.space[space_id].engine ~= 'memtx') then
+ box.error(box.error.MODIFY_INDEX, name, space.name,
+ "hint is only reasonable with memtx tree index")
+ end
+ if options.hint and options.func then
+ box.error(box.error.MODIFY_INDEX, name, space.name,
+ "functional index can't use hints")
+ end
local _index = box.space[box.schema.INDEX_ID]
local _vindex = box.space[box.schema.VINDEX_ID]
@@ -1115,6 +1147,7 @@ box.schema.index.create = function(space_id, name, options)
run_size_ratio = options.run_size_ratio,
bloom_fpr = options.bloom_fpr,
func = options.func,
+ hint = options.hint,
}
local field_type_aliases = {
num = 'unsigned'; -- Deprecated since 1.7.2
@@ -1135,6 +1168,10 @@ box.schema.index.create = function(space_id, name, options)
if parts_can_be_simplified then
parts = simplify_index_parts(parts)
end
+ if options.hint and is_multikey_index(parts) then
+ box.error(box.error.MODIFY_INDEX, name, space.name,
+ "multikey index can't use hints")
+ end
if index_opts.func ~= nil and type(index_opts.func) == 'string' then
index_opts.func = func_id_by_name(index_opts.func)
end
@@ -1253,6 +1290,17 @@ box.schema.index.alter = function(space_id, index_id, options)
index_opts[k] = options[k]
end
end
+ if options.hint and
+ (options.type ~= 'tree' or box.space[space_id].engine ~= 'memtx') then
+ box.error(box.error.MODIFY_INDEX, space.index[index_id].name,
+ space.name,
+ "hint is only reasonable with memtx tree index")
+ end
+ if options.hint and options.func then
+ box.error(box.error.MODIFY_INDEX, space.index[index_id].name,
+ space.name,
+ "functional index can't use hints")
+ end
if options.parts then
local parts_can_be_simplified
parts, parts_can_be_simplified =
@@ -1262,6 +1310,11 @@ box.schema.index.alter = function(space_id, index_id, options)
parts = simplify_index_parts(parts)
end
end
+ if options.hint and is_multikey_index(parts) then
+ box.error(box.error.MODIFY_INDEX, space.index[index_id].name,
+ space.name,
+ "multikey index can't use hints")
+ end
if index_opts.func ~= nil and type(index_opts.func) == 'string' then
index_opts.func = func_id_by_name(index_opts.func)
end
diff --git a/src/box/lua/space.cc b/src/box/lua/space.cc
index 177c588..1ea993c 100644
--- a/src/box/lua/space.cc
+++ b/src/box/lua/space.cc
@@ -344,6 +344,13 @@ lbox_fillspace(struct lua_State *L, struct space *space, int i)
lua_pushnumber(L, index_opts->dimension);
lua_setfield(L, -2, "dimension");
}
+ if (space_is_memtx(space) && index_def->type == TREE) {
+ lua_pushboolean(L, index_opts->hint);
+ lua_setfield(L, -2, "hint");
+ } else {
+ lua_pushnil(L);
+ lua_setfield(L, -2, "hint");
+ }
if (index_opts->func_id > 0) {
lua_pushstring(L, "func");
diff --git a/src/box/memtx_engine.c b/src/box/memtx_engine.c
index 8147557..43000ba 100644
--- a/src/box/memtx_engine.c
+++ b/src/box/memtx_engine.c
@@ -1398,6 +1398,8 @@ memtx_index_def_change_requires_rebuild(struct index *index,
return true;
if (old_def->opts.func_id != new_def->opts.func_id)
return true;
+ if (old_def->opts.hint != new_def->opts.hint)
+ return true;
const struct key_def *old_cmp_def, *new_cmp_def;
if (index_depends_on_pk(index)) {
diff --git a/src/box/memtx_tree.cc b/src/box/memtx_tree.cc
index d3b993b..44bdc86 100644
--- a/src/box/memtx_tree.cc
+++ b/src/box/memtx_tree.cc
@@ -45,23 +45,51 @@
/**
* Struct that is used as a key in BPS tree definition.
*/
-struct memtx_tree_key_data {
+struct memtx_tree_key_data_common {
/** Sequence of msgpacked search fields. */
const char *key;
/** Number of msgpacked search fields. */
uint32_t part_count;
+};
+
+template <bool USE_HINT>
+struct memtx_tree_key_data;
+
+template <>
+struct memtx_tree_key_data<false> : memtx_tree_key_data_common {
+ static constexpr hint_t hint = HINT_NONE;
+ void set_hint(hint_t) { assert(false); }
+};
+
+template <>
+struct memtx_tree_key_data<true> : memtx_tree_key_data_common {
/** Comparison hint, see tuple_hint(). */
hint_t hint;
+ void set_hint(hint_t h) { hint = h; }
};
/**
* Struct that is used as a elem in BPS tree definition.
*/
-struct memtx_tree_data {
+struct memtx_tree_data_common {
/* Tuple that this node represents. */
struct tuple *tuple;
+};
+
+template <bool USE_HINT>
+struct memtx_tree_data;
+
+template <>
+struct memtx_tree_data<false> : memtx_tree_data_common {
+ static constexpr hint_t hint = HINT_NONE;
+ void set_hint(hint_t) { assert(false); }
+};
+
+template <>
+struct memtx_tree_data<true> : memtx_tree_data<false> {
/** Comparison hint, see key_hint(). */
hint_t hint;
+ void set_hint(hint_t h) { hint = h; }
};
/**
@@ -73,8 +101,8 @@ struct memtx_tree_data {
* @retval false - Otherwise.
*/
static bool
-memtx_tree_data_is_equal(const struct memtx_tree_data *a,
- const struct memtx_tree_data *b)
+memtx_tree_data_is_equal(const struct memtx_tree_data_common *a,
+ const struct memtx_tree_data_common *b)
{
return a->tuple == b->tuple;
}
@@ -89,12 +117,28 @@ memtx_tree_data_is_equal(const struct memtx_tree_data *a,
(b)->part_count, (b)->hint, arg)
#define BPS_TREE_IS_IDENTICAL(a, b) memtx_tree_data_is_equal(&a, &b)
#define BPS_TREE_NO_DEBUG 1
-#define bps_tree_elem_t struct memtx_tree_data
-#define bps_tree_key_t struct memtx_tree_key_data *
#define bps_tree_arg_t struct key_def *
+#define BPS_TREE_NAMESPACE NS_NO_HINT
+#define bps_tree_elem_t struct memtx_tree_data<false>
+#define bps_tree_key_t struct memtx_tree_key_data<false> *
+
#include "salad/bps_tree.h"
+#undef BPS_TREE_NAMESPACE
+#undef bps_tree_elem_t
+#undef bps_tree_key_t
+
+#define BPS_TREE_NAMESPACE NS_USE_HINT
+#define bps_tree_elem_t struct memtx_tree_data<true>
+#define bps_tree_key_t struct memtx_tree_key_data<true> *
+
+#include "salad/bps_tree.h"
+
+#undef BPS_TREE_NAMESPACE
+#undef bps_tree_elem_t
+#undef bps_tree_key_t
+
#undef BPS_TREE_NAME
#undef BPS_TREE_BLOCK_SIZE
#undef BPS_TREE_EXTENT_SIZE
@@ -102,66 +146,119 @@ memtx_tree_data_is_equal(const struct memtx_tree_data *a,
#undef BPS_TREE_COMPARE_KEY
#undef BPS_TREE_IS_IDENTICAL
#undef BPS_TREE_NO_DEBUG
-#undef bps_tree_elem_t
-#undef bps_tree_key_t
#undef bps_tree_arg_t
+using namespace NS_NO_HINT;
+using namespace NS_USE_HINT;
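The patch stamps out the two tree flavors by including
`salad/bps_tree.h` twice under different `BPS_TREE_NAMESPACE` and
`bps_tree_elem_t` macro settings. A single-file approximation of that
multiple-inclusion technique (the macro body stands in for the generic
header; names are illustrative):

```cpp
#include <cassert>

/* Stand-in for the generic bps_tree.h body: the real header reads
 * macros such as BPS_TREE_NAMESPACE and bps_tree_elem_t and emits
 * one independent implementation per inclusion. */
#define DEFINE_TREE(NS, ELEM_T)					\
namespace NS {							\
	struct tree {						\
		ELEM_T elems[8];				\
		int count;					\
		tree() : count(0) {}				\
		void insert(ELEM_T e) { elems[count++] = e; }	\
	};							\
}

/* Two instantiations under distinct namespaces, as the patch does
 * for NS_NO_HINT (small elements) and NS_USE_HINT (hinted ones). */
DEFINE_TREE(NS_NO_HINT, int)
DEFINE_TREE(NS_USE_HINT, long long)
```

Each namespace gets its own element layout and code, and both can be
used from the same translation unit without symbol clashes.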
+
+template <bool USE_HINT>
+struct memtx_tree_selector;
+
+template <>
+struct memtx_tree_selector<false> : NS_NO_HINT::memtx_tree {};
+
+template <>
+struct memtx_tree_selector<true> : NS_USE_HINT::memtx_tree {};
+
+template <bool USE_HINT>
+using memtx_tree_t = struct memtx_tree_selector<USE_HINT>;
+
+template <bool USE_HINT>
+struct memtx_tree_iterator_selector;
+
+template <>
+struct memtx_tree_iterator_selector<false> {
+ using type = NS_NO_HINT::memtx_tree_iterator;
+};
+
+template <>
+struct memtx_tree_iterator_selector<true> {
+ using type = NS_USE_HINT::memtx_tree_iterator;
+};
+
+template <bool USE_HINT>
+using memtx_tree_iterator_t = typename memtx_tree_iterator_selector<USE_HINT>::type;
+
+static void
+invalidate_tree_iterator(NS_NO_HINT::memtx_tree_iterator *itr)
+{
+ *itr = NS_NO_HINT::memtx_tree_invalid_iterator();
+}
+
+static void
+invalidate_tree_iterator(NS_USE_HINT::memtx_tree_iterator *itr)
+{
+ *itr = NS_USE_HINT::memtx_tree_invalid_iterator();
+}
+
+template <bool USE_HINT>
struct memtx_tree_index {
struct index base;
- struct memtx_tree tree;
- struct memtx_tree_data *build_array;
+ memtx_tree_t<USE_HINT> tree;
+ struct memtx_tree_data<USE_HINT> *build_array;
size_t build_array_size, build_array_alloc_size;
struct memtx_gc_task gc_task;
- struct memtx_tree_iterator gc_iterator;
+ memtx_tree_iterator_t<USE_HINT> gc_iterator;
};
/* {{{ Utilities. *************************************************/
+template <class TREE>
static inline struct key_def *
-memtx_tree_cmp_def(struct memtx_tree *tree)
+memtx_tree_cmp_def(TREE *tree)
{
return tree->arg;
}
+template <bool USE_HINT>
static int
memtx_tree_qcompare(const void* a, const void *b, void *c)
{
- const struct memtx_tree_data *data_a = (struct memtx_tree_data *)a;
- const struct memtx_tree_data *data_b = (struct memtx_tree_data *)b;
+ const struct memtx_tree_data<USE_HINT> *data_a =
+ (struct memtx_tree_data<USE_HINT> *)a;
+ const struct memtx_tree_data<USE_HINT> *data_b =
+ (struct memtx_tree_data<USE_HINT> *)b;
struct key_def *key_def = (struct key_def *)c;
return tuple_compare(data_a->tuple, data_a->hint, data_b->tuple,
data_b->hint, key_def);
}
/* {{{ MemtxTree Iterators ****************************************/
+template <bool USE_HINT>
struct tree_iterator {
struct iterator base;
- struct memtx_tree_iterator tree_iterator;
+ memtx_tree_iterator_t<USE_HINT> tree_iterator;
enum iterator_type type;
- struct memtx_tree_key_data key_data;
- struct memtx_tree_data current;
+ struct memtx_tree_key_data<USE_HINT> key_data;
+ struct memtx_tree_data<USE_HINT> current;
/** Memory pool the iterator was allocated from. */
struct mempool *pool;
};
-static_assert(sizeof(struct tree_iterator) <= MEMTX_ITERATOR_SIZE,
- "sizeof(struct tree_iterator) must be less than or equal "
+static_assert(sizeof(struct tree_iterator<false>) <= MEMTX_ITERATOR_SIZE,
+ "sizeof(struct tree_iterator<false>) must be less than or equal "
+ "to MEMTX_ITERATOR_SIZE");
+static_assert(sizeof(struct tree_iterator<true>) <= MEMTX_ITERATOR_SIZE,
+ "sizeof(struct tree_iterator<true>) must be less than or equal "
"to MEMTX_ITERATOR_SIZE");
+template <bool USE_HINT>
static void
tree_iterator_free(struct iterator *iterator);
-static inline struct tree_iterator *
-tree_iterator(struct iterator *it)
+template <bool USE_HINT>
+static inline struct tree_iterator<USE_HINT> *
+get_tree_iterator(struct iterator *it)
{
- assert(it->free == tree_iterator_free);
- return (struct tree_iterator *) it;
+ assert(it->free == &tree_iterator_free<USE_HINT>);
+ return (struct tree_iterator<USE_HINT> *) it;
}
+template <bool USE_HINT>
static void
tree_iterator_free(struct iterator *iterator)
{
- struct tree_iterator *it = tree_iterator(iterator);
+ struct tree_iterator<USE_HINT> *it = get_tree_iterator<USE_HINT>(iterator);
struct tuple *tuple = it->current.tuple;
if (tuple != NULL)
tuple_unref(tuple);
@@ -176,14 +273,15 @@ tree_iterator_dummie(struct iterator *iterator, struct tuple **ret)
return 0;
}
+template <bool USE_HINT>
static int
tree_iterator_next_base(struct iterator *iterator, struct tuple **ret)
{
- struct memtx_tree_index *index =
- (struct memtx_tree_index *)iterator->index;
- struct tree_iterator *it = tree_iterator(iterator);
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)iterator->index;
+ struct tree_iterator<USE_HINT> *it = get_tree_iterator<USE_HINT>(iterator);
assert(it->current.tuple != NULL);
- struct memtx_tree_data *check =
+ struct memtx_tree_data<USE_HINT> *check =
memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
if (check == NULL || !memtx_tree_data_is_equal(check, &it->current)) {
it->tree_iterator = memtx_tree_upper_bound_elem(&index->tree,
@@ -192,7 +290,7 @@ tree_iterator_next_base(struct iterator *iterator, struct tuple **ret)
memtx_tree_iterator_next(&index->tree, &it->tree_iterator);
}
tuple_unref(it->current.tuple);
- struct memtx_tree_data *res =
+ struct memtx_tree_data<USE_HINT> *res =
memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
if (res == NULL) {
iterator->next = tree_iterator_dummie;
@@ -206,14 +304,15 @@ tree_iterator_next_base(struct iterator *iterator, struct tuple **ret)
return 0;
}
+template <bool USE_HINT>
static int
tree_iterator_prev_base(struct iterator *iterator, struct tuple **ret)
{
- struct memtx_tree_index *index =
- (struct memtx_tree_index *)iterator->index;
- struct tree_iterator *it = tree_iterator(iterator);
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)iterator->index;
+ struct tree_iterator<USE_HINT> *it = get_tree_iterator<USE_HINT>(iterator);
assert(it->current.tuple != NULL);
- struct memtx_tree_data *check =
+ struct memtx_tree_data<USE_HINT> *check =
memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
if (check == NULL || !memtx_tree_data_is_equal(check, &it->current)) {
it->tree_iterator = memtx_tree_lower_bound_elem(&index->tree,
@@ -221,7 +320,7 @@ tree_iterator_prev_base(struct iterator *iterator, struct tuple **ret)
}
memtx_tree_iterator_prev(&index->tree, &it->tree_iterator);
tuple_unref(it->current.tuple);
- struct memtx_tree_data *res =
+ struct memtx_tree_data<USE_HINT> *res =
memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
if (!res) {
iterator->next = tree_iterator_dummie;
@@ -235,14 +334,15 @@ tree_iterator_prev_base(struct iterator *iterator, struct tuple **ret)
return 0;
}
+template <bool USE_HINT>
static int
tree_iterator_next_equal_base(struct iterator *iterator, struct tuple **ret)
{
- struct memtx_tree_index *index =
- (struct memtx_tree_index *)iterator->index;
- struct tree_iterator *it = tree_iterator(iterator);
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)iterator->index;
+ struct tree_iterator<USE_HINT> *it = get_tree_iterator<USE_HINT>(iterator);
assert(it->current.tuple != NULL);
- struct memtx_tree_data *check =
+ struct memtx_tree_data<USE_HINT> *check =
memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
if (check == NULL || !memtx_tree_data_is_equal(check, &it->current)) {
it->tree_iterator = memtx_tree_upper_bound_elem(&index->tree,
@@ -251,7 +351,7 @@ tree_iterator_next_equal_base(struct iterator *iterator, struct tuple **ret)
memtx_tree_iterator_next(&index->tree, &it->tree_iterator);
}
tuple_unref(it->current.tuple);
- struct memtx_tree_data *res =
+ struct memtx_tree_data<USE_HINT> *res =
memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
/* Use user key def to save a few loops. */
if (res == NULL ||
@@ -271,14 +371,15 @@ tree_iterator_next_equal_base(struct iterator *iterator, struct tuple **ret)
return 0;
}
+template <bool USE_HINT>
static int
tree_iterator_prev_equal_base(struct iterator *iterator, struct tuple **ret)
{
- struct memtx_tree_index *index =
- (struct memtx_tree_index *)iterator->index;
- struct tree_iterator *it = tree_iterator(iterator);
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)iterator->index;
+ struct tree_iterator<USE_HINT> *it = get_tree_iterator<USE_HINT>(iterator);
assert(it->current.tuple != NULL);
- struct memtx_tree_data *check =
+ struct memtx_tree_data<USE_HINT> *check =
memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
if (check == NULL || !memtx_tree_data_is_equal(check, &it->current)) {
it->tree_iterator = memtx_tree_lower_bound_elem(&index->tree,
@@ -286,7 +387,7 @@ tree_iterator_prev_equal_base(struct iterator *iterator, struct tuple **ret)
}
memtx_tree_iterator_prev(&index->tree, &it->tree_iterator);
tuple_unref(it->current.tuple);
- struct memtx_tree_data *res =
+ struct memtx_tree_data<USE_HINT> *res =
memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
/* Use user key def to save a few loops. */
if (res == NULL ||
@@ -307,28 +408,30 @@ tree_iterator_prev_equal_base(struct iterator *iterator, struct tuple **ret)
}
#define WRAP_ITERATOR_METHOD(name) \
+template <bool USE_HINT> \
static int \
name(struct iterator *iterator, struct tuple **ret) \
{ \
- struct memtx_tree *tree = \
- &((struct memtx_tree_index *)iterator->index)->tree; \
- struct tree_iterator *it = tree_iterator(iterator); \
- struct memtx_tree_iterator *ti = &it->tree_iterator; \
+ memtx_tree_t<USE_HINT> *tree = \
+ &((struct memtx_tree_index<USE_HINT> *)iterator->index)->tree; \
+ struct tree_iterator<USE_HINT> *it = \
+ get_tree_iterator<USE_HINT>(iterator); \
+ memtx_tree_iterator_t<USE_HINT> *ti = &it->tree_iterator; \
uint32_t iid = iterator->index->def->iid; \
bool is_multikey = iterator->index->def->key_def->is_multikey; \
struct txn *txn = in_txn(); \
struct space *space = space_by_id(iterator->space_id); \
bool is_rw = txn != NULL; \
do { \
- int rc = name##_base(iterator, ret); \
+ int rc = name##_base<USE_HINT>(iterator, ret); \
if (rc != 0 || *ret == NULL) \
return rc; \
uint32_t mk_index = 0; \
if (is_multikey) { \
- struct memtx_tree_data *check = \
+ struct memtx_tree_data<USE_HINT> *check = \
memtx_tree_iterator_get_elem(tree, ti); \
assert(check != NULL); \
- mk_index = check->hint; \
+ mk_index = (uint32_t)check->hint; \
} \
*ret = memtx_tx_tuple_clarify(txn, space, *ret, \
iid, mk_index, is_rw); \
@@ -347,27 +450,28 @@ WRAP_ITERATOR_METHOD(tree_iterator_prev_equal);
#undef WRAP_ITERATOR_METHOD
+template <bool USE_HINT>
static void
-tree_iterator_set_next_method(struct tree_iterator *it)
+tree_iterator_set_next_method(struct tree_iterator<USE_HINT> *it)
{
assert(it->current.tuple != NULL);
switch (it->type) {
case ITER_EQ:
- it->base.next = tree_iterator_next_equal;
+ it->base.next = tree_iterator_next_equal<USE_HINT>;
break;
case ITER_REQ:
- it->base.next = tree_iterator_prev_equal;
+ it->base.next = tree_iterator_prev_equal<USE_HINT>;
break;
case ITER_ALL:
- it->base.next = tree_iterator_next;
+ it->base.next = tree_iterator_next<USE_HINT>;
break;
case ITER_LT:
case ITER_LE:
- it->base.next = tree_iterator_prev;
+ it->base.next = tree_iterator_prev<USE_HINT>;
break;
case ITER_GE:
case ITER_GT:
- it->base.next = tree_iterator_next;
+ it->base.next = tree_iterator_next<USE_HINT>;
break;
default:
/* The type was checked in initIterator */
@@ -375,15 +479,16 @@ tree_iterator_set_next_method(struct tree_iterator *it)
}
}
+template <bool USE_HINT>
static int
tree_iterator_start(struct iterator *iterator, struct tuple **ret)
{
*ret = NULL;
- struct memtx_tree_index *index =
- (struct memtx_tree_index *)iterator->index;
- struct tree_iterator *it = tree_iterator(iterator);
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)iterator->index;
+ struct tree_iterator<USE_HINT> *it = get_tree_iterator<USE_HINT>(iterator);
it->base.next = tree_iterator_dummie;
- struct memtx_tree *tree = &index->tree;
+ memtx_tree_t<USE_HINT> *tree = &index->tree;
enum iterator_type type = it->type;
bool exact = false;
assert(it->current.tuple == NULL);
@@ -423,8 +528,8 @@ tree_iterator_start(struct iterator *iterator, struct tuple **ret)
}
}
- struct memtx_tree_data *res = memtx_tree_iterator_get_elem(tree,
- &it->tree_iterator);
+ struct memtx_tree_data<USE_HINT> *res =
+ memtx_tree_iterator_get_elem(tree, &it->tree_iterator);
if (!res)
return 0;
*ret = res->tuple;
@@ -437,7 +542,7 @@ tree_iterator_start(struct iterator *iterator, struct tuple **ret)
struct txn *txn = in_txn();
struct space *space = space_by_id(iterator->space_id);
bool is_rw = txn != NULL;
- uint32_t mk_index = is_multikey ? res->hint : 0;
+ uint32_t mk_index = is_multikey ? (uint32_t)res->hint : 0;
*ret = memtx_tx_tuple_clarify(txn, space, *ret, iid, mk_index, is_rw);
if (*ret == NULL) {
return iterator->next(iterator, ret);
@@ -454,14 +559,16 @@ tree_iterator_start(struct iterator *iterator, struct tuple **ret)
/* {{{ MemtxTree **********************************************************/
+template <bool USE_HINT>
static void
-memtx_tree_index_free(struct memtx_tree_index *index)
+memtx_tree_index_free(struct memtx_tree_index<USE_HINT> *index)
{
memtx_tree_destroy(&index->tree);
free(index->build_array);
free(index);
}
+template <bool USE_HINT>
static void
memtx_tree_index_gc_run(struct memtx_gc_task *task, bool *done)
{
@@ -475,14 +582,14 @@ memtx_tree_index_gc_run(struct memtx_gc_task *task, bool *done)
enum { YIELD_LOOPS = 10 };
#endif
- struct memtx_tree_index *index = container_of(task,
- struct memtx_tree_index, gc_task);
- struct memtx_tree *tree = &index->tree;
- struct memtx_tree_iterator *itr = &index->gc_iterator;
+ struct memtx_tree_index<USE_HINT> *index = container_of(task,
+ struct memtx_tree_index<USE_HINT>, gc_task);
+ memtx_tree_t<USE_HINT> *tree = &index->tree;
+ memtx_tree_iterator_t<USE_HINT> *itr = &index->gc_iterator;
unsigned int loops = 0;
while (!memtx_tree_iterator_is_invalid(itr)) {
- struct memtx_tree_data *res =
+ struct memtx_tree_data<USE_HINT> *res =
memtx_tree_iterator_get_elem(tree, itr);
memtx_tree_iterator_next(tree, itr);
tuple_unref(res->tuple);
@@ -494,23 +601,32 @@ memtx_tree_index_gc_run(struct memtx_gc_task *task, bool *done)
*done = true;
}
+template <bool USE_HINT>
static void
memtx_tree_index_gc_free(struct memtx_gc_task *task)
{
- struct memtx_tree_index *index = container_of(task,
- struct memtx_tree_index, gc_task);
+ struct memtx_tree_index<USE_HINT> *index = container_of(task,
+ struct memtx_tree_index<USE_HINT>, gc_task);
memtx_tree_index_free(index);
}
-static const struct memtx_gc_task_vtab memtx_tree_index_gc_vtab = {
- .run = memtx_tree_index_gc_run,
- .free = memtx_tree_index_gc_free,
+template <bool USE_HINT>
+static struct memtx_gc_task_vtab * get_memtx_tree_index_gc_vtab()
+{
+ static memtx_gc_task_vtab tab =
+ {
+ .run = memtx_tree_index_gc_run<USE_HINT>,
+ .free = memtx_tree_index_gc_free<USE_HINT>,
+ };
+ return &tab;
};
+template <bool USE_HINT>
static void
memtx_tree_index_destroy(struct index *base)
{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)base;
struct memtx_engine *memtx = (struct memtx_engine *)base->engine;
if (base->def->iid == 0) {
/*
@@ -518,7 +634,7 @@ memtx_tree_index_destroy(struct index *base)
* in the index, which may take a while. Schedule a
* background task in order not to block tx thread.
*/
- index->gc_task.vtab = &memtx_tree_index_gc_vtab;
+ index->gc_task.vtab = get_memtx_tree_index_gc_vtab<USE_HINT>();
index->gc_iterator = memtx_tree_iterator_first(&index->tree);
memtx_engine_schedule_gc(memtx, &index->gc_task);
} else {
@@ -530,10 +646,12 @@ memtx_tree_index_destroy(struct index *base)
}
}
+template <bool USE_HINT>
static void
memtx_tree_index_update_def(struct index *base)
{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)base;
struct index_def *def = base->def;
/*
* We use extended key def for non-unique and nullable
@@ -553,51 +671,62 @@ memtx_tree_index_depends_on_pk(struct index *base)
return !def->opts.is_unique || def->key_def->is_nullable;
}
+template <bool USE_HINT>
static ssize_t
memtx_tree_index_size(struct index *base)
{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)base;
return memtx_tree_size(&index->tree);
}
+template <bool USE_HINT>
static ssize_t
memtx_tree_index_bsize(struct index *base)
{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)base;
return memtx_tree_mem_used(&index->tree);
}
+template <bool USE_HINT>
static int
memtx_tree_index_random(struct index *base, uint32_t rnd, struct tuple **result)
{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- struct memtx_tree_data *res = memtx_tree_random(&index->tree, rnd);
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)base;
+ struct memtx_tree_data<USE_HINT> *res = memtx_tree_random(&index->tree, rnd);
*result = res != NULL ? res->tuple : NULL;
return 0;
}
+template <bool USE_HINT>
static ssize_t
memtx_tree_index_count(struct index *base, enum iterator_type type,
const char *key, uint32_t part_count)
{
if (type == ITER_ALL)
- return memtx_tree_index_size(base); /* optimization */
+ return memtx_tree_index_size<USE_HINT>(base); /* optimization */
return generic_index_count(base, type, key, part_count);
}
+template <bool USE_HINT>
static int
memtx_tree_index_get(struct index *base, const char *key,
uint32_t part_count, struct tuple **result)
{
assert(base->def->opts.is_unique &&
part_count == base->def->key_def->part_count);
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)base;
struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
- struct memtx_tree_key_data key_data;
+ struct memtx_tree_key_data<USE_HINT> key_data;
key_data.key = key;
key_data.part_count = part_count;
- key_data.hint = key_hint(key, part_count, cmp_def);
- struct memtx_tree_data *res = memtx_tree_find(&index->tree, &key_data);
+ if (USE_HINT)
+ key_data.set_hint(key_hint(key, part_count, cmp_def));
+ struct memtx_tree_data<USE_HINT> *res =
+ memtx_tree_find(&index->tree, &key_data);
if (res == NULL) {
*result = NULL;
return 0;
@@ -605,24 +734,28 @@ memtx_tree_index_get(struct index *base, const char *key,
struct txn *txn = in_txn();
struct space *space = space_by_id(base->def->space_id);
bool is_rw = txn != NULL;
- uint32_t mk_index = base->def->key_def->is_multikey ? res->hint : 0;
+ bool is_multikey = base->def->key_def->is_multikey;
+ uint32_t mk_index = is_multikey ? (uint32_t)res->hint : 0;
*result = memtx_tx_tuple_clarify(txn, space, res->tuple, base->def->iid,
mk_index, is_rw);
return 0;
}
+template <bool USE_HINT>
static int
memtx_tree_index_replace(struct index *base, struct tuple *old_tuple,
struct tuple *new_tuple, enum dup_replace_mode mode,
struct tuple **result)
{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)base;
struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
if (new_tuple) {
- struct memtx_tree_data new_data;
+ struct memtx_tree_data<USE_HINT> new_data;
new_data.tuple = new_tuple;
- new_data.hint = tuple_hint(new_tuple, cmp_def);
- struct memtx_tree_data dup_data;
+ if (USE_HINT)
+ new_data.set_hint(tuple_hint(new_tuple, cmp_def));
+ struct memtx_tree_data<USE_HINT> dup_data;
dup_data.tuple = NULL;
/* Try to optimistically replace the new_tuple. */
@@ -652,9 +785,10 @@ memtx_tree_index_replace(struct index *base, struct tuple *old_tuple,
}
}
if (old_tuple) {
- struct memtx_tree_data old_data;
+ struct memtx_tree_data<USE_HINT> old_data;
old_data.tuple = old_tuple;
- old_data.hint = tuple_hint(old_tuple, cmp_def);
+ if (USE_HINT)
+ old_data.set_hint(tuple_hint(old_tuple, cmp_def));
memtx_tree_delete(&index->tree, old_data);
}
*result = old_tuple;
@@ -667,13 +801,13 @@ memtx_tree_index_replace(struct index *base, struct tuple *old_tuple,
* by all its multikey indexes.
*/
static int
-memtx_tree_index_replace_multikey_one(struct memtx_tree_index *index,
+memtx_tree_index_replace_multikey_one(struct memtx_tree_index<true> *index,
struct tuple *old_tuple, struct tuple *new_tuple,
enum dup_replace_mode mode, hint_t hint,
- struct memtx_tree_data *replaced_data,
+ struct memtx_tree_data<true> *replaced_data,
bool *is_multikey_conflict)
{
- struct memtx_tree_data new_data, dup_data;
+ struct memtx_tree_data<true> new_data, dup_data;
new_data.tuple = new_tuple;
new_data.hint = hint;
dup_data.tuple = NULL;
@@ -720,11 +854,11 @@ memtx_tree_index_replace_multikey_one(struct memtx_tree_index *index,
* delete operation is fault-tolerant.
*/
static void
-memtx_tree_index_replace_multikey_rollback(struct memtx_tree_index *index,
+memtx_tree_index_replace_multikey_rollback(struct memtx_tree_index<true> *index,
struct tuple *new_tuple, struct tuple *replaced_tuple,
int err_multikey_idx)
{
- struct memtx_tree_data data;
+ struct memtx_tree_data<true> data;
if (replaced_tuple != NULL) {
/* Restore replaced tuple index occurrences. */
struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
@@ -798,7 +932,8 @@ memtx_tree_index_replace_multikey(struct index *base, struct tuple *old_tuple,
struct tuple *new_tuple, enum dup_replace_mode mode,
struct tuple **result)
{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_tree_index<true> *index =
+ (struct memtx_tree_index<true> *)base;
struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
*result = NULL;
if (new_tuple != NULL) {
@@ -808,7 +943,7 @@ memtx_tree_index_replace_multikey(struct index *base, struct tuple *old_tuple,
for (; (uint32_t) multikey_idx < multikey_count;
multikey_idx++) {
bool is_multikey_conflict;
- struct memtx_tree_data replaced_data;
+ struct memtx_tree_data<true> replaced_data;
err = memtx_tree_index_replace_multikey_one(index,
old_tuple, new_tuple, mode,
multikey_idx, &replaced_data,
@@ -833,7 +968,7 @@ memtx_tree_index_replace_multikey(struct index *base, struct tuple *old_tuple,
}
}
if (old_tuple != NULL) {
- struct memtx_tree_data data;
+ struct memtx_tree_data<true> data;
data.tuple = old_tuple;
uint32_t multikey_count =
tuple_multikey_count(old_tuple, cmp_def);
@@ -865,7 +1000,7 @@ struct func_key_undo {
/** A link to organize entries in list. */
struct rlist link;
/** An inserted record copy. */
- struct memtx_tree_data key;
+ struct memtx_tree_data<true> key;
};
/** Allocate a new func_key_undo on given region. */
@@ -888,7 +1023,7 @@ func_key_undo_new(struct region *region)
* return a given index object in its original state.
*/
static void
-memtx_tree_func_index_replace_rollback(struct memtx_tree_index *index,
+memtx_tree_func_index_replace_rollback(struct memtx_tree_index<true> *index,
struct rlist *old_keys,
struct rlist *new_keys)
{
@@ -919,7 +1054,8 @@ memtx_tree_func_index_replace(struct index *base, struct tuple *old_tuple,
struct tuple *new_tuple, enum dup_replace_mode mode,
struct tuple **result)
{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_tree_index<true> *index =
+ (struct memtx_tree_index<true> *)base;
struct index_def *index_def = index->base.def;
assert(index_def->key_def->for_func_index);
@@ -952,7 +1088,7 @@ memtx_tree_func_index_replace(struct index *base, struct tuple *old_tuple,
undo->key.hint = (hint_t)key;
rlist_add(&new_keys, &undo->link);
bool is_multikey_conflict;
- struct memtx_tree_data old_data;
+ struct memtx_tree_data<true> old_data;
old_data.tuple = NULL;
err = memtx_tree_index_replace_multikey_one(index,
old_tuple, new_tuple,
@@ -1015,7 +1151,7 @@ memtx_tree_func_index_replace(struct index *base, struct tuple *old_tuple,
if (key_list_iterator_create(&it, old_tuple, index_def, false,
func_index_key_dummy_alloc) != 0)
goto end;
- struct memtx_tree_data data, deleted_data;
+ struct memtx_tree_data<true> data, deleted_data;
data.tuple = old_tuple;
const char *key;
while (key_list_iterator_next(&it, &key) == 0 && key != NULL) {
@@ -1040,11 +1176,13 @@ end:
return rc;
}
+template <bool USE_HINT>
static struct iterator *
memtx_tree_index_create_iterator(struct index *base, enum iterator_type type,
const char *key, uint32_t part_count)
{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)base;
struct memtx_engine *memtx = (struct memtx_engine *)base->engine;
struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
@@ -1064,42 +1202,47 @@ memtx_tree_index_create_iterator(struct index *base, enum iterator_type type,
key = NULL;
}
- struct tree_iterator *it = (struct tree_iterator *)
+ struct tree_iterator<USE_HINT> *it = (struct tree_iterator<USE_HINT> *)
mempool_alloc(&memtx->iterator_pool);
if (it == NULL) {
- diag_set(OutOfMemory, sizeof(struct tree_iterator),
+ diag_set(OutOfMemory, sizeof(struct tree_iterator<USE_HINT>),
"memtx_tree_index", "iterator");
return NULL;
}
iterator_create(&it->base, base);
it->pool = &memtx->iterator_pool;
- it->base.next = tree_iterator_start;
- it->base.free = tree_iterator_free;
+ it->base.next = tree_iterator_start<USE_HINT>;
+ it->base.free = tree_iterator_free<USE_HINT>;
it->type = type;
it->key_data.key = key;
it->key_data.part_count = part_count;
- it->key_data.hint = key_hint(key, part_count, cmp_def);
- it->tree_iterator = memtx_tree_invalid_iterator();
+ if (USE_HINT)
+ it->key_data.set_hint(key_hint(key, part_count, cmp_def));
+ invalidate_tree_iterator(&it->tree_iterator);
it->current.tuple = NULL;
return (struct iterator *)it;
}
+template <bool USE_HINT>
static void
memtx_tree_index_begin_build(struct index *base)
{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)base;
assert(memtx_tree_size(&index->tree) == 0);
(void)index;
}
+template <bool USE_HINT>
static int
memtx_tree_index_reserve(struct index *base, uint32_t size_hint)
{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)base;
if (size_hint < index->build_array_alloc_size)
return 0;
- struct memtx_tree_data *tmp =
- (struct memtx_tree_data *)
+ struct memtx_tree_data<USE_HINT> *tmp =
+ (struct memtx_tree_data<USE_HINT> *)
realloc(index->build_array, size_hint * sizeof(*tmp));
if (tmp == NULL) {
diag_set(OutOfMemory, size_hint * sizeof(*tmp),
@@ -1111,14 +1254,15 @@ memtx_tree_index_reserve(struct index *base, uint32_t size_hint)
return 0;
}
+template <bool USE_HINT>
/** Initialize the next element of the index build_array. */
static int
-memtx_tree_index_build_array_append(struct memtx_tree_index *index,
+memtx_tree_index_build_array_append(struct memtx_tree_index<USE_HINT> *index,
struct tuple *tuple, hint_t hint)
{
if (index->build_array == NULL) {
index->build_array =
- (struct memtx_tree_data *)malloc(MEMTX_EXTENT_SIZE);
+ (struct memtx_tree_data<USE_HINT> *)malloc(MEMTX_EXTENT_SIZE);
if (index->build_array == NULL) {
diag_set(OutOfMemory, MEMTX_EXTENT_SIZE,
"memtx_tree_index", "build_next");
@@ -1131,8 +1275,8 @@ memtx_tree_index_build_array_append(struct memtx_tree_index *index,
if (index->build_array_size == index->build_array_alloc_size) {
index->build_array_alloc_size = index->build_array_alloc_size +
DIV_ROUND_UP(index->build_array_alloc_size, 2);
- struct memtx_tree_data *tmp =
- (struct memtx_tree_data *)realloc(index->build_array,
+ struct memtx_tree_data<USE_HINT> *tmp =
+ (struct memtx_tree_data<USE_HINT> *)realloc(index->build_array,
index->build_array_alloc_size * sizeof(*tmp));
if (tmp == NULL) {
diag_set(OutOfMemory, index->build_array_alloc_size *
@@ -1141,17 +1285,20 @@ memtx_tree_index_build_array_append(struct memtx_tree_index *index,
}
index->build_array = tmp;
}
- struct memtx_tree_data *elem =
+ struct memtx_tree_data<USE_HINT> *elem =
&index->build_array[index->build_array_size++];
elem->tuple = tuple;
- elem->hint = hint;
+ if (USE_HINT)
+ elem->set_hint(hint);
return 0;
}
+template <bool USE_HINT>
static int
memtx_tree_index_build_next(struct index *base, struct tuple *tuple)
{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)base;
struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
return memtx_tree_index_build_array_append(index, tuple,
tuple_hint(tuple, cmp_def));
@@ -1160,7 +1307,7 @@ memtx_tree_index_build_next(struct index *base, struct tuple *tuple)
static int
memtx_tree_index_build_next_multikey(struct index *base, struct tuple *tuple)
{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_tree_index<true> *index = (struct memtx_tree_index<true> *)base;
struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
uint32_t multikey_count = tuple_multikey_count(tuple, cmp_def);
for (uint32_t multikey_idx = 0; multikey_idx < multikey_count;
@@ -1175,7 +1322,7 @@ memtx_tree_index_build_next_multikey(struct index *base, struct tuple *tuple)
static int
memtx_tree_func_index_build_next(struct index *base, struct tuple *tuple)
{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_tree_index<true> *index = (struct memtx_tree_index<true> *)base;
struct index_def *index_def = index->base.def;
assert(index_def->key_def->for_func_index);
@@ -1211,8 +1358,9 @@ error:
* of equal tuples (in terms of index's cmp_def and have same
* tuple pointer). The build_array is expected to be sorted.
*/
+template <bool USE_HINT>
static void
-memtx_tree_index_build_array_deduplicate(struct memtx_tree_index *index,
+memtx_tree_index_build_array_deduplicate(struct memtx_tree_index<USE_HINT> *index,
void (*destroy)(struct tuple *tuple, const char *hint))
{
if (index->build_array_size == 0)
@@ -1246,13 +1394,16 @@ memtx_tree_index_build_array_deduplicate(struct memtx_tree_index *index,
index->build_array_size = w_idx + 1;
}
+template <bool USE_HINT>
static void
memtx_tree_index_end_build(struct index *base)
{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)base;
struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
qsort_arg(index->build_array, index->build_array_size,
- sizeof(index->build_array[0]), memtx_tree_qcompare, cmp_def);
+ sizeof(index->build_array[0]),
+ memtx_tree_qcompare<USE_HINT>, cmp_def);
if (cmp_def->is_multikey) {
/*
* Multikey index may have equal(in terms of
@@ -1261,9 +1412,9 @@ memtx_tree_index_end_build(struct index *base)
* the following memtx_tree_build assumes that
* all keys are unique.
*/
- memtx_tree_index_build_array_deduplicate(index, NULL);
+ memtx_tree_index_build_array_deduplicate<USE_HINT>(index, NULL);
} else if (cmp_def->for_func_index) {
- memtx_tree_index_build_array_deduplicate(index,
+ memtx_tree_index_build_array_deduplicate<USE_HINT>(index,
tuple_chunk_delete);
}
memtx_tree_build(&index->tree, index->build_array,
@@ -1275,19 +1426,21 @@ memtx_tree_index_end_build(struct index *base)
index->build_array_alloc_size = 0;
}
+template <bool USE_HINT>
struct tree_snapshot_iterator {
struct snapshot_iterator base;
- struct memtx_tree_index *index;
- struct memtx_tree_iterator tree_iterator;
+ struct memtx_tree_index<USE_HINT> *index;
+ memtx_tree_iterator_t<USE_HINT> tree_iterator;
struct memtx_tx_snapshot_cleaner cleaner;
};
+template <bool USE_HINT>
static void
tree_snapshot_iterator_free(struct snapshot_iterator *iterator)
{
- assert(iterator->free == tree_snapshot_iterator_free);
- struct tree_snapshot_iterator *it =
- (struct tree_snapshot_iterator *)iterator;
+ assert(iterator->free == &tree_snapshot_iterator_free<USE_HINT>);
+ struct tree_snapshot_iterator<USE_HINT> *it =
+ (struct tree_snapshot_iterator<USE_HINT> *)iterator;
memtx_leave_delayed_free_mode((struct memtx_engine *)
it->index->base.engine);
memtx_tree_iterator_destroy(&it->index->tree, &it->tree_iterator);
@@ -1296,17 +1449,18 @@ tree_snapshot_iterator_free(struct snapshot_iterator *iterator)
free(iterator);
}
+template <bool USE_HINT>
static int
tree_snapshot_iterator_next(struct snapshot_iterator *iterator,
const char **data, uint32_t *size)
{
- assert(iterator->free == tree_snapshot_iterator_free);
- struct tree_snapshot_iterator *it =
- (struct tree_snapshot_iterator *)iterator;
- struct memtx_tree *tree = &it->index->tree;
+ assert(iterator->free == &tree_snapshot_iterator_free<USE_HINT>);
+ struct tree_snapshot_iterator<USE_HINT> *it =
+ (struct tree_snapshot_iterator<USE_HINT> *)iterator;
+ memtx_tree_t<USE_HINT> *tree = &it->index->tree;
while (true) {
- struct memtx_tree_data *res =
+ struct memtx_tree_data<USE_HINT> *res =
memtx_tree_iterator_get_elem(tree, &it->tree_iterator);
if (res == NULL) {
@@ -1333,14 +1487,18 @@ tree_snapshot_iterator_next(struct snapshot_iterator *iterator,
* index modifications will not affect the iteration results.
* Must be destroyed by iterator->free after usage.
*/
+template <bool USE_HINT>
static struct snapshot_iterator *
memtx_tree_index_create_snapshot_iterator(struct index *base)
{
- struct memtx_tree_index *index = (struct memtx_tree_index *)base;
- struct tree_snapshot_iterator *it =
- (struct tree_snapshot_iterator *) calloc(1, sizeof(*it));
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)base;
+ struct tree_snapshot_iterator<USE_HINT> *it =
+ (struct tree_snapshot_iterator<USE_HINT> *)
+ calloc(1, sizeof(*it));
if (it == NULL) {
- diag_set(OutOfMemory, sizeof(struct tree_snapshot_iterator),
+ diag_set(OutOfMemory,
+ sizeof(struct tree_snapshot_iterator<USE_HINT>),
"memtx_tree_index", "create_snapshot_iterator");
return NULL;
}
@@ -1352,8 +1510,8 @@ memtx_tree_index_create_snapshot_iterator(struct index *base)
return NULL;
}
- it->base.free = tree_snapshot_iterator_free;
- it->base.next = tree_snapshot_iterator_next;
+ it->base.free = tree_snapshot_iterator_free<USE_HINT>;
+ it->base.next = tree_snapshot_iterator_next<USE_HINT>;
it->index = index;
index_ref(base);
it->tree_iterator = memtx_tree_iterator_first(&index->tree);
@@ -1362,94 +1520,124 @@ memtx_tree_index_create_snapshot_iterator(struct index *base)
return (struct snapshot_iterator *) it;
}
-static const struct index_vtab memtx_tree_index_vtab = {
- /* .destroy = */ memtx_tree_index_destroy,
+static const struct index_vtab memtx_tree_no_hint_index_vtab = {
+ /* .destroy = */ memtx_tree_index_destroy<false>,
/* .commit_create = */ generic_index_commit_create,
/* .abort_create = */ generic_index_abort_create,
/* .commit_modify = */ generic_index_commit_modify,
/* .commit_drop = */ generic_index_commit_drop,
- /* .update_def = */ memtx_tree_index_update_def,
+ /* .update_def = */ memtx_tree_index_update_def<false>,
/* .depends_on_pk = */ memtx_tree_index_depends_on_pk,
/* .def_change_requires_rebuild = */
memtx_index_def_change_requires_rebuild,
- /* .size = */ memtx_tree_index_size,
- /* .bsize = */ memtx_tree_index_bsize,
+ /* .size = */ memtx_tree_index_size<false>,
+ /* .bsize = */ memtx_tree_index_bsize<false>,
/* .min = */ generic_index_min,
/* .max = */ generic_index_max,
- /* .random = */ memtx_tree_index_random,
- /* .count = */ memtx_tree_index_count,
- /* .get = */ memtx_tree_index_get,
- /* .replace = */ memtx_tree_index_replace,
- /* .create_iterator = */ memtx_tree_index_create_iterator,
+ /* .random = */ memtx_tree_index_random<false>,
+ /* .count = */ memtx_tree_index_count<false>,
+ /* .get = */ memtx_tree_index_get<false>,
+ /* .replace = */ memtx_tree_index_replace<false>,
+ /* .create_iterator = */ memtx_tree_index_create_iterator<false>,
/* .create_snapshot_iterator = */
- memtx_tree_index_create_snapshot_iterator,
+ memtx_tree_index_create_snapshot_iterator<false>,
/* .stat = */ generic_index_stat,
/* .compact = */ generic_index_compact,
/* .reset_stat = */ generic_index_reset_stat,
- /* .begin_build = */ memtx_tree_index_begin_build,
- /* .reserve = */ memtx_tree_index_reserve,
- /* .build_next = */ memtx_tree_index_build_next,
- /* .end_build = */ memtx_tree_index_end_build,
+ /* .begin_build = */ memtx_tree_index_begin_build<false>,
+ /* .reserve = */ memtx_tree_index_reserve<false>,
+ /* .build_next = */ memtx_tree_index_build_next<false>,
+ /* .end_build = */ memtx_tree_index_end_build<false>,
+};
+
+static const struct index_vtab memtx_tree_use_hint_index_vtab = {
+ /* .destroy = */ memtx_tree_index_destroy<true>,
+ /* .commit_create = */ generic_index_commit_create,
+ /* .abort_create = */ generic_index_abort_create,
+ /* .commit_modify = */ generic_index_commit_modify,
+ /* .commit_drop = */ generic_index_commit_drop,
+ /* .update_def = */ memtx_tree_index_update_def<true>,
+ /* .depends_on_pk = */ memtx_tree_index_depends_on_pk,
+ /* .def_change_requires_rebuild = */
+ memtx_index_def_change_requires_rebuild,
+ /* .size = */ memtx_tree_index_size<true>,
+ /* .bsize = */ memtx_tree_index_bsize<true>,
+ /* .min = */ generic_index_min,
+ /* .max = */ generic_index_max,
+ /* .random = */ memtx_tree_index_random<true>,
+ /* .count = */ memtx_tree_index_count<true>,
+ /* .get = */ memtx_tree_index_get<true>,
+ /* .replace = */ memtx_tree_index_replace<true>,
+ /* .create_iterator = */ memtx_tree_index_create_iterator<true>,
+ /* .create_snapshot_iterator = */
+ memtx_tree_index_create_snapshot_iterator<true>,
+ /* .stat = */ generic_index_stat,
+ /* .compact = */ generic_index_compact,
+ /* .reset_stat = */ generic_index_reset_stat,
+ /* .begin_build = */ memtx_tree_index_begin_build<true>,
+ /* .reserve = */ memtx_tree_index_reserve<true>,
+ /* .build_next = */ memtx_tree_index_build_next<true>,
+ /* .end_build = */ memtx_tree_index_end_build<true>,
};
static const struct index_vtab memtx_tree_index_multikey_vtab = {
- /* .destroy = */ memtx_tree_index_destroy,
+ /* .destroy = */ memtx_tree_index_destroy<true>,
/* .commit_create = */ generic_index_commit_create,
/* .abort_create = */ generic_index_abort_create,
/* .commit_modify = */ generic_index_commit_modify,
/* .commit_drop = */ generic_index_commit_drop,
- /* .update_def = */ memtx_tree_index_update_def,
+ /* .update_def = */ memtx_tree_index_update_def<true>,
/* .depends_on_pk = */ memtx_tree_index_depends_on_pk,
/* .def_change_requires_rebuild = */
memtx_index_def_change_requires_rebuild,
- /* .size = */ memtx_tree_index_size,
- /* .bsize = */ memtx_tree_index_bsize,
+ /* .size = */ memtx_tree_index_size<true>,
+ /* .bsize = */ memtx_tree_index_bsize<true>,
/* .min = */ generic_index_min,
/* .max = */ generic_index_max,
- /* .random = */ memtx_tree_index_random,
- /* .count = */ memtx_tree_index_count,
- /* .get = */ memtx_tree_index_get,
+ /* .random = */ memtx_tree_index_random<true>,
+ /* .count = */ memtx_tree_index_count<true>,
+ /* .get = */ memtx_tree_index_get<true>,
/* .replace = */ memtx_tree_index_replace_multikey,
- /* .create_iterator = */ memtx_tree_index_create_iterator,
+ /* .create_iterator = */ memtx_tree_index_create_iterator<true>,
/* .create_snapshot_iterator = */
- memtx_tree_index_create_snapshot_iterator,
+ memtx_tree_index_create_snapshot_iterator<true>,
/* .stat = */ generic_index_stat,
/* .compact = */ generic_index_compact,
/* .reset_stat = */ generic_index_reset_stat,
- /* .begin_build = */ memtx_tree_index_begin_build,
- /* .reserve = */ memtx_tree_index_reserve,
+ /* .begin_build = */ memtx_tree_index_begin_build<true>,
+ /* .reserve = */ memtx_tree_index_reserve<true>,
/* .build_next = */ memtx_tree_index_build_next_multikey,
- /* .end_build = */ memtx_tree_index_end_build,
+ /* .end_build = */ memtx_tree_index_end_build<true>,
};
static const struct index_vtab memtx_tree_func_index_vtab = {
- /* .destroy = */ memtx_tree_index_destroy,
+ /* .destroy = */ memtx_tree_index_destroy<true>,
/* .commit_create = */ generic_index_commit_create,
/* .abort_create = */ generic_index_abort_create,
/* .commit_modify = */ generic_index_commit_modify,
/* .commit_drop = */ generic_index_commit_drop,
- /* .update_def = */ memtx_tree_index_update_def,
+ /* .update_def = */ memtx_tree_index_update_def<true>,
/* .depends_on_pk = */ memtx_tree_index_depends_on_pk,
/* .def_change_requires_rebuild = */
memtx_index_def_change_requires_rebuild,
- /* .size = */ memtx_tree_index_size,
- /* .bsize = */ memtx_tree_index_bsize,
+ /* .size = */ memtx_tree_index_size<true>,
+ /* .bsize = */ memtx_tree_index_bsize<true>,
/* .min = */ generic_index_min,
/* .max = */ generic_index_max,
- /* .random = */ memtx_tree_index_random,
- /* .count = */ memtx_tree_index_count,
- /* .get = */ memtx_tree_index_get,
+ /* .random = */ memtx_tree_index_random<true>,
+ /* .count = */ memtx_tree_index_count<true>,
+ /* .get = */ memtx_tree_index_get<true>,
/* .replace = */ memtx_tree_func_index_replace,
- /* .create_iterator = */ memtx_tree_index_create_iterator,
+ /* .create_iterator = */ memtx_tree_index_create_iterator<true>,
/* .create_snapshot_iterator = */
- memtx_tree_index_create_snapshot_iterator,
+ memtx_tree_index_create_snapshot_iterator<true>,
/* .stat = */ generic_index_stat,
/* .compact = */ generic_index_compact,
/* .reset_stat = */ generic_index_reset_stat,
- /* .begin_build = */ memtx_tree_index_begin_build,
- /* .reserve = */ memtx_tree_index_reserve,
+ /* .begin_build = */ memtx_tree_index_begin_build<true>,
+ /* .reserve = */ memtx_tree_index_reserve<true>,
/* .build_next = */ memtx_tree_func_index_build_next,
- /* .end_build = */ memtx_tree_index_end_build,
+ /* .end_build = */ memtx_tree_index_end_build<true>,
};
/**
@@ -1459,7 +1647,7 @@ static const struct index_vtab memtx_tree_func_index_vtab = {
* key definition is not completely initialized at that moment).
*/
static const struct index_vtab memtx_tree_disabled_index_vtab = {
- /* .destroy = */ memtx_tree_index_destroy,
+ /* .destroy = */ memtx_tree_index_destroy<true>,
/* .commit_create = */ generic_index_commit_create,
/* .abort_create = */ generic_index_abort_create,
/* .commit_modify = */ generic_index_commit_modify,
@@ -1488,27 +1676,19 @@ static const struct index_vtab memtx_tree_disabled_index_vtab = {
/* .end_build = */ generic_index_end_build,
};
-struct index *
-memtx_tree_index_new(struct memtx_engine *memtx, struct index_def *def)
+template <bool USE_HINT>
+static struct index *
+memtx_tree_index_new_tpl(struct memtx_engine *memtx, struct index_def *def,
+ const struct index_vtab *vtab)
{
- struct memtx_tree_index *index =
- (struct memtx_tree_index *)calloc(1, sizeof(*index));
+ struct memtx_tree_index<USE_HINT> *index =
+ (struct memtx_tree_index<USE_HINT> *)
+ calloc(1, sizeof(*index));
if (index == NULL) {
diag_set(OutOfMemory, sizeof(*index),
"malloc", "struct memtx_tree_index");
return NULL;
}
- const struct index_vtab *vtab;
- if (def->key_def->for_func_index) {
- if (def->key_def->func_index_func == NULL)
- vtab = &memtx_tree_disabled_index_vtab;
- else
- vtab = &memtx_tree_func_index_vtab;
- } else if (def->key_def->is_multikey) {
- vtab = &memtx_tree_index_multikey_vtab;
- } else {
- vtab = &memtx_tree_index_vtab;
- }
if (index_create(&index->base, (struct engine *)memtx,
vtab, def) != 0) {
free(index);
@@ -1524,3 +1704,23 @@ memtx_tree_index_new(struct memtx_engine *memtx, struct index_def *def)
memtx_index_extent_free, memtx);
return &index->base;
}
+
+struct index *
+memtx_tree_index_new(struct memtx_engine *memtx, struct index_def *def)
+{
+ const struct index_vtab *vtab;
+ if (def->key_def->for_func_index) {
+ if (def->key_def->func_index_func == NULL)
+ vtab = &memtx_tree_disabled_index_vtab;
+ else
+ vtab = &memtx_tree_func_index_vtab;
+ } else if (def->key_def->is_multikey) {
+ vtab = &memtx_tree_index_multikey_vtab;
+ } else if (def->opts.hint) {
+ vtab = &memtx_tree_use_hint_index_vtab;
+ } else {
+ vtab = &memtx_tree_no_hint_index_vtab;
+ return memtx_tree_index_new_tpl<false>(memtx, def, vtab);
+ }
+ return memtx_tree_index_new_tpl<true>(memtx, def, vtab);
+}
diff --git a/src/lib/salad/bps_tree.h b/src/lib/salad/bps_tree.h
index ef5ae3d..0bb803a 100644
--- a/src/lib/salad/bps_tree.h
+++ b/src/lib/salad/bps_tree.h
@@ -168,6 +168,17 @@
/* {{{ BPS-tree interface settings */
/**
+ * Optional namespace for structs and functions.
* If set, structs and functions will have a BPS_TREE_NAMESPACE:: prefix.
* For example, one can #define BPS_TREE_NAMESPACE my along with
* #define BPS_TREE_NAME _test, and then use
+ * struct my::bps_tree_test my_tree;
+ * my::bps_tree_test_create(&my_tree, ...);
+ *
+ * #define BPS_TREE_NAMESPACE
+ */
+
+/**
* Custom name for structs and functions.
* Structs and functions will have a bps_tree##BPS_TREE_NAME name or prefix.
* For example, one can #define BPS_TREE_NAME _test, and then use
@@ -300,6 +311,10 @@
/* }}} */
+#ifdef BPS_TREE_NAMESPACE
+namespace BPS_TREE_NAMESPACE {
+#endif
+
/* {{{ BPS-tree internal settings */
typedef int16_t bps_tree_pos_t;
typedef uint32_t bps_tree_block_id_t;
@@ -6188,3 +6203,7 @@ bps_tree_debug_check_internal_functions(bool assertme)
#undef bps_tree_debug_check_insert_and_move_to_left_inner
/* }}} */
+
+#ifdef BPS_TREE_NAMESPACE
+} /* namespace BPS_TREE_NAMESPACE { */
+#endif
diff --git a/test/box/alter.result b/test/box/alter.result
index 237c2d8..5b412a7 100644
--- a/test/box/alter.result
+++ b/test/box/alter.result
@@ -680,8 +680,9 @@ s.index.pk
- type: unsigned
is_nullable: false
fieldno: 2
- type: TREE
+ hint: true
id: 0
+ type: TREE
space_id: 731
name: pk
...
@@ -710,9 +711,10 @@ s.index.pk
- type: unsigned
is_nullable: false
fieldno: 1
- space_id: 731
+ hint: true
id: 0
type: TREE
+ space_id: 731
name: pk
...
s.index.secondary
@@ -722,8 +724,9 @@ s.index.secondary
- type: unsigned
is_nullable: false
fieldno: 2
- type: TREE
+ hint: true
id: 1
+ type: TREE
space_id: 731
name: secondary
...
@@ -1559,3 +1562,97 @@ assert(err:match('does not exist') ~= nil)
---
- true
...
+-- hint
+s = box.schema.create_space('test');
+---
+...
+s:create_index('test1', {type='tree', parts={{1, 'uint'}}}).hint
+---
+- true
+...
+s:create_index('test2', {type='tree', parts={{2, 'uint'}}, hint = true}).hint
+---
+- true
+...
+s:create_index('test3', {type='tree', parts={{3, 'uint'}}, hint = false}).hint
+---
+- false
+...
+s:create_index('test4', {type='tree', parts={{4, 'string'}}, hint = false}).hint
+---
+- false
+...
+s.index.test1.hint
+---
+- true
+...
+s.index.test2.hint
+---
+- true
+...
+s.index.test3.hint
+---
+- false
+...
+s.index.test4.hint
+---
+- false
+...
+N = 1000 box.begin() for i = 1,N do s:replace{i, i, i, '' .. i} end box.commit()
+---
+...
+s.index.test1:bsize() == s.index.test2:bsize()
+---
+- true
+...
+s.index.test1:bsize() > s.index.test3:bsize()
+---
+- true
+...
+s.index.test1:bsize() > s.index.test4:bsize()
+---
+- true
+...
+s.index.test1:alter{hint=false}
+---
+...
+s.index.test2:alter{hint=true}
+---
+...
+s.index.test3:alter{name='test33', hint=false}
+---
+...
+s.index.test4:alter{hint=true}
+---
+...
+s.index.test1.hint
+---
+- false
+...
+s.index.test2.hint
+---
+- true
+...
+s.index.test33.hint
+---
+- false
+...
+s.index.test4.hint
+---
+- true
+...
+s.index.test1:bsize() < s.index.test2:bsize()
+---
+- true
+...
+s.index.test1:bsize() == s.index.test33:bsize()
+---
+- true
+...
+s.index.test1:bsize() < s.index.test4:bsize()
+---
+- true
+...
+s:drop()
+---
+...
diff --git a/test/box/alter.test.lua b/test/box/alter.test.lua
index abd08e2..2114186 100644
--- a/test/box/alter.test.lua
+++ b/test/box/alter.test.lua
@@ -620,3 +620,37 @@ s:drop()
s:alter({})
ok, err = pcall(box.schema.space.alter, s.id, {})
assert(err:match('does not exist') ~= nil)
+
+-- hint
+s = box.schema.create_space('test');
+s:create_index('test1', {type='tree', parts={{1, 'uint'}}}).hint
+s:create_index('test2', {type='tree', parts={{2, 'uint'}}, hint = true}).hint
+s:create_index('test3', {type='tree', parts={{3, 'uint'}}, hint = false}).hint
+s:create_index('test4', {type='tree', parts={{4, 'string'}}, hint = false}).hint
+
+s.index.test1.hint
+s.index.test2.hint
+s.index.test3.hint
+s.index.test4.hint
+
+N = 1000 box.begin() for i = 1,N do s:replace{i, i, i, '' .. i} end box.commit()
+
+s.index.test1:bsize() == s.index.test2:bsize()
+s.index.test1:bsize() > s.index.test3:bsize()
+s.index.test1:bsize() > s.index.test4:bsize()
+
+s.index.test1:alter{hint=false}
+s.index.test2:alter{hint=true}
+s.index.test3:alter{name='test33', hint=false}
+s.index.test4:alter{hint=true}
+
+s.index.test1.hint
+s.index.test2.hint
+s.index.test33.hint
+s.index.test4.hint
+
+s.index.test1:bsize() < s.index.test2:bsize()
+s.index.test1:bsize() == s.index.test33:bsize()
+s.index.test1:bsize() < s.index.test4:bsize()
+
+s:drop()
\ No newline at end of file
diff --git a/test/box/errinj.result b/test/box/errinj.result
index 613d22c..b8c2476 100644
--- a/test/box/errinj.result
+++ b/test/box/errinj.result
@@ -1774,9 +1774,10 @@ rtreespace:create_index('pk', {if_not_exists = true})
- type: unsigned
is_nullable: false
fieldno: 1
+ hint: true
id: 0
- space_id: 512
type: TREE
+ space_id: 512
name: pk
...
rtreespace:create_index('target', {type='rtree', dimension = 3, parts={2, 'array'},unique = false, if_not_exists = true,})
diff --git a/test/box/tree_pk.result b/test/box/tree_pk.result
index df3c78b..18cb607 100644
--- a/test/box/tree_pk.result
+++ b/test/box/tree_pk.result
@@ -852,3 +852,317 @@ box.internal.collation.drop('test')
box.internal.collation.drop('test-ci')
---
...
+-- hints
+s = box.schema.space.create('test')
+---
+...
+s:create_index('test', {type = 'tree', hint = 'true'} )
+---
+- error: Illegal parameters, options parameter 'hint' should be of type boolean
+...
+s:create_index('test', {type = 'hash', hint = true} )
+---
+- error: 'Can''t create or modify index ''test'' in space ''test'': hint is only reasonable
+ with memtx tree index'
+...
+s:create_index('test', {type = 'hash'}):alter({hint = true})
+---
+- error: 'Can''t create or modify index ''test'' in space ''test'': hint is only reasonable
+ with memtx tree index'
+...
+s:create_index('multikey', {hint = true, parts = {{2, 'int', path = '[*]'}}})
+---
+- error: 'Can''t create or modify index ''multikey'' in space ''test'': multikey index
+ can''t use hints'
+...
+s:create_index('multikey', {parts = {{2, 'int', path = '[*]'}}}):alter({hint = true})
+---
+- error: 'Can''t create or modify index ''multikey'' in space ''test'': multikey index
+ can''t use hints'
+...
+lua_code = [[function(tuple) return {tuple[1] + tuple[2]} end]]
+---
+...
+box.schema.func.create('s', {body = lua_code, is_deterministic = true, is_sandboxed = true})
+---
+...
+s:create_index('func', {hint = true, func = box.func.s.id, parts = {{1, 'unsigned'}}})
+---
+- error: 'Can''t create or modify index ''func'' in space ''test'': functional index
+ can''t use hints'
+...
+s:drop()
+---
+...
+s = box.schema.space.create('test', {engine = 'vinyl'})
+---
+...
+s:create_index('test', {type = 'tree', hint = true} )
+---
+- error: 'Can''t create or modify index ''test'' in space ''test'': hint is only reasonable
+ with memtx tree index'
+...
+s:drop()
+---
+...
+-- numeric hints
+s1 = box.schema.space.create('test1')
+---
+...
+s1:create_index('test', {type = 'tree', hint = false}).hint
+---
+- false
+...
+s2 = box.schema.space.create('test2')
+---
+...
+s2:create_index('test', {type = 'tree'}).hint
+---
+- true
+...
+N = 2000
+---
+...
+box.begin() for i = 1,N do local v = math.random(10 * N) s1:replace{v} s2:replace{v} end box.commit()
+---
+...
+s1:count() == s2:count()
+---
+- true
+...
+good = true r1 = s1:select{} r2 = s2:select{}
+---
+...
+if #r1 ~= #r2 then good = false end
+---
+...
+for k,v in pairs(r1) do if r2[k][1] ~= v[1] then good = false end end
+---
+...
+good
+---
+- true
+...
+r1 = nil r2 = nil
+---
+...
+function diff(t1, t2) if t1 then return t1[1] ~= t2[1] else return t2 ~= nil end end
+---
+...
+for i = 1, N * 10 do if diff(s1:get{i}, s2:get{i}) then good = false end end
+---
+...
+good
+---
+- true
+...
+s1.index[0]:bsize() < s2.index[0]:bsize()
+---
+- true
+...
+s1.index[0]:alter{hint=true}
+---
+...
+s1.index[0]:bsize() == s2.index[0]:bsize()
+---
+- true
+...
+s2.index[0]:alter{hint=false}
+---
+...
+s1.index[0]:bsize() > s2.index[0]:bsize()
+---
+- true
+...
+s1.index[0]:alter{hint=false}
+---
+...
+s1.index[0]:bsize() == s2.index[0]:bsize()
+---
+- true
+...
+s1:drop()
+---
+...
+s2:drop()
+---
+...
+-- string hints
+s1 = box.schema.space.create('test1')
+---
+...
+s1:create_index('test', {type = 'tree', parts = {1, 'str'}, hint = false}).hint
+---
+- false
+...
+s2 = box.schema.space.create('test2')
+---
+...
+s2:create_index('test', {type = 'tree', parts = {1, 'str'}}).hint
+---
+- true
+...
+N = 1000
+---
+...
+box.begin() for i = 1,N do local v = ''..math.random(10 * N) s1:replace{v} s2:replace{v} end box.commit()
+---
+...
+s1:count() == s2:count()
+---
+- true
+...
+good = true r1 = s1:select{} r2 = s2:select{}
+---
+...
+if #r1 ~= #r2 then good = false end
+---
+...
+for k,v in pairs(r1) do if r2[k][1] ~= v[1] then good = false end end
+---
+...
+good
+---
+- true
+...
+r1 = nil r2 = nil
+---
+...
+function diff(t1, t2) if t1 then return t1[1] ~= t2[1] else return t2 ~= nil end end
+---
+...
+for i = 1, N * 10 do v = ''..i if diff(s1:get{v}, s2:get{v}) then good = false end end
+---
+...
+good
+---
+- true
+...
+s1.index[0]:bsize() < s2.index[0]:bsize()
+---
+- true
+...
+s1.index[0]:alter{hint=true}
+---
+...
+s1.index[0]:bsize() == s2.index[0]:bsize()
+---
+- true
+...
+s2.index[0]:alter{hint=false}
+---
+...
+s1.index[0]:bsize() > s2.index[0]:bsize()
+---
+- true
+...
+s1.index[0]:alter{hint=false}
+---
+...
+s1.index[0]:bsize() == s2.index[0]:bsize()
+---
+- true
+...
+s1:drop()
+---
+...
+s2:drop()
+---
+...
+-- string with collation hints
+s1 = box.schema.space.create('test1')
+---
+...
+s1:create_index('test', {type = 'tree', parts = {{1, 'str', collation = 'Unicode'}}, hint = false}).hint
+---
+- false
+...
+s2 = box.schema.space.create('test2')
+---
+...
+s2:create_index('test', {type = 'tree', parts = {{1, 'str', collation = 'Unicode'}}}).hint
+---
+- true
+...
+N = 1000
+---
+...
+syms = {'a', 'B', 'c', 'D', 'ж', 'Ё', '~', '1', '%', '2'}
+---
+...
+syms_size = #syms
+---
+...
+len = 20
+---
+...
+vals = {}
+---
+...
+for i = 1,2*N do str = '' for j=1,len do str = str .. syms[math.random(syms_size)] end vals[i] = str end
+---
+...
+for i = 1,N do local v = vals[i] s1:replace{v} s2:replace{v} end
+---
+...
+s1:count() == s2:count()
+---
+- true
+...
+good = true r1 = s1:select{} r2 = s2:select{}
+---
+...
+if #r1 ~= #r2 then good = false end
+---
+...
+for k,v in pairs(r1) do if r2[k][1] ~= v[1] then good = false end end
+---
+...
+good
+---
+- true
+...
+r1 = nil r2 = nil
+---
+...
+function diff(t1, t2) if t1 then return t1[1] ~= t2[1] else return t2 ~= nil end end
+---
+...
+for i = 1, N * 2 do v = vals[i] if diff(s1:get{v}, s2:get{v}) then good = false end end
+---
+...
+good
+---
+- true
+...
+s1.index[0]:bsize() < s2.index[0]:bsize()
+---
+- true
+...
+s1.index[0]:alter{hint=true}
+---
+...
+s1.index[0]:bsize() == s2.index[0]:bsize()
+---
+- true
+...
+s2.index[0]:alter{hint=false}
+---
+...
+s1.index[0]:bsize() > s2.index[0]:bsize()
+---
+- true
+...
+s1.index[0]:alter{hint=false}
+---
+...
+s1.index[0]:bsize() == s2.index[0]:bsize()
+---
+- true
+...
+s1:drop()
+---
+...
+s2:drop()
+---
+...
diff --git a/test/box/tree_pk.test.lua b/test/box/tree_pk.test.lua
index 1190ab4..2f22ed9 100644
--- a/test/box/tree_pk.test.lua
+++ b/test/box/tree_pk.test.lua
@@ -301,3 +301,118 @@ s:drop()
box.internal.collation.drop('test')
box.internal.collation.drop('test-ci')
+
+-- hints
+s = box.schema.space.create('test')
+s:create_index('test', {type = 'tree', hint = 'true'} )
+s:create_index('test', {type = 'hash', hint = true} )
+s:create_index('test', {type = 'hash'}):alter({hint = true})
+s:create_index('multikey', {hint = true, parts = {{2, 'int', path = '[*]'}}})
+s:create_index('multikey', {parts = {{2, 'int', path = '[*]'}}}):alter({hint = true})
+lua_code = [[function(tuple) return {tuple[1] + tuple[2]} end]]
+box.schema.func.create('s', {body = lua_code, is_deterministic = true, is_sandboxed = true})
+s:create_index('func', {hint = true, func = box.func.s.id, parts = {{1, 'unsigned'}}})
+s:drop()
+
+s = box.schema.space.create('test', {engine = 'vinyl'})
+s:create_index('test', {type = 'tree', hint = true} )
+s:drop()
+
+-- numeric hints
+s1 = box.schema.space.create('test1')
+s1:create_index('test', {type = 'tree', hint = false}).hint
+s2 = box.schema.space.create('test2')
+s2:create_index('test', {type = 'tree'}).hint
+
+N = 2000
+box.begin() for i = 1,N do local v = math.random(10 * N) s1:replace{v} s2:replace{v} end box.commit()
+s1:count() == s2:count()
+
+good = true r1 = s1:select{} r2 = s2:select{}
+if #r1 ~= #r2 then good = false end
+for k,v in pairs(r1) do if r2[k][1] ~= v[1] then good = false end end
+good
+r1 = nil r2 = nil
+
+function diff(t1, t2) if t1 then return t1[1] ~= t2[1] else return t2 ~= nil end end
+for i = 1, N * 10 do if diff(s1:get{i}, s2:get{i}) then good = false end end
+good
+
+s1.index[0]:bsize() < s2.index[0]:bsize()
+s1.index[0]:alter{hint=true}
+s1.index[0]:bsize() == s2.index[0]:bsize()
+s2.index[0]:alter{hint=false}
+s1.index[0]:bsize() > s2.index[0]:bsize()
+s1.index[0]:alter{hint=false}
+s1.index[0]:bsize() == s2.index[0]:bsize()
+
+s1:drop()
+s2:drop()
+
+-- string hints
+s1 = box.schema.space.create('test1')
+s1:create_index('test', {type = 'tree', parts = {1, 'str'}, hint = false}).hint
+s2 = box.schema.space.create('test2')
+s2:create_index('test', {type = 'tree', parts = {1, 'str'}}).hint
+
+N = 1000
+box.begin() for i = 1,N do local v = ''..math.random(10 * N) s1:replace{v} s2:replace{v} end box.commit()
+s1:count() == s2:count()
+
+good = true r1 = s1:select{} r2 = s2:select{}
+if #r1 ~= #r2 then good = false end
+for k,v in pairs(r1) do if r2[k][1] ~= v[1] then good = false end end
+good
+r1 = nil r2 = nil
+
+function diff(t1, t2) if t1 then return t1[1] ~= t2[1] else return t2 ~= nil end end
+for i = 1, N * 10 do v = ''..i if diff(s1:get{v}, s2:get{v}) then good = false end end
+good
+
+s1.index[0]:bsize() < s2.index[0]:bsize()
+s1.index[0]:alter{hint=true}
+s1.index[0]:bsize() == s2.index[0]:bsize()
+s2.index[0]:alter{hint=false}
+s1.index[0]:bsize() > s2.index[0]:bsize()
+s1.index[0]:alter{hint=false}
+s1.index[0]:bsize() == s2.index[0]:bsize()
+
+s1:drop()
+s2:drop()
+
+-- string with collation hints
+s1 = box.schema.space.create('test1')
+s1:create_index('test', {type = 'tree', parts = {{1, 'str', collation = 'Unicode'}}, hint = false}).hint
+s2 = box.schema.space.create('test2')
+s2:create_index('test', {type = 'tree', parts = {{1, 'str', collation = 'Unicode'}}}).hint
+
+N = 1000
+syms = {'a', 'B', 'c', 'D', 'ж', 'Ё', '~', '1', '%', '2'}
+syms_size = #syms
+len = 20
+vals = {}
+for i = 1,2*N do str = '' for j=1,len do str = str .. syms[math.random(syms_size)] end vals[i] = str end
+
+for i = 1,N do local v = vals[i] s1:replace{v} s2:replace{v} end
+s1:count() == s2:count()
+
+good = true r1 = s1:select{} r2 = s2:select{}
+if #r1 ~= #r2 then good = false end
+for k,v in pairs(r1) do if r2[k][1] ~= v[1] then good = false end end
+good
+r1 = nil r2 = nil
+
+function diff(t1, t2) if t1 then return t1[1] ~= t2[1] else return t2 ~= nil end end
+for i = 1, N * 2 do v = vals[i] if diff(s1:get{v}, s2:get{v}) then good = false end end
+good
+
+s1.index[0]:bsize() < s2.index[0]:bsize()
+s1.index[0]:alter{hint=true}
+s1.index[0]:bsize() == s2.index[0]:bsize()
+s2.index[0]:alter{hint=false}
+s1.index[0]:bsize() > s2.index[0]:bsize()
+s1.index[0]:alter{hint=false}
+s1.index[0]:bsize() == s2.index[0]:bsize()
+
+s1:drop()
+s2:drop()
diff --git a/test/box/tree_pk_multipart.result b/test/box/tree_pk_multipart.result
index 93219f6..3f6bb94 100644
--- a/test/box/tree_pk_multipart.result
+++ b/test/box/tree_pk_multipart.result
@@ -542,3 +542,156 @@ space:drop()
space = nil
---
...
+-- hints
+test_run:cmd("setopt delimiter ';'")
+---
+- true
+...
+function equal(res1, res2)
+ if #res1 ~= #res2 then
+ return false
+ end
+ for k,v in pairs(res1) do
+ if res2[k][1] ~= v[1] or res2[k][2] ~= v[2] then
+ return false
+ end
+ end
+ return true
+end
+test_run:cmd("setopt delimiter ''");
+---
+...
+-- num num
+N1 = 100
+---
+...
+t1 = {}
+---
+...
+for i = 1,N1*2 do t1[i] = math.random(10000) * 10000 + i end
+---
+...
+N2 = 5
+---
+...
+t2 = {}
+---
+...
+for i = 1,N2*2 do t2[i] = math.random(1000000) end
+---
+...
+s1 = box.schema.space.create('test1')
+---
+...
+s1:create_index('test', {type = 'tree', parts = {{1, 'num'}, {2, 'num'}}, hint = false}).hint
+---
+- false
+...
+s2 = box.schema.space.create('test2')
+---
+...
+s2:create_index('test', {type = 'tree', parts = {{1, 'num'}, {2, 'num'}}, hint = true}).hint
+---
+- true
+...
+for j = 1,N2 do for i = 1,N1 do s1:replace{t1[i], t2[j]} s2:replace{t1[i], t2[j]} end end
+---
+...
+s1:count() == s2:count()
+---
+- true
+...
+equal(s1:select{}, s2:select{})
+---
+- true
+...
+good = true
+---
+...
+for i = 1,N1*2 do good = good and equal(s1:select{t1[i]}, s2:select{t1[i]}) end
+---
+...
+good
+---
+- true
+...
+for i = 1,N1*2 do for j=1,N2*2 do good = good and equal(s1:select{t1[i], t2[j]}, s2:select{t1[i], t2[j]}) end end
+---
+...
+good
+---
+- true
+...
+s1:drop()
+---
+...
+s2:drop()
+---
+...
+-- str num
+N1 = 100
+---
+...
+t1 = {}
+---
+...
+for i = 1,N1*2 do t1[i] = ''..(math.random(10000) * 10000 + i) end
+---
+...
+N2 = 5
+---
+...
+t2 = {}
+---
+...
+for i = 1,N2*2 do t2[i] = math.random(1000000) end
+---
+...
+s1 = box.schema.space.create('test1')
+---
+...
+s1:create_index('test', {type = 'tree', parts = {{1, 'str'}, {2, 'num'}}, hint = false}).hint
+---
+- false
+...
+s2 = box.schema.space.create('test2')
+---
+...
+s2:create_index('test', {type = 'tree', parts = {{1, 'str'}, {2, 'num'}}, hint = true}).hint
+---
+- true
+...
+for j = 1,N2 do for i = 1,N1 do s1:replace{t1[i], t2[j]} s2:replace{t1[i], t2[j]} end end
+---
+...
+s1:count() == s2:count()
+---
+- true
+...
+equal(s1:select{}, s2:select{})
+---
+- true
+...
+good = true
+---
+...
+for i = 1,N1*2 do good = good and equal(s1:select{t1[i]}, s2:select{t1[i]}) end
+---
+...
+good
+---
+- true
+...
+for i = 1,N1*2 do for j=1,N2*2 do good = good and equal(s1:select{t1[i], t2[j]}, s2:select{t1[i], t2[j]}) end end
+---
+...
+good
+---
+- true
+...
+s1:drop()
+---
+...
+s2:drop()
+---
+...
diff --git a/test/box/tree_pk_multipart.test.lua b/test/box/tree_pk_multipart.test.lua
index 7d3405f..cba3a98 100644
--- a/test/box/tree_pk_multipart.test.lua
+++ b/test/box/tree_pk_multipart.test.lua
@@ -194,3 +194,67 @@ test_run:cmd("setopt delimiter ''");
space:drop()
space = nil
+
+
+-- hints
+test_run:cmd("setopt delimiter ';'")
+function equal(res1, res2)
+ if #res1 ~= #res2 then
+ return false
+ end
+ for k,v in pairs(res1) do
+ if res2[k][1] ~= v[1] or res2[k][2] ~= v[2] then
+ return false
+ end
+ end
+ return true
+end
+test_run:cmd("setopt delimiter ''");
+
+-- num num
+N1 = 100
+t1 = {}
+for i = 1,N1*2 do t1[i] = math.random(10000) * 10000 + i end
+N2 = 5
+t2 = {}
+for i = 1,N2*2 do t2[i] = math.random(1000000) end
+
+s1 = box.schema.space.create('test1')
+s1:create_index('test', {type = 'tree', parts = {{1, 'num'}, {2, 'num'}}, hint = false}).hint
+s2 = box.schema.space.create('test2')
+s2:create_index('test', {type = 'tree', parts = {{1, 'num'}, {2, 'num'}}, hint = true}).hint
+for j = 1,N2 do for i = 1,N1 do s1:replace{t1[i], t2[j]} s2:replace{t1[i], t2[j]} end end
+s1:count() == s2:count()
+equal(s1:select{}, s2:select{})
+good = true
+for i = 1,N1*2 do good = good and equal(s1:select{t1[i]}, s2:select{t1[i]}) end
+good
+for i = 1,N1*2 do for j=1,N2*2 do good = good and equal(s1:select{t1[i], t2[j]}, s2:select{t1[i], t2[j]}) end end
+good
+
+s1:drop()
+s2:drop()
+
+-- str num
+N1 = 100
+t1 = {}
+for i = 1,N1*2 do t1[i] = ''..(math.random(10000) * 10000 + i) end
+N2 = 5
+t2 = {}
+for i = 1,N2*2 do t2[i] = math.random(1000000) end
+
+s1 = box.schema.space.create('test1')
+s1:create_index('test', {type = 'tree', parts = {{1, 'str'}, {2, 'num'}}, hint = false}).hint
+s2 = box.schema.space.create('test2')
+s2:create_index('test', {type = 'tree', parts = {{1, 'str'}, {2, 'num'}}, hint = true}).hint
+for j = 1,N2 do for i = 1,N1 do s1:replace{t1[i], t2[j]} s2:replace{t1[i], t2[j]} end end
+s1:count() == s2:count()
+equal(s1:select{}, s2:select{})
+good = true
+for i = 1,N1*2 do good = good and equal(s1:select{t1[i]}, s2:select{t1[i]}) end
+good
+for i = 1,N1*2 do for j=1,N2*2 do good = good and equal(s1:select{t1[i], t2[j]}, s2:select{t1[i], t2[j]}) end end
+good
+
+s1:drop()
+s2:drop()
--
2.7.4
* Re: [Tarantool-patches] [PATCH v2 0/2] Make tree hint optional
2020-10-21 10:23 [Tarantool-patches] [PATCH v2 0/2] Make tree hint optional Aleksandr Lyapunov
2020-10-21 10:23 ` [Tarantool-patches] [PATCH v2 1/2] memtx: move memtx_tree.c to memtx_tree.cc Aleksandr Lyapunov
2020-10-21 10:23 ` [Tarantool-patches] [PATCH v2 2/2] memtx: make tuple compare hints optional Aleksandr Lyapunov
@ 2020-10-21 15:28 ` Nikita Pettik
2020-10-21 16:38 ` Ilya Kosarev
2 siblings, 1 reply; 7+ messages in thread
From: Nikita Pettik @ 2020-10-21 15:28 UTC (permalink / raw)
To: Aleksandr Lyapunov; +Cc: tarantool-patches
On 21 Oct 13:23, Aleksandr Lyapunov wrote:
> Add an option that disables hints in tree indexes.
>
> https://github.com/tarantool/tarantool/issues/4927
> https://github.com/tarantool/tarantool/tree/alyapunov/gh-4927-optional-hints
LGTM
> v2 changes:
> * int template parameter was replaced by bool.
> * fix compilation error, just a couple of casts added:
>
> @@ -542,7 +542,7 @@ tree_iterator_start(struct iterator *iterator, struct tuple **ret)
> struct txn *txn = in_txn();
> struct space *space = space_by_id(iterator->space_id);
> bool is_rw = txn != NULL;
> - uint32_t mk_index = is_multikey ? res->hint : 0;
> + uint32_t mk_index = is_multikey ? (uint32_t)res->hint : 0;
> *ret = memtx_tx_tuple_clarify(txn, space, *ret, iid, mk_index, is_rw);
> if (*ret == NULL) {
> return iterator->next(iterator, ret);
> @@ -734,13 +734,14 @@ memtx_tree_index_get(struct index *base, const char *key,
> struct txn *txn = in_txn();
> struct space *space = space_by_id(base->def->space_id);
> bool is_rw = txn != NULL;
> - uint32_t mk_index = base->def->key_def->is_multikey ? res->hint : 0;
> + bool is_multikey = base->def->key_def->is_multikey;
> + uint32_t mk_index = is_multikey ? (uint32_t)res->hint : 0;
> *result = memtx_tx_tuple_clarify(txn, space, res->tuple, base->def->iid,
> mk_index, is_rw);
> return 0;
> }
>
>
>
> Aleksandr Lyapunov (1):
> memtx: make tuple compare hints optional
>
> Ilya Kosarev (1):
> memtx: move memtx_tree.c to memtx_tree.cc
>
> src/box/CMakeLists.txt | 2 +-
> src/box/index_def.c | 2 +
> src/box/index_def.h | 6 +
> src/box/lua/schema.lua | 53 ++
> src/box/lua/space.cc | 7 +
> src/box/memtx_engine.c | 2 +
> src/box/memtx_tree.c | 1523 -------------------------------
> src/box/memtx_tree.cc | 1726 +++++++++++++++++++++++++++++++++++
> src/lib/salad/bps_tree.h | 19 +
> test/box/alter.result | 103 ++-
> test/box/alter.test.lua | 34 +
> test/box/errinj.result | 3 +-
> test/box/tree_pk.result | 314 +++++++
> test/box/tree_pk.test.lua | 115 +++
> test/box/tree_pk_multipart.result | 153 ++++
> test/box/tree_pk_multipart.test.lua | 64 ++
> 16 files changed, 2598 insertions(+), 1528 deletions(-)
> delete mode 100644 src/box/memtx_tree.c
> create mode 100644 src/box/memtx_tree.cc
>
> --
> 2.7.4
>
* Re: [Tarantool-patches] [PATCH v2 1/2] memtx: move memtx_tree.c to memtx_tree.cc
2020-10-21 10:23 ` [Tarantool-patches] [PATCH v2 1/2] memtx: move memtx_tree.c to memtx_tree.cc Aleksandr Lyapunov
@ 2020-10-21 16:12 ` Cyrill Gorcunov
0 siblings, 0 replies; 7+ messages in thread
From: Cyrill Gorcunov @ 2020-10-21 16:12 UTC (permalink / raw)
To: Aleksandr Lyapunov; +Cc: tarantool-patches
On Wed, Oct 21, 2020 at 01:23:31PM +0300, Aleksandr Lyapunov wrote:
> From: Ilya Kosarev <i.kosarev@tarantool.org>
>
> This is needed for the further C++ implementation in memtx_tree.cc. To
> keep the file diff readable, the file should not be renamed and reworked
> in a single commit. A few implicit casts that are not valid in C++ were fixed.
>
> Prerequisites: #4927
Guys, please use git's -M option, which detects such renames. The
resulting patch becomes:
---
From: Ilya Kosarev <i.kosarev@tarantool.org>
Date: Wed, 21 Oct 2020 13:23:31 +0300
Subject: [PATCH] memtx: move memtx_tree.c to memtx_tree.cc
This is needed for the further C++ implementation in memtx_tree.cc. To
keep the file diff readable, the file should not be renamed and reworked
in a single commit. A few implicit casts that are not valid in C++ were fixed.
Prerequisites: #4927
---
src/box/CMakeLists.txt | 2 +-
src/box/{memtx_tree.c => memtx_tree.cc} | 19 +++++++++++--------
2 files changed, 12 insertions(+), 9 deletions(-)
rename src/box/{memtx_tree.c => memtx_tree.cc} (98%)
diff --git a/src/box/CMakeLists.txt b/src/box/CMakeLists.txt
index 18d678e3d..ab1ab19d2 100644
--- a/src/box/CMakeLists.txt
+++ b/src/box/CMakeLists.txt
@@ -126,7 +126,7 @@ add_library(box STATIC
index_def.c
iterator_type.c
memtx_hash.c
- memtx_tree.c
+ memtx_tree.cc
memtx_rtree.c
memtx_bitset.c
memtx_tx.c
diff --git a/src/box/memtx_tree.c b/src/box/memtx_tree.cc
similarity index 98%
rename from src/box/memtx_tree.c
rename to src/box/memtx_tree.cc
index 5af482fb3..d3b993bfe 100644
--- a/src/box/memtx_tree.c
+++ b/src/box/memtx_tree.cc
@@ -126,9 +126,9 @@ memtx_tree_cmp_def(struct memtx_tree *tree)
static int
memtx_tree_qcompare(const void* a, const void *b, void *c)
{
- const struct memtx_tree_data *data_a = a;
- const struct memtx_tree_data *data_b = b;
- struct key_def *key_def = c;
+ const struct memtx_tree_data *data_a = (struct memtx_tree_data *)a;
+ const struct memtx_tree_data *data_b = (struct memtx_tree_data *)b;
+ struct key_def *key_def = (struct key_def *)c;
return tuple_compare(data_a->tuple, data_a->hint, data_b->tuple,
data_b->hint, key_def);
}
@@ -852,7 +852,7 @@ func_index_key_dummy_alloc(struct tuple *tuple, const char *key,
{
(void) tuple;
(void) key_sz;
- return (void*) key;
+ return key;
}
/**
@@ -1064,7 +1064,8 @@ memtx_tree_index_create_iterator(struct index *base, enum iterator_type type,
key = NULL;
}
- struct tree_iterator *it = mempool_alloc(&memtx->iterator_pool);
+ struct tree_iterator *it = (struct tree_iterator *)
+ mempool_alloc(&memtx->iterator_pool);
if (it == NULL) {
diag_set(OutOfMemory, sizeof(struct tree_iterator),
"memtx_tree_index", "iterator");
@@ -1098,7 +1099,8 @@ memtx_tree_index_reserve(struct index *base, uint32_t size_hint)
if (size_hint < index->build_array_alloc_size)
return 0;
struct memtx_tree_data *tmp =
- realloc(index->build_array, size_hint * sizeof(*tmp));
+ (struct memtx_tree_data *)
+ realloc(index->build_array, size_hint * sizeof(*tmp));
if (tmp == NULL) {
diag_set(OutOfMemory, size_hint * sizeof(*tmp),
"memtx_tree_index", "reserve");
@@ -1115,7 +1117,8 @@ memtx_tree_index_build_array_append(struct memtx_tree_index *index,
struct tuple *tuple, hint_t hint)
{
if (index->build_array == NULL) {
- index->build_array = malloc(MEMTX_EXTENT_SIZE);
+ index->build_array =
+ (struct memtx_tree_data *)malloc(MEMTX_EXTENT_SIZE);
if (index->build_array == NULL) {
diag_set(OutOfMemory, MEMTX_EXTENT_SIZE,
"memtx_tree_index", "build_next");
@@ -1129,7 +1132,7 @@ memtx_tree_index_build_array_append(struct memtx_tree_index *index,
index->build_array_alloc_size = index->build_array_alloc_size +
DIV_ROUND_UP(index->build_array_alloc_size, 2);
struct memtx_tree_data *tmp =
- realloc(index->build_array,
+ (struct memtx_tree_data *)realloc(index->build_array,
index->build_array_alloc_size * sizeof(*tmp));
if (tmp == NULL) {
diag_set(OutOfMemory, index->build_array_alloc_size *
--
2.26.2
* Re: [Tarantool-patches] [PATCH v2 0/2] Make tree hint optional
2020-10-21 15:28 ` [Tarantool-patches] [PATCH v2 0/2] Make tree hint optional Nikita Pettik
@ 2020-10-21 16:38 ` Ilya Kosarev
0 siblings, 0 replies; 7+ messages in thread
From: Ilya Kosarev @ 2020-10-21 16:38 UTC (permalink / raw)
To: Nikita Pettik; +Cc: tarantool-patches
LGTM
>Wednesday, October 21, 2020, 18:28 +03:00 from Nikita Pettik <korablev@tarantool.org>:
>
>On 21 Oct 13:23, Aleksandr Lyapunov wrote:
>> Add an option that disables hints in tree indexes.
>>
>> https://github.com/tarantool/tarantool/issues/4927
>> https://github.com/tarantool/tarantool/tree/alyapunov/gh-4927-optional-hints
>
>LGTM
>
>> v2 changes:
>> * int template parameter was replaced by bool.
>> * fix compilation error, just a couple of casts added:
>>
>> @@ -542,7 +542,7 @@ tree_iterator_start(struct iterator *iterator, struct tuple **ret)
>> struct txn *txn = in_txn();
>> struct space *space = space_by_id(iterator->space_id);
>> bool is_rw = txn != NULL;
>> - uint32_t mk_index = is_multikey ? res->hint : 0;
>> + uint32_t mk_index = is_multikey ? (uint32_t)res->hint : 0;
>> *ret = memtx_tx_tuple_clarify(txn, space, *ret, iid, mk_index, is_rw);
>> if (*ret == NULL) {
>> return iterator->next(iterator, ret);
>> @@ -734,13 +734,14 @@ memtx_tree_index_get(struct index *base, const char *key,
>> struct txn *txn = in_txn();
>> struct space *space = space_by_id(base->def->space_id);
>> bool is_rw = txn != NULL;
>> - uint32_t mk_index = base->def->key_def->is_multikey ? res->hint : 0;
>> + bool is_multikey = base->def->key_def->is_multikey;
>> + uint32_t mk_index = is_multikey ? (uint32_t)res->hint : 0;
>> *result = memtx_tx_tuple_clarify(txn, space, res->tuple, base->def->iid,
>> mk_index, is_rw);
>> return 0;
>> }
>>
>>
>>
>> Aleksandr Lyapunov (1):
>> memtx: make tuple compare hints optional
>>
>> Ilya Kosarev (1):
>> memtx: move memtx_tree.c to memtx_tree.cc
>>
>> src/box/CMakeLists.txt | 2 +-
>> src/box/index_def.c | 2 +
>> src/box/index_def.h | 6 +
>> src/box/lua/schema.lua | 53 ++
>> src/box/lua/space.cc | 7 +
>> src/box/memtx_engine.c | 2 +
>> src/box/memtx_tree.c | 1523 -------------------------------
>> src/box/memtx_tree.cc | 1726 +++++++++++++++++++++++++++++++++++
>> src/lib/salad/bps_tree.h | 19 +
>> test/box/alter.result | 103 ++-
>> test/box/alter.test.lua | 34 +
>> test/box/errinj.result | 3 +-
>> test/box/tree_pk.result | 314 +++++++
>> test/box/tree_pk.test.lua | 115 +++
>> test/box/tree_pk_multipart.result | 153 ++++
>> test/box/tree_pk_multipart.test.lua | 64 ++
>> 16 files changed, 2598 insertions(+), 1528 deletions(-)
>> delete mode 100644 src/box/memtx_tree.c
>> create mode 100644 src/box/memtx_tree.cc
>>
>> --
>> 2.7.4
>>
--
Ilya Kosarev
* [Tarantool-patches] [PATCH v2 1/2] memtx: move memtx_tree.c to memtx_tree.cc
@ 2020-10-21 16:34 Aleksandr Lyapunov
0 siblings, 0 replies; 7+ messages in thread
From: Aleksandr Lyapunov @ 2020-10-21 16:34 UTC (permalink / raw)
To: tarantool-patches, Nikita Pettik, Ilya Kosarev
From: Ilya Kosarev <i.kosarev@tarantool.org>
This is needed for the further C++ implementation in memtx_tree.cc. To
keep the file diff readable, the file should not be renamed and reworked
in a single commit. A few implicit casts that are not valid in C++ were fixed.
Prerequisites: #4927
---
src/box/CMakeLists.txt | 2 +-
src/box/{memtx_tree.c => memtx_tree.cc} | 19 +++++++++++--------
2 files changed, 12 insertions(+), 9 deletions(-)
rename src/box/{memtx_tree.c => memtx_tree.cc} (98%)
diff --git a/src/box/CMakeLists.txt b/src/box/CMakeLists.txt
index fbcfbe4..df243ac 100644
--- a/src/box/CMakeLists.txt
+++ b/src/box/CMakeLists.txt
@@ -126,7 +126,7 @@ add_library(box STATIC
index_def.c
iterator_type.c
memtx_hash.c
- memtx_tree.c
+ memtx_tree.cc
memtx_rtree.c
memtx_bitset.c
memtx_tx.c
diff --git a/src/box/memtx_tree.c b/src/box/memtx_tree.cc
similarity index 98%
rename from src/box/memtx_tree.c
rename to src/box/memtx_tree.cc
index 5af482f..d3b993b 100644
--- a/src/box/memtx_tree.c
+++ b/src/box/memtx_tree.cc
@@ -126,9 +126,9 @@ memtx_tree_cmp_def(struct memtx_tree *tree)
static int
memtx_tree_qcompare(const void* a, const void *b, void *c)
{
- const struct memtx_tree_data *data_a = a;
- const struct memtx_tree_data *data_b = b;
- struct key_def *key_def = c;
+ const struct memtx_tree_data *data_a = (struct memtx_tree_data *)a;
+ const struct memtx_tree_data *data_b = (struct memtx_tree_data *)b;
+ struct key_def *key_def = (struct key_def *)c;
return tuple_compare(data_a->tuple, data_a->hint, data_b->tuple,
data_b->hint, key_def);
}
@@ -852,7 +852,7 @@ func_index_key_dummy_alloc(struct tuple *tuple, const char *key,
{
(void) tuple;
(void) key_sz;
- return (void*) key;
+ return key;
}
/**
@@ -1064,7 +1064,8 @@ memtx_tree_index_create_iterator(struct index *base, enum iterator_type type,
key = NULL;
}
- struct tree_iterator *it = mempool_alloc(&memtx->iterator_pool);
+ struct tree_iterator *it = (struct tree_iterator *)
+ mempool_alloc(&memtx->iterator_pool);
if (it == NULL) {
diag_set(OutOfMemory, sizeof(struct tree_iterator),
"memtx_tree_index", "iterator");
@@ -1098,7 +1099,8 @@ memtx_tree_index_reserve(struct index *base, uint32_t size_hint)
if (size_hint < index->build_array_alloc_size)
return 0;
struct memtx_tree_data *tmp =
- realloc(index->build_array, size_hint * sizeof(*tmp));
+ (struct memtx_tree_data *)
+ realloc(index->build_array, size_hint * sizeof(*tmp));
if (tmp == NULL) {
diag_set(OutOfMemory, size_hint * sizeof(*tmp),
"memtx_tree_index", "reserve");
@@ -1115,7 +1117,8 @@ memtx_tree_index_build_array_append(struct memtx_tree_index *index,
struct tuple *tuple, hint_t hint)
{
if (index->build_array == NULL) {
- index->build_array = malloc(MEMTX_EXTENT_SIZE);
+ index->build_array =
+ (struct memtx_tree_data *)malloc(MEMTX_EXTENT_SIZE);
if (index->build_array == NULL) {
diag_set(OutOfMemory, MEMTX_EXTENT_SIZE,
"memtx_tree_index", "build_next");
@@ -1129,7 +1132,7 @@ memtx_tree_index_build_array_append(struct memtx_tree_index *index,
index->build_array_alloc_size = index->build_array_alloc_size +
DIV_ROUND_UP(index->build_array_alloc_size, 2);
struct memtx_tree_data *tmp =
- realloc(index->build_array,
+ (struct memtx_tree_data *)realloc(index->build_array,
index->build_array_alloc_size * sizeof(*tmp));
if (tmp == NULL) {
diag_set(OutOfMemory, index->build_array_alloc_size *
--
2.7.4