<HTML><BODY><br><br><br><blockquote style="border-left:1px solid #0857A6; margin:10px; padding:0 0 0 10px;">
Saturday, June 9, 2018, 18:15 +03:00 from Vladislav Shpilevoy <v.shpilevoy@tarantool.org>:<br><br><div id=""><div class="js-helper js-readmsg-msg"><div><div id="style_15285573260000000056_BODY">Thanks for the patch! I have pushed a separate commit with my fixes, but<br>
it does not pass the test. Please, fix it.<br><br>
And use unit.h helpers: ok, isnt, is, fail etc to test something instead<br>
of assertions. Assert is omitted in Release build, and the test will test<br>
nothing.</div></div></div></div></blockquote>Done.<br><blockquote style="border-left:1px solid #0857A6; margin:10px; padding:0 0 0 10px;"><div id=""><div class="js-helper js-readmsg-msg"><div><div id="style_15285573260000000056_BODY"><br><br>
On 09/06/2018 17:40, <a href="mailto:imeevma@tarantool.org">imeevma@tarantool.org</a> wrote:<br>
&gt; Since the tuple reference counter was limited to 65535, it was<br>
&gt; possible to hit that limit in practice. This patch increases the<br>
&gt; reference counter capacity to about 4 billion.<br>
> <br>
> Closes #3224<br>
> ---<br>
> Branch: <a href="https://github.com/tarantool/tarantool/commits/imeevma/gh-3224-tuple-bigrefs" target="_blank">https://github.com/tarantool/tarantool/commits/imeevma/gh-3224-tuple-bigrefs</a><br>
> Issue: <a href="https://github.com/tarantool/tarantool/issues/3224" target="_blank">https://github.com/tarantool/tarantool/issues/3224</a><br>
> <br>
> src/box/box.cc | 20 +++---<br>
> src/box/errcode.h | 2 +-<br>
> src/box/index.cc | 20 +++---<br>
> src/box/lua/tuple.c | 5 +-<br>
> src/box/port.c | 8 +--<br>
> src/box/tuple.c | 143 ++++++++++++++++++++++++++++++++++++++++--<br>
> src/box/tuple.h | 105 ++++++++++++-------------------<br>
> src/box/vinyl.c | 38 +++++------<br>
> test/box/misc.result | 39 ++++++------<br>
> test/box/select.result | 43 +++++++++++--<br>
> test/box/select.test.lua | 21 +++++--<br>
> test/unit/CMakeLists.txt | 3 +<br>
> test/unit/tuple_bigref.c | 117 ++++++++++++++++++++++++++++++++++<br>
> test/unit/tuple_bigref.result | 5 ++<br>
> 14 files changed, 413 insertions(+), 156 deletions(-)<br>
> create mode 100644 test/unit/tuple_bigref.c<br>
> create mode 100644 test/unit/tuple_bigref.result<br>
> <br>
> diff --git a/src/box/box.cc b/src/box/box.cc<br>
> index c728a4c..4257861 100644<br>
> --- a/src/box/box.cc<br>
> +++ b/src/box/box.cc<br>
> @@ -174,20 +174,22 @@ process_rw(struct request *request, struct space *space, struct tuple **result)<br>
> txn_rollback_stmt();<br>
> return -1;<br>
> }<br>
> + if (result == NULL)<br>
> + return txn_commit_stmt(txn, request);<br>
> + *result = tuple;<br>
> + if (tuple == NULL)<br>
> + return txn_commit_stmt(txn, request);<br>
> /*<br>
> * Pin the tuple locally before the commit,<br>
&gt; * otherwise it may go away during the yield when the<br>
&gt; * WAL is written in autocommit mode.<br>
> */<br>
> - TupleRefNil ref(tuple);<br>
> - if (txn_commit_stmt(txn, request) != 0)<br>
> - return -1;<br>
> - if (result != NULL) {<br>
> - if (tuple != NULL && tuple_bless(tuple) == NULL)<br>
> - return -1;<br>
> - *result = tuple;<br>
> - }<br>
> - return 0;<br>
> + tuple_ref(tuple);<br>
> + int rc = txn_commit_stmt(txn, request);<br>
> + if (rc == 0)<br>
> + tuple_bless(tuple);<br>
> + tuple_unref(tuple);<br>
> + return rc;<br>
> }<br>
> <br>
> void<br>
> diff --git a/src/box/errcode.h b/src/box/errcode.h<br>
> index a0759f8..e009524 100644<br>
> --- a/src/box/errcode.h<br>
> +++ b/src/box/errcode.h<br>
> @@ -138,7 +138,7 @@ struct errcode_record {<br>
> /* 83 */_(ER_ROLE_EXISTS, "Role '%s' already exists") \<br>
> /* 84 */_(ER_CREATE_ROLE, "Failed to create role '%s': %s") \<br>
> /* 85 */_(ER_INDEX_EXISTS, "Index '%s' already exists") \<br>
> - /* 86 */_(ER_TUPLE_REF_OVERFLOW, "Tuple reference counter overflow") \<br>
> + /* 86 */_(ER_UNUSED6, "") \<br>
> /* 87 */_(ER_ROLE_LOOP, "Granting role '%s' to role '%s' would create a loop") \<br>
> /* 88 */_(ER_GRANT, "Incorrect grant arguments: %s") \<br>
> /* 89 */_(ER_PRIV_GRANTED, "User '%s' already has %s access on %s '%s'") \<br>
> diff --git a/src/box/index.cc b/src/box/index.cc<br>
> index 3c62ec1..f992bc9 100644<br>
> --- a/src/box/index.cc<br>
> +++ b/src/box/index.cc<br>
> @@ -220,8 +220,8 @@ box_index_random(uint32_t space_id, uint32_t index_id, uint32_t rnd,<br>
> /* No tx management, random() is for approximation anyway. */<br>
> if (index_random(index, rnd, result) != 0)<br>
> return -1;<br>
> - if (*result != NULL && tuple_bless(*result) == NULL)<br>
> - return -1;<br>
> + if (*result != NULL)<br>
> + tuple_bless(*result);<br>
> return 0;<br>
> }<br>
> <br>
> @@ -253,8 +253,8 @@ box_index_get(uint32_t space_id, uint32_t index_id, const char *key,<br>
> txn_commit_ro_stmt(txn);<br>
> /* Count statistics. */<br>
> rmean_collect(rmean_box, IPROTO_SELECT, 1);<br>
> - if (*result != NULL && tuple_bless(*result) == NULL)<br>
> - return -1;<br>
> + if (*result != NULL)<br>
> + tuple_bless(*result);<br>
> return 0;<br>
> }<br>
> <br>
> @@ -285,8 +285,8 @@ box_index_min(uint32_t space_id, uint32_t index_id, const char *key,<br>
> return -1;<br>
> }<br>
> txn_commit_ro_stmt(txn);<br>
> - if (*result != NULL && tuple_bless(*result) == NULL)<br>
> - return -1;<br>
> + if (*result != NULL)<br>
> + tuple_bless(*result);<br>
> return 0;<br>
> }<br>
> <br>
> @@ -317,8 +317,8 @@ box_index_max(uint32_t space_id, uint32_t index_id, const char *key,<br>
> return -1;<br>
> }<br>
> txn_commit_ro_stmt(txn);<br>
> - if (*result != NULL && tuple_bless(*result) == NULL)<br>
> - return -1;<br>
> + if (*result != NULL)<br>
> + tuple_bless(*result);<br>
> return 0;<br>
> }<br>
> <br>
> @@ -397,8 +397,8 @@ box_iterator_next(box_iterator_t *itr, box_tuple_t **result)<br>
> assert(result != NULL);<br>
> if (iterator_next(itr, result) != 0)<br>
> return -1;<br>
> - if (*result != NULL && tuple_bless(*result) == NULL)<br>
> - return -1;<br>
> + if (*result != NULL)<br>
> + tuple_bless(*result);<br>
> return 0;<br>
> }<br>
> <br>
> diff --git a/src/box/lua/tuple.c b/src/box/lua/tuple.c<br>
&gt; index 8057331..22fe696 100644<br>
> --- a/src/box/lua/tuple.c<br>
> +++ b/src/box/lua/tuple.c<br>
> @@ -496,10 +496,7 @@ luaT_pushtuple(struct lua_State *L, box_tuple_t *tuple)<br>
> luaL_pushcdata(L, CTID_CONST_STRUCT_TUPLE_REF);<br>
> *ptr = tuple;<br>
> /* The order is important - first reference tuple, next set gc */<br>
> - if (box_tuple_ref(tuple) != 0) {<br>
> - luaT_error(L);<br>
> - return;<br>
> - }<br>
> + box_tuple_ref(tuple);<br>
> lua_pushcfunction(L, lbox_tuple_gc);<br>
> luaL_setcdatagc(L, -2);<br>
> }<br>
> diff --git a/src/box/port.c b/src/box/port.c<br>
> index 03f6be7..9b6b858 100644<br>
> --- a/src/box/port.c<br>
> +++ b/src/box/port.c<br>
> @@ -45,8 +45,7 @@ port_tuple_add(struct port *base, struct tuple *tuple)<br>
> struct port_tuple *port = port_tuple(base);<br>
> struct port_tuple_entry *e;<br>
> if (port->size == 0) {<br>
> - if (tuple_ref(tuple) != 0)<br>
> - return -1;<br>
> + tuple_ref(tuple);<br>
> e = &port->first_entry;<br>
> port->first = port->last = e;<br>
> } else {<br>
> @@ -55,10 +54,7 @@ port_tuple_add(struct port *base, struct tuple *tuple)<br>
> diag_set(OutOfMemory, sizeof(*e), "mempool_alloc", "e");<br>
> return -1;<br>
> }<br>
> - if (tuple_ref(tuple) != 0) {<br>
> - mempool_free(&port_tuple_entry_pool, e);<br>
> - return -1;<br>
> - }<br>
> + tuple_ref(tuple);<br>
> port->last->next = e;<br>
> port->last = e;<br>
> }<br>
> diff --git a/src/box/tuple.c b/src/box/tuple.c<br>
> index 014f374..b632183 100644<br>
> --- a/src/box/tuple.c<br>
> +++ b/src/box/tuple.c<br>
> @@ -48,6 +48,28 @@ enum {<br>
> OBJSIZE_MIN = 16,<br>
> };<br>
> <br>
&gt; +/**<br>
&gt; + * Container for big reference counters: an array of counters,<br>
&gt; + * its capacity and the number of non-zero entries. When the<br>
&gt; + * reference counter of a tuple exceeds 32767, the refs field of<br>
&gt; + * that tuple becomes an index into this array and its is_bigref<br>
&gt; + * flag is set. Once the big counter drops back to 32767, the<br>
&gt; + * array slot is released, refs is set to 32767 and is_bigref is<br>
&gt; + * cleared. Hence a big reference counter is either 0 (a free<br>
&gt; + * slot) or greater than 32767.<br>
&gt; + */<br>
> +static struct bigref_list {<br>
> + /** Array of big reference counters. */<br>
> + uint32_t *refs;<br>
> + /** Number of non-zero elements in the array. */<br>
> + uint16_t size;<br>
> + /** Capacity of the array. */<br>
> + uint16_t capacity;<br>
> + /** Index of first free element. */<br>
> + uint16_t vacant_index;<br>
> +} bigref_list;<br>
> +<br>
> static const double ALLOC_FACTOR = 1.05;<br>
> <br>
> /**<br>
> @@ -151,6 +173,13 @@ tuple_validate_raw(struct tuple_format *format, const char *tuple)<br>
> return 0;<br>
> }<br>
> <br>
> +/** Initialize big references container. */<br>
> +static inline void<br>
> +bigref_list_create()<br>
> +{<br>
> + memset(&bigref_list, 0, sizeof(bigref_list));<br>
> +}<br>
> +<br>
> /**<br>
> * Incremented on every snapshot and is used to distinguish tuples<br>
> * which were created after start of a snapshot (these tuples can<br>
> @@ -211,6 +240,8 @@ tuple_init(field_name_hash_f hash)<br>
> <br>
> box_tuple_last = NULL;<br>
> <br>
> + bigref_list_create();<br>
> +<br>
> if (coll_id_cache_init() != 0)<br>
> return -1;<br>
> <br>
> @@ -244,6 +275,107 @@ tuple_arena_create(struct slab_arena *arena, struct quota *quota,<br>
> }<br>
> }<br>
> <br>
> +enum {<br>
> + BIGREF_FACTOR = 2,<br>
> + BIGREF_MAX = UINT32_MAX,<br>
> + BIGREF_MIN_CAPACITY = 16,<br>
> + /**<br>
> + * Only 15 bits are available for bigref list index in<br>
> + * struct tuple.<br>
> + */<br>
> + BIGREF_MAX_CAPACITY = UINT16_MAX >> 1<br>
> +};<br>
> +<br>
> +/** Destroy big references and free memory that was allocated. */<br>
> +static inline void<br>
> +bigref_list_reset()<br>
> +{<br>
> + free(bigref_list.refs);<br>
> + bigref_list_create();<br>
> +}<br>
> +<br>
> +/**<br>
> + * Increase capacity of bigref_list.<br>
> + */<br>
> +static inline void<br>
> +bigref_list_increase_capacity()<br>
> +{<br>
> + assert(bigref_list.capacity == bigref_list.vacant_index);<br>
> + uint32_t *refs = bigref_list.refs;<br>
> + uint16_t capacity = bigref_list.capacity;<br>
> + if (capacity == 0)<br>
> + capacity = BIGREF_MIN_CAPACITY;<br>
> + else if (capacity < BIGREF_MAX_CAPACITY)<br>
> + capacity = MIN(capacity * BIGREF_FACTOR, BIGREF_MAX_CAPACITY);<br>
> + else<br>
> + panic("Too many big references");<br>
> + refs = (uint32_t *) realloc(refs, capacity * sizeof(*refs));<br>
> + if (refs == NULL) {<br>
> + panic("failed to reallocate %zu bytes: Cannot allocate "\<br>
> + "memory.", capacity * sizeof(*refs));<br>
> + }<br>
&gt; + for (uint16_t i = bigref_list.capacity; i < capacity; ++i)<br>
&gt; + refs[i] = i + 1;<br>
> + bigref_list.refs = refs;<br>
> + bigref_list.capacity = capacity;<br>
> +}<br>
> +<br>
&gt; +/**<br>
&gt; + * Return a free index for a new big reference counter,<br>
&gt; + * growing the array if necessary.<br>
&gt; + * @retval Index for the new big reference counter.<br>
&gt; + */<br>
> +static inline uint16_t<br>
> +bigref_list_new_index()<br>
> +{<br>
> + if (bigref_list.size == bigref_list.capacity)<br>
> + bigref_list_increase_capacity();<br>
> + ++bigref_list.size;<br>
> + uint16_t vacant_index = bigref_list.vacant_index;<br>
> + bigref_list.vacant_index = bigref_list.refs[vacant_index];<br>
> + return vacant_index;<br>
> +}<br>
> +<br>
> +void<br>
> +tuple_ref_slow(struct tuple *tuple)<br>
> +{<br>
> + assert(tuple->is_bigref || tuple->refs == TUPLE_REF_MAX);<br>
> + if (! tuple->is_bigref) {<br>
> + tuple->ref_index = bigref_list_new_index();<br>
> + tuple->is_bigref = true;<br>
> + bigref_list.refs[tuple->ref_index] = TUPLE_REF_MAX;<br>
> + } else if (bigref_list.refs[tuple->ref_index] == BIGREF_MAX) {<br>
> + panic("Tuple big reference counter overflow");<br>
> + }<br>
> + bigref_list.refs[tuple->ref_index]++;<br>
> +}<br>
> +<br>
&gt; +/**<br>
&gt; + * Release an index back to the free list. The whole array is<br>
&gt; + * freed once the number of big references drops to zero.<br>
&gt; + */<br>
> +static inline void<br>
> +bigref_list_delete_index(uint16_t index)<br>
> +{<br>
> + bigref_list.refs[index] = bigref_list.vacant_index;<br>
> + bigref_list.vacant_index = index;<br>
> + if (--bigref_list.size == 0)<br>
> + bigref_list_reset();<br>
> +}<br>
> +<br>
> +void<br>
> +tuple_unref_slow(struct tuple *tuple)<br>
> +{<br>
> + assert(tuple->is_bigref &&<br>
> + bigref_list.refs[tuple->ref_index] > TUPLE_REF_MAX);<br>
&gt; + if (--bigref_list.refs[tuple->ref_index] == TUPLE_REF_MAX) {<br>
> + bigref_list_delete_index(tuple->ref_index);<br>
> + tuple->ref_index = TUPLE_REF_MAX;<br>
> + tuple->is_bigref = false;<br>
> + }<br>
> +}<br>
> +<br>
> void<br>
> tuple_arena_destroy(struct slab_arena *arena)<br>
> {<br>
> @@ -265,6 +397,8 @@ tuple_free(void)<br>
> tuple_format_free();<br>
> <br>
> coll_id_cache_destroy();<br>
> +<br>
> + bigref_list_reset();<br>
> }<br>
> <br>
> box_tuple_format_t *<br>
> @@ -288,7 +422,8 @@ int<br>
> box_tuple_ref(box_tuple_t *tuple)<br>
> {<br>
> assert(tuple != NULL);<br>
> - return tuple_ref(tuple);<br>
> + tuple_ref(tuple);<br>
> + return 0;<br>
> }<br>
> <br>
> void<br>
> @@ -357,10 +492,7 @@ box_tuple_iterator(box_tuple_t *tuple)<br>
> "mempool", "new slab");<br>
> return NULL;<br>
> }<br>
> - if (tuple_ref(tuple) != 0) {<br>
> - mempool_free(&tuple_iterator_pool, it);<br>
> - return NULL;<br>
> - }<br>
> + tuple_ref(tuple);<br>
> tuple_rewind(it, tuple);<br>
> return it;<br>
> }<br>
> @@ -451,7 +583,6 @@ box_tuple_new(box_tuple_format_t *format, const char *data, const char *end)<br>
> struct tuple *ret = tuple_new(format, data, end);<br>
> if (ret == NULL)<br>
> return NULL;<br>
> - /* Can't fail on zero refs. */<br>
> return tuple_bless(ret);<br>
> }<br>
> <br>
> diff --git a/src/box/tuple.h b/src/box/tuple.h<br>
> index e2384dd..14dbd40 100644<br>
> --- a/src/box/tuple.h<br>
> +++ b/src/box/tuple.h<br>
> @@ -105,8 +105,7 @@ typedef struct tuple box_tuple_t;<br>
> * tuple will leak.<br>
> *<br>
> * \param tuple a tuple<br>
> - * \retval -1 on error (check box_error_last())<br>
> - * \retval 0 on success<br>
> + * \retval 0 always<br>
> * \sa box_tuple_unref()<br>
> */<br>
> int<br>
> @@ -269,8 +268,7 @@ box_tuple_next(box_tuple_iterator_t *it);<br>
> * Use box_tuple_format_default() to create space-independent tuple.<br>
> * \param data tuple data in MsgPack Array format ([field1, field2, ...]).<br>
> * \param end the end of \a data<br>
> - * \retval NULL on out of memory<br>
> - * \retval tuple otherwise<br>
> + * \retval tuple<br>
> * \pre data, end is valid MsgPack Array<br>
> * \sa \code box.tuple.new(data) \endcode<br>
> */<br>
> @@ -307,9 +305,17 @@ box_tuple_upsert(const box_tuple_t *tuple, const char *expr, const<br>
> */<br>
> struct PACKED tuple<br>
> {<br>
> - /** reference counter */<br>
> - uint16_t refs;<br>
> - /** format identifier */<br>
> + union {<br>
> + /** Reference counter. */<br>
> + uint16_t refs;<br>
> + struct {<br>
> + /** Index of big reference counter. */<br>
> + uint16_t ref_index : 15;<br>
> + /** Big reference flag. */<br>
> + bool is_bigref : 1;<br>
> + };<br>
> + };<br>
> + /** Format identifier. */<br>
> uint16_t format_id;<br>
> /**<br>
> * Length of the MessagePack data in raw part of the<br>
> @@ -774,26 +780,36 @@ tuple_field_uuid(const struct tuple *tuple, int fieldno,<br>
> return 0;<br>
> }<br>
> <br>
> -enum { TUPLE_REF_MAX = UINT16_MAX };<br>
> +enum { TUPLE_REF_MAX = UINT16_MAX >> 1 };<br>
> +<br>
> +/**<br>
> + * Increase tuple big reference counter.<br>
> + * @param tuple Tuple to reference.<br>
> + */<br>
> +void<br>
> +tuple_ref_slow(struct tuple *tuple);<br>
> <br>
> /**<br>
> * Increment tuple reference counter.<br>
> * @param tuple Tuple to reference.<br>
> - * @retval 0 Success.<br>
> - * @retval -1 Too many refs error.<br>
> */<br>
> -static inline int<br>
> +static inline void<br>
> tuple_ref(struct tuple *tuple)<br>
> {<br>
> - if (tuple->refs + 1 > TUPLE_REF_MAX) {<br>
> - diag_set(ClientError, ER_TUPLE_REF_OVERFLOW);<br>
> - return -1;<br>
> - }<br>
> - tuple->refs++;<br>
> - return 0;<br>
> + if (unlikely(tuple->refs >= TUPLE_REF_MAX))<br>
> + tuple_ref_slow(tuple);<br>
> + else<br>
> + tuple->refs++;<br>
> }<br>
> <br>
> /**<br>
&gt; + * Decrease tuple big reference counter.<br>
&gt; + * @param tuple Tuple to unreference.<br>
> + */<br>
> +void<br>
> +tuple_unref_slow(struct tuple *tuple);<br>
> +<br>
> +/**<br>
> * Decrement tuple reference counter. If it has reached zero, free the tuple.<br>
> *<br>
> * @pre tuple->refs + count >= 0<br>
> @@ -802,10 +818,9 @@ static inline void<br>
> tuple_unref(struct tuple *tuple)<br>
> {<br>
> assert(tuple->refs - 1 >= 0);<br>
> -<br>
> - tuple->refs--;<br>
> -<br>
> - if (tuple->refs == 0)<br>
> + if (unlikely(tuple->is_bigref))<br>
> + tuple_unref_slow(tuple);<br>
> + else if (--tuple->refs == 0)<br>
> tuple_delete(tuple);<br>
> }<br>
> <br>
> @@ -813,25 +828,18 @@ extern struct tuple *box_tuple_last;<br>
> <br>
> /**<br>
> * Convert internal `struct tuple` to public `box_tuple_t`.<br>
> - * \retval tuple on success<br>
> - * \retval NULL on error, check diag<br>
> + * \retval tuple<br>
> * \post \a tuple ref counted until the next call.<br>
> - * \post tuple_ref() doesn't fail at least once<br>
> * \sa tuple_ref<br>
> */<br>
> static inline box_tuple_t *<br>
> tuple_bless(struct tuple *tuple)<br>
> {<br>
> assert(tuple != NULL);<br>
> - /* Ensure tuple can be referenced at least once after return */<br>
> - if (tuple->refs + 2 > TUPLE_REF_MAX) {<br>
> - diag_set(ClientError, ER_TUPLE_REF_OVERFLOW);<br>
> - return NULL;<br>
> - }<br>
> - tuple->refs++;<br>
> + tuple_ref(tuple);<br>
> /* Remove previous tuple */<br>
> if (likely(box_tuple_last != NULL))<br>
> - tuple_unref(box_tuple_last); /* do not throw */<br>
> + tuple_unref(box_tuple_last);<br>
> /* Remember current tuple */<br>
> box_tuple_last = tuple;<br>
> return tuple;<br>
> @@ -849,41 +857,6 @@ tuple_to_buf(const struct tuple *tuple, char *buf, size_t size);<br>
> #include "tuple_update.h"<br>
> #include "errinj.h"<br>
> <br>
> -/**<br>
> - * \copydoc tuple_ref()<br>
> - * \throws if overflow detected.<br>
> - */<br>
> -static inline void<br>
> -tuple_ref_xc(struct tuple *tuple)<br>
> -{<br>
> - if (tuple_ref(tuple))<br>
> - diag_raise();<br>
> -}<br>
> -<br>
> -/**<br>
> - * \copydoc tuple_bless<br>
> - * \throw ER_TUPLE_REF_OVERFLOW<br>
> - */<br>
> -static inline box_tuple_t *<br>
> -tuple_bless_xc(struct tuple *tuple)<br>
> -{<br>
> - box_tuple_t *blessed = tuple_bless(tuple);<br>
> - if (blessed == NULL)<br>
> - diag_raise();<br>
> - return blessed;<br>
> -}<br>
> -<br>
> -/** Make tuple references exception-friendly in absence of @finally. */<br>
> -struct TupleRefNil {<br>
> - struct tuple *tuple;<br>
> - TupleRefNil (struct tuple *arg) :tuple(arg)<br>
> - { if (tuple) tuple_ref_xc(tuple); }<br>
> - ~TupleRefNil() { if (tuple) tuple_unref(tuple); }<br>
> -<br>
> - TupleRefNil(const TupleRefNil&) = delete;<br>
> - void operator=(const TupleRefNil&) = delete;<br>
> -};<br>
> -<br>
> /* @copydoc tuple_field_with_type() */<br>
> static inline const char *<br>
> tuple_field_with_type_xc(const struct tuple *tuple, uint32_t fieldno,<br>
> diff --git a/src/box/vinyl.c b/src/box/vinyl.c<br>
> index f0d2687..dc0d020 100644<br>
> --- a/src/box/vinyl.c<br>
> +++ b/src/box/vinyl.c<br>
> @@ -3822,7 +3822,6 @@ vinyl_iterator_primary_next(struct iterator *base, struct tuple **ret)<br>
> assert(base->next = vinyl_iterator_primary_next);<br>
> struct vinyl_iterator *it = (struct vinyl_iterator *)base;<br>
> assert(it->lsm->index_id == 0);<br>
> - struct tuple *tuple;<br>
> <br>
> if (it->tx == NULL) {<br>
> diag_set(ClientError, ER_CURSOR_NO_TRANSACTION);<br>
> @@ -3833,18 +3832,15 @@ vinyl_iterator_primary_next(struct iterator *base, struct tuple **ret)<br>
> goto fail;<br>
> }<br>
> <br>
> - if (vy_read_iterator_next(&it->iterator, &tuple) != 0)<br>
> + if (vy_read_iterator_next(&it->iterator, ret) != 0)<br>
> goto fail;<br>
> -<br>
> - if (tuple == NULL) {<br>
> + if (*ret == NULL) {<br>
> /* EOF. Close the iterator immediately. */<br>
> vinyl_iterator_close(it);<br>
> - *ret = NULL;<br>
> - return 0;<br>
> + } else {<br>
> + tuple_bless(*ret);<br>
> }<br>
> - *ret = tuple_bless(tuple);<br>
> - if (*ret != NULL)<br>
> - return 0;<br>
> + return 0;<br>
> fail:<br>
> vinyl_iterator_close(it);<br>
> return -1;<br>
> @@ -3890,11 +3886,10 @@ next:<br>
> * Note, there's no need in vy_tx_track() as the<br>
> * tuple is already tracked in the secondary index.<br>
> */<br>
> - struct tuple *full_tuple;<br>
> if (vy_point_lookup(it->lsm->pk, it->tx, vy_tx_read_view(it->tx),<br>
> - tuple, &full_tuple) != 0)<br>
> + tuple, ret) != 0)<br>
> goto fail;<br>
> - if (full_tuple == NULL) {<br>
> + if (*ret == NULL) {<br>
> /*<br>
> * All indexes of a space must be consistent, i.e.<br>
> * if a tuple is present in one index, it must be<br>
> @@ -3908,10 +3903,9 @@ next:<br>
> vy_lsm_name(it->lsm), vy_stmt_str(tuple));<br>
> goto next;<br>
> }<br>
> - *ret = tuple_bless(full_tuple);<br>
> - tuple_unref(full_tuple);<br>
> - if (*ret != NULL)<br>
> - return 0;<br>
> + tuple_bless(*ret);<br>
> + tuple_unref(*ret);<br>
> + return 0;<br>
> fail:<br>
> vinyl_iterator_close(it);<br>
> return -1;<br>
> @@ -3997,16 +3991,12 @@ vinyl_index_get(struct index *index, const char *key,<br>
> const struct vy_read_view **rv = (tx != NULL ? vy_tx_read_view(tx) :<br>
> &env->xm->p_global_read_view);<br>
> <br>
> - struct tuple *tuple;<br>
> - if (vy_lsm_full_by_key(lsm, tx, rv, key, part_count, &tuple) != 0)<br>
> + if (vy_lsm_full_by_key(lsm, tx, rv, key, part_count, ret) != 0)<br>
> return -1;<br>
> -<br>
> - if (tuple != NULL) {<br>
> - *ret = tuple_bless(tuple);<br>
> - tuple_unref(tuple);<br>
> - return *ret == NULL ? -1 : 0;<br>
> + if (*ret != NULL) {<br>
> + tuple_bless(*ret);<br>
> + tuple_unref(*ret);<br>
> }<br>
> - *ret = NULL;<br>
> return 0;<br>
> }<br>
> <br>
> diff --git a/test/box/misc.result b/test/box/misc.result<br>
> index 8f94f55..f7703ba 100644<br>
> --- a/test/box/misc.result<br>
> +++ b/test/box/misc.result<br>
> @@ -332,7 +332,6 @@ t;<br>
> - 'box.error.UNKNOWN_UPDATE_OP : 28'<br>
> - 'box.error.WRONG_COLLATION_OPTIONS : 151'<br>
> - 'box.error.CURSOR_NO_TRANSACTION : 80'<br>
> - - 'box.error.TUPLE_REF_OVERFLOW : 86'<br>
> - 'box.error.ALTER_SEQUENCE : 143'<br>
> - 'box.error.INVALID_XLOG_NAME : 75'<br>
> - 'box.error.SAVEPOINT_EMPTY_TX : 60'<br>
> @@ -360,7 +359,7 @@ t;<br>
> - 'box.error.VINYL_MAX_TUPLE_SIZE : 139'<br>
> - 'box.error.LOAD_FUNCTION : 99'<br>
> - 'box.error.INVALID_XLOG : 74'<br>
> - - 'box.error.PRIV_NOT_GRANTED : 91'<br>
> + - 'box.error.READ_VIEW_ABORTED : 130'<br>
> - 'box.error.TRANSACTION_CONFLICT : 97'<br>
> - 'box.error.GUEST_USER_PASSWORD : 96'<br>
> - 'box.error.PROC_C : 102'<br>
> @@ -405,7 +404,7 @@ t;<br>
> - 'box.error.injection : table: <address><br>
> - 'box.error.NULLABLE_MISMATCH : 153'<br>
> - 'box.error.LAST_DROP : 15'<br>
> - - 'box.error.NO_SUCH_ROLE : 82'<br>
> + - 'box.error.TUPLE_FORMAT_LIMIT : 16'<br>
> - 'box.error.DECOMPRESSION : 124'<br>
> - 'box.error.CREATE_SEQUENCE : 142'<br>
> - 'box.error.CREATE_USER : 43'<br>
> @@ -414,66 +413,66 @@ t;<br>
> - 'box.error.SEQUENCE_OVERFLOW : 147'<br>
> - 'box.error.SYSTEM : 115'<br>
> - 'box.error.KEY_PART_IS_TOO_LONG : 118'<br>
> - - 'box.error.TUPLE_FORMAT_LIMIT : 16'<br>
> - - 'box.error.BEFORE_REPLACE_RET : 53'<br>
> + - 'box.error.INJECTION : 8'<br>
> + - 'box.error.INVALID_MSGPACK : 20'<br>
> - 'box.error.NO_SUCH_SAVEPOINT : 61'<br>
> - 'box.error.TRUNCATE_SYSTEM_SPACE : 137'<br>
> - 'box.error.VY_QUOTA_TIMEOUT : 135'<br>
> - 'box.error.WRONG_INDEX_OPTIONS : 108'<br>
> - 'box.error.INVALID_VYLOG_FILE : 133'<br>
> - 'box.error.INDEX_FIELD_COUNT_LIMIT : 127'<br>
> - - 'box.error.READ_VIEW_ABORTED : 130'<br>
> + - 'box.error.PRIV_NOT_GRANTED : 91'<br>
> - 'box.error.USER_MAX : 56'<br>
> - - 'box.error.PROTOCOL : 104'<br>
> + - 'box.error.BEFORE_REPLACE_RET : 53'<br>
> - 'box.error.TUPLE_NOT_ARRAY : 22'<br>
> - 'box.error.KEY_PART_COUNT : 31'<br>
> - 'box.error.ALTER_SPACE : 12'<br>
> - 'box.error.ACTIVE_TRANSACTION : 79'<br>
> - 'box.error.EXACT_FIELD_COUNT : 38'<br>
> - 'box.error.DROP_SEQUENCE : 144'<br>
> - - 'box.error.INVALID_MSGPACK : 20'<br>
> - 'box.error.MORE_THAN_ONE_TUPLE : 41'<br>
> - - 'box.error.RTREE_RECT : 101'<br>
> - - 'box.error.SUB_STMT_MAX : 121'<br>
> + - 'box.error.UPSERT_UNIQUE_SECONDARY_KEY : 105'<br>
> - 'box.error.UNKNOWN_REQUEST_TYPE : 48'<br>
> - - 'box.error.SPACE_EXISTS : 10'<br>
> + - 'box.error.SUB_STMT_MAX : 121'<br>
> - 'box.error.PROC_LUA : 32'<br>
> + - 'box.error.SPACE_EXISTS : 10'<br>
> - 'box.error.ROLE_NOT_GRANTED : 92'<br>
> + - 'box.error.UNSUPPORTED : 5'<br>
> - 'box.error.NO_SUCH_SPACE : 36'<br>
> - 'box.error.WRONG_INDEX_PARTS : 107'<br>
> - - 'box.error.DROP_SPACE : 11'<br>
> - 'box.error.MIN_FIELD_COUNT : 39'<br>
> - 'box.error.REPLICASET_UUID_MISMATCH : 63'<br>
> - 'box.error.UPDATE_FIELD : 29'<br>
> + - 'box.error.INDEX_EXISTS : 85'<br>
> - 'box.error.COMPRESSION : 119'<br>
> - 'box.error.INVALID_ORDER : 68'<br>
> - - 'box.error.INDEX_EXISTS : 85'<br>
> - 'box.error.SPLICE : 25'<br>
> - 'box.error.UNKNOWN : 0'<br>
> + - 'box.error.IDENTIFIER : 70'<br>
> - 'box.error.DROP_PRIMARY_KEY : 17'<br>
> - 'box.error.NULLABLE_PRIMARY : 152'<br>
> - 'box.error.NO_SUCH_SEQUENCE : 145'<br>
> - 'box.error.RELOAD_CFG : 58'<br>
> - 'box.error.INVALID_UUID : 64'<br>
> - - 'box.error.INJECTION : 8'<br>
> + - 'box.error.DROP_SPACE : 11'<br>
> - 'box.error.TIMEOUT : 78'<br>
> - - 'box.error.IDENTIFIER : 70'<br>
> - 'box.error.ITERATOR_TYPE : 72'<br>
> - 'box.error.REPLICA_MAX : 73'<br>
> + - 'box.error.NO_SUCH_ROLE : 82'<br>
> - 'box.error.MISSING_REQUEST_FIELD : 69'<br>
> - 'box.error.MISSING_SNAPSHOT : 93'<br>
> - 'box.error.WRONG_SPACE_OPTIONS : 111'<br>
> - 'box.error.READONLY : 7'<br>
> - - 'box.error.UNSUPPORTED : 5'<br>
> - 'box.error.UPDATE_INTEGER_OVERFLOW : 95'<br>
> + - 'box.error.RTREE_RECT : 101'<br>
> - 'box.error.NO_CONNECTION : 77'<br>
> - 'box.error.INVALID_XLOG_ORDER : 76'<br>
> - - 'box.error.UPSERT_UNIQUE_SECONDARY_KEY : 105'<br>
> - - 'box.error.ROLLBACK_IN_SUB_STMT : 123'<br>
> - 'box.error.WRONG_SCHEMA_VERSION : 109'<br>
> - - 'box.error.UNSUPPORTED_INDEX_FEATURE : 112'<br>
> - - 'box.error.INDEX_PART_TYPE_MISMATCH : 24'<br>
> + - 'box.error.ROLLBACK_IN_SUB_STMT : 123'<br>
> + - 'box.error.PROTOCOL : 104'<br>
> - 'box.error.INVALID_XLOG_TYPE : 125'<br>
> + - 'box.error.INDEX_PART_TYPE_MISMATCH : 24'<br>
> + - 'box.error.UNSUPPORTED_INDEX_FEATURE : 112'<br>
> ...<br>
> test_run:cmd("setopt delimiter ''");<br>
> ---<br>
> diff --git a/test/box/select.result b/test/box/select.result<br>
> index 4aed706..b3ee6cd 100644<br>
> --- a/test/box/select.result<br>
> +++ b/test/box/select.result<br>
> @@ -619,31 +619,62 @@ collectgarbage('collect')<br>
> ---<br>
> - 0<br>
> ...<br>
> +-- gh-3224 resurrect tuple bigrefs<br>
> +collectgarbage('stop')<br>
> +---<br>
> +- 0<br>
> +...<br>
> s = box.schema.space.create('select', { temporary = true })<br>
> ---<br>
> ...<br>
> index = s:create_index('primary', { type = 'tree' })<br>
> ---<br>
> ...<br>
> -a = s:insert{0}<br>
> +_ = s:insert{0}<br>
> +---<br>
> +...<br>
> +_ = s:insert{1}<br>
> +---<br>
> +...<br>
> +_ = s:insert{2}<br>
> +---<br>
> +...<br>
> +_ = s:insert{3}<br>
> +---<br>
> +...<br>
> +lots_of_links = setmetatable({}, {__mode = 'v'})<br>
> ---<br>
> ...<br>
> -lots_of_links = {}<br>
> +i = 0<br>
> +---<br>
> +...<br>
> +while (i < 33000) do table.insert(lots_of_links, s:get{0}) i = i + 1 end<br>
> +---<br>
> +...<br>
> +while (i < 66000) do table.insert(lots_of_links, s:get{1}) i = i + 1 end<br>
> +---<br>
> +...<br>
> +while (i < 100000) do table.insert(lots_of_links, s:get{2}) i = i + 1 end<br>
> ---<br>
> ...<br>
> ref_count = 0<br>
> ---<br>
> ...<br>
> -while (true) do table.insert(lots_of_links, s:get{0}) ref_count = ref_count + 1 end<br>
> +for k, v in pairs(lots_of_links) do ref_count = ref_count + 1 end<br>
> ---<br>
> -- error: Tuple reference counter overflow<br>
> ...<br>
> ref_count<br>
> ---<br>
> -- 65531<br>
> +- 100000<br>
> ...<br>
> -lots_of_links = {}<br>
> +-- check that tuples are deleted after gc is activated<br>
> +collectgarbage('collect')<br>
> ---<br>
> +- 0<br>
> +...<br>
> +lots_of_links<br>
> +---<br>
> +- []<br>
> ...<br>
> s:drop()<br>
> ---<br>
> diff --git a/test/box/select.test.lua b/test/box/select.test.lua<br>
> index 54c2ecc..3400bda 100644<br>
> --- a/test/box/select.test.lua<br>
> +++ b/test/box/select.test.lua<br>
> @@ -124,12 +124,25 @@ test.random(s.index[0], 48)<br>
> s:drop()<br>
> <br>
> collectgarbage('collect')<br>
> +<br>
> +-- gh-3224 resurrect tuple bigrefs<br>
> +<br>
> +collectgarbage('stop')<br>
> s = box.schema.space.create('select', { temporary = true })<br>
> index = s:create_index('primary', { type = 'tree' })<br>
> -a = s:insert{0}<br>
> -lots_of_links = {}<br>
> +_ = s:insert{0}<br>
> +_ = s:insert{1}<br>
> +_ = s:insert{2}<br>
> +_ = s:insert{3}<br>
> +lots_of_links = setmetatable({}, {__mode = 'v'})<br>
> +i = 0<br>
> +while (i < 33000) do table.insert(lots_of_links, s:get{0}) i = i + 1 end<br>
> +while (i < 66000) do table.insert(lots_of_links, s:get{1}) i = i + 1 end<br>
> +while (i < 100000) do table.insert(lots_of_links, s:get{2}) i = i + 1 end<br>
> ref_count = 0<br>
> -while (true) do table.insert(lots_of_links, s:get{0}) ref_count = ref_count + 1 end<br>
> +for k, v in pairs(lots_of_links) do ref_count = ref_count + 1 end<br>
> ref_count<br>
> -lots_of_links = {}<br>
> +-- check that tuples are deleted after gc is activated<br>
> +collectgarbage('collect')<br>
> +lots_of_links<br>
> s:drop()<br>
> diff --git a/test/unit/CMakeLists.txt b/test/unit/CMakeLists.txt<br>
> index dbc02cd..aef5316 100644<br>
> --- a/test/unit/CMakeLists.txt<br>
> +++ b/test/unit/CMakeLists.txt<br>
> @@ -192,3 +192,6 @@ target_link_libraries(vy_cache.test ${ITERATOR_TEST_LIBS})<br>
> <br>
> add_executable(coll.test coll.cpp)<br>
> target_link_libraries(coll.test core unit ${ICU_LIBRARIES} misc)<br>
> +<br>
> +add_executable(tuple_bigref.test tuple_bigref.c)<br>
> +target_link_libraries(tuple_bigref.test tuple unit)<br>
> diff --git a/test/unit/tuple_bigref.c b/test/unit/tuple_bigref.c<br>
> new file mode 100644<br>
> index 0000000..97095dc<br>
> --- /dev/null<br>
> +++ b/test/unit/tuple_bigref.c<br>
> @@ -0,0 +1,117 @@<br>
> +#include "vy_iterators_helper.h"<br>
> +#include "memory.h"<br>
> +#include "fiber.h"<br>
> +#include "unit.h"<br>
> +#include <msgpuck.h><br>
> +#include "trivia/util.h"<br>
> +<br>
> +enum {<br>
> + BIGREF_DIFF = 10,<br>
> + BIGREF_COUNT = 70003,<br>
> + BIGREF_CAPACITY = 107,<br>
> +};<br>
> +<br>
> +static char tuple_buf[64];<br>
> +static char *tuple_end = tuple_buf;<br>
> +<br>
&gt; +/**<br>
&gt; + * Create a new tuple with refs == 1.<br>
&gt; + */<br>
> +static inline struct tuple *<br>
> +create_tuple()<br>
> +{<br>
> + struct tuple *ret =<br>
> + tuple_new(box_tuple_format_default(), tuple_buf, tuple_end);<br>
> + tuple_ref(ret);<br>
> + return ret;<br>
> +}<br>
> +<br>
&gt; +/**<br>
&gt; + * Overall check of bigrefs. It verifies that:<br>
&gt; + * 1) While refs <= TUPLE_REF_MAX the field holds the plain<br>
&gt; + * reference count and is_bigref is false.<br>
&gt; + * 2) Once refs exceeds TUPLE_REF_MAX, the low 15 bits hold<br>
&gt; + * the bigref index and is_bigref is set.<br>
&gt; + * 3) Each tuple keeps its own counter, all raised above the<br>
&gt; + * bigref threshold.<br>
&gt; + * 4) Bigref indexes are handed out sequentially.<br>
&gt; + * 5) After some tuples are deleted one by one, the remaining<br>
&gt; + * bigrefs stay intact. The test creates BIGREF_CAPACITY<br>
&gt; + * tuples and raises the counter of the i-th tuple to<br>
&gt; + * (BIGREF_COUNT - i).<br>
&gt; + */<br>
> +static int<br>
> +test_bigrefs_1()<br>
> +{<br>
> + struct tuple **tuples = (struct tuple **) malloc(BIGREF_CAPACITY *<br>
> + sizeof(*tuples));<br>
> + for(int i = 0; i < BIGREF_CAPACITY; ++i)<br>
> + tuples[i] = create_tuple();<br>
> + for(int i = 0; i < BIGREF_CAPACITY; ++i) {<br>
> + assert(tuples[i]->refs == 1);<br>
> + for(int j = 1; j < TUPLE_REF_MAX; ++j)<br>
> + tuple_ref(tuples[i]);<br>
> + assert(! tuples[i]->is_bigref);<br>
> + tuple_ref(tuples[i]);<br>
> + assert(tuples[i]->is_bigref);<br>
> + for(int j = TUPLE_REF_MAX + 1; j < BIGREF_COUNT - i; ++j)<br>
> + tuple_ref(tuples[i]);<br>
> + assert(tuples[i]->is_bigref);<br>
> + }<br>
> + for(int i = 0; i < BIGREF_CAPACITY; ++i) {<br>
> + for(int j = 1; j < BIGREF_COUNT - i; ++j)<br>
> + tuple_unref(tuples[i]);<br>
> + assert(tuples[i]->refs == 1);<br>
> + tuple_unref(tuples[i]);<br>
> + }<br>
> + free(tuples);<br>
> + return 0;<br>
> +}<br>
> +<br>
> +/**<br>
> + * This test checks that bigrefs keep working after being<br>
> + * created and destroyed two times.<br>
> + */<br>
> +static int<br>
> +test_bigrefs_2()<br>
> +{<br>
> + struct tuple *tuple = create_tuple();<br>
> + for(int i = 0; i < 2; ++i) {<br>
> + assert(tuple->refs == 1);<br>
> + for(int j = 1; j < BIGREF_COUNT; ++j)<br>
> + tuple_ref(tuple);<br>
> + assert(tuple->is_bigref);<br>
> + for(int j = 1; j < BIGREF_COUNT; ++j)<br>
> + tuple_unref(tuple);<br>
> + assert(tuple->refs == 1);<br>
> + }<br>
> + tuple_unref(tuple);<br>
> + return 0;<br>
> +}<br>
> +<br>
> +int<br>
> +main()<br>
> +{<br>
> + header();<br>
> + plan(2);<br>
> +<br>
> + memory_init();<br>
> + fiber_init(fiber_c_invoke);<br>
> + tuple_init(NULL);<br>
> +<br>
> + tuple_end = mp_encode_array(tuple_end, 1);<br>
> + tuple_end = mp_encode_uint(tuple_end, 2);<br>
> +<br>
> + ok(test_bigrefs_1() == 0, "Overall test passed.");<br>
> + ok(test_bigrefs_2() == 0, "Create/destroy test passed.");<br>
> +<br>
> + tuple_free();<br>
> + fiber_free();<br>
> + memory_free();<br>
> +<br>
> + footer();<br>
> + check_plan();<br>
> +}<br>
> diff --git a/test/unit/tuple_bigref.result b/test/unit/tuple_bigref.result<br>
> new file mode 100644<br>
> index 0000000..ad22f45<br>
> --- /dev/null<br>
> +++ b/test/unit/tuple_bigref.result<br>
> @@ -0,0 +1,5 @@<br>
> + *** main ***<br>
> +1..2<br>
> +ok 1 - Overall test passed.<br>
> +ok 2 - Create/destroy test passed.<br>
> + *** main: done ***<br>
> <br><br></div></div></div></div></blockquote>
commit 1d497cd2488202a6b566e5399160acc8484ff39d<br>Author: Mergen Imeev <imeevma@gmail.com><br>Date: Mon May 28 19:17:51 2018 +0300<br><br> box: create bigrefs for tuples<br> <br> Due to limitation of reference counters for tuple being only<br> 65535 it was possible to reach this limitation. This patch<br> increases capacity of reference counters to 4 billions.<br> <br> Closes #3224<br><br>diff --git a/src/box/box.cc b/src/box/box.cc<br>index c728a4c53..42578613d 100644<br>--- a/src/box/box.cc<br>+++ b/src/box/box.cc<br>@@ -174,20 +174,22 @@ process_rw(struct request *request, struct space *space, struct tuple **result)<br> txn_rollback_stmt();<br> return -1;<br> }<br>+ if (result == NULL)<br>+ return txn_commit_stmt(txn, request);<br>+ *result = tuple;<br>+ if (tuple == NULL)<br>+ return txn_commit_stmt(txn, request);<br> /*<br> * Pin the tuple locally before the commit,<br> * otherwise it may go away during yield in<br> * when WAL is written in autocommit mode.<br> */<br>- TupleRefNil ref(tuple);<br>- if (txn_commit_stmt(txn, request) != 0)<br>- return -1;<br>- if (result != NULL) {<br>- if (tuple != NULL && tuple_bless(tuple) == NULL)<br>- return -1;<br>- *result = tuple;<br>- }<br>- return 0;<br>+ tuple_ref(tuple);<br>+ int rc = txn_commit_stmt(txn, request);<br>+ if (rc == 0)<br>+ tuple_bless(tuple);<br>+ tuple_unref(tuple);<br>+ return rc;<br> }<br> <br> void<br>diff --git a/src/box/errcode.h b/src/box/errcode.h<br>index a0759f8f4..e009524ad 100644<br>--- a/src/box/errcode.h<br>+++ b/src/box/errcode.h<br>@@ -138,7 +138,7 @@ struct errcode_record {<br> /* 83 */_(ER_ROLE_EXISTS, "Role '%s' already exists") \<br> /* 84 */_(ER_CREATE_ROLE, "Failed to create role '%s': %s") \<br> /* 85 */_(ER_INDEX_EXISTS, "Index '%s' already exists") \<br>- /* 86 */_(ER_TUPLE_REF_OVERFLOW, "Tuple reference counter overflow") \<br>+ /* 86 */_(ER_UNUSED6, "") \<br> /* 87 */_(ER_ROLE_LOOP, "Granting role '%s' to role '%s' would create a loop") \<br> /* 88 */_(ER_GRANT, "Incorrect 
grant arguments: %s") \<br> /* 89 */_(ER_PRIV_GRANTED, "User '%s' already has %s access on %s '%s'") \<br>diff --git a/src/box/index.cc b/src/box/index.cc<br>index 3c62ec1cc..f992bc94c 100644<br>--- a/src/box/index.cc<br>+++ b/src/box/index.cc<br>@@ -220,8 +220,8 @@ box_index_random(uint32_t space_id, uint32_t index_id, uint32_t rnd,<br> /* No tx management, random() is for approximation anyway. */<br> if (index_random(index, rnd, result) != 0)<br> return -1;<br>- if (*result != NULL && tuple_bless(*result) == NULL)<br>- return -1;<br>+ if (*result != NULL)<br>+ tuple_bless(*result);<br> return 0;<br> }<br> <br>@@ -253,8 +253,8 @@ box_index_get(uint32_t space_id, uint32_t index_id, const char *key,<br> txn_commit_ro_stmt(txn);<br> /* Count statistics. */<br> rmean_collect(rmean_box, IPROTO_SELECT, 1);<br>- if (*result != NULL && tuple_bless(*result) == NULL)<br>- return -1;<br>+ if (*result != NULL)<br>+ tuple_bless(*result);<br> return 0;<br> }<br> <br>@@ -285,8 +285,8 @@ box_index_min(uint32_t space_id, uint32_t index_id, const char *key,<br> return -1;<br> }<br> txn_commit_ro_stmt(txn);<br>- if (*result != NULL && tuple_bless(*result) == NULL)<br>- return -1;<br>+ if (*result != NULL)<br>+ tuple_bless(*result);<br> return 0;<br> }<br> <br>@@ -317,8 +317,8 @@ box_index_max(uint32_t space_id, uint32_t index_id, const char *key,<br> return -1;<br> }<br> txn_commit_ro_stmt(txn);<br>- if (*result != NULL && tuple_bless(*result) == NULL)<br>- return -1;<br>+ if (*result != NULL)<br>+ tuple_bless(*result);<br> return 0;<br> }<br> <br>@@ -397,8 +397,8 @@ box_iterator_next(box_iterator_t *itr, box_tuple_t **result)<br> assert(result != NULL);<br> if (iterator_next(itr, result) != 0)<br> return -1;<br>- if (*result != NULL && tuple_bless(*result) == NULL)<br>- return -1;<br>+ if (*result != NULL)<br>+ tuple_bless(*result);<br> return 0;<br> }<br> <br>diff --git a/src/box/lua/tuple.c b/src/box/lua/tuple.c<br>index 80573313d..22fe69611 100644<br>--- 
a/src/box/lua/tuple.c<br>+++ b/src/box/lua/tuple.c<br>@@ -496,10 +496,7 @@ luaT_pushtuple(struct lua_State *L, box_tuple_t *tuple)<br> luaL_pushcdata(L, CTID_CONST_STRUCT_TUPLE_REF);<br> *ptr = tuple;<br> /* The order is important - first reference tuple, next set gc */<br>- if (box_tuple_ref(tuple) != 0) {<br>- luaT_error(L);<br>- return;<br>- }<br>+ box_tuple_ref(tuple);<br> lua_pushcfunction(L, lbox_tuple_gc);<br> luaL_setcdatagc(L, -2);<br> }<br>diff --git a/src/box/port.c b/src/box/port.c<br>index 03f6be79d..9b6b8580c 100644<br>--- a/src/box/port.c<br>+++ b/src/box/port.c<br>@@ -45,8 +45,7 @@ port_tuple_add(struct port *base, struct tuple *tuple)<br> struct port_tuple *port = port_tuple(base);<br> struct port_tuple_entry *e;<br> if (port->size == 0) {<br>- if (tuple_ref(tuple) != 0)<br>- return -1;<br>+ tuple_ref(tuple);<br> e = &port->first_entry;<br> port->first = port->last = e;<br> } else {<br>@@ -55,10 +54,7 @@ port_tuple_add(struct port *base, struct tuple *tuple)<br> diag_set(OutOfMemory, sizeof(*e), "mempool_alloc", "e");<br> return -1;<br> }<br>- if (tuple_ref(tuple) != 0) {<br>- mempool_free(&port_tuple_entry_pool, e);<br>- return -1;<br>- }<br>+ tuple_ref(tuple);<br> port->last->next = e;<br> port->last = e;<br> }<br>diff --git a/src/box/tuple.c b/src/box/tuple.c<br>index 014f37406..4eed95f6c 100644<br>--- a/src/box/tuple.c<br>+++ b/src/box/tuple.c<br>@@ -48,6 +48,28 @@ enum {<br> OBJSIZE_MIN = 16,<br> };<br> <br>+/**<br>+ * Container for big reference counters. Contains array of big<br>+ * reference counters, size of this array and number of non-zero<br>+ * big reference counters. When reference counter of tuple becomes<br>+ * more than 32767, field refs of this tuple becomes index of big<br>+ * reference counter in big reference counter array and field<br>+ * is_bigref is set true. The moment big reference becomes equal<br>+ * 32767 it is set to 0, refs of the tuple becomes 32767 and<br>+ * is_bigref becomes false. 
Big reference counter can be equal to<br>+ * 0 or be more than 32767.<br>+ */<br>+static struct bigref_list {<br>+ /** Free-list of big reference counters. */<br>+ uint32_t *refs;<br>+ /** Number of non-zero elements in the array. */<br>+ uint16_t size;<br>+ /** Capacity of the array. */<br>+ uint16_t capacity;<br>+ /** Index of first free element. */<br>+ uint16_t vacant_index;<br>+} bigref_list;<br>+<br> static const double ALLOC_FACTOR = 1.05;<br> <br> /**<br>@@ -151,6 +173,13 @@ tuple_validate_raw(struct tuple_format *format, const char *tuple)<br> return 0;<br> }<br> <br>+/** Initialize big references container. */<br>+static inline void<br>+bigref_list_create()<br>+{<br>+ memset(&bigref_list, 0, sizeof(bigref_list));<br>+}<br>+<br> /**<br> * Incremented on every snapshot and is used to distinguish tuples<br> * which were created after start of a snapshot (these tuples can<br>@@ -211,6 +240,8 @@ tuple_init(field_name_hash_f hash)<br> <br> box_tuple_last = NULL;<br> <br>+ bigref_list_create();<br>+<br> if (coll_id_cache_init() != 0)<br> return -1;<br> <br>@@ -244,6 +275,106 @@ tuple_arena_create(struct slab_arena *arena, struct quota *quota,<br> }<br> }<br> <br>+enum {<br>+ BIGREF_FACTOR = 2,<br>+ BIGREF_MAX = UINT32_MAX,<br>+ BIGREF_MIN_CAPACITY = 16,<br>+ /**<br>+ * Only 15 bits are available for bigref list index in<br>+ * struct tuple.<br>+ */<br>+ BIGREF_MAX_CAPACITY = UINT16_MAX >> 1<br>+};<br>+<br>+/** Destroy big references and free memory that was allocated. 
*/<br>+static inline void<br>+bigref_list_reset()<br>+{<br>+ free(bigref_list.refs);<br>+ bigref_list_create();<br>+}<br>+<br>+/**<br>+ * Increase capacity of bigref_list.<br>+ */<br>+static inline void<br>+bigref_list_increase_capacity()<br>+{<br>+ assert(bigref_list.capacity == bigref_list.vacant_index);<br>+ uint32_t *refs = bigref_list.refs;<br>+ uint16_t capacity = bigref_list.capacity;<br>+ if (capacity == 0)<br>+ capacity = BIGREF_MIN_CAPACITY;<br>+ else if (capacity < BIGREF_MAX_CAPACITY)<br>+ capacity = MIN(capacity * BIGREF_FACTOR, BIGREF_MAX_CAPACITY);<br>+ else<br>+ panic("Too many big references");<br>+ refs = (uint32_t *) realloc(refs, capacity * sizeof(*refs));<br>+ if (refs == NULL) {<br>+ panic("failed to reallocate %zu bytes: Cannot allocate "\<br>+ "memory.", capacity * sizeof(*refs));<br>+ }<br>+ for(uint16_t i = bigref_list.capacity; i < capacity; ++i)<br>+ refs[i] = i + 1;<br>+ bigref_list.refs = refs;<br>+ bigref_list.capacity = capacity;<br>+}<br>+<br>+/**<br>+ * Return index for new big reference counter and allocate memory<br>+ * if needed.<br>+ * @retval index for new big reference counter.<br>+ */<br>+static inline uint16_t<br>+bigref_list_new_index()<br>+{<br>+ if (bigref_list.size == bigref_list.capacity)<br>+ bigref_list_increase_capacity();<br>+ ++bigref_list.size;<br>+ uint16_t vacant_index = bigref_list.vacant_index;<br>+ bigref_list.vacant_index = bigref_list.refs[vacant_index];<br>+ return vacant_index;<br>+}<br>+<br>+void<br>+tuple_ref_slow(struct tuple *tuple)<br>+{<br>+ assert(tuple->is_bigref || tuple->refs == TUPLE_REF_MAX);<br>+ if (! tuple->is_bigref) {<br>+ tuple->ref_index = bigref_list_new_index();<br>+ tuple->is_bigref = true;<br>+ bigref_list.refs[tuple->ref_index] = TUPLE_REF_MAX;<br>+ } else if (bigref_list.refs[tuple->ref_index] == BIGREF_MAX) {<br>+ panic("Tuple big reference counter overflow");<br>+ }<br>+ bigref_list.refs[tuple->ref_index]++;<br>+}<br>+<br>+/**<br>+ * Free index and add it to free-index-list. 
Free memory<br>+ * when size == 0.<br>+ */<br>+static inline void<br>+bigref_list_delete_index(uint16_t index)<br>+{<br>+ bigref_list.refs[index] = bigref_list.vacant_index;<br>+ bigref_list.vacant_index = index;<br>+ if (--bigref_list.size == 0)<br>+ bigref_list_reset();<br>+}<br>+<br>+void<br>+tuple_unref_slow(struct tuple *tuple)<br>+{<br>+ assert(tuple->is_bigref &&<br>+ bigref_list.refs[tuple->ref_index] > TUPLE_REF_MAX);<br>+ if(--bigref_list.refs[tuple->ref_index] == TUPLE_REF_MAX) {<br>+ bigref_list_delete_index(tuple->ref_index);<br>+ tuple->ref_index = TUPLE_REF_MAX;<br>+ tuple->is_bigref = false;<br>+ }<br>+}<br>+<br> void<br> tuple_arena_destroy(struct slab_arena *arena)<br> {<br>@@ -265,6 +396,8 @@ tuple_free(void)<br> tuple_format_free();<br> <br> coll_id_cache_destroy();<br>+<br>+ bigref_list_reset();<br> }<br> <br> box_tuple_format_t *<br>@@ -288,7 +421,8 @@ int<br> box_tuple_ref(box_tuple_t *tuple)<br> {<br> assert(tuple != NULL);<br>- return tuple_ref(tuple);<br>+ tuple_ref(tuple);<br>+ return 0;<br> }<br> <br> void<br>@@ -357,10 +491,7 @@ box_tuple_iterator(box_tuple_t *tuple)<br> "mempool", "new slab");<br> return NULL;<br> }<br>- if (tuple_ref(tuple) != 0) {<br>- mempool_free(&tuple_iterator_pool, it);<br>- return NULL;<br>- }<br>+ tuple_ref(tuple);<br> tuple_rewind(it, tuple);<br> return it;<br> }<br>@@ -451,7 +582,6 @@ box_tuple_new(box_tuple_format_t *format, const char *data, const char *end)<br> struct tuple *ret = tuple_new(format, data, end);<br> if (ret == NULL)<br> return NULL;<br>- /* Can't fail on zero refs. 
*/<br> return tuple_bless(ret);<br> }<br> <br>diff --git a/src/box/tuple.h b/src/box/tuple.h<br>index e2384dda0..14dbd4094 100644<br>--- a/src/box/tuple.h<br>+++ b/src/box/tuple.h<br>@@ -105,8 +105,7 @@ typedef struct tuple box_tuple_t;<br> * tuple will leak.<br> *<br> * \param tuple a tuple<br>- * \retval -1 on error (check box_error_last())<br>- * \retval 0 on success<br>+ * \retval 0 always<br> * \sa box_tuple_unref()<br> */<br> int<br>@@ -269,8 +268,7 @@ box_tuple_next(box_tuple_iterator_t *it);<br> * Use box_tuple_format_default() to create space-independent tuple.<br> * \param data tuple data in MsgPack Array format ([field1, field2, ...]).<br> * \param end the end of \a data<br>- * \retval NULL on out of memory<br>- * \retval tuple otherwise<br>+ * \retval tuple<br> * \pre data, end is valid MsgPack Array<br> * \sa \code box.tuple.new(data) \endcode<br> */<br>@@ -307,9 +305,17 @@ box_tuple_upsert(const box_tuple_t *tuple, const char *expr, const<br> */<br> struct PACKED tuple<br> {<br>- /** reference counter */<br>- uint16_t refs;<br>- /** format identifier */<br>+ union {<br>+ /** Reference counter. */<br>+ uint16_t refs;<br>+ struct {<br>+ /** Index of big reference counter. */<br>+ uint16_t ref_index : 15;<br>+ /** Big reference flag. */<br>+ bool is_bigref : 1;<br>+ };<br>+ };<br>+ /** Format identifier. 
*/<br> uint16_t format_id;<br> /**<br> * Length of the MessagePack data in raw part of the<br>@@ -774,25 +780,35 @@ tuple_field_uuid(const struct tuple *tuple, int fieldno,<br> return 0;<br> }<br> <br>-enum { TUPLE_REF_MAX = UINT16_MAX };<br>+enum { TUPLE_REF_MAX = UINT16_MAX >> 1 };<br>+<br>+/**<br>+ * Increase tuple big reference counter.<br>+ * @param tuple Tuple to reference.<br>+ */<br>+void<br>+tuple_ref_slow(struct tuple *tuple);<br> <br> /**<br> * Increment tuple reference counter.<br> * @param tuple Tuple to reference.<br>- * @retval 0 Success.<br>- * @retval -1 Too many refs error.<br> */<br>-static inline int<br>+static inline void<br> tuple_ref(struct tuple *tuple)<br> {<br>- if (tuple->refs + 1 > TUPLE_REF_MAX) {<br>- diag_set(ClientError, ER_TUPLE_REF_OVERFLOW);<br>- return -1;<br>- }<br>- tuple->refs++;<br>- return 0;<br>+ if (unlikely(tuple->refs >= TUPLE_REF_MAX))<br>+ tuple_ref_slow(tuple);<br>+ else<br>+ tuple->refs++;<br> }<br> <br>+/**<br>+ * Decrease tuple big reference counter.<br>+ * @param tuple Tuple to reference.<br>+ */<br>+void<br>+tuple_unref_slow(struct tuple *tuple);<br>+<br> /**<br> * Decrement tuple reference counter. 
If it has reached zero, free the tuple.<br> *<br>@@ -802,10 +818,9 @@ static inline void<br> tuple_unref(struct tuple *tuple)<br> {<br> assert(tuple->refs - 1 >= 0);<br>-<br>- tuple->refs--;<br>-<br>- if (tuple->refs == 0)<br>+ if (unlikely(tuple->is_bigref))<br>+ tuple_unref_slow(tuple);<br>+ else if (--tuple->refs == 0)<br> tuple_delete(tuple);<br> }<br> <br>@@ -813,25 +828,18 @@ extern struct tuple *box_tuple_last;<br> <br> /**<br> * Convert internal `struct tuple` to public `box_tuple_t`.<br>- * \retval tuple on success<br>- * \retval NULL on error, check diag<br>+ * \retval tuple<br> * \post \a tuple ref counted until the next call.<br>- * \post tuple_ref() doesn't fail at least once<br> * \sa tuple_ref<br> */<br> static inline box_tuple_t *<br> tuple_bless(struct tuple *tuple)<br> {<br> assert(tuple != NULL);<br>- /* Ensure tuple can be referenced at least once after return */<br>- if (tuple->refs + 2 > TUPLE_REF_MAX) {<br>- diag_set(ClientError, ER_TUPLE_REF_OVERFLOW);<br>- return NULL;<br>- }<br>- tuple->refs++;<br>+ tuple_ref(tuple);<br> /* Remove previous tuple */<br> if (likely(box_tuple_last != NULL))<br>- tuple_unref(box_tuple_last); /* do not throw */<br>+ tuple_unref(box_tuple_last);<br> /* Remember current tuple */<br> box_tuple_last = tuple;<br> return tuple;<br>@@ -849,41 +857,6 @@ tuple_to_buf(const struct tuple *tuple, char *buf, size_t size);<br> #include "tuple_update.h"<br> #include "errinj.h"<br> <br>-/**<br>- * \copydoc tuple_ref()<br>- * \throws if overflow detected.<br>- */<br>-static inline void<br>-tuple_ref_xc(struct tuple *tuple)<br>-{<br>- if (tuple_ref(tuple))<br>- diag_raise();<br>-}<br>-<br>-/**<br>- * \copydoc tuple_bless<br>- * \throw ER_TUPLE_REF_OVERFLOW<br>- */<br>-static inline box_tuple_t *<br>-tuple_bless_xc(struct tuple *tuple)<br>-{<br>- box_tuple_t *blessed = tuple_bless(tuple);<br>- if (blessed == NULL)<br>- diag_raise();<br>- return blessed;<br>-}<br>-<br>-/** Make tuple references exception-friendly in absence of 
@finally. */<br>-struct TupleRefNil {<br>- struct tuple *tuple;<br>- TupleRefNil (struct tuple *arg) :tuple(arg)<br>- { if (tuple) tuple_ref_xc(tuple); }<br>- ~TupleRefNil() { if (tuple) tuple_unref(tuple); }<br>-<br>- TupleRefNil(const TupleRefNil&) = delete;<br>- void operator=(const TupleRefNil&) = delete;<br>-};<br>-<br> /* @copydoc tuple_field_with_type() */<br> static inline const char *<br> tuple_field_with_type_xc(const struct tuple *tuple, uint32_t fieldno,<br>diff --git a/src/box/vinyl.c b/src/box/vinyl.c<br>index f0d26874e..dc0d02054 100644<br>--- a/src/box/vinyl.c<br>+++ b/src/box/vinyl.c<br>@@ -3822,7 +3822,6 @@ vinyl_iterator_primary_next(struct iterator *base, struct tuple **ret)<br> assert(base->next = vinyl_iterator_primary_next);<br> struct vinyl_iterator *it = (struct vinyl_iterator *)base;<br> assert(it->lsm->index_id == 0);<br>- struct tuple *tuple;<br> <br> if (it->tx == NULL) {<br> diag_set(ClientError, ER_CURSOR_NO_TRANSACTION);<br>@@ -3833,18 +3832,15 @@ vinyl_iterator_primary_next(struct iterator *base, struct tuple **ret)<br> goto fail;<br> }<br> <br>- if (vy_read_iterator_next(&it->iterator, &tuple) != 0)<br>+ if (vy_read_iterator_next(&it->iterator, ret) != 0)<br> goto fail;<br>-<br>- if (tuple == NULL) {<br>+ if (*ret == NULL) {<br> /* EOF. Close the iterator immediately. 
*/<br> vinyl_iterator_close(it);<br>- *ret = NULL;<br>- return 0;<br>+ } else {<br>+ tuple_bless(*ret);<br> }<br>- *ret = tuple_bless(tuple);<br>- if (*ret != NULL)<br>- return 0;<br>+ return 0;<br> fail:<br> vinyl_iterator_close(it);<br> return -1;<br>@@ -3890,11 +3886,10 @@ next:<br> * Note, there's no need in vy_tx_track() as the<br> * tuple is already tracked in the secondary index.<br> */<br>- struct tuple *full_tuple;<br> if (vy_point_lookup(it->lsm->pk, it->tx, vy_tx_read_view(it->tx),<br>- tuple, &full_tuple) != 0)<br>+ tuple, ret) != 0)<br> goto fail;<br>- if (full_tuple == NULL) {<br>+ if (*ret == NULL) {<br> /*<br> * All indexes of a space must be consistent, i.e.<br> * if a tuple is present in one index, it must be<br>@@ -3908,10 +3903,9 @@ next:<br> vy_lsm_name(it->lsm), vy_stmt_str(tuple));<br> goto next;<br> }<br>- *ret = tuple_bless(full_tuple);<br>- tuple_unref(full_tuple);<br>- if (*ret != NULL)<br>- return 0;<br>+ tuple_bless(*ret);<br>+ tuple_unref(*ret);<br>+ return 0;<br> fail:<br> vinyl_iterator_close(it);<br> return -1;<br>@@ -3997,16 +3991,12 @@ vinyl_index_get(struct index *index, const char *key,<br> const struct vy_read_view **rv = (tx != NULL ? vy_tx_read_view(tx) :<br> &env->xm->p_global_read_view);<br> <br>- struct tuple *tuple;<br>- if (vy_lsm_full_by_key(lsm, tx, rv, key, part_count, &tuple) != 0)<br>+ if (vy_lsm_full_by_key(lsm, tx, rv, key, part_count, ret) != 0)<br> return -1;<br>-<br>- if (tuple != NULL) {<br>- *ret = tuple_bless(tuple);<br>- tuple_unref(tuple);<br>- return *ret == NULL ? 
-1 : 0;<br>+ if (*ret != NULL) {<br>+ tuple_bless(*ret);<br>+ tuple_unref(*ret);<br> }<br>- *ret = NULL;<br> return 0;<br> }<br> <br>diff --git a/test/box/misc.result b/test/box/misc.result<br>index 8f94f5513..f7703ba8d 100644<br>--- a/test/box/misc.result<br>+++ b/test/box/misc.result<br>@@ -332,7 +332,6 @@ t;<br> - 'box.error.UNKNOWN_UPDATE_OP : 28'<br> - 'box.error.WRONG_COLLATION_OPTIONS : 151'<br> - 'box.error.CURSOR_NO_TRANSACTION : 80'<br>- - 'box.error.TUPLE_REF_OVERFLOW : 86'<br> - 'box.error.ALTER_SEQUENCE : 143'<br> - 'box.error.INVALID_XLOG_NAME : 75'<br> - 'box.error.SAVEPOINT_EMPTY_TX : 60'<br>@@ -360,7 +359,7 @@ t;<br> - 'box.error.VINYL_MAX_TUPLE_SIZE : 139'<br> - 'box.error.LOAD_FUNCTION : 99'<br> - 'box.error.INVALID_XLOG : 74'<br>- - 'box.error.PRIV_NOT_GRANTED : 91'<br>+ - 'box.error.READ_VIEW_ABORTED : 130'<br> - 'box.error.TRANSACTION_CONFLICT : 97'<br> - 'box.error.GUEST_USER_PASSWORD : 96'<br> - 'box.error.PROC_C : 102'<br>@@ -405,7 +404,7 @@ t;<br> - 'box.error.injection : table: <address><br> - 'box.error.NULLABLE_MISMATCH : 153'<br> - 'box.error.LAST_DROP : 15'<br>- - 'box.error.NO_SUCH_ROLE : 82'<br>+ - 'box.error.TUPLE_FORMAT_LIMIT : 16'<br> - 'box.error.DECOMPRESSION : 124'<br> - 'box.error.CREATE_SEQUENCE : 142'<br> - 'box.error.CREATE_USER : 43'<br>@@ -414,66 +413,66 @@ t;<br> - 'box.error.SEQUENCE_OVERFLOW : 147'<br> - 'box.error.SYSTEM : 115'<br> - 'box.error.KEY_PART_IS_TOO_LONG : 118'<br>- - 'box.error.TUPLE_FORMAT_LIMIT : 16'<br>- - 'box.error.BEFORE_REPLACE_RET : 53'<br>+ - 'box.error.INJECTION : 8'<br>+ - 'box.error.INVALID_MSGPACK : 20'<br> - 'box.error.NO_SUCH_SAVEPOINT : 61'<br> - 'box.error.TRUNCATE_SYSTEM_SPACE : 137'<br> - 'box.error.VY_QUOTA_TIMEOUT : 135'<br> - 'box.error.WRONG_INDEX_OPTIONS : 108'<br> - 'box.error.INVALID_VYLOG_FILE : 133'<br> - 'box.error.INDEX_FIELD_COUNT_LIMIT : 127'<br>- - 'box.error.READ_VIEW_ABORTED : 130'<br>+ - 'box.error.PRIV_NOT_GRANTED : 91'<br> - 'box.error.USER_MAX : 56'<br>- - 
'box.error.PROTOCOL : 104'<br>+ - 'box.error.BEFORE_REPLACE_RET : 53'<br> - 'box.error.TUPLE_NOT_ARRAY : 22'<br> - 'box.error.KEY_PART_COUNT : 31'<br> - 'box.error.ALTER_SPACE : 12'<br> - 'box.error.ACTIVE_TRANSACTION : 79'<br> - 'box.error.EXACT_FIELD_COUNT : 38'<br> - 'box.error.DROP_SEQUENCE : 144'<br>- - 'box.error.INVALID_MSGPACK : 20'<br> - 'box.error.MORE_THAN_ONE_TUPLE : 41'<br>- - 'box.error.RTREE_RECT : 101'<br>- - 'box.error.SUB_STMT_MAX : 121'<br>+ - 'box.error.UPSERT_UNIQUE_SECONDARY_KEY : 105'<br> - 'box.error.UNKNOWN_REQUEST_TYPE : 48'<br>- - 'box.error.SPACE_EXISTS : 10'<br>+ - 'box.error.SUB_STMT_MAX : 121'<br> - 'box.error.PROC_LUA : 32'<br>+ - 'box.error.SPACE_EXISTS : 10'<br> - 'box.error.ROLE_NOT_GRANTED : 92'<br>+ - 'box.error.UNSUPPORTED : 5'<br> - 'box.error.NO_SUCH_SPACE : 36'<br> - 'box.error.WRONG_INDEX_PARTS : 107'<br>- - 'box.error.DROP_SPACE : 11'<br> - 'box.error.MIN_FIELD_COUNT : 39'<br> - 'box.error.REPLICASET_UUID_MISMATCH : 63'<br> - 'box.error.UPDATE_FIELD : 29'<br>+ - 'box.error.INDEX_EXISTS : 85'<br> - 'box.error.COMPRESSION : 119'<br> - 'box.error.INVALID_ORDER : 68'<br>- - 'box.error.INDEX_EXISTS : 85'<br> - 'box.error.SPLICE : 25'<br> - 'box.error.UNKNOWN : 0'<br>+ - 'box.error.IDENTIFIER : 70'<br> - 'box.error.DROP_PRIMARY_KEY : 17'<br> - 'box.error.NULLABLE_PRIMARY : 152'<br> - 'box.error.NO_SUCH_SEQUENCE : 145'<br> - 'box.error.RELOAD_CFG : 58'<br> - 'box.error.INVALID_UUID : 64'<br>- - 'box.error.INJECTION : 8'<br>+ - 'box.error.DROP_SPACE : 11'<br> - 'box.error.TIMEOUT : 78'<br>- - 'box.error.IDENTIFIER : 70'<br> - 'box.error.ITERATOR_TYPE : 72'<br> - 'box.error.REPLICA_MAX : 73'<br>+ - 'box.error.NO_SUCH_ROLE : 82'<br> - 'box.error.MISSING_REQUEST_FIELD : 69'<br> - 'box.error.MISSING_SNAPSHOT : 93'<br> - 'box.error.WRONG_SPACE_OPTIONS : 111'<br> - 'box.error.READONLY : 7'<br>- - 'box.error.UNSUPPORTED : 5'<br> - 'box.error.UPDATE_INTEGER_OVERFLOW : 95'<br>+ - 'box.error.RTREE_RECT : 101'<br> - 'box.error.NO_CONNECTION 
: 77'<br> - 'box.error.INVALID_XLOG_ORDER : 76'<br>- - 'box.error.UPSERT_UNIQUE_SECONDARY_KEY : 105'<br>- - 'box.error.ROLLBACK_IN_SUB_STMT : 123'<br> - 'box.error.WRONG_SCHEMA_VERSION : 109'<br>- - 'box.error.UNSUPPORTED_INDEX_FEATURE : 112'<br>- - 'box.error.INDEX_PART_TYPE_MISMATCH : 24'<br>+ - 'box.error.ROLLBACK_IN_SUB_STMT : 123'<br>+ - 'box.error.PROTOCOL : 104'<br> - 'box.error.INVALID_XLOG_TYPE : 125'<br>+ - 'box.error.INDEX_PART_TYPE_MISMATCH : 24'<br>+ - 'box.error.UNSUPPORTED_INDEX_FEATURE : 112'<br> ...<br> test_run:cmd("setopt delimiter ''");<br> ---<br>diff --git a/test/box/select.result b/test/box/select.result<br>index 4aed70630..b3ee6cd57 100644<br>--- a/test/box/select.result<br>+++ b/test/box/select.result<br>@@ -619,31 +619,62 @@ collectgarbage('collect')<br> ---<br> - 0<br> ...<br>+-- gh-3224 resurrect tuple bigrefs<br>+collectgarbage('stop')<br>+---<br>+- 0<br>+...<br> s = box.schema.space.create('select', { temporary = true })<br> ---<br> ...<br> index = s:create_index('primary', { type = 'tree' })<br> ---<br> ...<br>-a = s:insert{0}<br>+_ = s:insert{0}<br>+---<br>+...<br>+_ = s:insert{1}<br>+---<br>+...<br>+_ = s:insert{2}<br>+---<br>+...<br>+_ = s:insert{3}<br>+---<br>+...<br>+lots_of_links = setmetatable({}, {__mode = 'v'})<br> ---<br> ...<br>-lots_of_links = {}<br>+i = 0<br>+---<br>+...<br>+while (i < 33000) do table.insert(lots_of_links, s:get{0}) i = i + 1 end<br>+---<br>+...<br>+while (i < 66000) do table.insert(lots_of_links, s:get{1}) i = i + 1 end<br>+---<br>+...<br>+while (i < 100000) do table.insert(lots_of_links, s:get{2}) i = i + 1 end<br> ---<br> ...<br> ref_count = 0<br> ---<br> ...<br>-while (true) do table.insert(lots_of_links, s:get{0}) ref_count = ref_count + 1 end<br>+for k, v in pairs(lots_of_links) do ref_count = ref_count + 1 end<br> ---<br>-- error: Tuple reference counter overflow<br> ...<br> ref_count<br> ---<br>-- 65531<br>+- 100000<br> ...<br>-lots_of_links = {}<br>+-- check that tuples are deleted after gc is 
activated<br>+collectgarbage('collect')<br> ---<br>+- 0<br>+...<br>+lots_of_links<br>+---<br>+- []<br> ...<br> s:drop()<br> ---<br>diff --git a/test/box/select.test.lua b/test/box/select.test.lua<br>index 54c2ecc1c..3400bdafe 100644<br>--- a/test/box/select.test.lua<br>+++ b/test/box/select.test.lua<br>@@ -124,12 +124,25 @@ test.random(s.index[0], 48)<br> s:drop()<br> <br> collectgarbage('collect')<br>+<br>+-- gh-3224 resurrect tuple bigrefs<br>+<br>+collectgarbage('stop')<br> s = box.schema.space.create('select', { temporary = true })<br> index = s:create_index('primary', { type = 'tree' })<br>-a = s:insert{0}<br>-lots_of_links = {}<br>+_ = s:insert{0}<br>+_ = s:insert{1}<br>+_ = s:insert{2}<br>+_ = s:insert{3}<br>+lots_of_links = setmetatable({}, {__mode = 'v'})<br>+i = 0<br>+while (i < 33000) do table.insert(lots_of_links, s:get{0}) i = i + 1 end<br>+while (i < 66000) do table.insert(lots_of_links, s:get{1}) i = i + 1 end<br>+while (i < 100000) do table.insert(lots_of_links, s:get{2}) i = i + 1 end<br> ref_count = 0<br>-while (true) do table.insert(lots_of_links, s:get{0}) ref_count = ref_count + 1 end<br>+for k, v in pairs(lots_of_links) do ref_count = ref_count + 1 end<br> ref_count<br>-lots_of_links = {}<br>+-- check that tuples are deleted after gc is activated<br>+collectgarbage('collect')<br>+lots_of_links<br> s:drop()<br>diff --git a/test/unit/CMakeLists.txt b/test/unit/CMakeLists.txt<br>index dbc02cdf0..aef531600 100644<br>--- a/test/unit/CMakeLists.txt<br>+++ b/test/unit/CMakeLists.txt<br>@@ -192,3 +192,6 @@ target_link_libraries(vy_cache.test ${ITERATOR_TEST_LIBS})<br> <br> add_executable(coll.test coll.cpp)<br> target_link_libraries(coll.test core unit ${ICU_LIBRARIES} misc)<br>+<br>+add_executable(tuple_bigref.test tuple_bigref.c)<br>+target_link_libraries(tuple_bigref.test tuple unit)<br>diff --git a/test/unit/tuple_bigref.c b/test/unit/tuple_bigref.c<br>new file mode 100644<br>index 000000000..154c10ac1<br>--- /dev/null<br>+++ 
b/test/unit/tuple_bigref.c<br>@@ -0,0 +1,179 @@<br>+#include "memory.h"<br>+#include "fiber.h"<br>+#include "tuple.h"<br>+#include "unit.h"<br>+#include <stdio.h><br>+<br>+enum {<br>+ BIGREF_DIFF = 10,<br>+ BIGREF_COUNT = 70003,<br>+ BIGREF_CAPACITY = 107,<br>+};<br>+<br>+static char tuple_buf[64];<br>+static char *tuple_end = tuple_buf;<br>+<br>+/**<br>+ * This function creates new tuple with refs == 1.<br>+ */<br>+static inline struct tuple *<br>+create_tuple()<br>+{<br>+ struct tuple *ret =<br>+ tuple_new(box_tuple_format_default(), tuple_buf, tuple_end);<br>+ tuple_ref(ret);<br>+ return ret;<br>+}<br>+<br>+/**<br>+ * This test performs overall check of bigrefs.<br>+ * What it checks:<br>+ * 1) Till refs <= TUPLE_REF_MAX it shows number of refs<br>+ * of tuple and it isn't a bigref.<br>+ * 2) When refs > TUPLE_REF_MAX first 15 bits of it becomes<br>+ * index of bigref and the last bit becomes true which<br>+ * shows that it is bigref.<br>+ * 3) Each of tuple has its own number of refs, but all<br>+ * these numbers more than it is needed for getting a bigref.<br>+ * 4) Indexes of bigrefs are given sequentially.<br>+ * 5) After some tuples are sequentially deleted all of<br>+ * others bigrefs are fine. In this test BIGREF_CAPACITY<br>+ * tuples created and each of their ref counter increased<br>+ * to (BIGREF_COUNT - index of tuple). 
Tuples are created<br>+ * consistently.<br>+ */<br>+static void<br>+test_bigrefs_1()<br>+{<br>+ printf("Test1: Overall check of bigrefs.\n");<br>+ uint16_t counter = 0;<br>+ struct tuple **tuples = (struct tuple **) malloc(BIGREF_CAPACITY *<br>+ sizeof(*tuples));<br>+ for(int i = 0; i < BIGREF_CAPACITY; ++i)<br>+ tuples[i] = create_tuple();<br>+ for(int i = 0; i < BIGREF_CAPACITY; ++i)<br>+ counter += tuples[i]->refs == 1;<br>+ is(counter, BIGREF_CAPACITY, "All tuples have refs == 1.");<br>+ for(int i = 0; i < BIGREF_CAPACITY; ++i) {<br>+ for(int j = 1; j < TUPLE_REF_MAX; ++j)<br>+ tuple_ref(tuples[i]);<br>+ tuple_ref(tuples[i]);<br>+ for(int j = TUPLE_REF_MAX + 1; j < BIGREF_COUNT - i; ++j)<br>+ tuple_ref(tuples[i]);<br>+ }<br>+ counter = 0;<br>+ for(int i = 0; i < BIGREF_CAPACITY; ++i)<br>+ counter += tuples[i]->is_bigref == true;<br>+ is(counter, BIGREF_CAPACITY, "All tuples have bigrefs.");<br>+ counter = 0;<br>+ for(int i = 0; i < BIGREF_CAPACITY; ++i) {<br>+ for(int j = 1; j < BIGREF_COUNT - i; ++j)<br>+ tuple_unref(tuples[i]);<br>+ counter += tuples[i]->refs == 1;<br>+ tuple_unref(tuples[i]);<br>+ }<br>+ is(counter, BIGREF_CAPACITY, "All tuples were deleted.");<br>+ free(tuples);<br>+}<br>+<br>+/**<br>+ * This test checks that bigrefs works fine after being<br>+ * created and destroyed 2 times.<br>+ */<br>+static void<br>+test_bigrefs_2()<br>+{<br>+ printf("Test 2: Create/destroy test.\n");<br>+ struct tuple *tuple = create_tuple();<br>+ for(int j = 1; j < BIGREF_COUNT; ++j)<br>+ tuple_ref(tuple);<br>+ ok(tuple->refs && tuple->ref_index == 0,<br>+ "Tuple becomes bigref first time with ref_index == 0.");<br>+ for(int j = 1; j < BIGREF_COUNT; ++j)<br>+ tuple_unref(tuple);<br>+ for(int j = 1; j < BIGREF_COUNT; ++j)<br>+ tuple_ref(tuple);<br>+ ok(tuple->refs && tuple->ref_index == 0,<br>+ "Tuple becomes bigref second time with ref_index == 0.");<br>+ for(int j = 1; j < BIGREF_COUNT; ++j)<br>+ tuple_unref(tuple);<br>+ tuple_unref(tuple);<br>+}<br>+<br>+/**<br>+ * 
This test checks that ref_index slots are reused as<br>+ * intended when bigrefs are dropped and re-acquired in<br>+ * non-sequential order.<br>+ */<br>+static void<br>+test_bigrefs_3()<br>+{<br>+ printf("Test3: Non-consistent indexes test.\n");<br>+ uint16_t counter = 0;<br>+ uint16_t max_index = BIGREF_CAPACITY / BIGREF_DIFF;<br>+ struct tuple **tuples = (struct tuple **) malloc(BIGREF_CAPACITY *<br>+ sizeof(*tuples));<br>+ uint16_t *indexes = (uint16_t *) malloc(sizeof(*indexes) *<br>+ (max_index + 1));<br>+ for (int i = 0; i < BIGREF_CAPACITY; ++i)<br>+ tuples[i] = create_tuple();<br>+ for (int i = 0; i < BIGREF_CAPACITY; ++i) {<br>+ for (int j = 1; j < BIGREF_COUNT; ++j)<br>+ tuple_ref(tuples[i]);<br>+ counter += tuples[i]->is_bigref;<br>+ }<br>+ is(counter, BIGREF_CAPACITY, "All tuples have bigrefs.");<br>+ counter = 0;<br>+ for (int i = 0; i < BIGREF_CAPACITY; i += BIGREF_DIFF) {<br>+ indexes[counter] = tuples[i]->ref_index;<br>+ for (int j = 1; j < BIGREF_COUNT; ++j)<br>+ tuple_unref(tuples[i]);<br>+ if (!tuples[i]->is_bigref)<br>+ counter++;<br>+ }<br>+ is(counter, max_index + 1, "%d tuples don't have bigrefs "<br>+ "and all other tuples have", max_index + 1);<br>+ counter = 0;<br>+ for (int i = 0; i < BIGREF_CAPACITY; i += BIGREF_DIFF) {<br>+ bool was_single_ref = tuples[i]->refs == 1;<br>+ for (int j = 1; j < BIGREF_COUNT; ++j)<br>+ tuple_ref(tuples[i]);<br>+ /* Freed ref_index slots must be reused in reverse<br>+ * order of their release. */<br>+ if (was_single_ref && tuples[i]->is_bigref &&<br>+ tuples[i]->ref_index == indexes[max_index - counter])<br>+ ++counter;<br>+ }<br>+ is(counter, max_index + 1, "All tuples have bigrefs and "<br>+ "their indexes are in right order.");<br>+ for (int i = 0; i < BIGREF_CAPACITY; ++i) {<br>+ for (int j = 1; j < BIGREF_COUNT; ++j)<br>+ tuple_unref(tuples[i]);<br>+ tuple_unref(tuples[i]);<br>+ }<br>+ free(indexes);<br>+ free(tuples);<br>+}<br>+<br>+int<br>+main()<br>+{<br>+ header();<br>+ plan(8);<br>+<br>+ memory_init();<br>+ fiber_init(fiber_c_invoke);<br>+ tuple_init(NULL);<br>+<br>+ tuple_end = mp_encode_array(tuple_end, 1);<br>+ tuple_end = mp_encode_uint(tuple_end, 2);<br>+<br>+ test_bigrefs_1();<br>+ test_bigrefs_2();<br>+ test_bigrefs_3();<br>+<br>+ tuple_free();<br>+ fiber_free();<br>+ memory_free();<br>+<br>+ footer();<br>+ check_plan();<br>+}<br>diff --git a/test/unit/tuple_bigref.result b/test/unit/tuple_bigref.result<br>new file mode 100644<br>index 000000000..7e8694e15<br>--- /dev/null<br>+++ b/test/unit/tuple_bigref.result<br>@@ -0,0 +1,14 @@<br>+ *** main ***<br>+1..8<br>+Test1: Overall check of bigrefs.<br>+ok 1 - All tuples have refs == 1.<br>+ok 2 - All tuples have bigrefs.<br>+ok 3 - All tuples were deleted.<br>+Test 2: Create/destroy test.<br>+ok 4 - Tuple becomes bigref first time with ref_index == 0.<br>+ok 5 - Tuple becomes bigref second time with ref_index == 0.<br>+Test3: Non-consistent indexes test.<br>+ok 6 - All tuples have bigrefs.<br>+ok 7 - 11 tuples don't have bigrefs and all other tuples have<br>+ok 8 - All tuples have bigrefs and their indexes are in right order.<br>+ *** main: done ***<br><br><br></BODY></HTML>