* [Tarantool-patches] [PATCH 0/2] reduce performance degradation introduced by JSON path indices @ 2020-11-14 17:28 Serge Petrenko 2020-11-14 17:28 ` [Tarantool-patches] [PATCH 1/2] box: speed up tuple_field_map_create Serge Petrenko ` (2 more replies) 0 siblings, 3 replies; 17+ messages in thread From: Serge Petrenko @ 2020-11-14 17:28 UTC (permalink / raw) To: v.shpilevoy, korablev; +Cc: tarantool-patches The patchset fixes two degradations found by measuring snapshot recovery time for a 1.5G snapshot containing 30M tuples in a memtx space with a simple primary key and one secondary key over 4 integer and one string field. The first degradation manifests itself during snapshot recovery phase (the one with "3.5M rows processed" messages) and is connected to `memtx_tuple_new` slowdown due to unoptimised `tuple_field_map_create`. First patch deals with this degradation and manages to restore almost all performance lost since 1.10. (The patched version is only 11% slower than 1.10, while the current master is 39% slower on this phase). The second degradation appears during next snapshot recovery phase, secondary index building. Here the degradation is rooted in slow tuple field access via tuple_field_raw(). The second patch deals with this issue and manages to restore part of the lost performance. (The patched version is 14% slower than 1.10 while the current master is 27% slower). Serge Petrenko (2): box: speed up tuple_field_map_create box: use tuple_field_raw_by_part where possible src/box/tuple_compare.cc | 37 ++++----------- src/box/tuple_extract_key.cc | 10 +--- src/box/tuple_format.c | 91 ++++++++++++++++++++++++++++++++++++ src/box/tuple_hash.cc | 35 ++++---------- 4 files changed, 111 insertions(+), 62 deletions(-) -- 2.24.3 (Apple Git-128) ^ permalink raw reply [flat|nested] 17+ messages in thread
* [Tarantool-patches] [PATCH 1/2] box: speed up tuple_field_map_create 2020-11-14 17:28 [Tarantool-patches] [PATCH 0/2] reduce performance degradation introduced by JSON path indices Serge Petrenko @ 2020-11-14 17:28 ` Serge Petrenko 2020-11-14 18:00 ` Cyrill Gorcunov 2020-11-20 23:26 ` Vladislav Shpilevoy 2020-11-14 17:28 ` [Tarantool-patches] [PATCH 2/2] box: use tuple_field_raw_by_part where possible Serge Petrenko 2020-11-16 7:50 ` [Tarantool-patches] [PATCH 0/2] reduce performance degradation introduced by JSON path indices Serge Petrenko 2 siblings, 2 replies; 17+ messages in thread From: Serge Petrenko @ 2020-11-14 17:28 UTC (permalink / raw) To: v.shpilevoy, korablev; +Cc: tarantool-patches Since the introduction of JSON path indices tuple_init_field_map, which was quite a simple routine traversing a tuple and storing its field data offsets in the field map, was renamed to tuple_field_map_create and optimised for working with JSON path indices. The main difference is that tuple format fields are now organised in a tree rather than an array, and the tuple itself may have indexed fields, which are not plain array members, but rather members of some sub-array or map. This requires more complex iteration over tuple format fields and additional tuple parsing. All the changes were, however, unneeded for tuple formats not supporting fields indexed by JSON paths. Rework tuple_field_map_create so that it doesn't go through all the unnecessary JSON path-related checks for simple cases and restore most of the lost performance. Below are some benchmark results for the same workload that pointed to the degradation initially. 
Snapshot recovery time on RelWithDebInfo build for a 1.5G snapshot containing a single memtx space with one secondary index over 4 integer and 1 string field: Version | Time (s) | Difference relative to 1.10 ---------------------------|----------|---------------------------- 1.10 (the golden standard) | 28 | -/- 2.x (degradation) | 39 | + 39% 2.x (patched) | 31 | + 11% Profile shows that the main difference is in memtx_tuple_new due to tuple_init_field_map/tuple_field_map_create performance difference. Numbers below show cumulative time spent in tuple_init_field_map (1.10) / tuple_field_map_create (unpatched) / tuple_field_map_create (patched). 2.44 s / 8.61 s / 3.98 s More benchmark results can be seen at #4774 Part of #4774 --- src/box/tuple_format.c | 91 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 91 insertions(+) diff --git a/src/box/tuple_format.c b/src/box/tuple_format.c index 9b817d3cf..f2db521ea 100644 --- a/src/box/tuple_format.c +++ b/src/box/tuple_format.c @@ -852,6 +852,88 @@ tuple_format1_can_store_format2_tuples(struct tuple_format *format1, return true; } +static int +tuple_format_required_fields_validate(struct tuple_format *format, + void *required_fields, + uint32_t required_fields_sz); + +static int +tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, + bool validate, struct field_map_builder *builder) +{ +#define check_field_type(field, pos) { \ + if (validate && \ + !field_mp_type_is_compatible(field->type, pos, \ + tuple_field_is_nullable(field))) { \ + diag_set(ClientError, ER_FIELD_TYPE, tuple_field_path(field), \ + field_type_strs[field->type]); \ + return -1; \ + } \ +} + + struct region *region = &fiber()->gc; + const char *pos = tuple; + uint32_t defined_field_count = mp_decode_array(&pos); + if (validate && format->exact_field_count > 0 && + format->exact_field_count != defined_field_count) { + diag_set(ClientError, ER_EXACT_FIELD_COUNT, + (unsigned) defined_field_count, + (unsigned) 
format->exact_field_count); + return -1; + } + defined_field_count = MIN(defined_field_count, + tuple_format_field_count(format)); + + void *required_fields = NULL; + uint32_t required_fields_sz = bitmap_size(format->total_field_count); + if (validate) { + required_fields = region_alloc(region, required_fields_sz); + memcpy(required_fields, format->required_fields, + required_fields_sz); + } + + if (unlikely(defined_field_count == 0)) + goto end; + + struct json_token token = { + .type = JSON_TOKEN_NUM, + .num = 0, + }; + struct tuple_field *field; + + field = json_tree_lookup_entry(&format->fields, &format->fields.root, + &token, struct tuple_field, token); + /* Check 1st field's type, but don't store offset to it. */ + check_field_type(field, pos); + if (validate) + bit_clear(required_fields, field->id); + mp_next(&pos); + + for (uint32_t i = 1; i < defined_field_count; i++, mp_next(&pos)) { + token.num = i; + field = json_tree_lookup_entry(&format->fields, + &format->fields.root, &token, + struct tuple_field, token); + check_field_type(field, pos); + if (validate) + bit_clear(required_fields, field->id); + if (field->offset_slot != TUPLE_OFFSET_SLOT_NIL && + field_map_builder_set_slot(builder, field->offset_slot, + pos - tuple, MULTIKEY_NONE, + 0, NULL) != 0) { + return -1; + } + } + +#undef check_field_type + +end: + return validate ? + tuple_format_required_fields_validate(format, required_fields, + required_fields_sz) : + 0; +} + /** @sa declaration for details. */ int tuple_field_map_create(struct tuple_format *format, const char *tuple, @@ -864,6 +946,15 @@ tuple_field_map_create(struct tuple_format *format, const char *tuple, if (tuple_format_field_count(format) == 0) return 0; /* Nothing to initialize */ + /* + * In case tuple format doesn't contain fields accessed by JSON paths, + * tuple field traversal may be simplified. 
+ */ + if (format->fields_depth == 1) { + return tuple_field_map_create_plain(format, tuple, validate, + builder); + } + uint32_t field_count; struct tuple_format_iterator it; uint8_t flags = validate ? TUPLE_FORMAT_ITERATOR_VALIDATE : 0; -- 2.24.3 (Apple Git-128)
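The fast path introduced by this patch boils down to one linear pass over the top-level MsgPack array, recording an offset for every field that owns an offset slot. A minimal standalone sketch of that idea, using a toy length-prefixed encoding instead of MsgPack and purely illustrative names (`toy_field_map_create_plain` is not a Tarantool function):

```c
#include <assert.h>
#include <stdint.h>

/* Toy length-prefixed encoding standing in for MsgPack (an assumption,
 * not Tarantool's real format): each field is one length byte followed
 * by that many payload bytes. */
static const uint8_t *
toy_next(const uint8_t *pos)
{
	return pos + 1 + pos[0];
}

/* Plain field-map build, mirroring the patch's fast path: a single
 * linear pass over the top-level fields that records the offset of
 * every field owning an offset slot (slot >= 0). */
static int
toy_field_map_create_plain(const uint8_t *tuple, uint32_t field_count,
			   const int *offset_slot, uint32_t *field_map)
{
	const uint8_t *pos = tuple;
	for (uint32_t i = 0; i < field_count; i++, pos = toy_next(pos)) {
		if (offset_slot[i] >= 0)
			field_map[offset_slot[i]] = (uint32_t)(pos - tuple);
	}
	return 0;
}
```

The real function additionally validates field types and clears bits in the required-fields bitmap as it walks; the sketch keeps only the offset-recording loop, which is the part that replaces the general JSON-tree traversal.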
* Re: [Tarantool-patches] [PATCH 1/2] box: speed up tuple_field_map_create 2020-11-14 17:28 ` [Tarantool-patches] [PATCH 1/2] box: speed up tuple_field_map_create Serge Petrenko @ 2020-11-14 18:00 ` Cyrill Gorcunov 2020-11-16 7:34 ` Serge Petrenko 2020-11-20 23:26 ` Vladislav Shpilevoy 1 sibling, 1 reply; 17+ messages in thread From: Cyrill Gorcunov @ 2020-11-14 18:00 UTC (permalink / raw) To: Serge Petrenko; +Cc: v.shpilevoy, tarantool-patches On Sat, Nov 14, 2020 at 08:28:22PM +0300, Serge Petrenko wrote: ... > +static int > +tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, > + bool validate, struct field_map_builder *builder) > +{ > +#define check_field_type(field, pos) { \ > + if (validate && \ > + !field_mp_type_is_compatible(field->type, pos, \ > + tuple_field_is_nullable(field))) { \ > + diag_set(ClientError, ER_FIELD_TYPE, tuple_field_path(field), \ > + field_type_strs[field->type]); \ > + return -1; \ > + } \ > +} Serge, I'm completely not familiar with the code thus may be simply wrong but @check_field_type test for @validate first, right? ... > + > + field = json_tree_lookup_entry(&format->fields, &format->fields.root, > + &token, struct tuple_field, token); > + /* Check 1st field's type, but don't store offset to it. */ > + check_field_type(field, pos); check_field_type -> if (validate ...) > + if (validate) > + bit_clear(required_fields, field->id); and here we test for if (validate) again. Should not we simply drop if (validate) from check_field_type and call this macro under the caller's if? 
IOW if (validate) { check_field_type(); bit_clear(); } While check_field_type will be something like #define check_field_type(field, pos) \ ({ \ bool nullable = tuple_field_is_nullable(field); \ if (!field_mp_type_is_compatible(field->type, pos, nullable)) { \ diag_set(ClientError, ER_FIELD_TYPE, \ tuple_field_path(field), \ field_type_strs[field->type]); \ return -1; \ } \ }) - if I'm right we may fix it on top (actually since these two ifs are close to each other they won't hurt hw branch predictor even in current form or may be compiler merge these two basic blocks under one "if" flow) > + check_field_type(field, pos); > + if (validate) > + bit_clear(required_fields, field->id); and here too. > + if (field->offset_slot != TUPLE_OFFSET_SLOT_NIL && > + field_map_builder_set_slot(builder, field->offset_slot, > + pos - tuple, MULTIKEY_NONE, > + 0, NULL) != 0) { > + return -1; > + } > + } ^ permalink raw reply [flat|nested] 17+ messages in thread
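The suggested shape — a macro that only performs the check and early-returns on mismatch, with the `validate` test hoisted to the caller — can be sketched like this (GCC/Clang statement-expression syntax; all names are illustrative, not Tarantool's):

```c
#include <assert.h>
#include <stdbool.h>

/* The macro checks one condition and early-returns from the enclosing
 * function on failure; whether validation runs at all is decided by
 * the caller, as suggested in the review. */
#define check_type_or_fail(actual, expected) ({ \
	if ((actual) != (expected)) \
		return -1; \
})

static int
validate_fields(const int *types, const int *expected, int n, bool validate)
{
	for (int i = 0; i < n; i++) {
		if (validate) {
			check_type_or_fail(types[i], expected[i]);
			/* bit_clear(required_fields, ...) would go here
			 * too, under the same single if (validate). */
		}
	}
	return 0;
}
```

A `return` hidden inside a macro obscures control flow from the caller, which is one reason such macros are normally kept local to a single function (defined and `#undef`'d in place), as the patch does.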
* Re: [Tarantool-patches] [PATCH 1/2] box: speed up tuple_field_map_create 2020-11-14 18:00 ` Cyrill Gorcunov @ 2020-11-16 7:34 ` Serge Petrenko 0 siblings, 0 replies; 17+ messages in thread From: Serge Petrenko @ 2020-11-16 7:34 UTC (permalink / raw) To: Cyrill Gorcunov; +Cc: v.shpilevoy, tarantool-patches 14.11.2020 21:00, Cyrill Gorcunov пишет: > On Sat, Nov 14, 2020 at 08:28:22PM +0300, Serge Petrenko wrote: > ... >> +static int >> +tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, >> + bool validate, struct field_map_builder *builder) >> +{ >> +#define check_field_type(field, pos) { \ >> + if (validate && \ >> + !field_mp_type_is_compatible(field->type, pos, \ >> + tuple_field_is_nullable(field))) { \ >> + diag_set(ClientError, ER_FIELD_TYPE, tuple_field_path(field), \ >> + field_type_strs[field->type]); \ >> + return -1; \ >> + } \ >> +} Hi! Thanks for the review! > Serge, I'm completely not familiar with the code thus may be simply wrong but > @check_field_type test for @validate first, right? > ... >> + >> + field = json_tree_lookup_entry(&format->fields, &format->fields.root, >> + &token, struct tuple_field, token); >> + /* Check 1st field's type, but don't store offset to it. */ >> + check_field_type(field, pos); > check_field_type -> if (validate ...) > >> + if (validate) >> + bit_clear(required_fields, field->id); > and here we test for if (validate) again. Should not we simply > drop if (validate) from check_field_type and call this macro under > the caller's if? IOW > > if (validate) { > check_field_type(); > bit_clear(); > } > > While check_field_type will be something like > > #define check_field_type(field, pos) \ > ({ \ > bool nullable = tuple_field_is_nullable(field); \ > if (!field_mp_type_is_compatible(field->type, pos, nullable)) { \ > diag_set(ClientError, ER_FIELD_TYPE, \ > tuple_field_path(field), \ > field_type_strs[field->type]); \ > return -1; \ > } \ > }) Yes, you're correct. Thanks for pointing this out! 
I've amended the patch. Incremental diff is below. > - if I'm right we may fix it on top (actually since these two ifs are close > to each other they won't hurt hw branch predictor even in current form or > may be compiler merge these two basic blocks under one "if" flow) > >> + check_field_type(field, pos); >> + if (validate) >> + bit_clear(required_fields, field->id); > and here too. > >> + if (field->offset_slot != TUPLE_OFFSET_SLOT_NIL && >> + field_map_builder_set_slot(builder, field->offset_slot, >> + pos - tuple, MULTIKEY_NONE, >> + 0, NULL) != 0) { >> + return -1; >> + } >> + } diff --git a/src/box/tuple_format.c b/src/box/tuple_format.c index f2db521ea..ad2f251b4 100644 --- a/src/box/tuple_format.c +++ b/src/box/tuple_format.c @@ -861,15 +861,14 @@ static int tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, bool validate, struct field_map_builder *builder) { -#define check_field_type(field, pos) { \ - if (validate && \ - !field_mp_type_is_compatible(field->type, pos, \ - tuple_field_is_nullable(field))) { \ +#define check_field_type(field, pos) ({ \ + bool nullable = tuple_field_is_nullable(field); \ + if(!field_mp_type_is_compatible(field->type, pos, nullable)) { \ diag_set(ClientError, ER_FIELD_TYPE, tuple_field_path(field), \ field_type_strs[field->type]); \ return -1; \ } \ -} +}) struct region *region = &fiber()->gc; const char *pos = tuple; @@ -904,9 +903,10 @@ tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, field = json_tree_lookup_entry(&format->fields, &format->fields.root, &token, struct tuple_field, token); /* Check 1st field's type, but don't store offset to it. 
*/ - check_field_type(field, pos); - if (validate) + if (validate) { + check_field_type(field, pos); bit_clear(required_fields, field->id); + } mp_next(&pos); for (uint32_t i = 1; i < defined_field_count; i++, mp_next(&pos)) { @@ -914,9 +914,10 @@ tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, field = json_tree_lookup_entry(&format->fields, &format->fields.root, &token, struct tuple_field, token); - check_field_type(field, pos); - if (validate) + if (validate) { + check_field_type(field, pos); bit_clear(required_fields, field->id); + } if (field->offset_slot != TUPLE_OFFSET_SLOT_NIL && field_map_builder_set_slot(builder, field->offset_slot, pos - tuple, MULTIKEY_NONE, -- Serge Petrenko ^ permalink raw reply [flat|nested] 17+ messages in thread
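Both versions of the patch keep the same required-fields bookkeeping: copy the format's bitmap, clear one bit per field actually present in the tuple, and fail validation if any bit survives. A simplified sketch with a single 32-bit word standing in for the region-allocated bitmap (the real code uses `bitmap_size()`/`bit_clear()`; names below are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* required: bitmask of non-nullable field ids the format demands.
 * seen_ids: ids of fields actually found while parsing the tuple. */
static bool
toy_required_fields_validate(uint32_t required, const int *seen_ids, int n)
{
	/* Work on a copy, clearing one bit per field seen. */
	uint32_t missing = required;
	for (int i = 0; i < n; i++)
		missing &= ~(UINT32_C(1) << seen_ids[i]);
	/* Any bit still set means a required field is absent. */
	return missing == 0;
}
```

In the patch the copy lives on the fiber region and the final check is delegated to `tuple_format_required_fields_validate()`, but the invariant is the same: seen fields clear bits, leftover bits are errors.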
* Re: [Tarantool-patches] [PATCH 1/2] box: speed up tuple_field_map_create 2020-11-14 17:28 ` [Tarantool-patches] [PATCH 1/2] box: speed up tuple_field_map_create Serge Petrenko 2020-11-14 18:00 ` Cyrill Gorcunov @ 2020-11-20 23:26 ` Vladislav Shpilevoy 2020-11-23 13:50 ` Serge Petrenko 1 sibling, 1 reply; 17+ messages in thread From: Vladislav Shpilevoy @ 2020-11-20 23:26 UTC (permalink / raw) To: Serge Petrenko, korablev; +Cc: tarantool-patches Hi! Thanks for the patch! Did you think of trying to flatten the first level of tuple_format.fields tree into an array, like it was before JSON indexes? So we would have an array of fields, and each one has a json-subtree in it. Which is just unused in 99.999% of cases. It could restore even more perf maybe. See 2 comments below. > diff --git a/src/box/tuple_format.c b/src/box/tuple_format.c > index 9b817d3cf..ad2f251b4 100644 > --- a/src/box/tuple_format.c > +++ b/src/box/tuple_format.c > @@ -852,6 +852,89 @@ tuple_format1_can_store_format2_tuples(struct tuple_format *format1, > return true; > } > > +static int > +tuple_format_required_fields_validate(struct tuple_format *format, > + void *required_fields, > + uint32_t required_fields_sz); > + > +static int > +tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, > + bool validate, struct field_map_builder *builder) > +{ > +#define check_field_type(field, pos) ({ \ > + bool nullable = tuple_field_is_nullable(field); \ > + if(!field_mp_type_is_compatible(field->type, pos, nullable)) { \ > + diag_set(ClientError, ER_FIELD_TYPE, tuple_field_path(field), \ > + field_type_strs[field->type]); \ > + return -1; \ > + } \ > +}) > + > + struct region *region = &fiber()->gc; > + const char *pos = tuple; > + uint32_t defined_field_count = mp_decode_array(&pos); > + if (validate && format->exact_field_count > 0 && > + format->exact_field_count != defined_field_count) { > + diag_set(ClientError, ER_EXACT_FIELD_COUNT, > + (unsigned) defined_field_count, > + 
(unsigned) format->exact_field_count); > + return -1; > + } > + defined_field_count = MIN(defined_field_count, > + tuple_format_field_count(format)); > + > + void *required_fields = NULL; > + uint32_t required_fields_sz = bitmap_size(format->total_field_count); > + if (validate) { > + required_fields = region_alloc(region, required_fields_sz); > + memcpy(required_fields, format->required_fields, > + required_fields_sz); > + } > + > + if (unlikely(defined_field_count == 0)) > + goto end; 1. If the count is 0, you anyway allocate something on the region. Better do the check before allocating anything. > + > + struct json_token token = { > + .type = JSON_TOKEN_NUM, > + .num = 0, > + }; > + struct tuple_field *field; > + > + field = json_tree_lookup_entry(&format->fields, &format->fields.root, > + &token, struct tuple_field, token); > + /* Check 1st field's type, but don't store offset to it. */ > + if (validate) { > + check_field_type(field, pos); > + bit_clear(required_fields, field->id); > + } 2. Since you use macro, you can move bit_clear and 'if validate' into it. These 4 lines above are repeated 2 times without changes. Also, why do you separate the first iteration? Doesn't the first field have offset_slot == TUPLE_OFFSET_SLOT_NIL? So its slot won't be set anyway. > + mp_next(&pos); > + > + for (uint32_t i = 1; i < defined_field_count; i++, mp_next(&pos)) { > + token.num = i; > + field = json_tree_lookup_entry(&format->fields, > + &format->fields.root, &token, > + struct tuple_field, token); > + if (validate) { > + check_field_type(field, pos); > + bit_clear(required_fields, field->id); > + } > + if (field->offset_slot != TUPLE_OFFSET_SLOT_NIL && > + field_map_builder_set_slot(builder, field->offset_slot, > + pos - tuple, MULTIKEY_NONE, > + 0, NULL) != 0) { > + return -1; > + } > + } > + > +#undef check_field_type > + > +end: > + return validate ? 
> + tuple_format_required_fields_validate(format, required_fields, > + required_fields_sz) : > + 0; > +}
* Re: [Tarantool-patches] [PATCH 1/2] box: speed up tuple_field_map_create 2020-11-20 23:26 ` Vladislav Shpilevoy @ 2020-11-23 13:50 ` Serge Petrenko 2020-11-24 22:33 ` Vladislav Shpilevoy 0 siblings, 1 reply; 17+ messages in thread From: Serge Petrenko @ 2020-11-23 13:50 UTC (permalink / raw) To: Vladislav Shpilevoy, korablev; +Cc: tarantool-patches 21.11.2020 02:26, Vladislav Shpilevoy пишет: > Hi! Thanks for the patch! Thanks for the review! > Did you think of trying to flatten the first level of tuple_format.fields > tree into an array, like it was before JSON indexes? So we would have an > array of fields, and each one has a json-subtree in it. Which is just > unused in 99.999% of cases. It could restore even more perf maybe. Your suggestion is good, but looks like it won't work. As I understand, the first level of JSON schema (or how does one call this?) may be a map, rather than an array. So, even top-level fields must be organised in a tree, rather than an array. I was thinking of making fields a union. So that it'd hold an array for simple cases and a json tree for complex ones. I also thought it should work even better, but didn't start implementation. What's interesting, the JSON tree root has all its children organised in an array anyway. It's just hidden behind one level of abstraction. So, in `json_tree_lookup_entry` one actually does `return root->children[token.num]`. So, I'm not sure whether we should introduce a plain array for top-level fields. This'll require a bigger refactoring just to check whether the idea is good or not. I have a suggestion: let's access the root's array directly. I didn't implement it initially since I thought it'd harm JSON tree encapsulation. However, I tried it out, and it actually looks good. 
Consider this diff (not applied yet, just an example): ``` diff --git a/src/box/tuple_format.c b/src/box/tuple_format.c index d8656aa26..6c9b2a255 100644 --- a/src/box/tuple_format.c +++ b/src/box/tuple_format.c @@ -888,17 +888,10 @@ tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, required_fields_sz); } - struct json_token token = { - .type = JSON_TOKEN_NUM, - .num = 0, - }; struct tuple_field *field; - - for (uint32_t i = 0; i < defined_field_count; i++, mp_next(&pos)) { - token.num = i; - field = json_tree_lookup_entry(&format->fields, - &format->fields.root, &token, - struct tuple_field, token); + struct json_token **token = format->fields.root.children; + for (uint32_t i = 0; i < defined_field_count; i++, token++, mp_next(&pos)) { + field = json_tree_entry(*token, struct tuple_field, token); if (validate) { bool nullable = tuple_field_is_nullable(field); if(!field_mp_type_is_compatible(field->type, pos, ``` I actually profiled my original patch and the version with this diff applied: This diff reduces time taken by tuple_field_map_create from 2.12 seconds to 1.69 seconds. 1.10 still has the best time: 1.31 seconds. Just for comparison, current 2.x takes 6.1 seconds for the same amount of tuple_field_map_create calls. I'm starting to like this variant more. Anyway, I'm not applying this diff until your ack. Please find other comments and incremental diff below. > See 2 comments below. 
> >> diff --git a/src/box/tuple_format.c b/src/box/tuple_format.c >> index 9b817d3cf..ad2f251b4 100644 >> --- a/src/box/tuple_format.c >> +++ b/src/box/tuple_format.c >> @@ -852,6 +852,89 @@ tuple_format1_can_store_format2_tuples(struct tuple_format *format1, >> return true; >> } >> >> +static int >> +tuple_format_required_fields_validate(struct tuple_format *format, >> + void *required_fields, >> + uint32_t required_fields_sz); >> + >> +static int >> +tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, >> + bool validate, struct field_map_builder *builder) >> +{ >> +#define check_field_type(field, pos) ({ \ >> + bool nullable = tuple_field_is_nullable(field); \ >> + if(!field_mp_type_is_compatible(field->type, pos, nullable)) { \ >> + diag_set(ClientError, ER_FIELD_TYPE, tuple_field_path(field), \ >> + field_type_strs[field->type]); \ >> + return -1; \ >> + } \ >> +}) >> + >> + struct region *region = &fiber()->gc; >> + const char *pos = tuple; >> + uint32_t defined_field_count = mp_decode_array(&pos); >> + if (validate && format->exact_field_count > 0 && >> + format->exact_field_count != defined_field_count) { >> + diag_set(ClientError, ER_EXACT_FIELD_COUNT, >> + (unsigned) defined_field_count, >> + (unsigned) format->exact_field_count); >> + return -1; >> + } >> + defined_field_count = MIN(defined_field_count, >> + tuple_format_field_count(format)); >> + >> + void *required_fields = NULL; >> + uint32_t required_fields_sz = bitmap_size(format->total_field_count); >> + if (validate) { >> + required_fields = region_alloc(region, required_fields_sz); >> + memcpy(required_fields, format->required_fields, >> + required_fields_sz); >> + } >> + >> + if (unlikely(defined_field_count == 0)) >> + goto end; > 1. If the count is 0, you anyway allocate something on the region. Better do the > check before allocating anything. Thanks for noticing! Fixed. 
>> + >> + struct json_token token = { >> + .type = JSON_TOKEN_NUM, >> + .num = 0, >> + }; >> + struct tuple_field *field; >> + >> + field = json_tree_lookup_entry(&format->fields, &format->fields.root, >> + &token, struct tuple_field, token); >> + /* Check 1st field's type, but don't store offset to it. */ >> + if (validate) { >> + check_field_type(field, pos); >> + bit_clear(required_fields, field->id); >> + } > 2. Since you use macro, you can move bit_clear and 'if validate' into it. > These 4 lines above are repeated 2 times without changes. Agree, but since your next comment is also on point, let's actually remove the macro at all. There'd be only one its invocation now. > Also, why do you separate the first iteration? Doesn't the first field > have offset_slot == TUPLE_OFFSET_SLOT_NIL? So its slot won't be set anyway. That's some optimisation I borrowed from 1.10 code without a second thought. Not needed here, so removed. >> + mp_next(&pos); >> + >> + for (uint32_t i = 1; i < defined_field_count; i++, mp_next(&pos)) { >> + token.num = i; >> + field = json_tree_lookup_entry(&format->fields, >> + &format->fields.root, &token, >> + struct tuple_field, token); >> + if (validate) { >> + check_field_type(field, pos); >> + bit_clear(required_fields, field->id); >> + } >> + if (field->offset_slot != TUPLE_OFFSET_SLOT_NIL && >> + field_map_builder_set_slot(builder, field->offset_slot, >> + pos - tuple, MULTIKEY_NONE, >> + 0, NULL) != 0) { >> + return -1; >> + } >> + } >> + >> +#undef check_field_type >> + >> +end: >> + return validate ? 
>> + tuple_format_required_fields_validate(format, required_fields, >> + required_fields_sz) : >> + 0; >> +} diff: diff --git a/src/box/tuple_format.c b/src/box/tuple_format.c index ad2f251b4..18c292926 100644 --- a/src/box/tuple_format.c +++ b/src/box/tuple_format.c @@ -861,15 +861,6 @@ static int tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, bool validate, struct field_map_builder *builder) { -#define check_field_type(field, pos) ({ \ - bool nullable = tuple_field_is_nullable(field); \ - if(!field_mp_type_is_compatible(field->type, pos, nullable)) { \ - diag_set(ClientError, ER_FIELD_TYPE, tuple_field_path(field), \ - field_type_strs[field->type]); \ - return -1; \ - } \ -}) - struct region *region = &fiber()->gc; const char *pos = tuple; uint32_t defined_field_count = mp_decode_array(&pos); @@ -885,37 +876,38 @@ tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, void *required_fields = NULL; uint32_t required_fields_sz = bitmap_size(format->total_field_count); + + if (unlikely(defined_field_count == 0)) { + required_fields = format->required_fields; + goto end; + } + if (validate) { required_fields = region_alloc(region, required_fields_sz); memcpy(required_fields, format->required_fields, required_fields_sz); } - if (unlikely(defined_field_count == 0)) - goto end; - struct json_token token = { .type = JSON_TOKEN_NUM, .num = 0, }; struct tuple_field *field; - field = json_tree_lookup_entry(&format->fields, &format->fields.root, - &token, struct tuple_field, token); - /* Check 1st field's type, but don't store offset to it. 
*/ - if (validate) { - check_field_type(field, pos); - bit_clear(required_fields, field->id); - } - mp_next(&pos); - - for (uint32_t i = 1; i < defined_field_count; i++, mp_next(&pos)) { + for (uint32_t i = 0; i < defined_field_count; i++, mp_next(&pos)) { token.num = i; field = json_tree_lookup_entry(&format->fields, &format->fields.root, &token, - struct tuple_field, token); + struct tuple_field, token); if (validate) { - check_field_type(field, pos); + bool nullable = tuple_field_is_nullable(field); + if(!field_mp_type_is_compatible(field->type, pos, + nullable)) { + diag_set(ClientError, ER_FIELD_TYPE, + tuple_field_path(field), + field_type_strs[field->type]); + return -1; + } bit_clear(required_fields, field->id); } if (field->offset_slot != TUPLE_OFFSET_SLOT_NIL && @@ -926,8 +918,6 @@ tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, } } -#undef check_field_type - end: return validate ? tuple_format_required_fields_validate(format, required_fields, -- Serge Petrenko ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Tarantool-patches] [PATCH 1/2] box: speed up tuple_field_map_create 2020-11-23 13:50 ` Serge Petrenko @ 2020-11-24 22:33 ` Vladislav Shpilevoy 2020-12-01 8:48 ` Serge Petrenko 0 siblings, 1 reply; 17+ messages in thread From: Vladislav Shpilevoy @ 2020-11-24 22:33 UTC (permalink / raw) To: Serge Petrenko, korablev; +Cc: tarantool-patches Hi! Thanks for the fixes! >> Did you think of trying to flatten the first level of tuple_format.fields >> tree into an array, like it was before JSON indexes? So we would have an >> array of fields, and each one has a json-subtree in it. Which is just >> unused in 99.999% of cases. It could restore even more perf maybe. > > > Your suggestion is good, but looks like it won't work. As I understand, > the first level of JSON schema (or how does one call this?) may be a map, > rather than an array. So, even top-level fields must be organised in a tree, > rather than an array. First level of tuple_format is always an array. Because tuple root type is MP_ARRAY. The first level names are emulated using tuple_dictionary. But in the code we usually just trim the first JSON path part, convert it to a field number using the dictionary, and then work with an array in the root. This is why in many functions we have fieldno + path, not just path. > I was thinking of making fields a union. So that it'd hold an array for > simple cases and a json tree for complex ones. I also thought it should work > even better, but didn't start implementation. Sounds complex. But maybe it would look not as complex as it sounds. However if the idea with json tree root as array works, we probably don't need it. > What's interesting, the JSON tree root has all its children organised in an > array anyway. It's just hidden behind one level of abstraction. So, in > `json_tree_lookup_entry` one actually does `return root->children[token.num]`. Yeah, this is because the tree does not have hash tables in each node. 
Instead, all nodes store their children in an array, regardless of their key: even string keys are stored in the children array. To look up paths the tree uses a single hash, json_tree.hash, where a parent node pointer plus a child name map to the next json_token. So even though children looks like an array, it is not entirely correct to use it as one. Although it will work for the root tuple level, which is always an array, I guess. I don't know if we put root field names into this json tree. If we don't, then the root level can safely be used as an array.
> Consider this diff (not applied yet, just an example): > > ``` > diff --git a/src/box/tuple_format.c b/src/box/tuple_format.c > index d8656aa26..6c9b2a255 100644 > --- a/src/box/tuple_format.c > +++ b/src/box/tuple_format.c > @@ -888,17 +888,10 @@ tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, > required_fields_sz); > } > > - struct json_token token = { > - .type = JSON_TOKEN_NUM, > - .num = 0, > - }; > struct tuple_field *field; > - > - for (uint32_t i = 0; i < defined_field_count; i++, mp_next(&pos)) { > - token.num = i; > - field = json_tree_lookup_entry(&format->fields, > - &format->fields.root, &token, > - struct tuple_field, token); > + struct json_token **token = format->fields.root.children; > + for (uint32_t i = 0; i < defined_field_count; i++, token++, mp_next(&pos)) { > + field = json_tree_entry(*token, struct tuple_field, token); > if (validate) { > bool nullable = tuple_field_is_nullable(field); > if(!field_mp_type_is_compatible(field->type, pos, > ``` > > > I actually profiled my original patch and the version with this diff applied: > > > This diff reduces time taken by tuple_field_map_create from 2.12 seconds > to 1.69 seconds. 1.10 still has the best time: 1.31 seconds. > Just for comparison, current 2.x takes 6.1 seconds for the same amount of > tuple_field_map_create calls. Hm. That looks too much. json_tree_lookup_entry() is sure slower than a direct access, but why so much slower? It calls json_tree_lookup, which checks token type, which is num, and then it does the same as you in the diff above. So it is just +2 ifs, not counting the 'unlikely' multikey check. Does a couple of ifs slow down the code so heavily? Maybe the inlining was too aggressive, and we washed out the instruction cache? Don't know how to check it properly. Just out of curiosity, what if we move json_tree_lookup to json.c, and field_map_builder_set_slot to field_map.c? And maybe field_mp_type_is_compatible to field_def.c. 
If it won't change anything, then probably just 2 ifs really matter that much. > I'm starting to like this variant more. Anyway, I'm not applying this diff > until your ack. Looks fine. Just don't understand why so big difference for such a simple change. Then we probably also could gain some perf by splitting these functions into 2 versions with validate true and validate false. Because we always know it at compile time. Would be cool to use templates for this, but not sure if we can simply change tuple.c to tuple.cc only for that. We could also template tuple_field_raw_by_path, because tuple_field_raw always passes NULL path and NULL offset_slot_hint. >>> + mp_next(&pos); >>> + >>> + for (uint32_t i = 1; i < defined_field_count; i++, mp_next(&pos)) { >>> + token.num = i; >>> + field = json_tree_lookup_entry(&format->fields, >>> + &format->fields.root, &token, >>> + struct tuple_field, token); >>> + if (validate) { >>> + check_field_type(field, pos); >>> + bit_clear(required_fields, field->id); >>> + } >>> + if (field->offset_slot != TUPLE_OFFSET_SLOT_NIL && >>> + field_map_builder_set_slot(builder, field->offset_slot, >>> + pos - tuple, MULTIKEY_NONE, >>> + 0, NULL) != 0) { >>> + return -1; >>> + } >>> + } >>> + ^ permalink raw reply [flat|nested] 17+ messages in thread
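To make the layout described above concrete, here is a minimal sketch of the data structure under discussion. It is an illustrative stand-in, not the real json_token/json_tree declarations from Tarantool; names are approximate and the string-key hash is simplified away:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Every node keeps its children in one array, regardless of key type; the
// real tree additionally has a hash (json_tree.hash) mapping
// (parent pointer, child name) -> child token, used for string-keyed lookups.
struct JsonToken {
    enum Type { NUM, STR } type = NUM;
    int num = 0;                       // array index, for NUM tokens
    std::vector<JsonToken *> children; // dense when the parent is an array
};

// Roughly what a generic tree lookup does for a numeric token: check the
// token type/bounds, then index the parent's children array.
inline JsonToken *tree_lookup_num(const JsonToken &parent, int num) {
    if (num < 0 || (size_t)num >= parent.children.size())
        return nullptr;
    return parent.children[num];
}

// The shortcut discussed above: when the root is known to be a plain array
// (the tuple's top level), iterate the children array directly and skip the
// per-lookup checks.
inline JsonToken *direct_child(const JsonToken &parent, int i) {
    return parent.children[i];
}
```

For a root that is a plain array, both accessors return the same child; the direct form just drops the generic-lookup checks.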
* Re: [Tarantool-patches] [PATCH 1/2] box: speed up tuple_field_map_create 2020-11-24 22:33 ` Vladislav Shpilevoy @ 2020-12-01 8:48 ` Serge Petrenko 2020-12-01 22:04 ` Vladislav Shpilevoy 0 siblings, 1 reply; 17+ messages in thread From: Serge Petrenko @ 2020-12-01 8:48 UTC (permalink / raw) To: Vladislav Shpilevoy, korablev; +Cc: tarantool-patches 25.11.2020 01:33, Vladislav Shpilevoy writes: > >> ``` >> diff --git a/src/box/tuple_format.c b/src/box/tuple_format.c >> index d8656aa26..6c9b2a255 100644 >> --- a/src/box/tuple_format.c >> +++ b/src/box/tuple_format.c >> @@ -888,17 +888,10 @@ tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, >> required_fields_sz); >> } >> >> - struct json_token token = { >> - .type = JSON_TOKEN_NUM, >> - .num = 0, >> - }; >> struct tuple_field *field; >> - >> - for (uint32_t i = 0; i < defined_field_count; i++, mp_next(&pos)) { >> - token.num = i; >> - field = json_tree_lookup_entry(&format->fields, >> - &format->fields.root, &token, >> - struct tuple_field, token); >> + struct json_token **token = format->fields.root.children; >> + for (uint32_t i = 0; i < defined_field_count; i++, token++, mp_next(&pos)) { >> + field = json_tree_entry(*token, struct tuple_field, token); >> if (validate) { >> bool nullable = tuple_field_is_nullable(field); >> if(!field_mp_type_is_compatible(field->type, pos, >> ``` >> >> >> I actually profiled my original patch and the version with this diff applied: >> >> >> This diff reduces time taken by tuple_field_map_create from 2.12 seconds >> to 1.69 seconds. 1.10 still has the best time: 1.31 seconds. >> Just for comparison, current 2.x takes 6.1 seconds for the same amount of >> tuple_field_map_create calls. > Hm. That looks too much. json_tree_lookup_entry() is sure slower than a > direct access, but why so much slower? It calls json_tree_lookup, which > checks token type, which is num, and then it does the same as you in the > diff above.
> So it is just +2 ifs, not counting the 'unlikely' multikey > check. Does a couple of ifs slow down the code so heavily? I suppose we should count all the ifs. likely/unlikely shouldn't make any difference after some iterations: the branch predictor should already learn and guess correctly every time, and we have ~30 mil iterations in this test. So there's +3 ifs in a cycle (just +3 instructions, without any branch misprediction). > Maybe the inlining was too aggressive, and we washed out the instruction > cache? Don't know how to check it properly. Just out of curiosity, what > if we move json_tree_lookup to json.c, and field_map_builder_set_slot > to field_map.c? And maybe field_mp_type_is_compatible to field_def.c.

tuple_field_map_create time with everything moved to *.c: 2.44 s
field_mp_type_is_compatible inlined, everything else in *.c: 2.8 s
only json_tree_lookup moved to json.c: 2.21 s
everything inlined: 2.12 s

> If it won't change anything, then probably just 2 ifs really matter that > much. > >> I'm starting to like this variant more. Anyway, I'm not applying this diff >> until your ack. > Looks fine. Just don't understand why so big difference for such a simple > change. These 3 ifs are a noticeable portion of all the instructions inside the loop. Why not? The diff is applied. > Then we probably also could gain some perf by splitting these functions > into 2 versions with validate true and validate false. Because we always > know it at compile time. Would be cool to use templates for this, but not > sure if we can simply change tuple.c to tuple.cc only for that. You mean like that?

tuple_field_map_create(...) {
    tuple_field_map_create_impl(..., true, ...);
}

tuple_field_map_create_unchecked(...) {
    tuple_field_mp_create_impl(..., false, ...);
}

I tried this approach, it didn't give anything. tuple_field_map_create time 2.15 s. Is this possible that compiler already evaluates `validate`? The diff, just in case.
============================================= diff --git a/src/box/tuple_format.c b/src/box/tuple_format.c index d8656aa26..14e3984b2 100644 --- a/src/box/tuple_format.c +++ b/src/box/tuple_format.c @@ -857,7 +857,7 @@ tuple_format_required_fields_validate(struct tuple_format *format, void *required_fields, uint32_t required_fields_sz); -static int +static inline int tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, bool validate, struct field_map_builder *builder) { @@ -925,10 +925,9 @@ end: 0; } -/** @sa declaration for details. */ -int -tuple_field_map_create(struct tuple_format *format, const char *tuple, - bool validate, struct field_map_builder *builder) +static inline int +tuple_field_map_create_impl(struct tuple_format *format, const char *tuple, + bool validate, struct field_map_builder *builder) { struct region *region = &fiber()->gc; if (field_map_builder_create(builder, format->field_map_size, @@ -966,6 +965,20 @@ tuple_field_map_create(struct tuple_format *format, const char *tuple, return entry.data == NULL ? 0 : -1; } +int +tuple_field_map_create(struct tuple_format *format, const char *tuple, + struct field_map_builder *builder) +{ + return tuple_field_map_create_impl(format, tuple, true, builder); +} + +int +tuple_field_map_create_unchecked(struct tuple_format *format, const char *tuple, + struct field_map_builder *builder) +{ + return tuple_field_map_create_impl(format, tuple, false, builder); +} + uint32_t tuple_format_min_field_count(struct key_def * const *keys, uint16_t key_count, const struct field_def *space_fields, > We could also template tuple_field_raw_by_path, because tuple_field_raw > always passes NULL path and NULL offset_slot_hint. Yes, this should also help. -- Serge Petrenko ^ permalink raw reply [flat|nested] 17+ messages in thread
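A side note on the likely/unlikely annotations debated above: they are conventionally defined via GCC/Clang's __builtin_expect, which only biases static branch layout and, as the discussion notes, does not beat a branch predictor that has already trained on the loop. A sketch of the usual definition (Tarantool defines its own macros; the form below is the common convention, shown as an assumption):

```cpp
#include <cassert>
#include <cstdint>

// Conventional GCC/Clang form; the fallback keeps this portable and makes
// clear the hint never changes semantics, only code layout.
#if defined(__GNUC__) || defined(__clang__)
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)
#else
#define likely(x)   (x)
#define unlikely(x) (x)
#endif

// Usage mirroring the empty-tuple check in the diff above: the zero-field
// case is rare, so the compiler keeps the common path as the fall-through.
int count_defined_fields(uint32_t defined_field_count) {
    if (unlikely(defined_field_count == 0))
        return 0;
    return (int)defined_field_count;
}
```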
* Re: [Tarantool-patches] [PATCH 1/2] box: speed up tuple_field_map_create 2020-12-01 8:48 ` Serge Petrenko @ 2020-12-01 22:04 ` Vladislav Shpilevoy 2020-12-02 9:53 ` Serge Petrenko 0 siblings, 1 reply; 17+ messages in thread From: Vladislav Shpilevoy @ 2020-12-01 22:04 UTC (permalink / raw) To: Serge Petrenko, korablev; +Cc: tarantool-patches Thanks for the investigation! >> Then we probably also could gain some perf by splitting these functions >> into 2 versions with validate true and validate false. Because we always >> know it at compile time. Would be cool to use templates for this, but not >> sure if we can simply change tuple.c to tuple.cc only for that. > > > You mean like that? > > tuple_field_map_create(...) { > > tuple_field_map_create_impl(..., true, ...); > > } > > tuple_field_map_create_unchecked(...) { > > tuple_field_mp_create_impl(..., false, ...); > > } > > I tried this approach, it didn't give anything. tuple_field_map_create time 2.15 s. No, I mean create 2 versions of tuple_field_map_create. It leads to some code duplication, and we probably don't want to do that this way. But if it will give notable perf increase, we may decide to consider templates. > Is this possible that compiler already evaluates `validate`? I guess it can, but we can not be sure it will always do so. Consider this diff. Super ugly, but should reveal if validate matters. After all, we have at least 4 validate checks here. The diff replaces it with 1. 
==================== diff --git a/src/box/tuple_format.c b/src/box/tuple_format.c index 6c9b2a255..5ee0c4484 100644 --- a/src/box/tuple_format.c +++ b/src/box/tuple_format.c @@ -859,12 +859,38 @@ tuple_format_required_fields_validate(struct tuple_format *format, static int tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, - bool validate, struct field_map_builder *builder) + struct field_map_builder *builder) +{ + const char *pos = tuple; + uint32_t defined_field_count = mp_decode_array(&pos); + defined_field_count = MIN(defined_field_count, + tuple_format_field_count(format)); + if (unlikely(defined_field_count == 0)) + return 0; + + struct tuple_field *field; + struct json_token **token = format->fields.root.children; + for (uint32_t i = 0; i < defined_field_count; i++, token++, mp_next(&pos)) { + field = json_tree_entry(*token, struct tuple_field, token); + if (field->offset_slot != TUPLE_OFFSET_SLOT_NIL && + field_map_builder_set_slot(builder, field->offset_slot, + pos - tuple, MULTIKEY_NONE, + 0, NULL) != 0) { + return -1; + } + } + return 0; +} + +static int +tuple_field_map_create_and_validate_plain(struct tuple_format *format, + const char *tuple, + struct field_map_builder *builder) { struct region *region = &fiber()->gc; const char *pos = tuple; uint32_t defined_field_count = mp_decode_array(&pos); - if (validate && format->exact_field_count > 0 && + if (format->exact_field_count > 0 && format->exact_field_count != defined_field_count) { diag_set(ClientError, ER_EXACT_FIELD_COUNT, (unsigned) defined_field_count, @@ -881,28 +907,22 @@ tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, required_fields = format->required_fields; goto end; } - - if (validate) { - required_fields = region_alloc(region, required_fields_sz); - memcpy(required_fields, format->required_fields, - required_fields_sz); - } + required_fields = region_alloc(region, required_fields_sz); + memcpy(required_fields, 
format->required_fields, + required_fields_sz); struct tuple_field *field; struct json_token **token = format->fields.root.children; for (uint32_t i = 0; i < defined_field_count; i++, token++, mp_next(&pos)) { field = json_tree_entry(*token, struct tuple_field, token); - if (validate) { - bool nullable = tuple_field_is_nullable(field); - if(!field_mp_type_is_compatible(field->type, pos, - nullable)) { - diag_set(ClientError, ER_FIELD_TYPE, - tuple_field_path(field), - field_type_strs[field->type]); - return -1; - } - bit_clear(required_fields, field->id); + bool nullable = tuple_field_is_nullable(field); + if(!field_mp_type_is_compatible(field->type, pos, nullable)) { + diag_set(ClientError, ER_FIELD_TYPE, + tuple_field_path(field), + field_type_strs[field->type]); + return -1; } + bit_clear(required_fields, field->id); if (field->offset_slot != TUPLE_OFFSET_SLOT_NIL && field_map_builder_set_slot(builder, field->offset_slot, pos - tuple, MULTIKEY_NONE, @@ -910,12 +930,9 @@ tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, return -1; } } - end: - return validate ? - tuple_format_required_fields_validate(format, required_fields, - required_fields_sz) : - 0; + return tuple_format_required_fields_validate(format, required_fields, + required_fields_sz); } /** @sa declaration for details. */ @@ -935,8 +952,11 @@ tuple_field_map_create(struct tuple_format *format, const char *tuple, * tuple field traversal may be simplified. */ if (format->fields_depth == 1) { - return tuple_field_map_create_plain(format, tuple, validate, - builder); + if (validate) { + return tuple_field_map_create_and_validate_plain( + format, tuple, builder); + } + return tuple_field_map_create_plain(format, tuple, builder); } uint32_t field_count; ^ permalink raw reply [flat|nested] 17+ messages in thread
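The template alternative mentioned above ('we may decide to consider templates') could look roughly like this: one body, two instantiations, no hand-copied loop. This is a sketch with simplified stand-in types, not the actual tuple_format code, and it assumes the file can be compiled as C++:

```cpp
#include <cassert>
#include <vector>

// 'validate' is a template parameter, so each instantiation drops the dead
// branch at compile time -- the same effect as the hand-split diff above,
// without duplicating the loop body. Negative values stand in for fields
// that fail type validation.
template <bool validate>
static int field_map_create(const std::vector<int> &fields) {
    int slots = 0;
    for (int field : fields) {
        if (validate && field < 0)  // constant-folded away in the <false> copy
            return -1;
        slots++;
    }
    return slots;
}

// Thin entry points preserving a C-style API:
int field_map_create_checked(const std::vector<int> &f) {
    return field_map_create<true>(f);
}
int field_map_create_unchecked(const std::vector<int> &f) {
    return field_map_create<false>(f);
}
```

The design trade-off is exactly the one raised in the thread: templates require the translation unit to be C++, while the manual split keeps plain C at the cost of duplicated code.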
* Re: [Tarantool-patches] [PATCH 1/2] box: speed up tuple_field_map_create 2020-12-01 22:04 ` Vladislav Shpilevoy @ 2020-12-02 9:53 ` Serge Petrenko 0 siblings, 0 replies; 17+ messages in thread From: Serge Petrenko @ 2020-12-02 9:53 UTC (permalink / raw) To: Vladislav Shpilevoy, korablev; +Cc: tarantool-patches 02.12.2020 01:04, Vladislav Shpilevoy writes: > Thanks for the investigation! > >>> Then we probably also could gain some perf by splitting these functions >>> into 2 versions with validate true and validate false. Because we always >>> know it at compile time. Would be cool to use templates for this, but not >>> sure if we can simply change tuple.c to tuple.cc only for that. >> >> You mean like that? >> >> tuple_field_map_create(...) { >> >> tuple_field_map_create_impl(..., true, ...); >> >> } >> >> tuple_field_map_create_unchecked(...) { >> >> tuple_field_mp_create_impl(..., false, ...); >> >> } >> >> I tried this approach, it didn't give anything. tuple_field_map_create time 2.15 s. > No, I mean create 2 versions of tuple_field_map_create. It leads > to some code duplication, and we probably don't want to do that > this way. But if it will give notable perf increase, we may decide > to consider templates. I see. I thought the compiler would do the same if it saw an inline function with an explicit boolean argument. > >> Is this possible that compiler already evaluates `validate`? > I guess it can, but we can not be sure it will always do so. > > Consider this diff. Super ugly, but should reveal if validate > matters. After all, we have at least 4 validate checks here. The > diff replaces it with 1. Thanks for all the suggestions! Tested this diff. My numbers must have been off somewhere in the previous letter, because tuple_field_map_create() took ~2.15 seconds back then. This must be because I took a 19 second profile, instead of a 15 second one, I guess. Anyway, all the comparisons from the previous letter were valid.
During this test tuple_field_map_create() took ~ 1.6 seconds. With and without the diff applied. I double-checked it this time. It is a 15 second profile. So looks like splitting by validate doesn't help much. > > ==================== > diff --git a/src/box/tuple_format.c b/src/box/tuple_format.c > index 6c9b2a255..5ee0c4484 100644 > --- a/src/box/tuple_format.c > +++ b/src/box/tuple_format.c > @@ -859,12 +859,38 @@ tuple_format_required_fields_validate(struct tuple_format *format, > > static int > tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, > - bool validate, struct field_map_builder *builder) > + struct field_map_builder *builder) > +{ > + const char *pos = tuple; > + uint32_t defined_field_count = mp_decode_array(&pos); > + defined_field_count = MIN(defined_field_count, > + tuple_format_field_count(format)); > + if (unlikely(defined_field_count == 0)) > + return 0; > + > + struct tuple_field *field; > + struct json_token **token = format->fields.root.children; > + for (uint32_t i = 0; i < defined_field_count; i++, token++, mp_next(&pos)) { > + field = json_tree_entry(*token, struct tuple_field, token); > + if (field->offset_slot != TUPLE_OFFSET_SLOT_NIL && > + field_map_builder_set_slot(builder, field->offset_slot, > + pos - tuple, MULTIKEY_NONE, > + 0, NULL) != 0) { > + return -1; > + } > + } > + return 0; > +} > + > +static int > +tuple_field_map_create_and_validate_plain(struct tuple_format *format, > + const char *tuple, > + struct field_map_builder *builder) > { > struct region *region = &fiber()->gc; > const char *pos = tuple; > uint32_t defined_field_count = mp_decode_array(&pos); > - if (validate && format->exact_field_count > 0 && > + if (format->exact_field_count > 0 && > format->exact_field_count != defined_field_count) { > diag_set(ClientError, ER_EXACT_FIELD_COUNT, > (unsigned) defined_field_count, > @@ -881,28 +907,22 @@ tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, > required_fields = 
format->required_fields; > goto end; > } > - > - if (validate) { > - required_fields = region_alloc(region, required_fields_sz); > - memcpy(required_fields, format->required_fields, > - required_fields_sz); > - } > + required_fields = region_alloc(region, required_fields_sz); > + memcpy(required_fields, format->required_fields, > + required_fields_sz); > > struct tuple_field *field; > struct json_token **token = format->fields.root.children; > for (uint32_t i = 0; i < defined_field_count; i++, token++, mp_next(&pos)) { > field = json_tree_entry(*token, struct tuple_field, token); > - if (validate) { > - bool nullable = tuple_field_is_nullable(field); > - if(!field_mp_type_is_compatible(field->type, pos, > - nullable)) { > - diag_set(ClientError, ER_FIELD_TYPE, > - tuple_field_path(field), > - field_type_strs[field->type]); > - return -1; > - } > - bit_clear(required_fields, field->id); > + bool nullable = tuple_field_is_nullable(field); > + if(!field_mp_type_is_compatible(field->type, pos, nullable)) { > + diag_set(ClientError, ER_FIELD_TYPE, > + tuple_field_path(field), > + field_type_strs[field->type]); > + return -1; > } > + bit_clear(required_fields, field->id); > if (field->offset_slot != TUPLE_OFFSET_SLOT_NIL && > field_map_builder_set_slot(builder, field->offset_slot, > pos - tuple, MULTIKEY_NONE, > @@ -910,12 +930,9 @@ tuple_field_map_create_plain(struct tuple_format *format, const char *tuple, > return -1; > } > } > - > end: > - return validate ? > - tuple_format_required_fields_validate(format, required_fields, > - required_fields_sz) : > - 0; > + return tuple_format_required_fields_validate(format, required_fields, > + required_fields_sz); > } > > /** @sa declaration for details. */ > @@ -935,8 +952,11 @@ tuple_field_map_create(struct tuple_format *format, const char *tuple, > * tuple field traversal may be simplified. 
> */ > if (format->fields_depth == 1) { > - return tuple_field_map_create_plain(format, tuple, validate, > - builder); > + if (validate) { > + return tuple_field_map_create_and_validate_plain( > + format, tuple, builder); > + } > + return tuple_field_map_create_plain(format, tuple, builder); > } > > uint32_t field_count; -- Serge Petrenko ^ permalink raw reply [flat|nested] 17+ messages in thread
* [Tarantool-patches] [PATCH 2/2] box: use tuple_field_raw_by_part where possible 2020-11-14 17:28 [Tarantool-patches] [PATCH 0/2] reduce performance degradation introduced by JSON path indices Serge Petrenko 2020-11-14 17:28 ` [Tarantool-patches] [PATCH 1/2] box: speed up tuple_field_map_create Serge Petrenko @ 2020-11-14 17:28 ` Serge Petrenko 2020-11-20 23:26 ` Vladislav Shpilevoy 2020-11-16 7:50 ` [Tarantool-patches] [PATCH 0/2] reduce performance degradation introduced by JSON path indices Serge Petrenko 2 siblings, 1 reply; 17+ messages in thread From: Serge Petrenko @ 2020-11-14 17:28 UTC (permalink / raw) To: v.shpilevoy, korablev; +Cc: tarantool-patches tuple_field_raw_by_part allows using the part's offset slot cache to bypass tuple field lookup in the JSON tree, which is involved even for plain tuples. Since both tuple_field_raw_by_part and tuple_field_raw are basically aliases to tuple_field_raw_by_path, prefer the former to the latter, since it takes advantage of the key part's offset slot cache. Also remove the has_json_paths template argument for functions where it was used to decide between tuple_field_raw and tuple_field_raw_by_part. This patch was tested on the snapshot recovery phase involving secondary index building, for a 1.5G snapshot with one space and one secondary index over 4 integer fields and one string field. The comparison table is below:

Version        | time(seconds) | Change relative to 1.10
---------------|---------------|------------------------
1.10           | 2:24          | -/-
2.x(unpatched) | 3:03          | + 27%
2.x (patched)  | 2:44          | + 14%

The numbers below show the cumulative time spent in tuple_compare_slowpath for 1.10 / 2.x(unpatched) / 2.x(patched), for 15, 19 and 17 second profiles respectively: 13.9 / 17.8 / 15.7. tuple_field_raw_by_path wasn't measured, since it's inlined, but all its uses come from tuple_compare_slowpath.
Closes #4774 --- src/box/tuple_compare.cc | 37 +++++++++--------------------------- src/box/tuple_extract_key.cc | 10 ++-------- src/box/tuple_hash.cc | 35 +++++++++------------------------- 3 files changed, 20 insertions(+), 62 deletions(-) diff --git a/src/box/tuple_compare.cc b/src/box/tuple_compare.cc index 0946d77f8..cbb3a850d 100644 --- a/src/box/tuple_compare.cc +++ b/src/box/tuple_compare.cc @@ -589,18 +589,13 @@ tuple_compare_slowpath(struct tuple *tuple_a, hint_t tuple_a_hint, field_b = tuple_field_raw_by_part(format_b, tuple_b_raw, field_map_b, part, (int)tuple_b_hint); - } else if (has_json_paths) { + } else { field_a = tuple_field_raw_by_part(format_a, tuple_a_raw, field_map_a, part, MULTIKEY_NONE); field_b = tuple_field_raw_by_part(format_b, tuple_b_raw, field_map_b, part, MULTIKEY_NONE); - } else { - field_a = tuple_field_raw(format_a, tuple_a_raw, - field_map_a, part->fieldno); - field_b = tuple_field_raw(format_b, tuple_b_raw, - field_map_b, part->fieldno); } assert(has_optional_parts || (field_a != NULL && field_b != NULL)); @@ -655,18 +650,13 @@ tuple_compare_slowpath(struct tuple *tuple_a, hint_t tuple_a_hint, field_b = tuple_field_raw_by_part(format_b, tuple_b_raw, field_map_b, part, (int)tuple_b_hint); - } else if (has_json_paths) { + } else { field_a = tuple_field_raw_by_part(format_a, tuple_a_raw, field_map_a, part, MULTIKEY_NONE); field_b = tuple_field_raw_by_part(format_b, tuple_b_raw, field_map_b, part, MULTIKEY_NONE); - } else { - field_a = tuple_field_raw(format_a, tuple_a_raw, - field_map_a, part->fieldno); - field_b = tuple_field_raw(format_b, tuple_b_raw, - field_map_b, part->fieldno); } /* * Extended parts are primary, and they can not @@ -681,14 +671,12 @@ tuple_compare_slowpath(struct tuple *tuple_a, hint_t tuple_a_hint, return 0; } -template<bool is_nullable, bool has_optional_parts, bool has_json_paths, - bool is_multikey> +template<bool is_nullable, bool has_optional_parts, bool is_multikey> static inline int 
tuple_compare_with_key_slowpath(struct tuple *tuple, hint_t tuple_hint, const char *key, uint32_t part_count, hint_t key_hint, struct key_def *key_def) { - assert(has_json_paths == key_def->has_json_paths); assert(!has_optional_parts || is_nullable); assert(is_nullable == key_def->is_nullable); assert(has_optional_parts == key_def->has_optional_parts); @@ -711,13 +699,10 @@ tuple_compare_with_key_slowpath(struct tuple *tuple, hint_t tuple_hint, field = tuple_field_raw_by_part(format, tuple_raw, field_map, part, (int)tuple_hint); - } else if (has_json_paths) { + } else { field = tuple_field_raw_by_part(format, tuple_raw, field_map, part, MULTIKEY_NONE); - } else { - field = tuple_field_raw(format, tuple_raw, field_map, - part->fieldno); } if (! is_nullable) { return tuple_compare_field(field, key, part->type, @@ -746,13 +731,10 @@ tuple_compare_with_key_slowpath(struct tuple *tuple, hint_t tuple_hint, field = tuple_field_raw_by_part(format, tuple_raw, field_map, part, (int)tuple_hint); - } else if (has_json_paths) { + } else { field = tuple_field_raw_by_part(format, tuple_raw, field_map, part, MULTIKEY_NONE); - } else { - field = tuple_field_raw(format, tuple_raw, field_map, - part->fieldno); } if (! is_nullable) { rc = tuple_compare_field(field, key, part->type, @@ -1998,8 +1980,7 @@ key_def_set_compare_func_fast(struct key_def *def) if (cmp_wk == NULL) { cmp_wk = is_sequential ? 
tuple_compare_with_key_sequential<false, false> : - tuple_compare_with_key_slowpath<false, false, - false, false>; + tuple_compare_with_key_slowpath<false, false, false>; } def->tuple_compare = cmp; @@ -2020,7 +2001,7 @@ key_def_set_compare_func_plain(struct key_def *def) def->tuple_compare = tuple_compare_slowpath <is_nullable, has_optional_parts, false, false>; def->tuple_compare_with_key = tuple_compare_with_key_slowpath - <is_nullable, has_optional_parts, false, false>; + <is_nullable, has_optional_parts, false>; } } @@ -2033,12 +2014,12 @@ key_def_set_compare_func_json(struct key_def *def) def->tuple_compare = tuple_compare_slowpath <is_nullable, has_optional_parts, true, true>; def->tuple_compare_with_key = tuple_compare_with_key_slowpath - <is_nullable, has_optional_parts, true, true>; + <is_nullable, has_optional_parts, true>; } else { def->tuple_compare = tuple_compare_slowpath <is_nullable, has_optional_parts, true, false>; def->tuple_compare_with_key = tuple_compare_with_key_slowpath - <is_nullable, has_optional_parts, true, false>; + <is_nullable, has_optional_parts, false>; } } diff --git a/src/box/tuple_extract_key.cc b/src/box/tuple_extract_key.cc index c1ad3929e..6f347ac16 100644 --- a/src/box/tuple_extract_key.cc +++ b/src/box/tuple_extract_key.cc @@ -132,10 +132,7 @@ tuple_extract_key_slowpath(struct tuple *tuple, struct key_def *key_def, /* Calculate the key size. 
*/ for (uint32_t i = 0; i < part_count; ++i) { const char *field; - if (!has_json_paths) { - field = tuple_field_raw(format, data, field_map, - key_def->parts[i].fieldno); - } else if (!is_multikey) { + if (!is_multikey) { field = tuple_field_raw_by_part(format, data, field_map, &key_def->parts[i], MULTIKEY_NONE); @@ -184,10 +181,7 @@ tuple_extract_key_slowpath(struct tuple *tuple, struct key_def *key_def, char *key_buf = mp_encode_array(key, part_count); for (uint32_t i = 0; i < part_count; ++i) { const char *field; - if (!has_json_paths) { - field = tuple_field_raw(format, data, field_map, - key_def->parts[i].fieldno); - } else if (!is_multikey) { + if (!is_multikey) { field = tuple_field_raw_by_part(format, data, field_map, &key_def->parts[i], MULTIKEY_NONE); diff --git a/src/box/tuple_hash.cc b/src/box/tuple_hash.cc index 39f89a659..43e37af87 100644 --- a/src/box/tuple_hash.cc +++ b/src/box/tuple_hash.cc @@ -217,7 +217,7 @@ static const hasher_signature hash_arr[] = { #undef HASHER -template <bool has_optional_parts, bool has_json_paths> +template <bool has_optional_parts> uint32_t tuple_hash_slowpath(struct tuple *tuple, struct key_def *key_def); @@ -261,15 +261,9 @@ key_def_set_hash_func(struct key_def *key_def) { slowpath: if (key_def->has_optional_parts) { - if (key_def->has_json_paths) - key_def->tuple_hash = tuple_hash_slowpath<true, true>; - else - key_def->tuple_hash = tuple_hash_slowpath<true, false>; + key_def->tuple_hash = tuple_hash_slowpath<true>; } else { - if (key_def->has_json_paths) - key_def->tuple_hash = tuple_hash_slowpath<false, true>; - else - key_def->tuple_hash = tuple_hash_slowpath<false, false>; + key_def->tuple_hash = tuple_hash_slowpath<false>; } key_def->key_hash = key_hash_slowpath; } @@ -358,11 +352,10 @@ tuple_hash_key_part(uint32_t *ph1, uint32_t *pcarry, struct tuple *tuple, return tuple_hash_field(ph1, pcarry, &field, part->coll); } -template <bool has_optional_parts, bool has_json_paths> +template <bool has_optional_parts> 
uint32_t tuple_hash_slowpath(struct tuple *tuple, struct key_def *key_def) { - assert(has_json_paths == key_def->has_json_paths); assert(has_optional_parts == key_def->has_optional_parts); assert(!key_def->is_multikey); assert(!key_def->for_func_index); @@ -374,13 +367,8 @@ tuple_hash_slowpath(struct tuple *tuple, struct key_def *key_def) const char *tuple_raw = tuple_data(tuple); const uint32_t *field_map = tuple_field_map(tuple); const char *field; - if (has_json_paths) { - field = tuple_field_raw_by_part(format, tuple_raw, field_map, - key_def->parts, MULTIKEY_NONE); - } else { - field = tuple_field_raw(format, tuple_raw, field_map, - prev_fieldno); - } + field = tuple_field_raw_by_part(format, tuple_raw, field_map, + key_def->parts, MULTIKEY_NONE); const char *end = (char *)tuple + tuple_size(tuple); if (has_optional_parts && field == NULL) { total_size += tuple_hash_null(&h, &carry); @@ -395,14 +383,9 @@ tuple_hash_slowpath(struct tuple *tuple, struct key_def *key_def) */ if (prev_fieldno + 1 != key_def->parts[part_id].fieldno) { struct key_part *part = &key_def->parts[part_id]; - if (has_json_paths) { - field = tuple_field_raw_by_part(format, tuple_raw, - field_map, part, - MULTIKEY_NONE); - } else { - field = tuple_field_raw(format, tuple_raw, field_map, - part->fieldno); - } + field = tuple_field_raw_by_part(format, tuple_raw, + field_map, part, + MULTIKEY_NONE); } if (has_optional_parts && (field == NULL || field >= end)) { total_size += tuple_hash_null(&h, &carry); -- 2.24.3 (Apple Git-128) ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Tarantool-patches] [PATCH 2/2] box: use tuple_field_raw_by_part where possible 2020-11-14 17:28 ` [Tarantool-patches] [PATCH 2/2] box: use tuple_field_raw_by_part where possible Serge Petrenko @ 2020-11-20 23:26 ` Vladislav Shpilevoy 2020-11-23 13:51 ` Serge Petrenko 0 siblings, 1 reply; 17+ messages in thread From: Vladislav Shpilevoy @ 2020-11-20 23:26 UTC (permalink / raw) To: Serge Petrenko, korablev; +Cc: tarantool-patches Thanks for the patch! On 14.11.2020 18:28, Serge Petrenko wrote: > tuple_field_raw_by_part allows using the part's offset slot cache to bypass > tuple field lookup in the JSON tree, which is involved even for plain > tuples. I am not sure I understand. How does it work in 1.10 then? It uses tuple_field_by_part_raw, which in the end simply calls tuple_field_raw(format, data, field_map, part->fieldno); Exactly what we had here. Why did anything change? > Since both tuple_field_raw_by_part and tuple_field_raw are basically > aliases to tuple_field_raw_by_path, prefer the former to the latter, > since it takes advantage of the key part's offset slot cache. Besides, I can't find any info on why we need this 'offset slot cache'. The comment above this member simply narrates the code, and I don't understand the actual reason it was added. Did you manage to figure it out? Can you explain, please?
* Re: [Tarantool-patches] [PATCH 2/2] box: use tuple_field_raw_by_part where possible 2020-11-20 23:26 ` Vladislav Shpilevoy @ 2020-11-23 13:51 ` Serge Petrenko 2020-11-24 22:33 ` Vladislav Shpilevoy 0 siblings, 1 reply; 17+ messages in thread From: Serge Petrenko @ 2020-11-23 13:51 UTC (permalink / raw) To: Vladislav Shpilevoy, korablev; +Cc: tarantool-patches 21.11.2020 02:26, Vladislav Shpilevoy writes: > Thanks for the patch! Thanks for your answer! > On 14.11.2020 18:28, Serge Petrenko wrote: >> tuple_field_raw_by_part allows using the part's offset slot cache to bypass >> tuple field lookup in the JSON tree, which is involved even for plain >> tuples. > I am not sure I understand. How does it work in 1.10 then? It uses > tuple_field_by_part_raw, which in the end simply calls > > tuple_field_raw(format, data, field_map, part->fieldno); > > Exactly what we had here. Why did anything change? tuple_field_raw() itself changed. On 1.10 it accesses the field as an array element, then takes its offset slot and finds the shift in the field map. On 2.x it aliases to tuple_field_raw_by_path(): it finds the top-level field in the format's JSON tree, does a couple of checks for JSON paths and multikey indices, and only then takes the offset like 1.10 did. But while tuple_field_raw_by_path() takes an offset slot hint, tuple_field_raw() doesn't use it, so its call always results in searching the JSON tree. >> Since both tuple_field_raw_by_part and tuple_field_raw are basically >> aliases to tuple_field_raw_by_path, prefer the former to the latter, >> since it takes advantage of the key part's offset slot cache. > Besides, I can't find any info on why we need this 'offset slot cache'. > The comment above this member simply narrates the code, and I don't > understand the actual reason it was added. Did you manage to figure it out? > Can you explain, please? Yep.
This cache ameliorates the performance loss from looking the field up in the JSON tree every time we need to access some tuple field by offset. Say, each time you call tuple_compare_slowpath and it gets to comparing field values, you'll look up the field in the JSON tree to know its offset slot. You may have lots of calls to tuple_compare for different tuples but for the same set of key_parts. So instead of looking the field up on every comparison, you look it up once and remember its offset slot in the key part. The next time you need the offset slot, you don't need to look for the field in the JSON tree at all. It looks like this cache was initially intended to save us from some complex tree lookups, when the path is actually not null. But the cache helps even for top-level field access. It helps skip not only the JSON tree lookup, but all the additional checks coming after it. So that's why I made use of it in this patch. -- Serge Petrenko ^ permalink raw reply [flat|nested] 17+ messages in thread
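The caching scheme explained above can be sketched as follows. This is a hypothetical, simplified model: the field names (offset_slot_cache, format_epoch) approximate what the real struct key_part carries, and the expensive JSON-tree walk is replaced by a counting stub so the cache behavior is observable:

```cpp
#include <cstdint>

// Stand-in for struct tuple_format; the epoch changes when the space
// (and hence the format) is altered.
struct Format { uint64_t epoch; };

static int tree_lookups = 0;            // instrumentation for the example

// Stand-in for the expensive JSON-tree walk resolving a field's slot.
static int32_t tree_lookup_slot(const Format &fmt, uint32_t fieldno) {
    (void)fmt;
    ++tree_lookups;
    return (int32_t)fieldno;
}

struct KeyPart {
    uint32_t fieldno = 0;
    int32_t offset_slot_cache = -1;     // -1: nothing cached yet
    uint64_t format_epoch = 0;          // format the cache was filled against
};

// One tree walk per (key_part, format) pair; every later access against the
// same format costs only a couple of comparisons.
static int32_t field_offset_slot(const Format &fmt, KeyPart &part) {
    if (part.format_epoch == fmt.epoch && part.offset_slot_cache >= 0)
        return part.offset_slot_cache;
    part.offset_slot_cache = tree_lookup_slot(fmt, part.fieldno);
    part.format_epoch = fmt.epoch;      // goes stale when the space is altered
    return part.offset_slot_cache;
}
```

The epoch comparison is what keeps a stale slot from being used after an alter: a new format carries a new epoch, so the first access simply re-resolves and re-caches.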
* Re: [Tarantool-patches] [PATCH 2/2] box: use tuple_field_raw_by_part where possible 2020-11-23 13:51 ` Serge Petrenko @ 2020-11-24 22:33 ` Vladislav Shpilevoy 2020-12-01 8:48 ` Serge Petrenko 0 siblings, 1 reply; 17+ messages in thread From: Vladislav Shpilevoy @ 2020-11-24 22:33 UTC (permalink / raw) To: Serge Petrenko, korablev; +Cc: tarantool-patches Thanks for the investigation! >> On 14.11.2020 18:28, Serge Petrenko wrote: >>> tuple_field_raw_by_part allows to use part's offset slot cache to bypass >>> tuple field lookup in a JSON tree, which is involved even for plain >>> tuples. >> I am not sure I understand. How does it work in 1.10 then? It uses >> tuple_field_by_part_raw, which in the end simply calls >> >> tuple_field_raw(format, data, field_map, part->fieldno); >> >> Exactly what we had here. Why did anything change? > > tuple_field_raw() itself changed. On 1.10 it accesses field as an array element, > then takes its offset slot and finds the shift in the field map. > > On 2.x it aliases to tuple_field_raw_by_path(): > finds the top-level field in the format's JSON tree, does a couple of > checks for JSON paths and multikey indices and only then takes the offset > like 1.10 did. > > But while tuple_field_raw_by_path() takes an offset slot hint, > tuple_field_raw() doesn't use it, so its call always results in searching > the JSON tree. Aha, now I see. So tuple_field_raw used the offset slot but with tons of unnecessary checks before that. Looking at how much we gained from removing a couple of 'ifs' from the field map build, we could probably gain another portion of perf by splitting the huge tuple_field_raw_by_path into 2 parts: one for a non-NULL path, and one for a NULL path. The NULL-path one would simply use the root JSON array like you did in the first patch, and both would omit all the path !=/== NULL checks, whose count now is 3!!! in one function, at least.
1) if (path == NULL && fieldno == 0) { mp_decode_array(&tuple); return tuple; } 2) tuple_format_field_by_path() does another path == NULL check inside; 3) if (path != NULL && field == NULL) goto parse; This could be done with templates relatively trivially: we just make the path check a boolean template argument, and check it before checking the path. Or we can literally split this function into 2 independent functions, because it seems they will look quite different. >>> Since both tuple_field_raw_by_part and tuple_field_raw are basically >>> aliases to tuple_field_raw_by_path, prefer the former to the latter, >>> since it takes advantage over key part's offset slot cache. >> Besides, I can't find any info why do we need this 'offset slot cache'. >> The comment above this member simply narrates the code, and I don't >> understand the actual reason it was added. Did you manage to realize it? >> Can you explain, please? > > Yep. This cache ameliorates the performance loss for looking the field up > in JSON tree every time we need to access some tuple field by offset. > > Say, each time you call tuple_compare_slowpath and it gets to comparing > field values, you'll look up the field in JSON tree to know its offset slot. > > You may have lots of calls to tuple_compare for different tuples but for > the same set of key_parts. So instead of looking the field up on every > comparison, you look it up once and remember its offset slot in the > key part. Now next time you need offset slot access you don't need > to look for the field in the JSON tree at all. > > Looks like initially this cache was intended to save us from some complex > tree lookups, when path is actually not null. > > But the cache helps even for top-level field access. It helps skip not only > the JSON tree lookup, but all the additional checks coming after it. > > So that's why I made use of it in this patch. There must be a problem with these offset slots in case of space alter.
If we add a new index or remove another index, the existing tuples will have 2 formats in one index. Some tuples will have the old format, and some will have the new one. With probably different offset slots for the same index parts. Or not, but it does not matter - format->epoch will change anyway. AFAIU, such a simple alter can render the offset slot cache almost useless, because tuple_field_raw_by_part will reset it on each switch from a tuple of one format to a tuple of another. Maybe we don't want to optimize it though, since alter is not a common case. ^ permalink raw reply [flat|nested] 17+ messages in thread
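The template split suggested in the message above can be sketched as follows. This is an illustrative toy, not the real tuple_format API: a plain string array stands in for the MessagePack tuple, and the path-aware branch is stubbed out, since only the dispatch structure matters here:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

// The "is there a JSON path?" decision is made once, as a compile-time
// template argument, instead of being re-checked three times inside one
// big function.
template <bool has_path>
static const char *
field_raw_by_path_impl(const char *tuple_fields[], size_t field_count,
                       uint32_t fieldno, const char *path)
{
    (void)path;  // only the has_path == true instantiation needs it
    if (!has_path) {
        // Plain top-level access: index straight into the root array,
        // no path parsing, no extra branches.
        return fieldno < field_count ? tuple_fields[fieldno] : nullptr;
    }
    // Path-aware branch: here the real code would walk the format's
    // JSON tree; the demo simply pretends nothing is indexed by path.
    return nullptr;
}

// Thin wrapper keeping the old signature: one runtime check, after which
// the compiler sees two specialized bodies with no path branches inside.
static const char *
field_raw_by_path(const char *tuple_fields[], size_t field_count,
                  uint32_t fieldno, const char *path)
{
    if (path == NULL)
        return field_raw_by_path_impl<false>(tuple_fields, field_count,
                                             fieldno, NULL);
    return field_raw_by_path_impl<true>(tuple_fields, field_count,
                                        fieldno, path);
}
```

The alternative mentioned above, two independent functions, gives the same effect without templates; the template form just avoids duplicating the shared tail of the lookup.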
* Re: [Tarantool-patches] [PATCH 2/2] box: use tuple_field_raw_by_part where possible 2020-11-24 22:33 ` Vladislav Shpilevoy @ 2020-12-01 8:48 ` Serge Petrenko 2020-12-01 22:04 ` Vladislav Shpilevoy 0 siblings, 1 reply; 17+ messages in thread From: Serge Petrenko @ 2020-12-01 8:48 UTC (permalink / raw) To: Vladislav Shpilevoy, korablev; +Cc: tarantool-patches 25.11.2020 01:33, Vladislav Shpilevoy wrote: > Thanks for the investigation! > >>> On 14.11.2020 18:28, Serge Petrenko wrote: >>>> tuple_field_raw_by_part allows to use part's offset slot cache to bypass >>>> tuple field lookup in a JSON tree, which is involved even for plain >>>> tuples. >>> I am not sure I understand. How does it work in 1.10 then? It uses >>> tuple_field_by_part_raw, which in the end simply calls >>> >>> tuple_field_raw(format, data, field_map, part->fieldno); >>> >>> Exactly what we had here. Why did anything change? >> tuple_field_raw() itself changed. On 1.10 it accesses field as an array element, >> then takes its offset slot and finds the shift in the field map. >> >> On 2.x it aliases to tuple_field_raw_by_path(): >> finds the top-level field in the format's JSON tree, does a couple of >> checks for JSON paths and multikey indices and only then takes the offset >> like 1.10 did. >> >> But while tuple_field_raw_by_path() takes an offset slot hint, >> tuple_field_raw() doesn't use it, so its call always results in searching >> the JSON tree. > Aha, now I see. So tuple_field_raw used the offset slot but with tons of > unnecessary checks before that. > > Looking at how much did we gain from removing a couple of 'ifs' from the > field map build, we could probably gain another portion of perf by > splitting huge tuple_field_raw_by_path into 2 parts. 1 part with non-NULL > path, and one with NULL path. Non-NULL path would simply use the root JSON > array like you did in the first patch, and would omit all checks > path !=/== NULL, whose count now is 3!!! in one function, at least.
> > 1) > if (path == NULL && fieldno == 0) { > mp_decode_array(&tuple); > return tuple; > } > > 2) > tuple_format_field_by_path() does another path == NULL > inside; > > 3) > if (path != NULL && field == NULL) > goto parse; Thanks for the suggestion! That's what I did and it gave even more perf back. Please see v2 of this patch for details. > Could be done with templates relatively trivial. We just make path > check a boolean template argument, and check it before checking the > path. > > Or we can literally split this function into 2 independent functions, > because it seems they will look quite different. >>>> Since both tuple_field_raw_by_part and tuple_field_raw are basically >>>> aliases to tuple_field_raw_by_path, prefer the former to the latter, >>>> since it takes advantage over key part's offset slot cache. >>> Besides, I can't find any info why do we need this 'offset slot cache'. >>> The comment above this member simply narrates the code, and I don't >>> understand the actual reason it was added. Did you manage to realize it? >>> Can you explain, please? >> Yep. This cache ameliorates the performance loss for looking the field up >> in JSON tree every time we need to access some tuple field by offset. >> >> Say, each time you call tuple_compare_slowpath and it gets to comparing >> field values, you'll look up the field in JSON tree to know its offset slot. >> >> You may have lots of calls to tuple_compare for different tuples but for >> the same set of key_parts. So instead of looking the field up on every >> comparison, you look it up once and remember its offset slot in the >> key part. Now next time you need offset slot access you don't need >> to look for the field in the JSON tree at all. >> >> Looks like initially this cache was intended to save us from some complex >> tree lookups, when path is actually not null. >> >> But the cache helps even for top-level field access. 
It helps skip not only >> the JSON tree lookup, but all the additional checks coming after it. >> >> So that's why I made use of it in this patch. > There must be a problem with these offset slots in case of space alter. If > we add a new index or remove another index, the existing tuples will have > 2 formats in one index. Some tuples will have the old format, and some will > have the new one. With probably different offset slots for the same index > parts. Or not, but it does not matter - format->epoch will change anyway. > > AFAIU, such a simple alter can make the offset slots almost not used, because > tuple_field_raw_by_part will reset them on each switch from tuple of one > format to a tuple of another format. > > Maybe we don't want to optimize it though, since alter is not a common case. I didn't think of that. Don't tuples update their format on the fly? Or do you have to replace the tuple for its format to be updated? -- Serge Petrenko ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Tarantool-patches] [PATCH 2/2] box: use tuple_field_raw_by_part where possible 2020-12-01 8:48 ` Serge Petrenko @ 2020-12-01 22:04 ` Vladislav Shpilevoy 0 siblings, 0 replies; 17+ messages in thread From: Vladislav Shpilevoy @ 2020-12-01 22:04 UTC (permalink / raw) To: Serge Petrenko, korablev; +Cc: tarantool-patches >> Could be done with templates relatively trivial. We just make path >> check a boolean template argument, and check it before checking the >> path. >> >> Or we can literally split this function into 2 independent functions, >> because it seems they will look quite different. >>>>> Since both tuple_field_raw_by_part and tuple_field_raw are basically >>>>> aliases to tuple_field_raw_by_path, prefer the former to the latter, >>>>> since it takes advantage over key part's offset slot cache. >>>> Besides, I can't find any info why do we need this 'offset slot cache'. >>>> The comment above this member simply narrates the code, and I don't >>>> understand the actual reason it was added. Did you manage to realize it? >>>> Can you explain, please? >>> Yep. This cache ameliorates the performance loss for looking the field up >>> in JSON tree every time we need to access some tuple field by offset. >>> >>> Say, each time you call tuple_compare_slowpath and it gets to comparing >>> field values, you'll look up the field in JSON tree to know its offset slot. >>> >>> You may have lots of calls to tuple_compare for different tuples but for >>> the same set of key_parts. So instead of looking the field up on every >>> comparison, you look it up once and remember its offset slot in the >>> key part. Now next time you need offset slot access you don't need >>> to look for the field in the JSON tree at all. >>> >>> Looks like initially this cache was intended to save us from some complex >>> tree lookups, when path is actually not null. >>> >>> But the cache helps even for top-level field access. 
It helps skip not only >>> the JSON tree lookup, but all the additional checks coming after it. >>> >>> So that's why I made use of it in this patch. >> There must be a problem with these offset slots in case of space alter. If >> we add a new index or remove another index, the existing tuples will have >> 2 formats in one index. Some tuples will have the old format, and some will >> have the new one. With probably different offset slots for the same index >> parts. Or not, but it does not matter - format->epoch will change anyway. >> >> AFAIU, such a simple alter can make the offset slots almost not used, because >> tuple_field_raw_by_part will reset them on each switch from tuple of one >> format to a tuple of another format. >> >> Maybe we don't want to optimize it though, since alter is not a common case. > > > I didn't think of that. Aren't tuples updating their format on the fly? > Or you probably have to replace the tuple for its format to be updated? No, a tuple never changes its format. The only way is to replace the old tuples. ^ permalink raw reply [flat|nested] 17+ messages in thread
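The cache-thrashing scenario discussed in this exchange can be illustrated with a toy model: since a tuple keeps its original format forever, an alter leaves old-format and new-format tuples interleaved in one index, and each format switch invalidates the per-part cache. Everything below is illustrative, not the real API:

```cpp
#include <cassert>
#include <cstdint>

// Toy model of the alter scenario: one key_part cache flips between the
// epochs of two coexisting formats. Structures are simplified stand-ins.
struct fmt { uint64_t epoch; };
struct part_cache { uint64_t format_epoch = 0; int misses = 0; };

// One offset-slot access: a miss (i.e. a JSON-tree walk) whenever the
// tuple's format epoch differs from the cached one.
static void access(part_cache *c, const fmt *f)
{
    if (c->format_epoch != f->epoch) {
        ++c->misses;               // would re-run the tree lookup here
        c->format_epoch = f->epoch;
    }
}
```

With sorted runs of same-format tuples the cache misses once per run, but with perfectly interleaved formats every single access misses, which is why the cache degrades after an alter until the old tuples are replaced.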
* Re: [Tarantool-patches] [PATCH 0/2] reduce performance degradation introduced by JSON path indices 2020-11-14 17:28 [Tarantool-patches] [PATCH 0/2] reduce performance degradation introduced by JSON path indices Serge Petrenko 2020-11-14 17:28 ` [Tarantool-patches] [PATCH 1/2] box: speed up tuple_field_map_create Serge Petrenko 2020-11-14 17:28 ` [Tarantool-patches] [PATCH 2/2] box: use tuple_field_raw_by_part where possible Serge Petrenko @ 2020-11-16 7:50 ` Serge Petrenko 2 siblings, 0 replies; 17+ messages in thread From: Serge Petrenko @ 2020-11-16 7:50 UTC (permalink / raw) To: v.shpilevoy, korablev; +Cc: tarantool-patches Issue: https://github.com/tarantool/tarantool/issues/4774 Branch: sp/gh-4774-multikey-refactoring 14.11.2020 20:28, Serge Petrenko wrote: > The patchset fixes two degradations found by measuring snapshot recovery time > for a 1.5G snapshot containing 30M tuples in a memtx space with a simple primary > key and one secondary key over 4 integer and one string field. > > The first degradation manifests itself during snapshot recovery phase (the one > with "3.5M rows processed" messages) and is connected to `memtx_tuple_new` > slowdown due to unoptimised `tuple_field_map_create`. > > First patch deals with this degradation and manages to restore almost all > performance lost since 1.10. (The patched version is only 11% slower than 1.10, > while the current master is 39% slower on this phase). > > The second degradation appears during next snapshot recovery phase, secondary > index building. Here the degradation is rooted in slow tuple field access via > tuple_field_raw(). > > The second patch deals with this issue and manages to restore part of the lost > performance. (The patched version is 14% slower than 1.10 while the current > master is 27% slower).
> > Serge Petrenko (2): > box: speed up tuple_field_map_create > box: use tuple_field_raw_by_part where possible > > src/box/tuple_compare.cc | 37 ++++----------- > src/box/tuple_extract_key.cc | 10 +--- > src/box/tuple_format.c | 91 ++++++++++++++++++++++++++++++++++++ > src/box/tuple_hash.cc | 35 ++++---------- > 4 files changed, 111 insertions(+), 62 deletions(-) > -- Serge Petrenko ^ permalink raw reply [flat|nested] 17+ messages in thread
end of thread, other threads:[~2020-12-02 9:53 UTC | newest] Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2020-11-14 17:28 [Tarantool-patches] [PATCH 0/2] reduce performance degradation introduced by JSON path indices Serge Petrenko 2020-11-14 17:28 ` [Tarantool-patches] [PATCH 1/2] box: speed up tuple_field_map_create Serge Petrenko 2020-11-14 18:00 ` Cyrill Gorcunov 2020-11-16 7:34 ` Serge Petrenko 2020-11-20 23:26 ` Vladislav Shpilevoy 2020-11-23 13:50 ` Serge Petrenko 2020-11-24 22:33 ` Vladislav Shpilevoy 2020-12-01 8:48 ` Serge Petrenko 2020-12-01 22:04 ` Vladislav Shpilevoy 2020-12-02 9:53 ` Serge Petrenko 2020-11-14 17:28 ` [Tarantool-patches] [PATCH 2/2] box: use tuple_field_raw_by_part where possible Serge Petrenko 2020-11-20 23:26 ` Vladislav Shpilevoy 2020-11-23 13:51 ` Serge Petrenko 2020-11-24 22:33 ` Vladislav Shpilevoy 2020-12-01 8:48 ` Serge Petrenko 2020-12-01 22:04 ` Vladislav Shpilevoy 2020-11-16 7:50 ` [Tarantool-patches] [PATCH 0/2] reduce performance degradation introduced by JSON path indices Serge Petrenko