Tarantool development patches archive
* [Tarantool-patches] [PATCH] vinyl: add NULL check of xrow_upsert_execute() retval
@ 2020-05-27  2:56 Nikita Pettik
  2020-05-29 21:24 ` Vladislav Shpilevoy
  0 siblings, 1 reply; 7+ messages in thread
From: Nikita Pettik @ 2020-05-27  2:56 UTC (permalink / raw)
  To: tarantool-patches; +Cc: v.shpilevoy

xrow_upsert_execute() can fail and return NULL for various reasons.
However, in vy_apply_upsert() the result of xrow_upsert_execute() is
used unconditionally, which may lead to a crash. Let's fix that: if
xrow_upsert_execute() fails, return NULL from vy_apply_upsert().

Closes #4957
---
Brief problem description: if a user puts a lot (more than 4000) of
upserts modifying the same tuple into one transaction, it may lead
to a crash. Since the number of update operations exceeds the limit
(BOX_UPDATE_OP_CNT_MAX == 4000), they are not allowed to be applied
(still, all upserts are squashed into one). So xrow_upsert_execute()
can return NULL instead of a valid result, which is dereferenced
later.

Note that the patch is based on the np/gh-1622-skip-invalid-upserts
branch. If we don't skip an invalid upsert which is the result of
squashing 4000 other upserts, dump won't be able to finish due to the
raised error.

As a rule, all upserts modifying the same key are squashed and/or
executed during the dump process. So basically users should not face
a scenario in which a lot of upserts get stuck in a disk run. The only
case is invalid upserts, which are not skipped (in contrast to the
branch containing the fix for 1622) and reside there until squashed
with a DELETE statement (AFAIU). So I believe we should not bother
with the BOX_UPDATE_OP_CNT_MAX restriction as it is mentioned in the
issue.

Branch: https://gitlab.com/tarantool/tarantool/pipelines/149917031
Issue: https://github.com/tarantool/tarantool/issues/4957

@ChangeLog:
 * Fix crash during squash of many (more than 4000) upserts modifying
the same key.

 src/box/vy_upsert.c                          |   4 +
 test/vinyl/gh-4957-too-many-upserts.result   | 118 +++++++++++++++++++
 test/vinyl/gh-4957-too-many-upserts.test.lua |  48 ++++++++
 3 files changed, 170 insertions(+)
 create mode 100644 test/vinyl/gh-4957-too-many-upserts.result
 create mode 100644 test/vinyl/gh-4957-too-many-upserts.test.lua

diff --git a/src/box/vy_upsert.c b/src/box/vy_upsert.c
index 6855b9820..007921bb2 100644
--- a/src/box/vy_upsert.c
+++ b/src/box/vy_upsert.c
@@ -133,6 +133,10 @@ vy_apply_upsert(const struct tuple *new_stmt, const struct tuple *old_stmt,
 					 new_ops_end, result_mp, result_mp_end,
 					 &mp_size, 0, suppress_error,
 					 &column_mask);
+	if (result_mp == NULL) {
+		region_truncate(region, region_svp);
+		return NULL;
+	}
 	result_mp_end = result_mp + mp_size;
 	if (tuple_validate_raw(format, result_mp) != 0) {
 		region_truncate(region, region_svp);
diff --git a/test/vinyl/gh-4957-too-many-upserts.result b/test/vinyl/gh-4957-too-many-upserts.result
new file mode 100644
index 000000000..203329788
--- /dev/null
+++ b/test/vinyl/gh-4957-too-many-upserts.result
@@ -0,0 +1,118 @@
+-- test-run result file version 2
+s = box.schema.create_space('test', {engine = 'vinyl'})
+ | ---
+ | ...
+pk = s:create_index('pk')
+ | ---
+ | ...
+s:insert{1, 1}
+ | ---
+ | - [1, 1]
+ | ...
+box.snapshot()
+ | ---
+ | - ok
+ | ...
+
+-- Let's test number of upserts in one transaction that exceeds
+-- the limit of operations allowed in one update.
+--
+ups_cnt = 5000
+ | ---
+ | ...
+box.begin()
+ | ---
+ | ...
+for i = 1, ups_cnt do s:upsert({1}, {{'&', 2, 1}}) end
+ | ---
+ | ...
+box.commit()
+ | ---
+ | ...
+dump_count = box.stat.vinyl().scheduler.dump_count
+ | ---
+ | ...
+tasks_completed = box.stat.vinyl().scheduler.tasks_completed
+ | ---
+ | ...
+box.snapshot()
+ | ---
+ | - ok
+ | ...
+
+fiber = require('fiber')
+ | ---
+ | ...
+while box.stat.vinyl().scheduler.tasks_inprogress > 0 do fiber.sleep(0.01) end
+ | ---
+ | ...
+
+assert(box.stat.vinyl().scheduler.dump_count - dump_count == 1)
+ | ---
+ | - true
+ | ...
+-- Last :snapshot() triggers both dump and compaction processes.
+--
+assert(box.stat.vinyl().scheduler.tasks_completed - tasks_completed == 2)
+ | ---
+ | - true
+ | ...
+
+s:select()
+ | ---
+ | - - [1, 1]
+ | ...
+
+s:drop()
+ | ---
+ | ...
+
+s = box.schema.create_space('test', {engine = 'vinyl'})
+ | ---
+ | ...
+pk = s:create_index('pk')
+ | ---
+ | ...
+
+tuple = {}
+ | ---
+ | ...
+for i = 1, ups_cnt do tuple[i] = i end
+ | ---
+ | ...
+_ = s:insert(tuple)
+ | ---
+ | ...
+box.snapshot()
+ | ---
+ | - ok
+ | ...
+
+box.begin()
+ | ---
+ | ...
+for k = 1, ups_cnt do s:upsert({1}, {{'+', k, 1}}) end
+ | ---
+ | ...
+box.commit()
+ | ---
+ | ...
+box.snapshot()
+ | ---
+ | - ok
+ | ...
+while box.stat.vinyl().scheduler.tasks_inprogress > 0 do fiber.sleep(0.01) end
+ | ---
+ | ...
+
+-- All upserts are ignored since they are squashed to one update
+-- operation with too many operations.
+--
+assert(s:select()[1][1] == 1)
+ | ---
+ | - true
+ | ...
+
+s:drop()
+ | ---
+ | ...
diff --git a/test/vinyl/gh-4957-too-many-upserts.test.lua b/test/vinyl/gh-4957-too-many-upserts.test.lua
new file mode 100644
index 000000000..6c201f29e
--- /dev/null
+++ b/test/vinyl/gh-4957-too-many-upserts.test.lua
@@ -0,0 +1,48 @@
+s = box.schema.create_space('test', {engine = 'vinyl'})
+pk = s:create_index('pk')
+s:insert{1, 1}
+box.snapshot()
+
+-- Let's test number of upserts in one transaction that exceeds
+-- the limit of operations allowed in one update.
+--
+ups_cnt = 5000
+box.begin()
+for i = 1, ups_cnt do s:upsert({1}, {{'&', 2, 1}}) end
+box.commit()
+dump_count = box.stat.vinyl().scheduler.dump_count
+tasks_completed = box.stat.vinyl().scheduler.tasks_completed
+box.snapshot()
+
+fiber = require('fiber')
+while box.stat.vinyl().scheduler.tasks_inprogress > 0 do fiber.sleep(0.01) end
+
+assert(box.stat.vinyl().scheduler.dump_count - dump_count == 1)
+-- Last :snapshot() triggers both dump and compaction processes.
+--
+assert(box.stat.vinyl().scheduler.tasks_completed - tasks_completed == 2)
+
+s:select()
+
+s:drop()
+
+s = box.schema.create_space('test', {engine = 'vinyl'})
+pk = s:create_index('pk')
+
+tuple = {}
+for i = 1, ups_cnt do tuple[i] = i end
+_ = s:insert(tuple)
+box.snapshot()
+
+box.begin()
+for k = 1, ups_cnt do s:upsert({1}, {{'+', k, 1}}) end
+box.commit()
+box.snapshot()
+while box.stat.vinyl().scheduler.tasks_inprogress > 0 do fiber.sleep(0.01) end
+
+-- All upserts are ignored since they are squashed to one update
+-- operation with too many operations.
+--
+assert(s:select()[1][1] == 1)
+
+s:drop()
\ No newline at end of file
-- 
2.17.1

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [Tarantool-patches] [PATCH] vinyl: add NULL check of xrow_upsert_execute() retval
  2020-05-27  2:56 [Tarantool-patches] [PATCH] vinyl: add NULL check of xrow_upsert_execute() retval Nikita Pettik
@ 2020-05-29 21:24 ` Vladislav Shpilevoy
  2020-05-29 21:34   ` Vladislav Shpilevoy
                     ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Vladislav Shpilevoy @ 2020-05-29 21:24 UTC (permalink / raw)
  To: Nikita Pettik, tarantool-patches

Hi! Thanks for the patch!

While the patch is obviously correct (we need to check NULL
for sure), it solves the problem only partially, and creates
another.

We discussed that verbally, and here is a short summary of what
is happening in the patch, and where we have a tricky problem:
if there are 2 perfectly valid upserts, each with 2.5k operations,
and they are merged into one, both of them are skipped, because
the merged statement is too fat (opcount > 4k).

At first glance this can only happen when the field count > 4k,
because otherwise all the operations would be squashed into something
smaller than or equal to the field count, but that is not so. There
are a few cases when even after a squash the total operation count
will be bigger than the field count:

1) the operations are complex: ':', '&', '|', '^', '#', '!'. The last
two are actually used by people. These operations are not squashed.
The last one, '!', can't be squashed even in theory.

2) the operations have a negative field number. For example,
{'=', -1, ...} assigns a value to the last field of the tuple. But
honestly I don't remember the details. Perhaps they are merged if the
field number is the same in both squashed upserts. But imagine this:
{'=', -1, 100} and {'=', 5, 100}. They look different, but if the
tuple has only 5 fields, they operate on the same field.

That means it is not safe to drop any upsert having more than 4k
operations, because it can consist of many small valid upserts.

I don't know how to fix it in a simple way. The only thing I could
come up with is probably to not squash such fat upserts: just keep
them all on disk until they eventually meet the bottom of their key,
or a terminal statement like REPLACE/INSERT/DELETE.

This is not only about disk, btw. 2 fat upserts could be inserted into
the memory level, turn into an invalid upsert, and that would be skipped.

Here is a test. Create a tuple, and dump it to disk so that it
disappears from the memory level and from the cache:

	box.cfg{}
	s = box.schema.create_space('test', {engine = 'vinyl'})
	pk = s:create_index('pk')
	s:insert({1, 1})
	box.snapshot()

Then restart (to ensure the cache is clear), and create 2 upserts:

	box.cfg{}
	s = box.space.test
	ops = {}
	op = {'=', 2, 100}
	for i = 1, 2500 do table.insert(ops, op) end
	s:upsert({1}, ops)
	op = {'=', -1, 200}
	ops = {}
	for i = 1, 2500 do table.insert(ops, op) end
	s:upsert({1}, ops)

Now if I do select, I get

	tarantool> s:select{}
	---
	- - [1, 200]
	...

But if I do dump + select, I get:

	tarantool> box.snapshot()
	---
	- ok
	...

	tarantool> s:select{}
	---
	- - [1, 100]
	...

During dump the second upsert was skipped even though it was valid.


* Re: [Tarantool-patches] [PATCH] vinyl: add NULL check of xrow_upsert_execute() retval
  2020-05-29 21:24 ` Vladislav Shpilevoy
@ 2020-05-29 21:34   ` Vladislav Shpilevoy
  2020-07-08 12:22     ` Nikita Pettik
  2020-05-29 23:04   ` Konstantin Osipov
  2020-07-08 12:53   ` Nikita Pettik
  2 siblings, 1 reply; 7+ messages in thread
From: Vladislav Shpilevoy @ 2020-05-29 21:34 UTC (permalink / raw)
  To: Nikita Pettik, tarantool-patches

Note: I didn't see an error message in the log in the test below
when it skipped the upsert.

> Here is a test. Create a tuple, and dump it on disk so as it would
> disappear from the memory level and from the cache:
> 
> 	box.cfg{}
> 	s = box.schema.create_space('test', {engine = 'vinyl'})
> 	pk = s:create_index('pk')
> 	s:insert({1, 1})
> 	box.snapshot()
> 
> Then restart (to ensure the cache is clear), and create 2 upserts:
> 
> 	box.cfg{}
> 	s = box.space.test
> 	ops = {}
> 	op = {'=', 2, 100}
> 	for i = 1, 2500 do table.insert(ops, op) end
> 	s:upsert({1}, ops)
> 	op = {'=', -1, 200}
> 	ops = {}
> 	for i = 1, 2500 do table.insert(ops, op) end
> 	s:upsert({1}, ops)
> 
> Now if I do select, I get
> 
> 	tarantool> s:select{}
> 	---
> 	- - [1, 200]
> 	...
> 
> But if I do dump + select, I get:
> 
> 	tarantool> box.snapshot()
> 	---
> 	- ok
> 	...
> 
> 	tarantool> s:select{}
> 	---
> 	- - [1, 100]
> 	...
> 
> During dump the second upsert was skipped even though it was valid.
> 


* Re: [Tarantool-patches] [PATCH] vinyl: add NULL check of xrow_upsert_execute() retval
  2020-05-29 21:24 ` Vladislav Shpilevoy
  2020-05-29 21:34   ` Vladislav Shpilevoy
@ 2020-05-29 23:04   ` Konstantin Osipov
  2020-07-08 12:53   ` Nikita Pettik
  2 siblings, 0 replies; 7+ messages in thread
From: Konstantin Osipov @ 2020-05-29 23:04 UTC (permalink / raw)
  To: Vladislav Shpilevoy; +Cc: tarantool-patches

* Vladislav Shpilevoy <v.shpilevoy@tarantool.org> [20/05/30 00:29]:

> I don't know how to fix it in a simple way. The only thing I could
> come up with is probably don't squash such fat upserts. Just keep
> them all on the disk, until they eventually meet bottom of their key,
> or a terminal statement like REPLACE/INSERT/DELETE.

I wrote earlier about this problem in @tarantoolru.

We need to balance the op limit and the squash threshold (how many
upserts we keep before forcing a squash) so that accumulation before
a squash can never exceed the op limit.

Basically, we should force a squash before we have a chance of
creating an invalid upsert.

We could even have a separate constant for the number of ops in a
*user* upsert, which ==
max_number_of_ops / max_upserts_before_forced_squash.

-- 
Konstantin Osipov, Moscow, Russia


* Re: [Tarantool-patches] [PATCH] vinyl: add NULL check of xrow_upsert_execute() retval
  2020-05-29 21:34   ` Vladislav Shpilevoy
@ 2020-07-08 12:22     ` Nikita Pettik
  0 siblings, 0 replies; 7+ messages in thread
From: Nikita Pettik @ 2020-07-08 12:22 UTC (permalink / raw)
  To: Vladislav Shpilevoy; +Cc: tarantool-patches

On 29 May 23:34, Vladislav Shpilevoy wrote:
> Note, I didn't see an error message in the log in the test below,
> when it skipped the upsert.
> 
> > Here is a test. Create a tuple, and dump it on disk so as it would
> > disappear from the memory level and from the cache:
> > 
> > 	box.cfg{}
> > 	s = box.schema.create_space('test', {engine = 'vinyl'})
> > 	pk = s:create_index('pk')
> > 	s:insert({1, 1})
> > 	box.snapshot()
> > 
> > Then restart (to ensure the cache is clear), and create 2 upserts:
> > 
> > 	box.cfg{}
> > 	s = box.space.test
> > 	ops = {}
> > 	op = {'=', 2, 100}
> > 	for i = 1, 2500 do table.insert(ops, op) end
> > 	s:upsert({1}, ops)
> > 	op = {'=', -1, 200}
> > 	ops = {}
> > 	for i = 1, 2500 do table.insert(ops, op) end
> > 	s:upsert({1}, ops)
> > 
> > Now if I do select, I get
> > 
> > 	tarantool> s:select{}
> > 	---
> > 	- - [1, 200]
> > 	...
> > 
> > But if I do dump + select, I get:
> > 
> > 	tarantool> box.snapshot()
> > 	---
> > 	- ok
> > 	...
> > 
> > 	tarantool> s:select{}
> > 	---
> > 	- - [1, 100]
> > 	...
> > 
> > During dump the second upsert was skipped even though it was valid.
> >


Oh, sorry, I forgot to reply to this message. The reason why the
upsert is skipped is described here: https://github.com/tarantool/tarantool/issues/5087

In fact, it is an independent issue and is not connected with the current patch.


* Re: [Tarantool-patches] [PATCH] vinyl: add NULL check of xrow_upsert_execute() retval
  2020-05-29 21:24 ` Vladislav Shpilevoy
  2020-05-29 21:34   ` Vladislav Shpilevoy
  2020-05-29 23:04   ` Konstantin Osipov
@ 2020-07-08 12:53   ` Nikita Pettik
  2020-07-09 11:56     ` Nikita Pettik
  2 siblings, 1 reply; 7+ messages in thread
From: Nikita Pettik @ 2020-07-08 12:53 UTC (permalink / raw)
  To: Vladislav Shpilevoy; +Cc: tarantool-patches

On 29 May 23:24, Vladislav Shpilevoy wrote:
> Hi! Thanks for the patch!
> 
> While the patch is obviously correct (we need to check NULL
> for sure), it solves the problem only partially, and creates
> another.

Okay, I suggest the following. Let's push the patch with minor test
changes as is (in the scope of the 1.10.7 release), but leave the
issue open. As a result, we will get rid of the crash and postpone
the reconsideration of upsert application a bit, till 1.10.8.
We are going to rework upsert (according to the plan defined in #5107:
https://github.com/tarantool/tarantool/issues/5107). Here's the changed
test (https://github.com/tarantool/tarantool/tree/np/gh-4957-master):

diff --git a/test/vinyl/gh-4957-too-many-upserts.test.lua b/test/vinyl/gh-4957-too-many-upserts.test.lua
new file mode 100644
index 000000000..e5adfe41c
--- /dev/null
+++ b/test/vinyl/gh-4957-too-many-upserts.test.lua
@@ -0,0 +1,38 @@
+s = box.schema.create_space('test', {engine = 'vinyl'})
+pk = s:create_index('pk')
+s:insert{1, 1}
+box.snapshot()
+
+-- Let's test number of upserts in one transaction that exceeds
+-- the limit of operations allowed in one update.
+--
+ups_cnt = 5000
+box.begin()
+for i = 1, ups_cnt do s:upsert({1}, {{'&', 2, 1}}) end
+box.commit()
+-- Upserts are not able to squash, so scheduler will get stuck.
+-- So let's not waste much time here, just check that no crash
+-- takes place.
+--
+box.snapshot()
+
+fiber = require('fiber')
+fiber.sleep(0.01)
+
+s:drop()
+
+s = box.schema.create_space('test', {engine = 'vinyl'})
+pk = s:create_index('pk')
+
+tuple = {}
+for i = 1, ups_cnt do tuple[i] = i end
+_ = s:insert(tuple)
+box.snapshot()
+
+box.begin()
+for k = 1, ups_cnt do s:upsert({1}, {{'+', k, 1}}) end
+box.commit()
+box.snapshot()
+fiber.sleep(0.01)
+

Are you guys okay with this suggestion?
 
> We discussed that verbally, and here is a short resume of what
> is happening in the patch, and where we have a tricky problem:
> if there are 2 perfectly valid upserts, each with 2.5k operations,
> and they are merged into one, both of them are skipped, because
> after merge they become too fat - opcount > 4k.
> 
> It looks at first that this can only happen when field count > 4k,
> because otherwise all the operations would be squashed into something
> smaller or equal than field count, but it is not. There are a few
> cases, when even after squash total operation count will be bigger
> than field count:
> 
> 1) operations are complex - ':', '&', '|', '^', '#', '!'. The last
> two operations are actually used by people. These operations are not
> squashed. The last one - '!' - can't be squashed even in theory.
> 
> 2) operations have negative field number. For example, {'=', -1, ...} -
> assign a value to the last field in the tuple. But honestly I don't
> remember. Perhaps they are merged, if in both squashed upserts the
> field number is the same. But imagine this: {'=', -1, 100}, and
> {'=', 5, 100}. They look different, but if the tuple has only 5 fields,
> they operate on the same field.
> 
> That means it is not safe to drop any upsert having more than 4k
> operations. Because it can consist of many small valid upserts.
> 
> I don't know how to fix it in a simple way. The only thing I could
> come up with is probably don't squash such fat upserts. Just keep
> them all on the disk, until they eventually meet bottom of their key,
> or a terminal statement like REPLACE/INSERT/DELETE.
> 
> This is not only about disk, btw. 2 fat upserts could be inserted into
> the memory level, turn into an invalid upsert, and that will be skipped.
> 
> Here is a test. Create a tuple, and dump it on disk so as it would
> disappear from the memory level and from the cache:
> 
> 	box.cfg{}
> 	s = box.schema.create_space('test', {engine = 'vinyl'})
> 	pk = s:create_index('pk')
> 	s:insert({1, 1})
> 	box.snapshot()
> 
> Then restart (to ensure the cache is clear), and create 2 upserts:
> 
> 	box.cfg{}
> 	s = box.space.test
> 	ops = {}
> 	op = {'=', 2, 100}
> 	for i = 1, 2500 do table.insert(ops, op) end
> 	s:upsert({1}, ops)
> 	op = {'=', -1, 200}
> 	ops = {}
> 	for i = 1, 2500 do table.insert(ops, op) end
> 	s:upsert({1}, ops)
> 
> Now if I do select, I get
> 
> 	tarantool> s:select{}
> 	---
> 	- - [1, 200]
> 	...
> 
> But if I do dump + select, I get:
> 
> 	tarantool> box.snapshot()
> 	---
> 	- ok
> 	...
> 
> 	tarantool> s:select{}
> 	---
> 	- - [1, 100]
> 	...
> 
> During dump the second upsert was skipped even though it was valid.


* Re: [Tarantool-patches] [PATCH] vinyl: add NULL check of xrow_upsert_execute() retval
  2020-07-08 12:53   ` Nikita Pettik
@ 2020-07-09 11:56     ` Nikita Pettik
  0 siblings, 0 replies; 7+ messages in thread
From: Nikita Pettik @ 2020-07-09 11:56 UTC (permalink / raw)
  To: Vladislav Shpilevoy; +Cc: tarantool-patches

On 08 Jul 12:53, Nikita Pettik wrote:
> On 29 May 23:24, Vladislav Shpilevoy wrote:
> > Hi! Thanks for the patch!
> > 
> > While the patch is obviously correct (we need to check NULL
> > for sure), it solves the problem only partially, and creates
> > another.
> 
> Okay, I suggest following. Let's push patch with minor test
> changes as is (in scope of 1.10.7 release), but leave issue open.
> As a result, we will get rid of crash, and postpone a bit
> reconsideration of upsert application till 1.10.8.
> We are going to rework upsert (according to the plan defined in #5107
> https://github.com/tarantool/tarantool/issues/5107). Here's changed
> test (https://github.com/tarantool/tarantool/tree/np/gh-4957-master):
> 
> diff --git a/test/vinyl/gh-4957-too-many-upserts.test.lua b/test/vinyl/gh-4957-too-many-upserts.test.lua
> new file mode 100644
> index 000000000..e5adfe41c
> --- /dev/null
> +++ b/test/vinyl/gh-4957-too-many-upserts.test.lua
> @@ -0,0 +1,38 @@
> +s = box.schema.create_space('test', {engine = 'vinyl'})
> +pk = s:create_index('pk')
> +s:insert{1, 1}
> +box.snapshot()
> +
> +-- Let's test number of upserts in one transaction that exceeds
> +-- the limit of operations allowed in one update.
> +--
> +ups_cnt = 5000
> +box.begin()
> +for i = 1, ups_cnt do s:upsert({1}, {{'&', 2, 1}}) end
> +box.commit()
> +-- Upserts are not able to squash, so scheduler will get stuck.
> +-- So let's not waste much time here, just check that no crash
> +-- takes place.
> +--
> +box.snapshot()
> +
> +fiber = require('fiber')
> +fiber.sleep(0.01)
> +
> +s:drop()
> +
> +s = box.schema.create_space('test', {engine = 'vinyl'})
> +pk = s:create_index('pk')
> +
> +tuple = {}
> +for i = 1, ups_cnt do tuple[i] = i end
> +_ = s:insert(tuple)
> +box.snapshot()
> +
> +box.begin()
> +for k = 1, ups_cnt do s:upsert({1}, {{'+', k, 1}}) end
> +box.commit()
> +box.snapshot()
> +fiber.sleep(0.01)
> +
> 
> Are you guys okay with this suggestion?
>

Pushed to master, 2.4, 2.3 and 1.10. The branch is dropped, the
changelogs are updated correspondingly. Also, I had to slightly modify
the test for the 2.4, 2.3 and 1.10 versions, since we have to
unthrottle the scheduler manually to process the snapshot. As a
result, the test has become release-disabled.
  


end of thread, other threads:[~2020-07-09 11:56 UTC | newest]

Thread overview: 7+ messages
2020-05-27  2:56 [Tarantool-patches] [PATCH] vinyl: add NULL check of xrow_upsert_execute() retval Nikita Pettik
2020-05-29 21:24 ` Vladislav Shpilevoy
2020-05-29 21:34   ` Vladislav Shpilevoy
2020-07-08 12:22     ` Nikita Pettik
2020-05-29 23:04   ` Konstantin Osipov
2020-07-08 12:53   ` Nikita Pettik
2020-07-09 11:56     ` Nikita Pettik
