* [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine
@ 2020-07-08 15:14 Aleksandr Lyapunov
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 01/16] Update license file (2020) Aleksandr Lyapunov
` (17 more replies)
0 siblings, 18 replies; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-08 15:14 UTC (permalink / raw)
To: tarantool-patches; +Cc: v.shpilevoy
Implement a transaction engine for memtx that allows yielding inside a transaction, etc.
This is a draft; work is still in progress.
GH issue: https://github.com/tarantool/tarantool/issues/4897
GH branch: https://github.com/tarantool/tarantool/tree/alyapunov/gh-4897-memtx-tx-engine-v6
Difference from previous version:
- txm_story simplified
- comments added
- most of the tests pass now
Aleksandr Lyapunov (16):
Update license file (2020)
Check data_offset overflow in struct tuple
tx: introduce dirty tuples
vinyl: rename tx_manager -> vy_tx_manager
tx: save txn in txn_stmt
tx: add TX status
tx: save preserve old tuple flag in txn_stmt
tx: introduce tx manager
tx: introduce prepare sequence number
tx: introduce txn_stmt_destroy
tx: introduce conflict tracker
introduce tuple smart pointers
tx: introduce txm_story
tx: indexes
tx: introduce point conflict tracker
tx: use new tx manager in memtx
LICENSE | 2 +-
src/box/errcode.h | 1 +
src/box/memtx_bitset.c | 28 +-
src/box/memtx_engine.c | 69 ++-
src/box/memtx_hash.c | 60 ++-
src/box/memtx_rtree.c | 27 +-
src/box/memtx_space.c | 29 ++
src/box/memtx_tree.c | 73 ++-
src/box/tuple.c | 12 +-
src/box/tuple.h | 62 ++-
src/box/tuple_format.c | 4 +-
src/box/txn.c | 942 +++++++++++++++++++++++++++++++++-
src/box/txn.h | 192 +++++++
src/box/vinyl.c | 30 +-
src/box/vy_stmt.c | 9 +
src/box/vy_tx.c | 49 +-
src/box/vy_tx.h | 33 +-
src/main.cc | 3 +
test/box/error.result | 1 +
test/box/huge_field_map.result | 49 ++
test/box/huge_field_map.test.lua | 22 +
test/box/huge_field_map_long.result | 51 ++
test/box/huge_field_map_long.test.lua | 28 +
test/box/suite.ini | 1 +
24 files changed, 1673 insertions(+), 104 deletions(-)
create mode 100644 test/box/huge_field_map.result
create mode 100644 test/box/huge_field_map.test.lua
create mode 100644 test/box/huge_field_map_long.result
create mode 100644 test/box/huge_field_map_long.test.lua
--
2.7.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [Tarantool-patches] [PATCH 01/16] Update license file (2020)
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
@ 2020-07-08 15:14 ` Aleksandr Lyapunov
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 02/16] Check data_offset overflow in struct tuple Aleksandr Lyapunov
` (16 subsequent siblings)
17 siblings, 0 replies; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-08 15:14 UTC (permalink / raw)
To: tarantool-patches; +Cc: v.shpilevoy
---
LICENSE | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/LICENSE b/LICENSE
index d797c05..734ba6c 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,4 +1,4 @@
-Copyright 2010-2016 Tarantool AUTHORS: please see AUTHORS file.
+Copyright 2010-2020 Tarantool AUTHORS: please see AUTHORS file.
Redistribution and use in source and binary forms, with or
without modification, are permitted provided that the following
--
2.7.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [Tarantool-patches] [PATCH 02/16] Check data_offset overflow in struct tuple
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 01/16] Update license file (2020) Aleksandr Lyapunov
@ 2020-07-08 15:14 ` Aleksandr Lyapunov
2020-07-12 17:15 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 03/16] tx: introduce dirty tuples Aleksandr Lyapunov
` (15 subsequent siblings)
17 siblings, 1 reply; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-08 15:14 UTC (permalink / raw)
To: tarantool-patches; +Cc: v.shpilevoy
The data_offset member of struct tuple is uint16_t now. At the same
time this field is calculated from field_map_size, which is uint32_t.
That could lead to overflows and crashes.
Fixes #5084
---
src/box/errcode.h | 1 +
src/box/memtx_engine.c | 19 ++++++++-----
src/box/tuple.c | 11 ++++++--
src/box/vy_stmt.c | 8 ++++++
test/box/error.result | 1 +
test/box/huge_field_map.result | 49 +++++++++++++++++++++++++++++++++
test/box/huge_field_map.test.lua | 22 +++++++++++++++
test/box/huge_field_map_long.result | 51 +++++++++++++++++++++++++++++++++++
test/box/huge_field_map_long.test.lua | 28 +++++++++++++++++++
test/box/suite.ini | 1 +
10 files changed, 183 insertions(+), 8 deletions(-)
create mode 100644 test/box/huge_field_map.result
create mode 100644 test/box/huge_field_map.test.lua
create mode 100644 test/box/huge_field_map_long.result
create mode 100644 test/box/huge_field_map_long.test.lua
diff --git a/src/box/errcode.h b/src/box/errcode.h
index d1e4d02..938d411 100644
--- a/src/box/errcode.h
+++ b/src/box/errcode.h
@@ -266,6 +266,7 @@ struct errcode_record {
/*211 */_(ER_WRONG_QUERY_ID, "Prepared statement with id %u does not exist") \
/*212 */_(ER_SEQUENCE_NOT_STARTED, "Sequence '%s' is not started") \
/*213 */_(ER_NO_SUCH_SESSION_SETTING, "Session setting %s doesn't exist") \
+ /*214 */_(ER_TUPLE_METADATA_IS_TOO_BIG, "Can't create tuple: metadata size %u is too big") \
/*
* !IMPORTANT! Please follow instructions at start of the file
diff --git a/src/box/memtx_engine.c b/src/box/memtx_engine.c
index 6ce8cac..b5b6b14 100644
--- a/src/box/memtx_engine.c
+++ b/src/box/memtx_engine.c
@@ -1125,6 +1125,18 @@ memtx_tuple_new(struct tuple_format *format, const char *data, const char *end)
if (tuple_field_map_create(format, data, true, &builder) != 0)
goto end;
uint32_t field_map_size = field_map_build_size(&builder);
+ /*
+ * Data offset is calculated from the begin of the struct
+ * tuple base, not from memtx_tuple, because the struct
+ * tuple is not the first field of the memtx_tuple.
+ */
+ uint32_t data_offset = sizeof(struct tuple) + field_map_size;
+ if (data_offset > UINT16_MAX) {
+ /** tuple->data_offset is 16 bits */
+ diag_set(ClientError, ER_TUPLE_METADATA_IS_TOO_BIG,
+ data_offset);
+ goto end;
+ }
size_t tuple_len = end - data;
size_t total = sizeof(struct memtx_tuple) + field_map_size + tuple_len;
@@ -1157,12 +1169,7 @@ memtx_tuple_new(struct tuple_format *format, const char *data, const char *end)
tuple->bsize = tuple_len;
tuple->format_id = tuple_format_id(format);
tuple_format_ref(format);
- /*
- * Data offset is calculated from the begin of the struct
- * tuple base, not from memtx_tuple, because the struct
- * tuple is not the first field of the memtx_tuple.
- */
- tuple->data_offset = sizeof(struct tuple) + field_map_size;
+ tuple->data_offset = data_offset;
char *raw = (char *) tuple + tuple->data_offset;
field_map_build(&builder, raw - field_map_size);
memcpy(raw, data, tuple_len);
diff --git a/src/box/tuple.c b/src/box/tuple.c
index 1f52a8c..e48ee08 100644
--- a/src/box/tuple.c
+++ b/src/box/tuple.c
@@ -83,6 +83,13 @@ runtime_tuple_new(struct tuple_format *format, const char *data, const char *end
if (tuple_field_map_create(format, data, true, &builder) != 0)
goto end;
uint32_t field_map_size = field_map_build_size(&builder);
+ uint32_t data_offset = sizeof(struct tuple) + field_map_size;
+ if (data_offset > UINT16_MAX) {
+ /** tuple->data_offset is 16 bits */
+ diag_set(ClientError, ER_TUPLE_METADATA_IS_TOO_BIG,
+ data_offset);
+ goto end;
+ }
size_t data_len = end - data;
size_t total = sizeof(struct tuple) + field_map_size + data_len;
@@ -97,8 +104,8 @@ runtime_tuple_new(struct tuple_format *format, const char *data, const char *end
tuple->bsize = data_len;
tuple->format_id = tuple_format_id(format);
tuple_format_ref(format);
- tuple->data_offset = sizeof(struct tuple) + field_map_size;
- char *raw = (char *) tuple + tuple->data_offset;
+ tuple->data_offset = data_offset;
+ char *raw = (char *) tuple + data_offset;
field_map_build(&builder, raw - field_map_size);
memcpy(raw, data, data_len);
say_debug("%s(%zu) = %p", __func__, data_len, tuple);
diff --git a/src/box/vy_stmt.c b/src/box/vy_stmt.c
index 392f3da..f59c418 100644
--- a/src/box/vy_stmt.c
+++ b/src/box/vy_stmt.c
@@ -159,6 +159,14 @@ static struct tuple *
vy_stmt_alloc(struct tuple_format *format, uint32_t data_offset, uint32_t bsize)
{
assert(data_offset >= sizeof(struct vy_stmt) + format->field_map_size);
+
+ if (data_offset > UINT16_MAX) {
+ /** tuple->data_offset is 16 bits */
+ diag_set(ClientError, ER_TUPLE_METADATA_IS_TOO_BIG,
+ data_offset);
+ return NULL;
+ }
+
struct vy_stmt_env *env = format->engine;
uint32_t total_size = data_offset + bsize;
if (unlikely(total_size > env->max_tuple_size)) {
diff --git a/test/box/error.result b/test/box/error.result
index 2196fa5..a166824 100644
--- a/test/box/error.result
+++ b/test/box/error.result
@@ -432,6 +432,7 @@ t;
| 211: box.error.WRONG_QUERY_ID
| 212: box.error.SEQUENCE_NOT_STARTED
| 213: box.error.NO_SUCH_SESSION_SETTING
+ | 214: box.error.TUPLE_METADATA_IS_TOO_BIG
| ...
test_run:cmd("setopt delimiter ''");
diff --git a/test/box/huge_field_map.result b/test/box/huge_field_map.result
new file mode 100644
index 0000000..11b4da3
--- /dev/null
+++ b/test/box/huge_field_map.result
@@ -0,0 +1,49 @@
+-- test-run result file version 2
+env = require('test_run')
+ | ---
+ | ...
+test_run = env.new()
+ | ---
+ | ...
+
+s = box.schema.space.create('test', {engine = 'memtx'})
+ | ---
+ | ...
+i1 = s:create_index('pk')
+ | ---
+ | ...
+i2 = s:create_index('mk', {parts={{'[2][*]', 'uint'}}})
+ | ---
+ | ...
+test_run:cmd("setopt delimiter ';'")
+ | ---
+ | - true
+ | ...
+function test()
+ local t = {1, {}}
+ for i = 1,65536 do
+ table.insert(t[2], i)
+ if (i % 4096 == 0) then
+ s:replace(t)
+ end
+ end
+end;
+ | ---
+ | ...
+test_run:cmd("setopt delimiter ''");
+ | ---
+ | - true
+ | ...
+
+pcall(test) -- must fail but not crash
+ | ---
+ | - false
+ | - 'Can''t create tuple: metadata size 65558 is too big'
+ | ...
+
+test = nil
+ | ---
+ | ...
+s:drop()
+ | ---
+ | ...
diff --git a/test/box/huge_field_map.test.lua b/test/box/huge_field_map.test.lua
new file mode 100644
index 0000000..9042751
--- /dev/null
+++ b/test/box/huge_field_map.test.lua
@@ -0,0 +1,22 @@
+env = require('test_run')
+test_run = env.new()
+
+s = box.schema.space.create('test', {engine = 'memtx'})
+i1 = s:create_index('pk')
+i2 = s:create_index('mk', {parts={{'[2][*]', 'uint'}}})
+test_run:cmd("setopt delimiter ';'")
+function test()
+ local t = {1, {}}
+ for i = 1,65536 do
+ table.insert(t[2], i)
+ if (i % 4096 == 0) then
+ s:replace(t)
+ end
+ end
+end;
+test_run:cmd("setopt delimiter ''");
+
+pcall(test) -- must fail but not crash
+
+test = nil
+s:drop()
\ No newline at end of file
diff --git a/test/box/huge_field_map_long.result b/test/box/huge_field_map_long.result
new file mode 100644
index 0000000..d7971ae
--- /dev/null
+++ b/test/box/huge_field_map_long.result
@@ -0,0 +1,51 @@
+-- test-run result file version 2
+env = require('test_run')
+ | ---
+ | ...
+test_run = env.new()
+ | ---
+ | ...
+
+s = box.schema.space.create('test', {engine = 'memtx'})
+ | ---
+ | ...
+test_run:cmd("setopt delimiter ';'")
+ | ---
+ | - true
+ | ...
+function test()
+ local t = {}
+ local k = {}
+ for i = 1,128 do
+ local parts = {}
+ for j = 0,127 do
+ table.insert(parts, {i * 128 - j, 'uint'})
+ table.insert(t, 1)
+ end
+ if i == 1 then k = table.deepcopy(t) end
+ s:create_index('test'..i, {parts = parts})
+ if i % 16 == 0 then
+ s:replace(t)
+ s:delete(k)
+ end
+ end
+end;
+ | ---
+ | ...
+test_run:cmd("setopt delimiter ''");
+ | ---
+ | - true
+ | ...
+
+pcall(test) -- must fail but not crash
+ | ---
+ | - false
+ | - 'Can''t create tuple: metadata size 65542 is too big'
+ | ...
+
+test = nil
+ | ---
+ | ...
+s:drop()
+ | ---
+ | ...
diff --git a/test/box/huge_field_map_long.test.lua b/test/box/huge_field_map_long.test.lua
new file mode 100644
index 0000000..6415615
--- /dev/null
+++ b/test/box/huge_field_map_long.test.lua
@@ -0,0 +1,28 @@
+env = require('test_run')
+test_run = env.new()
+
+s = box.schema.space.create('test', {engine = 'memtx'})
+test_run:cmd("setopt delimiter ';'")
+function test()
+ local t = {}
+ local k = {}
+ for i = 1,128 do
+ local parts = {}
+ for j = 0,127 do
+ table.insert(parts, {i * 128 - j, 'uint'})
+ table.insert(t, 1)
+ end
+ if i == 1 then k = table.deepcopy(t) end
+ s:create_index('test'..i, {parts = parts})
+ if i % 16 == 0 then
+ s:replace(t)
+ s:delete(k)
+ end
+ end
+end;
+test_run:cmd("setopt delimiter ''");
+
+pcall(test) -- must fail but not crash
+
+test = nil
+s:drop()
\ No newline at end of file
diff --git a/test/box/suite.ini b/test/box/suite.ini
index de8f5a7..801a91e 100644
--- a/test/box/suite.ini
+++ b/test/box/suite.ini
@@ -3,6 +3,7 @@ core = tarantool
description = Database tests
script = box.lua
disabled = rtree_errinj.test.lua tuple_bench.test.lua
+long_run = huge_field_map_long.test.lua
config = engine.cfg
release_disabled = errinj.test.lua errinj_index.test.lua rtree_errinj.test.lua upsert_errinj.test.lua iproto_stress.test.lua gh-4648-func-load-unload.test.lua
lua_libs = lua/fifo.lua lua/utils.lua lua/bitset.lua lua/index_random_test.lua lua/push.lua lua/identifier.lua
--
2.7.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [Tarantool-patches] [PATCH 03/16] tx: introduce dirty tuples
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 01/16] Update license file (2020) Aleksandr Lyapunov
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 02/16] Check data_offset overflow in struct tuple Aleksandr Lyapunov
@ 2020-07-08 15:14 ` Aleksandr Lyapunov
2020-07-12 17:15 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 04/16] vinyl: rename tx_manager -> vy_tx_manager Aleksandr Lyapunov
` (14 subsequent siblings)
17 siblings, 1 reply; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-08 15:14 UTC (permalink / raw)
To: tarantool-patches; +Cc: v.shpilevoy
---
src/box/memtx_engine.c | 5 +++--
src/box/tuple.c | 5 +++--
src/box/tuple.h | 12 ++++++++++--
src/box/tuple_format.c | 4 ++--
src/box/vy_stmt.c | 5 +++--
test/box/huge_field_map.result | 2 +-
test/box/huge_field_map_long.result | 2 +-
7 files changed, 23 insertions(+), 12 deletions(-)
diff --git a/src/box/memtx_engine.c b/src/box/memtx_engine.c
index b5b6b14..dfd6fce 100644
--- a/src/box/memtx_engine.c
+++ b/src/box/memtx_engine.c
@@ -1131,8 +1131,8 @@ memtx_tuple_new(struct tuple_format *format, const char *data, const char *end)
* tuple is not the first field of the memtx_tuple.
*/
uint32_t data_offset = sizeof(struct tuple) + field_map_size;
- if (data_offset > UINT16_MAX) {
- /** tuple->data_offset is 16 bits */
+ if (data_offset > INT16_MAX) {
+ /** tuple->data_offset is 15 bits */
diag_set(ClientError, ER_TUPLE_METADATA_IS_TOO_BIG,
data_offset);
goto end;
@@ -1170,6 +1170,7 @@ memtx_tuple_new(struct tuple_format *format, const char *data, const char *end)
tuple->format_id = tuple_format_id(format);
tuple_format_ref(format);
tuple->data_offset = data_offset;
+ tuple->is_dirty = false;
char *raw = (char *) tuple + tuple->data_offset;
field_map_build(&builder, raw - field_map_size);
memcpy(raw, data, tuple_len);
diff --git a/src/box/tuple.c b/src/box/tuple.c
index e48ee08..9f0f24c 100644
--- a/src/box/tuple.c
+++ b/src/box/tuple.c
@@ -84,8 +84,8 @@ runtime_tuple_new(struct tuple_format *format, const char *data, const char *end
goto end;
uint32_t field_map_size = field_map_build_size(&builder);
uint32_t data_offset = sizeof(struct tuple) + field_map_size;
- if (data_offset > UINT16_MAX) {
- /** tuple->data_offset is 16 bits */
+ if (data_offset > INT16_MAX) {
+ /** tuple->data_offset is 15 bits */
diag_set(ClientError, ER_TUPLE_METADATA_IS_TOO_BIG,
data_offset);
goto end;
@@ -105,6 +105,7 @@ runtime_tuple_new(struct tuple_format *format, const char *data, const char *end
tuple->format_id = tuple_format_id(format);
tuple_format_ref(format);
tuple->data_offset = data_offset;
+ tuple->is_dirty = false;
char *raw = (char *) tuple + data_offset;
field_map_build(&builder, raw - field_map_size);
memcpy(raw, data, data_len);
diff --git a/src/box/tuple.h b/src/box/tuple.h
index 9a88772..4752323 100644
--- a/src/box/tuple.h
+++ b/src/box/tuple.h
@@ -319,7 +319,13 @@ struct PACKED tuple
/**
* Offset to the MessagePack from the begin of the tuple.
*/
- uint16_t data_offset;
+ uint16_t data_offset : 15;
+ /**
+ * The tuple (if it's found in index for example) could be invisible
+ * for current transactions. The flag means that the tuple must
+ * be clarified by transaction engine.
+ */
+ bool is_dirty : 1;
/**
* Engine specific fields and offsets array concatenated
* with MessagePack fields array.
@@ -1081,8 +1087,10 @@ tuple_unref(struct tuple *tuple)
assert(tuple->refs - 1 >= 0);
if (unlikely(tuple->is_bigref))
tuple_unref_slow(tuple);
- else if (--tuple->refs == 0)
+ else if (--tuple->refs == 0) {
+ assert(!tuple->is_dirty);
tuple_delete(tuple);
+ }
}
extern struct tuple *box_tuple_last;
diff --git a/src/box/tuple_format.c b/src/box/tuple_format.c
index 68ec2a7..6ebc855 100644
--- a/src/box/tuple_format.c
+++ b/src/box/tuple_format.c
@@ -481,8 +481,8 @@ tuple_format_create(struct tuple_format *format, struct key_def * const *keys,
assert(tuple_format_field(format, 0)->offset_slot ==
TUPLE_OFFSET_SLOT_NIL);
size_t field_map_size = -current_slot * sizeof(uint32_t);
- if (field_map_size > UINT16_MAX) {
- /** tuple->data_offset is 16 bits */
+ if (field_map_size > INT16_MAX) {
+ /** tuple->data_offset is 15 bits */
diag_set(ClientError, ER_INDEX_FIELD_COUNT_LIMIT,
-current_slot);
return -1;
diff --git a/src/box/vy_stmt.c b/src/box/vy_stmt.c
index f59c418..92e0aa1 100644
--- a/src/box/vy_stmt.c
+++ b/src/box/vy_stmt.c
@@ -160,8 +160,8 @@ vy_stmt_alloc(struct tuple_format *format, uint32_t data_offset, uint32_t bsize)
{
assert(data_offset >= sizeof(struct vy_stmt) + format->field_map_size);
- if (data_offset > UINT16_MAX) {
- /** tuple->data_offset is 16 bits */
+ if (data_offset > INT16_MAX) {
+ /** tuple->data_offset is 15 bits */
diag_set(ClientError, ER_TUPLE_METADATA_IS_TOO_BIG,
data_offset);
return NULL;
@@ -198,6 +198,7 @@ vy_stmt_alloc(struct tuple_format *format, uint32_t data_offset, uint32_t bsize)
tuple_format_ref(format);
tuple->bsize = bsize;
tuple->data_offset = data_offset;
+ tuple->is_dirty = false;
vy_stmt_set_lsn(tuple, 0);
vy_stmt_set_type(tuple, 0);
vy_stmt_set_flags(tuple, 0);
diff --git a/test/box/huge_field_map.result b/test/box/huge_field_map.result
index 11b4da3..45022cc 100644
--- a/test/box/huge_field_map.result
+++ b/test/box/huge_field_map.result
@@ -38,7 +38,7 @@ test_run:cmd("setopt delimiter ''");
pcall(test) -- must fail but not crash
| ---
| - false
- | - 'Can''t create tuple: metadata size 65558 is too big'
+ | - 'Can''t create tuple: metadata size 32790 is too big'
| ...
test = nil
diff --git a/test/box/huge_field_map_long.result b/test/box/huge_field_map_long.result
index d7971ae..cb47900 100644
--- a/test/box/huge_field_map_long.result
+++ b/test/box/huge_field_map_long.result
@@ -40,7 +40,7 @@ test_run:cmd("setopt delimiter ''");
pcall(test) -- must fail but not crash
| ---
| - false
- | - 'Can''t create tuple: metadata size 65542 is too big'
+ | - 'Can''t create tuple: metadata size 32774 is too big'
| ...
test = nil
--
2.7.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [Tarantool-patches] [PATCH 04/16] vinyl: rename tx_manager -> vy_tx_manager
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
` (2 preceding siblings ...)
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 03/16] tx: introduce dirty tuples Aleksandr Lyapunov
@ 2020-07-08 15:14 ` Aleksandr Lyapunov
2020-07-12 17:14 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 05/16] tx: save txn in txn_stmt Aleksandr Lyapunov
` (13 subsequent siblings)
17 siblings, 1 reply; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-08 15:14 UTC (permalink / raw)
To: tarantool-patches; +Cc: v.shpilevoy
Unlike other vinyl objects, which are named with the "vy_" prefix,
its transaction manager (tx_manager) has no such prefix.
It should have one in order to avoid a name conflict with the global
tx manager.
Needed for #4897
---
src/box/vinyl.c | 30 +++++++++++++++---------------
src/box/vy_tx.c | 49 +++++++++++++++++++++++++------------------------
src/box/vy_tx.h | 33 +++++++++++++++++----------------
3 files changed, 57 insertions(+), 55 deletions(-)
diff --git a/src/box/vinyl.c b/src/box/vinyl.c
index fd9b7e6..6e32331 100644
--- a/src/box/vinyl.c
+++ b/src/box/vinyl.c
@@ -98,7 +98,7 @@ struct vy_env {
/** Recovery status */
enum vy_status status;
/** TX manager */
- struct tx_manager *xm;
+ struct vy_tx_manager *xm;
/** Upsert squash queue */
struct vy_squash_queue *squash_queue;
/** Memory pool for index iterator. */
@@ -267,7 +267,7 @@ vy_info_append_regulator(struct vy_env *env, struct info_handler *h)
static void
vy_info_append_tx(struct vy_env *env, struct info_handler *h)
{
- struct tx_manager *xm = env->xm;
+ struct vy_tx_manager *xm = env->xm;
info_table_begin(h, "tx");
@@ -292,7 +292,7 @@ static void
vy_info_append_memory(struct vy_env *env, struct info_handler *h)
{
info_table_begin(h, "memory");
- info_append_int(h, "tx", tx_manager_mem_used(env->xm));
+ info_append_int(h, "tx", vy_tx_manager_mem_used(env->xm));
info_append_int(h, "level0", lsregion_used(&env->mem_env.allocator));
info_append_int(h, "tuple_cache", env->cache_env.mem_used);
info_append_int(h, "page_index", env->lsm_env.page_index_size);
@@ -509,7 +509,7 @@ vinyl_engine_memory_stat(struct engine *engine, struct engine_memory_stat *stat)
stat->index += env->lsm_env.bloom_size;
stat->index += env->lsm_env.page_index_size;
stat->cache += env->cache_env.mem_used;
- stat->tx += tx_manager_mem_used(env->xm);
+ stat->tx += vy_tx_manager_mem_used(env->xm);
}
static void
@@ -517,7 +517,7 @@ vinyl_engine_reset_stat(struct engine *engine)
{
struct vy_env *env = vy_env(engine);
- struct tx_manager *xm = env->xm;
+ struct vy_tx_manager *xm = env->xm;
memset(&xm->stat, 0, sizeof(xm->stat));
vy_scheduler_reset_stat(&env->scheduler);
@@ -996,7 +996,7 @@ vinyl_space_invalidate(struct space *space)
* request bail out early, without dereferencing the space.
*/
bool unused;
- tx_manager_abort_writers_for_ddl(env->xm, space, &unused);
+ vy_tx_manager_abort_writers_for_ddl(env->xm, space, &unused);
}
/** Argument passed to vy_check_format_on_replace(). */
@@ -1064,7 +1064,7 @@ vinyl_space_check_format(struct space *space, struct tuple_format *format)
* be checked with on_replace trigger so we abort them.
*/
bool need_wal_sync;
- tx_manager_abort_writers_for_ddl(env->xm, space, &need_wal_sync);
+ vy_tx_manager_abort_writers_for_ddl(env->xm, space, &need_wal_sync);
if (!need_wal_sync && vy_lsm_is_empty(pk))
return 0; /* space is empty, nothing to do */
@@ -2486,7 +2486,7 @@ static void
vinyl_engine_switch_to_ro(struct engine *engine)
{
struct vy_env *env = vy_env(engine);
- tx_manager_abort_writers_for_ro(env->xm);
+ vy_tx_manager_abort_writers_for_ro(env->xm);
}
/* }}} Public API of transaction control */
@@ -2573,7 +2573,7 @@ vy_env_new(const char *path, size_t memory,
goto error_path;
}
- e->xm = tx_manager_new();
+ e->xm = vy_tx_manager_new();
if (e->xm == NULL)
goto error_xm;
e->squash_queue = vy_squash_queue_new();
@@ -2609,7 +2609,7 @@ error_lsm_env:
vy_scheduler_destroy(&e->scheduler);
vy_squash_queue_delete(e->squash_queue);
error_squash_queue:
- tx_manager_delete(e->xm);
+ vy_tx_manager_delete(e->xm);
error_xm:
free(e->path);
error_path:
@@ -2623,7 +2623,7 @@ vy_env_delete(struct vy_env *e)
vy_regulator_destroy(&e->regulator);
vy_scheduler_destroy(&e->scheduler);
vy_squash_queue_delete(e->squash_queue);
- tx_manager_delete(e->xm);
+ vy_tx_manager_delete(e->xm);
free(e->path);
mempool_destroy(&e->iterator_pool);
vy_run_env_destroy(&e->run_env);
@@ -2882,7 +2882,7 @@ vinyl_engine_end_recovery(struct engine *engine)
/*
* During recovery we skip statements that have
* been dumped to disk - see vy_is_committed() -
- * so it may turn out that tx_manager::lsn stays
+ * so it may turn out that vy_tx_manager::lsn stays
* behind the instance vclock while we need it
* to be up-to-date once recovery is complete,
* because we use it while building an index to
@@ -3706,7 +3706,7 @@ vinyl_snapshot_iterator_free(struct snapshot_iterator *base)
struct vy_lsm *lsm = it->iterator.lsm;
struct vy_env *env = vy_env(lsm->base.engine);
vy_read_iterator_close(&it->iterator);
- tx_manager_destroy_read_view(env->xm, it->rv);
+ vy_tx_manager_destroy_read_view(env->xm, it->rv);
vy_lsm_unref(lsm);
free(it);
}
@@ -3726,7 +3726,7 @@ vinyl_index_create_snapshot_iterator(struct index *base)
it->base.next = vinyl_snapshot_iterator_next;
it->base.free = vinyl_snapshot_iterator_free;
- it->rv = tx_manager_read_view(env->xm);
+ it->rv = vy_tx_manager_read_view(env->xm);
if (it->rv == NULL) {
free(it);
return NULL;
@@ -4149,7 +4149,7 @@ vinyl_space_build_index(struct space *src_space, struct index *new_index,
* be checked with on_replace trigger so we abort them.
*/
bool need_wal_sync;
- tx_manager_abort_writers_for_ddl(env->xm, src_space, &need_wal_sync);
+ vy_tx_manager_abort_writers_for_ddl(env->xm, src_space, &need_wal_sync);
if (!need_wal_sync && vy_lsm_is_empty(pk))
return 0; /* space is empty, nothing to do */
diff --git a/src/box/vy_tx.c b/src/box/vy_tx.c
index 846c632..17c79d6 100644
--- a/src/box/vy_tx.c
+++ b/src/box/vy_tx.c
@@ -98,13 +98,13 @@ vy_global_read_view_create(struct vy_read_view *rv, int64_t lsn)
rv->refs = 0;
}
-struct tx_manager *
-tx_manager_new(void)
+struct vy_tx_manager *
+vy_tx_manager_new(void)
{
- struct tx_manager *xm = calloc(1, sizeof(*xm));
+ struct vy_tx_manager *xm = calloc(1, sizeof(*xm));
if (xm == NULL) {
diag_set(OutOfMemory, sizeof(*xm),
- "malloc", "struct tx_manager");
+ "malloc", "struct vy_tx_manager");
return NULL;
}
@@ -128,7 +128,7 @@ tx_manager_new(void)
}
void
-tx_manager_delete(struct tx_manager *xm)
+vy_tx_manager_delete(struct vy_tx_manager *xm)
{
mempool_destroy(&xm->read_view_mempool);
mempool_destroy(&xm->read_interval_mempool);
@@ -138,7 +138,7 @@ tx_manager_delete(struct tx_manager *xm)
}
size_t
-tx_manager_mem_used(struct tx_manager *xm)
+vy_tx_manager_mem_used(struct vy_tx_manager *xm)
{
struct mempool_stats mstats;
size_t ret = 0;
@@ -157,7 +157,7 @@ tx_manager_mem_used(struct tx_manager *xm)
}
struct vy_read_view *
-tx_manager_read_view(struct tx_manager *xm)
+vy_tx_manager_read_view(struct vy_tx_manager *xm)
{
struct vy_read_view *rv;
/*
@@ -195,7 +195,8 @@ tx_manager_read_view(struct tx_manager *xm)
}
void
-tx_manager_destroy_read_view(struct tx_manager *xm, struct vy_read_view *rv)
+vy_tx_manager_destroy_read_view(struct vy_tx_manager *xm,
+ struct vy_read_view *rv)
{
if (rv == xm->p_global_read_view)
return;
@@ -209,7 +210,7 @@ tx_manager_destroy_read_view(struct tx_manager *xm, struct vy_read_view *rv)
static struct txv *
txv_new(struct vy_tx *tx, struct vy_lsm *lsm, struct vy_entry entry)
{
- struct tx_manager *xm = tx->xm;
+ struct vy_tx_manager *xm = tx->xm;
struct txv *v = mempool_alloc(&xm->txv_mempool);
if (v == NULL) {
diag_set(OutOfMemory, sizeof(*v), "mempool", "struct txv");
@@ -234,7 +235,7 @@ txv_new(struct vy_tx *tx, struct vy_lsm *lsm, struct vy_entry entry)
static void
txv_delete(struct txv *v)
{
- struct tx_manager *xm = v->tx->xm;
+ struct vy_tx_manager *xm = v->tx->xm;
xm->write_set_size -= tuple_size(v->entry.stmt);
vy_stmt_counter_unacct_tuple(&v->lsm->stat.txw.count, v->entry.stmt);
tuple_unref(v->entry.stmt);
@@ -248,7 +249,7 @@ txv_delete(struct txv *v)
static void
vy_read_interval_acct(struct vy_read_interval *interval)
{
- struct tx_manager *xm = interval->tx->xm;
+ struct vy_tx_manager *xm = interval->tx->xm;
xm->read_set_size += tuple_size(interval->left.stmt);
if (interval->left.stmt != interval->right.stmt)
xm->read_set_size += tuple_size(interval->right.stmt);
@@ -260,7 +261,7 @@ vy_read_interval_acct(struct vy_read_interval *interval)
static void
vy_read_interval_unacct(struct vy_read_interval *interval)
{
- struct tx_manager *xm = interval->tx->xm;
+ struct vy_tx_manager *xm = interval->tx->xm;
xm->read_set_size -= tuple_size(interval->left.stmt);
if (interval->left.stmt != interval->right.stmt)
xm->read_set_size -= tuple_size(interval->right.stmt);
@@ -271,7 +272,7 @@ vy_read_interval_new(struct vy_tx *tx, struct vy_lsm *lsm,
struct vy_entry left, bool left_belongs,
struct vy_entry right, bool right_belongs)
{
- struct tx_manager *xm = tx->xm;
+ struct vy_tx_manager *xm = tx->xm;
struct vy_read_interval *interval;
interval = mempool_alloc(&xm->read_interval_mempool);
if (interval == NULL) {
@@ -296,7 +297,7 @@ vy_read_interval_new(struct vy_tx *tx, struct vy_lsm *lsm,
static void
vy_read_interval_delete(struct vy_read_interval *interval)
{
- struct tx_manager *xm = interval->tx->xm;
+ struct vy_tx_manager *xm = interval->tx->xm;
vy_read_interval_unacct(interval);
vy_lsm_unref(interval->lsm);
tuple_unref(interval->left.stmt);
@@ -316,7 +317,7 @@ vy_tx_read_set_free_cb(vy_tx_read_set_t *read_set,
}
void
-vy_tx_create(struct tx_manager *xm, struct vy_tx *tx)
+vy_tx_create(struct vy_tx_manager *xm, struct vy_tx *tx)
{
tx->last_stmt_space = NULL;
stailq_create(&tx->log);
@@ -339,7 +340,7 @@ vy_tx_destroy(struct vy_tx *tx)
trigger_run(&tx->on_destroy, NULL);
trigger_destroy(&tx->on_destroy);
- tx_manager_destroy_read_view(tx->xm, tx->read_view);
+ vy_tx_manager_destroy_read_view(tx->xm, tx->read_view);
struct txv *v, *tmp;
stailq_foreach_entry_safe(v, tmp, &tx->log, next_in_log)
@@ -392,7 +393,7 @@ vy_tx_send_to_read_view(struct vy_tx *tx, struct txv *v)
/* already in (earlier) read view */
if (vy_tx_is_in_read_view(abort))
continue;
- struct vy_read_view *rv = tx_manager_read_view(tx->xm);
+ struct vy_read_view *rv = vy_tx_manager_read_view(tx->xm);
if (rv == NULL)
return -1;
abort->read_view = rv;
@@ -422,7 +423,7 @@ vy_tx_abort_readers(struct vy_tx *tx, struct txv *v)
}
struct vy_tx *
-vy_tx_begin(struct tx_manager *xm)
+vy_tx_begin(struct vy_tx_manager *xm)
{
struct vy_tx *tx = mempool_alloc(&xm->tx_mempool);
if (unlikely(tx == NULL)) {
@@ -662,7 +663,7 @@ vy_tx_handle_deferred_delete(struct vy_tx *tx, struct txv *v)
int
vy_tx_prepare(struct vy_tx *tx)
{
- struct tx_manager *xm = tx->xm;
+ struct vy_tx_manager *xm = tx->xm;
if (tx->state == VINYL_TX_ABORT) {
/* Conflict is already accounted - see vy_tx_abort(). */
@@ -793,7 +794,7 @@ void
vy_tx_commit(struct vy_tx *tx, int64_t lsn)
{
assert(tx->state == VINYL_TX_COMMIT);
- struct tx_manager *xm = tx->xm;
+ struct vy_tx_manager *xm = tx->xm;
xm->stat.commit++;
@@ -833,7 +834,7 @@ vy_tx_rollback_after_prepare(struct vy_tx *tx)
{
assert(tx->state == VINYL_TX_COMMIT);
- struct tx_manager *xm = tx->xm;
+ struct vy_tx_manager *xm = tx->xm;
/*
* There are two reasons of rollback_after_prepare:
@@ -878,7 +879,7 @@ vy_tx_rollback_after_prepare(struct vy_tx *tx)
void
vy_tx_rollback(struct vy_tx *tx)
{
- struct tx_manager *xm = tx->xm;
+ struct vy_tx_manager *xm = tx->xm;
xm->stat.rollback++;
@@ -1140,7 +1141,7 @@ vy_tx_set(struct vy_tx *tx, struct vy_lsm *lsm, struct tuple *stmt)
}
void
-tx_manager_abort_writers_for_ddl(struct tx_manager *xm, struct space *space,
+vy_tx_manager_abort_writers_for_ddl(struct vy_tx_manager *xm, struct space *space,
bool *need_wal_sync)
{
*need_wal_sync = false;
@@ -1166,7 +1167,7 @@ tx_manager_abort_writers_for_ddl(struct tx_manager *xm, struct space *space,
}
void
-tx_manager_abort_writers_for_ro(struct tx_manager *xm)
+vy_tx_manager_abort_writers_for_ro(struct vy_tx_manager *xm)
{
struct vy_tx *tx;
rlist_foreach_entry(tx, &xm->writers, in_writers) {
diff --git a/src/box/vy_tx.h b/src/box/vy_tx.h
index 3144c92..4fac5f6 100644
--- a/src/box/vy_tx.h
+++ b/src/box/vy_tx.h
@@ -54,7 +54,7 @@ extern "C" {
struct space;
struct tuple;
-struct tx_manager;
+struct vy_tx_manager;
struct vy_mem;
struct vy_tx;
struct vy_history;
@@ -140,16 +140,16 @@ write_set_search_key(write_set_t *tree, struct vy_lsm *lsm,
/** Transaction object. */
struct vy_tx {
- /** Link in tx_manager::writers. */
+ /** Link in vy_tx_manager::writers. */
struct rlist in_writers;
/** Transaction manager. */
- struct tx_manager *xm;
+ struct vy_tx_manager *xm;
/**
* Pointer to the space affected by the last prepared statement.
* We need it so that we can abort a transaction on DDL even
* if it hasn't inserted anything into the write set yet (e.g.
* yielded on unique check) and therefore would otherwise be
- * ignored by tx_manager_abort_writers_for_ddl().
+ * ignored by vy_tx_manager_abort_writers_for_ddl().
*/
struct space *last_stmt_space;
/**
@@ -209,7 +209,7 @@ vy_tx_read_view(struct vy_tx *tx)
}
/** Transaction manager object. */
-struct tx_manager {
+struct vy_tx_manager {
/**
* The last committed log sequence number known to
* vinyl. Updated in vy_commit().
@@ -278,24 +278,25 @@ struct tx_manager {
};
/** Allocate a tx manager object. */
-struct tx_manager *
-tx_manager_new(void);
+struct vy_tx_manager *
+vy_tx_manager_new(void);
/** Delete a tx manager object. */
void
-tx_manager_delete(struct tx_manager *xm);
+vy_tx_manager_delete(struct vy_tx_manager *xm);
/** Return total amount of memory used by active transactions. */
size_t
-tx_manager_mem_used(struct tx_manager *xm);
+vy_tx_manager_mem_used(struct vy_tx_manager *xm);
/** Create or reuse an instance of a read view. */
struct vy_read_view *
-tx_manager_read_view(struct tx_manager *xm);
+vy_tx_manager_read_view(struct vy_tx_manager *xm);
/** Dereference and possibly destroy a read view. */
void
-tx_manager_destroy_read_view(struct tx_manager *xm, struct vy_read_view *rv);
+vy_tx_manager_destroy_read_view(struct vy_tx_manager *xm,
+ struct vy_read_view *rv);
/**
* Abort all rw transactions that affect the given space
@@ -307,19 +308,19 @@ tx_manager_destroy_read_view(struct tx_manager *xm, struct vy_read_view *rv);
* to call wal_sync() to flush them.
*/
void
-tx_manager_abort_writers_for_ddl(struct tx_manager *xm, struct space *space,
- bool *need_wal_sync);
+vy_tx_manager_abort_writers_for_ddl(struct vy_tx_manager *xm,
+ struct space *space, bool *need_wal_sync);
/**
* Abort all local rw transactions that haven't reached WAL yet.
* Called before switching to read-only mode.
*/
void
-tx_manager_abort_writers_for_ro(struct tx_manager *xm);
+vy_tx_manager_abort_writers_for_ro(struct vy_tx_manager *xm);
/** Initialize a tx object. */
void
-vy_tx_create(struct tx_manager *xm, struct vy_tx *tx);
+vy_tx_create(struct vy_tx_manager *xm, struct vy_tx *tx);
/** Destroy a tx object. */
void
@@ -327,7 +328,7 @@ vy_tx_destroy(struct vy_tx *tx);
/** Begin a new transaction. */
struct vy_tx *
-vy_tx_begin(struct tx_manager *xm);
+vy_tx_begin(struct vy_tx_manager *xm);
/** Prepare a transaction to be committed. */
int
--
2.7.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [Tarantool-patches] [PATCH 05/16] tx: save txn in txn_stmt
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
` (3 preceding siblings ...)
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 04/16] vinyl: rename tx_manager -> vy_tx_manager Aleksandr Lyapunov
@ 2020-07-08 15:14 ` Aleksandr Lyapunov
2020-07-12 17:15 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 06/16] tx: add TX status Aleksandr Lyapunov
` (12 subsequent siblings)
17 siblings, 1 reply; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-08 15:14 UTC (permalink / raw)
To: tarantool-patches; +Cc: v.shpilevoy
---
src/box/txn.c | 1 +
src/box/txn.h | 2 ++
2 files changed, 3 insertions(+)
diff --git a/src/box/txn.c b/src/box/txn.c
index 765dbd2..22013cc 100644
--- a/src/box/txn.c
+++ b/src/box/txn.c
@@ -101,6 +101,7 @@ txn_stmt_new(struct region *region)
}
/* Initialize members explicitly to save time on memset() */
+ stmt->txn = in_txn();
stmt->space = NULL;
stmt->old_tuple = NULL;
stmt->new_tuple = NULL;
diff --git a/src/box/txn.h b/src/box/txn.h
index 3f6d79d..5b264f0 100644
--- a/src/box/txn.h
+++ b/src/box/txn.h
@@ -85,6 +85,8 @@ struct txn_stmt {
/** A linked list of all statements. */
struct stailq_entry next;
+ /** Owner of that statement. */
+ struct txn *txn;
/** Undo info. */
struct space *space;
struct tuple *old_tuple;
--
2.7.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [Tarantool-patches] [PATCH 06/16] tx: add TX status
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
` (4 preceding siblings ...)
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 05/16] tx: save txn in txn_stmt Aleksandr Lyapunov
@ 2020-07-08 15:14 ` Aleksandr Lyapunov
2020-07-12 17:15 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 07/16] tx: save preserve old tuple flag in txn_stmt Aleksandr Lyapunov
` (11 subsequent siblings)
17 siblings, 1 reply; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-08 15:14 UTC (permalink / raw)
To: tarantool-patches; +Cc: v.shpilevoy
---
src/box/txn.c | 5 +++++
src/box/txn.h | 34 ++++++++++++++++++++++++++++++++++
2 files changed, 39 insertions(+)
diff --git a/src/box/txn.c b/src/box/txn.c
index 22013cc..b26613c 100644
--- a/src/box/txn.c
+++ b/src/box/txn.c
@@ -223,6 +223,7 @@ txn_begin(void)
txn->flags = 0;
txn->in_sub_stmt = 0;
txn->id = ++tsn;
+ txn->status = TXN_INPROGRESS;
txn->signature = -1;
txn->engine = NULL;
txn->engine_tx = NULL;
@@ -426,6 +427,7 @@ txn_complete(struct txn *txn)
* IPROTO_NOP statements only.
*/
if (txn->signature < 0) {
+ txn->status = TXN_ABORTED;
/* Undo the transaction. */
struct txn_stmt *stmt;
stailq_reverse(&txn->stmts);
@@ -436,6 +438,7 @@ txn_complete(struct txn *txn)
if (txn_has_flag(txn, TXN_HAS_TRIGGERS))
txn_run_rollback_triggers(txn, &txn->on_rollback);
} else {
+ txn->status = TXN_COMMITTED;
/* Commit the transaction. */
if (txn->engine != NULL)
engine_commit(txn->engine, txn);
@@ -580,6 +583,7 @@ txn_prepare(struct txn *txn)
trigger_clear(&txn->fiber_on_yield);
txn->start_tm = ev_monotonic_now(loop());
+ txn->status = TXN_PREPARED;
return 0;
}
@@ -708,6 +712,7 @@ void
txn_rollback(struct txn *txn)
{
assert(txn == in_txn());
+ txn->status = TXN_ABORTED;
trigger_clear(&txn->fiber_on_stop);
if (!txn_has_flag(txn, TXN_CAN_YIELD))
trigger_clear(&txn->fiber_on_yield);
diff --git a/src/box/txn.h b/src/box/txn.h
index 5b264f0..1394bfb 100644
--- a/src/box/txn.h
+++ b/src/box/txn.h
@@ -77,6 +77,38 @@ enum {
};
/**
+ * Status of a transaction.
+ */
+enum txn_status {
+ /**
+ * Initial state of a TX. The only state in which a TX is allowed
+ * to perform read or write actions.
+ */
+ TXN_INPROGRESS,
+ /**
+ * The TX has passed conflict checks and is ready to be committed.
+ */
+ TXN_PREPARED,
+ /**
+ * The TX was aborted by another TX due to a conflict.
+ */
+ TXN_CONFLICTED,
+ /**
+ * The TX was read-only, got a conflict and was sent to a read view.
+ * It remains read-only and never participates in conflict resolution again.
+ */
+ TXN_IN_READ_VIEW,
+ /**
+ * The TX was committed.
+ */
+ TXN_COMMITTED,
+ /**
+ * The TX was aborted.
+ */
+ TXN_ABORTED,
+};
+
+/**
* A single statement of a multi-statement
* transaction: undo and redo info.
*/
@@ -173,6 +205,8 @@ struct txn {
* Valid IDs start from 1.
*/
int64_t id;
+ /** Status of the TX */
+ enum txn_status status;
/** List of statements in a transaction. */
struct stailq stmts;
/** Number of new rows without an assigned LSN. */
--
2.7.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [Tarantool-patches] [PATCH 07/16] tx: save preserve old tuple flag in txn_stmt
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
` (5 preceding siblings ...)
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 06/16] tx: add TX status Aleksandr Lyapunov
@ 2020-07-08 15:14 ` Aleksandr Lyapunov
2020-07-12 17:14 ` Vladislav Shpilevoy
2020-07-14 23:46 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 08/16] tx: introduce tx manager Aleksandr Lyapunov
` (10 subsequent siblings)
17 siblings, 2 replies; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-08 15:14 UTC (permalink / raw)
To: tarantool-patches; +Cc: v.shpilevoy
---
src/box/memtx_space.c | 17 +++++++++++++++++
src/box/txn.c | 3 +++
src/box/txn.h | 9 +++++++++
3 files changed, 29 insertions(+)
diff --git a/src/box/memtx_space.c b/src/box/memtx_space.c
index 8755920..5820c40 100644
--- a/src/box/memtx_space.c
+++ b/src/box/memtx_space.c
@@ -316,6 +316,10 @@ memtx_space_execute_replace(struct space *space, struct txn *txn,
if (stmt->new_tuple == NULL)
return -1;
tuple_ref(stmt->new_tuple);
+
+ if (mode == DUP_INSERT)
+ stmt->preserve_old_tuple = true;
+
if (memtx_space->replace(space, NULL, stmt->new_tuple,
mode, &stmt->old_tuple) != 0)
return -1;
@@ -342,6 +346,13 @@ memtx_space_execute_delete(struct space *space, struct txn *txn,
struct tuple *old_tuple;
if (index_get(pk, key, part_count, &old_tuple) != 0)
return -1;
+
+ /*
+ * We must delete exactly old_tuple because we return it as
+ * the result.
+ */
+ stmt->preserve_old_tuple = true;
+
if (old_tuple != NULL &&
memtx_space->replace(space, old_tuple, NULL,
DUP_REPLACE_OR_INSERT, &stmt->old_tuple) != 0)
@@ -390,6 +401,9 @@ memtx_space_execute_update(struct space *space, struct txn *txn,
if (stmt->new_tuple == NULL)
return -1;
tuple_ref(stmt->new_tuple);
+
+ stmt->preserve_old_tuple = true;
+
if (memtx_space->replace(space, old_tuple, stmt->new_tuple,
DUP_REPLACE, &stmt->old_tuple) != 0)
return -1;
@@ -496,6 +510,9 @@ memtx_space_execute_upsert(struct space *space, struct txn *txn,
stmt->new_tuple = NULL;
}
}
+
+ stmt->preserve_old_tuple = true;
+
/*
* It's OK to use DUP_REPLACE_OR_INSERT: we don't risk
* inserting a new tuple if the old one exists, since
diff --git a/src/box/txn.c b/src/box/txn.c
index b26613c..5fc34f8 100644
--- a/src/box/txn.c
+++ b/src/box/txn.c
@@ -108,6 +108,7 @@ txn_stmt_new(struct region *region)
stmt->engine_savepoint = NULL;
stmt->row = NULL;
stmt->has_triggers = false;
+ stmt->preserve_old_tuple = false;
return stmt;
}
@@ -350,6 +351,8 @@ txn_commit_stmt(struct txn *txn, struct request *request)
*/
if (stmt->space != NULL && !rlist_empty(&stmt->space->on_replace) &&
stmt->space->run_triggers && (stmt->old_tuple || stmt->new_tuple)) {
+ /* Triggers see old_tuple, and that tuple must remain the same. */
+ stmt->preserve_old_tuple = true;
int rc = 0;
if(!space_is_temporary(stmt->space)) {
rc = trigger_run(&stmt->space->on_replace, txn);
diff --git a/src/box/txn.h b/src/box/txn.h
index 1394bfb..e860e1e 100644
--- a/src/box/txn.h
+++ b/src/box/txn.h
@@ -129,6 +129,15 @@ struct txn_stmt {
struct xrow_header *row;
/** on_commit and/or on_rollback list is not empty. */
bool has_triggers;
+ /**
+ * Whether the stmt, upon commit, must replace exactly its old_tuple.
+ * Explanation: by the moment the statement is committed, the actual
+ * state of the space may have changed due to commits of other
+ * transaction(s). Some statements require the tuple replaced at the
+ * moment of commit to be exactly the tuple that was replaced at the
+ * moment of execution; some do not.
+ */
+ bool preserve_old_tuple;
/** Commit/rollback triggers associated with this statement. */
struct rlist on_commit;
struct rlist on_rollback;
--
2.7.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [Tarantool-patches] [PATCH 08/16] tx: introduce tx manager
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
` (6 preceding siblings ...)
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 07/16] tx: save preserve old tuple flag in txn_stmt Aleksandr Lyapunov
@ 2020-07-08 15:14 ` Aleksandr Lyapunov
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 09/16] tx: introduce prepare sequence number Aleksandr Lyapunov
` (9 subsequent siblings)
17 siblings, 0 replies; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-08 15:14 UTC (permalink / raw)
To: tarantool-patches; +Cc: v.shpilevoy
---
src/box/txn.c | 16 ++++++++++++++++
src/box/txn.h | 6 ++++++
src/main.cc | 3 +++
3 files changed, 25 insertions(+)
diff --git a/src/box/txn.c b/src/box/txn.c
index 5fc34f8..0ba86cb 100644
--- a/src/box/txn.c
+++ b/src/box/txn.c
@@ -37,6 +37,12 @@
#include "errinj.h"
#include "iproto_constants.h"
+struct tx_manager
+{
+};
+
+static struct tx_manager txm;
+
double too_long_threshold;
/* Txn cache. */
@@ -978,3 +984,13 @@ txn_on_yield(struct trigger *trigger, void *event)
txn_set_flag(txn, TXN_IS_ABORTED_BY_YIELD);
return 0;
}
+
+void
+tx_manager_init()
+{
+}
+
+void
+tx_manager_free()
+{
+}
diff --git a/src/box/txn.h b/src/box/txn.h
index e860e1e..e2194b6 100644
--- a/src/box/txn.h
+++ b/src/box/txn.h
@@ -636,6 +636,12 @@ box_txn_savepoint(void);
API_EXPORT int
box_txn_rollback_to_savepoint(box_txn_savepoint_t *savepoint);
+void
+tx_manager_init();
+
+void
+tx_manager_free();
+
#if defined(__cplusplus)
} /* extern "C" */
#endif /* defined(__cplusplus) */
diff --git a/src/main.cc b/src/main.cc
index b3e7e41..9a50986 100644
--- a/src/main.cc
+++ b/src/main.cc
@@ -75,6 +75,7 @@
#include <libutil.h>
#include "box/lua/init.h" /* box_lua_init() */
#include "box/session.h"
+#include "box/txn.h"
#include "systemd.h"
#include "crypto/crypto.h"
#include "core/popen.h"
@@ -667,6 +668,7 @@ tarantool_free(void)
random_free();
#endif
crypto_free();
+ tx_manager_free();
coll_free();
systemd_free();
say_logger_free();
@@ -830,6 +832,7 @@ main(int argc, char **argv)
signal_init();
cbus_init();
coll_init();
+ tx_manager_init();
crypto_init();
systemd_init();
tarantool_lua_init(tarantool_bin, main_argc, main_argv);
--
2.7.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [Tarantool-patches] [PATCH 09/16] tx: introduce prepare sequence number
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
` (7 preceding siblings ...)
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 08/16] tx: introduce tx manager Aleksandr Lyapunov
@ 2020-07-08 15:14 ` Aleksandr Lyapunov
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 10/16] tx: introduce txn_stmt_destroy Aleksandr Lyapunov
` (8 subsequent siblings)
17 siblings, 0 replies; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-08 15:14 UTC (permalink / raw)
To: tarantool-patches; +Cc: v.shpilevoy
---
src/box/txn.c | 4 ++++
src/box/txn.h | 5 +++++
2 files changed, 9 insertions(+)
diff --git a/src/box/txn.c b/src/box/txn.c
index 0ba86cb..5964ca7 100644
--- a/src/box/txn.c
+++ b/src/box/txn.c
@@ -39,6 +39,8 @@
struct tx_manager
{
+ /** Last prepare-sequence-number that was assigned to a prepared TX. */
+ int64_t last_psn;
};
static struct tx_manager txm;
@@ -564,6 +566,8 @@ txn_journal_entry_new(struct txn *txn)
static int
txn_prepare(struct txn *txn)
{
+ txn->psn = ++txm.last_psn;
+
if (txn_has_flag(txn, TXN_IS_ABORTED_BY_YIELD)) {
assert(!txn_has_flag(txn, TXN_CAN_YIELD));
diag_set(ClientError, ER_TRANSACTION_YIELD);
diff --git a/src/box/txn.h b/src/box/txn.h
index e2194b6..cd1665f 100644
--- a/src/box/txn.h
+++ b/src/box/txn.h
@@ -214,6 +214,11 @@ struct txn {
* Valid IDs start from 1.
*/
int64_t id;
+ /**
+ * A sequential ID that is assigned when the TX becomes prepared.
+ * Transactions are committed in that order.
+ */
+ int64_t psn;
/** Status of the TX */
enum txn_status status;
/** List of statements in a transaction. */
--
2.7.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [Tarantool-patches] [PATCH 10/16] tx: introduce txn_stmt_destroy
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
` (8 preceding siblings ...)
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 09/16] tx: introduce prepare sequence number Aleksandr Lyapunov
@ 2020-07-08 15:14 ` Aleksandr Lyapunov
2020-07-12 17:15 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 11/16] tx: introduce conflict tracker Aleksandr Lyapunov
` (7 subsequent siblings)
17 siblings, 1 reply; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-08 15:14 UTC (permalink / raw)
To: tarantool-patches; +Cc: v.shpilevoy
From: Aleksandr Lyapunov <a.lyapunov@corp.mail.ru>
---
src/box/txn.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/box/txn.c b/src/box/txn.c
index 5964ca7..2bd44a3 100644
--- a/src/box/txn.c
+++ b/src/box/txn.c
@@ -121,7 +121,7 @@ txn_stmt_new(struct region *region)
}
static inline void
-txn_stmt_unref_tuples(struct txn_stmt *stmt)
+txn_stmt_destroy(struct txn_stmt *stmt)
{
if (stmt->old_tuple != NULL)
tuple_unref(stmt->old_tuple);
@@ -169,7 +169,7 @@ txn_rollback_to_svp(struct txn *txn, struct stailq_entry *svp)
assert(txn->n_applier_rows > 0);
txn->n_applier_rows--;
}
- txn_stmt_unref_tuples(stmt);
+ txn_stmt_destroy(stmt);
stmt->space = NULL;
stmt->row = NULL;
}
@@ -208,7 +208,7 @@ txn_free(struct txn *txn)
{
struct txn_stmt *stmt;
stailq_foreach_entry(stmt, &txn->stmts, next)
- txn_stmt_unref_tuples(stmt);
+ txn_stmt_destroy(stmt);
/* Truncate region up to struct txn size. */
region_truncate(&txn->region, sizeof(struct txn));
--
2.7.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [Tarantool-patches] [PATCH 11/16] tx: introduce conflict tracker
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
` (9 preceding siblings ...)
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 10/16] tx: introduce txn_stmt_destroy Aleksandr Lyapunov
@ 2020-07-08 15:14 ` Aleksandr Lyapunov
2020-07-12 17:15 ` Vladislav Shpilevoy
2020-07-14 23:51 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 12/16] introduce tuple smart pointers Aleksandr Lyapunov
` (6 subsequent siblings)
17 siblings, 2 replies; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-08 15:14 UTC (permalink / raw)
To: tarantool-patches; +Cc: v.shpilevoy
---
src/box/txn.c | 98 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
src/box/txn.h | 5 +++
2 files changed, 103 insertions(+)
diff --git a/src/box/txn.c b/src/box/txn.c
index 2bd44a3..b9f1737 100644
--- a/src/box/txn.c
+++ b/src/box/txn.c
@@ -45,6 +45,13 @@ struct tx_manager
static struct tx_manager txm;
+struct tx_conflict_tracker {
+	/** TX that wrote a value and thus can cause a conflict. */
+	struct txn *wreaker;
+	/** TX that read that value and must be aborted if wreaker commits. */
+	struct txn *victim;
+	/** Link in wreaker->conflict_list. */
+	struct rlist in_conflict_list;
+	/** Link in victim->conflicted_by_list. */
+	struct rlist in_conflicted_by_list;
+};
+
double too_long_threshold;
/* Txn cache. */
@@ -197,6 +204,8 @@ txn_new(void)
}
assert(region_used(®ion) == sizeof(*txn));
txn->region = region;
+ rlist_create(&txn->conflict_list);
+ rlist_create(&txn->conflicted_by_list);
return txn;
}
@@ -206,6 +215,20 @@ txn_new(void)
inline static void
txn_free(struct txn *txn)
{
+ struct tx_conflict_tracker *entry, *next;
+ rlist_foreach_entry_safe(entry, &txn->conflict_list,
+ in_conflict_list, next) {
+ rlist_del(&entry->in_conflict_list);
+ rlist_del(&entry->in_conflicted_by_list);
+ }
+ rlist_foreach_entry_safe(entry, &txn->conflicted_by_list,
+ in_conflicted_by_list, next) {
+ rlist_del(&entry->in_conflict_list);
+ rlist_del(&entry->in_conflicted_by_list);
+ }
+ assert(rlist_empty(&txn->conflict_list));
+ assert(rlist_empty(&txn->conflicted_by_list));
+
struct txn_stmt *stmt;
stailq_foreach_entry(stmt, &txn->stmts, next)
txn_stmt_destroy(stmt);
@@ -223,6 +246,8 @@ txn_begin(void)
struct txn *txn = txn_new();
if (txn == NULL)
return NULL;
+ assert(rlist_empty(&txn->conflict_list));
+ assert(rlist_empty(&txn->conflicted_by_list));
/* Initialize members explicitly to save time on memset() */
stailq_create(&txn->stmts);
@@ -280,6 +305,15 @@ txn_begin_stmt(struct txn *txn, struct space *space)
diag_set(ClientError, ER_SUB_STMT_MAX);
return -1;
}
+
+ /*
+ * A conflict has happened; there is no reason to continue the TX.
+ */
+ if (txn->status == TXN_CONFLICTED) {
+ diag_set(ClientError, ER_TRANSACTION_CONFLICT);
+ return -1;
+ }
+
struct txn_stmt *stmt = txn_stmt_new(&txn->region);
if (stmt == NULL)
return -1;
@@ -583,6 +617,16 @@ txn_prepare(struct txn *txn)
diag_set(ClientError, ER_FOREIGN_KEY_CONSTRAINT);
return -1;
}
+
+ /*
+ * Somebody else has written a value that we have read.
+ * The transaction cannot be committed.
+ */
+ if (txn->status == TXN_CONFLICTED) {
+ diag_set(ClientError, ER_TRANSACTION_CONFLICT);
+ return -1;
+ }
+
/*
* Perform transaction conflict resolution. Engine == NULL when
* we have a bunch of IPROTO_NOP statements.
@@ -591,6 +635,17 @@ txn_prepare(struct txn *txn)
if (engine_prepare(txn->engine, txn) != 0)
return -1;
}
+
+ struct tx_conflict_tracker *entry, *next;
+ rlist_foreach_entry_safe(entry, &txn->conflict_list,
+ in_conflict_list, next) {
+ if (entry->victim->status == TXN_INPROGRESS)
+ entry->victim->status = TXN_CONFLICTED;
+ rlist_del(&entry->in_conflict_list);
+ rlist_del(&entry->in_conflicted_by_list);
+ }
+
trigger_clear(&txn->fiber_on_stop);
if (!txn_has_flag(txn, TXN_CAN_YIELD))
trigger_clear(&txn->fiber_on_yield);
@@ -998,3 +1053,46 @@ void
tx_manager_free()
{
}
+
+int
+txm_cause_conflict(struct txn *wreaker, struct txn *victim)
+{
+ struct tx_conflict_tracker *tracker = NULL;
+ struct rlist *r1 = wreaker->conflict_list.next;
+ struct rlist *r2 = wreaker->conflicted_by_list.next;
+ while (r1 != &wreaker->conflict_list &&
+ r2 != &wreaker->conflicted_by_list) {
+ tracker = rlist_entry(r1, struct tx_conflict_tracker,
+ in_conflict_list);
+ if (tracker->wreaker == wreaker && tracker->victim == victim)
+ break;
+ tracker = rlist_entry(r2, struct tx_conflict_tracker,
+ in_conflicted_by_list);
+ if (tracker->wreaker == wreaker && tracker->victim == victim)
+ break;
+ tracker = NULL;
+ r1 = r1->next;
+ r2 = r2->next;
+ }
+ if (tracker != NULL) {
+ /* Move to the beginning of the lists
+ * to speed up subsequent lookups. */
+ rlist_del(&tracker->in_conflict_list);
+ rlist_del(&tracker->in_conflicted_by_list);
+ } else {
+ size_t size;
+ tracker = region_alloc_object(&victim->region,
+ struct tx_conflict_tracker,
+ &size);
+ if (tracker == NULL) {
+ diag_set(OutOfMemory, size, "tx region",
+ "conflict_tracker");
+ return -1;
+ }
+ }
+ tracker->wreaker = wreaker;
+ tracker->victim = victim;
+ rlist_add(&wreaker->conflict_list, &tracker->in_conflict_list);
+ rlist_add(&victim->conflicted_by_list, &tracker->in_conflicted_by_list);
+ return 0;
+}
\ No newline at end of file
diff --git a/src/box/txn.h b/src/box/txn.h
index cd1665f..92c0116 100644
--- a/src/box/txn.h
+++ b/src/box/txn.h
@@ -280,6 +280,8 @@ struct txn {
uint32_t fk_deferred_count;
/** List of savepoints to find savepoint by name. */
struct rlist savepoints;
+ /** List of tx_conflict_tracker records where this TX is the wreaker. */
+ struct rlist conflict_list;
+ /** List of tx_conflict_tracker records where this TX is the victim. */
+ struct rlist conflicted_by_list;
};
static inline bool
@@ -647,6 +649,9 @@ tx_manager_init();
void
tx_manager_free();
+int
+txm_cause_conflict(struct txn *wreaker, struct txn *victim);
+
#if defined(__cplusplus)
} /* extern "C" */
#endif /* defined(__cplusplus) */
--
2.7.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [Tarantool-patches] [PATCH 12/16] introduce tuple smart pointers
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
` (10 preceding siblings ...)
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 11/16] tx: introduce conflict tracker Aleksandr Lyapunov
@ 2020-07-08 15:14 ` Aleksandr Lyapunov
2020-07-12 17:16 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 13/16] tx: introduce txm_story Aleksandr Lyapunov
` (5 subsequent siblings)
17 siblings, 1 reply; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-08 15:14 UTC (permalink / raw)
To: tarantool-patches; +Cc: v.shpilevoy
---
src/box/tuple.h | 50 ++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 50 insertions(+)
diff --git a/src/box/tuple.h b/src/box/tuple.h
index 4752323..5234574 100644
--- a/src/box/tuple.h
+++ b/src/box/tuple.h
@@ -1093,6 +1093,56 @@ tuple_unref(struct tuple *tuple)
}
}
+/**
+ * General case of setting a smart pointer to a tuple.
+ * If *dst pointed to a tuple, unref it.
+ * If src points to a tuple, ref it.
+ * @param dst - destination smart pointer (lvalue).
+ * @param src - source pointer (rvalue).
+ */
+static inline void
+tusp_set(struct tuple **dst, struct tuple *src)
+{
+ if (*dst != NULL)
+ tuple_unref(*dst);
+ if (src != NULL)
+ tuple_ref(src);
+ *dst = src;
+}
+
+/**
+ * Specific cases of setting a smart pointer to tuple.
+ * Just like @sa tusp_set, but with a two-letter suffix that encodes
+ * what is known about the values of *dst and src.
+ * Possible letters (the first for *dst, the second for src):
+ * N - the corresponding pointer is definitely NULL.
+ * T - the corresponding pointer is a non-NULL pointer to a tuple.
+ * U - unknown, the function must compare the pointer with NULL itself.
+ * @param dst - destination smart pointer (lvalue).
+ * @param src - source pointer (rvalue).
+ */
+#define TUSP_SPEC_DEF(name, dst_mode, src_mode) \
+static inline void \
+name(struct tuple **dst, struct tuple *src) \
+{ \
+ if (dst_mode == 1 || (dst_mode == 2 && *dst != NULL)) \
+ tuple_unref(*dst); \
+ if (src_mode == 1 || (src_mode == 2 && src != NULL)) \
+ tuple_ref(src); \
+ *dst = src; \
+} \
+struct forgot_to_add_semicolon
+
+TUSP_SPEC_DEF(tusp_setNN, 0, 0);
+TUSP_SPEC_DEF(tusp_setNT, 0, 1);
+TUSP_SPEC_DEF(tusp_setNU, 0, 2);
+TUSP_SPEC_DEF(tusp_setTN, 1, 0);
+TUSP_SPEC_DEF(tusp_setTT, 1, 1);
+TUSP_SPEC_DEF(tusp_setTU, 1, 2);
+TUSP_SPEC_DEF(tusp_setUN, 2, 0);
+TUSP_SPEC_DEF(tusp_setUT, 2, 1);
+TUSP_SPEC_DEF(tusp_setUU, 2, 2);
+
extern struct tuple *box_tuple_last;
/**
--
2.7.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [Tarantool-patches] [PATCH 13/16] tx: introduce txm_story
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
` (11 preceding siblings ...)
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 12/16] introduce tuple smart pointers Aleksandr Lyapunov
@ 2020-07-08 15:14 ` Aleksandr Lyapunov
2020-07-12 17:14 ` Vladislav Shpilevoy
2020-07-14 23:46 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 14/16] tx: indexes Aleksandr Lyapunov
` (4 subsequent siblings)
17 siblings, 2 replies; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-08 15:14 UTC (permalink / raw)
To: tarantool-patches; +Cc: v.shpilevoy
---
src/box/txn.c | 725 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
src/box/txn.h | 127 ++++++++++
2 files changed, 851 insertions(+), 1 deletion(-)
diff --git a/src/box/txn.c b/src/box/txn.c
index b9f1737..78c542f 100644
--- a/src/box/txn.c
+++ b/src/box/txn.c
@@ -36,13 +36,44 @@
#include "xrow.h"
#include "errinj.h"
#include "iproto_constants.h"
+#include "small/mempool.h"
+
+static uint32_t
+txm_story_key_hash(const struct tuple *a)
+{
+ uintptr_t u = (uintptr_t)a;
+ if (sizeof(uintptr_t) <= sizeof(uint32_t))
+ return u;
+ else
+ return u ^ (u >> 32);
+}
+
+#define mh_name _history
+#define mh_key_t struct tuple *
+#define mh_node_t struct txm_story *
+#define mh_arg_t int
+#define mh_hash(a, arg) (txm_story_key_hash((*(a))->tuple))
+#define mh_hash_key(a, arg) (txm_story_key_hash(a))
+#define mh_cmp(a, b, arg) ((*(a))->tuple != (*(b))->tuple)
+#define mh_cmp_key(a, b, arg) ((a) != (*(b))->tuple)
+#define MH_SOURCE
+#include "salad/mhash.h"
struct tx_manager
{
/** Last prepare-sequence-number that was assigned to preped TX. */
int64_t last_psn;
+ /** Mempools for tx_story objects with different index counts. */
+ struct mempool txm_story_pool[BOX_INDEX_MAX];
+ /** Hash table tuple -> txm_story of that tuple. */
+ struct mh_history_t *history;
+ /** List of all txm_story objects. */
+ struct rlist all_stories;
+ /** Iterator that sequentially traverses all txm_story objects. */
+ struct rlist *traverse_all_stories;
};
+/** The one and only instance of tx_manager. */
static struct tx_manager txm;
struct tx_conflict_tracker {
@@ -120,6 +151,9 @@ txn_stmt_new(struct region *region)
stmt->space = NULL;
stmt->old_tuple = NULL;
stmt->new_tuple = NULL;
+ stmt->add_story = NULL;
+ stmt->del_story = NULL;
+ stmt->next_in_del_list = NULL;
stmt->engine_savepoint = NULL;
stmt->row = NULL;
stmt->has_triggers = false;
@@ -1047,11 +1081,23 @@ txn_on_yield(struct trigger *trigger, void *event)
void
tx_manager_init()
{
+ for (size_t i = 0; i < BOX_INDEX_MAX; i++) {
+ size_t item_size = sizeof(struct txm_story) +
+ i * sizeof(struct txm_story_link);
+ mempool_create(&txm.txm_story_pool[i],
+ cord_slab_cache(), item_size);
+ }
+ txm.history = mh_history_new();
+ rlist_create(&txm.all_stories);
+ txm.traverse_all_stories = &txm.all_stories;
}
void
tx_manager_free()
{
+ mh_history_delete(txm.history);
+ for (size_t i = 0; i < BOX_INDEX_MAX; i++)
+ mempool_destroy(&txm.txm_story_pool[i]);
}
int
@@ -1095,4 +1141,681 @@ txm_cause_conflict(struct txn *wreaker, struct txn *victim)
rlist_add(&wreaker->conflict_list, &tracker->in_conflict_list);
rlist_add(&wreaker->conflicted_by_list, &tracker->in_conflicted_by_list);
return 0;
-}
\ No newline at end of file
+}
+
+struct txm_story *
+txm_story_new(struct tuple *tuple, uint32_t index_count)
+{
+ assert(!tuple->is_dirty);
+ assert(index_count < BOX_INDEX_MAX);
+ struct mempool *pool = &txm.txm_story_pool[index_count];
+ struct txm_story *story = (struct txm_story *)mempool_alloc(pool);
+ if (story == NULL) {
+ size_t item_size = sizeof(struct txm_story) +
+ index_count * sizeof(struct txm_story_link);
+ diag_set(OutOfMemory, item_size,
+ "tx_manager", "tx story");
+ return story;
+ }
+ story->tuple = tuple;
+
+ const struct txm_story **put_story = (const struct txm_story **)&story;
+ struct txm_story **empty = NULL;
+ mh_int_t pos = mh_history_put(txm.history, put_story, &empty, 0);
+ if (pos == mh_end(txm.history)) {
+ mempool_free(pool, story);
+ diag_set(OutOfMemory, pos + 1,
+ "tx_manager", "tx history hash table");
+ return NULL;
+ }
+ tuple->is_dirty = true;
+ tuple_ref(tuple);
+
+ story->index_count = index_count;
+ story->add_stmt = NULL;
+ story->add_psn = 0;
+ story->del_stmt = NULL;
+ story->del_psn = 0;
+ rlist_create(&story->reader_list);
+ rlist_add(&txm.all_stories, &story->in_all_stories);
+ memset(story->link, 0, sizeof(story->link[0]) * index_count);
+ return story;
+}
+
+static struct txm_story *
+txm_story_new_add_stmt(struct tuple *tuple, struct txn_stmt *stmt)
+{
+ struct txm_story *res = txm_story_new(tuple, stmt->space->index_count);
+ if (res == NULL)
+ return NULL;
+ res->add_stmt = stmt;
+ assert(stmt->add_story == NULL);
+ stmt->add_story = res;
+ return res;
+}
+
+static struct txm_story *
+txm_story_new_del_stmt(struct tuple *tuple, struct txn_stmt *stmt)
+{
+ struct txm_story *res = txm_story_new(tuple, stmt->space->index_count);
+ if (res == NULL)
+ return NULL;
+ res->del_stmt = stmt;
+ assert(stmt->del_story == NULL);
+ stmt->del_story = res;
+ return res;
+}
+
+static void
+txm_story_delete_add_stmt(struct txm_story *story)
+{
+ story->add_stmt->add_story = NULL;
+ story->add_stmt = NULL;
+ txm_story_delete(story);
+}
+
+static void
+txm_story_delete_del_stmt(struct txm_story *story)
+{
+ story->del_stmt->del_story = NULL;
+ story->del_stmt = NULL;
+ txm_story_delete(story);
+}
+
+
+static struct txm_story *
+txm_story_get(struct tuple *tuple)
+{
+ assert(tuple->is_dirty);
+
+ mh_int_t pos = mh_history_find(txm.history, tuple, 0);
+ assert(pos != mh_end(txm.history));
+ return *mh_history_node(txm.history, pos);
+}
+
+static struct tuple *
+txm_story_older_tuple(struct txm_story_link *link)
+{
+ return link->older.is_story ? link->older.story->tuple
+ : link->older.tuple;
+}
+
+static void
+txm_story_link_story(struct txm_story *story, struct txm_story *older_story,
+ uint32_t index)
+{
+ assert(older_story != NULL);
+ struct txm_story_link *link = &story->link[index];
+ /* Must be unlinked. */
+ assert(!link->older.is_story);
+ assert(link->older.tuple == NULL);
+ link->older.is_story = true;
+ link->older.story = older_story;
+ older_story->link[index].newer_story = story;
+}
+
+static void
+txm_story_link_tuple(struct txm_story *story, struct tuple *older_tuple,
+ uint32_t index)
+{
+ struct txm_story_link *link = &story->link[index];
+ /* Must be unlinked. */
+ assert(!link->older.is_story);
+ assert(link->older.tuple == NULL);
+ if (older_tuple == NULL)
+ return;
+ if (older_tuple->is_dirty) {
+ txm_story_link_story(story, txm_story_get(older_tuple), index);
+ return;
+ }
+ tusp_setNT(&link->older.tuple, older_tuple);
+}
+
+static void
+txm_story_unlink(struct txm_story *story, uint32_t index)
+{
+ struct txm_story_link *link = &story->link[index];
+ if (link->older.is_story)
+ link->older.story->link[index].newer_story = NULL;
+ else
+ tusp_setUN(&link->older.tuple, NULL);
+ link->older.is_story = false;
+ link->older.tuple = NULL;
+}
+
+static bool
+txm_story_is_visible(struct txm_story *story, struct txn *txn,
+ struct tuple **visible_tuple, bool prepared_ok,
+ bool *own_change)
+{
+ *own_change = false;
+ struct txn_stmt *dels = story->del_stmt;
+ while (dels != NULL) {
+ if (dels->txn == txn) {
+ /* deleted by us. */
+ *visible_tuple = NULL;
+ *own_change = true;
+ return true;
+ }
+ dels = dels->next_in_del_list;
+ }
+ if (prepared_ok && story->del_psn != 0) {
+ /* deleted by at least prepared TX. */
+ *visible_tuple = NULL;
+ return true;
+ }
+ if (story->del_psn != 0 && story->del_stmt == NULL) {
+ /* deleted by committed TX. */
+ *visible_tuple = NULL;
+ return true;
+ }
+
+ if (story->add_stmt != NULL && story->add_stmt->txn == txn) {
+ /* added by us. */
+ *visible_tuple = story->tuple;
+ *own_change = true;
+ return true;
+ }
+ if (prepared_ok && story->add_psn != 0) {
+ /* added by at least prepared TX. */
+ *visible_tuple = story->tuple;
+ return true;
+ }
+ if (story->add_psn != 0 && story->add_stmt == NULL) {
+ /* added by committed TX. */
+ *visible_tuple = story->tuple;
+ return true;
+ }
+ return false;
+}
+
+/** Temporarily allocated on a region; stores a conflicting TX. */
+struct txn_conflict
+{
+ struct txn *other_txn;
+ struct txn_conflict *next;
+};
+
+static int
+txm_save_conflict(struct txn *other_txn, struct txn_conflict **conflicts_head,
+ struct region *region)
+{
+ size_t err_size;
+ struct txn_conflict *next_conflict;
+ next_conflict = region_alloc_object(region, struct txn_conflict,
+ &err_size);
+ if (next_conflict == NULL) {
+ diag_set(OutOfMemory, err_size,
+ "txn_region", "txn conflict");
+ return -1;
+ }
+ next_conflict->other_txn = other_txn;
+ next_conflict->next = *conflicts_head;
+ *conflicts_head = next_conflict;
+ return 0;
+}
+
+int
+txm_story_find_visible(struct txn_stmt *stmt, struct txm_story *story,
+ uint32_t ind, struct tuple **visible_replaced,
+ struct txn_conflict **collected_conflicts,
+ struct region *region)
+{
+ while (true) {
+ if (!story->link[ind].older.is_story) {
+ /*
+ * The tuple is so old that we don't
+ * know its story.
+ */
+ *visible_replaced = story->link[ind].older.tuple;
+ assert(*visible_replaced == NULL ||
+ !(*visible_replaced)->is_dirty);
+ break;
+ }
+ story = story->link[ind].older.story;
+ bool unused;
+ if (txm_story_is_visible(story, stmt->txn,
+ visible_replaced, true, &unused))
+ break;
+
+ /*
+ * We skip the story, but if it gets committed
+ * before our TX, that may cause a conflict.
+ * The conflict is unavoidable if this statement
+ * relies on old_tuple. If not (it's a replace),
+ * the conflict arises only in a secondary
+ * index, and only if the story is not overwritten
+ * in the primary index.
+ */
+ bool cross_conflict = false;
+ if (stmt->preserve_old_tuple) {
+ cross_conflict = true;
+ } else if (ind != 0) {
+ struct txm_story *look_up = story;
+ cross_conflict = true;
+ while (look_up->link[ind].newer_story != NULL) {
+ struct txm_story *over;
+ over = look_up->link[ind].newer_story;
+ if (over->add_stmt->txn == stmt->txn) {
+ cross_conflict = false;
+ break;
+ }
+ look_up = over;
+ }
+ }
+ if (cross_conflict) {
+ if (txm_save_conflict(story->add_stmt->txn,
+ collected_conflicts,
+ region) != 0)
+ return -1;
+
+ }
+ }
+ return 0;
+}
+
+int
+txm_history_link_stmt(struct txn_stmt *stmt,
+ struct tuple *old_tuple, struct tuple *new_tuple,
+ enum dup_replace_mode mode, struct tuple **result)
+{
+ struct txm_story *add_story = NULL;
+ uint32_t add_story_linked = 0;
+ struct txm_story *del_story = NULL;
+ bool del_story_created = false;
+ struct region *region = &stmt->txn->region;
+ size_t region_svp = region_used(region);
+
+ /**
+ * List of transactions that will conflict with us once one of
+ * them becomes committed.
+ */
+ struct txn_conflict *collected_conflicts = NULL;
+
+ if (new_tuple != NULL) {
+ add_story = txm_story_new_add_stmt(new_tuple, stmt);
+ if (add_story == NULL)
+ goto fail;
+
+ for (uint32_t i = 0; i < stmt->space->index_count; i++) {
+ struct tuple *replaced;
+ struct index *index = stmt->space->index[i];
+ if (index_replace(index, NULL, new_tuple,
+ DUP_REPLACE_OR_INSERT,
+ &replaced) != 0)
+ goto fail;
+ txm_story_link_tuple(add_story, replaced, i);
+ add_story_linked++;
+
+ struct tuple *visible_replaced = NULL;
+ if (txm_story_find_visible(stmt, add_story, i,
+ &visible_replaced,
+ &collected_conflicts,
+ region) != 0)
+ goto fail;
+
+ uint32_t errcode;
+ errcode = replace_check_dup(old_tuple,
+ visible_replaced,
+ i == 0 ? mode : DUP_INSERT);
+ if (errcode != 0) {
+ struct space *sp = stmt->space;
+ if (sp != NULL)
+ diag_set(ClientError, errcode,
+ sp->index[i]->def->name,
+ space_name(sp));
+ goto fail;
+ }
+
+ if (i == 0) {
+ old_tuple = visible_replaced;
+ *result = visible_replaced;
+ }
+ }
+ } else {
+ assert(old_tuple != NULL);
+ *result = old_tuple;
+ }
+
+ struct tuple *del_tuple = NULL;
+ if (new_tuple != NULL) {
+ struct txm_story_link *link = &add_story->link[0];
+ if (link->older.is_story) {
+ del_story = link->older.story;
+ del_tuple = del_story->tuple;
+ } else {
+ del_tuple = link->older.tuple;
+ }
+ } else {
+ del_tuple = old_tuple;
+ }
+ if (del_tuple != NULL && del_story == NULL) {
+ if (del_tuple->is_dirty) {
+ del_story = txm_story_get(del_tuple);
+ } else {
+ del_story = txm_story_new_del_stmt(del_tuple, stmt);
+ if (del_story == NULL)
+ goto fail;
+ del_story_created = true;
+ }
+ }
+ if (new_tuple != NULL && del_story_created) {
+ for (uint32_t i = 0; i < add_story->index_count; i++) {
+ struct txm_story_link *link = &add_story->link[i];
+ if (link->older.is_story)
+ continue;
+ if (link->older.tuple == del_tuple) {
+ txm_story_unlink(add_story, i);
+ txm_story_link_story(add_story, del_story, i);
+ }
+ }
+ }
+ if (del_story != NULL) {
+ stmt->next_in_del_list = del_story->del_stmt;
+ del_story->del_stmt = stmt;
+ stmt->del_story = del_story;
+ }
+
+ while (collected_conflicts != NULL) {
+ if (txm_cause_conflict(collected_conflicts->other_txn,
+ stmt->txn) != 0) {
+ goto fail;
+ }
+ collected_conflicts = collected_conflicts->next;
+ }
+ region_truncate(region, region_svp);
+ if (stmt->new_tuple != NULL)
+ tuple_ref(stmt->new_tuple);
+ if (*result != NULL)
+ tuple_ref(*result);
+ return 0;
+
+fail:
+ if (add_story != NULL) {
+ while (add_story_linked > 0) {
+ --add_story_linked;
+ uint32_t i = add_story_linked;
+
+ struct index *index = stmt->space->index[i];
+ struct txm_story_link *link = &add_story->link[i];
+ struct tuple *was = txm_story_older_tuple(link);
+ struct tuple *unused;
+ if (index_replace(index, new_tuple, was,
+ DUP_INSERT,
+ &unused) != 0) {
+ diag_log();
+ unreachable();
+ panic("failed to rollback change");
+ }
+
+ txm_story_unlink(stmt->add_story, i);
+ }
+ txm_story_delete_add_stmt(stmt->add_story);
+ }
+
+ if (del_story != NULL && del_story->del_stmt == stmt) {
+ del_story->del_stmt = stmt->next_in_del_list;
+ stmt->next_in_del_list = NULL;
+ }
+
+ if (del_story_created)
+ txm_story_delete_del_stmt(stmt->del_story);
+ else
+ stmt->del_story = NULL;
+
+ region_truncate(region, region_svp);
+ return -1;
+}
+
+void
+txm_history_unlink_stmt(struct txn_stmt *stmt)
+{
+ if (stmt->add_story != NULL) {
+ assert(stmt->add_story->tuple == stmt->new_tuple);
+ struct txm_story *story = stmt->add_story;
+
+ for (uint32_t i = 0; i < story->index_count; i++) {
+ struct txm_story_link *link = &story->link[i];
+ if (link->newer_story == NULL) {
+ struct tuple *unused;
+ struct index *index = stmt->space->index[i];
+ struct tuple *was = txm_story_older_tuple(link);
+ if (index_replace(index, story->tuple, was,
+ DUP_INSERT, &unused) != 0) {
+ diag_log();
+ unreachable();
+ panic("failed to rollback change");
+ }
+ } else {
+ struct txm_story *newer = link->newer_story;
+ assert(newer->link[i].older.is_story);
+ assert(newer->link[i].older.story == story);
+ txm_story_unlink(newer, i);
+ if (link->older.is_story) {
+ struct txm_story *to = link->older.story;
+ txm_story_link_story(newer, to, i);
+ } else {
+ struct tuple *to = link->older.tuple;
+ txm_story_link_tuple(newer, to, i);
+ }
+ }
+ txm_story_unlink(story, i);
+ }
+ stmt->add_story->add_stmt = NULL;
+ stmt->add_story = NULL;
+ tuple_unref(stmt->new_tuple);
+ }
+
+ if (stmt->del_story != NULL) {
+ struct txm_story *story = stmt->del_story;
+
+ struct txn_stmt **prev = &story->del_stmt;
+ while (*prev != stmt) {
+ prev = &(*prev)->next_in_del_list;
+ assert(*prev != NULL);
+ }
+ *prev = stmt->next_in_del_list;
+ stmt->next_in_del_list = NULL;
+
+ stmt->del_story->del_stmt = NULL;
+ stmt->del_story = NULL;
+ }
+}
+
+void
+txm_history_prepare_stmt(struct txn_stmt *stmt)
+{
+ assert(stmt->txn->psn != 0);
+
+ /* Move the story to the past, next to the prepared stories. */
+
+ struct txm_story *story = stmt->add_story;
+ uint32_t index_count = story == NULL ? 0 : story->index_count;
+ for (uint32_t i = 0; i < index_count; ) {
+ if (!story->link[i].older.is_story) {
+ /* tuple is old. */
+ i++;
+ continue;
+ }
+ struct txm_story *old_story =
+ story->link[i].older.story;
+ if (old_story->del_psn != 0) {
+ /* If psn is set, the change is prepared. */
+ i++;
+ continue;
+ }
+ if (old_story->add_psn != 0) {
+ /* If psn is set, the change is prepared. */
+ i++;
+ continue;
+ }
+
+ if (old_story->add_stmt == NULL) {
+ /* Ancient story, the adding statement is unknown. */
+ i++;
+ continue;
+ }
+ if (old_story->add_stmt->txn == stmt->txn) {
+ /* added by us. */
+ i++;
+ continue;
+ }
+
+ if (old_story->add_stmt->preserve_old_tuple || i != 0)
+ old_story->add_stmt->txn->status = TXN_CONFLICTED;
+
+ /* Swap story and old story. */
+ struct txm_story_link *link = &story->link[i];
+ if (link->newer_story == NULL) {
+ /* we have to replace the tuple in index. */
+ struct tuple *unused;
+ struct index *index = stmt->space->index[i];
+ if (index_replace(index, story->tuple,
+ old_story->tuple,
+ DUP_INSERT, &unused) != 0) {
+ diag_log();
+ unreachable();
+ panic("failed to rollback change");
+ }
+ } else {
+ struct txm_story *newer = link->newer_story;
+ assert(newer->link[i].older.is_story);
+ assert(newer->link[i].older.story == story);
+ txm_story_unlink(newer, i);
+ txm_story_link_story(newer, old_story, i);
+ }
+
+ txm_story_unlink(story, i);
+ if (old_story->link[i].older.is_story) {
+ struct txm_story *to =
+ old_story->link[i].older.story;
+ txm_story_unlink(old_story, i);
+ txm_story_link_story(story, to, i);
+ } else {
+ struct tuple *to =
+ old_story->link[i].older.tuple;
+ txm_story_unlink(old_story, i);
+ txm_story_link_tuple(story, to, i);
+ }
+
+ txm_story_link_story(old_story, story, i);
+
+ if (i == 0) {
+ assert(stmt->del_story == old_story);
+ assert(story->link[0].older.is_story ||
+ story->link[0].older.tuple == NULL);
+
+ struct txn_stmt *dels = old_story->del_stmt;
+ assert(dels != NULL);
+ do {
+ if (dels->txn != stmt->txn)
+ dels->txn->status = TXN_CONFLICTED;
+ dels->del_story = NULL;
+ struct txn_stmt *next = dels->next_in_del_list;
+ dels->next_in_del_list = NULL;
+ dels = next;
+ } while (dels != NULL);
+ old_story->del_stmt = NULL;
+
+ if (story->link[0].older.is_story) {
+ struct txm_story *oldest_story =
+ story->link[0].older.story;
+ dels = oldest_story->del_stmt;
+ while (dels != NULL) {
+ assert(dels->txn != stmt->txn);
+ dels->del_story = NULL;
+ struct txn_stmt *next =
+ dels->next_in_del_list;
+ dels->next_in_del_list = NULL;
+ dels = next;
+ }
+ oldest_story->del_stmt = stmt;
+ stmt->del_story = oldest_story;
+ }
+ }
+ }
+ if (stmt->add_story != NULL)
+ stmt->add_story->add_psn = stmt->txn->psn;
+
+ if (stmt->del_story != NULL)
+ stmt->del_story->del_psn = stmt->txn->psn;
+}
+
+ssize_t
+txm_history_release_stmt(struct txn_stmt *stmt)
+{
+ size_t res = 0;
+ if (stmt->add_story != NULL) {
+ assert(stmt->add_story->add_stmt == stmt);
+ res += stmt->add_story->tuple->bsize;
+ stmt->add_story->add_stmt = NULL;
+ stmt->add_story = NULL;
+ }
+ if (stmt->del_story != NULL) {
+ assert(stmt->del_story->del_stmt == stmt);
+ assert(stmt->next_in_del_list == NULL);
+ res -= stmt->del_story->tuple->bsize;
+ tuple_unref(stmt->del_story->tuple);
+ stmt->del_story->del_stmt = NULL;
+ stmt->del_story = NULL;
+ }
+ return res;
+}
+
+struct tuple *
+txm_tuple_clarify_slow(struct txn *txn, struct tuple *tuple, uint32_t index,
+ uint32_t mk_index, bool prepared_ok)
+{
+ assert(tuple->is_dirty);
+ struct txm_story *story = txm_story_get(tuple);
+ bool own_change = false;
+ struct tuple *result = NULL;
+
+ while (true) {
+ if (txm_story_is_visible(story, txn, &result,
+ prepared_ok, &own_change)) {
+ break;
+ }
+ if (story->link[index].older.is_story) {
+ story = story->link[index].older.story;
+ } else {
+ result = story->link[index].older.tuple;
+ break;
+ }
+ }
+ (void)own_change; /* TODO: add conflict */
+ (void)mk_index; /* TODO: multiindex */
+ return result;
+}
+
+void
+txm_story_delete(struct txm_story *story)
+{
+ assert(story->add_stmt == NULL);
+ assert(story->del_stmt == NULL);
+
+ if (txm.traverse_all_stories == &story->in_all_stories)
+ txm.traverse_all_stories = rlist_next(txm.traverse_all_stories);
+ rlist_del(&story->in_all_stories);
+
+ mh_int_t pos = mh_history_find(txm.history, story->tuple, 0);
+ assert(pos != mh_end(txm.history));
+ mh_history_del(txm.history, pos, 0);
+
+ story->tuple->is_dirty = false;
+ tuple_unref(story->tuple);
+
+#ifndef NDEBUG
+ /* Expecting to delete fully unlinked story. */
+ for (uint32_t i = 0; i < story->index_count; i++) {
+ assert(story->link[i].newer_story == NULL);
+ assert(story->link[i].older.is_story == false);
+ assert(story->link[i].older.tuple == NULL);
+ }
+#endif
+
+ struct mempool *pool = &txm.txm_story_pool[story->index_count];
+ mempool_free(pool, story);
+}
diff --git a/src/box/txn.h b/src/box/txn.h
index 92c0116..bbe1c51 100644
--- a/src/box/txn.h
+++ b/src/box/txn.h
@@ -36,6 +36,7 @@
#include "trigger.h"
#include "fiber.h"
#include "space.h"
+#include "tuple.h"
#if defined(__cplusplus)
extern "C" {
@@ -123,6 +124,18 @@ struct txn_stmt {
struct space *space;
struct tuple *old_tuple;
struct tuple *new_tuple;
+ /**
+ * If new_tuple != NULL and this transaction was not prepared,
+ * this member holds added story of the new_tuple.
+ */
+ struct txm_story *add_story;
+ /**
+ * If new_tuple == NULL and this transaction was not prepared,
+ * this member holds added story of the old_tuple.
+ */
+ struct txm_story *del_story;
+ /** Link in txm_story::del_stmt linked list. */
+ struct txn_stmt *next_in_del_list;
/** Engine savepoint for the start of this statement. */
void *engine_savepoint;
/** Redo info: the binary log row */
@@ -284,6 +297,87 @@ struct txn {
struct rlist conflicted_by_list;
};
+/**
+ * Pointer to tuple or story.
+ */
+struct txm_story_or_tuple {
+ /** Flag whether it's a story. */
+ bool is_story;
+ union {
+ /** Pointer to story; it must be reverse linked. */
+ struct txm_story *story;
+ /** Smart pointer to tuple: the tuple is referenced if set. */
+ struct tuple *tuple;
+ };
+};
+
+/**
+ * Link that connects a txm_story with older and newer stories of the same
+ * key in index.
+ */
+struct txm_story_link {
+ /** Story that happened after this story ended. */
+ struct txm_story *newer_story;
+ /**
+ * Older story or ancient tuple (so old that its story was lost).
+ * In case of a tuple, it can also be NULL.
+ */
+ struct txm_story_or_tuple older;
+};
+
+/**
+ * A part of a history of a value in space.
+ * It's a story about a tuple, from the point it was added to space to the
+ * point when it was deleted from a space.
+ * All stories are linked into a list of stories of the same key of each index.
+ */
+struct txm_story {
+ /** The story is about this tuple. The tuple is referenced. */
+ struct tuple *tuple;
+ /**
+ * Statement that told this story. Is set to NULL when the statement's
+ * transaction becomes committed. Can also be NULL if we don't know who
+ * introduced that story.
+ */
+ struct txn_stmt *add_stmt;
+ /**
+ * Prepare sequence number of add_stmt's transaction. Is set when
+ * the transaction is prepared. Can be 0 if the transaction is
+ * in progress or we don't know who introduced that story.
+ */
+ int64_t add_psn;
+ /**
+ * Statement that ended this story. Is set to NULL when the statement's
+ * transaction becomes committed. Can also be NULL if the tuple has not
+ * been deleted yet.
+ */
+ struct txn_stmt *del_stmt;
+ /**
+ * Prepare sequence number of del_stmt's transaction. Is set when
+ * the transaction is prepared. Can be 0 if the transaction is
+ * in progress or if nobody has deleted the tuple.
+ */
+ int64_t del_psn;
+ /**
+ * List of trackers - transactions that have read this tuple.
+ */
+ struct rlist reader_list;
+ /**
+ * Link in tx_manager::all_stories
+ */
+ struct rlist in_all_stories;
+ /**
+ * Number of indexes in this space - and the count of link[].
+ */
+ uint32_t index_count;
+ /**
+ * Link with older and newer stories (and just tuples) for each
+ * index respectively.
+ */
+ struct txm_story_link link[];
+};
+
static inline bool
txn_has_flag(struct txn *txn, enum txn_flag flag)
{
@@ -652,6 +746,39 @@ tx_manager_free();
int
txm_cause_conflict(struct txn *wreaker, struct txn *victim);
+struct txm_story *
+txm_story_new(struct tuple *tuple, uint32_t index_count);
+
+int
+txm_history_link_stmt(struct txn_stmt *stmt,
+ struct tuple *old_tuple, struct tuple *new_tuple,
+ enum dup_replace_mode mode, struct tuple **result);
+
+void
+txm_history_unlink_stmt(struct txn_stmt *stmt);
+
+void
+txm_history_prepare_stmt(struct txn_stmt *stmt);
+
+ssize_t
+txm_history_release_stmt(struct txn_stmt *stmt);
+
+struct tuple *
+txm_tuple_clarify_slow(struct txn *txn, struct tuple *tuple, uint32_t index,
+ uint32_t mk_index, bool prepared_ok);
+
+static inline struct tuple *
+txm_tuple_clarify(struct txn *txn, struct tuple *tuple, uint32_t index,
+ uint32_t mk_index, bool prepared_ok)
+{
+ if (!tuple->is_dirty)
+ return tuple;
+ return txm_tuple_clarify_slow(txn, tuple, index, mk_index, prepared_ok);
+}
+
+void
+txm_story_delete(struct txm_story *story);
+
#if defined(__cplusplus)
} /* extern "C" */
#endif /* defined(__cplusplus) */
--
2.7.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [Tarantool-patches] [PATCH 14/16] tx: indexes
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
` (12 preceding siblings ...)
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 13/16] tx: introduce txm_story Aleksandr Lyapunov
@ 2020-07-08 15:14 ` Aleksandr Lyapunov
2020-07-14 23:50 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 15/16] tx: introduce point conflict tracker Aleksandr Lyapunov
` (3 subsequent siblings)
17 siblings, 1 reply; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-08 15:14 UTC (permalink / raw)
To: tarantool-patches; +Cc: v.shpilevoy
---
src/box/memtx_bitset.c | 28 ++++++++++++-------
src/box/memtx_hash.c | 60 +++++++++++++++++++++++++++++++++++------
src/box/memtx_rtree.c | 27 ++++++++++++++++---
src/box/memtx_tree.c | 73 ++++++++++++++++++++++++++++++++++++++++++++++----
4 files changed, 162 insertions(+), 26 deletions(-)
diff --git a/src/box/memtx_bitset.c b/src/box/memtx_bitset.c
index 67eaf6f..f3ab74f 100644
--- a/src/box/memtx_bitset.c
+++ b/src/box/memtx_bitset.c
@@ -40,6 +40,7 @@
#include "fiber.h"
#include "index.h"
#include "tuple.h"
+#include "txn.h"
#include "memtx_engine.h"
struct memtx_bitset_index {
@@ -198,19 +199,26 @@ bitset_index_iterator_next(struct iterator *iterator, struct tuple **ret)
assert(iterator->free == bitset_index_iterator_free);
struct bitset_index_iterator *it = bitset_index_iterator(iterator);
- size_t value = tt_bitset_iterator_next(&it->bitset_it);
- if (value == SIZE_MAX) {
- *ret = NULL;
- return 0;
- }
-
+ do {
+ size_t value = tt_bitset_iterator_next(&it->bitset_it);
+ if (value == SIZE_MAX) {
+ *ret = NULL;
+ return 0;
+ }
#ifndef OLD_GOOD_BITSET
- struct memtx_bitset_index *index =
- (struct memtx_bitset_index *)iterator->index;
- *ret = memtx_bitset_index_value_to_tuple(index, value);
+ struct memtx_bitset_index *index =
+ (struct memtx_bitset_index *)iterator->index;
+ struct tuple *tuple =
+ memtx_bitset_index_value_to_tuple(index, value);
#else /* #ifndef OLD_GOOD_BITSET */
- *ret = value_to_tuple(value);
+ struct tuple *tuple = value_to_tuple(value);
#endif /* #ifndef OLD_GOOD_BITSET */
+ uint32_t iid = iterator->index->def->iid;
+ struct txn *txn = in_txn();
+ bool is_rw = txn != NULL;
+ *ret = txm_tuple_clarify(txn, tuple, iid, 0, is_rw);
+ } while (*ret == NULL);
+
return 0;
}
diff --git a/src/box/memtx_hash.c b/src/box/memtx_hash.c
index cdd531c..b3ae60c 100644
--- a/src/box/memtx_hash.c
+++ b/src/box/memtx_hash.c
@@ -33,6 +33,7 @@
#include "fiber.h"
#include "index.h"
#include "tuple.h"
+#include "txn.h"
#include "memtx_engine.h"
#include "space.h"
#include "schema.h" /* space_cache_find() */
@@ -101,7 +102,7 @@ hash_iterator_free(struct iterator *iterator)
}
static int
-hash_iterator_ge(struct iterator *ptr, struct tuple **ret)
+hash_iterator_ge_base(struct iterator *ptr, struct tuple **ret)
{
assert(ptr->free == hash_iterator_free);
struct hash_iterator *it = (struct hash_iterator *) ptr;
@@ -113,10 +114,10 @@ hash_iterator_ge(struct iterator *ptr, struct tuple **ret)
}
static int
-hash_iterator_gt(struct iterator *ptr, struct tuple **ret)
+hash_iterator_gt_base(struct iterator *ptr, struct tuple **ret)
{
assert(ptr->free == hash_iterator_free);
- ptr->next = hash_iterator_ge;
+ ptr->next = hash_iterator_ge_base;
struct hash_iterator *it = (struct hash_iterator *) ptr;
struct memtx_hash_index *index = (struct memtx_hash_index *)ptr->index;
struct tuple **res = light_index_iterator_get_and_next(&index->hash_table,
@@ -128,6 +129,31 @@ hash_iterator_gt(struct iterator *ptr, struct tuple **ret)
return 0;
}
+#define WRAP_ITERATOR_METHOD(name) \
+static int \
+name(struct iterator *iterator, struct tuple **ret) \
+{ \
+ struct txn *txn = in_txn(); \
+ bool is_rw = txn != NULL; \
+ uint32_t iid = iterator->index->def->iid; \
+ bool first = true; \
+ do { \
+ int rc = first ? name##_base(iterator, ret) \
+ : hash_iterator_ge_base(iterator, ret); \
+ if (rc != 0 || *ret == NULL) \
+ return rc; \
+ first = false; \
+ *ret = txm_tuple_clarify(txn, *ret, iid, 0, is_rw); \
+ } while (*ret == NULL); \
+ return 0; \
+} \
+struct forgot_to_add_semicolon
+
+WRAP_ITERATOR_METHOD(hash_iterator_ge);
+WRAP_ITERATOR_METHOD(hash_iterator_gt);
+
+#undef WRAP_ITERATOR_METHOD
+
static int
hash_iterator_eq_next(MAYBE_UNUSED struct iterator *it, struct tuple **ret)
{
@@ -136,12 +162,25 @@ hash_iterator_eq_next(MAYBE_UNUSED struct iterator *it, struct tuple **ret)
}
static int
-hash_iterator_eq(struct iterator *it, struct tuple **ret)
+hash_iterator_eq(struct iterator *ptr, struct tuple **ret)
{
- it->next = hash_iterator_eq_next;
- return hash_iterator_ge(it, ret);
+ ptr->next = hash_iterator_eq_next;
+ assert(ptr->free == hash_iterator_free);
+ struct hash_iterator *it = (struct hash_iterator *) ptr;
+ struct memtx_hash_index *index = (struct memtx_hash_index *)ptr->index;
+ struct tuple **res = light_index_iterator_get_and_next(&index->hash_table,
+ &it->iterator);
+ if (res == NULL) {
+ *ret = NULL;
+ return 0;
+ }
+ struct txn *txn = in_txn();
+ bool is_rw = txn != NULL;
+ *ret = txm_tuple_clarify(txn, *res, ptr->index->def->iid, 0, is_rw);
+ return 0;
}
+
/* }}} */
/* {{{ MemtxHash -- implementation of all hashes. **********************/
@@ -282,8 +321,13 @@ memtx_hash_index_get(struct index *base, const char *key,
*result = NULL;
uint32_t h = key_hash(key, base->def->key_def);
uint32_t k = light_index_find_key(&index->hash_table, h, key);
- if (k != light_index_end)
- *result = light_index_get(&index->hash_table, k);
+ if (k != light_index_end) {
+ struct tuple *tuple = light_index_get(&index->hash_table, k);
+ uint32_t iid = base->def->iid;
+ struct txn *txn = in_txn();
+ bool is_rw = txn != NULL;
+ *result = txm_tuple_clarify(txn, tuple, iid, 0, is_rw);
+ }
return 0;
}
diff --git a/src/box/memtx_rtree.c b/src/box/memtx_rtree.c
index 612fcb2..992a422 100644
--- a/src/box/memtx_rtree.c
+++ b/src/box/memtx_rtree.c
@@ -40,6 +40,7 @@
#include "trivia/util.h"
#include "tuple.h"
+#include "txn.h"
#include "space.h"
#include "memtx_engine.h"
@@ -148,7 +149,15 @@ static int
index_rtree_iterator_next(struct iterator *i, struct tuple **ret)
{
struct index_rtree_iterator *itr = (struct index_rtree_iterator *)i;
- *ret = (struct tuple *)rtree_iterator_next(&itr->impl);
+ do {
+ *ret = (struct tuple *) rtree_iterator_next(&itr->impl);
+ if (*ret == NULL)
+ break;
+ uint32_t iid = i->index->def->iid;
+ struct txn *txn = in_txn();
+ bool is_rw = txn != NULL;
+ *ret = txm_tuple_clarify(txn, *ret, iid, 0, is_rw);
+ } while (*ret == NULL);
return 0;
}
@@ -213,8 +222,20 @@ memtx_rtree_index_get(struct index *base, const char *key,
unreachable();
*result = NULL;
- if (rtree_search(&index->tree, &rect, SOP_OVERLAPS, &iterator))
- *result = (struct tuple *)rtree_iterator_next(&iterator);
+ if (!rtree_search(&index->tree, &rect, SOP_OVERLAPS, &iterator)) {
+ rtree_iterator_destroy(&iterator);
+ return 0;
+ }
+ do {
+ struct tuple *tuple = (struct tuple *)
+ rtree_iterator_next(&iterator);
+ if (tuple == NULL)
+ break;
+ uint32_t iid = base->def->iid;
+ struct txn *txn = in_txn();
+ bool is_rw = txn != NULL;
+ *result = txm_tuple_clarify(txn, tuple, iid, 0, is_rw);
+ } while (*result == NULL);
rtree_iterator_destroy(&iterator);
return 0;
}
diff --git a/src/box/memtx_tree.c b/src/box/memtx_tree.c
index 76ff3fc..b77c85c 100644
--- a/src/box/memtx_tree.c
+++ b/src/box/memtx_tree.c
@@ -37,6 +37,7 @@
#include "fiber.h"
#include "key_list.h"
#include "tuple.h"
+#include "txn.h"
#include <third_party/qsort_arg.h>
#include <small/mempool.h>
@@ -175,7 +176,7 @@ tree_iterator_dummie(struct iterator *iterator, struct tuple **ret)
}
static int
-tree_iterator_next(struct iterator *iterator, struct tuple **ret)
+tree_iterator_next_base(struct iterator *iterator, struct tuple **ret)
{
struct memtx_tree_index *index =
(struct memtx_tree_index *)iterator->index;
@@ -205,7 +206,7 @@ tree_iterator_next(struct iterator *iterator, struct tuple **ret)
}
static int
-tree_iterator_prev(struct iterator *iterator, struct tuple **ret)
+tree_iterator_prev_base(struct iterator *iterator, struct tuple **ret)
{
struct memtx_tree_index *index =
(struct memtx_tree_index *)iterator->index;
@@ -234,7 +235,7 @@ tree_iterator_prev(struct iterator *iterator, struct tuple **ret)
}
static int
-tree_iterator_next_equal(struct iterator *iterator, struct tuple **ret)
+tree_iterator_next_equal_base(struct iterator *iterator, struct tuple **ret)
{
struct memtx_tree_index *index =
(struct memtx_tree_index *)iterator->index;
@@ -270,7 +271,7 @@ tree_iterator_next_equal(struct iterator *iterator, struct tuple **ret)
}
static int
-tree_iterator_prev_equal(struct iterator *iterator, struct tuple **ret)
+tree_iterator_prev_equal_base(struct iterator *iterator, struct tuple **ret)
{
struct memtx_tree_index *index =
(struct memtx_tree_index *)iterator->index;
@@ -304,6 +305,45 @@ tree_iterator_prev_equal(struct iterator *iterator, struct tuple **ret)
return 0;
}
+#define WRAP_ITERATOR_METHOD(name) \
+static int \
+name(struct iterator *iterator, struct tuple **ret) \
+{ \
+ struct memtx_tree *tree = \
+ &((struct memtx_tree_index *)iterator->index)->tree; \
+ struct tree_iterator *it = tree_iterator(iterator); \
+ struct memtx_tree_iterator *ti = &it->tree_iterator; \
+ uint32_t iid = iterator->index->def->iid; \
+ bool is_multikey = iterator->index->def->key_def->is_multikey; \
+ struct txn *txn = in_txn(); \
+ bool is_rw = txn != NULL; \
+ do { \
+ int rc = name##_base(iterator, ret); \
+ if (rc != 0 || *ret == NULL) \
+ return rc; \
+ uint32_t mk_index = 0; \
+ if (is_multikey) { \
+ struct memtx_tree_data *check = \
+ memtx_tree_iterator_get_elem(tree, ti); \
+ assert(check != NULL); \
+ mk_index = check->hint; \
+ } \
+ *ret = txm_tuple_clarify(txn, *ret, iid, mk_index, is_rw); \
+ } while (*ret == NULL); \
+ tuple_unref(it->current.tuple); \
+ it->current.tuple = *ret; \
+ tuple_ref(it->current.tuple); \
+ return 0; \
+} \
+struct forgot_to_add_semicolon
+
+WRAP_ITERATOR_METHOD(tree_iterator_next);
+WRAP_ITERATOR_METHOD(tree_iterator_prev);
+WRAP_ITERATOR_METHOD(tree_iterator_next_equal);
+WRAP_ITERATOR_METHOD(tree_iterator_prev_equal);
+
+#undef WRAP_ITERATOR_METHOD
+
static void
tree_iterator_set_next_method(struct tree_iterator *it)
{
@@ -388,6 +428,21 @@ tree_iterator_start(struct iterator *iterator, struct tuple **ret)
tuple_ref(*ret);
it->current = *res;
tree_iterator_set_next_method(it);
+
+ uint32_t iid = iterator->index->def->iid;
+ bool is_multikey = iterator->index->def->key_def->is_multikey;
+ struct txn *txn = in_txn();
+ bool is_rw = txn != NULL;
+ uint32_t mk_index = is_multikey ? res->hint : 0;
+ *ret = txm_tuple_clarify(txn, *ret, iid, mk_index, is_rw);
+ if (*ret == NULL) {
+ return iterator->next(iterator, ret);
+ } else {
+ tuple_unref(it->current.tuple);
+ it->current.tuple = *ret;
+ tuple_ref(it->current.tuple);
+ }
+
return 0;
}
@@ -539,7 +594,15 @@ memtx_tree_index_get(struct index *base, const char *key,
key_data.part_count = part_count;
key_data.hint = key_hint(key, part_count, cmp_def);
struct memtx_tree_data *res = memtx_tree_find(&index->tree, &key_data);
- *result = res != NULL ? res->tuple : NULL;
+ if (res == NULL) {
+ *result = NULL;
+ return 0;
+ }
+ struct txn *txn = in_txn();
+ bool is_rw = txn != NULL;
+ uint32_t mk_index = base->def->key_def->is_multikey ? res->hint : 0;
+ *result = txm_tuple_clarify(txn, res->tuple, base->def->iid,
+ mk_index, is_rw);
return 0;
}
--
2.7.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [Tarantool-patches] [PATCH 15/16] tx: introduce point conflict tracker
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
` (13 preceding siblings ...)
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 14/16] tx: indexes Aleksandr Lyapunov
@ 2020-07-08 15:14 ` Aleksandr Lyapunov
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 16/16] tx: use new tx manager in memtx Aleksandr Lyapunov
` (2 subsequent siblings)
17 siblings, 0 replies; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-08 15:14 UTC (permalink / raw)
To: tarantool-patches; +Cc: v.shpilevoy
---
src/box/txn.c | 85 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
src/box/txn.h | 4 +++
2 files changed, 88 insertions(+), 1 deletion(-)
diff --git a/src/box/txn.c b/src/box/txn.c
index 78c542f..7724637 100644
--- a/src/box/txn.c
+++ b/src/box/txn.c
@@ -76,6 +76,13 @@ struct tx_manager
/** The one and only instance of tx_manager. */
static struct tx_manager txm;
+struct tx_read_tracker {
+ struct txn *reader;
+ struct txm_story *story;
+ struct rlist in_reader_list;
+ struct rlist in_read_set;
+};
+
struct tx_conflict_tracker {
struct txn *wreaker;
struct txn *victim;
@@ -238,6 +245,7 @@ txn_new(void)
}
assert(region_used(®ion) == sizeof(*txn));
txn->region = region;
+ rlist_create(&txn->read_set);
rlist_create(&txn->conflict_list);
rlist_create(&txn->conflicted_by_list);
return txn;
@@ -249,6 +257,14 @@ txn_new(void)
inline static void
txn_free(struct txn *txn)
{
+ struct tx_read_tracker *tracker, *tmp;
+ rlist_foreach_entry_safe(tracker, &txn->read_set,
+ in_read_set, tmp) {
+ rlist_del(&tracker->in_reader_list);
+ rlist_del(&tracker->in_read_set);
+ }
+ assert(rlist_empty(&txn->read_set));
+
struct tx_conflict_tracker *entry, *next;
rlist_foreach_entry_safe(entry, &txn->conflict_list,
in_conflict_list, next) {
@@ -679,7 +695,6 @@ txn_prepare(struct txn *txn)
rlist_del(&entry->in_conflicted_by_list);
}
-
trigger_clear(&txn->fiber_on_stop);
if (!txn_has_flag(txn, TXN_CAN_YIELD))
trigger_clear(&txn->fiber_on_yield);
@@ -1819,3 +1834,71 @@ txm_story_delete(struct txm_story *story)
struct mempool *pool = &txm.txm_story_pool[story->index_count];
mempool_free(pool, story);
}
+
+int
+tx_track_read(struct txn *txn, struct tuple *tuple, uint32_t index_count)
+{
+ if (tuple == NULL)
+ return 0;
+ if (txn == NULL)
+ return 0;
+
+ struct txm_story *story;
+ struct tx_read_tracker *tracker = NULL;
+
+ if (!tuple->is_dirty) {
+ story = txm_story_new(tuple, index_count);
+ if (story == NULL)
+ return -1;
+ size_t sz;
+ tracker = region_alloc_object(&txn->region,
+ struct tx_read_tracker, &sz);
+ if (tracker == NULL) {
+ diag_set(OutOfMemory, sz, "tx region", "read_tracker");
+ txm_story_delete(story);
+ return -1;
+ }
+ tracker->reader = txn;
+ tracker->story = story;
+ rlist_add(&story->reader_list, &tracker->in_reader_list);
+ rlist_add(&txn->read_set, &tracker->in_read_set);
+ return 0;
+ }
+ story = txm_story_get(tuple);
+
+ struct rlist *r1 = story->reader_list.next;
+ struct rlist *r2 = txn->read_set.next;
+ while (r1 != &story->reader_list && r2 != &txn->read_set) {
+ tracker = rlist_entry(r1, struct tx_read_tracker,
+ in_reader_list);
+ assert(tracker->story == story);
+ if (tracker->reader == txn)
+ break;
+ tracker = rlist_entry(r2, struct tx_read_tracker,
+ in_read_set);
+ assert(tracker->reader == txn);
+ if (tracker->story == story)
+ break;
+ tracker = NULL;
+ r1 = r1->next;
+ r2 = r2->next;
+ }
+ if (tracker != NULL) {
+ /* Move to the beginning of the lists for faster further lookups. */
+ rlist_del(&tracker->in_reader_list);
+ rlist_del(&tracker->in_read_set);
+ } else {
+ size_t sz;
+ tracker = region_alloc_object(&txn->region,
+ struct tx_read_tracker, &sz);
+ if (tracker == NULL) {
+ diag_set(OutOfMemory, sz, "tx region", "read_tracker");
+ return -1;
+ }
+ tracker->reader = txn;
+ tracker->story = story;
+ }
+ rlist_add(&story->reader_list, &tracker->in_reader_list);
+ rlist_add(&txn->read_set, &tracker->in_read_set);
+ return 0;
+}
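The simultaneous walk over `reader_list` and `read_set` in `tx_track_read()` above bounds the lookup by the shorter of the two lists. The idea can be sketched in isolation with plain singly-linked lists (a simplified model; the real code uses intrusive rlist and matches a tracker by both its story and its reader):

```c
#include <assert.h>
#include <stddef.h>

/* Plain singly-linked list node standing in for the intrusive
 * rlist entries used by the real tx_read_tracker. */
struct node {
    int key;
    struct node *next;
};

/*
 * Walk two lists in lock step, looking for the target in either of
 * them. If an element is present in both lists (as a tx_read_tracker
 * is present in both the story's reader_list and the transaction's
 * read_set), the scan finds it within min(len(a), len(b)) steps.
 */
static const struct node *
find_in_either(const struct node *a, const struct node *b, int target)
{
    while (a != NULL && b != NULL) {
        if (a->key == target)
            return a;
        if (b->key == target)
            return b;
        a = a->next;
        b = b->next;
    }
    return NULL; /* not found within the shorter list's length */
}
```

If the scan terminates without a match, the real code falls through to allocating a new tracker, which is exactly the `tracker == NULL` branch above.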
diff --git a/src/box/txn.h b/src/box/txn.h
index bbe1c51..64cbe66 100644
--- a/src/box/txn.h
+++ b/src/box/txn.h
@@ -293,6 +293,7 @@ struct txn {
uint32_t fk_deferred_count;
/** List of savepoints to find savepoint by name. */
struct rlist savepoints;
+ struct rlist read_set;
struct rlist conflict_list;
struct rlist conflicted_by_list;
};
@@ -779,6 +780,9 @@ txm_tuple_clarify(struct txn *txn, struct tuple* tuple, uint32_t index,
void
txm_story_delete(struct txm_story *story);
+int
+tx_track_read(struct txn *txn, struct tuple *tuple, uint32_t index_count);
+
#if defined(__cplusplus)
} /* extern "C" */
#endif /* defined(__cplusplus) */
--
2.7.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [Tarantool-patches] [PATCH 16/16] tx: use new tx manager in memtx
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
` (14 preceding siblings ...)
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 15/16] tx: introduce point conflict tracker Aleksandr Lyapunov
@ 2020-07-08 15:14 ` Aleksandr Lyapunov
2020-07-14 23:45 ` Vladislav Shpilevoy
2020-07-12 17:19 ` [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Vladislav Shpilevoy
2020-07-14 23:47 ` Vladislav Shpilevoy
17 siblings, 1 reply; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-08 15:14 UTC (permalink / raw)
To: tarantool-patches; +Cc: v.shpilevoy
---
src/box/memtx_engine.c | 49 ++++++++++++++++++++++++++++++++++++++++++-------
src/box/memtx_space.c | 12 ++++++++++++
src/box/txn.c | 3 +++
3 files changed, 57 insertions(+), 7 deletions(-)
diff --git a/src/box/memtx_engine.c b/src/box/memtx_engine.c
index dfd6fce..b5c5f03 100644
--- a/src/box/memtx_engine.c
+++ b/src/box/memtx_engine.c
@@ -339,6 +339,35 @@ memtx_engine_begin(struct engine *engine, struct txn *txn)
return 0;
}
+static int
+memtx_engine_prepare(struct engine *engine, struct txn *txn)
+{
+ (void)engine;
+ struct txn_stmt *stmt;
+ stailq_foreach_entry(stmt, &txn->stmts, next) {
+ if (stmt->add_story != NULL || stmt->del_story != NULL)
+ txm_history_prepare_stmt(stmt);
+ }
+ return 0;
+}
+
+static void
+memtx_engine_commit(struct engine *engine, struct txn *txn)
+{
+ (void)engine;
+ struct txn_stmt *stmt;
+ stailq_foreach_entry(stmt, &txn->stmts, next) {
+ if (stmt->add_story != NULL || stmt->del_story != NULL)
+ {
+ ssize_t bsize = txm_history_release_stmt(stmt);
+ assert(stmt->space->engine == engine);
+ struct memtx_space *mspace =
+ (struct memtx_space *)stmt->space;
+ mspace->bsize += bsize;
+ }
+ }
+}
+
static void
memtx_engine_rollback_statement(struct engine *engine, struct txn *txn,
struct txn_stmt *stmt)
@@ -348,13 +377,17 @@ memtx_engine_rollback_statement(struct engine *engine, struct txn *txn,
if (stmt->old_tuple == NULL && stmt->new_tuple == NULL)
return;
struct space *space = stmt->space;
- struct memtx_space *memtx_space = (struct memtx_space *)space;
+ struct memtx_space* memtx_space = (struct memtx_space*) space;
uint32_t index_count;
/* Only roll back the changes if they were made. */
if (stmt->engine_savepoint == NULL)
return;
+ if (memtx_space->replace == memtx_space_replace_all_keys &&
+ (stmt->add_story != NULL || stmt->del_story != NULL))
+ return txm_history_unlink_stmt(stmt);
+
if (memtx_space->replace == memtx_space_replace_all_keys)
index_count = space->index_count;
else if (memtx_space->replace == memtx_space_replace_primary_key)
@@ -362,12 +395,14 @@ memtx_engine_rollback_statement(struct engine *engine, struct txn *txn,
else
panic("transaction rolled back during snapshot recovery");
- for (uint32_t i = 0; i < index_count; i++) {
- struct tuple *unused;
- struct index *index = space->index[i];
+ for (uint32_t i = 0; i < index_count; i++)
+ {
+ struct tuple* unused;
+ struct index* index = space->index[i];
/* Rollback must not fail. */
if (index_replace(index, stmt->new_tuple, stmt->old_tuple,
- DUP_INSERT, &unused) != 0) {
+ DUP_INSERT, &unused) != 0)
+ {
diag_log();
unreachable();
panic("failed to rollback change");
@@ -914,8 +949,8 @@ static const struct engine_vtab memtx_engine_vtab = {
/* .complete_join = */ memtx_engine_complete_join,
/* .begin = */ memtx_engine_begin,
/* .begin_statement = */ generic_engine_begin_statement,
- /* .prepare = */ generic_engine_prepare,
- /* .commit = */ generic_engine_commit,
+ /* .prepare = */ memtx_engine_prepare,
+ /* .commit = */ memtx_engine_commit,
/* .rollback_statement = */ memtx_engine_rollback_statement,
/* .rollback = */ generic_engine_rollback,
/* .switch_to_ro = */ generic_engine_switch_to_ro,
diff --git a/src/box/memtx_space.c b/src/box/memtx_space.c
index 5820c40..28552a9 100644
--- a/src/box/memtx_space.c
+++ b/src/box/memtx_space.c
@@ -260,6 +260,18 @@ memtx_space_replace_all_keys(struct space *space, struct tuple *old_tuple,
if (pk == NULL)
return -1;
assert(pk->def->opts.is_unique);
+
+ struct txn *txn = in_txn();
+ struct txn_stmt *stmt = txn == NULL ? NULL : txn_current_stmt(txn);
+ if (stmt != NULL) {
+ return txm_history_link_stmt(txn_current_stmt(txn),
+ old_tuple, new_tuple, mode,
+ result);
+ } else {
+ /** Ephemeral space */
+ assert(space->def->id == 0);
+ }
+
/*
* If old_tuple is not NULL, the index has to
* find and delete it, or return an error.
diff --git a/src/box/txn.c b/src/box/txn.c
index 7724637..ed3a994 100644
--- a/src/box/txn.c
+++ b/src/box/txn.c
@@ -171,6 +171,9 @@ txn_stmt_new(struct region *region)
static inline void
txn_stmt_destroy(struct txn_stmt *stmt)
{
+ if (stmt->add_story != NULL || stmt->del_story != NULL)
+ txm_history_unlink_stmt(stmt);
+
if (stmt->old_tuple != NULL)
tuple_unref(stmt->old_tuple);
if (stmt->new_tuple != NULL)
--
2.7.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Tarantool-patches] [PATCH 04/16] vinyl: rename tx_manager -> vy_tx_manager
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 04/16] vinyl: rename tx_manager -> vy_tx_manager Aleksandr Lyapunov
@ 2020-07-12 17:14 ` Vladislav Shpilevoy
0 siblings, 0 replies; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-12 17:14 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
My comment from the previous version still applies.
The old name is still used in one place:
src/box/vy_scheduler.h
151: /** List of read views, see tx_manager::read_views. */
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Tarantool-patches] [PATCH 07/16] tx: save preserve old tuple flag in txn_stmt
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 07/16] tx: save preserve old tuple flag in txn_stmt Aleksandr Lyapunov
@ 2020-07-12 17:14 ` Vladislav Shpilevoy
2020-07-14 23:46 ` Vladislav Shpilevoy
1 sibling, 0 replies; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-12 17:14 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
Ditto. See the previous review.
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Tarantool-patches] [PATCH 13/16] tx: introduce txm_story
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 13/16] tx: introduce txm_story Aleksandr Lyapunov
@ 2020-07-12 17:14 ` Vladislav Shpilevoy
2020-07-14 23:46 ` Vladislav Shpilevoy
1 sibling, 0 replies; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-12 17:14 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
I am 100% lost starting from this commit.
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Tarantool-patches] [PATCH 06/16] tx: add TX status
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 06/16] tx: add TX status Aleksandr Lyapunov
@ 2020-07-12 17:15 ` Vladislav Shpilevoy
0 siblings, 0 replies; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-12 17:15 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
In my previous review I said:
Seems like part of https://github.com/tarantool/tarantool/issues/5108.
It means that I expect you either to say that I am wrong
and it has nothing to do with 5108, or to add "Part of #5108".
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Tarantool-patches] [PATCH 11/16] tx: introduce conflict tracker
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 11/16] tx: introduce conflict tracker Aleksandr Lyapunov
@ 2020-07-12 17:15 ` Vladislav Shpilevoy
2020-07-14 23:51 ` Vladislav Shpilevoy
1 sibling, 0 replies; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-12 17:15 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
Sorry, but I don't think I understood anything. No
comments, no commit message.
txm_cause_conflict() is an absolute black hole for me;
I don't understand what is happening.
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Tarantool-patches] [PATCH 10/16] tx: introduce txn_stmt_destroy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 10/16] tx: introduce txn_stmt_destroy Aleksandr Lyapunov
@ 2020-07-12 17:15 ` Vladislav Shpilevoy
0 siblings, 0 replies; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-12 17:15 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
Pushed to master as obvious.
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Tarantool-patches] [PATCH 05/16] tx: save txn in txn_stmt
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 05/16] tx: save txn in txn_stmt Aleksandr Lyapunov
@ 2020-07-12 17:15 ` Vladislav Shpilevoy
0 siblings, 0 replies; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-12 17:15 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
Well, we can review this patch a second time, but the
question from the previous version remains: why is it
needed in txn_stmt?
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Tarantool-patches] [PATCH 02/16] Check data_offset overflow in struct tuple
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 02/16] Check data_offset overflow in struct tuple Aleksandr Lyapunov
@ 2020-07-12 17:15 ` Vladislav Shpilevoy
2020-07-14 17:09 ` Aleksandr Lyapunov
0 siblings, 1 reply; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-12 17:15 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
> diff --git a/test/box/huge_field_map_long.test.lua b/test/box/huge_field_map_long.test.lua
> new file mode 100644
> index 0000000..6415615
> --- /dev/null
> +++ b/test/box/huge_field_map_long.test.lua
> @@ -0,0 +1,28 @@
> +env = require('test_run')
> +test_run = env.new()
> +
> +s = box.schema.space.create('test', {engine = 'memtx'})
> +test_run:cmd("setopt delimiter ';'")
> +function test()
> + local t = {}
> + local k = {}
> + for i = 1,128 do
> + local parts = {}
> + for j = 0,127 do
> + table.insert(parts, {i * 128 - j, 'uint'})
> + table.insert(t, 1)
> + end
> + if i == 1 then k = table.deepcopy(t) end
> + s:create_index('test'..i, {parts = parts})
> + if i % 16 == 0 then
> + s:replace(t)
> + s:delete(k)
> + end
> + end
> +end;
> +test_run:cmd("setopt delimiter ''");
So this test would crash even without multikeys? With just too many
field offsets? How long does it run?
And why do you need to try so many combinations, if you know which one was
going to crash before the patch?
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Tarantool-patches] [PATCH 03/16] tx: introduce dirty tuples
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 03/16] tx: introduce dirty tuples Aleksandr Lyapunov
@ 2020-07-12 17:15 ` Vladislav Shpilevoy
2020-07-12 22:24 ` Nikita Pettik
0 siblings, 1 reply; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-12 17:15 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
I still have the same question as in the previous review:
How are the dirty tuples going to be used? Why don't we have such a flag
for vinyl tuples?
> diff --git a/src/box/tuple.h b/src/box/tuple.h
> index 9a88772..4752323 100644
> --- a/src/box/tuple.h
> +++ b/src/box/tuple.h
> @@ -319,7 +319,13 @@ struct PACKED tuple
> /**
> * Offset to the MessagePack from the begin of the tuple.
> */
> - uint16_t data_offset;
> + uint16_t data_offset : 15;
> + /**
> + * The tuple (if it's found in index for example) could be invisible
> + * for current transactions. The flag means that the tuple must
> + * be clarified by transaction engine.
What is the current transaction? Assume we can yield now, and there are
some transactions. A new transaction appears and adds a tuple to an index.
Will it be visible to newer transactions? They are not 'current'
in terms of when the tuple was added. Or will it be visible to this
transaction only?
> + */
> + bool is_dirty : 1;
> /**
> * Engine specific fields and offsets array concatenated
> * with MessagePack fields array.
> @@ -1081,8 +1087,10 @@ tuple_unref(struct tuple *tuple)
> assert(tuple->refs - 1 >= 0);
> if (unlikely(tuple->is_bigref))
> tuple_unref_slow(tuple);
> - else if (--tuple->refs == 0)
> + else if (--tuple->refs == 0) {
> + assert(!tuple->is_dirty);
Why? Is something supposed to clear this flag after it is set?
> tuple_delete(tuple);
> + }
> }
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Tarantool-patches] [PATCH 12/16] introduce tuple smart pointers
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 12/16] introduce tuple smart pointers Aleksandr Lyapunov
@ 2020-07-12 17:16 ` Vladislav Shpilevoy
0 siblings, 0 replies; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-12 17:16 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
This looks like overkill. You use these
'smart pointers' literally twice: two functions, each
used once. Why do you need all of that? Is it too hard
to add explicit tuple_ref/unref in these 2 places?
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
` (15 preceding siblings ...)
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 16/16] tx: use new tx manager in memtx Aleksandr Lyapunov
@ 2020-07-12 17:19 ` Vladislav Shpilevoy
2020-07-14 23:47 ` Vladislav Shpilevoy
17 siblings, 0 replies; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-12 17:19 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
It would help to have a description of the engine, or at least
of its expected behaviour. It is hard to review
something when you can't tell how it should work.
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Tarantool-patches] [PATCH 03/16] tx: introduce dirty tuples
2020-07-12 17:15 ` Vladislav Shpilevoy
@ 2020-07-12 22:24 ` Nikita Pettik
0 siblings, 0 replies; 49+ messages in thread
From: Nikita Pettik @ 2020-07-12 22:24 UTC (permalink / raw)
To: Vladislav Shpilevoy; +Cc: tarantool-patches
On 12 Jul 19:15, Vladislav Shpilevoy wrote:
> I still have the same question as in the previous review:
>
> How are the dirty tuples going to be used? Why don't we have such a flag
> for vinyl tuples?
>
Alexander explained to me a bit how the new engine is supposed to work.
I believe he is going to publish some docs with the details soon.
As I understand it, one of the main goals of the new engine is to avoid
performance degradation when a tuple is not modified during a transaction.
A 'dirty' tuple is a tuple that has a 'story': we have to seek through its
changes to find the proper version (depending on existing read views etc.).
A 'clean' tuple is an ordinary tuple: if we read such a tuple, we return immediately.
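A minimal sketch of the fast-path / slow-path split described above. This is a simplified model: the visibility rule based on prepare sequence numbers is my stand-in, not Tarantool's exact rule, and `struct story` / `struct tuple` here only mimic the shape of the real structures.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified model: a story records, via prepare sequence numbers,
 * when a tuple version appeared and when it was deleted. */
struct story {
    int64_t add_psn;     /* 0 means not yet prepared */
    int64_t del_psn;     /* 0 means not deleted */
    struct story *older; /* previous version of the same key */
};

struct tuple {
    bool is_dirty;       /* has an attached story chain */
    struct story *story;
};

/* A reader sees a version if it was prepared at or before the
 * reader's snapshot psn and not yet deleted at that psn. */
static bool
story_visible(const struct story *s, int64_t reader_psn)
{
    return s->add_psn != 0 && s->add_psn <= reader_psn &&
           (s->del_psn == 0 || s->del_psn > reader_psn);
}

/* Clean tuples return immediately (fast path); dirty tuples have
 * to seek through their story chain (slow path). */
static bool
tuple_is_visible(const struct tuple *t, int64_t reader_psn)
{
    if (!t->is_dirty)
        return true;
    for (const struct story *s = t->story; s != NULL; s = s->older)
        if (story_visible(s, reader_psn))
            return true;
    return false;
}
```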
> > diff --git a/src/box/tuple.h b/src/box/tuple.h
> > index 9a88772..4752323 100644
> > --- a/src/box/tuple.h
> > +++ b/src/box/tuple.h
> > @@ -319,7 +319,13 @@ struct PACKED tuple
> > /**
> > * Offset to the MessagePack from the begin of the tuple.
> > */
> > - uint16_t data_offset;
> > + uint16_t data_offset : 15;
> > + /**
> > + * The tuple (if it's found in index for example) could be invisible
> > + * for current transactions. The flag means that the tuple must
> > + * be clarified by transaction engine.
>
> What is current transaction? Assume, we can yield now. And there are
> some transactions. A new transaction appears and adds a tuple to an index.
> Will be it be visible for newer transactions? Since they are not 'current'
> in terms of when the tuple was added. Or will it be visible to this
> transaction only?
>
> > + */
> > + bool is_dirty : 1;
> > /**
> > * Engine specific fields and offsets array concatenated
> > * with MessagePack fields array.
> > @@ -1081,8 +1087,10 @@ tuple_unref(struct tuple *tuple)
> > assert(tuple->refs - 1 >= 0);
> > if (unlikely(tuple->is_bigref))
> > tuple_unref_slow(tuple);
> > - else if (--tuple->refs == 0)
> > + else if (--tuple->refs == 0) {
> > + assert(!tuple->is_dirty);
>
> Why? Is something supposed to clear this flag after it is set?
>
> > tuple_delete(tuple);
> > + }
> > }
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Tarantool-patches] [PATCH 02/16] Check data_offset overflow in struct tuple
2020-07-12 17:15 ` Vladislav Shpilevoy
@ 2020-07-14 17:09 ` Aleksandr Lyapunov
2020-07-14 22:48 ` Vladislav Shpilevoy
0 siblings, 1 reply; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-14 17:09 UTC (permalink / raw)
To: Vladislav Shpilevoy, tarantool-patches
Hi! Thanks for the review! See my replies below.
On 12.07.2020 20:15, Vladislav Shpilevoy wrote:
>
> So this test would crash even without multikeys? With just too many
> field offsets? How long does it run?
Yes, the bug was introduced a long time before multikeys.
The test runs 1.3 seconds on my machine, and my machine is quite good.
>
> And why do you need to try so many combinations, if you know which one was
> going to crash before the patch?
I want the test to be more universal. For example, in further commits I grab
one bit from data_offset for the is_dirty flag, and the test still checks for
the possible crash even though the maximum allowed data_offset has changed.
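The bit stolen from data_offset halves its range, which is what the overflow check must account for. A minimal sketch of the new limit (the name `DATA_OFFSET_MAX` is hypothetical; gcc/clang accept uint16_t and bool bit-fields, while strictly portable C would use unsigned int):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mirrors the patch: data_offset loses one bit to is_dirty, so its
 * maximum value drops from 65535 to 32767. */
struct tuple_header {
    uint16_t data_offset : 15;
    bool is_dirty : 1;
};

/* Hypothetical name for the limit the overflow check must use. */
#define DATA_OFFSET_MAX ((1u << 15) - 1)

/* Reject tuples whose metadata (header plus field map) would not
 * fit into the 15-bit offset. */
static bool
data_offset_fits(uint32_t offset)
{
    return offset <= DATA_OFFSET_MAX;
}
```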
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Tarantool-patches] [PATCH 02/16] Check data_offset overflow in struct tuple
2020-07-14 17:09 ` Aleksandr Lyapunov
@ 2020-07-14 22:48 ` Vladislav Shpilevoy
0 siblings, 0 replies; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-14 22:48 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
>> So this test would crash even without multikeys? With just too many
>> field offsets? How long does it run?
> Yes, the bug was introduced a long time before multikeys.
> The test runs 1.3 seconds on my machine, and my machine is quite good.
>>
>> And why do you need to try so many combinations, if you know which one was
>> going to crash before the patch?
>
> I want the test to be more universal. For example, in further commits I grab
> one bit from data_offset for the is_dirty flag, and the test still checks for
> the possible crash even though the maximum allowed data_offset has changed.
But wouldn't it still be just as universal if you simply took the biggest
possible meta size? If we someday lose the check, it will catch the
problem anyway. With a long test I am afraid its failure won't be noticed, since
long tests are not run locally by anybody. In CI they are run only with coverage.
Anyway, I am ok with the patch.
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Tarantool-patches] [PATCH 16/16] tx: use new tx manager in memtx
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 16/16] tx: use new tx manager in memtx Aleksandr Lyapunov
@ 2020-07-14 23:45 ` Vladislav Shpilevoy
2020-07-15 10:32 ` Aleksandr Lyapunov
0 siblings, 1 reply; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-14 23:45 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
Thanks for the patch!
See 6 comments below.
On 08.07.2020 17:14, Aleksandr Lyapunov wrote:
> ---
> src/box/memtx_engine.c | 49 ++++++++++++++++++++++++++++++++++++++++++-------
> src/box/memtx_space.c | 12 ++++++++++++
> src/box/txn.c | 3 +++
> 3 files changed, 57 insertions(+), 7 deletions(-)
>
> diff --git a/src/box/memtx_engine.c b/src/box/memtx_engine.c
> index dfd6fce..b5c5f03 100644
> --- a/src/box/memtx_engine.c
> +++ b/src/box/memtx_engine.c
> @@ -339,6 +339,35 @@ memtx_engine_begin(struct engine *engine, struct txn *txn)
> return 0;
> }
>
> +static int
> +memtx_engine_prepare(struct engine *engine, struct txn *txn)
> +{
> + (void)engine;
> + struct txn_stmt *stmt;
> + stailq_foreach_entry(stmt, &txn->stmts, next) {
> + if (stmt->add_story != NULL || stmt->del_story != NULL)
1. Seems like it is worth moving this check into a static inline
function in the txn.h header. You use it a lot.
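The helper the reviewer asks for might look like this (the name `txn_stmt_has_story` and the minimal struct are my guesses; only the two pointer fields from the patch are modeled):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct txm_story; /* opaque here; defined in txn.c in the patch */

/* Minimal stand-in for the real struct txn_stmt fields involved. */
struct txn_stmt {
    struct txm_story *add_story;
    struct txm_story *del_story;
};

/* One place that answers "does this statement participate in the
 * MVCC history?", instead of repeating the two-field check at
 * every call site. */
static inline bool
txn_stmt_has_story(const struct txn_stmt *stmt)
{
    return stmt->add_story != NULL || stmt->del_story != NULL;
}
```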
> + txm_history_prepare_stmt(stmt);
> + }
> + return 0;
> +}
> +
> +static void
> +memtx_engine_commit(struct engine *engine, struct txn *txn)
> +{
> + (void)engine;
> + struct txn_stmt *stmt;
> + stailq_foreach_entry(stmt, &txn->stmts, next) {
> + if (stmt->add_story != NULL || stmt->del_story != NULL)
> + {
2. Opening brace should be on the same line as 'if'. The same please
find and fix in all the other places.
> + ssize_t bsize = txm_history_release_stmt(stmt);
> + assert(stmt->space->engine == engine);
> + struct memtx_space *mspace =
> + (struct memtx_space *)stmt->space;
> + mspace->bsize += bsize;
> + }
> + }
> +}
> +
> static void
> memtx_engine_rollback_statement(struct engine *engine, struct txn *txn,
> struct txn_stmt *stmt)
> @@ -348,13 +377,17 @@ memtx_engine_rollback_statement(struct engine *engine, struct txn *txn,
> if (stmt->old_tuple == NULL && stmt->new_tuple == NULL)
> return;
> struct space *space = stmt->space;
> - struct memtx_space *memtx_space = (struct memtx_space *)space;
> + struct memtx_space* memtx_space = (struct memtx_space*) space;
3. Unnecessary diff.
> uint32_t index_count;
>
> /* Only roll back the changes if they were made. */
> if (stmt->engine_savepoint == NULL)
> return;
>
> + if (memtx_space->replace == memtx_space_replace_all_keys &&
4. Why is it done only in case of memtx_space_replace_all_keys?
> + (stmt->add_story != NULL || stmt->del_story != NULL))
> + return txm_history_unlink_stmt(stmt);
> +
> if (memtx_space->replace == memtx_space_replace_all_keys)
> index_count = space->index_count;
> else if (memtx_space->replace == memtx_space_replace_primary_key)
> @@ -362,12 +395,14 @@ memtx_engine_rollback_statement(struct engine *engine, struct txn *txn,
> else
> panic("transaction rolled back during snapshot recovery");
>
> - for (uint32_t i = 0; i < index_count; i++) {
> - struct tuple *unused;
> - struct index *index = space->index[i];
> + for (uint32_t i = 0; i < index_count; i++)
> + {
> + struct tuple* unused;
> + struct index* index = space->index[i];
5. Unnecessary changes. Also not correct. We use 'type *' instead of 'type*'.
We put '{' on the same line as the loop, not on a separate line. The same below
and in all the other places.
> /* Rollback must not fail. */
> if (index_replace(index, stmt->new_tuple, stmt->old_tuple,
> - DUP_INSERT, &unused) != 0) {
> + DUP_INSERT, &unused) != 0)
> + {
> diag_log();
> unreachable();
> panic("failed to rollback change");
> diff --git a/src/box/memtx_space.c b/src/box/memtx_space.c
> index 5820c40..28552a9 100644
> --- a/src/box/memtx_space.c
> +++ b/src/box/memtx_space.c
> @@ -260,6 +260,18 @@ memtx_space_replace_all_keys(struct space *space, struct tuple *old_tuple,
> if (pk == NULL)
> return -1;
> assert(pk->def->opts.is_unique);
> +
> + struct txn *txn = in_txn();
> + struct txn_stmt *stmt = txn == NULL ? NULL : txn_current_stmt(txn);
> + if (stmt != NULL) {
> + return txm_history_link_stmt(txn_current_stmt(txn),
6. Why do you call txn_current_stmt(txn) again? You already have its result
in the stmt variable.
> + old_tuple, new_tuple, mode,
> + result);
> + } else {
> + /** Ephemeral space */
> + assert(space->def->id == 0);
> + }
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Tarantool-patches] [PATCH 13/16] tx: introduce txm_story
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 13/16] tx: introduce txm_story Aleksandr Lyapunov
2020-07-12 17:14 ` Vladislav Shpilevoy
@ 2020-07-14 23:46 ` Vladislav Shpilevoy
2020-07-15 8:11 ` Aleksandr Lyapunov
1 sibling, 1 reply; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-14 23:46 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
Thanks for the patch!
See 3 comments below.
On 08.07.2020 17:14, Aleksandr Lyapunov wrote:
> ---
> src/box/txn.c | 725 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
> src/box/txn.h | 127 ++++++++++
> 2 files changed, 851 insertions(+), 1 deletion(-)
>
> diff --git a/src/box/txn.c b/src/box/txn.c
> index b9f1737..78c542f 100644
> --- a/src/box/txn.c
> +++ b/src/box/txn.c
> @@ -1095,4 +1141,681 @@ txm_cause_conflict(struct txn *wreaker, struct txn *victim)
> rlist_add(&wreaker->conflict_list, &tracker->in_conflict_list);
> rlist_add(&wreaker->conflicted_by_list, &tracker->in_conflicted_by_list);
> return 0;
> -}
> \ No newline at end of file
> +}
> +
> +struct txm_story *
> +txm_story_new(struct tuple *tuple, uint32_t index_count)
> +{
> + assert(!tuple->is_dirty);
> + assert(index_count < BOX_INDEX_MAX);
> + struct mempool *pool = &txm.txm_story_pool[index_count];
> + struct txm_story *story = (struct txm_story *)mempool_alloc(pool);
> + if (story == NULL) {
> + size_t item_size = sizeof(struct txm_story) +
> + index_count * sizeof(struct txm_story_link);
> + diag_set(OutOfMemory, item_size,
> + "tx_manager", "tx story");
> + return story;
1. What if the index count of the space changes in a concurrent
DDL transaction?
> + }
> + story->tuple = tuple;
> +
> + const struct txm_story **put_story = (const struct txm_story **)&story;
> + struct txm_story **empty = NULL;
> + mh_int_t pos = mh_history_put(txm.history, put_story, &empty, 0);
> + if (pos == mh_end(txm.history)) {
> + mempool_free(pool, story);
> + diag_set(OutOfMemory, pos + 1,
> + "tx_manager", "tx history hash table");
> + return NULL;
> + }
> + tuple->is_dirty = true;
> + tuple_ref(tuple);
> +
> + story->index_count = index_count;
> + story->add_stmt = NULL;
> + story->add_psn = 0;
> + story->del_stmt = NULL;
> + story->del_psn = 0;
> + rlist_create(&story->reader_list);
> + rlist_add(&txm.all_stories, &story->in_all_stories);
> + memset(story->link, 0, sizeof(story->link[0]) * index_count);
> + return story;
> +}
> +
> +static struct txm_story *
> +txm_story_new_add_stmt(struct tuple *tuple, struct txn_stmt *stmt)
2. It would probably be better to use the existing terms: old_tuple and new_tuple,
so the functions would be txm_story_add_new_stmt and txm_story_add_old_stmt. The
same in all the other places: 'add_stmt' -> 'new_stmt', 'del_stmt' -> 'old_stmt'.
> +{
> + struct txm_story *res = txm_story_new(tuple, stmt->space->index_count);
> + if (res == NULL)
> + return NULL;
> + res->add_stmt = stmt;
> + assert(stmt->add_story == NULL);
> + stmt->add_story = res;
> + return res;
> +}
> +
> +static struct txm_story *
> +txm_story_new_del_stmt(struct tuple *tuple, struct txn_stmt *stmt)
> +{
> + struct txm_story *res = txm_story_new(tuple, stmt->space->index_count);
> + if (res == NULL)
> + return NULL;
> + res->del_stmt = stmt;
> + assert(stmt->del_story == NULL);
> + stmt->del_story = res;
> + return res;
> +}
> +
> +void
> +txm_history_prepare_stmt(struct txn_stmt *stmt)
> +{
> + assert(stmt->txn->psn != 0);
> +
> + /* Move story to the past to prepared stories. */
> +
> + struct txm_story *story = stmt->add_story;
> + uint32_t index_count = story == NULL ? 0 : story->index_count;
> + for (uint32_t i = 0; i < index_count; ) {
> + if (!story->link[i].older.is_story) {
> + /* tuple is old. */
> + i++;
> + continue;
> + }
> + struct txm_story *old_story =
> + story->link[i].older.story;
> + if (old_story->del_psn != 0) {
> + /* if psn is set, the change is prepared. */
> + i++;
> + continue;
> + }
> + if (old_story->add_psn != 0) {
> + /* if psn is set, the change is prepared. */
> + i++;
> + continue;
> + }
> +
> + if (old_story->add_stmt == NULL) {
> + /* ancient */
> + i++;
> + continue;
> + }
> + if (old_story->add_stmt->txn == stmt->txn) {
> + /* added by us. */
> + i++;
> + continue;
> + }
> +
> + if (old_story->add_stmt->preserve_old_tuple || i != 0)
> + old_story->add_stmt->txn->status = TXN_CONFLICTED;
> +
> + /* Swap story and old story. */
> + struct txm_story_link *link = &story->link[i];
> + if (link->newer_story == NULL) {
> + /* we have to replace the tuple in index. */
> + struct tuple *unused;
> + struct index *index = stmt->space->index[i];
> + if (index_replace(index, story->tuple,
> + old_story->tuple,
> + DUP_INSERT, &unused) != 0) {
> + diag_log();
> + unreachable();
> + panic("failed to rollback change");
> + }
> + } else {
> + struct txm_story *newer = link->newer_story;
> + assert(newer->link[i].older.is_story);
> + assert(newer->link[i].older.story == story);
> + txm_story_unlink(newer, i);
> + txm_story_link_story(newer, old_story, i);
> + }
> +
> + txm_story_unlink(story, i);
> +
> + txm_story_unlink(story, i);
> + if (old_story->link[i].older.is_story) {
> + struct txm_story *to =
> + old_story->link[i].older.story;
> + txm_story_unlink(old_story, i);
> + txm_story_link_story(story, to, i);
> + } else {
> + struct tuple *to =
> + old_story->link[i].older.tuple;
> + txm_story_unlink(old_story, i);
> + txm_story_link_tuple(story, to, i);
> + }
> +
> + txm_story_link_story(old_story, story, i);
> +
> + if (i == 0) {
> + assert(stmt->del_story == old_story);
> + assert(story->link[0].older.is_story ||
> + story->link[0].older.tuple == NULL);
> +
> + struct txn_stmt *dels = old_story->del_stmt;
> + assert(dels != NULL);
> + do {
> + if (dels->txn != stmt->txn)
> + dels->txn->status = TXN_CONFLICTED;
> + dels->del_story = NULL;
> + struct txn_stmt *next = dels->next_in_del_list;
> + dels->next_in_del_list = NULL;
> + dels = next;
> + } while (dels != NULL);
> + old_story->del_stmt = NULL;
> +
> + if (story->link[0].older.is_story) {
> + struct txm_story *oldest_story =
> + story->link[0].older.story;
> + dels = oldest_story->del_stmt;
> + while (dels != NULL) {
> + assert(dels->txn != stmt->txn);
> + dels->del_story = NULL;
> + struct txn_stmt *next =
> + dels->next_in_del_list;
> + dels->next_in_del_list = NULL;
> + dels = next;
> + }
> + oldest_story->del_stmt = stmt;
> + stmt->del_story = oldest_story;
> + }
> + }
> + }
> + if (stmt->add_story != NULL)
3. According to a check at the beginning of the function, stmt can
be NULL. But then this will crash.
> + stmt->add_story->add_psn = stmt->txn->psn;
> +
> + if (stmt->del_story != NULL)
> + stmt->del_story->del_psn = stmt->txn->psn;
> +}
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Tarantool-patches] [PATCH 07/16] tx: save preserve old tuple flag in txn_stmt
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 07/16] tx: save preserve old tuple flag in txn_stmt Aleksandr Lyapunov
2020-07-12 17:14 ` Vladislav Shpilevoy
@ 2020-07-14 23:46 ` Vladislav Shpilevoy
2020-07-15 7:53 ` Aleksandr Lyapunov
1 sibling, 1 reply; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-14 23:46 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
Thanks for the patch!
On 08.07.2020 17:14, Aleksandr Lyapunov wrote:
> ---
> src/box/memtx_space.c | 17 +++++++++++++++++
> src/box/txn.c | 3 +++
> src/box/txn.h | 9 +++++++++
> 3 files changed, 29 insertions(+)
>
> diff --git a/src/box/memtx_space.c b/src/box/memtx_space.c
> index 8755920..5820c40 100644
> --- a/src/box/memtx_space.c
> +++ b/src/box/memtx_space.c
> @@ -316,6 +316,10 @@ memtx_space_execute_replace(struct space *space, struct txn *txn,
> if (stmt->new_tuple == NULL)
> return -1;
> tuple_ref(stmt->new_tuple);
> +
> + if (mode == DUP_INSERT)
> + stmt->preserve_old_tuple = true;
Why do you preserve the old tuple in case of insert? There is no
old tuple in case of insert. Otherwise it would fail, no?
> +
> if (memtx_space->replace(space, NULL, stmt->new_tuple,
> mode, &stmt->old_tuple) != 0)
> return -1;
* Re: [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
` (16 preceding siblings ...)
2020-07-12 17:19 ` [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Vladislav Shpilevoy
@ 2020-07-14 23:47 ` Vladislav Shpilevoy
2020-07-15 12:25 ` Aleksandr Lyapunov
17 siblings, 1 reply; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-14 23:47 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
A lot of new expenses come from struct txn becoming huge.
It could be mitigated if you introduced a new object
which would store the psn and all the new rlists, and
which would be created only when needed. However, that may be
wrong if you enable the new manager by default while
keeping the on_yield error.
I sent some comments, but I didn't review the patchset whole
yet, it is huge.
* Re: [Tarantool-patches] [PATCH 14/16] tx: indexes
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 14/16] tx: indexes Aleksandr Lyapunov
@ 2020-07-14 23:50 ` Vladislav Shpilevoy
2020-07-15 10:02 ` Aleksandr Lyapunov
2020-07-15 10:19 ` Aleksandr Lyapunov
0 siblings, 2 replies; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-14 23:50 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
Thanks for the patch!
See 11 comments below.
> diff --git a/src/box/memtx_bitset.c b/src/box/memtx_bitset.c
> index 67eaf6f..f3ab74f 100644
> --- a/src/box/memtx_bitset.c
> +++ b/src/box/memtx_bitset.c
> @@ -198,19 +199,26 @@ bitset_index_iterator_next(struct iterator *iterator, struct tuple **ret)
> assert(iterator->free == bitset_index_iterator_free);
> struct bitset_index_iterator *it = bitset_index_iterator(iterator);
>
> - size_t value = tt_bitset_iterator_next(&it->bitset_it);
> - if (value == SIZE_MAX) {
> - *ret = NULL;
> - return 0;
> - }
> -
> + do {
> + size_t value = tt_bitset_iterator_next(&it->bitset_it);
> + if (value == SIZE_MAX) {
> + *ret = NULL;
> + return 0;
> + }
> #ifndef OLD_GOOD_BITSET
> - struct memtx_bitset_index *index =
> - (struct memtx_bitset_index *)iterator->index;
> - *ret = memtx_bitset_index_value_to_tuple(index, value);
> + struct memtx_bitset_index *index =
> + (struct memtx_bitset_index *)iterator->index;
> + struct tuple *tuple =
> + memtx_bitset_index_value_to_tuple(index, value);
> #else /* #ifndef OLD_GOOD_BITSET */
> - *ret = value_to_tuple(value);
> + struct tuple *tuple =value_to_tuple(value);
1. Missing whitespace after =.
> #endif /* #ifndef OLD_GOOD_BITSET */
> + uint32_t iid = iterator->index->def->iid;
> + struct txn *txn = in_txn();
> + bool is_rw = txn != NULL;
> + *ret = txm_tuple_clarify(txn, tuple, iid, 0, is_rw);
2. Some of these values don't need to be loaded inside the cycle. They don't
change.
* in_txn() can be called out of the cycle just once;
* is_rw can be calculated only once;
* iid does not change;
* struct memtx_bitset_index *index does not change;
The same applies to rtree changes.
> + } while (*ret == NULL);
> +
> return 0;
> }
>
> diff --git a/src/box/memtx_hash.c b/src/box/memtx_hash.c
3. On the branch I see a 'txm_snapshot_cleanser' structure
in this file. But not in the email. Can't review it. Why is
it called 'cleanser' instead of 'cleaner'? What is it doing?
> index cdd531c..b3ae60c 100644
> --- a/src/box/memtx_hash.c
> +++ b/src/box/memtx_hash.c
> @@ -128,6 +129,31 @@ hash_iterator_gt(struct iterator *ptr, struct tuple **ret)
> return 0;
> }
>
> +#define WRAP_ITERATOR_METHOD(name) \
> +static int \
> +name(struct iterator *iterator, struct tuple **ret) \
> +{ \
> + struct txn *txn = in_txn(); \
> + bool is_rw = txn != NULL; \
> + uint32_t iid = iterator->index->def->iid; \
> + bool first = true; \
> + do { \
> + int rc = first ? name##_base(iterator, ret) \
> + : hash_iterator_ge_base(iterator, ret); \
4. Seems like unnecessary branching. If you know you will specially
handle only the first iteration, then why not do it before the
cycle? And eliminate 'first' + the '?' branch. Also use the prefix 'is_'
for flag names. Or 'has_'/'does_'/etc. The same for all the other new
flags, including 'preserve_old_tuple'.
> + if (rc != 0 || *ret == NULL) \
> + return rc; \
> + first = false; \
> + *ret = txm_tuple_clarify(txn, *ret, iid, 0, is_rw); \
> + } while (*ret == NULL); \
> + return 0; \
> +} \
5. Please, use tabs for alignment. In other places too.
> +struct forgot_to_add_semicolon
6. What is this?
> +
> +WRAP_ITERATOR_METHOD(hash_iterator_ge);
> +WRAP_ITERATOR_METHOD(hash_iterator_gt);
> +
> +#undef WRAP_ITERATOR_METHOD
> +
> @@ -136,12 +162,25 @@ hash_iterator_eq_next(MAYBE_UNUSED struct iterator *it, struct tuple **ret)
> }
>
> static int
> -hash_iterator_eq(struct iterator *it, struct tuple **ret)
> +hash_iterator_eq(struct iterator *ptr, struct tuple **ret)
> {
> - it->next = hash_iterator_eq_next;
> - return hash_iterator_ge(it, ret);
> + ptr->next = hash_iterator_eq_next;
> + assert(ptr->free == hash_iterator_free);
> + struct hash_iterator *it = (struct hash_iterator *) ptr;
> + struct memtx_hash_index *index = (struct memtx_hash_index *)ptr->index;
> + struct tuple **res = light_index_iterator_get_and_next(&index->hash_table,
> + &it->iterator);
7. Why did you remove the hash_iterator_ge() call? You still can use
it here, with the new name hash_iterator_ge_base().
> + if (res == NULL) {
> + *ret = NULL;
> + return 0;
> + }
> + struct txn *txn = in_txn();
> + bool is_rw = txn != NULL;
> + *ret = txm_tuple_clarify(txn, *res, ptr->index->def->iid, 0, is_rw);
8. Why isn't it a cycle?
9. Why can't 'txn != NULL' be done inside txm_tuple_clarify()? It
takes the txn pointer anyway, and you calculate 'is_rw' everywhere
before the call.
> + return 0;
> }
>
> +
10. Unnecessary new line.
> /* }}} */
> diff --git a/src/box/memtx_rtree.c b/src/box/memtx_rtree.c
> index 612fcb2..992a422 100644
> --- a/src/box/memtx_rtree.c
> +++ b/src/box/memtx_rtree.c
> @@ -304,6 +305,45 @@ tree_iterator_prev_equal(struct iterator *iterator, struct tuple **ret)
> return 0;
> }
>
> +#define WRAP_ITERATOR_METHOD(name) \
> +static int \
> +name(struct iterator *iterator, struct tuple **ret) \
> +{ \
> + struct memtx_tree *tree = \
> + &((struct memtx_tree_index *)iterator->index)->tree; \
> + struct tree_iterator *it = tree_iterator(iterator); \
> + struct memtx_tree_iterator *ti = &it->tree_iterator; \
> + uint32_t iid = iterator->index->def->iid; \
> + bool is_multikey = iterator->index->def->key_def->is_multikey; \
11. All these dereferences are going to cost a lot, even when
there are no concurrent txns. Can they be done in a lazy mode?
Only if the found tuple is dirty. The same applies to all the
other places.
> + struct txn *txn = in_txn(); \
> + bool is_rw = txn != NULL; \
> + do { \
> + int rc = name##_base(iterator, ret); \
> + if (rc != 0 || *ret == NULL) \
> + return rc; \
> + uint32_t mk_index = 0; \
> + if (is_multikey) { \
> + struct memtx_tree_data *check = \
> + memtx_tree_iterator_get_elem(tree, ti); \
> + assert(check != NULL); \
> + mk_index = check->hint; \
> + } \
> + *ret = txm_tuple_clarify(txn, *ret, iid, mk_index, is_rw); \
> + } while (*ret == NULL); \
> + tuple_unref(it->current.tuple); \
> + it->current.tuple = *ret; \
> + tuple_ref(it->current.tuple); \
> + return 0; \
> +} \
> +struct forgot_to_add_semicolon
> +
> +WRAP_ITERATOR_METHOD(tree_iterator_next);
> +WRAP_ITERATOR_METHOD(tree_iterator_prev);
> +WRAP_ITERATOR_METHOD(tree_iterator_next_equal);
> +WRAP_ITERATOR_METHOD(tree_iterator_prev_equal);
> +
> +#undef WRAP_ITERATOR_METHOD
> +
* Re: [Tarantool-patches] [PATCH 11/16] tx: introduce conflict tracker
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 11/16] tx: introduce conflict tracker Aleksandr Lyapunov
2020-07-12 17:15 ` Vladislav Shpilevoy
@ 2020-07-14 23:51 ` Vladislav Shpilevoy
2020-07-15 7:57 ` Aleksandr Lyapunov
1 sibling, 1 reply; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-14 23:51 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
Thanks for the patch!
See 1 comment below.
On 08.07.2020 17:14, Aleksandr Lyapunov wrote:
> ---
> src/box/txn.c | 98 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> src/box/txn.h | 5 +++
> 2 files changed, 103 insertions(+)
>
> diff --git a/src/box/txn.c b/src/box/txn.c
> index 2bd44a3..b9f1737 100644
> --- a/src/box/txn.c
> +++ b/src/box/txn.c
> @@ -998,3 +1053,46 @@ void
> tx_manager_free()
> {
> }
> +
> +int
> +txm_cause_conflict(struct txn *wreaker, struct txn *victim)
> +{
> + struct tx_conflict_tracker *tracker = NULL;
> + struct rlist *r1 = wreaker->conflict_list.next;
> + struct rlist *r2 = wreaker->conflicted_by_list.next;
> + while (r1 != &wreaker->conflict_list &&
> + r2 != &wreaker->conflicted_by_list) {
> + tracker = rlist_entry(r1, struct tx_conflict_tracker,
> + in_conflict_list);
> + if (tracker->wreaker == wreaker && tracker->victim == victim)
> + break;
> + tracker = rlist_entry(r2, struct tx_conflict_tracker,
> + in_conflicted_by_list);
> + if (tracker->wreaker == wreaker && tracker->victim == victim)
> + break;
> + tracker = NULL;
> + r1 = r1->next;
> + r2 = r2->next;
> + }
> + if (tracker != NULL) {
> + /* Move to the beginning of a list
> + * for a case of subsequent lookups */
> + rlist_del(&tracker->in_conflict_list);
> + rlist_del(&tracker->in_conflicted_by_list);
> + } else {
> + size_t size;
> + tracker = region_alloc_object(&victim->region,
> + struct tx_conflict_tracker,
> + &size);
> + if (tracker == NULL) {
> + diag_set(OutOfMemory, size, "tx region",
> + "conflict_tracker");
> + return -1;
> + }
> + }
> + tracker->wreaker = wreaker;
> + tracker->victim = victim;
Why don't you save anything into the victim? How will it be
removed from the lists if it commits earlier than 'wreaker'?
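A hedged sketch of what linking the tracker into both transactions' lists could look like, using singly-linked mock lists instead of the real rlist (all names here — mock_txn, tracker, track_conflict — are hypothetical, not the patch's code):

```c
#include <stddef.h>

struct tracker;

/* Mock transaction with heads of two intrusive lists. */
struct mock_txn {
	struct tracker *conflicts;      /* trackers where I am the wreaker */
	struct tracker *conflicted_by;  /* trackers where I am the victim  */
};

struct tracker {
	struct mock_txn *wreaker;
	struct mock_txn *victim;
	struct tracker *next_conflict;       /* link in wreaker's list */
	struct tracker *next_conflicted_by;  /* link in victim's list  */
};

/*
 * Link the tracker into BOTH lists, so whichever transaction
 * finishes first can find and unlink it - the point of the
 * review comment above.
 */
static void
track_conflict(struct tracker *t, struct mock_txn *wreaker,
	       struct mock_txn *victim)
{
	t->wreaker = wreaker;
	t->victim = victim;
	t->next_conflict = wreaker->conflicts;
	wreaker->conflicts = t;
	t->next_conflicted_by = victim->conflicted_by;
	victim->conflicted_by = t;
}
```

With this shape, the victim's commit can walk its own 'conflicted_by' list and drop every tracker without ever touching the wreaker's state.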
* Re: [Tarantool-patches] [PATCH 07/16] tx: save preserve old tuple flag in txn_stmt
2020-07-14 23:46 ` Vladislav Shpilevoy
@ 2020-07-15 7:53 ` Aleksandr Lyapunov
0 siblings, 0 replies; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-15 7:53 UTC (permalink / raw)
To: Vladislav Shpilevoy, tarantool-patches
Hi, thanks for review!
On 15.07.2020 02:46, Vladislav Shpilevoy wrote:
> +
>> + if (mode == DUP_INSERT)
>> + stmt->preserve_old_tuple = true;
> Why do you preserve the old tuple in case of insert? There is no
> old tuple in case of insert. Otherwise it would fail, no?
This means that old_tuple (old value with the same key as new_tuple)
must remain NULL up to the moment when this TX is committed.
I made a comment about it.
>
>> +
>> if (memtx_space->replace(space, NULL, stmt->new_tuple,
>> mode, &stmt->old_tuple) != 0)
>> return -1;
* Re: [Tarantool-patches] [PATCH 11/16] tx: introduce conflict tracker
2020-07-14 23:51 ` Vladislav Shpilevoy
@ 2020-07-15 7:57 ` Aleksandr Lyapunov
0 siblings, 0 replies; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-15 7:57 UTC (permalink / raw)
To: Vladislav Shpilevoy, tarantool-patches
On 15.07.2020 02:51, Vladislav Shpilevoy wrote:
>
>> + tracker->wreaker = wreaker;
>> + tracker->victim = victim;
> Why don't you save anything into the victim? How will it be
> removed from the lists if it will commit earlier than 'wreaker'?
>
Good catch, it's a bug.
I have actually found it by myself and fixed it, but thanks nevertheless.
* Re: [Tarantool-patches] [PATCH 13/16] tx: introduce txm_story
2020-07-14 23:46 ` Vladislav Shpilevoy
@ 2020-07-15 8:11 ` Aleksandr Lyapunov
2020-07-15 22:02 ` Vladislav Shpilevoy
0 siblings, 1 reply; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-15 8:11 UTC (permalink / raw)
To: Vladislav Shpilevoy, tarantool-patches
Hi, thank for review!
On 15.07.2020 02:46, Vladislav Shpilevoy wrote:
> Thanks for the patch!
>
> See 3 comments below.
>
>
> 1. What if index count of the space will change in a concurrect
> DDL transaction?
That's a good question. I know what it should be: DDL must create
a new space structure for the new transaction while leaving the old
space structure for old transactions. But I'm sure it does not
work that way now. I'll write a test and file a ticket.
>
>> +}
>> +
>> +static struct txm_story *
>> +txm_story_new_add_stmt(struct tuple *tuple, struct txn_stmt *stmt)
> 2. Probably would be better to use the existing terms: old_tuple and new_tuple.
> So the functions would be txm_story_add_new_stmt and txm_story_add_old_stmt. The
> same in all the other places. 'add_stmt' -> 'new_stmt'. 'del_stmt' -> 'old_stmt'.
I thought about it. I fear that in terms of txm_story, when we speak about
the lifetime of a tuple, it would be weird. add_stmt is the statement that
added the tuple. But what is new_stmt? new_psn?
>
>> +{
>> + struct txm_story *res = txm_story_new(tuple, stmt->space->index_count);
>> + if (res == NULL)
>> + return NULL;
>> + res->add_stmt = stmt;
>> + assert(stmt->add_story == NULL);
>> + stmt->add_story = res;
>> + return res;
>> +}
>> +
>> +static struct txm_story *
>> +txm_story_new_del_stmt(struct tuple *tuple, struct txn_stmt *stmt)
>> +{
>> + struct txm_story *res = txm_story_new(tuple, stmt->space->index_count);
>> + if (res == NULL)
>> + return NULL;
>> + res->del_stmt = stmt;
>> + assert(stmt->del_story == NULL);
>> + stmt->del_story = res;
>> + return res;
>> +}
>> +
>> +void
>> +txm_history_prepare_stmt(struct txn_stmt *stmt)
>> +{
>> + assert(stmt->txn->psn != 0);
>> +
>> + /* Move story to the past to prepared stories. */
>> +
>> + struct txm_story *story = stmt->add_story;
>> + uint32_t index_count = story == NULL ? 0 : story->index_count;
>> + for (uint32_t i = 0; i < index_count; ) {
>> ...
>> ...
>> ...
>> + }
>> + if (stmt->add_story != NULL)
> 3. According to a check in the beginning of the function stmt can
> be NULL. But then this will crash.
No, index_count is initialized as 0 in that case and we will not enter
the loop.
* Re: [Tarantool-patches] [PATCH 14/16] tx: indexes
2020-07-14 23:50 ` Vladislav Shpilevoy
@ 2020-07-15 10:02 ` Aleksandr Lyapunov
2020-07-15 22:08 ` Vladislav Shpilevoy
2020-07-15 10:19 ` Aleksandr Lyapunov
1 sibling, 1 reply; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-15 10:02 UTC (permalink / raw)
To: Vladislav Shpilevoy, tarantool-patches
Hi, thanks for your review!
On 15.07.2020 02:50, Vladislav Shpilevoy wrote:
>
> 1. Missing whitespace after =.
fixed
>
>> #endif /* #ifndef OLD_GOOD_BITSET */
>> + uint32_t iid = iterator->index->def->iid;
>> + struct txn *txn = in_txn();
>> + bool is_rw = txn != NULL;
>> + *ret = txm_tuple_clarify(txn, tuple, iid, 0, is_rw);
> 2. Some of these values you don't need to load in the cycle. They don't
> change.
>
> * in_txn() can be called out of the cycle just once;
> * is_rw can be calculated only once;
> * iid does not change;
> * struct memtx_bitset_index *index does not change;
>
> The same applies to rtree changes.
Actually, it is not a problem for modern compilers to avoid doing the
same thing several times.
For example https://godbolt.org/z/9zvnn5
So it's not a performance issue.
I made those variables aliases for readability.
I could move them out of the loop if you insist, but I fear that it will
become less readable.
>
> 3. On the branch I see a 'txm_snapshot_cleanser' structure
> in this file. But not in the email. Can't review it. Why is
> it called 'cleanser' instead of 'cleaner'? What is it doing?
Shame on me, maybe I forgot to add in. In a new version it's there, with
comments.
btw renamed is as 'cleaner'
> \
>> + do { \
>> + int rc = first ? name##_base(iterator, ret) \
>> + : hash_iterator_ge_base(iterator, ret); \
> 4. Seems like unnecessary branching. If you know you will specially
> handle only the first iteration, then why no to make it before the
> cycle? And eliminate 'first' + '?' branch. Also use prefix 'is_' for
> flag names. Or 'has_'/'does_'/etc. The same for all the other new
> flags, including 'preserve_old_tuple'.
names - ok, but again this is work for the compiler https://godbolt.org/z/vbEeEP
I could change it if you insist, but the compiled code will be nearly the same.
>
>> + if (rc != 0 || *ret == NULL) \
>> + return rc; \
>> + first = false; \
>> + *ret = txm_tuple_clarify(txn, *ret, iid, 0, is_rw); \
>> + } while (*ret == NULL); \
>> + return 0; \
>> +} \
> 5. Please, use tabs for alignment. In other places too.
done
>
>> +struct forgot_to_add_semicolon
> 6. What is this?
That's a standard guard that prohibits usage of the macro without a
semicolon at the end of the line.
If somebody forgets to add ';', he will get an error message
mentioning 'forgot_to_add_semicolon'.
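For illustration, a minimal self-contained version of the guard (DEFINE_TWICE is a made-up macro, not from the patch): ending the macro expansion with an undefined struct declaration forces the caller to supply the ';' that completes it.

```c
/*
 * The macro expands to a function definition followed by the start of
 * a struct declaration. The invocation must be followed by ';' to form
 * valid C; otherwise the compile error mentions
 * 'forgot_to_add_semicolon', pointing straight at the mistake.
 */
#define DEFINE_TWICE(name)		\
static int				\
name(int x)				\
{					\
	return x * 2;			\
}					\
struct forgot_to_add_semicolon

DEFINE_TWICE(twice);	/* compiles: the ';' completes the declaration */
```

Dropping the trailing ';' after DEFINE_TWICE(twice) would make the next top-level declaration part of the struct declaration and fail to compile.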
>
>> +
>> 7. Why did you remove the hash_iterator_ge() call? You still can use
>> it here, with the new name hash_iterator_ge_base().
fixed
>
>> + bool is_rw = txn != NULL;
>> + *ret = txm_tuple_clarify(txn, *res, ptr->index->def->iid, 0, is_rw);
> 8. Why isn't it a cycle?
because there can be only one tuple with the desired key in the hash table.
>
> 9. Why 'txn != NULL' can't be done inside txm_tuple_clarify()? It
> takes txn pointer anyway, and you calculate 'is_rw' everywhere
> before the call.
Historical, will fix it.
>
>> + return 0;
>> }
>>
>> +
> 10. Unnecessary new line.
ok
>
>> + struct memtx_tree_iterator *ti = &it->tree_iterator; \
>> + uint32_t iid = iterator->index->def->iid; \
>> + bool is_multikey = iterator->index->def->key_def->is_multikey; \
> 11. All these dereferences are going to cost a lot, even when
> there are no concurrent txns. Can they be done in a lazy mode?
> Only if the found tuple is dirty. The same applies to all the
> other places.
A compiler should surely handle it, since ..._clarify() is a static
inline function.
Even the processor would handle it: it also reorders instructions, but usually
it has nothing to do while the tuple is being fetched from memory, and I guess
it will try to do something even outside the branch.
* Re: [Tarantool-patches] [PATCH 14/16] tx: indexes
2020-07-14 23:50 ` Vladislav Shpilevoy
2020-07-15 10:02 ` Aleksandr Lyapunov
@ 2020-07-15 10:19 ` Aleksandr Lyapunov
1 sibling, 0 replies; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-15 10:19 UTC (permalink / raw)
To: Vladislav Shpilevoy, tarantool-patches
One moment:
On 15.07.2020 02:50, Vladislav Shpilevoy wrote:
>
> 9. Why 'txn != NULL' can't be done inside txm_tuple_clarify()? It
> takes txn pointer anyway, and you calculate 'is_rw' everywhere
> before the call.
There's one place where when it's not possible, in snapshot cleaner.
* Re: [Tarantool-patches] [PATCH 16/16] tx: use new tx manager in memtx
2020-07-14 23:45 ` Vladislav Shpilevoy
@ 2020-07-15 10:32 ` Aleksandr Lyapunov
2020-07-15 22:09 ` Vladislav Shpilevoy
0 siblings, 1 reply; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-15 10:32 UTC (permalink / raw)
To: Vladislav Shpilevoy, tarantool-patches
Hi, thanks again for the review, see my comments below
On 15.07.2020 02:45, Vladislav Shpilevoy wrote:
> +static int
>> +memtx_engine_prepare(struct engine *engine, struct txn *txn)
>> +{
>> + (void)engine;
>> + struct txn_stmt *stmt;
>> + stailq_foreach_entry(stmt, &txn->stmts, next) {
>> + if (stmt->add_story != NULL || stmt->del_story != NULL)
> 1. Seems like it is worth moving this check into a static inline
> function in the txn.h header. You use it a lot.
It's a virtual method, it can't be inlined.
>
>> + if (stmt->add_story != NULL || stmt->del_story != NULL)
>> + {
> 2. Opening brace should be on the same line as 'if'. The same please
> find and fix in all the other places.
Damn change of code style. I'll try to find more.
>
>> - struct memtx_space *memtx_space = (struct memtx_space *)space;
>> + struct memtx_space* memtx_space = (struct memtx_space*) space;
> 3. Unnecessary diff.
Fixed IDE settings.
> if (stmt->engine_savepoint == NULL)
> return;
>
> + if (memtx_space->replace == memtx_space_replace_all_keys &&
> 4. Why is it done only in case of memtx_space_replace_all_keys?
Useless check, removed
>
> - for (uint32_t i = 0; i < index_count; i++) {
> - struct tuple *unused;
> - struct index *index = space->index[i];
> + for (uint32_t i = 0; i < index_count; i++)
> + {
> + struct tuple* unused;
> + struct index* index = space->index[i];
> 5. Unnecessary changes. Also not correct. We use 'type *' instead of 'type*'.
> We put { on the same line as cycle, not on a separate line. The same below
> and in all the other places.
Actually, I didn't find the places below, but fixed it.
> + if (stmt != NULL) {
>> + return txm_history_link_stmt(txn_current_stmt(txn),
> 6. Why do you get txn_current_stmt(stmt) again? You already have it
> in stmt variable.
fixed
* Re: [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine
2020-07-14 23:47 ` Vladislav Shpilevoy
@ 2020-07-15 12:25 ` Aleksandr Lyapunov
2020-07-15 22:10 ` Vladislav Shpilevoy
0 siblings, 1 reply; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-15 12:25 UTC (permalink / raw)
To: Vladislav Shpilevoy, tarantool-patches
I hope it will be enabled by default.
Let's leave it for the performance part. As for me, I don't care
about memory, but initialization could be significantly longer.
I think I can refactor and improve that part.
On 15.07.2020 02:47, Vladislav Shpilevoy wrote:
> A lot of new expenses come from struct txn becoming huge.
> It could be mitigated if you would introduce a new object
> which would store struct txn, psn, all the new rlists. And
> which would be created only when needed. However it may be
> wrong if you will enable the new manager by default with
> keeping on_yield error.
>
> I sent some comments, but I didn't review the patchset whole
> yet, it is huge.
* Re: [Tarantool-patches] [PATCH 13/16] tx: introduce txm_story
2020-07-15 8:11 ` Aleksandr Lyapunov
@ 2020-07-15 22:02 ` Vladislav Shpilevoy
0 siblings, 0 replies; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-15 22:02 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
>>> +}
>>> +
>>> +static struct txm_story *
>>> +txm_story_new_add_stmt(struct tuple *tuple, struct txn_stmt *stmt)
>> 2. Probably would be better to use the existing terms: old_tuple and new_tuple.
>> So the functions would be txm_story_add_new_stmt and txm_story_add_old_stmt. The
>> same in all the other places. 'add_stmt' -> 'new_stmt'. 'del_stmt' -> 'old_stmt'.
> I thought about it. I fear in terms of txm_story, when we speak about
> lifetime of a tuple it would be weird. add_stmt - is the statement that
> added the tuple. But what is new_stmt? new_psn?
Ok, lets leave it as is.
>>
>>> +{
>>> + struct txm_story *res = txm_story_new(tuple, stmt->space->index_count);
>>> + if (res == NULL)
>>> + return NULL;
>>> + res->add_stmt = stmt;
>>> + assert(stmt->add_story == NULL);
>>> + stmt->add_story = res;
>>> + return res;
>>> +}
>>> +
>>> +static struct txm_story *
>>> +txm_story_new_del_stmt(struct tuple *tuple, struct txn_stmt *stmt)
>>> +{
>>> + struct txm_story *res = txm_story_new(tuple, stmt->space->index_count);
>>> + if (res == NULL)
>>> + return NULL;
>>> + res->del_stmt = stmt;
>>> + assert(stmt->del_story == NULL);
>>> + stmt->del_story = res;
>>> + return res;
>>> +}
>>> +
>>> +void
>>> +txm_history_prepare_stmt(struct txn_stmt *stmt)
>>> +{
>>> + assert(stmt->txn->psn != 0);
>>> +
>>> + /* Move story to the past to prepared stories. */
>>> +
>>> + struct txm_story *story = stmt->add_story;
>>> + uint32_t index_count = story == NULL ? 0 : story->index_count;
>>> + for (uint32_t i = 0; i < index_count; ) {
>>> ...
>>> ...
>>> ...
>>> + }
>>> + if (stmt->add_story != NULL)
>> 3. According to a check in the beginning of the function stmt can
>> be NULL. But then this will crash.
> No, index_count is initialized as 0 in that case and we will not enter the loop.
This check is out of the loop. So you will stumble into it.
* Re: [Tarantool-patches] [PATCH 14/16] tx: indexes
2020-07-15 10:02 ` Aleksandr Lyapunov
@ 2020-07-15 22:08 ` Vladislav Shpilevoy
0 siblings, 0 replies; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-15 22:08 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
>>> #endif /* #ifndef OLD_GOOD_BITSET */
>>> + uint32_t iid = iterator->index->def->iid;
>>> + struct txn *txn = in_txn();
>>> + bool is_rw = txn != NULL;
>>> + *ret = txm_tuple_clarify(txn, tuple, iid, 0, is_rw);
>> 2. Some of these values you don't need to load in the cycle. They don't
>> change.
>>
>> * in_txn() can be called out of the cycle just once;
>> * is_rw can be calculated only once;
>> * iid does not change;
>> * struct memtx_bitset_index *index does not change;
>>
>> The same applies to rtree changes.
> Actually, it is not a problem for modern compilers to avoid doing the
> same thing several times.
> For example https://godbolt.org/z/9zvnn5
> So it's not a performance issue.
> I made those variables aliases for readability.
> I could move them out of the loop if you insist, but I fear that it will become less readable.
I don't see how it can become less readable. You just move a few lines out
of the loop.
Talking of the compiler - I am sure it can handle simple cases like you've
shown in the online compiler. But still I wouldn't rely on it so much. So
I would still advise moving the reads out of the cycle.
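As a sketch of the hoisting being asked for, here is a toy loop (clarify(), next_visible(), and the 'values' array are hypothetical stand-ins, not the real memtx iterator code): the transaction-invariant values are read once before the cycle and reused on every iteration.

```c
#include <stddef.h>

/*
 * Mock of txm_tuple_clarify(): keeps only even values; odd numbers
 * stand in for tuples invisible to the transaction. txn_id plays the
 * role of the invariant txn / iid / is_rw arguments.
 */
static int
clarify(int txn_id, int value)
{
	(void)txn_id;
	return value % 2 == 0 ? value : -1;
}

/* Iterate until a visible value is found or the "index" is exhausted. */
static int
next_visible(const int *values, size_t count, size_t *pos)
{
	/*
	 * Invariants are read once, before the loop, as the review
	 * suggests - not reloaded on every iteration.
	 */
	int txn_id = 7;	/* stands in for in_txn(), iid, is_rw, ... */
	while (*pos < count) {
		int value = values[(*pos)++];
		int res = clarify(txn_id, value);
		if (res >= 0)
			return res;
	}
	return -1;
}
```

The shape is the same as the wrapped iterator methods: a loop that retries the base iterator until clarify() yields a visible tuple, with all loop-invariant reads hoisted above it.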
>>> + do { \
>>> + int rc = first ? name##_base(iterator, ret) \
>>> + : hash_iterator_ge_base(iterator, ret); \
>> 4. Seems like unnecessary branching. If you know you will specially
>> handle only the first iteration, then why no to make it before the
>> cycle? And eliminate 'first' + '?' branch. Also use prefix 'is_' for
>> flag names. Or 'has_'/'does_'/etc. The same for all the other new
>> flags, including 'preserve_old_tuple'.
> names - ok, but again this is work for the compiler https://godbolt.org/z/vbEeEP
> I could change it if you insist, but the compiled code will be nearly the same.
I can't insist on anything. I am just a middle dev. But I would rather
reduce the amount of trust we put in the compiler.
>>> +struct forgot_to_add_semicolon
>> 6. What is this?
> That's a standard guard that prohibits usage of the macro without a semicolon at the end of the line.
> If somebody forgets to add ';', he will get an error message mentioning 'forgot_to_add_semicolon'.
We never use it anywhere. Please, drop it. No need to complicate things. I don't
see any problem with not using ';' at the end of this macro.
>>> + bool is_rw = txn != NULL;
>>> + *ret = txm_tuple_clarify(txn, *res, ptr->index->def->iid, 0, is_rw);
>> 8. Why isn't it a cycle?
> because there can be only one tuple with the desired key in the hash table.
Yeah, but one tuple with a key can have a history of the tuples with the
same key.
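To illustrate why even a single hash-table hit needs a loop, here is a hedged sketch with mock types (mock_tuple and its 'older' chain are hypothetical stand-ins for the real version history, not the patch's structures):

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * Mock of a tuple version chain: each version points to the older one
 * with the same key; 'visible' marks what the current transaction may
 * see after clarification.
 */
struct mock_tuple {
	bool visible;
	struct mock_tuple *older;
};

/*
 * Walk the chain until a visible version is found - the hash lookup
 * returns one head tuple, but that tuple may carry a whole history
 * of versions with the same key.
 */
static struct mock_tuple *
clarify_chain(struct mock_tuple *t)
{
	while (t != NULL && !t->visible)
		t = t->older;
	return t;
}
```

So hash_iterator_eq can't just clarify once and return: if the newest version is invisible to the transaction, an older version of the same key may still be the right answer.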
>>> + struct memtx_tree_iterator *ti = &it->tree_iterator; \
>>> + uint32_t iid = iterator->index->def->iid; \
>>> + bool is_multikey = iterator->index->def->key_def->is_multikey; \
>> 11. All these dereferences are going to cost a lot, even when
>> there are no concurrent txns. Can they be done in a lazy mode?
>> Only if the found tuple is dirty. The same applies to all the
>> other places.
> A compiler should surely handle it, since ..._clarify() is a static inline member.
> Even a processor would handle it, it also reorders instructions, but usually
> it has nothing to do while the tuple is fetching from memory, and I guess
> it will try to do something even outside a branch.
It will speculate, yes. But in 99% of cases it will speculate into the branch
where the tuple is not dirty, because that happens almost always (is supposed
to happen). So the reads would be done only when necessary, in rare cases.
Currently they are always done.
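A minimal sketch of the lazy-dereference idea under discussion, with hypothetical mock structs (mock_tuple and mock_index_def are not the real Tarantool types): the index-definition fields are read only on the rare dirty path.

```c
#include <stdbool.h>

/* Mock tuple: 'is_dirty' marks tuples that need MVCC clarification. */
struct mock_tuple {
	bool is_dirty;
	int payload;
};

/* Mock of the index definition fields the slow path needs. */
struct mock_index_def {
	unsigned iid;
	bool is_multikey;
};

/*
 * Fast path: a clean tuple is returned without touching the index
 * definition at all; the potentially cache-cold def fields are
 * dereferenced only on the rare dirty path, as the review suggests.
 */
static int
clarify_lazy(struct mock_tuple *t, const struct mock_index_def *def)
{
	if (!t->is_dirty)
		return t->payload;	/* common, cheap path */
	/* Rare path: only now read def->iid and friends. */
	return def->iid == 0 ? t->payload : -1;
}
```

The point of the pattern is that when there are no concurrent transactions, the extra pointer chasing costs nothing on the hot path.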
* Re: [Tarantool-patches] [PATCH 16/16] tx: use new tx manager in memtx
2020-07-15 10:32 ` Aleksandr Lyapunov
@ 2020-07-15 22:09 ` Vladislav Shpilevoy
0 siblings, 0 replies; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-15 22:09 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
>> +static int
>>> +memtx_engine_prepare(struct engine *engine, struct txn *txn)
>>> +{
>>> + (void)engine;
>>> + struct txn_stmt *stmt;
>>> + stailq_foreach_entry(stmt, &txn->stmts, next) {
>>> + if (stmt->add_story != NULL || stmt->del_story != NULL)
>> 1. Seems like it is worth moving this check into a static inline
>> function in the txn.h header. You use it a lot.
> It's a virtual method, it can't be inlined..
I am talking about the check. Not about the memtx_engine_prepare().
stmt->add_story != NULL || stmt->del_story != NULL
This line can be moved into a static inline function in txn.h.
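A sketch of the helper being proposed, assuming cut-down stand-ins for struct txn_stmt and txm_story (the real definitions live in txn.h; the helper name is a suggestion, not the patch's):

```c
#include <stdbool.h>
#include <stddef.h>

/* Opaque stand-in for the real story structure. */
struct txm_story;

/* Cut-down stand-in for struct txn_stmt, keeping only the two links. */
struct txn_stmt {
	struct txm_story *add_story;
	struct txm_story *del_story;
};

/*
 * One named place for the repeated check that a statement actually
 * changed data - a static inline in txn.h costs nothing at runtime.
 */
static inline bool
txn_stmt_has_story(const struct txn_stmt *stmt)
{
	return stmt->add_story != NULL || stmt->del_story != NULL;
}
```

Call sites like memtx_engine_prepare() would then read `if (txn_stmt_has_story(stmt))` instead of repeating the two-clause condition.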
* Re: [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine
2020-07-15 12:25 ` Aleksandr Lyapunov
@ 2020-07-15 22:10 ` Vladislav Shpilevoy
2020-07-16 4:48 ` Aleksandr Lyapunov
0 siblings, 1 reply; 49+ messages in thread
From: Vladislav Shpilevoy @ 2020-07-15 22:10 UTC (permalink / raw)
To: Aleksandr Lyapunov, tarantool-patches
On 15.07.2020 14:25, Aleksandr Lyapunov wrote:
> I hope it will be enabled by default.
> Let's leave it for the performance part. As for me, I don't care
> about memory, but initialization could be significantly longer.
> I think I can refactor and improve that part.
Probably look at how triggers are initialized on demand,
with TXN_HAS_TRIGGERS flag.
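A hedged sketch of the on-demand initialization pattern being suggested, with a mock transaction struct (mock_txn and its field names are hypothetical, drawn by analogy with the TXN_HAS_TRIGGERS flag, not from the actual code):

```c
#include <stdbool.h>

/*
 * Mock transaction: the heavyweight MVCC state (psn, conflict rlists,
 * etc.) is initialized only when first needed, guarded by a flag -
 * the same idea as initializing trigger lists on demand.
 */
struct mock_txn {
	bool has_tx_state;
	int init_count;	/* counts how many times the heavy init ran */
};

static void
txn_init_tx_state(struct mock_txn *txn)
{
	if (txn->has_tx_state)
		return;		/* already initialized, fast exit */
	txn->init_count++;	/* stands in for rlist_create() etc. */
	txn->has_tx_state = true;
}
```

With this shape, transactions that never touch the new manager pay only a flag check, addressing the concern about struct txn initialization getting significantly longer.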
* Re: [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine
2020-07-15 22:10 ` Vladislav Shpilevoy
@ 2020-07-16 4:48 ` Aleksandr Lyapunov
0 siblings, 0 replies; 49+ messages in thread
From: Aleksandr Lyapunov @ 2020-07-16 4:48 UTC (permalink / raw)
To: Vladislav Shpilevoy, tarantool-patches
yes, sure.
On 7/16/20 1:10 AM, Vladislav Shpilevoy wrote:
> On 15.07.2020 14:25, Aleksandr Lyapunov wrote:
>> I hope it will be enabled by default..
>> Let's leave it for the performance part. As for me, I don't care
>> about memory, but initialization could be significantly longer.
>> I think I can refactor and improve that part.
> Probably look at how triggers are initialized on demand,
> with TXN_HAS_TRIGGERS flag.
end of thread, other threads:[~2020-07-16 4:49 UTC | newest]
Thread overview: 49+ messages
-- links below jump to the message on this page --
2020-07-08 15:14 [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Aleksandr Lyapunov
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 01/16] Update license file (2020) Aleksandr Lyapunov
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 02/16] Check data_offset overflow in struct tuple Aleksandr Lyapunov
2020-07-12 17:15 ` Vladislav Shpilevoy
2020-07-14 17:09 ` Aleksandr Lyapunov
2020-07-14 22:48 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 03/16] tx: introduce dirty tuples Aleksandr Lyapunov
2020-07-12 17:15 ` Vladislav Shpilevoy
2020-07-12 22:24 ` Nikita Pettik
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 04/16] vinyl: rename tx_manager -> vy_tx_manager Aleksandr Lyapunov
2020-07-12 17:14 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 05/16] tx: save txn in txn_stmt Aleksandr Lyapunov
2020-07-12 17:15 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 06/16] tx: add TX status Aleksandr Lyapunov
2020-07-12 17:15 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 07/16] tx: save preserve old tuple flag in txn_stmt Aleksandr Lyapunov
2020-07-12 17:14 ` Vladislav Shpilevoy
2020-07-14 23:46 ` Vladislav Shpilevoy
2020-07-15 7:53 ` Aleksandr Lyapunov
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 08/16] tx: introduce tx manager Aleksandr Lyapunov
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 09/16] tx: introduce prepare sequence number Aleksandr Lyapunov
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 10/16] tx: introduce txn_stmt_destroy Aleksandr Lyapunov
2020-07-12 17:15 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 11/16] tx: introduce conflict tracker Aleksandr Lyapunov
2020-07-12 17:15 ` Vladislav Shpilevoy
2020-07-14 23:51 ` Vladislav Shpilevoy
2020-07-15 7:57 ` Aleksandr Lyapunov
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 12/16] introduce tuple smart pointers Aleksandr Lyapunov
2020-07-12 17:16 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 13/16] tx: introduce txm_story Aleksandr Lyapunov
2020-07-12 17:14 ` Vladislav Shpilevoy
2020-07-14 23:46 ` Vladislav Shpilevoy
2020-07-15 8:11 ` Aleksandr Lyapunov
2020-07-15 22:02 ` Vladislav Shpilevoy
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 14/16] tx: indexes Aleksandr Lyapunov
2020-07-14 23:50 ` Vladislav Shpilevoy
2020-07-15 10:02 ` Aleksandr Lyapunov
2020-07-15 22:08 ` Vladislav Shpilevoy
2020-07-15 10:19 ` Aleksandr Lyapunov
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 15/16] tx: introduce point conflict tracker Aleksandr Lyapunov
2020-07-08 15:14 ` [Tarantool-patches] [PATCH 16/16] tx: use new tx manager in memtx Aleksandr Lyapunov
2020-07-14 23:45 ` Vladislav Shpilevoy
2020-07-15 10:32 ` Aleksandr Lyapunov
2020-07-15 22:09 ` Vladislav Shpilevoy
2020-07-12 17:19 ` [Tarantool-patches] [PATCH v2 00/16] Transaction engine for memtx engine Vladislav Shpilevoy
2020-07-14 23:47 ` Vladislav Shpilevoy
2020-07-15 12:25 ` Aleksandr Lyapunov
2020-07-15 22:10 ` Vladislav Shpilevoy
2020-07-16 4:48 ` Aleksandr Lyapunov