[Tarantool-patches] [PATCH v1] test: filter replication/skip_conflict_row output
Alexander V. Tikhonov
avtikhon at tarantool.org
Sun Dec 20 21:45:53 MSK 2020
Cc: tarantool-patches at dev.tarantool.org
Found that the replication/skip_conflict_row.test.lua test fails with
the following output in the results file:
[035] @@ -139,7 +139,19 @@
[035] -- applier is not in follow state
[035] test_run:wait_upstream(1, {status = 'stopped', message_re = "Duplicate key exists in unique index 'primary' in space 'test'"})
[035] ---
[035] -- true
[035] +- false
[035] +- id: 1
[035] + uuid: f2084d3c-93f2-4267-925f-015df034d0a5
[035] + lsn: 553
[035] + upstream:
[035] + status: follow
[035] + idle: 0.0024020448327065
[035] + peer: unix/:/builds/4BUsapPU/0/tarantool/tarantool/test/var/035_replication/master.socket-iproto
[035] + lag: 0.0046234130859375
[035] + downstream:
[035] + status: follow
[035] + idle: 0.086121961474419
[035] + vclock: {2: 3, 1: 553}
[035] ...
[035] --
[035] -- gh-3977: check that NOP is written instead of conflicting row.
The test could not be restarted with the stored checksum, because values
such as the UUID change on each failure. This happens because test-run
uses an internal chain of functions, wait_upstream() ->
gen_box_info_replication_cond(), which returns the instance information
on failure. To avoid this, the output is now redirected to the log file
instead of the results file.
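The idiom the patch uses: wait_upstream() returns a status flag together
with the instance information, so the volatile details can be sent to the
log while only the stable boolean reaches the .result file. A minimal
sketch of the same pattern, following the names used in the patch
(test_run is the test-run harness object available in the test):

```lua
local log = require('log')
local json = require('json')

-- wait_upstream() returns a status flag plus instance info; the info
-- contains run-specific values (uuid, lsn, idle/lag times) that must
-- not land in the deterministic .result file.
local ok, instance_info = test_run:wait_upstream(1, {
    status = 'stopped',
    message_re = "Duplicate key exists in unique index 'primary'" ..
                 " in space 'test'",
})

-- On failure, dump the volatile details to the log only; the .result
-- file records just the boolean, so the checksum stays stable across
-- reruns.
if not ok then
    log.error('test_run:wait_upstream failed with instance info: '
              .. json.encode(instance_info))
end
```

In the .test.lua file itself this is written as a one-liner
(`ok or require('log').error(...)`) because the test console prints the
value of every statement, and an if-block would change the recorded
output.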
---
Github: https://github.com/tarantool/tarantool/tree/avtikhon/skip_cond_2nd2
test/replication/skip_conflict_row.result | 7 ++++++-
test/replication/skip_conflict_row.test.lua | 5 ++++-
test/replication/suite.ini | 2 +-
3 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/test/replication/skip_conflict_row.result b/test/replication/skip_conflict_row.result
index df209f029..783702b1a 100644
--- a/test/replication/skip_conflict_row.result
+++ b/test/replication/skip_conflict_row.result
@@ -137,7 +137,12 @@ test_run:cmd("restart server replica")
- true
...
-- applier is not in follow state
-test_run:wait_upstream(1, {status = 'stopped', message_re = "Duplicate key exists in unique index 'primary' in space 'test'"})
+ok, instance_info = test_run:wait_upstream(1, {status = 'stopped', \
+ message_re = "Duplicate key exists in unique index 'primary' in space 'test'"})
+---
+...
+ok or require('log').error('test_run:wait_upstream failed with instance info: ' \
+ .. require('json').encode(instance_info))
---
- true
...
diff --git a/test/replication/skip_conflict_row.test.lua b/test/replication/skip_conflict_row.test.lua
index 32d473b66..6cb67898e 100644
--- a/test/replication/skip_conflict_row.test.lua
+++ b/test/replication/skip_conflict_row.test.lua
@@ -48,7 +48,10 @@ ok or require('log').error('test_run:wait_upstream failed with instance info: '
test_run:cmd("switch default")
test_run:cmd("restart server replica")
-- applier is not in follow state
-test_run:wait_upstream(1, {status = 'stopped', message_re = "Duplicate key exists in unique index 'primary' in space 'test'"})
+ok, instance_info = test_run:wait_upstream(1, {status = 'stopped', \
+ message_re = "Duplicate key exists in unique index 'primary' in space 'test'"})
+ok or require('log').error('test_run:wait_upstream failed with instance info: ' \
+ .. require('json').encode(instance_info))
--
-- gh-3977: check that NOP is written instead of conflicting row.
diff --git a/test/replication/suite.ini b/test/replication/suite.ini
index 9c3845369..0fc618294 100644
--- a/test/replication/suite.ini
+++ b/test/replication/suite.ini
@@ -28,7 +28,7 @@ fragile = {
},
"skip_conflict_row.test.lua": {
"issues": [ "gh-4958" ],
- "checksums": [ "c7f20590643ed9e0263d1f5784d65c2e" ]
+ "checksums": [ "a21f07339237cd9d0b8c74e144284449" ]
},
"sync.test.lua": {
"issues": [ "gh-3835" ],
--
2.25.1