* [Tarantool-patches] [PATCH v1] test: block qsync_snapshots.test.lua
From: Alexander V. Tikhonov @ 2020-09-10 16:45 UTC (permalink / raw)
To: Kirill Yukhin, Serge Petrenko; +Cc: tarantool-patches
Found that after running the following reproducer from the
qsync_snapshots.test.lua test:
test_run = require('test_run').new()
engine = test_run:get_cfg('engine')
box.schema.user.grant('guest', 'replication')
test_run:cmd('create server replica with rpl_master=default,\
script="replication/replica.lua"')
test_run:cmd('start server replica with wait=True, wait_load=True')
test_run:switch('default')
_ = box.schema.space.create('sync', {is_sync=true, engine=engine})
_ = box.space.sync:create_index('pk')
box.space.sync:insert{1}
box.snapshot()
box.space.sync:drop()
test_run:cmd('stop server replica')
test_run:cmd('delete server replica')
test_run:cleanup_cluster()
box.schema.user.revoke('guest', 'replication')
tests that run later on the same test-run worker and use commands like:
test_run:cmd("restart server default")
fail with output like:
main/103/master C> Tarantool 2.6.0-52-g71a24b9f2
main/103/master C> log level 5
main/103/master I> mapping 117440512 bytes for memtx tuple arena...
main/103/master I> mapping 134217728 bytes for vinyl tuple arena...
main/103/master I> instance uuid b086288b-5e2d-4696-b690-fd26222a6274
main/103/master I> instance vclock {1: 13}
iproto/101/main I> binary: bound to unix/:/Users/tntmac02.tarantool.i/tnt/test/var/001_replication/master.socket-iproto
main/103/master I> recovery start
main/103/master I> recovering from `/Users/tntmac02.tarantool.i/tnt/test/var/001_replication/master/00000000000000000007.snap'
main/103/master I> cluster uuid 889d55d8-2cf1-4b62-966e-228fc7c86f58
main/103/master I> assigned id 1 to replica b086288b-5e2d-4696-b690-fd26222a6274
main/103/master I> assigned id 2 to replica 9c712376-d95d-4ed1-9eee-695353997c3c
main/103/master I> recover from `/Users/tntmac02.tarantool.i/tnt/test/var/001_replication/master/00000000000000000007.xlog'
main/103/master I> removed replica 9c712376-d95d-4ed1-9eee-695353997c3c
main/103/master I> done `/Users/tntmac02.tarantool.i/tnt/test/var/001_replication/master/00000000000000000007.xlog'
main/103/master I> recover from `/Users/tntmac02.tarantool.i/tnt/test/var/001_replication/master/00000000000000000013.xlog'
main/103/master I> done `/Users/tntmac02.tarantool.i/tnt/test/var/001_replication/master/00000000000000000013.xlog'
main/103/master cbus.c:442 !> SystemError timed out: Operation timed out
main/103/master F> can't initialize storage: timed out
main/103/master F> can't initialize storage: timed out
Found that this command is used in many tests:
find test/replication -name "*.test.lua" -exec grep "restart server default" {} \; | wc -l
16
To keep testing stable, the qsync_snapshots.test.lua test must either be
disabled or run after all of the tests that use this command. Decided to
add the test to the test-run 'fragile' list, so that it runs at the very
end of the replication suite run list.
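For illustration, a minimal sketch of the intended scheduling effect,
assuming (as described above) that test-run defers the tests named in the
suite's 'fragile' list to the tail of the run order. The function and its
arguments here are hypothetical, not test-run's actual code:
-- Hypothetical sketch: defer tests from the suite.ini 'fragile' list
-- to the end of the suite run order.
local function defer_fragile(tests, fragile)
    local is_fragile = {}
    for _, name in ipairs(fragile) do is_fragile[name] = true end
    local head, tail = {}, {}
    for _, name in ipairs(tests) do
        -- Fragile tests go to the tail, all others keep their order.
        table.insert(is_fragile[name] and tail or head, name)
    end
    for _, name in ipairs(tail) do table.insert(head, name) end
    return head
end
-- defer_fragile({'anon.test.lua', 'qsync_snapshots.test.lua',
--                'status.test.lua'}, {'qsync_snapshots.test.lua'})
-- returns {'anon.test.lua', 'status.test.lua', 'qsync_snapshots.test.lua'}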
Part of #5288
---
Github: https://github.com/tarantool/tarantool/tree/avtikhon/gh-5288-qsync-snapshots-block
Issue: https://github.com/tarantool/tarantool/issues/5288
test/replication/suite.ini | 1 +
1 file changed, 1 insertion(+)
diff --git a/test/replication/suite.ini b/test/replication/suite.ini
index a6d653d3b..3baa3af47 100644
--- a/test/replication/suite.ini
+++ b/test/replication/suite.ini
@@ -24,3 +24,4 @@ fragile = errinj.test.lua ; gh-3870
           gh-4605-empty-password.test.lua ; gh-5030
           anon.test.lua                   ; gh-5058
           status.test.lua                 ; gh-5110
+          qsync_snapshots.test.lua        ; gh-5288
--
2.17.1