* [Tarantool-patches] [PATCH 0/2] Qsync flaky tests, next iteration
From: Vladislav Shpilevoy @ 2020-07-14 22:44 UTC
To: tarantool-patches, avtikhon
The tests keep failing, each time in a new way. This patchset
attempts to fix them again. It is worth mentioning that I couldn't
reproduce the failures from the issues, so the fixes are based on
my assumptions plus a passing CI run (it did fail, but in other
tests).

How 5168 managed to happen I can't even imagine, but the flaky
test case was incorrect anyway, so it is reworked in this patchset.

I suspect these failures somehow depend on disk speed, not on CPU,
especially judging by how 5167 failed. On my machine the
reproducibility seems to be so low that I couldn't trigger it even
with tens of workers.
Branch: http://github.com/tarantool/tarantool/tree/gerold103/gh-5167-5168-qsync-flaky
Issue: https://github.com/tarantool/tarantool/issues/5167
Issue: https://github.com/tarantool/tarantool/issues/5168
Vladislav Shpilevoy (2):
test: fix flaky qsync_advanced.test.lua
test: fix flaky qsync_snapshots.test.lua
test/replication/qsync_advanced.result | 42 ++++++++++++++++-------
test/replication/qsync_advanced.test.lua | 32 ++++++++++-------
test/replication/qsync_snapshots.result | 4 +++
test/replication/qsync_snapshots.test.lua | 1 +
4 files changed, 55 insertions(+), 24 deletions(-)
--
2.21.1 (Apple Git-122.3)
* [Tarantool-patches] [PATCH 1/2] test: fix flaky qsync_advanced.test.lua
From: Vladislav Shpilevoy @ 2020-07-14 22:44 UTC
To: tarantool-patches, avtikhon
There were multiple problems:
- Some timeouts were too small. A timeout of 0.1 is a very small
  value, which sooner or later leads to flakiness in 100% of cases.
- One timeout was too big: a 5 second wait, whereas it could easily
  be less than a second. The operation was expected to fail anyway,
  so there was no need to wait that long.
- os.time() was used to check that the whole timeout really passed,
  which is incorrect: its precision is seconds. Also, the elapsed
  time was checked with the equation duration == timeout, which is
  wrong too. When something is blocked on a timeout and the system
  is not real-time, the actually elapsed time is always >= timeout,
  never == timeout (see the sketch after this list).
- In the failover test there was no fullmesh. As a result, when a
  replica was promoted and wrote something into the sync space, it
  wasn't replicated to the master. But the test passed, because:
  1) the incorrect behaviour was recorded in the .result file;
  2) on the replica the quorum was the default, i.e. 1, so the
     replica didn't wait for the master and successfully wrote the
     data into the sync space.
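For illustration, a minimal sketch of the corrected measurement.
The 'sync' space and the tiny timeout mirror the test; the quorum
of 3 stands in for the test's BROKEN_QUORUM:

    fiber = require('fiber')
    -- A quorum larger than the cluster size plus a tiny timeout
    -- make the insert fail predictably.
    box.cfg{replication_synchro_quorum = 3,
            replication_synchro_timeout = 0.001}
    start = fiber.clock() -- monotonic, sub-second precision
    ok, err = pcall(function() box.space.sync:insert{1} end)
    duration = fiber.clock() - start
    -- On a non-real-time system the elapsed time is >= the
    -- timeout, never exactly == it.
    assert(not ok and duration >= box.cfg.replication_synchro_timeout)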
The initial problem with the test was that in the last case one of
the test jobs somehow observed the replica's data on the old
master. But that is impossible: there was no replication from the
replica to the master. In any case, the test case is now reworked,
and even if it fails again, it will be a new failure.
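In essence, the rework wires up the fullmesh and the quorum like
this (a sketch condensed from the hunks below, not a verbatim
excerpt):

    -- On the old master: subscribe to the replica, completing the
    -- fullmesh, so the promoted replica's writes flow back.
    test_run:cmd("set variable replica_url to 'replica.listen'")
    box.cfg{replication = replica_url}
    -- On the promoted replica: a real quorum of 2, so it actually
    -- waits for the old master to confirm.
    box.cfg{replication_synchro_quorum = 2,
            replication_synchro_timeout = 1000}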
Closes #5168
---
test/replication/qsync_advanced.result | 42 +++++++++++++++++-------
test/replication/qsync_advanced.test.lua | 32 +++++++++++-------
2 files changed, 50 insertions(+), 24 deletions(-)
diff --git a/test/replication/qsync_advanced.result b/test/replication/qsync_advanced.result
index 3a288e0ca..94b19b1f2 100644
--- a/test/replication/qsync_advanced.result
+++ b/test/replication/qsync_advanced.result
@@ -63,7 +63,7 @@ test_run:switch('default')
| ---
| - true
| ...
-box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=0.1}
+box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=1000}
| ---
| ...
_ = box.schema.space.create('sync', {is_sync=true, engine=engine})
@@ -100,7 +100,7 @@ test_run:switch('default')
| ---
| - true
| ...
-box.cfg{replication_synchro_quorum=BROKEN_QUORUM, replication_synchro_timeout=0.1}
+box.cfg{replication_synchro_quorum=BROKEN_QUORUM, replication_synchro_timeout=0.001}
| ---
| ...
_ = box.schema.space.create('sync', {is_sync=true, engine=engine})
@@ -138,7 +138,7 @@ test_run:switch('default')
| ---
| - true
| ...
-box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=0.1}
+box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=1000}
| ---
| ...
_ = box.schema.space.create('sync', {is_sync=true, engine=engine})
@@ -191,7 +191,7 @@ test_run:switch('default')
| ---
| - true
| ...
-box.cfg{replication_synchro_quorum=BROKEN_QUORUM, replication_synchro_timeout=orig_synchro_timeout}
+box.cfg{replication_synchro_quorum=BROKEN_QUORUM, replication_synchro_timeout=0.001}
| ---
| ...
_ = box.schema.space.create('sync', {is_sync=true, engine=engine})
@@ -201,14 +201,17 @@ _ = box.space.sync:create_index('pk')
| ---
| ...
-- Testcase body.
-start = os.time()
+start = fiber.clock()
| ---
| ...
box.space.sync:insert{1}
| ---
| - error: Quorum collection for a synchronous transaction is timed out
| ...
-(os.time() - start) == box.cfg.replication_synchro_timeout -- true
+duration = fiber.clock() - start
+ | ---
+ | ...
+duration >= box.cfg.replication_synchro_timeout or duration -- true
| ---
| - true
| ...
@@ -298,7 +301,7 @@ test_run:switch('default')
| ---
| - true
| ...
-box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=0.1}
+box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=1000}
| ---
| ...
_ = box.schema.space.create('sync', {is_sync=true, engine=engine})
@@ -326,7 +329,7 @@ test_run:switch('default')
| ---
| - true
| ...
-box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=0.1}
+box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=1000}
| ---
| ...
_ = box.schema.space.create('sync', {is_sync=true, engine=engine})
@@ -428,7 +431,15 @@ test_run:switch('default')
| ---
| - true
| ...
-box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=0.1}
+test_run:cmd("set variable replica_url to 'replica.listen'")
+ | ---
+ | - true
+ | ...
+box.cfg{ \
+ replication_synchro_quorum = NUM_INSTANCES, \
+ replication_synchro_timeout = 1000, \
+ replication = replica_url, \
+}
| ---
| ...
_ = box.schema.space.create('sync', {is_sync=true, engine=engine})
@@ -468,6 +479,9 @@ test_run:switch('replica')
| ---
| - true
| ...
+box.cfg{replication_synchro_quorum = 2, replication_synchro_timeout = 1000}
+ | ---
+ | ...
box.space.sync:insert{2}
| ---
| - [2]
@@ -484,6 +498,7 @@ test_run:switch('default')
box.space.sync:select{} -- 1, 2
| ---
| - - [1]
+ | - [2]
| ...
-- Revert cluster configuration.
test_run:switch('default')
@@ -508,6 +523,9 @@ test_run:switch('default')
box.space.sync:drop()
| ---
| ...
+box.cfg{replication = {}}
+ | ---
+ | ...
-- Check behaviour with failed write to WAL on master (ERRINJ_WAL_IO).
-- Testcase setup.
@@ -515,7 +533,7 @@ test_run:switch('default')
| ---
| - true
| ...
-box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=0.1}
+box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=1000}
| ---
| ...
_ = box.schema.space.create('sync', {is_sync=true, engine=engine})
@@ -699,14 +717,14 @@ disable_sync_mode()
| ---
| ...
-- Space is in sync mode now.
-box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=0.1}
+box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=1000}
| ---
| ...
box.space.sync:insert{2} -- success
| ---
| - [2]
| ...
-box.cfg{replication_synchro_quorum=BROKEN_QUORUM, replication_synchro_timeout=0.1}
+box.cfg{replication_synchro_quorum=BROKEN_QUORUM, replication_synchro_timeout=1000}
| ---
| ...
box.space.sync:insert{3} -- success
diff --git a/test/replication/qsync_advanced.test.lua b/test/replication/qsync_advanced.test.lua
index 4b62c6fb4..058ece602 100644
--- a/test/replication/qsync_advanced.test.lua
+++ b/test/replication/qsync_advanced.test.lua
@@ -27,7 +27,7 @@ test_run:cmd('start server replica with wait=True, wait_load=True')
-- Successful write.
-- Testcase setup.
test_run:switch('default')
-box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=0.1}
+box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=1000}
_ = box.schema.space.create('sync', {is_sync=true, engine=engine})
_ = box.space.sync:create_index('pk')
-- Testcase body.
@@ -41,7 +41,7 @@ box.space.sync:drop()
-- Unsuccessfull write.
-- Testcase setup.
test_run:switch('default')
-box.cfg{replication_synchro_quorum=BROKEN_QUORUM, replication_synchro_timeout=0.1}
+box.cfg{replication_synchro_quorum=BROKEN_QUORUM, replication_synchro_timeout=0.001}
_ = box.schema.space.create('sync', {is_sync=true, engine=engine})
_ = box.space.sync:create_index('pk')
-- Testcase body.
@@ -56,7 +56,7 @@ box.space.sync:drop()
-- same order as on client in case of achieved quorum.
-- Testcase setup.
test_run:switch('default')
-box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=0.1}
+box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=1000}
_ = box.schema.space.create('sync', {is_sync=true, engine=engine})
_ = box.space.sync:create_index('pk')
-- Testcase body.
@@ -73,13 +73,14 @@ box.space.sync:drop()
-- Synchro timeout is not bigger than replication_synchro_timeout value.
-- Testcase setup.
test_run:switch('default')
-box.cfg{replication_synchro_quorum=BROKEN_QUORUM, replication_synchro_timeout=orig_synchro_timeout}
+box.cfg{replication_synchro_quorum=BROKEN_QUORUM, replication_synchro_timeout=0.001}
_ = box.schema.space.create('sync', {is_sync=true, engine=engine})
_ = box.space.sync:create_index('pk')
-- Testcase body.
-start = os.time()
+start = fiber.clock()
box.space.sync:insert{1}
-(os.time() - start) == box.cfg.replication_synchro_timeout -- true
+duration = fiber.clock() - start
+duration >= box.cfg.replication_synchro_timeout or duration -- true
-- Testcase cleanup.
test_run:switch('default')
box.space.sync:drop()
@@ -108,7 +109,7 @@ box.cfg.replication_synchro_timeout -- old value
-- TX is in synchronous replication.
-- Testcase setup.
test_run:switch('default')
-box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=0.1}
+box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=1000}
_ = box.schema.space.create('sync', {is_sync=true, engine=engine})
_ = box.space.sync:create_index('pk')
-- Testcase body.
@@ -121,7 +122,7 @@ box.space.sync:drop()
-- data consistency on a leader and replicas.
-- Testcase setup.
test_run:switch('default')
-box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=0.1}
+box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=1000}
_ = box.schema.space.create('sync', {is_sync=true, engine=engine})
_ = box.space.sync:create_index('pk')
-- Testcase body.
@@ -155,7 +156,12 @@ box.cfg{replication_synchro_quorum=BROKEN_QUORUM} -- warning
-- success and data consistency on a leader and replicas (gh-5124).
-- Testcase setup.
test_run:switch('default')
-box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=0.1}
+test_run:cmd("set variable replica_url to 'replica.listen'")
+box.cfg{ \
+ replication_synchro_quorum = NUM_INSTANCES, \
+ replication_synchro_timeout = 1000, \
+ replication = replica_url, \
+}
_ = box.schema.space.create('sync', {is_sync=true, engine=engine})
_ = box.space.sync:create_index('pk')
-- Testcase body.
@@ -167,6 +173,7 @@ box.cfg{read_only=false} -- promote replica to master
test_run:switch('default')
box.cfg{read_only=true} -- demote master to replica
test_run:switch('replica')
+box.cfg{replication_synchro_quorum = 2, replication_synchro_timeout = 1000}
box.space.sync:insert{2}
box.space.sync:select{} -- 1, 2
test_run:switch('default')
@@ -179,11 +186,12 @@ box.cfg{read_only=true}
-- Testcase cleanup.
test_run:switch('default')
box.space.sync:drop()
+box.cfg{replication = {}}
-- Check behaviour with failed write to WAL on master (ERRINJ_WAL_IO).
-- Testcase setup.
test_run:switch('default')
-box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=0.1}
+box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=1000}
_ = box.schema.space.create('sync', {is_sync=true, engine=engine})
_ = box.space.sync:create_index('pk')
-- Testcase body.
@@ -250,9 +258,9 @@ test_run:switch('default')
-- Enable synchronous mode.
disable_sync_mode()
-- Space is in sync mode now.
-box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=0.1}
+box.cfg{replication_synchro_quorum=NUM_INSTANCES, replication_synchro_timeout=1000}
box.space.sync:insert{2} -- success
-box.cfg{replication_synchro_quorum=BROKEN_QUORUM, replication_synchro_timeout=0.1}
+box.cfg{replication_synchro_quorum=BROKEN_QUORUM, replication_synchro_timeout=1000}
box.space.sync:insert{3} -- success
box.space.sync:select{} -- 1, 2, 3
test_run:cmd('switch replica')
--
2.21.1 (Apple Git-122.3)
* [Tarantool-patches] [PATCH 2/2] test: fix flaky qsync_snapshots.test.lua
From: Vladislav Shpilevoy @ 2020-07-14 22:44 UTC
To: tarantool-patches, avtikhon
There was a test case about the master writing a transaction, the
replica starting a snapshot, the master writing a rollback, and the
replica canceling the snapshot, because its data was spoiled with
the rolled back data.

Sometimes it could happen that the master started writing the
transaction to the WAL in a newly created fiber, then the test
switched to the replica and successfully created the snapshot, and
only then was the data from the master received. As a result, the
subsequent rollback didn't affect the already finished snapshot.

The patch forces the replica to wait until the dirty data from the
master arrives before starting the snapshot.
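In essence the fix is a single extra wait on the replica (a sketch
condensed from the hunk below):

    -- Block until the master's dirty row is visible on the
    -- replica, so the snapshot observes it and the later rollback
    -- is guaranteed to spoil it.
    test_run:wait_cond(function() return box.space.sync:count() == 1 end)
    ok, err = nil
    f = fiber.create(function() ok, err = pcall(box.snapshot) end)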
Closes #5167
---
test/replication/qsync_snapshots.result | 4 ++++
test/replication/qsync_snapshots.test.lua | 1 +
2 files changed, 5 insertions(+)
diff --git a/test/replication/qsync_snapshots.result b/test/replication/qsync_snapshots.result
index 2a126087a..782ffd482 100644
--- a/test/replication/qsync_snapshots.result
+++ b/test/replication/qsync_snapshots.result
@@ -204,6 +204,10 @@ fiber = require('fiber')
box.cfg{replication_synchro_timeout=1000}
| ---
| ...
+test_run:wait_cond(function() return box.space.sync:count() == 1 end)
+ | ---
+ | - true
+ | ...
ok, err = nil
| ---
| ...
diff --git a/test/replication/qsync_snapshots.test.lua b/test/replication/qsync_snapshots.test.lua
index 0db61da95..979f04d5f 100644
--- a/test/replication/qsync_snapshots.test.lua
+++ b/test/replication/qsync_snapshots.test.lua
@@ -97,6 +97,7 @@ end)
test_run:switch('replica')
fiber = require('fiber')
box.cfg{replication_synchro_timeout=1000}
+test_run:wait_cond(function() return box.space.sync:count() == 1 end)
ok, err = nil
f = fiber.create(function() ok, err = pcall(box.snapshot) end)
--
2.21.1 (Apple Git-122.3)
* Re: [Tarantool-patches] [PATCH 0/2] Qsync flaky tests, next iteration
From: Sergey Bronnikov @ 2020-07-17 10:56 UTC
To: Vladislav Shpilevoy; +Cc: tarantool-patches
Thanks for the patch!
I have spent some time trying to reproduce the original issue
described in the commit message and failed. On the other hand, both
tests with the patches applied passed 1000 iterations in concurrent
mode (-j 10) without failures.
LGTM
On 00:44 Wed 15 Jul, Vladislav Shpilevoy wrote:
> The tests keep failing, each time in a new way. This patchset
> attempts to fix them again. It is worth mentioning that I couldn't
> reproduce the failures from the issues, so the fixes are based on
> my assumptions plus a passing CI run (it did fail, but in other
> tests).
>
> How 5168 managed to happen I can't even imagine, but the flaky
> test case was incorrect anyway, so it is reworked in this patchset.
>
> I suspect these failures somehow depend on disk speed, not on CPU,
> especially judging by how 5167 failed. On my machine the
> reproducibility seems to be so low that I couldn't trigger it even
> with tens of workers.
>
> Branch: http://github.com/tarantool/tarantool/tree/gerold103/gh-5167-5168-qsync-flaky
> Issue: https://github.com/tarantool/tarantool/issues/5167
> Issue: https://github.com/tarantool/tarantool/issues/5168
* Re: [Tarantool-patches] [PATCH 0/2] Qsync flaky tests, next iteration
From: Vladislav Shpilevoy @ 2020-07-28 20:37 UTC
To: Alexander V. Tikhonov; +Cc: tarantool-patches
Pushed to master and 2.5.
* Re: [Tarantool-patches] [PATCH 0/2] Qsync flaky tests, next iteration
From: Alexander V. Tikhonov @ 2020-07-28 8:09 UTC
To: Vladislav Shpilevoy; +Cc: tarantool-patches
Hi Vlad, thanks for the fixes. I've checked them, and it seems they
help to avoid the issues - the patches LGTM.
On Thu, Jul 23, 2020 at 01:57:00AM +0200, Vladislav Shpilevoy wrote:
> The patchset attempts to fix more flaky test cases discovered since the last
> fixes.
>
> Branch: http://github.com/tarantool/tarantool/tree/gerold103/qsync-flaky-tests
> Issue: https://github.com/tarantool/tarantool/issues/5196
> Issue: https://github.com/tarantool/tarantool/issues/5167
>
> Vladislav Shpilevoy (2):
> test: fix flaky qsync_snapshots.test.lua again
> test: fix flaky qsync_with_anon.test.lua again
>
> test/replication/qsync_snapshots.result | 8 +++-
> test/replication/qsync_snapshots.test.lua | 4 +-
> test/replication/qsync_with_anon.result | 58 +++++++++++++++++++++--
> test/replication/qsync_with_anon.test.lua | 27 +++++++++--
> 4 files changed, 86 insertions(+), 11 deletions(-)
>
> --
> 2.21.1 (Apple Git-122.3)
>
* [Tarantool-patches] [PATCH 0/2] Qsync flaky tests, next iteration
From: Vladislav Shpilevoy @ 2020-07-22 23:57 UTC
To: tarantool-patches, avtikhon
The patchset attempts to fix more flaky test cases discovered since the last
fixes.
Branch: http://github.com/tarantool/tarantool/tree/gerold103/qsync-flaky-tests
Issue: https://github.com/tarantool/tarantool/issues/5196
Issue: https://github.com/tarantool/tarantool/issues/5167
Vladislav Shpilevoy (2):
test: fix flaky qsync_snapshots.test.lua again
test: fix flaky qsync_with_anon.test.lua again
test/replication/qsync_snapshots.result | 8 +++-
test/replication/qsync_snapshots.test.lua | 4 +-
test/replication/qsync_with_anon.result | 58 +++++++++++++++++++++--
test/replication/qsync_with_anon.test.lua | 27 +++++++++--
4 files changed, 86 insertions(+), 11 deletions(-)
--
2.21.1 (Apple Git-122.3)