Tarantool development patches archive
* [tarantool-patches] Re[2]: [PATCH] test: enable parallel for python tests and long
@ 2019-02-28  7:53 Sergei Voronezhskii
  2019-03-06 12:27 ` [tarantool-patches] Re[3]: " Sergei Voronezhskii
  0 siblings, 1 reply; 3+ messages in thread
From: Sergei Voronezhskii @ 2019-02-28  7:53 UTC (permalink / raw)
  To: tarantool-patches


Updated branch:  https://github.com/tarantool/tarantool/commits/sergw/enable-parallel-test-py-long-fixups
Test passed:  https://travis-ci.org/tarantool/tarantool/builds/499664259

>Sunday, 23 December 2018, 14:04 +03:00 from Alexander Turenko <alexander.turenko@tarantool.org>:
>
>Hi!
>
>See comments below and fixups on the
>sergw/enable-parallel-test-py-long-fixups branch.
>
>I tested it with the command below and after the fixup it passes.
>
>```
>$ ./test-run.py --long -- $(for i in $(seq 1 10); do echo -n "-py/ "; done)
>```
>
>WBR, Alexander Turenko.
>
>On Tue, Dec 18, 2018 at 11:43:00AM +0300, Sergei Voronezhskii wrote:
>> Fixed cleanup for python tests:
>> - box-py/iproto.test.py need to cleanup created cluster
>>   reproduce:
>>   - [box-py/iproto.test.py, null]
>>   - [box-py/bootstrap.test.py, null]
>> - box-py/bootstrap.test.py should not restart server because of next
>>   box-py/call.test.py got error:
>>    `NetworkError: (104, 'Connection reset by peer')`
>>    at `iproto.authenticate('test', 'test')`
>>   reproduce:
>>   - [box-py/iproto.test.py, null]
>>   - [box-py/bootstrap.test.py, null]
>>   - [box-py/call.test.py, null]
>> - replication-py/multi.test.py should not relay on hardcoded server.id
>
>relay -> rely

fixed

>Why shouldn't it? You cleaned up _cluster in box-py/iproto.test.py.
>Why not clean it up in replication-py/cluster.test.py too?
>
>>   because previous test can create some and `id` will autoincremented,
>>   instead this we need to calculate vclock_diff

>You use the hardcoded server id 1. 'previous test can create some' -- some
>what? I suppose you mean using the vclock difference instead of the
>absolute value, but the message states something different.

I guess checking with the hardcoded server id 1 is enough.
If master==replica1 and master==replica2, it is useless to also check
replica1 against replica2.

The previous test auto-increments the replica id and also moves the vclock.

>
>>   reproduce:
>>   - [replication-py/cluster.test.py, null]
>>   - [replication-py/multi.test.py, null]
>> 
>> Part of: #3232
>> ---
>> BRANCH:  https://github.com/tarantool/tarantool/tree/sergw/enable-parallel-test-py-long
>>  test/box-py/bootstrap.test.py     |  4 +---
>>  test/box-py/iproto.result         |  4 ++++
>>  test/box-py/iproto.test.py        |  1 +
>>  test/box-py/suite.ini             |  2 +-
>>  test/engine_long/suite.ini        |  2 +-
>>  test/long_run-py/suite.ini        |  2 +-
>>  test/luajit-tap/suite.ini         |  2 +-
>>  test/replication-py/multi.result  | 19 +++++--------------
>>  test/replication-py/multi.test.py | 28 +++++++++++++++++-----------
>>  test/replication-py/suite.ini     |  2 +-
>>  test/xlog-py/suite.ini            |  2 +-
>>  11 files changed, 34 insertions(+), 34 deletions(-)
>> 
>> diff --git a/test/box-py/bootstrap.test.py b/test/box-py/bootstrap.test.py
>> index 9d690b03f..dba6f5ae9 100644
>> --- a/test/box-py/bootstrap.test.py
>> +++ b/test/box-py/bootstrap.test.py
>> @@ -9,8 +9,6 @@ cluster_uuid = yaml.load(server.admin('box.space._schema:get("cluster")',
>>  sys.stdout.push_filter(cluster_uuid, '<cluster uuid>')
>> 
>>  server.admin('box.internal.bootstrap()')
>> -server.restart()
>> -
>
>I suppose now the test case will not test what it was intended to. Please,
>implement waiting for the server startup.
>
>You should have a strong opinion on why this will not break the test case
>before proposing such a change in a patch. When you have this
>opinion it is easy to write it down in the commit message.
>
>I'm not sure whether it really breaks the test case, but I don't see any
>justification in the commit message. So I need to investigate the
>problem from scratch to prove it is good or bad. So what gain do we get
>from your work?
>
>The cost of the code is zero w/o investigation. The investigation result was
>not shared. I don't understand why you behave in this way. Again and
>again.
>

To investigate, I checked upgrade.lua for the bootstrap method,
but I did not find any clue why we should do the restart here.
To fix it without deleting `restart` we can do:

iproto.py_con.ping()
server.restart()
iproto.reconnect()
iproto.authenticate('test', 'test')

But with that I get a TarantoolStartError exception on OSX.
Meanwhile, as you can see, @locker has already removed the restart at
https://github.com/tarantool/tarantool/commit/3fe8c2835406c02acb59b6f86721d63cec69fac7
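
For reference, waiting for the startup could look roughly like the sketch
below. It assumes the test-run `server.iproto` object with the
`reconnect()`/`py_con.ping()` calls used above; the helper name and the
retry budget are mine, not part of any patch.

```python
import time

def wait_for_startup(server, timeout=10.0, delay=0.1):
    # Poll the binary port until the restarted server answers a ping,
    # instead of assuming the restart finishes within a fixed delay.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            server.iproto.reconnect()
            server.iproto.py_con.ping()
            return
        except Exception:
            time.sleep(delay)
    raise RuntimeError('server did not come up within %s seconds' % timeout)
```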

>>  server.admin('box.space._schema:select{}')
>>  server.admin('box.space._cluster:select{}')
>>  server.admin('box.space._space:select{}')
>> @@ -20,4 +18,4 @@ server.admin('box.space._func:select{}')
>>  server.admin('box.space._priv:select{}')
>> 
>>  # Cleanup
>> -sys.stdout.pop_filter()
>> +sys.stdout.clear_all_filters()
>
>Two filters were pushed, so two should be popped, I think.
>
>> diff --git a/test/replication-py/multi.test.py b/test/replication-py/multi.test.py
>> index 224332266..3394d47be 100644
>> --- a/test/replication-py/multi.test.py
>> +++ b/test/replication-py/multi.test.py
>> @@ -10,10 +10,14 @@ ROW_N = REPLICA_N * 20
>> 
>>  # master server
>>  master = server
>> +
>>  master.admin("fiber = require('fiber')")
>>  master.admin("box.schema.user.grant('guest', 'replication')")
>>  master.admin("box.schema.user.grant('guest', 'execute', 'universe')")
>> 
>> +# Get vclock on master
>> +vclock_cnt = yaml.load(master.admin("box.info.vclock[1]", silent = True))[0]
>> +
>>  print '----------------------------------------------------------------------'
>>  print 'Bootstrap replicas'
>>  print '----------------------------------------------------------------------'
>> @@ -48,14 +52,16 @@ for server in cluster:
>>      server.iproto.py_con.eval("box.cfg { replication = ... }", [sources])
>> 
>>  # Wait connections to establish
>> -for server in cluster:
>> +for sid, server in enumerate(cluster, 1):
>>      for server2 in cluster:
>>          server.iproto.py_con.eval("""
>>              while #box.info.vclock[...] ~= nil do
>>                  fiber.sleep(0.01)
>>              end;""", server2.id)
>> -        print 'server', server.id, "connected"
>> -    server.admin("box.info.vclock")
>> +        print 'server', sid, "connected"
>> +    vclock_new = yaml.load(server.admin("box.info.vclock[1]", silent = True))[0]
>> +    print "vclock_diff: {}".format(vclock_new - vclock_cnt)
>> +
>> 
>
>Re server.id -> sid: please clean up _cluster in
>replication-py/cluster.test.py instead. The same goes for all such changes
>below.
Added:

diff --git a/test/replication-py/cluster.result b/test/replication-py/cluster.result
index 3b5566829..04f06f74e 100644
--- a/test/replication-py/cluster.result
+++ b/test/replication-py/cluster.result
@@ -196,6 +196,10 @@ box.info.vclock[2] == 2
 ---
 - true
 ...
+box.space._cluster:delete{2} ~= nil
+---
+- true
+...
 -------------------------------------------------------------
 JOIN replica to read-only master
 -------------------------------------------------------------
@@ -216,3 +220,7 @@ Cleanup
 box.schema.user.revoke('guest', 'replication')
 ---
 ...
+box.space._cluster:delete{2} ~= nil
+---
+- true
+...
diff --git a/test/replication-py/cluster.test.py b/test/replication-py/cluster.test.py
index b5f59f40f..dcb05a517 100644
--- a/test/replication-py/cluster.test.py
+++ b/test/replication-py/cluster.test.py
@@ -239,6 +239,7 @@ replica.admin('box.info.vclock[%d] == 2' % replica_id)

 replica.stop()
 replica.cleanup()
+master.admin('box.space._cluster:delete{%d} ~= nil' % replica_id)

 print '-------------------------------------------------------------'
 print 'JOIN replica to read-only master'
@@ -290,3 +291,5 @@ print '-------------------------------------------------------------'
 sys.stdout.pop_filter()

 master.admin("box.schema.user.revoke('guest', 'replication')")
+
+master.admin('box.space._cluster:delete{2} ~= nil')

>
>Re vclocks: why is this vector clock difference a scalar value? Why is it
>always for server id 1? Please describe the problem first. Is it
>that cluster.test.py moves the vclock? I think we can just not print it,
>because it was an informational thing.

Yes, it is because cluster.test.py moves the vclock. I've updated the commit message:


Fixed cleanup for python tests:
- replication-py/cluster.test.py
- box-py/iproto.test.py
  need to clean up the created cluster
  reproduce:
  - [box-py/iproto.test.py, null]
  - [box-py/bootstrap.test.py, null]
- replication-py/multi.test.py
  1) should not rely on a hardcoded server.id, because a previous test
     can create a replica and `id` will be auto-incremented; instead,
     enumerate through the cluster list
  2) because cluster.test.py moves the vclock, check the vclock
     difference instead of printing `box.info.vclock`
  reproduce:
  - [replication-py/cluster.test.py, null]
  - [replication-py/multi.test.py, null]

Part of: #3232

-- 
Sergei Voronezhskii



* [tarantool-patches] Re[3]: [PATCH] test: enable parallel for python tests and long
  2019-02-28  7:53 [tarantool-patches] Re[2]: [PATCH] test: enable parallel for python tests and long Sergei Voronezhskii
@ 2019-03-06 12:27 ` Sergei Voronezhskii
  2019-03-13 17:43   ` Alexander Turenko
  0 siblings, 1 reply; 3+ messages in thread
From: Sergei Voronezhskii @ 2019-03-06 12:27 UTC (permalink / raw)
  To: tarantool-patches

Hello, I've updated the branch and rebased it on 2.1.

Removed the vclock printing:
diff --git a/test/replication-py/multi.test.py b/test/replication-py/multi.test.py
index 224332266..15d5a409a 100644
--- a/test/replication-py/multi.test.py
+++ b/test/replication-py/multi.test.py
@@ -55,7 +55,6 @@ for server in cluster:
                 fiber.sleep(0.01)
             end;""", server2.id)
         print 'server', server.id, "connected"
-    server.admin("box.info.vclock")

 print 'done'

diff --git a/test/replication-py/multi.result b/test/replication-py/multi.result
index b4f16b699..f90ab22a0 100644
--- a/test/replication-py/multi.result
+++ b/test/replication-py/multi.result
@@ -17,24 +17,12 @@ Make a full mesh
 server 1 connected
 server 1 connected
 server 1 connected
-box.info.vclock
----
-- {1: 4}
-...
 server 2 connected
 server 2 connected
 server 2 connected
-box.info.vclock
----
-- {1: 4}
-...
 server 3 connected
 server 3 connected
 server 3 connected
-box.info.vclock
----
-- {1: 4}
-...
 done
 ----------------------------------------------------------------------
 Test inserts

Fixed typo:
diff --git a/test/replication-py/multi.test.py b/test/replication-py/multi.test.py
index 224332266..15d5a409a 100644
@@ -84,7 +83,7 @@ print 'Synchronize'
 for server1 in cluster:
     for server2 in cluster:
         server1.wait_lsn(server2.id, server2.get_lsn(server2.id))
-    print 'server', server.id, 'done'
+    print 'server', server1.id, 'done'
 print 'done'
 print

diff --git a/test/replication-py/multi.result b/test/replication-py/multi.result
index b4f16b699..f90ab22a0 100644
--- a/test/replication-py/multi.result
+++ b/test/replication-py/multi.result
@@ -54,8 +42,8 @@ Insert records
 inserted 60 records

 Synchronize
-server 3 done
-server 3 done
+server 1 done
+server 2 done
 server 3 done
 done

Squashed: wait for the whole data:
diff --git a/test/box-py/iproto.test.py b/test/box-py/iproto.test.py
index 81cdddb61..ad788eef4 100644
--- a/test/box-py/iproto.test.py
+++ b/test/box-py/iproto.test.py
@@ -226,6 +226,10 @@ def receive_response():
         resp_len = s.recv(5)
         resp_len = msgpack.loads(resp_len)
         resp_headerbody = s.recv(resp_len)
+        # wait for the whole data
+        while len(resp_headerbody) < resp_len:
+            chunk = s.recv(resp_len - len(resp_headerbody))
+            resp_headerbody = resp_headerbody + chunk
         unpacker = msgpack.Unpacker(use_list = True)
         unpacker.feed(resp_headerbody)
         resp_header = unpacker.unpack()
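
A single socket.recv(n) may legally return fewer than n bytes, so the added
loop keeps reading until the announced length has arrived. The same pattern
is often factored into a helper; a hypothetical sketch (the helper is mine,
not part of the patch) that also guards against the peer closing the
connection mid-response, on which a bare loop would spin forever:

```python
def recv_exactly(sock, n):
    # Read exactly n bytes from the socket, looping over short reads.
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise IOError('connection closed after %d of %d bytes'
                          % (len(buf), n))
        buf += chunk
    return buf
```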


-- 
Sergei Voronezhskii


* Re: [tarantool-patches] Re[3]: [PATCH] test: enable parallel for python tests and long
  2019-03-06 12:27 ` [tarantool-patches] Re[3]: " Sergei Voronezhskii
@ 2019-03-13 17:43   ` Alexander Turenko
  0 siblings, 0 replies; 3+ messages in thread
From: Alexander Turenko @ 2019-03-13 17:43 UTC (permalink / raw)
  To: Sergei Voronezhskii
  Cc: tarantool-patches, Konstantin Osipov, Kirill Yukhin, Vladimir Davydov

I have split the patch into two (long and python test suites), made
minor changes in the commit message, and made minor changes in the code
(w/o behaviour changes, see below).

Pushed to 2.1.

CCed maintainers just in case.

WBR, Alexander Turenko.

----

diff --git a/test/box-py/iproto.result b/test/box-py/iproto.result
index eb84eaf1c..900e6e24f 100644
--- a/test/box-py/iproto.result
+++ b/test/box-py/iproto.result
@@ -137,9 +137,9 @@ space:drop()
 space2:drop()
 ---
 ...
-box.space._cluster:delete(2)
+box.space._cluster:delete{2} ~= nil
 ---
-- [2, '0d5bd431-7f3e-4695-a5c2-82de0a9cbc95']
+- true
 ...
 space = box.schema.create_space('gh1280', { engine = 'vinyl' })
 ---
diff --git a/test/box-py/iproto.test.py b/test/box-py/iproto.test.py
index ad788eef4..77637d8ed 100644
--- a/test/box-py/iproto.test.py
+++ b/test/box-py/iproto.test.py
@@ -321,7 +321,7 @@ c.close()
 
 admin("space:drop()")
 admin("space2:drop()")
-admin("box.space._cluster:delete(2)")
+admin("box.space._cluster:delete{2} ~= nil")
 
 #
 # gh-1280 Segmentation fault on space.select(tuple()) or space.select([2])
diff --git a/test/replication-py/cluster.test.py b/test/replication-py/cluster.test.py
index dcb05a517..b328b1c89 100644
--- a/test/replication-py/cluster.test.py
+++ b/test/replication-py/cluster.test.py
@@ -289,7 +289,5 @@ print '-------------------------------------------------------------'
 
 # Cleanup
 sys.stdout.pop_filter()
-
 master.admin("box.schema.user.revoke('guest', 'replication')")
-
 master.admin('box.space._cluster:delete{2} ~= nil')


Thread overview: 3+ messages
2019-02-28  7:53 [tarantool-patches] Re[2]: [PATCH] test: enable parallel for python tests and long Sergei Voronezhskii
2019-03-06 12:27 ` [tarantool-patches] Re[3]: " Sergei Voronezhskii
2019-03-13 17:43   ` Alexander Turenko