Tarantool development patches archive
* [tarantool-patches] [PATCH 0/3][vshard] Small patches
@ 2018-06-22 21:43 AKhatskevich
  2018-06-22 21:43 ` [tarantool-patches] [PATCH 1/3] Do not force login/pass in URI AKhatskevich
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: AKhatskevich @ 2018-06-22 21:43 UTC (permalink / raw)
  To: tarantool-patches, v.shpilevoy

Branch: https://github.com/tarantool/vshard/tree/kh/small_patches-1
Issues:
 * https://github.com/tarantool/vshard/issues/121
 * https://github.com/tarantool/vshard/issues/52
 * https://github.com/tarantool/vshard/issues/47

AKhatskevich (3):
  #47: Do not force login/pass in URI
  #52: Allow use of existing user
  #121: Introduce destroy module feature

Note: #52 and #121 are coupled:
 1. #52 is tested in #121, because the destroy module makes it much easier.
 2. #121 needs #52, because #121 doesn't delete the user, and the user
    still exists when vshard is loaded back.
Note: these patches are sent together because they are very small.
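
A usage sketch of the cycle these patches enable (mirrors the new tests;
`cfg` and `name` here stand for the caller's own config and instance name):

    -- #121: drop vshard spaces, functions, background fibers and
    -- the `once` metadata.
    vshard.storage.destroy()
    -- Reload the module and configure it from scratch.
    package.loaded['vshard.storage'] = nil
    vshard.storage = require('vshard.storage')
    -- #52: the still-existing user is reused instead of raising an error.
    vshard.storage.cfg(cfg, name)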

 test/lua_libs/util.lua        |  34 +++++++++++
 test/router/destroy.result    | 108 ++++++++++++++++++++++++++++++++++
 test/router/destroy.test.lua  |  39 +++++++++++++
 test/storage/destroy.result   | 132 ++++++++++++++++++++++++++++++++++++++++++
 test/storage/destroy.test.lua |  51 ++++++++++++++++
 test/unit/config.result       |  14 +++++
 test/unit/config.test.lua     |   6 ++
 vshard/cfg.lua                |   7 +--
 vshard/router/init.lua        |  22 +++++++
 vshard/storage/init.lua       |  82 ++++++++++++++++++++------
 10 files changed, 473 insertions(+), 22 deletions(-)
 create mode 100644 test/router/destroy.result
 create mode 100644 test/router/destroy.test.lua
 create mode 100644 test/storage/destroy.result
 create mode 100644 test/storage/destroy.test.lua

-- 
2.14.1


* [tarantool-patches] [PATCH 1/3] Do not force login/pass in URI
  2018-06-22 21:43 [tarantool-patches] [PATCH 0/3][vshard] Small patches AKhatskevich
@ 2018-06-22 21:43 ` AKhatskevich
  2018-06-22 22:11   ` [tarantool-patches] " Vladislav Shpilevoy
  2018-06-22 21:43 ` [tarantool-patches] [PATCH 2/3] Allow use of existing user AKhatskevich
  2018-06-22 21:43 ` [tarantool-patches] [PATCH 3/3] Introduce destroy module feature AKhatskevich
  2 siblings, 1 reply; 13+ messages in thread
From: AKhatskevich @ 2018-06-22 21:43 UTC (permalink / raw)
  To: tarantool-patches, v.shpilevoy

All access rights will be checked explicitly by Tarantool.

Closes #47
---
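Not a part of the patch -- a sketch of what the new check accepts,
mirroring the test below:

    lcfg.check_uri('127.0.0.1')                -- no credentials: ok now
    lcfg.check_uri('user:password@localhost')  -- credentials: still ok
    lcfg.check_uri('invalid uri')              -- errors: 'Invalid URI: ...'
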
 test/unit/config.result   | 14 ++++++++++++++
 test/unit/config.test.lua |  6 ++++++
 vshard/cfg.lua            |  7 +++----
 3 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/test/unit/config.result b/test/unit/config.result
index 6b4f87b..847536c 100644
--- a/test/unit/config.result
+++ b/test/unit/config.result
@@ -506,3 +506,17 @@ _ = lcfg.check(cfg)
 replica.name = 'storage'
 ---
 ...
+-- gh-47: Check uri
+_, err = pcall(lcfg.check_uri, 'invalid uri')
+---
+...
+err:match('Invalid URI.*')
+---
+- 'Invalid URI: invalid uri'
+...
+lcfg.check_uri('127.0.0.1')
+---
+...
+lcfg.check_uri('user:password@localhost')
+---
+...
diff --git a/test/unit/config.test.lua b/test/unit/config.test.lua
index 0f549d3..af1dd7d 100644
--- a/test/unit/config.test.lua
+++ b/test/unit/config.test.lua
@@ -206,3 +206,9 @@ _ = lcfg.check(cfg)
 replica.name = nil
 _ = lcfg.check(cfg)
 replica.name = 'storage'
+
+-- gh-47: Check uri
+_, err = pcall(lcfg.check_uri, 'invalid uri')
+err:match('Invalid URI.*')
+lcfg.check_uri('127.0.0.1')
+lcfg.check_uri('user:password@localhost')
diff --git a/vshard/cfg.lua b/vshard/cfg.lua
index f5db4c0..5644d2c 100644
--- a/vshard/cfg.lua
+++ b/vshard/cfg.lua
@@ -5,10 +5,8 @@ local luri = require('uri')
 local consts = require('vshard.consts')
 
 local function check_uri(uri)
-    uri = luri.parse(uri)
-    if uri.login == nil or uri.password == nil then
-        error('URI must contain login and password')
-    end
+    local parsed = luri.parse(uri)
+    assert(parsed, 'Invalid URI: ' .. uri)
 end
 
 local function check_master(master, ctx)
@@ -251,5 +249,6 @@ end
 
 return {
     check = cfg_check,
+    check_uri = check_uri,
     remove_non_box_options = remove_non_box_options,
 }
-- 
2.14.1


* [tarantool-patches] [PATCH 2/3] Allow use of existing user
  2018-06-22 21:43 [tarantool-patches] [PATCH 0/3][vshard] Small patches AKhatskevich
  2018-06-22 21:43 ` [tarantool-patches] [PATCH 1/3] Do not force login/pass in URI AKhatskevich
@ 2018-06-22 21:43 ` AKhatskevich
  2018-06-22 22:12   ` [tarantool-patches] " Vladislav Shpilevoy
  2018-06-22 21:43 ` [tarantool-patches] [PATCH 3/3] Introduce destroy module feature AKhatskevich
  2 siblings, 1 reply; 13+ messages in thread
From: AKhatskevich @ 2018-06-22 21:43 UTC (permalink / raw)
  To: tarantool-patches, v.shpilevoy

This commit allows creating the vshard user outside of vshard.

This feature is tested in the next commit (destroy).

Closes #52
---
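Not a part of the patch -- a sketch of the flow this enables (assuming an
already bootstrapped instance and the same user name that is later passed
to vshard):

    -- The user may be created outside of vshard...
    box.schema.user.create('storage', {password = 'storage'})
    -- ...and vshard.storage.cfg() will reuse it, granting 'replication'
    -- only when it is missing.
    vshard.storage.cfg(cfg, name)
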
 vshard/storage/init.lua | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/vshard/storage/init.lua b/vshard/storage/init.lua
index 57076e1..059e705 100644
--- a/vshard/storage/init.lua
+++ b/vshard/storage/init.lua
@@ -138,11 +138,19 @@ end
 --------------------------------------------------------------------------------
 -- Schema
 --------------------------------------------------------------------------------
+local function create_user(username, password)
+    box.schema.user.create(username, {
+        password = password,
+        if_not_exists = true,
+    })
+    -- Replication may not have been granted if the user already exists.
+    box.schema.user.grant(username, 'replication', nil, nil,
+                          {if_not_exists = true})
+end
 
 local function storage_schema_v1(username, password)
     log.info("Initializing schema")
-    box.schema.user.create(username, {password = password})
-    box.schema.user.grant(username, 'replication')
+    create_user(username, password)
 
     local bucket = box.schema.space.create('_bucket')
     bucket:format({
-- 
2.14.1


* [tarantool-patches] [PATCH 3/3] Introduce destroy module feature
  2018-06-22 21:43 [tarantool-patches] [PATCH 0/3][vshard] Small patches AKhatskevich
  2018-06-22 21:43 ` [tarantool-patches] [PATCH 1/3] Do not force login/pass in URI AKhatskevich
  2018-06-22 21:43 ` [tarantool-patches] [PATCH 2/3] Allow use of existing user AKhatskevich
@ 2018-06-22 21:43 ` AKhatskevich
  2018-06-22 22:11   ` [tarantool-patches] " Vladislav Shpilevoy
  2 siblings, 1 reply; 13+ messages in thread
From: AKhatskevich @ 2018-06-22 21:43 UTC (permalink / raw)
  To: tarantool-patches, v.shpilevoy

Introduce functions:
 * vshard.router.destroy()
 * vshard.storage.destroy()

Those functions:
 * close connections
 * stop background fibers
 * delete vshard spaces
 * delete vshard functions
 * delete `once` metadata

Closes #121
---
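Not a part of the patch -- a router usage sketch, mirroring the added
tests:

    vshard.router.destroy()
    -- After destroy the module can be reloaded and configured anew.
    package.loaded['vshard.router'] = nil
    vshard.router = require('vshard.router')
    vshard.router.cfg(cfg)
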
 test/lua_libs/util.lua        |  34 +++++++++++
 test/router/destroy.result    | 108 ++++++++++++++++++++++++++++++++++
 test/router/destroy.test.lua  |  39 +++++++++++++
 test/storage/destroy.result   | 132 ++++++++++++++++++++++++++++++++++++++++++
 test/storage/destroy.test.lua |  51 ++++++++++++++++
 vshard/router/init.lua        |  22 +++++++
 vshard/storage/init.lua       |  70 +++++++++++++++++-----
 7 files changed, 440 insertions(+), 16 deletions(-)
 create mode 100644 test/router/destroy.result
 create mode 100644 test/router/destroy.test.lua
 create mode 100644 test/storage/destroy.result
 create mode 100644 test/storage/destroy.test.lua

diff --git a/test/lua_libs/util.lua b/test/lua_libs/util.lua
index f2d3b48..ce0ea67 100644
--- a/test/lua_libs/util.lua
+++ b/test/lua_libs/util.lua
@@ -69,9 +69,43 @@ local function wait_master(test_run, replicaset, master)
     log.info('Slaves are connected to a master "%s"', master)
 end
 
+function vshard_fiber_list()
+    -- Flush jit traces to prevent them from
+    -- keeping their upvalues in memory.
+    jit.flush()
+    collectgarbage()
+    -- Give fibers time to clean themselves up.
+    fiber.sleep(0.05)
+    local fibers = fiber.info()
+    local non_vshard_patterns = {
+        '^console',
+        'feedback_daemon$',
+        '^checkpoint_daemon$',
+        '^main$',
+        '^memtx%.gc$',
+        '^vinyl%.scheduler$',
+    }
+    local names = {}
+    for _, fib in pairs(fibers) do
+        local add = true
+        for _, pattern in pairs(non_vshard_patterns) do
+            if fib.name:match(pattern) then
+                add = false
+                break
+            end
+        end
+        if add then
+            table.insert(names, fib.name)
+        end
+    end
+    table.sort(names)
+    return names
+end;
+
 return {
     check_error = check_error,
     shuffle_masters = shuffle_masters,
     collect_timeouts = collect_timeouts,
     wait_master = wait_master,
+    vshard_fiber_list = vshard_fiber_list,
 }
diff --git a/test/router/destroy.result b/test/router/destroy.result
new file mode 100644
index 0000000..48ba661
--- /dev/null
+++ b/test/router/destroy.result
@@ -0,0 +1,108 @@
+test_run = require('test_run').new()
+---
+...
+netbox = require('net.box')
+---
+...
+fiber = require('fiber')
+---
+...
+REPLICASET_1 = { 'storage_1_a', 'storage_1_b' }
+---
+...
+REPLICASET_2 = { 'storage_2_a', 'storage_2_b' }
+---
+...
+test_run:create_cluster(REPLICASET_1, 'storage')
+---
+...
+test_run:create_cluster(REPLICASET_2, 'storage')
+---
+...
+util = require('util')
+---
+...
+util.wait_master(test_run, REPLICASET_1, 'storage_1_a')
+---
+...
+util.wait_master(test_run, REPLICASET_2, 'storage_2_a')
+---
+...
+test_run:cmd("create server router_1 with script='router/router_1.lua'")
+---
+- true
+...
+test_run:cmd("start server router_1")
+---
+- true
+...
+_ = test_run:cmd("switch router_1")
+---
+...
+util = require('util')
+---
+...
+test_run:cmd("setopt delimiter ';'")
+---
+- true
+...
+function wait_fibers_exit()
+    while #util.vshard_fiber_list() > 0 do
+        sleep(0.05)
+    end
+end;
+---
+...
+test_run:cmd("setopt delimiter ''");
+---
+- true
+...
+-- Validate the destroy function via fiber_list.
+-- Netbox fibers are deleted after their replica objects are collected by GC.
+util.vshard_fiber_list()
+---
+- - 127.0.0.1:3301 (net.box)
+  - 127.0.0.1:3302 (net.box)
+  - 127.0.0.1:3303 (net.box)
+  - 127.0.0.1:3304 (net.box)
+  - discovery_fiber
+  - vshard.failover
+...
+vshard.router.destroy()
+---
+...
+wait_fibers_exit()
+---
+...
+package.loaded['vshard.router'] = nil
+---
+...
+vshard.router = require('vshard.router')
+---
+...
+vshard.router.cfg(cfg)
+---
+...
+util.vshard_fiber_list()
+---
+- - 127.0.0.1:3301 (net.box)
+  - 127.0.0.1:3302 (net.box)
+  - 127.0.0.1:3303 (net.box)
+  - 127.0.0.1:3304 (net.box)
+  - discovery_fiber
+  - vshard.failover
+...
+_ = test_run:cmd("switch default")
+---
+...
+test_run:cmd('stop server router_1')
+---
+- true
+...
+test_run:cmd('cleanup server router_1')
+---
+- true
+...
+test_run:drop_cluster(REPLICASET_2)
+---
+...
diff --git a/test/router/destroy.test.lua b/test/router/destroy.test.lua
new file mode 100644
index 0000000..1f972bd
--- /dev/null
+++ b/test/router/destroy.test.lua
@@ -0,0 +1,39 @@
+test_run = require('test_run').new()
+netbox = require('net.box')
+fiber = require('fiber')
+
+REPLICASET_1 = { 'storage_1_a', 'storage_1_b' }
+REPLICASET_2 = { 'storage_2_a', 'storage_2_b' }
+
+test_run:create_cluster(REPLICASET_1, 'storage')
+test_run:create_cluster(REPLICASET_2, 'storage')
+util = require('util')
+util.wait_master(test_run, REPLICASET_1, 'storage_1_a')
+util.wait_master(test_run, REPLICASET_2, 'storage_2_a')
+test_run:cmd("create server router_1 with script='router/router_1.lua'")
+test_run:cmd("start server router_1")
+
+_ = test_run:cmd("switch router_1")
+util = require('util')
+test_run:cmd("setopt delimiter ';'")
+function wait_fibers_exit()
+    while #util.vshard_fiber_list() > 0 do
+        sleep(0.05)
+    end
+end;
+test_run:cmd("setopt delimiter ''");
+
+-- Validate the destroy function via fiber_list.
+-- Netbox fibers are deleted after their replica objects are collected by GC.
+util.vshard_fiber_list()
+vshard.router.destroy()
+wait_fibers_exit()
+package.loaded['vshard.router'] = nil
+vshard.router = require('vshard.router')
+vshard.router.cfg(cfg)
+util.vshard_fiber_list()
+
+_ = test_run:cmd("switch default")
+test_run:cmd('stop server router_1')
+test_run:cmd('cleanup server router_1')
+test_run:drop_cluster(REPLICASET_2)
diff --git a/test/storage/destroy.result b/test/storage/destroy.result
new file mode 100644
index 0000000..23ec6bd
--- /dev/null
+++ b/test/storage/destroy.result
@@ -0,0 +1,132 @@
+test_run = require('test_run').new()
+---
+...
+REPLICASET_1 = { 'storage_1_a', 'storage_1_b' }
+---
+...
+REPLICASET_2 = { 'storage_2_a', 'storage_2_b' }
+---
+...
+test_run:create_cluster(REPLICASET_1, 'storage')
+---
+...
+test_run:create_cluster(REPLICASET_2, 'storage')
+---
+...
+util = require('util')
+---
+...
+util.wait_master(test_run, REPLICASET_1, 'storage_1_a')
+---
+...
+util.wait_master(test_run, REPLICASET_2, 'storage_2_a')
+---
+...
+_ = test_run:cmd("switch storage_1_a")
+---
+...
+util = require('util')
+---
+...
+test_run:cmd("setopt delimiter ';'")
+---
+- true
+...
+function wait_fibers_exit()
+    while #util.vshard_fiber_list() > 0 do
+        sleep(0.05)
+    end
+end;
+---
+...
+test_run:cmd("setopt delimiter ''");
+---
+- true
+...
+-- Storage is configured.
+-- Establish net.box connection.
+_, rs = next(vshard.storage.internal.replicasets)
+---
+...
+rs:callro('echo', {'some data'})
+---
+- some data
+- null
+- null
+...
+rs = nil
+---
+...
+-- Validate the destroy function via fiber_list.
+-- Netbox fibers are deleted after their replica objects are collected by GC.
+util.vshard_fiber_list()
+---
+- - 127.0.0.1:3303 (net.box)
+  - vshard.gc
+  - vshard.recovery
+...
+box.schema.user.exists('storage') == true
+---
+- true
+...
+box.space._bucket ~= nil
+---
+- true
+...
+-- Destroy storage.
+vshard.storage.destroy()
+---
+...
+wait_fibers_exit()
+---
+...
+box.space._bucket == nil
+---
+- true
+...
+-- Reconfigure storage.
+-- gh-52: Allow use of existing user.
+box.schema.user.exists('storage') == true
+---
+- true
+...
+package.loaded['vshard.storage'] = nil
+---
+...
+vshard.storage = require('vshard.storage')
+---
+...
+vshard.storage.cfg(cfg, names['storage_1_a'])
+---
+...
+_, rs = next(vshard.storage.internal.replicasets)
+---
+...
+rs:callro('echo', {'some data'})
+---
+- some data
+- null
+- null
+...
+rs = nil
+---
+...
+box.space._bucket ~= nil
+---
+- true
+...
+util.vshard_fiber_list()
+---
+- - 127.0.0.1:3303 (net.box)
+  - vshard.gc
+  - vshard.recovery
+...
+_ = test_run:cmd("switch default")
+---
+...
+test_run:drop_cluster(REPLICASET_2)
+---
+...
+test_run:drop_cluster(REPLICASET_1)
+---
+...
diff --git a/test/storage/destroy.test.lua b/test/storage/destroy.test.lua
new file mode 100644
index 0000000..156b76e
--- /dev/null
+++ b/test/storage/destroy.test.lua
@@ -0,0 +1,51 @@
+test_run = require('test_run').new()
+
+REPLICASET_1 = { 'storage_1_a', 'storage_1_b' }
+REPLICASET_2 = { 'storage_2_a', 'storage_2_b' }
+
+test_run:create_cluster(REPLICASET_1, 'storage')
+test_run:create_cluster(REPLICASET_2, 'storage')
+util = require('util')
+util.wait_master(test_run, REPLICASET_1, 'storage_1_a')
+util.wait_master(test_run, REPLICASET_2, 'storage_2_a')
+
+_ = test_run:cmd("switch storage_1_a")
+util = require('util')
+test_run:cmd("setopt delimiter ';'")
+function wait_fibers_exit()
+    while #util.vshard_fiber_list() > 0 do
+        sleep(0.05)
+    end
+end;
+test_run:cmd("setopt delimiter ''");
+
+-- Storage is configured.
+-- Establish net.box connection.
+_, rs = next(vshard.storage.internal.replicasets)
+rs:callro('echo', {'some data'})
+rs = nil
+-- Validate the destroy function via fiber_list.
+-- Netbox fibers are deleted after their replica objects are collected by GC.
+util.vshard_fiber_list()
+box.schema.user.exists('storage') == true
+box.space._bucket ~= nil
+-- Destroy storage.
+vshard.storage.destroy()
+wait_fibers_exit()
+box.space._bucket == nil
+
+-- Reconfigure storage.
+-- gh-52: Allow use of existing user.
+box.schema.user.exists('storage') == true
+package.loaded['vshard.storage'] = nil
+vshard.storage = require('vshard.storage')
+vshard.storage.cfg(cfg, names['storage_1_a'])
+_, rs = next(vshard.storage.internal.replicasets)
+rs:callro('echo', {'some data'})
+rs = nil
+box.space._bucket ~= nil
+util.vshard_fiber_list()
+
+_ = test_run:cmd("switch default")
+test_run:drop_cluster(REPLICASET_2)
+test_run:drop_cluster(REPLICASET_1)
diff --git a/vshard/router/init.lua b/vshard/router/init.lua
index 21093e5..da8e49e 100644
--- a/vshard/router/init.lua
+++ b/vshard/router/init.lua
@@ -33,6 +33,27 @@ if not M then
     }
 end
 
+--
+-- Destroy router module.
+--
+local function destroy()
+    local MODULE_INTERNALS = '__module_vshard_router'
+    assert(rawget(_G, MODULE_INTERNALS), 'Already destroyed')
+    local bg_fibers = {
+        'failover_fiber',
+        'discovery_fiber',
+    }
+    for _, fib_name in pairs(bg_fibers) do
+        if M[fib_name] then
+            M[fib_name]:cancel()
+            M[fib_name] = nil
+        end
+    end
+    vshard.router.internal = nil
+    rawset(_G, MODULE_INTERNALS, nil)
+    M = nil
+end
+
 -- Set a replicaset by container of a bucket.
 local function bucket_set(bucket_id, replicaset)
     assert(replicaset)
@@ -792,6 +813,7 @@ return {
     sync = router_sync;
     bootstrap = cluster_bootstrap;
     bucket_discovery = bucket_discovery;
+    destroy = destroy,
     discovery_wakeup = discovery_wakeup;
     internal = M;
     module_version = function() return M.module_version end;
diff --git a/vshard/storage/init.lua b/vshard/storage/init.lua
index 059e705..ac37163 100644
--- a/vshard/storage/init.lua
+++ b/vshard/storage/init.lua
@@ -148,6 +148,22 @@ local function create_user(username, password)
                           {if_not_exists = true})
 end
 
+local storage_api = {
+    'vshard.storage.sync',
+    'vshard.storage.call',
+    'vshard.storage.bucket_force_create',
+    'vshard.storage.bucket_force_drop',
+    'vshard.storage.bucket_collect',
+    'vshard.storage.bucket_send',
+    'vshard.storage.bucket_recv',
+    'vshard.storage.bucket_stat',
+    'vshard.storage.buckets_count',
+    'vshard.storage.buckets_info',
+    'vshard.storage.buckets_discovery',
+    'vshard.storage.rebalancer_request_state',
+    'vshard.storage.rebalancer_apply_routes',
+}
+
 local function storage_schema_v1(username, password)
     log.info("Initializing schema")
     create_user(username, password)
@@ -161,22 +177,6 @@ local function storage_schema_v1(username, password)
     bucket:create_index('pk', {parts = {'id'}})
     bucket:create_index('status', {parts = {'status'}, unique = false})
 
-    local storage_api = {
-        'vshard.storage.sync',
-        'vshard.storage.call',
-        'vshard.storage.bucket_force_create',
-        'vshard.storage.bucket_force_drop',
-        'vshard.storage.bucket_collect',
-        'vshard.storage.bucket_send',
-        'vshard.storage.bucket_recv',
-        'vshard.storage.bucket_stat',
-        'vshard.storage.buckets_count',
-        'vshard.storage.buckets_info',
-        'vshard.storage.buckets_discovery',
-        'vshard.storage.rebalancer_request_state',
-        'vshard.storage.rebalancer_apply_routes',
-    }
-
     for _, name in ipairs(storage_api) do
         box.schema.func.create(name, {setuid = true})
         box.schema.user.grant(username, 'execute', 'function', name)
@@ -207,6 +207,43 @@ local function on_master_enable(...)
     end
 end
 
+--------------------------------------------------------------------------------
+-- Destroy
+--------------------------------------------------------------------------------
+
+--
+-- Destroy storage module.
+--
+local function destroy()
+    local MODULE_INTERNALS = '__module_vshard_storage'
+    assert(rawget(_G, MODULE_INTERNALS), 'Already destroyed')
+    box.space._bucket:drop()
+    for _, name in ipairs(storage_api) do
+        box.schema.func.drop(name)
+    end
+    local bg_fibers = {
+        'recovery_fiber',
+        'collect_bucket_garbage_fiber',
+        'rebalancer_applier_fiber',
+        'rebalancer_fiber',
+    }
+    for _, fib_name in pairs(bg_fibers) do
+        if M[fib_name] then
+            M[fib_name]:cancel()
+            M[fib_name] = nil
+        end
+    end
+    local box_cfg = table.deepcopy(box.cfg)
+    box_cfg.replication = nil
+    box_cfg.read_only = nil
+    box.cfg(box_cfg)
+    vshard.storage.internal = nil
+    box.space._schema:delete{'oncevshard:storage:1'}
+    rawset(_G, MODULE_INTERNALS, nil)
+    M = nil
+    box.snapshot()
+end
+
 --------------------------------------------------------------------------------
 -- Recovery
 --------------------------------------------------------------------------------
@@ -1844,6 +1881,7 @@ return {
     bucket_pin = bucket_pin,
     bucket_unpin = bucket_unpin,
     bucket_delete_garbage = bucket_delete_garbage,
+    destroy = destroy,
     garbage_collector_wakeup = garbage_collector_wakeup,
     rebalancer_wakeup = rebalancer_wakeup,
     rebalancer_apply_routes = rebalancer_apply_routes,
-- 
2.14.1


* [tarantool-patches] Re: [PATCH 1/3] Do not force login/pass in URI
  2018-06-22 21:43 ` [tarantool-patches] [PATCH 1/3] Do not force login/pass in URI AKhatskevich
@ 2018-06-22 22:11   ` Vladislav Shpilevoy
  2018-06-25 21:56     ` Alex Khatskevich
  0 siblings, 1 reply; 13+ messages in thread
From: Vladislav Shpilevoy @ 2018-06-22 22:11 UTC (permalink / raw)
  To: tarantool-patches, AKhatskevich

Hello. Thanks for the patch! See 3 comments below.

On 23/06/2018 00:43, AKhatskevich wrote:
> All access rights will be checked explicitly by Tarantool.
> 
> Closes #47
> ---
>   test/unit/config.result   | 14 ++++++++++++++
>   test/unit/config.test.lua |  6 ++++++
>   vshard/cfg.lua            |  7 +++----
>   3 files changed, 23 insertions(+), 4 deletions(-)
> 
> diff --git a/test/unit/config.result b/test/unit/config.result
> index 6b4f87b..847536c 100644
> --- a/test/unit/config.result
> +++ b/test/unit/config.result
> @@ -506,3 +506,17 @@ _ = lcfg.check(cfg)
>   replica.name = 'storage'
>   ---
>   ...
> +-- gh-47: Check uri
> +_, err = pcall(lcfg.check_uri, 'invalid uri')
> +---
> +...
> +err:match('Invalid URI.*')

1. Please, use util.check_error instead of pcall + match as
I told you earlier.

> diff --git a/vshard/cfg.lua b/vshard/cfg.lua
> index f5db4c0..5644d2c 100644
> --- a/vshard/cfg.lua
> +++ b/vshard/cfg.lua
> @@ -5,10 +5,8 @@ local luri = require('uri')
>   local consts = require('vshard.consts')
>   
>   local function check_uri(uri)
> -    uri = luri.parse(uri)
> -    if uri.login == nil or uri.password == nil then
> -        error('URI must contain login and password')
> -    end
> +    local parsed = luri.parse(uri)
> +    assert(parsed, 'Invalid URI: ' .. uri)

2. Please, do not use assertions for regular errors. Use 'error()'.
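
Something like this (just a sketch):

    local function check_uri(uri)
        if luri.parse(uri) == nil then
            error('Invalid URI: ' .. uri)
        end
    end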

>   end
>   
>   local function check_master(master, ctx)
> @@ -251,5 +249,6 @@ end
>   
>   return {
>       check = cfg_check,
> +    check_uri = check_uri,

3. Do not expose this one-line function for testing only. Just use
cfg.check for this, corrupting a URI of one replica in the cfg.

And restore it back after the test.
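
E.g. (a sketch, reusing `replica` and util.check_error from the existing
tests):

    old_uri = replica.uri
    replica.uri = 'invalid uri'
    util.check_error(lcfg.check, cfg)
    replica.uri = old_uri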

>       remove_non_box_options = remove_non_box_options,
>   }
> 


* [tarantool-patches] Re: [PATCH 3/3] Introduce destroy module feature
  2018-06-22 21:43 ` [tarantool-patches] [PATCH 3/3] Introduce destroy module feature AKhatskevich
@ 2018-06-22 22:11   ` Vladislav Shpilevoy
  2018-06-25 21:54     ` Alex Khatskevich
  0 siblings, 1 reply; 13+ messages in thread
From: Vladislav Shpilevoy @ 2018-06-22 22:11 UTC (permalink / raw)
  To: AKhatskevich, tarantool-patches

Thanks for the patch! See 9 comments below.

On 23/06/2018 00:43, AKhatskevich wrote:
> Introduce functions:
>   * vshard.router.destroy()
>   * vshard.storage.destroy()
> 
> Those functions:
>   * close connections
>   * stop background fibers
>   * delete vshard spaces
>   * delete vshard functions
>   * delete `once` metadata
> 
> Closes #121
> ---
>   test/lua_libs/util.lua        |  34 +++++++++++
>   test/router/destroy.result    | 108 ++++++++++++++++++++++++++++++++++
>   test/router/destroy.test.lua  |  39 +++++++++++++
>   test/storage/destroy.result   | 132 ++++++++++++++++++++++++++++++++++++++++++
>   test/storage/destroy.test.lua |  51 ++++++++++++++++
>   vshard/router/init.lua        |  22 +++++++
>   vshard/storage/init.lua       |  70 +++++++++++++++++-----
>   7 files changed, 440 insertions(+), 16 deletions(-)
>   create mode 100644 test/router/destroy.result
>   create mode 100644 test/router/destroy.test.lua
>   create mode 100644 test/storage/destroy.result
>   create mode 100644 test/storage/destroy.test.lua
> 
> diff --git a/test/lua_libs/util.lua b/test/lua_libs/util.lua
> index f2d3b48..ce0ea67 100644
> --- a/test/lua_libs/util.lua
> +++ b/test/lua_libs/util.lua
> @@ -69,9 +69,43 @@ local function wait_master(test_run, replicaset, master)
>       log.info('Slaves are connected to a master "%s"', master)
>   end
>   
> +function vshard_fiber_list()
> +    -- Flush jit traces to prevent them from
> +    -- keeping their upvalues in memory.
> +    jit.flush()
> +    collectgarbage()
> +    -- Give fibers time to clean themselves up.
> +    fiber.sleep(0.05)

1. Why do you need this sleep? As far as I know
collectgarbage just cleans all the garbage in the same
fiber with no yields or async tasks.

> +    local fibers = fiber.info()
> +    local non_vshard_patterns = {
> +        '^console',
> +        'feedback_daemon$',
> +        '^checkpoint_daemon$',
> +        '^main$',
> +        '^memtx%.gc$',
> +        '^vinyl%.scheduler$',
> +    }
> +    local names = {}
> +    for _, fib in pairs(fibers) do
> +        local add = true
> +        for _, pattern in pairs(non_vshard_patterns) do
> +            if fib.name:match(pattern) then
> +                add = false
> +                break
> +            end
> +        end
> +        if add then
> +            table.insert(names, fib.name)
> +        end
> +    end
> +    table.sort(names)

2. Sorting the array just does nothing here, does it not?

> +    return names
> +end;
> +
>   return {
>       check_error = check_error,
>       shuffle_masters = shuffle_masters,
>       collect_timeouts = collect_timeouts,
>       wait_master = wait_master,
> +    vshard_fiber_list = vshard_fiber_list,
>   }
> diff --git a/test/router/destroy.result b/test/router/destroy.result
> new file mode 100644
> index 0000000..48ba661
> --- /dev/null
> +++ b/test/router/destroy.result
> @@ -0,0 +1,108 @@
> +test_run = require('test_run').new()
> +---
> +...
> +netbox = require('net.box')
> +---
> +...
> +fiber = require('fiber')
> +---
> +...
> +REPLICASET_1 = { 'storage_1_a', 'storage_1_b' }
> +---
> +...
> +REPLICASET_2 = { 'storage_2_a', 'storage_2_b' }
> +---
> +...
> +test_run:create_cluster(REPLICASET_1, 'storage')
> +---
> +...
> +test_run:create_cluster(REPLICASET_2, 'storage')
> +---
> +...
> +util = require('util')
> +---
> +...
> +util.wait_master(test_run, REPLICASET_1, 'storage_1_a')
> +---
> +...
> +util.wait_master(test_run, REPLICASET_2, 'storage_2_a')
> +---
> +...
> +test_run:cmd("create server router_1 with script='router/router_1.lua'")
> +---
> +- true
> +...
> +test_run:cmd("start server router_1")
> +---
> +- true
> +...
> +_ = test_run:cmd("switch router_1")
> +---
> +...
> +util = require('util')
> +---
> +...
> +test_run:cmd("setopt delimiter ';'")
> +---
> +- true
> +...
> +function wait_fibers_exit()
> +    while #util.vshard_fiber_list() > 0 do
> +        sleep(0.05)

3. Maybe fiber.sleep? Process sleep looks weird here.

> +    end
> +end;
> +---
> diff --git a/test/storage/destroy.result b/test/storage/destroy.result
> new file mode 100644
> index 0000000..23ec6bd
> --- /dev/null
> +++ b/test/storage/destroy.result
> @@ -0,0 +1,132 @@
> +test_run = require('test_run').new()
> +---
> +...
> +REPLICASET_1 = { 'storage_1_a', 'storage_1_b' }
> +---
> +...
> +REPLICASET_2 = { 'storage_2_a', 'storage_2_b' }
> +---
> +...
> +test_run:create_cluster(REPLICASET_1, 'storage')
> +---
> +...
> +test_run:create_cluster(REPLICASET_2, 'storage')
> +---
> +...
> +util = require('util')
> +---
> +...
> +util.wait_master(test_run, REPLICASET_1, 'storage_1_a')
> +---
> +...
> +util.wait_master(test_run, REPLICASET_2, 'storage_2_a')
> +---
> +...
> +_ = test_run:cmd("switch storage_1_a")
> +---
> +...
> +util = require('util')
> +---
> +...
> +test_run:cmd("setopt delimiter ';'")
> +---
> +- true
> +...
> +function wait_fibers_exit()
> +    while #util.vshard_fiber_list() > 0 do
> +        sleep(0.05)

4. Same.

> +    end
> +end;
> +---
> diff --git a/vshard/router/init.lua b/vshard/router/init.lua
> index 21093e5..da8e49e 100644
> --- a/vshard/router/init.lua
> +++ b/vshard/router/init.lua
> @@ -33,6 +33,27 @@ if not M then
>       }
>   end
>   
> +--
> +-- Destroy router module.
> +--
> +local function destroy()
> +    local MODULE_INTERNALS = '__module_vshard_router'
> +    assert(rawget(_G, MODULE_INTERNALS), 'Already destroyed')
> +    local bg_fibers = {
> +        'failover_fiber',
> +        'discovery_fiber',
> +    }

5. Please, just inline this cycle in 2 lines. Anyway, for a new
fiber it is necessary to add a new name to bg_fibers.

> +    for _, fib_name in pairs(bg_fibers) do
> +        if M[fib_name] then
> +            M[fib_name]:cancel()
> +            M[fib_name] = nil
> +        end
> +    end
> +    vshard.router.internal = nil
> +    rawset(_G, MODULE_INTERNALS, nil)
> +    M = nil

6. I do not like that recfg now looks like

destroy
package.loaded = nil
cfg

It should not be necessary to nullify the package. Today in
the public chat a customer asked about

box.cfg{}
box.stop()
box.cfg{}
box.stop()

So there is no place for package.loaded. Most people
do not even know that this thing exists.

> +end
> diff --git a/vshard/storage/init.lua b/vshard/storage/init.lua
> index 059e705..ac37163 100644
> --- a/vshard/storage/init.lua
> +++ b/vshard/storage/init.lua
> @@ -207,6 +207,43 @@ local function on_master_enable(...)
>       end
>   end
>   
> +--------------------------------------------------------------------------------
> +-- Destroy
> +--------------------------------------------------------------------------------
> +
> +--
> +-- Destroy storage module.
> +--
> +local function destroy()
> +    local MODULE_INTERNALS = '__module_vshard_storage'
> +    assert(rawget(_G, MODULE_INTERNALS), 'Already destroyed')
> +    box.space._bucket:drop()

7. When this instance is a master for another instance (which will be so
when we return to master-master topology), this drop will be replicated
to active instances. Please, forbid destroy when the instance has active
relays.
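
Something like this before the drop (just a sketch; I assume here that
box.info.replication exposes a 'downstream' entry for each active relay):

    for _, r in pairs(box.info.replication) do
        if r.downstream ~= nil then
            error('Can not destroy storage: it has active relays')
        end
    end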

> +    for _, name in ipairs(storage_api) do
> +        box.schema.func.drop(name)
> +    end
> +    local bg_fibers = {
> +        'recovery_fiber',
> +        'collect_bucket_garbage_fiber',
> +        'rebalancer_applier_fiber',
> +        'rebalancer_fiber',
> +    }
> +    for _, fib_name in pairs(bg_fibers) do
> +        if M[fib_name] then
> +            M[fib_name]:cancel()
> +            M[fib_name] = nil
> +        end
> +    end
> +    local box_cfg = table.deepcopy(box.cfg)
> +    box_cfg.replication = nil
> +    box_cfg.read_only = nil

8. Assigning nil to a table key is like removing the key. So here you
actually just re-call an empty box.cfg{}. To turn off the replication you
should pass empty replication array explicitly. To turn off the
read_only you should pass read_only = false explicitly.
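
I.e. (a sketch):

    box.cfg{replication = {}, read_only = false}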

> +    box.cfg(box_cfg)
> +    vshard.storage.internal = nil
> +    box.space._schema:delete{'oncevshard:storage:1'}
> +    rawset(_G, MODULE_INTERNALS, nil)
> +    M = nil
> +    box.snapshot()

9. Why?

> +end
> +


* [tarantool-patches] Re: [PATCH 2/3] Allow use of existing user
  2018-06-22 21:43 ` [tarantool-patches] [PATCH 2/3] Allow use of existing user AKhatskevich
@ 2018-06-22 22:12   ` Vladislav Shpilevoy
  2018-06-25 21:55     ` Alex Khatskevich
  0 siblings, 1 reply; 13+ messages in thread
From: Vladislav Shpilevoy @ 2018-06-22 22:12 UTC (permalink / raw)
  To: tarantool-patches, AKhatskevich

Thanks for the patch! See 1 comment below.

On 23/06/2018 00:43, AKhatskevich wrote:
> This commit allows creating the vshard user outside of vshard.
> 
> This feature is tested in the next commit (destroy).
> 
> Closes #52
> ---
>   vshard/storage/init.lua | 12 ++++++++++--
>   1 file changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/vshard/storage/init.lua b/vshard/storage/init.lua
> index 57076e1..059e705 100644
> --- a/vshard/storage/init.lua
> +++ b/vshard/storage/init.lua
> @@ -138,11 +138,19 @@ end
>   --------------------------------------------------------------------------------
>   -- Schema
>   --------------------------------------------------------------------------------
> +local function create_user(username, password)
> +    box.schema.user.create(username, {
> +        password = password,
> +        if_not_exists = true,
> +    })
> +    -- Replication may not have been granted if the user already exists.
> +    box.schema.user.grant(username, 'replication', nil, nil,
> +                          {if_not_exists = true})
> +end
>   
>   local function storage_schema_v1(username, password)
>       log.info("Initializing schema")
> -    box.schema.user.create(username, {password = password})
> -    box.schema.user.grant(username, 'replication')
> +    create_user(username, password)

1. Please, just inline this function. It is not needed anywhere else
and is extremely simple.

>   
>       local bucket = box.schema.space.create('_bucket')
>       bucket:format({
> 


* [tarantool-patches] Re: [PATCH 3/3] Introduce destroy module feature
  2018-06-22 22:11   ` [tarantool-patches] " Vladislav Shpilevoy
@ 2018-06-25 21:54     ` Alex Khatskevich
  2018-06-26 11:11       ` Vladislav Shpilevoy
  0 siblings, 1 reply; 13+ messages in thread
From: Alex Khatskevich @ 2018-06-25 21:54 UTC (permalink / raw)
  To: Vladislav Shpilevoy, tarantool-patches


>> --- a/test/lua_libs/util.lua
>> +++ b/test/lua_libs/util.lua
>> @@ -69,9 +69,43 @@ local function wait_master(test_run, replicaset, 
>> master)
>>       log.info('Slaves are connected to a master "%s"', master)
>>   end
>>   +function vshard_fiber_list()
>> +    -- Flush jit traces to prevent them from
>> +    -- keeping their upvalues in memory.
>> +    jit.flush()
>> +    collectgarbage()
>> +    -- Give fibers time to clean themselves up.
>> +    fiber.sleep(0.05)
>
> 1. Why do you need this sleep? As far as I know
> collectgarbage just cleans all the garbage in the same
> fiber with no yields or async tasks.
netbox: if collectgarbage() has collected a connection, its fiber needs
time to wake up after fiber:cancel(), process the cancellation, and
exit. Other modules may behave the same way.
>>
>> +        local add = true
>> +        for _, pattern in pairs(non_vshard_patterns) do
>> +            if fib.name:match(pattern) then
>> +                add = false
>> +                break
>> +            end
>> +        end
>> +        if add then
>> +            table.insert(names, fib.name)
>> +        end
>> +    end
>> +    table.sort(names)
>
> 2. Sorting the array just does nothing here, does it not?
Fibers are returned in an unpredictable order.
If one uses this function just to print the fiber list immediately,
he/she expects the return value to be consistent over time.
>
>> diff --git a/test/router/destroy.result b/test/router/destroy.result
>> +        sleep(0.05)
>
> 3. Maybe fiber.sleep? Process sleep looks weird here.
>> +---
>> diff --git a/test/storage/destroy.result b/test/storage/destroy.result
>> +        sleep(0.05)
> 4. Same.

Fixed, thanks
>> +    end
>> +end;
>> +---
>> diff --git a/vshard/router/init.lua b/vshard/router/init.lua
>> index 21093e5..da8e49e 100644
>> --- a/vshard/router/init.lua
>> +++ b/vshard/router/init.lua
>> @@ -33,6 +33,27 @@ if not M then
>>       }
>>   end
>>   +--
>> +-- Destroy router module.
>> +--
>> +local function destroy()
>> +    local MODULE_INTERNALS = '__module_vshard_router'
>> +    assert(rawget(_G, MODULE_INTERNALS), 'Already destroyed')
>> +    local bg_fibers = {
>> +        'failover_fiber',
>> +        'discovery_fiber',
>> +    }
>
> 5. Please, just inline this cycle in 2 lines. Anyway, for a new
> fiber it is necessary to add a new name to bg_fibers.
done
>
>> +    for _, fib_name in pairs(bg_fibers) do
>> +        if M[fib_name] then
>> +            M[fib_name]:cancel()
>> +            M[fib_name] = nil
>> +        end
>> +    end
>> +    vshard.router.internal = nil
>> +    rawset(_G, MODULE_INTERNALS, nil)
>> +    M = nil
>
> 6. I do not like that recfg now looks like
>
> destroy
> package.loaded = nil
> cfg
>
> It should not be necessary to nullify the package. Today in
> the public chat a customer asked about
>
> box.cfg{}
> box.stop()
> box.cfg{}
> box.stop()
>
> So there is no place for package.loaded. Most people
> do not even know that this thing exists.
Reimplemented. Waiting for feedback.
>
>> +end
>> diff --git a/vshard/storage/init.lua b/vshard/storage/init.lua
>> index 059e705..ac37163 100644
>> --- a/vshard/storage/init.lua
>> +++ b/vshard/storage/init.lua
>> @@ -207,6 +207,43 @@ local function on_master_enable(...)
>>       end
>>   end
>> +--------------------------------------------------------------------------------
>> +-- Destroy
>> +-------------------------------------------------------------------------------- 
>>
>> +
>> +--
>> +-- Destroy storage module.
>> +--
>> +local function destroy()
>> +    local MODULE_INTERNALS = '__module_vshard_storage'
>> +    assert(rawget(_G, MODULE_INTERNALS), 'Already destroyed')
>> +    box.space._bucket:drop()
> 7. When this instance is a master for another instance (which will be so
> when we return to master-master topology), this drop will be replicated
> to active instances. Please, forbid destroy when the instance has active
> relays.
This feature is under development...
>> +    for _, name in ipairs(storage_api) do
>> +        box.schema.func.drop(name)
>> +    end
>> +    local bg_fibers = {
>> +        'recovery_fiber',
>> +        'collect_bucket_garbage_fiber',
>> +        'rebalancer_applier_fiber',
>> +        'rebalancer_fiber',
>> +    }
>> +    for _, fib_name in pairs(bg_fibers) do
>> +        if M[fib_name] then
>> +            M[fib_name]:cancel()
>> +            M[fib_name] = nil
>> +        end
>> +    end
>> +    local box_cfg = table.deepcopy(box.cfg)
>> +    box_cfg.replication = nil
>> +    box_cfg.read_only = nil
>
> 8. Assigning nil to a table key is like removing the key. So here you
> actually just re-call an empty box.cfg{}. To turn off the replication you
> should pass empty replication array explicitly. To turn off the
> read_only you should pass read_only = false explicitly.
Thanks
>> +    box.snapshot()
>
> 9. Why?
deleted

Full diff


commit b0cddc2a3ba30d3ecf57e71ce8d095bf2f093a5e
Author: AKhatskevich <avkhatskevich@tarantool.org>
Date:   Wed Jun 20 00:31:25 2018 +0300

     Introduce destroy module feature

     Introduce functions:
      * vshard.router.destroy()
      * vshard.storage.destroy()

     Those functions:
      * close connections
      * stop background fibers
      * delete vshard spaces
      * delete vshard functions
      * delete `once` metadata

     After the destroy, the module can be configured as if it were just loaded.

     Extra changes:
      * introduce fiber_list function which returns names of non-tarantool
        fibers
      * introduce update_M function, which updates M (module internals) with
        values defined in the module

     Closes #121

diff --git a/test/lua_libs/util.lua b/test/lua_libs/util.lua
index f2d3b48..ce0ea67 100644
--- a/test/lua_libs/util.lua
+++ b/test/lua_libs/util.lua
@@ -69,9 +69,43 @@ local function wait_master(test_run, replicaset, master)
      log.info('Slaves are connected to a master "%s"', master)
  end

+function vshard_fiber_list()
+    -- Flush jit traces to prevent them from
+    -- keeping their upvalues in memory.
+    jit.flush()
+    collectgarbage()
+    -- Give fibers time to clean themselves up.
+    fiber.sleep(0.05)
+    local fibers = fiber.info()
+    local non_vshard_patterns = {
+        '^console',
+        'feedback_daemon$',
+        '^checkpoint_daemon$',
+        '^main$',
+        '^memtx%.gc$',
+        '^vinyl%.scheduler$',
+    }
+    local names = {}
+    for _, fib in pairs(fibers) do
+        local add = true
+        for _, pattern in pairs(non_vshard_patterns) do
+            if fib.name:match(pattern) then
+                add = false
+                break
+            end
+        end
+        if add then
+            table.insert(names, fib.name)
+        end
+    end
+    table.sort(names)
+    return names
+end;
+
  return {
      check_error = check_error,
      shuffle_masters = shuffle_masters,
      collect_timeouts = collect_timeouts,
      wait_master = wait_master,
+    vshard_fiber_list = vshard_fiber_list,
  }
diff --git a/test/router/destroy.result b/test/router/destroy.result
new file mode 100644
index 0000000..9284ef6
--- /dev/null
+++ b/test/router/destroy.result
@@ -0,0 +1,105 @@
+test_run = require('test_run').new()
+---
+...
+netbox = require('net.box')
+---
+...
+fiber = require('fiber')
+---
+...
+REPLICASET_1 = { 'storage_1_a', 'storage_1_b' }
+---
+...
+REPLICASET_2 = { 'storage_2_a', 'storage_2_b' }
+---
+...
+test_run:create_cluster(REPLICASET_1, 'storage')
+---
+...
+test_run:create_cluster(REPLICASET_2, 'storage')
+---
+...
+util = require('util')
+---
+...
+util.wait_master(test_run, REPLICASET_1, 'storage_1_a')
+---
+...
+util.wait_master(test_run, REPLICASET_2, 'storage_2_a')
+---
+...
+test_run:cmd("create server router_1 with script='router/router_1.lua'")
+---
+- true
+...
+test_run:cmd("start server router_1")
+---
+- true
+...
+_ = test_run:cmd("switch router_1")
+---
+...
+util = require('util')
+---
+...
+fiber = require('fiber')
+---
+...
+test_run:cmd("setopt delimiter ';'")
+---
+- true
+...
+function wait_fibers_exit()
+    while #util.vshard_fiber_list() > 0 do
+        fiber.sleep(0.05)
+    end
+end;
+---
+...
+test_run:cmd("setopt delimiter ''");
+---
+- true
+...
+-- Validate the destroy function via fiber_list.
+-- Netbox fibers are deleted after their replica objects are collected by GC.
+util.vshard_fiber_list()
+---
+- - 127.0.0.1:3301 (net.box)
+  - 127.0.0.1:3302 (net.box)
+  - 127.0.0.1:3303 (net.box)
+  - 127.0.0.1:3304 (net.box)
+  - discovery_fiber
+  - vshard.failover
+...
+vshard.router.destroy()
+---
+...
+wait_fibers_exit()
+---
+...
+vshard.router.cfg(cfg)
+---
+...
+util.vshard_fiber_list()
+---
+- - 127.0.0.1:3301 (net.box)
+  - 127.0.0.1:3302 (net.box)
+  - 127.0.0.1:3303 (net.box)
+  - 127.0.0.1:3304 (net.box)
+  - discovery_fiber
+  - vshard.failover
+...
+_ = test_run:cmd("switch default")
+---
+...
+test_run:cmd('stop server router_1')
+---
+- true
+...
+test_run:cmd('cleanup server router_1')
+---
+- true
+...
+test_run:drop_cluster(REPLICASET_2)
+---
+...
diff --git a/test/router/destroy.test.lua b/test/router/destroy.test.lua
new file mode 100644
index 0000000..caf4d8e
--- /dev/null
+++ b/test/router/destroy.test.lua
@@ -0,0 +1,38 @@
+test_run = require('test_run').new()
+netbox = require('net.box')
+fiber = require('fiber')
+
+REPLICASET_1 = { 'storage_1_a', 'storage_1_b' }
+REPLICASET_2 = { 'storage_2_a', 'storage_2_b' }
+
+test_run:create_cluster(REPLICASET_1, 'storage')
+test_run:create_cluster(REPLICASET_2, 'storage')
+util = require('util')
+util.wait_master(test_run, REPLICASET_1, 'storage_1_a')
+util.wait_master(test_run, REPLICASET_2, 'storage_2_a')
+test_run:cmd("create server router_1 with script='router/router_1.lua'")
+test_run:cmd("start server router_1")
+
+_ = test_run:cmd("switch router_1")
+util = require('util')
+fiber = require('fiber')
+test_run:cmd("setopt delimiter ';'")
+function wait_fibers_exit()
+    while #util.vshard_fiber_list() > 0 do
+        fiber.sleep(0.05)
+    end
+end;
+test_run:cmd("setopt delimiter ''");
+
+-- Validate the destroy function via fiber_list.
+-- Netbox fibers are deleted after their replica objects are collected by GC.
+util.vshard_fiber_list()
+vshard.router.destroy()
+wait_fibers_exit()
+vshard.router.cfg(cfg)
+util.vshard_fiber_list()
+
+_ = test_run:cmd("switch default")
+test_run:cmd('stop server router_1')
+test_run:cmd('cleanup server router_1')
+test_run:drop_cluster(REPLICASET_2)
diff --git a/test/storage/destroy.result b/test/storage/destroy.result
new file mode 100644
index 0000000..180fc0b
--- /dev/null
+++ b/test/storage/destroy.result
@@ -0,0 +1,129 @@
+test_run = require('test_run').new()
+---
+...
+REPLICASET_1 = { 'storage_1_a', 'storage_1_b' }
+---
+...
+REPLICASET_2 = { 'storage_2_a', 'storage_2_b' }
+---
+...
+test_run:create_cluster(REPLICASET_1, 'storage')
+---
+...
+test_run:create_cluster(REPLICASET_2, 'storage')
+---
+...
+util = require('util')
+---
+...
+util.wait_master(test_run, REPLICASET_1, 'storage_1_a')
+---
+...
+util.wait_master(test_run, REPLICASET_2, 'storage_2_a')
+---
+...
+_ = test_run:cmd("switch storage_1_a")
+---
+...
+util = require('util')
+---
+...
+fiber = require('fiber')
+---
+...
+test_run:cmd("setopt delimiter ';'")
+---
+- true
+...
+function wait_fibers_exit()
+    while #util.vshard_fiber_list() > 0 do
+        fiber.sleep(0.05)
+    end
+end;
+---
+...
+test_run:cmd("setopt delimiter ''");
+---
+- true
+...
+-- Storage is configured.
+-- Establish net.box connection.
+_, rs = next(vshard.storage.internal.replicasets)
+---
+...
+rs:callro('echo', {'some data'})
+---
+- some data
+- null
+- null
+...
+rs = nil
+---
+...
+-- Validate the destroy function via fiber_list.
+-- Netbox fibers are deleted after their replica objects are collected by GC.
+util.vshard_fiber_list()
+---
+- - 127.0.0.1:3303 (net.box)
+  - vshard.gc
+  - vshard.recovery
+...
+box.schema.user.exists('storage') == true
+---
+- true
+...
+box.space._bucket ~= nil
+---
+- true
+...
+-- Destroy storage.
+vshard.storage.destroy()
+---
+...
+wait_fibers_exit()
+---
+...
+box.space._bucket == nil
+---
+- true
+...
+-- Reconfigure storage.
+-- gh-52: Allow use of existing user.
+box.schema.user.exists('storage') == true
+---
+- true
+...
+vshard.storage.cfg(cfg, names['storage_1_a'])
+---
+...
+_, rs = next(vshard.storage.internal.replicasets)
+---
+...
+rs:callro('echo', {'some data'})
+---
+- some data
+- null
+- null
+...
+rs = nil
+---
+...
+box.space._bucket ~= nil
+---
+- true
+...
+util.vshard_fiber_list()
+---
+- - 127.0.0.1:3303 (net.box)
+  - vshard.gc
+  - vshard.recovery
+...
+_ = test_run:cmd("switch default")
+---
+...
+test_run:drop_cluster(REPLICASET_2)
+---
+...
+test_run:drop_cluster(REPLICASET_1)
+---
+...
diff --git a/test/storage/destroy.test.lua b/test/storage/destroy.test.lua
new file mode 100644
index 0000000..38e7c0a
--- /dev/null
+++ b/test/storage/destroy.test.lua
@@ -0,0 +1,50 @@
+test_run = require('test_run').new()
+
+REPLICASET_1 = { 'storage_1_a', 'storage_1_b' }
+REPLICASET_2 = { 'storage_2_a', 'storage_2_b' }
+
+test_run:create_cluster(REPLICASET_1, 'storage')
+test_run:create_cluster(REPLICASET_2, 'storage')
+util = require('util')
+util.wait_master(test_run, REPLICASET_1, 'storage_1_a')
+util.wait_master(test_run, REPLICASET_2, 'storage_2_a')
+
+_ = test_run:cmd("switch storage_1_a")
+util = require('util')
+fiber = require('fiber')
+test_run:cmd("setopt delimiter ';'")
+function wait_fibers_exit()
+    while #util.vshard_fiber_list() > 0 do
+        fiber.sleep(0.05)
+    end
+end;
+test_run:cmd("setopt delimiter ''");
+
+-- Storage is configured.
+-- Establish net.box connection.
+_, rs = next(vshard.storage.internal.replicasets)
+rs:callro('echo', {'some data'})
+rs = nil
+-- Validate the destroy function via fiber_list.
+-- Netbox fibers are deleted after their replica objects are collected by GC.
+util.vshard_fiber_list()
+box.schema.user.exists('storage') == true
+box.space._bucket ~= nil
+-- Destroy storage.
+vshard.storage.destroy()
+wait_fibers_exit()
+box.space._bucket == nil
+
+-- Reconfigure storage.
+-- gh-52: Allow use of existing user.
+box.schema.user.exists('storage') == true
+vshard.storage.cfg(cfg, names['storage_1_a'])
+_, rs = next(vshard.storage.internal.replicasets)
+rs:callro('echo', {'some data'})
+rs = nil
+box.space._bucket ~= nil
+util.vshard_fiber_list()
+
+_ = test_run:cmd("switch default")
+test_run:drop_cluster(REPLICASET_2)
+test_run:drop_cluster(REPLICASET_1)
diff --git a/vshard/router/init.lua b/vshard/router/init.lua
index 21093e5..e56c38a 100644
--- a/vshard/router/init.lua
+++ b/vshard/router/init.lua
@@ -7,30 +7,49 @@ local lhash = require('vshard.hash')
  local lreplicaset = require('vshard.replicaset')
  local util = require('vshard.util')

+local MODULE_SKELETON = {
+    errinj = {
+        ERRINJ_FAILOVER_CHANGE_CFG = false,
+        ERRINJ_RELOAD = false,
+    },
+    -- Bucket map cache.
+    route_map = {},
+    -- All known replicasets used for bucket re-balancing
+    replicasets = nil,
+    -- Fiber to maintain replica connections.
+    failover_fiber = nil,
+    -- Fiber to discovery buckets in background.
+    discovery_fiber = nil,
+    -- Bucket count stored on all replicasets.
+    total_bucket_count = 0,
+    -- If true, then discovery fiber starts to call
+    -- collectgarbage() periodically.
+    collect_lua_garbage = nil,
+    -- This counter is used to restart background fibers with
+    -- new reloaded code.
+    module_version = 0,
+}
+
  local M = rawget(_G, '__module_vshard_router')
-if not M then
-    M = {
-        errinj = {
-            ERRINJ_FAILOVER_CHANGE_CFG = false,
-            ERRINJ_RELOAD = false,
-        },
-        -- Bucket map cache.
-        route_map = {},
-        -- All known replicasets used for bucket re-balancing
-        replicasets = nil,
-        -- Fiber to maintain replica connections.
-        failover_fiber = nil,
-        -- Fiber to discovery buckets in background.
-        discovery_fiber = nil,
-        -- Bucket count stored on all replicasets.
-        total_bucket_count = 0,
-        -- If true, then discovery fiber starts to call
-        -- collectgarbage() periodically.
-        collect_lua_garbage = nil,
-        -- This counter is used to restart background fibers with
-        -- new reloaded code.
-        module_version = 0,
-    }
+local update_M
+
+--
+-- Destroy router module.
+--
+local function destroy()
+    local MODULE_INTERNALS = '__module_vshard_router'
+    assert(rawget(_G, MODULE_INTERNALS), 'Already destroyed')
+    -- Cancel background fibers.
+    for _, fib_name in pairs({
+        'failover_fiber',
+        'discovery_fiber',
+    }) do
+        if M[fib_name] then
+            M[fib_name]:cancel()
+        end
+    end
+    util.override_table(M, MODULE_SKELETON)
+    update_M()
  end

  -- Set a replicaset by container of a bucket.
@@ -758,10 +777,6 @@ local function router_sync(timeout)
      end
  end

-if M.errinj.ERRINJ_RELOAD then
-    error('Error injection: reload')
-end
-
  --------------------------------------------------------------------------------
  -- Module definition
  --------------------------------------------------------------------------------
@@ -769,15 +784,28 @@ end
  -- About functions, saved in M, and reloading see comment in
  -- storage/init.lua.
  --
-M.discovery_f = discovery_f
-M.failover_f = failover_f
+
+--
+-- Store module-defined values in M.
+--
+update_M = function()
+    M.discovery_f = discovery_f
+    M.failover_f = failover_f
+end

  if not rawget(_G, '__module_vshard_router') then
+    M = table.deepcopy(MODULE_SKELETON)
+    update_M()
      rawset(_G, '__module_vshard_router', M)
  else
+    M = rawget(_G, '__module_vshard_router')
      M.module_version = M.module_version + 1
  end

+if M.errinj.ERRINJ_RELOAD then
+    error('Error injection: reload')
+end
+
  return {
      cfg = router_cfg;
      info = router_info;
@@ -792,6 +820,7 @@ return {
      sync = router_sync;
      bootstrap = cluster_bootstrap;
      bucket_discovery = bucket_discovery;
+    destroy = destroy,
      discovery_wakeup = discovery_wakeup;
      internal = M;
      module_version = function() return M.module_version end;
diff --git a/vshard/storage/init.lua b/vshard/storage/init.lua
index c80bfbf..0d3cc24 100644
--- a/vshard/storage/init.lua
+++ b/vshard/storage/init.lua
@@ -9,89 +9,88 @@ local lcfg = require('vshard.cfg')
  local lreplicaset = require('vshard.replicaset')
  local trigger = require('internal.trigger')

-local M = rawget(_G, '__module_vshard_storage')
-if not M then
+--
+-- The module is loaded for the first time.
+--
+local MODULE_SKELETON = {
+    ---------------- Common module attributes ----------------
      --
-    -- The module is loaded for the first time.
+    -- All known replicasets used for bucket re-balancing.
+    -- See format in replicaset.lua.
      --
-    M = {
-        ---------------- Common module attributes ----------------
-        --
-        -- All known replicasets used for bucket re-balancing.
-        -- See format in replicaset.lua.
-        --
-        replicasets = nil,
-        -- Triggers on master switch event. They are called right
-        -- before the event occurs.
-        _on_master_enable = trigger.new('_on_master_enable'),
-        _on_master_disable = trigger.new('_on_master_disable'),
-        -- Index which is a trigger to shard its space by numbers in
-        -- this index. It must have at first part either unsigned,
-        -- or integer or number type and be not nullable. Values in
-        -- this part are considered as bucket identifiers.
-        shard_index = nil,
-        -- Bucket count stored on all replicasets.
-        total_bucket_count = 0,
-        errinj = {
-            ERRINJ_BUCKET_FIND_GARBAGE_DELAY = false,
-            ERRINJ_RELOAD = false,
-            ERRINJ_CFG_DELAY = false,
-            ERRINJ_LONG_RECEIVE = false,
-        },
-        -- This counter is used to restart background fibers with
-        -- new reloaded code.
-        module_version = 0,
-        --
-        -- Timeout to wait sync with slaves. Used on master
-        -- demotion or on a manual sync() call.
-        --
-        sync_timeout = consts.DEFAULT_SYNC_TIMEOUT,
-        -- References to a parent replicaset and self in it.
-        this_replicaset = nil,
-        this_replica = nil,
-
-        ------------------- Garbage collection -------------------
-        -- Fiber to remove garbage buckets data.
-        collect_bucket_garbage_fiber = nil,
-        -- Do buckets garbage collection once per this time.
-        collect_bucket_garbage_interval = nil,
-        -- If true, then bucket garbage collection fiber starts to
-        -- call collectgarbage() periodically.
-        collect_lua_garbage = nil,
-
-        -------------------- Bucket recovery ---------------------
-        -- Bucket identifiers which are not active and are not being
-        -- sent - their status is unknown. Their state must be checked
-        -- periodically in recovery fiber.
-        buckets_to_recovery = {},
-        buckets_to_recovery_count = 0,
-        recovery_fiber = nil,
-
-        ----------------------- Rebalancer -----------------------
-        -- Fiber to rebalance a cluster.
-        rebalancer_fiber = nil,
-        -- Fiber which applies routes one by one. Its presense and
-        -- active status means that the rebalancing is in progress
-        -- now on the current node.
-        rebalancer_applier_fiber = nil,
-        -- Internal flag to activate and deactivate rebalancer. Mostly
-        -- for tests.
-        is_rebalancer_active = true,
-        -- Maximal allowed percent deviation of bucket count on a
-        -- replicaset from etalon bucket count.
-        rebalancer_disbalance_threshold = 0,
-        -- Maximal bucket count that can be received by a single
-        -- replicaset simultaneously.
-        rebalancer_max_receiving = 0,
-        -- Identifier of a bucket that rebalancer is sending now,
-        -- or else 0. If a bucket has state SENDING, but its id is
-        -- not stored here, it means, that its sending was
-        -- interrupted, for example by restart of an instance, and
-        -- a destination replicaset must drop already received
-        -- data.
-        rebalancer_sending_bucket = 0,
-    }
-end
+    replicasets = nil,
+    -- Triggers on master switch event. They are called right
+    -- before the event occurs.
+    _on_master_enable = trigger.new('_on_master_enable'),
+    _on_master_disable = trigger.new('_on_master_disable'),
+    -- Index used to shard a space by the numbers stored in it.
+    -- Its first part must be of unsigned, integer or number
+    -- type and must not be nullable. Values in this part are
+    -- treated as bucket identifiers.
+    shard_index = nil,
+    -- Bucket count stored on all replicasets.
+    total_bucket_count = 0,
+    errinj = {
+        ERRINJ_BUCKET_FIND_GARBAGE_DELAY = false,
+        ERRINJ_RELOAD = false,
+        ERRINJ_CFG_DELAY = false,
+        ERRINJ_LONG_RECEIVE = false,
+    },
+    -- This counter is used to restart background fibers with
+    -- new reloaded code.
+    module_version = 0,
+    --
+    -- Timeout to wait for sync with slaves. Used on master
+    -- demotion or on a manual sync() call.
+    --
+    sync_timeout = consts.DEFAULT_SYNC_TIMEOUT,
+    -- References to a parent replicaset and self in it.
+    this_replicaset = nil,
+    this_replica = nil,
+
+    ------------------- Garbage collection -------------------
+    -- Fiber to remove garbage buckets data.
+    collect_bucket_garbage_fiber = nil,
+    -- Run bucket garbage collection once per this interval.
+    collect_bucket_garbage_interval = nil,
+    -- If true, then bucket garbage collection fiber starts to
+    -- call collectgarbage() periodically.
+    collect_lua_garbage = nil,
+
+    -------------------- Bucket recovery ---------------------
+    -- Bucket identifiers which are not active and are not being
+    -- sent - their status is unknown. Their state must be checked
+    -- periodically in the recovery fiber.
+    buckets_to_recovery = {},
+    buckets_to_recovery_count = 0,
+    recovery_fiber = nil,
+
+    ----------------------- Rebalancer -----------------------
+    -- Fiber to rebalance a cluster.
+    rebalancer_fiber = nil,
+    -- Fiber which applies routes one by one. Its presence and
+    -- active status mean that rebalancing is currently in
+    -- progress on this node.
+    rebalancer_applier_fiber = nil,
+    -- Internal flag to activate and deactivate rebalancer. Mostly
+    -- for tests.
+    is_rebalancer_active = true,
+    -- Maximal allowed percent deviation of bucket count on a
+    -- replicaset from etalon bucket count.
+    rebalancer_disbalance_threshold = 0,
+    -- Maximal bucket count that can be received by a single
+    -- replicaset simultaneously.
+    rebalancer_max_receiving = 0,
+    -- Identifier of the bucket the rebalancer is sending now,
+    -- or 0. If a bucket has state SENDING but its id is not
+    -- stored here, it means that its sending was interrupted,
+    -- for example by a restart of the instance, and the
+    -- destination replicaset must drop the already received
+    -- data.
+    rebalancer_sending_bucket = 0,
+}
+local M
+local update_M

  --
  -- Check if this replicaset is locked. It means be invisible for
@@ -138,6 +137,22 @@ end
  --------------------------------------------------------------------------------
  -- Schema
  --------------------------------------------------------------------------------
+local storage_api = {
+    'vshard.storage.sync',
+    'vshard.storage.call',
+    'vshard.storage.bucket_force_create',
+    'vshard.storage.bucket_force_drop',
+    'vshard.storage.bucket_collect',
+    'vshard.storage.bucket_send',
+    'vshard.storage.bucket_recv',
+    'vshard.storage.bucket_stat',
+    'vshard.storage.buckets_count',
+    'vshard.storage.buckets_info',
+    'vshard.storage.buckets_discovery',
+    'vshard.storage.rebalancer_request_state',
+    'vshard.storage.rebalancer_apply_routes',
+}
+
  local function storage_schema_v1(username, password)
      log.info("Initializing schema")
      box.schema.user.create(username, {
@@ -157,22 +172,6 @@ local function storage_schema_v1(username, password)
      bucket:create_index('pk', {parts = {'id'}})
      bucket:create_index('status', {parts = {'status'}, unique = false})

-    local storage_api = {
-        'vshard.storage.sync',
-        'vshard.storage.call',
-        'vshard.storage.bucket_force_create',
-        'vshard.storage.bucket_force_drop',
-        'vshard.storage.bucket_collect',
-        'vshard.storage.bucket_send',
-        'vshard.storage.bucket_recv',
-        'vshard.storage.bucket_stat',
-        'vshard.storage.buckets_count',
-        'vshard.storage.buckets_info',
-        'vshard.storage.buckets_discovery',
-        'vshard.storage.rebalancer_request_state',
-        'vshard.storage.rebalancer_apply_routes',
-    }
-
      for _, name in ipairs(storage_api) do
          box.schema.func.create(name, {setuid = true})
          box.schema.user.grant(username, 'execute', 'function', name)
@@ -203,6 +202,40 @@ local function on_master_enable(...)
      end
  end

+--------------------------------------------------------------------------------
+-- Destroy
+--------------------------------------------------------------------------------
+
+--
+-- Destroy storage module.
+--
+local function destroy()
+    local MODULE_INTERNALS = '__module_vshard_storage'
+    assert(rawget(_G, MODULE_INTERNALS), 'Already destroyed')
+    box.space._bucket:drop()
+    for _, name in ipairs(storage_api) do
+        box.schema.func.drop(name)
+    end
+    -- Cancel background fibers.
+    for _, fib_name in pairs({
+        'recovery_fiber',
+        'collect_bucket_garbage_fiber',
+        'rebalancer_applier_fiber',
+        'rebalancer_fiber',
+    }) do
+        if M[fib_name] then
+            M[fib_name]:cancel()
+        end
+    end
+    local box_cfg = table.deepcopy(box.cfg)
+    box_cfg.replication = {}
+    box_cfg.read_only = false
+    box.cfg(box_cfg)
+    box.space._schema:delete{'oncevshard:storage:1'}
+    util.override_table(M, MODULE_SKELETON)
+    update_M()
+end
+
  --------------------------------------------------------------------------------
  -- Recovery
  --------------------------------------------------------------------------------
@@ -594,10 +627,6 @@ local function bucket_collect_internal(bucket_id)
      return data
  end

-if M.errinj.ERRINJ_RELOAD then
-    error('Error injection: reload')
-end
-
  --
  -- Collect content of ACTIVE bucket.
  --
@@ -1808,27 +1837,37 @@ end
  -- restarted (or is restarted from M.background_f, which is not
  -- changed) and continues use old func1 and func2.
  --
-M.recovery_f = recovery_f
-M.collect_garbage_f = collect_garbage_f
-M.rebalancer_f = rebalancer_f

  --
--- These functions are saved in M not for atomic reload, but for
--- unit testing.
+-- Store module-defined values in M.
  --
-M.find_garbage_bucket = find_garbage_bucket
-M.collect_garbage_step = collect_garbage_step
-M.collect_garbage_f = collect_garbage_f
-M.rebalancer_build_routes = rebalancer_build_routes
-M.rebalancer_calculate_metrics = rebalancer_calculate_metrics
-M.cached_find_sharded_spaces = find_sharded_spaces
+update_M = function()
+    M.recovery_f = recovery_f
+    M.collect_garbage_f = collect_garbage_f
+    M.rebalancer_f = rebalancer_f
+    -- These functions are saved in M not for atomic reload, but for
+    -- unit testing.
+    M.find_garbage_bucket = find_garbage_bucket
+    M.collect_garbage_step = collect_garbage_step
+    M.collect_garbage_f = collect_garbage_f
+    M.rebalancer_build_routes = rebalancer_build_routes
+    M.rebalancer_calculate_metrics = rebalancer_calculate_metrics
+    M.cached_find_sharded_spaces = find_sharded_spaces
+end

  if not rawget(_G, '__module_vshard_storage') then
+    M = table.deepcopy(MODULE_SKELETON)
+    update_M()
      rawset(_G, '__module_vshard_storage', M)
  else
+    M = rawget(_G, '__module_vshard_storage')
      M.module_version = M.module_version + 1
  end

+if M.errinj.ERRINJ_RELOAD then
+    error('Error injection: reload')
+end
+
  return {
      sync = sync,
      bucket_force_create = bucket_force_create,
@@ -1840,6 +1879,7 @@ return {
      bucket_pin = bucket_pin,
      bucket_unpin = bucket_unpin,
      bucket_delete_garbage = bucket_delete_garbage,
+    destroy = destroy,
      garbage_collector_wakeup = garbage_collector_wakeup,
      rebalancer_wakeup = rebalancer_wakeup,
      rebalancer_apply_routes = rebalancer_apply_routes,
diff --git a/vshard/util.lua b/vshard/util.lua
index bb71318..d57cf44 100644
--- a/vshard/util.lua
+++ b/vshard/util.lua
@@ -75,7 +75,22 @@ local function generate_self_checker(obj_name, func_name, mt, func)
      end
  end

+--
+-- Replace the contents of a table with a template.
+-- @param tbl Table to be updated.
+-- @param template Contents of `tbl` after the call.
+--
+local function override_table(tbl, template)
+    for k, _ in pairs(tbl) do
+        tbl[k] = nil
+    end
+    for k, v in pairs(template) do
+        tbl[k] = table.deepcopy(v)
+    end
+end
+
  return {
+    override_table = override_table,
      tuple_extract_key = tuple_extract_key,
      reloadable_fiber_f = reloadable_fiber_f,
      generate_self_checker = generate_self_checker,
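
For illustration, a minimal usage sketch of override_table() (the
`state` table, `alias` and the values here are hypothetical, not from
vshard):

    local util = require('vshard.util')

    local state = {a = 1, b = 2}
    local alias = state
    util.override_table(state, {x = 10})
    -- state now holds only {x = 10}. The table identity is kept, so
    -- `alias` (and any other reference, e.g. one stored in _G)
    -- observes the reset too; that is why destroy() can reset M in
    -- place instead of replacing it.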

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [tarantool-patches] Re: [PATCH 2/3] Allow use existing user
  2018-06-22 22:12   ` [tarantool-patches] " Vladislav Shpilevoy
@ 2018-06-25 21:55     ` Alex Khatskevich
  2018-06-26 11:11       ` Vladislav Shpilevoy
  0 siblings, 1 reply; 13+ messages in thread
From: Alex Khatskevich @ 2018-06-25 21:55 UTC (permalink / raw)
  To: Vladislav Shpilevoy, tarantool-patches


>> @@ -138,11 +138,19 @@ end
>> --------------------------------------------------------------------------------
>>   -- Schema
>> --------------------------------------------------------------------------------
>> +local function create_user(username, password)
>> +    box.schema.user.create(username, {
>> +        password = password,
>> +        if_not_exists = true,
>> +    })
>> +    -- Replication may not have been granted if the user already exists.
>> +    box.schema.user.grant(username, 'replication', nil, nil,
>> +                          {if_not_exists = true})
>> +end
>>     local function storage_schema_v1(username, password)
>>       log.info("Initializing schema")
>> -    box.schema.user.create(username, {password = password})
>> -    box.schema.user.grant(username, 'replication')
>> +    create_user(username, password)
>
> 1. Please, just inline this function. It is not needed anywhere else
> and is extremely simple.
>
done
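
For reference, a minimal sketch of the inlined variant (the rest of
the schema bootstrap is elided; this is an approximation, not the
exact pushed code):

    local function storage_schema_v1(username, password)
        log.info("Initializing schema")
        box.schema.user.create(username, {
            password = password,
            if_not_exists = true,
        })
        -- Replication may not have been granted if the user
        -- already exists.
        box.schema.user.grant(username, 'replication', nil, nil,
                              {if_not_exists = true})
        -- ... spaces, indexes and function grants as before ...
    end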

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [tarantool-patches] Re: [PATCH 1/3] Do not force login/pass in URI
  2018-06-22 22:11   ` [tarantool-patches] " Vladislav Shpilevoy
@ 2018-06-25 21:56     ` Alex Khatskevich
  2018-06-26 11:11       ` Vladislav Shpilevoy
  0 siblings, 1 reply; 13+ messages in thread
From: Alex Khatskevich @ 2018-06-25 21:56 UTC (permalink / raw)
  To: Vladislav Shpilevoy, tarantool-patches


>>
>> diff --git a/test/unit/config.result b/test/unit/config.result
>> index 6b4f87b..847536c 100644
>> --- a/test/unit/config.result
>> +++ b/test/unit/config.result
>> @@ -506,3 +506,17 @@ _ = lcfg.check(cfg)
>>   replica.name = 'storage'
>>   ---
>>   ...
>> +-- gh-47: Check uri
>> +_, err = pcall(lcfg.check_uri, 'invalid uri')
>> +---
>> +...
>> +err:match('Invalid URI.*')
>
> 1. Please, use util.check_error instead of pcall + match as
> I told you earlier.
>
done
>> diff --git a/vshard/cfg.lua b/vshard/cfg.lua
>> index f5db4c0..5644d2c 100644
>> --- a/vshard/cfg.lua
>> +++ b/vshard/cfg.lua
>> @@ -5,10 +5,8 @@ local luri = require('uri')
>>   local consts = require('vshard.consts')
>>     local function check_uri(uri)
>> -    uri = luri.parse(uri)
>> -    if uri.login == nil or uri.password == nil then
>> -        error('URI must contain login and password')
>> -    end
>> +    local parsed = luri.parse(uri)
>> +    assert(parsed, 'Invalid URI: ' .. uri)
>
> 2. Please, do not use assertions for regular errors. Use 'error()'.
done
>
>>   end
>>     local function check_master(master, ctx)
>> @@ -251,5 +249,6 @@ end
>>     return {
>>       check = cfg_check,
>> +    check_uri = check_uri,
>
> 3. Do not expose this one-line function for testing only. Just
> use cfg.check for this, corrupting the URI of one replica in cfg.
>
> And restore it back after the test.
done
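
For reference, a sketch of check_uri() rewritten with error() as
requested in comment 2 (assuming luri.parse() returns nil for an
invalid URI):

    local luri = require('uri')

    local function check_uri(uri)
        if luri.parse(uri) == nil then
            error('Invalid URI: ' .. tostring(uri))
        end
    end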

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [tarantool-patches] Re: [PATCH 2/3] Allow use existing user
  2018-06-25 21:55     ` Alex Khatskevich
@ 2018-06-26 11:11       ` Vladislav Shpilevoy
  0 siblings, 0 replies; 13+ messages in thread
From: Vladislav Shpilevoy @ 2018-06-26 11:11 UTC (permalink / raw)
  To: tarantool-patches, Alex Khatskevich

Thanks for the fixes! The patch is pushed.

On 26/06/2018 00:55, Alex Khatskevich wrote:
> 
>>> @@ -138,11 +138,19 @@ end
>>> --------------------------------------------------------------------------------
>>>   -- Schema
>>> --------------------------------------------------------------------------------
>>> +local function create_user(username, password)
>>> +    box.schema.user.create(username, {
>>> +        password = password,
>>> +        if_not_exists = true,
>>> +    })
>>> +    -- Replication may not have been granted if the user already exists.
>>> +    box.schema.user.grant(username, 'replication', nil, nil,
>>> +                          {if_not_exists = true})
>>> +end
>>>     local function storage_schema_v1(username, password)
>>>       log.info("Initializing schema")
>>> -    box.schema.user.create(username, {password = password})
>>> -    box.schema.user.grant(username, 'replication')
>>> +    create_user(username, password)
>>
>> 1. Please, just inline this function. It is not needed anywhere else
>> and is extremely simple.
>>
> done
> 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [tarantool-patches] Re: [PATCH 3/3] Introduce destroy module feature
  2018-06-25 21:54     ` Alex Khatskevich
@ 2018-06-26 11:11       ` Vladislav Shpilevoy
  0 siblings, 0 replies; 13+ messages in thread
From: Vladislav Shpilevoy @ 2018-06-26 11:11 UTC (permalink / raw)
  To: Alex Khatskevich, tarantool-patches

Thanks for the fixes! See 3 comments below.

On 26/06/2018 00:54, Alex Khatskevich wrote:
> 
>>> --- a/test/lua_libs/util.lua
>>> +++ b/test/lua_libs/util.lua
>>> @@ -69,9 +69,43 @@ local function wait_master(test_run, replicaset, master)
>>>       log.info('Slaves are connected to a master "%s"', master)
>>>   end
>>>   +function vshard_fiber_list()
>>> +    -- Flush jit traces to prevent them from
>>> +    -- keeping their upvalues in memory.
>>> +    jit.flush()
>>> +    collectgarbage()
>>> +    -- Give a fiber time to clean itself.
>>> +    fiber.sleep(0.05)
>>
>> 1. Why do you need this sleep? As far as I know
>> collectgarbage just cleans all the garbage in the same
>> fiber with no yields or async tasks.
> netbox: if collectgarbage() has collected a connection, its worker
> fiber needs time to wake up after fiber:cancel(), process the
> cancellation and exit.
> another module: the same behavior is possible.

1. I removed this sleep and nothing happened. See comment 5 below.

> commit b0cddc2a3ba30d3ecf57e71ce8d095bf2f093a5e
> Author: AKhatskevich <avkhatskevich@tarantool.org>
> Date:   Wed Jun 20 00:31:25 2018 +0300
> 
>      Introduce destroy module feature
> 
>      Introduce functions:
>       * vshard.router.destroy()
>       * vshard.storage.destroy()
> 
>      Those functions:
>       * close connections
>       * stop background fibers
>       * delete vshard spaces
>       * delete vshard funcitons

2. Typo.

>       * delete `once` metadate

3. Typo.

> 
>      After the destroy, the module can be configured as if it had just been loaded.
> 
>      Extra changes:
>       * introduce fiber_list function which returns names of non-tarantool
>         fibers
>       * introduce update_M function, which updates M (module internals) with
>         values defined in the module
> 
>      Closes #121
> 
> diff --git a/test/lua_libs/util.lua b/test/lua_libs/util.lua
> index f2d3b48..ce0ea67 100644
> --- a/test/lua_libs/util.lua
> +++ b/test/lua_libs/util.lua
> @@ -69,9 +69,43 @@ local function wait_master(test_run, replicaset, master)
>       log.info('Slaves are connected to a master "%s"', master)
>   end
> 
> +function vshard_fiber_list()
> +    -- Flush jit traces to prevent them from
> +    -- keeping their upvalues in memory.
> +    jit.flush()
> +    collectgarbage()
> +    -- Give a fiber time to clean itself.
> +    fiber.sleep(0.05)

4. It makes no sense, and not only because it changes nothing in the
test output. Even if it could affect the active fiber count because
of fibers that have not managed to exit yet, you already call
vshard_fiber_list() in a while loop until its output matches your
expectations. So please, remove the sleep.
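
For example, a wait loop along these lines (a simplification; the
real test compares the list against an expected set of names) already
gives fibers time to finish, which makes a fixed sleep redundant:

    local fiber = require('fiber')

    -- Poll until all non-tarantool fibers have exited.
    while #vshard_fiber_list() > 0 do
        fiber.sleep(0.01)
    end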

5. Since we do not know yet what to do with cleaning the spaces, I
will review the rest of the patch later, when we do.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [tarantool-patches] Re: [PATCH 1/3] Do not force login/pass in URI
  2018-06-25 21:56     ` Alex Khatskevich
@ 2018-06-26 11:11       ` Vladislav Shpilevoy
  0 siblings, 0 replies; 13+ messages in thread
From: Vladislav Shpilevoy @ 2018-06-26 11:11 UTC (permalink / raw)
  To: Alex Khatskevich, tarantool-patches

Thanks for the fixes! The patch is pushed.

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2018-06-26 11:11 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-22 21:43 [tarantool-patches] [PATCH 0/3][vshard] Small patches AKhatskevich
2018-06-22 21:43 ` [tarantool-patches] [PATCH 1/3] Do not force login/pass in URI AKhatskevich
2018-06-22 22:11   ` [tarantool-patches] " Vladislav Shpilevoy
2018-06-25 21:56     ` Alex Khatskevich
2018-06-26 11:11       ` Vladislav Shpilevoy
2018-06-22 21:43 ` [tarantool-patches] [PATCH 2/3] Allow use existing user AKhatskevich
2018-06-22 22:12   ` [tarantool-patches] " Vladislav Shpilevoy
2018-06-25 21:55     ` Alex Khatskevich
2018-06-26 11:11       ` Vladislav Shpilevoy
2018-06-22 21:43 ` [tarantool-patches] [PATCH 3/3] Introduce destroy module feature AKhatskevich
2018-06-22 22:11   ` [tarantool-patches] " Vladislav Shpilevoy
2018-06-25 21:54     ` Alex Khatskevich
2018-06-26 11:11       ` Vladislav Shpilevoy

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox