Tarantool development patches archive
* [tarantool-patches] [PATCH v3] vshard reload mechanism
@ 2018-07-26  8:27 AKhatskevich
  2018-07-26  8:27 ` [tarantool-patches] [PATCH 1/3] Fix races related to object outdating AKhatskevich
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: AKhatskevich @ 2018-07-26  8:27 UTC (permalink / raw)
  To: v.shpilevoy, tarantool-patches

Issue1: https://github.com/tarantool/vshard/issues/112
Issue2: https://github.com/tarantool/vshard/issues/125
Branch: https://github.com/tarantool/vshard/tree/kh/gh-112-reload-mt-2

This patchset improves the vshard reload mechanism.

Changes since PATCH v2:
 - Races related to object outdating are fixed.

AKhatskevich (3):
  Fix races related to object outdating
  tests: separate bootstrap routine to a lua_libs
  Introduce storage reload evolution

 .travis.yml                                        |   2 +-
 rpm/prebuild.sh                                    |   2 +
 test/lua_libs/bootstrap.lua                        |  50 +++++
 test/lua_libs/git_util.lua                         |  51 +++++
 test/lua_libs/util.lua                             |  20 ++
 test/rebalancer/box_1_a.lua                        |  47 +---
 test/rebalancer/errinj.result                      |   2 +-
 test/rebalancer/errinj.test.lua                    |   2 +-
 test/rebalancer/rebalancer.result                  |   2 +-
 test/rebalancer/rebalancer.test.lua                |   2 +-
 test/rebalancer/rebalancer_lock_and_pin.result     |   2 +-
 test/rebalancer/rebalancer_lock_and_pin.test.lua   |   2 +-
 test/rebalancer/restart_during_rebalancing.result  |   2 +-
 .../rebalancer/restart_during_rebalancing.test.lua |   2 +-
 test/rebalancer/stress_add_remove_rs.result        |   2 +-
 test/rebalancer/stress_add_remove_rs.test.lua      |   2 +-
 .../rebalancer/stress_add_remove_several_rs.result |   2 +-
 .../stress_add_remove_several_rs.test.lua          |   2 +-
 test/rebalancer/suite.ini                          |   2 +-
 test/reload_evolution/storage.result               | 245 +++++++++++++++++++++
 test/reload_evolution/storage.test.lua             |  87 ++++++++
 test/reload_evolution/storage_1_a.lua              |  48 ++++
 test/reload_evolution/storage_1_b.lua              |   1 +
 test/reload_evolution/storage_2_a.lua              |   1 +
 test/reload_evolution/storage_2_b.lua              |   1 +
 test/reload_evolution/suite.ini                    |   6 +
 test/reload_evolution/test.lua                     |   9 +
 test/unit/reload_evolution.result                  |  45 ++++
 test/unit/reload_evolution.test.lua                |  18 ++
 vshard/replicaset.lua                              |  30 +--
 vshard/router/init.lua                             |  58 +++--
 vshard/storage/init.lua                            |  12 +
 vshard/storage/reload_evolution.lua                |  58 +++++
 33 files changed, 722 insertions(+), 95 deletions(-)
 create mode 100644 test/lua_libs/bootstrap.lua
 create mode 100644 test/lua_libs/git_util.lua
 create mode 100644 test/reload_evolution/storage.result
 create mode 100644 test/reload_evolution/storage.test.lua
 create mode 100755 test/reload_evolution/storage_1_a.lua
 create mode 120000 test/reload_evolution/storage_1_b.lua
 create mode 120000 test/reload_evolution/storage_2_a.lua
 create mode 120000 test/reload_evolution/storage_2_b.lua
 create mode 100644 test/reload_evolution/suite.ini
 create mode 100644 test/reload_evolution/test.lua
 create mode 100644 test/unit/reload_evolution.result
 create mode 100644 test/unit/reload_evolution.test.lua
 create mode 100644 vshard/storage/reload_evolution.lua

-- 
2.14.1

^ permalink raw reply	[flat|nested] 5+ messages in thread

* [tarantool-patches] [PATCH 1/3] Fix races related to object outdating
  2018-07-26  8:27 [tarantool-patches] [PATCH v3] vshard reload mechanism AKhatskevich
@ 2018-07-26  8:27 ` AKhatskevich
  2018-07-26  8:27 ` [tarantool-patches] [PATCH 2/3] tests: separate bootstrap routine to a lua_libs AKhatskevich
  2018-07-26  8:27 ` [tarantool-patches] [PATCH 3/3] Introduce storage reload evolution AKhatskevich
  2 siblings, 0 replies; 5+ messages in thread
From: AKhatskevich @ 2018-07-26  8:27 UTC (permalink / raw)
  To: v.shpilevoy, tarantool-patches

Reload/reconfigure may replace many fields of M during any yield.
Old objects must not be accessed after they are outdated.

This commit handles such cases within `vshard.router`.
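
To illustrate the pattern (a sketch with hypothetical names, not the
patch code itself): any object taken from M must be re-fetched after a
possible yield, because a concurrent reload/reconfigure may have
replaced it.

    -- Minimal sketch; `call` stands for any networked call that yields.
    local M = { replicasets = { ['uuid-1'] = { name = 'rs-1' } } }

    local function discover(bucket_id, call)
        for uuid in pairs(M.replicasets) do
            -- `call` may yield, and M.replicasets may be replaced
            -- meanwhile, so the object is looked up by uuid on every
            -- iteration instead of being cached across yields.
            local replicaset = M.replicasets[uuid]
            if replicaset ~= nil and call(replicaset, bucket_id) then
                return replicaset
            end
        end
    end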
---
 vshard/replicaset.lua   | 30 ++++++++++++++-----------
 vshard/router/init.lua  | 58 +++++++++++++++++++++++++++++--------------------
 vshard/storage/init.lua |  1 +
 3 files changed, 52 insertions(+), 37 deletions(-)

diff --git a/vshard/replicaset.lua b/vshard/replicaset.lua
index 6c8d477..87e26d3 100644
--- a/vshard/replicaset.lua
+++ b/vshard/replicaset.lua
@@ -340,16 +340,13 @@ local function replicaset_tostring(replicaset)
                          master)
 end
 
-local outdate_replicasets
 --
 -- Copy netbox connections from old replica objects to new ones
 -- and outdate old objects.
 -- @param replicasets New replicasets
 -- @param old_replicasets Replicasets and replicas to be outdated.
--- @param outdate_delay Number of seconds; delay to outdate
---        old objects.
 --
-local function rebind_replicasets(replicasets, old_replicasets, outdate_delay)
+local function rebind_replicasets(replicasets, old_replicasets)
     for replicaset_uuid, replicaset in pairs(replicasets) do
         local old_replicaset = old_replicasets and
                                old_replicasets[replicaset_uuid]
@@ -370,9 +367,6 @@ local function rebind_replicasets(replicasets, old_replicasets, outdate_delay)
             end
         end
     end
-    if old_replicasets then
-        util.async_task(outdate_delay, outdate_replicasets, old_replicasets)
-    end
 end
 
 --
@@ -453,12 +447,7 @@ for fname, func in pairs(replica_mt.__index) do
     outdated_replica_mt.__index[fname] = outdated_warning
 end
 
---
--- Outdate replicaset and replica objects:
---  * Set outdated_metatables.
---  * Remove connections.
---
-outdate_replicasets = function(replicasets)
+local outdate_replicasets_internal = function(replicasets)
     for _, replicaset in pairs(replicasets) do
         setmetatable(replicaset, outdated_replicaset_mt)
         for _, replica in pairs(replicaset.replicas) do
@@ -469,6 +458,20 @@ outdate_replicasets = function(replicasets)
     log.info('Old replicaset and replica objects are outdated.')
 end
 
+--
+-- Outdate replicaset and replica objects:
+--  * Set outdated_metatables.
+--  * Remove connections.
+-- @param replicasets Old replicasets to be outdated.
+-- @param outdate_delay Delay in seconds before the outdating.
+--
+local function outdate_replicasets(replicasets, outdate_delay)
+    if replicasets then
+        util.async_task(outdate_delay, outdate_replicasets_internal,
+                        replicasets)
+    end
+end
+
 --
 -- Calculate for each replicaset its etalon bucket count.
 -- Iterative algorithm is used to learn the best balance in a
@@ -650,4 +653,5 @@ return {
     calculate_etalon_balance = cluster_calculate_etalon_balance,
     wait_masters_connect = wait_masters_connect,
     rebind_replicasets = rebind_replicasets,
+    outdate_replicasets = outdate_replicasets,
 }
diff --git a/vshard/router/init.lua b/vshard/router/init.lua
index 142ddb6..1a0ed2f 100644
--- a/vshard/router/init.lua
+++ b/vshard/router/init.lua
@@ -52,9 +52,14 @@ if not M then
     }
 end
 
--- Set a replicaset by container of a bucket.
-local function bucket_set(bucket_id, replicaset)
-    assert(replicaset)
+-- Assign a bucket to a replicaset.
+local function bucket_set(bucket_id, rs_uuid)
+    local replicaset = M.replicasets[rs_uuid]
+    -- It is technically possible to delete a replicaset at the
+    -- same time as the route to the bucket is discovered.
+    if not replicaset then
+        return nil, lerror.vshard(lerror.code.NO_ROUTE_TO_BUCKET, bucket_id)
+    end
     local old_replicaset = M.route_map[bucket_id]
     if old_replicaset ~= replicaset then
         if old_replicaset then
@@ -63,6 +68,7 @@ local function bucket_set(bucket_id, replicaset)
         replicaset.bucket_count = replicaset.bucket_count + 1
     end
     M.route_map[bucket_id] = replicaset
+    return replicaset
 end
 
 -- Remove a bucket from the cache.
@@ -88,15 +94,18 @@ local function bucket_discovery(bucket_id)
     log.verbose("Discovering bucket %d", bucket_id)
     local last_err = nil
     local unreachable_uuid = nil
-    for uuid, replicaset in pairs(M.replicasets) do
-        local _, err =
-            replicaset:callrw('vshard.storage.bucket_stat', {bucket_id})
-        if err == nil then
-            bucket_set(bucket_id, replicaset)
-            return replicaset
-        elseif err.code ~= lerror.code.WRONG_BUCKET then
-            last_err = err
-            unreachable_uuid = uuid
+    for uuid, _ in pairs(M.replicasets) do
+        -- Handle reload/reconfigure.
+        local replicaset = M.replicasets[uuid]
+        if replicaset then
+            local _, err =
+                replicaset:callrw('vshard.storage.bucket_stat', {bucket_id})
+            if err == nil then
+                return bucket_set(bucket_id, replicaset.uuid)
+            elseif err.code ~= lerror.code.WRONG_BUCKET then
+                last_err = err
+                unreachable_uuid = uuid
+            end
         end
     end
     local err = nil
@@ -262,13 +271,13 @@ local function router_call(bucket_id, mode, func, args, opts)
                             end
                         end
                     else
-                        bucket_set(bucket_id, replicaset)
+                        replicaset = bucket_set(bucket_id, replicaset.uuid)
                         lfiber.yield()
                         -- Protect against infinite cycle in a
                         -- case of broken cluster, when a bucket
                         -- is sent on two replicasets to each
                         -- other.
-                        if lfiber.time() <= tend then
+                        if replicaset and lfiber.time() <= tend then
                             goto replicaset_is_found
                         end
                     end
@@ -513,27 +522,28 @@ local function router_cfg(cfg)
     end
     box.cfg(box_cfg)
     log.info("Box has been configured")
-    M.connection_outdate_delay = cfg.connection_outdate_delay
-    M.total_bucket_count = total_bucket_count
-    M.collect_lua_garbage = collect_lua_garbage
-    M.current_cfg = new_cfg
     -- Move connections from an old configuration to a new one.
     -- It must be done with no yields to prevent usage both of not
     -- fully moved old replicasets, and not fully built new ones.
-    lreplicaset.rebind_replicasets(new_replicasets, M.replicasets,
-                                   M.connection_outdate_delay)
-    M.replicasets = new_replicasets
+    lreplicaset.rebind_replicasets(new_replicasets, M.replicasets)
     -- Now the new replicasets are fully built. Can establish
     -- connections and yield.
     for _, replicaset in pairs(new_replicasets) do
         replicaset:connect_all()
     end
+    lreplicaset.wait_masters_connect(new_replicasets)
+    lreplicaset.outdate_replicasets(M.replicasets, cfg.connection_outdate_delay)
+    M.connection_outdate_delay = cfg.connection_outdate_delay
+    M.total_bucket_count = total_bucket_count
+    M.collect_lua_garbage = collect_lua_garbage
+    M.current_cfg = cfg
+    M.replicasets = new_replicasets
     -- Update existing route map in-place.
-    for bucket, rs in pairs(M.route_map) do
+    local old_route_map = M.route_map
+    M.route_map = {}
+    for bucket, rs in pairs(old_route_map) do
         M.route_map[bucket] = M.replicasets[rs.uuid]
     end
-
-    lreplicaset.wait_masters_connect(new_replicasets)
     if M.failover_fiber == nil then
         lfiber.create(util.reloadable_fiber_f, M, 'failover_f', 'Failover')
     end
diff --git a/vshard/storage/init.lua b/vshard/storage/init.lua
index 07bd00c..c1df0e6 100644
--- a/vshard/storage/init.lua
+++ b/vshard/storage/init.lua
@@ -1620,6 +1620,7 @@ local function storage_cfg(cfg, this_replica_uuid)
     box.once("vshard:storage:1", storage_schema_v1, uri.login, uri.password)
 
     lreplicaset.rebind_replicasets(new_replicasets, M.replicasets)
+    lreplicaset.outdate_replicasets(M.replicasets)
     M.replicasets = new_replicasets
     M.this_replicaset = this_replicaset
     M.this_replica = this_replica
-- 
2.14.1

^ permalink raw reply	[flat|nested] 5+ messages in thread

* [tarantool-patches] [PATCH 2/3] tests: separate bootstrap routine to a lua_libs
  2018-07-26  8:27 [tarantool-patches] [PATCH v3] vshard reload mechanism AKhatskevich
  2018-07-26  8:27 ` [tarantool-patches] [PATCH 1/3] Fix races related to object outdating AKhatskevich
@ 2018-07-26  8:27 ` AKhatskevich
  2018-07-26  8:27 ` [tarantool-patches] [PATCH 3/3] Introduce storage reload evolution AKhatskevich
  2 siblings, 0 replies; 5+ messages in thread
From: AKhatskevich @ 2018-07-26  8:27 UTC (permalink / raw)
  To: v.shpilevoy, tarantool-patches

What is moved to `test/lua_libs/bootstrap.lua`:
1. create schema
2. create main stored procedures
3. `wait_rebalancer_state` procedure

This code will be reused in further commits.
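
For reference, an instance script now only needs to pull the shared
code in after configuring the storage (as box_1_a.lua does below):

    -- After vshard.storage.cfg(cfg, <uuid>):
    require('lua_libs.bootstrap')
    -- This defines init_schema(), do_replace(), do_select(),
    -- check_consistency() and wait_rebalancer_state(), and runs the
    -- box.once('schema', ...) bootstrap.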
---
 test/lua_libs/bootstrap.lua                        | 50 ++++++++++++++++++++++
 test/rebalancer/box_1_a.lua                        | 47 ++------------------
 test/rebalancer/errinj.result                      |  2 +-
 test/rebalancer/errinj.test.lua                    |  2 +-
 test/rebalancer/rebalancer.result                  |  2 +-
 test/rebalancer/rebalancer.test.lua                |  2 +-
 test/rebalancer/rebalancer_lock_and_pin.result     |  2 +-
 test/rebalancer/rebalancer_lock_and_pin.test.lua   |  2 +-
 test/rebalancer/restart_during_rebalancing.result  |  2 +-
 .../rebalancer/restart_during_rebalancing.test.lua |  2 +-
 test/rebalancer/stress_add_remove_rs.result        |  2 +-
 test/rebalancer/stress_add_remove_rs.test.lua      |  2 +-
 .../rebalancer/stress_add_remove_several_rs.result |  2 +-
 .../stress_add_remove_several_rs.test.lua          |  2 +-
 test/rebalancer/suite.ini                          |  2 +-
 15 files changed, 66 insertions(+), 57 deletions(-)
 create mode 100644 test/lua_libs/bootstrap.lua

diff --git a/test/lua_libs/bootstrap.lua b/test/lua_libs/bootstrap.lua
new file mode 100644
index 0000000..62c2f78
--- /dev/null
+++ b/test/lua_libs/bootstrap.lua
@@ -0,0 +1,50 @@
+local log = require('log')
+
+function init_schema()
+	local format = {}
+	format[1] = {name = 'field', type = 'unsigned'}
+	format[2] = {name = 'bucket_id', type = 'unsigned'}
+	local s = box.schema.create_space('test', {format = format})
+	local pk = s:create_index('pk')
+	local bucket_id_idx =
+		s:create_index('vbucket', {parts = {'bucket_id'},
+					   unique = false})
+end
+
+box.once('schema', function()
+	box.schema.func.create('do_replace')
+	box.schema.role.grant('public', 'execute', 'function', 'do_replace')
+	box.schema.func.create('do_select')
+	box.schema.role.grant('public', 'execute', 'function', 'do_select')
+	init_schema()
+end)
+
+function do_replace(...)
+	box.space.test:replace(...)
+	return true
+end
+
+function do_select(...)
+	return box.space.test:select(...)
+end
+
+function check_consistency()
+	for _, tuple in box.space.test:pairs() do
+		assert(box.space._bucket:get{tuple.bucket_id})
+	end
+	return true
+end
+
+--
+-- Wait for a specified log message.
+-- Requirements:
+-- * Must be executed on a storage which runs the rebalancer.
+-- * The NAME global variable must be set to the instance name.
+function wait_rebalancer_state(state, test_run)
+	log.info(string.rep('a', 1000))
+	vshard.storage.rebalancer_wakeup()
+	while not test_run:grep_log(NAME, state, 1000) do
+		fiber.sleep(0.1)
+		vshard.storage.rebalancer_wakeup()
+	end
+end
diff --git a/test/rebalancer/box_1_a.lua b/test/rebalancer/box_1_a.lua
index 8fddcf0..2ca8306 100644
--- a/test/rebalancer/box_1_a.lua
+++ b/test/rebalancer/box_1_a.lua
@@ -2,7 +2,7 @@
 -- Get instance name
 require('strict').on()
 local fio = require('fio')
-local NAME = fio.basename(arg[0], '.lua')
+NAME = fio.basename(arg[0], '.lua')
 log = require('log')
 require('console').listen(os.getenv('ADMIN'))
 fiber = require('fiber')
@@ -23,40 +23,8 @@ if NAME == 'box_4_a' or NAME == 'box_4_b' or
 end
 vshard.storage.cfg(cfg, names.replica_uuid[NAME])
 
-function init_schema()
-	local format = {}
-	format[1] = {name = 'field', type = 'unsigned'}
-	format[2] = {name = 'bucket_id', type = 'unsigned'}
-	local s = box.schema.create_space('test', {format = format})
-	local pk = s:create_index('pk')
-	local bucket_id_idx =
-		s:create_index('vbucket', {parts = {'bucket_id'},
-					   unique = false})
-end
-
-box.once('schema', function()
-	box.schema.func.create('do_replace')
-	box.schema.role.grant('public', 'execute', 'function', 'do_replace')
-	box.schema.func.create('do_select')
-	box.schema.role.grant('public', 'execute', 'function', 'do_select')
-	init_schema()
-end)
-
-function do_replace(...)
-	box.space.test:replace(...)
-	return true
-end
-
-function do_select(...)
-	return box.space.test:select(...)
-end
-
-function check_consistency()
-	for _, tuple in box.space.test:pairs() do
-		assert(box.space._bucket:get{tuple.bucket_id})
-	end
-	return true
-end
+-- Bootstrap storage.
+require('lua_libs.bootstrap')
 
 function switch_rs1_master()
 	local replica_uuid = names.replica_uuid
@@ -68,12 +36,3 @@ end
 function nullify_rs_weight()
 	cfg.sharding[names.rs_uuid[1]].weight = 0
 end
-
-function wait_rebalancer_state(state, test_run)
-	log.info(string.rep('a', 1000))
-	vshard.storage.rebalancer_wakeup()
-	while not test_run:grep_log(NAME, state, 1000) do
-		fiber.sleep(0.1)
-		vshard.storage.rebalancer_wakeup()
-	end
-end
diff --git a/test/rebalancer/errinj.result b/test/rebalancer/errinj.result
index d09349e..826c2c6 100644
--- a/test/rebalancer/errinj.result
+++ b/test/rebalancer/errinj.result
@@ -13,7 +13,7 @@ test_run:create_cluster(REPLICASET_1, 'rebalancer')
 test_run:create_cluster(REPLICASET_2, 'rebalancer')
 ---
 ...
-util = require('util')
+util = require('lua_libs.util')
 ---
 ...
 util.wait_master(test_run, REPLICASET_1, 'box_1_a')
diff --git a/test/rebalancer/errinj.test.lua b/test/rebalancer/errinj.test.lua
index d6a2920..fc0730c 100644
--- a/test/rebalancer/errinj.test.lua
+++ b/test/rebalancer/errinj.test.lua
@@ -5,7 +5,7 @@ REPLICASET_2 = { 'box_2_a', 'box_2_b' }
 
 test_run:create_cluster(REPLICASET_1, 'rebalancer')
 test_run:create_cluster(REPLICASET_2, 'rebalancer')
-util = require('util')
+util = require('lua_libs.util')
 util.wait_master(test_run, REPLICASET_1, 'box_1_a')
 util.wait_master(test_run, REPLICASET_2, 'box_2_a')
 
diff --git a/test/rebalancer/rebalancer.result b/test/rebalancer/rebalancer.result
index 88cbaae..71e43e1 100644
--- a/test/rebalancer/rebalancer.result
+++ b/test/rebalancer/rebalancer.result
@@ -13,7 +13,7 @@ test_run:create_cluster(REPLICASET_1, 'rebalancer')
 test_run:create_cluster(REPLICASET_2, 'rebalancer')
 ---
 ...
-util = require('util')
+util = require('lua_libs.util')
 ---
 ...
 util.wait_master(test_run, REPLICASET_1, 'box_1_a')
diff --git a/test/rebalancer/rebalancer.test.lua b/test/rebalancer/rebalancer.test.lua
index 01f2061..1b7ddae 100644
--- a/test/rebalancer/rebalancer.test.lua
+++ b/test/rebalancer/rebalancer.test.lua
@@ -5,7 +5,7 @@ REPLICASET_2 = { 'box_2_a', 'box_2_b' }
 
 test_run:create_cluster(REPLICASET_1, 'rebalancer')
 test_run:create_cluster(REPLICASET_2, 'rebalancer')
-util = require('util')
+util = require('lua_libs.util')
 util.wait_master(test_run, REPLICASET_1, 'box_1_a')
 util.wait_master(test_run, REPLICASET_2, 'box_2_a')
 
diff --git a/test/rebalancer/rebalancer_lock_and_pin.result b/test/rebalancer/rebalancer_lock_and_pin.result
index dd9fe47..0f2921c 100644
--- a/test/rebalancer/rebalancer_lock_and_pin.result
+++ b/test/rebalancer/rebalancer_lock_and_pin.result
@@ -16,7 +16,7 @@ test_run:create_cluster(REPLICASET_1, 'rebalancer')
 test_run:create_cluster(REPLICASET_2, 'rebalancer')
 ---
 ...
-util = require('util')
+util = require('lua_libs.util')
 ---
 ...
 util.wait_master(test_run, REPLICASET_1, 'box_1_a')
diff --git a/test/rebalancer/rebalancer_lock_and_pin.test.lua b/test/rebalancer/rebalancer_lock_and_pin.test.lua
index fe866c4..3a2daa0 100644
--- a/test/rebalancer/rebalancer_lock_and_pin.test.lua
+++ b/test/rebalancer/rebalancer_lock_and_pin.test.lua
@@ -6,7 +6,7 @@ REPLICASET_3 = { 'box_3_a', 'box_3_b' }
 
 test_run:create_cluster(REPLICASET_1, 'rebalancer')
 test_run:create_cluster(REPLICASET_2, 'rebalancer')
-util = require('util')
+util = require('lua_libs.util')
 util.wait_master(test_run, REPLICASET_1, 'box_1_a')
 util.wait_master(test_run, REPLICASET_2, 'box_2_a')
 
diff --git a/test/rebalancer/restart_during_rebalancing.result b/test/rebalancer/restart_during_rebalancing.result
index d2b8a12..0eb0f2e 100644
--- a/test/rebalancer/restart_during_rebalancing.result
+++ b/test/rebalancer/restart_during_rebalancing.result
@@ -25,7 +25,7 @@ test_run:create_cluster(REPLICASET_3, 'rebalancer')
 test_run:create_cluster(REPLICASET_4, 'rebalancer')
 ---
 ...
-util = require('util')
+util = require('lua_libs.util')
 ---
 ...
 util.wait_master(test_run, REPLICASET_1, 'fullbox_1_a')
diff --git a/test/rebalancer/restart_during_rebalancing.test.lua b/test/rebalancer/restart_during_rebalancing.test.lua
index 5b1a8df..7b707ca 100644
--- a/test/rebalancer/restart_during_rebalancing.test.lua
+++ b/test/rebalancer/restart_during_rebalancing.test.lua
@@ -9,7 +9,7 @@ test_run:create_cluster(REPLICASET_1, 'rebalancer')
 test_run:create_cluster(REPLICASET_2, 'rebalancer')
 test_run:create_cluster(REPLICASET_3, 'rebalancer')
 test_run:create_cluster(REPLICASET_4, 'rebalancer')
-util = require('util')
+util = require('lua_libs.util')
 util.wait_master(test_run, REPLICASET_1, 'fullbox_1_a')
 util.wait_master(test_run, REPLICASET_2, 'fullbox_2_a')
 util.wait_master(test_run, REPLICASET_3, 'fullbox_3_a')
diff --git a/test/rebalancer/stress_add_remove_rs.result b/test/rebalancer/stress_add_remove_rs.result
index 8a955e2..10bcaac 100644
--- a/test/rebalancer/stress_add_remove_rs.result
+++ b/test/rebalancer/stress_add_remove_rs.result
@@ -16,7 +16,7 @@ test_run:create_cluster(REPLICASET_1, 'rebalancer')
 test_run:create_cluster(REPLICASET_2, 'rebalancer')
 ---
 ...
-util = require('util')
+util = require('lua_libs.util')
 ---
 ...
 util.wait_master(test_run, REPLICASET_1, 'box_1_a')
diff --git a/test/rebalancer/stress_add_remove_rs.test.lua b/test/rebalancer/stress_add_remove_rs.test.lua
index c80df40..b9bb027 100644
--- a/test/rebalancer/stress_add_remove_rs.test.lua
+++ b/test/rebalancer/stress_add_remove_rs.test.lua
@@ -6,7 +6,7 @@ REPLICASET_3 = { 'box_3_a', 'box_3_b' }
 
 test_run:create_cluster(REPLICASET_1, 'rebalancer')
 test_run:create_cluster(REPLICASET_2, 'rebalancer')
-util = require('util')
+util = require('lua_libs.util')
 util.wait_master(test_run, REPLICASET_1, 'box_1_a')
 util.wait_master(test_run, REPLICASET_2, 'box_2_a')
 
diff --git a/test/rebalancer/stress_add_remove_several_rs.result b/test/rebalancer/stress_add_remove_several_rs.result
index d6008b8..611362c 100644
--- a/test/rebalancer/stress_add_remove_several_rs.result
+++ b/test/rebalancer/stress_add_remove_several_rs.result
@@ -19,7 +19,7 @@ test_run:create_cluster(REPLICASET_1, 'rebalancer')
 test_run:create_cluster(REPLICASET_2, 'rebalancer')
 ---
 ...
-util = require('util')
+util = require('lua_libs.util')
 ---
 ...
 util.wait_master(test_run, REPLICASET_1, 'box_1_a')
diff --git a/test/rebalancer/stress_add_remove_several_rs.test.lua b/test/rebalancer/stress_add_remove_several_rs.test.lua
index 3cc105e..9acb8de 100644
--- a/test/rebalancer/stress_add_remove_several_rs.test.lua
+++ b/test/rebalancer/stress_add_remove_several_rs.test.lua
@@ -7,7 +7,7 @@ REPLICASET_4 = { 'box_4_a', 'box_4_b' }
 
 test_run:create_cluster(REPLICASET_1, 'rebalancer')
 test_run:create_cluster(REPLICASET_2, 'rebalancer')
-util = require('util')
+util = require('lua_libs.util')
 util.wait_master(test_run, REPLICASET_1, 'box_1_a')
 util.wait_master(test_run, REPLICASET_2, 'box_2_a')
 
diff --git a/test/rebalancer/suite.ini b/test/rebalancer/suite.ini
index afc5141..8689da5 100644
--- a/test/rebalancer/suite.ini
+++ b/test/rebalancer/suite.ini
@@ -4,6 +4,6 @@ description = Rebalancer tests
 script = test.lua
 is_parallel = False
 release_disabled = errinj.test.lua
-lua_libs = ../lua_libs/util.lua config.lua names.lua router_1.lua
+lua_libs = ../lua_libs config.lua names.lua router_1.lua
            box_1_a.lua box_1_b.lua box_2_a.lua box_2_b.lua
            box_3_a.lua box_3_b.lua rebalancer_utils.lua
-- 
2.14.1

^ permalink raw reply	[flat|nested] 5+ messages in thread

* [tarantool-patches] [PATCH 3/3] Introduce storage reload evolution
  2018-07-26  8:27 [tarantool-patches] [PATCH v3] vshard reload mechanism AKhatskevich
  2018-07-26  8:27 ` [tarantool-patches] [PATCH 1/3] Fix races related to object outdating AKhatskevich
  2018-07-26  8:27 ` [tarantool-patches] [PATCH 2/3] tests: separate bootstrap routine to a lua_libs AKhatskevich
@ 2018-07-26  8:27 ` AKhatskevich
  2 siblings, 0 replies; 5+ messages in thread
From: AKhatskevich @ 2018-07-26  8:27 UTC (permalink / raw)
  To: v.shpilevoy, tarantool-patches

Changes:
1. Introduce storage reload evolution.
2. Setup cross-version reload testing.

1:
This mechanism updates Lua objects on reload in case they are
changed in a new vshard.storage version.

Since this commit, any change in vshard.storage.M has to be
reflected in vshard.storage.reload_evolution to guarantee
correct reload.
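
For example, a future commit that adds a field to M would append a
migration to vshard/storage/reload_evolution.lua (sketch; `new_field`
is a made-up name):

    migrations[#migrations + 1] = function(M)
        -- Upgrade an already loaded M from the previous version.
        if M.new_field == nil then
            M.new_field = 0
        end
    end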

2:
The testing uses git infrastructure and is performed in the following
way:
1. Copy old version of vshard to a temp folder.
2. Run vshard on this code.
3. Checkout the latest version of the vshard sources.
4. Reload vshard storage.
5. Make sure it works (Perform simple tests).
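
Condensed, this is roughly what test/reload_evolution/storage.test.lua
(added below) does; the mkdir/cp of the source tree and the cluster
bootstrap are omitted here:

    git_util = require('lua_libs.git_util')
    util = require('lua_libs.util')
    copy = util.BUILDDIR .. '/test/var/vshard_git_tree_copy'
    evolution_log = git_util.log_hashes({args = 'vshard/storage/reload_evolution.lua',
                                         dir = util.SOURCEDIR})
    -- Start the cluster on the commit preceding reload_evolution ...
    git_util.exec_cmd({cmd = 'checkout', args = evolution_log[#evolution_log] .. '~1',
                       dir = copy})
    -- ... then switch the copy to the latest commit and reload.
    git_util.exec_cmd({cmd = 'checkout', args = evolution_log[1], dir = copy})
    package.loaded['vshard.storage'] = nil
    vshard.storage = require('vshard.storage')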

Notes:
* this patch contains some legacy-driven decisions:
  1. The SOURCEDIR path is retrieved differently in case of a
     packpack build.
  2. The git directory in the `reload_evolution/storage` test
     is copied in a way that works on CentOS 7 and with SOURCEDIR
     mounted in `ro` mode.

Closes #112 #125
---
 .travis.yml                            |   2 +-
 rpm/prebuild.sh                        |   2 +
 test/lua_libs/git_util.lua             |  51 +++++++
 test/lua_libs/util.lua                 |  20 +++
 test/reload_evolution/storage.result   | 245 +++++++++++++++++++++++++++++++++
 test/reload_evolution/storage.test.lua |  87 ++++++++++++
 test/reload_evolution/storage_1_a.lua  |  48 +++++++
 test/reload_evolution/storage_1_b.lua  |   1 +
 test/reload_evolution/storage_2_a.lua  |   1 +
 test/reload_evolution/storage_2_b.lua  |   1 +
 test/reload_evolution/suite.ini        |   6 +
 test/reload_evolution/test.lua         |   9 ++
 test/unit/reload_evolution.result      |  45 ++++++
 test/unit/reload_evolution.test.lua    |  18 +++
 vshard/storage/init.lua                |  11 ++
 vshard/storage/reload_evolution.lua    |  58 ++++++++
 16 files changed, 604 insertions(+), 1 deletion(-)
 create mode 100644 test/lua_libs/git_util.lua
 create mode 100644 test/reload_evolution/storage.result
 create mode 100644 test/reload_evolution/storage.test.lua
 create mode 100755 test/reload_evolution/storage_1_a.lua
 create mode 120000 test/reload_evolution/storage_1_b.lua
 create mode 120000 test/reload_evolution/storage_2_a.lua
 create mode 120000 test/reload_evolution/storage_2_b.lua
 create mode 100644 test/reload_evolution/suite.ini
 create mode 100644 test/reload_evolution/test.lua
 create mode 100644 test/unit/reload_evolution.result
 create mode 100644 test/unit/reload_evolution.test.lua
 create mode 100644 vshard/storage/reload_evolution.lua

diff --git a/.travis.yml b/.travis.yml
index 54bfe44..eff4a51 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -41,7 +41,7 @@ env:
 script:
   - git describe --long
   - git clone https://github.com/packpack/packpack.git packpack
-  - packpack/packpack
+  - packpack/packpack -e PACKPACK_GIT_SOURCEDIR=/source/
 
 before_deploy:
   - ls -l build/
diff --git a/rpm/prebuild.sh b/rpm/prebuild.sh
index 768b22b..554032b 100755
--- a/rpm/prebuild.sh
+++ b/rpm/prebuild.sh
@@ -1 +1,3 @@
 curl -s https://packagecloud.io/install/repositories/tarantool/1_9/script.rpm.sh | sudo bash
+sudo yum -y install python-devel python-pip
+sudo pip install tarantool msgpack
diff --git a/test/lua_libs/git_util.lua b/test/lua_libs/git_util.lua
new file mode 100644
index 0000000..a75bb08
--- /dev/null
+++ b/test/lua_libs/git_util.lua
@@ -0,0 +1,51 @@
+--
+-- Lua bridge for some of the git commands.
+--
+local os = require('os')
+
+local temp_file = 'some_strange_rare_unique_file_name_for_git_util'
+
+--
+-- Exec a git command.
+-- @param params Table of parameters:
+--        * options - git options.
+--        * cmd - git command.
+--        * args - command arguments.
+--        * dir - working directory.
+--        * fout - file to redirect the output to.
+local function exec_cmd(params)
+    local fout = params.fout
+    local shell_cmd = {'git'}
+    for _, param in pairs({'options', 'cmd', 'args'}) do
+        table.insert(shell_cmd, params[param])
+    end
+    if fout then
+        table.insert(shell_cmd, ' >' .. fout)
+    end
+    shell_cmd = table.concat(shell_cmd, ' ')
+    if params.dir then
+        shell_cmd = string.format('cd %s && %s', params.dir, shell_cmd)
+    end
+    local res = os.execute(shell_cmd)
+    assert(res == 0, 'Git cmd error: ' .. res)
+end
+
+local function log_hashes(params)
+    params.args = "--format='%h' " .. params.args
+    local local_temp_file = string.format('%s/%s', os.getenv('PWD'), temp_file)
+    params.fout = local_temp_file
+    params.cmd = 'log'
+    exec_cmd(params)
+    local lines = {}
+    for line in io.lines(local_temp_file) do
+        table.insert(lines, line)
+    end
+    os.remove(local_temp_file)
+    return lines
+end
+
+
+return {
+    exec_cmd = exec_cmd,
+    log_hashes = log_hashes
+}
diff --git a/test/lua_libs/util.lua b/test/lua_libs/util.lua
index f40d3a6..935ff41 100644
--- a/test/lua_libs/util.lua
+++ b/test/lua_libs/util.lua
@@ -1,5 +1,6 @@
 local fiber = require('fiber')
 local log = require('log')
+local fio = require('fio')
 
 local function check_error(func, ...)
     local pstatus, status, err = pcall(func, ...)
@@ -92,10 +93,29 @@ local function has_same_fields(etalon, data)
     return true
 end
 
+-- Git directory of the project. Used in evolution tests to
+-- fetch old versions of vshard.
+local SOURCEDIR = os.getenv('PACKPACK_GIT_SOURCEDIR')
+if not SOURCEDIR then
+    SOURCEDIR = os.getenv('SOURCEDIR')
+end
+if not SOURCEDIR then
+    local script_path = debug.getinfo(1).source:match("@?(.*/)")
+    script_path = fio.abspath(script_path)
+    SOURCEDIR = fio.abspath(script_path .. '/../../../')
+end
+
+local BUILDDIR = os.getenv('BUILDDIR')
+if not BUILDDIR then
+    BUILDDIR = SOURCEDIR
+end
+
 return {
     check_error = check_error,
     shuffle_masters = shuffle_masters,
     collect_timeouts = collect_timeouts,
     wait_master = wait_master,
     has_same_fields = has_same_fields,
+    SOURCEDIR = SOURCEDIR,
+    BUILDDIR = BUILDDIR,
 }
diff --git a/test/reload_evolution/storage.result b/test/reload_evolution/storage.result
new file mode 100644
index 0000000..007192c
--- /dev/null
+++ b/test/reload_evolution/storage.result
@@ -0,0 +1,245 @@
+test_run = require('test_run').new()
+---
+...
+git_util = require('lua_libs.git_util')
+---
+...
+util = require('lua_libs.util')
+---
+...
+vshard_copy_path = util.BUILDDIR .. '/test/var/vshard_git_tree_copy'
+---
+...
+evolution_log = git_util.log_hashes({args='vshard/storage/reload_evolution.lua', dir=util.SOURCEDIR})
+---
+...
+-- Cleanup the directory after a previous build.
+_ = os.execute('rm -rf ' .. vshard_copy_path)
+---
+...
+-- 1. `git worktree` cannot be used because PACKPACK mounts
+-- `/source/` in `ro` mode.
+-- 2. A plain `cp -rf` cannot be used due to slightly different
+-- behavior on CentOS 7.
+_ = os.execute('mkdir ' .. vshard_copy_path)
+---
+...
+_ = os.execute("cd " .. util.SOURCEDIR .. ' && cp -rf `ls -A --ignore=build` ' .. vshard_copy_path)
+---
+...
+-- Checkout the first commit with a reload_evolution mechanism.
+git_util.exec_cmd({cmd='checkout', args='-f', dir=vshard_copy_path})
+---
+...
+git_util.exec_cmd({cmd='checkout', args=evolution_log[#evolution_log] .. '~1', dir=vshard_copy_path})
+---
+...
+REPLICASET_1 = { 'storage_1_a', 'storage_1_b' }
+---
+...
+REPLICASET_2 = { 'storage_2_a', 'storage_2_b' }
+---
+...
+test_run:create_cluster(REPLICASET_1, 'reload_evolution')
+---
+...
+test_run:create_cluster(REPLICASET_2, 'reload_evolution')
+---
+...
+util = require('lua_libs.util')
+---
+...
+util.wait_master(test_run, REPLICASET_1, 'storage_1_a')
+---
+...
+util.wait_master(test_run, REPLICASET_2, 'storage_2_a')
+---
+...
+test_run:switch('storage_1_a')
+---
+- true
+...
+vshard.storage.bucket_force_create(1, vshard.consts.DEFAULT_BUCKET_COUNT / 2)
+---
+- true
+...
+bucket_id_to_move = vshard.consts.DEFAULT_BUCKET_COUNT
+---
+...
+test_run:switch('storage_2_a')
+---
+- true
+...
+fiber = require('fiber')
+---
+...
+vshard.storage.bucket_force_create(vshard.consts.DEFAULT_BUCKET_COUNT / 2 + 1, vshard.consts.DEFAULT_BUCKET_COUNT / 2)
+---
+- true
+...
+bucket_id_to_move = vshard.consts.DEFAULT_BUCKET_COUNT
+---
+...
+vshard.storage.internal.reload_version
+---
+- null
+...
+wait_rebalancer_state('The cluster is balanced ok', test_run)
+---
+...
+box.space.test:insert({42, bucket_id_to_move})
+---
+- [42, 3000]
+...
+test_run:switch('default')
+---
+- true
+...
+git_util.exec_cmd({cmd='checkout', args=evolution_log[1], dir=vshard_copy_path})
+---
+...
+test_run:switch('storage_2_a')
+---
+- true
+...
+package.loaded['vshard.storage'] = nil
+---
+...
+vshard.storage = require("vshard.storage")
+---
+...
+test_run:grep_log('storage_2_a', 'vshard.storage.reload_evolution: upgraded to') ~= nil
+---
+- true
+...
+vshard.storage.internal.reload_version
+---
+- 1
+...
+-- Make sure storage operates well.
+vshard.storage.bucket_force_drop(2000)
+---
+- true
+...
+vshard.storage.bucket_force_create(2000)
+---
+- true
+...
+vshard.storage.buckets_info()[2000]
+---
+- status: active
+  id: 2000
+...
+vshard.storage.call(bucket_id_to_move, 'read', 'do_select', {42})
+---
+- true
+- - [42, 3000]
+...
+vshard.storage.bucket_send(bucket_id_to_move, replicaset1_uuid)
+---
+- true
+...
+vshard.storage.garbage_collector_wakeup()
+---
+...
+fiber = require('fiber')
+---
+...
+while box.space._bucket:get({bucket_id_to_move}) do fiber.sleep(0.01) end
+---
+...
+test_run:switch('storage_1_a')
+---
+- true
+...
+vshard.storage.bucket_send(bucket_id_to_move, replicaset2_uuid)
+---
+- true
+...
+test_run:switch('storage_2_a')
+---
+- true
+...
+vshard.storage.call(bucket_id_to_move, 'read', 'do_select', {42})
+---
+- true
+- - [42, 3000]
+...
+-- Check info() does not fail.
+vshard.storage.info() ~= nil
+---
+- true
+...
+--
+-- Send buckets to create a disbalance. Wait until the rebalancer
+-- repairs it. Similar to `tests/rebalancer/rebalancer.test.lua`.
+--
+vshard.storage.rebalancer_disable()
+---
+...
+move_start = vshard.consts.DEFAULT_BUCKET_COUNT / 2 + 1
+---
+...
+move_cnt = 100
+---
+...
+assert(move_start + move_cnt < vshard.consts.DEFAULT_BUCKET_COUNT)
+---
+- true
+...
+for i = move_start, move_start + move_cnt - 1 do box.space._bucket:delete{i} end
+---
+...
+box.space._bucket.index.status:count({vshard.consts.BUCKET.ACTIVE})
+---
+- 1400
+...
+test_run:switch('storage_1_a')
+---
+- true
+...
+move_start = vshard.consts.DEFAULT_BUCKET_COUNT / 2 + 1
+---
+...
+move_cnt = 100
+---
+...
+vshard.storage.bucket_force_create(move_start, move_cnt)
+---
+- true
+...
+box.space._bucket.index.status:count({vshard.consts.BUCKET.ACTIVE})
+---
+- 1600
+...
+test_run:switch('storage_2_a')
+---
+- true
+...
+vshard.storage.rebalancer_enable()
+---
+...
+wait_rebalancer_state('Rebalance routes are sent', test_run)
+---
+...
+wait_rebalancer_state('The cluster is balanced ok', test_run)
+---
+...
+box.space._bucket.index.status:count({vshard.consts.BUCKET.ACTIVE})
+---
+- 1500
+...
+test_run:switch('default')
+---
+- true
+...
+test_run:drop_cluster(REPLICASET_2)
+---
+...
+test_run:drop_cluster(REPLICASET_1)
+---
+...
+test_run:cmd('clear filter')
+---
+- true
+...
diff --git a/test/reload_evolution/storage.test.lua b/test/reload_evolution/storage.test.lua
new file mode 100644
index 0000000..7af464b
--- /dev/null
+++ b/test/reload_evolution/storage.test.lua
@@ -0,0 +1,87 @@
+test_run = require('test_run').new()
+
+git_util = require('lua_libs.git_util')
+util = require('lua_libs.util')
+vshard_copy_path = util.BUILDDIR .. '/test/var/vshard_git_tree_copy'
+evolution_log = git_util.log_hashes({args='vshard/storage/reload_evolution.lua', dir=util.SOURCEDIR})
+-- Cleanup the directory after a previous build.
+_ = os.execute('rm -rf ' .. vshard_copy_path)
+-- 1. `git worktree` cannot be used because PACKPACK mounts
+-- `/source/` in `ro` mode.
+-- 2. A plain `cp -rf` cannot be used due to slightly different
+-- behavior on CentOS 7.
+_ = os.execute('mkdir ' .. vshard_copy_path)
+_ = os.execute("cd " .. util.SOURCEDIR .. ' && cp -rf `ls -A --ignore=build` ' .. vshard_copy_path)
+-- Checkout the first commit with a reload_evolution mechanism.
+git_util.exec_cmd({cmd='checkout', args='-f', dir=vshard_copy_path})
+git_util.exec_cmd({cmd='checkout', args=evolution_log[#evolution_log] .. '~1', dir=vshard_copy_path})
+
+REPLICASET_1 = { 'storage_1_a', 'storage_1_b' }
+REPLICASET_2 = { 'storage_2_a', 'storage_2_b' }
+test_run:create_cluster(REPLICASET_1, 'reload_evolution')
+test_run:create_cluster(REPLICASET_2, 'reload_evolution')
+util = require('lua_libs.util')
+util.wait_master(test_run, REPLICASET_1, 'storage_1_a')
+util.wait_master(test_run, REPLICASET_2, 'storage_2_a')
+
+test_run:switch('storage_1_a')
+vshard.storage.bucket_force_create(1, vshard.consts.DEFAULT_BUCKET_COUNT / 2)
+bucket_id_to_move = vshard.consts.DEFAULT_BUCKET_COUNT
+
+test_run:switch('storage_2_a')
+fiber = require('fiber')
+vshard.storage.bucket_force_create(vshard.consts.DEFAULT_BUCKET_COUNT / 2 + 1, vshard.consts.DEFAULT_BUCKET_COUNT / 2)
+bucket_id_to_move = vshard.consts.DEFAULT_BUCKET_COUNT
+vshard.storage.internal.reload_version
+wait_rebalancer_state('The cluster is balanced ok', test_run)
+box.space.test:insert({42, bucket_id_to_move})
+
+test_run:switch('default')
+git_util.exec_cmd({cmd='checkout', args=evolution_log[1], dir=vshard_copy_path})
+
+test_run:switch('storage_2_a')
+package.loaded['vshard.storage'] = nil
+vshard.storage = require("vshard.storage")
+test_run:grep_log('storage_2_a', 'vshard.storage.reload_evolution: upgraded to') ~= nil
+vshard.storage.internal.reload_version
+-- Make sure storage operates well.
+vshard.storage.bucket_force_drop(2000)
+vshard.storage.bucket_force_create(2000)
+vshard.storage.buckets_info()[2000]
+vshard.storage.call(bucket_id_to_move, 'read', 'do_select', {42})
+vshard.storage.bucket_send(bucket_id_to_move, replicaset1_uuid)
+vshard.storage.garbage_collector_wakeup()
+fiber = require('fiber')
+while box.space._bucket:get({bucket_id_to_move}) do fiber.sleep(0.01) end
+test_run:switch('storage_1_a')
+vshard.storage.bucket_send(bucket_id_to_move, replicaset2_uuid)
+test_run:switch('storage_2_a')
+vshard.storage.call(bucket_id_to_move, 'read', 'do_select', {42})
+-- Check info() does not fail.
+vshard.storage.info() ~= nil
+
+--
+-- Send buckets to create a disbalance. Wait until the rebalancer
+-- repairs it. Similar to `tests/rebalancer/rebalancer.test.lua`.
+--
+vshard.storage.rebalancer_disable()
+move_start = vshard.consts.DEFAULT_BUCKET_COUNT / 2 + 1
+move_cnt = 100
+assert(move_start + move_cnt < vshard.consts.DEFAULT_BUCKET_COUNT)
+for i = move_start, move_start + move_cnt - 1 do box.space._bucket:delete{i} end
+box.space._bucket.index.status:count({vshard.consts.BUCKET.ACTIVE})
+test_run:switch('storage_1_a')
+move_start = vshard.consts.DEFAULT_BUCKET_COUNT / 2 + 1
+move_cnt = 100
+vshard.storage.bucket_force_create(move_start, move_cnt)
+box.space._bucket.index.status:count({vshard.consts.BUCKET.ACTIVE})
+test_run:switch('storage_2_a')
+vshard.storage.rebalancer_enable()
+wait_rebalancer_state('Rebalance routes are sent', test_run)
+wait_rebalancer_state('The cluster is balanced ok', test_run)
+box.space._bucket.index.status:count({vshard.consts.BUCKET.ACTIVE})
+
+test_run:switch('default')
+test_run:drop_cluster(REPLICASET_2)
+test_run:drop_cluster(REPLICASET_1)
+test_run:cmd('clear filter')
diff --git a/test/reload_evolution/storage_1_a.lua b/test/reload_evolution/storage_1_a.lua
new file mode 100755
index 0000000..f1a2981
--- /dev/null
+++ b/test/reload_evolution/storage_1_a.lua
@@ -0,0 +1,48 @@
+#!/usr/bin/env tarantool
+
+require('strict').on()
+
+local log = require('log')
+local fiber = require('fiber')
+local util = require('lua_libs.util')
+local fio = require('fio')
+
+-- Get instance name
+NAME = fio.basename(arg[0], '.lua')
+
+-- test-run gate.
+test_run = require('test_run').new()
+require('console').listen(os.getenv('ADMIN'))
+
+-- Run one storage on a different vshard version.
+-- To do that, the vshard sources are taken from
+-- BUILDDIR/test/var/vshard_git_tree_copy/ via package.path.
+if NAME == 'storage_2_a' then
+    local script_path = debug.getinfo(1).source:match("@?(.*/)")
+    vshard_copy = util.BUILDDIR .. '/test/var/vshard_git_tree_copy'
+    package.path = string.format(
+        '%s/?.lua;%s/?/init.lua;%s',
+        vshard_copy, vshard_copy, package.path
+    )
+end
+
+-- Call a configuration provider
+cfg = require('localcfg')
+-- Name to uuid map
+names = {
+    ['storage_1_a'] = '8a274925-a26d-47fc-9e1b-af88ce939412',
+    ['storage_1_b'] = '3de2e3e1-9ebe-4d0d-abb1-26d301b84633',
+    ['storage_2_a'] = '1e02ae8a-afc0-4e91-ba34-843a356b8ed7',
+    ['storage_2_b'] = '001688c3-66f8-4a31-8e19-036c17d489c2',
+}
+
+replicaset1_uuid = 'cbf06940-0790-498b-948d-042b62cf3d29'
+replicaset2_uuid = 'ac522f65-aa94-4134-9f64-51ee384f1a54'
+replicasets = {replicaset1_uuid, replicaset2_uuid}
+
+-- Start the database with sharding
+vshard = require('vshard')
+vshard.storage.cfg(cfg, names[NAME])
+
+-- Bootstrap storage.
+require('lua_libs.bootstrap')
diff --git a/test/reload_evolution/storage_1_b.lua b/test/reload_evolution/storage_1_b.lua
new file mode 120000
index 0000000..02572da
--- /dev/null
+++ b/test/reload_evolution/storage_1_b.lua
@@ -0,0 +1 @@
+storage_1_a.lua
\ No newline at end of file
diff --git a/test/reload_evolution/storage_2_a.lua b/test/reload_evolution/storage_2_a.lua
new file mode 120000
index 0000000..02572da
--- /dev/null
+++ b/test/reload_evolution/storage_2_a.lua
@@ -0,0 +1 @@
+storage_1_a.lua
\ No newline at end of file
diff --git a/test/reload_evolution/storage_2_b.lua b/test/reload_evolution/storage_2_b.lua
new file mode 120000
index 0000000..02572da
--- /dev/null
+++ b/test/reload_evolution/storage_2_b.lua
@@ -0,0 +1 @@
+storage_1_a.lua
\ No newline at end of file
diff --git a/test/reload_evolution/suite.ini b/test/reload_evolution/suite.ini
new file mode 100644
index 0000000..5f55418
--- /dev/null
+++ b/test/reload_evolution/suite.ini
@@ -0,0 +1,6 @@
+[default]
+core = tarantool
+description = Reload evolution tests
+script = test.lua
+is_parallel = False
+lua_libs = ../lua_libs ../../example/localcfg.lua
diff --git a/test/reload_evolution/test.lua b/test/reload_evolution/test.lua
new file mode 100644
index 0000000..ad0543a
--- /dev/null
+++ b/test/reload_evolution/test.lua
@@ -0,0 +1,9 @@
+#!/usr/bin/env tarantool
+
+require('strict').on()
+
+box.cfg{
+    listen = os.getenv("LISTEN"),
+}
+
+require('console').listen(os.getenv('ADMIN'))
diff --git a/test/unit/reload_evolution.result b/test/unit/reload_evolution.result
new file mode 100644
index 0000000..10e606d
--- /dev/null
+++ b/test/unit/reload_evolution.result
@@ -0,0 +1,45 @@
+test_run = require('test_run').new()
+---
+...
+fiber = require('fiber')
+---
+...
+log = require('log')
+---
+...
+util = require('util')
+---
+...
+reload_evolution = require('vshard.storage.reload_evolution')
+---
+...
+-- Init with the latest version.
+fake_M = { reload_version = reload_evolution.version }
+---
+...
+-- Test reload to the same version.
+reload_evolution.upgrade(fake_M)
+---
+...
+test_run:grep_log('default', 'vshard.storage.evolution') == nil
+---
+- true
+...
+-- Test a version downgrade.
+log.info(string.rep('a', 1000))
+---
+...
+fake_M.reload_version = fake_M.reload_version + 1
+---
+...
+err = util.check_error(reload_evolution.upgrade, fake_M)
+---
+...
+err:match('auto%-downgrade is not implemented')
+---
+- auto-downgrade is not implemented
+...
+test_run:grep_log('default', 'vshard.storage.evolution', 1000) ~= nil
+---
+- false
+...
diff --git a/test/unit/reload_evolution.test.lua b/test/unit/reload_evolution.test.lua
new file mode 100644
index 0000000..2e99152
--- /dev/null
+++ b/test/unit/reload_evolution.test.lua
@@ -0,0 +1,18 @@
+test_run = require('test_run').new()
+fiber = require('fiber')
+log = require('log')
+util = require('util')
+reload_evolution = require('vshard.storage.reload_evolution')
+-- Init with the latest version.
+fake_M = { reload_version = reload_evolution.version }
+
+-- Test reload to the same version.
+reload_evolution.upgrade(fake_M)
+test_run:grep_log('default', 'vshard.storage.evolution') == nil
+
+-- Test a version downgrade.
+log.info(string.rep('a', 1000))
+fake_M.reload_version = fake_M.reload_version + 1
+err = util.check_error(reload_evolution.upgrade, fake_M)
+err:match('auto%-downgrade is not implemented')
+test_run:grep_log('default', 'vshard.storage.evolution', 1000) ~= nil
diff --git a/vshard/storage/init.lua b/vshard/storage/init.lua
index c1df0e6..c7d81e3 100644
--- a/vshard/storage/init.lua
+++ b/vshard/storage/init.lua
@@ -10,6 +10,7 @@ if rawget(_G, MODULE_INTERNALS) then
     local vshard_modules = {
         'vshard.consts', 'vshard.error', 'vshard.cfg',
         'vshard.replicaset', 'vshard.util',
+        'vshard.storage.reload_evolution'
     }
     for _, module in pairs(vshard_modules) do
         package.loaded[module] = nil
@@ -20,12 +21,16 @@ local lerror = require('vshard.error')
 local lcfg = require('vshard.cfg')
 local lreplicaset = require('vshard.replicaset')
 local util = require('vshard.util')
+local reload_evolution = require('vshard.storage.reload_evolution')
 
 local M = rawget(_G, MODULE_INTERNALS)
 if not M then
     --
     -- The module is loaded for the first time.
     --
+    -- !!!WARNING: any change of this table must be reflected in
+    -- `vshard.storage.reload_evolution` module to guarantee
+    -- reloadability of the module.
     M = {
         ---------------- Common module attributes ----------------
         -- The last passed configuration.
@@ -105,6 +110,11 @@ if not M then
         -- a destination replicaset must drop already received
         -- data.
         rebalancer_sending_bucket = 0,
+
+        ------------------------- Reload -------------------------
+        -- Version of the loaded module. This number is used on
+        -- reload to determine which upgrade scripts to run.
+        reload_version = reload_evolution.version,
     }
 end
 
@@ -1864,6 +1874,7 @@ end
 if not rawget(_G, MODULE_INTERNALS) then
     rawset(_G, MODULE_INTERNALS, M)
 else
+    reload_evolution.upgrade(M)
     storage_cfg(M.current_cfg, M.this_replica.uuid)
     M.module_version = M.module_version + 1
 end
diff --git a/vshard/storage/reload_evolution.lua b/vshard/storage/reload_evolution.lua
new file mode 100644
index 0000000..8502a33
--- /dev/null
+++ b/vshard/storage/reload_evolution.lua
@@ -0,0 +1,58 @@
+--
+-- This module is used to upgrade vshard.storage on the fly.
+-- It updates internal Lua structures in case they are changed
+-- in a new version of the module.
+--
+local log = require('log')
+
+--
+-- Array of upgrade functions.
+-- migrations[version] = function which upgrades module version
+-- from `version` to `version + 1`.
+--
+local migrations = {}
+
+-- Initialize the reload_evolution mechanism.
+migrations[#migrations + 1] = function (M)
+    -- Code to update Lua objects.
+end
+
+--
+-- Perform an update based on a version stored in `M` (internals).
+-- @param M Old module internals which should be updated.
+--
+local function upgrade(M)
+    local start_version = M.reload_version or 1
+    if start_version > #migrations then
+        local err_msg = string.format(
+            'vshard.storage.reload_evolution: ' ..
+            'auto-downgrade is not implemented; ' ..
+            'loaded version is %d, upgrade script version is %d',
+            start_version, #migrations
+        )
+        log.error(err_msg)
+        error(err_msg)
+    end
+    for i = start_version, #migrations do
+        local ok, err = pcall(migrations[i], M)
+        if ok then
+            log.info('vshard.storage.reload_evolution: upgraded to %d version',
+                     i)
+        else
+            local err_msg = string.format(
+                'vshard.storage.reload_evolution: ' ..
+                'error during upgrade to %d version: %s', i, err
+            )
+            log.error(err_msg)
+            error(err_msg)
+        end
+        -- Update the version right after each successful step so
+        -- it stays accurate if a later migration fails.
+        M.reload_version = i
+    end
+end
+
+return {
+    version = #migrations,
+    upgrade = upgrade,
+}
-- 
2.14.1

^ permalink raw reply	[flat|nested] 5+ messages in thread

* [tarantool-patches] [PATCH 3/3] Introduce storage reload evolution
  2018-07-18 17:47 [tarantool-patches] [PATCH 0/3] vshard reload mechanism AKhatskevich
@ 2018-07-18 17:47 ` AKhatskevich
  0 siblings, 0 replies; 5+ messages in thread
From: AKhatskevich @ 2018-07-18 17:47 UTC (permalink / raw)
  To: v.shpilevoy, tarantool-patches

Changes:
1. Introduce storage reload evolution.
2. Setup cross-version reload testing.

1:
This mechanism updates Lua objects on reload in case they are
changed in a new vshard.storage version.

Since this commit, any change in vshard.storage.M has to be
reflected in vshard.storage.reload_evolution to guarantee
correct reload.

2:
The testing uses git infrastructure and is performed in the following
way:
1. Copy old version of vshard to a temp folder.
2. Run vshard on this code.
3. Checkout the latest version of the vshard sources.
4. Reload vshard storage.
5. Make sure it works (Perform simple tests).

Notes:
* this patch contains some legacy-driven decisions:
  1. The SOURCEDIR path is retrieved differently in case of a
     packpack build.
  2. The git directory in the `reload_evolution/storage` test
     is copied in a way that works on CentOS 7 and with SOURCEDIR
     mounted in `ro` mode.

Closes #112 #125
---
 .travis.yml                            |   2 +-
 rpm/prebuild.sh                        |   2 +
 test/lua_libs/git_util.lua             |  39 +++++++
 test/lua_libs/util.lua                 |  20 ++++
 test/reload_evolution/storage.result   | 184 +++++++++++++++++++++++++++++++++
 test/reload_evolution/storage.test.lua |  64 ++++++++++++
 test/reload_evolution/storage_1_a.lua  | 144 ++++++++++++++++++++++++++
 test/reload_evolution/storage_1_b.lua  |   1 +
 test/reload_evolution/storage_2_a.lua  |   1 +
 test/reload_evolution/storage_2_b.lua  |   1 +
 test/reload_evolution/suite.ini        |   6 ++
 test/reload_evolution/test.lua         |   9 ++
 vshard/storage/init.lua                |  11 ++
 vshard/storage/reload_evolution.lua    |  58 +++++++++++
 14 files changed, 541 insertions(+), 1 deletion(-)
 create mode 100644 test/lua_libs/git_util.lua
 create mode 100644 test/reload_evolution/storage.result
 create mode 100644 test/reload_evolution/storage.test.lua
 create mode 100755 test/reload_evolution/storage_1_a.lua
 create mode 120000 test/reload_evolution/storage_1_b.lua
 create mode 120000 test/reload_evolution/storage_2_a.lua
 create mode 120000 test/reload_evolution/storage_2_b.lua
 create mode 100644 test/reload_evolution/suite.ini
 create mode 100644 test/reload_evolution/test.lua
 create mode 100644 vshard/storage/reload_evolution.lua

diff --git a/.travis.yml b/.travis.yml
index 54bfe44..eff4a51 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -41,7 +41,7 @@ env:
 script:
   - git describe --long
   - git clone https://github.com/packpack/packpack.git packpack
-  - packpack/packpack
+  - packpack/packpack -e PACKPACK_GIT_SOURCEDIR=/source/
 
 before_deploy:
   - ls -l build/
diff --git a/rpm/prebuild.sh b/rpm/prebuild.sh
index 768b22b..554032b 100755
--- a/rpm/prebuild.sh
+++ b/rpm/prebuild.sh
@@ -1 +1,3 @@
 curl -s https://packagecloud.io/install/repositories/tarantool/1_9/script.rpm.sh | sudo bash
+sudo yum -y install python-devel python-pip
+sudo pip install tarantool msgpack
diff --git a/test/lua_libs/git_util.lua b/test/lua_libs/git_util.lua
new file mode 100644
index 0000000..e2c17d0
--- /dev/null
+++ b/test/lua_libs/git_util.lua
@@ -0,0 +1,39 @@
+--
+-- Lua bridge for some of the git commands.
+--
+local os = require('os')
+
+local temp_file = 'some_strange_rare_unique_file_name_for_git_util'
+local function exec_cmd(options, cmd, args, files, dir, fout)
+    files = files or ''
+    options = options or ''
+    args = args or ''
+    local shell_cmd
+    shell_cmd = string.format('git %s %s %s %s', options, cmd, args, files)
+    if fout then
+        shell_cmd = shell_cmd .. ' >' .. fout
+    end
+    if dir then
+        shell_cmd = string.format('cd %s && %s', dir, shell_cmd)
+    end
+    local res = os.execute(shell_cmd)
+    assert(res == 0, 'Git cmd error: ' .. res)
+end
+
+local function log_hashes(options, args, files, dir)
+    args = args .. " --format='%h'"
+    local local_temp_file = string.format('%s/%s', os.getenv('PWD'), temp_file)
+    exec_cmd(options, 'log', args, files, dir, local_temp_file)
+    local lines = {}
+    for line in io.lines(local_temp_file) do
+        table.insert(lines, line)
+    end
+    os.remove(local_temp_file)
+    return lines
+end
+
+
+return {
+    exec_cmd = exec_cmd,
+    log_hashes = log_hashes
+}
diff --git a/test/lua_libs/util.lua b/test/lua_libs/util.lua
index aeb2342..108510e 100644
--- a/test/lua_libs/util.lua
+++ b/test/lua_libs/util.lua
@@ -1,5 +1,6 @@
 local fiber = require('fiber')
 local log = require('log')
+local fio = require('fio')
 
 local function check_error(func, ...)
     local pstatus, status, err = pcall(func, ...)
@@ -84,10 +85,29 @@ local function has_same_fields(ethalon, data)
     return true
 end
 
+-- Git directory of the project. Used in evolution tests to
+-- fetch old versions of vshard.
+local SOURCEDIR = os.getenv('PACKPACK_GIT_SOURCEDIR')
+if not SOURCEDIR then
+    SOURCEDIR = os.getenv('SOURCEDIR')
+end
+if not SOURCEDIR then
+    local script_path = debug.getinfo(1).source:match("@?(.*/)")
+    script_path = fio.abspath(script_path)
+    SOURCEDIR = fio.abspath(script_path .. '/../../../')
+end
+
+local BUILDDIR = os.getenv('BUILDDIR')
+if not BUILDDIR then
+    BUILDDIR = SOURCEDIR
+end
+
 return {
     check_error = check_error,
     shuffle_masters = shuffle_masters,
     collect_timeouts = collect_timeouts,
     wait_master = wait_master,
     has_same_fields = has_same_fields,
+    SOURCEDIR = SOURCEDIR,
+    BUILDDIR = BUILDDIR,
 }
diff --git a/test/reload_evolution/storage.result b/test/reload_evolution/storage.result
new file mode 100644
index 0000000..2cf21fd
--- /dev/null
+++ b/test/reload_evolution/storage.result
@@ -0,0 +1,184 @@
+test_run = require('test_run').new()
+---
+...
+git_util = require('git_util')
+---
+...
+util = require('util')
+---
+...
+vshard_copy_path = util.BUILDDIR .. '/test/var/vshard_git_tree_copy'
+---
+...
+evolution_log = git_util.log_hashes('', '', 'vshard/storage/reload_evolution.lua', util.SOURCEDIR)
+---
+...
+-- Cleanup the directory after a previous build.
+_ = os.execute('rm -rf ' .. vshard_copy_path)
+---
+...
+-- 1. `git worktree` cannot be used because PACKPACK mounts
+-- `/source/` in `ro` mode.
+-- 2. A plain `cp -rf` cannot be used due to slightly different
+-- behavior on CentOS 7.
+_ = os.execute('mkdir ' .. vshard_copy_path)
+---
+...
+_ = os.execute("cd " .. util.SOURCEDIR .. ' && cp -rf `ls -A --ignore=build` ' .. vshard_copy_path)
+---
+...
+-- Checkout the first commit with a reload_evolution mechanism.
+git_util.exec_cmd('', 'checkout', '-f', '', vshard_copy_path)
+---
+...
+git_util.exec_cmd('', 'checkout', evolution_log[#evolution_log] .. '~1', '', vshard_copy_path)
+---
+...
+REPLICASET_1 = { 'storage_1_a', 'storage_1_b' }
+---
+...
+REPLICASET_2 = { 'storage_2_a', 'storage_2_b' }
+---
+...
+test_run:create_cluster(REPLICASET_1, 'reload_evolution')
+---
+...
+test_run:create_cluster(REPLICASET_2, 'reload_evolution')
+---
+...
+util = require('util')
+---
+...
+util.wait_master(test_run, REPLICASET_1, 'storage_1_a')
+---
+...
+util.wait_master(test_run, REPLICASET_2, 'storage_2_a')
+---
+...
+test_run:switch('storage_1_a')
+---
+- true
+...
+vshard.storage.internal.reload_evolution_version
+---
+- null
+...
+vshard.storage.bucket_force_create(1, vshard.consts.DEFAULT_BUCKET_COUNT / 2)
+---
+- true
+...
+box.space.customer:insert({1, 1, 'customer_name'})
+---
+- [1, 1, 'customer_name']
+...
+test_run:switch('storage_2_a')
+---
+- true
+...
+fiber = require('fiber')
+---
+...
+vshard.storage.bucket_force_create(vshard.consts.DEFAULT_BUCKET_COUNT / 2 + 1, vshard.consts.DEFAULT_BUCKET_COUNT / 2)
+---
+- true
+...
+while test_run:grep_log('storage_2_a', 'The cluster is balanced ok') == nil do vshard.storage.rebalancer_wakeup() fiber.sleep(0.1) end
+---
+...
+test_run:switch('default')
+---
+- true
+...
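+-- Check out the newest reload_evolution commit in the copied tree.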
+git_util.exec_cmd('', 'checkout', evolution_log[1], '', vshard_copy_path)
+---
+...
+test_run:switch('storage_1_a')
+---
+- true
+...
+package.loaded["vshard.storage"] = nil
+---
+...
+vshard.storage = require("vshard.storage")
+---
+...
+test_run:grep_log('storage_1_a', 'vshard.storage.reload_evolution: upgraded to') ~= nil
+---
+- true
+...
+vshard.storage.internal.reload_evolution_version
+---
+- 1
+...
+-- Make sure the storage operates correctly after the reload.
+vshard.storage.bucket_force_drop(2)
+---
+- true
+...
+vshard.storage.bucket_force_create(2)
+---
+- true
+...
+vshard.storage.buckets_info()[2]
+---
+- status: active
+  id: 2
+...
+vshard.storage.call(1, 'read', 'customer_lookup', {1})
+---
+- true
+- accounts: []
+  customer_id: 1
+  name: customer_name
+...
+vshard.storage.bucket_send(1, replicaset2_uuid)
+---
+- true
+...
+vshard.storage.garbage_collector_wakeup()
+---
+...
+fiber = require('fiber')
+---
+...
+while box.space._bucket:get({1}) do fiber.sleep(0.01) end
+---
+...
+test_run:switch('storage_2_a')
+---
+- true
+...
+vshard.storage.bucket_send(1, replicaset1_uuid)
+---
+- true
+...
+test_run:switch('storage_1_a')
+---
+- true
+...
+vshard.storage.call(1, 'read', 'customer_lookup', {1})
+---
+- true
+- accounts: []
+  customer_id: 1
+  name: customer_name
+...
+-- Check info() does not fail.
+vshard.storage.info() ~= nil
+---
+- true
+...
+test_run:switch('default')
+---
+- true
+...
+test_run:drop_cluster(REPLICASET_2)
+---
+...
+test_run:drop_cluster(REPLICASET_1)
+---
+...
+test_run:cmd('clear filter')
+---
+- true
+...
diff --git a/test/reload_evolution/storage.test.lua b/test/reload_evolution/storage.test.lua
new file mode 100644
index 0000000..fc1bd0c
--- /dev/null
+++ b/test/reload_evolution/storage.test.lua
@@ -0,0 +1,64 @@
+test_run = require('test_run').new()
+
+git_util = require('git_util')
+util = require('util')
+vshard_copy_path = util.BUILDDIR .. '/test/var/vshard_git_tree_copy'
+evolution_log = git_util.log_hashes('', '', 'vshard/storage/reload_evolution.lua', util.SOURCEDIR)
+-- Clean up the directory left over from a previous run.
+_ = os.execute('rm -rf ' .. vshard_copy_path)
+-- 1. `git worktree` cannot be used because PACKPACK mounts
+-- `/source/` in `ro` mode.
+-- 2. A plain `cp -rf` cannot be used either, because its
+-- behavior differs slightly on CentOS 7.
+_ = os.execute('mkdir ' .. vshard_copy_path)
+_ = os.execute("cd " .. util.SOURCEDIR .. ' && cp -rf `ls -A --ignore=build` ' .. vshard_copy_path)
+-- Check out the last commit before the reload_evolution mechanism appeared.
+git_util.exec_cmd('', 'checkout', '-f', '', vshard_copy_path)
+git_util.exec_cmd('', 'checkout', evolution_log[#evolution_log] .. '~1', '', vshard_copy_path)
+
+REPLICASET_1 = { 'storage_1_a', 'storage_1_b' }
+REPLICASET_2 = { 'storage_2_a', 'storage_2_b' }
+test_run:create_cluster(REPLICASET_1, 'reload_evolution')
+test_run:create_cluster(REPLICASET_2, 'reload_evolution')
+util = require('util')
+util.wait_master(test_run, REPLICASET_1, 'storage_1_a')
+util.wait_master(test_run, REPLICASET_2, 'storage_2_a')
+
+test_run:switch('storage_1_a')
+vshard.storage.internal.reload_evolution_version
+vshard.storage.bucket_force_create(1, vshard.consts.DEFAULT_BUCKET_COUNT / 2)
+box.space.customer:insert({1, 1, 'customer_name'})
+
+test_run:switch('storage_2_a')
+fiber = require('fiber')
+vshard.storage.bucket_force_create(vshard.consts.DEFAULT_BUCKET_COUNT / 2 + 1, vshard.consts.DEFAULT_BUCKET_COUNT / 2)
+while test_run:grep_log('storage_2_a', 'The cluster is balanced ok') == nil do vshard.storage.rebalancer_wakeup() fiber.sleep(0.1) end
+
+test_run:switch('default')
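+-- Check out the newest reload_evolution commit in the copied tree.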
+git_util.exec_cmd('', 'checkout', evolution_log[1], '', vshard_copy_path)
+
+test_run:switch('storage_1_a')
+package.loaded["vshard.storage"] = nil
+vshard.storage = require("vshard.storage")
+test_run:grep_log('storage_1_a', 'vshard.storage.reload_evolution: upgraded to') ~= nil
+vshard.storage.internal.reload_evolution_version
+-- Make sure the storage operates correctly after the reload.
+vshard.storage.bucket_force_drop(2)
+vshard.storage.bucket_force_create(2)
+vshard.storage.buckets_info()[2]
+vshard.storage.call(1, 'read', 'customer_lookup', {1})
+vshard.storage.bucket_send(1, replicaset2_uuid)
+vshard.storage.garbage_collector_wakeup()
+fiber = require('fiber')
+while box.space._bucket:get({1}) do fiber.sleep(0.01) end
+test_run:switch('storage_2_a')
+vshard.storage.bucket_send(1, replicaset1_uuid)
+test_run:switch('storage_1_a')
+vshard.storage.call(1, 'read', 'customer_lookup', {1})
+-- Check info() does not fail.
+vshard.storage.info() ~= nil
+
+test_run:switch('default')
+test_run:drop_cluster(REPLICASET_2)
+test_run:drop_cluster(REPLICASET_1)
+test_run:cmd('clear filter')
diff --git a/test/reload_evolution/storage_1_a.lua b/test/reload_evolution/storage_1_a.lua
new file mode 100755
index 0000000..3e03f8f
--- /dev/null
+++ b/test/reload_evolution/storage_1_a.lua
@@ -0,0 +1,144 @@
+#!/usr/bin/env tarantool
+
+require('strict').on()
+
+local fio = require('fio')
+local fiber = require('fiber')
+local util = require('util')
+
+-- Get the instance name from the script file name.
+local NAME = fio.basename(arg[0], '.lua')
+
+-- Run this one storage on a different vshard version. To do
+-- that, the vshard sources are expected to be placed into
+-- BUILDDIR/test/var/vshard_git_tree_copy/ by the test.
+if NAME == 'storage_1_a' then
+    local vshard_copy = util.BUILDDIR .. '/test/var/vshard_git_tree_copy'
+    package.path = string.format(
+        '%s/?.lua;%s/?/init.lua;%s',
+        vshard_copy, vshard_copy, package.path
+    )
+end
+
+-- Check if we are running under test-run
+if os.getenv('ADMIN') then
+    test_run = require('test_run').new()
+    require('console').listen(os.getenv('ADMIN'))
+end
+
+-- Load the sharding configuration.
+cfg = require('localcfg')
+-- Name to uuid map
+names = {
+    ['storage_1_a'] = '8a274925-a26d-47fc-9e1b-af88ce939412',
+    ['storage_1_b'] = '3de2e3e1-9ebe-4d0d-abb1-26d301b84633',
+    ['storage_2_a'] = '1e02ae8a-afc0-4e91-ba34-843a356b8ed7',
+    ['storage_2_b'] = '001688c3-66f8-4a31-8e19-036c17d489c2',
+}
+
+replicaset1_uuid = 'cbf06940-0790-498b-948d-042b62cf3d29'
+replicaset2_uuid = 'ac522f65-aa94-4134-9f64-51ee384f1a54'
+replicasets = {replicaset1_uuid, replicaset2_uuid}
+
+-- Start the database with sharding
+vshard = require('vshard')
+vshard.storage.cfg(cfg, names[NAME])
+
+box.once("testapp:schema:1", function()
+    local customer = box.schema.space.create('customer')
+    customer:format({
+        {'customer_id', 'unsigned'},
+        {'bucket_id', 'unsigned'},
+        {'name', 'string'},
+    })
+    customer:create_index('customer_id', {parts = {'customer_id'}})
+    customer:create_index('bucket_id', {parts = {'bucket_id'}, unique = false})
+
+    local account = box.schema.space.create('account')
+    account:format({
+        {'account_id', 'unsigned'},
+        {'customer_id', 'unsigned'},
+        {'bucket_id', 'unsigned'},
+        {'balance', 'unsigned'},
+        {'name', 'string'},
+    })
+    account:create_index('account_id', {parts = {'account_id'}})
+    account:create_index('customer_id', {parts = {'customer_id'}, unique = false})
+    account:create_index('bucket_id', {parts = {'bucket_id'}, unique = false})
+    box.snapshot()
+
+    box.schema.func.create('customer_lookup')
+    box.schema.role.grant('public', 'execute', 'function', 'customer_lookup')
+    box.schema.func.create('customer_add')
+    box.schema.role.grant('public', 'execute', 'function', 'customer_add')
+    box.schema.func.create('echo')
+    box.schema.role.grant('public', 'execute', 'function', 'echo')
+    box.schema.func.create('sleep')
+    box.schema.role.grant('public', 'execute', 'function', 'sleep')
+    box.schema.func.create('raise_luajit_error')
+    box.schema.role.grant('public', 'execute', 'function', 'raise_luajit_error')
+    box.schema.func.create('raise_client_error')
+    box.schema.role.grant('public', 'execute', 'function', 'raise_client_error')
+end)
+
+function customer_add(customer)
+    box.begin()
+    box.space.customer:insert({customer.customer_id, customer.bucket_id,
+                               customer.name})
+    for _, account in ipairs(customer.accounts) do
+        box.space.account:insert({
+            account.account_id,
+            customer.customer_id,
+            customer.bucket_id,
+            0,
+            account.name
+        })
+    end
+    box.commit()
+    return true
+end
+
+function customer_lookup(customer_id)
+    if type(customer_id) ~= 'number' then
+        error('Usage: customer_lookup(customer_id)')
+    end
+
+    local customer = box.space.customer:get(customer_id)
+    if customer == nil then
+        return nil
+    end
+    customer = {
+        customer_id = customer.customer_id;
+        name = customer.name;
+    }
+    local accounts = {}
+    for _, account in box.space.account.index.customer_id:pairs(customer_id) do
+        table.insert(accounts, {
+            account_id = account.account_id;
+            name = account.name;
+            balance = account.balance;
+        })
+    end
+    customer.accounts = accounts;
+    return customer
+end
+
+function echo(...)
+    return ...
+end
+
+function sleep(time)
+    fiber.sleep(time)
+    return true
+end
+
+function raise_luajit_error()
+    assert(1 == 2)
+end
+
+function raise_client_error()
+    box.error(box.error.UNKNOWN)
+end
diff --git a/test/reload_evolution/storage_1_b.lua b/test/reload_evolution/storage_1_b.lua
new file mode 120000
index 0000000..02572da
--- /dev/null
+++ b/test/reload_evolution/storage_1_b.lua
@@ -0,0 +1 @@
+storage_1_a.lua
\ No newline at end of file
diff --git a/test/reload_evolution/storage_2_a.lua b/test/reload_evolution/storage_2_a.lua
new file mode 120000
index 0000000..02572da
--- /dev/null
+++ b/test/reload_evolution/storage_2_a.lua
@@ -0,0 +1 @@
+storage_1_a.lua
\ No newline at end of file
diff --git a/test/reload_evolution/storage_2_b.lua b/test/reload_evolution/storage_2_b.lua
new file mode 120000
index 0000000..02572da
--- /dev/null
+++ b/test/reload_evolution/storage_2_b.lua
@@ -0,0 +1 @@
+storage_1_a.lua
\ No newline at end of file
diff --git a/test/reload_evolution/suite.ini b/test/reload_evolution/suite.ini
new file mode 100644
index 0000000..bb5435b
--- /dev/null
+++ b/test/reload_evolution/suite.ini
@@ -0,0 +1,6 @@
+[default]
+core = tarantool
+description = Reload evolution tests
+script = test.lua
+is_parallel = False
+lua_libs = ../lua_libs/util.lua ../lua_libs/git_util.lua ../../example/localcfg.lua
diff --git a/test/reload_evolution/test.lua b/test/reload_evolution/test.lua
new file mode 100644
index 0000000..ad0543a
--- /dev/null
+++ b/test/reload_evolution/test.lua
@@ -0,0 +1,9 @@
+#!/usr/bin/env tarantool
+
+require('strict').on()
+
+box.cfg{
+    listen = os.getenv("LISTEN"),
+}
+
+require('console').listen(os.getenv('ADMIN'))
diff --git a/vshard/storage/init.lua b/vshard/storage/init.lua
index bf560e6..1740c98 100644
--- a/vshard/storage/init.lua
+++ b/vshard/storage/init.lua
@@ -10,6 +10,7 @@ if rawget(_G, MODULE_INTERNALS) then
     local vshard_modules = {
         'vshard.consts', 'vshard.error', 'vshard.cfg',
         'vshard.replicaset', 'vshard.util',
+        'vshard.storage.reload_evolution'
     }
     for _, module in pairs(vshard_modules) do
         package.loaded[module] = nil
@@ -20,12 +21,16 @@ local lerror = require('vshard.error')
 local lcfg = require('vshard.cfg')
 local lreplicaset = require('vshard.replicaset')
 local util = require('vshard.util')
+local reload_evolution = require('vshard.storage.reload_evolution')
 
 local M = rawget(_G, MODULE_INTERNALS)
 if not M then
     --
     -- The module is loaded for the first time.
     --
+    -- !!!WARNING: any change to this table must be reflected
+    -- in the `vshard.storage.reload_evolution` module to
+    -- guarantee that the module stays reloadable.
     M = {
         ---------------- Common module attributes ----------------
         -- The last passed configuration.
@@ -105,6 +110,11 @@ if not M then
         -- a destination replicaset must drop already received
         -- data.
         rebalancer_sending_bucket = 0,
+
+        ------------------------- Reload -------------------------
+        -- Version of the loaded module. This number is used on
+        -- reload to determine which upgrade scripts to run.
+        reload_evolution_version = reload_evolution.version,
     }
 end
 
@@ -1863,6 +1873,7 @@ end
 if not rawget(_G, MODULE_INTERNALS) then
     rawset(_G, MODULE_INTERNALS, M)
 else
+    reload_evolution.upgrade(M)
     storage_cfg(M.current_cfg, M.this_replica.uuid)
     M.module_version = M.module_version + 1
 end
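
For context, the `else` branch above is taken on a hot reload, which is how
the new test exercises the upgrade path; a minimal sketch of that reload
sequence, run on an already started storage:

    -- Drop the cached module and require it again; the old internals
    -- survive in the global MODULE_INTERNALS table, so the reloaded
    -- code calls reload_evolution.upgrade() on them before re-applying
    -- the configuration.
    package.loaded['vshard.storage'] = nil
    vshard.storage = require('vshard.storage')
    assert(vshard.storage.internal.reload_evolution_version ~= nil)
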
diff --git a/vshard/storage/reload_evolution.lua b/vshard/storage/reload_evolution.lua
new file mode 100644
index 0000000..cfac888
--- /dev/null
+++ b/vshard/storage/reload_evolution.lua
@@ -0,0 +1,58 @@
+--
+-- This module is used to upgrade the vshard.storage module on
+-- the fly (on reload). It updates internal Lua structures in
+-- case they were changed by a newer version of the code.
+--
+local log = require('log')
+
+--
+-- Array of upgrade functions.
+-- migrations[version] = a function which upgrades the module
+-- from `version` to `version + 1`.
+--
+local migrations = {}
+
+-- Initialize the reload_evolution mechanism.
+migrations[#migrations + 1] = function (M)
+    -- Code to update Lua objects.
+end
+
+--
+-- Perform an update based on a version stored in `M` (internals).
+-- @param M Old module internals which should be updated.
+--
+local function upgrade(M)
+    local start_version = M.reload_evolution_version or 1
+    if start_version > #migrations then
+        local err_msg = string.format(
+            'vshard.storage.reload_evolution: ' ..
+            'auto-downgrade is not implemented; ' ..
+            'loaded version is %d, upgrade script version is %d',
+            start_version, #migrations
+        )
+        log.error(err_msg)
+        error(err_msg)
+    end
+    for i = start_version, #migrations do
+        local ok, err = pcall(migrations[i], M)
+        if ok then
+            log.info('vshard.storage.reload_evolution: upgraded to version %d',
+                     i)
+        else
+            local err_msg = string.format(
+                'vshard.storage.reload_evolution: ' ..
+                'error during upgrade to version %d: %s', i, err
+            )
+            log.error(err_msg)
+            error(err_msg)
+        end
+        -- Update the version right after each successful step,
+        -- so it stays accurate even if a later migration fails.
+        M.reload_evolution_version = i
+    end
+end
+
+return {
+    version = #migrations,
+    upgrade = upgrade,
+}
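
If a later commit changes the layout of the internals table, the intended
workflow (sketched here with a hypothetical `route_map_cache` field that is
not part of this patch) is to append one more migration; the exported
`version` and upgrade() pick it up automatically:

    -- Hypothetical example: initialize a field added by a newer version.
    migrations[#migrations + 1] = function(M)
        if M.route_map_cache == nil then
            M.route_map_cache = {}
        end
    end
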
-- 
2.14.1
