Tarantool development patches archive
From: Vladislav Shpilevoy via Tarantool-patches <tarantool-patches@dev.tarantool.org>
To: tarantool-patches@dev.tarantool.org, olegrok@tarantool.org
Subject: [Tarantool-patches] [PATCH vshard 1/4] test: support luatest
Date: Wed,  9 Feb 2022 01:32:31 +0100
Message-ID: <bee6d6b91217a9419bc44127ae55504ba9c35ef3.1644366575.git.v.shpilevoy@tarantool.org>
In-Reply-To: <cover.1644366575.git.v.shpilevoy@tarantool.org>

Test-run's most recent guideline is to write new tests in luatest
instead of diff console tests when possible. Luatest isn't exactly in
perfect condition now, but it has two important features which would
be very useful in vshard:

- Easy cluster build. Everything except a few basic things can be
  done programmatically - from config creation to starting all the
  replicasets, starting the router, and bootstrapping the buckets. No
  need to hardcode that into files like storage_1_a.lua, router_1.lua,
  etc.

- Ability to skip certain tests depending on the Tarantool version.
  For instance, the soon-coming support for netbox's return_raw
  option will need its tests to run only on > 2.10.0-beta2. This is
  also possible in diff tests, but would be notably more complicated
  to achieve. See the sketch right after this list.
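
A sketch of how such a version gate could look in a luatest test.
t.skip_if() is an existing luatest helper; the version comparison
function is hypothetical here - patch 2/4 of this series adds an
actual semver parser:

    local t = require('luatest')
    local g = t.group('router')

    g.test_return_raw = function()
        -- Hypothetical helper - stands in for a real version check.
        t.skip_if(not tarantool_version_at_least(2, 10, 0, 'beta2'),
                  'return_raw needs Tarantool > 2.10.0-beta2')
        -- ... the actual test body ...
    end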

Needed for #312
---
All luatest_helpers/ except vtest.lua are blindly copied from Tarantool's main
repo. I didn't check them for bugs, code style, etc.
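
For a quick picture of the intended usage, here is roughly what a test does
with the new helpers (condensed from router_test.lua below; 'g' is a luatest
group):

    local vtest = require('test.luatest_helpers.vtest')

    local cfg = vtest.config_new({
        sharding = {{
            replicas = {
                replica_1_a = {master = true},
                replica_1_b = {},
            },
        }},
        bucket_count = 100,
    })
    -- Build and start all the storage replicasets.
    vtest.storage_new(g, cfg)
    -- Build, start, and configure a router.
    local router = vtest.router_new(g, 'router', cfg)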

 test-run                            |   2 +-
 test/instances/router.lua           |  15 ++
 test/instances/storage.lua          |  21 +++
 test/luatest_helpers.lua            |  72 ++++++++
 test/luatest_helpers/asserts.lua    |  43 +++++
 test/luatest_helpers/cluster.lua    | 132 ++++++++++++++
 test/luatest_helpers/server.lua     | 266 ++++++++++++++++++++++++++++
 test/luatest_helpers/vtest.lua      | 135 ++++++++++++++
 test/router-luatest/router_test.lua |  54 ++++++
 test/router-luatest/suite.ini       |   5 +
 10 files changed, 744 insertions(+), 1 deletion(-)
 create mode 100755 test/instances/router.lua
 create mode 100755 test/instances/storage.lua
 create mode 100644 test/luatest_helpers.lua
 create mode 100644 test/luatest_helpers/asserts.lua
 create mode 100644 test/luatest_helpers/cluster.lua
 create mode 100644 test/luatest_helpers/server.lua
 create mode 100644 test/luatest_helpers/vtest.lua
 create mode 100644 test/router-luatest/router_test.lua
 create mode 100644 test/router-luatest/suite.ini

diff --git a/test-run b/test-run
index c345003..2604c46 160000
--- a/test-run
+++ b/test-run
@@ -1 +1 @@
-Subproject commit c34500365efe8316e79c7936a2f2d04644602936
+Subproject commit 2604c46c7b6368dbde59489d5303ce3d1d430331
diff --git a/test/instances/router.lua b/test/instances/router.lua
new file mode 100755
index 0000000..ccec6c1
--- /dev/null
+++ b/test/instances/router.lua
@@ -0,0 +1,15 @@
+#!/usr/bin/env tarantool
+local helpers = require('test.luatest_helpers')
+-- Do not load the entire vshard module into the global namespace - to catch
+-- errors when code wrongly relies on that.
+_G.vshard = {
+    router = require('vshard.router'),
+}
+-- For some reason shutdown hangs on new Tarantool versions even though the
+-- nodes do not seem to have any long requests running.
+box.ctl.set_on_shutdown_timeout(0.001)
+
+box.cfg(helpers.box_cfg())
+box.schema.user.grant('guest', 'super', nil, nil, {if_not_exists = true})
+
+_G.ready = true
diff --git a/test/instances/storage.lua b/test/instances/storage.lua
new file mode 100755
index 0000000..2d679ba
--- /dev/null
+++ b/test/instances/storage.lua
@@ -0,0 +1,21 @@
+#!/usr/bin/env tarantool
+local helpers = require('test.luatest_helpers')
+-- Do not load the entire vshard module into the global namespace - to catch
+-- errors when code wrongly relies on that.
+_G.vshard = {
+    storage = require('vshard.storage'),
+}
+-- For some reason shutdown hangs on new Tarantool versions even though the
+-- nodes do not seem to have any long requests running.
+box.ctl.set_on_shutdown_timeout(0.001)
+
+box.cfg(helpers.box_cfg())
+box.schema.user.grant('guest', 'super', nil, nil, {if_not_exists = true})
+
+local function echo(...)
+    return ...
+end
+
+_G.echo = echo
+
+_G.ready = true
diff --git a/test/luatest_helpers.lua b/test/luatest_helpers.lua
new file mode 100644
index 0000000..283906c
--- /dev/null
+++ b/test/luatest_helpers.lua
@@ -0,0 +1,72 @@
+local fun = require('fun')
+local json = require('json')
+local fio = require('fio')
+local log = require('log')
+local yaml = require('yaml')
+local fiber = require('fiber')
+
+local luatest_helpers = {
+    SOCKET_DIR = fio.abspath(os.getenv('VARDIR') or 'test/var')
+}
+
+luatest_helpers.Server = require('test.luatest_helpers.server')
+
+local function default_cfg()
+    return {
+        work_dir = os.getenv('TARANTOOL_WORKDIR'),
+        listen = os.getenv('TARANTOOL_LISTEN'),
+        log = ('%s/%s.log'):format(os.getenv('TARANTOOL_WORKDIR'), os.getenv('TARANTOOL_ALIAS')),
+    }
+end
+
+local function env_cfg()
+    local src = os.getenv('TARANTOOL_BOX_CFG')
+    if src == nil then
+        return {}
+    end
+    local res = json.decode(src)
+    assert(type(res) == 'table')
+    return res
+end
+
+-- Collect box.cfg table from values passed through
+-- luatest_helpers.Server({<...>}) and from the given argument.
+--
+-- Use it from inside an instance script.
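+--
+-- A usage sketch (log_level here is illustrative; explicitly passed options
+-- override the env-provided ones):
+--
+--     box.cfg(helpers.box_cfg({log_level = 6}))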
+function luatest_helpers.box_cfg(cfg)
+    return fun.chain(default_cfg(), env_cfg(), cfg or {}):tomap()
+end
+
+function luatest_helpers.instance_uri(alias, instance_id)
+    if instance_id == nil then
+        instance_id = ''
+    end
+    instance_id = tostring(instance_id)
+    return ('%s/%s%s.iproto'):format(luatest_helpers.SOCKET_DIR, alias, instance_id)
+end
+
+function luatest_helpers:get_vclock(server)
+    return server:eval('return box.info.vclock')
+end
+
+function luatest_helpers:wait_vclock(server, to_vclock)
+    while true do
+        local vclock = self:get_vclock(server)
+        local ok = true
+        for server_id, to_lsn in pairs(to_vclock) do
+            local lsn = vclock[server_id]
+            if lsn == nil or lsn < to_lsn then
+                ok = false
+                break
+            end
+        end
+        if ok then
+            return
+        end
+        log.info("wait vclock: %s to %s", yaml.encode(vclock),
+                 yaml.encode(to_vclock))
+        fiber.sleep(0.001)
+    end
+end
+
+return luatest_helpers
diff --git a/test/luatest_helpers/asserts.lua b/test/luatest_helpers/asserts.lua
new file mode 100644
index 0000000..77385d8
--- /dev/null
+++ b/test/luatest_helpers/asserts.lua
@@ -0,0 +1,43 @@
+local t = require('luatest')
+
+local asserts = {}
+
+function asserts:new(object)
+    self:inherit(object)
+    object:initialize()
+    return object
+end
+
+function asserts:inherit(object)
+    object = object or {}
+    setmetatable(object, self)
+    self.__index = self
+    return object
+end
+
+function asserts:assert_server_follow_upstream(server, id)
+    local status = server:eval(
+        ('return box.info.replication[%d].upstream.status'):format(id))
+    t.assert_equals(status, 'follow',
+        ('%s: this server does not follow others.'):format(server.alias))
+end
+
+
+function asserts:wait_fullmesh(servers, wait_time)
+    wait_time = wait_time or 20
+    t.helpers.retrying({timeout = wait_time}, function()
+        for _, server in pairs(servers) do
+            for _, server2 in pairs(servers) do
+                if server ~= server2 then
+                    local server_id = server:eval('return box.info.id')
+                    local server2_id = server2:eval('return box.info.id')
+                    if server_id ~= server2_id then
+                        self:assert_server_follow_upstream(server, server2_id)
+                    end
+                end
+            end
+        end
+    end)
+end
+
+return asserts
diff --git a/test/luatest_helpers/cluster.lua b/test/luatest_helpers/cluster.lua
new file mode 100644
index 0000000..43e3479
--- /dev/null
+++ b/test/luatest_helpers/cluster.lua
@@ -0,0 +1,132 @@
+local fio = require('fio')
+local Server = require('test.luatest_helpers.server')
+
+local root = os.environ()['SOURCEDIR'] or '.'
+
+local Cluster = {}
+
+function Cluster:new(object)
+    self:inherit(object)
+    object:initialize()
+    self.servers = object.servers
+    self.built_servers = object.built_servers
+    return object
+end
+
+function Cluster:inherit(object)
+    object = object or {}
+    setmetatable(object, self)
+    self.__index = self
+    self.servers = {}
+    self.built_servers = {}
+    return object
+end
+
+function Cluster:initialize()
+    self.servers = {}
+end
+
+function Cluster:server(alias)
+    for _, server in ipairs(self.servers) do
+        if server.alias == alias then
+            return server
+        end
+    end
+    return nil
+end
+
+function Cluster:drop()
+    for _, server in ipairs(self.servers) do
+        if server ~= nil then
+            server:stop()
+            server:cleanup()
+        end
+    end
+end
+
+function Cluster:get_index(server)
+    local index = nil
+    for i, v in ipairs(self.servers) do
+        if (v.id == server) then
+          index = i
+        end
+    end
+    return index
+end
+
+function Cluster:delete_server(server)
+    local idx = self:get_index(server)
+    if idx == nil then
+        print("Key does not exist")
+    else
+        table.remove(self.servers, idx)
+    end
+end
+
+function Cluster:stop()
+    for _, server in ipairs(self.servers) do
+        if server ~= nil then
+            server:stop()
+        end
+    end
+end
+
+function Cluster:start(opts)
+    for _, server in ipairs(self.servers) do
+        if not server.process then
+            server:start({wait_for_readiness = false})
+        end
+    end
+
+    -- The option is true by default.
+    local wait_for_readiness = true
+    if opts ~= nil and opts.wait_for_readiness ~= nil then
+        wait_for_readiness = opts.wait_for_readiness
+    end
+
+    if wait_for_readiness then
+        for _, server in ipairs(self.servers) do
+            server:wait_for_readiness()
+        end
+    end
+end
+
+function Cluster:build_server(server_config, instance_file)
+    instance_file = instance_file or 'default.lua'
+    server_config = table.deepcopy(server_config)
+    server_config.command = fio.pathjoin(root, 'test/instances/', instance_file)
+    assert(server_config.alias, 'server.alias must be given')
+    local server = Server:new(server_config)
+    table.insert(self.built_servers, server)
+    return server
+end
+
+function Cluster:add_server(server)
+    if self:server(server.alias) ~= nil then
+        error(('Server with alias %q already exists'):format(server.alias))
+    end
+    table.insert(self.servers, server)
+end
+
+function Cluster:build_and_add_server(config, instance_file)
+    local server = self:build_server(config, instance_file)
+    self:add_server(server)
+    return server
+end
+
+
+function Cluster:get_leader()
+    for _, instance in ipairs(self.servers) do
+        if instance:eval('return box.info.ro') == false then
+            return instance
+        end
+    end
+end
+
+function Cluster:exec_on_leader(bootstrap_function)
+    local leader = self:get_leader()
+    return leader:exec(bootstrap_function)
+end
+
+
+return Cluster
diff --git a/test/luatest_helpers/server.lua b/test/luatest_helpers/server.lua
new file mode 100644
index 0000000..714c537
--- /dev/null
+++ b/test/luatest_helpers/server.lua
@@ -0,0 +1,266 @@
+local clock = require('clock')
+local digest = require('digest')
+local ffi = require('ffi')
+local fiber = require('fiber')
+local fio = require('fio')
+local fun = require('fun')
+local json = require('json')
+local errno = require('errno')
+
+local checks = require('checks')
+local luatest = require('luatest')
+
+ffi.cdef([[
+    int kill(pid_t pid, int sig);
+]])
+
+local Server = luatest.Server:inherit({})
+
+local WAIT_TIMEOUT = 60
+local WAIT_DELAY = 0.1
+
+local DEFAULT_CHECKPOINT_PATTERNS = {"*.snap", "*.xlog", "*.vylog",
+                                     "*.inprogress", "[0-9]*/"}
+
+-- Differences from luatest.Server:
+--
+-- * 'alias' is mandatory.
+-- * 'command' is optional, assumed test/instances/default.lua by
+--   default.
+-- * 'workdir' is optional, determined by 'alias'.
+-- * The new 'box_cfg' parameter.
+-- * The new 'engine' parameter - provides the engine for parameterized
+--   tests.
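+--
+-- Example usage (the values are illustrative):
+--
+--     local server = Server:new({
+--         alias = 'storage_1_a',
+--         command = 'test/instances/storage.lua',
+--         box_cfg = {listen = 3301},
+--     })
+--     server:start()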
+Server.constructor_checks = fun.chain(Server.constructor_checks, {
+    alias = 'string',
+    command = '?string',
+    workdir = '?string',
+    box_cfg = '?table',
+    engine = '?string',
+}):tomap()
+
+function Server:initialize()
+    local vardir = fio.abspath(os.getenv('VARDIR') or 'test/var')
+
+    if self.id == nil then
+        local random = digest.urandom(9)
+        self.id = digest.base64_encode(random, {urlsafe = true})
+    end
+    if self.command == nil then
+        self.command = 'test/instances/default.lua'
+    end
+    if self.workdir == nil then
+        self.workdir = ('%s/%s-%s'):format(vardir, self.alias, self.id)
+        fio.rmtree(self.workdir)
+        fio.mktree(self.workdir)
+    end
+    if self.net_box_port == nil and self.net_box_uri == nil then
+        self.net_box_uri = ('%s/%s.iproto'):format(vardir, self.alias)
+        fio.mktree(vardir)
+    end
+
+    -- AFAIU, the inner getmetatable() returns our helpers.Server
+    -- class, the outer one returns luatest.Server class.
+    getmetatable(getmetatable(self)).initialize(self)
+end
+
+--- Generates environment to run process with.
+-- The result is merged into os.environ().
+-- @return map
+function Server:build_env()
+    local res = getmetatable(getmetatable(self)).build_env(self)
+    if self.box_cfg ~= nil then
+        res.TARANTOOL_BOX_CFG = json.encode(self.box_cfg)
+    end
+    res.TARANTOOL_ENGINE = self.engine
+    return res
+end
+
+local function wait_cond(cond_name, server, func, ...)
+    local alias = server.alias
+    local id = server.id
+    local pid = server.process.pid
+
+    local deadline = clock.time() + WAIT_TIMEOUT
+    while true do
+        if func(...) then
+            return
+        end
+        if clock.time() > deadline then
+            error(('Waiting for "%s" on server %s-%s (PID %d) timed out')
+                  :format(cond_name, alias, id, pid))
+        end
+        fiber.sleep(WAIT_DELAY)
+    end
+end
+
+function Server:wait_for_readiness()
+    return wait_cond('readiness', self, function()
+        local ok, is_ready = pcall(function()
+            self:connect_net_box()
+            return self.net_box:eval('return _G.ready') == true
+        end)
+        return ok and is_ready
+    end)
+end
+
+function Server:wait_election_leader()
+    -- Include read-only property too because if an instance is a leader, it
+    -- does not mean it finished the synchro queue ownership transition. It is
+    -- read-only until that happens. But in tests usually the leader is needed
+    -- as a writable node.
+    return wait_cond('election leader', self, self.exec, self, function()
+        return box.info.election.state == 'leader' and not box.info.ro
+    end)
+end
+
+function Server:wait_election_leader_found()
+    return wait_cond('election leader is found', self, self.exec, self,
+                     function() return box.info.election.leader ~= 0 end)
+end
+
+-- Unlike the original luatest.Server function, it waits for the server
+-- to start.
+function Server:start(opts)
+    checks('table', {
+        wait_for_readiness = '?boolean',
+    })
+    getmetatable(getmetatable(self)).start(self)
+
+    -- The option is true by default.
+    local wait_for_readiness = true
+    if opts ~= nil and opts.wait_for_readiness ~= nil then
+        wait_for_readiness = opts.wait_for_readiness
+    end
+
+    if wait_for_readiness then
+        self:wait_for_readiness()
+    end
+end
+
+function Server:instance_id()
+    -- Cache the value when it is found the first time.
+    if self.instance_id_value then
+        return self.instance_id_value
+    end
+    local id = self:exec(function() return box.info.id end)
+    -- But do not cache 0 - it is an anon instance, its ID might change.
+    if id ~= 0 then
+        self.instance_id_value = id
+    end
+    return id
+end
+
+function Server:instance_uuid()
+    -- Cache the value when it is found the first time.
+    if self.instance_uuid_value then
+        return self.instance_uuid_value
+    end
+    local uuid = self:exec(function() return box.info.uuid end)
+    self.instance_uuid_value = uuid
+    return uuid
+end
+
+-- TODO: Add the 'wait_for_readiness' parameter for the restart()
+-- method.
+
+-- Unlike the original luatest.Server function, it waits until the
+-- server stops.
+function Server:stop()
+    local alias = self.alias
+    local id = self.id
+    if self.process then
+        local pid = self.process.pid
+        getmetatable(getmetatable(self)).stop(self)
+
+        local deadline = clock.time() + WAIT_TIMEOUT
+        while true do
+            if ffi.C.kill(pid, 0) ~= 0 then
+                break
+            end
+            if clock.time() > deadline then
+                error(('Stopping of server %s-%s (PID %d) timed out'):format(
+                    alias, id, pid))
+            end
+            fiber.sleep(WAIT_DELAY)
+        end
+    end
+end
+
+function Server:cleanup()
+    for _, pattern in ipairs(DEFAULT_CHECKPOINT_PATTERNS) do
+        fio.rmtree(('%s/%s'):format(self.workdir, pattern))
+    end
+    self.instance_id_value = nil
+    self.instance_uuid_value = nil
+end
+
+function Server:drop()
+    self:stop()
+    self:cleanup()
+end
+
+-- A copy of test_run:grep_log.
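+-- Returns a match of the Lua pattern 'what' from the tail of the instance
+-- log, or nil when there is no match. Usage example (the pattern is
+-- illustrative): server:grep_log('bootstrap').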
+function Server:grep_log(what, bytes, opts)
+    local opts = opts or {}
+    local noreset = opts.noreset or false
+    -- If the instance has crashed, pass opts.filename to be able to grep its log.
+    local filename = opts.filename or self:eval('return box.cfg.log')
+    local file = fio.open(filename, {'O_RDONLY', 'O_NONBLOCK'})
+
+    local function fail(msg)
+        local err = errno.strerror()
+        if file ~= nil then
+            file:close()
+        end
+        error(string.format("%s: %s: %s", msg, filename, err))
+    end
+
+    if file == nil then
+        fail("Failed to open log file")
+    end
+    io.flush() -- attempt to flush stdout == log fd
+    local filesize = file:seek(0, 'SEEK_END')
+    if filesize == nil then
+        fail("Failed to get log file size")
+    end
+    local bytes = bytes or 65536 -- don't read whole log - it can be huge
+    bytes = bytes > filesize and filesize or bytes
+    if file:seek(-bytes, 'SEEK_END') == nil then
+        fail("Failed to seek log file")
+    end
+    local found, buf
+    repeat -- read file in chunks
+        local s = file:read(2048)
+        if s == nil then
+            fail("Failed to read log file")
+        end
+        local pos = 1
+        repeat -- split read string in lines
+            local endpos = string.find(s, '\n', pos)
+            endpos = endpos and endpos - 1 -- strip terminating \n
+            local line = string.sub(s, pos, endpos)
+            if endpos == nil and s ~= '' then
+                -- line doesn't end with \n or eof, append it to buffer
+                -- to be checked on next iteration
+                buf = buf or {}
+                table.insert(buf, line)
+            else
+                if buf ~= nil then -- prepend line with buffered data
+                    table.insert(buf, line)
+                    line = table.concat(buf)
+                    buf = nil
+                end
+                if string.match(line, "Starting instance") and not noreset then
+                    found = nil -- server was restarted, reset search
+                else
+                    found = string.match(line, what) or found
+                end
+            end
+            pos = endpos and endpos + 2 -- jump to char after \n
+        until pos == nil
+    until s == ''
+    file:close()
+    return found
+end
+
+return Server
diff --git a/test/luatest_helpers/vtest.lua b/test/luatest_helpers/vtest.lua
new file mode 100644
index 0000000..affc008
--- /dev/null
+++ b/test/luatest_helpers/vtest.lua
@@ -0,0 +1,135 @@
+local helpers = require('test.luatest_helpers')
+local cluster = require('test.luatest_helpers.cluster')
+
+local uuid_idx = 1
+
+--
+-- New UUID unique within this process. Generation is sequential, not random -
+-- for simplicity and reproducibility.
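+-- The first calls return '00000000-0000-0000-0000-000000000001',
+-- '00000000-0000-0000-0000-000000000002', and so on.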
+--
+local function uuid_next()
+    local last = tostring(uuid_idx)
+    uuid_idx = uuid_idx + 1
+    assert(#last <= 12)
+    return '00000000-0000-0000-0000-'..string.rep('0', 12 - #last)..last
+end
+
+--
+-- Build a valid vshard config by a template. A template does not specify
+-- anything volatile such as URIs, UUIDs - these are installed at runtime.
+--
+local function config_new(templ)
+    local res = table.deepcopy(templ)
+    local sharding = {}
+    res.sharding = sharding
+    for _, replicaset_templ in pairs(templ.sharding) do
+        local replicaset_uuid = uuid_next()
+        local replicas = {}
+        local replicaset = table.deepcopy(replicaset_templ)
+        replicaset.replicas = replicas
+        for replica_name, replica_templ in pairs(replicaset_templ.replicas) do
+            local replica_uuid = uuid_next()
+            local replica = table.deepcopy(replica_templ)
+            replica.name = replica_name
+            replica.uri = 'storage:storage@'..helpers.instance_uri(replica_name)
+            replicas[replica_uuid] = replica
+        end
+        sharding[replicaset_uuid] = replicaset
+    end
+    return res
+end
+
+--
+-- Build new cluster by a given config.
+--
+local function storage_new(g, cfg)
+    if not g.cluster then
+        g.cluster = cluster:new({})
+    end
+    local all_servers = {}
+    local masters = {}
+    local replicas = {}
+    for replicaset_uuid, replicaset in pairs(cfg.sharding) do
+        -- Luatest depends on box.cfg being ready and listening. Need to
+        -- configure it before vshard.storage.cfg().
+        local box_repl = {}
+        for _, replica in pairs(replicaset.replicas) do
+            table.insert(box_repl, replica.uri)
+        end
+        local box_cfg = {
+            replication = box_repl,
+            -- Speed retries up.
+            replication_timeout = 0.1,
+        }
+        for replica_uuid, replica in pairs(replicaset.replicas) do
+            local name = replica.name
+            box_cfg.instance_uuid = replica_uuid
+            box_cfg.replicaset_uuid = replicaset_uuid
+            box_cfg.listen = helpers.instance_uri(replica.name)
+            -- Need to specify read_only explicitly to know who the master is.
+            box_cfg.read_only = not replica.master
+            local server = g.cluster:build_server({
+                alias = name,
+                box_cfg = box_cfg,
+            }, 'storage.lua')
+            g[name] = server
+            g.cluster:add_server(server)
+
+            table.insert(all_servers, server)
+            if replica.master then
+                table.insert(masters, server)
+            else
+                table.insert(replicas, server)
+            end
+        end
+    end
+    for _, replica in pairs(all_servers) do
+        replica:start({wait_for_readiness = false})
+    end
+    for _, master in pairs(masters) do
+        master:wait_for_readiness()
+        master:exec(function(cfg)
+            -- Logged in as guest with 'super' access rights. Yet 'super' is
+            -- not enough to grant the 'replication' privilege. The simplest
+            -- way around that is to log in as admin temporarily.
+            local user = box.session.user()
+            box.session.su('admin')
+
+            vshard.storage.cfg(cfg, box.info.uuid)
+            box.schema.user.grant('storage', 'super')
+
+            box.session.su(user)
+        end, {cfg})
+    end
+    for _, replica in pairs(replicas) do
+        replica:wait_for_readiness()
+        replica:exec(function(cfg)
+            vshard.storage.cfg(cfg, box.info.uuid)
+        end, {cfg})
+    end
+end
+
+--
+-- Create a new router in the cluster.
+--
+local function router_new(g, name, cfg)
+    if not g.cluster then
+        g.cluster = cluster:new({})
+    end
+    local server = g.cluster:build_server({
+        alias = name,
+    }, 'router.lua')
+    g[name] = server
+    g.cluster:add_server(server)
+    server:start()
+    server:exec(function(cfg)
+        vshard.router.cfg(cfg)
+    end, {cfg})
+    return server
+end
+
+return {
+    config_new = config_new,
+    storage_new = storage_new,
+    router_new = router_new,
+}
diff --git a/test/router-luatest/router_test.lua b/test/router-luatest/router_test.lua
new file mode 100644
index 0000000..621794a
--- /dev/null
+++ b/test/router-luatest/router_test.lua
@@ -0,0 +1,54 @@
+local t = require('luatest')
+local vtest = require('test.luatest_helpers.vtest')
+local wait_timeout = 120
+
+local g = t.group('router')
+local cluster_cfg = vtest.config_new({
+    sharding = {
+        {
+            replicas = {
+                replica_1_a = {
+                    master = true,
+                },
+                replica_1_b = {},
+            },
+        },
+        {
+            replicas = {
+                replica_2_a = {
+                    master = true,
+                },
+                replica_2_b = {},
+            },
+        },
+    },
+    bucket_count = 100
+})
+
+g.before_all(function()
+    vtest.storage_new(g, cluster_cfg)
+
+    t.assert_equals(g.replica_1_a:exec(function()
+        return #vshard.storage.info().alerts
+    end), 0, 'no alerts after boot')
+
+    local router = vtest.router_new(g, 'router', cluster_cfg)
+    g.router = router
+    local res, err = router:exec(function(timeout)
+        return vshard.router.bootstrap({timeout = timeout})
+    end, {wait_timeout})
+    t.assert(res and not err, 'bootstrap buckets')
+end)
+
+g.after_all(function()
+    g.cluster:drop()
+end)
+
+g.test_basic = function(g)
+    local router = g.router
+    local res, err = router:exec(function(timeout)
+        return vshard.router.callrw(1, 'echo', {1}, {timeout = timeout})
+    end, {wait_timeout})
+    t.assert(not err, 'no error')
+    t.assert_equals(res, 1, 'good result')
+end
diff --git a/test/router-luatest/suite.ini b/test/router-luatest/suite.ini
new file mode 100644
index 0000000..ae79147
--- /dev/null
+++ b/test/router-luatest/suite.ini
@@ -0,0 +1,5 @@
+[default]
+core = luatest
+description = Router tests
+is_parallel = True
+release_disabled =
-- 
2.24.3 (Apple Git-128)

