To: v.shpilevoy@tarantool.org
Cc: tarantool-patches@dev.tarantool.org, Yan Shtunder
Date: Tue, 22 Feb 2022 15:56:43 +0300
Message-Id: <20220222125643.42705-1-ya.shtunder@gmail.com>
X-Mailer: git-send-email 2.25.1
Subject: [Tarantool-patches] [PATCH v4 net.box] Add predefined system events for pub/sub
From: Yan Shtunder via Tarantool-patches
Reply-To: Yan Shtunder
List-Id: Tarantool development patches

Added predefined system events: box.status, box.id, box.election and
box.schema.

Closes #6260

@TarantoolBot document
Title: Built-in events for pub/sub

Built-in events are needed, first of all, to learn who the master is,
unless that is defined in an application-specific way. Knowing who the
master is lets a client send changes to the correct instance and, when it
matters, read the most up-to-date data. More built-in events are also
defined for other mutable properties: the replication list of the server,
schema version changes, and the instance state.
Example usage:

* Client:
```lua
conn = net.box.connect(URI)
-- Subscribe to updates of key 'box.id'
w = conn:watch('box.id', function(key, value)
    assert(key == 'box.id')
    -- do something with value
end)
-- or to updates of key 'box.status'
w = conn:watch('box.status', function(key, value)
    assert(key == 'box.status')
    -- do something with value
end)
-- or to updates of key 'box.election'
w = conn:watch('box.election', function(key, value)
    assert(key == 'box.election')
    -- do something with value
end)
-- or to updates of key 'box.schema'
w = conn:watch('box.schema', function(key, value)
    assert(key == 'box.schema')
    -- do something with value
end)
-- Unregister the watcher when it's no longer needed.
w:unregister()
```
---
Issue: https://github.com/tarantool/tarantool/issues/6260
Patch: https://github.com/tarantool/tarantool/tree/yshtunder/gh-6260-events-v4

 .../unreleased/gh-6260-add-builtin-events.md  |   4 +
 src/box/alter.cc                              |  21 +-
 src/box/box.cc                                | 101 +++++-
 src/box/box.h                                 |  18 ++
 src/box/raft.c                                |   1 +
 src/box/replication.cc                        |   2 +
 .../gh_6260_add_builtin_events_test.lua       | 294 ++++++++++++++++++
 7 files changed, 427 insertions(+), 14 deletions(-)
 create mode 100644 changelogs/unreleased/gh-6260-add-builtin-events.md
 create mode 100644 test/box-luatest/gh_6260_add_builtin_events_test.lua

diff --git a/changelogs/unreleased/gh-6260-add-builtin-events.md b/changelogs/unreleased/gh-6260-add-builtin-events.md
new file mode 100644
index 000000000..1d13e410f
--- /dev/null
+++ b/changelogs/unreleased/gh-6260-add-builtin-events.md
@@ -0,0 +1,4 @@
+## feature/core
+
+* Added predefined system events: `box.status`, `box.id`, `box.election`
+  and `box.schema` (gh-6260)
diff --git a/src/box/alter.cc b/src/box/alter.cc
index b85d279e3..cd48cd6aa 100644
--- a/src/box/alter.cc
+++ b/src/box/alter.cc
@@ -57,6 +57,14 @@
 #include "sequence.h"
 #include "sql.h"
 #include "constraint_id.h"
+#include "box.h"
+
+static void
+box_schema_version_bump(void)
+{
+    ++schema_version;
+    box_broadcast_schema();
+}

 /* {{{ Auxiliary functions and methods. */

@@ -1684,7 +1692,7 @@ void
 UpdateSchemaVersion::alter(struct alter_space *alter)
 {
     (void)alter;
-    ++schema_version;
+    box_schema_version_bump();
 }

 /**
@@ -2259,7 +2267,7 @@ on_replace_dd_space(struct trigger * /* trigger */, void *event)
         * AlterSpaceOps are registered in case of space
         * create.
         */
-        ++schema_version;
+        box_schema_version_bump();
        /*
         * So may happen that until the DDL change record
         * is written to the WAL, the space is used for
@@ -2373,7 +2381,7 @@ on_replace_dd_space(struct trigger * /* trigger */, void *event)
         * deleting the space from the space_cache, since no
         * AlterSpaceOps are registered in case of space drop.
         */
-        ++schema_version;
+        box_schema_version_bump();
        struct trigger *on_commit =
            txn_alter_trigger_new(on_drop_space_commit, old_space);
        if (on_commit == NULL)
@@ -4238,6 +4246,7 @@ on_replace_dd_schema(struct trigger * /* trigger */, void *event)
        if (tuple_field_uuid(new_tuple, BOX_CLUSTER_FIELD_UUID, &uu) != 0)
            return -1;
        REPLICASET_UUID = uu;
+       box_broadcast_id();
        say_info("cluster uuid %s", tt_uuid_str(&uu));
    } else if (strcmp(key, "version") == 0) {
        if (new_tuple != NULL) {
@@ -5084,7 +5093,7 @@ on_replace_dd_trigger(struct trigger * /* trigger */, void *event)

    txn_stmt_on_rollback(stmt, on_rollback);
    txn_stmt_on_commit(stmt, on_commit);
-   ++schema_version;
+   box_schema_version_bump();
    return 0;
 }

@@ -5611,7 +5620,7 @@ on_replace_dd_fk_constraint(struct trigger * /* trigger*/, void *event)
        space_reset_fk_constraint_mask(child_space);
        space_reset_fk_constraint_mask(parent_space);
    }
-   ++schema_version;
+   box_schema_version_bump();
    return 0;
 }

@@ -5867,7 +5876,7 @@ on_replace_dd_ck_constraint(struct trigger * /* trigger*/, void *event)

    if (trigger_run(&on_alter_space, space) != 0)
        return -1;
-   ++schema_version;
+   box_schema_version_bump();
    return 0;
 }
diff --git a/src/box/box.cc b/src/box/box.cc
index 6a33203df..38a0f24e6 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -85,6 +85,7 @@
 #include "audit.h"
 #include "trivia/util.h"
 #include "version.h"
+#include "mp_uuid.h"

 static char status[64] = "unknown";

@@ -98,14 +99,6 @@ double txn_timeout_default;
 struct rlist box_on_shutdown_trigger_list =
     RLIST_HEAD_INITIALIZER(box_on_shutdown_trigger_list);

-static void title(const char *new_status)
-{
-   snprintf(status, sizeof(status), "%s", new_status);
-   title_set_status(new_status);
-   title_update();
-   systemd_snotify("STATUS=%s", status);
-}
-
 const struct vclock *box_vclock = &replicaset.vclock;

 /**
@@ -161,6 +154,19 @@ static struct fiber_pool tx_fiber_pool;
  */
 static struct cbus_endpoint tx_prio_endpoint;

+static void
+box_broadcast_status(void);
+
+static void
+title(const char *new_status)
+{
+   snprintf(status, sizeof(status), "%s", new_status);
+   title_set_status(new_status);
+   title_update();
+   systemd_snotify("STATUS=%s", status);
+   box_broadcast_status();
+}
+
 void
 box_update_ro_summary(void)
 {
@@ -174,6 +180,8 @@ box_update_ro_summary(void)
    if (is_ro_summary)
        engine_switch_to_ro();
    fiber_cond_broadcast(&ro_cond);
+   box_broadcast_status();
+   box_broadcast_election();
 }

 const char *
@@ -3355,6 +3363,7 @@ bootstrap(const struct tt_uuid *instance_uuid,
    else
        tt_uuid_create(&INSTANCE_UUID);

+   box_broadcast_id();
    say_info("instance uuid %s", tt_uuid_str(&INSTANCE_UUID));

    /*
@@ -3933,3 +3942,79 @@ box_reset_stat(void)
    engine_reset_stat();
    space_foreach(box_reset_space_stat, NULL);
 }
+
+void
+box_broadcast_id(void)
+{
+   char buf[1024];
+   char *w = buf;
+
+   w = mp_encode_map(w, 3);
+   w = mp_encode_str0(w, "id");
+   w = mp_encode_uint(w, instance_id);
+   w = mp_encode_str0(w, "instance_uuid");
+   w = mp_encode_uuid(w, &INSTANCE_UUID);
+   w = mp_encode_str0(w, "replicaset_uuid");
+   w = mp_encode_uuid(w, &REPLICASET_UUID);
+
+   box_broadcast("box.id", strlen("box.id"), buf, w);
+
+   assert((size_t)(w - buf) < 1024);
+}
+
+static void
+box_broadcast_status(void)
+{
+   char buf[1024];
+   char *w = buf;
+
+   w = mp_encode_map(w, 3);
+   w = mp_encode_str0(w, "is_ro");
+   w = mp_encode_bool(w, box_is_ro());
+   w = mp_encode_str0(w, "is_ro_cfg");
+   w = mp_encode_bool(w, cfg_geti("read_only"));
+   w = mp_encode_str0(w, "status");
+   w = mp_encode_str0(w, box_status());
+
+   box_broadcast("box.status", strlen("box.status"), buf, w);
+
+   assert((size_t)(w - buf) < 1024);
+}
+
+void
+box_broadcast_election(void)
+{
+   struct raft *raft = box_raft();
+
+   char buf[1024];
+   char *w = buf;
+
+   w = mp_encode_map(w, 4);
+   w = mp_encode_str0(w, "term");
+   w = mp_encode_uint(w, raft->term);
+   w = mp_encode_str0(w, "role");
+   w = mp_encode_str0(w, raft_state_str(raft->state));
+   w = mp_encode_str0(w, "is_ro");
+   w = mp_encode_bool(w, box_is_ro());
+   w = mp_encode_str0(w, "leader");
+   w = mp_encode_uint(w, raft->leader);
+
+   box_broadcast("box.election", strlen("box.election"), buf, w);
+
+   assert((size_t)(w - buf) < 1024);
+}
+
+void
+box_broadcast_schema(void)
+{
+   char buf[1024];
+   char *w = buf;
+
+   w = mp_encode_map(w, 1);
+   w = mp_encode_str0(w, "version");
+   w = mp_encode_uint(w, box_schema_version());
+
+   box_broadcast("box.schema", strlen("box.schema"), buf, w);
+
+   assert((size_t)(w - buf) < 1024);
+}
diff --git a/src/box/box.h b/src/box/box.h
index b594f6646..766568368 100644
--- a/src/box/box.h
+++ b/src/box/box.h
@@ -563,6 +563,24 @@ box_process_rw(struct request *request, struct space *space,
 int
 boxk(int type, uint32_t space_id, const char *format, ...);

+/**
+ * Broadcast the identification of the instance
+ */
+void
+box_broadcast_id(void);
+
+/**
+ * Broadcast the current election state of RAFT machinery
+ */
+void
+box_broadcast_election(void);
+
+/**
+ * Broadcast the current schema version
+ */
+void
+box_broadcast_schema(void);
+
 #if defined(__cplusplus)
 } /* extern "C" */
 #endif /* defined(__cplusplus) */
diff --git a/src/box/raft.c b/src/box/raft.c
index be6009cc1..efc7e1038 100644
--- a/src/box/raft.c
+++ b/src/box/raft.c
@@ -170,6 +170,7 @@ box_raft_on_update_f(struct trigger *trigger, void *event)
     * writable only after it clears its synchro queue.
     */
    box_update_ro_summary();
+   box_broadcast_election();
    if (raft->state != RAFT_STATE_LEADER)
        return 0;
    /*
diff --git a/src/box/replication.cc b/src/box/replication.cc
index bbf4b7417..0da256721 100644
--- a/src/box/replication.cc
+++ b/src/box/replication.cc
@@ -247,6 +247,7 @@ replica_set_id(struct replica *replica, uint32_t replica_id)
        /* Assign local replica id */
        assert(instance_id == REPLICA_ID_NIL);
        instance_id = replica_id;
+       box_broadcast_id();
    } else if (replica->anon) {
        /*
         * Set replica gc on its transition from
@@ -287,6 +288,7 @@ replica_clear_id(struct replica *replica)
        /* See replica_check_id(). */
        assert(replicaset.is_joining);
        instance_id = REPLICA_ID_NIL;
+       box_broadcast_id();
    }
    replica->id = REPLICA_ID_NIL;
    say_info("removed replica %s", tt_uuid_str(&replica->uuid));
diff --git a/test/box-luatest/gh_6260_add_builtin_events_test.lua b/test/box-luatest/gh_6260_add_builtin_events_test.lua
new file mode 100644
index 000000000..c11d79e30
--- /dev/null
+++ b/test/box-luatest/gh_6260_add_builtin_events_test.lua
@@ -0,0 +1,294 @@
+local t = require('luatest')
+local net = require('net.box')
+local cluster = require('test.luatest_helpers.cluster')
+local server = require('test.luatest_helpers.server')
+local fiber = require('fiber')
+
+local g = t.group('gh_6260')
+
+g.before_test('test_box_status', function(cg)
+    cg.cluster = cluster:new({})
+    cg.master = cg.cluster:build_server({alias = 'master'})
+    cg.cluster:add_server(cg.master)
+    cg.cluster:start()
+end)
+
+g.after_test('test_box_status', function(cg)
+    cg.cluster.servers = nil
+    cg.cluster:drop()
+end)
+
+g.test_box_status = function(cg)
+    local c = net.connect(cg.master.net_box_uri)
+
+    local result = {}
+    local result_no = 0
+    local watcher = c:watch('box.status',
+                            function(name, state)
+                                assert(name == 'box.status')
+                                result = state
+                                result_no = result_no + 1
+                            end)
+
+    -- initial state should arrive
+    t.helpers.retrying({}, function()
+        while result_no < 1 do fiber.sleep(0.000001) end
+    end)
+
+    t.assert_equals(result,
+                    {is_ro = false, is_ro_cfg = false, status = 'running'})
+
+    -- test orphan status appearance
+    cg.master:exec(function(repl)
+        box.cfg{
+            replication = repl,
+            replication_connect_timeout = 0.001,
+            replication_timeout = 0.001,
+        }
+    end, {{server.build_instance_uri('master'), server.build_instance_uri('replica')}})
+    -- here we have 2 notifications: entering ro when can't connect
+    -- to master and the second one when going orphan
+    t.helpers.retrying({}, function()
+        while result_no < 3 do fiber.sleep(0.000001) end
+    end)
+    t.assert_equals(result,
+                    {is_ro = true, is_ro_cfg = false, status = 'orphan'})
+
+    -- test ro_cfg appearance
+    cg.master:exec(function()
+        box.cfg{
+            replication = {},
+            read_only = true,
+        }
+    end)
+    t.helpers.retrying({}, function()
+        while result_no < 4 do fiber.sleep(0.000001) end
+    end)
+    t.assert_equals(result,
+                    {is_ro = true, is_ro_cfg = true, status = 'running'})
+
+    -- reset to rw
+    cg.master:exec(function()
+        box.cfg{
+            read_only = false,
+        }
+    end)
+    t.helpers.retrying({}, function()
+        while result_no < 5 do fiber.sleep(0.000001) end
+    end)
+    t.assert_equals(result,
+                    {is_ro = false, is_ro_cfg = false, status = 'running'})
+
+    -- turning manual election mode puts into ro
+    cg.master:exec(function()
+        box.cfg{
+            election_mode = 'manual',
+        }
+    end)
+    t.helpers.retrying({}, function()
+        while result_no < 6 do fiber.sleep(0.000001) end
+    end)
+    t.assert_equals(result,
+                    {is_ro = true, is_ro_cfg = false, status = 'running'})
+
+    -- promotion should turn rw
+    cg.master:exec(function() box.ctl.promote() end)
+    t.helpers.retrying({}, function()
+        while result_no < 7 do fiber.sleep(0.000001) end
+    end)
+    t.assert_equals(result,
+                    {is_ro = false, is_ro_cfg = false, status = 'running'})
+
+    watcher:unregister()
+    c:close()
+end
+
+g.before_test('test_box_election', function(cg)
+    cg.cluster = cluster:new({})
+
+    local box_cfg = {
+        replication = {
+            server.build_instance_uri('instance_1'),
+            server.build_instance_uri('instance_2'),
+            server.build_instance_uri('instance_3'),
+        },
+        replication_connect_quorum = 0,
+        election_mode = 'off',
+        replication_synchro_quorum = 2,
+        replication_synchro_timeout = 1,
+        replication_timeout = 0.25,
+        election_timeout = 0.25,
+    }
+
+    cg.instance_1 = cg.cluster:build_server(
+        {alias = 'instance_1', box_cfg = box_cfg})
+
+    cg.instance_2 = cg.cluster:build_server(
+        {alias = 'instance_2', box_cfg = box_cfg})
+
+    cg.instance_3 = cg.cluster:build_server(
+        {alias = 'instance_3', box_cfg = box_cfg})
+
+    cg.cluster:add_server(cg.instance_1)
+    cg.cluster:add_server(cg.instance_2)
+    cg.cluster:add_server(cg.instance_3)
+    cg.cluster:start({cg.instance_1, cg.instance_2, cg.instance_3})
+end)
+
+g.after_test('test_box_election', function(cg)
+    cg.cluster.servers = nil
+    cg.cluster:drop({cg.instance_1, cg.instance_2, cg.instance_3})
+end)
+
+g.test_box_election = function(cg)
+    local c = {}
+    c[1] = net.connect(cg.instance_1.net_box_uri)
+    c[2] = net.connect(cg.instance_2.net_box_uri)
+    c[3] = net.connect(cg.instance_3.net_box_uri)
+
+    local res = {}
+    local res_n = {0, 0, 0}
+
+    for i = 1, 3 do
+        c[i]:watch('box.election',
+                   function(n, s)
+                       t.assert_equals(n, 'box.election')
+                       res[i] = s
+                       res_n[i] = res_n[i] + 1
+                   end)
+    end
+    t.helpers.retrying({}, function()
+        while res_n[1] + res_n[2] + res_n[3] < 3 do fiber.sleep(0.00001) end
+    end)
+
+    -- verify all instances are in the same state
+    t.assert_equals(res[1], res[2])
+    t.assert_equals(res[1], res[3])
+
+    -- wait for elections to complete, verify leader is the instance_1
+    -- trying to avoid the exact number of term - it can vary
+    local instance1_id = cg.instance_1:instance_id()
+
+    cg.instance_1:exec(function() box.cfg{election_mode='candidate'} end)
+    cg.instance_2:exec(function() box.cfg{election_mode='voter'} end)
+    cg.instance_3:exec(function() box.cfg{election_mode='voter'} end)
+
+    cg.instance_1:wait_election_leader_found()
+    cg.instance_2:wait_election_leader_found()
+    cg.instance_3:wait_election_leader_found()
+
+    t.assert_covers(res,
+        {
+            {leader = instance1_id, is_ro = false, role = 'leader', term = 2},
+            {leader = instance1_id, is_ro = true, role = 'follower', term = 2},
+            {leader = instance1_id, is_ro = true, role = 'follower', term = 2}
+        })
+
+    -- check the stepping down is working
+    res_n = {0, 0, 0}
+    cg.instance_1:exec(function() box.cfg{election_mode='voter'} end)
+    t.helpers.retrying({}, function()
+        while res_n[1] + res_n[2] + res_n[3] < 3 do fiber.sleep(0.00001) end
+    end)
+
+    local expected = {is_ro = true, role = 'follower', term = res[1].term,
+                      leader = 0}
+    t.assert_covers(res, {expected, expected, expected})
+
+    c[1]:close()
+    c[2]:close()
+    c[3]:close()
+end
+
+g.before_test('test_box_schema', function(cg)
+    cg.cluster = cluster:new({})
+    cg.master = cg.cluster:build_server({alias = 'master'})
+    cg.cluster:add_server(cg.master)
+    cg.cluster:start()
+end)
+
+g.after_test('test_box_schema', function(cg)
+    cg.cluster.servers = nil
+    cg.cluster:drop()
+end)
+
+g.test_box_schema = function(cg)
+    local c = net.connect(cg.master.net_box_uri)
+    local version = 0
+    local version_n = 0
+
+    local watcher = c:watch('box.schema',
+                            function(n, s)
+                                assert(n == 'box.schema')
+                                version = s.version
+                                version_n = version_n + 1
+                            end)
+
+    cg.master:exec(function() box.schema.create_space('p') end)
+    t.helpers.retrying({}, function()
+        while version_n < 1 do fiber.sleep(0.00001) end
+    end)
+    -- first version change, use it as initial value
+    local init_version = version
+
+    version_n = 0
+    cg.master:exec(function() box.space.p:create_index('i') end)
+    t.helpers.retrying({}, function()
+        while version_n < 1 do fiber.sleep(0.00001) end
+    end)
+    t.assert_equals(version, init_version + 1)
+
+    version_n = 0
+    cg.master:exec(function() box.space.p:drop() end)
+    t.helpers.retrying({}, function()
+        while version_n < 1 do fiber.sleep(0.00001) end
+    end)
+    -- there'll be 2 changes - index and space
+    t.assert_equals(version, init_version + 3)
+
+    watcher:unregister()
+    c:close()
+end
+
+g.before_test('test_box_id', function(cg)
+    cg.cluster = cluster:new({})
+
+    local box_cfg = {
+        replicaset_uuid = t.helpers.uuid('ab', 1),
+        instance_uuid = t.helpers.uuid('1', '2', '3')
+    }
+
+    cg.instance = cg.cluster:build_server({alias = 'master', box_cfg = box_cfg})
+    cg.cluster:add_server(cg.instance)
+    cg.cluster:start()
+end)
+
+g.after_test('test_box_id', function(cg)
+    cg.cluster.servers = nil
+    cg.cluster:drop()
+end)
+
+g.test_box_id = function(cg)
+    local c = net.connect(cg.instance.net_box_uri)
+
+    local result = {}
+    local result_no = 0
+
+    local watcher = c:watch('box.id',
+                            function(name, state)
+                                assert(name == 'box.id')
+                                result = state
+                                result_no = result_no + 1
+                            end)
+
+    -- initial state should arrive
+    t.helpers.retrying({}, function()
+        while result_no < 1 do fiber.sleep(0.00001) end
+    end)
+    t.assert_equals(result, {id = 1,
+                    instance_uuid = cg.instance.box_cfg.instance_uuid,
+                    replicaset_uuid = cg.instance.box_cfg.replicaset_uuid})
+
+    watcher:unregister()
+    c:close()
+end
-- 
2.25.1

Hi! Thank you for the review!

> Hi! Thanks for the patch, good job!
>
> See some new comments below!
>
> On 12.02.2022 02:18, Yan Shtunder via Tarantool-patches wrote:
> > Adding predefined system events: box.status, box.id, box.election
> > and box.schema.
> >
> > Closes #6260
> >
> > NO_CHANGELOG=test stuff
> > NO_DOC=testing stuff
>
> 1. This is a new public API. Lets add both a docbot request and a changelog.

1. Done.

> @TarantoolBot document
> Title: Built-in events for pub/sub
>
> Built-in events are needed, first of all, in order to learn who is the
> master, unless it is defined in an application specific way. Knowing who
> is the master is necessary to send changes to a correct instance, and
> probably make reads of the most actual data if it is important.
> Also defined more built-in events for other mutable properties like replication
> list of the server, schema version change and instance state.
>
> Example usage:
>
> * Client:
> ```lua
> conn = net.box.connect(URI)
> -- Subscribe to updates of key 'box.id'
> w = conn:watch('box.id', function(key, value)
>     assert(key == 'box.id')
>     -- do something with value
> end)
> -- or to updates of key 'box.status'
> w = conn:watch('box.status', function(key, value)
>     assert(key == 'box.status')
>     -- do something with value
> end)
> -- or to updates of key 'box.election'
> w = conn:watch('box.election', function(key, value)
>     assert(key == 'box.election')
>     -- do something with value
> end)
> -- or to updates of key 'box.schema'
> w = conn:watch('box.schema', function(key, value)
>     assert(key == 'box.schema')
>     -- do something with value
> end)
> -- Unregister the watcher when it's no longer needed.
> w:unregister()
> ```
>
> diff --git a/changelogs/unreleased/gh-6260-add-builtin-events.md b/changelogs/unreleased/gh-6260-add-builtin-events.md
> new file mode 100644
> index 000000000..1d13e410f
> --- /dev/null
> +++ b/changelogs/unreleased/gh-6260-add-builtin-events.md
> @@ -0,0 +1,4 @@
> +## feature/core
> +
> +* Added predefined system events: `box.status`, `box.id`, `box.election`
> +  and `box.schema` (gh-6260)
>
> > ---
> > Issue: https://github.com/tarantool/tarantool/issues/6260
> > Patch: https://github.com/tarantool/tarantool/tree/yshtunder/gh-6260-events-v3
> >
> >  src/box/alter.cc                              |   9 +
> >  src/box/box.cc                                | 103 +++++-
> >  src/box/box.h                                 |  18 +
> >  src/box/mp_error.cc                           |   6 -
> >  src/box/raft.c                                |   1 +
> >  src/box/replication.cc                        |  16 +
> >  src/lib/msgpuck                               |   2 +-
> >  src/lib/raft/raft.c                           |   1 +
> >  .../gh_6260_add_builtin_events_test.lua       | 320 ++++++++++++++++++
> >  test/unit/mp_error.cc                         |   6 -
> >  10 files changed, 462 insertions(+), 20 deletions(-)
> >  create mode 100644 test/app-luatest/gh_6260_add_builtin_events_test.lua
> >
> > diff --git a/src/box/alter.cc b/src/box/alter.cc
> > index 65c1cb952..e9899b968 100644
> > --- a/src/box/alter.cc
> > +++ b/src/box/alter.cc
> > @@ -1685,6 +1686,7 @@ UpdateSchemaVersion::alter(struct alter_space *alter)
> >  {
> >  	(void)alter;
> >  	++schema_version;
> > +	box_broadcast_schema();
>
> 2. The schema version increment + its broadcast are duplicated 6 times. Lets
> wrap it into a function like
>
> box_schema_version_bump()

2.

> --- a/src/box/alter.cc
> +++ b/src/box/alter.cc
> @@ -57,6 +57,14 @@
>  #include "sequence.h"
>  #include "sql.h"
>  #include "constraint_id.h"
> +#include "box.h"
> +
> +static void
> +box_schema_version_bump(void)
> +{
> +	++schema_version;
> +	box_broadcast_schema();
> +}
>
>  /* {{{ Auxiliary functions and methods. */
>
> @@ -1684,7 +1692,7 @@ void
>  UpdateSchemaVersion::alter(struct alter_space *alter)
>  {
>  	(void)alter;
> -	++schema_version;
> +	box_schema_version_bump();
>  }
>
>  /**
> @@ -2259,7 +2267,7 @@ on_replace_dd_space(struct trigger * /* trigger */, void *event)
>  	 * AlterSpaceOps are registered in case of space
>  	 * create.
>  	 */
> -	++schema_version;
> +	box_schema_version_bump();
>  	/*
>  	 * So may happen that until the DDL change record
>  	 * is written to the WAL, the space is used for
> @@ -2373,7 +2381,7 @@ on_replace_dd_space(struct trigger * /* trigger */, void *event)
>  	 * deleting the space from the space_cache, since no
>  	 * AlterSpaceOps are registered in case of space drop.
>  	 */
> -	++schema_version;
> +	box_schema_version_bump();
>  	struct trigger *on_commit =
>  		txn_alter_trigger_new(on_drop_space_commit, old_space);
>  	if (on_commit == NULL)
> @@ -4238,6 +4246,7 @@ on_replace_dd_schema(struct trigger * /* trigger */, void *event)
>  		if (tuple_field_uuid(new_tuple, BOX_CLUSTER_FIELD_UUID, &uu) != 0)
>  			return -1;
>  		REPLICASET_UUID = uu;
> +		box_broadcast_id();
>  		say_info("cluster uuid %s", tt_uuid_str(&uu));
>  	} else if (strcmp(key, "version") == 0) {
>  		if (new_tuple != NULL) {
> @@ -5084,7 +5093,7 @@ on_replace_dd_trigger(struct trigger * /* trigger */, void *event)
>
>  	txn_stmt_on_rollback(stmt, on_rollback);
>  	txn_stmt_on_commit(stmt, on_commit);
> -	++schema_version;
> +	box_schema_version_bump();
>  	return 0;
>  }
>
> @@ -5611,7 +5620,7 @@ on_replace_dd_fk_constraint(struct trigger * /* trigger*/, void *event)
>  		space_reset_fk_constraint_mask(child_space);
>  		space_reset_fk_constraint_mask(parent_space);
>  	}
> -	++schema_version;
> +	box_schema_version_bump();
>  	return 0;
>  }
>
> @@ -5867,7 +5876,7 @@ on_replace_dd_ck_constraint(struct trigger * /* trigger*/, void *event)
>
>  	if (trigger_run(&on_alter_space, space) != 0)
>  		return -1;
> -	++schema_version;
> +	box_schema_version_bump();
>  	return 0;
>  }

> 3. A response to Vova's comment about box_broadcast_fmt() - while I am not
> against that, I am just afraid it might not work with UUIDs. AFAIR we don't
> have a formatter for UUID or any other MP_EXT type. You can still try to use
> box_broadcast_fmt() for events having only MessagePack-native types though.
> Such as schema version and election event.

3. I didn't do anything.

> 4. There is a CI failure:
> https://github.com/tarantool/tarantool/runs/5164254300?check_suite_focus=true.
> Msgpuck submodule was bumped, but the new version doesn't update its test's
> .result file.

4. Done.
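A footnote to point 3 above: the reason MessagePack-native fields are easier to format is visible in the payload itself. The `box.schema` event is a one-entry map `{"version": v}`, which for small values is just a fixmap byte, a fixstr key and a positive fixint, while UUIDs need the MP_EXT extension type. A hand-rolled sketch of that byte layout in plain Lua (illustrative only; `encode_schema_event` is a made-up name, real code uses msgpuck's `mp_encode_*` helpers):

```lua
-- Hand-roll the MsgPack bytes of {"version": v} for small v, mirroring
-- what box_broadcast_schema() produces via mp_encode_map/str0/uint.
local function encode_schema_event(version)
    -- keep to a positive fixint so the whole value fits in one byte
    assert(version >= 0 and version <= 0x7f)
    return string.char(0x81)                  -- fixmap header, 1 key/value pair
        .. string.char(0xa0 + 7) .. "version" -- fixstr header + 7-byte key
        .. string.char(version)               -- positive fixint value
end

local payload = encode_schema_event(5)
-- 1 (fixmap) + 1 (fixstr header) + 7 ("version") + 1 (fixint) = 10 bytes
assert(#payload == 10)
```

Decoding `payload` with any MsgPack library yields `{version = 5}`.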
> > diff --git a/src/box/box.cc b/src/box/box.cc
> > index 6a33203df..d72bd3dad 100644
> > --- a/src/box/box.cc
> > +++ b/src/box/box.cc
> > @@ -161,6 +155,20 @@ static struct fiber_pool tx_fiber_pool;
> >   */
> >  static struct cbus_endpoint tx_prio_endpoint;
> >
> > +static void
> > +box_broadcast_status(void);
> > +
> > +static void
> > +title(const char *new_status)
> > +{
> > +	snprintf(status, sizeof(status), "%s", new_status);
> > +	title_set_status(new_status);
> > +	title_update();
> > +	systemd_snotify("STATUS=%s", status);
> > +	/* Checking box.info.status change */
>
> 5. The comment narrates the code, doesn't tell anything new. I would
> suggest to drop it. The same in other broadcast calls for all events.

5. Done.

> > +	box_broadcast_status();
> > +}
> > +
> >  void
> >  box_update_ro_summary(void)
> >  {
> > @@ -3933,3 +3943,82 @@ box_reset_stat(void)
> >  	engine_reset_stat();
> >  	space_foreach(box_reset_space_stat, NULL);
> >  }
> > +
> > +void
> > +box_broadcast_id(void)
> > +{
> > +	char buf[1024];
> > +	char *w = buf;
> > +
> > +	struct replica *replica = replica_by_uuid(&INSTANCE_UUID);
> > +	uint32_t id = (replica == NULL) ? 0 : replica->id;
>
> 6. You don't need to get the replica. You can use the global variable
> 'instance_id'.

6. I fixed this.

> +void
> +box_broadcast_id(void)
> +{
> +	char buf[1024];
> +	char *w = buf;
> +
> +	w = mp_encode_map(w, 3);
> +	w = mp_encode_str0(w, "id");
> +	w = mp_encode_uint(w, instance_id);
> +	w = mp_encode_str0(w, "instance_uuid");
> +	w = mp_encode_uuid(w, &INSTANCE_UUID);
> +	w = mp_encode_str0(w, "replicaset_uuid");
> +	w = mp_encode_uuid(w, &REPLICASET_UUID);
> +
> +	box_broadcast("box.id", strlen("box.id"), buf, w);
> +
> +	assert((size_t)(w - buf) < 1024);
> +}

> > -
> >  static char *
> >  mp_encode_error_one(char *data, const struct error *error)
> >  {
> > diff --git a/src/box/replication.cc b/src/box/replication.cc
> > index bbf4b7417..05d5ffb58 100644
> > --- a/src/box/replication.cc
> > +++ b/src/box/replication.cc
> > @@ -183,8 +183,12 @@ replica_new(void)
> >  		diag_raise();
> >  	}
> >  	replica->id = 0;
> > +	/* Checking box.info.id change */
> > +	box_broadcast_id();
> >  	replica->anon = false;
> >  	replica->uuid = uuid_nil;
> > +	/* Checking box.info.uuid change */
> > +	box_broadcast_id();
>
> 8. You don't need to broadcast on every line where a change happens.
> Anyway nothing is sent until a next yield. You only need to broadcast
> when all the assignments are done and you are leaving the context.
>
> 9. Here not necessarily the instance's own data is changed. It
> is a generic replica constructor. It is called for every record in
> _cluster space. You only need to broadcast the places where own IDs
> of the current instance are changed. The global variables 'instance_id',
> 'INSTANCE_UUID', and 'REPLICASET_UUID'.

7. I fixed this.

> > diff --git a/src/lib/raft/raft.c b/src/lib/raft/raft.c
> > index 89cfb3c17..489df235b 100644
> > --- a/src/lib/raft/raft.c
> > +++ b/src/lib/raft/raft.c
> > @@ -494,6 +494,7 @@ raft_process_msg(struct raft *raft, const struct raft_msg *req, uint32_t source)
> >  	 * it manually.
> >  	 */
> >  	raft->leader = 0;
> > +	raft_schedule_broadcast(raft);
>
> 10. This should be a separate commit with its own unit test. Sergos already made
> a PR, but it lacks a test. I suggest to finish it and rebase this patchset on
> top of that commit.

8. It is done.

> >  	if (raft->is_candidate)
> >  		raft_sm_schedule_new_election(raft);
> >  }
> > diff --git a/test/app-luatest/gh_6260_add_builtin_events_test.lua b/test/app-luatest/gh_6260_add_builtin_events_test.lua
> > new file mode 100644
> > index 000000000..3c363fc9a
> > --- /dev/null
> > +++ b/test/app-luatest/gh_6260_add_builtin_events_test.lua
>
> 11. The test should be in box-luatest, not app-luatest.

9. Done.

> diff --git a/test/box-luatest/gh_6260_add_builtin_events_test.lua b/test/box-luatest/gh_6260_add_builtin_events_test.lua
> new file mode 100644
> index 000000000..6c3731449
> --- /dev/null
> +++ b/test/box-luatest/gh_6260_add_builtin_events_test.lua

> > @@ -0,0 +1,320 @@
> > +local t = require('luatest')
> > +local net = require('net.box')
> > +local cluster = require('test.luatest_helpers.cluster')
> > +local helpers = require('test.luatest_helpers')
> > +local fiber = require('fiber')
> > +
> > +local g = t.group('gh_6260')
> > +
> > +
> > +g.before_test('test_box_status', function(cg)
> > +    cg.cluster = cluster:new({})
> > +
> > +    local box_cfg = {
> > +        read_only = false
>
> 12. It is false by default anyway. You don't need this config.

10. Done.
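On the `t.helpers.retrying()` suggestions that come up in this review: conceptually the helper just re-runs a closure until it stops raising or a deadline passes. A minimal pure-Lua sketch of the idea (illustrative defaults and names, not the actual luatest implementation, which sleeps between attempts via `fiber.sleep`):

```lua
-- Re-run fn until it succeeds or the deadline passes; rethrow the last
-- error on timeout. This sketch spins instead of sleeping between tries.
local function retrying(opts, fn)
    local deadline = os.clock() + (opts.timeout or 5)
    while true do
        local ok, err = pcall(fn)
        if ok then
            return
        end
        if os.clock() >= deadline then
            error(err)
        end
    end
end

-- usage: the closure fails twice, then the third attempt succeeds
local attempts = 0
retrying({timeout = 5}, function()
    attempts = attempts + 1
    if attempts < 3 then
        error('not ready yet')
    end
end)
```

This is why wrapping a `while ... fiber.sleep() end` poll in `retrying()` works: the closure raises nothing once the awaited counter reaches its target, and `retrying()` bounds how long the poll may run.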
> +local t = require('luatest') > +local net = require('net.box') > +local cluster = require('test.luatest_helpers.cluster') > +local server = require('test.luatest_helpers.server') > +local fiber = require('fiber') > + > +local g = t.group('gh_6260') > + > +g.before_test('test_box_status', function(cg) > + cg.cluster = cluster:new({}) > + cg.master = cg.cluster:build_server({alias = 'master'}) > + cg.cluster:add_server(cg.master) > + cg.cluster:start() > +end) > > + } > > + > > + cg.master = cg.cluster:build_server({alias = 'master', box_cfg = box_cfg}) > > + cg.cluster:add_server(cg.master) > > + cg.cluster:start() > > +end) > > + > > + > > +g.after_test('test_box_status', function(cg) > > + cg.cluster.servers = nil > > + cg.cluster:drop() > > +end) > > + > > + > > +g.test_box_status = function(cg) > > + local c = net.connect(cg.master.net_box_uri) > > + > > + local result = {} > > + local result_no = 0 > > + local watcher = c:watch('box.status', > > + function(name, state) > > + assert(name == 'box.status') > > + result = state > > + result_no = result_no + 1 > > + end) > > + > > + -- initial state should arrive > > + while result_no < 1 do fiber.sleep(0.001) end > > 13. Lets better use t.helpers.retrying(). In other places too. See how it is used > in existing tests for examples. 11. > + t.helpers.retrying({}, function() > + while result_no < 1 do fiber.sleep(0.000001) end > + end) > > + t.assert_equals(result, > > + {is_ro = false, is_ro_cfg = false, status = 'running'}) > > + > > + -- test orphan status appearance > > + cg.master:eval(([[ > > + box.cfg{ > > + replication = { > > + "%s", > > + "%s" > > + }, > > + replication_connect_timeout = 0.001, > > + replication_timeout = 0.001, > > + } > + ]]):format(helpers.instance_uri('master'), > + helpers.instance_uri('replica'))) > > 14. Well, while this way of doing box.cfg certainly works, you might > want to consider an alternative via :exec(). Probably you didn't find how > to pass parameters into it? 
> Here is how:
>
> cg.master:exec(function(repl)
>     box.cfg{
>         replication = repl,
>         ...
>     }
> end, {{helpers.instance_uri('master'),
>        helpers.instance_uri('replica')}})

> > +    -- here we have 2 notifications: entering ro when can't connect
> > +    -- to master and the second one when going orphan
> > +    while result_no < 3 do fiber.sleep(0.000001) end
> > +    t.assert_equals(result,
> > +                    {is_ro = true, is_ro_cfg = false, status = 'orphan'})
> > +
> > +    -- test ro_cfg appearance
> > +    cg.master:eval([[
> > +        box.cfg{
> > +            replication = {},
> > +            read_only = true,
> > +        }
> > +    ]])
>
> 15. Lets better use :exec().

12. Done.

> +    -- test orphan status appearance
> +    cg.master:exec(function(repl)
> +        box.cfg{
> +            replication = repl,
> +            replication_connect_timeout = 0.001,
> +            replication_timeout = 0.001,
> +        }
> +    end, {{server.build_instance_uri('master'),
> +           server.build_instance_uri('replica')}})
> +    -- here we have 2 notifications: entering ro when can't connect
> +    -- to master and the second one when going orphan
> +    t.helpers.retrying({}, function()
> +        while result_no < 3 do fiber.sleep(0.000001) end
> +    end)
> +    t.assert_equals(result,
> +                    {is_ro = true, is_ro_cfg = false, status = 'orphan'})
> +
> +    -- test ro_cfg appearance
> +    cg.master:exec(function()
> +        box.cfg{
> +            replication = {},
> +            read_only = true,
> +        }
> +    end)

> > +    while result_no < 4 do fiber.sleep(0.000001) end
> > +    t.assert_equals(result,
> > +                    {is_ro = true, is_ro_cfg = true, status = 'running'})
> > +
> > +    -- reset to rw
> > +    cg.master:eval([[
> > +        box.cfg{
> > +            read_only = false,
> > +        }
> > +    ]])
> > +    while result_no < 5 do fiber.sleep(0.000001) end
> > +    t.assert_equals(result,
> > +                    {is_ro = false, is_ro_cfg = false, status = 'running'})
> > +
> > +    -- turning manual election mode puts into ro
> > +    cg.master:eval([[
> > +        box.cfg{
> > +            election_mode = 'manual',
> > +        }
> > +    ]])
> > +    while result_no < 6 do fiber.sleep(0.000001) end
> > +    t.assert_equals(result,
> > +                    {is_ro = true, is_ro_cfg = false, status = 'running'})
> > +
> > +    -- promotion should turn rw
> > +    cg.master:eval([[
> > +        box.ctl.promote()
> > +    ]])
> > +    while result_no < 7 do fiber.sleep(0.000001) end
> > +    t.assert_equals(result,
> > +                    {is_ro = false, is_ro_cfg = false, status = 'running'})
> > +
> > +    watcher:unregister()
> > +    c:close()
> > +end
> > +
> > +
>
> 16. Extra empty line. In some other places too. Please, lets remove them.

13. Deleted extra empty line.

> > +g.before_test('test_box_election', function(cg)
> > +
> > +    cg.cluster = cluster:new({})
> > +
> > +    local box_cfg = {
> > +        replication = {
> > +                    helpers.instance_uri('instance_', 1);
> > +                    helpers.instance_uri('instance_', 2);
> > +                    helpers.instance_uri('instance_', 3);
>
> 17. Too long indentation. Also you could write simpler:
>
> helpers.instance_uri('instance_1')
> helpers.instance_uri('instance_2')
> ...
>
> No need to pass the numbers separately.

14.

> +g.before_test('test_box_election', function(cg)
> +    cg.cluster = cluster:new({})
> +
> +    local box_cfg = {
> +        replication = {
> +            server.build_instance_uri('instance_1'),
> +            server.build_instance_uri('instance_2'),
> +            server.build_instance_uri('instance_3'),

> > +        },
> > +        replication_connect_quorum = 0,
> > +        election_mode = 'off',
> > +        replication_synchro_quorum = 2,
> > +        replication_synchro_timeout = 1,
> > +        replication_timeout = 0.25,
> > +        election_timeout = 0.25,
> > +    }
> > +
> > +    cg.instance_1 = cg.cluster:build_server(
> > +        {alias = 'instance_1', box_cfg = box_cfg})
> > +
> > +    cg.instance_2 = cg.cluster:build_server(
> > +        {alias = 'instance_2', box_cfg = box_cfg})
> > +
> > +    cg.instance_3 = cg.cluster:build_server(
> > +        {alias = 'instance_3', box_cfg = box_cfg})
> > +
> > +    cg.cluster:add_server(cg.instance_1)
> > +    cg.cluster:add_server(cg.instance_2)
> > +    cg.cluster:add_server(cg.instance_3)
> > +    cg.cluster:start({cg.instance_1, cg.instance_2, cg.instance_3})
> > +end)
> > +
> > +
> > +g.after_test('test_box_election', function(cg)
> > +    cg.cluster.servers = nil
> > +    cg.cluster:drop({cg.instance_1, cg.instance_2, cg.instance_3})
> > +end)
> > +
> > +
> > +g.test_box_election = function(cg)
> > +    local c = {}
> > +    c[1] = net.connect(cg.instance_1.net_box_uri)
> > +    c[2] = net.connect(cg.instance_2.net_box_uri)
> > +    c[3] = net.connect(cg.instance_3.net_box_uri)
> > +
> > +    local res = {}
> > +    local res_n = {0, 0, 0}
> > +
> > +    for i = 1, 3 do
> > +        c[i]:watch('box.election',
> > +                   function(n, s)
> > +                       t.assert_equals(n, 'box.election')
> > +                       res[i] = s
> > +                       res_n[i] = res_n[i] + 1
> > +                   end)
> > +    end
> > +    while res_n[1] + res_n[2] + res_n[3] < 3 do fiber.sleep(0.00001) end
> > +
> > +    -- verify all instances are in the same state
> > +    t.assert_equals(res[1], res[2])
> > +    t.assert_equals(res[1], res[3])
> > +
> > +    -- wait for elections to complete, verify leader is the instance_1
> > +    -- trying to avoid the exact number of term - it can vary
> > +    local instance1_id = cg.instance_1:eval("return box.info.id")
>
> 18. You can do cg.instance_1:instance_id().

15.

> +    -- wait for elections to complete, verify leader is the instance_1
> +    -- trying to avoid the exact number of term - it can vary
> +    local instance1_id = cg.instance_1:instance_id()

> > +
> > +    cg.instance_1:eval("box.cfg{election_mode='candidate'}")
> > +    cg.instance_2:eval("box.cfg{election_mode='voter'}")
> > +    cg.instance_3:eval("box.cfg{election_mode='voter'}")
> > +
> > +    cg.instance_1:wait_election_leader_found()
> > +    cg.instance_2:wait_election_leader_found()
> > +    cg.instance_3:wait_election_leader_found()
> > +
> > +    t.assert_equals(res[1].leader, instance1_id)
> > +    t.assert_equals(res[1].is_ro, false)
> > +    t.assert_equals(res[1].role, 'leader')
>
> 19. You can use t.assert_covers() if you want not to check a
> specific term number.
>
> > +    t.assert_equals(res[2].leader, instance1_id)
> > +    t.assert_equals(res[2].is_ro, true)
> > +    t.assert_equals(res[2].role, 'follower')
> > +    t.assert_equals(res[3].leader, instance1_id)
> > +    t.assert_equals(res[3].is_ro, true)
> > +    t.assert_equals(res[3].role, 'follower')
> > +    t.assert_equals(res[1].term, res[2].term)
> > +    t.assert_equals(res[1].term, res[3].term)
> > +
> > +    -- check the stepping down is working
> > +    res_n = {0, 0, 0}
> > +    cg.instance_1:eval("box.cfg{election_mode='voter'}")
> > +    while res_n[1] + res_n[2] + res_n[3] < 3 do fiber.sleep(0.00001) end
> > +
> > +    local expected = {is_ro = true, role = 'follower', term = res[1].term,
> > +                      leader = 0}
> > +    t.assert_equals(res[1], expected)
> > +    t.assert_equals(res[2], expected)
> > +    t.assert_equals(res[3], expected)
> > +
> > +    c[1]:close()
> > +    c[2]:close()
> > +    c[3]:close()
> > +end
>
> <...>

16.

> +    cg.instance_1:exec(function() box.cfg{election_mode='candidate'} end)
> +    cg.instance_2:exec(function() box.cfg{election_mode='voter'} end)
> +    cg.instance_3:exec(function() box.cfg{election_mode='voter'} end)
> +
> +    cg.instance_1:wait_election_leader_found()
> +    cg.instance_2:wait_election_leader_found()
> +    cg.instance_3:wait_election_leader_found()
> +
> +    t.assert_covers(res,
> +        {
> +            {leader = instance1_id, is_ro = false, role = 'leader', term = 2},
> +            {leader = instance1_id, is_ro = true, role = 'follower', term = 2},
> +            {leader = instance1_id, is_ro = true, role = 'follower', term = 2}
> +        })
> +
> +    -- check the stepping down is working
> +    res_n = {0, 0, 0}
> +    cg.instance_1:exec(function() box.cfg{election_mode='voter'} end)
> +    t.helpers.retrying({}, function()
> +        while res_n[1] + res_n[2] + res_n[3] < 3 do fiber.sleep(0.00001) end
> +    end)
> +
> +    local expected = {is_ro = true, role = 'follower', term = res[1].term,
> +                      leader = 0}
> +    t.assert_covers(res, {expected, expected, expected})
> +
> +    c[1]:close()
> +    c[2]:close()
> +    c[3]:close()
> +end

> > +g.test_box_id = function(cg)
> > +    local c = net.connect(cg.instance.net_box_uri)
> > +
> > +    local result = {}
> > +    local result_no = 0
> > +
> > +    local watcher = c:watch('box.id',
> > +                            function(name, state)
> > +                                assert(name == 'box.id')
> > +                                result = state
> > +                                result_no = result_no + 1
> > +                            end)
> > +
> > +    -- initial state should arrive
> > +    while result_no < 1 do fiber.sleep(0.001) end
> > +    t.assert_equals(result, {id = 1,
> > +        instance_uuid = '11111111-2222-0000-0000-333333333333',
> > +        replicaset_uuid = 'abababab-0000-0000-0000-000000000001'})
>
> 20. Lets better use cg.instance.box_cfg.instance_uuid and
> cg.instance.box_cfg.replicaset_uuid instead of the hardcoded values.

17.

> +g.test_box_id = function(cg)
> +    local c = net.connect(cg.instance.net_box_uri)
> +
> +    local result = {}
> +    local result_no = 0
> +
> +    local watcher = c:watch('box.id',
> +                            function(name, state)
> +                                assert(name == 'box.id')
> +                                result = state
> +                                result_no = result_no + 1
> +                            end)
> +
> +    -- initial state should arrive
> +    t.helpers.retrying({}, function()
> +        while result_no < 1 do fiber.sleep(0.00001) end
> +    end)
> +    t.assert_equals(result, {id = 1,
> +        instance_uuid = cg.instance.box_cfg.instance_uuid,
> +        replicaset_uuid = cg.instance.box_cfg.replicaset_uuid})
> +
> +    watcher:unregister()
> +    c:close()
> +end

> > +
> > +    -- error when instance_uuid set dynamically
> > +    t.assert_error_msg_content_equals("Can't set option 'instance_uuid' dynamically",
> > +        function()
> > +            cg.instance:eval(([[
> > +                box.cfg{
> > +                    instance_uuid = "%s"
> > +                }
> > +            ]]):format(t.helpers.uuid('1', '2', '4')))
> > +        end)
>
> 21. This and the case below don't seem related to your patch. This was
> banned beforehand and is already tested. I would suggest to drop these.

18. I deleted these parts of code.
> > +
> > +	w = mp_encode_map(w, 3);
> > +	w = mp_encode_str0(w, "id");
> > +	w = mp_encode_uint(w, id);
> > +	w = mp_encode_str0(w, "instance_uuid");
> > +	w = mp_encode_uuid(w, &INSTANCE_UUID);
> > +	w = mp_encode_str0(w, "replicaset_uuid");
> > +	w = mp_encode_uuid(w, &REPLICASET_UUID);
> > +
> > +	box_broadcast("box.id", strlen("box.id"), buf, w);
> > +
> > +	assert((size_t)(w - buf) < 1024);
> > +}
> > diff --git a/src/box/mp_error.cc b/src/box/mp_error.cc
> > index 3c4176e59..93109562d 100644
> > --- a/src/box/mp_error.cc
> > +++ b/src/box/mp_error.cc
> > @@ -169,12 +169,6 @@ mp_sizeof_error_one(const struct error *error)
> >  	return data_size;
> >  }
> >
> > -static inline char *
> > -mp_encode_str0(char *data, const char *str)
> > -{
> > -	return mp_encode_str(data, str, strlen(str));
> > -}
>
> 7. This and the msgpuck submodule update better be done in a separate
> preparatory commit.

> > +
> > +    -- error when replicaset_uuid set dynamically
> > +    t.assert_error_msg_content_equals("Can't set option 'replicaset_uuid' dynamically",
> > +        function()
> > +            cg.instance:eval(([[
> > +                box.cfg{
> > +                    replicaset_uuid = "%s"
> > +                }
> > +            ]]):format(t.helpers.uuid('ab', 2)))
> > +        end)
> > +
> > +    watcher:unregister()
> > +    c:close()
> > +end
> > diff --git a/test/unit/mp_error.cc b/test/unit/mp_error.cc
> > index 777c68dff..6afb73b22 100644
> > --- a/test/unit/mp_error.cc
> > +++ b/test/unit/mp_error.cc
> > @@ -94,12 +94,6 @@ enum {
> >  	sizeof(standard_errors) / sizeof(standard_errors[0]),
> >  };
> >
> > -static inline char *
> > -mp_encode_str0(char *data, const char *str)
> > -{
> > -	return mp_encode_str(data, str, strlen(str));
> > -}
>
> 22. Should be in the separate commit which bumps msgpuck submodule.

19. Done.
https://github.com/tarantool/tarantool/commit/99d6b8d0660fdaa14a91e0e42f67de6b51997f02