From: Konstantin Belyavskiy
Subject: [tarantool-patches] [PATCH 2/2] [replication] [recovery] recover missing data
Date: Thu, 29 Mar 2018 19:15:16 +0300
Message-Id: <734ad912f840868e94e7e34048795afee209b78f.1522339565.git.k.belyavskiy@tarantool.org>
To: tarantool-patches@freelists.org

Part 2 of 2. Recover missing local data from a replica.

In case of a sudden power loss, if data had not yet been written to
the local WAL but had already been sent to a remote replica, the
local instance cannot recover it on restart and the datasets diverge.
Fix this by fetching the missing rows back from the remote replica,
using an LSN comparison against the local vclock. Based on
@GeorgyKirichenko's proposal and @locker's race-free check.

Closes #3210
---
branch: gh-3210-recover-missing-local-data-master-master
 src/box/relay.cc                          |  16 ++++-
 src/box/wal.cc                            |  15 +++-
 test/replication/recover_missing.result   | 116 ++++++++++++++++++++++++++++++
 test/replication/recover_missing.test.lua |  41 +++++++++++
 test/replication/suite.ini                |   2 +-
 5 files changed, 185 insertions(+), 5 deletions(-)
 create mode 100644 test/replication/recover_missing.result
 create mode 100644 test/replication/recover_missing.test.lua

diff --git a/src/box/relay.cc b/src/box/relay.cc
index 2bd05ad5f..88de2a32b 100644
--- a/src/box/relay.cc
+++ b/src/box/relay.cc
@@ -110,6 +110,11 @@ struct relay {
 	struct vclock recv_vclock;
 	/** Replication slave version. */
 	uint32_t version_id;
+	/**
+	 * Local vclock at the moment of subscribe, used to check the
+	 * dataset on the other side and send missing data rows, if any.
+	 */
+	struct vclock local_vclock_at_subscribe;
 
 	/** Relay endpoint */
 	struct cbus_endpoint endpoint;
@@ -541,6 +546,7 @@ relay_subscribe(int fd, uint64_t sync, struct replica *replica,
 	relay.version_id = replica_version_id;
 	relay.replica = replica;
 	replica_set_relay(replica, &relay);
+	vclock_copy(&relay.local_vclock_at_subscribe, &replicaset.vclock);
 
 	int rc = cord_costart(&relay.cord, tt_sprintf("relay_%p", &relay),
 			      relay_subscribe_f, &relay);
@@ -583,10 +589,16 @@ relay_send_row(struct xstream *stream, struct xrow_header *packet)
 	/*
 	 * We're feeding a WAL, thus responding to SUBSCRIBE request.
 	 * In that case, only send a row if it is not from the same replica
-	 * (i.e. don't send replica's own rows back).
+	 * (i.e. don't send replica's own rows back) or if this row is
+	 * missing on the other side (i.e. in case of a sudden power loss,
+	 * the data was not written to WAL, so the remote master can't
+	 * recover it). In this case the packet's LSN is less than or equal
+	 * to the local master's LSN at the moment it issued 'SUBSCRIBE'.
 	 */
 	if (relay->replica == NULL ||
-	    packet->replica_id != relay->replica->id) {
+	    packet->replica_id != relay->replica->id ||
+	    packet->lsn <= vclock_get(&relay->local_vclock_at_subscribe,
+				      packet->replica_id)) {
 		relay_send(relay, packet);
 	}
 }
diff --git a/src/box/wal.cc b/src/box/wal.cc
index 4576cfe09..7ea8fe20b 100644
--- a/src/box/wal.cc
+++ b/src/box/wal.cc
@@ -770,8 +770,19 @@ wal_write(struct journal *journal, struct journal_entry *entry)
 	 * and promote vclock.
 	 */
 	if ((*last)->replica_id == instance_id) {
-		vclock_follow(&replicaset.vclock, instance_id,
-			      (*last)->lsn);
+		/*
+		 * In a master-master configuration, after a sudden
+		 * power loss, if the data has not been written to
+		 * WAL but has already been sent to other replicas,
+		 * they will send it back. In this case the vclock
+		 * has already been promoted by the applier.
+		 */
+		if (vclock_get(&replicaset.vclock,
+			       instance_id) < (*last)->lsn) {
+			vclock_follow(&replicaset.vclock,
+				      instance_id,
+				      (*last)->lsn);
+		}
 		break;
 	}
 	--last;
diff --git a/test/replication/recover_missing.result b/test/replication/recover_missing.result
new file mode 100644
index 000000000..4c9c9b195
--- /dev/null
+++ b/test/replication/recover_missing.result
@@ -0,0 +1,116 @@
+env = require('test_run')
+---
+...
+test_run = env.new()
+---
+...
+SERVERS = { 'autobootstrap1', 'autobootstrap2', 'autobootstrap3' }
+---
+...
+-- Start servers
+test_run:create_cluster(SERVERS)
+---
+...
+-- Wait for full mesh
+test_run:wait_fullmesh(SERVERS)
+---
+...
+test_run:cmd("switch autobootstrap1")
+---
+- true
+...
+for i = 0, 9 do box.space.test:insert{i, 'test' .. i} end
+---
+...
+box.space.test:count()
+---
+- 10
+...
+test_run:cmd('switch default')
+---
+- true
+...
+vclock1 = test_run:get_vclock('autobootstrap1')
+---
+...
+vclock2 = test_run:wait_cluster_vclock(SERVERS, vclock1)
+---
+...
+test_run:cmd("switch autobootstrap2")
+---
+- true
+...
+box.space.test:count()
+---
+- 10
+...
+box.error.injection.set("ERRINJ_RELAY_TIMEOUT", 0.01)
+---
+- ok
+...
+test_run:cmd("stop server autobootstrap1")
+---
+- true
+...
+fio = require('fio')
+---
+...
+-- This test checks the ability to recover missing local data
+-- from a remote replica. See #3210.
+-- Delete data on the first master and check that after restart,
+-- due to the difference in vclocks, it is able to recover
+-- all the missing data from the replica.
+-- Also check that there is no race, i.e. the master stays
+-- in 'read-only' mode until it receives all the data.
+fio.unlink(fio.pathjoin(fio.abspath("."), string.format('autobootstrap1/%020d.xlog', 8)))
+---
+- true
+...
+test_run:cmd("start server autobootstrap1")
+---
+- true
+...
+test_run:cmd("switch autobootstrap1")
+---
+- true
+...
+for i = 10, 19 do box.space.test:insert{i, 'test' .. i} end
+---
+...
+fiber = require('fiber')
+---
+...
+fiber.sleep(0.1)
+---
+...
+box.space.test:select()
+---
+- - [0, 'test0']
+  - [1, 'test1']
+  - [2, 'test2']
+  - [3, 'test3']
+  - [4, 'test4']
+  - [5, 'test5']
+  - [6, 'test6']
+  - [7, 'test7']
+  - [8, 'test8']
+  - [9, 'test9']
+  - [10, 'test10']
+  - [11, 'test11']
+  - [12, 'test12']
+  - [13, 'test13']
+  - [14, 'test14']
+  - [15, 'test15']
+  - [16, 'test16']
+  - [17, 'test17']
+  - [18, 'test18']
+  - [19, 'test19']
+...
+-- Cleanup.
+test_run:cmd('switch default')
+---
+- true
+...
+test_run:drop_cluster(SERVERS)
+---
+...
diff --git a/test/replication/recover_missing.test.lua b/test/replication/recover_missing.test.lua
new file mode 100644
index 000000000..775d23a0b
--- /dev/null
+++ b/test/replication/recover_missing.test.lua
@@ -0,0 +1,41 @@
+env = require('test_run')
+test_run = env.new()
+
+SERVERS = { 'autobootstrap1', 'autobootstrap2', 'autobootstrap3' }
+-- Start servers
+test_run:create_cluster(SERVERS)
+-- Wait for full mesh
+test_run:wait_fullmesh(SERVERS)
+
+test_run:cmd("switch autobootstrap1")
+for i = 0, 9 do box.space.test:insert{i, 'test' .. i} end
+box.space.test:count()
+
+test_run:cmd('switch default')
+vclock1 = test_run:get_vclock('autobootstrap1')
+vclock2 = test_run:wait_cluster_vclock(SERVERS, vclock1)
+
+test_run:cmd("switch autobootstrap2")
+box.space.test:count()
+box.error.injection.set("ERRINJ_RELAY_TIMEOUT", 0.01)
+test_run:cmd("stop server autobootstrap1")
+fio = require('fio')
+-- This test checks the ability to recover missing local data
+-- from a remote replica. See #3210.
+-- Delete data on the first master and check that after restart,
+-- due to the difference in vclocks, it is able to recover
+-- all the missing data from the replica.
+-- Also check that there is no race, i.e. the master stays
+-- in 'read-only' mode until it receives all the data.
+fio.unlink(fio.pathjoin(fio.abspath("."), string.format('autobootstrap1/%020d.xlog', 8)))
+test_run:cmd("start server autobootstrap1")
+
+test_run:cmd("switch autobootstrap1")
+for i = 10, 19 do box.space.test:insert{i, 'test' .. i} end
+fiber = require('fiber')
+fiber.sleep(0.1)
+box.space.test:select()
+
+-- Cleanup.
+test_run:cmd('switch default')
+test_run:drop_cluster(SERVERS)
diff --git a/test/replication/suite.ini b/test/replication/suite.ini
index ee76a3b00..b538f9625 100644
--- a/test/replication/suite.ini
+++ b/test/replication/suite.ini
@@ -3,7 +3,7 @@ core = tarantool
 script = master.lua
 description = tarantool/box, replication
 disabled = consistent.test.lua
-release_disabled = catch.test.lua errinj.test.lua gc.test.lua before_replace.test.lua quorum.test.lua
+release_disabled = catch.test.lua errinj.test.lua gc.test.lua before_replace.test.lua quorum.test.lua recover_missing.test.lua
 config = suite.cfg
 lua_libs = lua/fast_replica.lua
 long_run = prune.test.lua
-- 
2.14.3 (Apple Git-98)
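
For reviewers who want to poke at the new filtering rule in isolation,
below is a minimal standalone C model of the relay_send_row() condition.
It is only an illustration, not part of the patch: model_vclock and
should_send_row are made-up names for this sketch, and a real struct
vclock is richer than a plain array of LSNs.

/*
 * Standalone model of the relay-side filtering rule added in this
 * patch. A vclock is modeled as an array of LSNs indexed by replica
 * id (a simplification of Tarantool's struct vclock).
 */
#include <stdint.h>
#include <stdio.h>

#define VCLOCK_MAX 32

struct model_vclock {
	int64_t lsn[VCLOCK_MAX];
};

/*
 * Decide whether the relay forwards a row to the subscribed replica.
 * Mirrors the condition in relay_send_row(): always forward rows that
 * originate from other replicas, and also forward the subscriber's own
 * rows whose LSN does not exceed the local vclock component taken at
 * SUBSCRIBE time; those are rows the subscriber lost (e.g. to a power
 * loss) and needs back.
 */
static int
should_send_row(uint32_t row_replica_id, int64_t row_lsn,
		uint32_t subscriber_id,
		const struct model_vclock *vclock_at_subscribe)
{
	if (row_replica_id != subscriber_id)
		return 1;
	return row_lsn <= vclock_at_subscribe->lsn[row_replica_id];
}

int
main(void)
{
	struct model_vclock at_subscribe = { .lsn = {0} };
	at_subscribe.lsn[1] = 10; /* we saw LSN 10 from replica 1 */

	/* Replica 1 lost its row 8 to a power loss: send it back (1). */
	printf("%d\n", should_send_row(1, 8, 1, &at_subscribe));
	/* Row 11 is replica 1's own new row: don't echo it (0). */
	printf("%d\n", should_send_row(1, 11, 1, &at_subscribe));
	/* Rows from other replicas are always relayed (1). */
	printf("%d\n", should_send_row(2, 5, 1, &at_subscribe));
	return 0;
}

The wal.cc hunk is the other half of the same scenario: when such rows
come back through the applier, the local vclock component has already
been promoted, so wal_write() must not call vclock_follow() for them
again; hence the new vclock_get() guard.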