Tarantool development patches archive
* [tarantool-patches] [PATCH v5 0/2] force gc on running out of disk space
@ 2018-07-05 18:39 Konstantin Belyavskiy
  2018-07-05 18:39 ` [tarantool-patches] [PATCH v5 1/2] replication: rename thread from tx to tx_prio Konstantin Belyavskiy
  2018-07-05 18:39 ` [tarantool-patches] [PATCH v5 2/2] replication: force gc to clean xdir on ENOSPC err Konstantin Belyavskiy
  0 siblings, 2 replies; 6+ messages in thread
From: Konstantin Belyavskiy @ 2018-07-05 18:39 UTC (permalink / raw)
  To: tarantool-patches

The garbage collector does not delete xlogs until the replica
notifies the master with a newer vclock. This can lead to running
out of disk space, which is the wrong behaviour since it will stop
the master.
Fix it by forcing gc to clean the xlogs held by the replica with
the highest lag.
Add an error injection and a test.

Changes in V2:
- Promoting error from wal_thread to tx via cpipe.
Changes in V3:
- Delete consumers, but only for replicas (not for backups).
Changes in V4:
- Bug fix and small changes according to review.
Changes in V5:
- Compare the signature of the oldest replica with that of the oldest
  snapshot to keep, to prevent deletion when it would not free any
  disk space.
- Add say_crit() on consumer deletion with some details.

Ticket: https://github.com/tarantool/tarantool/issues/3397
Branch: https://github.com/tarantool/tarantool/compare/kbelyavs/gh-3397-force-del-logs-on-no-disk-space

Konstantin Belyavskiy (2):
  replication: rename thread from tx to tx_prio
  replication: force gc to clean xdir on ENOSPC err

 src/box/box.cc                                     |   1 +
 src/box/gc.c                                       |  62 +++++++++++
 src/box/gc.h                                       |  18 +++
 src/box/relay.cc                                   |   1 +
 src/box/wal.cc                                     |  44 ++++++--
 src/errinj.h                                       |   1 +
 src/fio.c                                          |   7 ++
 test/box/errinj.result                             |   2 +
 test/replication/kick_dead_replica_on_enspc.result | 121 +++++++++++++++++++++
 .../kick_dead_replica_on_enspc.test.lua            |  56 ++++++++++
 test/replication/suite.ini                         |   2 +-
 11 files changed, 306 insertions(+), 9 deletions(-)
 create mode 100644 test/replication/kick_dead_replica_on_enspc.result
 create mode 100644 test/replication/kick_dead_replica_on_enspc.test.lua

-- 
2.14.3 (Apple Git-98)


* [tarantool-patches] [PATCH v5 1/2] replication: rename thread from tx to tx_prio
  2018-07-05 18:39 [tarantool-patches] [PATCH v5 0/2] force gc on running out of disk space Konstantin Belyavskiy
@ 2018-07-05 18:39 ` Konstantin Belyavskiy
  2018-07-05 18:39 ` [tarantool-patches] [PATCH v5 2/2] replication: force gc to clean xdir on ENOSPC err Konstantin Belyavskiy
  1 sibling, 0 replies; 6+ messages in thread
From: Konstantin Belyavskiy @ 2018-07-05 18:39 UTC (permalink / raw)
  To: tarantool-patches

There are two different threads: 'tx' and 'tx_prio'; the latter
does not support yield(). Rename the pipe to avoid misunderstanding.

Needed for #3397
---
 src/box/wal.cc | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/src/box/wal.cc b/src/box/wal.cc
index 099c70caa..93c350e1f 100644
--- a/src/box/wal.cc
+++ b/src/box/wal.cc
@@ -59,8 +59,11 @@ struct wal_thread {
 	struct cord cord;
 	/** A pipe from 'tx' thread to 'wal' */
 	struct cpipe wal_pipe;
-	/** Return pipe from 'wal' to tx' */
-	struct cpipe tx_pipe;
+	/**
+	 * Return pipe from 'wal' to 'tx'. This is a
+	 * priority pipe and DOES NOT support yield.
+	 */
+	struct cpipe tx_prio_pipe;
 };
 
 /*
@@ -154,7 +157,7 @@ static void
 tx_schedule_commit(struct cmsg *msg);
 
 static struct cmsg_hop wal_request_route[] = {
-	{wal_write_to_disk, &wal_thread.tx_pipe},
+	{wal_write_to_disk, &wal_thread.tx_prio_pipe},
 	{tx_schedule_commit, NULL},
 };
 
@@ -414,7 +417,7 @@ wal_checkpoint(struct vclock *vclock, bool rotate)
 		return 0;
 	}
 	static struct cmsg_hop wal_checkpoint_route[] = {
-		{wal_checkpoint_f, &wal_thread.tx_pipe},
+		{wal_checkpoint_f, &wal_thread.tx_prio_pipe},
 		{wal_checkpoint_done_f, NULL},
 	};
 	vclock_create(vclock);
@@ -453,7 +456,7 @@ wal_collect_garbage(int64_t lsn)
 	struct wal_gc_msg msg;
 	msg.lsn = lsn;
 	bool cancellable = fiber_set_cancellable(false);
-	cbus_call(&wal_thread.wal_pipe, &wal_thread.tx_pipe, &msg,
+	cbus_call(&wal_thread.wal_pipe, &wal_thread.tx_prio_pipe, &msg,
 		  wal_collect_garbage_f, NULL, TIMEOUT_INFINITY);
 	fiber_set_cancellable(cancellable);
 }
@@ -544,7 +547,7 @@ wal_writer_begin_rollback(struct wal_writer *writer)
 		 * list.
 		 */
 		{ wal_writer_clear_bus, &wal_thread.wal_pipe },
-		{ wal_writer_clear_bus, &wal_thread.tx_pipe },
+		{ wal_writer_clear_bus, &wal_thread.tx_prio_pipe },
 		/*
 		 * Step 2: writer->rollback queue contains all
 		 * messages which need to be rolled back,
@@ -562,7 +565,7 @@ wal_writer_begin_rollback(struct wal_writer *writer)
 	 * all input until rollback mode is off.
 	 */
 	cmsg_init(&writer->in_rollback, rollback_route);
-	cpipe_push(&wal_thread.tx_pipe, &writer->in_rollback);
+	cpipe_push(&wal_thread.tx_prio_pipe, &writer->in_rollback);
 }
 
 static void
@@ -691,7 +694,7 @@ wal_thread_f(va_list ap)
 	 * endpoint, to ensure that WAL messages are delivered
 	 * even when tx fiber pool is used up by net messages.
 	 */
-	cpipe_create(&wal_thread.tx_pipe, "tx_prio");
+	cpipe_create(&wal_thread.tx_prio_pipe, "tx_prio");
 
 	cbus_loop(&endpoint);
 
@@ -703,7 +706,7 @@ wal_thread_f(va_list ap)
 	if (xlog_is_open(&vy_log_writer.xlog))
 		xlog_close(&vy_log_writer.xlog, false);
 
-	cpipe_destroy(&wal_thread.tx_pipe);
+	cpipe_destroy(&wal_thread.tx_prio_pipe);
 	return 0;
 }
 
@@ -843,7 +846,7 @@ wal_write_vy_log(struct journal_entry *entry)
 	struct wal_write_vy_log_msg msg;
 	msg.entry= entry;
 	bool cancellable = fiber_set_cancellable(false);
-	int rc = cbus_call(&wal_thread.wal_pipe, &wal_thread.tx_pipe, &msg,
+	int rc = cbus_call(&wal_thread.wal_pipe, &wal_thread.tx_prio_pipe, &msg,
 			   wal_write_vy_log_f, NULL, TIMEOUT_INFINITY);
 	fiber_set_cancellable(cancellable);
 	return rc;
@@ -863,7 +866,7 @@ wal_rotate_vy_log()
 {
 	struct cbus_call_msg msg;
 	bool cancellable = fiber_set_cancellable(false);
-	cbus_call(&wal_thread.wal_pipe, &wal_thread.tx_pipe, &msg,
+	cbus_call(&wal_thread.wal_pipe, &wal_thread.tx_prio_pipe, &msg,
 		  wal_rotate_vy_log_f, NULL, TIMEOUT_INFINITY);
 	fiber_set_cancellable(cancellable);
 }
-- 
2.14.3 (Apple Git-98)


* [tarantool-patches] [PATCH v5 2/2] replication: force gc to clean xdir on ENOSPC err
  2018-07-05 18:39 [tarantool-patches] [PATCH v5 0/2] force gc on running out of disk space Konstantin Belyavskiy
  2018-07-05 18:39 ` [tarantool-patches] [PATCH v5 1/2] replication: rename thread from tx to tx_prio Konstantin Belyavskiy
@ 2018-07-05 18:39 ` Konstantin Belyavskiy
  2018-07-06 17:00   ` [tarantool-patches] " Konstantin Osipov
  1 sibling, 1 reply; 6+ messages in thread
From: Konstantin Belyavskiy @ 2018-07-05 18:39 UTC (permalink / raw)
  To: tarantool-patches

The garbage collector does not delete xlogs until the replica
notifies the master with a newer vclock. This can lead to running
out of disk space, which is the wrong behaviour since it will stop
the master.
Fix it by forcing gc to clean the xlogs held by the replica with
the highest lag.
Add an error injection and a test.

Changes in V2:
- Promoting error from wal_thread to tx via cpipe.
Changes in V3:
- Delete consumers, but only for replicas (not for backups).
Changes in V4:
- Bug fix and small changes according to review.
Changes in V5:
- Compare the signature of the oldest replica with that of the oldest
  snapshot to keep, to prevent deletion when it would not free any
  disk space.
- Add say_crit() on consumer deletion with some details.

Closes #3397
---
 src/box/box.cc                                     |   1 +
 src/box/gc.c                                       |  62 +++++++++++
 src/box/gc.h                                       |  18 +++
 src/box/relay.cc                                   |   1 +
 src/box/wal.cc                                     |  25 +++++
 src/errinj.h                                       |   1 +
 src/fio.c                                          |   7 ++
 test/box/errinj.result                             |   2 +
 test/replication/kick_dead_replica_on_enspc.result | 121 +++++++++++++++++++++
 .../kick_dead_replica_on_enspc.test.lua            |  56 ++++++++++
 test/replication/suite.ini                         |   2 +-
 11 files changed, 295 insertions(+), 1 deletion(-)
 create mode 100644 test/replication/kick_dead_replica_on_enspc.result
 create mode 100644 test/replication/kick_dead_replica_on_enspc.test.lua

diff --git a/src/box/box.cc b/src/box/box.cc
index e3eb2738f..ba894c33a 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -1370,6 +1370,7 @@ box_process_join(struct ev_io *io, struct xrow_header *header)
 	replica = replica_by_uuid(&instance_uuid);
 	assert(replica != NULL);
 	replica->gc = gc;
+	gc_consumer_set_replica(gc, replica);
 	gc_guard.is_active = false;
 
 	/* Remember master's vclock after the last request */
diff --git a/src/box/gc.c b/src/box/gc.c
index 12e68f3dc..b2a275e77 100644
--- a/src/box/gc.c
+++ b/src/box/gc.c
@@ -61,6 +61,8 @@ struct gc_consumer {
 	char *name;
 	/** The vclock signature tracked by this consumer. */
 	int64_t signature;
+	/** Replica associated with consumer (if any). */
+	struct replica *replica;
 };
 
 typedef rb_tree(struct gc_consumer) gc_tree_t;
@@ -123,10 +125,18 @@ gc_consumer_new(const char *name, int64_t signature)
 	return consumer;
 }
 
+void
+gc_consumer_set_replica(struct gc_consumer *gc, struct replica *replica)
+{
+	gc->replica = replica;
+}
+
 /** Free a consumer object. */
 static void
 gc_consumer_delete(struct gc_consumer *consumer)
 {
+	if (consumer->replica != NULL)
+		consumer->replica->gc = NULL;
 	free(consumer->name);
 	TRASH(consumer);
 	free(consumer);
@@ -216,6 +226,58 @@ gc_set_checkpoint_count(int checkpoint_count)
 	gc.checkpoint_count = checkpoint_count;
 }
 
+void
+gc_xdir_clean_notify()
+{
+	/*
+	 * Compare the current time with the time of the last run.
+	 * This is needed in case of repeated failures, to avoid
+	 * deleting all replicas.
+	 */
+	static double prev_time = 0.;
+	double cur_time = ev_monotonic_time();
+	if (cur_time - prev_time < 1.)
+		return;
+	prev_time = cur_time;
+	struct gc_consumer *leftmost =
+	    gc_tree_first(&gc.consumers);
+	/*
+	 * Exit if no consumers are left or if this consumer is
+	 * not associated with a replica (e.g. a backup).
+	 */
+	if (leftmost == NULL || leftmost->replica == NULL)
+		return;
+	/*
+	 * We have to maintain @checkpoint_count oldest snapshots,
+	 * plus we can't remove snapshots that are still in use.
+	 * So if the leftmost replica has a signature greater than
+	 * or equal to that of the oldest checkpoint that must be
+	 * preserved, there is nothing to do.
+	 */
+	struct checkpoint_iterator checkpoints;
+	checkpoint_iterator_init(&checkpoints);
+	assert(gc.checkpoint_count > 0);
+	const struct vclock *vclock;
+	for (int i = 0; i < gc.checkpoint_count; i++)
+		if ((vclock = checkpoint_iterator_prev(&checkpoints)) == NULL)
+			return;
+	if (leftmost->signature >= vclock_sum(vclock))
+		return;
+	int64_t signature = leftmost->signature;
+	while (true) {
+		say_crit("remove replica with the oldest signature = %lld"
+		         " and uuid = %s", (long long) signature,
+			 tt_uuid_str(&leftmost->replica->uuid));
+		gc_consumer_unregister(leftmost);
+		leftmost = gc_tree_first(&gc.consumers);
+		if (leftmost == NULL || leftmost->replica == NULL ||
+		    leftmost->signature > signature) {
+			gc_run();
+			return;
+		}
+	}
+}
+
 struct gc_consumer *
 gc_consumer_register(const char *name, int64_t signature)
 {
diff --git a/src/box/gc.h b/src/box/gc.h
index 634ce6d38..83b34b53b 100644
--- a/src/box/gc.h
+++ b/src/box/gc.h
@@ -31,9 +31,12 @@
  * SUCH DAMAGE.
  */
 
+#include <stdbool.h>
 #include <stddef.h>
 #include <stdint.h>
 
+#include "replication.h"
+
 #if defined(__cplusplus)
 extern "C" {
 #endif /* defined(__cplusplus) */
@@ -81,6 +84,12 @@ gc_set_checkpoint_count(int checkpoint_count);
 struct gc_consumer *
 gc_consumer_register(const char *name, int64_t signature);
 
+/**
+ * Bind consumer with associated replica (if any).
+ */
+void
+gc_consumer_set_replica(struct gc_consumer *gc, struct replica *replica);
+
 /**
  * Unregister a consumer and invoke garbage collection
  * if needed.
@@ -88,6 +97,15 @@ gc_consumer_register(const char *name, int64_t signature);
 void
 gc_consumer_unregister(struct gc_consumer *consumer);
 
+/**
+ * Delete the consumer with the oldest vclock and start
+ * garbage collection. If that frees nothing, proceed to
+ * the next consumer, and so on. Intended for the case of
+ * running out of disk space due to a disconnected replica.
+ */
+void
+gc_xdir_clean_notify();
+
 /**
  * Advance the vclock signature tracked by a consumer and
  * invoke garbage collection if needed.
diff --git a/src/box/relay.cc b/src/box/relay.cc
index d2ceaf110..c317775a4 100644
--- a/src/box/relay.cc
+++ b/src/box/relay.cc
@@ -535,6 +535,7 @@ relay_subscribe(int fd, uint64_t sync, struct replica *replica,
 			vclock_sum(replica_clock));
 		if (replica->gc == NULL)
 			diag_raise();
+		gc_consumer_set_replica(replica->gc, replica);
 	}
 
 	struct relay relay;
diff --git a/src/box/wal.cc b/src/box/wal.cc
index 93c350e1f..f6de97cef 100644
--- a/src/box/wal.cc
+++ b/src/box/wal.cc
@@ -41,6 +41,7 @@
 #include "cbus.h"
 #include "coio_task.h"
 #include "replication.h"
+#include "gc.h"
 
 
 const char *wal_mode_STRS[] = { "none", "write", "fsync", NULL };
@@ -64,6 +65,8 @@ struct wal_thread {
 	 * priority pipe and DOES NOT support yield.
 	 */
 	struct cpipe tx_prio_pipe;
+	/** Return pipe from 'wal' to 'tx'. */
+	struct cpipe tx_pipe;
 };
 
 /*
@@ -584,6 +587,13 @@ wal_assign_lsn(struct wal_writer *writer, struct xrow_header **row,
 	}
 }
 
+static void
+gc_status_update(struct cmsg *msg)
+{
+	gc_xdir_clean_notify();
+	free(msg);
+}
+
 static void
 wal_write_to_disk(struct cmsg *msg)
 {
@@ -655,6 +665,19 @@ done:
 		/* Until we can pass the error to tx, log it and clear. */
 		error_log(error);
 		diag_clear(diag_get());
+		if (errno == ENOSPC) {
+			struct cmsg *msg =
+			    (struct cmsg*)calloc(1, sizeof(struct cmsg));
+			if (msg == NULL) {
+				say_error("failed to allocate cmsg");
+			} else {
+				static const struct cmsg_hop route[] = {
+					{gc_status_update, NULL}
+				};
+				cmsg_init(msg, route);
+				cpipe_push(&wal_thread.tx_pipe, msg);
+			}
+		}
 	}
 	/*
 	 * We need to start rollback from the first request
@@ -695,6 +718,7 @@ wal_thread_f(va_list ap)
 	 * even when tx fiber pool is used up by net messages.
 	 */
 	cpipe_create(&wal_thread.tx_prio_pipe, "tx_prio");
+	cpipe_create(&wal_thread.tx_pipe, "tx");
 
 	cbus_loop(&endpoint);
 
@@ -707,6 +731,7 @@ wal_thread_f(va_list ap)
 		xlog_close(&vy_log_writer.xlog, false);
 
 	cpipe_destroy(&wal_thread.tx_prio_pipe);
+	cpipe_destroy(&wal_thread.tx_pipe);
 	return 0;
 }
 
diff --git a/src/errinj.h b/src/errinj.h
index 895d938d5..11f1b7fdc 100644
--- a/src/errinj.h
+++ b/src/errinj.h
@@ -112,6 +112,7 @@ struct errinj {
 	_(ERRINJ_LOG_ROTATE, ERRINJ_BOOL, {.bparam = false}) \
 	_(ERRINJ_SNAP_COMMIT_DELAY, ERRINJ_BOOL, {.bparam = 0}) \
 	_(ERRINJ_SNAP_WRITE_ROW_TIMEOUT, ERRINJ_DOUBLE, {.dparam = 0}) \
+	_(ERRINJ_NO_DISK_SPACE, ERRINJ_BOOL, {.bparam = false}) \
 
 ENUM0(errinj_id, ERRINJ_LIST);
 extern struct errinj errinjs[];
diff --git a/src/fio.c b/src/fio.c
index b79d3d058..cdea11e87 100644
--- a/src/fio.c
+++ b/src/fio.c
@@ -29,6 +29,7 @@
  * SUCH DAMAGE.
  */
 #include "fio.h"
+#include "errinj.h"
 
 #include <sys/types.h>
 
@@ -141,6 +142,12 @@ fio_writev(int fd, struct iovec *iov, int iovcnt)
 	ssize_t nwr;
 restart:
 	nwr = writev(fd, iov, iovcnt);
+	/* Simulate running out of disk space to force the gc to clean logs. */
+	struct errinj *inj = errinj(ERRINJ_NO_DISK_SPACE, ERRINJ_BOOL);
+	if (inj != NULL && inj->bparam) {
+		errno = ENOSPC;
+		nwr = -1;
+	}
 	if (nwr < 0) {
 		if (errno == EINTR) {
 			errno = 0;
diff --git a/test/box/errinj.result b/test/box/errinj.result
index 21a949965..a28688436 100644
--- a/test/box/errinj.result
+++ b/test/box/errinj.result
@@ -56,6 +56,8 @@ errinj.info()
     state: false
   ERRINJ_VY_RUN_WRITE:
     state: false
+  ERRINJ_NO_DISK_SPACE:
+    state: false
   ERRINJ_VY_LOG_FLUSH_DELAY:
     state: false
   ERRINJ_SNAP_COMMIT_DELAY:
diff --git a/test/replication/kick_dead_replica_on_enspc.result b/test/replication/kick_dead_replica_on_enspc.result
new file mode 100644
index 000000000..53ecc86a8
--- /dev/null
+++ b/test/replication/kick_dead_replica_on_enspc.result
@@ -0,0 +1,121 @@
+env = require('test_run')
+---
+...
+vclock_diff = require('fast_replica').vclock_diff
+---
+...
+test_run = env.new()
+---
+...
+SERVERS = { 'autobootstrap1', 'autobootstrap2', 'autobootstrap3' }
+---
+...
+--
+-- Start servers
+--
+test_run:create_cluster(SERVERS)
+---
+...
+--
+-- Wait for full mesh
+--
+test_run:wait_fullmesh(SERVERS)
+---
+...
+--
+-- Check vclock
+--
+vclock1 = test_run:get_vclock('autobootstrap1')
+---
+...
+vclock_diff(vclock1, test_run:get_vclock('autobootstrap2'))
+---
+- 0
+...
+vclock_diff(vclock1, test_run:get_vclock('autobootstrap3'))
+---
+- 0
+...
+--
+-- Switch off third replica
+--
+test_run:cmd("switch autobootstrap3")
+---
+- true
+...
+repl = box.cfg.replication
+---
+...
+box.cfg{replication = ""}
+---
+...
+--
+-- Insert rows
+--
+test_run:cmd("switch autobootstrap1")
+---
+- true
+...
+s = box.space.test
+---
+...
+for i = 1, 5 do s:insert{i} box.snapshot() end
+---
+...
+s:select()
+---
+- - [1]
+  - [2]
+  - [3]
+  - [4]
+  - [5]
+...
+fio = require('fio')
+---
+...
+path = fio.pathjoin(fio.abspath("."), 'autobootstrap1/*.xlog')
+---
+...
+-- Depending on whether the first master is the leader, it should be 5 or 6.
+#fio.glob(path) >= 5
+---
+- true
+...
+errinj = box.error.injection
+---
+...
+errinj.set("ERRINJ_NO_DISK_SPACE", true)
+---
+- ok
+...
+function insert(a) s:insert(a) end
+---
+...
+_, err = pcall(insert, {6})
+---
+...
+err:match("ailed to write")
+---
+- ailed to write
+...
+-- give gc a little time to finish its job
+fiber = require('fiber')
+---
+...
+while #fio.glob(path) ~= 2 do fiber.sleep(0.01) end
+---
+...
+#fio.glob(path)
+---
+- 2
+...
+test_run:cmd("switch default")
+---
+- true
+...
+--
+-- Stop servers
+--
+test_run:drop_cluster(SERVERS)
+---
+...
diff --git a/test/replication/kick_dead_replica_on_enspc.test.lua b/test/replication/kick_dead_replica_on_enspc.test.lua
new file mode 100644
index 000000000..88cb9df63
--- /dev/null
+++ b/test/replication/kick_dead_replica_on_enspc.test.lua
@@ -0,0 +1,56 @@
+env = require('test_run')
+vclock_diff = require('fast_replica').vclock_diff
+test_run = env.new()
+
+
+SERVERS = { 'autobootstrap1', 'autobootstrap2', 'autobootstrap3' }
+
+--
+-- Start servers
+--
+test_run:create_cluster(SERVERS)
+
+--
+-- Wait for full mesh
+--
+test_run:wait_fullmesh(SERVERS)
+
+--
+-- Check vclock
+--
+vclock1 = test_run:get_vclock('autobootstrap1')
+vclock_diff(vclock1, test_run:get_vclock('autobootstrap2'))
+vclock_diff(vclock1, test_run:get_vclock('autobootstrap3'))
+
+--
+-- Switch off third replica
+--
+test_run:cmd("switch autobootstrap3")
+repl = box.cfg.replication
+box.cfg{replication = ""}
+
+--
+-- Insert rows
+--
+test_run:cmd("switch autobootstrap1")
+s = box.space.test
+for i = 1, 5 do s:insert{i} box.snapshot() end
+s:select()
+fio = require('fio')
+path = fio.pathjoin(fio.abspath("."), 'autobootstrap1/*.xlog')
+-- Depending on whether the first master is the leader, it should be 5 or 6.
+#fio.glob(path) >= 5
+errinj = box.error.injection
+errinj.set("ERRINJ_NO_DISK_SPACE", true)
+function insert(a) s:insert(a) end
+_, err = pcall(insert, {6})
+err:match("ailed to write")
+-- give gc a little time to finish its job
+fiber = require('fiber')
+while #fio.glob(path) ~= 2 do fiber.sleep(0.01) end
+#fio.glob(path)
+test_run:cmd("switch default")
+--
+-- Stop servers
+--
+test_run:drop_cluster(SERVERS)
diff --git a/test/replication/suite.ini b/test/replication/suite.ini
index b489add58..27815acb6 100644
--- a/test/replication/suite.ini
+++ b/test/replication/suite.ini
@@ -3,7 +3,7 @@ core = tarantool
 script =  master.lua
 description = tarantool/box, replication
 disabled = consistent.test.lua
-release_disabled = catch.test.lua errinj.test.lua gc.test.lua before_replace.test.lua quorum.test.lua recover_missing_xlog.test.lua
+release_disabled = catch.test.lua errinj.test.lua gc.test.lua before_replace.test.lua kick_dead_replica_on_enspc.test.lua quorum.test.lua recover_missing_xlog.test.lua
 config = suite.cfg
 lua_libs = lua/fast_replica.lua
 long_run = prune.test.lua
-- 
2.14.3 (Apple Git-98)


* [tarantool-patches] Re: [PATCH v5 2/2] replication: force gc to clean xdir on ENOSPC err
  2018-07-05 18:39 ` [tarantool-patches] [PATCH v5 2/2] replication: force gc to clean xdir on ENOSPC err Konstantin Belyavskiy
@ 2018-07-06 17:00   ` Konstantin Osipov
  2018-07-10 15:10     ` [tarantool-patches] " Konstantin Belyavskiy
  0 siblings, 1 reply; 6+ messages in thread
From: Konstantin Osipov @ 2018-07-06 17:00 UTC (permalink / raw)
  To: tarantool-patches

* Konstantin Belyavskiy <k.belyavskiy@tarantool.org> [18/07/05 22:56]:
> The garbage collector does not delete xlogs until the replica
> notifies the master with a newer vclock. This can lead to running
> out of disk space, which is the wrong behaviour since it will stop
> the master.
> Fix it by forcing gc to clean the xlogs held by the replica with
> the highest lag.
> Add an error injection and a test.

Please rebase this patch to the latest 1.10

Please use relay_stop() as a callback to unregister the consumer.

> +void
> +gc_xdir_clean_notify()
> +{
> +	/*
> +	 * Compare the current time with the time of the last run.
> +	 * This is needed in case of multiple failures to prevent
> +	 * from deleting all replicas.


> +	 */
> +	static double prev_time = 0.;
> +	double cur_time = ev_monotonic_time();
> +	if (cur_time - prev_time < 1.)
> +		return;

This throttles gc, which is good. But we would still get a lot of messages
from WAL thread. Maybe we should move the throttling to the WAL
side? This would spare us from creating the message as well.
Ideally we should use a single statically allocated message from
the WAL for this purpose (but still throttle it as well).

Plus, eventually you're going to reach a state when kicking off
replicas doesn't help with space. In this case you're going to
have a lot of messages, and they are going to be all useless.
This also suggests that throttling should be done on the WAL side.
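
The suggestion above (throttle on the WAL side and reuse a single
statically allocated message) could look roughly like the sketch below.
This is a minimal, self-contained illustration of the throttling
pattern only; the function name wal_gc_throttle_ok() and the usage in
the trailing comment are hypothetical and not part of the patch.

```c
#include <stdbool.h>

/*
 * Return true at most once per second. 'now' is a monotonic
 * timestamp in seconds (e.g. from ev_monotonic_time()). This is
 * the same prev_time/cur_time pattern as in gc_xdir_clean_notify(),
 * but callable on the WAL side, so that when throttled no cbus
 * message is created at all.
 */
static bool
wal_gc_throttle_ok(double now)
{
	static double prev_time = 0.;
	if (now - prev_time < 1.)
		return false;
	prev_time = now;
	return true;
}

/*
 * Hypothetical use on ENOSPC in wal_write_to_disk(), with a single
 * statically allocated message instead of a calloc'ed one:
 *
 *	static struct cmsg gc_notify_msg;
 *	if (errno == ENOSPC && wal_gc_throttle_ok(ev_monotonic_time())) {
 *		cmsg_init(&gc_notify_msg, gc_notify_route);
 *		cpipe_push(&wal_thread.tx_pipe, &gc_notify_msg);
 *	}
 */
```

Since the throttle allows at most one message per second, a single
static cmsg is safe as long as tx finishes processing it before the
next push; an extra in-flight flag would make that fully robust.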

> +	prev_time = cur_time;
> +	struct gc_consumer *leftmost =
> +	    gc_tree_first(&gc.consumers);
> +	/*
> +	 * Exit if no consumers left or if this consumer is
> +	 * not associated with replica (backup for example).
> +	 */

> +	if (leftmost == NULL || leftmost->replica == NULL)
> +		return;


> +	/*
> +	 * We have to maintain @checkpoint_count oldest snapshots,
> +	 * plus we can't remove snapshots that are still in use.
> +	 * So if the leftmost replica has a signature greater than
> +	 * or equal to that of the oldest checkpoint that must be
> +	 * preserved, there is nothing to do.
> +	 */

This comment is useful, but the search in checkpoint array is not.
What about other possible types of consumers which are not
dispensable anyway, e.g. backups? What if they are holding a
reference as well? 

Apparently this check is taking care of the problem:

> +	if (leftmost == NULL || leftmost->replica == NULL)
> +		return;

Could you write a test with two 
"abandoned" replicas, each holding an xlog file?


-- 
Konstantin Osipov, Moscow, Russia, +7 903 626 22 32
http://tarantool.io - www.twitter.com/kostja_osipov


* [tarantool-patches] Re: [tarantool-patches] Re: [PATCH v5 2/2] replication: force gc to clean xdir on ENOSPC err
  2018-07-06 17:00   ` [tarantool-patches] " Konstantin Osipov
@ 2018-07-10 15:10     ` Konstantin Belyavskiy
  2018-07-10 18:37       ` Konstantin Osipov
  0 siblings, 1 reply; 6+ messages in thread
From: Konstantin Belyavskiy @ 2018-07-10 15:10 UTC (permalink / raw)
  To: Konstantin Osipov; +Cc: tarantool-patches

>Friday, July 6, 2018, 20:00 +03:00 from Konstantin Osipov <kostja@tarantool.org>:
>
>* Konstantin Belyavskiy < k.belyavskiy@tarantool.org > [18/07/05 22:56]:
>> The garbage collector does not delete xlogs until the replica
>> notifies the master with a newer vclock. This can lead to running
>> out of disk space, which is the wrong behaviour since it will stop
>> the master.
>> Fix it by forcing gc to clean the xlogs held by the replica with
>> the highest lag.
>> Add an error injection and a test.
>
>Please rebase this patch to the latest 1.10
>
>Please use relay_stop() as a callback to unregister the consumer. 
Rebase to 1.10 - ok.

Using relay_stop() makes sense only together with replica_on_relay_stop(),
since relay_stop() itself actually does nothing with consumers.
Regarding replica_on_relay_stop(), the replica should be in "orphan" mode
to avoid an assertion in replica_delete(). There is also a problem with
monitoring, since the replica will leave the replication cluster and thus
silence the error.

On the other hand, with the implementation based on removing the consumer,
the replica, if it becomes active again, will get an LSN gap and we will
see an error.

1. Please give feedback on this section.
2. If not using relay_stop(), which branch should I use as a base: 1.9 or 1.10?

>
>> +void
>> +gc_xdir_clean_notify()
>> +{
>> +	/*
>> +	 * Compare the current time with the time of the last run.
>> +	 * This is needed in case of multiple failures to prevent
>> +	 * from deleting all replicas.
>
>
>> +	 */
>> +	static double prev_time = 0.;
>> +	double cur_time = ev_monotonic_time();
>> +	if (cur_time - prev_time < 1.)
>> +		return;
>
>This throttles gc, which is good. But we would still get a lot of messages
>from WAL thread. Maybe we should move the throttling to the WAL
>side? This would spare us from creating the message as well.
>Ideally we should use a single statically allocated message from
>the WAL for this purpose (but still throttle it as well).
>
>Plus, eventually you're going to reach a state when kicking off
>replicas doesn't help with space. In this case you're going to
>have a lot of messages, and they are going to be all useless.
>This also suggests that throttling should be done on the WAL side.
Done.
>
>> +	prev_time = cur_time;
>> +	struct gc_consumer *leftmost =
>> +	    gc_tree_first(&gc.consumers);
>> +	/*
>> +	 * Exit if no consumers left or if this consumer is
>> +	 * not associated with replica (backup for example).
>> +	 */
>
>> +	if (leftmost == NULL || leftmost->replica == NULL)
>> +		return;
>
>
>> +	/*
>> +	 * We have to maintain @checkpoint_count oldest snapshots,
>> +	 * plus we can't remove snapshots that are still in use.
>> +	 * So if the leftmost replica has a signature greater than
>> +	 * or equal to that of the oldest checkpoint that must be
>> +	 * preserved, there is nothing to do.
>> +	 */
>
>This comment is useful, but the search in checkpoint array is not.
>What about possible other types of consumers which are not
>dispensable with anyway, e.g. backups? What if they are holding a
>reference as well? 
>
>Apparently this check is taking care of the problem:
In the previous review you already mentioned this problem: searching
the checkpoint array helps in the case when the last stored checkpoint
has exactly the same value.
But in general, the check below already prevents replicas from deletion.
Should I keep the search in the checkpoint array?
>
>
>> +	if (leftmost == NULL || leftmost->replica == NULL)
>> +		return;
>
>Could you write a test with two 
>"abandoned" replicas, each holding an xlog file? 
Which xlogs, the same one or a different one for each replica?
>
>
>-- 
>Konstantin Osipov, Moscow, Russia,  +7 903 626 22 32
>http://tarantool.io -  www.twitter.com/kostja_osipov
>


Best regards,
Konstantin Belyavskiy
k.belyavskiy@tarantool.org



* [tarantool-patches] Re: [PATCH v5 2/2] replication: force gc to clean xdir on ENOSPC err
  2018-07-10 15:10     ` [tarantool-patches] " Konstantin Belyavskiy
@ 2018-07-10 18:37       ` Konstantin Osipov
  0 siblings, 0 replies; 6+ messages in thread
From: Konstantin Osipov @ 2018-07-10 18:37 UTC (permalink / raw)
  To: Konstantin Belyavskiy; +Cc: tarantool-patches

* Konstantin Belyavskiy <k.belyavskiy@tarantool.org> [18/07/10 19:19]:
> Rebase to 1.10 - ok.
> 
> Using relay_stop() makes sense only together with replica_on_relay_stop(),
> since relay_stop() itself actually does nothing with consumers.
> Regarding replica_on_relay_stop(), the replica should be in "orphan" mode
> to avoid an assertion in replica_delete(). There is also a problem with
> monitoring, since the replica will leave the replication cluster and thus
> silence the error.
> 
> On the other hand, with the implementation based on removing the consumer,
> the replica, if it becomes active again, will get an LSN gap and we will
> see an error.

This is not a problem - it will rejoin once rejoin support is in the trunk.
> 
> 1. Please give feedback on this section.
> 2. If not using relay_stop(), which branch should I use as a base: 1.9 or 1.10?

1.10
> >Could you write a test with two 
> >"abandoned" replicas, each holding an xlog file? 
> Which xlogs, the same one or a different one for each replica?

Different one.

I know I skipped some questions - let's discuss the rest
separately; I hope the above answers help.


-- 
Konstantin Osipov, Moscow, Russia, +7 903 626 22 32
http://tarantool.io - www.twitter.com/kostja_osipov

^ permalink raw reply	[flat|nested] 6+ messages in thread
