Tarantool development patches archive
* [tarantool-patches] [PATCH 0/2] Deprecate rows_per_wal
@ 2019-09-07 16:05 Vladislav Shpilevoy
  2019-09-07 16:05 ` [tarantool-patches] [PATCH 1/2] wal: deprecate rows_per_wal in favour of wal_max_size Vladislav Shpilevoy
  2019-09-07 16:05 ` [tarantool-patches] [PATCH 2/2] wal: drop rows_per_wal option Vladislav Shpilevoy
  0 siblings, 2 replies; 6+ messages in thread
From: Vladislav Shpilevoy @ 2019-09-07 16:05 UTC
  To: tarantool-patches; +Cc: alexander.turenko

The patchset deprecates and drops the rows_per_wal box.cfg option. It
is done in two separate commits so that the first one can be
cherry-picked to all branches >= 1.10 and < master.

The second commit actually drops the option - box.cfg will raise an
error on it. It is for the master branch only.
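
(Not part of the patches - a minimal sketch of the user-visible
behaviour around the deprecation; the warning text follows the
'Deprecated option %s, please use %s instead' pattern from
load_cfg.lua.)

    -- Still accepted after the first commit, with a warning logged:
    -- 'Deprecated option rows_per_wal, please use wal_max_size instead'
    box.cfg{rows_per_wal = 1000000}
    -- The modern equivalent limits WAL files by size, in bytes:
    -- box.cfg{wal_max_size = 256 * 1024 * 1024}
    -- After the second commit (master only) the old option is rejected
    -- by box.cfg with an error instead of a warning.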

Branch: http://github.com/tarantool/tarantool/tree/gerold103/gh-3762-rows_per_wal-deprecate
Issue: https://github.com/tarantool/tarantool/issues/3762

Vladislav Shpilevoy (2):
  wal: deprecate rows_per_wal in favour of wal_max_size
  wal: drop rows_per_wal option

 src/box/box.cc                       |  30 +-------
 src/box/lua/load_cfg.lua             |  38 ++++++----
 src/box/wal.c                        |  14 +---
 src/box/wal.h                        |  13 +++-
 test/app-tap/init_script.result      |  39 +++++-----
 test/app-tap/snapshot.test.lua       |   2 +-
 test/app/app.lua                     |   2 +-
 test/app/fiber.result                |   6 +-
 test/app/fiber.test.lua              |   4 +-
 test/box-py/box.lua                  |   2 +-
 test/box-tap/cfg.test.lua            |   3 +-
 test/box/admin.result                |   2 -
 test/box/cfg.result                  |   4 -
 test/box/configuration.result        | 107 ---------------------------
 test/box/proxy.lua                   |   2 +-
 test/box/tiny.lua                    |   1 -
 test/engine/box.lua                  |   2 +-
 test/engine_long/box.lua             |   1 -
 test/long_run-py/box.lua             |   1 -
 test/vinyl/vinyl.lua                 |   1 -
 test/xlog-py/box.lua                 |   1 -
 test/xlog/checkpoint_daemon.result   |  11 ++-
 test/xlog/checkpoint_daemon.test.lua |   9 ++-
 test/xlog/errinj.result              |   7 +-
 test/xlog/errinj.test.lua            |   5 +-
 test/xlog/panic.lua                  |   1 -
 test/xlog/upgrade/fill.lua           |   2 +-
 test/xlog/xlog.lua                   |   2 +-
 28 files changed, 97 insertions(+), 215 deletions(-)
 delete mode 100644 test/box/configuration.result

-- 
2.20.1 (Apple Git-117)

* [tarantool-patches] [PATCH 1/2] wal: deprecate rows_per_wal in favour of wal_max_size
  2019-09-07 16:05 [tarantool-patches] [PATCH 0/2] Deprecate rows_per_wal Vladislav Shpilevoy
@ 2019-09-07 16:05 ` Vladislav Shpilevoy
  2019-09-08 12:53   ` [tarantool-patches] " Vladislav Shpilevoy
  2019-09-10 18:50   ` Kirill Yukhin
  2019-09-07 16:05 ` [tarantool-patches] [PATCH 2/2] wal: drop rows_per_wal option Vladislav Shpilevoy
  1 sibling, 2 replies; 6+ messages in thread
From: Vladislav Shpilevoy @ 2019-09-07 16:05 UTC
  To: tarantool-patches; +Cc: alexander.turenko

rows_per_wal does not allow limiting the size of WAL files
properly: a user would need to estimate the size of each WAL row
to do that. Now WAL supports the wal_max_size option, which limits
the WAL size much more precisely.

Part of #3762
---
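
(Reviewer notes, not part of the commit message; all values below are
illustrative.)

The precision argument in a nutshell: to cap .xlog files at ~256 MB
with the old option one had to assume an average row size and derive a
row count, while wal_max_size states the byte limit directly:

    box.cfg{rows_per_wal = 256 * 1024 * 1024 / 512} -- assumes ~512 B rows
    -- box.cfg{wal_max_size = 256 * 1024 * 1024}    -- exact byte bound

The transform contract introduced below, sketched standalone: a
transform receives (old_value, new_value) and returns the values to
store under the old and the new key, where nil drops a key:

    local translate_cfg = {
        -- Keep both keys: rows_per_wal stays valid, and an explicitly
        -- set wal_max_size is preserved as-is.
        rows_per_wal = {'wal_max_size', function(old, new)
            return old, new
        end},
        -- Convert and drop: slab_alloc_arena (in GB) becomes
        -- memtx_memory (in bytes); nil erases the old key.
        slab_alloc_arena = {'memtx_memory', function(old)
            return nil, math.floor(old * 1024 * 1024 * 1024)
        end},
    }

With the updated upgrade_cfg(), {rows_per_wal = 50} is returned
unchanged (plus a warning), while {slab_alloc_arena = 1} becomes
{memtx_memory = 1073741824}.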
 src/box/lua/load_cfg.lua | 39 ++++++++++++++++++++++++++++-----------
 1 file changed, 28 insertions(+), 11 deletions(-)

diff --git a/src/box/lua/load_cfg.lua b/src/box/lua/load_cfg.lua
index d402468ab..4432e0d08 100644
--- a/src/box/lua/load_cfg.lua
+++ b/src/box/lua/load_cfg.lua
@@ -283,19 +283,32 @@ local function convert_gb(size)
     return math.floor(size * 1024 * 1024 * 1024)
 end
 
--- Old to new config translation tables
+-- Old to new config translation tables. In case a translation is
+-- not 1-to-1, then a function can be used. It takes 2 parameters:
+-- value of the old option, value of the new if present. It
+-- returns two values - value to replace the old option and to
+-- replace the new one.
 local translate_cfg = {
     snapshot_count = {'checkpoint_count'},
     snapshot_period = {'checkpoint_interval'},
-    slab_alloc_arena = {'memtx_memory', convert_gb},
+    slab_alloc_arena = {'memtx_memory', function(old)
+        return nil, convert_gb(old)
+    end},
     slab_alloc_minimal = {'memtx_min_tuple_size'},
     slab_alloc_maximal = {'memtx_max_tuple_size'},
     snap_dir = {'memtx_dir'},
     logger = {'log'},
     logger_nonblock = {'log_nonblock'},
-    panic_on_snap_error = {'force_recovery', function (p) return not p end},
-    panic_on_wal_error = {'force_recovery', function (p) return not p end},
+    panic_on_snap_error = {'force_recovery', function(old)
+        return nil, not old end
+    },
+    panic_on_wal_error = {'force_recovery', function(old)
+        return nil, not old end
+    },
     replication_source = {'replication'},
+    rows_per_wal = {'wal_max_size', function(old, new)
+        return old, new
+    end},
 }
 
 -- Upgrade old config
@@ -307,19 +320,23 @@ local function upgrade_cfg(cfg, translate_cfg)
     for k, v in pairs(cfg) do
         local translation = translate_cfg[k]
         if translation ~= nil then
-            log.warn('Deprecated option %s, please use %s instead', k, translation[1])
-            local new_val
-            if translation[2] == nil then
+            local new_key = translation[1]
+            local transofm = translation[2]
+            log.warn('Deprecated option %s, please use %s instead', k, new_key)
+            local new_val_orig = cfg[new_key]
+            local old_val, new_val
+            if transofm == nil then
                 new_val = v
             else
-                new_val = translation[2](v)
+                old_val, new_val = transofm(v, new_val_orig)
             end
-            if cfg[translation[1]] ~= nil and
-               cfg[translation[1]] ~= new_val then
+            if new_val_orig ~= nil and
+               new_val_orig ~= new_val then
                 box.error(box.error.CFG, k,
                           'can not override a value for a deprecated option')
             end
-            result_cfg[translation[1]] = new_val
+            result_cfg[k] = old_val
+            result_cfg[new_key] = new_val
         else
             result_cfg[k] = v
         end
-- 
2.20.1 (Apple Git-117)

* [tarantool-patches] [PATCH 2/2] wal: drop rows_per_wal option
  2019-09-07 16:05 [tarantool-patches] [PATCH 0/2] Deprecate rows_per_wal Vladislav Shpilevoy
  2019-09-07 16:05 ` [tarantool-patches] [PATCH 1/2] wal: deprecate rows_per_wal in favour of wal_max_size Vladislav Shpilevoy
@ 2019-09-07 16:05 ` Vladislav Shpilevoy
  2019-09-08 12:53   ` [tarantool-patches] " Vladislav Shpilevoy
  1 sibling, 1 reply; 6+ messages in thread
From: Vladislav Shpilevoy @ 2019-09-07 16:05 UTC
  To: tarantool-patches; +Cc: alexander.turenko

The rows_per_wal option was deprecated because it can be covered by
wal_max_size. In order not to complicate the WAL code with that
option's support, this commit drops it completely.

In some tests the option was used to create several small xlog
files. Now the same is done via wal_max_size. Where needed, the
number of rows per WAL file is estimated as wal_max_size / 50,
because the struct xrow_header size is ~50 bytes, not counting
padding and the body.

Closes #3762
---
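
(A quick sanity check of the wal_max_size / 50 estimate, not part of
the commit message; the numbers come from the diff below.)

    -- test/app/app.lua: rows_per_wal = 50 became wal_max_size = 2500.
    -- Dividing by the ~50-byte struct xrow_header size recovers the
    -- old per-file row budget:
    assert(2500 / 50 == 50)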
 src/box/box.cc                       |  30 +-------
 src/box/lua/load_cfg.lua             |   5 --
 src/box/wal.c                        |  14 +---
 src/box/wal.h                        |  13 +++-
 test/app-tap/init_script.result      |  39 +++++-----
 test/app-tap/snapshot.test.lua       |   2 +-
 test/app/app.lua                     |   2 +-
 test/app/fiber.result                |   6 +-
 test/app/fiber.test.lua              |   4 +-
 test/box-py/box.lua                  |   2 +-
 test/box-tap/cfg.test.lua            |   3 +-
 test/box/admin.result                |   2 -
 test/box/cfg.result                  |   4 -
 test/box/configuration.result        | 107 ---------------------------
 test/box/proxy.lua                   |   2 +-
 test/box/tiny.lua                    |   1 -
 test/engine/box.lua                  |   2 +-
 test/engine_long/box.lua             |   1 -
 test/long_run-py/box.lua             |   1 -
 test/vinyl/vinyl.lua                 |   1 -
 test/xlog-py/box.lua                 |   1 -
 test/xlog/checkpoint_daemon.result   |  11 ++-
 test/xlog/checkpoint_daemon.test.lua |   9 ++-
 test/xlog/errinj.result              |   7 +-
 test/xlog/errinj.test.lua            |   5 +-
 test/xlog/panic.lua                  |   1 -
 test/xlog/upgrade/fill.lua           |   2 +-
 test/xlog/xlog.lua                   |   2 +-
 28 files changed, 72 insertions(+), 207 deletions(-)
 delete mode 100644 test/box/configuration.result

diff --git a/src/box/box.cc b/src/box/box.cc
index ac10c21ad..0514122fa 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -286,8 +286,6 @@ struct wal_stream {
 	struct xstream base;
 	/** How many rows have been recovered so far. */
 	size_t rows;
-	/** Yield once per 'yield' rows. */
-	size_t yield;
 };
 
 /**
@@ -341,22 +339,15 @@ apply_wal_row(struct xstream *stream, struct xrow_header *row)
 	 * Yield once in a while, but not too often,
 	 * mostly to allow signal handling to take place.
 	 */
-	if (++xstream->rows % xstream->yield == 0)
+	if (++xstream->rows % WAL_ROWS_PER_YIELD == 0)
 		fiber_sleep(0);
 }
 
 static void
-wal_stream_create(struct wal_stream *ctx, size_t wal_max_rows)
+wal_stream_create(struct wal_stream *ctx)
 {
 	xstream_create(&ctx->base, apply_wal_row);
 	ctx->rows = 0;
-	/**
-	 * Make the yield logic covered by the functional test
-	 * suite, which has a small setting for rows_per_wal.
-	 * Each yield can take up to 1ms if there are no events,
-	 * so we can't afford many of them during recovery.
-	 */
-	ctx->yield = (wal_max_rows >> 4)  + 1;
 }
 
 /* {{{ configuration bindings */
@@ -528,17 +519,6 @@ box_check_checkpoint_count(int checkpoint_count)
 	}
 }
 
-static int64_t
-box_check_wal_max_rows(int64_t wal_max_rows)
-{
-	/* check rows_per_wal configuration */
-	if (wal_max_rows <= 1) {
-		tnt_raise(ClientError, ER_CFG, "rows_per_wal",
-			  "the value must be greater than one");
-	}
-	return wal_max_rows;
-}
-
 static int64_t
 box_check_wal_max_size(int64_t wal_max_size)
 {
@@ -626,7 +606,6 @@ box_check_config()
 	box_check_replication_sync_timeout();
 	box_check_readahead(cfg_geti("readahead"));
 	box_check_checkpoint_count(cfg_geti("checkpoint_count"));
-	box_check_wal_max_rows(cfg_geti64("rows_per_wal"));
 	box_check_wal_max_size(cfg_geti64("wal_max_size"));
 	box_check_wal_mode(cfg_gets("wal_mode"));
 	box_check_memtx_memory(cfg_geti64("memtx_memory"));
@@ -1913,7 +1892,7 @@ local_recovery(const struct tt_uuid *instance_uuid,
 	say_info("instance uuid %s", tt_uuid_str(&INSTANCE_UUID));
 
 	struct wal_stream wal_stream;
-	wal_stream_create(&wal_stream, cfg_geti64("rows_per_wal"));
+	wal_stream_create(&wal_stream);
 
 	struct recovery *recovery;
 	recovery = recovery_new(cfg_gets("wal_dir"),
@@ -2099,10 +2078,9 @@ box_cfg_xc(void)
 	iproto_init();
 	sql_init();
 
-	int64_t wal_max_rows = box_check_wal_max_rows(cfg_geti64("rows_per_wal"));
 	int64_t wal_max_size = box_check_wal_max_size(cfg_geti64("wal_max_size"));
 	enum wal_mode wal_mode = box_check_wal_mode(cfg_gets("wal_mode"));
-	if (wal_init(wal_mode, cfg_gets("wal_dir"), wal_max_rows,
+	if (wal_init(wal_mode, cfg_gets("wal_dir"),
 		     wal_max_size, &INSTANCE_UUID, on_wal_garbage_collection,
 		     on_wal_checkpoint_threshold) != 0) {
 		diag_raise();
diff --git a/src/box/lua/load_cfg.lua b/src/box/lua/load_cfg.lua
index 4432e0d08..026b7d320 100644
--- a/src/box/lua/load_cfg.lua
+++ b/src/box/lua/load_cfg.lua
@@ -54,7 +54,6 @@ local default_cfg = {
     snap_io_rate_limit  = nil, -- no limit
     too_long_threshold  = 0.5,
     wal_mode            = "write",
-    rows_per_wal        = 500000,
     wal_max_size        = 256 * 1024 * 1024,
     wal_dir_rescan_delay= 2,
     force_recovery      = false,
@@ -118,7 +117,6 @@ local template_cfg = {
     snap_io_rate_limit  = 'number',
     too_long_threshold  = 'number',
     wal_mode            = 'string',
-    rows_per_wal        = 'number',
     wal_max_size        = 'number',
     wal_dir_rescan_delay= 'number',
     force_recovery      = 'boolean',
@@ -306,9 +304,6 @@ local translate_cfg = {
         return nil, not old end
     },
     replication_source = {'replication'},
-    rows_per_wal = {'wal_max_size', function(old, new)
-        return old, new
-    end},
 }
 
 -- Upgrade old config
diff --git a/src/box/wal.c b/src/box/wal.c
index 9219d6779..5e2c13e0e 100644
--- a/src/box/wal.c
+++ b/src/box/wal.c
@@ -92,8 +92,6 @@ struct wal_writer
 	/** A memory pool for messages. */
 	struct mempool msg_pool;
 	/* ----------------- wal ------------------- */
-	/** A setting from instance configuration - rows_per_wal */
-	int64_t wal_max_rows;
 	/** A setting from instance configuration - wal_max_size */
 	int64_t wal_max_size;
 	/** Another one - wal_mode */
@@ -344,13 +342,12 @@ tx_notify_checkpoint(struct cmsg *msg)
  */
 static void
 wal_writer_create(struct wal_writer *writer, enum wal_mode wal_mode,
-		  const char *wal_dirname, int64_t wal_max_rows,
+		  const char *wal_dirname,
 		  int64_t wal_max_size, const struct tt_uuid *instance_uuid,
 		  wal_on_garbage_collection_f on_garbage_collection,
 		  wal_on_checkpoint_threshold_f on_checkpoint_threshold)
 {
 	writer->wal_mode = wal_mode;
-	writer->wal_max_rows = wal_max_rows;
 	writer->wal_max_size = wal_max_size;
 	journal_create(&writer->base, wal_mode == WAL_NONE ?
 		       wal_write_in_wal_mode_none : wal_write, NULL);
@@ -461,16 +458,14 @@ wal_open(struct wal_writer *writer)
 }
 
 int
-wal_init(enum wal_mode wal_mode, const char *wal_dirname, int64_t wal_max_rows,
+wal_init(enum wal_mode wal_mode, const char *wal_dirname,
 	 int64_t wal_max_size, const struct tt_uuid *instance_uuid,
 	 wal_on_garbage_collection_f on_garbage_collection,
 	 wal_on_checkpoint_threshold_f on_checkpoint_threshold)
 {
-	assert(wal_max_rows > 1);
-
 	/* Initialize the state. */
 	struct wal_writer *writer = &wal_writer_singleton;
-	wal_writer_create(writer, wal_mode, wal_dirname, wal_max_rows,
+	wal_writer_create(writer, wal_mode, wal_dirname,
 			  wal_max_size, instance_uuid, on_garbage_collection,
 			  on_checkpoint_threshold);
 
@@ -762,8 +757,7 @@ wal_opt_rotate(struct wal_writer *writer)
 	 * one.
 	 */
 	if (xlog_is_open(&writer->current_wal) &&
-	    (writer->current_wal.rows >= writer->wal_max_rows ||
-	     writer->current_wal.offset >= writer->wal_max_size)) {
+	    writer->current_wal.offset >= writer->wal_max_size) {
 		/*
 		 * We can not handle xlog_close()
 		 * failure in any reasonable way.
diff --git a/src/box/wal.h b/src/box/wal.h
index 6725f26d3..2ddc008ff 100644
--- a/src/box/wal.h
+++ b/src/box/wal.h
@@ -43,6 +43,17 @@ struct tt_uuid;
 
 enum wal_mode { WAL_NONE = 0, WAL_WRITE, WAL_FSYNC, WAL_MODE_MAX };
 
+enum {
+	/**
+	 * Such a value originates from the old setting
+	 * 'rows_per_wal'. By default it was 500000, and its
+	 * value was used to decide how often wal writer needs
+	 * to yield, by formula: (rows_per_wal >> 4) + 1. With
+	 * default rows_per_wal it was equal to this constant.
+	 */
+	WAL_ROWS_PER_YIELD = 31251,
+};
+
 /** String constants for the supported modes. */
 extern const char *wal_mode_STRS[];
 
@@ -72,7 +83,7 @@ typedef void (*wal_on_checkpoint_threshold_f)(void);
  * Start WAL thread and initialize WAL writer.
  */
 int
-wal_init(enum wal_mode wal_mode, const char *wal_dirname, int64_t wal_max_rows,
+wal_init(enum wal_mode wal_mode, const char *wal_dirname,
 	 int64_t wal_max_size, const struct tt_uuid *instance_uuid,
 	 wal_on_garbage_collection_f on_garbage_collection,
 	 wal_on_checkpoint_threshold_f on_checkpoint_threshold);
diff --git a/test/app-tap/init_script.result b/test/app-tap/init_script.result
index 6a296d9f6..799297ba0 100644
--- a/test/app-tap/init_script.result
+++ b/test/app-tap/init_script.result
@@ -30,26 +30,25 @@ box.cfg
 25	replication_sync_lag:10
 26	replication_sync_timeout:300
 27	replication_timeout:1
-28	rows_per_wal:500000
-29	slab_alloc_factor:1.05
-30	strip_core:true
-31	too_long_threshold:0.5
-32	vinyl_bloom_fpr:0.05
-33	vinyl_cache:134217728
-34	vinyl_dir:.
-35	vinyl_max_tuple_size:1048576
-36	vinyl_memory:134217728
-37	vinyl_page_size:8192
-38	vinyl_read_threads:1
-39	vinyl_run_count_per_level:2
-40	vinyl_run_size_ratio:3.5
-41	vinyl_timeout:60
-42	vinyl_write_threads:4
-43	wal_dir:.
-44	wal_dir_rescan_delay:2
-45	wal_max_size:268435456
-46	wal_mode:write
-47	worker_pool_threads:4
+28	slab_alloc_factor:1.05
+29	strip_core:true
+30	too_long_threshold:0.5
+31	vinyl_bloom_fpr:0.05
+32	vinyl_cache:134217728
+33	vinyl_dir:.
+34	vinyl_max_tuple_size:1048576
+35	vinyl_memory:134217728
+36	vinyl_page_size:8192
+37	vinyl_read_threads:1
+38	vinyl_run_count_per_level:2
+39	vinyl_run_size_ratio:3.5
+40	vinyl_timeout:60
+41	vinyl_write_threads:4
+42	wal_dir:.
+43	wal_dir_rescan_delay:2
+44	wal_max_size:268435456
+45	wal_mode:write
+46	worker_pool_threads:4
 --
 -- Test insert from detached fiber
 --
diff --git a/test/app-tap/snapshot.test.lua b/test/app-tap/snapshot.test.lua
index d86f32fe5..587f8279b 100755
--- a/test/app-tap/snapshot.test.lua
+++ b/test/app-tap/snapshot.test.lua
@@ -6,7 +6,7 @@ local tap = require('tap')
 local ffi = require('ffi')
 local fio = require('fio')
 
-box.cfg{ log="tarantool.log", memtx_memory=107374182, rows_per_wal=5000}
+box.cfg{ log="tarantool.log", memtx_memory=107374182}
 
 local test = tap.test("snapshot")
 test:plan(5)
diff --git a/test/app/app.lua b/test/app/app.lua
index 3a0eaa7dc..dfd159e4b 100644
--- a/test/app/app.lua
+++ b/test/app/app.lua
@@ -4,7 +4,7 @@ box.cfg{
     listen              = os.getenv("LISTEN"),
     memtx_memory        = 107374182,
     pid_file            = "tarantool.pid",
-    rows_per_wal        = 50
+    wal_max_size        = 2500
 }
 
 require('console').listen(os.getenv('ADMIN'))
diff --git a/test/app/fiber.result b/test/app/fiber.result
index 94e690f6c..3c6115a33 100644
--- a/test/app/fiber.result
+++ b/test/app/fiber.result
@@ -17,12 +17,12 @@ test_run = env.new()
 -- and wal_schedule fiber schedulers.
 -- The same fiber should not be scheduled by ev_schedule (e.g.
 -- due to cancellation) if it is within th wal_schedule queue.
--- The test case is dependent on rows_per_wal, since this is when
+-- The test case is dependent on wal_max_size, since this is when
 -- we reopen the .xlog file and thus wal_scheduler takes a long
 -- pause
-box.cfg.rows_per_wal
+box.cfg.wal_max_size
 ---
-- 50
+- 2500
 ...
 space:insert{1, 'testing', 'lua rocks'}
 ---
diff --git a/test/app/fiber.test.lua b/test/app/fiber.test.lua
index bb8c24990..c5647b8f2 100644
--- a/test/app/fiber.test.lua
+++ b/test/app/fiber.test.lua
@@ -8,10 +8,10 @@ test_run = env.new()
 -- and wal_schedule fiber schedulers.
 -- The same fiber should not be scheduled by ev_schedule (e.g.
 -- due to cancellation) if it is within th wal_schedule queue.
--- The test case is dependent on rows_per_wal, since this is when
+-- The test case is dependent on wal_max_size, since this is when
 -- we reopen the .xlog file and thus wal_scheduler takes a long
 -- pause
-box.cfg.rows_per_wal
+box.cfg.wal_max_size
 space:insert{1, 'testing', 'lua rocks'}
 space:delete{1}
 space:insert{1, 'testing', 'lua rocks'}
diff --git a/test/box-py/box.lua b/test/box-py/box.lua
index 00c592c81..e9403774c 100644
--- a/test/box-py/box.lua
+++ b/test/box-py/box.lua
@@ -6,7 +6,7 @@ box.cfg{
     memtx_memory        = 107374182,
     pid_file            = "tarantool.pid",
     force_recovery      = true,
-    rows_per_wal        = 10
+    wal_max_size        = 500
 }
 
 require('console').listen(os.getenv('ADMIN'))
diff --git a/test/box-tap/cfg.test.lua b/test/box-tap/cfg.test.lua
index 55de5e41c..1c3518f9e 100755
--- a/test/box-tap/cfg.test.lua
+++ b/test/box-tap/cfg.test.lua
@@ -6,7 +6,7 @@ local socket = require('socket')
 local fio = require('fio')
 local uuid = require('uuid')
 local msgpack = require('msgpack')
-test:plan(104)
+test:plan(103)
 
 --------------------------------------------------------------------------------
 -- Invalid values
@@ -36,7 +36,6 @@ invalid('replication_connect_timeout', -1)
 invalid('replication_connect_timeout', 0)
 invalid('replication_connect_quorum', -1)
 invalid('wal_mode', 'invalid')
-invalid('rows_per_wal', -1)
 invalid('listen', '//!')
 invalid('log', ':')
 invalid('log', 'syslog:xxx=')
diff --git a/test/box/admin.result b/test/box/admin.result
index 0c137e371..6126f3a97 100644
--- a/test/box/admin.result
+++ b/test/box/admin.result
@@ -81,8 +81,6 @@ cfg_filter(box.cfg)
     - 300
   - - replication_timeout
     - 1
-  - - rows_per_wal
-    - 500000
   - - slab_alloc_factor
     - 1.05
   - - strip_core
diff --git a/test/box/cfg.result b/test/box/cfg.result
index cdca64ef0..5370bb870 100644
--- a/test/box/cfg.result
+++ b/test/box/cfg.result
@@ -69,8 +69,6 @@ cfg_filter(box.cfg)
  |     - 300
  |   - - replication_timeout
  |     - 1
- |   - - rows_per_wal
- |     - 500000
  |   - - slab_alloc_factor
  |     - 1.05
  |   - - strip_core
@@ -170,8 +168,6 @@ cfg_filter(box.cfg)
  |     - 300
  |   - - replication_timeout
  |     - 1
- |   - - rows_per_wal
- |     - 500000
  |   - - slab_alloc_factor
  |     - 1.05
  |   - - strip_core
diff --git a/test/box/configuration.result b/test/box/configuration.result
deleted file mode 100644
index c885c28dc..000000000
--- a/test/box/configuration.result
+++ /dev/null
@@ -1,107 +0,0 @@
-
-# Bug #876541:
-#  Test floating point values (wal_fsync_delay) with fractional part
-#  (https://bugs.launchpad.net/bugs/876541)
-
-box.cfg.wal_fsync_delay
----
-- 0.01
-...
-print_config()
----
-- io_collect_interval: 0
-  pid_file: box.pid
-  slab_alloc_factor: 2
-  slab_alloc_minimal: 64
-  admin_port: <number>
-  logger: cat - >> tarantool.log
-  readahead: 16320
-  wal_dir: .
-  logger_nonblock: true
-  log_level: 5
-  snap_dir: .
-  coredump: false
-  background: false
-  too_long_threshold: 0.5
-  rows_per_wal: 50
-  wal_mode: fsync_delay
-  snap_io_rate_limit: 0
-  panic_on_snap_error: true
-  panic_on_wal_error: false
-  local_hot_standby: false
-  slab_alloc_arena: 0.1
-  bind_ipaddr: INADDR_ANY
-  wal_fsync_delay: 0
-  primary_port: <number>
-  wal_dir_rescan_delay: 0.1
-...
-
-# Test bug #977898
-
-box.space.tweedledum:insert{4, 8, 16}
----
-- [4, 8, 16]
-...
-
-# Test insert from init.lua
-
-box.space.tweedledum:get(1)
----
-- [1, 2, 4, 8]
-...
-box.space.tweedledum:get(2)
----
-- [2, 4, 8, 16]
-...
-box.space.tweedledum:get(4)
----
-- [4, 8, 16]
-...
-
-# Test bug #1002272
-
-floor(0.5)
----
-- 0
-...
-floor(0.9)
----
-- 0
-...
-floor(1.1)
----
-- 1
-...
-mod.test(10, 15)
----
-- 25
-...
-
-# Bug#99 Salloc initialization is not checked on startup
-#  (https://github.com/tarantool/tarantool/issues/99)
-
-Can't start Tarantool
-ok
-
-# Bug#100 Segmentation fault if rows_per_wal = 0
-#  (https://github.com/tarantool/tarantool/issues/100)
-
-Can't start Tarantool
-ok
-#
-# Check that --background  doesn't work if there is no logger
-# This is a test case for
-# https://bugs.launchpad.net/tarantool/+bug/750658
-# "--background neither closes nor redirects stdin/stdout/stderr"
-
-Can't start Tarantool
-ok
-
-# A test case for Bug#726778 "Gopt broke wal_dir and snap_dir: they are no
-# longer relative to work_dir".
-# https://bugs.launchpad.net/tarantool/+bug/726778
-# After addition of gopt(), we started to chdir() to the working
-# directory after option parsing.
-# Verify that this is not the case, and snap_dir and xlog_dir
-# can be relative to work_dir.
-
diff --git a/test/box/proxy.lua b/test/box/proxy.lua
index fa87ab879..8bbd505f8 100644
--- a/test/box/proxy.lua
+++ b/test/box/proxy.lua
@@ -5,7 +5,7 @@ box.cfg{
     listen              = os.getenv("LISTEN"),
     memtx_memory        = 107374182,
     pid_file            = "tarantool.pid",
-    rows_per_wal        = 50
+    wal_max_size        = 2500
 }
 
 require('console').listen(os.getenv('ADMIN'))
diff --git a/test/box/tiny.lua b/test/box/tiny.lua
index 8d3025083..04b523fb2 100644
--- a/test/box/tiny.lua
+++ b/test/box/tiny.lua
@@ -7,7 +7,6 @@ box.cfg{
     pid_file            = "tarantool.pid",
     force_recovery  = false,
     slab_alloc_factor = 1.1,
-    rows_per_wal        = 5000000
 }
 
 require('console').listen(os.getenv('ADMIN'))
diff --git a/test/engine/box.lua b/test/engine/box.lua
index b1a379daf..e2f04cba2 100644
--- a/test/engine/box.lua
+++ b/test/engine/box.lua
@@ -11,7 +11,7 @@ box.cfg{
     listen              = os.getenv("LISTEN"),
     memtx_memory        = 107374182,
     pid_file            = "tarantool.pid",
-    rows_per_wal        = 50,
+    wal_max_size        = 2500,
     vinyl_read_threads  = 2,
     vinyl_write_threads = 3,
     vinyl_range_size    = 64 * 1024,
diff --git a/test/engine_long/box.lua b/test/engine_long/box.lua
index c24eac5fe..28a1560d5 100644
--- a/test/engine_long/box.lua
+++ b/test/engine_long/box.lua
@@ -9,7 +9,6 @@ box.cfg {
     listen            = os.getenv("LISTEN"),
     memtx_memory      = 107374182,
     pid_file          = "tarantool.pid",
-    rows_per_wal      = 500000,
     vinyl_dir         = "./vinyl_test",
     vinyl_memory      = 107374182,
     vinyl_read_threads = 3,
diff --git a/test/long_run-py/box.lua b/test/long_run-py/box.lua
index 8b0738bf8..b4f65dcdb 100644
--- a/test/long_run-py/box.lua
+++ b/test/long_run-py/box.lua
@@ -9,7 +9,6 @@ box.cfg {
     listen            = os.getenv("LISTEN"),
     memtx_memory      = 107374182,
     pid_file          = "tarantool.pid",
-    rows_per_wal      = 500000,
     vinyl_dir         = "./vinyl_test",
     vinyl_read_threads = 3,
     vinyl_write_threads = 5,
diff --git a/test/vinyl/vinyl.lua b/test/vinyl/vinyl.lua
index 34bd948ff..31307f4bc 100644
--- a/test/vinyl/vinyl.lua
+++ b/test/vinyl/vinyl.lua
@@ -4,7 +4,6 @@ box.cfg {
     listen            = os.getenv("LISTEN"),
     memtx_memory      = 512 * 1024 * 1024,
     memtx_max_tuple_size = 4 * 1024 * 1024,
-    rows_per_wal      = 1000000,
     vinyl_read_threads = 2,
     vinyl_write_threads = 3,
     vinyl_memory = 512 * 1024 * 1024,
diff --git a/test/xlog-py/box.lua b/test/xlog-py/box.lua
index 00c592c81..c87f7b94b 100644
--- a/test/xlog-py/box.lua
+++ b/test/xlog-py/box.lua
@@ -6,7 +6,6 @@ box.cfg{
     memtx_memory        = 107374182,
     pid_file            = "tarantool.pid",
     force_recovery      = true,
-    rows_per_wal        = 10
 }
 
 require('console').listen(os.getenv('ADMIN'))
diff --git a/test/xlog/checkpoint_daemon.result b/test/xlog/checkpoint_daemon.result
index 6c96da0d5..5be7124fe 100644
--- a/test/xlog/checkpoint_daemon.result
+++ b/test/xlog/checkpoint_daemon.result
@@ -87,12 +87,15 @@ box.cfg{checkpoint_interval = PERIOD, checkpoint_count = 2 }
 no = 1
 ---
 ...
+row_count_per_wal = box.cfg.wal_max_size / 50
+---
+...
 -- first xlog
-for i = 1, box.cfg.rows_per_wal + 10 do space:insert { no } no = no + 1 end
+for i = 1, row_count_per_wal + 10 do space:insert { no } no = no + 1 end
 ---
 ...
 -- second xlog
-for i = 1, box.cfg.rows_per_wal + 10 do space:insert { no } no = no + 1 end
+for i = 1, row_count_per_wal + 10 do space:insert { no } no = no + 1 end
 ---
 ...
 wait_snapshot(WAIT_COND_TIMEOUT)
@@ -100,11 +103,11 @@ wait_snapshot(WAIT_COND_TIMEOUT)
 - true
 ...
 -- third xlog
-for i = 1, box.cfg.rows_per_wal + 10 do space:insert { no } no = no + 1 end
+for i = 1, row_count_per_wal + 10 do space:insert { no } no = no + 1 end
 ---
 ...
 -- fourth xlog
-for i = 1, box.cfg.rows_per_wal + 10 do space:insert { no } no = no + 1 end
+for i = 1, row_count_per_wal + 10 do space:insert { no } no = no + 1 end
 ---
 ...
 wait_snapshot(WAIT_COND_TIMEOUT)
diff --git a/test/xlog/checkpoint_daemon.test.lua b/test/xlog/checkpoint_daemon.test.lua
index 37d7f7528..d3138f356 100644
--- a/test/xlog/checkpoint_daemon.test.lua
+++ b/test/xlog/checkpoint_daemon.test.lua
@@ -54,17 +54,18 @@ test_run:cmd("setopt delimiter ''");
 box.cfg{checkpoint_interval = PERIOD, checkpoint_count = 2 }
 
 no = 1
+row_count_per_wal = box.cfg.wal_max_size / 50
 -- first xlog
-for i = 1, box.cfg.rows_per_wal + 10 do space:insert { no } no = no + 1 end
+for i = 1, row_count_per_wal + 10 do space:insert { no } no = no + 1 end
 -- second xlog
-for i = 1, box.cfg.rows_per_wal + 10 do space:insert { no } no = no + 1 end
+for i = 1, row_count_per_wal + 10 do space:insert { no } no = no + 1 end
 
 wait_snapshot(WAIT_COND_TIMEOUT)
 
 -- third xlog
-for i = 1, box.cfg.rows_per_wal + 10 do space:insert { no } no = no + 1 end
+for i = 1, row_count_per_wal + 10 do space:insert { no } no = no + 1 end
 -- fourth xlog
-for i = 1, box.cfg.rows_per_wal + 10 do space:insert { no } no = no + 1 end
+for i = 1, row_count_per_wal + 10 do space:insert { no } no = no + 1 end
 
 wait_snapshot(WAIT_COND_TIMEOUT)
 wait_snapshot_gc(WAIT_COND_TIMEOUT)
diff --git a/test/xlog/errinj.result b/test/xlog/errinj.result
index d6d4141b5..c524d7a80 100644
--- a/test/xlog/errinj.result
+++ b/test/xlog/errinj.result
@@ -58,7 +58,10 @@ _ = test:create_index('primary')
 box.schema.user.grant('guest', 'write', 'space', 'test')
 ---
 ...
-for i=1, box.cfg.rows_per_wal do test:insert{i, 'test'} end
+row_count_per_wal = box.cfg.wal_max_size / 50 + 10
+---
+...
+for i=1, row_count_per_wal do test:insert{i, 'test'} end
 ---
 ...
 c = require('net.box').connect(box.cfg.listen)
@@ -69,7 +72,7 @@ errinj.set('ERRINJ_WAL_WRITE', true)
 ---
 - ok
 ...
-c.space.test:insert({box.cfg.rows_per_wal + 1,1,2,3})
+c.space.test:insert({row_count_per_wal + 1,1,2,3})
 ---
 - error: Failed to write to disk
 ...
diff --git a/test/xlog/errinj.test.lua b/test/xlog/errinj.test.lua
index de0e2e7e8..3d72dc4e4 100644
--- a/test/xlog/errinj.test.lua
+++ b/test/xlog/errinj.test.lua
@@ -30,12 +30,13 @@ _ = test:create_index('primary')
 
 box.schema.user.grant('guest', 'write', 'space', 'test')
 
-for i=1, box.cfg.rows_per_wal do test:insert{i, 'test'} end
+row_count_per_wal = box.cfg.wal_max_size / 50 + 10
+for i=1, row_count_per_wal do test:insert{i, 'test'} end
 c = require('net.box').connect(box.cfg.listen)
 
 -- try to write xlog without permission to write to disk
 errinj.set('ERRINJ_WAL_WRITE', true)
-c.space.test:insert({box.cfg.rows_per_wal + 1,1,2,3})
+c.space.test:insert({row_count_per_wal + 1,1,2,3})
 errinj.set('ERRINJ_WAL_WRITE', false)
 
 -- Cleanup
diff --git a/test/xlog/panic.lua b/test/xlog/panic.lua
index dee83e608..2d4eb8d2e 100644
--- a/test/xlog/panic.lua
+++ b/test/xlog/panic.lua
@@ -6,7 +6,6 @@ box.cfg{
     memtx_memory        = 107374182,
     pid_file            = "tarantool.pid",
     force_recovery      = false,
-    rows_per_wal        = 10
 }
 
 require('console').listen(os.getenv('ADMIN'))
diff --git a/test/xlog/upgrade/fill.lua b/test/xlog/upgrade/fill.lua
index cb38b08e3..0ef1a8bb9 100644
--- a/test/xlog/upgrade/fill.lua
+++ b/test/xlog/upgrade/fill.lua
@@ -2,7 +2,7 @@
 --- A script to generate some dataset used by migration.test.lua
 ---
 
-box.cfg{ rows_per_wal = 5 }
+box.cfg{ wal_max_size = 250 }
 box.schema.space.create("distro")
 box.space.distro:create_index('primary', { type = 'hash', unique = true,
     parts = {1, 'str', 2, 'str', 3, 'num'}})
diff --git a/test/xlog/xlog.lua b/test/xlog/xlog.lua
index b1c9719ab..004096d2d 100644
--- a/test/xlog/xlog.lua
+++ b/test/xlog/xlog.lua
@@ -6,7 +6,7 @@ box.cfg{
     memtx_memory        = 107374182,
     pid_file            = "tarantool.pid",
     force_recovery      = true,
-    rows_per_wal        = 10,
+    wal_max_size        = 500,
     snap_io_rate_limit  = 16
 }
 
-- 
2.20.1 (Apple Git-117)

* [tarantool-patches] Re: [PATCH 2/2] wal: drop rows_per_wal option
  2019-09-07 16:05 ` [tarantool-patches] [PATCH 2/2] wal: drop rows_per_wal option Vladislav Shpilevoy
@ 2019-09-08 12:53   ` Vladislav Shpilevoy
  0 siblings, 0 replies; 6+ messages in thread
From: Vladislav Shpilevoy @ 2019-09-08 12:53 UTC
  To: tarantool-patches; +Cc: alexander.turenko

Changed a constant and its comment, as asked by Kostja:

diff --git a/src/box/wal.h b/src/box/wal.h
index 2ddc008ff..b76b0a41f 100644
--- a/src/box/wal.h
+++ b/src/box/wal.h
@@ -45,13 +45,11 @@ enum wal_mode { WAL_NONE = 0, WAL_WRITE, WAL_FSYNC, WAL_MODE_MAX };
 
 enum {
 	/**
-	 * Such a value originates from the old setting
-	 * 'rows_per_wal'. By default it was 500000, and its
-	 * value was used to decide how often wal writer needs
-	 * to yield, by formula: (rows_per_wal >> 4) + 1. With
-	 * default rows_per_wal it was equal to this constant.
+	 * Recovery yields once per that number of rows read and
+	 * applied from WAL. It allows not to block the event
+	 * loop for the whole recovery stage.
 	 */
-	WAL_ROWS_PER_YIELD = 31251,
+	WAL_ROWS_PER_YIELD = 32000,
 };
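
(For reference, not part of the patch: the formula quoted in the
removed comment reproduces the old constant - a quick check with
LuaJIT's bit module, which ships with Tarantool.)

    local bit = require('bit')
    -- (rows_per_wal >> 4) + 1 with the old default of 500000 rows:
    assert(bit.rshift(500000, 4) + 1 == 31251)
    -- The follow-up replaces it with the rounder 32000 rows per yield.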

* [tarantool-patches] Re: [PATCH 1/2] wal: deprecate rows_per_wal in favour of wal_max_size
  2019-09-07 16:05 ` [tarantool-patches] [PATCH 1/2] wal: deprecate rows_per_wal in favour of wal_max_size Vladislav Shpilevoy
@ 2019-09-08 12:53   ` Vladislav Shpilevoy
  2019-09-10 18:50   ` Kirill Yukhin
  1 sibling, 0 replies; 6+ messages in thread
From: Vladislav Shpilevoy @ 2019-09-08 12:53 UTC
  To: tarantool-patches; +Cc: alexander.turenko

Fixed a typo:

diff --git a/src/box/lua/load_cfg.lua b/src/box/lua/load_cfg.lua
index 4432e0d08..79bde4d23 100644
--- a/src/box/lua/load_cfg.lua
+++ b/src/box/lua/load_cfg.lua
@@ -321,14 +321,14 @@ local function upgrade_cfg(cfg, translate_cfg)
         local translation = translate_cfg[k]
         if translation ~= nil then
             local new_key = translation[1]
-            local transofm = translation[2]
+            local transform = translation[2]
             log.warn('Deprecated option %s, please use %s instead', k, new_key)
             local new_val_orig = cfg[new_key]
             local old_val, new_val
-            if transofm == nil then
+            if transform == nil then
                 new_val = v
             else
-                old_val, new_val = transofm(v, new_val_orig)
+                old_val, new_val = transform(v, new_val_orig)
             end
             if new_val_orig ~= nil and
                new_val_orig ~= new_val then

* [tarantool-patches] Re: [PATCH 1/2] wal: deprecate rows_per_wal in favour of wal_max_size
  2019-09-07 16:05 ` [tarantool-patches] [PATCH 1/2] wal: deprecate rows_per_wal in favour of wal_max_size Vladislav Shpilevoy
  2019-09-08 12:53   ` [tarantool-patches] " Vladislav Shpilevoy
@ 2019-09-10 18:50   ` Kirill Yukhin
  1 sibling, 0 replies; 6+ messages in thread
From: Kirill Yukhin @ 2019-09-10 18:50 UTC
  To: tarantool-patches; +Cc: alexander.turenko

Hello,

On 07 Sep 18:05, Vladislav Shpilevoy wrote:
> rows_per_wal does not allow limiting the size of WAL files
> properly: a user would need to estimate the size of each WAL row
> to do that. Now WAL supports the wal_max_size option, which limits
> the WAL size much more precisely.
> 
> Part of #3762

I've checked your patch into 1.10, 2.1, 2.2 and master.

--
Regards, Kirill Yukhin
