From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 26 Dec 2025 11:06:23 +0300
To: Sergey Bronnikov
Cc: tarantool-patches@dev.tarantool.org
References: <330aaa69-00f8-44b6-b9ff-2086b96edd03@tarantool.org>
In-Reply-To: <330aaa69-00f8-44b6-b9ff-2086b96edd03@tarantool.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Subject: Re: [Tarantool-patches] [PATCH v1 luajit 03/41] perf: introduce bench module
List-Id: Tarantool development patches
From: Sergey Kaplun via Tarantool-patches
Reply-To: Sergey Kaplun

Hi again, Sergey!

Thanks for the review! Please consider my answers below.

On 11.11.25, Sergey Bronnikov wrote:
> Hi, Sergey, again!
>
> thanks for the patch!
>
> Please see comments below.
>
> Sergey
>
> On 10/24/25 13:50, Sergey Kaplun wrote:
> > This module provides functionality to run custom benchmark workloads
> > defined by the following syntax:
> >
> > | local bench = require('bench').new(arg)
>
> there are many LuaJIT-specific functions below (ffi, jit etc.)
>
> I propose to check that luabin is LuaJIT-compatible and exit with an
> appropriate message if it is not.

For now I have no goal to make this module LuaJIT-agnostic. The same
approach is used for our tap module for tests -- it is LuaJIT-only.
If we want to make it LuaJIT-agnostic, it should be done outside the
scope of this patch set.

> > |
> > | -- f_* are functions, n_* are numbers.
>
> s/functions/user-defined functions/
> s/numbers/user-defined numbers/

Fixed.

> > |bench:add({
> > |  setup = f_setup,
> > |  payload = f_payload,
> > |  teardown = f_teardown,
> > |  items = n_items_processed,
> > |
> > |  checker = f_checker,
> > |  -- Or instead:
> > |  skip_check = true,
> > |
> > |  iterations = n_iterations,
> > |  -- Or instead:
> > |  min_time = n_seconds,
> > |})
> > |
> > |bench:run_and_report()
> >
> > The checker function receives the single value returned by the payload
> > function and completes all checks related to the test. If it returns a
> > true value, it is considered a successful check pass. The checker
> > function is called before the main workload as a warm-up. Generally, you
> > should always provide the checker function to be sure that your
> > benchmark is still correct after optimizations. In cases where it is
> > impossible (for some reason), you may specify the `skip_check` flag. In
> > that case the warm-up part will be skipped as well.
> >
> > Each test is run in the order it was added. The module measures the
> > real-time and CPU time necessary to run `iterations` repetitions of the
>
> please consider using monotonic time, not real time (see a previous
> patch)

Yes, discussed in the corresponding ML reply.

> > test, or as many iterations as fit into `min_time` seconds (4 by
> > default), and calculates the metric items per second (more is better).
> > The total number of items equals `n_items_processed * n_iterations`.
> > The items may be added in the table with the description inside the
> > payload function as well. The results (real-time, CPU time, iterations,
> > items/s) are reported in a format similar to the Google Benchmark
> > suite [1].
>
> s/similar/compatible/

Fixed.

> >
> > Each test may be run from the command line as follows:
> > | LUA_PATH="..." luajit test_name.lua [flags] arguments
> >
> > The supported flags are:
> > | -j{off|on}          Disable/Enable JIT for the benchmarks.
>
> Why do you implement this flag for a Lua module? It can be passed to
> luajit directly.

It's just a little bit more convenient for me if I've mispositioned the
flag :).
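I.e., with the module's own flag, both of the following invocations run
the benchmark with the JIT disabled (reusing the example file name from
the commit message above):

| luajit -joff test_name.lua
| luajit test_name.lua -joff

The second form only works because the module parses `-j` itself:
luajit treats everything after the script name as plain script
arguments.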
> > | --benchmark_color={true|false|auto}
> > |                     Enables the colorized output for the
> > |                     terminal (not the file).
> > | --benchmark_min_time={number} Minimum seconds to run the benchmark
> > |                     tests.
> > | --benchmark_out=<filename>  Places the output into <filename>.
> > | --benchmark_out_format={console|json}
> > |                     The format is used when saving the results in the
> > |                     file. The default format is the JSON format.
> > | -h, --help          Display help message and exit.
> >
> > These options are similar to the Google Benchmark command line options,
> > but with a few changes:
> > 1) If an output file is given, there is no output in the terminal.
> > 2) The min_time option supports only number values. There is no support
> >    for the iterations number (by the 'x' suffix).
> >
> > [1]: https://github.com/google/benchmark
> > ---
> >  perf/utils/bench.lua | 509 +++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 509 insertions(+)
> >  create mode 100644 perf/utils/bench.lua
> >
> > diff --git a/perf/utils/bench.lua b/perf/utils/bench.lua
> > new file mode 100644
> > index 00000000..68473215
> > --- /dev/null
> > +++ b/perf/utils/bench.lua
> > @@ -0,0 +1,509 @@
> > +local clock = require('clock')
> > +local ffi = require('ffi')
> > +-- Require 'cjson' only on demand for formatted output to file.
> > +local json
> > +
> > +local M = {}
> > +
> > +local type, assert, error = type, assert, error
> > +local format, rep = string.format, string.rep
>
> s/, rep/string_rep/
>
> s/local format/string_format/
>
> for consistency with shortcuts below

AFAICS, it's kind of common practice to use the shortcuts as mentioned
above and below. `table_remove` is the only exception from this naming.

> >
> > +local floor, max, min = math.floor, math.max, math.min
> > +local table_remove = table.remove
> > +
> > +local LJ_HASJIT = jit and jit.opt
> > +
> > +-- Argument parsing. ---------------------------------------------
> > +
> > +-- XXX: Make options compatible with Google Benchmark, since most
> > +-- probably it will be used for the C benchmarks as well.
> > +-- Compatibility isn't full: there is no support for environment
> > +-- variables (since they are not so useful) and the output to the
> > +-- terminal is suppressed if the --benchmark_out flag is
> > +-- specified.
> > +
> > +local HELP_MSG = [[
> > +  Options:
> > +  -j{off|on}          Disable/Enable JIT for the benchmarks.
>
> Add a default value to the description. Here and below.

There is no default value for this option. The behaviour is affected
only if JIT is available and the -j option contradicts the current
`jit.status()`. I thought that the default for benchmark_color is
obvious, but I've made the help more verbose:

===================================================================
diff --git a/perf/utils/bench.lua b/perf/utils/bench.lua
index 7b7bb5b3..09a5c41a 100644
--- a/perf/utils/bench.lua
+++ b/perf/utils/bench.lua
@@ -29,7 +29,7 @@ local HELP_MSG = [[
                       the file). 'auto' means to use colors if the
                       output is being sent to a terminal and the TERM
                       environment variable is set to a terminal type
-                      that supports colors.
+                      that supports colors. Default is 'auto'.
   --benchmark_min_time={number}
                       Minimum seconds to run the benchmark tests.
                       4.0 by default.
===================================================================
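Regarding the -j behaviour mentioned above, it boils down to roughly
the following shape (just a sketch of the intended behaviour, not the
literal handler from the patch):

| local function set_jit(_, value)
|   -- No-op for builds without the JIT compiler.
|   if not LJ_HASJIT then return end
|   -- `jit.status()` reports whether the JIT is currently enabled.
|   local enabled = jit.status()
|   if value == 'on' and not enabled then
|     jit.on()
|   elseif value == 'off' and enabled then
|     jit.off()
|   end
| end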
> > +  --benchmark_color={true|false|auto}
> > +                      Enables the colorized output for the terminal (not
> > +                      the file). 'auto' means to use colors if the
> > +                      output is being sent to a terminal and the TERM
> > +                      environment variable is set to a terminal type
> > +                      that supports colors.
> > +  --benchmark_min_time={number}
> > +                      Minimum seconds to run the benchmark tests.
> > +                      4.0 by default.
>
> Why 4.0?

Just some empirical value that is good enough. 2.0 seconds (the default
in the Google Benchmark) is sometimes (in a few rare cases) not enough
for the LuaJIT runtime to be stable.

> > +  --benchmark_out=<filename>  Places the output into <filename>.
> > +  --benchmark_out_format={console|json}
> > +                      The format is used when saving the results in the
> > +                      file. The default format is the JSON format.
>
> >  The default format is the JSON format.
>
> by default the JSON module is not available, in the source code it is
> marked as required "on demand".
>
> I'm not sure the JSON format should be the default.

This flag is meaningful only if the `--benchmark_out` flag is given as
well. So, when we have the output to a file, the JSON format is the
default. The behaviour is the same as for the Google Benchmark.

> > +  -h, --help          Display this message and exit.
> > +
> > +  There are a bunch of suggestions on how to achieve the most
> > +  stable benchmark results:
> > +  https://github.com/tarantool/tarantool/wiki/Benchmarking
> > +]]
> > +
> > +local function usage(ctx)
> > +  local header = format('USAGE: luajit %s [options]\n', ctx.name)
>
> I would not hardcode "luajit" and use luabin instead, see `M.luabin` in
> test/tarantool-tests/utils/exec.lua. This can be at least tarantool
> instead of luajit.

No, it can't. I see no reason to run this benchmark suite (or any
LuaJIT-related suites) with anything except the LuaJIT binary (and
maybe the Lua binary, but this is out of the scope of this patch set).
Hence, the "luajit" is hardcoded. Also, I see no reason to mention some
custom path to the binary.

> > +  io.stderr:write(header, HELP_MSG)
> > +  os.exit(1)
> > +end
> > +
> > +local function check_param(check, strfmt, ...)
> > +  if not check then
> > +    io.stderr:write(format(strfmt, ...))
> > +    os.exit(1)
>
> please define possible exit codes as variables and use them in
> os.exit(). Like the well-known EXIT_SUCCESS and EXIT_FAILURE in C.
> Feel free to ignore.

Added:

===================================================================
diff --git a/perf/utils/bench.lua b/perf/utils/bench.lua
index 68473215..7b7bb5b3 100644
--- a/perf/utils/bench.lua
+++ b/perf/utils/bench.lua
@@ -44,16 +44,18 @@ local HELP_MSG = [[
   https://github.com/tarantool/tarantool/wiki/Benchmarking
 ]]
 
+local EXIT_FAILURE = 1
+
 local function usage(ctx)
   local header = format('USAGE: luajit %s [options]\n', ctx.name)
   io.stderr:write(header, HELP_MSG)
-  os.exit(1)
+  os.exit(EXIT_FAILURE)
 end
 
 local function check_param(check, strfmt, ...)
   if not check then
     io.stderr:write(format(strfmt, ...))
-    os.exit(1)
+    os.exit(EXIT_FAILURE)
   end
 end
 
@@ -108,7 +110,7 @@ local function unrecognized_option(optname, dashes)
   local fullname = dashes .. (optname or '=')
   io.stderr:write(format('unrecognized command-line flag: %s\n', fullname))
   io.stderr:write(HELP_MSG)
-  os.exit(1)
+  os.exit(EXIT_FAILURE)
 end
 
 local function unrecognized_long_option(_, optname)
===================================================================
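With this, all bad invocations fail with the same status code. For
example, an unknown flag now reports the error and exits (the flag name
below is made up just for the illustration):

| $ luajit test_name.lua --benchmark_foo
| unrecognized command-line flag: --benchmark_foo

followed by the help message and exit status 1.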
>
> > +  end
> > +end
> > +
> > +-- Valid values: 'false'/'no'/'0'.
> > +-- In case of an invalid value the 'auto' is used.
> > +local function set_color(ctx, value)
> > +  if value == 'false' or value == 'no' or value == '0' then
> > +    ctx.color = false
> > +  else
> > +    -- In case of an invalid value, the Google Benchmark uses
> > +    -- 'auto', which is true for the stdout output (the only
> > +    -- colorizable output). So just set it to true by default.
> > +    ctx.color = true
> > +  end
> > +end
> > +
> > +local DEFAULT_MIN_TIME = 4.0
> > +local function set_min_time(ctx, value)
> > +  local time = tonumber(value)
>
> >  Tries to convert its argument to a number. If the argument is already
> >  a number or a string convertible to a number, then tonumber returns
> >  this number; otherwise, it returns *nil*.
>
> https://www.lua.org/manual/5.1/manual.html#pdf-tonumber
>
> please check the result for nil

It is checked via the line below.

> > +  check_param(time, 'Invalid min time: "%s"\n', value)
> > +  ctx.min_time = time
> > +end
> > +
> > +local function set_output(ctx, filename)
> > +  check_param(type(filename) == "string", 'Invalid output value: "%s"\n',
> > +              filename)
> > +  ctx.output = filename
> > +end
> > +
> > +local SHORT_OPTS = setmetatable({
> > +  ['h'] = usage,
> > +  ['j'] = set_jit,
> > +}, {__index = unrecognized_short_option})
> > +
> > +local LONG_OPTS = setmetatable({
> > +  ['benchmark_color'] = set_color,
> > +  ['benchmark_min_time'] = set_min_time,
> > +  ['benchmark_out'] = set_output,
> > +  -- XXX: For now support only JSON encoded and raw output.
> > +  ['benchmark_out_format'] = set_output_format,
> > +  ['help'] = usage,
> > +}, {__index = unrecognized_long_option})
> > +
>
> Is taking argparse from the tarantool repository (src/lua/argparse.lua)
> not an option?

I don't like this particular module very much, and I don't want to add
any third-party dependencies. Also, the argument parsing needed here is
rather primitive, so I prefer to write it from scratch.

> > +local function is_option(str)
> > +  return type(str) == 'string' and str:sub(1, 1) == '-' and str ~= '-'
> > +end
> > +
> > +
> > +  if ctx.output_format == 'json' then
> > +    json = require('cjson')
>
> should we make it compatible with tarantool? (the module is named
> 'json' there)

I suppose not, since it wouldn't be launched by Tarantool.

> > +  end
> > +
> > +
> > +-- https://luajit.org/running.html#foot.
> > +local JIT_DEFAULTS = {
> > +  maxtrace = 1000,
> > +  maxrecord = 4000,
> > +  maxirconst = 500,
> > +  maxside = 100,
> > +  maxsnap = 500,
> > +  hotloop = 56,
> > +  hotexit = 10,
> > +  tryside = 4,
> > +  instunroll = 4,
> > +  loopunroll = 15,
> > +  callunroll = 3,
> > +  recunroll = 2,
> > +  sizemcode = 32,
> > +  maxmcode = 512,
> > +}
>
> please sort alphabetically

I've used the same sorting order as in the source, see the link. It may
be easier to find the option this way (at least it is for me).

> > +
> > +-- Basic setup for all tests to clean up after a previous
> > +-- executor.
> > +local function luajit_tests_setup(ctx)
> > +  -- Reset the GC to the defaults.
> > +  collectgarbage('setstepmul', 200)
> > +  collectgarbage('setpause', 200)
>
> should we define 200 as a named constant?

It is the default value from the Lua Reference Manual. These values are
used only once here, so I don't see the need for it.

> > +
> > +  -- Collect all garbage at the end. Twice to be sure that all
> > +  -- finalizers are run.
> > +  collectgarbage()
> > +  collectgarbage()

--
Best regards,
Sergey Kaplun