From: Sergey Kaplun via Tarantool-patches <tarantool-patches@dev.tarantool.org>
To: Sergey Bronnikov <sergeyb@tarantool.org>
Cc: tarantool-patches@dev.tarantool.org
Subject: Re: [Tarantool-patches] [PATCH v1 luajit 03/41] perf: introduce bench module
Date: Fri, 26 Dec 2025 11:06:23 +0300
Message-ID: <aU5B_zA9GMIKCgzg@root>
In-Reply-To: <330aaa69-00f8-44b6-b9ff-2086b96edd03@tarantool.org>
Hi again, Sergey!
Thanks for the review!
Please consider my answers below.
On 11.11.25, Sergey Bronnikov wrote:
> Hi, Sergey, again!
>
> thanks for the patch!
>
> Please see comments below.
>
> Sergey
>
> On 10/24/25 13:50, Sergey Kaplun wrote:
> > This module provides functionality to run custom benchmark workloads
> > defined by the following syntax:
> >
> > | local bench = require('bench').new(arg)
>
> there are many LuaJIT-specific functions below (ffi, jit etc.)
>
> I propose to check that luabin is LuaJIT-compatible and exit with
> appropriate message
>
> if it is not.
For now I have no goal of making this module LuaJIT-agnostic. The same
approach is used for our tap module for tests -- it is LuaJIT-only. If
we want to make it LuaJIT-agnostic, that should be done outside the
scope of this patch set.
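For the record, such a guard would be a few lines anyway; a hypothetical
sketch, not part of this patch:

| -- Hypothetical guard: bail out early when the interpreter is not
| -- LuaJIT (the `jit` module is LuaJIT-specific).
| if type(jit) ~= 'table' or type(jit.version) ~= 'string' then
|   io.stderr:write('This benchmark suite requires LuaJIT\n')
|   os.exit(1)
| end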
>
> > |
> > | -- f_* are functions, n_* are numbers.
>
> s/functions/user-defined functions/
>
> s/numbers/user-defined numbers/
Fixed.
>
> > |bench:add({
> > | setup = f_setup,
> > | payload = f_payload,
> > | teardown = f_teardown,
> > | items = n_items_processed,
> > |
> > | checker = f_checker,
> > | -- Or instead:
> > | skip_check = true,
> > |
> > | iterations = n_iterations,
> > | -- Or instead:
> > | min_time = n_seconds,
> > | })
> > |
> > |bench:run_and_report()
> >
> > The checker function receives the single value returned by the payload
> > function and completes all checks related to the test. If it returns a
> > true value, it is considered a successful check pass. The checker
> > function is called before the main workload as a warm-up. Generally, you
> > should always provide the checker function to be sure that your
> > benchmark is still correct after optimizations. In cases when it is
> > impossible (for some reason), you may specify the `skip_check` flag. In
> > that case the warm-up part will be skipped as well.
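For illustration, a hypothetical test using this contract could look
like (made-up payload and checker, mirroring the syntax above):

| local bench = require('bench').new(arg)
| bench:add({
|   payload = function() return 2 + 2 end,
|   checker = function(result) return result == 4 end,
|   items = 1,
|   min_time = 1.0,
| })
| bench:run_and_report()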
> >
> > Each test is run in the order it was added. The module measures the
> > real-time and CPU time necessary to run `iterations` repetitions of the
> please consider using monotonic time, not a realtime (see a previous patch)
Yes, discussed in the corresponding ML reply.
> > test, or as many iterations as fit into `min_time` seconds (4 by default), and
> > calculates the metric items per second (more is better). The total
> > amount of items equals `n_items_processed * n_iterations`. The items may
> > be added in the table with the description inside the payload function
> > as well. The results (real-time, CPU time, iterations, items/s) are
> > reported in a format similar to the Google Benchmark suite [1].
> s/similar/compatible/
Fixed.
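To spell the metric out with a worked (made-up) example:

| -- items/s = items per iteration * iterations / measured time.
| local n_items_processed = 1000
| local n_iterations = 2000
| local cpu_time = 0.5 -- seconds
| print(n_items_processed * n_iterations / cpu_time) -- 4000000 items/s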
> >
> > Each test may be run from the command line as follows:
> > | LUA_PATH="..." luajit test_name.lua [flags] arguments
> >
> > The supported flags are:
> > | -j{off|on} Disable/Enable JIT for the benchmarks.
> Why do you implement this flag for a Lua module? It can be passed to
> luajit directly.
It's just a little more convenient for me if I misposition the flag :).
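FWIW, the handler boils down to roughly the following (a sketch; the
actual code in the patch may differ slightly):

| -- Hypothetical -j handler: toggle the JIT when it is compiled in.
| local function set_jit(_, value)
|   if not (jit and jit.opt) then return end
|   if value == 'on' then jit.on() elseif value == 'off' then jit.off() end
| end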
> > | --benchmark_color={true|false|auto}
> > | Enables the colorized output for the
> > | terminal (not the file).
> > | --benchmark_min_time={number} Minimum seconds to run the benchmark
> > | tests.
> > | --benchmark_out=<file> Places the output into <file>.
> > | --benchmark_out_format={console|json}
> > | The format is used when saving the results in the
> > | file. The default format is the JSON format.
> > | -h, --help Display help message and exit.
> >
> > These options are similar to the Google Benchmark command line options,
> > but with a few changes:
> > 1) If an output file is given, there is no output in the terminal.
> > 2) The min_time option supports only number values. There is no support
> > for the iterations number (by the 'x' suffix).
> >
> > [1]:https://github.com/google/benchmark
> > ---
> > perf/utils/bench.lua | 509 +++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 509 insertions(+)
> > create mode 100644 perf/utils/bench.lua
> >
> > diff --git a/perf/utils/bench.lua b/perf/utils/bench.lua
> > new file mode 100644
> > index 00000000..68473215
> > --- /dev/null
> > +++ b/perf/utils/bench.lua
> > @@ -0,0 +1,509 @@
> > +local clock = require('clock')
> > +local ffi = require('ffi')
> > +-- Require 'cjson' only on demand for formatted output to file.
> > +local json
> > +
> > +local M = {}
> > +
> > +local type, assert, error = type, assert, error
> > +local format, rep = string.format, string.rep
>
> s/, rep/string_rep/
>
> s/local format/string_format/
>
> for consistency with shortcuts below
AFAICS, it's a common practice to use shortcuts named like the ones
above and below. `table_remove` is the only exception to this naming.
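Both spellings cache the same global lookup into a local; only the
local's name differs:

| local format = string.format        -- short-named local used here.
| local string_format = string.format -- fully-qualified naming suggested.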
>
>
> > +local floor, max, min = math.floor, math.max, math.min
> > +local table_remove = table.remove
> > +
> > +local LJ_HASJIT = jit and jit.opt
> > +
> > +-- Argument parsing. ---------------------------------------------
> > +
> > +-- XXX: Make options compatible with Google Benchmark, since most
> > +-- probably it will be used for the C benchmarks as well.
> > +-- Compatibility isn't full: there is no support for environment
> > +-- variables (since they are not so useful) and the output to the
> > +-- terminal is suppressed if the --benchmark_out flag is
> > +-- specified.
> > +
> > +local HELP_MSG = [[
> > + Options:
> > + -j{off|on} Disable/Enable JIT for the benchmarks.
> Add a default value to the description. Here and below.
There is no default value for this option. The behaviour is affected
only if the JIT is available and the -j option contradicts the current
`jit.status()`.
I thought the default for benchmark_color was obvious, but I've made
the help more verbose:
===================================================================
diff --git a/perf/utils/bench.lua b/perf/utils/bench.lua
index 7b7bb5b3..09a5c41a 100644
--- a/perf/utils/bench.lua
+++ b/perf/utils/bench.lua
@@ -29,7 +29,7 @@ local HELP_MSG = [[
the file). 'auto' means to use colors if the
output is being sent to a terminal and the TERM
environment variable is set to a terminal type
- that supports colors.
+ that supports colors. Default is 'auto'.
--benchmark_min_time={number}
Minimum seconds to run the benchmark tests.
4.0 by default.
===================================================================
> > + --benchmark_color={true|false|auto}
> > + Enables the colorized output for the terminal (not
> > + the file). 'auto' means to use colors if the
> > + output is being sent to a terminal and the TERM
> > + environment variable is set to a terminal type
> > + that supports colors.
> > + --benchmark_min_time={number}
> > + Minimum seconds to run the benchmark tests.
> > + 4.0 by default.
> Why 4.0?
Just an empirical value that is good enough. 2.0 seconds (the default
in Google Benchmark) is sometimes (in a few rare cases) not enough for
the LuaJIT runtime to stabilize.
> > + --benchmark_out=<file> Places the output into <file>.
> > + --benchmark_out_format={console|json}
> > + The format is used when saving the results in the
> > + file. The default format is the JSON format.
>
> > The default format is the JSON format.
>
> by default JSON module is not available, in source code it is marked as
> "on demand".
>
> I'm not sure JSON format should be by default.
This flag is meaningful only if the `--benchmark_out` flag is given as
well. So, when we have output to the file, JSON format is the default.
The behaviour is the same as for Google Benchmark.
>
> > + -h, --help Display this message and exit.
> > +
> > + There are a bunch of suggestions on how to achieve the most
> > + stable benchmark results:
> > +https://github.com/tarantool/tarantool/wiki/Benchmarking
> > +]]
> > +
> > +local function usage(ctx)
> > + local header = format('USAGE: luajit %s [options]\n', ctx.name)
>
> I would not hardcode "luajit" and use luabin instead, see `M.luabin` in
>
> test/tarantool-tests/utils/exec.lua. This can be at least tarantool
No, it can't. I see no reason to run this benchmark suite (or any
LuaJIT-related one) with anything except the LuaJIT binary (maybe the
Lua binary, but that is out of scope of this patch set). Hence,
"luajit" is hardcoded. Also, I see no reason to mention some custom
path to the binary.
> instead luajit.
>
> > +io.stderr:write(header, HELP_MSG)
> > + os.exit(1)
> > +end
> > +
> > +local function check_param(check, strfmt, ...)
> > + if not check then
> > +io.stderr:write(format(strfmt, ...))
> > + os.exit(1)
>
> please define possible exit codes as variables and use them in os.exit().
>
> Like the well-known EXIT_SUCCESS and EXIT_FAILURE in C. Feel free to ignore.
Added:
===================================================================
diff --git a/perf/utils/bench.lua b/perf/utils/bench.lua
index 68473215..7b7bb5b3 100644
--- a/perf/utils/bench.lua
+++ b/perf/utils/bench.lua
@@ -44,16 +44,18 @@ local HELP_MSG = [[
https://github.com/tarantool/tarantool/wiki/Benchmarking
]]
+local EXIT_FAILURE = 1
+
local function usage(ctx)
local header = format('USAGE: luajit %s [options]\n', ctx.name)
io.stderr:write(header, HELP_MSG)
- os.exit(1)
+ os.exit(EXIT_FAILURE)
end
local function check_param(check, strfmt, ...)
if not check then
io.stderr:write(format(strfmt, ...))
- os.exit(1)
+ os.exit(EXIT_FAILURE)
end
end
@@ -108,7 +110,7 @@ local function unrecognized_option(optname, dashes)
local fullname = dashes .. (optname or '=')
io.stderr:write(format('unrecognized command-line flag: %s\n', fullname))
io.stderr:write(HELP_MSG)
- os.exit(1)
+ os.exit(EXIT_FAILURE)
end
local function unrecognized_long_option(_, optname)
===================================================================
>
> > + end
> > +end
> > +
> > +-- Valid values: 'false'/'no'/'0'.
> > +-- In case of an invalid value the 'auto' is used.
> > +local function set_color(ctx, value)
> > + if value == 'false' or value == 'no' or value == '0' then
> > + ctx.color = false
> > + else
> > + -- In case of an invalid value, the Google Benchmark uses
> > + -- 'auto', which is true for the stdout output (the only
> > + -- colorizable output). So just set it to true by default.
> > + ctx.color = true
> > + end
> > +end
> > +
> > +local DEFAULT_MIN_TIME = 4.0
> > +local function set_min_time(ctx, value)
> > + local time = tonumber(value)
>
> > Tries to convert its argument to a number. If the argument is already
>
> > a number or a string convertible to a number, then tonumber returns
> this number;
>
> > otherwise, it returns *nil*.
>
> https://www.lua.org/manual/5.1/manual.html#pdf-tonumber
>
> please check result for nil
It is checked by the `check_param()` call on the line below:
`tonumber()` returns nil for an invalid value, and nil fails the check,
so the error is reported.
>
> > + check_param(time, 'Invalid min time: "%s"\n', value)
> > + ctx.min_time = time
> > +end
> > +
> > +local function set_output(ctx, filename)
> > + check_param(type(filename) == "string", 'Invalid output value: "%s"\n',
> > + filename)
> > + ctx.output = filename
> > +end
> > +
<snipped>
> > +local SHORT_OPTS = setmetatable({
> > + ['h'] = usage,
> > + ['j'] = set_jit,
> > +}, {__index = unrecognized_short_option})
> > +
> > +local LONG_OPTS = setmetatable({
> > + ['benchmark_color'] = set_color,
> > + ['benchmark_min_time'] = set_min_time,
> > + ['benchmark_out'] = set_output,
> > + -- XXX: For now support only JSON encoded and raw output.
> > + ['benchmark_out_format'] = set_output_format,
> > + ['help'] = usage,
> > +}, {__index = unrecognized_long_option})
> > +
> Is taking argparse from the tarantool repository (src/lua/argparse.lua)
> not an option?
I don't like that particular module very much, and I don't want to add
any third-party dependencies. Also, the argument parser needed here is
rather primitive, so I prefer to write it from scratch.
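The whole dispatch is just a table lookup with an __index fallback; a
condensed illustration of the approach (not the exact code from the
patch):

| local handlers = setmetatable({
|   help = function() print('usage: ...') end,
| }, {__index = function(_, opt)
|   error('unrecognized command-line flag: ' .. opt)
| end})
|
| handlers.help()       -- Known option: the handler is called.
| -- handlers.whatever  -- Unknown option: the lookup itself errors out.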
> > +local function is_option(str)
> > + return type(str) == 'string' and str:sub(1, 1) == '-' and str ~= '-'
> > +end
> > +
<snipped>
> > +
> > + if ctx.output_format == 'json' then
> > + json = require('cjson')
> should we make it compatible with tarantool? (module is named 'json' there)
I suppose not, since it won't be launched by Tarantool.
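If compatibility were ever wanted, it would be a one-line fallback
anyway; a hypothetical sketch:

| -- Prefer cjson, fall back to the 'json' module if cjson is missing.
| local ok, mod = pcall(require, 'cjson')
| json = ok and mod or require('json')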
> > + end
> > +
<snipped>
> > +
> > +--https://luajit.org/running.html#foot.
> > +local JIT_DEFAULTS = {
> > + maxtrace = 1000,
> > + maxrecord = 4000,
> > + maxirconst = 500,
> > + maxside = 100,
> > + maxsnap = 500,
> > + hotloop = 56,
> > + hotexit = 10,
> > + tryside = 4,
> > + instunroll = 4,
> > + loopunroll = 15,
> > + callunroll = 3,
> > + recunroll = 2,
> > + sizemcode = 32,
> > + maxmcode = 512,
> > +}
> please sort alphabetically
I've used the same sorting order as in the source, see the link.
It may be easier to find the option this way (at least it is for me).
> > +
> > +-- Basic setup for all tests to clean up after a previous
> > +-- executor.
> > +local function luajit_tests_setup(ctx)
<snipped>
> > + -- Reset the GC to the defaults.
> > + collectgarbage('setstepmul', 200)
> > + collectgarbage('setpause', 200)
> should we define 200 as a named constant?
These are the default values from the Lua Reference Manual. They are
used only once here, so I don't see the need for named constants.
> > +
> > + -- Collect all garbage at the end. Twice to be sure that all
> > + -- finalizers are run.
> > + collectgarbage()
> > + collectgarbage()
<snipped>
--
Best regards,
Sergey Kaplun