[Tarantool-patches] [PATCH v1 luajit 31/41] perf: add scimark-mc in LuaJIT-benches

Sergey Bronnikov sergeyb at tarantool.org
Mon Nov 17 17:09:24 MSK 2025


Hi, Sergey,

thanks for the patch! LGTM with a minor comment below.

I propose adding a short test description as a comment:

SciMark is a popular scientific benchmark suite; MC is its Monte Carlo
integration kernel.
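A possible wording, as a comment above the `benchmark` table (a sketch only, the exact phrasing is up to you; the URL is the upstream NIST page for SciMark):

```lua
-- SciMark <https://math.nist.gov/scimark2/> is a popular scientific
-- benchmark suite; MC is its Monte Carlo integration kernel, which
-- estimates the value of pi by sampling random points in the unit
-- square.
```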

Sergey

On 10/24/25 13:50, Sergey Kaplun wrote:
> This patch adds the aforementioned test using the benchmark
> framework introduced earlier. The default arguments are adjusted
> according to the number of cycles in the <scimark-2010-12-20.lua> file.
> The arguments to the script can also be provided on the command line.
>
> Checks are omitted since they were not present in the original suite,
> and the precise result value depends on the input parameter.
> ---
>   perf/LuaJIT-benches/scimark-mc.lua | 19 +++++++++++++++++++
>   1 file changed, 19 insertions(+)
>   create mode 100644 perf/LuaJIT-benches/scimark-mc.lua
>
> diff --git a/perf/LuaJIT-benches/scimark-mc.lua b/perf/LuaJIT-benches/scimark-mc.lua
> new file mode 100644
> index 00000000..d26b6e48
> --- /dev/null
> +++ b/perf/LuaJIT-benches/scimark-mc.lua
> @@ -0,0 +1,19 @@
> +local bench = require("bench").new(arg)
> +
> +local cycles = tonumber(arg and arg[1]) or 15e7
> +
> +local benchmark
> +benchmark = {
> +  name = "scimark_mc",
> +  -- XXX: The description of tests for the function is too
> +  -- inconvenient.
> +  skip_check = true,
> +  payload = function()
> +    local flops = require("scimark_lib").MC()(cycles)
> +    benchmark.items = flops
> +  end,
> +}
> +
> +bench:add(benchmark)
> +
> +bench:run_and_report()
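For context, the MC kernel being benchmarked boils down to the classic Monte Carlo estimation of pi. A minimal self-contained sketch of the technique (this is only an illustration, not the actual `scimark_lib` code):

```lua
-- Monte Carlo estimation of pi: sample random points in the unit
-- square and count how many fall inside the quarter circle
-- x^2 + y^2 <= 1. The hit ratio approaches pi/4 as the number of
-- samples grows, so multiplying it by 4 approximates pi.
local function mc_pi(samples)
  local hits = 0
  for _ = 1, samples do
    local x, y = math.random(), math.random()
    if x * x + y * y <= 1 then
      hits = hits + 1
    end
  end
  return hits / samples * 4
end

print(mc_pi(1e6)) -- Prints a value close to 3.14.
```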