Hi, Sergey,

LGTM then

Sergey

On 1/3/26 09:05, Sergey Kaplun wrote:
Hi, Sergey!
Thanks for the review!
Please consider my answer below.

On 02.01.26, Sergey Bronnikov wrote:
Hi, Sergey!

thanks for the patch! LGTM with a minor comment.

Sergey

On 12/26/25 12:18, Sergey Kaplun wrote:
This patch adds the aforementioned test using the benchmark framework
introduced earlier. The default arguments are adjusted to match the
number of cycles in the <scimark-2010-12-20.lua> file. The arguments to
the script can also be given on the command line.

Checks are omitted since they were not present in the original suite,
and the precise result value depends on the input parameter.
---
  perf/LuaJIT-benches/scimark-mc.lua | 19 +++++++++++++++++++
  1 file changed, 19 insertions(+)
  create mode 100644 perf/LuaJIT-benches/scimark-mc.lua

diff --git a/perf/LuaJIT-benches/scimark-mc.lua b/perf/LuaJIT-benches/scimark-mc.lua
new file mode 100644
index 00000000..d26b6e48
--- /dev/null
+++ b/perf/LuaJIT-benches/scimark-mc.lua
@@ -0,0 +1,19 @@
+local bench = require("bench").new(arg)
+
+local cycles = tonumber(arg and arg[1]) or 15e7
Do we want to add this to the usage?
I suppose there is no need for it.
It may still be done if you are exploring the benchmark's behaviour,
but in that case you will be reading its sources anyway.
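For context, here is a minimal standalone sketch of the pattern in
question (the file name and the invocation below are illustrative only,
not part of the patch):

  -- demo-cycles.lua: hypothetical illustration of the optional argument.
  -- Run as `luajit demo-cycles.lua 1e6`; with no argument the default
  -- cycle count is used.
  local cycles = tonumber(arg and arg[1]) or 15e7
  print(("Using %g cycles"):format(cycles))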

+
+local benchmark
+benchmark = {
+  name = "scimark_mc",
+  -- XXX: Describing the expected result for this function is too
+  -- inconvenient, so the check is skipped.
+  skip_check = true,
+  payload = function()
+    local flops = require("scimark_lib").MC()(cycles)
+    benchmark.items = flops
+  end,
+}
+
+bench:add(benchmark)
+
+bench:run_and_report()
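
For what it's worth, the payload can also be smoke-tested outside the
bench framework. This is only a rough sketch, assuming <scimark_lib> is
resolvable via package.path; the call itself mirrors the patch:

  -- smoke.lua: hypothetical standalone smoke run, not part of the patch.
  -- A small cycle count keeps the run short; the returned value is the
  -- flop count, as in the payload above.
  local cycles = 1e5
  local flops = require("scimark_lib").MC()(cycles)
  print(("scimark_mc: %g flops for %g cycles"):format(flops, cycles))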