Tarantool development patches archive
From: Maxim Kokryashkin via Tarantool-patches <tarantool-patches@dev.tarantool.org>
To: "Sergey Kaplun" <skaplun@tarantool.org>
Cc: tarantool-patches@dev.tarantool.org
Subject: Re: [Tarantool-patches]  [PATCH v1 luajit 2/5] test: introduce module for C tests
Date: Mon, 20 Mar 2023 18:17:13 +0300	[thread overview]
Message-ID: <1679325433.571861292@f426.i.mail.ru> (raw)
In-Reply-To: <5d25f8889666f875fb0429ba373f8884b039b4c5.1678895861.git.skaplun@tarantool.org>



Hi!
Thanks for the patch!
Please consider my comments below.
  
>Wednesday, March 15, 2023, 19:14 +03:00 from Sergey Kaplun <skaplun@tarantool.org>:
> 
>We need an instrument to write tests in plain C for LuaJIT, to be able:
>* easily test LuaC API
>* test patches without usage plain Lua
Typo: s/usage/usage of/
>* write unit tests
>* startup LuaJIT with custom memory allocator, to test some GC issues
>* maybe, in future, use custom hashing function to test a behavior
>  of LuaJIT tables
>and so on.
>
>The <test.c> module serves to achieve these goals without too fancy
>features.
>
>It's functionality inspired by cmoka API [1], but only TAP14 [2]
>protocol is supported (Version of TAP set to 13 to be compatible with
>old TAP13 harnesses).
>
>The group of unit tests is declared like the following:
>
>| void *t_state = NULL;
>| const struct test_unit tgroup[] = {
>| test_unit_new(test_base),
>| test_unit_new(test_subtest),
>| };
>| return test_run_group(tgroup, t_state);
>
>`test_run_group()` runs the whole group of tests, returns
>`TEST_EXIT_SUCCESS` or `TEST_EXIT_FAILURE`.
>
>If a similar group is declared inside unit test, this group will be
>considered as a subtest.
>
>This library provides an API similar to glibc (3) `assert()` to use
>inside unit tests. `assert_[true,false]()` are useful for condition
>checks and `assert_{type}_[not_,]_equal()` are useful for value
>comparisons. If some assertion fails diagnostic is set, all test
>considered as failing and finished via `longjmp()`, so these assertions
>can be used inside custom subroutines.
>
>Also, this module provides ability to skip one test or all tests, mark
>test as todo, bail out all tests. `skip()`, `skip_all()` and `todo()`
>macros are implemented via an early return to be used only in the test
>body to make skipping clear. `skip_all()` may be used both for the
>parent test and for a subtest.
>
>As a part of this commit, tarantool-c-tests directory is created with
>the corresponding CMakeLists.txt file to build this test library.
>Tests to be rewritten in C with this library in the next commit and
>placed as unit tests are:
>* misclib-getmetrics-capi.test.lua
>* misclib-sysprof-capi.test.lua
>
>For now the tarantool-c-tests target just build the test library without
>new tests to run.
>
>[1]:  https://github.com/clibs/cmocka
>[2]:  https://testanything.org/tap-version-14-specification.html
>
>Part of tarantool/tarantool#7900
>---
>
>I left some notes about this test module and I'll be happy to read your
>thoughts about them.
>
>* Should we cast to `(void *)` in `assert_ptr_[not_]equal()`? Or it will
>  be better to notice user about bad type comparisons?
I believe we should not cast it to `(void *)` and should instead let
the compiler warn the user about bad type comparisons. This is C, so
if we can prevent potentially incorrect behavior at compile time, we
should do that.
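To illustrate my point, here is a minimal standalone sketch (not the
patch code) of what the compiler reports when two unrelated pointer
types are compared without the `(void *)` cast:

  #include <stdio.h>

  int main(void)
  {
      int n = 0;
      char c = 0;
      int *ip = &n;
      char *cp = &c;
      /* GCC/Clang: "comparison of distinct pointer types" warning. */
      if (ip != cp)
          printf("pointers differ\n");
      /* The (void *) casts silence the diagnostic and hide the mistake. */
      if ((void *)ip != (void *)cp)
          printf("pointers differ\n");
      return 0;
  }

With the casts in place the mismatch compiles silently, which is
exactly what we want to avoid.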
>* How often should we flush stdout?
It seems to be flushed often enough for now. I believe it is better
to get used to this module first and add extra `fflush()` calls a bit
down the road, if we need them.
>* Obviously we can use `_test_run_group(__func__, NULL, 0, NULL)` with
>  `test_set_skip_reason()` set to implement `skip_all()` functionality.
>  Nevertheless, I decided to reimpliment it's logic separately to be
>  more easily maintained in the future.
If we change the signature to the one I proposed below, the
`_test_run_group(__func__, NULL, 0, NULL)` approach will no longer be
possible, so there is nothing wrong with the reimplementation.
>
>
> test/CMakeLists.txt | 2 +
> test/tarantool-c-tests/CMakeLists.txt | 43 +++++
> test/tarantool-c-tests/test.c | 251 ++++++++++++++++++++++++++
> test/tarantool-c-tests/test.h | 251 ++++++++++++++++++++++++++
> 4 files changed, 547 insertions(+)
> create mode 100644 test/tarantool-c-tests/CMakeLists.txt
> create mode 100644 test/tarantool-c-tests/test.c
> create mode 100644 test/tarantool-c-tests/test.h
>
>diff --git a/test/CMakeLists.txt b/test/CMakeLists.txt
>index a8262b12..47296a22 100644
>--- a/test/CMakeLists.txt
>+++ b/test/CMakeLists.txt
>@@ -48,12 +48,14 @@ separate_arguments(LUAJIT_TEST_COMMAND)
> add_subdirectory(LuaJIT-tests)
> add_subdirectory(PUC-Rio-Lua-5.1-tests)
> add_subdirectory(lua-Harness-tests)
>+add_subdirectory(tarantool-c-tests)
> add_subdirectory(tarantool-tests)
> 
> add_custom_target(${PROJECT_NAME}-test DEPENDS
>   LuaJIT-tests
>   PUC-Rio-Lua-5.1-tests
>   lua-Harness-tests
>+ tarantool-c-tests
>   tarantool-tests
> )
> 
>diff --git a/test/tarantool-c-tests/CMakeLists.txt b/test/tarantool-c-tests/CMakeLists.txt
>new file mode 100644
>index 00000000..5ebea441
>--- /dev/null
>+++ b/test/tarantool-c-tests/CMakeLists.txt
>@@ -0,0 +1,43 @@
>+find_program(PROVE prove)
>+if(NOT PROVE)
>+ message(WARNING "`prove' is not found, so tarantool-c-tests target is not generated")
>+ return()
>+endif()
The same check already exists in test/tarantool-tests/CMakeLists.txt.
Maybe we should move it to the higher-level CMakeLists.txt so that the
lower-level ones inherit it.
>+
>+set(C_TEST_SUFFIX .c_test)
>+set(C_TEST_FLAGS --failures --shuffle)
>+
>+if(CMAKE_VERBOSE_MAKEFILE)
>+ list(APPEND C_TEST_FLAGS --verbose)
>+endif()
>+
>+# Build libtest.
>+
>+set(TEST_LIB_NAME "test")
>+add_library(libtest STATIC EXCLUDE_FROM_ALL ${CMAKE_CURRENT_SOURCE_DIR}/test.c)
>+target_include_directories(libtest PRIVATE ${CMAKE_CURRENT_SOURCE_DIR})
>+set_target_properties(libtest PROPERTIES
>+ COMPILE_FLAGS "-Wall -Wextra"
>+ OUTPUT_NAME "${TEST_LIB_NAME}"
>+ LIBRARY_OUTPUT_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}"
>+)
>+
>+# XXX: For now, just build libtest. The tests to be depended on
>+# will be added at the next commit.
Typo: s/at the next/in the next/
>+add_custom_target(tarantool-c-tests
>+ DEPENDS libluajit libtest
>+)
>+
>+# XXX: For now, run 0 tests. Just verify that libtest was build.
Typo: s/was build/was built/
>+add_custom_command(TARGET tarantool-c-tests
>+ COMMENT "Running Tarantool C tests"
>+ COMMAND
>+ ${PROVE}
>+ ${CMAKE_CURRENT_BINARY_DIR}
>+ --ext ${C_TEST_SUFFIX}
>+ --jobs ${CMAKE_BUILD_PARALLEL_LEVEL}
>+ ${C_TEST_FLAGS}
>+ WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
>+)
>+
>+# vim: ft=cmake expandtab shiftwidth=2: tabstop=2:
That change is not necessary.
>diff --git a/test/tarantool-c-tests/test.c b/test/tarantool-c-tests/test.c
>new file mode 100644
>index 00000000..dc63cf3f
>--- /dev/null
>+++ b/test/tarantool-c-tests/test.c
>@@ -0,0 +1,251 @@
>+#include "test.h"
>+
>+/*
>+ * Test module, based on TAP 14 specification [1].
>+ * [1]:  https://testanything.org/tap-version-14-specification.html
>+ */
>+
>+/* Need for `PATH_MAX` in diagnostic definition. */
>+#include <limits.h>
>+#include <setjmp.h>
>+#include <stdarg.h>
>+/* Need for `strchr()` in diagnostic parsing. */
`strchr()` is not safe here: it relies on the buffer being
`\0`-terminated and has no length bound. We should at least replace it
with `memchr()`, which takes an explicit buffer length.
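Something along these lines is what I have in mind; a standalone
sketch with a toy buffer size standing in for `TEST_DIAG_DATA_MAX`:

  #include <stdio.h>
  #include <string.h>

  #define DIAG_MAX 64

  int main(void)
  {
      char buf[DIAG_MAX] = "location:\tfoo.c:42\nsecond entry\n";
      const char *ent = buf;
      /*
       * Bounded lookup: never reads past the buffer, even if the
       * terminating '\0' were missing.
       */
      const char *ent_end = memchr(ent, '\n', DIAG_MAX - (ent - buf));
      if (ent_end != NULL)
          printf("first entry is %td bytes long\n", ent_end - ent);
      return 0;
  }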
>+#include <string.h>
>+
>+/*
>+ * Test level: 0 for the parent test, >0 for any subtests.
>+ */
>+static int level = -1;
>+
>+/*
>+ * The last diagnostic data to be used in the YAML Diagnostic
>+ * block.
>+ *
>+ * Contains filename, line number and failed expression or assert
>+ * name and "got" and "expected" fields. All entries are separated
>+ * by \n.
>+ * The longest field is filename here, so PATH_MAX * 3 as
>+ * the diagnostic string length should be enough.
>+ *
>+ * The first \0 means the end of diagnostic data.
>+ *
>+ * As far as `strchr()` searches until \0, all previous entries
>+ * are suppressed by the last one. If the first byte is \0 --
>+ * diagnostic is empty.
>+ */
>+#define TEST_DIAG_DATA_MAX (PATH_MAX * 3)
>+char test_diag_buf[TEST_DIAG_DATA_MAX] = {0};
>+
>+const char *skip_reason = NULL;
>+const char *todo_reason = NULL;
>+
>+/* Indent for the TAP. 4 spaces is default for subtest. */
>+static void indent(void)
>+{
>+ int i;
>+ for (i = 0; i < level; i++)
>+ printf(" ");
>+}
>+
>+void test_message(const char *fmt, ...)
>+{
>+ va_list ap;
>+ indent();
>+ va_start(ap, fmt);
>+ vprintf(fmt, ap);
>+ printf("\n");
>+ va_end(ap);
>+}
>+
>+static void test_print_tap_version(void)
>+{
>+ /*
>+ * Since several TAP13 parsers in popular usage treat
>+ * a repeated Version declaration as an error, even if the
>+ * Version is indented, Subtests _should not_ include a
>+ * Version, if TAP13 Harness compatibility is
>+ * desirable [1].
>+ */
>+ if (level == 0)
>+ test_message("TAP version %d", TAP_VERSION);
>+}
>+
>+static void test_start_comment(const char *t_name)
>+{
>+ if (level > -1)
>+ /*
>+ * Inform about starting subtest, easier for
>+ * humans to read.
>+ * Subtest with a name must be terminated by a
>+ * Test Point with a matching Description [1].
>+ */
>+ test_comment("Subtest: %s", t_name);
>+}
>+
>+void _test_print_skip_all(const char *group_name, const char *reason)
>+{
>+ test_start_comment(group_name);
>+ /*
>+ * XXX: This test isn't started yet, so set indent level
>+ * manually.
>+ */
>+ level++;
>+ test_print_tap_version();
>+ /*
>+ * XXX: `SKIP_DIRECTIVE` is not necessary here according
>+ * to the TAP14 specification [1], but some harnesses may
>+ * fail to parse the output without it.
>+ */
>+ test_message("1..0" SKIP_DIRECTIVE "%s", reason);
>+ level--;
>+}
>+
>+/* Just inform TAP parser how many tests we want to run. */
>+static void test_plan(size_t planned)
>+{
>+ test_message("1..%lu", planned);
>+}
>+
>+/* Human-readable output how many tests/subtests are failed. */
>+static void test_finish(size_t planned, size_t failed)
>+{
>+ const char *t_type = level == 0 ? "tests" : "subtests";
>+ if (failed > 0)
>+ test_comment("Looks like you failed %lu %s out of %lu",
>+ failed, t_type, planned);
Side note: «Looks like» is a bit misleading, since it sounds as if we
are not sure whether the tests failed or not. I propose rephrasing it
in a stricter fashion: «Failed %lu out of %lu».
>+ fflush(stdout);
>+}
>+
>+void test_set_skip_reason(const char *reason)
>+{
>+ skip_reason = reason;
>+}
>+
>+void test_set_todo_reason(const char *reason)
>+{
>+ todo_reason = reason;
>+}
>+
>+void test_save_diag_data(const char *fmt, ...)
>+{
>+ va_list ap;
>+ va_start(ap, fmt);
>+ vsnprintf(test_diag_buf, TEST_DIAG_DATA_MAX, fmt, ap);
>+ va_end(ap);
>+}
>+
>+static void test_clear_diag_data(void)
>+{
>+ /*
>+ * Limit buffer with zero byte to show that there is no
>+ * any entry.
>+ */
>+ test_diag_buf[0] = '\0';
>+}
>+
>+static int test_diagnostic_is_set(void)
>+{
>+ return test_diag_buf[0] != '\0';
>+}
>+
>+/*
>+ * Parse the last diagnostic data entry and print it in YAML
>+ * format with the corresponding additional half-indent in TAP
>+ * (2 spaces).
>+ * Clear diagnostic message to be sure that it's printed once.
>+ * XXX: \n separators are changed to \0 during parsing and
>+ * printing output for convenience in usage.
>+ */
>+static void test_diagnostic(void)
>+{
>+ test_message(" ---");
>+ char *ent = test_diag_buf;
>+ char *ent_end = NULL;
>+ while ((ent_end = strchr(ent, '\n')) != NULL) {
>+ char *next_ent = ent_end + 1;
>+ /*
>+ * Limit string with with the zero byte for
>+ * formatted output. Anyway, don't need this \n
>+ * anymore.
>+ */
>+ *ent_end = '\0';
>+ test_message(" %s", ent);
>+ ent = next_ent;
>+ }
>+ test_message(" ...");
>+ test_clear_diag_data();
>+}
>+
>+static jmp_buf test_run_env;
>+
>+TEST_NORET void _test_exit(int status)
>+{
>+ longjmp(test_run_env, status);
>+}
>+
>+static int test_run(const struct test_unit *test, size_t test_number,
>+ void *test_state)
>+{
>+ int status = TEST_EXIT_SUCCESS;
>+ /*
>+ * Run unit test. Diagnostic in case of failure setup by
>+ * helpers assert macros defined in the header.
>+ */
>+ int jmp_status;
>+ if ((jmp_status = setjmp(test_run_env)) == 0) {
>+ if (test->f(test_state) != TEST_EXIT_SUCCESS)
>+ status = TEST_EXIT_FAILURE;
>+ } else {
>+ status = jmp_status - TEST_JMP_STATUS_SHIFT;
>+ }
>+ const char *result = status == TEST_EXIT_SUCCESS ? "ok" : "not ok";
>+
>+ /*
>+ * Format suffix of the test message for SKIP or TODO
>+ * directives.
>+ */
>+#define SUFFIX_SZ 1024
>+ char suffix[SUFFIX_SZ] = {0};
>+ if (skip_reason) {
>+ snprintf(suffix, SUFFIX_SZ, SKIP_DIRECTIVE "%s", skip_reason);
>+ skip_reason = NULL;
>+ } else if (todo_reason) {
>+ /* Prevent count this test as failed. */
>+ status = TEST_EXIT_SUCCESS;
>+ snprintf(suffix, SUFFIX_SZ, TODO_DIRECTIVE "%s", todo_reason);
>+ todo_reason = NULL;
>+ }
>+#undef SUFFIX_SZ
>+
>+ test_message("%s %lu - %s%s", result, test_number, test->name,
>+ suffix);
>+
>+ if (status && test_diagnostic_is_set())
>+ test_diagnostic();
>+ return status;
>+}
>+
>+int _test_run_group(const char *group_name, const struct test_unit *tests,
>+ size_t n_tests, void *test_state)
Strictly speaking, <type> * and <type>[] are different types, and
since this testing facility depends on the `sizeof` behavior for
<type>[], I think the argument type should be changed to
`const struct test_unit[]`.
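To spell out the `sizeof` concern, a tiny standalone example (the
struct shape is only loosely copied from the patch):

  #include <stdio.h>

  struct test_unit {
      const char *name;
      int (*f)(void *);
  };

  #define lengthof(arr) (sizeof(arr) / sizeof((arr)[0]))

  int main(void)
  {
      struct test_unit tgroup[2] = {{"a", NULL}, {"b", NULL}};
      struct test_unit *decayed = tgroup;
      /* On the array object lengthof() yields the element count: 2. */
      printf("%zu\n", lengthof(tgroup));
      /*
       * On a pointer it yields sizeof(pointer) / sizeof(struct),
       * which has nothing to do with the element count.
       */
      printf("%zu\n", lengthof(decayed));
      return 0;
  }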
>+{
>+ test_start_comment(group_name);
>+
>+ level++;
>+ test_print_tap_version();
>+
>+ test_plan(n_tests);
>+
>+ size_t n_failed = 0;
>+
>+ size_t i;
>+ for (i = 0; i < n_tests; i++) {
>+ size_t test_number = i + 1;
>+ /* Return 1 on failure, 0 on success. */
>+ n_failed += test_run(&tests[i], test_number, test_state);
>+ }
>+
>+ test_finish(n_tests, n_failed);
>+
>+ level--;
>+ return n_failed > 0 ? TEST_EXIT_FAILURE : TEST_EXIT_SUCCESS;
>+}
>diff --git a/test/tarantool-c-tests/test.h b/test/tarantool-c-tests/test.h
>new file mode 100644
>index 00000000..695c5b4d
>--- /dev/null
>+++ b/test/tarantool-c-tests/test.h
>@@ -0,0 +1,251 @@
>+#ifndef TEST_H
>+#define TEST_H
>+
>+#include <stdio.h>
>+#include <stdlib.h>
>+
>+/*
>+ * Test module, based on TAP 14 specification [1].
>+ * [1]:  https://testanything.org/tap-version-14-specification.html
>+ * Version 13 is set for better compatibility on old machines.
>+ *
>+ * TODO:
>+ * * Helpers assert macros:
>+ * - assert_uint_equal if needed
>+ * - assert_uint_not_equal if needed
>+ * - assert_str_equal if needed
>+ * - assert_str_not_equal if needed
>+ * - assert_memory_equal if needed
>+ * - assert_memory_not_equal if needed
>+ * * Pragmas.
>+ */
>+
>+#define TAP_VERSION 13
>+
>+#define TEST_EXIT_SUCCESS 0
>+#define TEST_EXIT_FAILURE 1
>+
>+#define TEST_JMP_STATUS_SHIFT 2
>+#define TEST_LJMP_EXIT_SUCCESS (TEST_EXIT_SUCCESS + TEST_JMP_STATUS_SHIFT)
>+#define TEST_LJMP_EXIT_FAILURE (TEST_EXIT_FAILURE + TEST_JMP_STATUS_SHIFT)
>+
>+#define TEST_NORET __attribute__((noreturn))
>+
>+typedef int (*test_func)(void *test_state);
>+struct test_unit {
>+ const char *name;
>+ test_func f;
>+};
>+
>+/* Initialize `test_unit` structure. */
>+#define test_unit_new(f) {#f, f}
>+
>+#define lengthof(arr) (sizeof(arr) / sizeof((arr)[0]))
See my comment above with the concerns about <type> * vs. <type>[].
>+
>+/*
>+ * __func__ is the name for a test group, "main" for the parent
>+ * test.
>+ */
>+#define test_run_group(t_arr, t_state) \
>+ _test_run_group(__func__, t_arr, lengthof(t_arr), t_state)
Is there any reason for this to be a macro rather than a function
wrapper? I believe it is better to use functions where possible, since
they are easier to maintain and debug.
>+
>+#define SKIP_DIRECTIVE " # SKIP "
>+#define TODO_DIRECTIVE " # TODO "
>+
>+/*
>+ * XXX: May be implemented as well via
>+ * `_test_run_group(__func, NULL, 0, NULL)` and
>+ * `test_set_skip_reason` with additional changes in the former.
>+ * But the current approach is easier to maintain, as far as we
>+ * don't want to interfere different entities.
>+ */
>+#define skip_all(reason) do { \
>+ _test_print_skip_all(__func__, reason); \
>+ return TEST_EXIT_SUCCESS; \
>+} while (0)
Again, I propose replacing it with a conventional function to make the
return point explicit in the test implementation, so it reads as
`return skip_all(<reason>)` instead of a bare `skip_all(<reason>)`.
The same goes for the `skip()` and `todo()` facilities below.
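A rough sketch of the function I have in mind (the names are assumed
and it would live in test.h next to the existing declarations; since a
plain function cannot see the caller's __func__, the group name has to
be passed explicitly):

  static inline int skip_all(const char *group_name, const char *reason)
  {
      _test_print_skip_all(group_name, reason);
      return TEST_EXIT_SUCCESS;
  }

  /* Usage inside a test group function: */
  /*     return skip_all(__func__, "not supported here"); */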
>+
>+#define skip(reason) do { \
>+ test_set_skip_reason(reason); \
>+ return TEST_EXIT_SUCCESS; \
>+} while (0)
>+
>+#define todo(reason) do { \
>+ test_set_todo_reason(reason); \
>+ return TEST_EXIT_FAILURE; \
>+} while (0)
>+
>+#define bail_out(reason) do { \
>+ /* \
>+ * For backwards compatibility with TAP13 Harnesses, \
>+ * Producers _should_ emit a "Bail out!" line at the root \
>+ * indentation level whenever a Subtest bails out [1]. \
>+ */ \
>+ printf("Bail out! %s\n", reason); \
>+ exit(TEST_EXIT_FAILURE); \
>+} while (0)
>+
>+/* `fmt` should always be a format string here. */
>+#define test_comment(fmt, ...) test_message("# " fmt, __VA_ARGS__)
>+
>+/*
>+ * This is a set of useful assert macros like the standard C
>+ * libary's assert(3) macro.
>+ *
>+ * On an assertion failure an assert macro will save the
>+ * diagnostic to the special buffer, to be reported via YAML
>+ * Diagnostic block and finish a test function with
>+ * `return TEST_EXIT_FAILURE`.
>+ *
>+ * Due to limitations of the C language `assert_true()` and
>+ * `assert_false()` macros can only display the expression that
>+ * caused the assertion failure. Type specific assert macros,
>+ * `assert_{type}_equal()` and `assert_{type}_not_equal()`, save
>+ * the data that caused the assertion failure which increases data
>+ * visibility aiding debugging of failing test cases.
>+ */
>+
>+#define LOCATION_FMT "location:\t%s:%d\n"
>+#define ASSERT_NAME_FMT(name) "failed_assertion:\t" #name "\n"
>+
>+#define assert_true(cond) do { \
>+ if (!(cond)) { \
>+ test_save_diag_data(LOCATION_FMT \
>+ "condition_failed:\t'" #cond "'\n", \
>+ __FILE__, __LINE__); \
>+ _test_exit(TEST_LJMP_EXIT_FAILURE); \
>+ } \
>+} while (0)
>+
>+#define assert_false(cond) assert_true(!(cond))
>+
>+#define assert_ptr_equal(got, expected) do { \
>+ if ((got) != (expected)) { \
>+ test_save_diag_data( \
>+ LOCATION_FMT \
>+ ASSERT_NAME_FMT(assert_ptr_equal) \
>+ "got: %p\n" \
>+ "expected: %p\n", \
>+ __FILE__, __LINE__, (got), (expected) \
>+ ); \
>+ _test_exit(TEST_LJMP_EXIT_FAILURE); \
>+ } \
>+} while (0)
>+
>+#define assert_ptr_not_equal(got, unexpected) do { \
>+ if ((got) == (unexpected)) { \
>+ test_save_diag_data( \
>+ LOCATION_FMT \
>+ ASSERT_NAME_FMT(assert_ptr_not_equal) \
>+ "got: %p\n" \
>+ "unexpected: %p\n", \
>+ __FILE__, __LINE__, (got), (unexpected) \
>+ ); \
>+ _test_exit(TEST_LJMP_EXIT_FAILURE); \
>+ } \
>+} while (0)
>+
>+#define assert_int_equal(got, expected) do { \
>+ if ((got) != (expected)) { \
>+ test_save_diag_data( \
>+ LOCATION_FMT \
>+ ASSERT_NAME_FMT(assert_int_equal) \
>+ "got: %d\n" \
>+ "expected: %d\n", \
>+ __FILE__, __LINE__, (got), (expected) \
>+ ); \
>+ _test_exit(TEST_LJMP_EXIT_FAILURE); \
>+ } \
>+} while (0)
>+
>+#define assert_int_not_equal(got, unexpected) do { \
>+ if ((got) == (unexpected)) { \
>+ test_save_diag_data( \
>+ LOCATION_FMT \
>+ ASSERT_NAME_FMT(assert_int_not_equal) \
>+ "got: %d\n" \
>+ "unexpected: %d\n", \
>+ __FILE__, __LINE__, (got), (unexpected) \
>+ ); \
>+ _test_exit(TEST_LJMP_EXIT_FAILURE); \
>+ } \
>+} while (0)
>+
>+#define assert_sizet_equal(got, expected) do { \
>+ if ((got) != (expected)) { \
>+ test_save_diag_data( \
>+ LOCATION_FMT \
>+ ASSERT_NAME_FMT(assert_sizet_equal) \
>+ "got: %lu\n" \
>+ "expected: %lu\n", \
>+ __FILE__, __LINE__, (got), (expected) \
>+ ); \
>+ _test_exit(TEST_LJMP_EXIT_FAILURE); \
>+ } \
>+} while (0)
>+
>+#define assert_sizet_not_equal(got, unexpected) do { \
>+ if ((got) == (unexpected)) { \
>+ test_save_diag_data( \
>+ LOCATION_FMT \
>+ ASSERT_NAME_FMT(assert_sizet_not_equal) \
>+ "got: %lu\n" \
>+ "unexpected: %lu\n", \
>+ __FILE__, __LINE__, (got), (unexpected) \
>+ ); \
>+ _test_exit(TEST_LJMP_EXIT_FAILURE); \
>+ } \
>+} while (0)
>+
>+/* Check that doubles are __exactly__ the same. */
>+#define assert_double_equal(got, expected) do { \
>+ if ((got) != (expected)) { \
>+ test_save_diag_data( \
>+ LOCATION_FMT \
>+ ASSERT_NAME_FMT(assert_double_equal) \
>+ "got: %lf\n" \
>+ "expected: %lf\n", \
>+ __FILE__, __LINE__, (got), (expected) \
>+ ); \
>+ _test_exit(TEST_LJMP_EXIT_FAILURE); \
>+ } \
>+} while (0)
>+
>+/* Check that doubles are not __exactly__ the same. */
>+#define assert_double_not_equal(got, unexpected) do { \
>+ if ((got) == (unexpected)) { \
>+ test_save_diag_data( \
>+ LOCATION_FMT \
>+ ASSERT_NAME_FMT(assert_double_not_equal) \
>+ "got: %lf\n" \
>+ "unexpected: %lf\n", \
>+ __FILE__, __LINE__, (got), (unexpected) \
>+ ); \
>+ _test_exit(TEST_LJMP_EXIT_FAILURE); \
>+ } \
>+} while (0)
>+
>+/* API declaration. */
>+
>+/*
>+ * Print formatted message with the corresponding indent.
>+ * If you want to leave a comment, use `test_comment()` instead.
>+ */
>+void test_message(const char *fmt, ...);
>+
>+/* Need for `skip_all()`, please, don't use it. */
>+void _test_print_skip_all(const char *group_name, const char *reason);
>+/* End test via `longjmp()`, please, don't use it. */
>+TEST_NORET void _test_exit(int status);
>+
>+void test_set_skip_reason(const char *reason);
>+void test_set_todo_reason(const char *reason);
>+/*
>+ * Save formatted diagnostic data. Each entry separated with \n.
>+ */
>+void test_save_diag_data(const char *fmt, ...);
>+
>+/* Internal, it is better to use `test_run_group()` instead. */
>+int _test_run_group(const char *group_name, const struct test_unit *tests,
>+ size_t n_tests, void *test_state);
>+
>+#endif /* TEST_H */
>--
>2.34.1
--
Best regards,
Maxim Kokryashkin
 


Thread overview: 12+ messages
2023-03-15 16:11 [Tarantool-patches] [PATCH v1 luajit 0/5] reworking " Sergey Kaplun via Tarantool-patches
2023-03-15 16:11 ` [Tarantool-patches] [PATCH v1 luajit 1/5] test: fix setting of {DY}LD_LIBRARY_PATH variables Sergey Kaplun via Tarantool-patches
2023-03-20 13:54   ` Maxim Kokryashkin via Tarantool-patches
2023-03-15 16:11 ` [Tarantool-patches] [PATCH v1 luajit 2/5] test: introduce module for C tests Sergey Kaplun via Tarantool-patches
2023-03-20 15:17   ` Maxim Kokryashkin via Tarantool-patches [this message]
2023-03-15 16:11 ` [Tarantool-patches] [PATCH v1 luajit 3/5] test: introduce utils.h helper " Sergey Kaplun via Tarantool-patches
2023-03-20 15:21   ` Maxim Kokryashkin via Tarantool-patches
2023-03-15 16:11 ` [Tarantool-patches] [PATCH v1 luajit 4/5] test: rewrite misclib-getmetrics-capi test in C Sergey Kaplun via Tarantool-patches
2023-03-22  0:07   ` Maxim Kokryashkin via Tarantool-patches
2023-03-15 16:11 ` [Tarantool-patches] [PATCH v1 luajit 5/5] test: rewrite misclib-sysprof-capi " Sergey Kaplun via Tarantool-patches
2023-03-20 16:24   ` Maxim Kokryashkin via Tarantool-patches
2023-03-20 13:50 ` [Tarantool-patches] [PATCH v1 luajit 0/5] reworking C tests Maxim Kokryashkin via Tarantool-patches
