From: Sergey Kaplun via Tarantool-patches
Reply-To: Sergey Kaplun
To: Igor Munkin, Maxim Kokryashkin
Cc: tarantool-patches@dev.tarantool.org
Subject: [Tarantool-patches] [PATCH v1 luajit 2/5] test: introduce module for C tests
Date: Wed, 15 Mar 2023 19:11:02 +0300
Message-Id: <5d25f8889666f875fb0429ba373f8884b039b4c5.1678895861.git.skaplun@tarantool.org>
X-Mailer: git-send-email 2.34.1
List-Id: Tarantool development patches
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

We need an instrument for writing LuaJIT tests in plain C, to be able to:
* easily test the Lua C API
* test patches without using plain Lua
* write unit tests
* start LuaJIT up with a custom memory allocator, to test some GC issues
* maybe, in the future, use a custom hashing function to test the
  behavior of LuaJIT tables, and so on.

The module serves to achieve these goals without overly fancy features.
Its functionality is inspired by the cmocka API [1], but only the TAP14
protocol [2] is supported (the TAP version is set to 13 to stay
compatible with old TAP13 harnesses).

A group of unit tests is declared like the following:

| void *t_state = NULL;
| const struct test_unit tgroup[] = {
| 	test_unit_new(test_base),
| 	test_unit_new(test_subtest),
| };
| return test_run_group(tgroup, t_state);

`test_run_group()` runs the whole group of tests and returns
`TEST_EXIT_SUCCESS` or `TEST_EXIT_FAILURE`. If a similar group is
declared inside a unit test, that group is considered a subtest.

This library provides an API similar to the glibc `assert(3)` macro for
use inside unit tests. `assert_[true,false]()` are useful for condition
checks, and `assert_{type}_[not_]equal()` are useful for value
comparisons. If an assertion fails, the diagnostic is saved, the whole
test is considered failed and is finished via `longjmp()`, so these
assertions can be used inside custom subroutines as well.

Also, this module provides the ability to skip a single test or all
tests, to mark a test as TODO, and to bail out of all tests. The
`skip()`, `skip_all()` and `todo()` macros are implemented via an early
return, so they can be used only in the test body, which makes the
skipping explicit. `skip_all()` may be used both for the parent test
and for a subtest.

As a part of this commit, the tarantool-c-tests directory is created
with the corresponding CMakeLists.txt file to build this test library.
Tests to be rewritten in C with this library in the next commit and
placed there as unit tests are:
* misclib-getmetrics-capi.test.lua
* misclib-sysprof-capi.test.lua

For now, the tarantool-c-tests target just builds the test library,
without any new tests to run.

[1]: https://github.com/clibs/cmocka
[2]: https://testanything.org/tap-version-14-specification.html

Part of tarantool/tarantool#7900
---

I left some notes about this test module below and I'll be happy to
read your thoughts about them.
* Should we cast to `(void *)` in `assert_ptr_[not_]equal()`? Or would
  it be better to warn the user about bad type comparisons?
* How often should we flush stdout?
* Obviously, we can implement the `skip_all()` functionality via
  `_test_run_group(__func__, NULL, 0, NULL)` with
  `test_set_skip_reason()` set. Nevertheless, I decided to reimplement
  its logic separately, so it is easier to maintain in the future.
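
For illustration, a complete test file built on top of this module
might look like the following minimal sketch (the test names and the
checked conditions here are hypothetical; only the declarations from
test.h are taken from the patch):

| #include "test.h"
|
| /* A trivial always-passing unit test. */
| static int test_base(void *test_state)
| {
| 	(void)test_state;
| 	assert_true(1 + 1 == 2);
| 	return TEST_EXIT_SUCCESS;
| }
|
| /* A unit test that is skipped with a reason. */
| static int test_skipped(void *test_state)
| {
| 	(void)test_state;
| 	skip("not implemented yet");
| }
|
| int main(void)
| {
| 	void *t_state = NULL;
| 	const struct test_unit tgroup[] = {
| 		test_unit_new(test_base),
| 		test_unit_new(test_skipped),
| 	};
| 	return test_run_group(tgroup, t_state);
| }

Such a group should produce roughly the following TAP output:

| TAP version 13
| 1..2
| ok 1 - test_base
| ok 2 - test_skipped # SKIP not implemented yet

Each such file is presumably built into a separate executable with the
`.c_test` suffix, which `prove` then picks up from the build directory
(see the CMake part below).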
 test/CMakeLists.txt                   |   2 +
 test/tarantool-c-tests/CMakeLists.txt |  43 +++++
 test/tarantool-c-tests/test.c         | 251 ++++++++++++++++++++++++++
 test/tarantool-c-tests/test.h         | 251 ++++++++++++++++++++++++++
 4 files changed, 547 insertions(+)
 create mode 100644 test/tarantool-c-tests/CMakeLists.txt
 create mode 100644 test/tarantool-c-tests/test.c
 create mode 100644 test/tarantool-c-tests/test.h

diff --git a/test/CMakeLists.txt b/test/CMakeLists.txt
index a8262b12..47296a22 100644
--- a/test/CMakeLists.txt
+++ b/test/CMakeLists.txt
@@ -48,12 +48,14 @@ separate_arguments(LUAJIT_TEST_COMMAND)
 add_subdirectory(LuaJIT-tests)
 add_subdirectory(PUC-Rio-Lua-5.1-tests)
 add_subdirectory(lua-Harness-tests)
+add_subdirectory(tarantool-c-tests)
 add_subdirectory(tarantool-tests)
 
 add_custom_target(${PROJECT_NAME}-test
   DEPENDS
     LuaJIT-tests
     PUC-Rio-Lua-5.1-tests
     lua-Harness-tests
+    tarantool-c-tests
     tarantool-tests
 )
diff --git a/test/tarantool-c-tests/CMakeLists.txt b/test/tarantool-c-tests/CMakeLists.txt
new file mode 100644
index 00000000..5ebea441
--- /dev/null
+++ b/test/tarantool-c-tests/CMakeLists.txt
@@ -0,0 +1,43 @@
+find_program(PROVE prove)
+if(NOT PROVE)
+  message(WARNING "`prove' is not found, so tarantool-c-tests target is not generated")
+  return()
+endif()
+
+set(C_TEST_SUFFIX .c_test)
+set(C_TEST_FLAGS --failures --shuffle)
+
+if(CMAKE_VERBOSE_MAKEFILE)
+  list(APPEND C_TEST_FLAGS --verbose)
+endif()
+
+# Build libtest.
+
+set(TEST_LIB_NAME "test")
+add_library(libtest STATIC EXCLUDE_FROM_ALL ${CMAKE_CURRENT_SOURCE_DIR}/test.c)
+target_include_directories(libtest PRIVATE ${CMAKE_CURRENT_SOURCE_DIR})
+set_target_properties(libtest PROPERTIES
+  COMPILE_FLAGS "-Wall -Wextra"
+  OUTPUT_NAME "${TEST_LIB_NAME}"
+  LIBRARY_OUTPUT_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}"
+)
+
+# XXX: For now, just build libtest. The tests depending on it
+# will be added in the next commit.
+add_custom_target(tarantool-c-tests
+  DEPENDS libluajit libtest
+)
+
+# XXX: For now, run 0 tests. Just verify that libtest was built.
+add_custom_command(TARGET tarantool-c-tests
+  COMMENT "Running Tarantool C tests"
+  COMMAND
+    ${PROVE}
+    ${CMAKE_CURRENT_BINARY_DIR}
+    --ext ${C_TEST_SUFFIX}
+    --jobs ${CMAKE_BUILD_PARALLEL_LEVEL}
+    ${C_TEST_FLAGS}
+  WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
+)
+
+# vim: ft=cmake expandtab shiftwidth=2: tabstop=2:
diff --git a/test/tarantool-c-tests/test.c b/test/tarantool-c-tests/test.c
new file mode 100644
index 00000000..dc63cf3f
--- /dev/null
+++ b/test/tarantool-c-tests/test.c
@@ -0,0 +1,251 @@
+#include "test.h"
+
+/*
+ * Test module, based on TAP 14 specification [1].
+ * [1]: https://testanything.org/tap-version-14-specification.html
+ */
+
+/* Needed for `PATH_MAX` in the diagnostic definition. */
+#include <limits.h>
+#include <setjmp.h>
+#include <stdarg.h>
+/* Needed for `strchr()` in diagnostic parsing. */
+#include <string.h>
+
+/*
+ * Test level: 0 for the parent test, >0 for any subtests.
+ */
+static int level = -1;
+
+/*
+ * The last diagnostic data to be used in the YAML Diagnostic
+ * block.
+ *
+ * Contains the filename, the line number, the failed expression
+ * or assert name, and the "got" and "expected" fields. All
+ * entries are separated by \n.
+ * The longest field here is the filename, so PATH_MAX * 3 as
+ * the diagnostic string length should be enough.
+ *
+ * The first \0 means the end of the diagnostic data.
+ *
+ * As far as `strchr()` searches until \0, all previous entries
+ * are suppressed by the last one. If the first byte is \0, the
+ * diagnostic is empty.
+ */
+#define TEST_DIAG_DATA_MAX (PATH_MAX * 3)
+char test_diag_buf[TEST_DIAG_DATA_MAX] = {0};
+
+const char *skip_reason = NULL;
+const char *todo_reason = NULL;
+
+/* Indent for the TAP output. 4 spaces is the default for a subtest. */
+static void indent(void)
+{
+	int i;
+	for (i = 0; i < level; i++)
+		printf("    ");
+}
+
+void test_message(const char *fmt, ...)
+{
+	va_list ap;
+	indent();
+	va_start(ap, fmt);
+	vprintf(fmt, ap);
+	printf("\n");
+	va_end(ap);
+}
+
+static void test_print_tap_version(void)
+{
+	/*
+	 * Since several TAP13 parsers in popular usage treat
+	 * a repeated Version declaration as an error, even if the
+	 * Version is indented, Subtests _should not_ include a
+	 * Version, if TAP13 Harness compatibility is
+	 * desirable [1].
+	 */
+	if (level == 0)
+		test_message("TAP version %d", TAP_VERSION);
+}
+
+static void test_start_comment(const char *t_name)
+{
+	if (level > -1)
+		/*
+		 * Inform about the started subtest; it is easier
+		 * for humans to read.
+		 * A Subtest with a name must be terminated by a
+		 * Test Point with a matching Description [1].
+		 */
+		test_comment("Subtest: %s", t_name);
+}
+
+void _test_print_skip_all(const char *group_name, const char *reason)
+{
+	test_start_comment(group_name);
+	/*
+	 * XXX: This test isn't started yet, so set the indent
+	 * level manually.
+	 */
+	level++;
+	test_print_tap_version();
+	/*
+	 * XXX: `SKIP_DIRECTIVE` is not necessary here according
+	 * to the TAP14 specification [1], but some harnesses may
+	 * fail to parse the output without it.
+	 */
+	test_message("1..0" SKIP_DIRECTIVE "%s", reason);
+	level--;
+}
+
+/* Just inform the TAP parser how many tests we want to run. */
+static void test_plan(size_t planned)
+{
+	test_message("1..%lu", planned);
+}
+
+/* Human-readable output of how many tests/subtests failed. */
+static void test_finish(size_t planned, size_t failed)
+{
+	const char *t_type = level == 0 ? "tests" : "subtests";
+	if (failed > 0)
+		test_comment("Looks like you failed %lu %s out of %lu",
+			     failed, t_type, planned);
+	fflush(stdout);
+}
+
+void test_set_skip_reason(const char *reason)
+{
+	skip_reason = reason;
+}
+
+void test_set_todo_reason(const char *reason)
+{
+	todo_reason = reason;
+}
+
+void test_save_diag_data(const char *fmt, ...)
+{
+	va_list ap;
+	va_start(ap, fmt);
+	vsnprintf(test_diag_buf, TEST_DIAG_DATA_MAX, fmt, ap);
+	va_end(ap);
+}
+
+static void test_clear_diag_data(void)
+{
+	/*
+	 * Terminate the buffer with a zero byte to show that
+	 * there is no entry.
+	 */
+	test_diag_buf[0] = '\0';
+}
+
+static int test_diagnostic_is_set(void)
+{
+	return test_diag_buf[0] != '\0';
+}
+
+/*
+ * Parse the last diagnostic data entry and print it in YAML
+ * format with the corresponding additional half-indent in TAP
+ * (2 spaces).
+ * Clear the diagnostic message to be sure that it's printed
+ * only once.
+ * XXX: \n separators are changed to \0 during parsing and
+ * printing the output for convenience in usage.
+ */
+static void test_diagnostic(void)
+{
+	test_message("  ---");
+	char *ent = test_diag_buf;
+	char *ent_end = NULL;
+	while ((ent_end = strchr(ent, '\n')) != NULL) {
+		char *next_ent = ent_end + 1;
+		/*
+		 * Terminate the string with the zero byte for
+		 * formatted output. Anyway, we don't need this \n
+		 * anymore.
+		 */
+		*ent_end = '\0';
+		test_message("  %s", ent);
+		ent = next_ent;
+	}
+	test_message("  ...");
+	test_clear_diag_data();
+}
+
+static jmp_buf test_run_env;
+
+TEST_NORET void _test_exit(int status)
+{
+	longjmp(test_run_env, status);
+}
+
+static int test_run(const struct test_unit *test, size_t test_number,
+		    void *test_state)
+{
+	int status = TEST_EXIT_SUCCESS;
+	/*
+	 * Run the unit test. The diagnostic in case of failure
+	 * is set up by the helper assert macros defined in the
+	 * header.
+	 */
+	int jmp_status;
+	if ((jmp_status = setjmp(test_run_env)) == 0) {
+		if (test->f(test_state) != TEST_EXIT_SUCCESS)
+			status = TEST_EXIT_FAILURE;
+	} else {
+		status = jmp_status - TEST_JMP_STATUS_SHIFT;
+	}
+	const char *result = status == TEST_EXIT_SUCCESS ? "ok" : "not ok";
+
+	/*
+	 * Format the suffix of the test message for the SKIP or
+	 * TODO directives.
+	 */
+#define SUFFIX_SZ 1024
+	char suffix[SUFFIX_SZ] = {0};
+	if (skip_reason) {
+		snprintf(suffix, SUFFIX_SZ, SKIP_DIRECTIVE "%s", skip_reason);
+		skip_reason = NULL;
+	} else if (todo_reason) {
+		/* Prevent counting this test as failed. */
+		status = TEST_EXIT_SUCCESS;
+		snprintf(suffix, SUFFIX_SZ, TODO_DIRECTIVE "%s", todo_reason);
+		todo_reason = NULL;
+	}
+#undef SUFFIX_SZ
+
+	test_message("%s %lu - %s%s", result, test_number, test->name,
+		     suffix);
+
+	if (status && test_diagnostic_is_set())
+		test_diagnostic();
+	return status;
+}
+
+int _test_run_group(const char *group_name, const struct test_unit *tests,
+		    size_t n_tests, void *test_state)
+{
+	test_start_comment(group_name);
+
+	level++;
+	test_print_tap_version();
+
+	test_plan(n_tests);
+
+	size_t n_failed = 0;
+
+	size_t i;
+	for (i = 0; i < n_tests; i++) {
+		size_t test_number = i + 1;
+		/* Return 1 on failure, 0 on success. */
+		n_failed += test_run(&tests[i], test_number, test_state);
+	}
+
+	test_finish(n_tests, n_failed);
+
+	level--;
+	return n_failed > 0 ? TEST_EXIT_FAILURE : TEST_EXIT_SUCCESS;
+}
diff --git a/test/tarantool-c-tests/test.h b/test/tarantool-c-tests/test.h
new file mode 100644
index 00000000..695c5b4d
--- /dev/null
+++ b/test/tarantool-c-tests/test.h
@@ -0,0 +1,251 @@
+#ifndef TEST_H
+#define TEST_H
+
+#include <stdio.h>
+#include <stdlib.h>
+
+/*
+ * Test module, based on TAP 14 specification [1].
+ * [1]: https://testanything.org/tap-version-14-specification.html
+ * Version 13 is set for better compatibility on old machines.
+ *
+ * TODO:
+ * * Helper assert macros:
+ *   - assert_uint_equal if needed
+ *   - assert_uint_not_equal if needed
+ *   - assert_str_equal if needed
+ *   - assert_str_not_equal if needed
+ *   - assert_memory_equal if needed
+ *   - assert_memory_not_equal if needed
+ * * Pragmas.
+ */
+
+#define TAP_VERSION 13
+
+#define TEST_EXIT_SUCCESS 0
+#define TEST_EXIT_FAILURE 1
+
+#define TEST_JMP_STATUS_SHIFT 2
+#define TEST_LJMP_EXIT_SUCCESS (TEST_EXIT_SUCCESS + TEST_JMP_STATUS_SHIFT)
+#define TEST_LJMP_EXIT_FAILURE (TEST_EXIT_FAILURE + TEST_JMP_STATUS_SHIFT)
+
+#define TEST_NORET __attribute__((noreturn))
+
+typedef int (*test_func)(void *test_state);
+struct test_unit {
+	const char *name;
+	test_func f;
+};
+
+/* Initialize the `test_unit` structure. */
+#define test_unit_new(f) {#f, f}
+
+#define lengthof(arr) (sizeof(arr) / sizeof((arr)[0]))
+
+/*
+ * __func__ is the name for a test group, "main" for the parent
+ * test.
+ */
+#define test_run_group(t_arr, t_state) \
+	_test_run_group(__func__, t_arr, lengthof(t_arr), t_state)
+
+#define SKIP_DIRECTIVE " # SKIP "
+#define TODO_DIRECTIVE " # TODO "
+
+/*
+ * XXX: May be implemented as well via
+ * `_test_run_group(__func__, NULL, 0, NULL)` and
+ * `test_set_skip_reason()` with additional changes in the
+ * former. But the current approach is easier to maintain, as we
+ * don't want to intermix different entities.
+ */
+#define skip_all(reason) do { \
+	_test_print_skip_all(__func__, reason); \
+	return TEST_EXIT_SUCCESS; \
+} while (0)
+
+#define skip(reason) do { \
+	test_set_skip_reason(reason); \
+	return TEST_EXIT_SUCCESS; \
+} while (0)
+
+#define todo(reason) do { \
+	test_set_todo_reason(reason); \
+	return TEST_EXIT_FAILURE; \
+} while (0)
+
+#define bail_out(reason) do { \
+	/* \
+	 * For backwards compatibility with TAP13 Harnesses, \
+	 * Producers _should_ emit a "Bail out!" line at the root \
+	 * indentation level whenever a Subtest bails out [1]. \
+	 */ \
+	printf("Bail out! %s\n", reason); \
+	exit(TEST_EXIT_FAILURE); \
+} while (0)
+
+/* `fmt` should always be a format string here. */
+#define test_comment(fmt, ...) test_message("# " fmt, __VA_ARGS__)
+
+/*
+ * This is a set of useful assert macros like the standard C
+ * library's assert(3) macro.
+ *
+ * On an assertion failure an assert macro saves the diagnostic
+ * to the special buffer, to be reported via the YAML Diagnostic
+ * block, and finishes the test via `longjmp()`, so the test is
+ * considered failed.
+ *
+ * Due to limitations of the C language, the `assert_true()` and
+ * `assert_false()` macros can only display the expression that
+ * caused the assertion failure. Type-specific assert macros,
+ * `assert_{type}_equal()` and `assert_{type}_not_equal()`, save
+ * the data that caused the assertion failure, which increases
+ * data visibility, aiding debugging of failing test cases.
+ */
+
+#define LOCATION_FMT "location:\t%s:%d\n"
+#define ASSERT_NAME_FMT(name) "failed_assertion:\t" #name "\n"
+
+#define assert_true(cond) do { \
+	if (!(cond)) { \
+		test_save_diag_data(LOCATION_FMT \
+				    "condition_failed:\t'" #cond "'\n", \
+				    __FILE__, __LINE__); \
+		_test_exit(TEST_LJMP_EXIT_FAILURE); \
+	} \
+} while (0)
+
+#define assert_false(cond) assert_true(!(cond))
+
+#define assert_ptr_equal(got, expected) do { \
+	if ((got) != (expected)) { \
+		test_save_diag_data( \
+			LOCATION_FMT \
+			ASSERT_NAME_FMT(assert_ptr_equal) \
+			"got: %p\n" \
+			"expected: %p\n", \
+			__FILE__, __LINE__, (got), (expected) \
+		); \
+		_test_exit(TEST_LJMP_EXIT_FAILURE); \
+	} \
+} while (0)
+
+#define assert_ptr_not_equal(got, unexpected) do { \
+	if ((got) == (unexpected)) { \
+		test_save_diag_data( \
+			LOCATION_FMT \
+			ASSERT_NAME_FMT(assert_ptr_not_equal) \
+			"got: %p\n" \
+			"unexpected: %p\n", \
+			__FILE__, __LINE__, (got), (unexpected) \
+		); \
+		_test_exit(TEST_LJMP_EXIT_FAILURE); \
+	} \
+} while (0)
+
+#define assert_int_equal(got, expected) do { \
+	if ((got) != (expected)) { \
+		test_save_diag_data( \
+			LOCATION_FMT \
+			ASSERT_NAME_FMT(assert_int_equal) \
+			"got: %d\n" \
+			"expected: %d\n", \
+			__FILE__, __LINE__, (got), (expected) \
+		); \
+		_test_exit(TEST_LJMP_EXIT_FAILURE); \
+	} \
+} while (0)
+
+#define assert_int_not_equal(got, unexpected) do { \
+	if ((got) == (unexpected)) { \
+		test_save_diag_data( \
+			LOCATION_FMT \
+			ASSERT_NAME_FMT(assert_int_not_equal) \
+			"got: %d\n" \
+			"unexpected: %d\n", \
+			__FILE__, __LINE__, (got), (unexpected) \
+		); \
+		_test_exit(TEST_LJMP_EXIT_FAILURE); \
+	} \
+} while (0)
+
+#define assert_sizet_equal(got, expected) do { \
+	if ((got) != (expected)) { \
+		test_save_diag_data( \
+			LOCATION_FMT \
+			ASSERT_NAME_FMT(assert_sizet_equal) \
+			"got: %lu\n" \
+			"expected: %lu\n", \
+			__FILE__, __LINE__, (got), (expected) \
+		); \
+		_test_exit(TEST_LJMP_EXIT_FAILURE); \
+	} \
+} while (0)
+
+#define assert_sizet_not_equal(got, unexpected) do { \
+	if ((got) == (unexpected)) { \
+		test_save_diag_data( \
+			LOCATION_FMT \
+			ASSERT_NAME_FMT(assert_sizet_not_equal) \
+			"got: %lu\n" \
+			"unexpected: %lu\n", \
+			__FILE__, __LINE__, (got), (unexpected) \
+		); \
+		_test_exit(TEST_LJMP_EXIT_FAILURE); \
+	} \
+} while (0)
+
+/* Check that doubles are __exactly__ the same. */
+#define assert_double_equal(got, expected) do { \
+	if ((got) != (expected)) { \
+		test_save_diag_data( \
+			LOCATION_FMT \
+			ASSERT_NAME_FMT(assert_double_equal) \
+			"got: %lf\n" \
+			"expected: %lf\n", \
+			__FILE__, __LINE__, (got), (expected) \
+		); \
+		_test_exit(TEST_LJMP_EXIT_FAILURE); \
+	} \
+} while (0)
+
+/* Check that doubles are not __exactly__ the same. */
+#define assert_double_not_equal(got, unexpected) do { \
+	if ((got) == (unexpected)) { \
+		test_save_diag_data( \
+			LOCATION_FMT \
+			ASSERT_NAME_FMT(assert_double_not_equal) \
+			"got: %lf\n" \
+			"unexpected: %lf\n", \
+			__FILE__, __LINE__, (got), (unexpected) \
+		); \
+		_test_exit(TEST_LJMP_EXIT_FAILURE); \
+	} \
+} while (0)
+
+/* API declaration. */
+
+/*
+ * Print a formatted message with the corresponding indent.
+ * If you want to leave a comment, use `test_comment()` instead.
+ */
+void test_message(const char *fmt, ...);
+
+/* Needed for `skip_all()`; please don't use it directly. */
+void _test_print_skip_all(const char *group_name, const char *reason);
+/* Ends the test via `longjmp()`; please don't use it directly. */
+TEST_NORET void _test_exit(int status);
+
+void test_set_skip_reason(const char *reason);
+void test_set_todo_reason(const char *reason);
+/*
+ * Save formatted diagnostic data. Each entry is separated by \n.
+ */
+void test_save_diag_data(const char *fmt, ...);
+
+/* Internal, it is better to use `test_run_group()` instead. */
+int _test_run_group(const char *group_name, const struct test_unit *tests,
+		    size_t n_tests, void *test_state);
+
+#endif /* TEST_H */
-- 
2.34.1