[Tarantool-patches] [PATCH v1 luajit 2/5] test: introduce module for C tests

Sergey Kaplun skaplun at tarantool.org
Wed Mar 15 19:11:02 MSK 2023


We need an instrument for writing tests in plain C for LuaJIT, to be
able to:
* easily test the Lua C API
* test patches without using plain Lua
* write unit tests
* start LuaJIT with a custom memory allocator to test some GC issues
* maybe, in the future, use a custom hashing function to test the
  behavior of LuaJIT tables
and so on.

The <test.c> module serves to achieve these goals without overly fancy
features.

Its functionality is inspired by the cmocka API [1], but only the
TAP14 [2] protocol is supported (the TAP version is set to 13 to be
compatible with old TAP13 harnesses).

The group of unit tests is declared like the following:

| int main(void)
| {
| 	void *t_state = NULL;
| 	const struct test_unit tgroup[] = {
| 		test_unit_new(test_base),
| 		test_unit_new(test_subtest),
| 	};
| 	return test_run_group(tgroup, t_state);
| }

`test_run_group()` runs the whole group of tests and returns
`TEST_EXIT_SUCCESS` or `TEST_EXIT_FAILURE`.

If a similar group is declared inside a unit test, this group is
considered a subtest.

This library provides an API similar to the glibc assert(3) macro for
use inside unit tests. `assert_[true,false]()` are useful for condition
checks and `assert_{type}_[not_]equal()` are useful for value
comparisons. If an assertion fails, a diagnostic is set, the whole test
is considered failed and is finished via `longjmp()`, so these
assertions can be used inside custom subroutines.

Also, this module provides the ability to skip one test or all tests,
mark a test as TODO, or bail out of all tests. The `skip()`,
`skip_all()` and `todo()` macros are implemented via an early return,
so they can be used only in the test body, which makes skipping clear.
`skip_all()` may be used both for the parent test and for a subtest.

As a part of this commit, the tarantool-c-tests directory is created
with the corresponding CMakeLists.txt file to build this test library.
The following tests will be rewritten in C with this library in the
next commit and placed as unit tests:
* misclib-getmetrics-capi.test.lua
* misclib-sysprof-capi.test.lua

For now, the tarantool-c-tests target just builds the test library;
there are no new tests to run yet.

[1]: https://github.com/clibs/cmocka
[2]: https://testanything.org/tap-version-14-specification.html

Part of tarantool/tarantool#7900
---

I left some notes about this test module and I'll be happy to read your
thoughts about them.

* Should we cast to `(void *)` in `assert_ptr_[not_]equal()`? Or would
  it be better to warn the user about bad type comparisons?
* How often should we flush stdout?
* Obviously, we can use `_test_run_group(__func__, NULL, 0, NULL)` with
  `test_set_skip_reason()` set to implement the `skip_all()`
  functionality. Nevertheless, I decided to reimplement its logic
  separately, so that it is easier to maintain in the future.


 test/CMakeLists.txt                   |   2 +
 test/tarantool-c-tests/CMakeLists.txt |  43 +++++
 test/tarantool-c-tests/test.c         | 251 ++++++++++++++++++++++++++
 test/tarantool-c-tests/test.h         | 251 ++++++++++++++++++++++++++
 4 files changed, 547 insertions(+)
 create mode 100644 test/tarantool-c-tests/CMakeLists.txt
 create mode 100644 test/tarantool-c-tests/test.c
 create mode 100644 test/tarantool-c-tests/test.h

diff --git a/test/CMakeLists.txt b/test/CMakeLists.txt
index a8262b12..47296a22 100644
--- a/test/CMakeLists.txt
+++ b/test/CMakeLists.txt
@@ -48,12 +48,14 @@ separate_arguments(LUAJIT_TEST_COMMAND)
 add_subdirectory(LuaJIT-tests)
 add_subdirectory(PUC-Rio-Lua-5.1-tests)
 add_subdirectory(lua-Harness-tests)
+add_subdirectory(tarantool-c-tests)
 add_subdirectory(tarantool-tests)
 
 add_custom_target(${PROJECT_NAME}-test DEPENDS
   LuaJIT-tests
   PUC-Rio-Lua-5.1-tests
   lua-Harness-tests
+  tarantool-c-tests
   tarantool-tests
 )
 
diff --git a/test/tarantool-c-tests/CMakeLists.txt b/test/tarantool-c-tests/CMakeLists.txt
new file mode 100644
index 00000000..5ebea441
--- /dev/null
+++ b/test/tarantool-c-tests/CMakeLists.txt
@@ -0,0 +1,43 @@
+find_program(PROVE prove)
+if(NOT PROVE)
+  message(WARNING "`prove' is not found, so tarantool-c-tests target is not generated")
+  return()
+endif()
+
+set(C_TEST_SUFFIX .c_test)
+set(C_TEST_FLAGS --failures --shuffle)
+
+if(CMAKE_VERBOSE_MAKEFILE)
+  list(APPEND C_TEST_FLAGS --verbose)
+endif()
+
+# Build libtest.
+
+set(TEST_LIB_NAME "test")
+add_library(libtest STATIC EXCLUDE_FROM_ALL ${CMAKE_CURRENT_SOURCE_DIR}/test.c)
+target_include_directories(libtest PRIVATE ${CMAKE_CURRENT_SOURCE_DIR})
+set_target_properties(libtest PROPERTIES
+  COMPILE_FLAGS "-Wall -Wextra"
+  OUTPUT_NAME "${TEST_LIB_NAME}"
+  LIBRARY_OUTPUT_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}"
+)
+
+# XXX: For now, just build libtest. The tests depending on it
+# will be added in the next commit.
+add_custom_target(tarantool-c-tests
+  DEPENDS libluajit libtest
+)
+
+# XXX: For now, run 0 tests. Just verify that libtest was built.
+add_custom_command(TARGET tarantool-c-tests
+  COMMENT "Running Tarantool C tests"
+  COMMAND
+  ${PROVE}
+    ${CMAKE_CURRENT_BINARY_DIR}
+    --ext ${C_TEST_SUFFIX}
+    --jobs ${CMAKE_BUILD_PARALLEL_LEVEL}
+    ${C_TEST_FLAGS}
+  WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
+)
+
+# vim: ft=cmake expandtab shiftwidth=2: tabstop=2:
diff --git a/test/tarantool-c-tests/test.c b/test/tarantool-c-tests/test.c
new file mode 100644
index 00000000..dc63cf3f
--- /dev/null
+++ b/test/tarantool-c-tests/test.c
@@ -0,0 +1,251 @@
+#include "test.h"
+
+/*
+ * Test module, based on TAP 14 specification [1].
+ * [1]: https://testanything.org/tap-version-14-specification.html
+ */
+
+/* Needed for `PATH_MAX` in the diagnostic definition. */
+#include <limits.h>
+#include <setjmp.h>
+#include <stdarg.h>
+/* Needed for `strchr()` in diagnostic parsing. */
+#include <string.h>
+
+/*
+ * Test level: 0 for the parent test, >0 for any subtests.
+ */
+static int level = -1;
+
+/*
+ * The last diagnostic data to be used in the YAML Diagnostic
+ * block.
+ *
+ * Contains the filename, line number, failed expression or assert
+ * name, and the "got" and "expected" fields. All entries are
+ * separated by \n.
+ * The longest field here is the filename, so PATH_MAX * 3 should
+ * be enough for the diagnostic string length.
+ *
+ * The first \0 means the end of the diagnostic data.
+ *
+ * Since `strchr()` searches up to the first \0, all previous
+ * entries are overridden by the last one. If the first byte is
+ * \0, the diagnostic is empty.
+ */
+#define TEST_DIAG_DATA_MAX (PATH_MAX * 3)
+char test_diag_buf[TEST_DIAG_DATA_MAX] = {0};
+
+const char *skip_reason = NULL;
+const char *todo_reason = NULL;
+
+/* Indentation for TAP output. 4 spaces is the default for a subtest. */
+static void indent(void)
+{
+	int i;
+	for (i = 0; i < level; i++)
+		printf("    ");
+}
+
+void test_message(const char *fmt, ...)
+{
+	va_list ap;
+	indent();
+	va_start(ap, fmt);
+	vprintf(fmt, ap);
+	printf("\n");
+	va_end(ap);
+}
+
+static void test_print_tap_version(void)
+{
+	/*
+	 * Since several TAP13 parsers in popular usage treat
+	 * a repeated Version declaration as an error, even if the
+	 * Version is indented, Subtests _should not_ include a
+	 * Version, if TAP13 Harness compatibility is
+	 * desirable [1].
+	 */
+	if (level == 0)
+		test_message("TAP version %d", TAP_VERSION);
+}
+
+static void test_start_comment(const char *t_name)
+{
+	if (level > -1)
+		/*
+		 * Inform about the started subtest; it is easier
+		 * for humans to read.
+		 * A subtest with a name must be terminated by a
+		 * Test Point with a matching Description [1].
+		 */
+		test_comment("Subtest: %s", t_name);
+}
+
+void _test_print_skip_all(const char *group_name, const char *reason)
+{
+	test_start_comment(group_name);
+	/*
+	 * XXX: This test isn't started yet, so set indent level
+	 * manually.
+	 */
+	level++;
+	test_print_tap_version();
+	/*
+	 * XXX: `SKIP_DIRECTIVE` is not necessary here according
+	 * to the TAP14 specification [1], but some harnesses may
+	 * fail to parse the output without it.
+	 */
+	test_message("1..0" SKIP_DIRECTIVE "%s", reason);
+	level--;
+}
+
+/* Just inform the TAP parser how many tests we want to run. */
+static void test_plan(size_t planned)
+{
+	test_message("1..%lu", planned);
+}
+
+/* Human-readable output of how many tests/subtests failed. */
+static void test_finish(size_t planned, size_t failed)
+{
+	const char *t_type = level == 0 ? "tests" : "subtests";
+	if (failed > 0)
+		test_comment("Looks like you failed %lu %s out of %lu",
+		     failed, t_type, planned);
+	fflush(stdout);
+}
+
+void test_set_skip_reason(const char *reason)
+{
+	skip_reason = reason;
+}
+
+void test_set_todo_reason(const char *reason)
+{
+	todo_reason = reason;
+}
+
+void test_save_diag_data(const char *fmt, ...)
+{
+	va_list ap;
+	va_start(ap, fmt);
+	vsnprintf(test_diag_buf, TEST_DIAG_DATA_MAX, fmt, ap);
+	va_end(ap);
+}
+
+static void test_clear_diag_data(void)
+{
+	/*
+	 * Terminate the buffer with a zero byte to show that
+	 * there are no entries.
+	 */
+	test_diag_buf[0] = '\0';
+}
+
+static int test_diagnostic_is_set(void)
+{
+	return test_diag_buf[0] != '\0';
+}
+
+/*
+ * Parse the last diagnostic data entry and print it in YAML
+ * format with the corresponding additional half-indent in TAP
+ * (2 spaces).
+ * Clear the diagnostic message to be sure it is printed only
+ * once.
+ * XXX: \n separators are changed to \0 during parsing and
+ * printing, for convenience.
+ */
+static void test_diagnostic(void)
+{
+	test_message("  ---");
+	char *ent = test_diag_buf;
+	char *ent_end = NULL;
+	while ((ent_end = strchr(ent, '\n')) != NULL) {
+		char *next_ent = ent_end + 1;
+		/*
+		 * Terminate the string with the zero byte for
+		 * formatted output. Anyway, we don't need this
+		 * \n anymore.
+		 */
+		*ent_end = '\0';
+		test_message("  %s", ent);
+		ent = next_ent;
+	}
+	test_message("  ...");
+	test_clear_diag_data();
+}
+
+static jmp_buf test_run_env;
+
+TEST_NORET void _test_exit(int status)
+{
+	longjmp(test_run_env, status);
+}
+
+static int test_run(const struct test_unit *test, size_t test_number,
+		    void *test_state)
+{
+	int status = TEST_EXIT_SUCCESS;
+	/*
+	 * Run the unit test. The diagnostic in case of failure is
+	 * set up by the helper assert macros defined in the
+	 * header.
+	 */
+	int jmp_status;
+	if ((jmp_status = setjmp(test_run_env)) == 0) {
+		if (test->f(test_state) != TEST_EXIT_SUCCESS)
+			status = TEST_EXIT_FAILURE;
+	} else {
+		status = jmp_status - TEST_JMP_STATUS_SHIFT;
+	}
+	const char *result = status == TEST_EXIT_SUCCESS ? "ok" : "not ok";
+
+	/*
+	 * Format suffix of the test message for SKIP or TODO
+	 * directives.
+	 */
+#define SUFFIX_SZ 1024
+	char suffix[SUFFIX_SZ] = {0};
+	if (skip_reason) {
+		snprintf(suffix, SUFFIX_SZ, SKIP_DIRECTIVE "%s", skip_reason);
+		skip_reason = NULL;
+	} else if (todo_reason) {
+		/* Prevent counting this test as failed. */
+		status = TEST_EXIT_SUCCESS;
+		snprintf(suffix, SUFFIX_SZ, TODO_DIRECTIVE "%s", todo_reason);
+		todo_reason = NULL;
+	}
+#undef SUFFIX_SZ
+
+	test_message("%s %lu - %s%s", result, test_number, test->name,
+		     suffix);
+
+	if (status && test_diagnostic_is_set())
+		test_diagnostic();
+	return status;
+}
+
+int _test_run_group(const char *group_name, const struct test_unit *tests,
+		    size_t n_tests, void *test_state)
+{
+	test_start_comment(group_name);
+
+	level++;
+	test_print_tap_version();
+
+	test_plan(n_tests);
+
+	size_t n_failed = 0;
+
+	size_t i;
+	for (i = 0; i < n_tests; i++) {
+		size_t test_number = i + 1;
+		/* Return 1 on failure, 0 on success. */
+		n_failed += test_run(&tests[i], test_number, test_state);
+	}
+
+	test_finish(n_tests, n_failed);
+
+	level--;
+	return n_failed > 0 ? TEST_EXIT_FAILURE : TEST_EXIT_SUCCESS;
+}
diff --git a/test/tarantool-c-tests/test.h b/test/tarantool-c-tests/test.h
new file mode 100644
index 00000000..695c5b4d
--- /dev/null
+++ b/test/tarantool-c-tests/test.h
@@ -0,0 +1,251 @@
+#ifndef TEST_H
+#define TEST_H
+
+#include <stdio.h>
+#include <stdlib.h>
+
+/*
+ * Test module, based on TAP 14 specification [1].
+ * [1]: https://testanything.org/tap-version-14-specification.html
+ * Version 13 is set for better compatibility with old harnesses.
+ *
+ * TODO:
+ * * Helper assert macros:
+ *   - assert_uint_equal if needed
+ *   - assert_uint_not_equal if needed
+ *   - assert_str_equal if needed
+ *   - assert_str_not_equal if needed
+ *   - assert_memory_equal if needed
+ *   - assert_memory_not_equal if needed
+ * * Pragmas.
+ */
+
+#define TAP_VERSION 13
+
+#define TEST_EXIT_SUCCESS 0
+#define TEST_EXIT_FAILURE 1
+
+#define TEST_JMP_STATUS_SHIFT 2
+#define TEST_LJMP_EXIT_SUCCESS (TEST_EXIT_SUCCESS + TEST_JMP_STATUS_SHIFT)
+#define TEST_LJMP_EXIT_FAILURE (TEST_EXIT_FAILURE + TEST_JMP_STATUS_SHIFT)
+
+#define TEST_NORET __attribute__((noreturn))
+
+typedef int (*test_func)(void *test_state);
+struct test_unit {
+	const char *name;
+	test_func f;
+};
+
+/* Initialize `test_unit` structure. */
+#define test_unit_new(f) {#f, f}
+
+#define lengthof(arr) (sizeof(arr) / sizeof((arr)[0]))
+
+/*
+ * __func__ is used as the name of a test group; it is "main"
+ * for the parent test.
+ */
+#define test_run_group(t_arr, t_state) \
+	_test_run_group(__func__, t_arr, lengthof(t_arr), t_state)
+
+#define SKIP_DIRECTIVE " # SKIP "
+#define TODO_DIRECTIVE " # TODO "
+
+/*
+ * XXX: May be implemented as well via
+ * `_test_run_group(__func__, NULL, 0, NULL)` and
+ * `test_set_skip_reason()` with additional changes in the former.
+ * But the current approach is easier to maintain, since we don't
+ * want to mix different entities.
+ */
+#define skip_all(reason) do {						\
+	_test_print_skip_all(__func__, reason);				\
+	return TEST_EXIT_SUCCESS;					\
+} while (0)
+
+#define skip(reason) do {						\
+	test_set_skip_reason(reason);					\
+	return TEST_EXIT_SUCCESS;					\
+} while (0)
+
+#define todo(reason) do {						\
+	test_set_todo_reason(reason);					\
+	return TEST_EXIT_FAILURE;					\
+} while (0)
+
+#define bail_out(reason) do {						\
+	/*								\
+	 * For backwards compatibility with TAP13 Harnesses,		\
+	 * Producers _should_ emit a "Bail out!" line at the root	\
+	 * indentation level whenever a Subtest bails out [1].		\
+	 */								\
+	printf("Bail out! %s\n", reason);				\
+	exit(TEST_EXIT_FAILURE);					\
+} while (0)
+
+/* `fmt` should always be a format string here. */
+#define test_comment(fmt, ...) test_message("# " fmt, __VA_ARGS__)
+
+/*
+ * This is a set of useful assert macros like the standard C
+ * library's assert(3) macro.
+ *
+ * On an assertion failure an assert macro will save the
+ * diagnostic to the special buffer, to be reported via YAML
+ * Diagnostic block and finish a test function with
+ * `return TEST_EXIT_FAILURE`.
+ *
+ * Due to limitations of the C language `assert_true()` and
+ * `assert_false()` macros can only display the expression that
+ * caused the assertion failure. Type specific assert macros,
+ * `assert_{type}_equal()` and `assert_{type}_not_equal()`, save
+ * the data that caused the assertion failure which increases data
+ * visibility aiding debugging of failing test cases.
+ */
+
+#define LOCATION_FMT "location:\t%s:%d\n"
+#define ASSERT_NAME_FMT(name) "failed_assertion:\t" #name "\n"
+
+#define assert_true(cond) do {						\
+	if (!(cond)) {							\
+		test_save_diag_data(LOCATION_FMT			\
+				    "condition_failed:\t'" #cond "'\n",	\
+				    __FILE__, __LINE__);		\
+		_test_exit(TEST_LJMP_EXIT_FAILURE);			\
+	}								\
+} while (0)
+
+#define assert_false(cond) assert_true(!(cond))
+
+#define assert_ptr_equal(got, expected) do {				\
+	if ((got) != (expected)) {					\
+		test_save_diag_data(					\
+			LOCATION_FMT					\
+			ASSERT_NAME_FMT(assert_ptr_equal)		\
+			"got: %p\n"					\
+			"expected: %p\n",				\
+			__FILE__, __LINE__, (got), (expected)		\
+		);							\
+		_test_exit(TEST_LJMP_EXIT_FAILURE);			\
+	}								\
+} while (0)
+
+#define assert_ptr_not_equal(got, unexpected) do {			\
+	if ((got) == (unexpected)) {					\
+		test_save_diag_data(					\
+			LOCATION_FMT					\
+			ASSERT_NAME_FMT(assert_ptr_not_equal)		\
+			"got: %p\n"					\
+			"unexpected: %p\n",				\
+			__FILE__, __LINE__, (got), (unexpected)		\
+		);							\
+		_test_exit(TEST_LJMP_EXIT_FAILURE);			\
+	}								\
+} while (0)
+
+#define assert_int_equal(got, expected) do {				\
+	if ((got) != (expected)) {					\
+		test_save_diag_data(					\
+			LOCATION_FMT					\
+			ASSERT_NAME_FMT(assert_int_equal)		\
+			"got: %d\n"					\
+			"expected: %d\n",				\
+			__FILE__, __LINE__, (got), (expected)		\
+		);							\
+		_test_exit(TEST_LJMP_EXIT_FAILURE);			\
+	}								\
+} while (0)
+
+#define assert_int_not_equal(got, unexpected) do {			\
+	if ((got) == (unexpected)) {					\
+		test_save_diag_data(					\
+			LOCATION_FMT					\
+			ASSERT_NAME_FMT(assert_int_not_equal)		\
+			"got: %d\n"					\
+			"unexpected: %d\n",				\
+			__FILE__, __LINE__, (got), (unexpected)		\
+		);							\
+		_test_exit(TEST_LJMP_EXIT_FAILURE);			\
+	}								\
+} while (0)
+
+#define assert_sizet_equal(got, expected) do {				\
+	if ((got) != (expected)) {					\
+		test_save_diag_data(					\
+			LOCATION_FMT					\
+			ASSERT_NAME_FMT(assert_sizet_equal)		\
+			"got: %lu\n"					\
+			"expected: %lu\n",				\
+			__FILE__, __LINE__, (got), (expected)		\
+		);							\
+		_test_exit(TEST_LJMP_EXIT_FAILURE);			\
+	}								\
+} while (0)
+
+#define assert_sizet_not_equal(got, unexpected) do {			\
+	if ((got) == (unexpected)) {					\
+		test_save_diag_data(					\
+			LOCATION_FMT					\
+			ASSERT_NAME_FMT(assert_sizet_not_equal)		\
+			"got: %lu\n"					\
+			"unexpected: %lu\n",				\
+			__FILE__, __LINE__, (got), (unexpected)		\
+		);							\
+		_test_exit(TEST_LJMP_EXIT_FAILURE);			\
+	}								\
+} while (0)
+
+/* Check that doubles are __exactly__ the same. */
+#define assert_double_equal(got, expected) do {				\
+	if ((got) != (expected)) {					\
+		test_save_diag_data(					\
+			LOCATION_FMT					\
+			ASSERT_NAME_FMT(assert_double_equal)		\
+			"got: %lf\n"					\
+			"expected: %lf\n",				\
+			__FILE__, __LINE__, (got), (expected)		\
+		);							\
+		_test_exit(TEST_LJMP_EXIT_FAILURE);			\
+	}								\
+} while (0)
+
+/* Check that doubles are not __exactly__ the same. */
+#define assert_double_not_equal(got, unexpected) do {			\
+	if ((got) == (unexpected)) {					\
+		test_save_diag_data(					\
+			LOCATION_FMT					\
+			ASSERT_NAME_FMT(assert_double_not_equal)	\
+			"got: %lf\n"					\
+			"unexpected: %lf\n",				\
+			__FILE__, __LINE__, (got), (unexpected)		\
+		);							\
+		_test_exit(TEST_LJMP_EXIT_FAILURE);			\
+	}								\
+} while (0)
+
+/* API declaration. */
+
+/*
+ * Print formatted message with the corresponding indent.
+ * If you want to leave a comment, use `test_comment()` instead.
+ */
+void test_message(const char *fmt, ...);
+
+/* Needed for `skip_all()`; please don't use it directly. */
+void _test_print_skip_all(const char *group_name, const char *reason);
+/* Ends a test via `longjmp()`; please don't use it directly. */
+TEST_NORET void _test_exit(int status);
+
+void test_set_skip_reason(const char *reason);
+void test_set_todo_reason(const char *reason);
+/*
+ * Save formatted diagnostic data. Entries are separated with \n.
+ */
+void test_save_diag_data(const char *fmt, ...);
+
+/* Internal, it is better to use `test_run_group()` instead. */
+int _test_run_group(const char *group_name, const struct test_unit *tests,
+		    size_t n_tests, void *test_state);
+
+#endif /* TEST_H */
-- 
2.34.1


