* [Tarantool-patches] [PATCH 0/3] Add clang format
@ 2020-10-07 13:24 Kirill Yukhin
2020-10-07 13:24 ` [Tarantool-patches] [PATCH 1/3] clang-format: guard various declarations Kirill Yukhin
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Kirill Yukhin @ 2020-10-07 13:24 UTC (permalink / raw)
To: tarantool-patches
Hello,

This set of 3 patches applies custom clang-format rules
which come more or less close to our coding guidelines.
The first patch guards some artistically aligned places in
constant declarations. The second adds a custom .clang-format
file. The third was derived by applying it:
/src/box$ find . -iname \*.h -o -iname \*.c -o -iname \*.cc |grep -v sql |xargs clang-format -i
NB: I was experimenting with clang v11, but I hope this will
also work with clang 8.
Documentation for clang-format style options is here [1].
It'd be great to hear your input, e.g. improvements to the
clang-format file.
If we agree, the next steps would be:
1. Add a cmake rule which will allow running the formatter
against the source tree.
2. Add a new CI job which will run the formatter on each commit.
3. Add formatting for the SQL subsystem.
4. Extend the formatter beyond the box/ folder (questionable).
[1] - https://clang.llvm.org/docs/ClangFormatStyleOptions.html
Issue: https://github.com/tarantool/tarantool/issues/4297
Branch: https://github.com/tarantool/tarantool/tree/kyukhin/gh-4297-clang-format
Kirill Yukhin (3):
clang-format: guard various declarations
Add .clang-format for src/box/
Apply clang-format
src/box/.clang-format | 125 ++++
src/box/alter.cc | 1565 ++++++++++++++++++++---------------------
src/box/applier.cc | 140 ++--
src/box/applier.h | 36 +-
src/box/authentication.cc | 8 +-
src/box/authentication.h | 1 -
src/box/bind.c | 13 +-
src/box/bind.h | 3 +-
src/box/blackhole.c | 15 +-
src/box/box.cc | 192 +++--
src/box/box.h | 104 ++-
src/box/call.c | 9 +-
src/box/checkpoint_schedule.c | 4 +-
src/box/checkpoint_schedule.h | 4 +-
src/box/ck_constraint.c | 21 +-
src/box/ck_constraint.h | 4 +-
src/box/coll_id.c | 2 +-
src/box/coll_id_cache.c | 9 +-
src/box/coll_id_def.c | 17 +-
src/box/column_mask.h | 8 +-
src/box/constraint_id.c | 8 +-
src/box/engine.c | 56 +-
src/box/engine.h | 84 ++-
src/box/errcode.c | 11 +-
src/box/errcode.h | 4 +-
src/box/error.cc | 83 ++-
src/box/error.h | 86 +--
src/box/execute.c | 98 ++-
src/box/field_def.c | 28 +-
src/box/field_def.h | 2 +-
src/box/field_map.c | 17 +-
src/box/field_map.h | 12 +-
src/box/fk_constraint.h | 5 +-
src/box/func.c | 58 +-
src/box/func.h | 3 +-
src/box/func_def.c | 25 +-
src/box/func_def.h | 4 +-
src/box/gc.c | 38 +-
src/box/identifier.c | 6 +-
src/box/index.cc | 118 ++--
src/box/index.h | 135 ++--
src/box/index_def.c | 46 +-
src/box/index_def.h | 22 +-
src/box/iproto.cc | 220 +++---
src/box/iproto_constants.c | 4 +
src/box/iproto_constants.h | 26 +-
src/box/iterator_type.c | 3 +-
src/box/iterator_type.h | 29 +-
src/box/journal.c | 6 +-
src/box/journal.h | 23 +-
src/box/key_def.c | 114 ++-
src/box/key_def.h | 62 +-
src/box/key_list.c | 8 +-
src/box/key_list.h | 4 +-
src/box/lua/call.c | 104 ++-
src/box/lua/call.h | 8 +-
src/box/lua/cfg.cc | 92 ++-
src/box/lua/console.c | 112 +--
src/box/lua/ctl.c | 14 +-
src/box/lua/error.cc | 35 +-
src/box/lua/execute.c | 55 +-
src/box/lua/index.c | 75 +-
src/box/lua/info.c | 85 +--
src/box/lua/init.c | 110 ++-
src/box/lua/key_def.c | 52 +-
src/box/lua/merger.c | 178 +++--
src/box/lua/misc.cc | 41 +-
src/box/lua/net_box.c | 95 ++-
src/box/lua/sequence.c | 12 +-
src/box/lua/serialize_lua.c | 197 +++---
src/box/lua/session.c | 68 +-
src/box/lua/slab.c | 17 +-
src/box/lua/slab.h | 3 +-
src/box/lua/space.cc | 81 +--
src/box/lua/stat.c | 36 +-
src/box/lua/stat.h | 3 +-
src/box/lua/tuple.c | 102 ++-
src/box/lua/xlog.c | 38 +-
src/box/memtx_bitset.c | 70 +-
src/box/memtx_engine.c | 109 ++-
src/box/memtx_engine.h | 18 +-
src/box/memtx_hash.c | 142 ++--
src/box/memtx_rtree.c | 59 +-
src/box/memtx_space.c | 188 +++--
src/box/memtx_space.h | 4 +-
src/box/memtx_tree.c | 274 ++++----
src/box/memtx_tx.c | 141 ++--
src/box/memtx_tx.h | 4 +-
src/box/merger.c | 9 +-
src/box/mp_error.cc | 80 +--
src/box/msgpack.c | 4 +-
src/box/opt_def.c | 10 +-
src/box/opt_def.h | 72 +-
src/box/port.h | 6 +-
src/box/raft.c | 31 +-
src/box/recovery.cc | 40 +-
src/box/recovery.h | 2 +-
src/box/relay.cc | 71 +-
src/box/relay.h | 2 +-
src/box/replication.cc | 121 ++--
src/box/replication.h | 11 +-
src/box/request.c | 14 +-
src/box/schema.cc | 158 ++---
src/box/schema.h | 17 +-
src/box/schema_def.c | 6 +-
src/box/schema_def.h | 6 +-
src/box/sequence.c | 55 +-
src/box/service_engine.c | 10 +-
src/box/session.cc | 38 +-
src/box/session.h | 9 +-
src/box/session_settings.c | 34 +-
src/box/space.c | 86 +--
src/box/space.h | 87 ++-
src/box/space_def.c | 29 +-
src/box/space_def.h | 16 +-
src/box/sysview.c | 53 +-
src/box/tuple.c | 105 +--
src/box/tuple.h | 32 +-
src/box/tuple_bloom.c | 37 +-
src/box/tuple_bloom.h | 5 +-
src/box/tuple_compare.cc | 398 +++++------
src/box/tuple_convert.c | 28 +-
src/box/tuple_dictionary.c | 37 +-
src/box/tuple_extract_key.cc | 94 +--
src/box/tuple_format.c | 216 +++---
src/box/tuple_format.h | 34 +-
src/box/tuple_hash.cc | 140 ++--
src/box/txn.c | 66 +-
src/box/txn_limbo.c | 34 +-
src/box/user.cc | 101 ++-
src/box/user.h | 5 +-
src/box/user_def.c | 23 +-
src/box/user_def.h | 6 +-
src/box/vclock.c | 151 ++--
src/box/vclock.h | 25 +-
src/box/vinyl.c | 388 +++++-----
src/box/vinyl.h | 12 +-
src/box/vy_cache.c | 65 +-
src/box/vy_cache.h | 10 +-
src/box/vy_history.c | 8 +-
src/box/vy_history.h | 8 +-
src/box/vy_log.c | 226 +++---
src/box/vy_log.h | 52 +-
src/box/vy_lsm.c | 184 ++---
src/box/vy_lsm.h | 18 +-
src/box/vy_mem.c | 91 ++-
src/box/vy_mem.h | 13 +-
src/box/vy_point_lookup.c | 41 +-
src/box/vy_point_lookup.h | 4 +-
src/box/vy_quota.c | 13 +-
src/box/vy_quota.h | 7 +-
src/box/vy_range.c | 22 +-
src/box/vy_range.h | 4 +-
src/box/vy_read_iterator.c | 136 ++--
src/box/vy_read_set.c | 11 +-
src/box/vy_regulator.c | 40 +-
src/box/vy_regulator.h | 7 +-
src/box/vy_run.c | 375 +++++-----
src/box/vy_run.h | 48 +-
src/box/vy_scheduler.c | 182 ++---
src/box/vy_scheduler.h | 6 +-
src/box/vy_stmt.c | 119 ++--
src/box/vy_stmt.h | 84 +--
src/box/vy_stmt_stream.h | 10 +-
src/box/vy_tx.c | 128 ++--
src/box/vy_tx.h | 21 +-
src/box/vy_upsert.c | 39 +-
src/box/vy_upsert.h | 4 +-
src/box/vy_write_iterator.c | 74 +-
src/box/vy_write_iterator.h | 11 +-
src/box/wal.c | 185 +++--
src/box/wal.h | 8 +-
src/box/xlog.c | 289 ++++----
src/box/xlog.h | 36 +-
src/box/xrow.c | 361 +++++-----
src/box/xrow.h | 44 +-
src/box/xrow_io.cc | 16 +-
src/box/xrow_io.h | 1 -
src/box/xrow_update.c | 65 +-
src/box/xrow_update.h | 12 +-
src/box/xrow_update_array.c | 89 +--
src/box/xrow_update_bar.c | 68 +-
src/box/xrow_update_field.c | 104 +--
src/box/xrow_update_field.h | 132 ++--
src/box/xrow_update_map.c | 84 +--
src/box/xrow_update_route.c | 45 +-
186 files changed, 6470 insertions(+), 6494 deletions(-)
create mode 100644 src/box/.clang-format
--
1.8.3.1
^ permalink raw reply [flat|nested] 4+ messages in thread
* [Tarantool-patches] [PATCH 1/3] clang-format: guard various declarations
2020-10-07 13:24 [Tarantool-patches] [PATCH 0/3] Add clang format Kirill Yukhin
@ 2020-10-07 13:24 ` Kirill Yukhin
2020-10-07 13:24 ` [Tarantool-patches] [PATCH 2/3] Add .clang-format for src/box/ Kirill Yukhin
2020-10-07 14:11 ` [Tarantool-patches] [PATCH 3/3] Apply clang-format Kirill Yukhin
2 siblings, 0 replies; 4+ messages in thread
From: Kirill Yukhin @ 2020-10-07 13:24 UTC (permalink / raw)
To: tarantool-patches
Disable clang-format for:
- iproto_constants.c
- constants in vy_log.c
- key comparator definitions
- field_def's type compatibility table
---
src/box/errcode.h | 4 +++-
src/box/field_def.c | 2 ++
src/box/iproto_constants.c | 4 ++++
src/box/tuple_compare.cc | 4 ++++
src/box/vy_log.c | 2 ++
5 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/src/box/errcode.h b/src/box/errcode.h
index e6957d6..244bda1 100644
--- a/src/box/errcode.h
+++ b/src/box/errcode.h
@@ -51,7 +51,8 @@ struct errcode_record {
* Please don't forget to do it!
*/
-#define ERROR_CODES(_) \
+/* clang-format off */
+#define ERROR_CODES(_) \
/* 0 */_(ER_UNKNOWN, "Unknown error") \
/* 1 */_(ER_ILLEGAL_PARAMS, "Illegal parameters, %s") \
/* 2 */_(ER_MEMORY_ISSUE, "Failed to allocate %u bytes in %s for %s") \
@@ -273,6 +274,7 @@ struct errcode_record {
/*218 */_(ER_TUPLE_METADATA_IS_TOO_BIG, "Can't create tuple: metadata size %u is too big") \
/*219 */_(ER_XLOG_GAP, "%s") \
/*220 */_(ER_TOO_EARLY_SUBSCRIBE, "Can't subscribe non-anonymous replica %s until join is done") \
+/* clang-format on */
/*
* !IMPORTANT! Please follow instructions at start of the file
diff --git a/src/box/field_def.c b/src/box/field_def.c
index 213e916..34cecfa 100644
--- a/src/box/field_def.c
+++ b/src/box/field_def.c
@@ -127,6 +127,7 @@ field_type_by_name_wrapper(const char *str, uint32_t len)
* For an i row and j column the value is true, if the i type
* values can be stored in the j type.
*/
+/* clang-format off */
static const bool field_type_compatibility[] = {
/* ANY UNSIGNED STRING NUMBER DOUBLE INTEGER BOOLEAN VARBINARY SCALAR DECIMAL UUID ARRAY MAP */
/* ANY */ true, false, false, false, false, false, false, false, false, false, false, false, false,
@@ -143,6 +144,7 @@ static const bool field_type_compatibility[] = {
/* ARRAY */ true, false, false, false, false, false, false, false, false, false, false, true, false,
/* MAP */ true, false, false, false, false, false, false, false, false, false, false, false, true,
};
+/* clang-format on */
bool
field_type1_contains_type2(enum field_type type1, enum field_type type2)
diff --git a/src/box/iproto_constants.c b/src/box/iproto_constants.c
index 029d988..af3ab60 100644
--- a/src/box/iproto_constants.c
+++ b/src/box/iproto_constants.c
@@ -30,6 +30,8 @@
*/
#include "iproto_constants.h"
+/* clang-format off */
+
const unsigned char iproto_key_type[IPROTO_KEY_MAX] =
{
/* {{{ header */
@@ -226,3 +228,5 @@ const char *vy_row_index_key_strs[VY_ROW_INDEX_KEY_MAX] = {
NULL,
"row index",
};
+
+/* clang-format on */
diff --git a/src/box/tuple_compare.cc b/src/box/tuple_compare.cc
index d059c70..bb786cc 100644
--- a/src/box/tuple_compare.cc
+++ b/src/box/tuple_compare.cc
@@ -1144,6 +1144,7 @@ struct comparator_signature {
/**
* field1 no, field1 type, field2 no, field2 type, ...
*/
+/* clang-format off */
static const comparator_signature cmp_arr[] = {
COMPARATOR(0, FIELD_TYPE_UNSIGNED)
COMPARATOR(0, FIELD_TYPE_STRING)
@@ -1160,6 +1161,7 @@ static const comparator_signature cmp_arr[] = {
COMPARATOR(0, FIELD_TYPE_UNSIGNED, 1, FIELD_TYPE_STRING , 2, FIELD_TYPE_STRING)
COMPARATOR(0, FIELD_TYPE_STRING , 1, FIELD_TYPE_STRING , 2, FIELD_TYPE_STRING)
};
+/* clang-format on */
#undef COMPARATOR
@@ -1330,6 +1332,7 @@ struct comparator_with_key_signature
#define KEY_COMPARATOR(...) \
{ TupleCompareWithKey<0, __VA_ARGS__>::compare, { __VA_ARGS__ } },
+/* clang-format off */
static const comparator_with_key_signature cmp_wk_arr[] = {
KEY_COMPARATOR(0, FIELD_TYPE_UNSIGNED, 1, FIELD_TYPE_UNSIGNED, 2, FIELD_TYPE_UNSIGNED)
KEY_COMPARATOR(0, FIELD_TYPE_STRING , 1, FIELD_TYPE_UNSIGNED, 2, FIELD_TYPE_UNSIGNED)
@@ -1345,6 +1348,7 @@ static const comparator_with_key_signature cmp_wk_arr[] = {
KEY_COMPARATOR(1, FIELD_TYPE_UNSIGNED, 2, FIELD_TYPE_STRING)
KEY_COMPARATOR(1, FIELD_TYPE_STRING , 2, FIELD_TYPE_STRING)
};
+/* clang-format on */
/**
* A functional index tuple compare.
diff --git a/src/box/vy_log.c b/src/box/vy_log.c
index d23b1c1..06b2596 100644
--- a/src/box/vy_log.c
+++ b/src/box/vy_log.c
@@ -68,6 +68,7 @@
* Integer key of a field in the vy_log_record structure.
* Used for packing a record in MsgPack.
*/
+/* clang-format off */
enum vy_log_key {
VY_LOG_KEY_LSM_ID = 0,
VY_LOG_KEY_RANGE_ID = 1,
@@ -130,6 +131,7 @@ static const char *vy_log_type_name[] = {
[VY_LOG_REBOOTSTRAP] = "rebootstrap",
[VY_LOG_ABORT_REBOOTSTRAP] = "abort_rebootstrap",
};
+/* clang-format on */
/** Batch of vylog records that must be written in one go. */
struct vy_log_tx {
--
1.8.3.1
* [Tarantool-patches] [PATCH 2/3] Add .clang-format for src/box/
2020-10-07 13:24 [Tarantool-patches] [PATCH 0/3] Add clang format Kirill Yukhin
2020-10-07 13:24 ` [Tarantool-patches] [PATCH 1/3] clang-format: guard various declarations Kirill Yukhin
@ 2020-10-07 13:24 ` Kirill Yukhin
2020-10-07 14:11 ` [Tarantool-patches] [PATCH 3/3] Apply clang-format Kirill Yukhin
2 siblings, 0 replies; 4+ messages in thread
From: Kirill Yukhin @ 2020-10-07 13:24 UTC (permalink / raw)
To: tarantool-patches
---
src/box/.clang-format | 125 ++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 125 insertions(+)
create mode 100644 src/box/.clang-format
diff --git a/src/box/.clang-format b/src/box/.clang-format
new file mode 100644
index 0000000..bf6aa82
--- /dev/null
+++ b/src/box/.clang-format
@@ -0,0 +1,125 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# clang-format configuration file. Intended for clang-format >= 4.
+#
+# For more information, see:
+#
+# Documentation/process/clang-format.rst
+# https://clang.llvm.org/docs/ClangFormat.html
+# https://clang.llvm.org/docs/ClangFormatStyleOptions.html
+#
+---
+AccessModifierOffset: -8
+AlignAfterOpenBracket: Align
+AlignConsecutiveAssignments: false
+AlignConsecutiveBitFields: true
+AlignConsecutiveDeclarations: false
+AlignEscapedNewlines: Left # Unknown to clang-format-4.0
+AlignOperands: true
+AlignTrailingComments: true
+AllowAllParametersOfDeclarationOnNextLine: false
+AllowAllArgumentsOnNextLine: true # Unknown to clang-format-7.0
+AllowShortBlocksOnASingleLine: false
+AllowShortCaseLabelsOnASingleLine: false
+AllowShortFunctionsOnASingleLine: Inline
+AllowShortIfStatementsOnASingleLine: false
+AllowShortLoopsOnASingleLine: false
+AlwaysBreakAfterDefinitionReturnType: None
+AlwaysBreakAfterReturnType: TopLevel
+AlwaysBreakBeforeMultilineStrings: false
+AlwaysBreakTemplateDeclarations: false
+BinPackArguments: true
+BinPackParameters: true
+BreakBeforeBraces: Custom
+BraceWrapping:
+ AfterCaseLabel: false
+ AfterClass: false
+ AfterControlStatement: Never
+ AfterEnum: false
+ AfterFunction: true
+ AfterNamespace: false
+ AfterObjCDeclaration: false
+ AfterStruct: false
+ AfterUnion: false
+ #AfterExternBlock: false # Unknown to clang-format-5.0
+ BeforeCatch: false
+ BeforeElse: false
+ IndentBraces: false
+ SplitEmptyFunction: false # Unknown to clang-format-4.0
+ SplitEmptyRecord: false # Unknown to clang-format-4.0
+ #SplitEmptyNamespace: true # Unknown to clang-format-4.0
+BreakBeforeBinaryOperators: None
+#BreakBeforeInheritanceComma: false # Unknown to clang-format-4.0
+BreakBeforeTernaryOperators: false
+BreakConstructorInitializersBeforeComma: false
+#BreakConstructorInitializers: BeforeComma # Unknown to clang-format-4.0
+BreakAfterJavaFieldAnnotations: false
+BreakStringLiterals: false
+ColumnLimit: 80
+CommentPragmas: '^ IWYU pragma:'
+ConstructorInitializerAllOnOneLineOrOnePerLine: false
+ConstructorInitializerIndentWidth: 8
+ContinuationIndentWidth: 8
+Cpp11BracedListStyle: false
+DerivePointerAlignment: false
+DisableFormat: false
+ExperimentalAutoDetectBinPacking: false
+ForEachMacros:
+ - 'rlist_foreach'
+ - 'rlist_foreach_entry'
+ - 'rlist_foreach_entry_continue'
+ - 'rlist_foreach_entry_continue_rcu'
+ - 'rlist_foreach_entry_continue_rcu_bh'
+ - 'rlist_foreach_entry_from'
+ - 'rlist_foreach_entry_from_rcu'
+ - 'rlist_foreach_entry_rcu'
+ - 'rlist_foreach_entry_rcu_bh'
+ - 'rlist_foreach_entry_rcu_notrace'
+ - 'rlist_foreach_entry_safe'
+ - 'rlist_foreach_safe'
+ - 'vy_stmt_foreach_entry'
+IncludeBlocks: Preserve # Unknown to clang-format-5.0
+IncludeCategories:
+ - Regex: '.*'
+ Priority: 1
+IncludeIsMainRegex: '(Test)?$'
+IndentCaseLabels: false
+IndentPPDirectives: None # Unknown to clang-format-5.0
+IndentWidth: 8
+IndentWrappedFunctionNames: false
+JavaScriptQuotes: Leave
+JavaScriptWrapImports: true
+KeepEmptyLinesAtTheStartOfBlocks: false
+MacroBlockBegin: ''
+MacroBlockEnd: ''
+MaxEmptyLinesToKeep: 1
+NamespaceIndentation: Inner
+ObjCBlockIndentWidth: 8
+ObjCSpaceAfterProperty: true
+ObjCSpaceBeforeProtocolList: true
+PenaltyBreakAssignment: 10 # Unknown to clang-format-4.0
+PenaltyBreakBeforeFirstCallParameter: 30
+PenaltyBreakComment: 10
+PenaltyBreakFirstLessLess: 0
+PenaltyBreakString: 10
+PenaltyExcessCharacter: 100
+PenaltyReturnTypeOnItsOwnLine: 60
+PointerAlignment: Right
+ReflowComments: false
+SortIncludes: false
+SpaceAfterCStyleCast: false
+SpaceAfterTemplateKeyword: true
+SpaceBeforeAssignmentOperators: true
+SpaceBeforeCtorInitializerColon: true # Unknown to clang-format-5.0
+SpaceBeforeInheritanceColon: false # Unknown to clang-format-5.0
+SpaceBeforeParens: ControlStatementsExceptForEachMacros
+SpaceInEmptyParentheses: false
+SpacesBeforeTrailingComments: 1
+SpacesInAngles: false
+SpacesInContainerLiterals: false
+SpacesInCStyleCastParentheses: false
+SpacesInParentheses: false
+SpacesInSquareBrackets: false
+Standard: Cpp11
+TabWidth: 8
+UseTab: Always
--
1.8.3.1
* [Tarantool-patches] [PATCH 3/3] Apply clang-format
2020-10-07 13:24 [Tarantool-patches] [PATCH 0/3] Add clang format Kirill Yukhin
2020-10-07 13:24 ` [Tarantool-patches] [PATCH 1/3] clang-format: guard various declarations Kirill Yukhin
2020-10-07 13:24 ` [Tarantool-patches] [PATCH 2/3] Add .clang-format for src/box/ Kirill Yukhin
@ 2020-10-07 14:11 ` Kirill Yukhin
2 siblings, 0 replies; 4+ messages in thread
From: Kirill Yukhin @ 2020-10-07 14:11 UTC (permalink / raw)
To: tarantool-patches
---
src/box/alter.cc | 1565 ++++++++++++++++++++---------------------
src/box/applier.cc | 140 ++--
src/box/applier.h | 36 +-
src/box/authentication.cc | 8 +-
src/box/authentication.h | 1 -
src/box/bind.c | 13 +-
src/box/bind.h | 3 +-
src/box/blackhole.c | 15 +-
src/box/box.cc | 192 +++--
src/box/box.h | 104 ++-
src/box/call.c | 9 +-
src/box/checkpoint_schedule.c | 4 +-
src/box/checkpoint_schedule.h | 4 +-
src/box/ck_constraint.c | 21 +-
src/box/ck_constraint.h | 4 +-
src/box/coll_id.c | 2 +-
src/box/coll_id_cache.c | 9 +-
src/box/coll_id_def.c | 17 +-
src/box/column_mask.h | 8 +-
src/box/constraint_id.c | 8 +-
src/box/engine.c | 56 +-
src/box/engine.h | 84 ++-
src/box/errcode.c | 11 +-
src/box/error.cc | 83 ++-
src/box/error.h | 86 +--
src/box/execute.c | 98 ++-
src/box/field_def.c | 26 +-
src/box/field_def.h | 2 +-
src/box/field_map.c | 17 +-
src/box/field_map.h | 12 +-
src/box/fk_constraint.h | 5 +-
src/box/func.c | 58 +-
src/box/func.h | 3 +-
src/box/func_def.c | 25 +-
src/box/func_def.h | 4 +-
src/box/gc.c | 38 +-
src/box/identifier.c | 6 +-
src/box/index.cc | 118 ++--
src/box/index.h | 135 ++--
src/box/index_def.c | 46 +-
src/box/index_def.h | 22 +-
src/box/iproto.cc | 220 +++---
src/box/iproto_constants.h | 26 +-
src/box/iterator_type.c | 3 +-
src/box/iterator_type.h | 29 +-
src/box/journal.c | 6 +-
src/box/journal.h | 23 +-
src/box/key_def.c | 114 ++-
src/box/key_def.h | 62 +-
src/box/key_list.c | 8 +-
src/box/key_list.h | 4 +-
src/box/lua/call.c | 104 ++-
src/box/lua/call.h | 8 +-
src/box/lua/cfg.cc | 92 ++-
src/box/lua/console.c | 112 +--
src/box/lua/ctl.c | 14 +-
src/box/lua/error.cc | 35 +-
src/box/lua/execute.c | 55 +-
src/box/lua/index.c | 75 +-
src/box/lua/info.c | 85 +--
src/box/lua/init.c | 110 ++-
src/box/lua/key_def.c | 52 +-
src/box/lua/merger.c | 178 +++--
src/box/lua/misc.cc | 41 +-
src/box/lua/net_box.c | 95 ++-
src/box/lua/sequence.c | 12 +-
src/box/lua/serialize_lua.c | 197 +++---
src/box/lua/session.c | 68 +-
src/box/lua/slab.c | 17 +-
src/box/lua/slab.h | 3 +-
src/box/lua/space.cc | 81 +--
src/box/lua/stat.c | 36 +-
src/box/lua/stat.h | 3 +-
src/box/lua/tuple.c | 102 ++-
src/box/lua/xlog.c | 38 +-
src/box/memtx_bitset.c | 70 +-
src/box/memtx_engine.c | 109 ++-
src/box/memtx_engine.h | 18 +-
src/box/memtx_hash.c | 142 ++--
src/box/memtx_rtree.c | 59 +-
src/box/memtx_space.c | 188 +++--
src/box/memtx_space.h | 4 +-
src/box/memtx_tree.c | 274 ++++----
src/box/memtx_tx.c | 141 ++--
src/box/memtx_tx.h | 4 +-
src/box/merger.c | 9 +-
src/box/mp_error.cc | 80 +--
src/box/msgpack.c | 4 +-
src/box/opt_def.c | 10 +-
src/box/opt_def.h | 72 +-
src/box/port.h | 6 +-
src/box/raft.c | 31 +-
src/box/recovery.cc | 40 +-
src/box/recovery.h | 2 +-
src/box/relay.cc | 71 +-
src/box/relay.h | 2 +-
src/box/replication.cc | 121 ++--
src/box/replication.h | 11 +-
src/box/request.c | 14 +-
src/box/schema.cc | 158 ++---
src/box/schema.h | 17 +-
src/box/schema_def.c | 6 +-
src/box/schema_def.h | 6 +-
src/box/sequence.c | 55 +-
src/box/service_engine.c | 10 +-
src/box/session.cc | 38 +-
src/box/session.h | 9 +-
src/box/session_settings.c | 34 +-
src/box/space.c | 86 +--
src/box/space.h | 87 ++-
src/box/space_def.c | 29 +-
src/box/space_def.h | 16 +-
src/box/sysview.c | 53 +-
src/box/tuple.c | 105 +--
src/box/tuple.h | 32 +-
src/box/tuple_bloom.c | 37 +-
src/box/tuple_bloom.h | 5 +-
src/box/tuple_compare.cc | 394 +++++------
src/box/tuple_convert.c | 28 +-
src/box/tuple_dictionary.c | 37 +-
src/box/tuple_extract_key.cc | 94 +--
src/box/tuple_format.c | 216 +++---
src/box/tuple_format.h | 34 +-
src/box/tuple_hash.cc | 140 ++--
src/box/txn.c | 66 +-
src/box/txn_limbo.c | 34 +-
src/box/user.cc | 101 ++-
src/box/user.h | 5 +-
src/box/user_def.c | 23 +-
src/box/user_def.h | 6 +-
src/box/vclock.c | 151 ++--
src/box/vclock.h | 25 +-
src/box/vinyl.c | 388 +++++-----
src/box/vinyl.h | 12 +-
src/box/vy_cache.c | 65 +-
src/box/vy_cache.h | 10 +-
src/box/vy_history.c | 8 +-
src/box/vy_history.h | 8 +-
src/box/vy_log.c | 224 +++---
src/box/vy_log.h | 52 +-
src/box/vy_lsm.c | 184 ++---
src/box/vy_lsm.h | 18 +-
src/box/vy_mem.c | 91 ++-
src/box/vy_mem.h | 13 +-
src/box/vy_point_lookup.c | 41 +-
src/box/vy_point_lookup.h | 4 +-
src/box/vy_quota.c | 13 +-
src/box/vy_quota.h | 7 +-
src/box/vy_range.c | 22 +-
src/box/vy_range.h | 4 +-
src/box/vy_read_iterator.c | 136 ++--
src/box/vy_read_set.c | 11 +-
src/box/vy_regulator.c | 40 +-
src/box/vy_regulator.h | 7 +-
src/box/vy_run.c | 375 +++++-----
src/box/vy_run.h | 48 +-
src/box/vy_scheduler.c | 182 ++---
src/box/vy_scheduler.h | 6 +-
src/box/vy_stmt.c | 119 ++--
src/box/vy_stmt.h | 84 +--
src/box/vy_stmt_stream.h | 10 +-
src/box/vy_tx.c | 128 ++--
src/box/vy_tx.h | 21 +-
src/box/vy_upsert.c | 39 +-
src/box/vy_upsert.h | 4 +-
src/box/vy_write_iterator.c | 74 +-
src/box/vy_write_iterator.h | 11 +-
src/box/wal.c | 185 +++--
src/box/wal.h | 8 +-
src/box/xlog.c | 289 ++++----
src/box/xlog.h | 36 +-
src/box/xrow.c | 361 +++++-----
src/box/xrow.h | 44 +-
src/box/xrow_io.cc | 16 +-
src/box/xrow_io.h | 1 -
src/box/xrow_update.c | 65 +-
src/box/xrow_update.h | 12 +-
src/box/xrow_update_array.c | 89 +--
src/box/xrow_update_bar.c | 68 +-
src/box/xrow_update_field.c | 104 +--
src/box/xrow_update_field.h | 132 ++--
src/box/xrow_update_map.c | 84 +--
src/box/xrow_update_route.c | 45 +-
183 files changed, 6330 insertions(+), 6493 deletions(-)
diff --git a/src/box/alter.cc b/src/box/alter.cc
index 08957f6..3f2ad9b 100644
--- a/src/box/alter.cc
+++ b/src/box/alter.cc
@@ -45,12 +45,12 @@
#include "fiber.h" /* for gc_pool */
#include "scoped_guard.h"
#include "third_party/base64.h"
-#include <new> /* for placement new */
+#include <new> /* for placement new */
#include <stdio.h> /* snprintf() */
#include <ctype.h>
#include "replication.h" /* for replica_set_id() */
-#include "session.h" /* to fetch the current user. */
-#include "vclock.h" /* VCLOCK_MAX */
+#include "session.h" /* to fetch the current user. */
+#include "vclock.h" /* VCLOCK_MAX */
#include "xrow.h"
#include "iproto_constants.h"
#include "identifier.h"
@@ -68,8 +68,8 @@ access_check_ddl(const char *name, uint32_t object_id, uint32_t owner_uid,
struct credentials *cr = effective_user();
user_access_t has_access = cr->universal_access;
- user_access_t access = ((PRIV_U | (user_access_t) priv_type) &
- ~has_access);
+ user_access_t access =
+ ((PRIV_U | (user_access_t)priv_type) & ~has_access);
bool is_owner = owner_uid == cr->uid || cr->uid == ADMIN;
if (access == 0)
return 0; /* Access granted. */
@@ -135,24 +135,25 @@ index_def_check_sequence(struct index_def *index_def, uint32_t sequence_fieldno,
continue;
if ((part->path == NULL && sequence_path == NULL) ||
(part->path != NULL && sequence_path != NULL &&
- json_path_cmp(part->path, part->path_len,
- sequence_path, sequence_path_len,
- TUPLE_INDEX_BASE) == 0)) {
+ json_path_cmp(part->path, part->path_len, sequence_path,
+ sequence_path_len, TUPLE_INDEX_BASE) == 0)) {
sequence_part = part;
break;
}
}
if (sequence_part == NULL) {
diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
- space_name, "sequence field must be a part of "
- "the index");
+ space_name,
+ "sequence field must be a part of "
+ "the index");
return -1;
}
enum field_type type = sequence_part->type;
if (type != FIELD_TYPE_UNSIGNED && type != FIELD_TYPE_INTEGER) {
diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
- space_name, "sequence cannot be used with "
- "a non-integer key");
+ space_name,
+ "sequence cannot be used with "
+ "a non-integer key");
return -1;
}
return 0;
@@ -166,8 +167,8 @@ index_def_check_sequence(struct index_def *index_def, uint32_t sequence_fieldno,
static int
index_def_check_tuple(struct tuple *tuple)
{
- const mp_type common_template[] =
- {MP_UINT, MP_UINT, MP_STR, MP_STR, MP_MAP, MP_ARRAY};
+ const mp_type common_template[] = { MP_UINT, MP_UINT, MP_STR,
+ MP_STR, MP_MAP, MP_ARRAY };
const char *data = tuple_data(tuple);
uint32_t field_count = mp_decode_array(&data);
const char *field_start = data;
@@ -191,8 +192,8 @@ err:
p += snprintf(p, e - p, i ? ", %s" : "%s", mp_type_strs[type]);
}
diag_set(ClientError, ER_WRONG_INDEX_RECORD, got,
- "space id (unsigned), index id (unsigned), name (string), "\
- "type (string), options (map), parts (array)");
+ "space id (unsigned), index id (unsigned), name (string), "
+ "type (string), options (map), parts (array)");
return -1;
}
@@ -210,12 +211,13 @@ index_opts_decode(struct index_opts *opts, const char *map,
return -1;
if (opts->distance == rtree_index_distance_type_MAX) {
diag_set(ClientError, ER_WRONG_INDEX_OPTIONS,
- BOX_INDEX_FIELD_OPTS, "distance must be either "\
- "'euclid' or 'manhattan'");
+ BOX_INDEX_FIELD_OPTS,
+ "distance must be either "
+ "'euclid' or 'manhattan'");
return -1;
}
- if (opts->page_size <= 0 || (opts->range_size > 0 &&
- opts->page_size > opts->range_size)) {
+ if (opts->page_size <= 0 ||
+ (opts->range_size > 0 && opts->page_size > opts->range_size)) {
diag_set(ClientError, ER_WRONG_INDEX_OPTIONS,
BOX_INDEX_FIELD_OPTS,
"page_size must be greater than 0 and "
@@ -250,14 +252,15 @@ index_opts_decode(struct index_opts *opts, const char *map,
* functional index for now.
*/
static int
-func_index_check_func(struct func *func) {
+func_index_check_func(struct func *func)
+{
assert(func != NULL);
if (func->def->language != FUNC_LANGUAGE_LUA ||
func->def->body == NULL || !func->def->is_deterministic ||
!func->def->is_sandboxed) {
diag_set(ClientError, ER_WRONG_INDEX_OPTIONS, 0,
- "referenced function doesn't satisfy "
- "functional index function constraints");
+ "referenced function doesn't satisfy "
+ "functional index function constraints");
return -1;
}
return 0;
@@ -293,12 +296,12 @@ index_def_new_from_tuple(struct tuple *tuple, struct space *space)
return NULL;
enum index_type type = STR2ENUM(index_type, out);
uint32_t name_len;
- const char *name = tuple_field_str(tuple, BOX_INDEX_FIELD_NAME,
- &name_len);
+ const char *name =
+ tuple_field_str(tuple, BOX_INDEX_FIELD_NAME, &name_len);
if (name == NULL)
return NULL;
- const char *opts_field = tuple_field_with_type(tuple,
- BOX_INDEX_FIELD_OPTS, MP_MAP);
+ const char *opts_field =
+ tuple_field_with_type(tuple, BOX_INDEX_FIELD_OPTS, MP_MAP);
if (opts_field == NULL)
return NULL;
if (index_opts_decode(&opts, opts_field, &fiber()->gc) != 0)
@@ -307,18 +310,18 @@ index_def_new_from_tuple(struct tuple *tuple, struct space *space)
uint32_t part_count = mp_decode_array(&parts);
if (name_len > BOX_NAME_MAX) {
diag_set(ClientError, ER_MODIFY_INDEX,
- tt_cstr(name, BOX_INVALID_NAME_MAX),
- space_name(space), "index name is too long");
+ tt_cstr(name, BOX_INVALID_NAME_MAX), space_name(space),
+ "index name is too long");
return NULL;
}
if (identifier_check(name, name_len) != 0)
return NULL;
struct key_def *key_def = NULL;
- struct key_part_def *part_def = (struct key_part_def *)
- malloc(sizeof(*part_def) * part_count);
+ struct key_part_def *part_def =
+ (struct key_part_def *)malloc(sizeof(*part_def) * part_count);
if (part_def == NULL) {
- diag_set(OutOfMemory, sizeof(*part_def) * part_count,
- "malloc", "key_part_def");
+ diag_set(OutOfMemory, sizeof(*part_def) * part_count, "malloc",
+ "key_part_def");
return NULL;
}
auto key_def_guard = make_scoped_guard([&] {
@@ -327,19 +330,20 @@ index_def_new_from_tuple(struct tuple *tuple, struct space *space)
key_def_delete(key_def);
});
if (key_def_decode_parts(part_def, part_count, &parts,
- space->def->fields,
- space->def->field_count, &fiber()->gc) != 0)
+ space->def->fields, space->def->field_count,
+ &fiber()->gc) != 0)
return NULL;
bool for_func_index = opts.func_id > 0;
key_def = key_def_new(part_def, part_count, for_func_index);
if (key_def == NULL)
return NULL;
struct index_def *index_def =
- index_def_new(id, index_id, name, name_len, type,
- &opts, key_def, space_index_key_def(space, 0));
+ index_def_new(id, index_id, name, name_len, type, &opts,
+ key_def, space_index_key_def(space, 0));
if (index_def == NULL)
return NULL;
- auto index_def_guard = make_scoped_guard([=] { index_def_delete(index_def); });
+ auto index_def_guard =
+ make_scoped_guard([=] { index_def_delete(index_def); });
if (!index_def_is_valid(index_def, space_name(space)))
return NULL;
if (space_check_index_def(space, index_def) != 0)
@@ -366,11 +370,13 @@ index_def_new_from_tuple(struct tuple *tuple, struct space *space)
index_def_set_func(index_def, func);
}
if (index_def->iid == 0 && space->sequence != NULL)
- if (index_def_check_sequence(index_def, space->sequence_fieldno,
- space->sequence_path,
- space->sequence_path != NULL ?
- strlen(space->sequence_path) : 0,
- space_name(space)) != 0)
+ if (index_def_check_sequence(
+ index_def, space->sequence_fieldno,
+ space->sequence_path,
+ space->sequence_path != NULL ?
+ strlen(space->sequence_path) :
+ 0,
+ space_name(space)) != 0)
return NULL;
index_def_guard.is_active = false;
return index_def;
@@ -416,8 +422,8 @@ space_opts_decode(struct space_opts *opts, const char *map,
*/
static int
field_def_decode(struct field_def *field, const char **data,
- const char *space_name, uint32_t name_len,
- uint32_t errcode, uint32_t fieldno, struct region *region)
+ const char *space_name, uint32_t name_len, uint32_t errcode,
+ uint32_t fieldno, struct region *region)
{
if (mp_typeof(**data) != MP_MAP) {
diag_set(ClientError, errcode, tt_cstr(space_name, name_len),
@@ -433,8 +439,8 @@ field_def_decode(struct field_def *field, const char **data,
if (mp_typeof(**data) != MP_STR) {
diag_set(ClientError, errcode,
tt_cstr(space_name, name_len),
- tt_sprintf("field %d format is not map"\
- " with string keys",
+ tt_sprintf("field %d format is not map"
+ " with string keys",
fieldno + TUPLE_INDEX_BASE));
return -1;
}
@@ -445,15 +451,14 @@ field_def_decode(struct field_def *field, const char **data,
fieldno + TUPLE_INDEX_BASE, region,
true) != 0)
return -1;
- if (is_action_missing &&
- key_len == action_literal_len &&
+ if (is_action_missing && key_len == action_literal_len &&
memcmp(key, "nullable_action", action_literal_len) == 0)
is_action_missing = false;
}
if (is_action_missing) {
field->nullable_action = field->is_nullable ?
- ON_CONFLICT_ACTION_NONE
- : ON_CONFLICT_ACTION_DEFAULT;
+ ON_CONFLICT_ACTION_NONE :
+ ON_CONFLICT_ACTION_DEFAULT;
}
if (field->name == NULL) {
diag_set(ClientError, errcode, tt_cstr(space_name, name_len),
@@ -483,20 +488,18 @@ field_def_decode(struct field_def *field, const char **data,
fieldno + TUPLE_INDEX_BASE));
return -1;
}
- if (!((field->is_nullable && field->nullable_action ==
- ON_CONFLICT_ACTION_NONE)
- || (!field->is_nullable
- && field->nullable_action != ON_CONFLICT_ACTION_NONE))) {
+ if (!((field->is_nullable &&
+ field->nullable_action == ON_CONFLICT_ACTION_NONE) ||
+ (!field->is_nullable &&
+ field->nullable_action != ON_CONFLICT_ACTION_NONE))) {
diag_set(ClientError, errcode, tt_cstr(space_name, name_len),
tt_sprintf("field %d has conflicting nullability and "
- "nullable action properties", fieldno +
- TUPLE_INDEX_BASE));
+ "nullable action properties",
+ fieldno + TUPLE_INDEX_BASE));
return -1;
}
- if (field->coll_id != COLL_NONE &&
- field->type != FIELD_TYPE_STRING &&
- field->type != FIELD_TYPE_SCALAR &&
- field->type != FIELD_TYPE_ANY) {
+ if (field->coll_id != COLL_NONE && field->type != FIELD_TYPE_STRING &&
+ field->type != FIELD_TYPE_SCALAR && field->type != FIELD_TYPE_ANY) {
diag_set(ClientError, errcode, tt_cstr(space_name, name_len),
tt_sprintf("collation is reasonable only for "
"string, scalar and any fields"));
@@ -505,8 +508,8 @@ field_def_decode(struct field_def *field, const char **data,
const char *dv = field->default_value;
if (dv != NULL) {
- field->default_value_expr = sql_expr_compile(sql_get(), dv,
- strlen(dv));
+ field->default_value_expr =
+ sql_expr_compile(sql_get(), dv, strlen(dv));
if (field->default_value_expr == NULL)
return -1;
}
@@ -526,8 +529,8 @@ field_def_decode(struct field_def *field, const char **data,
*/
static int
space_format_decode(const char *data, uint32_t *out_count,
- const char *space_name, uint32_t name_len,
- uint32_t errcode, struct region *region, struct field_def **fields)
+ const char *space_name, uint32_t name_len, uint32_t errcode,
+ struct region *region, struct field_def **fields)
{
/* Type is checked by _space format. */
assert(mp_typeof(*data) == MP_ARRAY);
@@ -538,9 +541,8 @@ space_format_decode(const char *data, uint32_t *out_count,
return 0;
}
size_t size;
- struct field_def *region_defs =
- region_alloc_array(region, typeof(region_defs[0]), count,
- &size);
+ struct field_def *region_defs = region_alloc_array(
+ region, typeof(region_defs[0]), count, &size);
if (region_defs == NULL) {
diag_set(OutOfMemory, size, "region_alloc_array",
"region_defs");
@@ -552,12 +554,11 @@ space_format_decode(const char *data, uint32_t *out_count,
* work with garbage pointers.
*/
memset(region_defs, 0, size);
- auto fields_guard = make_scoped_guard([=] {
- space_def_destroy_fields(region_defs, count, false);
- });
+ auto fields_guard = make_scoped_guard(
+ [=] { space_def_destroy_fields(region_defs, count, false); });
for (uint32_t i = 0; i < count; ++i) {
- if (field_def_decode(&region_defs[i], &data, space_name, name_len,
- errcode, i, region) != 0)
+ if (field_def_decode(&region_defs[i], &data, space_name,
+ name_len, errcode, i, region) != 0)
return -1;
}
fields_guard.is_active = false;
@@ -573,8 +574,8 @@ space_def_new_from_tuple(struct tuple *tuple, uint32_t errcode,
struct region *region)
{
uint32_t name_len;
- const char *name = tuple_field_str(tuple, BOX_SPACE_FIELD_NAME,
- &name_len);
+ const char *name =
+ tuple_field_str(tuple, BOX_SPACE_FIELD_NAME, &name_len);
if (name == NULL)
return NULL;
if (name_len > BOX_NAME_MAX) {
@@ -606,8 +607,8 @@ space_def_new_from_tuple(struct tuple *tuple, uint32_t errcode,
&exact_field_count) != 0)
return NULL;
uint32_t engine_name_len;
- const char *engine_name = tuple_field_str(tuple,
- BOX_SPACE_FIELD_ENGINE, &engine_name_len);
+ const char *engine_name = tuple_field_str(tuple, BOX_SPACE_FIELD_ENGINE,
+ &engine_name_len);
if (engine_name == NULL)
return NULL;
/*
@@ -622,28 +623,26 @@ space_def_new_from_tuple(struct tuple *tuple, uint32_t errcode,
if (identifier_check(engine_name, engine_name_len) != 0)
return NULL;
/* Check space opts. */
- const char *space_opts = tuple_field_with_type(tuple,
- BOX_SPACE_FIELD_OPTS, MP_MAP);
+ const char *space_opts =
+ tuple_field_with_type(tuple, BOX_SPACE_FIELD_OPTS, MP_MAP);
if (space_opts == NULL)
return NULL;
/* Check space format */
- const char *format = tuple_field_with_type(tuple,
- BOX_SPACE_FIELD_FORMAT, MP_ARRAY);
+ const char *format =
+ tuple_field_with_type(tuple, BOX_SPACE_FIELD_FORMAT, MP_ARRAY);
if (format == NULL)
return NULL;
struct field_def *fields = NULL;
uint32_t field_count;
- if (space_format_decode(format, &field_count, name,
- name_len, errcode, region, &fields) != 0)
+ if (space_format_decode(format, &field_count, name, name_len, errcode,
+ region, &fields) != 0)
return NULL;
- auto fields_guard = make_scoped_guard([=] {
- space_def_destroy_fields(fields, field_count, false);
- });
- if (exact_field_count != 0 &&
- exact_field_count < field_count) {
+ auto fields_guard = make_scoped_guard(
+ [=] { space_def_destroy_fields(fields, field_count, false); });
+ if (exact_field_count != 0 && exact_field_count < field_count) {
diag_set(ClientError, errcode, tt_cstr(name, name_len),
- "exact_field_count must be either 0 or >= "\
- "formatted field count");
+ "exact_field_count must be either 0 or >= "
+ "formatted field count");
return NULL;
}
struct space_opts opts;
@@ -653,10 +652,8 @@ space_def_new_from_tuple(struct tuple *tuple, uint32_t errcode,
* Currently, only predefined replication groups
* are supported.
*/
- if (opts.group_id != GROUP_DEFAULT &&
- opts.group_id != GROUP_LOCAL) {
- diag_set(ClientError, ER_NO_SUCH_GROUP,
- int2str(opts.group_id));
+ if (opts.group_id != GROUP_DEFAULT && opts.group_id != GROUP_LOCAL) {
+ diag_set(ClientError, ER_NO_SUCH_GROUP, int2str(opts.group_id));
return NULL;
}
if (opts.is_view && opts.sql == NULL) {
@@ -668,10 +665,10 @@ space_def_new_from_tuple(struct tuple *tuple, uint32_t errcode,
"local space can't be synchronous");
return NULL;
}
- struct space_def *def =
- space_def_new(id, uid, exact_field_count, name, name_len,
- engine_name, engine_name_len, &opts, fields,
- field_count);
+ struct space_def *def = space_def_new(id, uid, exact_field_count, name,
+ name_len, engine_name,
+ engine_name_len, &opts, fields,
+ field_count);
if (def == NULL)
return NULL;
auto def_guard = make_scoped_guard([=] { space_def_delete(def); });
@@ -736,8 +733,8 @@ space_has_data(uint32_t id, uint32_t iid, uint32_t uid, bool *out)
}
if (!space_is_memtx(space)) {
- diag_set(ClientError, ER_UNSUPPORTED,
- space->engine->name, "system data");
+ diag_set(ClientError, ER_UNSUPPORTED, space->engine->name,
+ "system data");
return -1;
}
struct index *index = index_find(space, iid);
@@ -794,7 +791,8 @@ public:
* to WAL. Must not fail.
*/
virtual void commit(struct alter_space * /* alter */,
- int64_t /* signature */) {}
+ int64_t /* signature */)
+ {}
/**
* Called in case a WAL error occurred. It is supposed to undo
* the effect of AlterSpaceOp::prepare and AlterSpaceOp::alter.
@@ -820,9 +818,8 @@ static struct trigger *
txn_alter_trigger_new(trigger_f run, void *data)
{
size_t size = sizeof(struct trigger);
- struct trigger *trigger = (struct trigger *)
- region_aligned_alloc(&in_txn()->region, size,
- alignof(struct trigger));
+ struct trigger *trigger = (struct trigger *)region_aligned_alloc(
+ &in_txn()->region, size, alignof(struct trigger));
if (trigger == NULL) {
diag_set(OutOfMemory, size, "region", "struct trigger");
return NULL;
@@ -868,9 +865,8 @@ alter_space_new(struct space *old_space)
{
struct txn *txn = in_txn();
size_t size = sizeof(struct alter_space);
- struct alter_space *alter = (struct alter_space *)
- region_aligned_alloc(&in_txn()->region, size,
- alignof(struct alter_space));
+ struct alter_space *alter = (struct alter_space *)region_aligned_alloc(
+ &in_txn()->region, size, alignof(struct alter_space));
if (alter == NULL) {
diag_set(OutOfMemory, size, "region", "struct alter_space");
return NULL;
@@ -894,9 +890,9 @@ static void
alter_space_delete(struct alter_space *alter)
{
/* Destroy the ops. */
- while (! rlist_empty(&alter->ops)) {
- AlterSpaceOp *op = rlist_shift_entry(&alter->ops,
- AlterSpaceOp, link);
+ while (!rlist_empty(&alter->ops)) {
+ AlterSpaceOp *op =
+ rlist_shift_entry(&alter->ops, AlterSpaceOp, link);
delete op;
}
/* Delete the new space, if any. */
@@ -946,9 +942,11 @@ class AlterSpaceLock {
static struct mh_i32_t *registry;
/** Identifier of the space this lock is for. */
uint32_t space_id;
+
public:
/** Take a lock for the altered space. */
- AlterSpaceLock(struct alter_space *alter) {
+ AlterSpaceLock(struct alter_space *alter)
+ {
if (registry == NULL) {
registry = mh_i32_new();
if (registry == NULL) {
@@ -966,7 +964,8 @@ public:
if (k == mh_end(registry))
tnt_raise(OutOfMemory, 0, "mh_i32_put", "alter lock");
}
- ~AlterSpaceLock() {
+ ~AlterSpaceLock()
+ {
mh_int_t k = mh_i32_find(registry, space_id, NULL);
assert(k != mh_end(registry));
mh_i32_del(registry, k, NULL);
@@ -986,8 +985,8 @@ struct mh_i32_t *AlterSpaceLock::registry;
static int
alter_space_commit(struct trigger *trigger, void *event)
{
- struct txn *txn = (struct txn *) event;
- struct alter_space *alter = (struct alter_space *) trigger->data;
+ struct txn *txn = (struct txn *)event;
+ struct alter_space *alter = (struct alter_space *)trigger->data;
/*
* The engine (vinyl) expects us to pass the signature of
* the row that performed this operation, not the signature
@@ -1031,7 +1030,7 @@ alter_space_commit(struct trigger *trigger, void *event)
static int
alter_space_rollback(struct trigger *trigger, void * /* event */)
{
- struct alter_space *alter = (struct alter_space *) trigger->data;
+ struct alter_space *alter = (struct alter_space *)trigger->data;
/* Rollback alter ops */
class AlterSpaceOp *op;
try {
@@ -1185,11 +1184,9 @@ alter_space_do(struct txn_stmt *stmt, struct alter_space *alter)
* This operation does not modify the space, it just checks that
* tuples stored in it conform to the new format.
*/
-class CheckSpaceFormat: public AlterSpaceOp
-{
+class CheckSpaceFormat: public AlterSpaceOp {
public:
- CheckSpaceFormat(struct alter_space *alter)
- :AlterSpaceOp(alter) {}
+ CheckSpaceFormat(struct alter_space *alter) : AlterSpaceOp(alter) {}
virtual void prepare(struct alter_space *alter);
};
@@ -1204,16 +1201,16 @@ CheckSpaceFormat::prepare(struct alter_space *alter)
assert(new_format != NULL);
if (!tuple_format1_can_store_format2_tuples(new_format,
old_format))
- space_check_format_xc(old_space, new_format);
+ space_check_format_xc(old_space, new_format);
}
}
/** Change non-essential properties of a space. */
-class ModifySpace: public AlterSpaceOp
-{
+class ModifySpace: public AlterSpaceOp {
public:
ModifySpace(struct alter_space *alter, struct space_def *def)
- :AlterSpaceOp(alter), new_def(def), new_dict(NULL) {}
+ : AlterSpaceOp(alter), new_def(def), new_dict(NULL)
+ {}
/* New space definition. */
struct space_def *new_def;
/**
@@ -1275,11 +1272,11 @@ ModifySpace::~ModifySpace()
/** DropIndex - remove an index from space. */
-class DropIndex: public AlterSpaceOp
-{
+class DropIndex: public AlterSpaceOp {
public:
DropIndex(struct alter_space *alter, struct index *index)
- :AlterSpaceOp(alter), old_index(index) {}
+ : AlterSpaceOp(alter), old_index(index)
+ {}
struct index *old_index;
virtual void alter_def(struct alter_space *alter);
virtual void prepare(struct alter_space *alter);
@@ -1316,11 +1313,11 @@ DropIndex::commit(struct alter_space *alter, int64_t signature)
* Added to the alter specification when the index at hand
* is not affected by alter in any way.
*/
-class MoveIndex: public AlterSpaceOp
-{
+class MoveIndex: public AlterSpaceOp {
public:
MoveIndex(struct alter_space *alter, uint32_t iid_arg)
- :AlterSpaceOp(alter), iid(iid_arg) {}
+ : AlterSpaceOp(alter), iid(iid_arg)
+ {}
/** id of the index on the move. */
uint32_t iid;
virtual void alter(struct alter_space *alter);
@@ -1343,24 +1340,24 @@ MoveIndex::rollback(struct alter_space *alter)
* Change non-essential properties of an index, i.e.
* properties not involving index data or layout on disk.
*/
-class ModifyIndex: public AlterSpaceOp
-{
+class ModifyIndex: public AlterSpaceOp {
public:
- ModifyIndex(struct alter_space *alter,
- struct index *index, struct index_def *def)
- : AlterSpaceOp(alter), old_index(index),
- new_index(NULL), new_index_def(def) {
- if (new_index_def->iid == 0 &&
- key_part_cmp(new_index_def->key_def->parts,
- new_index_def->key_def->part_count,
- old_index->def->key_def->parts,
- old_index->def->key_def->part_count) != 0) {
- /*
+ ModifyIndex(struct alter_space *alter, struct index *index,
+ struct index_def *def)
+ : AlterSpaceOp(alter), old_index(index), new_index(NULL),
+ new_index_def(def)
+ {
+ if (new_index_def->iid == 0 &&
+ key_part_cmp(new_index_def->key_def->parts,
+ new_index_def->key_def->part_count,
+ old_index->def->key_def->parts,
+ old_index->def->key_def->part_count) != 0) {
+ /*
* Primary parts have been changed -
* update secondary indexes.
*/
- alter->pk_def = new_index_def->key_def;
- }
+ alter->pk_def = new_index_def->key_def;
+ }
}
struct index *old_index;
struct index *new_index;
@@ -1422,15 +1419,15 @@ ModifyIndex::~ModifyIndex()
}
/** CreateIndex - add a new index to the space. */
-class CreateIndex: public AlterSpaceOp
-{
+class CreateIndex: public AlterSpaceOp {
/** New index. */
struct index *new_index;
/** New index index_def. */
struct index_def *new_index_def;
+
public:
CreateIndex(struct alter_space *alter, struct index_def *def)
- :AlterSpaceOp(alter), new_index(NULL), new_index_def(def)
+ : AlterSpaceOp(alter), new_index(NULL), new_index_def(def)
{}
virtual void alter_def(struct alter_space *alter);
virtual void prepare(struct alter_space *alter);
@@ -1482,7 +1479,7 @@ CreateIndex::prepare(struct alter_space *alter)
void
CreateIndex::commit(struct alter_space *alter, int64_t signature)
{
- (void) alter;
+ (void)alter;
assert(new_index != NULL);
index_commit_create(new_index, signature);
new_index = NULL;
@@ -1501,15 +1498,14 @@ CreateIndex::~CreateIndex()
* from by reading the primary key. Used when key_def of
* an index is changed.
*/
-class RebuildIndex: public AlterSpaceOp
-{
+class RebuildIndex: public AlterSpaceOp {
public:
RebuildIndex(struct alter_space *alter,
struct index_def *new_index_def_arg,
struct index_def *old_index_def_arg)
- :AlterSpaceOp(alter), new_index(NULL),
- new_index_def(new_index_def_arg),
- old_index_def(old_index_def_arg)
+ : AlterSpaceOp(alter), new_index(NULL),
+ new_index_def(new_index_def_arg),
+ old_index_def(old_index_def_arg)
{
/* We may want to rebuild secondary keys as well. */
if (new_index_def->iid == 0)
@@ -1547,8 +1543,8 @@ RebuildIndex::prepare(struct alter_space *alter)
void
RebuildIndex::commit(struct alter_space *alter, int64_t signature)
{
- struct index *old_index = space_index(alter->old_space,
- old_index_def->iid);
+ struct index *old_index =
+ space_index(alter->old_space, old_index_def->iid);
assert(old_index != NULL);
index_commit_drop(old_index, signature);
assert(new_index != NULL);
@@ -1569,25 +1565,26 @@ RebuildIndex::~RebuildIndex()
* drop the old index data and rebuild index from by reading the
* primary key.
*/
-class RebuildFuncIndex: public RebuildIndex
-{
- struct index_def *
- func_index_def_new(struct index_def *index_def, struct func *func)
+class RebuildFuncIndex: public RebuildIndex {
+ struct index_def *func_index_def_new(struct index_def *index_def,
+ struct func *func)
{
struct index_def *new_index_def = index_def_dup_xc(index_def);
index_def_set_func(new_index_def, func);
return new_index_def;
}
+
public:
RebuildFuncIndex(struct alter_space *alter,
- struct index_def *old_index_def_arg, struct func *func) :
- RebuildIndex(alter, func_index_def_new(old_index_def_arg, func),
- old_index_def_arg) {}
+ struct index_def *old_index_def_arg, struct func *func)
+ : RebuildIndex(alter,
+ func_index_def_new(old_index_def_arg, func),
+ old_index_def_arg)
+ {}
};
/** TruncateIndex - truncate an index. */
-class TruncateIndex: public AlterSpaceOp
-{
+class TruncateIndex: public AlterSpaceOp {
/** id of the index to truncate. */
uint32_t iid;
/**
@@ -1596,10 +1593,12 @@ class TruncateIndex: public AlterSpaceOp
*/
struct index *old_index;
struct index *new_index;
+
public:
TruncateIndex(struct alter_space *alter, uint32_t iid)
- : AlterSpaceOp(alter), iid(iid),
- old_index(NULL), new_index(NULL) {}
+ : AlterSpaceOp(alter), iid(iid), old_index(NULL),
+ new_index(NULL)
+ {}
virtual void prepare(struct alter_space *alter);
virtual void commit(struct alter_space *alter, int64_t signature);
virtual ~TruncateIndex();
@@ -1652,19 +1651,17 @@ TruncateIndex::~TruncateIndex()
* in alter_space_do(), i.e. when creating or dropping
* an index, altering a space.
*/
-class UpdateSchemaVersion: public AlterSpaceOp
-{
+class UpdateSchemaVersion: public AlterSpaceOp {
public:
- UpdateSchemaVersion(struct alter_space * alter)
- :AlterSpaceOp(alter) {}
+ UpdateSchemaVersion(struct alter_space *alter) : AlterSpaceOp(alter) {}
virtual void alter(struct alter_space *alter);
};
void
UpdateSchemaVersion::alter(struct alter_space *alter)
{
- (void)alter;
- ++schema_version;
+ (void)alter;
+ ++schema_version;
}
/**
@@ -1676,13 +1673,15 @@ UpdateSchemaVersion::alter(struct alter_space *alter)
* Finally in ::alter or ::rollback methods we only swap those
* lists securely.
*/
-class RebuildCkConstraints: public AlterSpaceOp
-{
+class RebuildCkConstraints: public AlterSpaceOp {
void space_swap_ck_constraint(struct space *old_space,
struct space *new_space);
+
public:
- RebuildCkConstraints(struct alter_space *alter) : AlterSpaceOp(alter),
- ck_constraint(RLIST_HEAD_INITIALIZER(ck_constraint)) {}
+ RebuildCkConstraints(struct alter_space *alter)
+ : AlterSpaceOp(alter),
+ ck_constraint(RLIST_HEAD_INITIALIZER(ck_constraint))
+ {}
struct rlist ck_constraint;
virtual void prepare(struct alter_space *alter);
virtual void alter(struct alter_space *alter);
@@ -1696,9 +1695,8 @@ RebuildCkConstraints::prepare(struct alter_space *alter)
struct ck_constraint *old_ck_constraint;
rlist_foreach_entry(old_ck_constraint, &alter->old_space->ck_constraint,
link) {
- struct ck_constraint *new_ck_constraint =
- ck_constraint_new(old_ck_constraint->def,
- alter->new_space->def);
+ struct ck_constraint *new_ck_constraint = ck_constraint_new(
+ old_ck_constraint->def, alter->new_space->def);
if (new_ck_constraint == NULL)
diag_raise();
rlist_add_entry(&ck_constraint, new_ck_constraint, link);
@@ -1748,10 +1746,10 @@ RebuildCkConstraints::~RebuildCkConstraints()
* ck constraints rebuild. This may be used in scenarios where
* space format doesn't change i.e. on index alter or space trim.
*/
-class MoveCkConstraints: public AlterSpaceOp
-{
+class MoveCkConstraints: public AlterSpaceOp {
void space_swap_ck_constraint(struct space *old_space,
struct space *new_space);
+
public:
MoveCkConstraints(struct alter_space *alter) : AlterSpaceOp(alter) {}
virtual void alter(struct alter_space *alter);
@@ -1762,8 +1760,7 @@ void
MoveCkConstraints::space_swap_ck_constraint(struct space *old_space,
struct space *new_space)
{
- rlist_swap(&new_space->ck_constraint,
- &old_space->ck_constraint);
+ rlist_swap(&new_space->ck_constraint, &old_space->ck_constraint);
SWAP(new_space->ck_constraint_trigger,
old_space->ck_constraint_trigger);
}
@@ -1822,13 +1819,13 @@ space_delete_constraint_id(struct space *space, const char *name)
}
/** CreateConstraintID - add a new constraint id to a space. */
-class CreateConstraintID: public AlterSpaceOp
-{
+class CreateConstraintID: public AlterSpaceOp {
struct constraint_id *new_id;
+
public:
CreateConstraintID(struct alter_space *alter, enum constraint_type type,
const char *name)
- :AlterSpaceOp(alter), new_id(NULL)
+ : AlterSpaceOp(alter), new_id(NULL)
{
new_id = constraint_id_new(type, name);
if (new_id == NULL)
@@ -1867,8 +1864,8 @@ CreateConstraintID::rollback(struct alter_space *alter)
void
CreateConstraintID::commit(struct alter_space *alter, int64_t signature)
{
- (void) alter;
- (void) signature;
+ (void)alter;
+ (void)signature;
/*
* Constraint id is added to the space, and should not be
* deleted from now on.
@@ -1883,16 +1880,16 @@ CreateConstraintID::~CreateConstraintID()
}
/** DropConstraintID - drop a constraint id from the space. */
-class DropConstraintID: public AlterSpaceOp
-{
+class DropConstraintID: public AlterSpaceOp {
struct constraint_id *old_id;
const char *name;
+
public:
DropConstraintID(struct alter_space *alter, const char *name)
- :AlterSpaceOp(alter), old_id(NULL), name(name)
+ : AlterSpaceOp(alter), old_id(NULL), name(name)
{}
virtual void alter(struct alter_space *alter);
- virtual void commit(struct alter_space *alter , int64_t signature);
+ virtual void commit(struct alter_space *alter, int64_t signature);
virtual void rollback(struct alter_space *alter);
};
@@ -1905,8 +1902,8 @@ DropConstraintID::alter(struct alter_space *alter)
void
DropConstraintID::commit(struct alter_space *alter, int64_t signature)
{
- (void) alter;
- (void) signature;
+ (void)alter;
+ (void)signature;
constraint_id_delete(old_id);
}
@@ -1927,7 +1924,7 @@ DropConstraintID::rollback(struct alter_space *alter)
static int
on_drop_space_commit(struct trigger *trigger, void *event)
{
- (void) event;
+ (void)event;
struct space *space = (struct space *)trigger->data;
space_delete(space);
return 0;
@@ -1941,7 +1938,7 @@ on_drop_space_commit(struct trigger *trigger, void *event)
static int
on_drop_space_rollback(struct trigger *trigger, void *event)
{
- (void) event;
+ (void)event;
struct space *space = (struct space *)trigger->data;
space_cache_replace(NULL, space);
return 0;
@@ -1957,7 +1954,7 @@ on_drop_space_rollback(struct trigger *trigger, void *event)
static int
on_create_space_rollback(struct trigger *trigger, void *event)
{
- (void) event;
+ (void)event;
struct space *space = (struct space *)trigger->data;
space_cache_replace(space, NULL);
space_delete(space);
@@ -1994,13 +1991,15 @@ alter_space_move_indexes(struct alter_space *alter, uint32_t begin,
index_def_update_optionality(new_def,
min_field_count);
try {
- (void) new ModifyIndex(alter, old_index, new_def);
+ (void)new ModifyIndex(alter, old_index,
+ new_def);
} catch (Exception *e) {
return -1;
}
} else {
try {
- (void) new MoveIndex(alter, old_def->iid);
+ (void)new MoveIndex(alter,
+ old_def->iid);
} catch (Exception *e) {
return -1;
}
@@ -2016,16 +2015,18 @@ alter_space_move_indexes(struct alter_space *alter, uint32_t begin,
old_def->type, &old_def->opts,
old_def->key_def, alter->pk_def);
index_def_update_optionality(new_def, min_field_count);
- auto guard = make_scoped_guard([=] { index_def_delete(new_def); });
+ auto guard =
+ make_scoped_guard([=] { index_def_delete(new_def); });
if (!index_def_change_requires_rebuild(old_index, new_def))
try {
- (void) new ModifyIndex(alter, old_index, new_def);
+ (void)new ModifyIndex(alter, old_index,
+ new_def);
} catch (Exception *e) {
return -1;
}
else
try {
- (void) new RebuildIndex(alter, new_def, old_def);
+ (void)new RebuildIndex(alter, new_def, old_def);
} catch (Exception *e) {
return -1;
}
@@ -2071,7 +2072,7 @@ update_view_references(struct Select *select, int update_value,
continue;
struct space *space = space_by_name(space_name);
if (space == NULL) {
- if (! suppress_error) {
+ if (!suppress_error) {
assert(not_found_space != NULL);
*not_found_space = tt_sprintf("%s", space_name);
sqlSrcListDelete(sql_get(), list);
@@ -2093,7 +2094,7 @@ update_view_references(struct Select *select, int update_value,
static int
on_create_view_commit(struct trigger *trigger, void *event)
{
- (void) event;
+ (void)event;
struct Select *select = (struct Select *)trigger->data;
sql_select_delete(sql_get(), select);
return 0;
@@ -2107,7 +2108,7 @@ on_create_view_commit(struct trigger *trigger, void *event)
static int
on_create_view_rollback(struct trigger *trigger, void *event)
{
- (void) event;
+ (void)event;
struct Select *select = (struct Select *)trigger->data;
update_view_references(select, -1, true, NULL);
sql_select_delete(sql_get(), select);
@@ -2122,7 +2123,7 @@ on_create_view_rollback(struct trigger *trigger, void *event)
static int
on_drop_view_commit(struct trigger *trigger, void *event)
{
- (void) event;
+ (void)event;
struct Select *select = (struct Select *)trigger->data;
sql_select_delete(sql_get(), select);
return 0;
@@ -2136,7 +2137,7 @@ on_drop_view_commit(struct trigger *trigger, void *event)
static int
on_drop_view_rollback(struct trigger *trigger, void *event)
{
- (void) event;
+ (void)event;
struct Select *select = (struct Select *)trigger->data;
update_view_references(select, 1, true, NULL);
sql_select_delete(sql_get(), select);
@@ -2196,7 +2197,7 @@ on_drop_view_rollback(struct trigger *trigger, void *event)
static int
on_replace_dd_space(struct trigger * /* trigger */, void *event)
{
- struct txn *txn = (struct txn *) event;
+ struct txn *txn = (struct txn *)event;
struct txn_stmt *stmt = txn_current_stmt(txn);
struct tuple *old_tuple = stmt->old_tuple;
struct tuple *new_tuple = stmt->new_tuple;
@@ -2221,15 +2222,14 @@ on_replace_dd_space(struct trigger * /* trigger */, void *event)
return -1;
struct space *old_space = space_by_id(old_id);
if (new_tuple != NULL && old_space == NULL) { /* INSERT */
- struct space_def *def =
- space_def_new_from_tuple(new_tuple, ER_CREATE_SPACE,
- region);
+ struct space_def *def = space_def_new_from_tuple(
+ new_tuple, ER_CREATE_SPACE, region);
if (def == NULL)
return -1;
auto def_guard =
make_scoped_guard([=] { space_def_delete(def); });
if (access_check_ddl(def->name, def->id, def->uid, SC_SPACE,
- PRIV_C) != 0)
+ PRIV_C) != 0)
return -1;
RLIST_HEAD(empty_list);
struct space *space = space_new(def, &empty_list);
@@ -2262,13 +2262,12 @@ on_replace_dd_space(struct trigger * /* trigger */, void *event)
return -1;
txn_stmt_on_rollback(stmt, on_rollback);
if (def->opts.is_view) {
- struct Select *select = sql_view_compile(sql_get(),
- def->opts.sql);
+ struct Select *select =
+ sql_view_compile(sql_get(), def->opts.sql);
if (select == NULL)
return -1;
- auto select_guard = make_scoped_guard([=] {
- sql_select_delete(sql_get(), select);
- });
+ auto select_guard = make_scoped_guard(
+ [=] { sql_select_delete(sql_get(), select); });
const char *disappeared_space;
if (update_view_references(select, 1, false,
&disappeared_space) != 0) {
@@ -2279,12 +2278,11 @@ on_replace_dd_space(struct trigger * /* trigger */, void *event)
update_view_references(select, -1, false,
&disappeared_space);
diag_set(ClientError, ER_NO_SUCH_SPACE,
- disappeared_space);
+ disappeared_space);
return -1;
}
- struct trigger *on_commit_view =
- txn_alter_trigger_new(on_create_view_commit,
- select);
+ struct trigger *on_commit_view = txn_alter_trigger_new(
+ on_create_view_commit, select);
if (on_commit_view == NULL)
return -1;
txn_stmt_on_commit(stmt, on_commit_view);
@@ -2298,37 +2296,39 @@ on_replace_dd_space(struct trigger * /* trigger */, void *event)
}
} else if (new_tuple == NULL) { /* DELETE */
if (access_check_ddl(old_space->def->name, old_space->def->id,
- old_space->def->uid, SC_SPACE, PRIV_D) != 0)
+ old_space->def->uid, SC_SPACE,
+ PRIV_D) != 0)
return -1;
/* Verify that the space is empty (has no indexes) */
if (old_space->index_count) {
diag_set(ClientError, ER_DROP_SPACE,
- space_name(old_space),
- "the space has indexes");
+ space_name(old_space),
+ "the space has indexes");
return -1;
}
bool out;
- if (schema_find_grants("space", old_space->def->id, &out) != 0) {
+ if (schema_find_grants("space", old_space->def->id, &out) !=
+ 0) {
return -1;
}
if (out) {
diag_set(ClientError, ER_DROP_SPACE,
- space_name(old_space),
- "the space has grants");
+ space_name(old_space), "the space has grants");
return -1;
}
- if (space_has_data(BOX_TRUNCATE_ID, 0, old_space->def->id, &out) != 0)
+ if (space_has_data(BOX_TRUNCATE_ID, 0, old_space->def->id,
+ &out) != 0)
return -1;
if (out) {
diag_set(ClientError, ER_DROP_SPACE,
- space_name(old_space),
- "the space has truncate record");
+ space_name(old_space),
+ "the space has truncate record");
return -1;
}
if (old_space->def->view_ref_count > 0) {
diag_set(ClientError, ER_DROP_SPACE,
- space_name(old_space),
- "other views depend on this space");
+ space_name(old_space),
+ "other views depend on this space");
return -1;
}
/*
@@ -2340,14 +2340,14 @@ on_replace_dd_space(struct trigger * /* trigger */, void *event)
*/
if (!rlist_empty(&old_space->child_fk_constraint)) {
diag_set(ClientError, ER_DROP_SPACE,
- space_name(old_space),
- "the space has foreign key constraints");
+ space_name(old_space),
+ "the space has foreign key constraints");
return -1;
}
if (!rlist_empty(&old_space->ck_constraint)) {
diag_set(ClientError, ER_DROP_SPACE,
- space_name(old_space),
- "the space has check constraints");
+ space_name(old_space),
+ "the space has check constraints");
return -1;
}
/**
@@ -2367,23 +2367,20 @@ on_replace_dd_space(struct trigger * /* trigger */, void *event)
if (on_commit == NULL)
return -1;
txn_stmt_on_commit(stmt, on_commit);
- struct trigger *on_rollback =
- txn_alter_trigger_new(on_drop_space_rollback, old_space);
+ struct trigger *on_rollback = txn_alter_trigger_new(
+ on_drop_space_rollback, old_space);
if (on_rollback == NULL)
return -1;
txn_stmt_on_rollback(stmt, on_rollback);
if (old_space->def->opts.is_view) {
- struct Select *select =
- sql_view_compile(sql_get(),
- old_space->def->opts.sql);
+ struct Select *select = sql_view_compile(
+ sql_get(), old_space->def->opts.sql);
if (select == NULL)
return -1;
- auto select_guard = make_scoped_guard([=] {
- sql_select_delete(sql_get(), select);
- });
- struct trigger *on_commit_view =
- txn_alter_trigger_new(on_drop_view_commit,
- select);
+ auto select_guard = make_scoped_guard(
+ [=] { sql_select_delete(sql_get(), select); });
+ struct trigger *on_commit_view = txn_alter_trigger_new(
+ on_drop_view_commit, select);
if (on_commit_view == NULL)
return -1;
txn_stmt_on_commit(stmt, on_commit_view);
@@ -2404,47 +2401,47 @@ on_replace_dd_space(struct trigger * /* trigger */, void *event)
"view can not be altered");
return -1;
}
- struct space_def *def =
- space_def_new_from_tuple(new_tuple, ER_ALTER_SPACE,
- region);
+ struct space_def *def = space_def_new_from_tuple(
+ new_tuple, ER_ALTER_SPACE, region);
if (def == NULL)
return -1;
auto def_guard =
make_scoped_guard([=] { space_def_delete(def); });
if (access_check_ddl(def->name, def->id, def->uid, SC_SPACE,
- PRIV_A) != 0)
+ PRIV_A) != 0)
return -1;
if (def->id != space_id(old_space)) {
diag_set(ClientError, ER_ALTER_SPACE,
- space_name(old_space),
- "space id is immutable");
+ space_name(old_space),
+ "space id is immutable");
return -1;
}
- if (strcmp(def->engine_name, old_space->def->engine_name) != 0) {
+ if (strcmp(def->engine_name, old_space->def->engine_name) !=
+ 0) {
diag_set(ClientError, ER_ALTER_SPACE,
- space_name(old_space),
- "can not change space engine");
+ space_name(old_space),
+ "can not change space engine");
return -1;
}
if (def->opts.group_id != space_group_id(old_space)) {
diag_set(ClientError, ER_ALTER_SPACE,
- space_name(old_space),
- "replication group is immutable");
+ space_name(old_space),
+ "replication group is immutable");
return -1;
}
if (def->opts.is_view != old_space->def->opts.is_view) {
diag_set(ClientError, ER_ALTER_SPACE,
- space_name(old_space),
- "can not convert a space to "
- "a view and vice versa");
+ space_name(old_space),
+ "can not convert a space to "
+ "a view and vice versa");
return -1;
}
if (strcmp(def->name, old_space->def->name) != 0 &&
old_space->def->view_ref_count > 0) {
diag_set(ClientError, ER_ALTER_SPACE,
- space_name(old_space),
- "can not rename space which is referenced by "
- "view");
+ space_name(old_space),
+ "can not rename space which is referenced by "
+ "view");
return -1;
}
/*
@@ -2455,7 +2452,7 @@ on_replace_dd_space(struct trigger * /* trigger */, void *event)
if (alter == NULL)
return -1;
auto alter_guard =
- make_scoped_guard([=] {alter_space_delete(alter);});
+ make_scoped_guard([=] { alter_space_delete(alter); });
/*
* Calculate a new min_field_count. It can be
* changed by resetting space:format(), if an old
@@ -2475,26 +2472,24 @@ on_replace_dd_space(struct trigger * /* trigger */, void *event)
}
for (uint32_t i = 0; i < old_space->index_count; ++i)
keys[i] = old_space->index[i]->def->key_def;
- alter->new_min_field_count =
- tuple_format_min_field_count(keys,
- old_space->index_count,
- def->fields,
- def->field_count);
+ alter->new_min_field_count = tuple_format_min_field_count(
+ keys, old_space->index_count, def->fields,
+ def->field_count);
try {
- (void) new CheckSpaceFormat(alter);
- (void) new ModifySpace(alter, def);
- (void) new RebuildCkConstraints(alter);
+ (void)new CheckSpaceFormat(alter);
+ (void)new ModifySpace(alter, def);
+ (void)new RebuildCkConstraints(alter);
} catch (Exception *e) {
return -1;
}
def_guard.is_active = false;
/* Create MoveIndex ops for all space indexes. */
if (alter_space_move_indexes(alter, 0,
- old_space->index_id_max + 1) != 0)
+ old_space->index_id_max + 1) != 0)
return -1;
try {
/* Remember to update schema_version. */
- (void) new UpdateSchemaVersion(alter);
+ (void)new UpdateSchemaVersion(alter);
alter_space_do(stmt, alter);
} catch (Exception *e) {
return -1;
@@ -2565,7 +2560,7 @@ index_is_used_by_fk_constraint(struct rlist *fk_list, uint32_t iid)
static int
on_replace_dd_index(struct trigger * /* trigger */, void *event)
{
- struct txn *txn = (struct txn *) event;
+ struct txn *txn = (struct txn *)event;
struct txn_stmt *stmt = txn_current_stmt(txn);
struct tuple *old_tuple = stmt->old_tuple;
struct tuple *new_tuple = stmt->new_tuple;
@@ -2581,14 +2576,14 @@ on_replace_dd_index(struct trigger * /* trigger */, void *event)
return -1;
if (old_space->def->opts.is_view) {
diag_set(ClientError, ER_ALTER_SPACE, space_name(old_space),
- "can not add index on a view");
+ "can not add index on a view");
return -1;
}
enum priv_type priv_type = new_tuple ? PRIV_C : PRIV_D;
if (old_tuple && new_tuple)
priv_type = PRIV_A;
if (access_check_ddl(old_space->def->name, old_space->def->id,
- old_space->def->uid, SC_SPACE, priv_type) != 0)
+ old_space->def->uid, SC_SPACE, priv_type) != 0)
return -1;
struct index *old_index = space_index(old_space, iid);
struct index_def *old_def = old_index != NULL ? old_index->def : NULL;
@@ -2602,7 +2597,7 @@ on_replace_dd_index(struct trigger * /* trigger */, void *event)
*/
if (space_is_system(old_space)) {
diag_set(ClientError, ER_LAST_DROP,
- space_name(old_space));
+ space_name(old_space));
return -1;
}
/*
@@ -2610,7 +2605,7 @@ on_replace_dd_index(struct trigger * /* trigger */, void *event)
*/
if (old_space->index_count > 1) {
diag_set(ClientError, ER_DROP_PRIMARY_KEY,
- space_name(old_space));
+ space_name(old_space));
return -1;
}
/*
@@ -2618,9 +2613,9 @@ on_replace_dd_index(struct trigger * /* trigger */, void *event)
*/
if (old_space->sequence != NULL) {
diag_set(ClientError, ER_ALTER_SPACE,
- space_name(old_space),
- "can not drop primary key while "
- "space sequence exists");
+ space_name(old_space),
+ "can not drop primary key while "
+ "space sequence exists");
return -1;
}
}
@@ -2630,9 +2625,8 @@ on_replace_dd_index(struct trigger * /* trigger */, void *event)
* A secondary index can not be created without
* a primary key.
*/
- diag_set(ClientError, ER_ALTER_SPACE,
- space_name(old_space),
- "can not add a secondary key before primary");
+ diag_set(ClientError, ER_ALTER_SPACE, space_name(old_space),
+ "can not add a secondary key before primary");
return -1;
}
@@ -2655,21 +2649,21 @@ on_replace_dd_index(struct trigger * /* trigger */, void *event)
* Can't drop index if foreign key constraints
* references this index.
*/
- if (index_is_used_by_fk_constraint(&old_space->parent_fk_constraint,
- iid)) {
+ if (index_is_used_by_fk_constraint(
+ &old_space->parent_fk_constraint, iid)) {
diag_set(ClientError, ER_ALTER_SPACE,
- space_name(old_space),
- "can not drop a referenced index");
+ space_name(old_space),
+ "can not drop a referenced index");
return -1;
}
if (alter_space_move_indexes(alter, 0, iid) != 0)
return -1;
try {
if (old_index->def->opts.is_unique) {
- (void) new DropConstraintID(alter,
- old_def->name);
+ (void)new DropConstraintID(alter,
+ old_def->name);
}
- (void) new DropIndex(alter, old_index);
+ (void)new DropIndex(alter, old_index);
} catch (Exception *e) {
return -1;
}
@@ -2685,11 +2679,13 @@ on_replace_dd_index(struct trigger * /* trigger */, void *event)
index_def_update_optionality(def, alter->new_min_field_count);
try {
if (def->opts.is_unique) {
- (void) new CreateConstraintID(
- alter, iid == 0 ? CONSTRAINT_TYPE_PK :
- CONSTRAINT_TYPE_UNIQUE, def->name);
+ (void)new CreateConstraintID(
+ alter,
+ iid == 0 ? CONSTRAINT_TYPE_PK :
+ CONSTRAINT_TYPE_UNIQUE,
+ def->name);
}
- (void) new CreateIndex(alter, def);
+ (void)new CreateIndex(alter, def);
} catch (Exception *e) {
index_def_delete(def);
return -1;
@@ -2708,10 +2704,10 @@ on_replace_dd_index(struct trigger * /* trigger */, void *event)
* becoming unique (i.e. constraint), or when a
* unique index's name is under change.
*/
- bool do_new_constraint_id =
- !old_def->opts.is_unique && index_def->opts.is_unique;
- bool do_drop_constraint_id =
- old_def->opts.is_unique && !index_def->opts.is_unique;
+ bool do_new_constraint_id = !old_def->opts.is_unique &&
+ index_def->opts.is_unique;
+ bool do_drop_constraint_id = old_def->opts.is_unique &&
+ !index_def->opts.is_unique;
if (old_def->opts.is_unique && index_def->opts.is_unique &&
strcmp(index_def->name, old_def->name) != 0) {
@@ -2720,13 +2716,13 @@ on_replace_dd_index(struct trigger * /* trigger */, void *event)
}
try {
if (do_new_constraint_id) {
- (void) new CreateConstraintID(
+ (void)new CreateConstraintID(
alter, CONSTRAINT_TYPE_UNIQUE,
index_def->name);
}
if (do_drop_constraint_id) {
- (void) new DropConstraintID(alter,
- old_def->name);
+ (void)new DropConstraintID(alter,
+ old_def->name);
}
} catch (Exception *e) {
return -1;
@@ -2764,11 +2760,9 @@ on_replace_dd_index(struct trigger * /* trigger */, void *event)
keys[j++] = index_def->key_def;
}
struct space_def *def = old_space->def;
- alter->new_min_field_count =
- tuple_format_min_field_count(keys,
- old_space->index_count,
- def->fields,
- def->field_count);
+ alter->new_min_field_count = tuple_format_min_field_count(
+ keys, old_space->index_count, def->fields,
+ def->field_count);
index_def_update_optionality(index_def,
alter->new_min_field_count);
if (alter_space_move_indexes(alter, 0, iid))
@@ -2776,26 +2770,26 @@ on_replace_dd_index(struct trigger * /* trigger */, void *event)
if (index_def_cmp(index_def, old_index->def) == 0) {
/* Index is not changed so just move it. */
try {
- (void) new MoveIndex(alter, old_index->def->iid);
+ (void)new MoveIndex(alter, old_index->def->iid);
} catch (Exception *e) {
return -1;
}
} else if (index_def_change_requires_rebuild(old_index,
index_def)) {
- if (index_is_used_by_fk_constraint(&old_space->parent_fk_constraint,
- iid)) {
+ if (index_is_used_by_fk_constraint(
+ &old_space->parent_fk_constraint, iid)) {
diag_set(ClientError, ER_ALTER_SPACE,
- space_name(old_space),
- "can not alter a referenced index");
+ space_name(old_space),
+ "can not alter a referenced index");
return -1;
}
/*
* Operation demands an index rebuild.
*/
try {
- (void) new RebuildIndex(alter, index_def,
- old_index->def);
+ (void)new RebuildIndex(alter, index_def,
+ old_index->def);
} catch (Exception *e) {
return -1;
}
@@ -2807,8 +2801,9 @@ on_replace_dd_index(struct trigger * /* trigger */, void *event)
* in the space conform to the new format.
*/
try {
- (void) new CheckSpaceFormat(alter);
- (void) new ModifyIndex(alter, old_index, index_def);
+ (void)new CheckSpaceFormat(alter);
+ (void)new ModifyIndex(alter, old_index,
+ index_def);
} catch (Exception *e) {
return -1;
}
@@ -2819,12 +2814,13 @@ on_replace_dd_index(struct trigger * /* trigger */, void *event)
* Create MoveIndex ops for the remaining indexes in the
* old space.
*/
- if (alter_space_move_indexes(alter, iid + 1, old_space->index_id_max + 1) != 0)
+ if (alter_space_move_indexes(alter, iid + 1,
+ old_space->index_id_max + 1) != 0)
return -1;
try {
- (void) new MoveCkConstraints(alter);
+ (void)new MoveCkConstraints(alter);
/* Add an op to update schema_version on commit. */
- (void) new UpdateSchemaVersion(alter);
+ (void)new UpdateSchemaVersion(alter);
alter_space_do(stmt, alter);
} catch (Exception *e) {
return -1;
@@ -2847,7 +2843,7 @@ on_replace_dd_index(struct trigger * /* trigger */, void *event)
static int
on_replace_dd_truncate(struct trigger * /* trigger */, void *event)
{
- struct txn *txn = (struct txn *) event;
+ struct txn *txn = (struct txn *)event;
struct txn_stmt *stmt = txn_current_stmt(txn);
struct tuple *new_tuple = stmt->new_tuple;
@@ -2857,7 +2853,8 @@ on_replace_dd_truncate(struct trigger * /* trigger */, void *event)
}
uint32_t space_id;
- if (tuple_field_u32(new_tuple, BOX_TRUNCATE_FIELD_SPACE_ID, &space_id) != 0)
+ if (tuple_field_u32(new_tuple, BOX_TRUNCATE_FIELD_SPACE_ID,
+ &space_id) != 0)
return -1;
struct space *old_space = space_cache_find(space_id);
if (old_space == NULL)
@@ -2878,7 +2875,7 @@ on_replace_dd_truncate(struct trigger * /* trigger */, void *event)
*/
if (space_is_system(old_space)) {
diag_set(ClientError, ER_TRUNCATE_SYSTEM_SPACE,
- space_name(old_space));
+ space_name(old_space));
return -1;
}
@@ -2910,10 +2907,10 @@ on_replace_dd_truncate(struct trigger * /* trigger */, void *event)
*/
for (uint32_t i = 0; i < old_space->index_count; i++) {
struct index *old_index = old_space->index[i];
- (void) new TruncateIndex(alter, old_index->def->iid);
+ (void)new TruncateIndex(alter, old_index->def->iid);
}
- (void) new MoveCkConstraints(alter);
+ (void)new MoveCkConstraints(alter);
alter_space_do(stmt, alter);
} catch (Exception *e) {
return -1;
@@ -2935,7 +2932,7 @@ user_has_data(struct user *user, bool *has_data)
* For _priv also check that the user has no grants.
*/
uint32_t indexes[] = { 1, 1, 1, 1, 0 };
- uint32_t count = sizeof(spaces)/sizeof(*spaces);
+ uint32_t count = sizeof(spaces) / sizeof(*spaces);
bool out;
for (uint32_t i = 0; i < count; i++) {
if (space_has_data(spaces[i], indexes[i], uid, &out) != 0)
@@ -2945,7 +2942,7 @@ user_has_data(struct user *user, bool *has_data)
return 0;
}
}
- if (! user_map_is_empty(&user->users)) {
+ if (!user_map_is_empty(&user->users)) {
*has_data = true;
return 0;
}
@@ -2979,9 +2976,9 @@ user_def_fill_auth_data(struct user_def *user, const char *auth_data)
}
if (mp_typeof(*auth_data) != MP_MAP) {
/** Prevent users from making silly mistakes */
- diag_set(ClientError, ER_CREATE_USER,
- user->name, "invalid password format, "
- "use box.schema.user.passwd() to reset password");
+ diag_set(ClientError, ER_CREATE_USER, user->name,
+ "invalid password format, "
+ "use box.schema.user.passwd() to reset password");
return -1;
}
uint32_t mech_count = mp_decode_map(&auth_data);
@@ -2999,13 +2996,14 @@ user_def_fill_auth_data(struct user_def *user, const char *auth_data)
}
const char *hash2_base64 = mp_decode_str(&auth_data, &len);
if (len != 0 && len != SCRAMBLE_BASE64_SIZE) {
- diag_set(ClientError, ER_CREATE_USER,
- user->name, "invalid user password");
+ diag_set(ClientError, ER_CREATE_USER, user->name,
+ "invalid user password");
return -1;
}
if (user->uid == GUEST) {
/** Guest user is permitted to have empty password */
- if (strncmp(hash2_base64, CHAP_SHA1_EMPTY_PASSWORD, len)) {
+ if (strncmp(hash2_base64, CHAP_SHA1_EMPTY_PASSWORD,
+ len)) {
diag_set(ClientError, ER_GUEST_USER_PASSWORD);
return -1;
}
@@ -3022,19 +3020,19 @@ static struct user_def *
user_def_new_from_tuple(struct tuple *tuple)
{
uint32_t name_len;
- const char *name = tuple_field_str(tuple, BOX_USER_FIELD_NAME,
- &name_len);
+ const char *name =
+ tuple_field_str(tuple, BOX_USER_FIELD_NAME, &name_len);
if (name == NULL)
return NULL;
if (name_len > BOX_NAME_MAX) {
diag_set(ClientError, ER_CREATE_USER,
- tt_cstr(name, BOX_INVALID_NAME_MAX),
- "user name is too long");
+ tt_cstr(name, BOX_INVALID_NAME_MAX),
+ "user name is too long");
return NULL;
}
size_t size = user_def_sizeof(name_len);
/* Use calloc: in case user password is empty, fill it with \0 */
- struct user_def *user = (struct user_def *) malloc(size);
+ struct user_def *user = (struct user_def *)malloc(size);
if (user == NULL) {
diag_set(OutOfMemory, size, "malloc", "user");
return NULL;
@@ -3051,8 +3049,8 @@ user_def_new_from_tuple(struct tuple *tuple)
memcpy(user->name, name, name_len);
user->name[name_len] = 0;
if (user->type != SC_ROLE && user->type != SC_USER) {
- diag_set(ClientError, ER_CREATE_USER,
- user->name, "unknown user type");
+ diag_set(ClientError, ER_CREATE_USER, user->name,
+ "unknown user type");
return NULL;
}
if (identifier_check(user->name, name_len) != 0)
@@ -3079,8 +3077,8 @@ user_def_new_from_tuple(struct tuple *tuple)
}
if (!is_auth_empty && user->type == SC_ROLE) {
diag_set(ClientError, ER_CREATE_ROLE, user->name,
- "authentication data can not be set for a "\
- "role");
+ "authentication data can not be set for a "
+ "role");
return NULL;
}
if (user_def_fill_auth_data(user, auth_data) != 0)
@@ -3125,7 +3123,7 @@ user_cache_alter_user(struct trigger *trigger, void * /* event */)
static int
on_replace_dd_user(struct trigger * /* trigger */, void *event)
{
- struct txn *txn = (struct txn *) event;
+ struct txn *txn = (struct txn *)event;
struct txn_stmt *stmt = txn_current_stmt(txn);
struct tuple *old_tuple = stmt->old_tuple;
struct tuple *new_tuple = stmt->new_tuple;
@@ -3139,31 +3137,30 @@ on_replace_dd_user(struct trigger * /* trigger */, void *event)
struct user_def *user = user_def_new_from_tuple(new_tuple);
if (user == NULL)
return -1;
- if (access_check_ddl(user->name, user->uid, user->owner, user->type,
- PRIV_C) != 0)
+ if (access_check_ddl(user->name, user->uid, user->owner,
+ user->type, PRIV_C) != 0)
return -1;
auto def_guard = make_scoped_guard([=] { free(user); });
try {
- (void) user_cache_replace(user);
+ (void)user_cache_replace(user);
} catch (Exception *e) {
return -1;
}
def_guard.is_active = false;
- struct trigger *on_rollback =
- txn_alter_trigger_new(user_cache_remove_user, new_tuple);
+ struct trigger *on_rollback = txn_alter_trigger_new(
+ user_cache_remove_user, new_tuple);
if (on_rollback == NULL)
return -1;
txn_stmt_on_rollback(stmt, on_rollback);
} else if (new_tuple == NULL) { /* DELETE */
if (access_check_ddl(old_user->def->name, old_user->def->uid,
- old_user->def->owner, old_user->def->type,
- PRIV_D) != 0)
+ old_user->def->owner, old_user->def->type,
+ PRIV_D) != 0)
return -1;
/* Can't drop guest or super user */
- if (uid <= (uint32_t) BOX_SYSTEM_USER_ID_MAX || uid == SUPER) {
- diag_set(ClientError, ER_DROP_USER,
- old_user->def->name,
- "the user or the role is a system");
+ if (uid <= (uint32_t)BOX_SYSTEM_USER_ID_MAX || uid == SUPER) {
+ diag_set(ClientError, ER_DROP_USER, old_user->def->name,
+ "the user or the role is a system");
return -1;
}
/*
@@ -3175,8 +3172,8 @@ on_replace_dd_user(struct trigger * /* trigger */, void *event)
return -1;
}
if (has_data) {
- diag_set(ClientError, ER_DROP_USER,
- old_user->def->name, "the user has objects");
+ diag_set(ClientError, ER_DROP_USER, old_user->def->name,
+ "the user has objects");
return -1;
}
user_cache_delete(uid);
@@ -3196,7 +3193,7 @@ on_replace_dd_user(struct trigger * /* trigger */, void *event)
if (user == NULL)
return -1;
if (access_check_ddl(user->name, user->uid, user->uid,
- old_user->def->type, PRIV_A) != 0)
+ old_user->def->type, PRIV_A) != 0)
return -1;
auto def_guard = make_scoped_guard([=] { free(user); });
try {
@@ -3241,8 +3238,8 @@ func_def_new_from_tuple(struct tuple *tuple)
return NULL;
if (name_len > BOX_NAME_MAX) {
diag_set(ClientError, ER_CREATE_FUNCTION,
- tt_cstr(name, BOX_INVALID_NAME_MAX),
- "function name is too long");
+ tt_cstr(name, BOX_INVALID_NAME_MAX),
+ "function name is too long");
return NULL;
}
if (identifier_check(name, name_len) != 0)
@@ -3256,24 +3253,24 @@ func_def_new_from_tuple(struct tuple *tuple)
if (comment == NULL)
return NULL;
uint32_t len;
- const char *routine_type = tuple_field_str(tuple,
- BOX_FUNC_FIELD_ROUTINE_TYPE, &len);
+ const char *routine_type = tuple_field_str(
+ tuple, BOX_FUNC_FIELD_ROUTINE_TYPE, &len);
if (routine_type == NULL)
return NULL;
if (len != strlen("function") ||
strncasecmp(routine_type, "function", len) != 0) {
diag_set(ClientError, ER_CREATE_FUNCTION, name,
- "unsupported routine_type value");
+ "unsupported routine_type value");
return NULL;
}
- const char *sql_data_access = tuple_field_str(tuple,
- BOX_FUNC_FIELD_SQL_DATA_ACCESS, &len);
+ const char *sql_data_access = tuple_field_str(
+ tuple, BOX_FUNC_FIELD_SQL_DATA_ACCESS, &len);
if (sql_data_access == NULL)
return NULL;
if (len != strlen("none") ||
strncasecmp(sql_data_access, "none", len) != 0) {
diag_set(ClientError, ER_CREATE_FUNCTION, name,
- "unsupported sql_data_access value");
+ "unsupported sql_data_access value");
return NULL;
}
bool is_null_call;
@@ -3282,7 +3279,7 @@ func_def_new_from_tuple(struct tuple *tuple)
return NULL;
if (is_null_call != true) {
diag_set(ClientError, ER_CREATE_FUNCTION, name,
- "unsupported is_null_call value");
+ "unsupported is_null_call value");
return NULL;
}
} else {
@@ -3294,7 +3291,7 @@ func_def_new_from_tuple(struct tuple *tuple)
uint32_t body_offset, comment_offset;
uint32_t def_sz = func_def_sizeof(name_len, body_len, comment_len,
&body_offset, &comment_offset);
- struct func_def *def = (struct func_def *) malloc(def_sz);
+ struct func_def *def = (struct func_def *)malloc(def_sz);
if (def == NULL) {
diag_set(OutOfMemory, def_sz, "malloc", "def");
return NULL;
@@ -3304,7 +3301,7 @@ func_def_new_from_tuple(struct tuple *tuple)
return NULL;
if (def->fid > BOX_FUNCTION_MAX) {
diag_set(ClientError, ER_CREATE_FUNCTION,
- tt_cstr(name, name_len), "function id is too big");
+ tt_cstr(name, name_len), "function id is too big");
return NULL;
}
func_opts_create(&def->opts);
@@ -3341,8 +3338,8 @@ func_def_new_from_tuple(struct tuple *tuple)
def->language = STR2ENUM(func_language, language);
if (def->language == func_language_MAX ||
def->language == FUNC_LANGUAGE_SQL) {
- diag_set(ClientError, ER_FUNCTION_LANGUAGE,
- language, def->name);
+ diag_set(ClientError, ER_FUNCTION_LANGUAGE, language,
+ def->name);
return NULL;
}
} else {
@@ -3362,20 +3359,20 @@ func_def_new_from_tuple(struct tuple *tuple)
return NULL;
def->returns = STR2ENUM(field_type, returns);
if (def->returns == field_type_MAX) {
- diag_set(ClientError, ER_CREATE_FUNCTION,
- def->name, "invalid returns value");
+ diag_set(ClientError, ER_CREATE_FUNCTION, def->name,
+ "invalid returns value");
return NULL;
}
def->exports.all = 0;
- const char *exports = tuple_field_with_type(tuple,
- BOX_FUNC_FIELD_EXPORTS, MP_ARRAY);
+ const char *exports = tuple_field_with_type(
+ tuple, BOX_FUNC_FIELD_EXPORTS, MP_ARRAY);
if (exports == NULL)
return NULL;
uint32_t cnt = mp_decode_array(&exports);
for (uint32_t i = 0; i < cnt; i++) {
if (mp_typeof(*exports) != MP_STR) {
diag_set(ClientError, ER_FIELD_TYPE,
- int2str(BOX_FUNC_FIELD_EXPORTS + 1),
+ int2str(BOX_FUNC_FIELD_EXPORTS + 1),
mp_type_strs[MP_STR]);
return NULL;
}
@@ -3390,7 +3387,7 @@ func_def_new_from_tuple(struct tuple *tuple)
break;
default:
diag_set(ClientError, ER_CREATE_FUNCTION,
- def->name, "invalid exports value");
+ def->name, "invalid exports value");
return NULL;
}
}
@@ -3400,19 +3397,19 @@ func_def_new_from_tuple(struct tuple *tuple)
return NULL;
def->aggregate = STR2ENUM(func_aggregate, aggregate);
if (def->aggregate == func_aggregate_MAX) {
- diag_set(ClientError, ER_CREATE_FUNCTION,
- def->name, "invalid aggregate value");
+ diag_set(ClientError, ER_CREATE_FUNCTION, def->name,
+ "invalid aggregate value");
return NULL;
}
- const char *param_list = tuple_field_with_type(tuple,
- BOX_FUNC_FIELD_PARAM_LIST, MP_ARRAY);
+ const char *param_list = tuple_field_with_type(
+ tuple, BOX_FUNC_FIELD_PARAM_LIST, MP_ARRAY);
if (param_list == NULL)
return NULL;
uint32_t argc = mp_decode_array(¶m_list);
for (uint32_t i = 0; i < argc; i++) {
- if (mp_typeof(*param_list) != MP_STR) {
+ if (mp_typeof(*param_list) != MP_STR) {
diag_set(ClientError, ER_FIELD_TYPE,
- int2str(BOX_FUNC_FIELD_PARAM_LIST + 1),
+ int2str(BOX_FUNC_FIELD_PARAM_LIST + 1),
mp_type_strs[MP_STR]);
return NULL;
}
@@ -3420,7 +3417,7 @@ func_def_new_from_tuple(struct tuple *tuple)
const char *str = mp_decode_str(¶m_list, &len);
if (STRN2ENUM(field_type, str, len) == field_type_MAX) {
diag_set(ClientError, ER_CREATE_FUNCTION,
- def->name, "invalid argument type");
+ def->name, "invalid argument type");
return NULL;
}
}
@@ -3485,7 +3482,7 @@ on_drop_func_rollback(struct trigger *trigger, void * /* event */)
static int
on_replace_dd_func(struct trigger * /* trigger */, void *event)
{
- struct txn *txn = (struct txn *) event;
+ struct txn *txn = (struct txn *)event;
struct txn_stmt *stmt = txn_current_stmt(txn);
struct tuple *old_tuple = stmt->old_tuple;
struct tuple *new_tuple = stmt->new_tuple;
@@ -3501,7 +3498,7 @@ on_replace_dd_func(struct trigger * /* trigger */, void *event)
return -1;
auto def_guard = make_scoped_guard([=] { free(def); });
if (access_check_ddl(def->name, def->fid, def->uid, SC_FUNCTION,
- PRIV_C) != 0)
+ PRIV_C) != 0)
return -1;
struct trigger *on_rollback =
txn_alter_trigger_new(on_create_func_rollback, NULL);
@@ -3516,7 +3513,7 @@ on_replace_dd_func(struct trigger * /* trigger */, void *event)
txn_stmt_on_rollback(stmt, on_rollback);
if (trigger_run(&on_alter_func, func) != 0)
return -1;
- } else if (new_tuple == NULL) { /* DELETE */
+ } else if (new_tuple == NULL) { /* DELETE */
uint32_t uid;
if (func_def_get_ids_from_tuple(old_tuple, &fid, &uid) != 0)
return -1;
@@ -3525,32 +3522,34 @@ on_replace_dd_func(struct trigger * /* trigger */, void *event)
* who created it or a superuser.
*/
if (access_check_ddl(old_func->def->name, fid, uid, SC_FUNCTION,
- PRIV_D) != 0)
+ PRIV_D) != 0)
return -1;
/* Can only delete func if it has no grants. */
bool out;
- if (schema_find_grants("function", old_func->def->fid, &out) != 0) {
+ if (schema_find_grants("function", old_func->def->fid, &out) !=
+ 0) {
return -1;
}
if (out) {
diag_set(ClientError, ER_DROP_FUNCTION,
- (unsigned) old_func->def->uid,
- "function has grants");
+ (unsigned)old_func->def->uid,
+ "function has grants");
return -1;
}
- if (space_has_data(BOX_FUNC_INDEX_ID, 1, old_func->def->fid, &out) != 0)
+ if (space_has_data(BOX_FUNC_INDEX_ID, 1, old_func->def->fid,
+ &out) != 0)
return -1;
if (old_func != NULL && out) {
diag_set(ClientError, ER_DROP_FUNCTION,
- (unsigned) old_func->def->uid,
- "function has references");
+ (unsigned)old_func->def->uid,
+ "function has references");
return -1;
}
/* Can't' drop a builtin function. */
if (old_func->def->language == FUNC_LANGUAGE_SQL_BUILTIN) {
diag_set(ClientError, ER_DROP_FUNCTION,
- (unsigned) old_func->def->uid,
- "function is SQL built-in");
+ (unsigned)old_func->def->uid,
+ "function is SQL built-in");
return -1;
}
struct trigger *on_commit =
@@ -3564,7 +3563,7 @@ on_replace_dd_func(struct trigger * /* trigger */, void *event)
txn_stmt_on_rollback(stmt, on_rollback);
if (trigger_run(&on_alter_func, old_func) != 0)
return -1;
- } else { /* UPDATE, REPLACE */
+ } else { /* UPDATE, REPLACE */
assert(new_tuple != NULL && old_tuple != NULL);
/**
* Allow an alter that doesn't change the
@@ -3581,7 +3580,7 @@ on_replace_dd_func(struct trigger * /* trigger */, void *event)
return -1;
if (func_def_cmp(new_def, old_def) != 0) {
diag_set(ClientError, ER_UNSUPPORTED, "function",
- "alter");
+ "alter");
return -1;
}
}
@@ -3602,31 +3601,32 @@ coll_id_def_new_from_tuple(struct tuple *tuple, struct coll_id_def *def)
def->name_len = name_len;
if (name_len > BOX_NAME_MAX) {
diag_set(ClientError, ER_CANT_CREATE_COLLATION,
- "collation name is too long");
+ "collation name is too long");
return -1;
}
if (identifier_check(def->name, name_len) != 0)
return -1;
- if (tuple_field_u32(tuple, BOX_COLLATION_FIELD_UID, &(def->owner_id)) != 0)
+ if (tuple_field_u32(tuple, BOX_COLLATION_FIELD_UID, &(def->owner_id)) !=
+ 0)
return -1;
- const char *type = tuple_field_str(tuple, BOX_COLLATION_FIELD_TYPE,
- &type_len);
+ const char *type =
+ tuple_field_str(tuple, BOX_COLLATION_FIELD_TYPE, &type_len);
if (type == NULL)
return -1;
struct coll_def *base = &def->base;
base->type = STRN2ENUM(coll_type, type, type_len);
if (base->type == coll_type_MAX) {
diag_set(ClientError, ER_CANT_CREATE_COLLATION,
- "unknown collation type");
+ "unknown collation type");
return -1;
}
- const char *locale = tuple_field_str(tuple, BOX_COLLATION_FIELD_LOCALE,
- &locale_len);
+ const char *locale =
+ tuple_field_str(tuple, BOX_COLLATION_FIELD_LOCALE, &locale_len);
if (locale == NULL)
return -1;
if (locale_len > COLL_LOCALE_LEN_MAX) {
diag_set(ClientError, ER_CANT_CREATE_COLLATION,
- "collation locale is too long");
+ "collation locale is too long");
return -1;
}
if (locale_len > 0)
@@ -3634,62 +3634,62 @@ coll_id_def_new_from_tuple(struct tuple *tuple, struct coll_id_def *def)
return -1;
snprintf(base->locale, sizeof(base->locale), "%.*s", locale_len,
locale);
- const char *options = tuple_field_with_type(tuple,
- BOX_COLLATION_FIELD_OPTIONS, MP_MAP);
+ const char *options = tuple_field_with_type(
+ tuple, BOX_COLLATION_FIELD_OPTIONS, MP_MAP);
if (options == NULL)
return -1;
if (opts_decode(&base->icu, coll_icu_opts_reg, &options,
- ER_WRONG_COLLATION_OPTIONS,
- BOX_COLLATION_FIELD_OPTIONS, NULL) != 0)
+ ER_WRONG_COLLATION_OPTIONS, BOX_COLLATION_FIELD_OPTIONS,
+ NULL) != 0)
return -1;
if (base->icu.french_collation == coll_icu_on_off_MAX) {
diag_set(ClientError, ER_CANT_CREATE_COLLATION,
- "ICU wrong french_collation option setting, "
- "expected ON | OFF");
+ "ICU wrong french_collation option setting, "
+ "expected ON | OFF");
return -1;
}
if (base->icu.alternate_handling == coll_icu_alternate_handling_MAX) {
diag_set(ClientError, ER_CANT_CREATE_COLLATION,
- "ICU wrong alternate_handling option setting, "
- "expected NON_IGNORABLE | SHIFTED");
+ "ICU wrong alternate_handling option setting, "
+ "expected NON_IGNORABLE | SHIFTED");
return -1;
}
if (base->icu.case_first == coll_icu_case_first_MAX) {
diag_set(ClientError, ER_CANT_CREATE_COLLATION,
- "ICU wrong case_first option setting, "
- "expected OFF | UPPER_FIRST | LOWER_FIRST");
+ "ICU wrong case_first option setting, "
+ "expected OFF | UPPER_FIRST | LOWER_FIRST");
return -1;
}
if (base->icu.case_level == coll_icu_on_off_MAX) {
diag_set(ClientError, ER_CANT_CREATE_COLLATION,
- "ICU wrong case_level option setting, "
- "expected ON | OFF");
+ "ICU wrong case_level option setting, "
+ "expected ON | OFF");
return -1;
}
if (base->icu.normalization_mode == coll_icu_on_off_MAX) {
diag_set(ClientError, ER_CANT_CREATE_COLLATION,
- "ICU wrong normalization_mode option setting, "
- "expected ON | OFF");
+ "ICU wrong normalization_mode option setting, "
+ "expected ON | OFF");
return -1;
}
if (base->icu.strength == coll_icu_strength_MAX) {
diag_set(ClientError, ER_CANT_CREATE_COLLATION,
- "ICU wrong strength option setting, "
- "expected PRIMARY | SECONDARY | "
- "TERTIARY | QUATERNARY | IDENTICAL");
+ "ICU wrong strength option setting, "
+ "expected PRIMARY | SECONDARY | "
+ "TERTIARY | QUATERNARY | IDENTICAL");
return -1;
}
if (base->icu.numeric_collation == coll_icu_on_off_MAX) {
diag_set(ClientError, ER_CANT_CREATE_COLLATION,
- "ICU wrong numeric_collation option setting, "
- "expected ON | OFF");
+ "ICU wrong numeric_collation option setting, "
+ "expected ON | OFF");
return -1;
}
return 0;
@@ -3699,20 +3699,19 @@ coll_id_def_new_from_tuple(struct tuple *tuple, struct coll_id_def *def)
static int
on_create_collation_rollback(struct trigger *trigger, void *event)
{
- (void) event;
- struct coll_id *coll_id = (struct coll_id *) trigger->data;
+ (void)event;
+ struct coll_id *coll_id = (struct coll_id *)trigger->data;
coll_id_cache_delete(coll_id);
coll_id_delete(coll_id);
return 0;
}
-
/** Free a deleted collation identifier on commit. */
static int
on_drop_collation_commit(struct trigger *trigger, void *event)
{
- (void) event;
- struct coll_id *coll_id = (struct coll_id *) trigger->data;
+ (void)event;
+ struct coll_id *coll_id = (struct coll_id *)trigger->data;
coll_id_delete(coll_id);
return 0;
}
@@ -3721,8 +3720,8 @@ on_drop_collation_commit(struct trigger *trigger, void *event)
static int
on_drop_collation_rollback(struct trigger *trigger, void *event)
{
- (void) event;
- struct coll_id *coll_id = (struct coll_id *) trigger->data;
+ (void)event;
+ struct coll_id *coll_id = (struct coll_id *)trigger->data;
struct coll_id *replaced_id;
if (coll_id_cache_replace(coll_id, &replaced_id) != 0)
panic("Out of memory on insertion into collation cache");
@@ -3737,7 +3736,7 @@ on_drop_collation_rollback(struct trigger *trigger, void *event)
static int
on_replace_dd_collation(struct trigger * /* trigger */, void *event)
{
- struct txn *txn = (struct txn *) event;
+ struct txn *txn = (struct txn *)event;
struct txn_stmt *stmt = txn_current_stmt(txn);
struct tuple *old_tuple = stmt->old_tuple;
struct tuple *new_tuple = stmt->new_tuple;
@@ -3754,7 +3753,8 @@ on_replace_dd_collation(struct trigger * /* trigger */, void *event)
* identifier.
*/
uint32_t out;
- if (tuple_field_u32(old_tuple, BOX_COLLATION_FIELD_ID, &out) != 0)
+ if (tuple_field_u32(old_tuple, BOX_COLLATION_FIELD_ID, &out) !=
+ 0)
return -1;
int32_t old_id = out;
/*
@@ -3765,14 +3765,14 @@ on_replace_dd_collation(struct trigger * /* trigger */, void *event)
*/
if (old_id == COLL_NONE) {
diag_set(ClientError, ER_DROP_COLLATION, "none",
- "system collation");
+ "system collation");
return -1;
}
struct coll_id *old_coll_id = coll_by_id(old_id);
assert(old_coll_id != NULL);
if (access_check_ddl(old_coll_id->name, old_coll_id->id,
- old_coll_id->owner_id, SC_COLLATION,
- PRIV_D) != 0)
+ old_coll_id->owner_id, SC_COLLATION,
+ PRIV_D) != 0)
return -1;
/*
* Set on_commit/on_rollback triggers after
@@ -3786,15 +3786,15 @@ on_replace_dd_collation(struct trigger * /* trigger */, void *event)
txn_stmt_on_commit(stmt, on_commit);
} else if (new_tuple != NULL && old_tuple == NULL) {
/* INSERT */
- struct trigger *on_rollback =
- txn_alter_trigger_new(on_create_collation_rollback, NULL);
+ struct trigger *on_rollback = txn_alter_trigger_new(
+ on_create_collation_rollback, NULL);
if (on_rollback == NULL)
return -1;
struct coll_id_def new_def;
if (coll_id_def_new_from_tuple(new_tuple, &new_def) != 0)
return -1;
if (access_check_ddl(new_def.name, new_def.id, new_def.owner_id,
- SC_COLLATION, PRIV_C) != 0)
+ SC_COLLATION, PRIV_C) != 0)
return -1;
struct coll_id *new_coll_id = coll_id_new(&new_def);
if (new_coll_id == NULL)
@@ -3822,8 +3822,10 @@ on_replace_dd_collation(struct trigger * /* trigger */, void *event)
int
priv_def_create_from_tuple(struct priv_def *priv, struct tuple *tuple)
{
- if (tuple_field_u32(tuple, BOX_PRIV_FIELD_ID, &(priv->grantor_id)) != 0 ||
- tuple_field_u32(tuple, BOX_PRIV_FIELD_UID, &(priv->grantee_id)) != 0)
+ if (tuple_field_u32(tuple, BOX_PRIV_FIELD_ID, &(priv->grantor_id)) !=
+ 0 ||
+ tuple_field_u32(tuple, BOX_PRIV_FIELD_UID, &(priv->grantee_id)) !=
+ 0)
return -1;
const char *object_type =
@@ -3835,7 +3837,7 @@ priv_def_create_from_tuple(struct priv_def *priv, struct tuple *tuple)
const char *data = tuple_field(tuple, BOX_PRIV_FIELD_OBJECT_ID);
if (data == NULL) {
diag_set(ClientError, ER_NO_SUCH_FIELD_NO,
- BOX_PRIV_FIELD_OBJECT_ID + TUPLE_INDEX_BASE);
+ BOX_PRIV_FIELD_OBJECT_ID + TUPLE_INDEX_BASE);
return -1;
}
/*
@@ -3849,18 +3851,18 @@ priv_def_create_from_tuple(struct priv_def *priv, struct tuple *tuple)
if (mp_decode_strl(&data) == 0) {
/* Entity-wide privilege. */
priv->object_id = 0;
- priv->object_type = schema_entity_type(priv->object_type);
+ priv->object_type =
+ schema_entity_type(priv->object_type);
break;
}
FALLTHROUGH;
default:
- if (tuple_field_u32(tuple,
- BOX_PRIV_FIELD_OBJECT_ID, &(priv->object_id)) != 0)
+ if (tuple_field_u32(tuple, BOX_PRIV_FIELD_OBJECT_ID,
+ &(priv->object_id)) != 0)
return -1;
}
if (priv->object_type == SC_UNKNOWN) {
- diag_set(ClientError, ER_UNKNOWN_SCHEMA_OBJECT,
- object_type);
+ diag_set(ClientError, ER_UNKNOWN_SCHEMA_OBJECT, object_type);
return -1;
}
uint32_t out;
@@ -3890,7 +3892,7 @@ priv_def_check(struct priv_def *priv, enum priv_type priv_type)
struct user *grantee = user_by_id(priv->grantee_id);
if (grantee == NULL) {
diag_set(ClientError, ER_NO_SUCH_USER,
- int2str(priv->grantee_id));
+ int2str(priv->grantee_id));
return -1;
}
const char *name = schema_find_name(priv->object_type, priv->object_id);
@@ -3900,70 +3902,63 @@ priv_def_check(struct priv_def *priv, enum priv_type priv_type)
switch (priv->object_type) {
case SC_UNIVERSE:
if (grantor->def->uid != ADMIN) {
- diag_set(AccessDeniedError,
- priv_name(priv_type),
- schema_object_name(SC_UNIVERSE),
- name,
- grantor->def->name);
+ diag_set(AccessDeniedError, priv_name(priv_type),
+ schema_object_name(SC_UNIVERSE), name,
+ grantor->def->name);
return -1;
}
break;
- case SC_SPACE:
- {
+ case SC_SPACE: {
struct space *space = space_cache_find(priv->object_id);
if (space == NULL)
return -1;
if (space->def->uid != grantor->def->uid &&
grantor->def->uid != ADMIN) {
- diag_set(AccessDeniedError,
- priv_name(priv_type),
- schema_object_name(SC_SPACE), name,
- grantor->def->name);
+ diag_set(AccessDeniedError, priv_name(priv_type),
+ schema_object_name(SC_SPACE), name,
+ grantor->def->name);
return -1;
}
break;
}
- case SC_FUNCTION:
- {
+ case SC_FUNCTION: {
struct func *func = func_by_id(priv->object_id);
if (func == NULL) {
- diag_set(ClientError, ER_NO_SUCH_FUNCTION, int2str(priv->object_id));
+ diag_set(ClientError, ER_NO_SUCH_FUNCTION,
+ int2str(priv->object_id));
return -1;
}
if (func->def->uid != grantor->def->uid &&
grantor->def->uid != ADMIN) {
- diag_set(AccessDeniedError,
- priv_name(priv_type),
- schema_object_name(SC_FUNCTION), name,
- grantor->def->name);
+ diag_set(AccessDeniedError, priv_name(priv_type),
+ schema_object_name(SC_FUNCTION), name,
+ grantor->def->name);
return -1;
}
break;
}
- case SC_SEQUENCE:
- {
+ case SC_SEQUENCE: {
struct sequence *seq = sequence_by_id(priv->object_id);
if (seq == NULL) {
- diag_set(ClientError, ER_NO_SUCH_SEQUENCE, int2str(priv->object_id));
+ diag_set(ClientError, ER_NO_SUCH_SEQUENCE,
+ int2str(priv->object_id));
return -1;
}
if (seq->def->uid != grantor->def->uid &&
grantor->def->uid != ADMIN) {
- diag_set(AccessDeniedError,
- priv_name(priv_type),
- schema_object_name(SC_SEQUENCE), name,
- grantor->def->name);
+ diag_set(AccessDeniedError, priv_name(priv_type),
+ schema_object_name(SC_SEQUENCE), name,
+ grantor->def->name);
return -1;
}
break;
}
- case SC_ROLE:
- {
+ case SC_ROLE: {
struct user *role = user_by_id(priv->object_id);
if (role == NULL || role->def->type != SC_ROLE) {
diag_set(ClientError, ER_NO_SUCH_ROLE,
- role ? role->def->name :
- int2str(priv->object_id));
+ role ? role->def->name :
+ int2str(priv->object_id));
return -1;
}
/*
@@ -3973,10 +3968,9 @@ priv_def_check(struct priv_def *priv, enum priv_type priv_type)
if (role->def->owner != grantor->def->uid &&
grantor->def->uid != ADMIN &&
(role->def->uid != PUBLIC || priv->access != PRIV_X)) {
- diag_set(AccessDeniedError,
- priv_name(priv_type),
- schema_object_name(SC_ROLE), name,
- grantor->def->name);
+ diag_set(AccessDeniedError, priv_name(priv_type),
+ schema_object_name(SC_ROLE), name,
+ grantor->def->name);
return -1;
}
/* Not necessary to do during revoke, but who cares. */
@@ -3984,21 +3978,19 @@ priv_def_check(struct priv_def *priv, enum priv_type priv_type)
return -1;
break;
}
- case SC_USER:
- {
+ case SC_USER: {
struct user *user = user_by_id(priv->object_id);
if (user == NULL || user->def->type != SC_USER) {
diag_set(ClientError, ER_NO_SUCH_USER,
- user ? user->def->name :
- int2str(priv->object_id));
+ user ? user->def->name :
+ int2str(priv->object_id));
return -1;
}
if (user->def->owner != grantor->def->uid &&
grantor->def->uid != ADMIN) {
- diag_set(AccessDeniedError,
- priv_name(priv_type),
- schema_object_name(SC_USER), name,
- grantor->def->name);
+ diag_set(AccessDeniedError, priv_name(priv_type),
+ schema_object_name(SC_USER), name,
+ grantor->def->name);
return -1;
}
break;
@@ -4007,13 +3999,12 @@ priv_def_check(struct priv_def *priv, enum priv_type priv_type)
case SC_ENTITY_FUNCTION:
case SC_ENTITY_SEQUENCE:
case SC_ENTITY_ROLE:
- case SC_ENTITY_USER:
- {
+ case SC_ENTITY_USER: {
/* Only admin may grant privileges on an entire entity. */
if (grantor->def->uid != ADMIN) {
diag_set(AccessDeniedError, priv_name(priv_type),
- schema_object_name(priv->object_type), name,
- grantor->def->name);
+ schema_object_name(priv->object_type), name,
+ grantor->def->name);
return -1;
}
}
@@ -4022,7 +4013,7 @@ priv_def_check(struct priv_def *priv, enum priv_type priv_type)
}
if (priv->access == 0) {
diag_set(ClientError, ER_GRANT,
- "the grant tuple has no privileges");
+ "the grant tuple has no privileges");
return -1;
}
return 0;
@@ -4064,7 +4055,7 @@ grant_or_revoke(struct priv_def *priv)
static int
revoke_priv(struct trigger *trigger, void *event)
{
- (void) event;
+ (void)event;
struct tuple *tuple = (struct tuple *)trigger->data;
struct priv_def priv;
if (priv_def_create_from_tuple(&priv, tuple) != 0)
@@ -4079,7 +4070,7 @@ revoke_priv(struct trigger *trigger, void *event)
static int
modify_priv(struct trigger *trigger, void *event)
{
- (void) event;
+ (void)event;
struct tuple *tuple = (struct tuple *)trigger->data;
struct priv_def priv;
if (priv_def_create_from_tuple(&priv, tuple) != 0 ||
@@ -4095,13 +4086,13 @@ modify_priv(struct trigger *trigger, void *event)
static int
on_replace_dd_priv(struct trigger * /* trigger */, void *event)
{
- struct txn *txn = (struct txn *) event;
+ struct txn *txn = (struct txn *)event;
struct txn_stmt *stmt = txn_current_stmt(txn);
struct tuple *old_tuple = stmt->old_tuple;
struct tuple *new_tuple = stmt->new_tuple;
struct priv_def priv;
- if (new_tuple != NULL && old_tuple == NULL) { /* grant */
+ if (new_tuple != NULL && old_tuple == NULL) { /* grant */
if (priv_def_create_from_tuple(&priv, new_tuple) != 0 ||
priv_def_check(&priv, PRIV_GRANT) != 0 ||
grant_or_revoke(&priv) != 0)
@@ -4111,7 +4102,7 @@ on_replace_dd_priv(struct trigger * /* trigger */, void *event)
if (on_rollback == NULL)
return -1;
txn_stmt_on_rollback(stmt, on_rollback);
- } else if (new_tuple == NULL) { /* revoke */
+ } else if (new_tuple == NULL) { /* revoke */
assert(old_tuple);
if (priv_def_create_from_tuple(&priv, old_tuple) != 0 ||
priv_def_check(&priv, PRIV_REVOKE) != 0)
@@ -4124,7 +4115,7 @@ on_replace_dd_priv(struct trigger * /* trigger */, void *event)
if (on_rollback == NULL)
return -1;
txn_stmt_on_rollback(stmt, on_rollback);
- } else { /* modify */
+ } else { /* modify */
if (priv_def_create_from_tuple(&priv, new_tuple) != 0 ||
priv_def_check(&priv, PRIV_GRANT) != 0 ||
grant_or_revoke(&priv) != 0)
@@ -4154,12 +4145,12 @@ on_replace_dd_priv(struct trigger * /* trigger */, void *event)
static int
on_replace_dd_schema(struct trigger * /* trigger */, void *event)
{
- struct txn *txn = (struct txn *) event;
+ struct txn *txn = (struct txn *)event;
struct txn_stmt *stmt = txn_current_stmt(txn);
struct tuple *old_tuple = stmt->old_tuple;
struct tuple *new_tuple = stmt->new_tuple;
const char *key = tuple_field_cstr(new_tuple ? new_tuple : old_tuple,
- BOX_SCHEMA_FIELD_KEY);
+ BOX_SCHEMA_FIELD_KEY);
if (key == NULL)
return -1;
if (strcmp(key, "cluster") == 0) {
@@ -4168,7 +4159,8 @@ on_replace_dd_schema(struct trigger * /* trigger */, void *event)
return -1;
}
tt_uuid uu;
- if (tuple_field_uuid(new_tuple, BOX_CLUSTER_FIELD_UUID, &uu) != 0)
+ if (tuple_field_uuid(new_tuple, BOX_CLUSTER_FIELD_UUID, &uu) !=
+ 0)
return -1;
REPLICASET_UUID = uu;
say_info("cluster uuid %s", tt_uuid_str(&uu));
@@ -4198,7 +4190,7 @@ register_replica(struct trigger *trigger, void * /* event */)
try {
replica = replicaset_add(id, &uuid);
/* Can't throw exceptions from on_commit trigger */
- } catch(Exception *e) {
+ } catch (Exception *e) {
panic("Can't register replica: %s", e->errmsg);
}
}
@@ -4241,25 +4233,26 @@ unregister_replica(struct trigger *trigger, void * /* event */)
static int
on_replace_dd_cluster(struct trigger *trigger, void *event)
{
- (void) trigger;
- struct txn *txn = (struct txn *) event;
+ (void)trigger;
+ struct txn *txn = (struct txn *)event;
struct txn_stmt *stmt = txn_current_stmt(txn);
struct tuple *old_tuple = stmt->old_tuple;
struct tuple *new_tuple = stmt->new_tuple;
if (new_tuple != NULL) { /* Insert or replace */
/* Check fields */
uint32_t replica_id;
- if (tuple_field_u32(new_tuple, BOX_CLUSTER_FIELD_ID, &replica_id) != 0)
+ if (tuple_field_u32(new_tuple, BOX_CLUSTER_FIELD_ID,
+ &replica_id) != 0)
return -1;
if (replica_check_id(replica_id) != 0)
return -1;
tt_uuid replica_uuid;
if (tuple_field_uuid(new_tuple, BOX_CLUSTER_FIELD_UUID,
- &replica_uuid) != 0)
+ &replica_uuid) != 0)
return -1;
if (tt_uuid_is_nil(&replica_uuid)) {
diag_set(ClientError, ER_INVALID_UUID,
- tt_uuid_str(&replica_uuid));
+ tt_uuid_str(&replica_uuid));
return -1;
}
if (old_tuple != NULL) {
@@ -4270,12 +4263,12 @@ on_replace_dd_cluster(struct trigger *trigger, void *event)
*/
tt_uuid old_uuid;
if (tuple_field_uuid(old_tuple, BOX_CLUSTER_FIELD_UUID,
- &old_uuid) != 0)
+ &old_uuid) != 0)
return -1;
if (!tt_uuid_is_equal(&replica_uuid, &old_uuid)) {
diag_set(ClientError, ER_UNSUPPORTED,
- "Space _cluster",
- "updates of instance uuid");
+ "Space _cluster",
+ "updates of instance uuid");
return -1;
}
} else {
@@ -4293,14 +4286,15 @@ on_replace_dd_cluster(struct trigger *trigger, void *event)
*/
assert(old_tuple != NULL);
uint32_t replica_id;
- if (tuple_field_u32(old_tuple, BOX_CLUSTER_FIELD_ID, &replica_id) != 0)
+ if (tuple_field_u32(old_tuple, BOX_CLUSTER_FIELD_ID,
+ &replica_id) != 0)
return -1;
if (replica_check_id(replica_id) != 0)
return -1;
struct trigger *on_commit;
- on_commit = txn_alter_trigger_new(unregister_replica,
- old_tuple);
+ on_commit =
+ txn_alter_trigger_new(unregister_replica, old_tuple);
if (on_commit == NULL)
return -1;
txn_stmt_on_commit(stmt, on_commit);
@@ -4317,20 +4311,20 @@ static struct sequence_def *
sequence_def_new_from_tuple(struct tuple *tuple, uint32_t errcode)
{
uint32_t name_len;
- const char *name = tuple_field_str(tuple, BOX_USER_FIELD_NAME,
- &name_len);
+ const char *name =
+ tuple_field_str(tuple, BOX_USER_FIELD_NAME, &name_len);
if (name == NULL)
return NULL;
if (name_len > BOX_NAME_MAX) {
diag_set(ClientError, errcode,
- tt_cstr(name, BOX_INVALID_NAME_MAX),
- "sequence name is too long");
+ tt_cstr(name, BOX_INVALID_NAME_MAX),
+ "sequence name is too long");
return NULL;
}
if (identifier_check(name, name_len) != 0)
return NULL;
size_t sz = sequence_def_sizeof(name_len);
- struct sequence_def *def = (struct sequence_def *) malloc(sz);
+ struct sequence_def *def = (struct sequence_def *)malloc(sz);
if (def == NULL) {
diag_set(OutOfMemory, sz, "malloc", "sequence");
return NULL;
@@ -4348,11 +4342,14 @@ sequence_def_new_from_tuple(struct tuple *tuple, uint32_t errcode)
return NULL;
if (tuple_field_i64(tuple, BOX_SEQUENCE_FIELD_MAX, &(def->max)) != 0)
return NULL;
- if (tuple_field_i64(tuple, BOX_SEQUENCE_FIELD_START, &(def->start)) != 0)
+ if (tuple_field_i64(tuple, BOX_SEQUENCE_FIELD_START, &(def->start)) !=
+ 0)
return NULL;
- if (tuple_field_i64(tuple, BOX_SEQUENCE_FIELD_CACHE, &(def->cache)) != 0)
+ if (tuple_field_i64(tuple, BOX_SEQUENCE_FIELD_CACHE, &(def->cache)) !=
+ 0)
return NULL;
- if (tuple_field_bool(tuple, BOX_SEQUENCE_FIELD_CYCLE, &(def->cycle)) != 0)
+ if (tuple_field_bool(tuple, BOX_SEQUENCE_FIELD_CYCLE, &(def->cycle)) !=
+ 0)
return NULL;
if (def->step == 0) {
diag_set(ClientError, errcode, def->name,
@@ -4405,7 +4402,6 @@ on_drop_sequence_rollback(struct trigger *trigger, void * /* event */)
return 0;
}
-
static int
on_alter_sequence_commit(struct trigger *trigger, void * /* event */)
{
@@ -4436,7 +4432,7 @@ on_alter_sequence_rollback(struct trigger *trigger, void * /* event */)
static int
on_replace_dd_sequence(struct trigger * /* trigger */, void *event)
{
- struct txn *txn = (struct txn *) event;
+ struct txn *txn = (struct txn *)event;
struct txn_stmt *stmt = txn_current_stmt(txn);
struct tuple *old_tuple = stmt->old_tuple;
struct tuple *new_tuple = stmt->new_tuple;
@@ -4445,16 +4441,16 @@ on_replace_dd_sequence(struct trigger * /* trigger */, void *event)
auto def_guard = make_scoped_guard([&new_def] { free(new_def); });
struct sequence *seq;
- if (old_tuple == NULL && new_tuple != NULL) { /* INSERT */
+ if (old_tuple == NULL && new_tuple != NULL) { /* INSERT */
new_def = sequence_def_new_from_tuple(new_tuple,
ER_CREATE_SEQUENCE);
if (new_def == NULL)
return -1;
if (access_check_ddl(new_def->name, new_def->id, new_def->uid,
- SC_SEQUENCE, PRIV_C) != 0)
+ SC_SEQUENCE, PRIV_C) != 0)
return -1;
- struct trigger *on_rollback =
- txn_alter_trigger_new(on_create_sequence_rollback, NULL);
+ struct trigger *on_rollback = txn_alter_trigger_new(
+ on_create_sequence_rollback, NULL);
if (on_rollback == NULL)
return -1;
seq = sequence_new(new_def);
@@ -4463,36 +4459,37 @@ on_replace_dd_sequence(struct trigger * /* trigger */, void *event)
sequence_cache_insert(seq);
on_rollback->data = seq;
txn_stmt_on_rollback(stmt, on_rollback);
- } else if (old_tuple != NULL && new_tuple == NULL) { /* DELETE */
+ } else if (old_tuple != NULL && new_tuple == NULL) { /* DELETE */
uint32_t id;
- if (tuple_field_u32(old_tuple, BOX_SEQUENCE_DATA_FIELD_ID, &id) != 0)
+ if (tuple_field_u32(old_tuple, BOX_SEQUENCE_DATA_FIELD_ID,
+ &id) != 0)
return -1;
seq = sequence_by_id(id);
assert(seq != NULL);
- if (access_check_ddl(seq->def->name, seq->def->id, seq->def->uid,
- SC_SEQUENCE, PRIV_D) != 0)
+ if (access_check_ddl(seq->def->name, seq->def->id,
+ seq->def->uid, SC_SEQUENCE, PRIV_D) != 0)
return -1;
bool out;
if (space_has_data(BOX_SEQUENCE_DATA_ID, 0, id, &out) != 0)
return -1;
if (out) {
- diag_set(ClientError, ER_DROP_SEQUENCE,
- seq->def->name, "the sequence has data");
+ diag_set(ClientError, ER_DROP_SEQUENCE, seq->def->name,
+ "the sequence has data");
return -1;
}
if (space_has_data(BOX_SPACE_SEQUENCE_ID, 1, id, &out) != 0)
return -1;
if (out) {
- diag_set(ClientError, ER_DROP_SEQUENCE,
- seq->def->name, "the sequence is in use");
+ diag_set(ClientError, ER_DROP_SEQUENCE, seq->def->name,
+ "the sequence is in use");
return -1;
}
if (schema_find_grants("sequence", seq->def->id, &out) != 0) {
return -1;
}
if (out) {
- diag_set(ClientError, ER_DROP_SEQUENCE,
- seq->def->name, "the sequence has grants");
+ diag_set(ClientError, ER_DROP_SEQUENCE, seq->def->name,
+ "the sequence has grants");
return -1;
}
struct trigger *on_commit =
@@ -4504,20 +4501,20 @@ on_replace_dd_sequence(struct trigger * /* trigger */, void *event)
sequence_cache_delete(seq->def->id);
txn_stmt_on_commit(stmt, on_commit);
txn_stmt_on_rollback(stmt, on_rollback);
- } else { /* UPDATE */
+ } else { /* UPDATE */
new_def = sequence_def_new_from_tuple(new_tuple,
ER_ALTER_SEQUENCE);
if (new_def == NULL)
return -1;
seq = sequence_by_id(new_def->id);
assert(seq != NULL);
- if (access_check_ddl(seq->def->name, seq->def->id, seq->def->uid,
- SC_SEQUENCE, PRIV_A) != 0)
+ if (access_check_ddl(seq->def->name, seq->def->id,
+ seq->def->uid, SC_SEQUENCE, PRIV_A) != 0)
return -1;
- struct trigger *on_commit =
- txn_alter_trigger_new(on_alter_sequence_commit, seq->def);
- struct trigger *on_rollback =
- txn_alter_trigger_new(on_alter_sequence_rollback, seq->def);
+ struct trigger *on_commit = txn_alter_trigger_new(
+ on_alter_sequence_commit, seq->def);
+ struct trigger *on_rollback = txn_alter_trigger_new(
+ on_alter_sequence_rollback, seq->def);
if (on_commit == NULL || on_rollback == NULL)
return -1;
seq->def = new_def;
@@ -4556,7 +4553,7 @@ on_drop_sequence_data_rollback(struct trigger *trigger, void * /* event */)
static int
on_replace_dd_sequence_data(struct trigger * /* trigger */, void *event)
{
- struct txn *txn = (struct txn *) event;
+ struct txn *txn = (struct txn *)event;
struct txn_stmt *stmt = txn_current_stmt(txn);
struct tuple *old_tuple = stmt->old_tuple;
struct tuple *new_tuple = stmt->new_tuple;
@@ -4570,14 +4567,14 @@ on_replace_dd_sequence_data(struct trigger * /* trigger */, void *event)
diag_set(ClientError, ER_NO_SUCH_SEQUENCE, int2str(id));
return -1;
}
- if (new_tuple != NULL) { /* INSERT, UPDATE */
+ if (new_tuple != NULL) { /* INSERT, UPDATE */
int64_t value;
if (tuple_field_i64(new_tuple, BOX_SEQUENCE_DATA_FIELD_VALUE,
&value) != 0)
return -1;
if (sequence_set(seq, value) != 0)
return -1;
- } else { /* DELETE */
+ } else { /* DELETE */
/*
* A sequence isn't supposed to roll back to the old
* value if the transaction it was used in is aborted
@@ -4586,7 +4583,7 @@ on_replace_dd_sequence_data(struct trigger * /* trigger */, void *event)
* on rollback.
*/
struct trigger *on_rollback = txn_alter_trigger_new(
- on_drop_sequence_data_rollback, old_tuple);
+ on_drop_sequence_data_rollback, old_tuple);
if (on_rollback == NULL)
return -1;
txn_stmt_on_rollback(stmt, on_rollback);
@@ -4631,8 +4628,8 @@ sequence_field_from_tuple(struct space *space, struct tuple *tuple,
if (path_raw != NULL) {
path = (char *)malloc(path_len + 1);
if (path == NULL) {
- diag_set(OutOfMemory, path_len + 1,
- "malloc", "sequence path");
+ diag_set(OutOfMemory, path_len + 1, "malloc",
+ "sequence path");
return -1;
}
memcpy(path, path_raw, path_len);
@@ -4657,7 +4654,7 @@ set_space_sequence(struct trigger *trigger, void * /* event */)
return -1;
bool is_generated;
if (tuple_field_bool(tuple, BOX_SPACE_SEQUENCE_FIELD_IS_GENERATED,
- &is_generated) != 0)
+ &is_generated) != 0)
return -1;
struct space *space = space_by_id(space_id);
assert(space != NULL);
@@ -4705,9 +4702,10 @@ clear_space_sequence(struct trigger *trigger, void * /* event */)
static int
on_replace_dd_space_sequence(struct trigger * /* trigger */, void *event)
{
- struct txn *txn = (struct txn *) event;
+ struct txn *txn = (struct txn *)event;
struct txn_stmt *stmt = txn_current_stmt(txn);
- struct tuple *tuple = stmt->new_tuple ? stmt->new_tuple : stmt->old_tuple;
+ struct tuple *tuple = stmt->new_tuple ? stmt->new_tuple :
+ stmt->old_tuple;
uint32_t space_id;
if (tuple_field_u32(tuple, BOX_SPACE_SEQUENCE_FIELD_ID, &space_id) != 0)
return -1;
@@ -4724,7 +4722,8 @@ on_replace_dd_space_sequence(struct trigger * /* trigger */, void *event)
return -1;
struct sequence *seq = sequence_by_id(sequence_id);
if (seq == NULL) {
- diag_set(ClientError, ER_NO_SUCH_SEQUENCE, int2str(sequence_id));
+ diag_set(ClientError, ER_NO_SUCH_SEQUENCE,
+ int2str(sequence_id));
return -1;
}
if (stmt->new_tuple != NULL && stmt->old_tuple != NULL) {
@@ -4741,39 +4740,38 @@ on_replace_dd_space_sequence(struct trigger * /* trigger */, void *event)
/* Check we have the correct access type on the sequence. * */
if (is_generated || !stmt->new_tuple) {
- if (access_check_ddl(seq->def->name, seq->def->id, seq->def->uid,
- SC_SEQUENCE, priv_type) != 0)
+ if (access_check_ddl(seq->def->name, seq->def->id,
+ seq->def->uid, SC_SEQUENCE,
+ priv_type) != 0)
return -1;
} else {
/*
* In case user wants to attach an existing sequence,
* check that it has read and write access.
*/
- if (access_check_ddl(seq->def->name, seq->def->id, seq->def->uid,
- SC_SEQUENCE, PRIV_R) != 0)
+ if (access_check_ddl(seq->def->name, seq->def->id,
+ seq->def->uid, SC_SEQUENCE, PRIV_R) != 0)
return -1;
- if (access_check_ddl(seq->def->name, seq->def->id, seq->def->uid,
- SC_SEQUENCE, PRIV_W) != 0)
+ if (access_check_ddl(seq->def->name, seq->def->id,
+ seq->def->uid, SC_SEQUENCE, PRIV_W) != 0)
return -1;
}
/** Check we have alter access on space. */
if (access_check_ddl(space->def->name, space->def->id, space->def->uid,
- SC_SPACE, PRIV_A) != 0)
+ SC_SPACE, PRIV_A) != 0)
return -1;
- if (stmt->new_tuple != NULL) { /* INSERT, UPDATE */
+ if (stmt->new_tuple != NULL) { /* INSERT, UPDATE */
char *sequence_path;
uint32_t sequence_fieldno;
if (sequence_field_from_tuple(space, tuple, &sequence_path,
&sequence_fieldno) != 0)
return -1;
- auto sequence_path_guard = make_scoped_guard([=] {
- free(sequence_path);
- });
+ auto sequence_path_guard =
+ make_scoped_guard([=] { free(sequence_path); });
if (seq->is_generated) {
- diag_set(ClientError, ER_ALTER_SPACE,
- space_name(space),
- "can not attach generated sequence");
+ diag_set(ClientError, ER_ALTER_SPACE, space_name(space),
+ "can not attach generated sequence");
return -1;
}
struct trigger *on_rollback;
@@ -4781,8 +4779,8 @@ on_replace_dd_space_sequence(struct trigger * /* trigger */, void *event)
on_rollback = txn_alter_trigger_new(set_space_sequence,
stmt->old_tuple);
else
- on_rollback = txn_alter_trigger_new(clear_space_sequence,
- stmt->new_tuple);
+ on_rollback = txn_alter_trigger_new(
+ clear_space_sequence, stmt->new_tuple);
if (on_rollback == NULL)
return -1;
seq->is_generated = is_generated;
@@ -4792,7 +4790,7 @@ on_replace_dd_space_sequence(struct trigger * /* trigger */, void *event)
space->sequence_path = sequence_path;
sequence_path_guard.is_active = false;
txn_stmt_on_rollback(stmt, on_rollback);
- } else { /* DELETE */
+ } else { /* DELETE */
struct trigger *on_rollback;
on_rollback = txn_alter_trigger_new(set_space_sequence,
stmt->old_tuple);
@@ -4820,8 +4818,8 @@ on_create_trigger_rollback(struct trigger *trigger, void * /* event */)
struct sql_trigger *old_trigger = (struct sql_trigger *)trigger->data;
struct sql_trigger *new_trigger;
int rc = sql_trigger_replace(sql_trigger_name(old_trigger),
- sql_trigger_space_id(old_trigger),
- NULL, &new_trigger);
+ sql_trigger_space_id(old_trigger), NULL,
+ &new_trigger);
(void)rc;
assert(rc == 0);
assert(new_trigger == old_trigger);
@@ -4838,8 +4836,8 @@ on_drop_trigger_rollback(struct trigger *trigger, void * /* event */)
if (old_trigger == NULL)
return 0;
if (sql_trigger_replace(sql_trigger_name(old_trigger),
- sql_trigger_space_id(old_trigger),
- old_trigger, &new_trigger) != 0)
+ sql_trigger_space_id(old_trigger), old_trigger,
+ &new_trigger) != 0)
panic("Out of memory on insertion into trigger hash");
assert(new_trigger == NULL);
return 0;
@@ -4855,8 +4853,8 @@ on_replace_trigger_rollback(struct trigger *trigger, void * /* event */)
struct sql_trigger *old_trigger = (struct sql_trigger *)trigger->data;
struct sql_trigger *new_trigger;
if (sql_trigger_replace(sql_trigger_name(old_trigger),
- sql_trigger_space_id(old_trigger),
- old_trigger, &new_trigger) != 0)
+ sql_trigger_space_id(old_trigger), old_trigger,
+ &new_trigger) != 0)
panic("Out of memory on insertion into trigger hash");
sql_trigger_delete(sql_get(), new_trigger);
return 0;
@@ -4881,7 +4879,7 @@ on_replace_trigger_commit(struct trigger *trigger, void * /* event */)
static int
on_replace_dd_trigger(struct trigger * /* trigger */, void *event)
{
- struct txn *txn = (struct txn *) event;
+ struct txn *txn = (struct txn *)event;
struct txn_stmt *stmt = txn_current_stmt(txn);
struct tuple *old_tuple = stmt->old_tuple;
struct tuple *new_tuple = stmt->new_tuple;
@@ -4895,8 +4893,8 @@ on_replace_dd_trigger(struct trigger * /* trigger */, void *event)
if (old_tuple != NULL && new_tuple == NULL) {
/* DROP trigger. */
uint32_t trigger_name_len;
- const char *trigger_name_src = tuple_field_str(old_tuple,
- BOX_TRIGGER_FIELD_NAME, &trigger_name_len);
+ const char *trigger_name_src = tuple_field_str(
+ old_tuple, BOX_TRIGGER_FIELD_NAME, &trigger_name_len);
if (trigger_name_src == NULL)
return -1;
uint32_t space_id;
@@ -4922,12 +4920,12 @@ on_replace_dd_trigger(struct trigger * /* trigger */, void *event)
} else {
/* INSERT, REPLACE trigger. */
uint32_t trigger_name_len;
- const char *trigger_name_src = tuple_field_str(new_tuple,
- BOX_TRIGGER_FIELD_NAME, &trigger_name_len);
+ const char *trigger_name_src = tuple_field_str(
+ new_tuple, BOX_TRIGGER_FIELD_NAME, &trigger_name_len);
if (trigger_name_src == NULL)
return -1;
- const char *space_opts = tuple_field_with_type(new_tuple,
- BOX_TRIGGER_FIELD_OPTS,MP_MAP);
+ const char *space_opts = tuple_field_with_type(
+ new_tuple, BOX_TRIGGER_FIELD_OPTS, MP_MAP);
if (space_opts == NULL)
return -1;
struct space_opts opts;
@@ -4939,17 +4937,16 @@ on_replace_dd_trigger(struct trigger * /* trigger */, void *event)
if (new_trigger == NULL)
return -1;
- auto new_trigger_guard = make_scoped_guard([=] {
- sql_trigger_delete(sql_get(), new_trigger);
- });
+ auto new_trigger_guard = make_scoped_guard(
+ [=] { sql_trigger_delete(sql_get(), new_trigger); });
const char *trigger_name = sql_trigger_name(new_trigger);
if (strlen(trigger_name) != trigger_name_len ||
- memcmp(trigger_name_src, trigger_name,
- trigger_name_len) != 0) {
+ memcmp(trigger_name_src, trigger_name, trigger_name_len) !=
+ 0) {
diag_set(ClientError, ER_SQL_EXECUTE,
- "trigger name does not match extracted "
- "from SQL");
+ "trigger name does not match extracted "
+ "from SQL");
return -1;
}
uint32_t space_id;
@@ -4958,8 +4955,8 @@ on_replace_dd_trigger(struct trigger * /* trigger */, void *event)
return -1;
if (space_id != sql_trigger_space_id(new_trigger)) {
diag_set(ClientError, ER_SQL_EXECUTE,
- "trigger space_id does not match the value "
- "resolved on AST building from SQL");
+ "trigger space_id does not match the value "
+ "resolved on AST building from SQL");
return -1;
}
@@ -5003,33 +5000,32 @@ decode_fk_links(struct tuple *tuple, uint32_t *out_count,
const char *constraint_name, uint32_t constraint_len,
uint32_t errcode)
{
- const char *parent_cols = tuple_field_with_type(tuple,
- BOX_FK_CONSTRAINT_FIELD_PARENT_COLS, MP_ARRAY);
+ const char *parent_cols = tuple_field_with_type(
+ tuple, BOX_FK_CONSTRAINT_FIELD_PARENT_COLS, MP_ARRAY);
if (parent_cols == NULL)
return NULL;
uint32_t count = mp_decode_array(&parent_cols);
if (count == 0) {
diag_set(ClientError, errcode,
- tt_cstr(constraint_name, constraint_len),
- "at least one link must be specified");
+ tt_cstr(constraint_name, constraint_len),
+ "at least one link must be specified");
return NULL;
}
- const char *child_cols = tuple_field_with_type(tuple,
- BOX_FK_CONSTRAINT_FIELD_CHILD_COLS, MP_ARRAY);
+ const char *child_cols = tuple_field_with_type(
+ tuple, BOX_FK_CONSTRAINT_FIELD_CHILD_COLS, MP_ARRAY);
if (child_cols == NULL)
return NULL;
if (mp_decode_array(&child_cols) != count) {
diag_set(ClientError, errcode,
- tt_cstr(constraint_name, constraint_len),
- "number of referenced and referencing fields "
- "must be the same");
+ tt_cstr(constraint_name, constraint_len),
+ "number of referenced and referencing fields "
+ "must be the same");
return NULL;
}
*out_count = count;
size_t size;
- struct field_link *region_links =
- region_alloc_array(&fiber()->gc, typeof(region_links[0]), count,
- &size);
+ struct field_link *region_links = region_alloc_array(
+ &fiber()->gc, typeof(region_links[0]), count, &size);
if (region_links == NULL) {
diag_set(OutOfMemory, size, "region_alloc_array",
"region_links");
@@ -5040,9 +5036,9 @@ decode_fk_links(struct tuple *tuple, uint32_t *out_count,
if (mp_typeof(*parent_cols) != MP_UINT ||
mp_typeof(*child_cols) != MP_UINT) {
diag_set(ClientError, errcode,
- tt_cstr(constraint_name, constraint_len),
- tt_sprintf("value of %d link is not unsigned",
- i));
+ tt_cstr(constraint_name, constraint_len),
+ tt_sprintf("value of %d link is not unsigned",
+ i));
return NULL;
}
region_links[i].parent_field = mp_decode_uint(&parent_cols);
@@ -5056,31 +5052,31 @@ static struct fk_constraint_def *
fk_constraint_def_new_from_tuple(struct tuple *tuple, uint32_t errcode)
{
uint32_t name_len;
- const char *name = tuple_field_str(tuple,
- BOX_FK_CONSTRAINT_FIELD_NAME, &name_len);
+ const char *name =
+ tuple_field_str(tuple, BOX_FK_CONSTRAINT_FIELD_NAME, &name_len);
if (name == NULL)
return NULL;
if (name_len > BOX_NAME_MAX) {
diag_set(ClientError, errcode,
- tt_cstr(name, BOX_INVALID_NAME_MAX),
- "constraint name is too long");
+ tt_cstr(name, BOX_INVALID_NAME_MAX),
+ "constraint name is too long");
return NULL;
}
if (identifier_check(name, name_len) != 0)
return NULL;
uint32_t link_count;
- struct field_link *links = decode_fk_links(tuple, &link_count, name,
- name_len, errcode);
+ struct field_link *links =
+ decode_fk_links(tuple, &link_count, name, name_len, errcode);
if (links == NULL)
return NULL;
uint32_t links_offset;
- size_t fk_def_sz = fk_constraint_def_sizeof(link_count, name_len,
- &links_offset);
+ size_t fk_def_sz =
+ fk_constraint_def_sizeof(link_count, name_len, &links_offset);
struct fk_constraint_def *fk_def =
- (struct fk_constraint_def *) malloc(fk_def_sz);
+ (struct fk_constraint_def *)malloc(fk_def_sz);
if (fk_def == NULL) {
diag_set(OutOfMemory, fk_def_sz, "malloc",
- "struct fk_constraint_def");
+ "struct fk_constraint_def");
return NULL;
}
auto def_guard = make_scoped_guard([=] { free(fk_def); });
@@ -5090,7 +5086,7 @@ fk_constraint_def_new_from_tuple(struct tuple *tuple, uint32_t errcode)
memcpy(fk_def->links, links, link_count * sizeof(struct field_link));
fk_def->field_count = link_count;
if (tuple_field_u32(tuple, BOX_FK_CONSTRAINT_FIELD_CHILD_ID,
- &(fk_def->child_id )) != 0)
+ &(fk_def->child_id)) != 0)
return NULL;
if (tuple_field_u32(tuple, BOX_FK_CONSTRAINT_FIELD_PARENT_ID,
&(fk_def->parent_id)) != 0)
@@ -5098,36 +5094,36 @@ fk_constraint_def_new_from_tuple(struct tuple *tuple, uint32_t errcode)
if (tuple_field_bool(tuple, BOX_FK_CONSTRAINT_FIELD_DEFERRED,
&(fk_def->is_deferred)) != 0)
return NULL;
- const char *match = tuple_field_str(tuple,
- BOX_FK_CONSTRAINT_FIELD_MATCH, &name_len);
+ const char *match = tuple_field_str(
+ tuple, BOX_FK_CONSTRAINT_FIELD_MATCH, &name_len);
if (match == NULL)
return NULL;
fk_def->match = STRN2ENUM(fk_constraint_match, match, name_len);
if (fk_def->match == fk_constraint_match_MAX) {
diag_set(ClientError, errcode, fk_def->name,
- "unknown MATCH clause");
+ "unknown MATCH clause");
return NULL;
}
- const char *on_delete_action = tuple_field_str(tuple,
- BOX_FK_CONSTRAINT_FIELD_ON_DELETE, &name_len);
+ const char *on_delete_action = tuple_field_str(
+ tuple, BOX_FK_CONSTRAINT_FIELD_ON_DELETE, &name_len);
if (on_delete_action == NULL)
return NULL;
- fk_def->on_delete = STRN2ENUM(fk_constraint_action,
- on_delete_action, name_len);
+ fk_def->on_delete =
+ STRN2ENUM(fk_constraint_action, on_delete_action, name_len);
if (fk_def->on_delete == fk_constraint_action_MAX) {
diag_set(ClientError, errcode, fk_def->name,
- "unknown ON DELETE action");
+ "unknown ON DELETE action");
return NULL;
}
- const char *on_update_action = tuple_field_str(tuple,
- BOX_FK_CONSTRAINT_FIELD_ON_UPDATE, &name_len);
+ const char *on_update_action = tuple_field_str(
+ tuple, BOX_FK_CONSTRAINT_FIELD_ON_UPDATE, &name_len);
if (on_update_action == NULL)
return NULL;
- fk_def->on_update = STRN2ENUM(fk_constraint_action,
- on_update_action, name_len);
+ fk_def->on_update =
+ STRN2ENUM(fk_constraint_action, on_update_action, name_len);
if (fk_def->on_update == fk_constraint_action_MAX) {
diag_set(ClientError, errcode, fk_def->name,
- "unknown ON UPDATE action");
+ "unknown ON UPDATE action");
return NULL;
}
def_guard.is_active = false;
@@ -5186,12 +5182,10 @@ space_reset_fk_constraint_mask(struct space *space)
space->fk_constraint_mask = 0;
struct fk_constraint *fk;
rlist_foreach_entry(fk, &space->child_fk_constraint, in_child_space) {
-
fk_constraint_set_mask(fk, &space->fk_constraint_mask,
FIELD_LINK_CHILD);
}
rlist_foreach_entry(fk, &space->parent_fk_constraint, in_parent_space) {
-
fk_constraint_set_mask(fk, &space->fk_constraint_mask,
FIELD_LINK_PARENT);
}
@@ -5205,7 +5199,7 @@ space_reset_fk_constraint_mask(struct space *space)
static int
on_create_fk_constraint_rollback(struct trigger *trigger, void *event)
{
- (void) event;
+ (void)event;
struct fk_constraint *fk = (struct fk_constraint *)trigger->data;
rlist_del_entry(fk, in_parent_space);
rlist_del_entry(fk, in_child_space);
@@ -5222,13 +5216,12 @@ on_create_fk_constraint_rollback(struct trigger *trigger, void *event)
static int
on_replace_fk_constraint_rollback(struct trigger *trigger, void *event)
{
- (void) event;
+ (void)event;
struct fk_constraint *old_fk = (struct fk_constraint *)trigger->data;
struct space *parent = space_by_id(old_fk->def->parent_id);
struct space *child = space_by_id(old_fk->def->child_id);
- struct fk_constraint *new_fk =
- fk_constraint_remove(&child->child_fk_constraint,
- old_fk->def->name);
+ struct fk_constraint *new_fk = fk_constraint_remove(
+ &child->child_fk_constraint, old_fk->def->name);
fk_constraint_delete(new_fk);
rlist_add_entry(&child->child_fk_constraint, old_fk, in_child_space);
rlist_add_entry(&parent->parent_fk_constraint, old_fk, in_parent_space);
@@ -5241,7 +5234,7 @@ on_replace_fk_constraint_rollback(struct trigger *trigger, void *event)
static int
on_drop_fk_constraint_rollback(struct trigger *trigger, void *event)
{
- (void) event;
+ (void)event;
struct fk_constraint *old_fk = (struct fk_constraint *)trigger->data;
struct space *parent = space_by_id(old_fk->def->parent_id);
struct space *child = space_by_id(old_fk->def->child_id);
@@ -5267,8 +5260,8 @@ on_drop_fk_constraint_rollback(struct trigger *trigger, void *event)
static int
on_drop_or_replace_fk_constraint_commit(struct trigger *trigger, void *event)
{
- (void) event;
- fk_constraint_delete((struct fk_constraint *) trigger->data);
+ (void)event;
+ fk_constraint_delete((struct fk_constraint *)trigger->data);
return 0;
}
@@ -5287,7 +5280,7 @@ fk_constraint_check_dup_links(struct fk_constraint_def *fk_def)
uint32_t parent_field = fk_def->links[i].parent_field;
if (parent_field > 63)
goto slow_check;
- parent_field = ((uint64_t) 1) << parent_field;
+ parent_field = ((uint64_t)1) << parent_field;
if ((field_mask & parent_field) != 0)
goto error;
field_mask |= parent_field;
@@ -5304,7 +5297,7 @@ slow_check:
return 0;
error:
diag_set(ClientError, ER_CREATE_FK_CONSTRAINT, fk_def->name,
- "referenced fields can not contain duplicates");
+ "referenced fields can not contain duplicates");
return -1;
}
@@ -5312,15 +5305,15 @@ error:
static int
on_replace_dd_fk_constraint(struct trigger * /* trigger*/, void *event)
{
- struct txn *txn = (struct txn *) event;
+ struct txn *txn = (struct txn *)event;
struct txn_stmt *stmt = txn_current_stmt(txn);
struct tuple *old_tuple = stmt->old_tuple;
struct tuple *new_tuple = stmt->new_tuple;
if (new_tuple != NULL) {
/* Create or replace foreign key. */
struct fk_constraint_def *fk_def =
- fk_constraint_def_new_from_tuple(new_tuple,
- ER_CREATE_FK_CONSTRAINT);
+ fk_constraint_def_new_from_tuple(
+ new_tuple, ER_CREATE_FK_CONSTRAINT);
if (fk_def == NULL)
return -1;
auto fk_def_guard = make_scoped_guard([=] { free(fk_def); });
@@ -5329,17 +5322,18 @@ on_replace_dd_fk_constraint(struct trigger * /* trigger*/, void *event)
return -1;
if (child_space->def->opts.is_view) {
diag_set(ClientError, ER_CREATE_FK_CONSTRAINT,
- fk_def->name,
- "referencing space can't be VIEW");
+ fk_def->name,
+ "referencing space can't be VIEW");
return -1;
}
- struct space *parent_space = space_cache_find(fk_def->parent_id);
+ struct space *parent_space =
+ space_cache_find(fk_def->parent_id);
if (parent_space == NULL)
return -1;
if (parent_space->def->opts.is_view) {
diag_set(ClientError, ER_CREATE_FK_CONSTRAINT,
- fk_def->name,
- "referenced space can't be VIEW");
+ fk_def->name,
+ "referenced space can't be VIEW");
return -1;
}
/*
@@ -5352,8 +5346,8 @@ on_replace_dd_fk_constraint(struct trigger * /* trigger*/, void *event)
struct index *pk = space_index(child_space, 0);
if (pk != NULL && index_size(pk) > 0) {
diag_set(ClientError, ER_CREATE_FK_CONSTRAINT,
- fk_def->name,
- "referencing space must be empty");
+ fk_def->name,
+ "referencing space must be empty");
return -1;
}
/* Check types of referenced fields. */
@@ -5363,24 +5357,25 @@ on_replace_dd_fk_constraint(struct trigger * /* trigger*/, void *event)
if (child_fieldno >= child_space->def->field_count ||
parent_fieldno >= parent_space->def->field_count) {
diag_set(ClientError, ER_CREATE_FK_CONSTRAINT,
- fk_def->name, "foreign key refers to "
- "nonexistent field");
+ fk_def->name,
+ "foreign key refers to "
+ "nonexistent field");
return -1;
}
struct field_def *child_field =
&child_space->def->fields[child_fieldno];
struct field_def *parent_field =
&parent_space->def->fields[parent_fieldno];
- if (! field_type1_contains_type2(parent_field->type,
- child_field->type)) {
+ if (!field_type1_contains_type2(parent_field->type,
+ child_field->type)) {
diag_set(ClientError, ER_CREATE_FK_CONSTRAINT,
- fk_def->name, "field type mismatch");
+ fk_def->name, "field type mismatch");
return -1;
}
if (child_field->coll_id != parent_field->coll_id) {
diag_set(ClientError, ER_CREATE_FK_CONSTRAINT,
- fk_def->name,
- "field collation mismatch");
+ fk_def->name,
+ "field collation mismatch");
return -1;
}
}
@@ -5402,10 +5397,10 @@ on_replace_dd_fk_constraint(struct trigger * /* trigger*/, void *event)
continue;
uint32_t j;
for (j = 0; j < fk_def->field_count; ++j) {
- if (key_def_find_by_fieldno(idx->def->key_def,
- fk_def->links[j].
- parent_field) ==
- NULL)
+ if (key_def_find_by_fieldno(
+ idx->def->key_def,
+ fk_def->links[j].parent_field) ==
+ NULL)
break;
}
if (j != fk_def->field_count)
@@ -5415,15 +5410,16 @@ on_replace_dd_fk_constraint(struct trigger * /* trigger*/, void *event)
}
if (fk_index == NULL) {
diag_set(ClientError, ER_CREATE_FK_CONSTRAINT,
- fk_def->name, "referenced fields don't "
- "compose unique index");
+ fk_def->name,
+ "referenced fields don't "
+ "compose unique index");
return -1;
}
struct fk_constraint *fk =
- (struct fk_constraint *) malloc(sizeof(*fk));
+ (struct fk_constraint *)malloc(sizeof(*fk));
if (fk == NULL) {
- diag_set(OutOfMemory, sizeof(*fk),
- "malloc", "struct fk_constraint");
+ diag_set(OutOfMemory, sizeof(*fk), "malloc",
+ "struct fk_constraint");
return -1;
}
auto fk_guard = make_scoped_guard([=] { free(fk); });
@@ -5431,43 +5427,41 @@ on_replace_dd_fk_constraint(struct trigger * /* trigger*/, void *event)
fk->def = fk_def;
fk->index_id = fk_index->def->iid;
if (old_tuple == NULL) {
- struct trigger *on_rollback =
- txn_alter_trigger_new(on_create_fk_constraint_rollback,
- fk);
+ struct trigger *on_rollback = txn_alter_trigger_new(
+ on_create_fk_constraint_rollback, fk);
if (on_rollback == NULL)
return -1;
if (space_insert_constraint_id(child_space,
CONSTRAINT_TYPE_FK,
fk_def->name) != 0)
return -1;
- rlist_add_entry(&child_space->child_fk_constraint,
- fk, in_child_space);
- rlist_add_entry(&parent_space->parent_fk_constraint,
- fk, in_parent_space);
+ rlist_add_entry(&child_space->child_fk_constraint, fk,
+ in_child_space);
+ rlist_add_entry(&parent_space->parent_fk_constraint, fk,
+ in_parent_space);
txn_stmt_on_rollback(stmt, on_rollback);
- fk_constraint_set_mask(fk,
- &parent_space->fk_constraint_mask,
- FIELD_LINK_PARENT);
+ fk_constraint_set_mask(
+ fk, &parent_space->fk_constraint_mask,
+ FIELD_LINK_PARENT);
fk_constraint_set_mask(fk,
&child_space->fk_constraint_mask,
FIELD_LINK_CHILD);
} else {
- struct fk_constraint *old_fk =
- fk_constraint_remove(&child_space->child_fk_constraint,
- fk_def->name);
+ struct fk_constraint *old_fk = fk_constraint_remove(
+ &child_space->child_fk_constraint,
+ fk_def->name);
rlist_add_entry(&child_space->child_fk_constraint, fk,
in_child_space);
rlist_add_entry(&parent_space->parent_fk_constraint, fk,
in_parent_space);
- struct trigger *on_rollback =
- txn_alter_trigger_new(on_replace_fk_constraint_rollback,
- old_fk);
+ struct trigger *on_rollback = txn_alter_trigger_new(
+ on_replace_fk_constraint_rollback, old_fk);
if (on_rollback == NULL)
return -1;
txn_stmt_on_rollback(stmt, on_rollback);
- struct trigger *on_commit =
- txn_alter_trigger_new(on_drop_or_replace_fk_constraint_commit,
- old_fk);
+ struct trigger *on_commit = txn_alter_trigger_new(
+ on_drop_or_replace_fk_constraint_commit,
+ old_fk);
if (on_commit == NULL)
return -1;
txn_stmt_on_commit(stmt, on_commit);
@@ -5480,28 +5474,26 @@ on_replace_dd_fk_constraint(struct trigger * /* trigger*/, void *event)
/* Drop foreign key. */
struct fk_constraint_def *fk_def =
fk_constraint_def_new_from_tuple(old_tuple,
- ER_DROP_FK_CONSTRAINT);
+ ER_DROP_FK_CONSTRAINT);
if (fk_def == NULL)
return -1;
auto fk_def_guard = make_scoped_guard([=] { free(fk_def); });
struct space *child_space = space_cache_find(fk_def->child_id);
if (child_space == NULL)
return -1;
- struct space *parent_space = space_cache_find(fk_def->parent_id);
+ struct space *parent_space =
+ space_cache_find(fk_def->parent_id);
if (parent_space == NULL)
return -1;
- struct fk_constraint *old_fk=
- fk_constraint_remove(&child_space->child_fk_constraint,
- fk_def->name);
- struct trigger *on_commit =
- txn_alter_trigger_new(on_drop_or_replace_fk_constraint_commit,
- old_fk);
+ struct fk_constraint *old_fk = fk_constraint_remove(
+ &child_space->child_fk_constraint, fk_def->name);
+ struct trigger *on_commit = txn_alter_trigger_new(
+ on_drop_or_replace_fk_constraint_commit, old_fk);
if (on_commit == NULL)
return -1;
txn_stmt_on_commit(stmt, on_commit);
- struct trigger *on_rollback =
- txn_alter_trigger_new(on_drop_fk_constraint_rollback,
- old_fk);
+ struct trigger *on_rollback = txn_alter_trigger_new(
+ on_drop_fk_constraint_rollback, old_fk);
if (on_rollback == NULL)
return -1;
space_delete_constraint_id(child_space, fk_def->name);
@@ -5518,14 +5510,14 @@ static struct ck_constraint_def *
ck_constraint_def_new_from_tuple(struct tuple *tuple)
{
uint32_t name_len;
- const char *name = tuple_field_str(tuple, BOX_CK_CONSTRAINT_FIELD_NAME,
- &name_len);
+ const char *name =
+ tuple_field_str(tuple, BOX_CK_CONSTRAINT_FIELD_NAME, &name_len);
if (name == NULL)
return NULL;
if (name_len > BOX_NAME_MAX) {
diag_set(ClientError, ER_CREATE_CK_CONSTRAINT,
- tt_cstr(name, BOX_INVALID_NAME_MAX),
- "check constraint name is too long");
+ tt_cstr(name, BOX_INVALID_NAME_MAX),
+ "check constraint name is too long");
return NULL;
}
if (identifier_check(name, name_len) != 0)
@@ -5534,26 +5526,25 @@ ck_constraint_def_new_from_tuple(struct tuple *tuple)
if (tuple_field_u32(tuple, BOX_CK_CONSTRAINT_FIELD_SPACE_ID,
&space_id) != 0)
return NULL;
- const char *language_str = tuple_field_cstr(tuple,
- BOX_CK_CONSTRAINT_FIELD_LANGUAGE);
+ const char *language_str =
+ tuple_field_cstr(tuple, BOX_CK_CONSTRAINT_FIELD_LANGUAGE);
if (language_str == NULL)
return NULL;
enum ck_constraint_language language =
STR2ENUM(ck_constraint_language, language_str);
if (language == ck_constraint_language_MAX) {
diag_set(ClientError, ER_FUNCTION_LANGUAGE, language_str,
- tt_cstr(name, name_len));
+ tt_cstr(name, name_len));
return NULL;
}
uint32_t expr_str_len;
- const char *expr_str = tuple_field_str(tuple,
- BOX_CK_CONSTRAINT_FIELD_CODE, &expr_str_len);
+ const char *expr_str = tuple_field_str(
+ tuple, BOX_CK_CONSTRAINT_FIELD_CODE, &expr_str_len);
if (expr_str == NULL)
return NULL;
bool is_enabled = true;
if (tuple_field_count(tuple) > BOX_CK_CONSTRAINT_FIELD_IS_ENABLED) {
- if (tuple_field_bool(tuple,
- BOX_CK_CONSTRAINT_FIELD_IS_ENABLED,
+ if (tuple_field_bool(tuple, BOX_CK_CONSTRAINT_FIELD_IS_ENABLED,
&is_enabled) != 0)
return NULL;
}
@@ -5627,8 +5618,8 @@ on_replace_ck_constraint_rollback(struct trigger *trigger, void * /* event */)
assert(ck != NULL);
struct space *space = space_by_id(ck->def->space_id);
assert(space != NULL);
- struct ck_constraint *new_ck = space_ck_constraint_by_name(space,
- ck->def->name, strlen(ck->def->name));
+ struct ck_constraint *new_ck = space_ck_constraint_by_name(
+ space, ck->def->name, strlen(ck->def->name));
assert(new_ck != NULL);
rlist_del_entry(new_ck, link);
rlist_add_entry(&space->ck_constraint, ck, link);
@@ -5642,7 +5633,7 @@ on_replace_ck_constraint_rollback(struct trigger *trigger, void * /* event */)
static int
on_replace_dd_ck_constraint(struct trigger * /* trigger*/, void *event)
{
- struct txn *txn = (struct txn *) event;
+ struct txn *txn = (struct txn *)event;
struct txn_stmt *stmt = txn_current_stmt(txn);
struct tuple *old_tuple = stmt->old_tuple;
struct tuple *new_tuple = stmt->new_tuple;
@@ -5661,11 +5652,12 @@ on_replace_dd_ck_constraint(struct trigger * /* trigger*/, void *event)
if (new_tuple != NULL) {
bool is_deferred;
if (tuple_field_bool(new_tuple,
- BOX_CK_CONSTRAINT_FIELD_DEFERRED, &is_deferred) != 0)
+ BOX_CK_CONSTRAINT_FIELD_DEFERRED,
+ &is_deferred) != 0)
return -1;
if (is_deferred) {
diag_set(ClientError, ER_UNSUPPORTED, "Tarantool",
- "deferred ck constraints");
+ "deferred ck constraints");
return -1;
}
/* Create or replace check constraint. */
@@ -5673,9 +5665,8 @@ on_replace_dd_ck_constraint(struct trigger * /* trigger*/, void *event)
ck_constraint_def_new_from_tuple(new_tuple);
if (ck_def == NULL)
return -1;
- auto ck_def_guard = make_scoped_guard([=] {
- ck_constraint_def_delete(ck_def);
- });
+ auto ck_def_guard = make_scoped_guard(
+ [=] { ck_constraint_def_delete(ck_def); });
/*
* A corner case: enabling/disabling an existent
* ck constraint doesn't require the object
@@ -5691,7 +5682,7 @@ on_replace_dd_ck_constraint(struct trigger * /* trigger*/, void *event)
bool is_insert = old_ck_constraint == NULL;
if (!is_insert) {
struct ck_constraint_def *old_def =
- old_ck_constraint->def;
+ old_ck_constraint->def;
assert(old_def->space_id == ck_def->space_id);
assert(strcmp(old_def->name, ck_def->name) == 0);
if (old_def->language == ck_def->language &&
@@ -5708,9 +5699,8 @@ on_replace_dd_ck_constraint(struct trigger * /* trigger*/, void *event)
*/
struct index *pk = space_index(space, 0);
if (pk != NULL && index_size(pk) > 0) {
- diag_set(ClientError, ER_CREATE_CK_CONSTRAINT,
- name,
- "referencing space must be empty");
+ diag_set(ClientError, ER_CREATE_CK_CONSTRAINT, name,
+ "referencing space must be empty");
return -1;
}
struct ck_constraint *new_ck_constraint =
@@ -5718,9 +5708,8 @@ on_replace_dd_ck_constraint(struct trigger * /* trigger*/, void *event)
if (new_ck_constraint == NULL)
return -1;
ck_def_guard.is_active = false;
- auto ck_guard = make_scoped_guard([=] {
- ck_constraint_delete(new_ck_constraint);
- });
+ auto ck_guard = make_scoped_guard(
+ [=] { ck_constraint_delete(new_ck_constraint); });
if (space_add_ck_constraint(space, new_ck_constraint) != 0)
return -1;
if (!is_insert) {
@@ -5728,9 +5717,8 @@ on_replace_dd_ck_constraint(struct trigger * /* trigger*/, void *event)
on_rollback->data = old_ck_constraint;
on_rollback->run = on_replace_ck_constraint_rollback;
} else {
- if (space_insert_constraint_id(space,
- CONSTRAINT_TYPE_CK,
- name) != 0) {
+ if (space_insert_constraint_id(
+ space, CONSTRAINT_TYPE_CK, name) != 0) {
space_remove_ck_constraint(space,
new_ck_constraint);
return -1;
@@ -5745,8 +5733,8 @@ on_replace_dd_ck_constraint(struct trigger * /* trigger*/, void *event)
assert(new_tuple == NULL && old_tuple != NULL);
/* Drop check constraint. */
uint32_t name_len;
- const char *name = tuple_field_str(old_tuple,
- BOX_CK_CONSTRAINT_FIELD_NAME, &name_len);
+ const char *name = tuple_field_str(
+ old_tuple, BOX_CK_CONSTRAINT_FIELD_NAME, &name_len);
if (name == NULL)
return -1;
struct ck_constraint *old_ck_constraint =
@@ -5773,8 +5761,8 @@ on_replace_dd_ck_constraint(struct trigger * /* trigger*/, void *event)
static int
on_replace_dd_func_index(struct trigger *trigger, void *event)
{
- (void) trigger;
- struct txn *txn = (struct txn *) event;
+ (void)trigger;
+ struct txn *txn = (struct txn *)event;
struct txn_stmt *stmt = txn_current_stmt(txn);
struct tuple *old_tuple = stmt->old_tuple;
struct tuple *new_tuple = stmt->new_tuple;
@@ -5794,7 +5782,7 @@ on_replace_dd_func_index(struct trigger *trigger, void *event)
&index_id) != 0)
return -1;
if (tuple_field_u32(new_tuple, BOX_FUNC_INDEX_FUNCTION_ID,
- &fid) != 0)
+ &fid) != 0)
return -1;
space = space_cache_find(space_id);
if (space == NULL)
@@ -5804,15 +5792,16 @@ on_replace_dd_func_index(struct trigger *trigger, void *event)
return -1;
func = func_by_id(fid);
if (func == NULL) {
- diag_set(ClientError, ER_NO_SUCH_FUNCTION, int2str(fid));
+ diag_set(ClientError, ER_NO_SUCH_FUNCTION,
+ int2str(fid));
return -1;
}
if (func_index_check_func(func) != 0)
return -1;
if (index->def->opts.func_id != func->def->fid) {
diag_set(ClientError, ER_WRONG_INDEX_OPTIONS, 0,
- "Function ids defined in _index and "
- "_func_index don't match");
+ "Function ids defined in _index and "
+ "_func_index don't match");
return -1;
}
} else if (old_tuple != NULL && new_tuple == NULL) {
@@ -5833,7 +5822,8 @@ on_replace_dd_func_index(struct trigger *trigger, void *event)
func = NULL;
} else {
assert(old_tuple != NULL && new_tuple != NULL);
- diag_set(ClientError, ER_UNSUPPORTED, "functional index", "alter");
+ diag_set(ClientError, ER_UNSUPPORTED, "functional index",
+ "alter");
return -1;
}
@@ -5847,11 +5837,12 @@ on_replace_dd_func_index(struct trigger *trigger, void *event)
alter = alter_space_new(space);
if (alter == NULL)
return -1;
- auto scoped_guard = make_scoped_guard([=] {alter_space_delete(alter);});
+ auto scoped_guard =
+ make_scoped_guard([=] { alter_space_delete(alter); });
if (alter_space_move_indexes(alter, 0, index->def->iid) != 0)
return -1;
try {
- (void) new RebuildFuncIndex(alter, index->def, func);
+ (void)new RebuildFuncIndex(alter, index->def, func);
} catch (Exception *e) {
return -1;
}
@@ -5859,8 +5850,8 @@ on_replace_dd_func_index(struct trigger *trigger, void *event)
space->index_id_max + 1) != 0)
return -1;
try {
- (void) new MoveCkConstraints(alter);
- (void) new UpdateSchemaVersion(alter);
+ (void)new MoveCkConstraints(alter);
+ (void)new UpdateSchemaVersion(alter);
alter_space_do(stmt, alter);
} catch (Exception *e) {
return -1;
@@ -5870,68 +5861,58 @@ on_replace_dd_func_index(struct trigger *trigger, void *event)
return 0;
}
-struct trigger alter_space_on_replace_space = {
- RLIST_LINK_INITIALIZER, on_replace_dd_space, NULL, NULL
-};
+struct trigger alter_space_on_replace_space = { RLIST_LINK_INITIALIZER,
+ on_replace_dd_space, NULL,
+ NULL };
-struct trigger alter_space_on_replace_index = {
- RLIST_LINK_INITIALIZER, on_replace_dd_index, NULL, NULL
-};
+struct trigger alter_space_on_replace_index = { RLIST_LINK_INITIALIZER,
+ on_replace_dd_index, NULL,
+ NULL };
-struct trigger on_replace_truncate = {
- RLIST_LINK_INITIALIZER, on_replace_dd_truncate, NULL, NULL
-};
+struct trigger on_replace_truncate = { RLIST_LINK_INITIALIZER,
+ on_replace_dd_truncate, NULL, NULL };
-struct trigger on_replace_schema = {
- RLIST_LINK_INITIALIZER, on_replace_dd_schema, NULL, NULL
-};
+struct trigger on_replace_schema = { RLIST_LINK_INITIALIZER,
+ on_replace_dd_schema, NULL, NULL };
-struct trigger on_replace_user = {
- RLIST_LINK_INITIALIZER, on_replace_dd_user, NULL, NULL
-};
+struct trigger on_replace_user = { RLIST_LINK_INITIALIZER, on_replace_dd_user,
+ NULL, NULL };
-struct trigger on_replace_func = {
- RLIST_LINK_INITIALIZER, on_replace_dd_func, NULL, NULL
-};
+struct trigger on_replace_func = { RLIST_LINK_INITIALIZER, on_replace_dd_func,
+ NULL, NULL };
-struct trigger on_replace_collation = {
- RLIST_LINK_INITIALIZER, on_replace_dd_collation, NULL, NULL
-};
+struct trigger on_replace_collation = { RLIST_LINK_INITIALIZER,
+ on_replace_dd_collation, NULL, NULL };
-struct trigger on_replace_priv = {
- RLIST_LINK_INITIALIZER, on_replace_dd_priv, NULL, NULL
-};
+struct trigger on_replace_priv = { RLIST_LINK_INITIALIZER, on_replace_dd_priv,
+ NULL, NULL };
-struct trigger on_replace_cluster = {
- RLIST_LINK_INITIALIZER, on_replace_dd_cluster, NULL, NULL
-};
+struct trigger on_replace_cluster = { RLIST_LINK_INITIALIZER,
+ on_replace_dd_cluster, NULL, NULL };
-struct trigger on_replace_sequence = {
- RLIST_LINK_INITIALIZER, on_replace_dd_sequence, NULL, NULL
-};
+struct trigger on_replace_sequence = { RLIST_LINK_INITIALIZER,
+ on_replace_dd_sequence, NULL, NULL };
-struct trigger on_replace_sequence_data = {
- RLIST_LINK_INITIALIZER, on_replace_dd_sequence_data, NULL, NULL
-};
+struct trigger on_replace_sequence_data = { RLIST_LINK_INITIALIZER,
+ on_replace_dd_sequence_data, NULL,
+ NULL };
-struct trigger on_replace_space_sequence = {
- RLIST_LINK_INITIALIZER, on_replace_dd_space_sequence, NULL, NULL
-};
+struct trigger on_replace_space_sequence = { RLIST_LINK_INITIALIZER,
+ on_replace_dd_space_sequence, NULL,
+ NULL };
-struct trigger on_replace_trigger = {
- RLIST_LINK_INITIALIZER, on_replace_dd_trigger, NULL, NULL
-};
+struct trigger on_replace_trigger = { RLIST_LINK_INITIALIZER,
+ on_replace_dd_trigger, NULL, NULL };
-struct trigger on_replace_fk_constraint = {
- RLIST_LINK_INITIALIZER, on_replace_dd_fk_constraint, NULL, NULL
-};
+struct trigger on_replace_fk_constraint = { RLIST_LINK_INITIALIZER,
+ on_replace_dd_fk_constraint, NULL,
+ NULL };
-struct trigger on_replace_ck_constraint = {
- RLIST_LINK_INITIALIZER, on_replace_dd_ck_constraint, NULL, NULL
-};
+struct trigger on_replace_ck_constraint = { RLIST_LINK_INITIALIZER,
+ on_replace_dd_ck_constraint, NULL,
+ NULL };
-struct trigger on_replace_func_index = {
- RLIST_LINK_INITIALIZER, on_replace_dd_func_index, NULL, NULL
-};
+struct trigger on_replace_func_index = { RLIST_LINK_INITIALIZER,
+ on_replace_dd_func_index, NULL, NULL };
/* vim: set foldmethod=marker */
diff --git a/src/box/applier.cc b/src/box/applier.cc
index 7686d6c..06e55e6 100644
--- a/src/box/applier.cc
+++ b/src/box/applier.cc
@@ -63,8 +63,7 @@ static inline void
applier_set_state(struct applier *applier, enum applier_state state)
{
applier->state = state;
- say_debug("=> %s", applier_state_strs[state] +
- strlen("APPLIER_"));
+ say_debug("=> %s", applier_state_strs[state] + strlen("APPLIER_"));
trigger_run_xc(&applier->on_state, applier);
}
@@ -185,9 +184,8 @@ applier_writer_f(va_list ap)
struct xrow_header xrow;
xrow_encode_vclock(&xrow, &replicaset.vclock);
coio_write_xrow(&io, &xrow);
- ERROR_INJECT(ERRINJ_APPLIER_SLOW_ACK, {
- fiber_sleep(0.01);
- });
+ ERROR_INJECT(ERRINJ_APPLIER_SLOW_ACK,
+ { fiber_sleep(0.01); });
/*
* Even if new ACK is requested during the
* write, don't send it again right away.
@@ -385,16 +383,17 @@ applier_connect(struct applier *applier)
coio_read_xrow(coio, ibuf, &row);
if (row.type == IPROTO_OK) {
xrow_decode_ballot_xc(&row, &applier->ballot);
- } else try {
- xrow_decode_error_xc(&row);
- } catch (ClientError *e) {
- if (e->errcode() != ER_UNKNOWN_REQUEST_TYPE)
- e->raise();
- /*
+ } else
+ try {
+ xrow_decode_error_xc(&row);
+ } catch (ClientError *e) {
+ if (e->errcode() != ER_UNKNOWN_REQUEST_TYPE)
+ e->raise();
+ /*
* Master isn't aware of IPROTO_VOTE request.
* It's OK - we can proceed without it.
*/
- }
+ }
applier_set_state(applier, APPLIER_CONNECTED);
@@ -442,7 +441,7 @@ applier_wait_snapshot(struct applier *applier)
xrow_decode_error_xc(&row); /* re-throw error */
} else if (row.type != IPROTO_OK) {
tnt_raise(ClientError, ER_UNKNOWN_REQUEST_TYPE,
- (uint32_t) row.type);
+ (uint32_t)row.type);
}
/*
* Start vclock. The vclock of the checkpoint
@@ -464,7 +463,8 @@ applier_wait_snapshot(struct applier *applier)
if (apply_snapshot_row(&row) != 0)
diag_raise();
if (++row_count % 100000 == 0)
- say_info("%.1fM rows received", row_count / 1e6);
+ say_info("%.1fM rows received",
+ row_count / 1e6);
} else if (row.type == IPROTO_OK) {
if (applier->version_id < version_id(1, 7, 0)) {
/*
@@ -478,10 +478,10 @@ applier_wait_snapshot(struct applier *applier)
}
break; /* end of stream */
} else if (iproto_type_is_error(row.type)) {
- xrow_decode_error_xc(&row); /* rethrow error */
+ xrow_decode_error_xc(&row); /* rethrow error */
} else {
tnt_raise(ClientError, ER_UNKNOWN_REQUEST_TYPE,
- (uint32_t) row.type);
+ (uint32_t)row.type);
}
}
@@ -531,7 +531,8 @@ applier_wait_register(struct applier *applier, uint64_t row_count)
if (apply_final_join_row(&row) != 0)
diag_raise();
if (++row_count % 100000 == 0)
- say_info("%.1fM rows received", row_count / 1e6);
+ say_info("%.1fM rows received",
+ row_count / 1e6);
} else if (row.type == IPROTO_OK) {
/*
* Current vclock. This is not used now,
@@ -540,10 +541,10 @@ applier_wait_register(struct applier *applier, uint64_t row_count)
++row_count;
break; /* end of stream */
} else if (iproto_type_is_error(row.type)) {
- xrow_decode_error_xc(&row); /* rethrow error */
+ xrow_decode_error_xc(&row); /* rethrow error */
} else {
tnt_raise(ClientError, ER_UNKNOWN_REQUEST_TYPE,
- (uint32_t) row.type);
+ (uint32_t)row.type);
}
}
@@ -695,8 +696,7 @@ applier_read_tx(struct applier *applier, struct stailq *rows)
"transaction.");
}
if (tsn != row->tsn)
- tnt_raise(ClientError, ER_UNSUPPORTED,
- "replication",
+ tnt_raise(ClientError, ER_UNSUPPORTED, "replication",
"interleaving transactions");
assert(row->bodycnt <= 1);
@@ -708,8 +708,8 @@ applier_read_tx(struct applier *applier, struct stailq *rows)
* buffer will not be used while the
* transaction is applied.
*/
- void *new_base = region_alloc(&fiber()->gc,
- row->body->iov_len);
+ void *new_base =
+ region_alloc(&fiber()->gc, row->body->iov_len);
if (new_base == NULL)
tnt_raise(OutOfMemory, row->body->iov_len,
"region", "xrow body");
@@ -720,8 +720,8 @@ applier_read_tx(struct applier *applier, struct stailq *rows)
}
stailq_add_tail(rows, &tx_row->next);
- } while (!stailq_last_entry(rows, struct applier_tx_row,
- next)->row.is_commit);
+ } while (!stailq_last_entry(rows, struct applier_tx_row, next)
+ ->row.is_commit);
}
static void
@@ -740,8 +740,7 @@ applier_rollback_by_wal_io(void)
* the journal engine.
*/
diag_set(ClientError, ER_WAL_IO);
- diag_set_error(&replicaset.applier.diag,
- diag_last_error(diag_get()));
+ diag_set_error(&replicaset.applier.diag, diag_last_error(diag_get()));
/* Broadcast the rollback across all appliers. */
trigger_run(&replicaset.applier.on_rollback, NULL);
@@ -753,8 +752,8 @@ applier_rollback_by_wal_io(void)
static int
applier_txn_rollback_cb(struct trigger *trigger, void *event)
{
- (void) trigger;
- struct txn *txn = (struct txn *) event;
+ (void)trigger;
+ struct txn *txn = (struct txn *)event;
/*
* Synchronous transaction rollback due to receiving a
* ROLLBACK entry is a normal event and requires no
@@ -768,8 +767,8 @@ applier_txn_rollback_cb(struct trigger *trigger, void *event)
static int
applier_txn_wal_write_cb(struct trigger *trigger, void *event)
{
- (void) trigger;
- (void) event;
+ (void)trigger;
+ (void)event;
/* Broadcast the WAL write across all appliers. */
trigger_run(&replicaset.applier.on_wal_write, NULL);
return 0;
@@ -777,7 +776,7 @@ applier_txn_wal_write_cb(struct trigger *trigger, void *event)
struct synchro_entry {
/** Encoded form of a synchro record. */
- struct synchro_body_bin body_bin;
+ struct synchro_body_bin body_bin;
/** xrow to write, used by the journal engine. */
struct xrow_header row;
@@ -818,8 +817,7 @@ apply_synchro_row_cb(struct journal_entry *entry)
* the journal engine in async write way.
*/
static struct synchro_entry *
-synchro_entry_new(struct xrow_header *applier_row,
- struct synchro_request *req)
+synchro_entry_new(struct xrow_header *applier_row, struct synchro_request *req)
{
struct synchro_entry *entry;
size_t size = sizeof(*entry) + sizeof(struct xrow_header *);
@@ -884,7 +882,8 @@ applier_handle_raft(struct applier *applier, struct xrow_header *row)
{
assert(iproto_type_is_raft_request(row->type));
if (applier->instance_id == 0) {
- diag_set(ClientError, ER_PROTOCOL, "Can't apply a Raft request "
+ diag_set(ClientError, ER_PROTOCOL,
+ "Can't apply a Raft request "
"from an instance without an ID");
return -1;
}
@@ -917,8 +916,8 @@ applier_apply_tx(struct applier *applier, struct stailq *rows)
*/
if (!raft_is_source_allowed(applier->instance_id))
return 0;
- struct xrow_header *first_row = &stailq_first_entry(rows,
- struct applier_tx_row, next)->row;
+ struct xrow_header *first_row =
+ &stailq_first_entry(rows, struct applier_tx_row, next)->row;
struct xrow_header *last_row;
last_row = &stailq_last_entry(rows, struct applier_tx_row, next)->row;
struct replica *replica = replica_by_id(first_row->replica_id);
@@ -929,10 +928,10 @@ applier_apply_tx(struct applier *applier, struct stailq *rows)
* that belong to the same server id.
*/
struct latch *latch = (replica ? &replica->order_latch :
- &replicaset.applier.order_latch);
+ &replicaset.applier.order_latch);
latch_lock(latch);
- if (vclock_get(&replicaset.applier.vclock,
- last_row->replica_id) >= last_row->lsn) {
+ if (vclock_get(&replicaset.applier.vclock, last_row->replica_id) >=
+ last_row->lsn) {
latch_unlock(latch);
return 0;
} else if (vclock_get(&replicaset.applier.vclock,
@@ -944,9 +943,9 @@ applier_apply_tx(struct applier *applier, struct stailq *rows)
*/
struct xrow_header *tmp;
while (true) {
- tmp = &stailq_first_entry(rows,
- struct applier_tx_row,
- next)->row;
+ tmp = &stailq_first_entry(rows, struct applier_tx_row,
+ next)
+ ->row;
if (tmp->lsn <= vclock_get(&replicaset.applier.vclock,
tmp->replica_id)) {
stailq_shift(rows);
@@ -981,7 +980,8 @@ applier_apply_tx(struct applier *applier, struct stailq *rows)
latch_unlock(latch);
return -1;
}
- stailq_foreach_entry(item, rows, next) {
+ stailq_foreach_entry(item, rows, next)
+ {
struct xrow_header *row = &item->row;
int res = apply_row(row);
if (res != 0) {
@@ -1017,18 +1017,18 @@ applier_apply_tx(struct applier *applier, struct stailq *rows)
* new changes which local rows may overwrite.
* Raise an error.
*/
- diag_set(ClientError, ER_UNSUPPORTED,
- "Replication", "distributed transactions");
+ diag_set(ClientError, ER_UNSUPPORTED, "Replication",
+ "distributed transactions");
goto rollback;
}
/* We are ready to submit txn to wal. */
struct trigger *on_rollback, *on_wal_write;
size_t size;
- on_rollback = region_alloc_object(&txn->region, typeof(*on_rollback),
- &size);
- on_wal_write = region_alloc_object(&txn->region, typeof(*on_wal_write),
- &size);
+ on_rollback =
+ region_alloc_object(&txn->region, typeof(*on_rollback), &size);
+ on_wal_write =
+ region_alloc_object(&txn->region, typeof(*on_wal_write), &size);
if (on_rollback == NULL || on_wal_write == NULL) {
diag_set(OutOfMemory, size, "region_alloc_object",
"on_rollback/on_wal_write");
@@ -1081,7 +1081,7 @@ applier_signal_ack(struct applier *applier)
static int
applier_on_wal_write(struct trigger *trigger, void *event)
{
- (void) event;
+ (void)event;
struct applier *applier = (struct applier *)trigger->data;
applier_signal_ack(applier);
return 0;
@@ -1093,7 +1093,7 @@ applier_on_wal_write(struct trigger *trigger, void *event)
static int
applier_on_rollback(struct trigger *trigger, void *event)
{
- (void) event;
+ (void)event;
struct applier *applier = (struct applier *)trigger->data;
/* Setup a shared error. */
if (!diag_is_empty(&replicaset.applier.diag)) {
@@ -1133,7 +1133,7 @@ applier_subscribe(struct applier *applier)
if (applier->version_id >= version_id(1, 6, 7)) {
coio_read_xrow(coio, ibuf, &row);
if (iproto_type_is_error(row.type)) {
- xrow_decode_error_xc(&row); /* error */
+ xrow_decode_error_xc(&row); /* error */
} else if (row.type != IPROTO_OK) {
tnt_raise(ClientError, ER_PROTOCOL,
"Invalid response to SUBSCRIBE");
@@ -1147,8 +1147,9 @@ applier_subscribe(struct applier *applier)
* its and master's cluster ids match.
*/
vclock_create(&applier->remote_vclock_at_subscribe);
- xrow_decode_subscribe_response_xc(&row, &cluster_id,
- &applier->remote_vclock_at_subscribe);
+ xrow_decode_subscribe_response_xc(
+ &row, &cluster_id,
+ &applier->remote_vclock_at_subscribe);
applier->instance_id = row.replica_id;
/*
* If master didn't send us its cluster id
@@ -1204,7 +1205,8 @@ applier_subscribe(struct applier *applier)
char name[FIBER_NAME_MAX];
int pos = snprintf(name, sizeof(name), "applierw/");
- uri_format(name + pos, sizeof(name) - pos, &applier->uri, false);
+ uri_format(name + pos, sizeof(name) - pos, &applier->uri,
+ false);
applier->writer = fiber_new_xc(name, applier_writer_f);
fiber_set_joinable(applier->writer, true);
@@ -1254,14 +1256,14 @@ applier_subscribe(struct applier *applier)
* and check applier state.
*/
struct xrow_header *first_row =
- &stailq_first_entry(&rows, struct applier_tx_row,
- next)->row;
+ &stailq_first_entry(&rows, struct applier_tx_row, next)
+ ->row;
raft_process_heartbeat(applier->instance_id);
if (first_row->lsn == 0) {
if (unlikely(iproto_type_is_raft_request(
- first_row->type))) {
- if (applier_handle_raft(applier,
- first_row) != 0)
+ first_row->type))) {
+ if (applier_handle_raft(applier, first_row) !=
+ 0)
diag_raise();
}
applier_signal_ack(applier);
@@ -1381,7 +1383,8 @@ applier_f(va_list ap)
} else if (e->errcode() == ER_SYSTEM) {
/* System error from master instance. */
applier_log_error(applier, e);
- applier_disconnect(applier, APPLIER_DISCONNECTED);
+ applier_disconnect(applier,
+ APPLIER_DISCONNECTED);
goto reconnect;
} else {
/* Unrecoverable errors */
@@ -1448,7 +1451,7 @@ applier_f(va_list ap)
*
* See: https://github.com/tarantool/tarantool/issues/136
*/
-reconnect:
+ reconnect:
fiber_sleep(replication_reconnect_interval());
}
return 0;
@@ -1488,8 +1491,8 @@ applier_stop(struct applier *applier)
struct applier *
applier_new(const char *uri)
{
- struct applier *applier = (struct applier *)
- calloc(1, sizeof(struct applier));
+ struct applier *applier =
+ (struct applier *)calloc(1, sizeof(struct applier));
if (applier == NULL) {
diag_set(OutOfMemory, sizeof(*applier), "malloc",
"struct applier");
@@ -1503,7 +1506,7 @@ applier_new(const char *uri)
int rc = uri_parse(&applier->uri, applier->source);
/* URI checked by box_check_replication() */
assert(rc == 0 && applier->uri.service != NULL);
- (void) rc;
+ (void)rc;
applier->last_row_time = ev_monotonic_now(loop());
rlist_create(&applier->on_state);
@@ -1554,7 +1557,7 @@ struct applier_on_state {
static int
applier_on_state_f(struct trigger *trigger, void *event)
{
- (void) event;
+ (void)event;
struct applier_on_state *on_state =
container_of(trigger, struct applier_on_state, base);
@@ -1573,8 +1576,7 @@ applier_on_state_f(struct trigger *trigger, void *event)
}
static inline void
-applier_add_on_state(struct applier *applier,
- struct applier_on_state *trigger,
+applier_add_on_state(struct applier *applier, struct applier_on_state *trigger,
enum applier_state desired_state)
{
trigger_create(&trigger->base, applier_on_state_f, NULL, NULL);
diff --git a/src/box/applier.h b/src/box/applier.h
index 15ca1fc..d519cee 100644
--- a/src/box/applier.h
+++ b/src/box/applier.h
@@ -47,24 +47,24 @@
enum { APPLIER_SOURCE_MAXLEN = 1024 }; /* enough to fit URI with passwords */
-#define applier_STATE(_) \
- _(APPLIER_OFF, 0) \
- _(APPLIER_CONNECT, 1) \
- _(APPLIER_CONNECTED, 2) \
- _(APPLIER_AUTH, 3) \
- _(APPLIER_READY, 4) \
- _(APPLIER_INITIAL_JOIN, 5) \
- _(APPLIER_FINAL_JOIN, 6) \
- _(APPLIER_JOINED, 7) \
- _(APPLIER_SYNC, 8) \
- _(APPLIER_FOLLOW, 9) \
- _(APPLIER_STOPPED, 10) \
- _(APPLIER_DISCONNECTED, 11) \
- _(APPLIER_LOADING, 12) \
- _(APPLIER_FETCH_SNAPSHOT, 13) \
- _(APPLIER_FETCHED_SNAPSHOT, 14) \
- _(APPLIER_REGISTER, 15) \
- _(APPLIER_REGISTERED, 16) \
+#define applier_STATE(_) \
+ _(APPLIER_OFF, 0) \
+ _(APPLIER_CONNECT, 1) \
+ _(APPLIER_CONNECTED, 2) \
+ _(APPLIER_AUTH, 3) \
+ _(APPLIER_READY, 4) \
+ _(APPLIER_INITIAL_JOIN, 5) \
+ _(APPLIER_FINAL_JOIN, 6) \
+ _(APPLIER_JOINED, 7) \
+ _(APPLIER_SYNC, 8) \
+ _(APPLIER_FOLLOW, 9) \
+ _(APPLIER_STOPPED, 10) \
+ _(APPLIER_DISCONNECTED, 11) \
+ _(APPLIER_LOADING, 12) \
+ _(APPLIER_FETCH_SNAPSHOT, 13) \
+ _(APPLIER_FETCHED_SNAPSHOT, 14) \
+ _(APPLIER_REGISTER, 15) \
+ _(APPLIER_REGISTERED, 16)
/** States for the applier */
ENUM(applier_state, applier_STATE);
diff --git a/src/box/authentication.cc b/src/box/authentication.cc
index a7a3587..e62723a 100644
--- a/src/box/authentication.cc
+++ b/src/box/authentication.cc
@@ -68,7 +68,7 @@ authenticate(const char *user_name, uint32_t len, const char *salt,
if (part_count < 2) {
/* Expected at least: authentication mechanism and data. */
tnt_raise(ClientError, ER_INVALID_MSGPACK,
- "authentication request body");
+ "authentication request body");
}
mp_next(&tuple); /* Skip authentication mechanism. */
if (mp_typeof(*tuple) == MP_STR) {
@@ -81,12 +81,12 @@ authenticate(const char *user_name, uint32_t len, const char *salt,
scramble = mp_decode_bin(&tuple, &scramble_len);
} else {
tnt_raise(ClientError, ER_INVALID_MSGPACK,
- "authentication scramble");
+ "authentication scramble");
}
if (scramble_len != SCRAMBLE_SIZE) {
/* Authentication mechanism, data. */
tnt_raise(ClientError, ER_INVALID_MSGPACK,
- "invalid scramble size");
+ "invalid scramble size");
}
if (scramble_check(scramble, salt, user->def->hash2)) {
@@ -97,7 +97,7 @@ authenticate(const char *user_name, uint32_t len, const char *salt,
}
ok:
/* check and run auth triggers on success */
- if (! rlist_empty(&session_on_auth) &&
+ if (!rlist_empty(&session_on_auth) &&
session_run_on_auth_triggers(&auth_res) != 0)
diag_raise();
credentials_reset(&session->credentials, user);
diff --git a/src/box/authentication.h b/src/box/authentication.h
index 9935e35..0f37c7d 100644
--- a/src/box/authentication.h
+++ b/src/box/authentication.h
@@ -43,7 +43,6 @@ struct on_auth_trigger_ctx {
bool is_authenticated;
};
-
void
authenticate(const char *user_name, uint32_t len, const char *salt,
const char *tuple);
diff --git a/src/box/bind.c b/src/box/bind.c
index d45a0f9..c871e5d 100644
--- a/src/box/bind.c
+++ b/src/box/bind.c
@@ -41,7 +41,7 @@ sql_bind_name(const struct sql_bind *bind)
if (bind->name)
return tt_sprintf("'%.*s'", bind->name_len, bind->name);
else
- return tt_sprintf("%d", (int) bind->pos);
+ return tt_sprintf("%d", (int)bind->pos);
}
int
@@ -132,14 +132,14 @@ sql_bind_list_decode(const char *data, struct sql_bind **out_bind)
return 0;
if (bind_count > SQL_BIND_PARAMETER_MAX) {
diag_set(ClientError, ER_SQL_BIND_PARAMETER_MAX,
- (int) bind_count);
+ (int)bind_count);
return -1;
}
struct region *region = &fiber()->gc;
uint32_t used = region_used(region);
size_t size;
- struct sql_bind *bind = region_alloc_array(region, typeof(bind[0]),
- bind_count, &size);
+ struct sql_bind *bind =
+ region_alloc_array(region, typeof(bind[0]), bind_count, &size);
if (bind == NULL) {
diag_set(OutOfMemory, size, "region_alloc_array", "bind");
return -1;
@@ -155,8 +155,7 @@ sql_bind_list_decode(const char *data, struct sql_bind **out_bind)
}
int
-sql_bind_column(struct sql_stmt *stmt, const struct sql_bind *p,
- uint32_t pos)
+sql_bind_column(struct sql_stmt *stmt, const struct sql_bind *p, uint32_t pos)
{
if (p->name != NULL) {
pos = sql_bind_parameter_lindex(stmt, p->name, p->name_len);
@@ -189,7 +188,7 @@ sql_bind_column(struct sql_stmt *stmt, const struct sql_bind *p,
case MP_NIL:
return sql_bind_null(stmt, pos);
case MP_BIN:
- return sql_bind_blob64(stmt, pos, (const void *) p->s, p->bytes,
+ return sql_bind_blob64(stmt, pos, (const void *)p->s, p->bytes,
SQL_STATIC);
default:
unreachable();
diff --git a/src/box/bind.h b/src/box/bind.h
index 568c558..58fabd3 100644
--- a/src/box/bind.h
+++ b/src/box/bind.h
@@ -116,8 +116,7 @@ sql_bind_decode(struct sql_bind *bind, int i, const char **packet);
* @retval -1 SQL error.
*/
int
-sql_bind_column(struct sql_stmt *stmt, const struct sql_bind *p,
- uint32_t pos);
+sql_bind_column(struct sql_stmt *stmt, const struct sql_bind *p, uint32_t pos);
/**
* Bind parameter values to the prepared statement.
diff --git a/src/box/blackhole.c b/src/box/blackhole.c
index 69f1deb..46d449c 100644
--- a/src/box/blackhole.c
+++ b/src/box/blackhole.c
@@ -52,8 +52,8 @@ blackhole_space_execute_replace(struct space *space, struct txn *txn,
struct request *request, struct tuple **result)
{
struct txn_stmt *stmt = txn_current_stmt(txn);
- stmt->new_tuple = tuple_new(space->format, request->tuple,
- request->tuple_end);
+ stmt->new_tuple =
+ tuple_new(space->format, request->tuple, request->tuple_end);
if (stmt->new_tuple == NULL)
return -1;
tuple_ref(stmt->new_tuple);
@@ -146,8 +146,7 @@ blackhole_engine_create_space(struct engine *engine, struct space_def *def,
struct space *space = (struct space *)calloc(1, sizeof(*space));
if (space == NULL) {
- diag_set(OutOfMemory, sizeof(*space),
- "malloc", "struct space");
+ diag_set(OutOfMemory, sizeof(*space), "malloc", "struct space");
return NULL;
}
@@ -163,8 +162,8 @@ blackhole_engine_create_space(struct engine *engine, struct space_def *def,
}
tuple_format_ref(format);
- if (space_create(space, engine, &blackhole_space_vtab,
- def, key_list, format) != 0) {
+ if (space_create(space, engine, &blackhole_space_vtab, def, key_list,
+ format) != 0) {
tuple_format_unref(format);
free(space);
return NULL;
@@ -205,8 +204,8 @@ blackhole_engine_new(void)
{
struct engine *engine = calloc(1, sizeof(*engine));
if (engine == NULL) {
- diag_set(OutOfMemory, sizeof(*engine),
- "malloc", "struct engine");
+ diag_set(OutOfMemory, sizeof(*engine), "malloc",
+ "struct engine");
return NULL;
}
diff --git a/src/box/box.cc b/src/box/box.cc
index 6ec813c..10658d9 100644
--- a/src/box/box.cc
+++ b/src/box/box.cc
@@ -88,7 +88,8 @@ struct rmean *rmean_box;
struct rlist box_on_shutdown = RLIST_HEAD_INITIALIZER(box_on_shutdown);
-static void title(const char *new_status)
+static void
+title(const char *new_status)
{
snprintf(status, sizeof(status), "%s", new_status);
title_set_status(new_status);
@@ -192,10 +193,9 @@ box_check_writable_xc(void)
static void
box_check_memtx_min_tuple_size(ssize_t memtx_min_tuple_size)
{
-
if (memtx_min_tuple_size < 8 || memtx_min_tuple_size > 1048280)
- tnt_raise(ClientError, ER_CFG, "memtx_min_tuple_size",
- "specified value is out of bounds");
+ tnt_raise(ClientError, ER_CFG, "memtx_min_tuple_size",
+ "specified value is out of bounds");
}
int
@@ -254,7 +254,7 @@ box_process_rw(struct request *request, struct space *space,
}
if (res < 0)
goto error;
- fiber_gc();
+ fiber_gc();
}
if (return_tuple) {
tuple_bless(tuple);
@@ -354,10 +354,9 @@ struct recovery_journal {
* min/max LSN of created LSM levels.
*/
static int
-recovery_journal_write(struct journal *base,
- struct journal_entry *entry)
+recovery_journal_write(struct journal *base, struct journal_entry *entry)
{
- struct recovery_journal *journal = (struct recovery_journal *) base;
+ struct recovery_journal *journal = (struct recovery_journal *)base;
entry->res = vclock_sum(journal->vclock);
/*
* Since there're no actual writes, fire a
@@ -401,7 +400,8 @@ apply_wal_row(struct xstream *stream, struct xrow_header *row)
if (request.type != IPROTO_NOP) {
struct space *space = space_cache_find_xc(request.space_id);
if (box_process_rw(&request, space, NULL) != 0) {
- say_error("error applying row: %s", request_str(&request));
+ say_error("error applying row: %s",
+ request_str(&request));
diag_raise();
}
}
@@ -450,7 +450,7 @@ box_check_say(void)
enum say_format format = say_format_by_name(log_format);
if (format == say_format_MAX)
tnt_raise(ClientError, ER_CFG, "log_format",
- "expected 'plain' or 'json'");
+ "expected 'plain' or 'json'");
if (type == SAY_LOGGER_SYSLOG && format == SF_JSON) {
tnt_raise(ClientError, ER_CFG, "log_format",
"'json' can't be used with syslog logger");
@@ -662,15 +662,14 @@ box_check_wal_mode(const char *mode_name)
int mode = strindex(wal_mode_STRS, mode_name, WAL_MODE_MAX);
if (mode == WAL_MODE_MAX)
tnt_raise(ClientError, ER_CFG, "wal_mode", mode_name);
- return (enum wal_mode) mode;
+ return (enum wal_mode)mode;
}
static void
box_check_readahead(int readahead)
{
enum { READAHEAD_MIN = 128, READAHEAD_MAX = 2147483647 };
- if (readahead < (int) READAHEAD_MIN ||
- readahead > (int) READAHEAD_MAX) {
+ if (readahead < (int)READAHEAD_MIN || readahead > (int)READAHEAD_MAX) {
tnt_raise(ClientError, ER_CFG, "readahead",
"specified value is out of bounds");
}
@@ -700,11 +699,11 @@ static ssize_t
box_check_memory_quota(const char *quota_name)
{
int64_t size = cfg_geti64(quota_name);
- if (size >= 0 && (size_t) size <= QUOTA_MAX)
+ if (size >= 0 && (size_t)size <= QUOTA_MAX)
return size;
diag_set(ClientError, ER_CFG, quota_name,
tt_sprintf("must be >= 0 and <= %zu, but it is %lld",
- QUOTA_MAX, size));
+ QUOTA_MAX, size));
return -1;
}
@@ -832,14 +831,13 @@ box_set_election_timeout(void)
static struct applier **
cfg_get_replication(int *p_count)
{
-
/* Use static buffer for result */
static struct applier *appliers[VCLOCK_MAX];
int count = cfg_getarr_size("replication");
if (count >= VCLOCK_MAX) {
tnt_raise(ClientError, ER_CFG, "replication",
- "too many replicas");
+ "too many replicas");
}
for (int i = 0; i < count; i++) {
@@ -871,7 +869,7 @@ box_sync_replication(bool connect_quorum)
if (appliers == NULL)
diag_raise();
- auto guard = make_scoped_guard([=]{
+ auto guard = make_scoped_guard([=] {
for (int i = 0; i < count; i++)
applier_delete(appliers[i]); /* doesn't affect diag */
});
@@ -976,9 +974,8 @@ box_set_replication_anon(void)
return;
if (!anon) {
- auto guard = make_scoped_guard([&]{
- replication_anon = !anon;
- });
+ auto guard =
+ make_scoped_guard([&] { replication_anon = !anon; });
/* Turn anonymous instance into a normal one. */
replication_anon = anon;
/*
@@ -1021,7 +1018,6 @@ box_set_replication_anon(void)
"cannot be turned on after bootstrap"
" has finished");
}
-
}
void
@@ -1046,16 +1042,17 @@ box_clear_synchro_queue(void)
if (!txn_limbo_is_empty(&txn_limbo)) {
int64_t lsns[VCLOCK_MAX];
int len = 0;
- const struct vclock *vclock;
- replicaset_foreach(replica) {
+ const struct vclock *vclock;
+ replicaset_foreach(replica)
+ {
if (replica->relay != NULL &&
relay_get_state(replica->relay) != RELAY_OFF &&
!replica->anon) {
assert(!tt_uuid_is_equal(&INSTANCE_UUID,
&replica->uuid));
vclock = relay_vclock(replica->relay);
- int64_t lsn = vclock_get(vclock,
- former_leader_id);
+ int64_t lsn =
+ vclock_get(vclock, former_leader_id);
lsns[len++] = lsn;
}
}
@@ -1108,11 +1105,11 @@ box_set_snap_io_rate_limit(void)
memtx = (struct memtx_engine *)engine_by_name("memtx");
assert(memtx != NULL);
memtx_engine_set_snap_io_rate_limit(memtx,
- cfg_getd("snap_io_rate_limit"));
+ cfg_getd("snap_io_rate_limit"));
struct engine *vinyl = engine_by_name("vinyl");
assert(vinyl != NULL);
vinyl_engine_set_snap_io_rate_limit(vinyl,
- cfg_getd("snap_io_rate_limit"));
+ cfg_getd("snap_io_rate_limit"));
}
void
@@ -1134,7 +1131,7 @@ box_set_memtx_max_tuple_size(void)
memtx = (struct memtx_engine *)engine_by_name("memtx");
assert(memtx != NULL);
memtx_engine_set_max_tuple_size(memtx,
- cfg_geti("memtx_max_tuple_size"));
+ cfg_geti("memtx_max_tuple_size"));
}
void
@@ -1194,7 +1191,7 @@ box_set_vinyl_max_tuple_size(void)
struct engine *vinyl = engine_by_name("vinyl");
assert(vinyl != NULL);
vinyl_engine_set_max_tuple_size(vinyl,
- cfg_geti("vinyl_max_tuple_size"));
+ cfg_geti("vinyl_max_tuple_size"));
}
void
@@ -1210,7 +1207,7 @@ box_set_vinyl_timeout(void)
{
struct engine *vinyl = engine_by_name("vinyl");
assert(vinyl != NULL);
- vinyl_engine_set_timeout(vinyl, cfg_getd("vinyl_timeout"));
+ vinyl_engine_set_timeout(vinyl, cfg_getd("vinyl_timeout"));
}
void
@@ -1220,7 +1217,7 @@ box_set_net_msg_max(void)
iproto_set_msg_max(new_iproto_msg_max);
fiber_pool_set_max_size(&tx_fiber_pool,
new_iproto_msg_max *
- IPROTO_FIBER_POOL_SIZE_FACTOR);
+ IPROTO_FIBER_POOL_SIZE_FACTOR);
}
int
@@ -1317,7 +1314,7 @@ box_space_id_by_name(const char *name, uint32_t len)
if (len > BOX_NAME_MAX)
return BOX_ID_NIL;
uint32_t size = mp_sizeof_array(1) + mp_sizeof_str(len);
- char *begin = (char *) region_alloc(&fiber()->gc, size);
+ char *begin = (char *)region_alloc(&fiber()->gc, size);
if (begin == NULL) {
diag_set(OutOfMemory, size, "region_alloc", "begin");
return BOX_ID_NIL;
@@ -1332,7 +1329,7 @@ box_space_id_by_name(const char *name, uint32_t len)
if (tuple == NULL)
return BOX_ID_NIL;
uint32_t result = BOX_ID_NIL;
- (void) tuple_field_u32(tuple, BOX_SPACE_FIELD_ID, &result);
+ (void)tuple_field_u32(tuple, BOX_SPACE_FIELD_ID, &result);
return result;
}
@@ -1343,7 +1340,7 @@ box_index_id_by_name(uint32_t space_id, const char *name, uint32_t len)
return BOX_ID_NIL;
uint32_t size = mp_sizeof_array(2) + mp_sizeof_uint(space_id) +
mp_sizeof_str(len);
- char *begin = (char *) region_alloc(&fiber()->gc, size);
+ char *begin = (char *)region_alloc(&fiber()->gc, size);
if (begin == NULL) {
diag_set(OutOfMemory, size, "region_alloc", "begin");
return BOX_ID_NIL;
@@ -1359,7 +1356,7 @@ box_index_id_by_name(uint32_t space_id, const char *name, uint32_t len)
if (tuple == NULL)
return BOX_ID_NIL;
uint32_t result = BOX_ID_NIL;
- (void) tuple_field_u32(tuple, BOX_INDEX_FIELD_ID, &result);
+ (void)tuple_field_u32(tuple, BOX_INDEX_FIELD_ID, &result);
return result;
}
/** \endcond public */
@@ -1372,16 +1369,14 @@ box_process1(struct request *request, box_tuple_t **result)
if (space == NULL)
return -1;
if (!space_is_temporary(space) &&
- space_group_id(space) != GROUP_LOCAL &&
- box_check_writable() != 0)
+ space_group_id(space) != GROUP_LOCAL && box_check_writable() != 0)
return -1;
return box_process_rw(request, space, result);
}
API_EXPORT int
-box_select(uint32_t space_id, uint32_t index_id,
- int iterator, uint32_t offset, uint32_t limit,
- const char *key, const char *key_end,
+box_select(uint32_t space_id, uint32_t index_id, int iterator, uint32_t offset,
+ uint32_t limit, const char *key, const char *key_end,
struct port *port)
{
(void)key_end;
@@ -1404,7 +1399,7 @@ box_select(uint32_t space_id, uint32_t index_id,
if (index == NULL)
return -1;
- enum iterator_type type = (enum iterator_type) iterator;
+ enum iterator_type type = (enum iterator_type)iterator;
uint32_t part_count = key ? mp_decode_array(&key) : 0;
if (key_validate(index->def, type, key, part_count))
return -1;
@@ -1418,8 +1413,8 @@ box_select(uint32_t space_id, uint32_t index_id,
if (txn_begin_ro_stmt(space, &txn) != 0)
return -1;
- struct iterator *it = index_create_iterator(index, type,
- key, part_count);
+ struct iterator *it =
+ index_create_iterator(index, type, key, part_count);
if (it == NULL) {
txn_rollback_stmt(txn);
return -1;
@@ -1564,8 +1559,8 @@ space_truncate(struct space *space)
ops_buf_end = mp_encode_uint(ops_buf_end, 1);
assert(ops_buf_end < buf + buf_size);
- if (box_upsert(BOX_TRUNCATE_ID, 0, tuple_buf, tuple_buf_end,
- ops_buf, ops_buf_end, 0, NULL) != 0)
+ if (box_upsert(BOX_TRUNCATE_ID, 0, tuple_buf, tuple_buf_end, ops_buf,
+ ops_buf_end, 0, NULL) != 0)
diag_raise();
}
@@ -1585,9 +1580,9 @@ box_truncate(uint32_t space_id)
static int
sequence_data_update(uint32_t seq_id, int64_t value)
{
- size_t tuple_buf_size = (mp_sizeof_array(2) +
- 2 * mp_sizeof_uint(UINT64_MAX));
- char *tuple_buf = (char *) region_alloc(&fiber()->gc, tuple_buf_size);
+ size_t tuple_buf_size =
+ (mp_sizeof_array(2) + 2 * mp_sizeof_uint(UINT64_MAX));
+ char *tuple_buf = (char *)region_alloc(&fiber()->gc, tuple_buf_size);
if (tuple_buf == NULL) {
diag_set(OutOfMemory, tuple_buf_size, "region", "tuple");
return -1;
@@ -1595,16 +1590,15 @@ sequence_data_update(uint32_t seq_id, int64_t value)
char *tuple_buf_end = tuple_buf;
tuple_buf_end = mp_encode_array(tuple_buf_end, 2);
tuple_buf_end = mp_encode_uint(tuple_buf_end, seq_id);
- tuple_buf_end = (value < 0 ?
- mp_encode_int(tuple_buf_end, value) :
- mp_encode_uint(tuple_buf_end, value));
+ tuple_buf_end = (value < 0 ? mp_encode_int(tuple_buf_end, value) :
+ mp_encode_uint(tuple_buf_end, value));
assert(tuple_buf_end < tuple_buf + tuple_buf_size);
struct credentials *orig_credentials = effective_user();
fiber_set_user(fiber(), &admin_credentials);
- int rc = box_replace(BOX_SEQUENCE_DATA_ID,
- tuple_buf, tuple_buf_end, NULL);
+ int rc = box_replace(BOX_SEQUENCE_DATA_ID, tuple_buf, tuple_buf_end,
+ NULL);
fiber_set_user(fiber(), orig_credentials);
return rc;
@@ -1615,7 +1609,7 @@ static int
sequence_data_delete(uint32_t seq_id)
{
size_t key_buf_size = mp_sizeof_array(1) + mp_sizeof_uint(UINT64_MAX);
- char *key_buf = (char *) region_alloc(&fiber()->gc, key_buf_size);
+ char *key_buf = (char *)region_alloc(&fiber()->gc, key_buf_size);
if (key_buf == NULL) {
diag_set(OutOfMemory, key_buf_size, "region", "key");
return -1;
@@ -1628,8 +1622,8 @@ sequence_data_delete(uint32_t seq_id)
struct credentials *orig_credentials = effective_user();
fiber_set_user(fiber(), &admin_credentials);
- int rc = box_delete(BOX_SEQUENCE_DATA_ID, 0,
- key_buf, key_buf_end, NULL);
+ int rc =
+ box_delete(BOX_SEQUENCE_DATA_ID, 0, key_buf, key_buf_end, NULL);
fiber_set_user(fiber(), orig_credentials);
return rc;
@@ -1707,8 +1701,8 @@ box_session_push(const char *data, const char *data_end)
static inline void
box_register_replica(uint32_t id, const struct tt_uuid *uuid)
{
- if (boxk(IPROTO_INSERT, BOX_CLUSTER_ID, "[%u%s]",
- (unsigned) id, tt_uuid_str(uuid)) != 0)
+ if (boxk(IPROTO_INSERT, BOX_CLUSTER_ID, "[%u%s]", (unsigned)id,
+ tt_uuid_str(uuid)) != 0)
diag_raise();
assert(replica_by_uuid(uuid)->id == id);
}
@@ -1732,15 +1726,15 @@ box_on_join(const tt_uuid *instance_uuid)
/** Find the largest existing replica id. */
struct space *space = space_cache_find_xc(BOX_CLUSTER_ID);
struct index *index = index_find_system_xc(space, 0);
- struct iterator *it = index_create_iterator_xc(index, ITER_ALL,
- NULL, 0);
+ struct iterator *it =
+ index_create_iterator_xc(index, ITER_ALL, NULL, 0);
IteratorGuard iter_guard(it);
struct tuple *tuple;
/** Assign a new replica id. */
uint32_t replica_id = 1;
while ((tuple = iterator_next_xc(it)) != NULL) {
- if (tuple_field_u32_xc(tuple,
- BOX_CLUSTER_FIELD_ID) != replica_id)
+ if (tuple_field_u32_xc(tuple, BOX_CLUSTER_FIELD_ID) !=
+ replica_id)
break;
replica_id++;
}
@@ -1779,7 +1773,8 @@ box_process_fetch_snapshot(struct ev_io *io, struct xrow_header *header)
"wal_mode = 'none'");
}
- say_info("sending current read-view to replica at %s", sio_socketname(io->fd));
+ say_info("sending current read-view to replica at %s",
+ sio_socketname(io->fd));
/* Send the snapshot data to the instance. */
struct vclock start_vclock;
@@ -1830,14 +1825,14 @@ box_process_register(struct ev_io *io, struct xrow_header *header)
"wal_mode = 'none'");
}
- struct gc_consumer *gc = gc_consumer_register(&replicaset.vclock,
- "replica %s", tt_uuid_str(&instance_uuid));
+ struct gc_consumer *gc = gc_consumer_register(
+ &replicaset.vclock, "replica %s", tt_uuid_str(&instance_uuid));
if (gc == NULL)
diag_raise();
auto gc_guard = make_scoped_guard([&] { gc_consumer_unregister(gc); });
- say_info("registering replica %s at %s",
- tt_uuid_str(&instance_uuid), sio_socketname(io->fd));
+ say_info("registering replica %s at %s", tt_uuid_str(&instance_uuid),
+ sio_socketname(io->fd));
/* See box_process_join() */
int64_t limbo_rollback_count = txn_limbo.rollback_count;
@@ -1974,14 +1969,14 @@ box_process_join(struct ev_io *io, struct xrow_header *header)
* Register the replica as a WAL consumer so that
* it can resume FINAL JOIN where INITIAL JOIN ends.
*/
- struct gc_consumer *gc = gc_consumer_register(&replicaset.vclock,
- "replica %s", tt_uuid_str(&instance_uuid));
+ struct gc_consumer *gc = gc_consumer_register(
+ &replicaset.vclock, "replica %s", tt_uuid_str(&instance_uuid));
if (gc == NULL)
diag_raise();
auto gc_guard = make_scoped_guard([&] { gc_consumer_unregister(gc); });
- say_info("joining replica %s at %s",
- tt_uuid_str(&instance_uuid), sio_socketname(io->fd));
+ say_info("joining replica %s at %s", tt_uuid_str(&instance_uuid),
+ sio_socketname(io->fd));
/*
* In order to join a replica, master has to make sure it
@@ -2102,7 +2097,8 @@ box_process_subscribe(struct ev_io *io, struct xrow_header *header)
tt_uuid_str(&replica_uuid));
}
if (anon && replica != NULL && replica->id != REPLICA_ID_NIL) {
- tnt_raise(ClientError, ER_PROTOCOL, "Can't subscribe an "
+ tnt_raise(ClientError, ER_PROTOCOL,
+ "Can't subscribe an "
"anonymous replica having an ID assigned");
}
if (replica == NULL)
@@ -2147,8 +2143,8 @@ box_process_subscribe(struct ev_io *io, struct xrow_header *header)
row.sync = header->sync;
coio_write_xrow(io, &row);
- say_info("subscribed replica %s at %s",
- tt_uuid_str(&replica_uuid), sio_socketname(io->fd));
+ say_info("subscribed replica %s at %s", tt_uuid_str(&replica_uuid),
+ sio_socketname(io->fd));
say_info("remote vclock %s local vclock %s",
vclock_to_string(&replica_clock), vclock_to_string(&vclock));
if (raft_is_enabled()) {
@@ -2265,12 +2261,10 @@ engine_init()
* so it must be registered first.
*/
struct memtx_engine *memtx;
- memtx = memtx_engine_new_xc(cfg_gets("memtx_dir"),
- cfg_geti("force_recovery"),
- cfg_getd("memtx_memory"),
- cfg_geti("memtx_min_tuple_size"),
- cfg_geti("strip_core"),
- cfg_getd("slab_alloc_factor"));
+ memtx = memtx_engine_new_xc(
+ cfg_gets("memtx_dir"), cfg_geti("force_recovery"),
+ cfg_getd("memtx_memory"), cfg_geti("memtx_min_tuple_size"),
+ cfg_geti("strip_core"), cfg_getd("slab_alloc_factor"));
engine_register((struct engine *)memtx);
box_set_memtx_max_tuple_size();
@@ -2368,15 +2362,15 @@ bootstrap_from_master(struct replica *master)
assert(!tt_uuid_is_nil(&INSTANCE_UUID));
enum applier_state wait_state = replication_anon ?
- APPLIER_FETCH_SNAPSHOT :
- APPLIER_INITIAL_JOIN;
+ APPLIER_FETCH_SNAPSHOT :
+ APPLIER_INITIAL_JOIN;
applier_resume_to_state(applier, wait_state, TIMEOUT_INFINITY);
/*
* Process initial data (snapshot or dirty disk data).
*/
engine_begin_initial_recovery_xc(NULL);
wait_state = replication_anon ? APPLIER_FETCHED_SNAPSHOT :
- APPLIER_FINAL_JOIN;
+ APPLIER_FINAL_JOIN;
applier_resume_to_state(applier, wait_state, TIMEOUT_INFINITY);
/*
@@ -2423,8 +2417,7 @@ bootstrap_from_master(struct replica *master)
*/
static void
bootstrap(const struct tt_uuid *instance_uuid,
- const struct tt_uuid *replicaset_uuid,
- bool *is_bootstrap_leader)
+ const struct tt_uuid *replicaset_uuid, bool *is_bootstrap_leader)
{
/* Initialize instance UUID. */
assert(tt_uuid_is_nil(&INSTANCE_UUID));
@@ -2456,7 +2449,8 @@ bootstrap(const struct tt_uuid *instance_uuid,
struct replica *master = replicaset_leader();
assert(master == NULL || master->applier != NULL);
- if (master != NULL && !tt_uuid_is_equal(&master->uuid, &INSTANCE_UUID)) {
+ if (master != NULL &&
+ !tt_uuid_is_equal(&master->uuid, &INSTANCE_UUID)) {
bootstrap_from_master(master);
/* Check replica set UUID */
if (!tt_uuid_is_nil(replicaset_uuid) &&
@@ -2506,7 +2500,7 @@ local_recovery(const struct tt_uuid *instance_uuid,
* in box.info while local recovery is in progress.
*/
box_vclock = &recovery->vclock;
- auto guard = make_scoped_guard([&]{
+ auto guard = make_scoped_guard([&] {
box_vclock = &replicaset.vclock;
recovery_delete(recovery);
});
@@ -2611,8 +2605,8 @@ local_recovery(const struct tt_uuid *instance_uuid,
static void
tx_prio_cb(struct ev_loop *loop, ev_watcher *watcher, int events)
{
- (void) loop;
- (void) events;
+ (void)loop;
+ (void)events;
struct cbus_endpoint *endpoint = (struct cbus_endpoint *)watcher->data;
cbus_process(endpoint);
}
@@ -2669,7 +2663,8 @@ box_cfg_xc(void)
IPROTO_MSG_MAX_MIN * IPROTO_FIBER_POOL_SIZE_FACTOR,
FIBER_POOL_IDLE_TIMEOUT);
/* Add an extra endpoint for WAL wake up/rollback messages. */
- cbus_endpoint_create(&tx_prio_endpoint, "tx_prio", tx_prio_cb, &tx_prio_endpoint);
+ cbus_endpoint_create(&tx_prio_endpoint, "tx_prio", tx_prio_cb,
+ &tx_prio_endpoint);
rmean_box = rmean_new(iproto_type_strs, IPROTO_TYPE_STAT_MAX);
rmean_error = rmean_new(rmean_error_strings, RMEAN_ERROR_LAST);
@@ -2682,7 +2677,8 @@ box_cfg_xc(void)
iproto_init();
sql_init();
- int64_t wal_max_size = box_check_wal_max_size(cfg_geti64("wal_max_size"));
+ int64_t wal_max_size =
+ box_check_wal_max_size(cfg_geti64("wal_max_size"));
enum wal_mode wal_mode = box_check_wal_mode(cfg_gets("wal_mode"));
if (wal_init(wal_mode, cfg_gets("wal_dir"), wal_max_size,
&INSTANCE_UUID, on_wal_garbage_collection,
@@ -2734,9 +2730,8 @@ box_cfg_xc(void)
struct journal bootstrap_journal;
journal_create(&bootstrap_journal, NULL, bootstrap_journal_write);
journal_set(&bootstrap_journal);
- auto bootstrap_journal_guard = make_scoped_guard([] {
- journal_set(NULL);
- });
+ auto bootstrap_journal_guard =
+ make_scoped_guard([] { journal_set(NULL); });
bool is_bootstrap_leader = false;
if (checkpoint != NULL) {
@@ -2855,7 +2850,7 @@ int
box_checkpoint(void)
{
/* Signal arrived before box.cfg{} */
- if (! is_box_configured)
+ if (!is_box_configured)
return 0;
return gc_checkpoint();
@@ -2870,7 +2865,8 @@ box_backup_start(int checkpoint_idx, box_backup_cb cb, void *cb_arg)
return -1;
}
struct gc_checkpoint *checkpoint;
- gc_foreach_checkpoint_reverse(checkpoint) {
+ gc_foreach_checkpoint_reverse(checkpoint)
+ {
if (checkpoint_idx-- == 0)
break;
}
@@ -2900,7 +2896,7 @@ box_backup_stop(void)
const char *
box_status(void)
{
- return status;
+ return status;
}
static int
diff --git a/src/box/box.h b/src/box/box.h
index 45ff8bb..448f931 100644
--- a/src/box/box.h
+++ b/src/box/box.h
@@ -147,7 +147,8 @@ box_update_ro_summary(void);
* Iterate over all spaces and save them to the
* snapshot file.
*/
-int box_checkpoint(void);
+int
+box_checkpoint(void);
typedef int (*box_backup_cb)(const char *path, void *arg);
@@ -174,7 +175,8 @@ box_backup_stop(void);
/**
* Spit out some basic module status (master/slave, etc.
*/
-const char *box_status(void);
+const char *
+box_status(void);
/**
* Reset box statistics.
@@ -228,36 +230,66 @@ box_process_vote(struct ballot *ballot);
void
box_check_config(void);
-void box_listen(void);
-void box_set_replication(void);
-void box_set_log_level(void);
-void box_set_log_format(void);
-void box_set_io_collect_interval(void);
-void box_set_snap_io_rate_limit(void);
-void box_set_too_long_threshold(void);
-void box_set_readahead(void);
-void box_set_checkpoint_count(void);
-void box_set_checkpoint_interval(void);
-void box_set_checkpoint_wal_threshold(void);
-void box_set_memtx_memory(void);
-void box_set_memtx_max_tuple_size(void);
-void box_set_vinyl_memory(void);
-void box_set_vinyl_max_tuple_size(void);
-void box_set_vinyl_cache(void);
-void box_set_vinyl_timeout(void);
-int box_set_election_is_enabled(void);
-int box_set_election_is_candidate(void);
-int box_set_election_timeout(void);
-void box_set_replication_timeout(void);
-void box_set_replication_connect_timeout(void);
-void box_set_replication_connect_quorum(void);
-void box_set_replication_sync_lag(void);
-int box_set_replication_synchro_quorum(void);
-int box_set_replication_synchro_timeout(void);
-void box_set_replication_sync_timeout(void);
-void box_set_replication_skip_conflict(void);
-void box_set_replication_anon(void);
-void box_set_net_msg_max(void);
+void
+box_listen(void);
+void
+box_set_replication(void);
+void
+box_set_log_level(void);
+void
+box_set_log_format(void);
+void
+box_set_io_collect_interval(void);
+void
+box_set_snap_io_rate_limit(void);
+void
+box_set_too_long_threshold(void);
+void
+box_set_readahead(void);
+void
+box_set_checkpoint_count(void);
+void
+box_set_checkpoint_interval(void);
+void
+box_set_checkpoint_wal_threshold(void);
+void
+box_set_memtx_memory(void);
+void
+box_set_memtx_max_tuple_size(void);
+void
+box_set_vinyl_memory(void);
+void
+box_set_vinyl_max_tuple_size(void);
+void
+box_set_vinyl_cache(void);
+void
+box_set_vinyl_timeout(void);
+int
+box_set_election_is_enabled(void);
+int
+box_set_election_is_candidate(void);
+int
+box_set_election_timeout(void);
+void
+box_set_replication_timeout(void);
+void
+box_set_replication_connect_timeout(void);
+void
+box_set_replication_connect_quorum(void);
+void
+box_set_replication_sync_lag(void);
+int
+box_set_replication_synchro_quorum(void);
+int
+box_set_replication_synchro_timeout(void);
+void
+box_set_replication_sync_timeout(void);
+void
+box_set_replication_skip_conflict(void);
+void
+box_set_replication_anon(void);
+void
+box_set_net_msg_max(void);
int
box_set_prepared_stmt_cache_size(void);
@@ -267,13 +299,13 @@ extern "C" {
typedef struct tuple box_tuple_t;
-void box_clear_synchro_queue(void);
+void
+box_clear_synchro_queue(void);
/* box_select is private and used only by FFI */
API_EXPORT int
-box_select(uint32_t space_id, uint32_t index_id,
- int iterator, uint32_t offset, uint32_t limit,
- const char *key, const char *key_end,
+box_select(uint32_t space_id, uint32_t index_id, int iterator, uint32_t offset,
+ uint32_t limit, const char *key, const char *key_end,
struct port *port);
/** \cond public */
diff --git a/src/box/call.c b/src/box/call.c
index 9c29126..bdc83d0 100644
--- a/src/box/call.c
+++ b/src/box/call.c
@@ -48,7 +48,7 @@ static const struct port_vtab port_msgpack_vtab;
void
port_msgpack_create(struct port *base, const char *data, uint32_t data_sz)
{
- struct port_msgpack *port_msgpack = (struct port_msgpack *) base;
+ struct port_msgpack *port_msgpack = (struct port_msgpack *)base;
memset(port_msgpack, 0, sizeof(*port_msgpack));
port_msgpack->vtab = &port_msgpack_vtab;
port_msgpack->data = data;
@@ -58,7 +58,7 @@ port_msgpack_create(struct port *base, const char *data, uint32_t data_sz)
static const char *
port_msgpack_get_msgpack(struct port *base, uint32_t *size)
{
- struct port_msgpack *port = (struct port_msgpack *) base;
+ struct port_msgpack *port = (struct port_msgpack *)base;
assert(port->vtab == &port_msgpack_vtab);
*size = port->data_sz;
return port->data;
@@ -158,8 +158,9 @@ box_process_call(struct call_request *request, struct port *port)
struct func *func = func_by_name(name, name_len);
if (func != NULL) {
rc = func_call(func, &args, port);
- } else if ((rc = access_check_universe_object(PRIV_X | PRIV_U,
- SC_FUNCTION, tt_cstr(name, name_len))) == 0) {
+ } else if ((rc = access_check_universe_object(
+ PRIV_X | PRIV_U, SC_FUNCTION,
+ tt_cstr(name, name_len))) == 0) {
rc = box_lua_call(name, name_len, &args, port);
}
if (rc != 0)
diff --git a/src/box/checkpoint_schedule.c b/src/box/checkpoint_schedule.c
index d37eba7..4e77f87 100644
--- a/src/box/checkpoint_schedule.c
+++ b/src/box/checkpoint_schedule.c
@@ -35,8 +35,8 @@
#include <stdlib.h>
void
-checkpoint_schedule_cfg(struct checkpoint_schedule *sched,
- double now, double interval)
+checkpoint_schedule_cfg(struct checkpoint_schedule *sched, double now,
+ double interval)
{
sched->interval = interval;
sched->start_time = now + interval;
diff --git a/src/box/checkpoint_schedule.h b/src/box/checkpoint_schedule.h
index 7fbbfe2..dc3b185 100644
--- a/src/box/checkpoint_schedule.h
+++ b/src/box/checkpoint_schedule.h
@@ -55,8 +55,8 @@ struct checkpoint_schedule {
* @interval is the configured interval between checkpoints.
*/
void
-checkpoint_schedule_cfg(struct checkpoint_schedule *sched,
- double now, double interval);
+checkpoint_schedule_cfg(struct checkpoint_schedule *sched, double now,
+ double interval);
/**
* Reset a checkpoint schedule.
diff --git a/src/box/ck_constraint.c b/src/box/ck_constraint.c
index b629a73..ab606ee 100644
--- a/src/box/ck_constraint.c
+++ b/src/box/ck_constraint.c
@@ -40,7 +40,7 @@
#include "sql/vdbeInt.h"
#include "tuple.h"
-const char *ck_constraint_language_strs[] = {"SQL"};
+const char *ck_constraint_language_strs[] = { "SQL" };
struct ck_constraint_def *
ck_constraint_def_new(const char *name, uint32_t name_len, const char *expr_str,
@@ -51,7 +51,7 @@ ck_constraint_def_new(const char *name, uint32_t name_len, const char *expr_str,
uint32_t ck_def_sz = ck_constraint_def_sizeof(name_len, expr_str_len,
&expr_str_offset);
struct ck_constraint_def *ck_def =
- (struct ck_constraint_def *) malloc(ck_def_sz);
+ (struct ck_constraint_def *)malloc(ck_def_sz);
if (ck_def == NULL) {
diag_set(OutOfMemory, ck_def_sz, "malloc", "ck_def");
return NULL;
@@ -131,7 +131,8 @@ ck_constraint_program_compile(struct ck_constraint_def *ck_constraint_def,
sqlVdbeAddOp2(v, OP_Variable, ++parser.nVar, vdbe_field_ref_reg);
/* Generate ck constraint test code. */
vdbe_emit_ck_constraint(&parser, expr, ck_constraint_def->name,
- ck_constraint_def->expr_str, vdbe_field_ref_reg);
+ ck_constraint_def->expr_str,
+ vdbe_field_ref_reg);
/* Clean-up and restore user-defined sql context. */
bool is_error = parser.is_aborted;
@@ -142,10 +143,10 @@ ck_constraint_program_compile(struct ck_constraint_def *ck_constraint_def,
diag_set(ClientError, ER_CREATE_CK_CONSTRAINT,
ck_constraint_def->name,
box_error_message(box_error_last()));
- sql_stmt_finalize((struct sql_stmt *) v);
+ sql_stmt_finalize((struct sql_stmt *)v);
return NULL;
}
- return (struct sql_stmt *) v;
+ return (struct sql_stmt *)v;
}
/**
@@ -167,7 +168,7 @@ ck_constraint_program_run(struct ck_constraint *ck_constraint,
return -1;
}
/* Checks VDBE can't expire, reset expired flag and go. */
- struct Vdbe *v = (struct Vdbe *) ck_constraint->stmt;
+ struct Vdbe *v = (struct Vdbe *)ck_constraint->stmt;
v->expired = 0;
sql_step(ck_constraint->stmt);
/*
@@ -180,8 +181,8 @@ ck_constraint_program_run(struct ck_constraint *ck_constraint,
int
ck_constraint_on_replace_trigger(struct trigger *trigger, void *event)
{
- (void) trigger;
- struct txn *txn = (struct txn *) event;
+ (void)trigger;
+ struct txn *txn = (struct txn *)event;
struct txn_stmt *stmt = txn_current_stmt(txn);
assert(stmt != NULL);
struct tuple *new_tuple = stmt->new_tuple;
@@ -193,8 +194,8 @@ ck_constraint_on_replace_trigger(struct trigger *trigger, void *event)
struct vdbe_field_ref *field_ref;
size_t size = sizeof(field_ref->slots[0]) * space->def->field_count +
sizeof(*field_ref);
- field_ref = (struct vdbe_field_ref *)
- region_aligned_alloc(&fiber()->gc, size, alignof(*field_ref));
+ field_ref = (struct vdbe_field_ref *)region_aligned_alloc(
+ &fiber()->gc, size, alignof(*field_ref));
if (field_ref == NULL) {
diag_set(OutOfMemory, size, "region_aligned_alloc",
"field_ref");
diff --git a/src/box/ck_constraint.h b/src/box/ck_constraint.h
index f8f2465..abf313a 100644
--- a/src/box/ck_constraint.h
+++ b/src/box/ck_constraint.h
@@ -46,8 +46,8 @@ struct trigger;
/** Supported languages of ck constraint. */
enum ck_constraint_language {
- CK_CONSTRAINT_LANGUAGE_SQL,
- ck_constraint_language_MAX,
+ CK_CONSTRAINT_LANGUAGE_SQL,
+ ck_constraint_language_MAX,
};
/** The supported languages strings. */
diff --git a/src/box/coll_id.c b/src/box/coll_id.c
index 5abeaed..fddac49 100644
--- a/src/box/coll_id.c
+++ b/src/box/coll_id.c
@@ -38,7 +38,7 @@ struct coll_id *
coll_id_new(const struct coll_id_def *def)
{
size_t total_len = sizeof(struct coll_id) + def->name_len + 1;
- struct coll_id *coll_id = (struct coll_id *) malloc(total_len);
+ struct coll_id *coll_id = (struct coll_id *)malloc(total_len);
if (coll_id == NULL) {
diag_set(OutOfMemory, total_len, "malloc", "coll_id");
return NULL;
diff --git a/src/box/coll_id_cache.c b/src/box/coll_id_cache.c
index 22673ef..a5c43d6 100644
--- a/src/box/coll_id_cache.c
+++ b/src/box/coll_id_cache.c
@@ -67,8 +67,8 @@ coll_id_cache_destroy(void)
int
coll_id_cache_replace(struct coll_id *coll_id, struct coll_id **replaced_id)
{
- const struct mh_i32ptr_node_t id_node = {coll_id->id, coll_id};
- struct mh_i32ptr_node_t repl_id_node = {0, NULL};
+ const struct mh_i32ptr_node_t id_node = { coll_id->id, coll_id };
+ struct mh_i32ptr_node_t repl_id_node = { 0, NULL };
struct mh_i32ptr_node_t *prepl_id_node = &repl_id_node;
mh_int_t i =
mh_i32ptr_put(coll_id_cache, &id_node, &prepl_id_node, NULL);
@@ -79,8 +79,9 @@ coll_id_cache_replace(struct coll_id *coll_id, struct coll_id **replaced_id)
}
uint32_t hash = mh_strn_hash(coll_id->name, coll_id->name_len);
- const struct mh_strnptr_node_t name_node =
- { coll_id->name, coll_id->name_len, hash, coll_id };
+ const struct mh_strnptr_node_t name_node = { coll_id->name,
+ coll_id->name_len, hash,
+ coll_id };
struct mh_strnptr_node_t repl_name_node = { NULL, 0, 0, NULL };
struct mh_strnptr_node_t *prepl_node_name = &repl_name_node;
if (mh_strnptr_put(coll_cache_name, &name_node, &prepl_node_name,
diff --git a/src/box/coll_id_def.c b/src/box/coll_id_def.c
index 9fe0cda..d518ead 100644
--- a/src/box/coll_id_def.c
+++ b/src/box/coll_id_def.c
@@ -35,35 +35,40 @@ static int64_t
icu_on_off_from_str(const char *str, uint32_t len)
{
return strnindex(coll_icu_on_off_strs + 1, str, len,
- coll_icu_on_off_MAX - 1) + 1;
+ coll_icu_on_off_MAX - 1) +
+ 1;
}
static int64_t
icu_alternate_handling_from_str(const char *str, uint32_t len)
{
return strnindex(coll_icu_alternate_handling_strs + 1, str, len,
- coll_icu_alternate_handling_MAX - 1) + 1;
+ coll_icu_alternate_handling_MAX - 1) +
+ 1;
}
static int64_t
icu_case_first_from_str(const char *str, uint32_t len)
{
return strnindex(coll_icu_case_first_strs + 1, str, len,
- coll_icu_case_first_MAX - 1) + 1;
+ coll_icu_case_first_MAX - 1) +
+ 1;
}
static int64_t
icu_strength_from_str(const char *str, uint32_t len)
{
return strnindex(coll_icu_strength_strs + 1, str, len,
- coll_icu_strength_MAX - 1) + 1;
+ coll_icu_strength_MAX - 1) +
+ 1;
}
const struct opt_def coll_icu_opts_reg[] = {
OPT_DEF_ENUM("french_collation", coll_icu_on_off, struct coll_icu_def,
french_collation, icu_on_off_from_str),
- OPT_DEF_ENUM("alternate_handling", coll_icu_alternate_handling, struct coll_icu_def,
- alternate_handling, icu_alternate_handling_from_str),
+ OPT_DEF_ENUM("alternate_handling", coll_icu_alternate_handling,
+ struct coll_icu_def, alternate_handling,
+ icu_alternate_handling_from_str),
OPT_DEF_ENUM("case_first", coll_icu_case_first, struct coll_icu_def,
case_first, icu_case_first_from_str),
OPT_DEF_ENUM("case_level", coll_icu_on_off, struct coll_icu_def,
diff --git a/src/box/column_mask.h b/src/box/column_mask.h
index 9470fe1..9731b44 100644
--- a/src/box/column_mask.h
+++ b/src/box/column_mask.h
@@ -66,9 +66,9 @@ column_mask_set_fieldno(uint64_t *column_mask, uint32_t fieldno)
* @sa column_mask key_def declaration for
* details.
*/
- *column_mask |= ((uint64_t) 1) << 63;
+ *column_mask |= ((uint64_t)1) << 63;
else
- *column_mask |= ((uint64_t) 1) << fieldno;
+ *column_mask |= ((uint64_t)1) << fieldno;
}
/**
@@ -90,7 +90,7 @@ column_mask_set_range(uint64_t *column_mask, uint32_t first_fieldno_in_range)
*column_mask |= COLUMN_MASK_FULL << first_fieldno_in_range;
} else {
/* A range outside "short" range. */
- *column_mask |= ((uint64_t) 1) << 63;
+ *column_mask |= ((uint64_t)1) << 63;
}
}
@@ -119,7 +119,7 @@ key_update_can_be_skipped(uint64_t key_mask, uint64_t update_mask)
static inline bool
column_mask_fieldno_is_set(uint64_t column_mask, uint32_t fieldno)
{
- uint64_t mask = (uint64_t) 1 << (fieldno < 63 ? fieldno : 63);
+ uint64_t mask = (uint64_t)1 << (fieldno < 63 ? fieldno : 63);
return (column_mask & mask) != 0;
}
diff --git a/src/box/constraint_id.c b/src/box/constraint_id.c
index ba6ed85..1047c7e 100644
--- a/src/box/constraint_id.c
+++ b/src/box/constraint_id.c
@@ -35,10 +35,10 @@
#include "diag.h"
const char *constraint_type_strs[] = {
- [CONSTRAINT_TYPE_PK] = "PRIMARY KEY",
- [CONSTRAINT_TYPE_UNIQUE] = "UNIQUE",
- [CONSTRAINT_TYPE_FK] = "FOREIGN KEY",
- [CONSTRAINT_TYPE_CK] = "CHECK",
+ [CONSTRAINT_TYPE_PK] = "PRIMARY KEY",
+ [CONSTRAINT_TYPE_UNIQUE] = "UNIQUE",
+ [CONSTRAINT_TYPE_FK] = "FOREIGN KEY",
+ [CONSTRAINT_TYPE_CK] = "CHECK",
};
struct constraint_id *
diff --git a/src/box/engine.c b/src/box/engine.c
index 88ed928..63ab517 100644
--- a/src/box/engine.c
+++ b/src/box/engine.c
@@ -43,7 +43,8 @@ RLIST_HEAD(engines);
enum { MAX_ENGINE_COUNT = 10 };
/** Register engine instance. */
-void engine_register(struct engine *engine)
+void
+engine_register(struct engine *engine)
{
static int n_engines;
rlist_add_tail_entry(&engines, engine, link);
@@ -55,7 +56,8 @@ struct engine *
engine_by_name(const char *name)
{
struct engine *e;
- engine_foreach(e) {
+ engine_foreach(e)
+ {
if (strcmp(e->name, name) == 0)
return e;
}
@@ -75,15 +77,15 @@ void
engine_switch_to_ro(void)
{
struct engine *engine;
- engine_foreach(engine)
- engine->vtab->switch_to_ro(engine);
+ engine_foreach(engine) engine->vtab->switch_to_ro(engine);
}
int
engine_bootstrap(void)
{
struct engine *engine;
- engine_foreach(engine) {
+ engine_foreach(engine)
+ {
if (engine->vtab->bootstrap(engine) != 0)
return -1;
}
@@ -94,9 +96,10 @@ int
engine_begin_initial_recovery(const struct vclock *recovery_vclock)
{
struct engine *engine;
- engine_foreach(engine) {
+ engine_foreach(engine)
+ {
if (engine->vtab->begin_initial_recovery(engine,
- recovery_vclock) != 0)
+ recovery_vclock) != 0)
return -1;
}
return 0;
@@ -106,7 +109,8 @@ int
engine_begin_final_recovery(void)
{
struct engine *engine;
- engine_foreach(engine) {
+ engine_foreach(engine)
+ {
if (engine->vtab->begin_final_recovery(engine) != 0)
return -1;
}
@@ -121,7 +125,8 @@ engine_end_recovery(void)
* when the primary key is added, enable all keys.
*/
struct engine *engine;
- engine_foreach(engine) {
+ engine_foreach(engine)
+ {
if (engine->vtab->end_recovery(engine) != 0)
return -1;
}
@@ -132,7 +137,8 @@ int
engine_begin_checkpoint(bool is_scheduled)
{
struct engine *engine;
- engine_foreach(engine) {
+ engine_foreach(engine)
+ {
if (engine->vtab->begin_checkpoint(engine, is_scheduled) < 0)
return -1;
}
@@ -143,11 +149,13 @@ int
engine_commit_checkpoint(const struct vclock *vclock)
{
struct engine *engine;
- engine_foreach(engine) {
+ engine_foreach(engine)
+ {
if (engine->vtab->wait_checkpoint(engine, vclock) < 0)
return -1;
}
- engine_foreach(engine) {
+ engine_foreach(engine)
+ {
engine->vtab->commit_checkpoint(engine, vclock);
}
return 0;
@@ -157,23 +165,22 @@ void
engine_abort_checkpoint(void)
{
struct engine *engine;
- engine_foreach(engine)
- engine->vtab->abort_checkpoint(engine);
+ engine_foreach(engine) engine->vtab->abort_checkpoint(engine);
}
void
engine_collect_garbage(const struct vclock *vclock)
{
struct engine *engine;
- engine_foreach(engine)
- engine->vtab->collect_garbage(engine, vclock);
+ engine_foreach(engine) engine->vtab->collect_garbage(engine, vclock);
}
int
engine_backup(const struct vclock *vclock, engine_backup_cb cb, void *cb_arg)
{
struct engine *engine;
- engine_foreach(engine) {
+ engine_foreach(engine)
+ {
if (engine->vtab->backup(engine, vclock, cb, cb_arg) < 0)
return -1;
}
@@ -191,7 +198,8 @@ engine_prepare_join(struct engine_join_ctx *ctx)
}
int i = 0;
struct engine *engine;
- engine_foreach(engine) {
+ engine_foreach(engine)
+ {
assert(i < MAX_ENGINE_COUNT);
if (engine->vtab->prepare_join(engine, &ctx->array[i]) != 0)
goto fail;
@@ -208,7 +216,8 @@ engine_join(struct engine_join_ctx *ctx, struct xstream *stream)
{
int i = 0;
struct engine *engine;
- engine_foreach(engine) {
+ engine_foreach(engine)
+ {
if (engine->vtab->join(engine, ctx->array[i], stream) != 0)
return -1;
i++;
@@ -221,7 +230,8 @@ engine_complete_join(struct engine_join_ctx *ctx)
{
int i = 0;
struct engine *engine;
- engine_foreach(engine) {
+ engine_foreach(engine)
+ {
if (ctx->array[i] != NULL)
engine->vtab->complete_join(engine, ctx->array[i]);
i++;
@@ -234,16 +244,14 @@ engine_memory_stat(struct engine_memory_stat *stat)
{
memset(stat, 0, sizeof(*stat));
struct engine *engine;
- engine_foreach(engine)
- engine->vtab->memory_stat(engine, stat);
+ engine_foreach(engine) engine->vtab->memory_stat(engine, stat);
}
void
engine_reset_stat(void)
{
struct engine *engine;
- engine_foreach(engine)
- engine->vtab->reset_stat(engine);
+ engine_foreach(engine) engine->vtab->reset_stat(engine);
}
/* {{{ Virtual method stubs */
diff --git a/src/box/engine.h b/src/box/engine.h
index c4da01e..2270c9f 100644
--- a/src/box/engine.h
+++ b/src/box/engine.h
@@ -72,7 +72,8 @@ struct engine_vtab {
void (*shutdown)(struct engine *);
/** Allocate a new space instance. */
struct space *(*create_space)(struct engine *engine,
- struct space_def *def, struct rlist *key_list);
+ struct space_def *def,
+ struct rlist *key_list);
/**
* Freeze a read view to feed to a new replica.
* Setup and return a context that will be used
@@ -140,7 +141,7 @@ struct engine_vtab {
* On remote recovery, it is set to NULL.
*/
int (*begin_initial_recovery)(struct engine *engine,
- const struct vclock *recovery_vclock);
+ const struct vclock *recovery_vclock);
/**
* Notify engine about a start of recovering from WALs
* that could be local WALs during local recovery
@@ -237,7 +238,8 @@ struct engine_join_ctx {
};
/** Register engine engine instance. */
-void engine_register(struct engine *engine);
+void
+engine_register(struct engine *engine);
/** Call a visitor function on every registered engine. */
#define engine_foreach(engine) rlist_foreach_entry(engine, &engines, link)
@@ -380,32 +382,54 @@ engine_reset_stat(void);
/*
* Virtual method stubs.
*/
-int generic_engine_prepare_join(struct engine *, void **);
-int generic_engine_join(struct engine *, void *, struct xstream *);
-void generic_engine_complete_join(struct engine *, void *);
-int generic_engine_begin(struct engine *, struct txn *);
-int generic_engine_begin_statement(struct engine *, struct txn *);
-int generic_engine_prepare(struct engine *, struct txn *);
-void generic_engine_commit(struct engine *, struct txn *);
-void generic_engine_rollback_statement(struct engine *, struct txn *,
- struct txn_stmt *);
-void generic_engine_rollback(struct engine *, struct txn *);
-void generic_engine_switch_to_ro(struct engine *);
-int generic_engine_bootstrap(struct engine *);
-int generic_engine_begin_initial_recovery(struct engine *,
- const struct vclock *);
-int generic_engine_begin_final_recovery(struct engine *);
-int generic_engine_end_recovery(struct engine *);
-int generic_engine_begin_checkpoint(struct engine *, bool);
-int generic_engine_wait_checkpoint(struct engine *, const struct vclock *);
-void generic_engine_commit_checkpoint(struct engine *, const struct vclock *);
-void generic_engine_abort_checkpoint(struct engine *);
-void generic_engine_collect_garbage(struct engine *, const struct vclock *);
-int generic_engine_backup(struct engine *, const struct vclock *,
- engine_backup_cb, void *);
-void generic_engine_memory_stat(struct engine *, struct engine_memory_stat *);
-void generic_engine_reset_stat(struct engine *);
-int generic_engine_check_space_def(struct space_def *);
+int
+generic_engine_prepare_join(struct engine *, void **);
+int
+generic_engine_join(struct engine *, void *, struct xstream *);
+void
+generic_engine_complete_join(struct engine *, void *);
+int
+generic_engine_begin(struct engine *, struct txn *);
+int
+generic_engine_begin_statement(struct engine *, struct txn *);
+int
+generic_engine_prepare(struct engine *, struct txn *);
+void
+generic_engine_commit(struct engine *, struct txn *);
+void
+generic_engine_rollback_statement(struct engine *, struct txn *,
+ struct txn_stmt *);
+void
+generic_engine_rollback(struct engine *, struct txn *);
+void
+generic_engine_switch_to_ro(struct engine *);
+int
+generic_engine_bootstrap(struct engine *);
+int
+generic_engine_begin_initial_recovery(struct engine *, const struct vclock *);
+int
+generic_engine_begin_final_recovery(struct engine *);
+int
+generic_engine_end_recovery(struct engine *);
+int
+generic_engine_begin_checkpoint(struct engine *, bool);
+int
+generic_engine_wait_checkpoint(struct engine *, const struct vclock *);
+void
+generic_engine_commit_checkpoint(struct engine *, const struct vclock *);
+void
+generic_engine_abort_checkpoint(struct engine *);
+void
+generic_engine_collect_garbage(struct engine *, const struct vclock *);
+int
+generic_engine_backup(struct engine *, const struct vclock *, engine_backup_cb,
+ void *);
+void
+generic_engine_memory_stat(struct engine *, struct engine_memory_stat *);
+void
+generic_engine_reset_stat(struct engine *);
+int
+generic_engine_check_space_def(struct space_def *);
#if defined(__cplusplus)
} /* extern "C" */
@@ -421,7 +445,7 @@ engine_find_xc(const char *name)
static inline struct space *
engine_create_space_xc(struct engine *engine, struct space_def *def,
- struct rlist *key_list)
+ struct rlist *key_list)
{
struct space *space = engine_create_space(engine, def, key_list);
if (space == NULL)
diff --git a/src/box/errcode.c b/src/box/errcode.c
index c1cb594..778054f 100644
--- a/src/box/errcode.c
+++ b/src/box/errcode.c
@@ -31,12 +31,7 @@
*/
#include "errcode.h"
-#define ERRCODE_RECORD_MEMBER(s, d) { \
- .errstr = #s, \
- .errdesc = d \
-},
-
-struct errcode_record box_error_codes[box_error_code_MAX] = {
- ERROR_CODES(ERRCODE_RECORD_MEMBER)
-};
+#define ERRCODE_RECORD_MEMBER(s, d) { .errstr = #s, .errdesc = d },
+struct errcode_record box_error_codes[box_error_code_MAX] = { ERROR_CODES(
+ ERRCODE_RECORD_MEMBER) };
diff --git a/src/box/error.cc b/src/box/error.cc
index ca1d73e..9305a2f 100644
--- a/src/box/error.cc
+++ b/src/box/error.cc
@@ -70,8 +70,8 @@ box_error_clear(void)
}
int
-box_error_set(const char *file, unsigned line, uint32_t code,
- const char *fmt, ...)
+box_error_set(const char *file, unsigned line, uint32_t code, const char *fmt,
+ ...)
{
struct error *e = BuildClientError(file, line, ER_UNKNOWN);
ClientError *client_error = type_cast(ClientError, e);
@@ -99,8 +99,8 @@ box_error_new_va(const char *file, unsigned line, uint32_t code,
}
return e;
} else {
- struct error *e = BuildCustomError(file, line, custom_type,
- code);
+ struct error *e =
+ BuildCustomError(file, line, custom_type, code);
CustomError *custom_error = type_cast(CustomError, e);
if (custom_error != NULL) {
error_vformat_msg(e, fmt, ap);
@@ -116,8 +116,8 @@ box_error_new(const char *file, unsigned line, uint32_t code,
{
va_list ap;
va_start(ap, fmt);
- struct error *e = box_error_new_va(file, line, code, custom_type,
- fmt, ap);
+ struct error *e =
+ box_error_new_va(file, line, code, custom_type, fmt, ap);
va_end(ap);
return e;
}
@@ -128,8 +128,8 @@ box_error_add(const char *file, unsigned line, uint32_t code,
{
va_list ap;
va_start(ap, fmt);
- struct error *e = box_error_new_va(file, line, code, custom_type,
- fmt, ap);
+ struct error *e =
+ box_error_new_va(file, line, code, custom_type, fmt, ap);
va_end(ap);
struct diag *d = &fiber()->diag;
@@ -154,9 +154,7 @@ box_error_custom_type(const struct error *e)
struct rmean *rmean_error = NULL;
-const char *rmean_error_strings[RMEAN_ERROR_LAST] = {
- "ERROR"
-};
+const char *rmean_error_strings[RMEAN_ERROR_LAST] = { "ERROR" };
static struct method_info clienterror_methods[] = {
make_method(&type_ClientError, "code", &ClientError::errcode),
@@ -168,16 +166,15 @@ const struct type_info type_ClientError =
ClientError::ClientError(const type_info *type, const char *file, unsigned line,
uint32_t errcode)
- :Exception(type, file, line)
+ : Exception(type, file, line)
{
m_errcode = errcode;
if (rmean_error)
rmean_collect(rmean_error, RMEAN_ERROR, 1);
}
-ClientError::ClientError(const char *file, unsigned line,
- uint32_t errcode, ...)
- :ClientError(&type_ClientError, file, line, errcode)
+ClientError::ClientError(const char *file, unsigned line, uint32_t errcode, ...)
+ : ClientError(&type_ClientError, file, line, errcode)
{
va_list ap;
va_start(ap, errcode);
@@ -208,7 +205,6 @@ ClientError::log() const
tnt_errcode_str(m_errcode));
}
-
uint32_t
ClientError::get_errcode(const struct error *e)
{
@@ -247,26 +243,25 @@ const struct type_info type_XlogGapError =
XlogGapError::XlogGapError(const char *file, unsigned line,
const struct vclock *from, const struct vclock *to)
- : XlogError(&type_XlogGapError, file, line)
+ : XlogError(&type_XlogGapError, file, line)
{
const char *s_from = vclock_to_string(from);
const char *s_to = vclock_to_string(to);
snprintf(errmsg, sizeof(errmsg),
"Missing .xlog file between LSN %lld %s and %lld %s",
- (long long) vclock_sum(from), s_from ? s_from : "",
- (long long) vclock_sum(to), s_to ? s_to : "");
+ (long long)vclock_sum(from), s_from ? s_from : "",
+ (long long)vclock_sum(to), s_to ? s_to : "");
}
-XlogGapError::XlogGapError(const char *file, unsigned line,
- const char *msg)
- : XlogError(&type_XlogGapError, file, line)
+XlogGapError::XlogGapError(const char *file, unsigned line, const char *msg)
+ : XlogError(&type_XlogGapError, file, line)
{
error_format_msg(this, "%s", msg);
}
struct error *
-BuildXlogGapError(const char *file, unsigned line,
- const struct vclock *from, const struct vclock *to)
+BuildXlogGapError(const char *file, unsigned line, const struct vclock *from,
+ const struct vclock *to)
{
try {
return new XlogGapError(file, line, from, to);
@@ -278,34 +273,36 @@ BuildXlogGapError(const char *file, unsigned line,
struct rlist on_access_denied = RLIST_HEAD_INITIALIZER(on_access_denied);
static struct method_info accessdeniederror_methods[] = {
- make_method(&type_AccessDeniedError, "access_type", &AccessDeniedError::access_type),
- make_method(&type_AccessDeniedError, "object_type", &AccessDeniedError::object_type),
- make_method(&type_AccessDeniedError, "object_name", &AccessDeniedError::object_name),
+ make_method(&type_AccessDeniedError, "access_type",
+ &AccessDeniedError::access_type),
+ make_method(&type_AccessDeniedError, "object_type",
+ &AccessDeniedError::object_type),
+ make_method(&type_AccessDeniedError, "object_name",
+ &AccessDeniedError::object_name),
METHODS_SENTINEL
};
-const struct type_info type_AccessDeniedError =
- make_type("AccessDeniedError", &type_ClientError,
- accessdeniederror_methods);
+const struct type_info type_AccessDeniedError = make_type(
+ "AccessDeniedError", &type_ClientError, accessdeniederror_methods);
AccessDeniedError::AccessDeniedError(const char *file, unsigned int line,
const char *access_type,
const char *object_type,
const char *object_name,
- const char *user_name,
- bool run_trigers)
- :ClientError(&type_AccessDeniedError, file, line, ER_ACCESS_DENIED)
+ const char *user_name, bool run_trigers)
+ : ClientError(&type_AccessDeniedError, file, line, ER_ACCESS_DENIED)
{
- error_format_msg(this, tnt_errcode_desc(m_errcode),
- access_type, object_type, object_name, user_name);
+ error_format_msg(this, tnt_errcode_desc(m_errcode), access_type,
+ object_type, object_name, user_name);
- struct on_access_denied_ctx ctx = {access_type, object_type, object_name};
+ struct on_access_denied_ctx ctx = { access_type, object_type,
+ object_name };
/*
* Don't run the triggers when create after marshaling
* through network.
*/
if (run_trigers)
- trigger_run(&on_access_denied, (void *) &ctx);
+ trigger_run(&on_access_denied, (void *)&ctx);
m_object_type = strdup(object_type);
m_access_type = strdup(access_type);
m_object_name = strdup(object_name);
@@ -314,8 +311,7 @@ AccessDeniedError::AccessDeniedError(const char *file, unsigned int line,
struct error *
BuildAccessDeniedError(const char *file, unsigned int line,
const char *access_type, const char *object_type,
- const char *object_name,
- const char *user_name)
+ const char *object_name, const char *user_name)
{
try {
return new AccessDeniedError(file, line, access_type,
@@ -327,7 +323,8 @@ BuildAccessDeniedError(const char *file, unsigned int line,
}
static struct method_info customerror_methods[] = {
- make_method(&type_CustomError, "custom_type", &CustomError::custom_type),
+ make_method(&type_CustomError, "custom_type",
+ &CustomError::custom_type),
METHODS_SENTINEL
};
@@ -336,7 +333,7 @@ const struct type_info type_CustomError =
CustomError::CustomError(const char *file, unsigned int line,
const char *custom_type, uint32_t errcode)
- :ClientError(&type_CustomError, file, line, errcode)
+ : ClientError(&type_CustomError, file, line, errcode)
{
strncpy(m_custom_type, custom_type, sizeof(m_custom_type) - 1);
m_custom_type[sizeof(m_custom_type) - 1] = '\0';
@@ -345,8 +342,8 @@ CustomError::CustomError(const char *file, unsigned int line,
void
CustomError::log() const
{
- say_file_line(S_ERROR, file, line, errmsg, "%s",
- "Custom type %s", m_custom_type);
+ say_file_line(S_ERROR, file, line, errmsg, "%s", "Custom type %s",
+ m_custom_type);
}
struct error *
diff --git a/src/box/error.h b/src/box/error.h
index 338121d..7bab7e8 100644
--- a/src/box/error.h
+++ b/src/box/error.h
@@ -50,8 +50,8 @@ struct error *
BuildXlogError(const char *file, unsigned line, const char *format, ...);
struct error *
-BuildXlogGapError(const char *file, unsigned line,
- const struct vclock *from, const struct vclock *to);
+BuildXlogGapError(const char *file, unsigned line, const struct vclock *from,
+ const struct vclock *to);
struct error *
BuildCustomError(const char *file, unsigned int line, const char *custom_type,
@@ -189,43 +189,33 @@ extern const struct type_info type_CustomError;
struct rmean;
extern "C" struct rmean *rmean_error;
-enum rmean_error_name {
- RMEAN_ERROR,
- RMEAN_ERROR_LAST
-};
+enum rmean_error_name { RMEAN_ERROR, RMEAN_ERROR_LAST };
extern const char *rmean_error_strings[RMEAN_ERROR_LAST];
-class ClientError: public Exception
-{
+class ClientError: public Exception {
public:
- virtual void raise()
- {
- throw this;
- }
+ virtual void raise() { throw this; }
virtual void log() const;
- int
- errcode() const
- {
- return m_errcode;
- }
+ int errcode() const { return m_errcode; }
ClientError(const char *file, unsigned line, uint32_t errcode, ...);
static uint32_t get_errcode(const struct error *e);
/* client errno code */
int m_errcode;
+
protected:
ClientError(const type_info *type, const char *file, unsigned line,
uint32_t errcode);
};
-class LoggedError: public ClientError
-{
+class LoggedError: public ClientError {
public:
- template <typename ... Args>
- LoggedError(const char *file, unsigned line, uint32_t errcode, Args ... args)
+ template <typename... Args>
+ LoggedError(const char *file, unsigned line, uint32_t errcode,
+ Args... args)
: ClientError(file, line, errcode, args...)
{
/* TODO: actually calls ClientError::log */
@@ -237,8 +227,7 @@ public:
* A special type of exception which must be used
* for all access denied errors, since it invokes audit triggers.
*/
-class AccessDeniedError: public ClientError
-{
+class AccessDeniedError: public ClientError {
public:
AccessDeniedError(const char *file, unsigned int line,
const char *access_type, const char *object_type,
@@ -252,23 +241,11 @@ public:
free(m_access_type);
}
- const char *
- object_type()
- {
- return m_object_type;
- }
+ const char *object_type() { return m_object_type; }
- const char *
- object_name()
- {
- return m_object_name?:"(nil)";
- }
+ const char *object_name() { return m_object_name ?: "(nil)"; }
- const char *
- access_type()
- {
- return m_access_type;
- }
+ const char *access_type() { return m_access_type; }
private:
/** Type of object the required access was denied to */
@@ -285,46 +262,37 @@ private:
* of exception is introduced to gracefully skip such errors
* in force_recovery = true mode.
*/
-struct XlogError: public Exception
-{
+struct XlogError: public Exception {
XlogError(const char *file, unsigned line, const char *format,
va_list ap)
- :Exception(&type_XlogError, file, line)
+ : Exception(&type_XlogError, file, line)
{
error_vformat_msg(this, format, ap);
}
- XlogError(const struct type_info *type, const char *file,
- unsigned line)
- :Exception(type, file, line)
- {
- }
+ XlogError(const struct type_info *type, const char *file, unsigned line)
+ : Exception(type, file, line)
+ {}
virtual void raise() { throw this; }
};
-struct XlogGapError: public XlogError
-{
- XlogGapError(const char *file, unsigned line,
- const struct vclock *from, const struct vclock *to);
- XlogGapError(const char *file, unsigned line,
- const char *msg);
+struct XlogGapError: public XlogError {
+ XlogGapError(const char *file, unsigned line, const struct vclock *from,
+ const struct vclock *to);
+ XlogGapError(const char *file, unsigned line, const char *msg);
virtual void raise() { throw this; }
};
-class CustomError: public ClientError
-{
+class CustomError: public ClientError {
public:
CustomError(const char *file, unsigned int line,
const char *custom_type, uint32_t errcode);
virtual void log() const;
- const char*
- custom_type()
- {
- return m_custom_type;
- }
+ const char *custom_type() { return m_custom_type; }
+
private:
/** Custom type name. */
char m_custom_type[64];
diff --git a/src/box/execute.c b/src/box/execute.c
index e14da20..11736c7 100644
--- a/src/box/execute.c
+++ b/src/box/execute.c
@@ -104,7 +104,7 @@ static void
port_sql_destroy(struct port *base)
{
port_c_vtab.destroy(base);
- struct port_sql *port_sql = (struct port_sql *) base;
+ struct port_sql *port_sql = (struct port_sql *)base;
if (port_sql->do_finalize)
sql_stmt_finalize(((struct port_sql *)base)->stmt);
}
@@ -125,7 +125,7 @@ port_sql_create(struct port *port, struct sql_stmt *stmt,
{
port_c_create(port);
port->vtab = &port_sql_vtab;
- struct port_sql *port_sql = (struct port_sql *) port;
+ struct port_sql *port_sql = (struct port_sql *)port;
port_sql->stmt = stmt;
port_sql->serialization_format = format;
port_sql->do_finalize = do_finalize;
@@ -142,8 +142,7 @@ port_sql_create(struct port *port, struct sql_stmt *stmt,
* @retval -1 Out of memory when resizing the output buffer.
*/
static inline int
-sql_column_to_messagepack(struct sql_stmt *stmt, int i,
- struct region *region)
+sql_column_to_messagepack(struct sql_stmt *stmt, int i, struct region *region)
{
size_t size;
enum mp_type type = sql_column_type(stmt, i);
@@ -151,7 +150,7 @@ sql_column_to_messagepack(struct sql_stmt *stmt, int i,
case MP_INT: {
int64_t n = sql_column_int64(stmt, i);
size = mp_sizeof_int(n);
- char *pos = (char *) region_alloc(region, size);
+ char *pos = (char *)region_alloc(region, size);
if (pos == NULL)
goto oom;
mp_encode_int(pos, n);
@@ -160,7 +159,7 @@ sql_column_to_messagepack(struct sql_stmt *stmt, int i,
case MP_UINT: {
uint64_t n = sql_column_uint64(stmt, i);
size = mp_sizeof_uint(n);
- char *pos = (char *) region_alloc(region, size);
+ char *pos = (char *)region_alloc(region, size);
if (pos == NULL)
goto oom;
mp_encode_uint(pos, n);
@@ -169,7 +168,7 @@ sql_column_to_messagepack(struct sql_stmt *stmt, int i,
case MP_DOUBLE: {
double d = sql_column_double(stmt, i);
size = mp_sizeof_double(d);
- char *pos = (char *) region_alloc(region, size);
+ char *pos = (char *)region_alloc(region, size);
if (pos == NULL)
goto oom;
mp_encode_double(pos, d);
@@ -178,7 +177,7 @@ sql_column_to_messagepack(struct sql_stmt *stmt, int i,
case MP_STR: {
uint32_t len = sql_column_bytes(stmt, i);
size = mp_sizeof_str(len);
- char *pos = (char *) region_alloc(region, size);
+ char *pos = (char *)region_alloc(region, size);
if (pos == NULL)
goto oom;
const char *s;
@@ -190,8 +189,7 @@ sql_column_to_messagepack(struct sql_stmt *stmt, int i,
case MP_MAP:
case MP_ARRAY: {
uint32_t len = sql_column_bytes(stmt, i);
- const char *s =
- (const char *)sql_column_blob(stmt, i);
+ const char *s = (const char *)sql_column_blob(stmt, i);
if (sql_column_subtype(stmt, i) == SQL_SUBTYPE_MSGPACK) {
size = len;
char *pos = (char *)region_alloc(region, size);
@@ -210,7 +208,7 @@ sql_column_to_messagepack(struct sql_stmt *stmt, int i,
case MP_BOOL: {
bool b = sql_column_boolean(stmt, i);
size = mp_sizeof_bool(b);
- char *pos = (char *) region_alloc(region, size);
+ char *pos = (char *)region_alloc(region, size);
if (pos == NULL)
goto oom;
mp_encode_bool(pos, b);
@@ -218,7 +216,7 @@ sql_column_to_messagepack(struct sql_stmt *stmt, int i,
}
case MP_NIL: {
size = mp_sizeof_nil();
- char *pos = (char *) region_alloc(region, size);
+ char *pos = (char *)region_alloc(region, size);
if (pos == NULL)
goto oom;
mp_encode_nil(pos);
@@ -245,13 +243,13 @@ oom:
* @retval -1 Memory error.
*/
static inline int
-sql_row_to_port(struct sql_stmt *stmt, int column_count,
- struct region *region, struct port *port)
+sql_row_to_port(struct sql_stmt *stmt, int column_count, struct region *region,
+ struct port *port)
{
assert(column_count > 0);
size_t size = mp_sizeof_array(column_count);
size_t svp = region_used(region);
- char *pos = (char *) region_alloc(region, size);
+ char *pos = (char *)region_alloc(region, size);
if (pos == NULL) {
diag_set(OutOfMemory, size, "region_alloc", "SQL row");
return -1;
@@ -263,7 +261,7 @@ sql_row_to_port(struct sql_stmt *stmt, int column_count,
goto error;
}
size = region_used(region) - svp;
- pos = (char *) region_join(region, size);
+ pos = (char *)region_join(region, size);
if (pos == NULL) {
diag_set(OutOfMemory, size, "region_join", "pos");
goto error;
@@ -305,7 +303,7 @@ metadata_map_sizeof(const char *name, const char *type, const char *coll,
members_count++;
map_size += mp_sizeof_uint(IPROTO_FIELD_SPAN);
map_size += span != NULL ? mp_sizeof_str(strlen(span)) :
- mp_sizeof_nil();
+ mp_sizeof_nil();
}
map_size += mp_sizeof_uint(IPROTO_FIELD_NAME);
map_size += mp_sizeof_uint(IPROTO_FIELD_TYPE);
@@ -340,7 +338,7 @@ metadata_map_encode(char *buf, const char *name, const char *type,
buf = mp_encode_uint(buf, IPROTO_FIELD_IS_AUTOINCREMENT);
buf = mp_encode_bool(buf, true);
}
- if (! is_full)
+ if (!is_full)
return;
/*
* Span is an original expression that forms
@@ -370,9 +368,9 @@ static inline int
sql_get_metadata(struct sql_stmt *stmt, struct obuf *out, int column_count)
{
assert(column_count > 0);
- int size = mp_sizeof_uint(IPROTO_METADATA) +
- mp_sizeof_array(column_count);
- char *pos = (char *) obuf_alloc(out, size);
+ int size =
+ mp_sizeof_uint(IPROTO_METADATA) + mp_sizeof_array(column_count);
+ char *pos = (char *)obuf_alloc(out, size);
if (pos == NULL) {
diag_set(OutOfMemory, size, "obuf_alloc", "pos");
return -1;
@@ -395,7 +393,7 @@ sql_get_metadata(struct sql_stmt *stmt, struct obuf *out, int column_count)
assert(type != NULL);
size = metadata_map_sizeof(name, type, coll, span, nullable,
is_autoincrement);
- char *pos = (char *) obuf_alloc(out, size);
+ char *pos = (char *)obuf_alloc(out, size);
if (pos == NULL) {
diag_set(OutOfMemory, size, "obuf_alloc", "pos");
return -1;
@@ -412,7 +410,7 @@ sql_get_params_metadata(struct sql_stmt *stmt, struct obuf *out)
int bind_count = sql_bind_parameter_count(stmt);
int size = mp_sizeof_uint(IPROTO_BIND_METADATA) +
mp_sizeof_array(bind_count);
- char *pos = (char *) obuf_alloc(out, size);
+ char *pos = (char *)obuf_alloc(out, size);
if (pos == NULL) {
diag_set(OutOfMemory, size, "obuf_alloc", "pos");
return -1;
@@ -429,7 +427,7 @@ sql_get_params_metadata(struct sql_stmt *stmt, struct obuf *out)
const char *type = "ANY";
size += mp_sizeof_str(strlen(name));
size += mp_sizeof_str(strlen(type));
- char *pos = (char *) obuf_alloc(out, size);
+ char *pos = (char *)obuf_alloc(out, size);
if (pos == NULL) {
diag_set(OutOfMemory, size, "obuf_alloc", "pos");
return -1;
@@ -448,12 +446,10 @@ sql_get_prepare_common_keys(struct sql_stmt *stmt, struct obuf *out, int keys)
{
const char *sql_str = sql_stmt_query_str(stmt);
uint32_t stmt_id = sql_stmt_calculate_id(sql_str, strlen(sql_str));
- int size = mp_sizeof_map(keys) +
- mp_sizeof_uint(IPROTO_STMT_ID) +
- mp_sizeof_uint(stmt_id) +
- mp_sizeof_uint(IPROTO_BIND_COUNT) +
+ int size = mp_sizeof_map(keys) + mp_sizeof_uint(IPROTO_STMT_ID) +
+ mp_sizeof_uint(stmt_id) + mp_sizeof_uint(IPROTO_BIND_COUNT) +
mp_sizeof_uint(sql_bind_parameter_count(stmt));
- char *pos = (char *) obuf_alloc(out, size);
+ char *pos = (char *)obuf_alloc(out, size);
if (pos == NULL) {
diag_set(OutOfMemory, size, "obuf_alloc", "pos");
return -1;
@@ -479,7 +475,7 @@ port_sql_dump_msgpack(struct port *port, struct obuf *out)
case DQL_EXECUTE: {
int keys = 2;
int size = mp_sizeof_map(keys);
- char *pos = (char *) obuf_alloc(out, size);
+ char *pos = (char *)obuf_alloc(out, size);
if (pos == NULL) {
diag_set(OutOfMemory, size, "obuf_alloc", "pos");
return -1;
@@ -488,7 +484,7 @@ port_sql_dump_msgpack(struct port *port, struct obuf *out)
if (sql_get_metadata(stmt, out, sql_column_count(stmt)) != 0)
return -1;
size = mp_sizeof_uint(IPROTO_DATA);
- pos = (char *) obuf_alloc(out, size);
+ pos = (char *)obuf_alloc(out, size);
if (pos == NULL) {
diag_set(OutOfMemory, size, "obuf_alloc", "pos");
return -1;
@@ -507,7 +503,7 @@ port_sql_dump_msgpack(struct port *port, struct obuf *out)
int size = mp_sizeof_map(keys) +
mp_sizeof_uint(IPROTO_SQL_INFO) +
mp_sizeof_map(map_size);
- char *pos = (char *) obuf_alloc(out, size);
+ char *pos = (char *)obuf_alloc(out, size);
if (pos == NULL) {
diag_set(OutOfMemory, size, "obuf_alloc", "pos");
return -1;
@@ -521,10 +517,11 @@ port_sql_dump_msgpack(struct port *port, struct obuf *out)
mp_sizeof_uint(changes);
if (!stailq_empty(autoinc_id_list)) {
struct autoinc_id_entry *id_entry;
- stailq_foreach_entry(id_entry, autoinc_id_list, link) {
+ stailq_foreach_entry(id_entry, autoinc_id_list, link)
+ {
size += id_entry->id >= 0 ?
- mp_sizeof_uint(id_entry->id) :
- mp_sizeof_int(id_entry->id);
+ mp_sizeof_uint(id_entry->id) :
+ mp_sizeof_int(id_entry->id);
id_count++;
}
size += mp_sizeof_uint(SQL_INFO_AUTOINCREMENT_IDS) +
@@ -541,10 +538,12 @@ port_sql_dump_msgpack(struct port *port, struct obuf *out)
buf = mp_encode_uint(buf, SQL_INFO_AUTOINCREMENT_IDS);
buf = mp_encode_array(buf, id_count);
struct autoinc_id_entry *id_entry;
- stailq_foreach_entry(id_entry, autoinc_id_list, link) {
+ stailq_foreach_entry(id_entry, autoinc_id_list, link)
+ {
buf = id_entry->id >= 0 ?
- mp_encode_uint(buf, id_entry->id) :
- mp_encode_int(buf, id_entry->id);
+ mp_encode_uint(buf,
+ id_entry->id) :
+ mp_encode_int(buf, id_entry->id);
}
}
break;
@@ -569,7 +568,7 @@ port_sql_dump_msgpack(struct port *port, struct obuf *out)
*/
int keys = 3;
return sql_get_prepare_common_keys(stmt, out, keys);
- }
+ }
default: {
unreachable();
}
@@ -592,8 +591,8 @@ sql_reprepare(struct sql_stmt **stmt)
{
const char *sql_str = sql_stmt_query_str(*stmt);
struct sql_stmt *new_stmt;
- if (sql_stmt_compile(sql_str, strlen(sql_str), NULL,
- &new_stmt, NULL) != 0)
+ if (sql_stmt_compile(sql_str, strlen(sql_str), NULL, &new_stmt, NULL) !=
+ 0)
return -1;
if (sql_stmt_cache_update(*stmt, new_stmt) != 0)
return -1;
@@ -630,8 +629,8 @@ sql_prepare(const char *sql, int len, struct port *port)
/* Add id to the list of available statements in session. */
if (!session_check_stmt_id(current_session(), stmt_id))
session_add_stmt_id(current_session(), stmt_id);
- enum sql_serialization_format format = sql_column_count(stmt) > 0 ?
- DQL_PREPARE : DML_PREPARE;
+ enum sql_serialization_format format =
+ sql_column_count(stmt) > 0 ? DQL_PREPARE : DML_PREPARE;
port_sql_create(port, stmt, format, false);
return 0;
@@ -677,8 +676,8 @@ sql_execute(struct sql_stmt *stmt, struct port *port, struct region *region)
if (column_count > 0) {
/* Either ROW or DONE or ERROR. */
while ((rc = sql_step(stmt)) == SQL_ROW) {
- if (sql_row_to_port(stmt, column_count, region,
- port) != 0)
+ if (sql_row_to_port(stmt, column_count, region, port) !=
+ 0)
return -1;
}
assert(rc == SQL_DONE || rc != 0);
@@ -697,7 +696,6 @@ sql_execute_prepared(uint32_t stmt_id, const struct sql_bind *bind,
uint32_t bind_count, struct port *port,
struct region *region)
{
-
if (!session_check_stmt_id(current_session(), stmt_id)) {
diag_set(ClientError, ER_WRONG_QUERY_ID, stmt_id);
return -1;
@@ -720,8 +718,8 @@ sql_execute_prepared(uint32_t stmt_id, const struct sql_bind *bind,
sql_unbind(stmt);
if (sql_bind(stmt, bind, bind_count) != 0)
return -1;
- enum sql_serialization_format format = sql_column_count(stmt) > 0 ?
- DQL_EXECUTE : DML_EXECUTE;
+ enum sql_serialization_format format =
+ sql_column_count(stmt) > 0 ? DQL_EXECUTE : DML_EXECUTE;
port_sql_create(port, stmt, format, false);
if (sql_execute(stmt, port, region) != 0) {
port_destroy(port);
@@ -742,8 +740,8 @@ sql_prepare_and_execute(const char *sql, int len, const struct sql_bind *bind,
if (sql_stmt_compile(sql, len, NULL, &stmt, NULL) != 0)
return -1;
assert(stmt != NULL);
- enum sql_serialization_format format = sql_column_count(stmt) > 0 ?
- DQL_EXECUTE : DML_EXECUTE;
+ enum sql_serialization_format format =
+ sql_column_count(stmt) > 0 ? DQL_EXECUTE : DML_EXECUTE;
port_sql_create(port, stmt, format, true);
if (sql_bind(stmt, bind, bind_count) == 0 &&
sql_execute(stmt, port, region) == 0)
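
For readers skimming the hunks above: the mechanical changes in execute.c (casts written as `(char *)ptr` with no space, `!is_full` instead of `! is_full`, braced initializers spaced as `{ a, b }`, and arguments repacked to the 80-column limit) each map to a specific clang-format option. The fragment below is a hypothetical sketch of what the src/box/.clang-format added in patch 2/3 could contain; the option names are real clang-format options, but the values here are inferred from the diff, not copied from the actual file:

```yaml
# Sketch of a .clang-format consistent with the reformatting above.
# Values are inferred from the diff, not taken from the real file.
BasedOnStyle: LLVM
IndentWidth: 8
UseTab: Always
ContinuationIndentWidth: 8
ColumnLimit: 80
PointerAlignment: Right        # char *pos, not char* pos
SpaceAfterCStyleCast: false    # (char *)region_alloc(...)
AlignAfterOpenBracket: Align   # continuation args align under '('
Cpp11BracedListStyle: false    # { "LUA", "C" } with inner spaces
```

With such a file in place, the command from the cover letter (`find . ... | xargs clang-format -i`) would reproduce most of the whitespace-only hunks in this patch.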
diff --git a/src/box/field_def.c b/src/box/field_def.c
index 34cecfa..e893641 100644
--- a/src/box/field_def.c
+++ b/src/box/field_def.c
@@ -75,7 +75,8 @@ const uint32_t field_mp_type[] = {
};
const uint32_t field_ext_type[] = {
- /* [FIELD_TYPE_ANY] = */ UINT32_MAX ^ (1U << MP_UNKNOWN_EXTENSION),
+ /* [FIELD_TYPE_ANY] = */ UINT32_MAX ^
+ (1U << MP_UNKNOWN_EXTENSION),
/* [FIELD_TYPE_UNSIGNED] = */ 0,
/* [FIELD_TYPE_STRING] = */ 0,
/* [FIELD_TYPE_NUMBER] = */ 1U << MP_DECIMAL,
@@ -98,7 +99,7 @@ const char *field_type_strs[] = {
/* [FIELD_TYPE_DOUBLE] = */ "double",
/* [FIELD_TYPE_INTEGER] = */ "integer",
/* [FIELD_TYPE_BOOLEAN] = */ "boolean",
- /* [FIELD_TYPE_VARBINARY] = */"varbinary",
+ /* [FIELD_TYPE_VARBINARY] = */ "varbinary",
/* [FIELD_TYPE_SCALAR] = */ "scalar",
/* [FIELD_TYPE_DECIMAL] = */ "decimal",
/* [FIELD_TYPE_UUID] = */ "uuid",
@@ -165,21 +166,20 @@ const struct opt_def field_def_reg[] = {
OPT_END,
};
-const struct field_def field_def_default = {
- .type = FIELD_TYPE_ANY,
- .name = NULL,
- .is_nullable = false,
- .nullable_action = ON_CONFLICT_ACTION_DEFAULT,
- .coll_id = COLL_NONE,
- .default_value = NULL,
- .default_value_expr = NULL
-};
+const struct field_def field_def_default = { .type = FIELD_TYPE_ANY,
+ .name = NULL,
+ .is_nullable = false,
+ .nullable_action =
+ ON_CONFLICT_ACTION_DEFAULT,
+ .coll_id = COLL_NONE,
+ .default_value = NULL,
+ .default_value_expr = NULL };
enum field_type
field_type_by_name(const char *name, size_t len)
{
- enum field_type field_type = strnindex(field_type_strs, name, len,
- field_type_MAX);
+ enum field_type field_type =
+ strnindex(field_type_strs, name, len, field_type_MAX);
if (field_type != field_type_MAX)
return field_type;
/* 'num' and 'str' in _index are deprecated since Tarantool 1.7 */
diff --git a/src/box/field_def.h b/src/box/field_def.h
index c5cfe5e..e839180 100644
--- a/src/box/field_def.h
+++ b/src/box/field_def.h
@@ -92,7 +92,7 @@ enum {
* For detailed explanation see context of OP_Eq, OP_Lt etc
* opcodes in vdbe.c.
*/
-static_assert((int) field_type_MAX <= (int) FIELD_TYPE_MASK,
+static_assert((int)field_type_MAX <= (int)FIELD_TYPE_MASK,
"values of enum field_type should fit into 4 bits of VdbeOp.p5");
extern const char *field_type_strs[];
diff --git a/src/box/field_map.c b/src/box/field_map.c
index dc90311..6b4433c 100644
--- a/src/box/field_map.c
+++ b/src/box/field_map.c
@@ -34,8 +34,7 @@
int
field_map_builder_create(struct field_map_builder *builder,
- uint32_t minimal_field_map_size,
- struct region *region)
+ uint32_t minimal_field_map_size, struct region *region)
{
builder->extents_size = 0;
builder->slot_count = minimal_field_map_size / sizeof(uint32_t);
@@ -63,10 +62,10 @@ field_map_builder_slot_extent_new(struct field_map_builder *builder,
{
struct field_map_builder_slot_extent *extent;
assert(!builder->slots[offset_slot].has_extent);
- uint32_t sz = sizeof(*extent) +
- multikey_count * sizeof(extent->offset[0]);
- extent = (struct field_map_builder_slot_extent *)
- region_aligned_alloc(region, sz, alignof(*extent));
+ uint32_t sz =
+ sizeof(*extent) + multikey_count * sizeof(extent->offset[0]);
+ extent = (struct field_map_builder_slot_extent *)region_aligned_alloc(
+ region, sz, alignof(*extent));
if (extent == NULL) {
diag_set(OutOfMemory, sz, "region_aligned_alloc", "extent");
return NULL;
@@ -112,13 +111,13 @@ field_map_build(struct field_map_builder *builder, char *buffer)
continue;
}
struct field_map_builder_slot_extent *extent =
- builder->slots[i].extent;
+ builder->slots[i].extent;
/** Retrieve memory for the extent. */
store_u32(&field_map[i], extent_wptr - (char *)field_map);
store_u32(extent_wptr, extent->size);
uint32_t extent_offset_sz = extent->size * sizeof(uint32_t);
- memcpy(&((uint32_t *) extent_wptr)[1], extent->offset,
- extent_offset_sz);
+ memcpy(&((uint32_t *)extent_wptr)[1], extent->offset,
+ extent_offset_sz);
extent_wptr += sizeof(uint32_t) + extent_offset_sz;
}
assert(extent_wptr == buffer + builder->extents_size);
diff --git a/src/box/field_map.h b/src/box/field_map.h
index d8ef726..f96663b 100644
--- a/src/box/field_map.h
+++ b/src/box/field_map.h
@@ -163,8 +163,9 @@ field_map_get_offset(const uint32_t *field_map, int32_t offset_slot,
* The field_map extent has the following
* structure: [size=N|slot1|slot2|..|slotN]
*/
- const uint32_t *extent = (const uint32_t *)
- ((const char *)field_map + (int32_t)offset);
+ const uint32_t *extent =
+ (const uint32_t *)((const char *)field_map +
+ (int32_t)offset);
if ((uint32_t)multikey_idx >= load_u32(&extent[0]))
return 0;
offset = load_u32(&extent[multikey_idx + 1]);
@@ -229,8 +230,8 @@ field_map_builder_set_slot(struct field_map_builder *builder,
assert(extent != NULL);
assert(extent->size == multikey_count);
} else {
- extent = field_map_builder_slot_extent_new(builder,
- offset_slot, multikey_count, region);
+ extent = field_map_builder_slot_extent_new(
+ builder, offset_slot, multikey_count, region);
if (extent == NULL)
return -1;
}
@@ -245,8 +246,7 @@ field_map_builder_set_slot(struct field_map_builder *builder,
static inline uint32_t
field_map_build_size(struct field_map_builder *builder)
{
- return builder->slot_count * sizeof(uint32_t) +
- builder->extents_size;
+ return builder->slot_count * sizeof(uint32_t) + builder->extents_size;
}
/**
diff --git a/src/box/fk_constraint.h b/src/box/fk_constraint.h
index dcc5363..c1d90a6 100644
--- a/src/box/fk_constraint.h
+++ b/src/box/fk_constraint.h
@@ -135,8 +135,9 @@ static inline size_t
fk_constraint_def_sizeof(uint32_t link_count, uint32_t name_len,
uint32_t *links_offset)
{
- *links_offset = small_align(sizeof(struct fk_constraint_def) +
- name_len + 1, alignof(struct field_link));
+ *links_offset =
+ small_align(sizeof(struct fk_constraint_def) + name_len + 1,
+ alignof(struct field_link));
return *links_offset + link_count * sizeof(struct field_link);
}
diff --git a/src/box/func.c b/src/box/func.c
index 8087c95..6adbb71 100644
--- a/src/box/func.c
+++ b/src/box/func.c
@@ -111,8 +111,8 @@ struct module_find_ctx {
static int
luaT_module_find(lua_State *L)
{
- struct module_find_ctx *ctx = (struct module_find_ctx *)
- lua_topointer(L, 1);
+ struct module_find_ctx *ctx =
+ (struct module_find_ctx *)lua_topointer(L, 1);
/*
* Call package.searchpath(name, package.cpath) and use
@@ -156,7 +156,7 @@ module_find(const char *package, const char *package_end, char *path,
lua_State *L = tarantool_L;
int top = lua_gettop(L);
if (luaT_cpcall(L, luaT_module_find, &ctx) != 0) {
- int package_len = (int) (package_end - package);
+ int package_len = (int)(package_end - package);
diag_set(ClientError, ER_LOAD_MODULE, package_len, package,
lua_tostring(L, -1));
lua_settop(L, top);
@@ -177,7 +177,7 @@ module_init(void)
modules = mh_strnptr_new();
if (modules == NULL) {
diag_set(OutOfMemory, sizeof(*modules), "malloc",
- "modules hash table");
+ "modules hash table");
return -1;
}
return 0;
@@ -189,7 +189,7 @@ module_free(void)
while (mh_size(modules) > 0) {
mh_int_t i = mh_first(modules);
struct module *module =
- (struct module *) mh_strnptr_node(modules, i)->val;
+ (struct module *)mh_strnptr_node(modules, i)->val;
/* Can't delete modules if they have active calls */
module_gc(module);
}
@@ -216,8 +216,8 @@ module_cache_put(struct module *module)
{
size_t package_len = strlen(module->package);
uint32_t name_hash = mh_strn_hash(module->package, package_len);
- const struct mh_strnptr_node_t strnode = {
- module->package, package_len, name_hash, module};
+ const struct mh_strnptr_node_t strnode = { module->package, package_len,
+ name_hash, module };
if (mh_strnptr_put(modules, &strnode, NULL, NULL) == mh_end(modules)) {
diag_set(OutOfMemory, sizeof(strnode), "malloc", "modules");
@@ -252,8 +252,8 @@ module_load(const char *package, const char *package_end)
return NULL;
int package_len = package_end - package;
- struct module *module = (struct module *)
- malloc(sizeof(*module) + package_len + 1);
+ struct module *module =
+ (struct module *)malloc(sizeof(*module) + package_len + 1);
if (module == NULL) {
diag_set(OutOfMemory, sizeof(struct module) + package_len + 1,
"malloc", "struct module");
@@ -269,7 +269,7 @@ module_load(const char *package, const char *package_end)
tmpdir = "/tmp";
char dir_name[PATH_MAX];
int rc = snprintf(dir_name, sizeof(dir_name), "%s/tntXXXXXX", tmpdir);
- if (rc < 0 || (size_t) rc >= sizeof(dir_name)) {
+ if (rc < 0 || (size_t)rc >= sizeof(dir_name)) {
diag_set(SystemError, "failed to generate path to tmp dir");
goto error;
}
@@ -282,7 +282,7 @@ module_load(const char *package, const char *package_end)
char load_name[PATH_MAX];
rc = snprintf(load_name, sizeof(load_name), "%s/%.*s." TARANTOOL_LIBEXT,
dir_name, package_len, package);
- if (rc < 0 || (size_t) rc >= sizeof(dir_name)) {
+ if (rc < 0 || (size_t)rc >= sizeof(dir_name)) {
diag_set(SystemError, "failed to generate path to DSO");
goto error;
}
@@ -295,11 +295,13 @@ module_load(const char *package, const char *package_end)
int source_fd = open(path, O_RDONLY);
if (source_fd < 0) {
- diag_set(SystemError, "failed to open module %s file for" \
- " reading", path);
+ diag_set(SystemError,
+ "failed to open module %s file for"
+ " reading",
+ path);
goto error;
}
- int dest_fd = open(load_name, O_WRONLY|O_CREAT|O_TRUNC,
+ int dest_fd = open(load_name, O_WRONLY | O_CREAT | O_TRUNC,
st.st_mode & 0777);
if (dest_fd < 0) {
diag_set(SystemError, "failed to open file %s for writing ",
@@ -312,8 +314,8 @@ module_load(const char *package, const char *package_end)
close(source_fd);
close(dest_fd);
if (ret != st.st_size) {
- diag_set(SystemError, "failed to copy DSO %s to %s",
- path, load_name);
+ diag_set(SystemError, "failed to copy DSO %s to %s", path,
+ load_name);
goto error;
}
@@ -323,8 +325,8 @@ module_load(const char *package, const char *package_end)
if (rmdir(dir_name) != 0)
say_warn("failed to delete temporary dir %s", dir_name);
if (module->handle == NULL) {
- diag_set(ClientError, ER_LOAD_MODULE, package_len,
- package, dlerror());
+ diag_set(ClientError, ER_LOAD_MODULE, package_len, package,
+ dlerror());
goto error;
}
struct errinj *e = errinj(ERRINJ_DYN_MODULE_COUNT, ERRINJ_INT);
@@ -372,7 +374,8 @@ module_sym(struct module *module, const char *name)
}
int
-module_reload(const char *package, const char *package_end, struct module **module)
+module_reload(const char *package, const char *package_end,
+ struct module **module)
{
struct module *old_module = module_cache_find(package, package_end);
if (old_module == NULL) {
@@ -420,8 +423,8 @@ restore:
}
func->module = old_module;
rlist_move(&old_module->funcs, &func->item);
- } while (func != rlist_first_entry(&old_module->funcs,
- struct func_c, item));
+ } while (func !=
+ rlist_first_entry(&old_module->funcs, struct func_c, item));
assert(rlist_empty(&new_module->funcs));
module_delete(new_module);
return -1;
@@ -478,7 +481,7 @@ func_c_new(MAYBE_UNUSED struct func_def *def)
{
assert(def->language == FUNC_LANGUAGE_C);
assert(def->body == NULL && !def->is_sandboxed);
- struct func_c *func = (struct func_c *) malloc(sizeof(struct func_c));
+ struct func_c *func = (struct func_c *)malloc(sizeof(struct func_c));
if (func == NULL) {
diag_set(OutOfMemory, sizeof(*func), "malloc", "func");
return NULL;
@@ -510,7 +513,7 @@ func_c_destroy(struct func *base)
{
assert(base->vtab == &func_c_vtab);
assert(base != NULL && base->def->language == FUNC_LANGUAGE_C);
- struct func_c *func = (struct func_c *) base;
+ struct func_c *func = (struct func_c *)base;
func_c_unload(func);
TRASH(base);
free(func);
@@ -528,8 +531,8 @@ func_c_load(struct func_c *func)
struct func_name name;
func_split_name(func->base.def->name, &name);
- struct module *module = module_cache_find(name.package,
- name.package_end);
+ struct module *module =
+ module_cache_find(name.package, name.package_end);
if (module == NULL) {
/* Try to find loaded module in the cache */
module = module_load(name.package, name.package_end);
@@ -554,7 +557,7 @@ func_c_call(struct func *base, struct port *args, struct port *ret)
{
assert(base->vtab == &func_c_vtab);
assert(base != NULL && base->def->language == FUNC_LANGUAGE_C);
- struct func_c *func = (struct func_c *) base;
+ struct func_c *func = (struct func_c *)base;
if (func->func == NULL) {
if (func_c_load(func) != 0)
return -1;
@@ -618,7 +621,8 @@ func_access_check(struct func *func)
return 0;
user_access_t access = PRIV_X | PRIV_U;
/* Check access for all functions. */
- access &= ~entity_access_get(SC_FUNCTION)[credentials->auth_token].effective;
+ access &= ~entity_access_get(SC_FUNCTION)[credentials->auth_token]
+ .effective;
user_access_t func_access = access & ~credentials->universal_access;
if ((func_access & PRIV_U) != 0 ||
(func->def->uid != credentials->uid &&
diff --git a/src/box/func.h b/src/box/func.h
index 581e468..4e9d22d 100644
--- a/src/box/func.h
+++ b/src/box/func.h
@@ -118,7 +118,8 @@ func_call(struct func *func, struct port *args, struct port *ret);
* @retval 0 on success.
*/
int
-module_reload(const char *package, const char *package_end, struct module **module);
+module_reload(const char *package, const char *package_end,
+ struct module **module);
#if defined(__cplusplus)
} /* extern "C" */
diff --git a/src/box/func_def.c b/src/box/func_def.c
index 11d2bdb..47ead9e 100644
--- a/src/box/func_def.c
+++ b/src/box/func_def.c
@@ -34,9 +34,9 @@
#include "diag.h"
#include "error.h"
-const char *func_language_strs[] = {"LUA", "C", "SQL", "SQL_BUILTIN"};
+const char *func_language_strs[] = { "LUA", "C", "SQL", "SQL_BUILTIN" };
-const char *func_aggregate_strs[] = {"none", "group"};
+const char *func_aggregate_strs[] = { "none", "group" };
const struct func_opts func_opts_default = {
/* .is_multikey = */ false,
@@ -102,25 +102,28 @@ func_def_check(struct func_def *def)
switch (def->language) {
case FUNC_LANGUAGE_C:
if (def->body != NULL || def->is_sandboxed) {
- diag_set(ClientError, ER_CREATE_FUNCTION, def->name,
- "body and is_sandboxed options are not compatible "
- "with C language");
+ diag_set(
+ ClientError, ER_CREATE_FUNCTION, def->name,
+ "body and is_sandboxed options are not compatible "
+ "with C language");
return -1;
}
break;
case FUNC_LANGUAGE_LUA:
if (def->is_sandboxed && def->body == NULL) {
- diag_set(ClientError, ER_CREATE_FUNCTION, def->name,
- "is_sandboxed option may be set only for a persistent "
- "Lua function (one with a non-empty body)");
+ diag_set(
+ ClientError, ER_CREATE_FUNCTION, def->name,
+ "is_sandboxed option may be set only for a persistent "
+ "Lua function (one with a non-empty body)");
return -1;
}
break;
case FUNC_LANGUAGE_SQL_BUILTIN:
if (def->body != NULL || def->is_sandboxed) {
- diag_set(ClientError, ER_CREATE_FUNCTION, def->name,
- "body and is_sandboxed options are not compatible "
- "with SQL language");
+ diag_set(
+ ClientError, ER_CREATE_FUNCTION, def->name,
+ "body and is_sandboxed options are not compatible "
+ "with SQL language");
return -1;
}
break;
diff --git a/src/box/func_def.h b/src/box/func_def.h
index d99d891..13db751 100644
--- a/src/box/func_def.h
+++ b/src/box/func_def.h
@@ -179,8 +179,8 @@ struct box_function_ctx {
};
typedef struct box_function_ctx box_function_ctx_t;
-typedef int (*box_function_f)(box_function_ctx_t *ctx,
- const char *args, const char *args_end);
+typedef int (*box_function_f)(box_function_ctx_t *ctx, const char *args,
+ const char *args_end);
#ifdef __cplusplus
}
diff --git a/src/box/gc.c b/src/box/gc.c
index 76f7c63..0763a9b 100644
--- a/src/box/gc.c
+++ b/src/box/gc.c
@@ -54,17 +54,15 @@
#include "say.h"
#include "vclock.h"
#include "cbus.h"
-#include "engine.h" /* engine_collect_garbage() */
-#include "wal.h" /* wal_collect_garbage() */
+#include "engine.h" /* engine_collect_garbage() */
+#include "wal.h" /* wal_collect_garbage() */
#include "checkpoint_schedule.h"
#include "txn_limbo.h"
struct gc_state gc;
-static int
-gc_cleanup_fiber_f(va_list);
-static int
-gc_checkpoint_fiber_f(va_list);
+static int gc_cleanup_fiber_f(va_list);
+static int gc_checkpoint_fiber_f(va_list);
/**
* Comparator used for ordering gc_consumer objects
@@ -83,8 +81,8 @@ gc_consumer_cmp(const struct gc_consumer *a, const struct gc_consumer *b)
return 0;
}
-rb_gen(MAYBE_UNUSED static inline, gc_tree_, gc_tree_t,
- struct gc_consumer, node, gc_consumer_cmp);
+rb_gen(MAYBE_UNUSED static inline, gc_tree_, gc_tree_t, struct gc_consumer,
+ node, gc_consumer_cmp);
/** Free a consumer object. */
static void
@@ -119,8 +117,8 @@ gc_init(void)
if (gc.cleanup_fiber == NULL)
panic("failed to start garbage collection fiber");
- gc.checkpoint_fiber = fiber_new("checkpoint_daemon",
- gc_checkpoint_fiber_f);
+ gc.checkpoint_fiber =
+ fiber_new("checkpoint_daemon", gc_checkpoint_fiber_f);
if (gc.checkpoint_fiber == NULL)
panic("failed to start checkpoint daemon fiber");
@@ -145,8 +143,8 @@ gc_free(void)
/* Free all registered consumers. */
struct gc_consumer *consumer = gc_tree_first(&gc.consumers);
while (consumer != NULL) {
- struct gc_consumer *next = gc_tree_next(&gc.consumers,
- consumer);
+ struct gc_consumer *next =
+ gc_tree_next(&gc.consumers, consumer);
gc_tree_remove(&gc.consumers, consumer);
gc_consumer_delete(consumer);
consumer = next;
@@ -172,8 +170,8 @@ gc_run_cleanup(void)
*/
struct gc_checkpoint *checkpoint = NULL;
while (true) {
- checkpoint = rlist_first_entry(&gc.checkpoints,
- struct gc_checkpoint, in_checkpoints);
+ checkpoint = rlist_first_entry(
+ &gc.checkpoints, struct gc_checkpoint, in_checkpoints);
if (gc.checkpoint_count <= gc.min_checkpoint_count)
break;
if (!rlist_empty(&checkpoint->refs))
@@ -297,8 +295,8 @@ gc_advance(const struct vclock *vclock)
struct gc_consumer *consumer = gc_tree_first(&gc.consumers);
while (consumer != NULL) {
- struct gc_consumer *next = gc_tree_next(&gc.consumers,
- consumer);
+ struct gc_consumer *next =
+ gc_tree_next(&gc.consumers, consumer);
/*
* Remove all the consumers whose vclocks are
* either less than or incomparable with the wal
@@ -496,8 +494,8 @@ gc_checkpoint_fiber_f(va_list ap)
struct checkpoint_schedule *sched = &gc.checkpoint_schedule;
while (!fiber_is_cancelled()) {
- double timeout = checkpoint_schedule_timeout(sched,
- ev_monotonic_now(loop()));
+ double timeout = checkpoint_schedule_timeout(
+ sched, ev_monotonic_now(loop()));
if (timeout > 0) {
char buf[128];
struct tm tm;
@@ -556,8 +554,8 @@ gc_consumer_register(const struct vclock *vclock, const char *format, ...)
{
struct gc_consumer *consumer = calloc(1, sizeof(*consumer));
if (consumer == NULL) {
- diag_set(OutOfMemory, sizeof(*consumer),
- "malloc", "struct gc_consumer");
+ diag_set(OutOfMemory, sizeof(*consumer), "malloc",
+ "struct gc_consumer");
return NULL;
}
diff --git a/src/box/identifier.c b/src/box/identifier.c
index b1c56bd..4432cc8 100644
--- a/src/box/identifier.c
+++ b/src/box/identifier.c
@@ -57,10 +57,8 @@ identifier_check(const char *str, int str_len)
* Here the `c` symbol printability is determined by comparison
* with unicode category types explicitly.
*/
- if (type == U_UNASSIGNED ||
- type == U_LINE_SEPARATOR ||
- type == U_CONTROL_CHAR ||
- type == U_PARAGRAPH_SEPARATOR)
+ if (type == U_UNASSIGNED || type == U_LINE_SEPARATOR ||
+ type == U_CONTROL_CHAR || type == U_PARAGRAPH_SEPARATOR)
goto error;
}
return 0;
diff --git a/src/box/index.cc b/src/box/index.cc
index c2fc008..0d88298 100644
--- a/src/box/index.cc
+++ b/src/box/index.cc
@@ -42,14 +42,16 @@
/* {{{ Utilities. **********************************************/
UnsupportedIndexFeature::UnsupportedIndexFeature(const char *file,
- unsigned line, struct index_def *index_def, const char *what)
+ unsigned line,
+ struct index_def *index_def,
+ const char *what)
: ClientError(file, line, ER_UNKNOWN)
{
struct space *space = space_cache_find_xc(index_def->space_id);
m_errcode = ER_UNSUPPORTED_INDEX_FEATURE;
error_format_msg(this, tnt_errcode_desc(m_errcode), index_def->name,
- index_type_strs[index_def->type],
- space->def->name, space->def->engine_name, what);
+ index_type_strs[index_def->type], space->def->name,
+ space->def->engine_name, what);
}
struct error *
@@ -84,7 +86,7 @@ key_validate(const struct index_def *index_def, enum iterator_type type,
if (index_def->type == RTREE) {
unsigned d = index_def->opts.dimension;
if (part_count != 1 && part_count != d && part_count != d * 2) {
- diag_set(ClientError, ER_KEY_PART_COUNT, d * 2,
+ diag_set(ClientError, ER_KEY_PART_COUNT, d * 2,
part_count);
return -1;
}
@@ -98,8 +100,8 @@ key_validate(const struct index_def *index_def, enum iterator_type type,
return -1;
}
for (uint32_t part = 0; part < array_size; part++) {
- if (key_part_validate(FIELD_TYPE_NUMBER, key,
- 0, false))
+ if (key_part_validate(FIELD_TYPE_NUMBER, key, 0,
+ false))
return -1;
mp_next(&key);
}
@@ -119,16 +121,16 @@ key_validate(const struct index_def *index_def, enum iterator_type type,
}
/* Partial keys are allowed only for TREE index type. */
- if (index_def->type != TREE && part_count < index_def->key_def->part_count) {
+ if (index_def->type != TREE &&
+ part_count < index_def->key_def->part_count) {
diag_set(ClientError, ER_PARTIAL_KEY,
index_type_strs[index_def->type],
- index_def->key_def->part_count,
- part_count);
+ index_def->key_def->part_count, part_count);
return -1;
}
const char *key_end;
- if (key_validate_parts(index_def->key_def, key,
- part_count, true, &key_end) != 0)
+ if (key_validate_parts(index_def->key_def, key, part_count,
+ true, &key_end) != 0)
return -1;
}
return 0;
@@ -158,13 +160,13 @@ box_tuple_extract_key(box_tuple_t *tuple, uint32_t space_id, uint32_t index_id,
struct index *index = index_find(space, index_id);
if (index == NULL)
return NULL;
- return tuple_extract_key(tuple, index->def->key_def,
- MULTIKEY_NONE, key_size);
+ return tuple_extract_key(tuple, index->def->key_def, MULTIKEY_NONE,
+ key_size);
}
static inline int
-check_index(uint32_t space_id, uint32_t index_id,
- struct space **space, struct index **index)
+check_index(uint32_t space_id, uint32_t index_id, struct space **space,
+ struct index **index)
{
*space = space_cache_find(space_id);
if (*space == NULL)
@@ -205,7 +207,7 @@ box_index_bsize(uint32_t space_id, uint32_t index_id)
int
box_index_random(uint32_t space_id, uint32_t index_id, uint32_t rnd,
- box_tuple_t **result)
+ box_tuple_t **result)
{
assert(result != NULL);
struct space *space;
@@ -318,8 +320,8 @@ box_index_max(uint32_t space_id, uint32_t index_id, const char *key,
}
ssize_t
-box_index_count(uint32_t space_id, uint32_t index_id, int type,
- const char *key, const char *key_end)
+box_index_count(uint32_t space_id, uint32_t index_id, int type, const char *key,
+ const char *key_end)
{
assert(key != NULL && key_end != NULL);
mp_tuple_assert(key, key_end);
@@ -328,7 +330,7 @@ box_index_count(uint32_t space_id, uint32_t index_id, int type,
"Invalid iterator type");
return -1;
}
- enum iterator_type itype = (enum iterator_type) type;
+ enum iterator_type itype = (enum iterator_type)type;
struct space *space;
struct index *index;
if (check_index(space_id, index_id, &space, &index) != 0)
@@ -355,7 +357,7 @@ box_index_count(uint32_t space_id, uint32_t index_id, int type,
box_iterator_t *
box_index_iterator(uint32_t space_id, uint32_t index_id, int type,
- const char *key, const char *key_end)
+ const char *key, const char *key_end)
{
assert(key != NULL && key_end != NULL);
mp_tuple_assert(key, key_end);
@@ -364,7 +366,7 @@ box_index_iterator(uint32_t space_id, uint32_t index_id, int type,
"Invalid iterator type");
return NULL;
}
- enum iterator_type itype = (enum iterator_type) type;
+ enum iterator_type itype = (enum iterator_type)type;
struct space *space;
struct index *index;
if (check_index(space_id, index_id, &space, &index) != 0)
@@ -376,8 +378,8 @@ box_index_iterator(uint32_t space_id, uint32_t index_id, int type,
struct txn *txn;
if (txn_begin_ro_stmt(space, &txn) != 0)
return NULL;
- struct iterator *it = index_create_iterator(index, itype,
- key, part_count);
+ struct iterator *it =
+ index_create_iterator(index, itype, key, part_count);
if (it == NULL) {
txn_rollback_stmt(txn);
return NULL;
@@ -409,8 +411,7 @@ box_iterator_free(box_iterator_t *it)
/* {{{ Other index functions */
int
-box_index_stat(uint32_t space_id, uint32_t index_id,
- struct info_handler *info)
+box_index_stat(uint32_t space_id, uint32_t index_id, struct info_handler *info)
{
struct space *space;
struct index *index;
@@ -520,9 +521,8 @@ index_build(struct index *index, struct index *pk)
return -1;
if (n_tuples > 0) {
- say_info("Adding %zd keys to %s index '%s' ...",
- n_tuples, index_type_strs[index->def->type],
- index->def->name);
+ say_info("Adding %zd keys to %s index '%s' ...", n_tuples,
+ index_type_strs[index->def->type], index->def->name);
}
struct iterator *it = index_create_iterator(pk, ITER_ALL, NULL, 0);
@@ -555,30 +555,26 @@ index_build(struct index *index, struct index *pk)
void
generic_index_commit_create(struct index *, int64_t)
-{
-}
+{}
void
generic_index_abort_create(struct index *)
-{
-}
+{}
void
generic_index_commit_modify(struct index *, int64_t)
-{
-}
+{}
void
generic_index_commit_drop(struct index *, int64_t)
-{
-}
+{}
void
generic_index_update_def(struct index *)
-{
-}
+{}
-bool generic_index_depends_on_pk(struct index *)
+bool
+generic_index_depends_on_pk(struct index *)
{
return false;
}
@@ -604,11 +600,11 @@ generic_index_bsize(struct index *)
}
int
-generic_index_min(struct index *index, const char *key,
- uint32_t part_count, struct tuple **result)
+generic_index_min(struct index *index, const char *key, uint32_t part_count,
+ struct tuple **result)
{
- struct iterator *it = index_create_iterator(index, ITER_EQ,
- key, part_count);
+ struct iterator *it =
+ index_create_iterator(index, ITER_EQ, key, part_count);
if (it == NULL)
return -1;
int rc = iterator_next(it, result);
@@ -617,11 +613,11 @@ generic_index_min(struct index *index, const char *key,
}
int
-generic_index_max(struct index *index, const char *key,
- uint32_t part_count, struct tuple **result)
+generic_index_max(struct index *index, const char *key, uint32_t part_count,
+ struct tuple **result)
{
- struct iterator *it = index_create_iterator(index, ITER_REQ,
- key, part_count);
+ struct iterator *it =
+ index_create_iterator(index, ITER_REQ, key, part_count);
if (it == NULL)
return -1;
int rc = iterator_next(it, result);
@@ -642,8 +638,8 @@ ssize_t
generic_index_count(struct index *index, enum iterator_type type,
const char *key, uint32_t part_count)
{
- struct iterator *it = index_create_iterator(index, type,
- key, part_count);
+ struct iterator *it =
+ index_create_iterator(index, type, key, part_count);
if (it == NULL)
return -1;
int rc = 0;
@@ -658,8 +654,8 @@ generic_index_count(struct index *index, enum iterator_type type,
}
int
-generic_index_get(struct index *index, const char *key,
- uint32_t part_count, struct tuple **result)
+generic_index_get(struct index *index, const char *key, uint32_t part_count,
+ struct tuple **result)
{
(void)key;
(void)part_count;
@@ -685,12 +681,13 @@ struct iterator *
generic_index_create_iterator(struct index *base, enum iterator_type type,
const char *key, uint32_t part_count)
{
- (void) type; (void) key; (void) part_count;
+ (void)type;
+ (void)key;
+ (void)part_count;
diag_set(UnsupportedIndexFeature, base->def, "read view");
return NULL;
}
-
struct snapshot_iterator *
generic_index_create_snapshot_iterator(struct index *index)
{
@@ -720,8 +717,7 @@ generic_index_reset_stat(struct index *index)
void
generic_index_begin_build(struct index *)
-{
-}
+{}
int
generic_index_reserve(struct index *, uint32_t)
@@ -745,13 +741,13 @@ generic_index_build_next(struct index *index, struct tuple *tuple)
void
generic_index_end_build(struct index *)
-{
-}
+{}
int
disabled_index_build_next(struct index *index, struct tuple *tuple)
{
- (void) index; (void) tuple;
+ (void)index;
+ (void)tuple;
return 0;
}
@@ -760,8 +756,10 @@ disabled_index_replace(struct index *index, struct tuple *old_tuple,
struct tuple *new_tuple, enum dup_replace_mode mode,
struct tuple **result)
{
- (void) old_tuple; (void) new_tuple; (void) mode;
- (void) index;
+ (void)old_tuple;
+ (void)new_tuple;
+ (void)mode;
+ (void)index;
*result = NULL;
return 0;
}
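[Editorial note] For readers evaluating the style before reaching patch 2/3 (which adds the actual `src/box/.clang-format`), the changes in the hunks above — prototypes broken after the return type, empty stub bodies collapsed to `{}`, no space after C-style casts, and arguments packed up to an 80-column limit with 8-column continuations — roughly correspond to a fragment like the following. The exact option values are an assumption until patch 2/3; only the authoritative file governs.

```yaml
# Sketch of options implied by the diff above; values are guesses,
# the authoritative file is src/box/.clang-format from patch 2/3.
BasedOnStyle: LLVM
IndentWidth: 8
ContinuationIndentWidth: 8
UseTab: Always
ColumnLimit: 80
AlwaysBreakAfterReturnType: All   # 'bool' and 'generic_index_depends_on_pk' on separate lines
SpaceAfterCStyleCast: false       # '(enum iterator_type)type'
BreakBeforeBraces: Custom
BraceWrapping:
  SplitEmptyFunction: false       # stub bodies become '{}'
```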
diff --git a/src/box/index.h b/src/box/index.h
index 8614802..70bf2e5 100644
--- a/src/box/index.h
+++ b/src/box/index.h
@@ -124,7 +124,7 @@ box_index_bsize(uint32_t space_id, uint32_t index_id);
*/
int
box_index_random(uint32_t space_id, uint32_t index_id, uint32_t rnd,
- box_tuple_t **result);
+ box_tuple_t **result);
/**
* Get a tuple from index by the key.
@@ -192,8 +192,8 @@ box_index_max(uint32_t space_id, uint32_t index_id, const char *key,
* { iterator = type }) \endcode
*/
ssize_t
-box_index_count(uint32_t space_id, uint32_t index_id, int type,
- const char *key, const char *key_end);
+box_index_count(uint32_t space_id, uint32_t index_id, int type, const char *key,
+ const char *key_end);
/**
* Extract key from tuple according to key definition of given
@@ -206,8 +206,8 @@ box_index_count(uint32_t space_id, uint32_t index_id, int type,
* @retval NULL Memory Allocation error
*/
char *
-box_tuple_extract_key(box_tuple_t *tuple, uint32_t space_id,
- uint32_t index_id, uint32_t *key_size);
+box_tuple_extract_key(box_tuple_t *tuple, uint32_t space_id, uint32_t index_id,
+ uint32_t *key_size);
/** \endcond public */
@@ -221,8 +221,7 @@ box_tuple_extract_key(box_tuple_t *tuple, uint32_t space_id,
* \retval >=0 on success
*/
int
-box_index_stat(uint32_t space_id, uint32_t index_id,
- struct info_handler *info);
+box_index_stat(uint32_t space_id, uint32_t index_id, struct info_handler *info);
/**
* Trigger index compaction (index:compact())
@@ -293,8 +292,8 @@ struct snapshot_iterator {
* Returns a pointer to the tuple data and its
* size or NULL if EOF.
*/
- int (*next)(struct snapshot_iterator *,
- const char **data, uint32_t *size);
+ int (*next)(struct snapshot_iterator *, const char **data,
+ uint32_t *size);
/**
* Destroy the iterator.
*/
@@ -403,22 +402,23 @@ struct index_vtab {
ssize_t (*size)(struct index *);
ssize_t (*bsize)(struct index *);
- int (*min)(struct index *index, const char *key,
- uint32_t part_count, struct tuple **result);
- int (*max)(struct index *index, const char *key,
- uint32_t part_count, struct tuple **result);
+ int (*min)(struct index *index, const char *key, uint32_t part_count,
+ struct tuple **result);
+ int (*max)(struct index *index, const char *key, uint32_t part_count,
+ struct tuple **result);
int (*random)(struct index *index, uint32_t rnd, struct tuple **result);
ssize_t (*count)(struct index *index, enum iterator_type type,
const char *key, uint32_t part_count);
- int (*get)(struct index *index, const char *key,
- uint32_t part_count, struct tuple **result);
+ int (*get)(struct index *index, const char *key, uint32_t part_count,
+ struct tuple **result);
int (*replace)(struct index *index, struct tuple *old_tuple,
struct tuple *new_tuple, enum dup_replace_mode mode,
struct tuple **result);
/** Create an index iterator. */
struct iterator *(*create_iterator)(struct index *index,
- enum iterator_type type,
- const char *key, uint32_t part_count);
+ enum iterator_type type,
+ const char *key,
+ uint32_t part_count);
/**
* Create an ALL iterator with personal read view so further
* index modifications will not affect the iteration results.
@@ -475,8 +475,8 @@ replace_check_dup(struct tuple *old_tuple, struct tuple *dup_tuple,
* dup_replace_mode is DUP_REPLACE, and
* a tuple with the same key is not found.
*/
- return old_tuple ?
- ER_CANT_UPDATE_PRIMARY_KEY : ER_TUPLE_NOT_FOUND;
+ return old_tuple ? ER_CANT_UPDATE_PRIMARY_KEY :
+ ER_TUPLE_NOT_FOUND;
}
} else { /* dup_tuple != NULL */
if (dup_tuple != old_tuple &&
@@ -589,15 +589,15 @@ index_bsize(struct index *index)
}
static inline int
-index_min(struct index *index, const char *key,
- uint32_t part_count, struct tuple **result)
+index_min(struct index *index, const char *key, uint32_t part_count,
+ struct tuple **result)
{
return index->vtab->min(index, key, part_count, result);
}
static inline int
-index_max(struct index *index, const char *key,
- uint32_t part_count, struct tuple **result)
+index_max(struct index *index, const char *key, uint32_t part_count,
+ struct tuple **result)
{
return index->vtab->max(index, key, part_count, result);
}
@@ -609,15 +609,15 @@ index_random(struct index *index, uint32_t rnd, struct tuple **result)
}
static inline ssize_t
-index_count(struct index *index, enum iterator_type type,
- const char *key, uint32_t part_count)
+index_count(struct index *index, enum iterator_type type, const char *key,
+ uint32_t part_count)
{
return index->vtab->count(index, type, key, part_count);
}
static inline int
-index_get(struct index *index, const char *key,
- uint32_t part_count, struct tuple **result)
+index_get(struct index *index, const char *key, uint32_t part_count,
+ struct tuple **result)
{
return index->vtab->get(index, key, part_count, result);
}
@@ -688,35 +688,57 @@ index_end_build(struct index *index)
/*
* Virtual method stubs.
*/
-void generic_index_commit_create(struct index *, int64_t);
-void generic_index_abort_create(struct index *);
-void generic_index_commit_modify(struct index *, int64_t);
-void generic_index_commit_drop(struct index *, int64_t);
-void generic_index_update_def(struct index *);
-bool generic_index_depends_on_pk(struct index *);
-bool generic_index_def_change_requires_rebuild(struct index *,
- const struct index_def *);
-ssize_t generic_index_bsize(struct index *);
-ssize_t generic_index_size(struct index *);
-int generic_index_min(struct index *, const char *, uint32_t, struct tuple **);
-int generic_index_max(struct index *, const char *, uint32_t, struct tuple **);
-int generic_index_random(struct index *, uint32_t, struct tuple **);
-ssize_t generic_index_count(struct index *, enum iterator_type,
- const char *, uint32_t);
-int generic_index_get(struct index *, const char *, uint32_t, struct tuple **);
-int generic_index_replace(struct index *, struct tuple *, struct tuple *,
- enum dup_replace_mode, struct tuple **);
-struct snapshot_iterator *generic_index_create_snapshot_iterator(struct index *);
-void generic_index_stat(struct index *, struct info_handler *);
-void generic_index_compact(struct index *);
-void generic_index_reset_stat(struct index *);
-void generic_index_begin_build(struct index *);
-int generic_index_reserve(struct index *, uint32_t);
+void
+generic_index_commit_create(struct index *, int64_t);
+void
+generic_index_abort_create(struct index *);
+void
+generic_index_commit_modify(struct index *, int64_t);
+void
+generic_index_commit_drop(struct index *, int64_t);
+void
+generic_index_update_def(struct index *);
+bool
+generic_index_depends_on_pk(struct index *);
+bool
+generic_index_def_change_requires_rebuild(struct index *,
+ const struct index_def *);
+ssize_t
+generic_index_bsize(struct index *);
+ssize_t
+generic_index_size(struct index *);
+int
+generic_index_min(struct index *, const char *, uint32_t, struct tuple **);
+int
+generic_index_max(struct index *, const char *, uint32_t, struct tuple **);
+int
+generic_index_random(struct index *, uint32_t, struct tuple **);
+ssize_t
+generic_index_count(struct index *, enum iterator_type, const char *, uint32_t);
+int
+generic_index_get(struct index *, const char *, uint32_t, struct tuple **);
+int
+generic_index_replace(struct index *, struct tuple *, struct tuple *,
+ enum dup_replace_mode, struct tuple **);
+struct snapshot_iterator *
+generic_index_create_snapshot_iterator(struct index *);
+void
+generic_index_stat(struct index *, struct info_handler *);
+void
+generic_index_compact(struct index *);
+void
+generic_index_reset_stat(struct index *);
+void
+generic_index_begin_build(struct index *);
+int
+generic_index_reserve(struct index *, uint32_t);
struct iterator *
generic_index_create_iterator(struct index *base, enum iterator_type type,
const char *key, uint32_t part_count);
-int generic_index_build_next(struct index *, struct tuple *);
-void generic_index_end_build(struct index *);
+int
+generic_index_build_next(struct index *, struct tuple *);
+void
+generic_index_end_build(struct index *);
int
disabled_index_build_next(struct index *index, struct tuple *tuple);
int
@@ -739,8 +761,7 @@ public:
struct index_def *index_def, const char *what);
};
-struct IteratorGuard
-{
+struct IteratorGuard {
struct iterator *it;
IteratorGuard(struct iterator *it_arg) : it(it_arg) {}
~IteratorGuard() { iterator_delete(it); }
@@ -755,8 +776,8 @@ static inline struct iterator *
index_create_iterator_xc(struct index *index, enum iterator_type type,
const char *key, uint32_t part_count)
{
- struct iterator *it = index_create_iterator(index, type,
- key, part_count);
+ struct iterator *it =
+ index_create_iterator(index, type, key, part_count);
if (it == NULL)
diag_raise();
return it;
diff --git a/src/box/index_def.c b/src/box/index_def.c
index 9802961..5b1c538 100644
--- a/src/box/index_def.c
+++ b/src/box/index_def.c
@@ -60,7 +60,8 @@ const struct opt_def index_opts_reg[] = {
distance, NULL),
OPT_DEF("range_size", OPT_INT64, struct index_opts, range_size),
OPT_DEF("page_size", OPT_INT64, struct index_opts, page_size),
- OPT_DEF("run_count_per_level", OPT_INT64, struct index_opts, run_count_per_level),
+ OPT_DEF("run_count_per_level", OPT_INT64, struct index_opts,
+ run_count_per_level),
OPT_DEF("run_size_ratio", OPT_FLOAT, struct index_opts, run_size_ratio),
OPT_DEF("bloom_fpr", OPT_FLOAT, struct index_opts, bloom_fpr),
OPT_DEF("lsn", OPT_INT64, struct index_opts, lsn),
@@ -72,14 +73,15 @@ const struct opt_def index_opts_reg[] = {
struct index_def *
index_def_new(uint32_t space_id, uint32_t iid, const char *name,
uint32_t name_len, enum index_type type,
- const struct index_opts *opts,
- struct key_def *key_def, struct key_def *pk_def)
+ const struct index_opts *opts, struct key_def *key_def,
+ struct key_def *pk_def)
{
assert(name_len <= BOX_NAME_MAX);
/* Use calloc to make index_def_delete() safe at all times. */
- struct index_def *def = (struct index_def *) calloc(1, sizeof(*def));
+ struct index_def *def = (struct index_def *)calloc(1, sizeof(*def));
if (def == NULL) {
- diag_set(OutOfMemory, sizeof(*def), "malloc", "struct index_def");
+ diag_set(OutOfMemory, sizeof(*def), "malloc",
+ "struct index_def");
return NULL;
}
def->name = strndup(name, name_len);
@@ -95,7 +97,7 @@ index_def_new(uint32_t space_id, uint32_t iid, const char *name,
def->key_def = key_def_dup(key_def);
if (iid != 0) {
def->cmp_def = key_def_merge(key_def, pk_def);
- if (! opts->is_unique) {
+ if (!opts->is_unique) {
def->cmp_def->unique_part_count =
def->cmp_def->part_count;
} else {
@@ -121,7 +123,7 @@ index_def_new(uint32_t space_id, uint32_t iid, const char *name,
struct index_def *
index_def_dup(const struct index_def *def)
{
- struct index_def *dup = (struct index_def *) malloc(sizeof(*dup));
+ struct index_def *dup = (struct index_def *)malloc(sizeof(*dup));
if (dup == NULL) {
diag_set(OutOfMemory, sizeof(*dup), "malloc",
"struct index_def");
@@ -180,7 +182,7 @@ index_stat_dup(const struct index_stat *src)
{
size_t size = index_stat_sizeof(src->samples, src->sample_count,
src->sample_field_count);
- struct index_stat *dup = (struct index_stat *) malloc(size);
+ struct index_stat *dup = (struct index_stat *)malloc(size);
if (dup == NULL) {
diag_set(OutOfMemory, size, "malloc", "index stat");
return NULL;
@@ -188,21 +190,21 @@ index_stat_dup(const struct index_stat *src)
memcpy(dup, src, size);
uint32_t array_size = src->sample_field_count * sizeof(uint32_t);
uint32_t stat1_offset = sizeof(struct index_stat);
- char *pos = (char *) dup + stat1_offset;
- dup->tuple_stat1 = (uint32_t *) pos;
+ char *pos = (char *)dup + stat1_offset;
+ dup->tuple_stat1 = (uint32_t *)pos;
pos += array_size + sizeof(uint32_t);
- dup->tuple_log_est = (log_est_t *) pos;
+ dup->tuple_log_est = (log_est_t *)pos;
pos += array_size + sizeof(uint32_t);
- dup->avg_eq = (uint32_t *) pos;
+ dup->avg_eq = (uint32_t *)pos;
pos += array_size;
- dup->samples = (struct index_sample *) pos;
+ dup->samples = (struct index_sample *)pos;
pos += src->sample_count * sizeof(struct index_sample);
for (uint32_t i = 0; i < src->sample_count; ++i) {
- dup->samples[i].eq = (uint32_t *) pos;
+ dup->samples[i].eq = (uint32_t *)pos;
pos += array_size;
- dup->samples[i].lt = (uint32_t *) pos;
+ dup->samples[i].lt = (uint32_t *)pos;
pos += array_size;
- dup->samples[i].dlt = (uint32_t *) pos;
+ dup->samples[i].dlt = (uint32_t *)pos;
pos += array_size;
dup->samples[i].sample_key = pos;
pos += dup->samples[i].key_size;
@@ -240,7 +242,7 @@ index_def_cmp(const struct index_def *key1, const struct index_def *key2)
if (strcmp(key1->name, key2->name))
return strcmp(key1->name, key2->name);
if (key1->type != key2->type)
- return (int) key1->type < (int) key2->type ? -1 : 1;
+ return (int)key1->type < (int)key2->type ? -1 : 1;
if (index_opts_cmp(&key1->opts, &key2->opts))
return index_opts_cmp(&key1->opts, &key2->opts);
@@ -256,9 +258,8 @@ index_def_to_key_def(struct rlist *index_defs, int *size)
rlist_foreach_entry(index_def, index_defs, link)
key_count++;
size_t bsize;
- struct key_def **keys =
- region_alloc_array(&fiber()->gc, typeof(keys[0]), key_count,
- &bsize);
+ struct key_def **keys = region_alloc_array(
+ &fiber()->gc, typeof(keys[0]), key_count, &bsize);
if (keys == NULL) {
diag_set(OutOfMemory, bsize, "region_alloc_array", "keys");
return NULL;
@@ -301,12 +302,13 @@ index_def_is_valid(struct index_def *index_def, const char *space_name)
}
if (index_def->iid == 0 && index_def->key_def->for_func_index) {
diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
- space_name, "primary key can not use a function");
+ space_name, "primary key can not use a function");
return false;
}
for (uint32_t i = 0; i < index_def->key_def->part_count; i++) {
assert(index_def->key_def->parts[i].type < field_type_MAX);
- if (index_def->key_def->parts[i].fieldno > BOX_INDEX_FIELD_MAX) {
+ if (index_def->key_def->parts[i].fieldno >
+ BOX_INDEX_FIELD_MAX) {
diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
space_name, "field no is too big");
return false;
diff --git a/src/box/index_def.h b/src/box/index_def.h
index d928b23..a0088b9 100644
--- a/src/box/index_def.h
+++ b/src/box/index_def.h
@@ -41,16 +41,16 @@ extern "C" {
enum index_type {
HASH = 0, /* HASH Index */
- TREE, /* TREE Index */
- BITSET, /* BITSET Index */
- RTREE, /* R-Tree Index */
+ TREE, /* TREE Index */
+ BITSET, /* BITSET Index */
+ RTREE, /* R-Tree Index */
index_type_MAX,
};
extern const char *index_type_strs[];
enum rtree_index_distance_type {
- /* Euclid distance, sqrt(dx*dx + dy*dy) */
+ /* Euclid distance, sqrt(dx*dx + dy*dy) */
RTREE_INDEX_DISTANCE_TYPE_EUCLID,
/* Manhattan distance, fabs(dx) + fabs(dy) */
RTREE_INDEX_DISTANCE_TYPE_MANHATTAN,
@@ -203,8 +203,8 @@ index_opts_cmp(const struct index_opts *o1, const struct index_opts *o2)
if (o1->page_size != o2->page_size)
return o1->page_size < o2->page_size ? -1 : 1;
if (o1->run_count_per_level != o2->run_count_per_level)
- return o1->run_count_per_level < o2->run_count_per_level ?
- -1 : 1;
+ return o1->run_count_per_level < o2->run_count_per_level ? -1 :
+ 1;
if (o1->run_size_ratio != o2->run_size_ratio)
return o1->run_size_ratio < o2->run_size_ratio ? -1 : 1;
if (o1->bloom_fpr != o2->bloom_fpr)
@@ -310,8 +310,8 @@ index_def_update_optionality(struct index_def *def, uint32_t min_field_count)
static inline void
index_def_set_func(struct index_def *def, struct func *func)
{
- assert(def->opts.func_id > 0 &&
- def->key_def->for_func_index && def->cmp_def->for_func_index);
+ assert(def->opts.func_id > 0 && def->key_def->for_func_index &&
+ def->cmp_def->for_func_index);
/*
* def->key_def is used in key_list module to build a key
* a key for given tuple.
@@ -368,8 +368,8 @@ index_def_list_add(struct rlist *index_def_list, struct index_def *index_def)
struct index_def *
index_def_new(uint32_t space_id, uint32_t iid, const char *name,
uint32_t name_len, enum index_type type,
- const struct index_opts *opts,
- struct key_def *key_def, struct key_def *pk_def);
+ const struct index_opts *opts, struct key_def *key_def,
+ struct key_def *pk_def);
/**
* Create an array (on a region) of key_defs from list of index
@@ -415,7 +415,7 @@ index_def_dup_xc(const struct index_def *def)
static inline void
index_def_check_xc(struct index_def *index_def, const char *space_name)
{
- if (! index_def_is_valid(index_def, space_name))
+ if (!index_def_is_valid(index_def, space_name))
diag_raise();
}
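[Editorial note] The cover letter's first two follow-ups (a cmake rule to run the formatter, and a CI job to enforce it) could take roughly the shape below. Target names and the glob are hypothetical, mirroring the cover letter's `find ... | grep -v sql | xargs clang-format -i` invocation; note that `--dry-run`/`-Werror` require clang-format >= 10, so the hoped-for clang 8 compatibility would not extend to the check target.

```cmake
# Hypothetical targets, not part of this patch set.
file(GLOB_RECURSE FORMAT_SOURCES
     ${CMAKE_SOURCE_DIR}/src/box/*.c
     ${CMAKE_SOURCE_DIR}/src/box/*.cc
     ${CMAKE_SOURCE_DIR}/src/box/*.h)
list(FILTER FORMAT_SOURCES EXCLUDE REGEX "/sql/")

# Reformat in place, like the cover letter's xargs invocation.
add_custom_target(format
  COMMAND clang-format -i ${FORMAT_SOURCES}
  COMMENT "Reformat src/box in place")

# clang-format >= 10 only: exits non-zero on violations, suitable for CI.
add_custom_target(format-check
  COMMAND clang-format --dry-run -Werror ${FORMAT_SOURCES}
  COMMENT "Fail if src/box is not clang-format clean")
```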
diff --git a/src/box/iproto.cc b/src/box/iproto.cc
index b8f65e5..d31c58c 100644
--- a/src/box/iproto.cc
+++ b/src/box/iproto.cc
@@ -56,7 +56,7 @@
#include "tuple_convert.h"
#include "session.h"
#include "xrow.h"
-#include "schema.h" /* schema_version */
+#include "schema.h" /* schema_version */
#include "replication.h" /* instance_uuid */
#include "iproto_constants.h"
#include "rmean.h"
@@ -142,7 +142,7 @@ iproto_bound_address(void)
{
if (iproto_bound_address_len == 0)
return NULL;
- return sio_strfaddr((struct sockaddr *) &iproto_bound_address_storage,
+ return sio_strfaddr((struct sockaddr *)&iproto_bound_address_storage,
iproto_bound_address_len);
}
@@ -180,8 +180,7 @@ iproto_reset_input(struct ibuf *ibuf)
* from all connections are queued into a single queue
* and processed in FIFO order.
*/
-struct iproto_msg
-{
+struct iproto_msg {
struct cmsg base;
struct iproto_connection *connection;
@@ -339,11 +338,8 @@ iproto_process_push(struct cmsg *m);
static void
tx_end_push(struct cmsg *m);
-static const struct cmsg_hop push_route[] = {
- { iproto_process_push, &tx_pipe },
- { tx_end_push, NULL }
-};
-
+static const struct cmsg_hop push_route[] = { { iproto_process_push, &tx_pipe },
+ { tx_end_push, NULL } };
/* }}} */
@@ -408,8 +404,7 @@ enum iproto_connection_state {
* messages are
* discarded
*/
-struct iproto_connection
-{
+struct iproto_connection {
/**
* Two rotating buffers for input. Input is first read into
* ibuf[0]. As soon as it buffer becomes full, the buffers are
@@ -522,8 +517,8 @@ struct iproto_connection
*/
struct {
alignas(CACHELINE_SIZE)
- /** Pointer to the current output buffer. */
- struct obuf *p_obuf;
+ /** Pointer to the current output buffer. */
+ struct obuf *p_obuf;
/** True if Kharon is in use/travelling. */
bool is_push_sent;
/**
@@ -547,14 +542,14 @@ static inline bool
iproto_check_msg_max(void)
{
size_t request_count = mempool_count(&iproto_msg_pool);
- return request_count > (size_t) iproto_msg_max;
+ return request_count > (size_t)iproto_msg_max;
}
static struct iproto_msg *
iproto_msg_new(struct iproto_connection *con)
{
struct iproto_msg *msg =
- (struct iproto_msg *) mempool_alloc(&iproto_msg_pool);
+ (struct iproto_msg *)mempool_alloc(&iproto_msg_pool);
ERROR_INJECT(ERRINJ_TESTING, {
mempool_free(&iproto_msg_pool, msg);
msg = NULL;
@@ -562,7 +557,8 @@ iproto_msg_new(struct iproto_connection *con)
if (msg == NULL) {
diag_set(OutOfMemory, sizeof(*msg), "mempool_alloc", "msg");
say_warn("can not allocate memory for a new message, "
- "connection %s", sio_socketname(con->input.fd));
+ "connection %s",
+ sio_socketname(con->input.fd));
return NULL;
}
msg->connection = con;
@@ -588,8 +584,7 @@ iproto_msg_new(struct iproto_connection *con)
static inline bool
iproto_connection_is_idle(struct iproto_connection *con)
{
- return con->long_poll_count == 0 &&
- ibuf_used(&con->ibuf[0]) == 0 &&
+ return con->long_poll_count == 0 && ibuf_used(&con->ibuf[0]) == 0 &&
ibuf_used(&con->ibuf[1]) == 0;
}
@@ -821,19 +816,18 @@ iproto_enqueue_batch(struct iproto_connection *con, struct ibuf *in)
/* Read request length. */
if (mp_typeof(*pos) != MP_UINT) {
errmsg = "packet length";
-err_msgpack:
+ err_msgpack:
cpipe_flush_input(&tx_pipe);
- diag_set(ClientError, ER_INVALID_MSGPACK,
- errmsg);
+ diag_set(ClientError, ER_INVALID_MSGPACK, errmsg);
return -1;
}
if (mp_check_uint(pos, in->wpos) >= 0)
break;
uint64_t len = mp_decode_uint(&pos);
if (len > IPROTO_PACKET_SIZE_MAX) {
- errmsg = tt_sprintf("too big packet size in the "\
+ errmsg = tt_sprintf("too big packet size in the "
"header: %llu",
- (unsigned long long) len);
+ (unsigned long long)len);
goto err_msgpack;
}
const char *reqend = pos + len;
@@ -862,7 +856,7 @@ err_msgpack:
n_requests++;
/* Request is parsed */
assert(reqend > reqstart);
- assert(con->parse_size >= (size_t) (reqend - reqstart));
+ assert(con->parse_size >= (size_t)(reqend - reqstart));
con->parse_size -= reqend - reqstart;
}
if (stop_input) {
@@ -908,7 +902,7 @@ err_msgpack:
static void
iproto_connection_resume(struct iproto_connection *con)
{
- assert(! iproto_check_msg_max());
+ assert(!iproto_check_msg_max());
rlist_del(&con->in_stop_list);
/*
* Enqueue_batch() stops the connection again, if the
@@ -939,10 +933,9 @@ iproto_resume(void)
* Shift from list head to ensure strict FIFO
* (fairness) for resumed connections.
*/
- struct iproto_connection *con =
- rlist_first_entry(&stopped_connections,
- struct iproto_connection,
- in_stop_list);
+ struct iproto_connection *con = rlist_first_entry(
+ &stopped_connections, struct iproto_connection,
+ in_stop_list);
iproto_connection_resume(con);
}
}
@@ -952,7 +945,7 @@ iproto_connection_on_input(ev_loop *loop, struct ev_io *watcher,
int /* revents */)
{
struct iproto_connection *con =
- (struct iproto_connection *) watcher->data;
+ (struct iproto_connection *)watcher->data;
int fd = con->input.fd;
assert(fd >= 0);
assert(rlist_empty(&con->in_stop_list));
@@ -976,13 +969,13 @@ iproto_connection_on_input(ev_loop *loop, struct ev_io *watcher,
}
/* Read input. */
int nrd = sio_read(fd, in->wpos, ibuf_unused(in));
- if (nrd < 0) { /* Socket is not ready. */
- if (! sio_wouldblock(errno))
+ if (nrd < 0) { /* Socket is not ready. */
+ if (!sio_wouldblock(errno))
diag_raise();
ev_io_start(loop, &con->input);
return;
}
- if (nrd == 0) { /* EOF */
+ if (nrd == 0) { /* EOF */
iproto_connection_close(con);
return;
}
@@ -1030,7 +1023,7 @@ iproto_flush(struct iproto_connection *con)
return 1;
}
assert(begin->used < end->used);
- struct iovec iov[SMALL_OBUF_IOV_MAX+1];
+ struct iovec iov[SMALL_OBUF_IOV_MAX + 1];
struct iovec *src = obuf->iov;
int iovcnt = end->pos - begin->pos + 1;
/*
@@ -1040,7 +1033,7 @@ iproto_flush(struct iproto_connection *con)
memcpy(iov, src + begin->pos, iovcnt * sizeof(struct iovec));
sio_add_to_iov(iov, -begin->iov_len);
/* *Overwrite* iov_len of the last pos as it may be garbage. */
- iov[iovcnt-1].iov_len = end->iov_len - begin->iov_len * (iovcnt == 1);
+ iov[iovcnt - 1].iov_len = end->iov_len - begin->iov_len * (iovcnt == 1);
ssize_t nwr = sio_writev(fd, iov, iovcnt);
@@ -1054,11 +1047,12 @@ iproto_flush(struct iproto_connection *con)
size_t offset = 0;
int advance = 0;
advance = sio_move_iov(iov, nwr, &offset);
- begin->used += nwr; /* advance write position */
- begin->iov_len = advance == 0 ? begin->iov_len + offset: offset;
+ begin->used += nwr; /* advance write position */
+ begin->iov_len = advance == 0 ? begin->iov_len + offset :
+ offset;
begin->pos += advance;
assert(begin->pos <= end->pos);
- } else if (nwr < 0 && ! sio_wouldblock(errno)) {
+ } else if (nwr < 0 && !sio_wouldblock(errno)) {
diag_raise();
}
return -1;
@@ -1068,7 +1062,8 @@ static void
iproto_connection_on_output(ev_loop *loop, struct ev_io *watcher,
int /* revents */)
{
- struct iproto_connection *con = (struct iproto_connection *) watcher->data;
+ struct iproto_connection *con =
+ (struct iproto_connection *)watcher->data;
try {
int rc;
@@ -1077,7 +1072,7 @@ iproto_connection_on_output(ev_loop *loop, struct ev_io *watcher,
ev_io_start(loop, &con->output);
return;
}
- if (! ev_is_active(&con->input) &&
+ if (!ev_is_active(&con->input) &&
rlist_empty(&con->in_stop_list)) {
ev_feed_event(loop, &con->input, EV_READ);
}
@@ -1093,8 +1088,9 @@ iproto_connection_on_output(ev_loop *loop, struct ev_io *watcher,
static struct iproto_connection *
iproto_connection_new(int fd)
{
- struct iproto_connection *con = (struct iproto_connection *)
- mempool_alloc(&iproto_connection_pool);
+ struct iproto_connection *con =
+ (struct iproto_connection *)mempool_alloc(
+ &iproto_connection_pool);
if (con == NULL) {
diag_set(OutOfMemory, sizeof(*con), "mempool_alloc", "con");
return NULL;
@@ -1140,10 +1136,8 @@ iproto_connection_delete(struct iproto_connection *con)
*/
ibuf_destroy(&con->ibuf[0]);
ibuf_destroy(&con->ibuf[1]);
- assert(con->obuf[0].pos == 0 &&
- con->obuf[0].iov[0].iov_base == NULL);
- assert(con->obuf[1].pos == 0 &&
- con->obuf[1].iov[0].iov_base == NULL);
+ assert(con->obuf[0].pos == 0 && con->obuf[0].iov[0].iov_base == NULL);
+ assert(con->obuf[1].pos == 0 && con->obuf[1].iov[0].iov_base == NULL);
mempool_free(&iproto_connection_pool, con);
}
@@ -1213,20 +1207,20 @@ static const struct cmsg_hop sql_route[] = {
};
static const struct cmsg_hop *dml_route[IPROTO_TYPE_STAT_MAX] = {
- NULL, /* IPROTO_OK */
- select_route, /* IPROTO_SELECT */
- process1_route, /* IPROTO_INSERT */
- process1_route, /* IPROTO_REPLACE */
- process1_route, /* IPROTO_UPDATE */
- process1_route, /* IPROTO_DELETE */
- call_route, /* IPROTO_CALL_16 */
- misc_route, /* IPROTO_AUTH */
- call_route, /* IPROTO_EVAL */
- process1_route, /* IPROTO_UPSERT */
- call_route, /* IPROTO_CALL */
- sql_route, /* IPROTO_EXECUTE */
- NULL, /* IPROTO_NOP */
- sql_route, /* IPROTO_PREPARE */
+ NULL, /* IPROTO_OK */
+ select_route, /* IPROTO_SELECT */
+ process1_route, /* IPROTO_INSERT */
+ process1_route, /* IPROTO_REPLACE */
+ process1_route, /* IPROTO_UPDATE */
+ process1_route, /* IPROTO_DELETE */
+ call_route, /* IPROTO_CALL_16 */
+ misc_route, /* IPROTO_AUTH */
+ call_route, /* IPROTO_EVAL */
+ process1_route, /* IPROTO_UPSERT */
+ call_route, /* IPROTO_CALL */
+ sql_route, /* IPROTO_EXECUTE */
+ NULL, /* IPROTO_NOP */
+ sql_route, /* IPROTO_PREPARE */
};
static const struct cmsg_hop join_route[] = {
@@ -1271,7 +1265,7 @@ iproto_msg_decode(struct iproto_msg *msg, const char **pos, const char *reqend,
if (xrow_decode_dml(&msg->header, &msg->dml,
dml_request_key_map(type)))
goto error;
- assert(type < sizeof(dml_route)/sizeof(*dml_route));
+ assert(type < sizeof(dml_route) / sizeof(*dml_route));
cmsg_init(&msg->base, dml_route[type]);
break;
case IPROTO_CALL_16:
@@ -1310,8 +1304,7 @@ iproto_msg_decode(struct iproto_msg *msg, const char **pos, const char *reqend,
cmsg_init(&msg->base, misc_route);
break;
default:
- diag_set(ClientError, ER_UNKNOWN_REQUEST_TYPE,
- (uint32_t) type);
+ diag_set(ClientError, ER_UNKNOWN_REQUEST_TYPE, (uint32_t)type);
goto error;
}
return;
@@ -1354,7 +1347,7 @@ tx_process_disconnect(struct cmsg *m)
container_of(m, struct iproto_connection, disconnect_msg);
if (con->session != NULL) {
session_close(con->session);
- if (! rlist_empty(&session_on_disconnect)) {
+ if (!rlist_empty(&session_on_disconnect)) {
tx_fiber_init(con->session, 0);
session_run_on_disconnect_triggers(con->session);
}
@@ -1403,7 +1396,6 @@ net_finish_destroy(struct cmsg *m)
iproto_connection_delete(con);
}
-
static int
tx_check_schema(uint32_t new_schema_version)
{
@@ -1418,8 +1410,8 @@ tx_check_schema(uint32_t new_schema_version)
static void
net_discard_input(struct cmsg *m)
{
- struct iproto_msg *msg = container_of(m, struct iproto_msg,
- discard_input);
+ struct iproto_msg *msg =
+ container_of(m, struct iproto_msg, discard_input);
struct iproto_connection *con = msg->connection;
msg->p_ibuf->rpos += msg->len;
msg->len = 0;
@@ -1481,7 +1473,7 @@ tx_accept_wpos(struct iproto_connection *con, const struct iproto_wpos *wpos)
static inline struct iproto_msg *
tx_accept_msg(struct cmsg *m)
{
- struct iproto_msg *msg = (struct iproto_msg *) m;
+ struct iproto_msg *msg = (struct iproto_msg *)m;
tx_accept_wpos(msg->connection, &msg->wpos);
tx_fiber_init(msg->connection->session, msg->header.sync);
return msg;
@@ -1509,8 +1501,8 @@ tx_reply_iproto_error(struct cmsg *m)
{
struct iproto_msg *msg = tx_accept_msg(m);
struct obuf *out = msg->connection->tx.p_obuf;
- iproto_reply_error(out, diag_last_error(&msg->diag),
- msg->header.sync, ::schema_version);
+ iproto_reply_error(out, diag_last_error(&msg->diag), msg->header.sync,
+ ::schema_version);
iproto_wpos_create(&msg->wpos, out);
}
@@ -1564,9 +1556,8 @@ tx_process_select(struct cmsg *m)
goto error;
tx_inject_delay();
- rc = box_select(req->space_id, req->index_id,
- req->iterator, req->offset, req->limit,
- req->key, req->key_end, &port);
+ rc = box_select(req->space_id, req->index_id, req->iterator,
+ req->offset, req->limit, req->key, req->key_end, &port);
if (rc < 0)
goto error;
@@ -1585,8 +1576,8 @@ tx_process_select(struct cmsg *m)
obuf_rollback_to_svp(out, &svp);
goto error;
}
- iproto_reply_select(out, &svp, msg->header.sync,
- ::schema_version, count);
+ iproto_reply_select(out, &svp, msg->header.sync, ::schema_version,
+ count);
iproto_wpos_create(&msg->wpos, out);
return;
error:
@@ -1675,8 +1666,8 @@ tx_process_call(struct cmsg *m)
goto error;
}
- iproto_reply_select(out, &svp, msg->header.sync,
- ::schema_version, count);
+ iproto_reply_select(out, &svp, msg->header.sync, ::schema_version,
+ count);
iproto_wpos_create(&msg->wpos, out);
return;
error:
@@ -1868,7 +1859,7 @@ tx_process_replication(struct cmsg *m)
static void
net_send_msg(struct cmsg *m)
{
- struct iproto_msg *msg = (struct iproto_msg *) m;
+ struct iproto_msg *msg = (struct iproto_msg *)m;
struct iproto_connection *con = msg->connection;
if (msg->len != 0) {
@@ -1882,7 +1873,7 @@ net_send_msg(struct cmsg *m)
con->wend = msg->wpos;
if (evio_has_fd(&con->output)) {
- if (! ev_is_active(&con->output))
+ if (!ev_is_active(&con->output))
ev_feed_event(con->loop, &con->output, EV_WRITE);
} else if (iproto_connection_is_idle(con)) {
iproto_connection_close(con);
@@ -1897,7 +1888,7 @@ net_send_msg(struct cmsg *m)
static void
net_send_error(struct cmsg *m)
{
- struct iproto_msg *msg = (struct iproto_msg *) m;
+ struct iproto_msg *msg = (struct iproto_msg *)m;
/* Recycle the exception. */
diag_move(&msg->diag, &fiber()->diag);
net_send_msg(m);
@@ -1906,13 +1897,13 @@ net_send_error(struct cmsg *m)
static void
net_end_join(struct cmsg *m)
{
- struct iproto_msg *msg = (struct iproto_msg *) m;
+ struct iproto_msg *msg = (struct iproto_msg *)m;
struct iproto_connection *con = msg->connection;
msg->p_ibuf->rpos += msg->len;
iproto_msg_delete(msg);
- assert(! ev_is_active(&con->input));
+ assert(!ev_is_active(&con->input));
/*
* Enqueue any messages if they are in the readahead
* queue. Will simply start input otherwise.
@@ -1924,13 +1915,13 @@ net_end_join(struct cmsg *m)
static void
net_end_subscribe(struct cmsg *m)
{
- struct iproto_msg *msg = (struct iproto_msg *) m;
+ struct iproto_msg *msg = (struct iproto_msg *)m;
struct iproto_connection *con = msg->connection;
msg->p_ibuf->rpos += msg->len;
iproto_msg_delete(msg);
- assert(! ev_is_active(&con->input));
+ assert(!ev_is_active(&con->input));
iproto_connection_close(con);
}
@@ -1943,23 +1934,23 @@ net_end_subscribe(struct cmsg *m)
static void
tx_process_connect(struct cmsg *m)
{
- struct iproto_msg *msg = (struct iproto_msg *) m;
+ struct iproto_msg *msg = (struct iproto_msg *)m;
struct iproto_connection *con = msg->connection;
struct obuf *out = msg->connection->tx.p_obuf;
- try { /* connect. */
+ try { /* connect. */
con->session = session_create(SESSION_TYPE_BINARY);
if (con->session == NULL)
diag_raise();
con->session->meta.connection = con;
tx_fiber_init(con->session, 0);
- char *greeting = (char *) static_alloc(IPROTO_GREETING_SIZE);
+ char *greeting = (char *)static_alloc(IPROTO_GREETING_SIZE);
/* TODO: dirty read from tx thread */
struct tt_uuid uuid = INSTANCE_UUID;
random_bytes(con->salt, IPROTO_SALT_SIZE);
greeting_encode(greeting, tarantool_version_id(), &uuid,
con->salt, IPROTO_SALT_SIZE);
obuf_dup_xc(out, greeting, IPROTO_GREETING_SIZE);
- if (! rlist_empty(&session_on_connect)) {
+ if (!rlist_empty(&session_on_connect)) {
if (session_run_on_connect_triggers(con->session) != 0)
diag_raise();
}
@@ -1977,17 +1968,17 @@ tx_process_connect(struct cmsg *m)
static void
net_send_greeting(struct cmsg *m)
{
- struct iproto_msg *msg = (struct iproto_msg *) m;
+ struct iproto_msg *msg = (struct iproto_msg *)m;
struct iproto_connection *con = msg->connection;
if (msg->close_connection) {
struct obuf *out = msg->wpos.obuf;
- int64_t nwr = sio_writev(con->output.fd, out->iov,
- obuf_iovcnt(out));
+ int64_t nwr =
+ sio_writev(con->output.fd, out->iov, obuf_iovcnt(out));
if (nwr > 0) {
/* Count statistics. */
rmean_collect(rmean_net, IPROTO_SENT, nwr);
- } else if (nwr < 0 && ! sio_wouldblock(errno)) {
+ } else if (nwr < 0 && !sio_wouldblock(errno)) {
diag_log();
}
assert(iproto_connection_is_idle(con));
@@ -2021,8 +2012,8 @@ static int
iproto_on_accept(struct evio_service * /* service */, int fd,
struct sockaddr *addr, socklen_t addrlen)
{
- (void) addr;
- (void) addrlen;
+ (void)addr;
+ (void)addrlen;
struct iproto_msg *msg;
struct iproto_connection *con = iproto_connection_new(fd);
if (con == NULL)
@@ -2051,24 +2042,21 @@ static struct evio_service binary; /* iproto binary listener */
* The network io thread main function:
* begin serving the message bus.
*/
-static int
-net_cord_f(va_list /* ap */)
+static int net_cord_f(va_list /* ap */)
{
mempool_create(&iproto_msg_pool, &cord()->slabc,
sizeof(struct iproto_msg));
mempool_create(&iproto_connection_pool, &cord()->slabc,
sizeof(struct iproto_connection));
- evio_service_init(loop(), &binary, "binary",
- iproto_on_accept, NULL);
-
+ evio_service_init(loop(), &binary, "binary", iproto_on_accept, NULL);
/* Init statistics counter */
rmean_net = rmean_new(rmean_net_strings, IPROTO_LAST);
if (rmean_net == NULL) {
- tnt_raise(OutOfMemory, sizeof(struct rmean),
- "rmean", "struct rmean");
+ tnt_raise(OutOfMemory, sizeof(struct rmean), "rmean",
+ "struct rmean");
}
struct cbus_endpoint endpoint;
@@ -2097,14 +2085,14 @@ int
iproto_session_fd(struct session *session)
{
struct iproto_connection *con =
- (struct iproto_connection *) session->meta.connection;
+ (struct iproto_connection *)session->meta.connection;
return con->output.fd;
}
int64_t
iproto_session_sync(struct session *session)
{
- (void) session;
+ (void)session;
assert(session == fiber()->storage.session);
return fiber()->storage.net.sync;
}
@@ -2114,7 +2102,7 @@ iproto_session_sync(struct session *session)
static void
iproto_process_push(struct cmsg *m)
{
- struct iproto_kharon *kharon = (struct iproto_kharon *) m;
+ struct iproto_kharon *kharon = (struct iproto_kharon *)m;
struct iproto_connection *con =
container_of(kharon, struct iproto_connection, kharon);
con->wend = kharon->wpos;
@@ -2130,18 +2118,18 @@ iproto_process_push(struct cmsg *m)
static void
tx_begin_push(struct iproto_connection *con)
{
- assert(! con->tx.is_push_sent);
+ assert(!con->tx.is_push_sent);
cmsg_init(&con->kharon.base, push_route);
iproto_wpos_create(&con->kharon.wpos, con->tx.p_obuf);
con->tx.is_push_pending = false;
con->tx.is_push_sent = true;
- cpipe_push(&net_pipe, (struct cmsg *) &con->kharon);
+ cpipe_push(&net_pipe, (struct cmsg *)&con->kharon);
}
static void
tx_end_push(struct cmsg *m)
{
- struct iproto_kharon *kharon = (struct iproto_kharon *) m;
+ struct iproto_kharon *kharon = (struct iproto_kharon *)m;
struct iproto_connection *con =
container_of(kharon, struct iproto_connection, kharon);
tx_accept_wpos(con, &kharon->wpos);
@@ -2164,7 +2152,7 @@ static int
iproto_session_push(struct session *session, struct port *port)
{
struct iproto_connection *con =
- (struct iproto_connection *) session->meta.connection;
+ (struct iproto_connection *)session->meta.connection;
struct obuf_svp svp;
if (iproto_prepare_select(con->tx.p_obuf, &svp) != 0)
return -1;
@@ -2174,7 +2162,7 @@ iproto_session_push(struct session *session, struct port *port)
}
iproto_reply_chunk(con->tx.p_obuf, &svp, iproto_session_sync(session),
::schema_version);
- if (! con->tx.is_push_sent)
+ if (!con->tx.is_push_sent)
tx_begin_push(con);
else
con->tx.is_push_pending = true;
@@ -2204,10 +2192,7 @@ iproto_init(void)
}
/** Available iproto configuration changes. */
-enum iproto_cfg_op {
- IPROTO_CFG_MSG_MAX,
- IPROTO_CFG_LISTEN
-};
+enum iproto_cfg_op { IPROTO_CFG_MSG_MAX, IPROTO_CFG_LISTEN };
/**
* Since there is no way to "synchronously" change the
@@ -2215,8 +2200,7 @@ enum iproto_cfg_op {
* message count in flight send a special message to iproto
* thread.
*/
-struct iproto_cfg_msg: public cbus_call_msg
-{
+struct iproto_cfg_msg: public cbus_call_msg {
/** Operation to execute in iproto thread. */
enum iproto_cfg_op op;
union {
@@ -2244,7 +2228,7 @@ iproto_cfg_msg_create(struct iproto_cfg_msg *msg, enum iproto_cfg_op op)
static int
iproto_do_cfg_f(struct cbus_call_msg *m)
{
- struct iproto_cfg_msg *cfg_msg = (struct iproto_cfg_msg *) m;
+ struct iproto_cfg_msg *cfg_msg = (struct iproto_cfg_msg *)m;
int old;
try {
switch (cfg_msg->op) {
@@ -2278,8 +2262,8 @@ iproto_do_cfg_f(struct cbus_call_msg *m)
static inline void
iproto_do_cfg(struct iproto_cfg_msg *msg)
{
- if (cbus_call(&net_pipe, &tx_pipe, msg, iproto_do_cfg_f,
- NULL, TIMEOUT_INFINITY) != 0)
+ if (cbus_call(&net_pipe, &tx_pipe, msg, iproto_do_cfg_f, NULL,
+ TIMEOUT_INFINITY) != 0)
diag_raise();
}
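
[Editor note inline, not part of the patch] The hunks above consistently show three mechanical decisions: no space after a C-style cast (`(uint32_t)type`), `!` glued to its operand, and arguments re-packed up to the 80-column limit with continuation lines aligned under the open paren. The actual rules live in `src/box/.clang-format` from patch 2; the following is only a hedged sketch of the option subset these hunks imply, with option names taken from the clang-format documentation linked in the cover letter:

```
# Sketch of the options implied by the reformatting above; the
# authoritative file is src/box/.clang-format added in patch 2.
BasedOnStyle: LLVM
IndentWidth: 8
UseTab: Always
ColumnLimit: 80
SpaceAfterCStyleCast: false
AlignAfterOpenBracket: Align
```
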
diff --git a/src/box/iproto_constants.h b/src/box/iproto_constants.h
index d3738c7..eb0926b 100644
--- a/src/box/iproto_constants.h
+++ b/src/box/iproto_constants.h
@@ -94,7 +94,7 @@ enum iproto_key {
/* Also request keys. See the comment above. */
IPROTO_EXPR = 0x27, /* EVAL */
- IPROTO_OPS = 0x28, /* UPSERT but not UPDATE ops, because of legacy */
+ IPROTO_OPS = 0x28, /* UPSERT but not UPDATE ops, because of legacy */
IPROTO_BALLOT = 0x29,
IPROTO_TUPLE_META = 0x2a,
IPROTO_OPTIONS = 0x2b,
@@ -152,26 +152,28 @@ enum iproto_ballot_key {
IPROTO_BALLOT_IS_ANON = 0x05,
};
-#define bit(c) (1ULL<<IPROTO_##c)
+#define bit(c) (1ULL << IPROTO_##c)
-#define IPROTO_HEAD_BMAP (bit(REQUEST_TYPE) | bit(SYNC) | bit(REPLICA_ID) |\
- bit(LSN) | bit(SCHEMA_VERSION))
-#define IPROTO_DML_BODY_BMAP (bit(SPACE_ID) | bit(INDEX_ID) | bit(LIMIT) |\
- bit(OFFSET) | bit(ITERATOR) | bit(INDEX_BASE) |\
- bit(KEY) | bit(TUPLE) | bit(OPS) | bit(TUPLE_META))
+#define IPROTO_HEAD_BMAP \
+ (bit(REQUEST_TYPE) | bit(SYNC) | bit(REPLICA_ID) | bit(LSN) | \
+ bit(SCHEMA_VERSION))
+#define IPROTO_DML_BODY_BMAP \
+ (bit(SPACE_ID) | bit(INDEX_ID) | bit(LIMIT) | bit(OFFSET) | \
+ bit(ITERATOR) | bit(INDEX_BASE) | bit(KEY) | bit(TUPLE) | bit(OPS) | \
+ bit(TUPLE_META))
static inline bool
xrow_header_has_key(const char *pos, const char *end)
{
- unsigned char key = pos < end ? *pos : (unsigned char) IPROTO_KEY_MAX;
- return key < IPROTO_KEY_MAX && IPROTO_HEAD_BMAP & (1ULL<<key);
+ unsigned char key = pos < end ? *pos : (unsigned char)IPROTO_KEY_MAX;
+ return key < IPROTO_KEY_MAX && IPROTO_HEAD_BMAP & (1ULL << key);
}
static inline bool
iproto_dml_body_has_key(const char *pos, const char *end)
{
- unsigned char key = pos < end ? *pos : (unsigned char) IPROTO_KEY_MAX;
- return key < IPROTO_KEY_MAX && IPROTO_DML_BODY_BMAP & (1ULL<<key);
+ unsigned char key = pos < end ? *pos : (unsigned char)IPROTO_KEY_MAX;
+ return key < IPROTO_KEY_MAX && IPROTO_DML_BODY_BMAP & (1ULL << key);
}
#undef bit
@@ -319,7 +321,7 @@ static inline bool
iproto_type_is_dml(uint32_t type)
{
return (type >= IPROTO_SELECT && type <= IPROTO_DELETE) ||
- type == IPROTO_UPSERT || type == IPROTO_NOP;
+ type == IPROTO_UPSERT || type == IPROTO_NOP;
}
/**
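
[Editor note inline, not part of the patch] The `IPROTO_HEAD_BMAP`/`bit()` code being re-wrapped here tests key membership with one AND against a 64-bit mask. A self-contained sketch of the same idiom, with illustrative names rather than the real iproto constants:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative key identifiers; the real ones live in enum iproto_key. */
enum demo_key {
	KEY_SYNC = 0x01,
	KEY_LSN = 0x03,
	KEY_TUPLE = 0x21,
	DEMO_KEY_MAX = 0x40,
};

#define bit(c) (1ULL << KEY_##c)
/* Mask of keys allowed in a header, mirroring IPROTO_HEAD_BMAP. */
#define HEAD_BMAP (bit(SYNC) | bit(LSN))

static inline bool
head_has_key(unsigned char key)
{
	/* Bounds check first: shifting a 64-bit value by >= 64 is UB. */
	return key < DEMO_KEY_MAX && (HEAD_BMAP & (1ULL << key)) != 0;
}
#undef bit
```

The `key < MAX` guard is load-bearing, which is why `xrow_header_has_key` clamps an out-of-range byte to `IPROTO_KEY_MAX` before the membership test.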
diff --git a/src/box/iterator_type.c b/src/box/iterator_type.c
index 5d6b55f..c49f021 100644
--- a/src/box/iterator_type.c
+++ b/src/box/iterator_type.c
@@ -47,4 +47,5 @@ const char *iterator_type_strs[] = {
};
static_assert(sizeof(iterator_type_strs) / sizeof(const char *) ==
- iterator_type_MAX, "iterator_type_str constants");
+ iterator_type_MAX,
+ "iterator_type_str constants");
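
[Editor note inline, not part of the patch] The `static_assert` re-wrapped above pins the string table's length to the enum so a forgotten entry fails at compile time rather than at runtime. A standalone sketch of the idiom with illustrative names (not the real `iterator_type` tables):

```c
#include <assert.h> /* static_assert (C11) */
#include <string.h>

/* Illustrative enum; the real one is enum iterator_type. */
enum demo_iter { DEMO_EQ, DEMO_ALL, DEMO_GT, demo_iter_MAX };

/* One string per enumerator, in declaration order. */
static const char *demo_iter_strs[] = { "EQ", "ALL", "GT" };

/* Adding an enumerator without a string breaks the build here. */
static_assert(sizeof(demo_iter_strs) / sizeof(const char *) ==
	      demo_iter_MAX,
	      "demo_iter_strs constants");
```
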
diff --git a/src/box/iterator_type.h b/src/box/iterator_type.h
index c57e614..5a66701 100644
--- a/src/box/iterator_type.h
+++ b/src/box/iterator_type.h
@@ -61,18 +61,19 @@ extern "C" {
*/
enum iterator_type {
/* ITER_EQ must be the first member for request_create */
- ITER_EQ = 0, /* key == x ASC order */
- ITER_REQ = 1, /* key == x DESC order */
- ITER_ALL = 2, /* all tuples */
- ITER_LT = 3, /* key < x */
- ITER_LE = 4, /* key <= x */
- ITER_GE = 5, /* key >= x */
- ITER_GT = 6, /* key > x */
- ITER_BITS_ALL_SET = 7, /* all bits from x are set in key */
- ITER_BITS_ANY_SET = 8, /* at least one x's bit is set */
- ITER_BITS_ALL_NOT_SET = 9, /* all bits are not set */
- ITER_OVERLAPS = 10, /* key overlaps x */
- ITER_NEIGHBOR = 11, /* tuples in distance ascending order from specified point */
+ ITER_EQ = 0, /* key == x ASC order */
+ ITER_REQ = 1, /* key == x DESC order */
+ ITER_ALL = 2, /* all tuples */
+ ITER_LT = 3, /* key < x */
+ ITER_LE = 4, /* key <= x */
+ ITER_GE = 5, /* key >= x */
+ ITER_GT = 6, /* key > x */
+ ITER_BITS_ALL_SET = 7, /* all bits from x are set in key */
+ ITER_BITS_ANY_SET = 8, /* at least one x's bit is set */
+ ITER_BITS_ALL_NOT_SET = 9, /* all bits are not set */
+ ITER_OVERLAPS = 10, /* key overlaps x */
+ ITER_NEIGHBOR =
+ 11, /* tuples in distance ascending order from specified point */
iterator_type_MAX
};
@@ -87,8 +88,8 @@ extern const char *iterator_type_strs[];
static inline int
iterator_direction(enum iterator_type type)
{
- const unsigned reverse =
- (1u << ITER_REQ) | (1u << ITER_LT) | (1u << ITER_LE);
+ const unsigned reverse = (1u << ITER_REQ) | (1u << ITER_LT) |
+ (1u << ITER_LE);
return (reverse & (1u << type)) ? -1 : 1;
}
diff --git a/src/box/journal.c b/src/box/journal.c
index cb320b5..afaf30f 100644
--- a/src/box/journal.c
+++ b/src/box/journal.c
@@ -36,8 +36,7 @@ struct journal *current_journal = NULL;
struct journal_entry *
journal_entry_new(size_t n_rows, struct region *region,
- journal_write_async_f write_async_cb,
- void *complete_data)
+ journal_write_async_f write_async_cb, void *complete_data)
{
struct journal_entry *entry;
@@ -51,7 +50,6 @@ journal_entry_new(size_t n_rows, struct region *region,
return NULL;
}
- journal_entry_create(entry, n_rows, 0, write_async_cb,
- complete_data);
+ journal_entry_create(entry, n_rows, 0, write_async_cb, complete_data);
return entry;
}
diff --git a/src/box/journal.h b/src/box/journal.h
index 5d8d5a7..8d9f347 100644
--- a/src/box/journal.h
+++ b/src/box/journal.h
@@ -88,15 +88,14 @@ struct region;
*/
static inline void
journal_entry_create(struct journal_entry *entry, size_t n_rows,
- size_t approx_len,
- journal_write_async_f write_async_cb,
+ size_t approx_len, journal_write_async_f write_async_cb,
void *complete_data)
{
- entry->write_async_cb = write_async_cb;
- entry->complete_data = complete_data;
- entry->approx_len = approx_len;
- entry->n_rows = n_rows;
- entry->res = -1;
+ entry->write_async_cb = write_async_cb;
+ entry->complete_data = complete_data;
+ entry->approx_len = approx_len;
+ entry->n_rows = n_rows;
+ entry->res = -1;
}
/**
@@ -106,8 +105,7 @@ journal_entry_create(struct journal_entry *entry, size_t n_rows,
*/
struct journal_entry *
journal_entry_new(size_t n_rows, struct region *region,
- journal_write_async_f write_async_cb,
- void *complete_data);
+ journal_write_async_f write_async_cb, void *complete_data);
/**
* An API for an abstract journal for all transactions of this
@@ -120,8 +118,7 @@ struct journal {
struct journal_entry *entry);
/** Synchronous write */
- int (*write)(struct journal *journal,
- struct journal_entry *entry);
+ int (*write)(struct journal *journal, struct journal_entry *entry);
};
/**
@@ -196,8 +193,8 @@ journal_create(struct journal *journal,
int (*write)(struct journal *journal,
struct journal_entry *entry))
{
- journal->write_async = write_async;
- journal->write = write;
+ journal->write_async = write_async;
+ journal->write = write;
}
static inline bool
diff --git a/src/box/key_def.c b/src/box/key_def.c
index a035372..8cc0a29 100644
--- a/src/box/key_def.c
+++ b/src/box/key_def.c
@@ -42,15 +42,13 @@
const char *sort_order_strs[] = { "asc", "desc", "undef" };
-const struct key_part_def key_part_def_default = {
- 0,
- field_type_MAX,
- COLL_NONE,
- false,
- ON_CONFLICT_ACTION_DEFAULT,
- SORT_ORDER_ASC,
- NULL
-};
+const struct key_part_def key_part_def_default = { 0,
+ field_type_MAX,
+ COLL_NONE,
+ false,
+ ON_CONFLICT_ACTION_DEFAULT,
+ SORT_ORDER_ASC,
+ NULL };
static int64_t
part_type_by_name_wrapper(const char *str, uint32_t len)
@@ -58,13 +56,13 @@ part_type_by_name_wrapper(const char *str, uint32_t len)
return field_type_by_name(str, len);
}
-#define PART_OPT_TYPE "type"
-#define PART_OPT_FIELD "field"
-#define PART_OPT_COLLATION "collation"
-#define PART_OPT_NULLABILITY "is_nullable"
+#define PART_OPT_TYPE "type"
+#define PART_OPT_FIELD "field"
+#define PART_OPT_COLLATION "collation"
+#define PART_OPT_NULLABILITY "is_nullable"
#define PART_OPT_NULLABLE_ACTION "nullable_action"
-#define PART_OPT_SORT_ORDER "sort_order"
-#define PART_OPT_PATH "path"
+#define PART_OPT_SORT_ORDER "sort_order"
+#define PART_OPT_PATH "path"
const struct opt_def part_def_reg[] = {
OPT_DEF_ENUM(PART_OPT_TYPE, field_type, struct key_part_def, type,
@@ -177,7 +175,7 @@ key_def_set_part_path(struct key_def *def, uint32_t part_no, const char *path,
*/
int multikey_path_len =
json_path_multikey_offset(path, path_len, TUPLE_INDEX_BASE);
- if ((uint32_t) multikey_path_len == path_len)
+ if ((uint32_t)multikey_path_len == path_len)
return 0;
/*
@@ -192,7 +190,7 @@ key_def_set_part_path(struct key_def *def, uint32_t part_no, const char *path,
*/
def->multikey_path = part->path;
def->multikey_fieldno = part->fieldno;
- def->multikey_path_len = (uint32_t) multikey_path_len;
+ def->multikey_path_len = (uint32_t)multikey_path_len;
def->is_multikey = true;
} else if (def->multikey_fieldno != part->fieldno ||
json_path_cmp(path, multikey_path_len, def->multikey_path,
@@ -278,7 +276,8 @@ key_def_new(const struct key_part_def *parts, uint32_t part_count,
struct coll_id *coll_id = coll_by_id(part->coll_id);
if (coll_id == NULL) {
diag_set(ClientError, ER_WRONG_INDEX_OPTIONS,
- i + 1, "collation was not found by ID");
+ i + 1,
+ "collation was not found by ID");
goto error;
}
coll = coll_id->coll;
@@ -287,8 +286,7 @@ key_def_new(const struct key_part_def *parts, uint32_t part_count,
if (key_def_set_part(def, i, part->fieldno, part->type,
part->nullable_action, coll, part->coll_id,
part->sort_order, part->path, path_len,
- &path_pool, TUPLE_OFFSET_SLOT_NIL,
- 0) != 0)
+ &path_pool, TUPLE_OFFSET_SLOT_NIL, 0) != 0)
goto error;
}
if (for_func_index) {
@@ -297,7 +295,7 @@ key_def_new(const struct key_part_def *parts, uint32_t part_count,
"Functional index", "json paths");
goto error;
}
- if(!key_def_is_sequential(def) || parts->fieldno != 0) {
+ if (!key_def_is_sequential(def) || parts->fieldno != 0) {
diag_set(ClientError, ER_FUNC_INDEX_PARTS,
"key part numbers must be sequential and "
"first part number must be 1");
@@ -386,9 +384,8 @@ box_tuple_compare_with_key(box_tuple_t *tuple_a, const char *key_b,
box_key_def_t *key_def)
{
uint32_t part_count = mp_decode_array(&key_b);
- return tuple_compare_with_key(tuple_a, HINT_NONE, key_b,
- part_count, HINT_NONE, key_def);
-
+ return tuple_compare_with_key(tuple_a, HINT_NONE, key_b, part_count,
+ HINT_NONE, key_def);
}
int
@@ -402,16 +399,19 @@ key_part_cmp(const struct key_part *parts1, uint32_t part_count1,
for (; part1 != end; part1++, part2++) {
if (part1->fieldno != part2->fieldno)
return part1->fieldno < part2->fieldno ? -1 : 1;
- if ((int) part1->type != (int) part2->type)
- return (int) part1->type < (int) part2->type ? -1 : 1;
+ if ((int)part1->type != (int)part2->type)
+ return (int)part1->type < (int)part2->type ? -1 : 1;
if (part1->coll != part2->coll)
- return (uintptr_t) part1->coll <
- (uintptr_t) part2->coll ? -1 : 1;
+ return (uintptr_t)part1->coll < (uintptr_t)part2->coll ?
+ -1 :
+ 1;
if (part1->sort_order != part2->sort_order)
return part1->sort_order < part2->sort_order ? -1 : 1;
if (key_part_is_nullable(part1) != key_part_is_nullable(part2))
return key_part_is_nullable(part1) <
- key_part_is_nullable(part2) ? -1 : 1;
+ key_part_is_nullable(part2) ?
+ -1 :
+ 1;
int rc = json_path_cmp(part1->path, part1->path_len,
part2->path, part2->path_len,
TUPLE_INDEX_BASE);
@@ -429,7 +429,8 @@ key_def_update_optionality(struct key_def *def, uint32_t min_field_count)
struct key_part *part = &def->parts[i];
def->has_optional_parts |=
(min_field_count < part->fieldno + 1 ||
- part->path != NULL) && key_part_is_nullable(part);
+ part->path != NULL) &&
+ key_part_is_nullable(part);
/*
* One optional part is enough to switch to new
* comparators.
@@ -577,7 +578,7 @@ key_def_decode_parts_166(struct key_part_def *parts, uint32_t part_count,
return -1;
}
*part = key_part_def_default;
- part->fieldno = (uint32_t) mp_decode_uint(data);
+ part->fieldno = (uint32_t)mp_decode_uint(data);
if (mp_typeof(**data) != MP_STR) {
diag_set(ClientError, ER_WRONG_INDEX_PARTS,
"field type must be a string");
@@ -594,8 +595,8 @@ key_def_decode_parts_166(struct key_part_def *parts, uint32_t part_count,
return -1;
}
part->is_nullable = (part->fieldno < field_count ?
- fields[part->fieldno].is_nullable :
- key_part_def_default.is_nullable);
+ fields[part->fieldno].is_nullable :
+ key_part_def_default.is_nullable);
part->coll_id = COLL_NONE;
part->path = NULL;
}
@@ -608,8 +609,8 @@ key_def_decode_parts(struct key_part_def *parts, uint32_t part_count,
uint32_t field_count, struct region *region)
{
if (mp_typeof(**data) == MP_ARRAY) {
- return key_def_decode_parts_166(parts, part_count, data,
- fields, field_count);
+ return key_def_decode_parts_166(parts, part_count, data, fields,
+ field_count);
}
for (uint32_t i = 0; i < part_count; i++) {
struct key_part_def *part = &parts[i];
@@ -622,7 +623,7 @@ key_def_decode_parts(struct key_part_def *parts, uint32_t part_count,
int opts_count = mp_decode_map(data);
*part = key_part_def_default;
bool is_action_missing = true;
- uint32_t action_literal_len = strlen("nullable_action");
+ uint32_t action_literal_len = strlen("nullable_action");
for (int j = 0; j < opts_count; ++j) {
if (mp_typeof(**data) != MP_STR) {
diag_set(ClientError, ER_WRONG_INDEX_OPTIONS,
@@ -632,8 +633,8 @@ key_def_decode_parts(struct key_part_def *parts, uint32_t part_count,
}
uint32_t key_len;
const char *key = mp_decode_str(data, &key_len);
- if (opts_parse_key(part, part_def_reg, key, key_len, data,
- ER_WRONG_INDEX_OPTIONS,
+ if (opts_parse_key(part, part_def_reg, key, key_len,
+ data, ER_WRONG_INDEX_OPTIONS,
i + TUPLE_INDEX_BASE, region,
false) != 0)
return -1;
@@ -644,9 +645,9 @@ key_def_decode_parts(struct key_part_def *parts, uint32_t part_count,
is_action_missing = false;
}
if (is_action_missing) {
- part->nullable_action = part->is_nullable ?
- ON_CONFLICT_ACTION_NONE
- : ON_CONFLICT_ACTION_DEFAULT;
+ part->nullable_action =
+ part->is_nullable ? ON_CONFLICT_ACTION_NONE :
+ ON_CONFLICT_ACTION_DEFAULT;
}
if (part->type == field_type_MAX) {
diag_set(ClientError, ER_WRONG_INDEX_OPTIONS,
@@ -657,17 +658,15 @@ key_def_decode_parts(struct key_part_def *parts, uint32_t part_count,
if (part->coll_id != COLL_NONE &&
part->type != FIELD_TYPE_STRING &&
part->type != FIELD_TYPE_SCALAR) {
- diag_set(ClientError, ER_WRONG_INDEX_OPTIONS,
- i + 1,
+ diag_set(ClientError, ER_WRONG_INDEX_OPTIONS, i + 1,
"collation is reasonable only for "
"string and scalar parts");
return -1;
}
- if (!((part->is_nullable && part->nullable_action ==
- ON_CONFLICT_ACTION_NONE)
- || (!part->is_nullable
- && part->nullable_action !=
- ON_CONFLICT_ACTION_NONE))) {
+ if (!((part->is_nullable &&
+ part->nullable_action == ON_CONFLICT_ACTION_NONE) ||
+ (!part->is_nullable &&
+ part->nullable_action != ON_CONFLICT_ACTION_NONE))) {
diag_set(ClientError, ER_WRONG_INDEX_OPTIONS,
i + TUPLE_INDEX_BASE,
"index part: conflicting nullability and "
@@ -707,9 +706,8 @@ key_def_find(const struct key_def *key_def, const struct key_part *to_find)
const struct key_part *end = part + key_def->part_count;
for (; part != end; part++) {
if (part->fieldno == to_find->fieldno &&
- json_path_cmp(part->path, part->path_len,
- to_find->path, to_find->path_len,
- TUPLE_INDEX_BASE) == 0)
+ json_path_cmp(part->path, part->path_len, to_find->path,
+ to_find->path_len, TUPLE_INDEX_BASE) == 0)
return part;
}
return NULL;
@@ -840,17 +838,15 @@ key_def_merge(const struct key_def *first, const struct key_def *second)
struct key_def *
key_def_find_pk_in_cmp_def(const struct key_def *cmp_def,
- const struct key_def *pk_def,
- struct region *region)
+ const struct key_def *pk_def, struct region *region)
{
struct key_def *extracted_def = NULL;
size_t region_svp = region_used(region);
/* First, dump primary key parts as is. */
size_t size;
- struct key_part_def *parts =
- region_alloc_array(region, typeof(parts[0]), pk_def->part_count,
- &size);
+ struct key_part_def *parts = region_alloc_array(
+ region, typeof(parts[0]), pk_def->part_count, &size);
if (parts == NULL) {
diag_set(OutOfMemory, size, "region_alloc_array", "parts");
goto out;
@@ -862,8 +858,8 @@ key_def_find_pk_in_cmp_def(const struct key_def *cmp_def,
* parts in a secondary key.
*/
for (uint32_t i = 0; i < pk_def->part_count; i++) {
- const struct key_part *part = key_def_find(cmp_def,
- &pk_def->parts[i]);
+ const struct key_part *part =
+ key_def_find(cmp_def, &pk_def->parts[i]);
assert(part != NULL);
parts[i].fieldno = part - cmp_def->parts;
parts[i].path = NULL;
@@ -885,7 +881,7 @@ key_validate_parts(const struct key_def *key_def, const char *key,
const struct key_part *part = &key_def->parts[i];
if (key_part_validate(part->type, key, i,
key_part_is_nullable(part) &&
- allow_nullable))
+ allow_nullable))
return -1;
mp_next(&key);
}
diff --git a/src/box/key_def.h b/src/box/key_def.h
index f4d9e76..c3ea3db 100644
--- a/src/box/key_def.h
+++ b/src/box/key_def.h
@@ -130,38 +130,28 @@ key_part_is_nullable(const struct key_part *part)
}
/** @copydoc tuple_compare_with_key() */
-typedef int (*tuple_compare_with_key_t)(struct tuple *tuple,
- hint_t tuple_hint,
- const char *key,
- uint32_t part_count,
+typedef int (*tuple_compare_with_key_t)(struct tuple *tuple, hint_t tuple_hint,
+ const char *key, uint32_t part_count,
hint_t key_hint,
struct key_def *key_def);
/** @copydoc tuple_compare() */
-typedef int (*tuple_compare_t)(struct tuple *tuple_a,
- hint_t tuple_a_hint,
- struct tuple *tuple_b,
- hint_t tuple_b_hint,
+typedef int (*tuple_compare_t)(struct tuple *tuple_a, hint_t tuple_a_hint,
+ struct tuple *tuple_b, hint_t tuple_b_hint,
struct key_def *key_def);
/** @copydoc tuple_extract_key() */
typedef char *(*tuple_extract_key_t)(struct tuple *tuple,
- struct key_def *key_def,
- int multikey_idx,
+ struct key_def *key_def, int multikey_idx,
uint32_t *key_size);
/** @copydoc tuple_extract_key_raw() */
-typedef char *(*tuple_extract_key_raw_t)(const char *data,
- const char *data_end,
+typedef char *(*tuple_extract_key_raw_t)(const char *data, const char *data_end,
struct key_def *key_def,
- int multikey_idx,
- uint32_t *key_size);
+ int multikey_idx, uint32_t *key_size);
/** @copydoc tuple_hash() */
-typedef uint32_t (*tuple_hash_t)(struct tuple *tuple,
- struct key_def *key_def);
+typedef uint32_t (*tuple_hash_t)(struct tuple *tuple, struct key_def *key_def);
/** @copydoc key_hash() */
-typedef uint32_t (*key_hash_t)(const char *key,
- struct key_def *key_def);
+typedef uint32_t (*key_hash_t)(const char *key, struct key_def *key_def);
/** @copydoc tuple_hint() */
-typedef hint_t (*tuple_hint_t)(struct tuple *tuple,
- struct key_def *key_def);
+typedef hint_t (*tuple_hint_t)(struct tuple *tuple, struct key_def *key_def);
/** @copydoc key_hint() */
typedef hint_t (*key_hint_t)(const char *key, uint32_t part_count,
struct key_def *key_def);
@@ -459,8 +449,7 @@ key_def_merge(const struct key_def *first, const struct key_def *second);
*/
struct key_def *
key_def_find_pk_in_cmp_def(const struct key_def *cmp_def,
- const struct key_def *pk_def,
- struct region *region);
+ const struct key_def *pk_def, struct region *region);
/*
* Check that parts of the key match with the key definition.
@@ -525,10 +514,11 @@ key_def_has_collation(const struct key_def *key_def)
* @retval -1 mp_type is invalid.
*/
static inline int
-key_part_validate(enum field_type key_type, const char *key,
- uint32_t field_no, bool is_nullable)
+key_part_validate(enum field_type key_type, const char *key, uint32_t field_no,
+ bool is_nullable)
{
- if (unlikely(!field_mp_type_is_compatible(key_type, key, is_nullable))) {
+ if (unlikely(
+ !field_mp_type_is_compatible(key_type, key, is_nullable))) {
diag_set(ClientError, ER_KEY_PART_TYPE, field_no,
field_type_strs[key_type]);
return -1;
@@ -630,9 +620,8 @@ tuple_extract_key_raw(const char *data, const char *data_end,
* @retval >0 if key_a > key_b
*/
int
-key_compare(const char *key_a, hint_t key_a_hint,
- const char *key_b, hint_t key_b_hint,
- struct key_def *key_def);
+key_compare(const char *key_a, hint_t key_a_hint, const char *key_b,
+ hint_t key_b_hint, struct key_def *key_def);
/**
* Compare tuples using the key definition and comparison hints.
@@ -646,12 +635,11 @@ key_compare(const char *key_a, hint_t key_a_hint,
* @retval >0 if key_fields(tuple_a) > key_fields(tuple_b)
*/
static inline int
-tuple_compare(struct tuple *tuple_a, hint_t tuple_a_hint,
- struct tuple *tuple_b, hint_t tuple_b_hint,
- struct key_def *key_def)
+tuple_compare(struct tuple *tuple_a, hint_t tuple_a_hint, struct tuple *tuple_b,
+ hint_t tuple_b_hint, struct key_def *key_def)
{
- return key_def->tuple_compare(tuple_a, tuple_a_hint,
- tuple_b, tuple_b_hint, key_def);
+ return key_def->tuple_compare(tuple_a, tuple_a_hint, tuple_b,
+ tuple_b_hint, key_def);
}
/**
@@ -668,9 +656,9 @@ tuple_compare(struct tuple *tuple_a, hint_t tuple_a_hint,
* @retval >0 if key_fields(tuple) > parts(key)
*/
static inline int
-tuple_compare_with_key(struct tuple *tuple, hint_t tuple_hint,
- const char *key, uint32_t part_count,
- hint_t key_hint, struct key_def *key_def)
+tuple_compare_with_key(struct tuple *tuple, hint_t tuple_hint, const char *key,
+ uint32_t part_count, hint_t key_hint,
+ struct key_def *key_def)
{
return key_def->tuple_compare_with_key(tuple, tuple_hint, key,
part_count, key_hint, key_def);
@@ -730,7 +718,7 @@ key_hash(const char *key, struct key_def *key_def)
return key_def->key_hash(key, key_def);
}
- /*
+/*
* Get comparison hint for a tuple.
* @param tuple - tuple to compute the hint for
* @param key_def - key_def used for tuple comparison
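The key_def.h hunks above all follow one pattern: prototypes and calls are re-wrapped to pack as many arguments per line as the column limit allows, with continuation lines aligned under the opening parenthesis. A .clang-format fragment along these lines would produce that behavior; the option names come from the ClangFormatStyleOptions reference, but the exact values used by patch 2 of this series are an assumption:

```yaml
# Sketch only: wrapping-related options assumed from the hunks above,
# not copied from the series' actual src/box/.clang-format.
BasedOnStyle: LLVM
ColumnLimit: 80              # hard wrap at 80 columns
IndentWidth: 8               # tab-sized indent used throughout src/box
UseTab: Always
AlignAfterOpenBracket: Align # continuations align under the open paren
BinPackParameters: true      # pack parameters, split only at the limit
```

This also explains the occasional awkward result, such as breaking after `unlikely(` in key_part_validate(): the formatter splits wherever the penalty is lowest, not where a human would.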
diff --git a/src/box/key_list.c b/src/box/key_list.c
index 6143b84..a604a1d 100644
--- a/src/box/key_list.c
+++ b/src/box/key_list.c
@@ -97,7 +97,7 @@ key_list_iterator_create(struct key_list_iterator *it, struct tuple *tuple,
}
if (func->def->opts.is_multikey) {
if (mp_typeof(*key_data) != MP_ARRAY) {
- struct space * space = space_by_id(index_def->space_id);
+ struct space *space = space_by_id(index_def->space_id);
/*
* Multikey function must return an array
* of keys.
@@ -159,12 +159,12 @@ key_list_iterator_next(struct key_list_iterator *it, const char **value)
diag_set(ClientError, ER_FUNC_INDEX_FORMAT, it->index_def->name,
space ? space_name(space) : "",
tt_sprintf(tnt_errcode_desc(ER_EXACT_MATCH),
- key_def->part_count, part_count));
+ key_def->part_count, part_count));
return -1;
}
const char *key_end;
- if (key_validate_parts(key_def, rptr, part_count, true,
- &key_end) != 0) {
+ if (key_validate_parts(key_def, rptr, part_count, true, &key_end) !=
+ 0) {
struct space *space = space_by_id(it->index_def->space_id);
/*
* The key doesn't follow functional index key
diff --git a/src/box/key_list.h b/src/box/key_list.h
index ccc91e7..6b56eb3 100644
--- a/src/box/key_list.h
+++ b/src/box/key_list.h
@@ -49,8 +49,8 @@ struct tuple;
* key, since the key is only used to look up the old tuple in the
* b+* tree, so we pass in a dummy allocator.
*/
-typedef const char *(*key_list_allocator_t)(struct tuple *tuple, const char *key,
- uint32_t key_sz);
+typedef const char *(*key_list_allocator_t)(struct tuple *tuple,
+ const char *key, uint32_t key_sz);
/**
* An iterator over key_data returned by a stored function.
diff --git a/src/box/lua/call.c b/src/box/lua/call.c
index 0315e72..a78e15a 100644
--- a/src/box/lua/call.c
+++ b/src/box/lua/call.c
@@ -77,57 +77,55 @@ box_lua_find(lua_State *L, const char *name, const char *name_end)
int objstack = 0, top = lua_gettop(L);
const char *start = name, *end;
- while ((end = (const char *) memchr(start, '.', name_end - start))) {
+ while ((end = (const char *)memchr(start, '.', name_end - start))) {
lua_checkstack(L, 3);
lua_pushlstring(L, start, end - start);
lua_gettable(L, index);
- if (! lua_istable(L, -1)) {
- diag_set(ClientError, ER_NO_SUCH_PROC,
- name_end - name, name);
+ if (!lua_istable(L, -1)) {
+ diag_set(ClientError, ER_NO_SUCH_PROC, name_end - name,
+ name);
return -1;
}
- start = end + 1; /* next piece of a.b.c */
+ start = end + 1; /* next piece of a.b.c */
index = lua_gettop(L); /* top of the stack */
}
/* box.something:method */
- if ((end = (const char *) memchr(start, ':', name_end - start))) {
+ if ((end = (const char *)memchr(start, ':', name_end - start))) {
lua_checkstack(L, 3);
lua_pushlstring(L, start, end - start);
lua_gettable(L, index);
- if (! (lua_istable(L, -1) ||
- lua_islightuserdata(L, -1) || lua_isuserdata(L, -1) )) {
- diag_set(ClientError, ER_NO_SUCH_PROC,
- name_end - name, name);
- return -1;
+ if (!(lua_istable(L, -1) || lua_islightuserdata(L, -1) ||
+ lua_isuserdata(L, -1))) {
+ diag_set(ClientError, ER_NO_SUCH_PROC, name_end - name,
+ name);
+ return -1;
}
- start = end + 1; /* next piece of a.b.c */
+ start = end + 1; /* next piece of a.b.c */
index = lua_gettop(L); /* top of the stack */
objstack = index - top;
}
-
lua_pushlstring(L, start, name_end - start);
lua_gettable(L, index);
if (!lua_isfunction(L, -1) && !lua_istable(L, -1)) {
/* lua_call or lua_gettable would raise a type error
* for us, but our own message is more verbose. */
- diag_set(ClientError, ER_NO_SUCH_PROC,
- name_end - name, name);
+ diag_set(ClientError, ER_NO_SUCH_PROC, name_end - name, name);
return -1;
}
/* set the stack so that it contains only
* the function pointer. */
if (index != LUA_GLOBALSINDEX) {
- if (objstack == 0) { /* no object, only a function */
+ if (objstack == 0) { /* no object, only a function */
lua_replace(L, top + 1);
lua_pop(L, lua_gettop(L) - top - 1);
} else if (objstack == 1) { /* just two values, swap them */
lua_insert(L, -2);
lua_pop(L, lua_gettop(L) - top - 2);
- } else { /* long path */
+ } else { /* long path */
lua_insert(L, top + 1);
lua_insert(L, top + 2);
lua_pop(L, objstack - 1);
@@ -300,7 +298,7 @@ static const struct port_vtab port_lua_vtab;
void
port_lua_create(struct port *port, struct lua_State *L)
{
- struct port_lua *port_lua = (struct port_lua *) port;
+ struct port_lua *port_lua = (struct port_lua *)port;
memset(port_lua, 0, sizeof(*port_lua));
port_lua->vtab = &port_lua_vtab;
port_lua->L = L;
@@ -328,7 +326,7 @@ static int
execute_lua_call(lua_State *L)
{
struct execute_lua_ctx *ctx =
- (struct execute_lua_ctx *) lua_topointer(L, 1);
+ (struct execute_lua_ctx *)lua_topointer(L, 1);
lua_settop(L, 0); /* clear the stack to simplify the logic below */
const char *name = ctx->name;
@@ -356,7 +354,7 @@ static int
execute_lua_call_by_ref(lua_State *L)
{
struct execute_lua_ctx *ctx =
- (struct execute_lua_ctx *) lua_topointer(L, 1);
+ (struct execute_lua_ctx *)lua_topointer(L, 1);
lua_settop(L, 0); /* clear the stack to simplify the logic below */
lua_rawgeti(L, LUA_REGISTRYINDEX, ctx->lua_ref);
@@ -374,7 +372,7 @@ static int
execute_lua_eval(lua_State *L)
{
struct execute_lua_ctx *ctx =
- (struct execute_lua_ctx *) lua_topointer(L, 1);
+ (struct execute_lua_ctx *)lua_topointer(L, 1);
lua_settop(L, 0); /* clear the stack to simplify the logic below */
/* Compile expression */
@@ -404,7 +402,7 @@ static int
encode_lua_call(lua_State *L)
{
struct encode_lua_ctx *ctx =
- (struct encode_lua_ctx *) lua_topointer(L, 1);
+ (struct encode_lua_ctx *)lua_topointer(L, 1);
/*
* Add all elements from Lua stack to the buffer.
*
@@ -425,7 +423,7 @@ static int
encode_lua_call_16(lua_State *L)
{
struct encode_lua_ctx *ctx =
- (struct encode_lua_ctx *) lua_topointer(L, 1);
+ (struct encode_lua_ctx *)lua_topointer(L, 1);
/*
* Add all elements from Lua stack to the buffer.
*
@@ -441,7 +439,7 @@ static inline int
port_lua_do_dump(struct port *base, struct mpstream *stream,
lua_CFunction handler)
{
- struct port_lua *port = (struct port_lua *) base;
+ struct port_lua *port = (struct port_lua *)base;
assert(port->vtab == &port_lua_vtab);
/*
* Use the same global state, assuming the encoder doesn't
@@ -463,10 +461,10 @@ port_lua_do_dump(struct port *base, struct mpstream *stream,
static int
port_lua_dump(struct port *base, struct obuf *out)
{
- struct port_lua *port = (struct port_lua *) base;
+ struct port_lua *port = (struct port_lua *)base;
struct mpstream stream;
- mpstream_init(&stream, out, obuf_reserve_cb, obuf_alloc_cb,
- luamp_error, port->L);
+ mpstream_init(&stream, out, obuf_reserve_cb, obuf_alloc_cb, luamp_error,
+ port->L);
return port_lua_do_dump(base, &stream, encode_lua_call);
}
@@ -475,17 +473,17 @@ port_lua_dump_16(struct port *base, struct obuf *out)
{
struct port_lua *port = (struct port_lua *)base;
struct mpstream stream;
- mpstream_init(&stream, out, obuf_reserve_cb, obuf_alloc_cb,
- luamp_error, port->L);
+ mpstream_init(&stream, out, obuf_reserve_cb, obuf_alloc_cb, luamp_error,
+ port->L);
return port_lua_do_dump(base, &stream, encode_lua_call_16);
}
static void
port_lua_dump_lua(struct port *base, struct lua_State *L, bool is_flat)
{
- (void) is_flat;
+ (void)is_flat;
assert(is_flat == true);
- struct port_lua *port = (struct port_lua *) base;
+ struct port_lua *port = (struct port_lua *)base;
uint32_t size = lua_gettop(port->L);
lua_xmove(port->L, L, size);
port->size = size;
@@ -494,7 +492,7 @@ port_lua_dump_lua(struct port *base, struct lua_State *L, bool is_flat)
static const char *
port_lua_get_msgpack(struct port *base, uint32_t *size)
{
- struct port_lua *port = (struct port_lua *) base;
+ struct port_lua *port = (struct port_lua *)base;
struct region *region = &fiber()->gc;
uint32_t region_svp = region_used(region);
struct mpstream stream;
@@ -553,7 +551,7 @@ box_process_lua(enum handlers handler, struct execute_lua_ctx *ctx,
return -1;
int coro_ref = luaL_ref(tarantool_L, LUA_REGISTRYINDEX);
port_lua_create(ret, L);
- ((struct port_lua *) ret)->ref = coro_ref;
+ ((struct port_lua *)ret)->ref = coro_ref;
/*
* Code that needs a temporary fiber-local Lua state may
@@ -593,8 +591,8 @@ box_process_lua(enum handlers handler, struct execute_lua_ctx *ctx,
}
int
-box_lua_call(const char *name, uint32_t name_len,
- struct port *args, struct port *ret)
+box_lua_call(const char *name, uint32_t name_len, struct port *args,
+ struct port *ret)
{
struct execute_lua_ctx ctx;
ctx.name = name;
@@ -604,8 +602,8 @@ box_lua_call(const char *name, uint32_t name_len,
}
int
-box_lua_eval(const char *expr, uint32_t expr_len,
- struct port *args, struct port *ret)
+box_lua_eval(const char *expr, uint32_t expr_len, struct port *args,
+ struct port *ret)
{
struct execute_lua_ctx ctx;
ctx.name = expr;
@@ -628,9 +626,9 @@ static struct func_vtab func_lua_vtab;
static struct func_vtab func_persistent_lua_vtab;
static const char *default_sandbox_exports[] = {
- "assert", "error", "ipairs", "math", "next", "pairs", "pcall", "print",
- "select", "string", "table", "tonumber", "tostring", "type", "unpack",
- "xpcall", "utf8",
+ "assert", "error", "ipairs", "math", "next", "pairs",
+ "pcall", "print", "select", "string", "table", "tonumber",
+ "tostring", "type", "unpack", "xpcall", "utf8",
};
/**
@@ -696,7 +694,8 @@ func_persistent_lua_load(struct func_lua *func)
diag_set(OutOfMemory, load_str_sz, "region", "load_str");
return -1;
}
- snprintf(load_str, load_str_sz, "%s%s", load_pref, func->base.def->body);
+ snprintf(load_str, load_str_sz, "%s%s", load_pref,
+ func->base.def->body);
/*
* Perform loading of the persistent Lua function
@@ -759,7 +758,7 @@ func_lua_new(struct func_def *def)
{
assert(def->language == FUNC_LANGUAGE_LUA);
struct func_lua *func =
- (struct func_lua *) malloc(sizeof(struct func_lua));
+ (struct func_lua *)malloc(sizeof(struct func_lua));
if (func == NULL) {
diag_set(OutOfMemory, sizeof(*func), "malloc", "func");
return NULL;
@@ -812,7 +811,7 @@ func_persistent_lua_destroy(struct func *base)
assert(base != NULL && base->def->language == FUNC_LANGUAGE_LUA &&
base->def->body != NULL);
assert(base->vtab == &func_persistent_lua_vtab);
- struct func_lua *func = (struct func_lua *) base;
+ struct func_lua *func = (struct func_lua *)base;
func_persistent_lua_unload(func);
free(func);
}
@@ -828,7 +827,6 @@ func_persistent_lua_call(struct func *base, struct port *args, struct port *ret)
ctx.lua_ref = func->lua_ref;
ctx.args = args;
return box_process_lua(HANDLER_CALL_BY_REF, &ctx, ret);
-
}
static struct func_vtab func_persistent_lua_vtab = {
@@ -872,7 +870,7 @@ lbox_func_call(struct lua_State *L)
lua_xmove(L, args_L, lua_gettop(L) - 1);
struct port args;
port_lua_create(&args, args_L);
- ((struct port_lua *) &args)->ref = coro_ref;
+ ((struct port_lua *)&args)->ref = coro_ref;
struct port ret;
if (func_call(func, &args, &ret) != 0) {
@@ -1011,7 +1009,7 @@ lbox_func_delete(struct lua_State *L, struct func *func)
static int
lbox_func_new_or_delete(struct trigger *trigger, void *event)
{
- struct lua_State *L = (struct lua_State *) trigger->data;
+ struct lua_State *L = (struct lua_State *)trigger->data;
struct func *func = (struct func *)event;
if (!func->def->exports.lua)
return 0;
@@ -1022,15 +1020,15 @@ lbox_func_new_or_delete(struct trigger *trigger, void *event)
return 0;
}
-static struct trigger on_alter_func_in_lua = {
- RLIST_LINK_INITIALIZER, lbox_func_new_or_delete, NULL, NULL
-};
+static struct trigger on_alter_func_in_lua = { RLIST_LINK_INITIALIZER,
+ lbox_func_new_or_delete, NULL,
+ NULL };
static const struct luaL_Reg boxlib_internal[] = {
- {"call_loadproc", lbox_call_loadproc},
- {"module_reload", lbox_module_reload},
- {"func_call", lbox_func_call},
- {NULL, NULL}
+ { "call_loadproc", lbox_call_loadproc },
+ { "module_reload", lbox_module_reload },
+ { "func_call", lbox_func_call },
+ { NULL, NULL }
};
void
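Two spacing rules recur throughout the call.c hunks: the `*` moves next to the declarator (`struct space *space`) and the space after a C-style cast disappears (`(const char *)memchr`). In .clang-format terms these would be roughly the following; the option names are real, but treating them as the series' chosen values is an assumption:

```yaml
# Sketch only: pointer and cast spacing assumed from the hunks above.
PointerAlignment: Right      # "struct port_lua *port", not "struct port_lua * port"
SpaceAfterCStyleCast: false  # "(struct port_lua *)base", not "... *) base"
```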
diff --git a/src/box/lua/call.h b/src/box/lua/call.h
index 83aa439..8c0c5a1 100644
--- a/src/box/lua/call.h
+++ b/src/box/lua/call.h
@@ -51,12 +51,12 @@ struct func_def;
* (implementation of 'CALL' command code).
*/
int
-box_lua_call(const char *name, uint32_t name_len,
- struct port *args, struct port *ret);
+box_lua_call(const char *name, uint32_t name_len, struct port *args,
+ struct port *ret);
int
-box_lua_eval(const char *expr, uint32_t expr_len,
- struct port *args, struct port *ret);
+box_lua_eval(const char *expr, uint32_t expr_len, struct port *args,
+ struct port *ret);
/** Construct a Lua function object. */
struct func *
diff --git a/src/box/lua/cfg.cc b/src/box/lua/cfg.cc
index bbb92f0..c422870 100644
--- a/src/box/lua/cfg.cc
+++ b/src/box/lua/cfg.cc
@@ -40,7 +40,7 @@
#include "libeio/eio.h"
extern "C" {
- #include <lua.h>
+#include <lua.h>
} // extern "C"
static int
@@ -263,7 +263,7 @@ lbox_set_prepared_stmt_cache_size(struct lua_State *L)
static int
lbox_cfg_set_worker_pool_threads(struct lua_State *L)
{
- (void) L;
+ (void)L;
eio_set_min_parallel(cfg_geti("worker_pool_threads"));
eio_set_max_parallel(cfg_geti("worker_pool_threads"));
return 0;
@@ -378,7 +378,7 @@ lbox_cfg_set_replication_anon(struct lua_State *L)
static int
lbox_cfg_set_replication_skip_conflict(struct lua_State *L)
{
- (void) L;
+ (void)L;
box_set_replication_skip_conflict();
return 0;
}
@@ -387,40 +387,58 @@ void
box_lua_cfg_init(struct lua_State *L)
{
static const struct luaL_Reg cfglib_internal[] = {
- {"cfg_check", lbox_cfg_check},
- {"cfg_load", lbox_cfg_load},
- {"cfg_set_listen", lbox_cfg_set_listen},
- {"cfg_set_replication", lbox_cfg_set_replication},
- {"cfg_set_worker_pool_threads", lbox_cfg_set_worker_pool_threads},
- {"cfg_set_readahead", lbox_cfg_set_readahead},
- {"cfg_set_io_collect_interval", lbox_cfg_set_io_collect_interval},
- {"cfg_set_too_long_threshold", lbox_cfg_set_too_long_threshold},
- {"cfg_set_snap_io_rate_limit", lbox_cfg_set_snap_io_rate_limit},
- {"cfg_set_checkpoint_count", lbox_cfg_set_checkpoint_count},
- {"cfg_set_checkpoint_interval", lbox_cfg_set_checkpoint_interval},
- {"cfg_set_checkpoint_wal_threshold", lbox_cfg_set_checkpoint_wal_threshold},
- {"cfg_set_read_only", lbox_cfg_set_read_only},
- {"cfg_set_memtx_memory", lbox_cfg_set_memtx_memory},
- {"cfg_set_memtx_max_tuple_size", lbox_cfg_set_memtx_max_tuple_size},
- {"cfg_set_vinyl_memory", lbox_cfg_set_vinyl_memory},
- {"cfg_set_vinyl_max_tuple_size", lbox_cfg_set_vinyl_max_tuple_size},
- {"cfg_set_vinyl_cache", lbox_cfg_set_vinyl_cache},
- {"cfg_set_vinyl_timeout", lbox_cfg_set_vinyl_timeout},
- {"cfg_set_election_is_enabled", lbox_cfg_set_election_is_enabled},
- {"cfg_set_election_is_candidate", lbox_cfg_set_election_is_candidate},
- {"cfg_set_election_timeout", lbox_cfg_set_election_timeout},
- {"cfg_set_replication_timeout", lbox_cfg_set_replication_timeout},
- {"cfg_set_replication_connect_quorum", lbox_cfg_set_replication_connect_quorum},
- {"cfg_set_replication_connect_timeout", lbox_cfg_set_replication_connect_timeout},
- {"cfg_set_replication_sync_lag", lbox_cfg_set_replication_sync_lag},
- {"cfg_set_replication_synchro_quorum", lbox_cfg_set_replication_synchro_quorum},
- {"cfg_set_replication_synchro_timeout", lbox_cfg_set_replication_synchro_timeout},
- {"cfg_set_replication_sync_timeout", lbox_cfg_set_replication_sync_timeout},
- {"cfg_set_replication_skip_conflict", lbox_cfg_set_replication_skip_conflict},
- {"cfg_set_replication_anon", lbox_cfg_set_replication_anon},
- {"cfg_set_net_msg_max", lbox_cfg_set_net_msg_max},
- {"cfg_set_sql_cache_size", lbox_set_prepared_stmt_cache_size},
- {NULL, NULL}
+ { "cfg_check", lbox_cfg_check },
+ { "cfg_load", lbox_cfg_load },
+ { "cfg_set_listen", lbox_cfg_set_listen },
+ { "cfg_set_replication", lbox_cfg_set_replication },
+ { "cfg_set_worker_pool_threads",
+ lbox_cfg_set_worker_pool_threads },
+ { "cfg_set_readahead", lbox_cfg_set_readahead },
+ { "cfg_set_io_collect_interval",
+ lbox_cfg_set_io_collect_interval },
+ { "cfg_set_too_long_threshold",
+ lbox_cfg_set_too_long_threshold },
+ { "cfg_set_snap_io_rate_limit",
+ lbox_cfg_set_snap_io_rate_limit },
+ { "cfg_set_checkpoint_count", lbox_cfg_set_checkpoint_count },
+ { "cfg_set_checkpoint_interval",
+ lbox_cfg_set_checkpoint_interval },
+ { "cfg_set_checkpoint_wal_threshold",
+ lbox_cfg_set_checkpoint_wal_threshold },
+ { "cfg_set_read_only", lbox_cfg_set_read_only },
+ { "cfg_set_memtx_memory", lbox_cfg_set_memtx_memory },
+ { "cfg_set_memtx_max_tuple_size",
+ lbox_cfg_set_memtx_max_tuple_size },
+ { "cfg_set_vinyl_memory", lbox_cfg_set_vinyl_memory },
+ { "cfg_set_vinyl_max_tuple_size",
+ lbox_cfg_set_vinyl_max_tuple_size },
+ { "cfg_set_vinyl_cache", lbox_cfg_set_vinyl_cache },
+ { "cfg_set_vinyl_timeout", lbox_cfg_set_vinyl_timeout },
+ { "cfg_set_election_is_enabled",
+ lbox_cfg_set_election_is_enabled },
+ { "cfg_set_election_is_candidate",
+ lbox_cfg_set_election_is_candidate },
+ { "cfg_set_election_timeout", lbox_cfg_set_election_timeout },
+ { "cfg_set_replication_timeout",
+ lbox_cfg_set_replication_timeout },
+ { "cfg_set_replication_connect_quorum",
+ lbox_cfg_set_replication_connect_quorum },
+ { "cfg_set_replication_connect_timeout",
+ lbox_cfg_set_replication_connect_timeout },
+ { "cfg_set_replication_sync_lag",
+ lbox_cfg_set_replication_sync_lag },
+ { "cfg_set_replication_synchro_quorum",
+ lbox_cfg_set_replication_synchro_quorum },
+ { "cfg_set_replication_synchro_timeout",
+ lbox_cfg_set_replication_synchro_timeout },
+ { "cfg_set_replication_sync_timeout",
+ lbox_cfg_set_replication_sync_timeout },
+ { "cfg_set_replication_skip_conflict",
+ lbox_cfg_set_replication_skip_conflict },
+ { "cfg_set_replication_anon", lbox_cfg_set_replication_anon },
+ { "cfg_set_net_msg_max", lbox_cfg_set_net_msg_max },
+ { "cfg_set_sql_cache_size", lbox_set_prepared_stmt_cache_size },
+ { NULL, NULL }
};
luaL_register(L, "box.internal", cfglib_internal);
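The luaL_Reg tables above show the braced-initializer style the formatter enforces: spaces inside the braces (`{ "cfg_check", lbox_cfg_check }`) instead of the old tight form. A plausible fragment, with the caveat that the concrete option values are assumed rather than taken from the series' file:

```yaml
# Sketch only: braced-list style assumed from the luaL_Reg hunks above.
Cpp11BracedListStyle: false     # spaces inside braces, brace treated like a paren
SpaceBeforeCpp11BracedList: true
```

Note the side effect visible in on_alter_func_in_lua and errinjlib: clang-format bin-packs initializer lists against the column limit, which can produce layouts noticeably worse than the hand-written originals.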
diff --git a/src/box/lua/console.c b/src/box/lua/console.c
index ea5385c..997244a 100644
--- a/src/box/lua/console.c
+++ b/src/box/lua/console.c
@@ -170,7 +170,6 @@ console_completion_handler(const char *text, int start, int end)
if (lua_pcall(readline_L, 3, 1, 0) != 0 ||
!lua_istable(readline_L, -1) ||
(n = lua_objlen(readline_L, -1)) == 0) {
-
lua_pop(readline_L, 1);
return NULL;
}
@@ -281,8 +280,8 @@ lbox_console_readline(struct lua_State *L)
rl_callback_handler_install(prompt, console_push_line);
top = lua_gettop(L);
while (top == lua_gettop(L)) {
- while (coio_wait(STDIN_FILENO, COIO_READ,
- TIMEOUT_INFINITY) == 0) {
+ while (coio_wait(STDIN_FILENO, COIO_READ, TIMEOUT_INFINITY) ==
+ 0) {
/*
* Make sure the user of interactive
* console has not hanged us, otherwise
@@ -336,8 +335,8 @@ lbox_console_completion_handler(struct lua_State *L)
lua_pushcfunction(L, console_completion_helper);
lua_pushlightuserdata(L, &res);
- res = lua_rl_complete(L, lua_tostring(L, 1),
- lua_tointeger(L, 2), lua_tointeger(L, 3));
+ res = lua_rl_complete(L, lua_tostring(L, 1), lua_tointeger(L, 2),
+ lua_tointeger(L, 3));
if (res == NULL) {
return 0;
@@ -387,7 +386,8 @@ lbox_console_add_history(struct lua_State *L)
const char *s = lua_tostring(L, 1);
if (*s) {
- HIST_ENTRY *hist_ent = history_get(history_length - 1 + history_base);
+ HIST_ENTRY *hist_ent =
+ history_get(history_length - 1 + history_base);
const char *prev_s = hist_ent ? hist_ent->line : "";
if (strcmp(prev_s, s) != 0)
add_history(s);
@@ -480,7 +480,7 @@ console_dump_plain(struct lua_State *L, uint32_t *size)
assert(lua_isstring(L, -1));
size_t len;
const char *result = lua_tolstring(L, -1, &len);
- *size = (uint32_t) len;
+ *size = (uint32_t)len;
return result;
}
@@ -526,14 +526,14 @@ port_msgpack_dump_plain_via_lua(struct lua_State *L)
port_msgpack_set_plain((struct port *)port, data, *size);
}
return 0;
- }
+}
/** Plain text converter for raw MessagePack. */
const char *
port_msgpack_dump_plain(struct port *base, uint32_t *size)
{
struct lua_State *L = tarantool_L;
- void *ctx[2] = {(void *)base, (void *)size};
+ void *ctx[2] = { (void *)base, (void *)size };
/*
* lua_cpcall() protects from errors thrown from Lua which
* may break a caller, not knowing about Lua and not
@@ -587,12 +587,12 @@ lua_serpent_init(struct lua_State *L)
lua_getfield(L, LUA_REGISTRYINDEX, "_LOADED");
modfile = lua_pushfstring(L, "@builtin/%s.lua", modname);
if (luaL_loadbuffer(L, serpent_lua, strlen(serpent_lua), modfile)) {
- panic("Error loading Lua module %s...: %s",
- modname, lua_tostring(L, -1));
+ panic("Error loading Lua module %s...: %s", modname,
+ lua_tostring(L, -1));
}
lua_call(L, 0, 1);
- lua_setfield(L, -3, modname); /* _LOADED[modname] = new table */
+ lua_setfield(L, -3, modname); /* _LOADED[modname] = new table */
lua_pop(L, 2);
}
@@ -600,13 +600,13 @@ void
tarantool_lua_console_init(struct lua_State *L)
{
static const struct luaL_Reg consolelib[] = {
- {"load_history", lbox_console_load_history},
- {"save_history", lbox_console_save_history},
- {"add_history", lbox_console_add_history},
- {"completion_handler", lbox_console_completion_handler},
- {"format_yaml", lbox_console_format_yaml},
- {"format_lua", lbox_console_format_lua},
- {NULL, NULL}
+ { "load_history", lbox_console_load_history },
+ { "save_history", lbox_console_save_history },
+ { "add_history", lbox_console_add_history },
+ { "completion_handler", lbox_console_completion_handler },
+ { "format_yaml", lbox_console_format_yaml },
+ { "format_lua", lbox_console_format_lua },
+ { NULL, NULL }
};
luaL_register_module(L, "console", consolelib);
@@ -641,11 +641,11 @@ tarantool_lua_console_init(struct lua_State *L)
};
serializer_lua = luaL_newserializer(L, NULL, lualib);
- serializer_lua->has_compact = 1;
- serializer_lua->encode_invalid_numbers = 1;
- serializer_lua->encode_load_metatables = 1;
- serializer_lua->encode_use_tostring = 1;
- serializer_lua->encode_invalid_as_nil = 1;
+ serializer_lua->has_compact = 1;
+ serializer_lua->encode_invalid_numbers = 1;
+ serializer_lua->encode_load_metatables = 1;
+ serializer_lua->encode_use_tostring = 1;
+ serializer_lua->encode_invalid_as_nil = 1;
/*
* Keep a reference to this module so it
@@ -657,9 +657,9 @@ tarantool_lua_console_init(struct lua_State *L)
lua_serializer_init(L);
struct session_vtab console_session_vtab = {
- .push = console_session_push,
- .fd = console_session_fd,
- .sync = generic_session_sync,
+ .push = console_session_push,
+ .fd = console_session_fd,
+ .sync = generic_session_sync,
};
session_vtab_registry[SESSION_TYPE_CONSOLE] = console_session_vtab;
session_vtab_registry[SESSION_TYPE_REPL] = console_session_vtab;
@@ -696,19 +696,21 @@ enum {
};
/* goto intentionally omitted */
-static const char *
-const lua_rl_keywords[] = {
- "and", "break", "do", "else", "elseif", "end", "false",
- "for", "function", "if", "in", "local", "nil", "not", "or",
- "repeat", "return", "then", "true", "until", "while", NULL
+static const char *const lua_rl_keywords[] = {
+ "and", "break", "do", "else", "elseif", "end",
+ "false", "for", "function", "if", "in", "local",
+ "nil", "not", "or", "repeat", "return", "then",
+ "true", "until", "while", NULL
};
static int
valid_identifier(const char *s)
{
- if (!(isalpha(*s) || *s == '_')) return 0;
+ if (!(isalpha(*s) || *s == '_'))
+ return 0;
for (s++; *s; s++)
- if (!(isalpha(*s) || isdigit(*s) || *s == '_')) return 0;
+ if (!(isalpha(*s) || isdigit(*s) || *s == '_'))
+ return 0;
return 1;
}
@@ -746,10 +748,10 @@ lua_rl_dmadd(dmlist *ml, const char *p, size_t pn, const char *s, int suf)
{
char *t = NULL;
- if (ml->idx+1 >= ml->allocated) {
+ if (ml->idx + 1 >= ml->allocated) {
char **new_list;
- new_list = realloc(
- ml->list, sizeof(char *)*(ml->allocated += 32));
+ new_list = realloc(ml->list,
+ sizeof(char *) * (ml->allocated += 32));
if (!new_list)
return -1;
ml->list = new_list;
@@ -757,20 +759,23 @@ lua_rl_dmadd(dmlist *ml, const char *p, size_t pn, const char *s, int suf)
if (s) {
size_t n = strlen(s);
- if (!(t = (char *)malloc(sizeof(char)*(pn + n + 2))))
+ if (!(t = (char *)malloc(sizeof(char) * (pn + n + 2))))
return 1;
memcpy(t, p, pn);
memcpy(t + pn, s, n);
n += pn;
t[n] = suf;
- if (suf) t[++n] = '\0';
+ if (suf)
+ t[++n] = '\0';
if (ml->idx == 0) {
ml->matchlen = n;
} else {
size_t i;
for (i = 0; i < ml->matchlen && i < n &&
- ml->list[1][i] == t[i]; i++) ;
+ ml->list[1][i] == t[i];
+ i++)
+ ;
/* Set matchlen to common prefix. */
ml->matchlen = i;
}
@@ -797,7 +802,7 @@ lua_rl_getmetaindex(lua_State *L)
}
lua_replace(L, -2);
return 1;
-} /* 1: obj -- val, 0: obj -- */
+} /* 1: obj -- val, 0: obj -- */
/* Get field from object on top of stack. Avoid calling metamethods. */
static int
@@ -820,7 +825,7 @@ lua_rl_getfield(lua_State *L, const char *s, size_t n)
}
} while (lua_rl_getmetaindex(L));
return 0;
-} /* 1: obj -- val, 0: obj -- */
+} /* 1: obj -- val, 0: obj -- */
static char **
lua_rl_complete(lua_State *L, const char *text, int start, int end)
@@ -838,12 +843,12 @@ lua_rl_complete(lua_State *L, const char *text, int start, int end)
savetop = lua_gettop(L);
lua_pushglobaltable(L);
- for (n = (size_t)(end-start), i = dot = 0; i < n; i++) {
+ for (n = (size_t)(end - start), i = dot = 0; i < n; i++) {
if (text[i] == '.' || text[i] == ':') {
is_method_ref = (text[i] == ':');
- if (!lua_rl_getfield(L, text+dot, i-dot))
+ if (!lua_rl_getfield(L, text + dot, i - dot))
goto error; /* Invalid prefix. */
- dot = i+1;
+ dot = i + 1;
/* Points to first char after dot/colon. */
}
}
@@ -851,10 +856,8 @@ lua_rl_complete(lua_State *L, const char *text, int start, int end)
/* Add all matches against keywords if there is no dot/colon. */
if (dot == 0) {
for (i = 0; (s = lua_rl_keywords[i]) != NULL; i++) {
- if (n >= KEYWORD_MATCH_MIN &&
- !strncmp(s, text, n) &&
+ if (n >= KEYWORD_MATCH_MIN && !strncmp(s, text, n) &&
lua_rl_dmadd(&ml, NULL, 0, s, ' ')) {
-
goto error;
}
}
@@ -871,7 +874,6 @@ lua_rl_complete(lua_State *L, const char *text, int start, int end)
continue;
for (lua_pushnil(L); lua_next(L, -2); lua_pop(L, 1)) {
-
/* Beware huge tables */
if (++items_checked > ITEMS_CHECKED_MAX)
break;
@@ -884,9 +886,10 @@ lua_rl_complete(lua_State *L, const char *text, int start, int end)
* Only match names starting with '_'
* if explicitly requested.
*/
- if (strncmp(s, text+dot, n-dot) ||
+ if (strncmp(s, text + dot, n - dot) ||
!valid_identifier(s) ||
- (*s == '_' && text[dot] != '_')) continue;
+ (*s == '_' && text[dot] != '_'))
+ continue;
int suf = 0; /* Omit suffix by default. */
int type = lua_type(L, -1);
@@ -929,7 +932,7 @@ lua_rl_complete(lua_State *L, const char *text, int start, int end)
lua_pop(L, 1);
if (ml.idx == 0) {
-error:
+ error:
lua_rl_dmfree(&ml);
lua_settop(L, savetop);
return NULL;
@@ -937,13 +940,14 @@ error:
/* list[0] holds the common prefix of all matches (may
* be ""). If there is only one match, list[0] and
* list[1] will be the same. */
- ml.list[0] = malloc(sizeof(char)*(ml.matchlen+1));
+ ml.list[0] = malloc(sizeof(char) * (ml.matchlen + 1));
if (!ml.list[0])
goto error;
memcpy(ml.list[0], ml.list[1], ml.matchlen);
ml.list[0][ml.matchlen] = '\0';
/* Add the NULL list terminator. */
- if (lua_rl_dmadd(&ml, NULL, 0, NULL, 0)) goto error;
+ if (lua_rl_dmadd(&ml, NULL, 0, NULL, 0))
+ goto error;
}
lua_settop(L, savetop);
diff --git a/src/box/lua/ctl.c b/src/box/lua/ctl.c
index 2017ddc..72bfa81 100644
--- a/src/box/lua/ctl.c
+++ b/src/box/lua/ctl.c
@@ -81,18 +81,18 @@ lbox_ctl_on_schema_init(struct lua_State *L)
static int
lbox_ctl_clear_synchro_queue(struct lua_State *L)
{
- (void) L;
+ (void)L;
box_clear_synchro_queue();
return 0;
}
static const struct luaL_Reg lbox_ctl_lib[] = {
- {"wait_ro", lbox_ctl_wait_ro},
- {"wait_rw", lbox_ctl_wait_rw},
- {"on_shutdown", lbox_ctl_on_shutdown},
- {"on_schema_init", lbox_ctl_on_schema_init},
- {"clear_synchro_queue", lbox_ctl_clear_synchro_queue},
- {NULL, NULL}
+ { "wait_ro", lbox_ctl_wait_ro },
+ { "wait_rw", lbox_ctl_wait_rw },
+ { "on_shutdown", lbox_ctl_on_shutdown },
+ { "on_schema_init", lbox_ctl_on_schema_init },
+ { "clear_synchro_queue", lbox_ctl_clear_synchro_queue },
+ { NULL, NULL }
};
void
diff --git a/src/box/lua/error.cc b/src/box/lua/error.cc
index 54ec284..3a51d93 100644
--- a/src/box/lua/error.cc
+++ b/src/box/lua/error.cc
@@ -70,8 +70,8 @@ luaT_error_create(lua_State *L, int top_base)
lua_Debug info;
int top = lua_gettop(L);
int top_type = lua_type(L, top_base);
- if (top >= top_base && (top_type == LUA_TNUMBER ||
- top_type == LUA_TSTRING)) {
+ if (top >= top_base &&
+ (top_type == LUA_TNUMBER || top_type == LUA_TSTRING)) {
/* Shift of the "reason args". */
int shift = 1;
if (top_type == LUA_TNUMBER) {
@@ -182,8 +182,8 @@ luaT_error_new(lua_State *L)
{
struct error *e;
if (lua_gettop(L) == 0 || (e = luaT_error_create(L, 1)) == NULL) {
- return luaL_error(L, "Usage: box.error.new(code, args) or "\
- "box.error.new(type, args)");
+ return luaL_error(L, "Usage: box.error.new(code, args) or "
+ "box.error.new(type, args)");
}
lua_settop(L, 0);
luaT_pusherror(L, e);
@@ -213,12 +213,13 @@ luaT_error_set(struct lua_State *L)
static int
lbox_errinj_set(struct lua_State *L)
{
- char *name = (char*)luaL_checkstring(L, 1);
+ char *name = (char *)luaL_checkstring(L, 1);
struct errinj *errinj;
errinj = errinj_by_name(name);
if (errinj == NULL) {
say_error("%s", name);
- lua_pushfstring(L, "error: can't find error injection '%s'", name);
+ lua_pushfstring(L, "error: can't find error injection '%s'",
+ name);
return 1;
}
switch (errinj->type) {
@@ -262,7 +263,7 @@ lbox_errinj_push_value(struct lua_State *L, const struct errinj *e)
static int
lbox_errinj_get(struct lua_State *L)
{
- char *name = (char*)luaL_checkstring(L, 1);
+ char *name = (char *)luaL_checkstring(L, 1);
struct errinj *e = errinj_by_name(name);
if (e != NULL)
return lbox_errinj_push_value(L, e);
@@ -273,7 +274,7 @@ lbox_errinj_get(struct lua_State *L)
static inline int
lbox_errinj_cb(struct errinj *e, void *cb_ctx)
{
- struct lua_State *L = (struct lua_State*)cb_ctx;
+ struct lua_State *L = (struct lua_State *)cb_ctx;
lua_pushstring(L, e->name);
lua_newtable(L);
lua_pushstring(L, "state");
@@ -292,10 +293,9 @@ lbox_errinj_info(struct lua_State *L)
}
void
-box_lua_error_init(struct lua_State *L) {
- static const struct luaL_Reg errorlib[] = {
- {NULL, NULL}
- };
+box_lua_error_init(struct lua_State *L)
+{
+ static const struct luaL_Reg errorlib[] = { { NULL, NULL } };
luaL_register_module(L, "box.error", errorlib);
for (int i = 0; i < box_error_code_MAX; i++) {
const char *name = box_error_codes[i].errstr;
@@ -334,12 +334,11 @@ box_lua_error_init(struct lua_State *L) {
lua_pop(L, 1);
- static const struct luaL_Reg errinjlib[] = {
- {"info", lbox_errinj_info},
- {"set", lbox_errinj_set},
- {"get", lbox_errinj_get},
- {NULL, NULL}
- };
+ static const struct luaL_Reg errinjlib[] = { { "info",
+ lbox_errinj_info },
+ { "set", lbox_errinj_set },
+ { "get", lbox_errinj_get },
+ { NULL, NULL } };
/* box.error.injection is not set by register_module */
luaL_register_module(L, "box.error.injection", errinjlib);
lua_pop(L, 1);
diff --git a/src/box/lua/execute.c b/src/box/lua/execute.c
index 926a0a6..41bfee9 100644
--- a/src/box/lua/execute.c
+++ b/src/box/lua/execute.c
@@ -99,7 +99,7 @@ lbox_execute_prepared(struct lua_State *L)
{
int top = lua_gettop(L);
- if ((top != 1 && top != 2) || ! lua_istable(L, 1))
+ if ((top != 1 && top != 2) || !lua_istable(L, 1))
return luaL_error(L, "Usage: statement:execute([, params])");
lua_getfield(L, 1, "stmt_id");
if (!lua_isnumber(L, -1))
@@ -138,15 +138,15 @@ lbox_unprepare(struct lua_State *L)
{
int top = lua_gettop(L);
- if (top != 1 || (! lua_istable(L, 1) && ! lua_isnumber(L, 1))) {
- return luaL_error(L, "Usage: statement:unprepare() or "\
+ if (top != 1 || (!lua_istable(L, 1) && !lua_isnumber(L, 1))) {
+ return luaL_error(L, "Usage: statement:unprepare() or "
"box.unprepare(stmt_id)");
}
lua_Integer stmt_id;
if (lua_istable(L, 1)) {
lua_getfield(L, -1, "stmt_id");
- if (! lua_isnumber(L, -1)) {
- return luaL_error(L, "Statement id is expected "\
+ if (!lua_isnumber(L, -1)) {
+ return luaL_error(L, "Statement id is expected "
"to be numeric");
}
stmt_id = lua_tointeger(L, -1);
@@ -156,7 +156,7 @@ lbox_unprepare(struct lua_State *L)
}
if (stmt_id < 0)
return luaL_error(L, "Statement id can't be negative");
- if (sql_unprepare((uint32_t) stmt_id) != 0)
+ if (sql_unprepare((uint32_t)stmt_id) != 0)
return luaT_push_nil_and_error(L);
return 0;
}
@@ -164,7 +164,7 @@ lbox_unprepare(struct lua_State *L)
void
port_sql_dump_lua(struct port *port, struct lua_State *L, bool is_flat)
{
- (void) is_flat;
+ (void)is_flat;
assert(is_flat == false);
assert(port->vtab == &port_sql_vtab);
struct sql *db = sql_get();
@@ -180,9 +180,9 @@ port_sql_dump_lua(struct port *port, struct lua_State *L, bool is_flat)
break;
}
case DML_EXECUTE: {
- assert(((struct port_c *) port)->size == 0);
+ assert(((struct port_c *)port)->size == 0);
struct stailq *autoinc_id_list =
- vdbe_autoinc_id_list((struct Vdbe *) stmt);
+ vdbe_autoinc_id_list((struct Vdbe *)stmt);
lua_createtable(L, 0, stailq_empty(autoinc_id_list) ? 1 : 2);
luaL_pushuint64(L, db->nChange);
@@ -192,7 +192,8 @@ port_sql_dump_lua(struct port *port, struct lua_State *L, bool is_flat)
lua_newtable(L);
int i = 1;
struct autoinc_id_entry *id_entry;
- stailq_foreach_entry(id_entry, autoinc_id_list, link) {
+ stailq_foreach_entry(id_entry, autoinc_id_list, link)
+ {
if (id_entry->id >= 0)
luaL_pushuint64(L, id_entry->id);
else
@@ -236,8 +237,8 @@ port_sql_dump_lua(struct port *port, struct lua_State *L, bool is_flat)
lua_setfield(L, -2, "unprepare");
break;
}
- case DML_PREPARE : {
- assert(((struct port_c *) port)->size == 0);
+ case DML_PREPARE: {
+ assert(((struct port_c *)port)->size == 0);
/* Format is following:
* stmt_id,
* param_count,
@@ -264,7 +265,7 @@ port_sql_dump_lua(struct port *port, struct lua_State *L, bool is_flat)
lua_setfield(L, -2, "unprepare");
break;
}
- default:{
+ default: {
unreachable();
}
}
@@ -296,16 +297,18 @@ lua_sql_bind_decode(struct lua_State *L, struct sql_bind *bind, int idx, int i)
*/
lua_pushnil(L);
lua_next(L, -2);
- if (! lua_isstring(L, -2)) {
- diag_set(ClientError, ER_ILLEGAL_PARAMS, "name of the "\
+ if (!lua_isstring(L, -2)) {
+ diag_set(ClientError, ER_ILLEGAL_PARAMS,
+ "name of the "
"parameter should be a string.");
return -1;
}
/* Check that the table is one-row sized. */
lua_pushvalue(L, -2);
if (lua_next(L, -4) != 0) {
- diag_set(ClientError, ER_ILLEGAL_PARAMS, "SQL bind "\
- "named parameter should be a table with "\
+ diag_set(ClientError, ER_ILLEGAL_PARAMS,
+ "SQL bind "
+ "named parameter should be a table with "
"one key - {name = value}");
return -1;
}
@@ -399,7 +402,7 @@ lua_sql_bind_list_decode(struct lua_State *L, struct sql_bind **out_bind,
return 0;
if (bind_count > SQL_BIND_PARAMETER_MAX) {
diag_set(ClientError, ER_SQL_BIND_PARAMETER_MAX,
- (int) bind_count);
+ (int)bind_count);
return -1;
}
struct region *region = &fiber()->gc;
@@ -410,8 +413,8 @@ lua_sql_bind_list_decode(struct lua_State *L, struct sql_bind **out_bind,
* sql_stmt_finalize() or in txn_commit()/txn_rollback() if
* there is an active transaction.
*/
- struct sql_bind *bind = region_alloc_array(region, typeof(bind[0]),
- bind_count, &size);
+ struct sql_bind *bind =
+ region_alloc_array(region, typeof(bind[0]), bind_count, &size);
if (bind == NULL) {
diag_set(OutOfMemory, size, "region_alloc_array", "bind");
return -1;
@@ -435,12 +438,12 @@ lbox_execute(struct lua_State *L)
struct port port;
int top = lua_gettop(L);
- if ((top != 1 && top != 2) || ! lua_isstring(L, 1))
+ if ((top != 1 && top != 2) || !lua_isstring(L, 1))
return luaL_error(L, "Usage: box.execute(sqlstring[, params]) "
- "or box.execute(stmt_id[, params])");
+ "or box.execute(stmt_id[, params])");
if (top == 2) {
- if (! lua_istable(L, 2))
+ if (!lua_istable(L, 2))
return luaL_error(L, "Second argument must be a table");
bind_count = lua_sql_bind_list_decode(L, &bind, 2);
if (bind_count < 0)
@@ -452,8 +455,8 @@ lbox_execute(struct lua_State *L)
*/
if (lua_type(L, 1) == LUA_TSTRING) {
const char *sql = lua_tolstring(L, 1, &length);
- if (sql_prepare_and_execute(sql, length, bind, bind_count, &port,
- &fiber()->gc) != 0)
+ if (sql_prepare_and_execute(sql, length, bind, bind_count,
+ &port, &fiber()->gc) != 0)
return luaT_push_nil_and_error(L);
} else {
assert(lua_type(L, 1) == LUA_TNUMBER);
@@ -479,7 +482,7 @@ lbox_prepare(struct lua_State *L)
struct port port;
int top = lua_gettop(L);
- if ((top != 1 && top != 2) || ! lua_isstring(L, 1))
+ if ((top != 1 && top != 2) || !lua_isstring(L, 1))
return luaL_error(L, "Usage: box.prepare(sqlstring)");
const char *sql = lua_tolstring(L, 1, &length);
diff --git a/src/box/lua/index.c b/src/box/lua/index.c
index 4cf3c4d..5996979 100644
--- a/src/box/lua/index.c
+++ b/src/box/lua/index.c
@@ -90,8 +90,8 @@ lbox_index_update(lua_State *L)
const char *ops = lbox_encode_tuple_on_gc(L, 4, &ops_len);
struct tuple *result;
- if (box_update(space_id, index_id, key, key + key_len,
- ops, ops + ops_len, 1, &result) != 0)
+ if (box_update(space_id, index_id, key, key + key_len, ops,
+ ops + ops_len, 1, &result) != 0)
return luaT_error(L);
return luaT_pushtupleornil(L, result);
}
@@ -111,8 +111,8 @@ lbox_upsert(lua_State *L)
const char *ops = lbox_encode_tuple_on_gc(L, 3, &ops_len);
struct tuple *result;
- if (box_upsert(space_id, 0, tuple, tuple + tuple_len,
- ops, ops + ops_len, 1, &result) != 0)
+ if (box_upsert(space_id, 0, tuple, tuple + tuple_len, ops,
+ ops + ops_len, 1, &result) != 0)
return luaT_error(L);
return luaT_pushtupleornil(L, result);
}
@@ -140,7 +140,8 @@ lbox_index_random(lua_State *L)
{
if (lua_gettop(L) != 3 || !lua_isnumber(L, 1) || !lua_isnumber(L, 2) ||
!lua_isnumber(L, 3))
- return luaL_error(L, "Usage index.random(space_id, index_id, rnd)");
+ return luaL_error(
+ L, "Usage index.random(space_id, index_id, rnd)");
uint32_t space_id = lua_tonumber(L, 1);
uint32_t index_id = lua_tonumber(L, 2);
@@ -156,7 +157,8 @@ static int
lbox_index_get(lua_State *L)
{
if (lua_gettop(L) != 3 || !lua_isnumber(L, 1) || !lua_isnumber(L, 2))
- return luaL_error(L, "Usage index.get(space_id, index_id, key)");
+ return luaL_error(L,
+ "Usage index.get(space_id, index_id, key)");
uint32_t space_id = lua_tonumber(L, 1);
uint32_t index_id = lua_tonumber(L, 2);
@@ -173,7 +175,8 @@ static int
lbox_index_min(lua_State *L)
{
if (lua_gettop(L) != 3 || !lua_isnumber(L, 1) || !lua_isnumber(L, 2))
- return luaL_error(L, "usage index.min(space_id, index_id, key)");
+ return luaL_error(L,
+ "usage index.min(space_id, index_id, key)");
uint32_t space_id = lua_tonumber(L, 1);
uint32_t index_id = lua_tonumber(L, 2);
@@ -190,7 +193,8 @@ static int
lbox_index_max(lua_State *L)
{
if (lua_gettop(L) != 3 || !lua_isnumber(L, 1) || !lua_isnumber(L, 2))
- return luaL_error(L, "usage index.max(space_id, index_id, key)");
+ return luaL_error(L,
+ "usage index.max(space_id, index_id, key)");
uint32_t space_id = lua_tonumber(L, 1);
uint32_t index_id = lua_tonumber(L, 2);
@@ -209,7 +213,7 @@ lbox_index_count(lua_State *L)
if (lua_gettop(L) != 4 || !lua_isnumber(L, 1) || !lua_isnumber(L, 2) ||
!lua_isnumber(L, 3)) {
return luaL_error(L, "usage index.count(space_id, index_id, "
- "iterator, key)");
+ "iterator, key)");
}
uint32_t space_id = lua_tonumber(L, 1);
@@ -244,13 +248,16 @@ lbox_index_iterator(lua_State *L)
{
if (lua_gettop(L) != 4 || !lua_isnumber(L, 1) || !lua_isnumber(L, 2) ||
!lua_isnumber(L, 3))
- return luaL_error(L, "usage index.iterator(space_id, index_id, type, key)");
+ return luaL_error(
+ L,
+ "usage index.iterator(space_id, index_id, type, key)");
uint32_t space_id = lua_tonumber(L, 1);
uint32_t index_id = lua_tonumber(L, 2);
uint32_t iterator = lua_tonumber(L, 3);
size_t mpkey_len;
- const char *mpkey = lua_tolstring(L, 4, &mpkey_len); /* Key encoded by Lua */
+ const char *mpkey =
+ lua_tolstring(L, 4, &mpkey_len); /* Key encoded by Lua */
/* const char *key = lbox_encode_tuple_on_gc(L, 4, key_len); */
struct iterator *it = box_index_iterator(space_id, index_id, iterator,
mpkey, mpkey + mpkey_len);
@@ -258,8 +265,8 @@ lbox_index_iterator(lua_State *L)
return luaT_error(L);
assert(CTID_STRUCT_ITERATOR_REF != 0);
- struct iterator **ptr = (struct iterator **) luaL_pushcdata(L,
- CTID_STRUCT_ITERATOR_REF);
+ struct iterator **ptr =
+ (struct iterator **)luaL_pushcdata(L, CTID_STRUCT_ITERATOR_REF);
*ptr = it; /* NULL handled by Lua, gc also set by Lua */
return 1;
}
@@ -274,10 +281,10 @@ lbox_iterator_next(lua_State *L)
assert(CTID_STRUCT_ITERATOR_REF != 0);
uint32_t ctypeid;
void *data = luaL_checkcdata(L, 1, &ctypeid);
- if (ctypeid != (uint32_t) CTID_STRUCT_ITERATOR_REF)
+ if (ctypeid != (uint32_t)CTID_STRUCT_ITERATOR_REF)
return luaL_error(L, "usage: next(state)");
- struct iterator *itr = *(struct iterator **) data;
+ struct iterator *itr = *(struct iterator **)data;
struct tuple *tuple;
if (box_iterator_next(itr, &tuple) != 0)
return luaT_error(L);
@@ -336,13 +343,11 @@ box_lua_index_init(struct lua_State *L)
/* Get CTypeIDs */
int rc = luaL_cdef(L, "struct iterator;");
assert(rc == 0);
- (void) rc;
+ (void)rc;
CTID_STRUCT_ITERATOR_REF = luaL_ctypeid(L, "struct iterator&");
assert(CTID_STRUCT_ITERATOR_REF != 0);
- static const struct luaL_Reg indexlib [] = {
- {NULL, NULL}
- };
+ static const struct luaL_Reg indexlib[] = { { NULL, NULL } };
/* box.index */
luaL_register_module(L, "box.index", indexlib);
@@ -350,22 +355,22 @@ box_lua_index_init(struct lua_State *L)
lua_pop(L, 1);
static const struct luaL_Reg boxlib_internal[] = {
- {"insert", lbox_insert},
- {"replace", lbox_replace},
- {"update", lbox_index_update},
- {"upsert", lbox_upsert},
- {"delete", lbox_index_delete},
- {"random", lbox_index_random},
- {"get", lbox_index_get},
- {"min", lbox_index_min},
- {"max", lbox_index_max},
- {"count", lbox_index_count},
- {"iterator", lbox_index_iterator},
- {"iterator_next", lbox_iterator_next},
- {"truncate", lbox_truncate},
- {"stat", lbox_index_stat},
- {"compact", lbox_index_compact},
- {NULL, NULL}
+ { "insert", lbox_insert },
+ { "replace", lbox_replace },
+ { "update", lbox_index_update },
+ { "upsert", lbox_upsert },
+ { "delete", lbox_index_delete },
+ { "random", lbox_index_random },
+ { "get", lbox_index_get },
+ { "min", lbox_index_min },
+ { "max", lbox_index_max },
+ { "count", lbox_index_count },
+ { "iterator", lbox_index_iterator },
+ { "iterator_next", lbox_iterator_next },
+ { "truncate", lbox_truncate },
+ { "stat", lbox_index_stat },
+ { "compact", lbox_index_compact },
+ { NULL, NULL }
};
luaL_register(L, "box.internal", boxlib_internal);
diff --git a/src/box/lua/info.c b/src/box/lua/info.c
index cac3fd4..66d314c 100644
--- a/src/box/lua/info.c
+++ b/src/box/lua/info.c
@@ -60,7 +60,8 @@ lbox_pushvclock(struct lua_State *L, const struct vclock *vclock)
lua_createtable(L, 0, vclock_size(vclock));
struct vclock_iterator it;
vclock_iterator_init(&it, vclock);
- vclock_foreach(&it, replica) {
+ vclock_foreach(&it, replica)
+ {
lua_pushinteger(L, replica.id);
luaL_pushuint64(L, replica.lsn);
lua_settable(L, -3);
@@ -91,7 +92,8 @@ lbox_pushapplier(lua_State *L, struct applier *applier)
char *d = status;
const char *s = applier_state_strs[applier->state] + strlen("APPLIER_");
assert(strlen(s) < sizeof(status));
- while ((*(d++) = tolower(*(s++))));
+ while ((*(d++) = tolower(*(s++))))
+ ;
lua_pushstring(L, "status");
lua_pushstring(L, status);
@@ -104,11 +106,12 @@ lbox_pushapplier(lua_State *L, struct applier *applier)
lua_pushstring(L, "idle");
lua_pushnumber(L, ev_monotonic_now(loop()) -
- applier->last_row_time);
+ applier->last_row_time);
lua_settable(L, -3);
char name[APPLIER_SOURCE_MAXLEN];
- int total = uri_format(name, sizeof(name), &applier->uri, false);
+ int total =
+ uri_format(name, sizeof(name), &applier->uri, false);
/*
* total can be greater than sizeof(name) if
* name has insufficient length. Terminating
@@ -131,7 +134,7 @@ lbox_pushrelay(lua_State *L, struct relay *relay)
lua_newtable(L);
lua_pushstring(L, "status");
- switch(relay_get_state(relay)) {
+ switch (relay_get_state(relay)) {
case RELAY_FOLLOW:
lua_pushstring(L, "follow");
lua_settable(L, -3);
@@ -140,11 +143,10 @@ lbox_pushrelay(lua_State *L, struct relay *relay)
lua_settable(L, -3);
lua_pushstring(L, "idle");
lua_pushnumber(L, ev_monotonic_now(loop()) -
- relay_last_row_time(relay));
+ relay_last_row_time(relay));
lua_settable(L, -3);
break;
- case RELAY_STOPPED:
- {
+ case RELAY_STOPPED: {
lua_pushstring(L, "stopped");
lua_settable(L, -3);
@@ -153,7 +155,8 @@ lbox_pushrelay(lua_State *L, struct relay *relay)
lbox_push_replication_error_message(L, e, -1);
break;
}
- default: unreachable();
+ default:
+ unreachable();
}
}
@@ -202,7 +205,8 @@ lbox_info_replication(struct lua_State *L)
lua_setfield(L, -2, "__serialize");
lua_setmetatable(L, -2);
- replicaset_foreach(replica) {
+ replicaset_foreach(replica)
+ {
/* Applier hasn't received replica id yet */
if (replica->id == REPLICA_ID_NIL)
continue;
@@ -226,7 +230,8 @@ lbox_info_replication_anon_call(struct lua_State *L)
lua_setfield(L, -2, "__serialize");
lua_setmetatable(L, -2);
- replicaset_foreach(replica) {
+ replicaset_foreach(replica)
+ {
if (!replica->anon)
continue;
@@ -450,7 +455,8 @@ lbox_info_gc_call(struct lua_State *L)
count = 0;
struct gc_checkpoint *checkpoint;
- gc_foreach_checkpoint(checkpoint) {
+ gc_foreach_checkpoint(checkpoint)
+ {
lua_createtable(L, 0, 2);
lua_pushstring(L, "vclock");
@@ -465,7 +471,8 @@ lbox_info_gc_call(struct lua_State *L)
lua_newtable(L);
int ref_idx = 0;
struct gc_checkpoint_ref *ref;
- gc_foreach_checkpoint_ref(ref, checkpoint) {
+ gc_foreach_checkpoint_ref(ref, checkpoint)
+ {
lua_pushstring(L, ref->name);
lua_rawseti(L, -2, ++ref_idx);
}
@@ -594,38 +601,38 @@ lbox_info_election(struct lua_State *L)
}
static const struct luaL_Reg lbox_info_dynamic_meta[] = {
- {"id", lbox_info_id},
- {"uuid", lbox_info_uuid},
- {"lsn", lbox_info_lsn},
- {"signature", lbox_info_signature},
- {"vclock", lbox_info_vclock},
- {"ro", lbox_info_ro},
- {"replication", lbox_info_replication},
- {"replication_anon", lbox_info_replication_anon},
- {"status", lbox_info_status},
- {"uptime", lbox_info_uptime},
- {"pid", lbox_info_pid},
- {"cluster", lbox_info_cluster},
- {"memory", lbox_info_memory},
- {"gc", lbox_info_gc},
- {"vinyl", lbox_info_vinyl},
- {"sql", lbox_info_sql},
- {"listen", lbox_info_listen},
- {"election", lbox_info_election},
- {NULL, NULL}
+ { "id", lbox_info_id },
+ { "uuid", lbox_info_uuid },
+ { "lsn", lbox_info_lsn },
+ { "signature", lbox_info_signature },
+ { "vclock", lbox_info_vclock },
+ { "ro", lbox_info_ro },
+ { "replication", lbox_info_replication },
+ { "replication_anon", lbox_info_replication_anon },
+ { "status", lbox_info_status },
+ { "uptime", lbox_info_uptime },
+ { "pid", lbox_info_pid },
+ { "cluster", lbox_info_cluster },
+ { "memory", lbox_info_memory },
+ { "gc", lbox_info_gc },
+ { "vinyl", lbox_info_vinyl },
+ { "sql", lbox_info_sql },
+ { "listen", lbox_info_listen },
+ { "election", lbox_info_election },
+ { NULL, NULL }
};
static const struct luaL_Reg lbox_info_dynamic_meta_v16[] = {
- {"server", lbox_info_server},
- {NULL, NULL}
+ { "server", lbox_info_server },
+ { NULL, NULL }
};
/** Evaluate box.info.* function value and push it on the stack. */
static int
lbox_info_index(struct lua_State *L)
{
- lua_pushvalue(L, -1); /* dup key */
- lua_gettable(L, lua_upvalueindex(1)); /* table[key] */
+ lua_pushvalue(L, -1); /* dup key */
+ lua_gettable(L, lua_upvalueindex(1)); /* table[key] */
if (!lua_isfunction(L, -1)) {
/* No such key. Leave nil is on the stack. */
@@ -683,13 +690,11 @@ lbox_info_call(struct lua_State *L)
void
box_lua_info_init(struct lua_State *L)
{
- static const struct luaL_Reg infolib [] = {
- {NULL, NULL}
- };
+ static const struct luaL_Reg infolib[] = { { NULL, NULL } };
luaL_register_module(L, "box.info", infolib);
- lua_newtable(L); /* metatable for info */
+ lua_newtable(L); /* metatable for info */
lua_pushstring(L, "__index");
diff --git a/src/box/lua/init.c b/src/box/lua/init.c
index d0316ef..05c8b7e 100644
--- a/src/box/lua/init.c
+++ b/src/box/lua/init.c
@@ -70,41 +70,28 @@
static uint32_t CTID_STRUCT_TXN_SAVEPOINT_PTR = 0;
-extern char session_lua[],
- tuple_lua[],
- key_def_lua[],
- schema_lua[],
- load_cfg_lua[],
- xlog_lua[],
+extern char session_lua[], tuple_lua[], key_def_lua[], schema_lua[],
+ load_cfg_lua[], xlog_lua[],
#if ENABLE_FEEDBACK_DAEMON
feedback_daemon_lua[],
#endif
- net_box_lua[],
- upgrade_lua[],
- console_lua[],
- merger_lua[];
-
-static const char *lua_sources[] = {
- "box/session", session_lua,
- "box/tuple", tuple_lua,
- "box/schema", schema_lua,
+ net_box_lua[], upgrade_lua[], console_lua[], merger_lua[];
+
+static const char *lua_sources[] = { "box/session", session_lua, "box/tuple",
+ tuple_lua, "box/schema", schema_lua,
#if ENABLE_FEEDBACK_DAEMON
- /*
+ /*
* It is important to initialize the daemon before
* load_cfg, because the latter picks up some values
* from the feedback daemon.
*/
- "box/feedback_daemon", feedback_daemon_lua,
+ "box/feedback_daemon", feedback_daemon_lua,
#endif
- "box/upgrade", upgrade_lua,
- "box/net_box", net_box_lua,
- "box/console", console_lua,
- "box/load_cfg", load_cfg_lua,
- "box/xlog", xlog_lua,
- "box/key_def", key_def_lua,
- "box/merger", merger_lua,
- NULL
-};
+ "box/upgrade", upgrade_lua, "box/net_box",
+ net_box_lua, "box/console", console_lua,
+ "box/load_cfg", load_cfg_lua, "box/xlog",
+ xlog_lua, "box/key_def", key_def_lua,
+ "box/merger", merger_lua, NULL };
static int
lbox_commit(lua_State *L)
@@ -193,8 +180,8 @@ lbox_rollback_to_savepoint(struct lua_State *L)
if (lua_gettop(L) != 1 ||
(svp = luaT_check_txn_savepoint(L, 1, &svp_txn_id)) == NULL)
- return luaL_error(L,
- "Usage: box.rollback_to_savepoint(savepoint)");
+ return luaL_error(
+ L, "Usage: box.rollback_to_savepoint(savepoint)");
/*
* Verify that we're in a transaction and that it is the
@@ -242,7 +229,7 @@ lbox_txn_iterator_next(struct lua_State *L)
return luaT_error(L);
}
struct txn_stmt *stmt =
- (struct txn_stmt *) lua_topointer(L, lua_upvalueindex(2));
+ (struct txn_stmt *)lua_topointer(L, lua_upvalueindex(2));
if (stmt == NULL)
return 0;
while (stmt->row == NULL) {
@@ -302,7 +289,7 @@ lbox_txn_pairs(struct lua_State *L)
static int
lbox_push_txn(struct lua_State *L, void *event)
{
- struct txn *txn = (struct txn *) event;
+ struct txn *txn = (struct txn *)event;
luaL_pushint64(L, txn->id);
lua_pushcclosure(L, lbox_txn_pairs, 1);
return 1;
@@ -313,18 +300,20 @@ lbox_push_txn(struct lua_State *L, void *event)
* @sa lbox_trigger_reset.
*/
#define LBOX_TXN_TRIGGER(name) \
-static int \
-lbox_on_##name(struct lua_State *L) { \
- struct txn *txn = in_txn(); \
- int top = lua_gettop(L); \
- if (top > 2 || txn == NULL) { \
- return luaL_error(L, "Usage inside a transaction: " \
- "box.on_" #name "([function | nil, " \
- "[function | nil]])"); \
- } \
- txn_init_triggers(txn); \
- return lbox_trigger_reset(L, 2, &txn->on_##name, lbox_push_txn, NULL); \
-}
+ static int lbox_on_##name(struct lua_State *L) \
+ { \
+ struct txn *txn = in_txn(); \
+ int top = lua_gettop(L); \
+ if (top > 2 || txn == NULL) { \
+ return luaL_error(L, \
+ "Usage inside a transaction: " \
+ "box.on_" #name "([function | nil, " \
+ "[function | nil]])"); \
+ } \
+ txn_init_triggers(txn); \
+ return lbox_trigger_reset(L, 2, &txn->on_##name, \
+ lbox_push_txn, NULL); \
+ }
LBOX_TXN_TRIGGER(commit)
LBOX_TXN_TRIGGER(rollback)
@@ -384,21 +373,18 @@ lbox_backup_stop(struct lua_State *L)
return 0;
}
-static const struct luaL_Reg boxlib[] = {
- {"commit", lbox_commit},
- {"rollback", lbox_rollback},
- {"on_commit", lbox_on_commit},
- {"on_rollback", lbox_on_rollback},
- {"snapshot", lbox_snapshot},
- {"rollback_to_savepoint", lbox_rollback_to_savepoint},
- {NULL, NULL}
-};
+static const struct luaL_Reg boxlib[] = { { "commit", lbox_commit },
+ { "rollback", lbox_rollback },
+ { "on_commit", lbox_on_commit },
+ { "on_rollback", lbox_on_rollback },
+ { "snapshot", lbox_snapshot },
+ { "rollback_to_savepoint",
+ lbox_rollback_to_savepoint },
+ { NULL, NULL } };
-static const struct luaL_Reg boxlib_backup[] = {
- {"start", lbox_backup_start},
- {"stop", lbox_backup_stop},
- {NULL, NULL}
-};
+static const struct luaL_Reg boxlib_backup[] = { { "start", lbox_backup_start },
+ { "stop", lbox_backup_stop },
+ { NULL, NULL } };
/**
* A MsgPack extensions handler, for types defined in box.
@@ -452,8 +438,8 @@ void
box_lua_init(struct lua_State *L)
{
luaL_cdef(L, "struct txn_savepoint;");
- CTID_STRUCT_TXN_SAVEPOINT_PTR = luaL_ctypeid(L,
- "struct txn_savepoint*");
+ CTID_STRUCT_TXN_SAVEPOINT_PTR =
+ luaL_ctypeid(L, "struct txn_savepoint*");
/* Use luaL_register() to set _G.box */
luaL_register(L, "box", boxlib);
@@ -493,12 +479,12 @@ box_lua_init(struct lua_State *L)
for (const char **s = lua_sources; *s; s += 2) {
const char *modname = *s;
const char *modsrc = *(s + 1);
- const char *modfile = lua_pushfstring(L,
- "@builtin/%s.lua", modname);
+ const char *modfile =
+ lua_pushfstring(L, "@builtin/%s.lua", modname);
if (luaL_loadbuffer(L, modsrc, strlen(modsrc), modfile) != 0 ||
lua_pcall(L, 0, 0, 0) != 0)
- panic("Error loading Lua module %s...: %s",
- modname, lua_tostring(L, -1));
+ panic("Error loading Lua module %s...: %s", modname,
+ lua_tostring(L, -1));
lua_pop(L, 1); /* modfile */
}
diff --git a/src/box/lua/key_def.c b/src/box/lua/key_def.c
index 1a99fab..81f54ea 100644
--- a/src/box/lua/key_def.c
+++ b/src/box/lua/key_def.c
@@ -102,7 +102,7 @@ luaT_key_def_set_part(struct lua_State *L, struct key_part_def *part,
}
} else {
lua_getfield(L, -2, "field");
- if (! lua_isnil(L, -1)) {
+ if (!lua_isnil(L, -1)) {
diag_set(IllegalParams,
"Conflicting options: fieldno and field");
return -1;
@@ -171,14 +171,14 @@ luaT_key_def_set_part(struct lua_State *L, struct key_part_def *part,
/* Check for conflicting options. */
if (part->coll_id != COLL_NONE) {
diag_set(IllegalParams, "Conflicting options: "
- "collation_id and collation");
+ "collation_id and collation");
return -1;
}
size_t coll_name_len;
const char *coll_name = lua_tolstring(L, -1, &coll_name_len);
- struct coll_id *coll_id = coll_by_name(coll_name,
- coll_name_len);
+ struct coll_id *coll_id =
+ coll_by_name(coll_name, coll_name_len);
if (coll_id == NULL) {
diag_set(IllegalParams, "Unknown collation: \"%s\"",
coll_name);
@@ -198,8 +198,8 @@ luaT_key_def_set_part(struct lua_State *L, struct key_part_def *part,
diag_set(IllegalParams, "invalid path");
return -1;
}
- if ((size_t)json_path_multikey_offset(path, path_len,
- TUPLE_INDEX_BASE) != path_len) {
+ if ((size_t)json_path_multikey_offset(
+ path, path_len, TUPLE_INDEX_BASE) != path_len) {
diag_set(IllegalParams, "multikey path is unsupported");
return -1;
}
@@ -358,15 +358,14 @@ lbox_key_def_compare_with_key(struct lua_State *L)
size_t key_len;
const char *key_end, *key = lbox_encode_tuple_on_gc(L, 3, &key_len);
uint32_t part_count = mp_decode_array(&key);
- if (key_validate_parts(key_def, key, part_count, true,
- &key_end) != 0) {
+ if (key_validate_parts(key_def, key, part_count, true, &key_end) != 0) {
region_truncate(region, region_svp);
tuple_unref(tuple);
return luaT_error(L);
}
- int rc = tuple_compare_with_key(tuple, HINT_NONE, key,
- part_count, HINT_NONE, key_def);
+ int rc = tuple_compare_with_key(tuple, HINT_NONE, key, part_count,
+ HINT_NONE, key_def);
region_truncate(region, region_svp);
tuple_unref(tuple);
lua_pushinteger(L, rc);
@@ -395,14 +394,13 @@ lbox_key_def_merge(struct lua_State *L)
if (new_key_def == NULL)
return luaT_error(L);
- *(struct key_def **) luaL_pushcdata(L,
- CTID_STRUCT_KEY_DEF_REF) = new_key_def;
+ *(struct key_def **)luaL_pushcdata(L, CTID_STRUCT_KEY_DEF_REF) =
+ new_key_def;
lua_pushcfunction(L, lbox_key_def_gc);
luaL_setcdatagc(L, -2);
return 1;
}
-
/**
* Push a new table representing a key_def to a Lua stack.
*/
@@ -431,11 +429,11 @@ lbox_key_def_new(struct lua_State *L)
{
if (lua_gettop(L) != 1 || lua_istable(L, 1) != 1)
return luaL_error(L, "Bad params, use: key_def.new({"
- "{fieldno = fieldno, type = type"
- "[, is_nullable = <boolean>]"
- "[, path = <string>]"
- "[, collation_id = <number>]"
- "[, collation = <string>]}, ...}");
+ "{fieldno = fieldno, type = type"
+ "[, is_nullable = <boolean>]"
+ "[, path = <string>]"
+ "[, collation_id = <number>]"
+ "[, collation = <string>]}, ...}");
uint32_t part_count = lua_objlen(L, 1);
@@ -478,8 +476,8 @@ lbox_key_def_new(struct lua_State *L)
*/
key_def_update_optionality(key_def, 0);
- *(struct key_def **) luaL_pushcdata(L,
- CTID_STRUCT_KEY_DEF_REF) = key_def;
+ *(struct key_def **)luaL_pushcdata(L, CTID_STRUCT_KEY_DEF_REF) =
+ key_def;
lua_pushcfunction(L, lbox_key_def_gc);
luaL_setcdatagc(L, -2);
@@ -494,13 +492,13 @@ luaopen_key_def(struct lua_State *L)
/* Export C functions to Lua. */
static const struct luaL_Reg meta[] = {
- {"new", lbox_key_def_new},
- {"extract_key", lbox_key_def_extract_key},
- {"compare", lbox_key_def_compare},
- {"compare_with_key", lbox_key_def_compare_with_key},
- {"merge", lbox_key_def_merge},
- {"totable", lbox_key_def_to_table},
- {NULL, NULL}
+ { "new", lbox_key_def_new },
+ { "extract_key", lbox_key_def_extract_key },
+ { "compare", lbox_key_def_compare },
+ { "compare_with_key", lbox_key_def_compare_with_key },
+ { "merge", lbox_key_def_merge },
+ { "totable", lbox_key_def_to_table },
+ { NULL, NULL }
};
luaL_register_module(L, "key_def", meta);
return 1;
diff --git a/src/box/lua/merger.c b/src/box/lua/merger.c
index 583946c..1dbca5b 100644
--- a/src/box/lua/merger.c
+++ b/src/box/lua/merger.c
@@ -37,26 +37,26 @@
#include <stdlib.h>
#include <string.h>
-#include <lua.h> /* lua_*() */
-#include <lauxlib.h> /* luaL_*() */
+#include <lua.h> /* lua_*() */
+#include <lauxlib.h> /* luaL_*() */
-#include "fiber.h" /* fiber() */
-#include "diag.h" /* diag_set() */
+#include "fiber.h" /* fiber() */
+#include "diag.h" /* diag_set() */
-#include "box/tuple.h" /* tuple_format_runtime,
+#include "box/tuple.h" /* tuple_format_runtime,
tuple_*(), ... */
-#include "lua/error.h" /* luaT_error() */
-#include "lua/utils.h" /* luaL_pushcdata(),
+#include "lua/error.h" /* luaT_error() */
+#include "lua/utils.h" /* luaL_pushcdata(),
luaL_iterator_*() */
#include "box/lua/key_def.h" /* luaT_check_key_def() */
#include "box/lua/tuple.h" /* luaT_tuple_new() */
-#include "small/ibuf.h" /* struct ibuf */
-#include "msgpuck.h" /* mp_*() */
+#include "small/ibuf.h" /* struct ibuf */
+#include "msgpuck.h" /* mp_*() */
-#include "box/merger.h" /* merge_source_*, merger_*() */
+#include "box/merger.h" /* merge_source_*, merger_*() */
static uint32_t CTID_STRUCT_MERGE_SOURCE_REF = 0;
@@ -105,7 +105,7 @@ decode_header(struct ibuf *buf, size_t *len_p)
if (ok)
ok = mp_check_array(buf->rpos, buf->wpos) <= 0;
if (ok)
- *len_p = mp_decode_array((const char **) &buf->rpos);
+ *len_p = mp_decode_array((const char **)&buf->rpos);
return ok ? 0 : -1;
}
@@ -270,8 +270,8 @@ lbox_merge_source_new(struct lua_State *L, const char *func_name,
merge_source_unref(source);
return luaT_error(L);
}
- *(struct merge_source **)
- luaL_pushcdata(L, CTID_STRUCT_MERGE_SOURCE_REF) = source;
+ *(struct merge_source **)luaL_pushcdata(
+ L, CTID_STRUCT_MERGE_SOURCE_REF) = source;
lua_pushcfunction(L, lbox_merge_source_gc);
luaL_setcdatagc(L, -2);
@@ -310,8 +310,8 @@ luaT_merger_new_parse_sources(struct lua_State *L, int idx,
{
/* Allocate sources array. */
uint32_t source_count = lua_objlen(L, idx);
- const size_t sources_size = sizeof(struct merge_source *) *
- source_count;
+ const size_t sources_size =
+ sizeof(struct merge_source *) * source_count;
struct merge_source **sources = malloc(sources_size);
if (sources == NULL) {
diag_set(OutOfMemory, sources_size, "malloc", "sources");
@@ -352,12 +352,12 @@ lbox_merger_new(struct lua_State *L)
struct key_def *key_def;
int top = lua_gettop(L);
bool ok = (top == 2 || top == 3) &&
- /* key_def. */
- (key_def = luaT_check_key_def(L, 1)) != NULL &&
- /* Sources. */
- lua_istable(L, 2) == 1 &&
- /* Opts. */
- (lua_isnoneornil(L, 3) == 1 || lua_istable(L, 3) == 1);
+ /* key_def. */
+ (key_def = luaT_check_key_def(L, 1)) != NULL &&
+ /* Sources. */
+ lua_istable(L, 2) == 1 &&
+ /* Opts. */
+ (lua_isnoneornil(L, 3) == 1 || lua_istable(L, 3) == 1);
if (!ok)
return lbox_merger_new_usage(L, NULL);
@@ -379,21 +379,21 @@ lbox_merger_new(struct lua_State *L)
}
uint32_t source_count = 0;
- struct merge_source **sources = luaT_merger_new_parse_sources(L, 2,
- &source_count);
+ struct merge_source **sources =
+ luaT_merger_new_parse_sources(L, 2, &source_count);
if (sources == NULL)
return luaT_error(L);
- struct merge_source *merger = merger_new(key_def, sources, source_count,
- reverse);
+ struct merge_source *merger =
+ merger_new(key_def, sources, source_count, reverse);
free(sources);
if (merger == NULL) {
merge_source_unref(merger);
return luaT_error(L);
}
- *(struct merge_source **)
- luaL_pushcdata(L, CTID_STRUCT_MERGE_SOURCE_REF) = merger;
+ *(struct merge_source **)luaL_pushcdata(
+ L, CTID_STRUCT_MERGE_SOURCE_REF) = merger;
lua_pushcfunction(L, lbox_merge_source_gc);
luaL_setcdatagc(L, -2);
@@ -435,8 +435,7 @@ static void
luaL_merge_source_buffer_destroy(struct merge_source *base);
static int
luaL_merge_source_buffer_next(struct merge_source *base,
- struct tuple_format *format,
- struct tuple **out);
+ struct tuple_format *format, struct tuple **out);
/* Non-virtual methods */
@@ -455,8 +454,8 @@ luaL_merge_source_buffer_new(struct lua_State *L)
.next = luaL_merge_source_buffer_next,
};
- struct merge_source_buffer *source = malloc(
- sizeof(struct merge_source_buffer));
+ struct merge_source_buffer *source =
+ malloc(sizeof(struct merge_source_buffer));
if (source == NULL) {
diag_set(OutOfMemory, sizeof(struct merge_source_buffer),
"malloc", "merge_source_buffer");
@@ -492,8 +491,10 @@ luaL_merge_source_buffer_fetch_impl(struct merge_source_buffer *source,
/* Handle incorrect results count. */
if (nresult != 2) {
- diag_set(IllegalParams, "Expected <state>, <buffer>, got %d "
- "return values", nresult);
+ diag_set(IllegalParams,
+ "Expected <state>, <buffer>, got %d "
+ "return values",
+ nresult);
return -1;
}
@@ -550,8 +551,8 @@ luaL_merge_source_buffer_fetch(struct merge_source_buffer *source)
static void
luaL_merge_source_buffer_destroy(struct merge_source *base)
{
- struct merge_source_buffer *source = container_of(base,
- struct merge_source_buffer, base);
+ struct merge_source_buffer *source =
+ container_of(base, struct merge_source_buffer, base);
assert(source->fetch_it != NULL);
luaL_iterator_delete(source->fetch_it);
@@ -568,11 +569,10 @@ luaL_merge_source_buffer_destroy(struct merge_source *base)
*/
static int
luaL_merge_source_buffer_next(struct merge_source *base,
- struct tuple_format *format,
- struct tuple **out)
+ struct tuple_format *format, struct tuple **out)
{
- struct merge_source_buffer *source = container_of(base,
- struct merge_source_buffer, base);
+ struct merge_source_buffer *source =
+ container_of(base, struct merge_source_buffer, base);
/*
* Handle the case when all data were processed: ask a
@@ -599,7 +599,7 @@ luaL_merge_source_buffer_next(struct merge_source *base,
return -1;
}
--source->remaining_tuple_count;
- source->buf->rpos = (char *) tuple_end;
+ source->buf->rpos = (char *)tuple_end;
if (format == NULL)
format = tuple_format_runtime;
struct tuple *tuple = tuple_new(format, tuple_beg, tuple_end);
@@ -648,8 +648,7 @@ static void
luaL_merge_source_table_destroy(struct merge_source *base);
static int
luaL_merge_source_table_next(struct merge_source *base,
- struct tuple_format *format,
- struct tuple **out);
+ struct tuple_format *format, struct tuple **out);
/* Non-virtual methods */
@@ -666,8 +665,8 @@ luaL_merge_source_table_new(struct lua_State *L)
.next = luaL_merge_source_table_next,
};
- struct merge_source_table *source = malloc(
- sizeof(struct merge_source_table));
+ struct merge_source_table *source =
+ malloc(sizeof(struct merge_source_table));
if (source == NULL) {
diag_set(OutOfMemory, sizeof(struct merge_source_table),
"malloc", "merge_source_table");
@@ -705,8 +704,10 @@ luaL_merge_source_table_fetch(struct merge_source_table *source,
/* Handle incorrect results count. */
if (nresult != 2) {
- diag_set(IllegalParams, "Expected <state>, <table>, got %d "
- "return values", nresult);
+ diag_set(IllegalParams,
+ "Expected <state>, <table>, got %d "
+ "return values",
+ nresult);
return -1;
}
@@ -737,8 +738,8 @@ luaL_merge_source_table_fetch(struct merge_source_table *source,
static void
luaL_merge_source_table_destroy(struct merge_source *base)
{
- struct merge_source_table *source = container_of(base,
- struct merge_source_table, base);
+ struct merge_source_table *source =
+ container_of(base, struct merge_source_table, base);
assert(source->fetch_it != NULL);
luaL_iterator_delete(source->fetch_it);
@@ -754,11 +755,10 @@ luaL_merge_source_table_destroy(struct merge_source *base)
static int
luaL_merge_source_table_next_impl(struct merge_source *base,
struct tuple_format *format,
- struct tuple **out,
- struct lua_State *L)
+ struct tuple **out, struct lua_State *L)
{
- struct merge_source_table *source = container_of(base,
- struct merge_source_table, base);
+ struct merge_source_table *source =
+ container_of(base, struct merge_source_table, base);
if (source->ref > 0) {
lua_rawgeti(L, LUA_REGISTRYINDEX, source->ref);
@@ -806,8 +806,7 @@ luaL_merge_source_table_next_impl(struct merge_source *base,
*/
static int
luaL_merge_source_table_next(struct merge_source *base,
- struct tuple_format *format,
- struct tuple **out)
+ struct tuple_format *format, struct tuple **out)
{
int coro_ref = LUA_NOREF;
int top = -1;
@@ -850,8 +849,7 @@ static void
luaL_merge_source_tuple_destroy(struct merge_source *base);
static int
luaL_merge_source_tuple_next(struct merge_source *base,
- struct tuple_format *format,
- struct tuple **out);
+ struct tuple_format *format, struct tuple **out);
/* Non-virtual methods */
@@ -868,8 +866,8 @@ luaL_merge_source_tuple_new(struct lua_State *L)
.next = luaL_merge_source_tuple_next,
};
- struct merge_source_tuple *source = malloc(
- sizeof(struct merge_source_tuple));
+ struct merge_source_tuple *source =
+ malloc(sizeof(struct merge_source_tuple));
if (source == NULL) {
diag_set(OutOfMemory, sizeof(struct merge_source_tuple),
"malloc", "merge_source_tuple");
@@ -896,7 +894,7 @@ luaL_merge_source_tuple_new(struct lua_State *L)
*/
static int
luaL_merge_source_tuple_fetch(struct merge_source_tuple *source,
- struct lua_State *L)
+ struct lua_State *L)
{
int nresult = luaL_iterator_next(L, source->fetch_it);
@@ -910,14 +908,16 @@ luaL_merge_source_tuple_fetch(struct merge_source_tuple *source,
/* Handle incorrect results count. */
if (nresult != 2) {
- diag_set(IllegalParams, "Expected <state>, <tuple>, got %d "
- "return values", nresult);
+ diag_set(IllegalParams,
+ "Expected <state>, <tuple>, got %d "
+ "return values",
+ nresult);
return -1;
}
/* Set a new tuple as the current chunk. */
lua_insert(L, -2); /* Swap state and tuple. */
- lua_pop(L, 1); /* Pop state. */
+ lua_pop(L, 1); /* Pop state. */
return 1;
}
@@ -932,8 +932,8 @@ luaL_merge_source_tuple_fetch(struct merge_source_tuple *source,
static void
luaL_merge_source_tuple_destroy(struct merge_source *base)
{
- struct merge_source_tuple *source = container_of(base,
- struct merge_source_tuple, base);
+ struct merge_source_tuple *source =
+ container_of(base, struct merge_source_tuple, base);
assert(source->fetch_it != NULL);
luaL_iterator_delete(source->fetch_it);
@@ -947,11 +947,10 @@ luaL_merge_source_tuple_destroy(struct merge_source *base)
static int
luaL_merge_source_tuple_next_impl(struct merge_source *base,
struct tuple_format *format,
- struct tuple **out,
- struct lua_State *L)
+ struct tuple **out, struct lua_State *L)
{
- struct merge_source_tuple *source = container_of(base,
- struct merge_source_tuple, base);
+ struct merge_source_tuple *source =
+ container_of(base, struct merge_source_tuple, base);
int rc = luaL_merge_source_tuple_fetch(source, L);
if (rc < 0)
@@ -981,8 +980,7 @@ luaL_merge_source_tuple_next_impl(struct merge_source *base,
*/
static int
luaL_merge_source_tuple_next(struct merge_source *base,
- struct tuple_format *format,
- struct tuple **out)
+ struct tuple_format *format, struct tuple **out)
{
int coro_ref = LUA_NOREF;
int top = -1;
@@ -1024,10 +1022,10 @@ lbox_merge_source_gen(struct lua_State *L)
{
struct merge_source *source;
bool ok = lua_gettop(L) == 2 && lua_isnil(L, 1) &&
- (source = luaT_check_merge_source(L, 2)) != NULL;
+ (source = luaT_check_merge_source(L, 2)) != NULL;
if (!ok)
return luaL_error(L, "Bad params, use: lbox_merge_source_gen("
- "nil, merge_source)");
+ "nil, merge_source)");
struct tuple *tuple;
if (merge_source_next(source, NULL, &tuple) != 0)
@@ -1039,8 +1037,8 @@ lbox_merge_source_gen(struct lua_State *L)
}
/* Push merge_source, tuple. */
- *(struct merge_source **)
- luaL_pushcdata(L, CTID_STRUCT_MERGE_SOURCE_REF) = source;
+ *(struct merge_source **)luaL_pushcdata(
+ L, CTID_STRUCT_MERGE_SOURCE_REF) = source;
luaT_pushtuple(L, tuple);
/*
@@ -1066,7 +1064,7 @@ lbox_merge_source_ipairs(struct lua_State *L)
{
struct merge_source *source;
bool ok = lua_gettop(L) == 1 &&
- (source = luaT_check_merge_source(L, 1)) != NULL;
+ (source = luaT_check_merge_source(L, 1)) != NULL;
if (!ok)
return luaL_error(L, "Usage: merge_source:ipairs()");
/* Stack: merge_source. */
@@ -1116,8 +1114,8 @@ encode_result_buffer(struct lua_State *L, struct merge_source *source,
/* Fetch, merge and copy tuples to the buffer. */
struct tuple *tuple;
int rc = 0;
- while (result_len < limit && (rc =
- merge_source_next(source, NULL, &tuple)) == 0 &&
+ while (result_len < limit &&
+ (rc = merge_source_next(source, NULL, &tuple)) == 0 &&
tuple != NULL) {
uint32_t bsize = tuple->bsize;
ibuf_reserve(output_buffer, bsize);
@@ -1156,8 +1154,8 @@ create_result_table(struct lua_State *L, struct merge_source *source,
/* Fetch, merge and save tuples to the table. */
struct tuple *tuple;
int rc = 0;
- while (cur - 1 < limit && (rc =
- merge_source_next(source, NULL, &tuple)) == 0 &&
+ while (cur - 1 < limit &&
+ (rc = merge_source_next(source, NULL, &tuple)) == 0 &&
tuple != NULL) {
luaT_pushtuple(L, tuple);
lua_rawseti(L, -2, cur);
@@ -1209,10 +1207,10 @@ lbox_merge_source_select(struct lua_State *L)
struct merge_source *source;
int top = lua_gettop(L);
bool ok = (top == 1 || top == 2) &&
- /* Merge source. */
- (source = luaT_check_merge_source(L, 1)) != NULL &&
- /* Opts. */
- (lua_isnoneornil(L, 2) == 1 || lua_istable(L, 2) == 1);
+ /* Merge source. */
+ (source = luaT_check_merge_source(L, 1)) != NULL &&
+ /* Opts. */
+ (lua_isnoneornil(L, 2) == 1 || lua_istable(L, 2) == 1);
if (!ok)
return lbox_merge_source_select_usage(L, NULL);
@@ -1227,7 +1225,7 @@ lbox_merge_source_select(struct lua_State *L)
if (!lua_isnil(L, -1)) {
if ((output_buffer = luaL_checkibuf(L, -1)) == NULL)
return lbox_merge_source_select_usage(L,
- "buffer");
+ "buffer");
}
lua_pop(L, 1);
@@ -1239,7 +1237,7 @@ lbox_merge_source_select(struct lua_State *L)
limit = lua_tointeger(L, -1);
else
return lbox_merge_source_select_usage(L,
- "limit");
+ "limit");
}
lua_pop(L, 1);
}
@@ -1263,11 +1261,11 @@ luaopen_merger(struct lua_State *L)
/* Export C functions to Lua. */
static const struct luaL_Reg meta[] = {
- {"new_buffer_source", lbox_merger_new_buffer_source},
- {"new_table_source", lbox_merger_new_table_source},
- {"new_tuple_source", lbox_merger_new_tuple_source},
- {"new", lbox_merger_new},
- {NULL, NULL}
+ { "new_buffer_source", lbox_merger_new_buffer_source },
+ { "new_table_source", lbox_merger_new_table_source },
+ { "new_tuple_source", lbox_merger_new_tuple_source },
+ { "new", lbox_merger_new },
+ { NULL, NULL }
};
luaL_register_module(L, "merger", meta);
diff --git a/src/box/lua/misc.cc b/src/box/lua/misc.cc
index e356f2d..fd378b5 100644
--- a/src/box/lua/misc.cc
+++ b/src/box/lua/misc.cc
@@ -54,11 +54,11 @@ lbox_encode_tuple_on_gc(lua_State *L, int idx, size_t *p_len)
size_t used = region_used(gc);
struct mpstream stream;
mpstream_init(&stream, gc, region_reserve_cb, region_alloc_cb,
- luamp_error, L);
+ luamp_error, L);
luamp_encode_tuple(L, luaL_msgpack_default, &stream, idx);
mpstream_flush(&stream);
*p_len = region_used(gc) - used;
- return (char *) region_join_xc(gc, *p_len);
+ return (char *)region_join_xc(gc, *p_len);
}
extern "C" void
@@ -84,9 +84,9 @@ port_c_dump_lua(struct port *base, struct lua_State *L, bool is_flat)
extern "C" void
port_msgpack_dump_lua(struct port *base, struct lua_State *L, bool is_flat)
{
- (void) is_flat;
+ (void)is_flat;
assert(is_flat == true);
- struct port_msgpack *port = (struct port_msgpack *) base;
+ struct port_msgpack *port = (struct port_msgpack *)base;
const char *args = port->data;
uint32_t arg_count = mp_decode_array(&args);
@@ -102,9 +102,9 @@ static int
lbox_select(lua_State *L)
{
if (lua_gettop(L) != 6 || !lua_isnumber(L, 1) || !lua_isnumber(L, 2) ||
- !lua_isnumber(L, 3) || !lua_isnumber(L, 4) || !lua_isnumber(L, 5)) {
+ !lua_isnumber(L, 3) || !lua_isnumber(L, 4) || !lua_isnumber(L, 5)) {
return luaL_error(L, "Usage index:select(iterator, offset, "
- "limit, key)");
+ "limit, key)");
}
uint32_t space_id = lua_tonumber(L, 1);
@@ -117,8 +117,8 @@ lbox_select(lua_State *L)
const char *key = lbox_encode_tuple_on_gc(L, 6, &key_len);
struct port port;
- if (box_select(space_id, index_id, iterator, offset, limit,
- key, key + key_len, &port) != 0) {
+ if (box_select(space_id, index_id, iterator, offset, limit, key,
+ key + key_len, &port) != 0) {
return luaT_error(L);
}
@@ -147,7 +147,8 @@ lbox_check_tuple_format(struct lua_State *L, int narg)
struct tuple_format *format =
*(struct tuple_format **)luaL_checkcdata(L, narg, &ctypeid);
if (ctypeid != CTID_STRUCT_TUPLE_FORMAT_PTR) {
- luaL_error(L, "Invalid argument: 'struct tuple_format *' "
+ luaL_error(L,
+ "Invalid argument: 'struct tuple_format *' "
"expected, got %s)",
lua_typename(L, lua_type(L, narg)));
}
@@ -157,7 +158,7 @@ lbox_check_tuple_format(struct lua_State *L, int narg)
static int
lbox_tuple_format_gc(struct lua_State *L)
{
- struct tuple_format *format = lbox_check_tuple_format(L, 1);
+ struct tuple_format *format = lbox_check_tuple_format(L, 1);
tuple_format_unref(format);
return 0;
}
@@ -165,8 +166,8 @@ lbox_tuple_format_gc(struct lua_State *L)
static int
lbox_push_tuple_format(struct lua_State *L, struct tuple_format *format)
{
- struct tuple_format **ptr = (struct tuple_format **)
- luaL_pushcdata(L, CTID_STRUCT_TUPLE_FORMAT_PTR);
+ struct tuple_format **ptr = (struct tuple_format **)luaL_pushcdata(
+ L, CTID_STRUCT_TUPLE_FORMAT_PTR);
*ptr = format;
tuple_format_ref(format);
lua_pushcfunction(L, lbox_tuple_format_gc);
@@ -188,8 +189,8 @@ lbox_tuple_format_new(struct lua_State *L)
size_t size;
struct region *region = &fiber()->gc;
size_t region_svp = region_used(region);
- struct field_def *fields = region_alloc_array(region, typeof(fields[0]),
- count, &size);
+ struct field_def *fields =
+ region_alloc_array(region, typeof(fields[0]), count, &size);
if (fields == NULL) {
diag_set(OutOfMemory, size, "region_alloc_array", "fields");
return luaT_error(L);
@@ -204,7 +205,7 @@ lbox_tuple_format_new(struct lua_State *L)
lua_pushstring(L, "type");
lua_gettable(L, -2);
- if (! lua_isnil(L, -1)) {
+ if (!lua_isnil(L, -1)) {
const char *type_name = lua_tolstring(L, -1, &len);
fields[i].type = field_type_by_name(type_name, len);
assert(fields[i].type != field_type_MAX);
@@ -213,7 +214,7 @@ lbox_tuple_format_new(struct lua_State *L)
lua_pushstring(L, "name");
lua_gettable(L, -2);
- assert(! lua_isnil(L, -1));
+ assert(!lua_isnil(L, -1));
const char *name = lua_tolstring(L, -1, &len);
fields[i].name = (char *)region_alloc(region, len + 1);
if (fields == NULL) {
@@ -251,9 +252,9 @@ void
box_lua_misc_init(struct lua_State *L)
{
static const struct luaL_Reg boxlib_internal[] = {
- {"select", lbox_select},
- {"new_tuple_format", lbox_tuple_format_new},
- {NULL, NULL}
+ { "select", lbox_select },
+ { "new_tuple_format", lbox_tuple_format_new },
+ { NULL, NULL }
};
luaL_register(L, "box.internal", boxlib_internal);
@@ -261,7 +262,7 @@ box_lua_misc_init(struct lua_State *L)
int rc = luaL_cdef(L, "struct tuple_format;");
assert(rc == 0);
- (void) rc;
+ (void)rc;
CTID_STRUCT_TUPLE_FORMAT_PTR = luaL_ctypeid(L, "struct tuple_format *");
assert(CTID_STRUCT_TUPLE_FORMAT_PTR != 0);
}
diff --git a/src/box/lua/net_box.c b/src/box/lua/net_box.c
index 0b6c362..d2aa3ef 100644
--- a/src/box/lua/net_box.c
+++ b/src/box/lua/net_box.c
@@ -55,11 +55,11 @@
static inline size_t
netbox_prepare_request(lua_State *L, struct mpstream *stream, uint32_t r_type)
{
- struct ibuf *ibuf = (struct ibuf *) lua_topointer(L, 1);
+ struct ibuf *ibuf = (struct ibuf *)lua_topointer(L, 1);
uint64_t sync = luaL_touint64(L, 2);
- mpstream_init(stream, ibuf, ibuf_reserve_cb, ibuf_alloc_cb,
- luamp_error, L);
+ mpstream_init(stream, ibuf, ibuf_reserve_cb, ibuf_alloc_cb, luamp_error,
+ L);
/* Remember initial size of ibuf (see netbox_encode_request()) */
size_t used = ibuf_used(ibuf);
@@ -87,7 +87,7 @@ netbox_encode_request(struct mpstream *stream, size_t initial_size)
{
mpstream_flush(stream);
- struct ibuf *ibuf = (struct ibuf *) stream->ctx;
+ struct ibuf *ibuf = (struct ibuf *)stream->ctx;
/*
* Calculation the start position in ibuf by getting current size
@@ -428,8 +428,7 @@ netbox_decode_greeting(lua_State *L)
buf = lua_tolstring(L, 1, &len);
if (buf == NULL || len != IPROTO_GREETING_SIZE ||
- greeting_decode(buf, &greeting) != 0) {
-
+ greeting_decode(buf, &greeting) != 0) {
lua_pushboolean(L, 0);
lua_pushstring(L, "Invalid greeting");
return 2;
@@ -469,8 +468,8 @@ netbox_communicate(lua_State *L)
{
uint32_t fd = lua_tonumber(L, 1);
const int NETBOX_READAHEAD = 16320;
- struct ibuf *send_buf = (struct ibuf *) lua_topointer(L, 2);
- struct ibuf *recv_buf = (struct ibuf *) lua_topointer(L, 3);
+ struct ibuf *send_buf = (struct ibuf *)lua_topointer(L, 2);
+ struct ibuf *recv_buf = (struct ibuf *)lua_topointer(L, 3);
/* limit or boundary */
size_t limit = SIZE_MAX;
@@ -494,20 +493,18 @@ netbox_communicate(lua_State *L)
int revents = COIO_READ;
while (true) {
/* reader serviced first */
-check_limit:
+ check_limit:
if (ibuf_used(recv_buf) >= limit) {
lua_pushnil(L);
lua_pushinteger(L, (lua_Integer)limit);
return 2;
}
const char *p;
- if (boundary != NULL && (p = memmem(
- recv_buf->rpos,
- ibuf_used(recv_buf),
- boundary, boundary_len)) != NULL) {
+ if (boundary != NULL &&
+ (p = memmem(recv_buf->rpos, ibuf_used(recv_buf), boundary,
+ boundary_len)) != NULL) {
lua_pushnil(L);
- lua_pushinteger(L, (lua_Integer)(
- p - recv_buf->rpos));
+ lua_pushinteger(L, (lua_Integer)(p - recv_buf->rpos));
return 2;
}
@@ -515,13 +512,14 @@ check_limit:
void *p = ibuf_reserve(recv_buf, NETBOX_READAHEAD);
if (p == NULL)
luaL_error(L, "out of memory");
- ssize_t rc = recv(
- fd, recv_buf->wpos, ibuf_unused(recv_buf), 0);
+ ssize_t rc = recv(fd, recv_buf->wpos,
+ ibuf_unused(recv_buf), 0);
if (rc == 0) {
lua_pushinteger(L, ER_NO_CONNECTION);
lua_pushstring(L, "Peer closed");
return 2;
- } if (rc > 0) {
+ }
+ if (rc > 0) {
recv_buf->wpos += rc;
goto check_limit;
} else if (errno == EAGAIN || errno == EWOULDBLOCK)
@@ -531,8 +529,8 @@ check_limit:
}
while ((revents & COIO_WRITE) && ibuf_used(send_buf) != 0) {
- ssize_t rc = send(
- fd, send_buf->rpos, ibuf_used(send_buf), 0);
+ ssize_t rc = send(fd, send_buf->rpos,
+ ibuf_used(send_buf), 0);
if (rc >= 0)
send_buf->rpos += rc;
else if (errno == EAGAIN || errno == EWOULDBLOCK)
@@ -542,8 +540,9 @@ check_limit:
}
ev_tstamp deadline = ev_monotonic_now(loop()) + timeout;
- revents = coio_wait(fd, EV_READ | (ibuf_used(send_buf) != 0 ?
- EV_WRITE : 0), timeout);
+ revents = coio_wait(
+ fd, EV_READ | (ibuf_used(send_buf) != 0 ? EV_WRITE : 0),
+ timeout);
luaL_testcancel(L);
timeout = deadline - ev_monotonic_now(loop());
timeout = MAX(0.0, timeout);
@@ -563,8 +562,8 @@ static int
netbox_encode_execute(lua_State *L)
{
if (lua_gettop(L) < 5)
- return luaL_error(L, "Usage: netbox.encode_execute(ibuf, "\
- "sync, query, parameters, options)");
+ return luaL_error(L, "Usage: netbox.encode_execute(ibuf, "
+ "sync, query, parameters, options)");
struct mpstream stream;
size_t svp = netbox_prepare_request(L, &stream, IPROTO_EXECUTE);
@@ -595,7 +594,7 @@ static int
netbox_encode_prepare(lua_State *L)
{
if (lua_gettop(L) < 3)
- return luaL_error(L, "Usage: netbox.encode_prepare(ibuf, "\
+ return luaL_error(L, "Usage: netbox.encode_prepare(ibuf, "
"sync, query)");
struct mpstream stream;
size_t svp = netbox_prepare_request(L, &stream, IPROTO_PREPARE);
@@ -631,8 +630,7 @@ netbox_decode_data(struct lua_State *L, const char **data,
for (uint32_t j = 0; j < count; ++j) {
const char *begin = *data;
mp_next(data);
- struct tuple *tuple =
- box_tuple_new(format, begin, *data);
+ struct tuple *tuple = box_tuple_new(format, begin, *data);
if (tuple == NULL)
luaT_error(L);
luaT_pushtuple(L, tuple);
@@ -661,10 +659,10 @@ netbox_decode_select(struct lua_State *L)
uint32_t map_size = mp_decode_map(&data);
/* Until 2.0 body has no keys except DATA. */
assert(map_size == 1);
- (void) map_size;
+ (void)map_size;
uint32_t key = mp_decode_uint(&data);
assert(key == IPROTO_DATA);
- (void) key;
+ (void)key;
netbox_decode_data(L, &data, format);
*(const char **)luaL_pushcdata(L, ctypeid) = data;
return 2;
@@ -729,7 +727,7 @@ netbox_decode_metadata(struct lua_State *L, const char **data)
assert(map_size >= 2 && map_size <= 6);
uint32_t key = mp_decode_uint(data);
assert(key == IPROTO_FIELD_NAME);
- (void) key;
+ (void)key;
lua_createtable(L, 0, map_size);
uint32_t name_len, type_len;
const char *str = mp_decode_str(data, &name_len);
@@ -796,7 +794,7 @@ netbox_decode_execute(struct lua_State *L)
int rows_index = 0, meta_index = 0, info_index = 0;
for (uint32_t i = 0; i < map_size; ++i) {
uint32_t key = mp_decode_uint(&data);
- switch(key) {
+ switch (key) {
case IPROTO_DATA:
netbox_decode_data(L, &data, tuple_format_runtime);
rows_index = i - map_size;
@@ -840,7 +838,7 @@ netbox_decode_prepare(struct lua_State *L)
uint32_t stmt_id = 0;
for (uint32_t i = 0; i < map_size; ++i) {
uint32_t key = mp_decode_uint(&data);
- switch(key) {
+ switch (key) {
case IPROTO_STMT_ID: {
stmt_id = mp_decode_uint(&data);
luaL_pushuint64(L, stmt_id);
@@ -863,7 +861,8 @@ netbox_decode_prepare(struct lua_State *L)
luaL_pushuint64(L, bind_count);
bind_count_idx = i - map_size;
break;
- }}
+ }
+ }
}
/* These fields must be present in response. */
assert(stmt_id_idx * bind_meta_idx * bind_count_idx != 0);
@@ -888,25 +887,25 @@ int
luaopen_net_box(struct lua_State *L)
{
static const luaL_Reg net_box_lib[] = {
- { "encode_ping", netbox_encode_ping },
+ { "encode_ping", netbox_encode_ping },
{ "encode_call_16", netbox_encode_call_16 },
- { "encode_call", netbox_encode_call },
- { "encode_eval", netbox_encode_eval },
- { "encode_select", netbox_encode_select },
- { "encode_insert", netbox_encode_insert },
+ { "encode_call", netbox_encode_call },
+ { "encode_eval", netbox_encode_eval },
+ { "encode_select", netbox_encode_select },
+ { "encode_insert", netbox_encode_insert },
{ "encode_replace", netbox_encode_replace },
- { "encode_delete", netbox_encode_delete },
- { "encode_update", netbox_encode_update },
- { "encode_upsert", netbox_encode_upsert },
- { "encode_execute", netbox_encode_execute},
- { "encode_prepare", netbox_encode_prepare},
- { "encode_auth", netbox_encode_auth },
- { "decode_greeting",netbox_decode_greeting },
- { "communicate", netbox_communicate },
- { "decode_select", netbox_decode_select },
+ { "encode_delete", netbox_encode_delete },
+ { "encode_update", netbox_encode_update },
+ { "encode_upsert", netbox_encode_upsert },
+ { "encode_execute", netbox_encode_execute },
+ { "encode_prepare", netbox_encode_prepare },
+ { "encode_auth", netbox_encode_auth },
+ { "decode_greeting", netbox_decode_greeting },
+ { "communicate", netbox_communicate },
+ { "decode_select", netbox_decode_select },
{ "decode_execute", netbox_decode_execute },
{ "decode_prepare", netbox_decode_prepare },
- { NULL, NULL}
+ { NULL, NULL }
};
/* luaL_register_module polutes _G */
lua_newtable(L);
diff --git a/src/box/lua/sequence.c b/src/box/lua/sequence.c
index bf0714c..e33904f 100644
--- a/src/box/lua/sequence.c
+++ b/src/box/lua/sequence.c
@@ -173,16 +173,16 @@ void
box_lua_sequence_init(struct lua_State *L)
{
static const struct luaL_Reg sequence_internal_lib[] = {
- {"next", lbox_sequence_next},
- {"set", lbox_sequence_set},
- {"reset", lbox_sequence_reset},
- {NULL, NULL}
+ { "next", lbox_sequence_next },
+ { "set", lbox_sequence_set },
+ { "reset", lbox_sequence_reset },
+ { NULL, NULL }
};
luaL_register(L, "box.internal.sequence", sequence_internal_lib);
lua_pop(L, 1);
static struct trigger on_alter_sequence_in_lua;
- trigger_create(&on_alter_sequence_in_lua,
- lbox_sequence_new_or_delete, L, NULL);
+ trigger_create(&on_alter_sequence_in_lua, lbox_sequence_new_or_delete,
+ L, NULL);
trigger_add(&on_alter_sequence, &on_alter_sequence_in_lua);
}
diff --git a/src/box/lua/serialize_lua.c b/src/box/lua/serialize_lua.c
index caa08a6..32dcf47 100644
--- a/src/box/lua/serialize_lua.c
+++ b/src/box/lua/serialize_lua.c
@@ -45,36 +45,36 @@
#include "serialize_lua.h"
#if 0
-# define SERIALIZER_TRACE
+#define SERIALIZER_TRACE
#endif
/* Serializer for Lua output mode */
static struct luaL_serializer *serializer_lua;
enum {
- NODE_NONE_BIT = 0,
- NODE_ROOT_BIT = 1,
- NODE_RAW_BIT = 2,
- NODE_LVALUE_BIT = 3,
- NODE_RVALUE_BIT = 4,
- NODE_MAP_KEY_BIT = 5,
- NODE_MAP_VALUE_BIT = 6,
- NODE_EMBRACE_BIT = 7,
- NODE_QUOTE_BIT = 8,
+ NODE_NONE_BIT = 0,
+ NODE_ROOT_BIT = 1,
+ NODE_RAW_BIT = 2,
+ NODE_LVALUE_BIT = 3,
+ NODE_RVALUE_BIT = 4,
+ NODE_MAP_KEY_BIT = 5,
+ NODE_MAP_VALUE_BIT = 6,
+ NODE_EMBRACE_BIT = 7,
+ NODE_QUOTE_BIT = 8,
NODE_MAX
};
enum {
- NODE_NONE = (1u << NODE_NONE_BIT),
- NODE_ROOT = (1u << NODE_ROOT_BIT),
- NODE_RAW = (1u << NODE_RAW_BIT),
- NODE_LVALUE = (1u << NODE_LVALUE_BIT),
- NODE_RVALUE = (1u << NODE_RVALUE_BIT),
- NODE_MAP_KEY = (1u << NODE_MAP_KEY_BIT),
- NODE_MAP_VALUE = (1u << NODE_MAP_VALUE_BIT),
- NODE_EMBRACE = (1u << NODE_EMBRACE_BIT),
- NODE_QUOTE = (1u << NODE_QUOTE_BIT),
+ NODE_NONE = (1u << NODE_NONE_BIT),
+ NODE_ROOT = (1u << NODE_ROOT_BIT),
+ NODE_RAW = (1u << NODE_RAW_BIT),
+ NODE_LVALUE = (1u << NODE_LVALUE_BIT),
+ NODE_RVALUE = (1u << NODE_RVALUE_BIT),
+ NODE_MAP_KEY = (1u << NODE_MAP_KEY_BIT),
+ NODE_MAP_VALUE = (1u << NODE_MAP_VALUE_BIT),
+ NODE_EMBRACE = (1u << NODE_EMBRACE_BIT),
+ NODE_QUOTE = (1u << NODE_QUOTE_BIT),
};
struct node {
@@ -136,18 +136,13 @@ struct lua_dumper {
#ifdef SERIALIZER_TRACE
-#define __gen_mp_name(__v) [__v] = # __v
+#define __gen_mp_name(__v) [__v] = #__v
static const char *mp_type_names[] = {
- __gen_mp_name(MP_NIL),
- __gen_mp_name(MP_UINT),
- __gen_mp_name(MP_INT),
- __gen_mp_name(MP_STR),
- __gen_mp_name(MP_BIN),
- __gen_mp_name(MP_ARRAY),
- __gen_mp_name(MP_MAP),
- __gen_mp_name(MP_BOOL),
- __gen_mp_name(MP_FLOAT),
- __gen_mp_name(MP_DOUBLE),
+ __gen_mp_name(MP_NIL), __gen_mp_name(MP_UINT),
+ __gen_mp_name(MP_INT), __gen_mp_name(MP_STR),
+ __gen_mp_name(MP_BIN), __gen_mp_name(MP_ARRAY),
+ __gen_mp_name(MP_MAP), __gen_mp_name(MP_BOOL),
+ __gen_mp_name(MP_FLOAT), __gen_mp_name(MP_DOUBLE),
__gen_mp_name(MP_EXT),
};
@@ -158,16 +153,12 @@ static const char *mp_ext_type_names[] = {
};
#undef __gen_mp_name
-#define __gen_nd_name(__v) [__v ##_BIT] = # __v
+#define __gen_nd_name(__v) [__v##_BIT] = #__v
static const char *nd_type_names[] = {
- __gen_nd_name(NODE_NONE),
- __gen_nd_name(NODE_ROOT),
- __gen_nd_name(NODE_RAW),
- __gen_nd_name(NODE_LVALUE),
- __gen_nd_name(NODE_RVALUE),
- __gen_nd_name(NODE_MAP_KEY),
- __gen_nd_name(NODE_MAP_VALUE),
- __gen_nd_name(NODE_EMBRACE),
+ __gen_nd_name(NODE_NONE), __gen_nd_name(NODE_ROOT),
+ __gen_nd_name(NODE_RAW), __gen_nd_name(NODE_LVALUE),
+ __gen_nd_name(NODE_RVALUE), __gen_nd_name(NODE_MAP_KEY),
+ __gen_nd_name(NODE_MAP_VALUE), __gen_nd_name(NODE_EMBRACE),
__gen_nd_name(NODE_QUOTE),
};
#undef __gen_nd_name
@@ -204,8 +195,8 @@ static void
trace_node(struct lua_dumper *d)
{
int ltype = lua_type(d->L, -1);
- say_info("serializer-trace: node : lua type %d -> %s",
- ltype, lua_typename(d->L, ltype));
+ say_info("serializer-trace: node : lua type %d -> %s", ltype,
+ lua_typename(d->L, ltype));
if (d->err != 0)
return;
@@ -223,8 +214,8 @@ trace_node(struct lua_dumper *d)
snprintf(mp_type, sizeof(mp_type), "%s/%s",
mp_type_names[field.type],
field.ext_type < max_ext ?
- mp_ext_type_names[field.ext_type] :
- "UNKNOWN");
+ mp_ext_type_names[field.ext_type] :
+ "UNKNOWN");
} else {
type_str = (char *)mp_type_names[field.type];
}
@@ -235,8 +226,8 @@ trace_node(struct lua_dumper *d)
memset(&field, 0, sizeof(field));
luaL_checkfield(d->L, d->cfg, top, &field);
- say_info("serializer-trace: node :\tfield type %s (%d)",
- type_str, field.type);
+ say_info("serializer-trace: node :\tfield type %s (%d)", type_str,
+ field.type);
}
static char *
@@ -245,8 +236,8 @@ trace_string(const char *src, size_t len)
static char buf[128];
size_t pos = 0;
- if (len > sizeof(buf)-1)
- len = sizeof(buf)-1;
+ if (len > sizeof(buf) - 1)
+ len = sizeof(buf) - 1;
while (pos < len) {
if (src[pos] == '\n') {
@@ -262,31 +253,27 @@ trace_string(const char *src, size_t len)
}
static void
-trace_emit(struct lua_dumper *d, int nd_mask, int indent,
- const char *str, size_t len)
+trace_emit(struct lua_dumper *d, int nd_mask, int indent, const char *str,
+ size_t len)
{
if (d->suffix_len) {
say_info("serializer-trace: emit-sfx: \"%s\"",
- trace_string(d->suffix_buf,
- d->suffix_len));
+ trace_string(d->suffix_buf, d->suffix_len));
}
- static_assert(NODE_MAX < sizeof(int) * 8,
- "NODE_MAX is too big");
+ static_assert(NODE_MAX < sizeof(int) * 8, "NODE_MAX is too big");
char *names = trace_nd_mask_str(nd_mask);
say_info("serializer-trace: emit : type %s (0x%x) "
"indent %d val \"%s\" len %zu",
- names, nd_mask, indent,
- trace_string(str, len), len);
+ names, nd_mask, indent, trace_string(str, len), len);
}
static void
trace_anchor(const char *s, bool alias)
{
- say_info("serializer-trace: anchor : alias %d name %s",
- alias, s);
+ say_info("serializer-trace: anchor : alias %d name %s", alias, s);
}
#else /* SERIALIZER_TRACE */
@@ -298,8 +285,8 @@ trace_node(struct lua_dumper *d)
}
static void
-trace_emit(struct lua_dumper *d, int nd_mask, int indent,
- const char *str, size_t len)
+trace_emit(struct lua_dumper *d, int nd_mask, int indent, const char *str,
+ size_t len)
{
(void)d;
(void)nd_mask;
@@ -318,20 +305,18 @@ trace_anchor(const char *s, bool alias)
#endif /* SERIALIZER_TRACE */
static const char *lua_keywords[] = {
- "and", "break", "do", "else",
- "elseif", "end", "false", "for",
- "function", "if", "in", "local",
- "nil", "not", "or", "repeat",
- "return", "then", "true", "until",
- "while", "and",
+ "and", "break", "do", "else", "elseif", "end",
+ "false", "for", "function", "if", "in", "local",
+ "nil", "not", "or", "repeat", "return", "then",
+ "true", "until", "while", "and",
};
static int
dump_node(struct lua_dumper *d, struct node *nd, int indent);
static int
-emit_node(struct lua_dumper *d, struct node *nd, int indent,
- const char *str, size_t len);
+emit_node(struct lua_dumper *d, struct node *nd, int indent, const char *str,
+ size_t len);
/**
* Generate anchor numbers for self references.
@@ -405,12 +390,11 @@ suffix_flush(struct lua_dumper *d)
static int
gen_indent(struct lua_dumper *d, int indent)
{
- static_assert(sizeof(d->indent_buf) > 0,
- "indent buffer is too small");
+ static_assert(sizeof(d->indent_buf) > 0, "indent buffer is too small");
if (indent > 0 && d->opts->block_mode && !d->noindent) {
- snprintf(d->indent_buf, sizeof(d->indent_buf),
- "%*s", indent, "");
+ snprintf(d->indent_buf, sizeof(d->indent_buf), "%*s", indent,
+ "");
size_t len = strlen(d->indent_buf);
d->indent_buf[len] = '\0';
return len;
@@ -425,12 +409,12 @@ emit_hex_char(struct lua_dumper *d, unsigned char c)
luaL_addchar(&d->luabuf, '\\');
luaL_addchar(&d->luabuf, 'x');
-#define __emit_hex(v) \
- do { \
- if (v <= 9) \
- luaL_addchar(&d->luabuf, '0' + v); \
- else \
- luaL_addchar(&d->luabuf, v - 10 + 'a'); \
+#define __emit_hex(v) \
+ do { \
+ if (v <= 9) \
+ luaL_addchar(&d->luabuf, '0' + v); \
+ else \
+ luaL_addchar(&d->luabuf, v - 10 + 'a'); \
} while (0)
__emit_hex((c >> 4));
@@ -477,9 +461,8 @@ emit_string(struct lua_dumper *d, const char *str, size_t len)
luaL_addchar(&d->luabuf, '\\');
luaL_addchar(&d->luabuf, 't');
} else if (str[i] == '\xef') {
- if (i < len-1 && i < len-2 &&
- str[i+1] == '\xbb' &&
- str[i+2] == '\xbf') {
+ if (i < len - 1 && i < len - 2 &&
+ str[i + 1] == '\xbb' && str[i + 2] == '\xbf') {
emit_hex_char(d, 0xef);
emit_hex_char(d, 0xbb);
emit_hex_char(d, 0xbf);
@@ -498,8 +481,8 @@ emit_string(struct lua_dumper *d, const char *str, size_t len)
* Emit value into output buffer.
*/
static void
-emit_value(struct lua_dumper *d, struct node *nd,
- int indent, const char *str, size_t len)
+emit_value(struct lua_dumper *d, struct node *nd, int indent, const char *str,
+ size_t len)
{
trace_emit(d, nd->mask, indent, str, len);
@@ -511,8 +494,7 @@ emit_value(struct lua_dumper *d, struct node *nd,
*/
suffix_flush(d);
- luaL_addlstring(&d->luabuf, d->indent_buf,
- gen_indent(d, indent));
+ luaL_addlstring(&d->luabuf, d->indent_buf, gen_indent(d, indent));
if (nd->mask & NODE_EMBRACE)
luaL_addlstring(&d->luabuf, "[", 1);
@@ -535,8 +517,7 @@ emit_value(struct lua_dumper *d, struct node *nd,
* Emit a raw string into output.
*/
static void
-emit_raw_value(struct lua_dumper *d, int indent,
- const char *str, size_t len)
+emit_raw_value(struct lua_dumper *d, int indent, const char *str, size_t len)
{
struct node node = {
.mask = NODE_RAW,
@@ -650,16 +631,16 @@ dump_table(struct lua_dumper *d, struct node *nd, int indent)
while (lua_next(d->L, -2)) {
lua_pushvalue(d->L, -2);
struct node node_key = {
- .prev = nd,
- .mask = NODE_LVALUE | NODE_MAP_KEY,
- .index = index++,
+ .prev = nd,
+ .mask = NODE_LVALUE | NODE_MAP_KEY,
+ .index = index++,
};
dump_node(d, &node_key, indent);
lua_pop(d->L, 1);
struct node node_val = {
- .key = &node_key,
- .mask = NODE_RVALUE | NODE_MAP_VALUE,
+ .key = &node_key,
+ .mask = NODE_RVALUE | NODE_MAP_VALUE,
};
dump_node(d, &node_val, indent);
lua_pop(d->L, 1);
@@ -708,8 +689,8 @@ decorate_key(struct node *nd, const char *str, size_t len)
}
static int
-emit_node(struct lua_dumper *d, struct node *nd, int indent,
- const char *str, size_t len)
+emit_node(struct lua_dumper *d, struct node *nd, int indent, const char *str,
+ size_t len)
{
struct luaL_field *field = &nd->field;
@@ -724,8 +705,7 @@ emit_node(struct lua_dumper *d, struct node *nd, int indent,
* the current position in the table we
* can simply skip it and print value only.
*/
- if (nd->field.type == MP_INT ||
- nd->field.type == MP_UINT) {
+ if (nd->field.type == MP_INT || nd->field.type == MP_UINT) {
if (nd->index == (int)field->ival) {
d->noindent = false;
return 0;
@@ -837,14 +817,12 @@ dump_node(struct lua_dumper *d, struct node *nd, int indent)
}
break;
case MP_FLOAT:
- fpconv_g_fmt(buf, field->fval,
- d->cfg->encode_number_precision);
+ fpconv_g_fmt(buf, field->fval, d->cfg->encode_number_precision);
len = strlen(buf);
str = buf;
break;
case MP_DOUBLE:
- fpconv_g_fmt(buf, field->dval,
- d->cfg->encode_number_precision);
+ fpconv_g_fmt(buf, field->dval, d->cfg->encode_number_precision);
len = strlen(buf);
str = buf;
break;
@@ -872,8 +850,7 @@ dump_node(struct lua_dumper *d, struct node *nd, int indent)
default:
d->err = EINVAL;
snprintf(d->err_msg, sizeof(d->err_msg),
- "serializer: Unknown field %d type",
- field->type);
+ "serializer: Unknown field %d type", field->type);
len = strlen(d->err_msg);
return -1;
}
@@ -966,10 +943,10 @@ lua_encode(lua_State *L, struct luaL_serializer *serializer,
lua_dumper_opts_t *opts)
{
struct lua_dumper dumper = {
- .L = L,
- .cfg = serializer,
- .outputL= luaT_newthread(L),
- .opts = opts,
+ .L = L,
+ .cfg = serializer,
+ .outputL = luaT_newthread(L),
+ .opts = opts,
};
if (!dumper.outputL)
@@ -1045,11 +1022,11 @@ lua_serializer_init(struct lua_State *L)
};
serializer_lua = luaL_newserializer(L, NULL, lualib);
- serializer_lua->has_compact = 1;
- serializer_lua->encode_invalid_numbers = 1;
- serializer_lua->encode_load_metatables = 1;
- serializer_lua->encode_use_tostring = 1;
- serializer_lua->encode_invalid_as_nil = 1;
+ serializer_lua->has_compact = 1;
+ serializer_lua->encode_invalid_numbers = 1;
+ serializer_lua->encode_load_metatables = 1;
+ serializer_lua->encode_use_tostring = 1;
+ serializer_lua->encode_invalid_as_nil = 1;
/*
* Keep a reference to this module so it
diff --git a/src/box/lua/session.c b/src/box/lua/session.c
index 0a20aaa..2cc23ee 100644
--- a/src/box/lua/session.c
+++ b/src/box/lua/session.c
@@ -247,7 +247,6 @@ lbox_session_fd(struct lua_State *L)
return 1;
}
-
/**
* Pretty print peer name.
*/
@@ -286,15 +285,15 @@ lbox_session_peer(struct lua_State *L)
static int
lbox_push_on_connect_event(struct lua_State *L, void *event)
{
- (void) L;
- (void) event;
+ (void)L;
+ (void)event;
return 0;
}
static int
lbox_push_on_auth_event(struct lua_State *L, void *event)
{
- struct on_auth_trigger_ctx *ctx = (struct on_auth_trigger_ctx *) event;
+ struct on_auth_trigger_ctx *ctx = (struct on_auth_trigger_ctx *)event;
lua_pushstring(L, ctx->username);
lua_pushboolean(L, ctx->is_authenticated);
return 2;
@@ -328,7 +327,7 @@ lbox_session_run_on_disconnect(struct lua_State *L)
{
struct session *session = current_session();
session_run_on_disconnect_triggers(session);
- (void) L;
+ (void)L;
return 0;
}
@@ -360,7 +359,7 @@ lbox_session_run_on_auth(struct lua_State *L)
static int
lbox_push_on_access_denied_event(struct lua_State *L, void *event)
{
- struct on_access_denied_ctx *ctx = (struct on_access_denied_ctx *) event;
+ struct on_access_denied_ctx *ctx = (struct on_access_denied_ctx *)event;
lua_pushstring(L, ctx->access_type);
lua_pushstring(L, ctx->object_type);
lua_pushstring(L, ctx->object_name);
@@ -427,8 +426,10 @@ lbox_session_setting_get(struct lua_State *L)
const char *setting_name = lua_tostring(L, -1);
int sid = session_setting_find(setting_name);
if (sid < 0) {
- diag_set(ClientError, ER_PROC_LUA, tt_sprintf("Session "\
- "setting %s doesn't exist", setting_name));
+ diag_set(ClientError, ER_PROC_LUA,
+ tt_sprintf("Session "
+ "setting %s doesn't exist",
+ setting_name));
return luaT_error(L);
}
return lbox_session_setting_get_by_id(L, sid);
@@ -450,7 +451,7 @@ lbox_session_setting_set(struct lua_State *L)
case LUA_TBOOLEAN: {
bool value = lua_toboolean(L, -1);
size_t size = mp_sizeof_bool(value);
- char *mp_value = (char *) static_alloc(size);
+ char *mp_value = (char *)static_alloc(size);
mp_encode_bool(mp_value, value);
if (setting->set(sid, mp_value) != 0)
return luaT_error(L);
@@ -460,10 +461,9 @@ lbox_session_setting_set(struct lua_State *L)
const char *str = lua_tostring(L, -1);
size_t len = strlen(str);
uint32_t size = mp_sizeof_str(len);
- char *mp_value = (char *) static_alloc(size);
+ char *mp_value = (char *)static_alloc(size);
if (mp_value == NULL) {
- diag_set(OutOfMemory, size, "static_alloc",
- "mp_value");
+ diag_set(OutOfMemory, size, "static_alloc", "mp_value");
return luaT_error(L);
}
mp_encode_str(mp_value, str, len);
@@ -544,33 +544,33 @@ void
box_lua_session_init(struct lua_State *L)
{
static const struct luaL_Reg session_internal_lib[] = {
- {"create", lbox_session_create},
- {"run_on_connect", lbox_session_run_on_connect},
- {"run_on_disconnect", lbox_session_run_on_disconnect},
- {"run_on_auth", lbox_session_run_on_auth},
- {NULL, NULL}
+ { "create", lbox_session_create },
+ { "run_on_connect", lbox_session_run_on_connect },
+ { "run_on_disconnect", lbox_session_run_on_disconnect },
+ { "run_on_auth", lbox_session_run_on_auth },
+ { NULL, NULL }
};
luaL_register(L, "box.internal.session", session_internal_lib);
lua_pop(L, 1);
static const struct luaL_Reg sessionlib[] = {
- {"id", lbox_session_id},
- {"type", lbox_session_type},
- {"sync", lbox_session_sync},
- {"uid", lbox_session_uid},
- {"euid", lbox_session_euid},
- {"user", lbox_session_user},
- {"effective_user", lbox_session_effective_user},
- {"su", lbox_session_su},
- {"fd", lbox_session_fd},
- {"exists", lbox_session_exists},
- {"peer", lbox_session_peer},
- {"on_connect", lbox_session_on_connect},
- {"on_disconnect", lbox_session_on_disconnect},
- {"on_auth", lbox_session_on_auth},
- {"on_access_denied", lbox_session_on_access_denied},
- {"push", lbox_session_push},
- {NULL, NULL}
+ { "id", lbox_session_id },
+ { "type", lbox_session_type },
+ { "sync", lbox_session_sync },
+ { "uid", lbox_session_uid },
+ { "euid", lbox_session_euid },
+ { "user", lbox_session_user },
+ { "effective_user", lbox_session_effective_user },
+ { "su", lbox_session_su },
+ { "fd", lbox_session_fd },
+ { "exists", lbox_session_exists },
+ { "peer", lbox_session_peer },
+ { "on_connect", lbox_session_on_connect },
+ { "on_disconnect", lbox_session_on_disconnect },
+ { "on_auth", lbox_session_on_auth },
+ { "on_access_denied", lbox_session_on_access_denied },
+ { "push", lbox_session_push },
+ { NULL, NULL }
};
luaL_register_module(L, sessionlib_name, sessionlib);
lbox_session_settings_init(L);
diff --git a/src/box/lua/slab.c b/src/box/lua/slab.c
index 9f5e7e9..e6e0202 100644
--- a/src/box/lua/slab.c
+++ b/src/box/lua/slab.c
@@ -47,8 +47,8 @@
static int
small_stats_noop_cb(const struct mempool_stats *stats, void *cb_ctx)
{
- (void) stats;
- (void) cb_ctx;
+ (void)stats;
+ (void)cb_ctx;
return 0;
}
@@ -59,7 +59,7 @@ small_stats_lua_cb(const struct mempool_stats *stats, void *cb_ctx)
if (stats->slabcount == 0)
return 0;
- struct lua_State *L = (struct lua_State *) cb_ctx;
+ struct lua_State *L = (struct lua_State *)cb_ctx;
/*
* Create a Lua table for every slab class. A class is
@@ -142,8 +142,7 @@ lbox_slab_info(struct lua_State *L)
double ratio;
char ratio_buf[32];
- ratio = 100 * ((double) totals.used
- / ((double) totals.total + 0.0001));
+ ratio = 100 * ((double)totals.used / ((double)totals.total + 0.0001));
snprintf(ratio_buf, sizeof(ratio_buf), "%0.2lf%%", ratio);
/** How much address space has been already touched */
@@ -190,8 +189,8 @@ lbox_slab_info(struct lua_State *L)
luaL_pushuint64(L, totals.used + index_stats.totals.used);
lua_settable(L, -3);
- ratio = 100 * ((double) (totals.used + index_stats.totals.used)
- / (double) arena_size);
+ ratio = 100 * ((double)(totals.used + index_stats.totals.used) /
+ (double)arena_size);
snprintf(ratio_buf, sizeof(ratio_buf), "%0.1lf%%", ratio);
lua_pushstring(L, "arena_used_ratio");
@@ -220,8 +219,8 @@ lbox_slab_info(struct lua_State *L)
* factor, it's the quota that give you OOM error in the
* end of the day.
*/
- ratio = 100 * ((double) quota_used(&memtx->quota) /
- ((double) quota_total(&memtx->quota) + 0.0001));
+ ratio = 100 * ((double)quota_used(&memtx->quota) /
+ ((double)quota_total(&memtx->quota) + 0.0001));
snprintf(ratio_buf, sizeof(ratio_buf), "%0.2lf%%", ratio);
lua_pushstring(L, "quota_used_ratio");
diff --git a/src/box/lua/slab.h b/src/box/lua/slab.h
index fd4ef88..9d73f8a 100644
--- a/src/box/lua/slab.h
+++ b/src/box/lua/slab.h
@@ -35,7 +35,8 @@ extern "C" {
#endif /* defined(__cplusplus) */
struct lua_State;
-void box_lua_slab_init(struct lua_State *L);
+void
+box_lua_slab_init(struct lua_State *L);
#if defined(__cplusplus)
} /* extern "C" */
diff --git a/src/box/lua/space.cc b/src/box/lua/space.cc
index 177c588..d074bc9 100644
--- a/src/box/lua/space.cc
+++ b/src/box/lua/space.cc
@@ -37,9 +37,9 @@
#include "lua/trigger.h"
extern "C" {
- #include <lua.h>
- #include <lauxlib.h>
- #include <lualib.h>
+#include <lua.h>
+#include <lauxlib.h>
+#include <lualib.h>
} /* extern "C" */
#include "box/func.h"
@@ -61,7 +61,7 @@ extern "C" {
static int
lbox_push_txn_stmt(struct lua_State *L, void *event)
{
- struct txn_stmt *stmt = txn_current_stmt((struct txn *) event);
+ struct txn_stmt *stmt = txn_current_stmt((struct txn *)event);
if (stmt->old_tuple) {
luaT_pushtuple(L, stmt->old_tuple);
@@ -84,7 +84,7 @@ lbox_push_txn_stmt(struct lua_State *L, void *event)
static int
lbox_pop_txn_stmt(struct lua_State *L, int nret, void *event)
{
- struct txn_stmt *stmt = txn_current_stmt((struct txn *) event);
+ struct txn_stmt *stmt = txn_current_stmt((struct txn *)event);
if (nret < 1) {
/* No return value - nothing to do. */
@@ -117,16 +117,17 @@ lbox_space_on_replace(struct lua_State *L)
int top = lua_gettop(L);
if (top < 1 || !lua_istable(L, 1)) {
- luaL_error(L,
- "usage: space:on_replace(function | nil, [function | nil])");
+ luaL_error(
+ L,
+ "usage: space:on_replace(function | nil, [function | nil])");
}
lua_getfield(L, 1, "id"); /* Get space id. */
uint32_t id = lua_tonumber(L, lua_gettop(L));
struct space *space = space_cache_find_xc(id);
lua_pop(L, 1);
- return lbox_trigger_reset(L, 3, &space->on_replace,
- lbox_push_txn_stmt, NULL);
+ return lbox_trigger_reset(L, 3, &space->on_replace, lbox_push_txn_stmt,
+ NULL);
}
/**
@@ -138,8 +139,9 @@ lbox_space_before_replace(struct lua_State *L)
int top = lua_gettop(L);
if (top < 1 || !lua_istable(L, 1)) {
- luaL_error(L,
- "usage: space:before_replace(function | nil, [function | nil])");
+ luaL_error(
+ L,
+ "usage: space:before_replace(function | nil, [function | nil])");
}
lua_getfield(L, 1, "id"); /* Get space id. */
uint32_t id = lua_tonumber(L, lua_gettop(L));
@@ -175,8 +177,8 @@ lbox_push_ck_constraint(struct lua_State *L, struct space *space, int i)
* Remove ck_constraint only if it was
* deleted.
*/
- if (space_ck_constraint_by_name(space, name,
- (uint32_t)name_len) == NULL) {
+ if (space_ck_constraint_by_name(
+ space, name, (uint32_t)name_len) == NULL) {
lua_pushlstring(L, name, name_len);
lua_pushnil(L);
lua_settable(L, -5);
@@ -262,16 +264,15 @@ lbox_fillspace(struct lua_State *L, struct space *space, int i)
lua_pushboolean(L, space_index(space, 0) != 0);
lua_settable(L, i);
+ /* space:on_replace */
+ lua_pushstring(L, "on_replace");
+ lua_pushcfunction(L, lbox_space_on_replace);
+ lua_settable(L, i);
- /* space:on_replace */
- lua_pushstring(L, "on_replace");
- lua_pushcfunction(L, lbox_space_on_replace);
- lua_settable(L, i);
-
- /* space:before_replace */
- lua_pushstring(L, "before_replace");
- lua_pushcfunction(L, lbox_space_before_replace);
- lua_settable(L, i);
+ /* space:before_replace */
+ lua_pushstring(L, "before_replace");
+ lua_pushcfunction(L, lbox_space_before_replace);
+ lua_settable(L, i);
lua_getfield(L, i, "index");
if (lua_isnil(L, -1)) {
@@ -279,13 +280,13 @@ lbox_fillspace(struct lua_State *L, struct space *space, int i)
/* space.index */
lua_pushstring(L, "index");
lua_newtable(L);
- lua_settable(L, i); /* push space.index */
+ lua_settable(L, i); /* push space.index */
lua_getfield(L, i, "index");
} else {
lua_pushnil(L);
while (lua_next(L, -2) != 0) {
if (lua_isnumber(L, -2)) {
- uint32_t iid = (uint32_t) lua_tonumber(L, -2);
+ uint32_t iid = (uint32_t)lua_tonumber(L, -2);
/*
* Remove index only if it was deleted.
* If an existing index was
@@ -334,7 +335,7 @@ lbox_fillspace(struct lua_State *L, struct space *space, int i)
lua_newtable(L);
lua_settable(L, -3);
lua_rawgeti(L, -1, index_def->iid);
- assert(! lua_isnil(L, -1));
+ assert(!lua_isnil(L, -1));
}
if (index_def->type == HASH || index_def->type == TREE) {
@@ -399,7 +400,7 @@ lbox_fillspace(struct lua_State *L, struct space *space, int i)
lua_pushstring(L, "sequence_fieldno");
if (k == 0 && space->sequence != NULL)
lua_pushnumber(L, space->sequence_fieldno +
- TUPLE_INDEX_BASE);
+ TUPLE_INDEX_BASE);
else
lua_pushnil(L);
lua_rawset(L, -3);
@@ -449,9 +450,9 @@ lbox_fillspace(struct lua_State *L, struct space *space, int i)
lua_pushstring(L, "bless");
lua_gettable(L, -2);
- lua_pushvalue(L, i); /* space */
+ lua_pushvalue(L, i); /* space */
lua_call(L, 1, 0);
- lua_pop(L, 3); /* cleanup stack - box, schema, space */
+ lua_pop(L, 3); /* cleanup stack - box, schema, space */
}
/** Export a space to Lua */
@@ -511,8 +512,8 @@ box_lua_space_delete(struct lua_State *L, uint32_t id)
static int
box_lua_space_new_or_delete(struct trigger *trigger, void *event)
{
- struct lua_State *L = (struct lua_State *) trigger->data;
- struct space *space = (struct space *) event;
+ struct lua_State *L = (struct lua_State *)trigger->data;
+ struct space *space = (struct space *)event;
if (space_by_id(space->def->id) != NULL) {
box_lua_space_new(L, space);
@@ -522,9 +523,9 @@ box_lua_space_new_or_delete(struct trigger *trigger, void *event)
return 0;
}
-static struct trigger on_alter_space_in_lua = {
- RLIST_LINK_INITIALIZER, box_lua_space_new_or_delete, NULL, NULL
-};
+static struct trigger on_alter_space_in_lua = { RLIST_LINK_INITIALIZER,
+ box_lua_space_new_or_delete,
+ NULL, NULL };
/**
* Make a tuple or a table Lua object by map.
@@ -561,8 +562,9 @@ lbox_space_frommap(struct lua_State *L)
space = space_by_id(id);
if (space == NULL) {
lua_pushnil(L);
- lua_pushstring(L, tt_sprintf("Space with id '%d' "\
- "doesn't exist", id));
+ lua_pushstring(L, tt_sprintf("Space with id '%d' "
+ "doesn't exist",
+ id));
return 2;
}
assert(space->format != NULL);
@@ -579,11 +581,11 @@ lbox_space_frommap(struct lua_State *L)
if (tuple_fieldno_by_name(dict, key, key_len, key_hash,
&fieldno)) {
lua_pushnil(L);
- lua_pushstring(L, tt_sprintf("Unknown field '%s'",
- key));
+ lua_pushstring(L,
+ tt_sprintf("Unknown field '%s'", key));
return 2;
}
- lua_rawseti(L, -3, fieldno+1);
+ lua_rawseti(L, -3, fieldno + 1);
}
lua_replace(L, 1);
@@ -701,8 +703,7 @@ box_lua_space_init(struct lua_State *L)
lua_pop(L, 2); /* box, schema */
static const struct luaL_Reg space_internal_lib[] = {
- {"frommap", lbox_space_frommap},
- {NULL, NULL}
+ { "frommap", lbox_space_frommap }, { NULL, NULL }
};
luaL_register(L, "box.internal.space", space_internal_lib);
lua_pop(L, 1);
diff --git a/src/box/lua/stat.c b/src/box/lua/stat.c
index 29ec38b..81dacdd 100644
--- a/src/box/lua/stat.c
+++ b/src/box/lua/stat.c
@@ -68,7 +68,7 @@ fill_stat_item(struct lua_State *L, int rps, int64_t total)
static int
set_stat_item(const char *name, int rps, int64_t total, void *cb_ctx)
{
- struct lua_State *L = (struct lua_State *) cb_ctx;
+ struct lua_State *L = (struct lua_State *)cb_ctx;
lua_pushstring(L, name);
lua_newtable(L);
@@ -87,7 +87,7 @@ set_stat_item(const char *name, int rps, int64_t total, void *cb_ctx)
static int
seek_stat_item(const char *name, int rps, int64_t total, void *cb_ctx)
{
- struct lua_State *L = (struct lua_State *) cb_ctx;
+ struct lua_State *L = (struct lua_State *)cb_ctx;
if (strcmp(name, lua_tostring(L, -1)) != 0)
return 0;
@@ -211,28 +211,25 @@ lbox_stat_sql(struct lua_State *L)
return 1;
}
-static const struct luaL_Reg lbox_stat_meta [] = {
- {"__index", lbox_stat_index},
- {"__call", lbox_stat_call},
- {NULL, NULL}
-};
+static const struct luaL_Reg lbox_stat_meta[] = { { "__index",
+ lbox_stat_index },
+ { "__call", lbox_stat_call },
+ { NULL, NULL } };
-static const struct luaL_Reg lbox_stat_net_meta [] = {
- {"__index", lbox_stat_net_index},
- {"__call", lbox_stat_net_call},
- {NULL, NULL}
+static const struct luaL_Reg lbox_stat_net_meta[] = {
+ { "__index", lbox_stat_net_index },
+ { "__call", lbox_stat_net_call },
+ { NULL, NULL }
};
/** Initialize box.stat package. */
void
box_lua_stat_init(struct lua_State *L)
{
- static const struct luaL_Reg statlib [] = {
- {"vinyl", lbox_stat_vinyl},
- {"reset", lbox_stat_reset},
- {"sql", lbox_stat_sql},
- {NULL, NULL}
- };
+ static const struct luaL_Reg statlib[] = { { "vinyl", lbox_stat_vinyl },
+ { "reset", lbox_stat_reset },
+ { "sql", lbox_stat_sql },
+ { NULL, NULL } };
luaL_register_module(L, "box.stat", statlib);
@@ -241,9 +238,7 @@ box_lua_stat_init(struct lua_State *L)
lua_setmetatable(L, -2);
lua_pop(L, 1); /* stat module */
- static const struct luaL_Reg netstatlib [] = {
- {NULL, NULL}
- };
+ static const struct luaL_Reg netstatlib[] = { { NULL, NULL } };
luaL_register_module(L, "box.stat.net", netstatlib);
@@ -252,4 +247,3 @@ box_lua_stat_init(struct lua_State *L)
lua_setmetatable(L, -2);
lua_pop(L, 1); /* stat net module */
}
-
diff --git a/src/box/lua/stat.h b/src/box/lua/stat.h
index bd22383..c5d46c0 100644
--- a/src/box/lua/stat.h
+++ b/src/box/lua/stat.h
@@ -35,7 +35,8 @@ extern "C" {
#endif /* defined(__cplusplus) */
struct lua_State;
-void box_lua_stat_init(struct lua_State *L);
+void
+box_lua_stat_init(struct lua_State *L);
#if defined(__cplusplus)
} /* extern "C" */
diff --git a/src/box/lua/tuple.c b/src/box/lua/tuple.c
index 03b4b8a..e800fec 100644
--- a/src/box/lua/tuple.c
+++ b/src/box/lua/tuple.c
@@ -31,9 +31,9 @@
#include "box/lua/tuple.h"
#include "box/xrow_update.h"
-#include "lua/utils.h" /* luaT_error() */
+#include "lua/utils.h" /* luaT_error() */
#include "lua/msgpack.h" /* luamp_encode_XXX() */
-#include "diag.h" /* diag_set() */
+#include "diag.h" /* diag_set() */
#include <small/ibuf.h>
#include <small/region.h>
#include <fiber.h>
@@ -73,9 +73,10 @@ box_tuple_t *
luaT_checktuple(struct lua_State *L, int idx)
{
struct tuple *tuple = luaT_istuple(L, idx);
- if (tuple == NULL) {
- luaL_error(L, "Invalid argument #%d (box.tuple expected, got %s)",
- idx, lua_typename(L, lua_type(L, idx)));
+ if (tuple == NULL) {
+ luaL_error(L,
+ "Invalid argument #%d (box.tuple expected, got %s)",
+ idx, lua_typename(L, lua_type(L, idx)));
}
return tuple;
@@ -95,7 +96,7 @@ luaT_istuple(struct lua_State *L, int narg)
if (ctypeid != CTID_STRUCT_TUPLE_REF)
return NULL;
- return *(struct tuple **) data;
+ return *(struct tuple **)data;
}
struct tuple *
@@ -110,8 +111,8 @@ luaT_tuple_new(struct lua_State *L, int idx, box_tuple_format_t *format)
struct ibuf *buf = tarantool_lua_ibuf;
ibuf_reset(buf);
struct mpstream stream;
- mpstream_init(&stream, buf, ibuf_reserve_cb, ibuf_alloc_cb,
- luamp_error, L);
+ mpstream_init(&stream, buf, ibuf_reserve_cb, ibuf_alloc_cb, luamp_error,
+ L);
if (idx == 0) {
/*
* Create the tuple from lua stack
@@ -127,8 +128,8 @@ luaT_tuple_new(struct lua_State *L, int idx, box_tuple_format_t *format)
luamp_encode_tuple(L, &tuple_serializer, &stream, idx);
}
mpstream_flush(&stream);
- struct tuple *tuple = box_tuple_new(format, buf->buf,
- buf->buf + ibuf_used(buf));
+ struct tuple *tuple =
+ box_tuple_new(format, buf->buf, buf->buf + ibuf_used(buf));
if (tuple == NULL)
return NULL;
ibuf_reinit(tarantool_lua_ibuf);
@@ -148,8 +149,7 @@ lbox_tuple_new(lua_State *L)
* box.tuple.new(1, 2, 3) (idx == 0), or the new one:
* box.tuple.new({1, 2, 3}) (idx == 1).
*/
- int idx = argc == 1 && (lua_istable(L, 1) ||
- luaT_istuple(L, 1));
+ int idx = argc == 1 && (lua_istable(L, 1) || luaT_istuple(L, 1));
box_tuple_format_t *fmt = box_tuple_format_default();
struct tuple *tuple = luaT_tuple_new(L, idx, fmt);
if (tuple == NULL)
@@ -170,8 +170,7 @@ lbox_tuple_gc(struct lua_State *L)
static int
lbox_tuple_slice_wrapper(struct lua_State *L)
{
- box_tuple_iterator_t *it = (box_tuple_iterator_t *)
- lua_topointer(L, 1);
+ box_tuple_iterator_t *it = (box_tuple_iterator_t *)lua_topointer(L, 1);
uint32_t start = lua_tonumber(L, 2);
uint32_t end = lua_tonumber(L, 3);
assert(end >= start);
@@ -221,13 +220,15 @@ lbox_tuple_slice(struct lua_State *L)
} else if (offset < 0 && -offset < field_count) {
end = offset + field_count;
} else {
- return luaL_error(L, "tuple.slice(): end > field count");
+ return luaL_error(L,
+ "tuple.slice(): end > field count");
}
} else {
end = field_count;
}
if (end <= start)
- return luaL_error(L, "tuple.slice(): start must be less than end");
+ return luaL_error(L,
+ "tuple.slice(): start must be less than end");
box_tuple_iterator_t *it = box_tuple_iterator(tuple);
lua_pushcfunction(L, lbox_tuple_slice_wrapper);
@@ -365,7 +366,8 @@ lbox_tuple_transform(struct lua_State *L)
int argc = lua_gettop(L);
if (argc < 3)
luaL_error(L, "tuple.transform(): bad arguments");
- lua_Integer offset = lua_tointeger(L, 2); /* Can be negative and can be > INT_MAX */
+ lua_Integer offset =
+ lua_tointeger(L, 2); /* Can be negative and can be > INT_MAX */
lua_Integer len = lua_tointeger(L, 3);
lua_Integer field_count = box_tuple_field_count(tuple);
@@ -374,7 +376,8 @@ lbox_tuple_transform(struct lua_State *L)
luaL_error(L, "tuple.transform(): offset is out of bound");
} else if (offset < 0) {
if (-offset > field_count)
- luaL_error(L, "tuple.transform(): offset is out of bound");
+ luaL_error(L,
+ "tuple.transform(): offset is out of bound");
offset += field_count + 1;
} else if (offset > field_count) {
offset = field_count + 1;
@@ -403,8 +406,8 @@ lbox_tuple_transform(struct lua_State *L)
struct ibuf *buf = tarantool_lua_ibuf;
ibuf_reset(buf);
struct mpstream stream;
- mpstream_init(&stream, buf, ibuf_reserve_cb, ibuf_alloc_cb,
- luamp_error, L);
+ mpstream_init(&stream, buf, ibuf_reserve_cb, ibuf_alloc_cb, luamp_error,
+ L);
/*
* Prepare UPDATE expression
@@ -417,7 +420,7 @@ lbox_tuple_transform(struct lua_State *L)
mpstream_encode_uint(&stream, len);
}
- for (int i = argc ; i > 3; i--) {
+ for (int i = argc; i > 3; i--) {
mpstream_encode_array(&stream, 3);
mpstream_encode_str(&stream, "!");
mpstream_encode_uint(&stream, offset);
@@ -438,13 +441,13 @@ lbox_tuple_transform(struct lua_State *L)
* to use the default one with no restrictions on field
* count or types.
*/
- const char *new_data =
- xrow_update_execute(buf->buf, buf->buf + ibuf_used(buf),
- old_data, old_data + bsize, format,
- &new_size, 1, NULL);
+ const char *new_data = xrow_update_execute(buf->buf,
+ buf->buf + ibuf_used(buf),
+ old_data, old_data + bsize,
+ format, &new_size, 1, NULL);
if (new_data != NULL)
- new_tuple = tuple_new(box_tuple_format_default(),
- new_data, new_data + new_size);
+ new_tuple = tuple_new(box_tuple_format_default(), new_data,
+ new_data + new_size);
region_truncate(region, used);
if (new_tuple == NULL)
@@ -478,11 +481,9 @@ lbox_tuple_field_by_path(struct lua_State *L)
const char *field = NULL, *path = lua_tolstring(L, 2, &len);
if (len == 0)
return 0;
- field = tuple_field_raw_by_full_path(tuple_format(tuple),
- tuple_data(tuple),
- tuple_field_map(tuple),
- path, (uint32_t)len,
- lua_hashstring(L, 2));
+ field = tuple_field_raw_by_full_path(
+ tuple_format(tuple), tuple_data(tuple), tuple_field_map(tuple),
+ path, (uint32_t)len, lua_hashstring(L, 2));
if (field == NULL)
return 0;
luamp_decode(L, luaL_msgpack_default, &field);
@@ -508,8 +509,8 @@ void
luaT_pushtuple(struct lua_State *L, box_tuple_t *tuple)
{
assert(CTID_STRUCT_TUPLE_REF != 0);
- struct tuple **ptr = (struct tuple **)
- luaL_pushcdata(L, CTID_STRUCT_TUPLE_REF);
+ struct tuple **ptr =
+ (struct tuple **)luaL_pushcdata(L, CTID_STRUCT_TUPLE_REF);
*ptr = tuple;
/* The order is important - first reference tuple, next set gc */
box_tuple_ref(tuple);
@@ -518,23 +519,19 @@ luaT_pushtuple(struct lua_State *L, box_tuple_t *tuple)
}
static const struct luaL_Reg lbox_tuple_meta[] = {
- {"__gc", lbox_tuple_gc},
- {"tostring", lbox_tuple_to_string},
- {"slice", lbox_tuple_slice},
- {"transform", lbox_tuple_transform},
- {"tuple_to_map", lbox_tuple_to_map},
- {"tuple_field_by_path", lbox_tuple_field_by_path},
- {NULL, NULL}
+ { "__gc", lbox_tuple_gc },
+ { "tostring", lbox_tuple_to_string },
+ { "slice", lbox_tuple_slice },
+ { "transform", lbox_tuple_transform },
+ { "tuple_to_map", lbox_tuple_to_map },
+ { "tuple_field_by_path", lbox_tuple_field_by_path },
+ { NULL, NULL }
};
-static const struct luaL_Reg lbox_tuplelib[] = {
- {"new", lbox_tuple_new},
- {NULL, NULL}
-};
+static const struct luaL_Reg lbox_tuplelib[] = { { "new", lbox_tuple_new },
+ { NULL, NULL } };
-static const struct luaL_Reg lbox_tuple_iterator_meta[] = {
- {NULL, NULL}
-};
+static const struct luaL_Reg lbox_tuple_iterator_meta[] = { { NULL, NULL } };
/* }}} */
@@ -548,8 +545,8 @@ tuple_serializer_update_options(void)
static int
on_msgpack_serializer_update(struct trigger *trigger, void *event)
{
- (void) trigger;
- (void) event;
+ (void)trigger;
+ (void)event;
tuple_serializer_update_options();
return 0;
}
@@ -563,8 +560,7 @@ box_lua_tuple_init(struct lua_State *L)
luaL_register(L, NULL, lbox_tuple_meta);
lua_setfield(L, -2, "tuple");
lua_pop(L, 1); /* box.internal */
- luaL_register_type(L, tuple_iteratorlib_name,
- lbox_tuple_iterator_meta);
+ luaL_register_type(L, tuple_iteratorlib_name, lbox_tuple_iterator_meta);
luaL_register_module(L, tuplelib_name, lbox_tuplelib);
lua_pop(L, 1);
@@ -577,7 +573,7 @@ box_lua_tuple_init(struct lua_State *L)
/* Get CTypeID for `struct tuple' */
int rc = luaL_cdef(L, "struct tuple;");
assert(rc == 0);
- (void) rc;
+ (void)rc;
CTID_STRUCT_TUPLE_REF = luaL_ctypeid(L, "struct tuple &");
assert(CTID_STRUCT_TUPLE_REF != 0);
}
diff --git a/src/box/lua/xlog.c b/src/box/lua/xlog.c
index 971a26a..1074d3b 100644
--- a/src/box/lua/xlog.c
+++ b/src/box/lua/xlog.c
@@ -53,8 +53,8 @@ static int
lbox_pushcursor(struct lua_State *L, struct xlog_cursor *cur)
{
struct xlog_cursor **pcur = NULL;
- pcur = (struct xlog_cursor **)luaL_pushcdata(L,
- CTID_STRUCT_XLOG_CURSOR_REF);
+ pcur = (struct xlog_cursor **)luaL_pushcdata(
+ L, CTID_STRUCT_XLOG_CURSOR_REF);
*pcur = cur;
return 1;
}
@@ -66,7 +66,7 @@ lbox_checkcursor(struct lua_State *L, int narg, const char *src)
void *data = NULL;
data = (struct xlog_cursor *)luaL_checkcdata(L, narg, &ctypeid);
assert(ctypeid == CTID_STRUCT_XLOG_CURSOR_REF);
- if (ctypeid != (uint32_t )CTID_STRUCT_XLOG_CURSOR_REF)
+ if (ctypeid != (uint32_t)CTID_STRUCT_XLOG_CURSOR_REF)
luaL_error(L, "%s: expecting xlog_cursor object", src);
return *(struct xlog_cursor **)data;
}
@@ -90,7 +90,8 @@ lbox_xlog_pushkey(lua_State *L, const char *key)
}
static void
-lbox_xlog_parse_body_kv(struct lua_State *L, int type, const char **beg, const char *end)
+lbox_xlog_parse_body_kv(struct lua_State *L, int type, const char **beg,
+ const char *end)
{
if (mp_typeof(**beg) != MP_UINT)
luaL_error(L, "Broken type of body key");
@@ -146,7 +147,8 @@ lbox_xlog_parse_body(struct lua_State *L, int type, const char *ptr, size_t len)
lbox_xlog_parse_body_kv(L, type, beg, end);
if (i != size)
say_warn("warning: decoded %u values from"
- " MP_MAP, %u expected", i, size);
+ " MP_MAP, %u expected",
+ i, size);
return 0;
}
@@ -244,7 +246,7 @@ lbox_xlog_parser_iterate(struct lua_State *L)
lua_newtable(L);
lbox_xlog_parse_body(L, row.type, row.body[0].iov_base,
row.body[0].iov_len);
- lua_settable(L, -3); /* BODY */
+ lua_settable(L, -3); /* BODY */
}
return 2;
}
@@ -252,7 +254,8 @@ lbox_xlog_parser_iterate(struct lua_State *L)
/* }}} */
static void
-lbox_xlog_parser_close(struct xlog_cursor *cur) {
+lbox_xlog_parser_close(struct xlog_cursor *cur)
+{
if (cur == NULL)
return;
xlog_cursor_close(cur, false);
@@ -277,11 +280,11 @@ lbox_xlog_parser_open_pairs(struct lua_State *L)
const char *filename = luaL_checkstring(L, 1);
/* Construct xlog cursor */
- struct xlog_cursor *cur = (struct xlog_cursor *)calloc(1,
- sizeof(struct xlog_cursor));
+ struct xlog_cursor *cur =
+ (struct xlog_cursor *)calloc(1, sizeof(struct xlog_cursor));
if (cur == NULL) {
- diag_set(OutOfMemory, sizeof(struct xlog_cursor),
- "malloc", "struct xlog_cursor");
+ diag_set(OutOfMemory, sizeof(struct xlog_cursor), "malloc",
+ "struct xlog_cursor");
return luaT_error(L);
}
/* Construct xlog object */
@@ -296,8 +299,7 @@ lbox_xlog_parser_open_pairs(struct lua_State *L)
strncmp(cur->meta.filetype, "VYLOG", 4) != 0) {
char buf[1024];
snprintf(buf, sizeof(buf), "'%.*s' file type",
- (int) strlen(cur->meta.filetype),
- cur->meta.filetype);
+ (int)strlen(cur->meta.filetype), cur->meta.filetype);
diag_set(ClientError, ER_UNSUPPORTED, "xlog reader", buf);
xlog_cursor_close(cur, false);
free(cur);
@@ -314,9 +316,9 @@ lbox_xlog_parser_open_pairs(struct lua_State *L)
return 3;
}
-static const struct luaL_Reg lbox_xlog_parser_lib [] = {
- { "pairs", lbox_xlog_parser_open_pairs },
- { NULL, NULL }
+static const struct luaL_Reg lbox_xlog_parser_lib[] = {
+ { "pairs", lbox_xlog_parser_open_pairs },
+ { NULL, NULL }
};
void
@@ -324,7 +326,9 @@ box_lua_xlog_init(struct lua_State *L)
{
int rc = 0;
/* Get CTypeIDs */
- rc = luaL_cdef(L, "struct xlog_cursor;"); assert(rc == 0); (void) rc;
+ rc = luaL_cdef(L, "struct xlog_cursor;");
+ assert(rc == 0);
+ (void)rc;
CTID_STRUCT_XLOG_CURSOR_REF = luaL_ctypeid(L, "struct xlog_cursor&");
assert(CTID_STRUCT_XLOG_CURSOR_REF != 0);
diff --git a/src/box/memtx_bitset.c b/src/box/memtx_bitset.c
index 2283a47..9006edd 100644
--- a/src/box/memtx_bitset.c
+++ b/src/box/memtx_bitset.c
@@ -69,7 +69,9 @@ struct bitset_hash_entry {
#if UINTPTR_MAX == 0xffffffff
#define mh_hash_key(a, arg) ((uintptr_t)(a))
#else
-#define mh_hash_key(a, arg) ((uint32_t)(((uintptr_t)(a)) >> 33 ^ ((uintptr_t)(a)) ^ ((uintptr_t)(a)) << 11))
+#define mh_hash_key(a, arg) \
+ ((uint32_t)(((uintptr_t)(a)) >> 33 ^ ((uintptr_t)(a)) ^ \
+ ((uintptr_t)(a)) << 11))
#endif
#define mh_hash(a, arg) mh_hash_key((a)->tuple, arg)
#define mh_cmp(a, b, arg) ((a)->tuple != (b)->tuple)
@@ -81,9 +83,7 @@ struct bitset_hash_entry {
#define MH_SOURCE 1
#include <salad/mhash.h>
-enum {
- SPARE_ID_END = 0xFFFFFFFF
-};
+enum { SPARE_ID_END = 0xFFFFFFFF };
static int
memtx_bitset_index_register_tuple(struct memtx_bitset_index *index,
@@ -108,7 +108,7 @@ memtx_bitset_index_register_tuple(struct memtx_bitset_index *index,
if (pos == mh_end(index->tuple_to_id)) {
*(uint32_t *)tuple = index->spare_id;
index->spare_id = id;
- diag_set(OutOfMemory, (ssize_t) pos, "hash", "key");
+ diag_set(OutOfMemory, (ssize_t)pos, "hash", "key");
return -1;
}
return 0;
@@ -118,9 +118,9 @@ static void
memtx_bitset_index_unregister_tuple(struct memtx_bitset_index *index,
struct tuple *tuple)
{
-
uint32_t k = mh_bitset_index_find(index->tuple_to_id, tuple, 0);
- struct bitset_hash_entry *e = mh_bitset_index_node(index->tuple_to_id, k);
+ struct bitset_hash_entry *e =
+ mh_bitset_index_node(index->tuple_to_id, k);
void *mem = matras_get(index->id_to_tuple, e->id);
*(uint32_t *)mem = index->spare_id;
index->spare_id = e->id;
@@ -132,7 +132,8 @@ memtx_bitset_index_tuple_to_value(struct memtx_bitset_index *index,
struct tuple *tuple)
{
uint32_t k = mh_bitset_index_find(index->tuple_to_id, tuple, 0);
- struct bitset_hash_entry *e = mh_bitset_index_node(index->tuple_to_id, k);
+ struct bitset_hash_entry *e =
+ mh_bitset_index_node(index->tuple_to_id, k);
return e->id;
}
@@ -155,7 +156,7 @@ tuple_to_value(struct tuple *tuple)
* https://github.com/tarantool/tarantool/issues/49
*/
/* size_t value = small_ptr_compress(tuple); */
- size_t value = (intptr_t) tuple >> 2;
+ size_t value = (intptr_t)tuple >> 2;
assert(value_to_tuple(value) == tuple);
return value;
}
@@ -164,7 +165,7 @@ static inline struct tuple *
value_to_tuple(size_t value)
{
/* return (struct tuple *) salloc_ptr_from_index(value); */
- return (struct tuple *) (value << 2);
+ return (struct tuple *)(value << 2);
}
#endif /* #ifndef OLD_GOOD_BITSET */
@@ -182,7 +183,7 @@ static_assert(sizeof(struct bitset_index_iterator) <= MEMTX_ITERATOR_SIZE,
static struct bitset_index_iterator *
bitset_index_iterator(struct iterator *it)
{
- return (struct bitset_index_iterator *) it;
+ return (struct bitset_index_iterator *)it;
}
static void
@@ -266,7 +267,7 @@ make_key(const char *field, uint32_t *key_len)
case MP_UINT:
u64key = mp_decode_uint(&field);
*key_len = sizeof(uint64_t);
- return (const char *) &u64key;
+ return (const char *)&u64key;
break;
case MP_STR:
return mp_decode_str(&field, key_len);
@@ -288,18 +289,19 @@ memtx_bitset_index_replace(struct index *base, struct tuple *old_tuple,
assert(!base->def->opts.is_unique);
assert(!base->def->key_def->is_multikey);
assert(old_tuple != NULL || new_tuple != NULL);
- (void) mode;
+ (void)mode;
*result = NULL;
if (old_tuple != NULL) {
#ifndef OLD_GOOD_BITSET
- uint32_t value = memtx_bitset_index_tuple_to_value(index, old_tuple);
+ uint32_t value =
+ memtx_bitset_index_tuple_to_value(index, old_tuple);
#else /* #ifndef OLD_GOOD_BITSET */
size_t value = tuple_to_value(old_tuple);
#endif /* #ifndef OLD_GOOD_BITSET */
if (tt_bitset_index_contains_value(&index->index,
- (size_t) value)) {
+ (size_t)value)) {
*result = old_tuple;
assert(old_tuple != new_tuple);
@@ -311,23 +313,25 @@ memtx_bitset_index_replace(struct index *base, struct tuple *old_tuple,
}
if (new_tuple != NULL) {
- const char *field = tuple_field_by_part(new_tuple,
- base->def->key_def->parts, MULTIKEY_NONE);
+ const char *field = tuple_field_by_part(
+ new_tuple, base->def->key_def->parts, MULTIKEY_NONE);
uint32_t key_len;
const void *key = make_key(field, &key_len);
#ifndef OLD_GOOD_BITSET
if (memtx_bitset_index_register_tuple(index, new_tuple) != 0)
return -1;
- uint32_t value = memtx_bitset_index_tuple_to_value(index, new_tuple);
+ uint32_t value =
+ memtx_bitset_index_tuple_to_value(index, new_tuple);
#else /* #ifndef OLD_GOOD_BITSET */
uint32_t value = tuple_to_value(new_tuple);
#endif /* #ifndef OLD_GOOD_BITSET */
- if (tt_bitset_index_insert(&index->index, key, key_len,
- value) < 0) {
+ if (tt_bitset_index_insert(&index->index, key, key_len, value) <
+ 0) {
#ifndef OLD_GOOD_BITSET
memtx_bitset_index_unregister_tuple(index, new_tuple);
#endif /* #ifndef OLD_GOOD_BITSET */
- diag_set(OutOfMemory, 0, "memtx_bitset_index", "insert");
+ diag_set(OutOfMemory, 0, "memtx_bitset_index",
+ "insert");
return -1;
}
}
@@ -342,13 +346,13 @@ memtx_bitset_index_create_iterator(struct index *base, enum iterator_type type,
struct memtx_engine *memtx = (struct memtx_engine *)base->engine;
assert(part_count == 0 || key != NULL);
- (void) part_count;
+ (void)part_count;
struct bitset_index_iterator *it;
it = mempool_alloc(&memtx->iterator_pool);
if (!it) {
- diag_set(OutOfMemory, sizeof(*it),
- "memtx_bitset_index", "iterator");
+ diag_set(OutOfMemory, sizeof(*it), "memtx_bitset_index",
+ "iterator");
return NULL;
}
@@ -474,7 +478,7 @@ memtx_bitset_index_count(struct index *base, enum iterator_type type,
*/
if (bit_iterator_next(&bit_it) == SIZE_MAX)
return tt_bitset_index_size(&index->index) -
- tt_bitset_index_count(&index->index, bit);
+ tt_bitset_index_count(&index->index, bit);
}
/* Call generic method */
@@ -490,7 +494,7 @@ static const struct index_vtab memtx_bitset_index_vtab = {
/* .update_def = */ generic_index_update_def,
/* .depends_on_pk = */ generic_index_depends_on_pk,
/* .def_change_requires_rebuild = */
- memtx_index_def_change_requires_rebuild,
+ memtx_index_def_change_requires_rebuild,
/* .size = */ memtx_bitset_index_size,
/* .bsize = */ memtx_bitset_index_bsize,
/* .min = */ generic_index_min,
@@ -501,7 +505,7 @@ static const struct index_vtab memtx_bitset_index_vtab = {
/* .replace = */ memtx_bitset_index_replace,
/* .create_iterator = */ memtx_bitset_index_create_iterator,
/* .create_snapshot_iterator = */
- generic_index_create_snapshot_iterator,
+ generic_index_create_snapshot_iterator,
/* .stat = */ generic_index_stat,
/* .compact = */ generic_index_compact,
/* .reset_stat = */ generic_index_reset_stat,
@@ -520,8 +524,8 @@ memtx_bitset_index_new(struct memtx_engine *memtx, struct index_def *def)
struct memtx_bitset_index *index =
(struct memtx_bitset_index *)calloc(1, sizeof(*index));
if (index == NULL) {
- diag_set(OutOfMemory, sizeof(*index),
- "malloc", "struct memtx_bitset_index");
+ diag_set(OutOfMemory, sizeof(*index), "malloc",
+ "struct memtx_bitset_index");
return NULL;
}
if (index_create(&index->base, (struct engine *)memtx,
@@ -532,11 +536,13 @@ memtx_bitset_index_new(struct memtx_engine *memtx, struct index_def *def)
#ifndef OLD_GOOD_BITSET
index->spare_id = SPARE_ID_END;
- index->id_to_tuple = (struct matras *)malloc(sizeof(*index->id_to_tuple));
+ index->id_to_tuple =
+ (struct matras *)malloc(sizeof(*index->id_to_tuple));
if (index->id_to_tuple == NULL)
panic("failed to allocate memtx bitset index");
- matras_create(index->id_to_tuple, MEMTX_EXTENT_SIZE, sizeof(struct tuple *),
- memtx_index_extent_alloc, memtx_index_extent_free, memtx);
+ matras_create(index->id_to_tuple, MEMTX_EXTENT_SIZE,
+ sizeof(struct tuple *), memtx_index_extent_alloc,
+ memtx_index_extent_free, memtx);
index->tuple_to_id = mh_bitset_index_new();
if (index->tuple_to_id == NULL)
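[Editor's note: the rewrapping above (casts losing the space after `)`, continuations realigned, 80-column breaks) is consistent with a style file roughly like the following. This is a hypothetical excerpt for illustration; the actual `src/box/.clang-format` added by patch 2/3 is authoritative and may differ.]

```yaml
# Sketch of options that would produce the diffs in this patch:
BasedOnStyle: LLVM
IndentWidth: 8
UseTab: Always
ColumnLimit: 80
ContinuationIndentWidth: 8
AlignAfterOpenBracket: Align
SpaceAfterCStyleCast: false   # "(struct tuple *)it", not "(struct tuple *) it"
```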
diff --git a/src/box/memtx_engine.c b/src/box/memtx_engine.c
index 8147557..cc8670e 100644
--- a/src/box/memtx_engine.c
+++ b/src/box/memtx_engine.c
@@ -52,7 +52,7 @@
#include "raft.h"
/* sync snapshot every 16MB */
-#define SNAP_SYNC_INTERVAL (1 << 24)
+#define SNAP_SYNC_INTERVAL (1 << 24)
static void
checkpoint_cancel(struct checkpoint *ckpt);
@@ -159,8 +159,8 @@ memtx_engine_recover_snapshot(struct memtx_engine *memtx,
/* Process existing snapshot */
say_info("recovery start");
int64_t signature = vclock_sum(vclock);
- const char *filename = xdir_format_filename(&memtx->snap_dir,
- signature, NONE);
+ const char *filename =
+ xdir_format_filename(&memtx->snap_dir, signature, NONE);
say_info("recovering from `%s'", filename);
struct xlog_cursor cursor;
@@ -170,8 +170,8 @@ memtx_engine_recover_snapshot(struct memtx_engine *memtx,
int rc;
struct xrow_header row;
uint64_t row_count = 0;
- while ((rc = xlog_cursor_next(&cursor, &row,
- memtx->force_recovery)) == 0) {
+ while ((rc = xlog_cursor_next(&cursor, &row, memtx->force_recovery)) ==
+ 0) {
row.lsn = signature;
rc = memtx_engine_recover_snapshot_row(memtx, &row);
if (rc < 0) {
@@ -182,8 +182,7 @@ memtx_engine_recover_snapshot(struct memtx_engine *memtx,
}
++row_count;
if (row_count % 100000 == 0) {
- say_info("%.1fM rows processed",
- row_count / 1000000.);
+ say_info("%.1fM rows processed", row_count / 1000000.);
fiber_yield_timeout(0);
}
}
@@ -223,7 +222,7 @@ memtx_engine_recover_snapshot_row(struct memtx_engine *memtx,
if (row->type == IPROTO_RAFT)
return memtx_engine_recover_raft(row);
diag_set(ClientError, ER_UNKNOWN_REQUEST_TYPE,
- (uint32_t) row->type);
+ (uint32_t)row->type);
return -1;
}
int rc;
@@ -287,8 +286,8 @@ memtx_engine_begin_initial_recovery(struct engine *engine,
* recovery mode. Enable all keys on start, to detect and
* discard duplicates in the snapshot.
*/
- memtx->state = (memtx->force_recovery ?
- MEMTX_OK : MEMTX_INITIAL_RECOVERY);
+ memtx->state =
+ (memtx->force_recovery ? MEMTX_OK : MEMTX_INITIAL_RECOVERY);
return 0;
}
@@ -365,7 +364,8 @@ memtx_engine_prepare(struct engine *engine, struct txn *txn)
{
(void)engine;
struct txn_stmt *stmt;
- stailq_foreach_entry(stmt, &txn->stmts, next) {
+ stailq_foreach_entry(stmt, &txn->stmts, next)
+ {
if (stmt->add_story != NULL || stmt->del_story != NULL)
memtx_tx_history_prepare_stmt(stmt);
}
@@ -377,7 +377,8 @@ memtx_engine_commit(struct engine *engine, struct txn *txn)
{
(void)engine;
struct txn_stmt *stmt;
- stailq_foreach_entry(stmt, &txn->stmts, next) {
+ stailq_foreach_entry(stmt, &txn->stmts, next)
+ {
if (stmt->add_story != NULL || stmt->del_story != NULL) {
ssize_t bsize = memtx_tx_history_commit_stmt(stmt);
assert(stmt->space->engine == engine);
@@ -486,9 +487,9 @@ checkpoint_write_row(struct xlog *l, struct xrow_header *row)
return -1;
if ((l->rows + l->tx_rows) % 100000 == 0)
- say_crit("%.1fM rows written", (l->rows + l->tx_rows) / 1000000.0);
+ say_crit("%.1fM rows written",
+ (l->rows + l->tx_rows) / 1000000.0);
return 0;
-
}
static int
@@ -611,8 +612,8 @@ checkpoint_add_space(struct space *sp, void *data)
struct checkpoint *ckpt = (struct checkpoint *)data;
struct checkpoint_entry *entry = malloc(sizeof(*entry));
if (entry == NULL) {
- diag_set(OutOfMemory, sizeof(*entry),
- "malloc", "struct checkpoint_entry");
+ diag_set(OutOfMemory, sizeof(*entry), "malloc",
+ "struct checkpoint_entry");
return -1;
}
rlist_add_tail_entry(&ckpt->entries, entry, link);
@@ -672,7 +673,8 @@ checkpoint_f(va_list ap)
struct snapshot_iterator *it = entry->iterator;
while ((rc = it->next(it, &data, &size)) == 0 && data != NULL) {
if (checkpoint_write_tuple(&snap, entry->space_id,
- entry->group_id, data, size) != 0)
+ entry->group_id, data,
+ size) != 0)
goto fail;
}
if (rc != 0)
@@ -694,7 +696,7 @@ fail:
static int
memtx_engine_begin_checkpoint(struct engine *engine, bool is_scheduled)
{
- (void) is_scheduled;
+ (void)is_scheduled;
struct memtx_engine *memtx = (struct memtx_engine *)engine;
assert(memtx->checkpoint == NULL);
@@ -712,8 +714,7 @@ memtx_engine_begin_checkpoint(struct engine *engine, bool is_scheduled)
}
static int
-memtx_engine_wait_checkpoint(struct engine *engine,
- const struct vclock *vclock)
+memtx_engine_wait_checkpoint(struct engine *engine, const struct vclock *vclock)
{
struct memtx_engine *memtx = (struct memtx_engine *)engine;
@@ -728,8 +729,8 @@ memtx_engine_wait_checkpoint(struct engine *engine,
}
vclock_copy(&memtx->checkpoint->vclock, vclock);
- if (cord_costart(&memtx->checkpoint->cord, "snapshot",
- checkpoint_f, memtx->checkpoint)) {
+ if (cord_costart(&memtx->checkpoint->cord, "snapshot", checkpoint_f,
+ memtx->checkpoint)) {
return -1;
}
memtx->checkpoint->waiting_for_snap_thread = true;
@@ -747,7 +748,7 @@ static void
memtx_engine_commit_checkpoint(struct engine *engine,
const struct vclock *vclock)
{
- (void) vclock;
+ (void)vclock;
struct memtx_engine *memtx = (struct memtx_engine *)engine;
/* beginCheckpoint() must have been done */
@@ -796,11 +797,10 @@ memtx_engine_abort_checkpoint(struct engine *engine)
}
/** Remove garbage .inprogress file. */
- const char *filename =
- xdir_format_filename(&memtx->checkpoint->dir,
- vclock_sum(&memtx->checkpoint->vclock),
- INPROGRESS);
- (void) coio_unlink(filename);
+ const char *filename = xdir_format_filename(
+ &memtx->checkpoint->dir, vclock_sum(&memtx->checkpoint->vclock),
+ INPROGRESS);
+ (void)coio_unlink(filename);
checkpoint_delete(memtx->checkpoint);
memtx->checkpoint = NULL;
@@ -851,8 +851,8 @@ memtx_join_add_space(struct space *space, void *arg)
return 0;
struct memtx_join_entry *entry = malloc(sizeof(*entry));
if (entry == NULL) {
- diag_set(OutOfMemory, sizeof(*entry),
- "malloc", "struct memtx_join_entry");
+ diag_set(OutOfMemory, sizeof(*entry), "malloc",
+ "struct memtx_join_entry");
return -1;
}
entry->space_id = space_id(space);
@@ -871,8 +871,8 @@ memtx_engine_prepare_join(struct engine *engine, void **arg)
(void)engine;
struct memtx_join_ctx *ctx = malloc(sizeof(*ctx));
if (ctx == NULL) {
- diag_set(OutOfMemory, sizeof(*ctx),
- "malloc", "struct memtx_join_ctx");
+ diag_set(OutOfMemory, sizeof(*ctx), "malloc",
+ "struct memtx_join_ctx");
return -1;
}
rlist_create(&ctx->entries);
@@ -1018,8 +1018,8 @@ memtx_engine_run_gc(struct memtx_engine *memtx, bool *stop)
if (*stop)
return;
- struct memtx_gc_task *task = stailq_first_entry(&memtx->gc_queue,
- struct memtx_gc_task, link);
+ struct memtx_gc_task *task = stailq_first_entry(
+ &memtx->gc_queue, struct memtx_gc_task, link);
bool task_done;
task->vtab->run(task, &task_done);
if (task_done) {
@@ -1056,8 +1056,8 @@ memtx_engine_new(const char *snap_dirname, bool force_recovery,
{
struct memtx_engine *memtx = calloc(1, sizeof(*memtx));
if (memtx == NULL) {
- diag_set(OutOfMemory, sizeof(*memtx),
- "malloc", "struct memtx_engine");
+ diag_set(OutOfMemory, sizeof(*memtx), "malloc",
+ "struct memtx_engine");
return NULL;
}
@@ -1081,8 +1081,8 @@ memtx_engine_new(const char *snap_dirname, bool force_recovery,
int64_t snap_signature = xdir_last_vclock(&memtx->snap_dir, NULL);
if (snap_signature >= 0) {
struct xlog_cursor cursor;
- if (xdir_open_cursor(&memtx->snap_dir,
- snap_signature, &cursor) != 0)
+ if (xdir_open_cursor(&memtx->snap_dir, snap_signature,
+ &cursor) != 0)
goto fail;
INSTANCE_UUID = cursor.meta.instance_uuid;
xlog_cursor_close(&cursor, false);
@@ -1109,8 +1109,8 @@ memtx_engine_new(const char *snap_dirname, bool force_recovery,
tuple_arena_create(&memtx->arena, &memtx->quota, tuple_arena_max_size,
SLAB_SIZE, dontdump, "memtx");
slab_cache_create(&memtx->slab_cache, &memtx->arena);
- small_alloc_create(&memtx->alloc, &memtx->slab_cache,
- objsize_min, alloc_factor);
+ small_alloc_create(&memtx->alloc, &memtx->slab_cache, objsize_min,
+ alloc_factor);
/* Initialize index extent allocator. */
slab_cache_create(&memtx->index_slab_cache, &memtx->arena);
@@ -1139,8 +1139,7 @@ fail:
}
void
-memtx_engine_schedule_gc(struct memtx_engine *memtx,
- struct memtx_gc_task *task)
+memtx_engine_schedule_gc(struct memtx_engine *memtx, struct memtx_gc_task *task)
{
stailq_add_tail_entry(&memtx->gc_queue, task, link);
fiber_wakeup(memtx->gc_fiber);
@@ -1175,7 +1174,8 @@ memtx_enter_delayed_free_mode(struct memtx_engine *memtx)
{
memtx->snapshot_version++;
if (memtx->delayed_free_mode++ == 0)
- small_alloc_setopt(&memtx->alloc, SMALL_DELAYED_FREE_MODE, true);
+ small_alloc_setopt(&memtx->alloc, SMALL_DELAYED_FREE_MODE,
+ true);
}
void
@@ -1183,7 +1183,8 @@ memtx_leave_delayed_free_mode(struct memtx_engine *memtx)
{
assert(memtx->delayed_free_mode > 0);
if (--memtx->delayed_free_mode == 0)
- small_alloc_setopt(&memtx->alloc, SMALL_DELAYED_FREE_MODE, false);
+ small_alloc_setopt(&memtx->alloc, SMALL_DELAYED_FREE_MODE,
+ false);
}
struct tuple *
@@ -1244,7 +1245,7 @@ memtx_tuple_new(struct tuple_format *format, const char *data, const char *end)
tuple_format_ref(format);
tuple->data_offset = data_offset;
tuple->is_dirty = false;
- char *raw = (char *) tuple + tuple->data_offset;
+ char *raw = (char *)tuple + tuple->data_offset;
field_map_build(&builder, raw - field_map_size);
memcpy(raw, data, tuple_len);
say_debug("%s(%zu) = %p", __func__, tuple_len, memtx_tuple);
@@ -1276,8 +1277,7 @@ metmx_tuple_chunk_delete(struct tuple_format *format, const char *data)
{
struct memtx_engine *memtx = (struct memtx_engine *)format->engine;
struct tuple_chunk *tuple_chunk =
- container_of((const char (*)[0])data,
- struct tuple_chunk, data);
+ container_of((const char(*)[0])data, struct tuple_chunk, data);
uint32_t sz = tuple_chunk_sz(tuple_chunk->data_sz);
smfree(&memtx->alloc, tuple_chunk, sz);
}
@@ -1289,7 +1289,7 @@ memtx_tuple_chunk_new(struct tuple_format *format, struct tuple *tuple,
struct memtx_engine *memtx = (struct memtx_engine *)format->engine;
uint32_t sz = tuple_chunk_sz(data_sz);
struct tuple_chunk *tuple_chunk =
- (struct tuple_chunk *) smalloc(&memtx->alloc, sz);
+ (struct tuple_chunk *)smalloc(&memtx->alloc, sz);
if (tuple == NULL) {
diag_set(OutOfMemory, sz, "smalloc", "tuple");
return NULL;
@@ -1322,8 +1322,7 @@ memtx_index_extent_alloc(void *ctx)
}
ERROR_INJECT(ERRINJ_INDEX_ALLOC, {
/* same error as in mempool_alloc */
- diag_set(OutOfMemory, MEMTX_EXTENT_SIZE,
- "mempool", "new slab");
+ diag_set(OutOfMemory, MEMTX_EXTENT_SIZE, "mempool", "new slab");
return NULL;
});
void *ret;
@@ -1334,8 +1333,7 @@ memtx_index_extent_alloc(void *ctx)
break;
}
if (ret == NULL)
- diag_set(OutOfMemory, MEMTX_EXTENT_SIZE,
- "mempool", "new slab");
+ diag_set(OutOfMemory, MEMTX_EXTENT_SIZE, "mempool", "new slab");
return ret;
}
@@ -1358,8 +1356,7 @@ memtx_index_extent_reserve(struct memtx_engine *memtx, int num)
{
ERROR_INJECT(ERRINJ_INDEX_ALLOC, {
/* same error as in mempool_alloc */
- diag_set(OutOfMemory, MEMTX_EXTENT_SIZE,
- "mempool", "new slab");
+ diag_set(OutOfMemory, MEMTX_EXTENT_SIZE, "mempool", "new slab");
return -1;
});
struct mempool *pool = &memtx->index_extent_pool;
@@ -1372,8 +1369,8 @@ memtx_index_extent_reserve(struct memtx_engine *memtx, int num)
break;
}
if (ext == NULL) {
- diag_set(OutOfMemory, MEMTX_EXTENT_SIZE,
- "mempool", "new slab");
+ diag_set(OutOfMemory, MEMTX_EXTENT_SIZE, "mempool",
+ "new slab");
return -1;
}
*(void **)ext = memtx->reserved_extents;
diff --git a/src/box/memtx_engine.h b/src/box/memtx_engine.h
index 8b380bf..a033055 100644
--- a/src/box/memtx_engine.h
+++ b/src/box/memtx_engine.h
@@ -211,9 +211,8 @@ memtx_engine_schedule_gc(struct memtx_engine *memtx,
struct memtx_engine *
memtx_engine_new(const char *snap_dirname, bool force_recovery,
- uint64_t tuple_arena_max_size,
- uint32_t objsize_min, bool dontdump,
- float alloc_factor);
+ uint64_t tuple_arena_max_size, uint32_t objsize_min,
+ bool dontdump, float alloc_factor);
int
memtx_engine_recover_snapshot(struct memtx_engine *memtx,
@@ -256,10 +255,7 @@ memtx_tuple_delete(struct tuple_format *format, struct tuple *tuple);
/** Tuple format vtab for memtx engine. */
extern struct tuple_format_vtab memtx_tuple_format_vtab;
-enum {
- MEMTX_EXTENT_SIZE = 16 * 1024,
- MEMTX_SLAB_SIZE = 4 * 1024 * 1024
-};
+enum { MEMTX_EXTENT_SIZE = 16 * 1024, MEMTX_SLAB_SIZE = 4 * 1024 * 1024 };
/**
* Allocate a block of size MEMTX_EXTENT_SIZE for memtx index
@@ -297,14 +293,12 @@ memtx_index_def_change_requires_rebuild(struct index *index,
static inline struct memtx_engine *
memtx_engine_new_xc(const char *snap_dirname, bool force_recovery,
- uint64_t tuple_arena_max_size,
- uint32_t objsize_min, bool dontdump,
- float alloc_factor)
+ uint64_t tuple_arena_max_size, uint32_t objsize_min,
+ bool dontdump, float alloc_factor)
{
struct memtx_engine *memtx;
memtx = memtx_engine_new(snap_dirname, force_recovery,
- tuple_arena_max_size,
- objsize_min, dontdump,
+ tuple_arena_max_size, objsize_min, dontdump,
alloc_factor);
if (memtx == NULL)
diag_raise();
diff --git a/src/box/memtx_hash.c b/src/box/memtx_hash.c
index ed4dba9..a8b6fb2 100644
--- a/src/box/memtx_hash.c
+++ b/src/box/memtx_hash.c
@@ -46,16 +46,17 @@ static inline bool
memtx_hash_equal(struct tuple *tuple_a, struct tuple *tuple_b,
struct key_def *key_def)
{
- return tuple_compare(tuple_a, HINT_NONE,
- tuple_b, HINT_NONE, key_def) == 0;
+ return tuple_compare(tuple_a, HINT_NONE, tuple_b, HINT_NONE, key_def) ==
+ 0;
}
static inline bool
memtx_hash_equal_key(struct tuple *tuple, const char *key,
struct key_def *key_def)
{
- return tuple_compare_with_key(tuple, HINT_NONE, key, key_def->part_count,
- HINT_NONE, key_def) == 0;
+ return tuple_compare_with_key(tuple, HINT_NONE, key,
+ key_def->part_count, HINT_NONE,
+ key_def) == 0;
}
#define LIGHT_NAME _index
@@ -98,7 +99,7 @@ static void
hash_iterator_free(struct iterator *iterator)
{
assert(iterator->free == hash_iterator_free);
- struct hash_iterator *it = (struct hash_iterator *) iterator;
+ struct hash_iterator *it = (struct hash_iterator *)iterator;
mempool_free(it->pool, it);
}
@@ -106,10 +107,10 @@ static int
hash_iterator_ge_base(struct iterator *ptr, struct tuple **ret)
{
assert(ptr->free == hash_iterator_free);
- struct hash_iterator *it = (struct hash_iterator *) ptr;
+ struct hash_iterator *it = (struct hash_iterator *)ptr;
struct memtx_hash_index *index = (struct memtx_hash_index *)ptr->index;
- struct tuple **res = light_index_iterator_get_and_next(&index->hash_table,
- &it->iterator);
+ struct tuple **res = light_index_iterator_get_and_next(
+ &index->hash_table, &it->iterator);
*ret = res != NULL ? *res : NULL;
return 0;
}
@@ -119,10 +120,10 @@ hash_iterator_gt_base(struct iterator *ptr, struct tuple **ret)
{
assert(ptr->free == hash_iterator_free);
ptr->next = hash_iterator_ge_base;
- struct hash_iterator *it = (struct hash_iterator *) ptr;
+ struct hash_iterator *it = (struct hash_iterator *)ptr;
struct memtx_hash_index *index = (struct memtx_hash_index *)ptr->index;
- struct tuple **res = light_index_iterator_get_and_next(&index->hash_table,
- &it->iterator);
+ struct tuple **res = light_index_iterator_get_and_next(
+ &index->hash_table, &it->iterator);
if (res != NULL)
res = light_index_iterator_get_and_next(&index->hash_table,
&it->iterator);
@@ -130,26 +131,27 @@ hash_iterator_gt_base(struct iterator *ptr, struct tuple **ret)
return 0;
}
-#define WRAP_ITERATOR_METHOD(name) \
-static int \
-name(struct iterator *iterator, struct tuple **ret) \
-{ \
- struct txn *txn = in_txn(); \
- struct space *space = space_by_id(iterator->space_id); \
- bool is_rw = txn != NULL; \
- uint32_t iid = iterator->index->def->iid; \
- bool is_first = true; \
- do { \
- int rc = is_first ? name##_base(iterator, ret) \
- : hash_iterator_ge_base(iterator, ret); \
- if (rc != 0 || *ret == NULL) \
- return rc; \
- is_first = false; \
- *ret = memtx_tx_tuple_clarify(txn, space, *ret, iid, 0, is_rw); \
- } while (*ret == NULL); \
- return 0; \
-} \
-struct forgot_to_add_semicolon
+#define WRAP_ITERATOR_METHOD(name) \
+ static int name(struct iterator *iterator, struct tuple **ret) \
+ { \
+ struct txn *txn = in_txn(); \
+ struct space *space = space_by_id(iterator->space_id); \
+ bool is_rw = txn != NULL; \
+ uint32_t iid = iterator->index->def->iid; \
+ bool is_first = true; \
+ do { \
+ int rc = is_first ? \
+ name##_base(iterator, ret) : \
+ hash_iterator_ge_base(iterator, ret); \
+ if (rc != 0 || *ret == NULL) \
+ return rc; \
+ is_first = false; \
+ *ret = memtx_tx_tuple_clarify(txn, space, *ret, iid, \
+ 0, is_rw); \
+ } while (*ret == NULL); \
+ return 0; \
+ } \
+ struct forgot_to_add_semicolon
WRAP_ITERATOR_METHOD(hash_iterator_ge);
WRAP_ITERATOR_METHOD(hash_iterator_gt);
@@ -173,8 +175,8 @@ hash_iterator_eq(struct iterator *it, struct tuple **ret)
struct txn *txn = in_txn();
struct space *sp = space_by_id(it->space_id);
bool is_rw = txn != NULL;
- *ret = memtx_tx_tuple_clarify(txn, sp, *ret, it->index->def->iid,
- 0, is_rw);
+ *ret = memtx_tx_tuple_clarify(txn, sp, *ret, it->index->def->iid, 0,
+ is_rw);
return 0;
}
@@ -202,8 +204,8 @@ memtx_hash_index_gc_run(struct memtx_gc_task *task, bool *done)
enum { YIELD_LOOPS = 10 };
#endif
- struct memtx_hash_index *index = container_of(task,
- struct memtx_hash_index, gc_task);
+ struct memtx_hash_index *index =
+ container_of(task, struct memtx_hash_index, gc_task);
struct light_index_core *hash = &index->hash_table;
struct light_index_iterator *itr = &index->gc_iterator;
@@ -222,8 +224,8 @@ memtx_hash_index_gc_run(struct memtx_gc_task *task, bool *done)
static void
memtx_hash_index_gc_free(struct memtx_gc_task *task)
{
- struct memtx_hash_index *index = container_of(task,
- struct memtx_hash_index, gc_task);
+ struct memtx_hash_index *index =
+ container_of(task, struct memtx_hash_index, gc_task);
memtx_hash_index_free(index);
}
@@ -275,7 +277,7 @@ memtx_hash_index_bsize(struct index *base)
{
struct memtx_hash_index *index = (struct memtx_hash_index *)base;
return matras_extent_count(&index->hash_table.mtable) *
- MEMTX_EXTENT_SIZE;
+ MEMTX_EXTENT_SIZE;
}
static int
@@ -306,14 +308,14 @@ memtx_hash_index_count(struct index *base, enum iterator_type type,
}
static int
-memtx_hash_index_get(struct index *base, const char *key,
- uint32_t part_count, struct tuple **result)
+memtx_hash_index_get(struct index *base, const char *key, uint32_t part_count,
+ struct tuple **result)
{
struct memtx_hash_index *index = (struct memtx_hash_index *)base;
assert(base->def->opts.is_unique &&
part_count == base->def->key_def->part_count);
- (void) part_count;
+ (void)part_count;
struct space *space = space_by_id(base->def->space_id);
*result = NULL;
@@ -324,8 +326,8 @@ memtx_hash_index_get(struct index *base, const char *key,
uint32_t iid = base->def->iid;
struct txn *txn = in_txn();
bool is_rw = txn != NULL;
- *result = memtx_tx_tuple_clarify(txn, space, tuple, iid,
- 0, is_rw);
+ *result = memtx_tx_tuple_clarify(txn, space, tuple, iid, 0,
+ is_rw);
}
return 0;
}
@@ -346,8 +348,7 @@ memtx_hash_index_replace(struct index *base, struct tuple *old_tuple,
if (pos == light_index_end)
pos = light_index_insert(hash_table, h, new_tuple);
- ERROR_INJECT(ERRINJ_INDEX_ALLOC,
- {
+ ERROR_INJECT(ERRINJ_INDEX_ALLOC, {
light_index_delete(hash_table, pos);
pos = light_index_end;
});
@@ -357,18 +358,20 @@ memtx_hash_index_replace(struct index *base, struct tuple *old_tuple,
"hash_table", "key");
return -1;
}
- uint32_t errcode = replace_check_dup(old_tuple,
- dup_tuple, mode);
+ uint32_t errcode =
+ replace_check_dup(old_tuple, dup_tuple, mode);
if (errcode) {
light_index_delete(hash_table, pos);
if (dup_tuple) {
- uint32_t pos = light_index_insert(hash_table, h, dup_tuple);
+ uint32_t pos = light_index_insert(hash_table, h,
+ dup_tuple);
if (pos == light_index_end) {
panic("Failed to allocate memory in "
"recover of int hash_table");
}
}
- struct space *sp = space_cache_find(base->def->space_id);
+ struct space *sp =
+ space_cache_find(base->def->space_id);
if (sp != NULL)
diag_set(ClientError, errcode, base->def->name,
space_name(sp));
@@ -384,7 +387,8 @@ memtx_hash_index_replace(struct index *base, struct tuple *old_tuple,
if (old_tuple) {
uint32_t h = tuple_hash(old_tuple, base->def->key_def);
int res = light_index_delete_value(hash_table, h, old_tuple);
- assert(res == 0); (void) res;
+ assert(res == 0);
+ (void)res;
}
*result = old_tuple;
return 0;
@@ -413,11 +417,13 @@ memtx_hash_index_create_iterator(struct index *base, enum iterator_type type,
switch (type) {
case ITER_GT:
if (part_count != 0) {
- light_index_iterator_key(&index->hash_table, &it->iterator,
- key_hash(key, base->def->key_def), key);
+ light_index_iterator_key(
+ &index->hash_table, &it->iterator,
+ key_hash(key, base->def->key_def), key);
it->base.next = hash_iterator_gt;
} else {
- light_index_iterator_begin(&index->hash_table, &it->iterator);
+ light_index_iterator_begin(&index->hash_table,
+ &it->iterator);
it->base.next = hash_iterator_ge;
}
break;
@@ -428,7 +434,8 @@ memtx_hash_index_create_iterator(struct index *base, enum iterator_type type,
case ITER_EQ:
assert(part_count > 0);
light_index_iterator_key(&index->hash_table, &it->iterator,
- key_hash(key, base->def->key_def), key);
+ key_hash(key, base->def->key_def),
+ key);
it->base.next = hash_iterator_eq;
break;
default:
@@ -457,9 +464,9 @@ hash_snapshot_iterator_free(struct snapshot_iterator *iterator)
{
assert(iterator->free == hash_snapshot_iterator_free);
struct hash_snapshot_iterator *it =
- (struct hash_snapshot_iterator *) iterator;
- memtx_leave_delayed_free_mode((struct memtx_engine *)
- it->index->base.engine);
+ (struct hash_snapshot_iterator *)iterator;
+ memtx_leave_delayed_free_mode(
+ (struct memtx_engine *)it->index->base.engine);
light_index_iterator_destroy(&it->index->hash_table, &it->iterator);
index_unref(&it->index->base);
memtx_tx_snapshot_cleaner_destroy(&it->cleaner);
@@ -477,13 +484,12 @@ hash_snapshot_iterator_next(struct snapshot_iterator *iterator,
{
assert(iterator->free == hash_snapshot_iterator_free);
struct hash_snapshot_iterator *it =
- (struct hash_snapshot_iterator *) iterator;
+ (struct hash_snapshot_iterator *)iterator;
struct light_index_core *hash_table = &it->index->hash_table;
while (true) {
- struct tuple **res =
- light_index_iterator_get_and_next(hash_table,
- &it->iterator);
+ struct tuple **res = light_index_iterator_get_and_next(
+ hash_table, &it->iterator);
if (res == NULL) {
*data = NULL;
return 0;
@@ -509,8 +515,8 @@ static struct snapshot_iterator *
memtx_hash_index_create_snapshot_iterator(struct index *base)
{
struct memtx_hash_index *index = (struct memtx_hash_index *)base;
- struct hash_snapshot_iterator *it = (struct hash_snapshot_iterator *)
- calloc(1, sizeof(*it));
+ struct hash_snapshot_iterator *it =
+ (struct hash_snapshot_iterator *)calloc(1, sizeof(*it));
if (it == NULL) {
diag_set(OutOfMemory, sizeof(struct hash_snapshot_iterator),
"memtx_hash_index", "iterator");
@@ -524,7 +530,7 @@ memtx_hash_index_create_snapshot_iterator(struct index *base)
light_index_iterator_begin(&index->hash_table, &it->iterator);
light_index_iterator_freeze(&index->hash_table, &it->iterator);
memtx_enter_delayed_free_mode((struct memtx_engine *)base->engine);
- return (struct snapshot_iterator *) it;
+ return (struct snapshot_iterator *)it;
}
static const struct index_vtab memtx_hash_index_vtab = {
@@ -536,7 +542,7 @@ static const struct index_vtab memtx_hash_index_vtab = {
/* .update_def = */ memtx_hash_index_update_def,
/* .depends_on_pk = */ generic_index_depends_on_pk,
/* .def_change_requires_rebuild = */
- memtx_index_def_change_requires_rebuild,
+ memtx_index_def_change_requires_rebuild,
/* .size = */ memtx_hash_index_size,
/* .bsize = */ memtx_hash_index_bsize,
/* .min = */ generic_index_min,
@@ -547,7 +553,7 @@ static const struct index_vtab memtx_hash_index_vtab = {
/* .replace = */ memtx_hash_index_replace,
/* .create_iterator = */ memtx_hash_index_create_iterator,
/* .create_snapshot_iterator = */
- memtx_hash_index_create_snapshot_iterator,
+ memtx_hash_index_create_snapshot_iterator,
/* .stat = */ generic_index_stat,
/* .compact = */ generic_index_compact,
/* .reset_stat = */ generic_index_reset_stat,
@@ -563,8 +569,8 @@ memtx_hash_index_new(struct memtx_engine *memtx, struct index_def *def)
struct memtx_hash_index *index =
(struct memtx_hash_index *)calloc(1, sizeof(*index));
if (index == NULL) {
- diag_set(OutOfMemory, sizeof(*index),
- "malloc", "struct memtx_hash_index");
+ diag_set(OutOfMemory, sizeof(*index), "malloc",
+ "struct memtx_hash_index");
return NULL;
}
if (index_create(&index->base, (struct engine *)memtx,
diff --git a/src/box/memtx_rtree.c b/src/box/memtx_rtree.c
index b734daa..911da37 100644
--- a/src/box/memtx_rtree.c
+++ b/src/box/memtx_rtree.c
@@ -71,8 +71,8 @@ mp_decode_num(const char **data, uint32_t fieldno, double *ret)
* There must be <count> or <count * 2> numbers in that string.
*/
static inline int
-mp_decode_rect(struct rtree_rect *rect, unsigned dimension,
- const char *mp, unsigned count, const char *what)
+mp_decode_rect(struct rtree_rect *rect, unsigned dimension, const char *mp,
+ unsigned count, const char *what)
{
coord_t c = 0;
if (count == dimension) { /* point */
@@ -94,8 +94,8 @@ mp_decode_rect(struct rtree_rect *rect, unsigned dimension,
rect->coords[i * 2 + 1] = c;
}
} else {
- diag_set(ClientError, ER_RTREE_RECT,
- what, dimension, dimension * 2);
+ diag_set(ClientError, ER_RTREE_RECT, what, dimension,
+ dimension * 2);
return -1;
}
rtree_rect_normalize(rect, dimension);
@@ -124,8 +124,8 @@ extract_rectangle(struct rtree_rect *rect, struct tuple *tuple,
{
assert(index_def->key_def->part_count == 1);
assert(!index_def->key_def->is_multikey);
- const char *elems = tuple_field_by_part(tuple,
- index_def->key_def->parts, MULTIKEY_NONE);
+ const char *elems = tuple_field_by_part(
+ tuple, index_def->key_def->parts, MULTIKEY_NONE);
unsigned dimension = index_def->opts.dimension;
uint32_t count = mp_decode_array(&elems);
return mp_decode_rect(rect, dimension, elems, count, "Field");
@@ -133,8 +133,8 @@ extract_rectangle(struct rtree_rect *rect, struct tuple *tuple,
/* {{{ MemtxRTree Iterators ****************************************/
struct index_rtree_iterator {
- struct iterator base;
- struct rtree_iterator impl;
+ struct iterator base;
+ struct rtree_iterator impl;
/** Memory pool the iterator was allocated from. */
struct mempool *pool;
};
@@ -152,7 +152,7 @@ index_rtree_iterator_next(struct iterator *i, struct tuple **ret)
{
struct index_rtree_iterator *itr = (struct index_rtree_iterator *)i;
do {
- *ret = (struct tuple *) rtree_iterator_next(&itr->impl);
+ *ret = (struct tuple *)rtree_iterator_next(&itr->impl);
if (*ret == NULL)
break;
uint32_t iid = i->index->def->iid;
@@ -186,7 +186,6 @@ memtx_rtree_index_def_change_requires_rebuild(struct index *index,
index->def->opts.dimension != new_def->opts.dimension)
return true;
return false;
-
}
static ssize_t
@@ -213,8 +212,8 @@ memtx_rtree_index_count(struct index *base, enum iterator_type type,
}
static int
-memtx_rtree_index_get(struct index *base, const char *key,
- uint32_t part_count, struct tuple **result)
+memtx_rtree_index_get(struct index *base, const char *key, uint32_t part_count,
+ struct tuple **result)
{
struct memtx_rtree_index *index = (struct memtx_rtree_index *)base;
struct rtree_iterator iterator;
@@ -230,16 +229,16 @@ memtx_rtree_index_get(struct index *base, const char *key,
return 0;
}
do {
- struct tuple *tuple = (struct tuple *)
- rtree_iterator_next(&iterator);
+ struct tuple *tuple =
+ (struct tuple *)rtree_iterator_next(&iterator);
if (tuple == NULL)
break;
uint32_t iid = base->def->iid;
struct txn *txn = in_txn();
struct space *space = space_by_id(base->def->space_id);
bool is_rw = txn != NULL;
- *result = memtx_tx_tuple_clarify(txn, space, tuple, iid,
- 0, is_rw);
+ *result = memtx_tx_tuple_clarify(txn, space, tuple, iid, 0,
+ is_rw);
} while (*result == NULL);
rtree_iterator_destroy(&iterator);
return 0;
@@ -283,11 +282,12 @@ memtx_rtree_index_reserve(struct index *base, uint32_t size_hint)
return -1;
});
struct memtx_engine *memtx = (struct memtx_engine *)base->engine;
- return memtx_index_extent_reserve(memtx, RESERVE_EXTENTS_BEFORE_REPLACE);
+ return memtx_index_extent_reserve(memtx,
+ RESERVE_EXTENTS_BEFORE_REPLACE);
}
static struct iterator *
-memtx_rtree_index_create_iterator(struct index *base, enum iterator_type type,
+memtx_rtree_index_create_iterator(struct index *base, enum iterator_type type,
const char *key, uint32_t part_count)
{
struct memtx_rtree_index *index = (struct memtx_rtree_index *)base;
@@ -300,8 +300,8 @@ memtx_rtree_index_create_iterator(struct index *base, enum iterator_type type,
"empty keys for requested iterator type");
return NULL;
}
- } else if (mp_decode_rect_from_key(&rect, index->dimension,
- key, part_count)) {
+ } else if (mp_decode_rect_from_key(&rect, index->dimension, key,
+ part_count)) {
return NULL;
}
@@ -337,7 +337,8 @@ memtx_rtree_index_create_iterator(struct index *base, enum iterator_type type,
return NULL;
}
- struct index_rtree_iterator *it = mempool_alloc(&memtx->rtree_iterator_pool);
+ struct index_rtree_iterator *it =
+ mempool_alloc(&memtx->rtree_iterator_pool);
if (it == NULL) {
diag_set(OutOfMemory, sizeof(struct index_rtree_iterator),
"memtx_rtree_index", "iterator");
@@ -368,7 +369,7 @@ static const struct index_vtab memtx_rtree_index_vtab = {
/* .update_def = */ generic_index_update_def,
/* .depends_on_pk = */ generic_index_depends_on_pk,
/* .def_change_requires_rebuild = */
- memtx_rtree_index_def_change_requires_rebuild,
+ memtx_rtree_index_def_change_requires_rebuild,
/* .size = */ memtx_rtree_index_size,
/* .bsize = */ memtx_rtree_index_bsize,
/* .min = */ generic_index_min,
@@ -379,7 +380,7 @@ static const struct index_vtab memtx_rtree_index_vtab = {
/* .replace = */ memtx_rtree_index_replace,
/* .create_iterator = */ memtx_rtree_index_create_iterator,
/* .create_snapshot_iterator = */
- generic_index_create_snapshot_iterator,
+ generic_index_create_snapshot_iterator,
/* .stat = */ generic_index_stat,
/* .compact = */ generic_index_compact,
/* .reset_stat = */ generic_index_reset_stat,
@@ -401,13 +402,15 @@ memtx_rtree_index_new(struct memtx_engine *memtx, struct index_def *def)
def->opts.dimension > RTREE_MAX_DIMENSION) {
diag_set(UnsupportedIndexFeature, def,
tt_sprintf("dimension (%lld): must belong to "
- "range [%u, %u]", def->opts.dimension,
- 1, RTREE_MAX_DIMENSION));
+ "range [%u, %u]",
+ def->opts.dimension, 1,
+ RTREE_MAX_DIMENSION));
return NULL;
}
assert((int)RTREE_EUCLID == (int)RTREE_INDEX_DISTANCE_TYPE_EUCLID);
- assert((int)RTREE_MANHATTAN == (int)RTREE_INDEX_DISTANCE_TYPE_MANHATTAN);
+ assert((int)RTREE_MANHATTAN ==
+ (int)RTREE_INDEX_DISTANCE_TYPE_MANHATTAN);
enum rtree_distance_type distance_type =
(enum rtree_distance_type)def->opts.distance;
@@ -419,8 +422,8 @@ memtx_rtree_index_new(struct memtx_engine *memtx, struct index_def *def)
struct memtx_rtree_index *index =
(struct memtx_rtree_index *)calloc(1, sizeof(*index));
if (index == NULL) {
- diag_set(OutOfMemory, sizeof(*index),
- "malloc", "struct memtx_rtree_index");
+ diag_set(OutOfMemory, sizeof(*index), "malloc",
+ "struct memtx_rtree_index");
return NULL;
}
if (index_create(&index->base, (struct engine *)memtx,
diff --git a/src/box/memtx_space.c b/src/box/memtx_space.c
index d4b18d9..9060ca3 100644
--- a/src/box/memtx_space.c
+++ b/src/box/memtx_space.c
@@ -91,8 +91,8 @@ memtx_space_update_bsize(struct space *space, struct tuple *old_tuple,
*/
int
memtx_space_replace_no_keys(struct space *space, struct tuple *old_tuple,
- struct tuple *new_tuple,
- enum dup_replace_mode mode, struct tuple **result)
+ struct tuple *new_tuple, enum dup_replace_mode mode,
+ struct tuple **result)
{
(void)old_tuple;
(void)new_tuple;
@@ -100,7 +100,7 @@ memtx_space_replace_no_keys(struct space *space, struct tuple *old_tuple,
(void)result;
struct index *index = index_find(space, 0);
assert(index == NULL); /* not reached. */
- (void) index;
+ (void)index;
return -1;
}
@@ -144,8 +144,8 @@ memtx_space_replace_primary_key(struct space *space, struct tuple *old_tuple,
enum dup_replace_mode mode,
struct tuple **result)
{
- if (index_replace(space->index[0], old_tuple,
- new_tuple, mode, &old_tuple) != 0)
+ if (index_replace(space->index[0], old_tuple, new_tuple, mode,
+ &old_tuple) != 0)
return -1;
memtx_space_update_bsize(space, old_tuple, new_tuple);
if (new_tuple != NULL)
@@ -241,17 +241,17 @@ memtx_space_replace_primary_key(struct space *space, struct tuple *old_tuple,
int
memtx_space_replace_all_keys(struct space *space, struct tuple *old_tuple,
struct tuple *new_tuple,
- enum dup_replace_mode mode,
- struct tuple **result)
+ enum dup_replace_mode mode, struct tuple **result)
{
struct memtx_engine *memtx = (struct memtx_engine *)space->engine;
/*
* Ensure we have enough slack memory to guarantee
* successful statement-level rollback.
*/
- if (memtx_index_extent_reserve(memtx, new_tuple != NULL ?
- RESERVE_EXTENTS_BEFORE_REPLACE :
- RESERVE_EXTENTS_BEFORE_DELETE) != 0)
+ if (memtx_index_extent_reserve(
+ memtx, new_tuple != NULL ?
+ RESERVE_EXTENTS_BEFORE_REPLACE :
+ RESERVE_EXTENTS_BEFORE_DELETE) != 0)
return -1;
uint32_t i = 0;
@@ -264,11 +264,11 @@ memtx_space_replace_all_keys(struct space *space, struct tuple *old_tuple,
if (memtx_tx_manager_use_mvcc_engine) {
struct txn *txn = in_txn();
- struct txn_stmt *stmt =
- txn == NULL ? NULL : txn_current_stmt(txn);
+ struct txn_stmt *stmt = txn == NULL ? NULL :
+ txn_current_stmt(txn);
if (stmt != NULL) {
- return memtx_tx_history_add_stmt(stmt, old_tuple, new_tuple,
- mode, result);
+ return memtx_tx_history_add_stmt(
+ stmt, old_tuple, new_tuple, mode, result);
} else {
/** Ephemeral space */
assert(space->def->id == 0);
@@ -287,8 +287,8 @@ memtx_space_replace_all_keys(struct space *space, struct tuple *old_tuple,
for (i++; i < space->index_count; i++) {
struct tuple *unused;
struct index *index = space->index[i];
- if (index_replace(index, old_tuple, new_tuple,
- DUP_INSERT, &unused) != 0)
+ if (index_replace(index, old_tuple, new_tuple, DUP_INSERT,
+ &unused) != 0)
goto rollback;
}
@@ -303,8 +303,8 @@ rollback:
struct tuple *unused;
struct index *index = space->index[i - 1];
/* Rollback must not fail. */
- if (index_replace(index, new_tuple, old_tuple,
- DUP_INSERT, &unused) != 0) {
+ if (index_replace(index, new_tuple, old_tuple, DUP_INSERT,
+ &unused) != 0) {
diag_log();
unreachable();
panic("failed to rollback change");
@@ -335,8 +335,8 @@ memtx_space_execute_replace(struct space *space, struct txn *txn,
if (mode == DUP_INSERT)
stmt->does_require_old_tuple = true;
- if (memtx_space->replace(space, NULL, stmt->new_tuple,
- mode, &stmt->old_tuple) != 0)
+ if (memtx_space->replace(space, NULL, stmt->new_tuple, mode,
+ &stmt->old_tuple) != 0)
return -1;
stmt->engine_savepoint = stmt;
/** The new tuple is referenced by the primary key. */
@@ -369,8 +369,8 @@ memtx_space_execute_delete(struct space *space, struct txn *txn,
stmt->does_require_old_tuple = true;
if (old_tuple != NULL &&
- memtx_space->replace(space, old_tuple, NULL,
- DUP_REPLACE_OR_INSERT, &stmt->old_tuple) != 0)
+ memtx_space->replace(space, old_tuple, NULL, DUP_REPLACE_OR_INSERT,
+ &stmt->old_tuple) != 0)
return -1;
stmt->engine_savepoint = stmt;
*result = stmt->old_tuple;
@@ -404,23 +404,22 @@ memtx_space_execute_update(struct space *space, struct txn *txn,
uint32_t new_size = 0, bsize;
struct tuple_format *format = space->format;
const char *old_data = tuple_data_range(old_tuple, &bsize);
- const char *new_data =
- xrow_update_execute(request->tuple, request->tuple_end,
- old_data, old_data + bsize, format,
- &new_size, request->index_base, NULL);
+ const char *new_data = xrow_update_execute(
+ request->tuple, request->tuple_end, old_data, old_data + bsize,
+ format, &new_size, request->index_base, NULL);
if (new_data == NULL)
return -1;
- stmt->new_tuple = memtx_tuple_new(format, new_data,
- new_data + new_size);
+ stmt->new_tuple =
+ memtx_tuple_new(format, new_data, new_data + new_size);
if (stmt->new_tuple == NULL)
return -1;
tuple_ref(stmt->new_tuple);
stmt->does_require_old_tuple = true;
- if (memtx_space->replace(space, old_tuple, stmt->new_tuple,
- DUP_REPLACE, &stmt->old_tuple) != 0)
+ if (memtx_space->replace(space, old_tuple, stmt->new_tuple, DUP_REPLACE,
+ &stmt->old_tuple) != 0)
return -1;
stmt->engine_savepoint = stmt;
*result = stmt->new_tuple;
@@ -446,10 +445,9 @@ memtx_space_execute_upsert(struct space *space, struct txn *txn,
uint32_t part_count = index->def->key_def->part_count;
/* Extract the primary key from tuple. */
- const char *key = tuple_extract_key_raw(request->tuple,
- request->tuple_end,
- index->def->key_def,
- MULTIKEY_NONE, NULL);
+ const char *key =
+ tuple_extract_key_raw(request->tuple, request->tuple_end,
+ index->def->key_def, MULTIKEY_NONE, NULL);
if (key == NULL)
return -1;
/* Cut array header */
@@ -497,17 +495,15 @@ memtx_space_execute_upsert(struct space *space, struct txn *txn,
* for the tuple.
*/
uint64_t column_mask = COLUMN_MASK_FULL;
- const char *new_data =
- xrow_upsert_execute(request->ops, request->ops_end,
- old_data, old_data + bsize,
- format, &new_size,
- request->index_base, false,
- &column_mask);
+ const char *new_data = xrow_upsert_execute(
+ request->ops, request->ops_end, old_data,
+ old_data + bsize, format, &new_size,
+ request->index_base, false, &column_mask);
if (new_data == NULL)
return -1;
- stmt->new_tuple = memtx_tuple_new(format, new_data,
- new_data + new_size);
+ stmt->new_tuple =
+ memtx_tuple_new(format, new_data, new_data + new_size);
if (stmt->new_tuple == NULL)
return -1;
tuple_ref(stmt->new_tuple);
@@ -555,16 +551,16 @@ memtx_space_execute_upsert(struct space *space, struct txn *txn,
*/
static int
memtx_space_ephemeral_replace(struct space *space, const char *tuple,
- const char *tuple_end)
+ const char *tuple_end)
{
struct memtx_space *memtx_space = (struct memtx_space *)space;
- struct tuple *new_tuple = memtx_tuple_new(space->format, tuple,
- tuple_end);
+ struct tuple *new_tuple =
+ memtx_tuple_new(space->format, tuple, tuple_end);
if (new_tuple == NULL)
return -1;
struct tuple *old_tuple;
- if (memtx_space->replace(space, NULL, new_tuple,
- DUP_REPLACE_OR_INSERT, &old_tuple) != 0) {
+ if (memtx_space->replace(space, NULL, new_tuple, DUP_REPLACE_OR_INSERT,
+ &old_tuple) != 0) {
memtx_tuple_delete(space->format, new_tuple);
return -1;
}
@@ -598,8 +594,8 @@ memtx_space_ephemeral_delete(struct space *space, const char *key)
if (index_get(primary_index, key, part_count, &old_tuple) != 0)
return -1;
if (old_tuple != NULL &&
- memtx_space->replace(space, old_tuple, NULL,
- DUP_REPLACE, &old_tuple) != 0)
+ memtx_space->replace(space, old_tuple, NULL, DUP_REPLACE,
+ &old_tuple) != 0)
return -1;
tuple_unref(old_tuple);
return 0;
@@ -638,21 +634,21 @@ memtx_space_check_index_def(struct space *space, struct index_def *index_def)
}
switch (index_def->type) {
case HASH:
- if (! index_def->opts.is_unique) {
- diag_set(ClientError, ER_MODIFY_INDEX,
- index_def->name, space_name(space),
+ if (!index_def->opts.is_unique) {
+ diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
+ space_name(space),
"HASH index must be unique");
return -1;
}
if (key_def->is_multikey) {
- diag_set(ClientError, ER_MODIFY_INDEX,
- index_def->name, space_name(space),
+ diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
+ space_name(space),
"HASH index cannot be multikey");
return -1;
}
if (key_def->for_func_index) {
- diag_set(ClientError, ER_MODIFY_INDEX,
- index_def->name, space_name(space),
+ diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
+ space_name(space),
"HASH index can not use a function");
return -1;
}
@@ -662,32 +658,32 @@ memtx_space_check_index_def(struct space *space, struct index_def *index_def)
break;
case RTREE:
if (key_def->part_count != 1) {
- diag_set(ClientError, ER_MODIFY_INDEX,
- index_def->name, space_name(space),
+ diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
+ space_name(space),
"RTREE index key can not be multipart");
return -1;
}
if (index_def->opts.is_unique) {
- diag_set(ClientError, ER_MODIFY_INDEX,
- index_def->name, space_name(space),
+ diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
+ space_name(space),
"RTREE index can not be unique");
return -1;
}
if (key_def->parts[0].type != FIELD_TYPE_ARRAY) {
- diag_set(ClientError, ER_MODIFY_INDEX,
- index_def->name, space_name(space),
+ diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
+ space_name(space),
"RTREE index field type must be ARRAY");
return -1;
}
if (key_def->is_multikey) {
- diag_set(ClientError, ER_MODIFY_INDEX,
- index_def->name, space_name(space),
+ diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
+ space_name(space),
"RTREE index cannot be multikey");
return -1;
}
if (key_def->for_func_index) {
- diag_set(ClientError, ER_MODIFY_INDEX,
- index_def->name, space_name(space),
+ diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
+ space_name(space),
"RTREE index can not use a function");
return -1;
}
@@ -695,41 +691,40 @@ memtx_space_check_index_def(struct space *space, struct index_def *index_def)
return 0;
case BITSET:
if (key_def->part_count != 1) {
- diag_set(ClientError, ER_MODIFY_INDEX,
- index_def->name, space_name(space),
+ diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
+ space_name(space),
"BITSET index key can not be multipart");
return -1;
}
if (index_def->opts.is_unique) {
- diag_set(ClientError, ER_MODIFY_INDEX,
- index_def->name, space_name(space),
- "BITSET can not be unique");
+ diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
+ space_name(space), "BITSET can not be unique");
return -1;
}
if (key_def->parts[0].type != FIELD_TYPE_UNSIGNED &&
key_def->parts[0].type != FIELD_TYPE_STRING) {
- diag_set(ClientError, ER_MODIFY_INDEX,
- index_def->name, space_name(space),
+ diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
+ space_name(space),
"BITSET index field type must be NUM or STR");
return -1;
}
if (key_def->is_multikey) {
- diag_set(ClientError, ER_MODIFY_INDEX,
- index_def->name, space_name(space),
+ diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
+ space_name(space),
"BITSET index cannot be multikey");
return -1;
}
if (key_def->for_func_index) {
- diag_set(ClientError, ER_MODIFY_INDEX,
- index_def->name, space_name(space),
+ diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
+ space_name(space),
"BITSET index can not use a function");
return -1;
}
/* no furter checks of parts needed */
return 0;
default:
- diag_set(ClientError, ER_INDEX_TYPE,
- index_def->name, space_name(space));
+ diag_set(ClientError, ER_INDEX_TYPE, index_def->name,
+ space_name(space));
return -1;
}
/* Only HASH and TREE indexes checks parts there */
@@ -738,8 +733,8 @@ memtx_space_check_index_def(struct space *space, struct index_def *index_def)
struct key_part *part = &key_def->parts[i];
if (part->type <= FIELD_TYPE_ANY ||
part->type >= FIELD_TYPE_ARRAY) {
- diag_set(ClientError, ER_MODIFY_INDEX,
- index_def->name, space_name(space),
+ diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
+ space_name(space),
tt_sprintf("field type '%s' is not supported",
field_type_strs[part->type]));
return -1;
@@ -987,7 +982,7 @@ memtx_build_on_replace(struct trigger *trigger, void *event)
struct txn_stmt *stmt = txn_current_stmt(txn);
struct tuple *cmp_tuple = stmt->new_tuple != NULL ? stmt->new_tuple :
- stmt->old_tuple;
+ stmt->old_tuple;
/*
* Only update the already built part of an index. All the other
* tuples will be inserted when build continues.
@@ -1004,9 +999,9 @@ memtx_build_on_replace(struct trigger *trigger, void *event)
}
struct tuple *delete = NULL;
- enum dup_replace_mode mode =
- state->index->def->opts.is_unique ? DUP_INSERT :
- DUP_REPLACE_OR_INSERT;
+ enum dup_replace_mode mode = state->index->def->opts.is_unique ?
+ DUP_INSERT :
+ DUP_REPLACE_OR_INSERT;
state->rc = index_replace(state->index, stmt->old_tuple,
stmt->new_tuple, mode, &delete);
if (state->rc != 0) {
@@ -1098,12 +1093,12 @@ memtx_space_build_index(struct space *src_space, struct index *new_index,
* @todo: better message if there is a duplicate.
*/
struct tuple *old_tuple;
- rc = index_replace(new_index, NULL, tuple,
- DUP_INSERT, &old_tuple);
+ rc = index_replace(new_index, NULL, tuple, DUP_INSERT,
+ &old_tuple);
if (rc != 0)
break;
assert(old_tuple == NULL); /* Guaranteed by DUP_INSERT. */
- (void) old_tuple;
+ (void)old_tuple;
/*
* All tuples stored in a memtx space must be
* referenced by the primary index.
@@ -1187,13 +1182,13 @@ static const struct space_vtab memtx_space_vtab = {
};
struct space *
-memtx_space_new(struct memtx_engine *memtx,
- struct space_def *def, struct rlist *key_list)
+memtx_space_new(struct memtx_engine *memtx, struct space_def *def,
+ struct rlist *key_list)
{
struct memtx_space *memtx_space = malloc(sizeof(*memtx_space));
if (memtx_space == NULL) {
- diag_set(OutOfMemory, sizeof(*memtx_space),
- "malloc", "struct memtx_space");
+ diag_set(OutOfMemory, sizeof(*memtx_space), "malloc",
+ "struct memtx_space");
return NULL;
}
@@ -1204,11 +1199,10 @@ memtx_space_new(struct memtx_engine *memtx,
free(memtx_space);
return NULL;
}
- struct tuple_format *format =
- tuple_format_new(&memtx_tuple_format_vtab, memtx, keys, key_count,
- def->fields, def->field_count,
- def->exact_field_count, def->dict,
- def->opts.is_temporary, def->opts.is_ephemeral);
+ struct tuple_format *format = tuple_format_new(
+ &memtx_tuple_format_vtab, memtx, keys, key_count, def->fields,
+ def->field_count, def->exact_field_count, def->dict,
+ def->opts.is_temporary, def->opts.is_ephemeral);
if (format == NULL) {
free(memtx_space);
return NULL;
diff --git a/src/box/memtx_space.h b/src/box/memtx_space.h
index a14065f..b3082f8 100644
--- a/src/box/memtx_space.h
+++ b/src/box/memtx_space.h
@@ -84,8 +84,8 @@ memtx_space_replace_all_keys(struct space *, struct tuple *, struct tuple *,
enum dup_replace_mode, struct tuple **);
struct space *
-memtx_space_new(struct memtx_engine *memtx,
- struct space_def *def, struct rlist *key_list);
+memtx_space_new(struct memtx_engine *memtx, struct space_def *def,
+ struct rlist *key_list);
#if defined(__cplusplus)
} /* extern "C" */
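
Side note for reviewers who want to reproduce this wrapping locally before the next file: the style visible in these hunks (8-column tabs, 80-column limit, arguments aligned after the open paren, calls broken after `(` when alignment does not fit) corresponds to options documented in [1]. The sketch below is illustrative only — the real file is added in patch 2/3 as src/box/.clang-format and may differ:

```yaml
# Illustrative sketch; see patch 2/3 for the actual src/box/.clang-format.
BasedOnStyle: LLVM
IndentWidth: 8
UseTab: Always
ColumnLimit: 80
BreakBeforeBraces: Linux
AlignAfterOpenBracket: Align
AllowShortFunctionsOnASingleLine: None
```
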
diff --git a/src/box/memtx_tree.c b/src/box/memtx_tree.c
index 5af482f..43fa8e2 100644
--- a/src/box/memtx_tree.c
+++ b/src/box/memtx_tree.c
@@ -82,10 +82,10 @@ memtx_tree_data_is_equal(const struct memtx_tree_data *a,
#define BPS_TREE_NAME memtx_tree
#define BPS_TREE_BLOCK_SIZE (512)
#define BPS_TREE_EXTENT_SIZE MEMTX_EXTENT_SIZE
-#define BPS_TREE_COMPARE(a, b, arg)\
+#define BPS_TREE_COMPARE(a, b, arg) \
tuple_compare((&a)->tuple, (&a)->hint, (&b)->tuple, (&b)->hint, arg)
-#define BPS_TREE_COMPARE_KEY(a, b, arg)\
- tuple_compare_with_key((&a)->tuple, (&a)->hint, (b)->key,\
+#define BPS_TREE_COMPARE_KEY(a, b, arg) \
+ tuple_compare_with_key((&a)->tuple, (&a)->hint, (b)->key, \
(b)->part_count, (b)->hint, arg)
#define BPS_TREE_IS_IDENTICAL(a, b) memtx_tree_data_is_equal(&a, &b)
#define BPS_TREE_NO_DEBUG 1
@@ -124,7 +124,7 @@ memtx_tree_cmp_def(struct memtx_tree *tree)
}
static int
-memtx_tree_qcompare(const void* a, const void *b, void *c)
+memtx_tree_qcompare(const void *a, const void *b, void *c)
{
const struct memtx_tree_data *data_a = a;
const struct memtx_tree_data *data_b = b;
@@ -155,7 +155,7 @@ static inline struct tree_iterator *
tree_iterator(struct iterator *it)
{
assert(it->free == tree_iterator_free);
- return (struct tree_iterator *) it;
+ return (struct tree_iterator *)it;
}
static void
@@ -186,8 +186,8 @@ tree_iterator_next_base(struct iterator *iterator, struct tuple **ret)
struct memtx_tree_data *check =
memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
if (check == NULL || !memtx_tree_data_is_equal(check, &it->current)) {
- it->tree_iterator = memtx_tree_upper_bound_elem(&index->tree,
- it->current, NULL);
+ it->tree_iterator = memtx_tree_upper_bound_elem(
+ &index->tree, it->current, NULL);
} else {
memtx_tree_iterator_next(&index->tree, &it->tree_iterator);
}
@@ -216,8 +216,8 @@ tree_iterator_prev_base(struct iterator *iterator, struct tuple **ret)
struct memtx_tree_data *check =
memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
if (check == NULL || !memtx_tree_data_is_equal(check, &it->current)) {
- it->tree_iterator = memtx_tree_lower_bound_elem(&index->tree,
- it->current, NULL);
+ it->tree_iterator = memtx_tree_lower_bound_elem(
+ &index->tree, it->current, NULL);
}
memtx_tree_iterator_prev(&index->tree, &it->tree_iterator);
tuple_unref(it->current.tuple);
@@ -245,8 +245,8 @@ tree_iterator_next_equal_base(struct iterator *iterator, struct tuple **ret)
struct memtx_tree_data *check =
memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
if (check == NULL || !memtx_tree_data_is_equal(check, &it->current)) {
- it->tree_iterator = memtx_tree_upper_bound_elem(&index->tree,
- it->current, NULL);
+ it->tree_iterator = memtx_tree_upper_bound_elem(
+ &index->tree, it->current, NULL);
} else {
memtx_tree_iterator_next(&index->tree, &it->tree_iterator);
}
@@ -255,10 +255,8 @@ tree_iterator_next_equal_base(struct iterator *iterator, struct tuple **ret)
memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
/* Use user key def to save a few loops. */
if (res == NULL ||
- tuple_compare_with_key(res->tuple, res->hint,
- it->key_data.key,
- it->key_data.part_count,
- it->key_data.hint,
+ tuple_compare_with_key(res->tuple, res->hint, it->key_data.key,
+ it->key_data.part_count, it->key_data.hint,
index->base.def->key_def) != 0) {
iterator->next = tree_iterator_dummie;
it->current.tuple = NULL;
@@ -281,8 +279,8 @@ tree_iterator_prev_equal_base(struct iterator *iterator, struct tuple **ret)
struct memtx_tree_data *check =
memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
if (check == NULL || !memtx_tree_data_is_equal(check, &it->current)) {
- it->tree_iterator = memtx_tree_lower_bound_elem(&index->tree,
- it->current, NULL);
+ it->tree_iterator = memtx_tree_lower_bound_elem(
+ &index->tree, it->current, NULL);
}
memtx_tree_iterator_prev(&index->tree, &it->tree_iterator);
tuple_unref(it->current.tuple);
@@ -290,10 +288,8 @@ tree_iterator_prev_equal_base(struct iterator *iterator, struct tuple **ret)
memtx_tree_iterator_get_elem(&index->tree, &it->tree_iterator);
/* Use user key def to save a few loops. */
if (res == NULL ||
- tuple_compare_with_key(res->tuple, res->hint,
- it->key_data.key,
- it->key_data.part_count,
- it->key_data.hint,
+ tuple_compare_with_key(res->tuple, res->hint, it->key_data.key,
+ it->key_data.part_count, it->key_data.hint,
index->base.def->key_def) != 0) {
iterator->next = tree_iterator_dummie;
it->current.tuple = NULL;
@@ -306,39 +302,39 @@ tree_iterator_prev_equal_base(struct iterator *iterator, struct tuple **ret)
return 0;
}
-#define WRAP_ITERATOR_METHOD(name) \
-static int \
-name(struct iterator *iterator, struct tuple **ret) \
-{ \
- struct memtx_tree *tree = \
- &((struct memtx_tree_index *)iterator->index)->tree; \
- struct tree_iterator *it = tree_iterator(iterator); \
- struct memtx_tree_iterator *ti = &it->tree_iterator; \
- uint32_t iid = iterator->index->def->iid; \
- bool is_multikey = iterator->index->def->key_def->is_multikey; \
- struct txn *txn = in_txn(); \
- struct space *space = space_by_id(iterator->space_id); \
- bool is_rw = txn != NULL; \
- do { \
- int rc = name##_base(iterator, ret); \
- if (rc != 0 || *ret == NULL) \
- return rc; \
- uint32_t mk_index = 0; \
- if (is_multikey) { \
- struct memtx_tree_data *check = \
- memtx_tree_iterator_get_elem(tree, ti); \
- assert(check != NULL); \
- mk_index = check->hint; \
- } \
- *ret = memtx_tx_tuple_clarify(txn, space, *ret, \
- iid, mk_index, is_rw); \
- } while (*ret == NULL); \
- tuple_unref(it->current.tuple); \
- it->current.tuple = *ret; \
- tuple_ref(it->current.tuple); \
- return 0; \
-} \
-struct forgot_to_add_semicolon
+#define WRAP_ITERATOR_METHOD(name) \
+ static int name(struct iterator *iterator, struct tuple **ret) \
+ { \
+ struct memtx_tree *tree = \
+ &((struct memtx_tree_index *)iterator->index)->tree; \
+ struct tree_iterator *it = tree_iterator(iterator); \
+ struct memtx_tree_iterator *ti = &it->tree_iterator; \
+ uint32_t iid = iterator->index->def->iid; \
+ bool is_multikey = iterator->index->def->key_def->is_multikey; \
+ struct txn *txn = in_txn(); \
+ struct space *space = space_by_id(iterator->space_id); \
+ bool is_rw = txn != NULL; \
+ do { \
+ int rc = name##_base(iterator, ret); \
+ if (rc != 0 || *ret == NULL) \
+ return rc; \
+ uint32_t mk_index = 0; \
+ if (is_multikey) { \
+ struct memtx_tree_data *check = \
+ memtx_tree_iterator_get_elem(tree, \
+ ti); \
+ assert(check != NULL); \
+ mk_index = check->hint; \
+ } \
+ *ret = memtx_tx_tuple_clarify(txn, space, *ret, iid, \
+ mk_index, is_rw); \
+ } while (*ret == NULL); \
+ tuple_unref(it->current.tuple); \
+ it->current.tuple = *ret; \
+ tuple_ref(it->current.tuple); \
+ return 0; \
+ } \
+ struct forgot_to_add_semicolon
WRAP_ITERATOR_METHOD(tree_iterator_next);
WRAP_ITERATOR_METHOD(tree_iterator_prev);
@@ -393,17 +389,15 @@ tree_iterator_start(struct iterator *iterator, struct tuple **ret)
else
it->tree_iterator = memtx_tree_iterator_first(tree);
} else {
- if (type == ITER_ALL || type == ITER_EQ ||
- type == ITER_GE || type == ITER_LT) {
- it->tree_iterator =
- memtx_tree_lower_bound(tree, &it->key_data,
- &exact);
+ if (type == ITER_ALL || type == ITER_EQ || type == ITER_GE ||
+ type == ITER_LT) {
+ it->tree_iterator = memtx_tree_lower_bound(
+ tree, &it->key_data, &exact);
if (type == ITER_EQ && !exact)
return 0;
} else { // ITER_GT, ITER_REQ, ITER_LE
- it->tree_iterator =
- memtx_tree_upper_bound(tree, &it->key_data,
- &exact);
+ it->tree_iterator = memtx_tree_upper_bound(
+ tree, &it->key_data, &exact);
if (type == ITER_REQ && !exact)
return 0;
}
@@ -423,8 +417,8 @@ tree_iterator_start(struct iterator *iterator, struct tuple **ret)
}
}
- struct memtx_tree_data *res = memtx_tree_iterator_get_elem(tree,
- &it->tree_iterator);
+ struct memtx_tree_data *res =
+ memtx_tree_iterator_get_elem(tree, &it->tree_iterator);
if (!res)
return 0;
*ret = res->tuple;
@@ -475,8 +469,8 @@ memtx_tree_index_gc_run(struct memtx_gc_task *task, bool *done)
enum { YIELD_LOOPS = 10 };
#endif
- struct memtx_tree_index *index = container_of(task,
- struct memtx_tree_index, gc_task);
+ struct memtx_tree_index *index =
+ container_of(task, struct memtx_tree_index, gc_task);
struct memtx_tree *tree = &index->tree;
struct memtx_tree_iterator *itr = &index->gc_iterator;
@@ -497,8 +491,8 @@ memtx_tree_index_gc_run(struct memtx_gc_task *task, bool *done)
static void
memtx_tree_index_gc_free(struct memtx_gc_task *task)
{
- struct memtx_tree_index *index = container_of(task,
- struct memtx_tree_index, gc_task);
+ struct memtx_tree_index *index =
+ container_of(task, struct memtx_tree_index, gc_task);
memtx_tree_index_free(index);
}
@@ -542,7 +536,8 @@ memtx_tree_index_update_def(struct index *base)
* def must be used. For details @sa tuple_compare.cc.
*/
index->tree.arg = def->opts.is_unique && !def->key_def->is_nullable ?
- def->key_def : def->cmp_def;
+ def->key_def :
+ def->cmp_def;
}
static bool
@@ -586,8 +581,8 @@ memtx_tree_index_count(struct index *base, enum iterator_type type,
}
static int
-memtx_tree_index_get(struct index *base, const char *key,
- uint32_t part_count, struct tuple **result)
+memtx_tree_index_get(struct index *base, const char *key, uint32_t part_count,
+ struct tuple **result)
{
assert(base->def->opts.is_unique &&
part_count == base->def->key_def->part_count);
@@ -626,21 +621,22 @@ memtx_tree_index_replace(struct index *base, struct tuple *old_tuple,
dup_data.tuple = NULL;
/* Try to optimistically replace the new_tuple. */
- int tree_res = memtx_tree_insert(&index->tree, new_data,
- &dup_data);
+ int tree_res =
+ memtx_tree_insert(&index->tree, new_data, &dup_data);
if (tree_res) {
diag_set(OutOfMemory, MEMTX_EXTENT_SIZE,
"memtx_tree_index", "replace");
return -1;
}
- uint32_t errcode = replace_check_dup(old_tuple,
- dup_data.tuple, mode);
+ uint32_t errcode =
+ replace_check_dup(old_tuple, dup_data.tuple, mode);
if (errcode) {
memtx_tree_delete(&index->tree, new_data);
if (dup_data.tuple != NULL)
memtx_tree_insert(&index->tree, dup_data, NULL);
- struct space *sp = space_cache_find(base->def->space_id);
+ struct space *sp =
+ space_cache_find(base->def->space_id);
if (sp != NULL)
diag_set(ClientError, errcode, base->def->name,
space_name(sp));
@@ -668,10 +664,11 @@ memtx_tree_index_replace(struct index *base, struct tuple *old_tuple,
*/
static int
memtx_tree_index_replace_multikey_one(struct memtx_tree_index *index,
- struct tuple *old_tuple, struct tuple *new_tuple,
- enum dup_replace_mode mode, hint_t hint,
- struct memtx_tree_data *replaced_data,
- bool *is_multikey_conflict)
+ struct tuple *old_tuple,
+ struct tuple *new_tuple,
+ enum dup_replace_mode mode, hint_t hint,
+ struct memtx_tree_data *replaced_data,
+ bool *is_multikey_conflict)
{
struct memtx_tree_data new_data, dup_data;
new_data.tuple = new_tuple;
@@ -692,7 +689,7 @@ memtx_tree_index_replace_multikey_one(struct memtx_tree_index *index,
*/
*is_multikey_conflict = true;
} else if ((errcode = replace_check_dup(old_tuple, dup_data.tuple,
- mode)) != 0) {
+ mode)) != 0) {
/* Rollback replace. */
memtx_tree_delete(&index->tree, new_data);
if (dup_data.tuple != NULL)
@@ -721,8 +718,9 @@ memtx_tree_index_replace_multikey_one(struct memtx_tree_index *index,
*/
static void
memtx_tree_index_replace_multikey_rollback(struct memtx_tree_index *index,
- struct tuple *new_tuple, struct tuple *replaced_tuple,
- int err_multikey_idx)
+ struct tuple *new_tuple,
+ struct tuple *replaced_tuple,
+ int err_multikey_idx)
{
struct memtx_tree_data data;
if (replaced_tuple != NULL) {
@@ -731,7 +729,7 @@ memtx_tree_index_replace_multikey_rollback(struct memtx_tree_index *index,
data.tuple = replaced_tuple;
uint32_t multikey_count =
tuple_multikey_count(replaced_tuple, cmp_def);
- for (int i = 0; (uint32_t) i < multikey_count; i++) {
+ for (int i = 0; (uint32_t)i < multikey_count; i++) {
data.hint = i;
memtx_tree_insert(&index->tree, data, NULL);
}
@@ -795,8 +793,9 @@ memtx_tree_index_replace_multikey_rollback(struct memtx_tree_index *index,
*/
static int
memtx_tree_index_replace_multikey(struct index *base, struct tuple *old_tuple,
- struct tuple *new_tuple, enum dup_replace_mode mode,
- struct tuple **result)
+ struct tuple *new_tuple,
+ enum dup_replace_mode mode,
+ struct tuple **result)
{
struct memtx_tree_index *index = (struct memtx_tree_index *)base;
struct key_def *cmp_def = memtx_tree_cmp_def(&index->tree);
@@ -805,14 +804,13 @@ memtx_tree_index_replace_multikey(struct index *base, struct tuple *old_tuple,
int multikey_idx = 0, err = 0;
uint32_t multikey_count =
tuple_multikey_count(new_tuple, cmp_def);
- for (; (uint32_t) multikey_idx < multikey_count;
+ for (; (uint32_t)multikey_idx < multikey_count;
multikey_idx++) {
bool is_multikey_conflict;
struct memtx_tree_data replaced_data;
- err = memtx_tree_index_replace_multikey_one(index,
- old_tuple, new_tuple, mode,
- multikey_idx, &replaced_data,
- &is_multikey_conflict);
+ err = memtx_tree_index_replace_multikey_one(
+ index, old_tuple, new_tuple, mode, multikey_idx,
+ &replaced_data, &is_multikey_conflict);
if (err != 0)
break;
if (replaced_data.tuple != NULL &&
@@ -823,8 +821,8 @@ memtx_tree_index_replace_multikey(struct index *base, struct tuple *old_tuple,
}
}
if (err != 0) {
- memtx_tree_index_replace_multikey_rollback(index,
- new_tuple, *result, multikey_idx);
+ memtx_tree_index_replace_multikey_rollback(
+ index, new_tuple, *result, multikey_idx);
return -1;
}
if (*result != NULL) {
@@ -837,7 +835,7 @@ memtx_tree_index_replace_multikey(struct index *base, struct tuple *old_tuple,
data.tuple = old_tuple;
uint32_t multikey_count =
tuple_multikey_count(old_tuple, cmp_def);
- for (int i = 0; (uint32_t) i < multikey_count; i++) {
+ for (int i = 0; (uint32_t)i < multikey_count; i++) {
data.hint = i;
memtx_tree_delete_value(&index->tree, data, NULL);
}
@@ -850,9 +848,9 @@ static const char *
func_index_key_dummy_alloc(struct tuple *tuple, const char *key,
uint32_t key_sz)
{
- (void) tuple;
- (void) key_sz;
- return (void*) key;
+ (void)tuple;
+ (void)key_sz;
+ return (void *)key;
}
/**
@@ -873,8 +871,8 @@ struct func_key_undo *
func_key_undo_new(struct region *region)
{
size_t size;
- struct func_key_undo *undo = region_alloc_object(region, typeof(*undo),
- &size);
+ struct func_key_undo *undo =
+ region_alloc_object(region, typeof(*undo), &size);
if (undo == NULL) {
diag_set(OutOfMemory, size, "region_alloc_object", "undo");
return NULL;
@@ -916,8 +914,8 @@ memtx_tree_func_index_replace_rollback(struct memtx_tree_index *index,
*/
static int
memtx_tree_func_index_replace(struct index *base, struct tuple *old_tuple,
- struct tuple *new_tuple, enum dup_replace_mode mode,
- struct tuple **result)
+ struct tuple *new_tuple,
+ enum dup_replace_mode mode, struct tuple **result)
{
struct memtx_tree_index *index = (struct memtx_tree_index *)base;
struct index_def *index_def = index->base.def;
@@ -940,7 +938,7 @@ memtx_tree_func_index_replace(struct index *base, struct tuple *old_tuple,
const char *key;
struct func_key_undo *undo;
while ((err = key_list_iterator_next(&it, &key)) == 0 &&
- key != NULL) {
+ key != NULL) {
/* Perform insertion, log it in list. */
undo = func_key_undo_new(region);
if (undo == NULL) {
@@ -954,10 +952,9 @@ memtx_tree_func_index_replace(struct index *base, struct tuple *old_tuple,
bool is_multikey_conflict;
struct memtx_tree_data old_data;
old_data.tuple = NULL;
- err = memtx_tree_index_replace_multikey_one(index,
- old_tuple, new_tuple,
- mode, (hint_t)key, &old_data,
- &is_multikey_conflict);
+ err = memtx_tree_index_replace_multikey_one(
+ index, old_tuple, new_tuple, mode, (hint_t)key,
+ &old_data, &is_multikey_conflict);
if (err != 0)
break;
if (old_data.tuple != NULL && !is_multikey_conflict) {
@@ -984,7 +981,7 @@ memtx_tree_func_index_replace(struct index *base, struct tuple *old_tuple,
* from undo list.
*/
tuple_chunk_delete(new_tuple,
- (const char *)old_data.hint);
+ (const char *)old_data.hint);
rlist_foreach_entry(undo, &new_keys, link) {
if (undo->key.hint == old_data.hint) {
rlist_del(&undo->link);
@@ -994,8 +991,8 @@ memtx_tree_func_index_replace(struct index *base, struct tuple *old_tuple,
}
}
if (key != NULL || err != 0) {
- memtx_tree_func_index_replace_rollback(index,
- &old_keys, &new_keys);
+ memtx_tree_func_index_replace_rollback(index, &old_keys,
+ &new_keys);
goto end;
}
if (*result != NULL) {
@@ -1019,7 +1016,7 @@ memtx_tree_func_index_replace(struct index *base, struct tuple *old_tuple,
data.tuple = old_tuple;
const char *key;
while (key_list_iterator_next(&it, &key) == 0 && key != NULL) {
- data.hint = (hint_t) key;
+ data.hint = (hint_t)key;
deleted_data.tuple = NULL;
memtx_tree_delete_value(&index->tree, data,
&deleted_data);
@@ -1028,7 +1025,8 @@ memtx_tree_func_index_replace(struct index *base, struct tuple *old_tuple,
* Release related hint on
* successful node deletion.
*/
- tuple_chunk_delete(deleted_data.tuple,
+ tuple_chunk_delete(
+ deleted_data.tuple,
(const char *)deleted_data.hint);
}
}
@@ -1126,14 +1124,16 @@ memtx_tree_index_build_array_append(struct memtx_tree_index *index,
}
assert(index->build_array_size <= index->build_array_alloc_size);
if (index->build_array_size == index->build_array_alloc_size) {
- index->build_array_alloc_size = index->build_array_alloc_size +
- DIV_ROUND_UP(index->build_array_alloc_size, 2);
+ index->build_array_alloc_size =
+ index->build_array_alloc_size +
+ DIV_ROUND_UP(index->build_array_alloc_size, 2);
struct memtx_tree_data *tmp =
realloc(index->build_array,
index->build_array_alloc_size * sizeof(*tmp));
if (tmp == NULL) {
- diag_set(OutOfMemory, index->build_array_alloc_size *
- sizeof(*tmp), "memtx_tree_index", "build_next");
+ diag_set(OutOfMemory,
+ index->build_array_alloc_size * sizeof(*tmp),
+ "memtx_tree_index", "build_next");
return -1;
}
index->build_array = tmp;
@@ -1210,7 +1210,8 @@ error:
*/
static void
memtx_tree_index_build_array_deduplicate(struct memtx_tree_index *index,
- void (*destroy)(struct tuple *tuple, const char *hint))
+ void (*destroy)(struct tuple *tuple,
+ const char *hint))
{
if (index->build_array_size == 0)
return;
@@ -1218,7 +1219,7 @@ memtx_tree_index_build_array_deduplicate(struct memtx_tree_index *index,
size_t w_idx = 0, r_idx = 1;
while (r_idx < index->build_array_size) {
if (index->build_array[w_idx].tuple !=
- index->build_array[r_idx].tuple ||
+ index->build_array[r_idx].tuple ||
tuple_compare(index->build_array[w_idx].tuple,
index->build_array[w_idx].hint,
index->build_array[r_idx].tuple,
@@ -1234,8 +1235,8 @@ memtx_tree_index_build_array_deduplicate(struct memtx_tree_index *index,
}
if (destroy != NULL) {
/* Destroy deduplicated entries. */
- for (r_idx = w_idx + 1;
- r_idx < index->build_array_size; r_idx++) {
+ for (r_idx = w_idx + 1; r_idx < index->build_array_size;
+ r_idx++) {
destroy(index->build_array[r_idx].tuple,
(const char *)index->build_array[r_idx].hint);
}
@@ -1285,8 +1286,8 @@ tree_snapshot_iterator_free(struct snapshot_iterator *iterator)
assert(iterator->free == tree_snapshot_iterator_free);
struct tree_snapshot_iterator *it =
(struct tree_snapshot_iterator *)iterator;
- memtx_leave_delayed_free_mode((struct memtx_engine *)
- it->index->base.engine);
+ memtx_leave_delayed_free_mode(
+ (struct memtx_engine *)it->index->base.engine);
memtx_tree_iterator_destroy(&it->index->tree, &it->tree_iterator);
index_unref(&it->index->base);
memtx_tx_snapshot_cleaner_destroy(&it->cleaner);
@@ -1335,7 +1336,7 @@ memtx_tree_index_create_snapshot_iterator(struct index *base)
{
struct memtx_tree_index *index = (struct memtx_tree_index *)base;
struct tree_snapshot_iterator *it =
- (struct tree_snapshot_iterator *) calloc(1, sizeof(*it));
+ (struct tree_snapshot_iterator *)calloc(1, sizeof(*it));
if (it == NULL) {
diag_set(OutOfMemory, sizeof(struct tree_snapshot_iterator),
"memtx_tree_index", "create_snapshot_iterator");
@@ -1356,7 +1357,7 @@ memtx_tree_index_create_snapshot_iterator(struct index *base)
it->tree_iterator = memtx_tree_iterator_first(&index->tree);
memtx_tree_iterator_freeze(&index->tree, &it->tree_iterator);
memtx_enter_delayed_free_mode((struct memtx_engine *)base->engine);
- return (struct snapshot_iterator *) it;
+ return (struct snapshot_iterator *)it;
}
static const struct index_vtab memtx_tree_index_vtab = {
@@ -1368,7 +1369,7 @@ static const struct index_vtab memtx_tree_index_vtab = {
/* .update_def = */ memtx_tree_index_update_def,
/* .depends_on_pk = */ memtx_tree_index_depends_on_pk,
/* .def_change_requires_rebuild = */
- memtx_index_def_change_requires_rebuild,
+ memtx_index_def_change_requires_rebuild,
/* .size = */ memtx_tree_index_size,
/* .bsize = */ memtx_tree_index_bsize,
/* .min = */ generic_index_min,
@@ -1379,7 +1380,7 @@ static const struct index_vtab memtx_tree_index_vtab = {
/* .replace = */ memtx_tree_index_replace,
/* .create_iterator = */ memtx_tree_index_create_iterator,
/* .create_snapshot_iterator = */
- memtx_tree_index_create_snapshot_iterator,
+ memtx_tree_index_create_snapshot_iterator,
/* .stat = */ generic_index_stat,
/* .compact = */ generic_index_compact,
/* .reset_stat = */ generic_index_reset_stat,
@@ -1398,7 +1399,7 @@ static const struct index_vtab memtx_tree_index_multikey_vtab = {
/* .update_def = */ memtx_tree_index_update_def,
/* .depends_on_pk = */ memtx_tree_index_depends_on_pk,
/* .def_change_requires_rebuild = */
- memtx_index_def_change_requires_rebuild,
+ memtx_index_def_change_requires_rebuild,
/* .size = */ memtx_tree_index_size,
/* .bsize = */ memtx_tree_index_bsize,
/* .min = */ generic_index_min,
@@ -1409,7 +1410,7 @@ static const struct index_vtab memtx_tree_index_multikey_vtab = {
/* .replace = */ memtx_tree_index_replace_multikey,
/* .create_iterator = */ memtx_tree_index_create_iterator,
/* .create_snapshot_iterator = */
- memtx_tree_index_create_snapshot_iterator,
+ memtx_tree_index_create_snapshot_iterator,
/* .stat = */ generic_index_stat,
/* .compact = */ generic_index_compact,
/* .reset_stat = */ generic_index_reset_stat,
@@ -1428,7 +1429,7 @@ static const struct index_vtab memtx_tree_func_index_vtab = {
/* .update_def = */ memtx_tree_index_update_def,
/* .depends_on_pk = */ memtx_tree_index_depends_on_pk,
/* .def_change_requires_rebuild = */
- memtx_index_def_change_requires_rebuild,
+ memtx_index_def_change_requires_rebuild,
/* .size = */ memtx_tree_index_size,
/* .bsize = */ memtx_tree_index_bsize,
/* .min = */ generic_index_min,
@@ -1439,7 +1440,7 @@ static const struct index_vtab memtx_tree_func_index_vtab = {
/* .replace = */ memtx_tree_func_index_replace,
/* .create_iterator = */ memtx_tree_index_create_iterator,
/* .create_snapshot_iterator = */
- memtx_tree_index_create_snapshot_iterator,
+ memtx_tree_index_create_snapshot_iterator,
/* .stat = */ generic_index_stat,
/* .compact = */ generic_index_compact,
/* .reset_stat = */ generic_index_reset_stat,
@@ -1464,7 +1465,7 @@ static const struct index_vtab memtx_tree_disabled_index_vtab = {
/* .update_def = */ generic_index_update_def,
/* .depends_on_pk = */ generic_index_depends_on_pk,
/* .def_change_requires_rebuild = */
- generic_index_def_change_requires_rebuild,
+ generic_index_def_change_requires_rebuild,
/* .size = */ generic_index_size,
/* .bsize = */ generic_index_bsize,
/* .min = */ generic_index_min,
@@ -1475,7 +1476,7 @@ static const struct index_vtab memtx_tree_disabled_index_vtab = {
/* .replace = */ disabled_index_replace,
/* .create_iterator = */ generic_index_create_iterator,
/* .create_snapshot_iterator = */
- generic_index_create_snapshot_iterator,
+ generic_index_create_snapshot_iterator,
/* .stat = */ generic_index_stat,
/* .compact = */ generic_index_compact,
/* .reset_stat = */ generic_index_reset_stat,
@@ -1491,8 +1492,8 @@ memtx_tree_index_new(struct memtx_engine *memtx, struct index_def *def)
struct memtx_tree_index *index =
(struct memtx_tree_index *)calloc(1, sizeof(*index));
if (index == NULL) {
- diag_set(OutOfMemory, sizeof(*index),
- "malloc", "struct memtx_tree_index");
+ diag_set(OutOfMemory, sizeof(*index), "malloc",
+ "struct memtx_tree_index");
return NULL;
}
const struct index_vtab *vtab;
@@ -1506,8 +1507,8 @@ memtx_tree_index_new(struct memtx_engine *memtx, struct index_def *def)
} else {
vtab = &memtx_tree_index_vtab;
}
- if (index_create(&index->base, (struct engine *)memtx,
- vtab, def) != 0) {
+ if (index_create(&index->base, (struct engine *)memtx, vtab, def) !=
+ 0) {
free(index);
return NULL;
}
@@ -1515,7 +1516,8 @@ memtx_tree_index_new(struct memtx_engine *memtx, struct index_def *def)
/* See comment to memtx_tree_index_update_def(). */
struct key_def *cmp_def;
cmp_def = def->opts.is_unique && !def->key_def->is_nullable ?
- index->base.def->key_def : index->base.def->cmp_def;
+ index->base.def->key_def :
+ index->base.def->cmp_def;
memtx_tree_create(&index->tree, cmp_def, memtx_index_extent_alloc,
memtx_index_extent_free, memtx);
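[Editor's note: the changes above (call arguments re-flowed against the opening parenthesis, casts tightened to `(type)expr`, pointer `*` bound to the name, 80-column wrapping) are characteristic of a small set of clang-format options. The following is a hedged sketch of settings that would produce this style — it is NOT the actual src/box/.clang-format added in patch 2/3, just an illustration using documented options:]

```yaml
# Hypothetical excerpt, for illustration only; see patch 2/3 for the real file.
BasedOnStyle: LLVM
IndentWidth: 8
UseTab: Always
ColumnLimit: 80
BreakBeforeBraces: Linux          # function braces on their own line, as above
AlignAfterOpenBracket: Align      # wrapped args line up under the open paren
SpaceAfterCStyleCast: false       # "(uint32_t)i", not "(uint32_t) i"
PointerAlignment: Right           # "void *key", not "void* key"
AllowShortFunctionsOnASingleLine: None
```

The option names are documented in the clang-format style reference linked from the cover letter; the specific values chosen here are an assumption inferred from the diff, not taken from the patch.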
diff --git a/src/box/memtx_tx.c b/src/box/memtx_tx.c
index 55748ad..c1d8c10 100644
--- a/src/box/memtx_tx.c
+++ b/src/box/memtx_tx.c
@@ -59,8 +59,7 @@ memtx_tx_story_key_hash(const struct tuple *a)
#define MH_SOURCE
#include "salad/mhash.h"
-struct tx_manager
-{
+struct tx_manager {
/**
* List of all transactions that are in a read view.
* New transactions are added to the tail of this list,
@@ -83,7 +82,7 @@ enum {
* searching and deleting no more used memtx_tx_stories per creation of
* a new story.
*/
- TX_MANAGER_GC_STEPS_SIZE = 2,
+ TX_MANAGER_GC_STEPS_SIZE = 2,
};
/** That's a definition, see declaration for description. */
@@ -99,8 +98,8 @@ memtx_tx_manager_init()
for (size_t i = 0; i < BOX_INDEX_MAX; i++) {
size_t item_size = sizeof(struct memtx_story) +
i * sizeof(struct memtx_story_link);
- mempool_create(&txm.memtx_tx_story_pool[i],
- cord_slab_cache(), item_size);
+ mempool_create(&txm.memtx_tx_story_pool[i], cord_slab_cache(),
+ item_size);
}
txm.history = mh_history_new();
rlist_create(&txm.all_stories);
@@ -109,8 +108,7 @@ memtx_tx_manager_init()
void
memtx_tx_manager_free()
-{
-}
+{}
int
memtx_tx_cause_conflict(struct txn *breaker, struct txn *victim)
@@ -121,12 +119,12 @@ memtx_tx_cause_conflict(struct txn *breaker, struct txn *victim)
while (r1 != &breaker->conflict_list &&
r2 != &victim->conflicted_by_list) {
tracker = rlist_entry(r1, struct tx_conflict_tracker,
- in_conflict_list);
+ in_conflict_list);
assert(tracker->breaker == breaker);
if (tracker->victim == victim)
break;
tracker = rlist_entry(r2, struct tx_conflict_tracker,
- in_conflicted_by_list);
+ in_conflicted_by_list);
assert(tracker->victim == victim);
if (tracker->breaker == breaker)
break;
@@ -143,9 +141,8 @@ memtx_tx_cause_conflict(struct txn *breaker, struct txn *victim)
rlist_del(&tracker->in_conflicted_by_list);
} else {
size_t size;
- tracker = region_alloc_object(&victim->region,
- struct tx_conflict_tracker,
- &size);
+ tracker = region_alloc_object(
+ &victim->region, struct tx_conflict_tracker, &size);
if (tracker == NULL) {
diag_set(OutOfMemory, size, "tx region",
"conflict_tracker");
@@ -196,18 +193,18 @@ memtx_tx_story_new(struct space *space, struct tuple *tuple)
uint32_t index_count = space->index_count;
assert(index_count < BOX_INDEX_MAX);
struct mempool *pool = &txm.memtx_tx_story_pool[index_count];
- struct memtx_story *story = (struct memtx_story *) mempool_alloc(pool);
+ struct memtx_story *story = (struct memtx_story *)mempool_alloc(pool);
if (story == NULL) {
- size_t item_size = sizeof(struct memtx_story) +
- index_count *
- sizeof(struct memtx_story_link);
+ size_t item_size =
+ sizeof(struct memtx_story) +
+ index_count * sizeof(struct memtx_story_link);
diag_set(OutOfMemory, item_size, "mempool_alloc", "story");
return NULL;
}
story->tuple = tuple;
const struct memtx_story **put_story =
- (const struct memtx_story **) &story;
+ (const struct memtx_story **)&story;
struct memtx_story **empty = NULL;
mh_int_t pos = mh_history_put(txm.history, put_story, &empty, 0);
if (pos == mh_end(txm.history)) {
@@ -289,7 +286,6 @@ memtx_tx_story_delete_del_stmt(struct memtx_story *story)
memtx_tx_story_delete(story);
}
-
/**
* Find a story of a @a tuple. The story expected to be present (assert).
*/
@@ -309,8 +305,8 @@ memtx_tx_story_get(struct tuple *tuple)
static struct tuple *
memtx_tx_story_older_tuple(struct memtx_story_link *link)
{
- return link->older.is_story ? link->older.story->tuple
- : link->older.tuple;
+ return link->older.is_story ? link->older.story->tuple :
+ link->older.tuple;
}
/**
@@ -318,8 +314,7 @@ memtx_tx_story_older_tuple(struct memtx_story_link *link)
*/
static void
memtx_tx_story_link_story(struct memtx_story *story,
- struct memtx_story *older_story,
- uint32_t index)
+ struct memtx_story *older_story, uint32_t index)
{
assert(older_story != NULL);
struct memtx_story_link *link = &story->link[index];
@@ -336,8 +331,7 @@ memtx_tx_story_link_story(struct memtx_story *story,
* dirty -find and link with the corresponding story.
*/
static void
-memtx_tx_story_link_tuple(struct memtx_story *story,
- struct tuple *older_tuple,
+memtx_tx_story_link_tuple(struct memtx_story *story, struct tuple *older_tuple,
uint32_t index)
{
struct memtx_story_link *link = &story->link[index];
@@ -347,9 +341,8 @@ memtx_tx_story_link_tuple(struct memtx_story *story,
if (older_tuple == NULL)
return;
if (older_tuple->is_dirty) {
- memtx_tx_story_link_story(story,
- memtx_tx_story_get(older_tuple),
- index);
+ memtx_tx_story_link_story(
+ story, memtx_tx_story_get(older_tuple), index);
return;
}
link->older.tuple = older_tuple;
@@ -389,16 +382,14 @@ memtx_tx_story_gc_step()
/* Lowest read view PSN */
int64_t lowest_rv_psm = txn_last_psn;
if (!rlist_empty(&txm.read_view_txs)) {
- struct txn *txn =
- rlist_first_entry(&txm.read_view_txs, struct txn,
- in_read_view_txs);
+ struct txn *txn = rlist_first_entry(
+ &txm.read_view_txs, struct txn, in_read_view_txs);
assert(txn->rv_psn != 0);
lowest_rv_psm = txn->rv_psn;
}
- struct memtx_story *story =
- rlist_entry(txm.traverse_all_stories, struct memtx_story,
- in_all_stories);
+ struct memtx_story *story = rlist_entry(
+ txm.traverse_all_stories, struct memtx_story, in_all_stories);
txm.traverse_all_stories = txm.traverse_all_stories->next;
if (story->add_stmt != NULL || story->del_stmt != NULL ||
@@ -511,8 +502,7 @@ memtx_tx_story_is_visible(struct memtx_story *story, struct txn *txn,
/**
* Temporary (allocated on region) struct that stores a conflicting TX.
*/
-struct memtx_tx_conflict
-{
+struct memtx_tx_conflict {
/* The transaction that will conflict us upon commit. */
struct txn *breaker;
/* Link in single-linked list. */
@@ -552,12 +542,10 @@ memtx_tx_save_conflict(struct txn *breaker,
* @return 0 on success, -1 on memory error.
*/
static int
-memtx_tx_story_find_visible_tuple(struct memtx_story *story,
- struct txn_stmt *stmt,
- uint32_t index,
- struct tuple **visible_replaced,
- struct memtx_tx_conflict **collected_conflicts,
- struct region *region)
+memtx_tx_story_find_visible_tuple(
+ struct memtx_story *story, struct txn_stmt *stmt, uint32_t index,
+ struct tuple **visible_replaced,
+ struct memtx_tx_conflict **collected_conflicts, struct region *region)
{
while (true) {
if (!story->link[index].older.is_story) {
@@ -603,7 +591,6 @@ memtx_tx_story_find_visible_tuple(struct memtx_story *story,
collected_conflicts,
region) != 0)
return -1;
-
}
}
return 0;
@@ -647,18 +634,16 @@ memtx_tx_history_add_stmt(struct txn_stmt *stmt, struct tuple *old_tuple,
add_story_linked++;
struct tuple *visible_replaced = NULL;
- if (memtx_tx_story_find_visible_tuple(add_story, stmt, i,
- &visible_replaced,
- &collected_conflicts,
- region) != 0)
+ if (memtx_tx_story_find_visible_tuple(
+ add_story, stmt, i, &visible_replaced,
+ &collected_conflicts, region) != 0)
goto fail;
uint32_t errcode;
errcode = replace_check_dup(old_tuple, visible_replaced,
i == 0 ? mode : DUP_INSERT);
if (errcode != 0) {
- diag_set(ClientError, errcode,
- index->def->name,
+ diag_set(ClientError, errcode, index->def->name,
space_name(space));
goto fail;
}
@@ -685,8 +670,8 @@ memtx_tx_history_add_stmt(struct txn_stmt *stmt, struct tuple *old_tuple,
if (del_tuple->is_dirty) {
del_story = memtx_tx_story_get(del_tuple);
} else {
- del_story = memtx_tx_story_new_del_stmt(del_tuple,
- stmt);
+ del_story =
+ memtx_tx_story_new_del_stmt(del_tuple, stmt);
if (del_story == NULL)
goto fail;
del_story_created = true;
@@ -729,7 +714,7 @@ memtx_tx_history_add_stmt(struct txn_stmt *stmt, struct tuple *old_tuple,
tuple_ref(*result);
return 0;
- fail:
+fail:
if (add_story != NULL) {
while (add_story_linked > 0) {
--add_story_linked;
@@ -739,15 +724,14 @@ memtx_tx_history_add_stmt(struct txn_stmt *stmt, struct tuple *old_tuple,
struct memtx_story_link *link = &add_story->link[i];
struct tuple *was = memtx_tx_story_older_tuple(link);
struct tuple *unused;
- if (index_replace(index, new_tuple, was,
- DUP_INSERT, &unused) != 0) {
+ if (index_replace(index, new_tuple, was, DUP_INSERT,
+ &unused) != 0) {
diag_log();
unreachable();
panic("failed to rollback change");
}
memtx_tx_story_unlink(stmt->add_story, i);
-
}
memtx_tx_story_delete_add_stmt(stmt->add_story);
}
@@ -778,7 +762,8 @@ memtx_tx_history_rollback_stmt(struct txn_stmt *stmt)
if (link->newer_story == NULL) {
struct tuple *unused;
struct index *index = stmt->space->index[i];
- struct tuple *was = memtx_tx_story_older_tuple(link);
+ struct tuple *was =
+ memtx_tx_story_older_tuple(link);
if (index_replace(index, story->tuple, was,
DUP_INSERT, &unused) != 0) {
diag_log();
@@ -791,7 +776,8 @@ memtx_tx_history_rollback_stmt(struct txn_stmt *stmt)
assert(newer->link[i].older.story == story);
memtx_tx_story_unlink(newer, i);
if (link->older.is_story) {
- struct memtx_story *to = link->older.story;
+ struct memtx_story *to =
+ link->older.story;
memtx_tx_story_link_story(newer, to, i);
} else {
struct tuple *to = link->older.tuple;
@@ -834,7 +820,7 @@ memtx_tx_history_prepare_stmt(struct txn_stmt *stmt)
* Note that if stmt->add_story == NULL, the index_count is set to 0,
* and we will not enter the loop.
*/
- for (uint32_t i = 0; i < index_count; ) {
+ for (uint32_t i = 0; i < index_count;) {
if (!story->link[i].older.is_story) {
/* tuple is old. */
i++;
@@ -894,13 +880,11 @@ memtx_tx_history_prepare_stmt(struct txn_stmt *stmt)
memtx_tx_story_unlink(story, i);
if (old_story->link[i].older.is_story) {
- struct memtx_story *to =
- old_story->link[i].older.story;
+ struct memtx_story *to = old_story->link[i].older.story;
memtx_tx_story_unlink(old_story, i);
memtx_tx_story_link_story(story, to, i);
} else {
- struct tuple *to =
- old_story->link[i].older.tuple;
+ struct tuple *to = old_story->link[i].older.tuple;
memtx_tx_story_unlink(old_story, i);
memtx_tx_story_link_tuple(story, to, i);
}
@@ -1019,10 +1003,9 @@ memtx_tx_on_space_delete(struct space *space)
{
/* Just clear pointer to space, it will be handled in GC. */
while (!rlist_empty(&space->memtx_stories)) {
- struct memtx_story *story
- = rlist_first_entry(&space->memtx_stories,
- struct memtx_story,
- in_space_stories);
+ struct memtx_story *story =
+ rlist_first_entry(&space->memtx_stories,
+ struct memtx_story, in_space_stories);
story->space = NULL;
rlist_del(&story->in_space_stories);
}
@@ -1095,13 +1078,12 @@ memtx_tx_track_read(struct txn *txn, struct space *space, struct tuple *tuple)
struct rlist *r1 = story->reader_list.next;
struct rlist *r2 = txn->read_set.next;
while (r1 != &story->reader_list && r2 != &txn->read_set) {
- tracker = rlist_entry(r1, struct tx_read_tracker,
- in_reader_list);
+ tracker =
+ rlist_entry(r1, struct tx_read_tracker, in_reader_list);
assert(tracker->story == story);
if (tracker->reader == txn)
break;
- tracker = rlist_entry(r2, struct tx_read_tracker,
- in_read_set);
+ tracker = rlist_entry(r2, struct tx_read_tracker, in_read_set);
assert(tracker->reader == txn);
if (tracker->story == story)
break;
@@ -1139,8 +1121,7 @@ memtx_tx_snapshot_cleaner_hash(const struct tuple *a)
return u ^ (u >> 32);
}
-struct memtx_tx_snapshot_cleaner_entry
-{
+struct memtx_tx_snapshot_cleaner_entry {
struct tuple *from;
struct tuple *to;
};
@@ -1165,8 +1146,8 @@ memtx_tx_snapshot_cleaner_create(struct memtx_tx_snapshot_cleaner *cleaner,
return 0;
struct mh_snapshot_cleaner_t *ht = mh_snapshot_cleaner_new();
if (ht == NULL) {
- diag_set(OutOfMemory, sizeof(*ht),
- index_name, "snapshot cleaner");
+ diag_set(OutOfMemory, sizeof(*ht), index_name,
+ "snapshot cleaner");
free(ht);
return -1;
}
@@ -1174,19 +1155,18 @@ memtx_tx_snapshot_cleaner_create(struct memtx_tx_snapshot_cleaner *cleaner,
struct memtx_story *story;
rlist_foreach_entry(story, &space->memtx_stories, in_space_stories) {
struct tuple *tuple = story->tuple;
- struct tuple *clean =
- memtx_tx_tuple_clarify_slow(NULL, space, tuple, 0, 0,
- true);
+ struct tuple *clean = memtx_tx_tuple_clarify_slow(
+ NULL, space, tuple, 0, 0, true);
if (clean == tuple)
continue;
struct memtx_tx_snapshot_cleaner_entry entry;
entry.from = tuple;
entry.to = clean;
- mh_int_t res = mh_snapshot_cleaner_put(ht, &entry, NULL, 0);
+ mh_int_t res = mh_snapshot_cleaner_put(ht, &entry, NULL, 0);
if (res == mh_end(ht)) {
- diag_set(OutOfMemory, sizeof(entry),
- index_name, "snapshot rollback entry");
+ diag_set(OutOfMemory, sizeof(entry), index_name,
+ "snapshot rollback entry");
mh_snapshot_cleaner_delete(ht);
return -1;
}
@@ -1204,7 +1184,7 @@ memtx_tx_snapshot_clarify_slow(struct memtx_tx_snapshot_cleaner *cleaner,
struct mh_snapshot_cleaner_t *ht = cleaner->ht;
while (true) {
- mh_int_t pos = mh_snapshot_cleaner_find(ht, tuple, 0);
+ mh_int_t pos = mh_snapshot_cleaner_find(ht, tuple, 0);
if (pos == mh_end(ht))
break;
struct memtx_tx_snapshot_cleaner_entry *entry =
@@ -1216,7 +1196,6 @@ memtx_tx_snapshot_clarify_slow(struct memtx_tx_snapshot_cleaner *cleaner,
return tuple;
}
-
void
memtx_tx_snapshot_cleaner_destroy(struct memtx_tx_snapshot_cleaner *cleaner)
{
diff --git a/src/box/memtx_tx.h b/src/box/memtx_tx.h
index 25a2038..22a5872 100644
--- a/src/box/memtx_tx.h
+++ b/src/box/memtx_tx.h
@@ -297,8 +297,8 @@ memtx_tx_track_read(struct txn *txn, struct space *space, struct tuple *tuple);
*/
static inline struct tuple *
memtx_tx_tuple_clarify(struct txn *txn, struct space *space,
- struct tuple *tuple, uint32_t index,
- uint32_t mk_index, bool is_prepared_ok)
+ struct tuple *tuple, uint32_t index, uint32_t mk_index,
+ bool is_prepared_ok)
{
if (!memtx_tx_manager_use_mvcc_engine)
return tuple;
diff --git a/src/box/merger.c b/src/box/merger.c
index fff12f9..8d33ba4 100644
--- a/src/box/merger.c
+++ b/src/box/merger.c
@@ -39,8 +39,8 @@
#define HEAP_FORWARD_DECLARATION
#include "salad/heap.h"
-#include "diag.h" /* diag_set() */
-#include "box/tuple.h" /* tuple_ref(), tuple_unref(),
+#include "diag.h" /* diag_set() */
+#include "box/tuple.h" /* tuple_ref(), tuple_unref(),
tuple_validate() */
#include "box/tuple_format.h" /* box_tuple_format_new(),
tuple_format_*() */
@@ -210,8 +210,8 @@ static int
merger_set_sources(struct merger *merger, struct merge_source **sources,
uint32_t source_count)
{
- const size_t nodes_size = sizeof(struct merger_heap_node) *
- source_count;
+ const size_t nodes_size =
+ sizeof(struct merger_heap_node) * source_count;
struct merger_heap_node *nodes = malloc(nodes_size);
if (nodes == NULL) {
diag_set(OutOfMemory, nodes_size, "malloc",
@@ -227,7 +227,6 @@ merger_set_sources(struct merger *merger, struct merge_source **sources,
return 0;
}
-
struct merge_source *
merger_new(struct key_def *key_def, struct merge_source **sources,
uint32_t source_count, bool reverse)
diff --git a/src/box/mp_error.cc b/src/box/mp_error.cc
index 36fbcef..4f5fa53 100644
--- a/src/box/mp_error.cc
+++ b/src/box/mp_error.cc
@@ -69,9 +69,7 @@
/**
* MP_ERROR keys
*/
-enum {
- MP_ERROR_STACK = 0x00
-};
+enum { MP_ERROR_STACK = 0x00 };
/**
* Keys of individual error in the stack.
@@ -98,13 +96,8 @@ enum {
};
static const char *const mp_error_field_to_json_key[MP_ERROR_MAX] = {
- "\"type\": ",
- "\"file\": ",
- "\"line\": ",
- "\"message\": ",
- "\"errno\": ",
- "\"code\": ",
- "\"fields\": ",
+ "\"type\": ", "\"file\": ", "\"line\": ", "\"message\": ",
+ "\"errno\": ", "\"code\": ", "\"fields\": ",
};
/**
@@ -253,7 +246,7 @@ error_build_xc(struct mp_error *mp_error)
struct error *err = NULL;
if (mp_error->type == NULL || mp_error->message == NULL ||
mp_error->file == NULL) {
-missing_fields:
+ missing_fields:
diag_set(ClientError, ER_INVALID_MSGPACK,
"Missing mandatory error fields");
return NULL;
@@ -286,14 +279,14 @@ missing_fields:
err = new XlogGapError(mp_error->file, mp_error->line,
mp_error->message);
} else if (strcmp(mp_error->type, "SystemError") == 0) {
- err = new SystemError(mp_error->file, mp_error->line,
- "%s", mp_error->message);
+ err = new SystemError(mp_error->file, mp_error->line, "%s",
+ mp_error->message);
} else if (strcmp(mp_error->type, "SocketError") == 0) {
err = new SocketError(mp_error->file, mp_error->line, "", "");
error_format_msg(err, "%s", mp_error->message);
} else if (strcmp(mp_error->type, "OutOfMemory") == 0) {
- err = new OutOfMemory(mp_error->file, mp_error->line,
- 0, "", "");
+ err = new OutOfMemory(mp_error->file, mp_error->line, 0, "",
+ "");
} else if (strcmp(mp_error->type, "TimedOut") == 0) {
err = new TimedOut(mp_error->file, mp_error->line);
} else if (strcmp(mp_error->type, "ChannelIsClosed") == 0) {
@@ -304,17 +297,17 @@ missing_fields:
err = new LuajitError(mp_error->file, mp_error->line,
mp_error->message);
} else if (strcmp(mp_error->type, "IllegalParams") == 0) {
- err = new IllegalParams(mp_error->file, mp_error->line,
- "%s", mp_error->message);
+ err = new IllegalParams(mp_error->file, mp_error->line, "%s",
+ mp_error->message);
} else if (strcmp(mp_error->type, "CollationError") == 0) {
- err = new CollationError(mp_error->file, mp_error->line,
- "%s", mp_error->message);
+ err = new CollationError(mp_error->file, mp_error->line, "%s",
+ mp_error->message);
} else if (strcmp(mp_error->type, "SwimError") == 0) {
- err = new SwimError(mp_error->file, mp_error->line,
- "%s", mp_error->message);
+ err = new SwimError(mp_error->file, mp_error->line, "%s",
+ mp_error->message);
} else if (strcmp(mp_error->type, "CryptoError") == 0) {
- err = new CryptoError(mp_error->file, mp_error->line,
- "%s", mp_error->message);
+ err = new CryptoError(mp_error->file, mp_error->line, "%s",
+ mp_error->message);
} else {
err = new ClientError(mp_error->file, mp_error->line,
ER_UNKNOWN);
@@ -347,7 +340,8 @@ mp_decode_and_copy_str(const char **data, struct region *region)
}
uint32_t str_len;
const char *str = mp_decode_str(data, &str_len);
- return region_strdup(region, str, str_len);;
+ return region_strdup(region, str, str_len);
+ ;
}
static inline bool
@@ -415,7 +409,7 @@ mp_decode_error_one(const char **data)
goto error;
uint64_t key = mp_decode_uint(data);
- switch(key) {
+ switch (key) {
case MP_ERROR_TYPE:
mp_err.type = mp_decode_and_copy_str(data, region);
if (mp_err.type == NULL)
@@ -540,7 +534,7 @@ error_unpack_unsafe(const char **data)
return NULL;
}
uint64_t key = mp_decode_uint(data);
- switch(key) {
+ switch (key) {
case MP_ERROR_STACK: {
if (mp_typeof(**data) != MP_ARRAY) {
diag_set(ClientError, ER_INVALID_MSGPACK,
@@ -579,7 +573,7 @@ error_unpack_unsafe(const char **data)
#define MP_ERROR_PRINT_DEFINITION
#define MP_PRINT_FUNC snprintf
#define MP_PRINT_SUFFIX snprint
-#define MP_PRINT_2(total, func, ...) \
+#define MP_PRINT_2(total, func, ...) \
SNPRINT(total, func, buf, size, __VA_ARGS__)
#define MP_PRINT_ARGS_DECL char *buf, int size
#include __FILE__
@@ -587,12 +581,13 @@ error_unpack_unsafe(const char **data)
#define MP_ERROR_PRINT_DEFINITION
#define MP_PRINT_FUNC fprintf
#define MP_PRINT_SUFFIX fprint
-#define MP_PRINT_2(total, func, ...) do { \
- int bytes = func(file, __VA_ARGS__); \
- if (bytes < 0) \
- return -1; \
- total += bytes; \
-} while (0)
+#define MP_PRINT_2(total, func, ...) \
+ do { \
+ int bytes = func(file, __VA_ARGS__); \
+ if (bytes < 0) \
+ return -1; \
+ total += bytes; \
+ } while (0)
#define MP_PRINT_ARGS_DECL FILE *file
#include __FILE__
@@ -624,16 +619,15 @@ error_unpack_unsafe(const char **data)
* turn it into a template.
*/
-#define MP_CONCAT4_R(a, b, c, d) a##b##c##d
-#define MP_CONCAT4(a, b, c, d) MP_CONCAT4_R(a, b, c, d)
-#define MP_PRINT(total, ...) MP_PRINT_2(total, MP_PRINT_FUNC, \
- __VA_ARGS__)
+#define MP_CONCAT4_R(a, b, c, d) a##b##c##d
+#define MP_CONCAT4(a, b, c, d) MP_CONCAT4_R(a, b, c, d)
+#define MP_PRINT(total, ...) MP_PRINT_2(total, MP_PRINT_FUNC, __VA_ARGS__)
-#define mp_func_name(name) MP_CONCAT4(mp_, MP_PRINT_SUFFIX, _, name)
-#define mp_print_error_one mp_func_name(error_one)
-#define mp_print_error_stack mp_func_name(error_stack)
-#define mp_print_error mp_func_name(error)
-#define mp_print_common mp_func_name(recursion)
+#define mp_func_name(name) MP_CONCAT4(mp_, MP_PRINT_SUFFIX, _, name)
+#define mp_print_error_one mp_func_name(error_one)
+#define mp_print_error_stack mp_func_name(error_stack)
+#define mp_print_error mp_func_name(error)
+#define mp_print_common mp_func_name(recursion)
static int
mp_print_error_one(MP_PRINT_ARGS_DECL, const char **data, int depth)
@@ -710,7 +704,7 @@ mp_print_error(MP_PRINT_ARGS_DECL, const char **data, int depth)
if (mp_typeof(**data) != MP_UINT)
return -1;
uint64_t key = mp_decode_uint(data);
- switch(key) {
+ switch (key) {
case MP_ERROR_STACK: {
MP_PRINT(total, "\"stack\": ");
MP_PRINT_2(total, mp_print_error_stack, data, depth);
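[Editor's note: the MP_PRINT_2 change above reflows a multi-statement macro into the `do { ... } while (0)` idiom with clang-format-aligned backslash continuations. A minimal standalone sketch of the same idiom — the macro and function names here are hypothetical, not from the patch:]

```c
#include <stdio.h>
#include <assert.h>
#include <string.h>

/*
 * Hypothetical accumulator macro in the style of MP_PRINT_2: wrapping
 * the statement list in do { ... } while (0) makes the macro expand to
 * a single statement, so it is safe after an unbraced `if`, and the
 * trailing `while (0)` consumes the caller's semicolon.
 */
#define ACCUM_PRINT(total, ...)				\
	do {						\
		int bytes = snprintf(__VA_ARGS__);	\
		if (bytes < 0)				\
			return -1;			\
		total += bytes;				\
	} while (0)

/* Append two formatted fields to buf, returning the total length. */
static int
format_pair(char *buf, int size)
{
	int total = 0;
	ACCUM_PRINT(total, buf, size, "x=%d ", 1);
	ACCUM_PRINT(total, buf + total, size - total, "y=%d", 2);
	return total; /* buf now holds "x=1 y=2", total == 7 */
}
```

The `do { ... } while (0)` wrapper is what lets the macro contain an early `return` and a local variable without changing the caller's control flow or scope.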
diff --git a/src/box/msgpack.c b/src/box/msgpack.c
index 1723dea..7a4aad3 100644
--- a/src/box/msgpack.c
+++ b/src/box/msgpack.c
@@ -42,7 +42,7 @@ msgpack_fprint_ext(FILE *file, const char **data, int depth)
const char **orig = data;
int8_t type;
uint32_t len = mp_decode_extl(data, &type);
- switch(type) {
+ switch (type) {
case MP_DECIMAL:
return mp_fprint_decimal(file, data, len);
case MP_UUID:
@@ -60,7 +60,7 @@ msgpack_snprint_ext(char *buf, int size, const char **data, int depth)
const char **orig = data;
int8_t type;
uint32_t len = mp_decode_extl(data, &type);
- switch(type) {
+ switch (type) {
case MP_DECIMAL:
return mp_snprint_decimal(buf, size, data, len);
case MP_UUID:
diff --git a/src/box/opt_def.c b/src/box/opt_def.c
index e282085..76ed42f 100644
--- a/src/box/opt_def.c
+++ b/src/box/opt_def.c
@@ -60,7 +60,7 @@ opt_set(void *opts, const struct opt_def *def, const char **val,
uint32_t str_len;
const char *str;
char *ptr;
- char *opt = ((char *) opts) + def->offset;
+ char *opt = ((char *)opts) + def->offset;
switch (def->type) {
case OPT_BOOL:
if (mp_typeof(**val) != MP_BOOL)
@@ -98,7 +98,7 @@ opt_set(void *opts, const struct opt_def *def, const char **val,
goto type_mismatch_err;
str = mp_decode_str(val, &str_len);
if (str_len > 0) {
- ptr = (char *) region_alloc(region, str_len + 1);
+ ptr = (char *)region_alloc(region, str_len + 1);
if (ptr == NULL) {
diag_set(OutOfMemory, str_len + 1, "region",
"opt string");
@@ -106,7 +106,7 @@ opt_set(void *opts, const struct opt_def *def, const char **val,
}
memcpy(ptr, str, str_len);
ptr[str_len] = '\0';
- assert (strlen(ptr) == str_len);
+ assert(strlen(ptr) == str_len);
} else {
ptr = NULL;
}
@@ -122,7 +122,7 @@ opt_set(void *opts, const struct opt_def *def, const char **val,
} else {
ival = def->to_enum(str, str_len);
}
- switch(def->enum_size) {
+ switch (def->enum_size) {
case sizeof(uint8_t):
store_u8(opt, (uint8_t)ival);
break;
@@ -175,7 +175,7 @@ opts_parse_key(void *opts, const struct opt_def *reg, const char *key,
return opt_set(opts, def, data, region, errcode, field_no);
}
- if (! skip_unknown_options) {
+ if (!skip_unknown_options) {
char *errmsg = tt_static_buf();
snprintf(errmsg, TT_STATIC_BUF_LEN, "unexpected option '%.*s'",
key_len, key);
diff --git a/src/box/opt_def.h b/src/box/opt_def.h
index 2154441..6640678 100644
--- a/src/box/opt_def.h
+++ b/src/box/opt_def.h
@@ -40,15 +40,15 @@ extern "C" {
#endif /* defined(__cplusplus) */
enum opt_type {
- OPT_BOOL, /* bool */
- OPT_UINT32, /* uint32_t */
- OPT_INT64, /* int64_t */
- OPT_FLOAT, /* double */
- OPT_STR, /* char[] */
- OPT_STRPTR, /* char* */
- OPT_ENUM, /* enum */
- OPT_ARRAY, /* array */
- OPT_LEGACY, /* any type, skipped */
+ OPT_BOOL, /* bool */
+ OPT_UINT32, /* uint32_t */
+ OPT_INT64, /* int64_t */
+ OPT_FLOAT, /* double */
+ OPT_STR, /* char[] */
+ OPT_STRPTR, /* char* */
+ OPT_ENUM, /* enum */
+ OPT_ARRAY, /* array */
+ OPT_LEGACY, /* any type, skipped */
opt_type_MAX,
};
@@ -94,23 +94,43 @@ struct opt_def {
};
};
-#define OPT_DEF(key, type, opts, field) \
- { key, type, offsetof(opts, field), sizeof(((opts *)0)->field), \
- NULL, 0, NULL, 0, {NULL} }
-
-#define OPT_DEF_ENUM(key, enum_name, opts, field, to_enum) \
- { key, OPT_ENUM, offsetof(opts, field), sizeof(int), #enum_name, \
- sizeof(enum enum_name), enum_name##_strs, enum_name##_MAX, \
- {(void *)to_enum} }
-
-#define OPT_DEF_ARRAY(key, opts, field, to_array) \
- { key, OPT_ARRAY, offsetof(opts, field), sizeof(((opts *)0)->field), \
- NULL, 0, NULL, 0, {(void *)to_array} }
-
-#define OPT_DEF_LEGACY(key) \
- { key, OPT_LEGACY, 0, 0, NULL, 0, NULL, 0, {NULL} }
-
-#define OPT_END {NULL, opt_type_MAX, 0, 0, NULL, 0, NULL, 0, {NULL}}
+#define OPT_DEF(key, type, opts, field) \
+ { \
+ key, type, offsetof(opts, field), sizeof(((opts *)0)->field), \
+ NULL, 0, NULL, 0, \
+ { \
+ NULL \
+ } \
+ }
+
+#define OPT_DEF_ENUM(key, enum_name, opts, field, to_enum) \
+ { \
+ key, OPT_ENUM, offsetof(opts, field), sizeof(int), #enum_name, \
+ sizeof(enum enum_name), enum_name##_strs, \
+ enum_name##_MAX, \
+ { \
+ (void *)to_enum \
+ } \
+ }
+
+#define OPT_DEF_ARRAY(key, opts, field, to_array) \
+ { \
+ key, OPT_ARRAY, offsetof(opts, field), \
+ sizeof(((opts *)0)->field), NULL, 0, NULL, 0, \
+ { \
+ (void *)to_array \
+ } \
+ }
+
+#define OPT_DEF_LEGACY(key) \
+ { \
+ key, OPT_LEGACY, 0, 0, NULL, 0, NULL, 0, { NULL } \
+ }
+
+#define OPT_END \
+ { \
+ NULL, opt_type_MAX, 0, 0, NULL, 0, NULL, 0, { NULL } \
+ }
struct region;
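The `OPT_DEF` family of macros reformatted above builds a table of option descriptors keyed by `offsetof`, so one generic setter can write into any field of the target struct. A simplified sketch of that technique, with hypothetical names (`opt_slot`, `opt_store`) standing in for the real `opt_def`/`opt_set` machinery:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct my_opts {
	int limit;
	int verbose;
};

/* Each slot records where in the options struct the value lives. */
struct opt_slot {
	const char *key;
	size_t offset;
};

#define OPT_SLOT(key, type, field) { key, offsetof(type, field) }

static const struct opt_slot slots[] = {
	OPT_SLOT("limit", struct my_opts, limit),
	OPT_SLOT("verbose", struct my_opts, verbose),
	{ NULL, 0 },
};

/* Generic setter: find the slot by key, then write through the offset. */
int
opt_store(void *opts, const char *key, int value)
{
	for (const struct opt_slot *s = slots; s->key != NULL; s++) {
		if (strcmp(s->key, key) == 0) {
			*(int *)((char *)opts + s->offset) = value;
			return 0;
		}
	}
	return -1;
}

int
opts_demo(void)
{
	struct my_opts o = { 0, 0 };
	if (opt_store(&o, "limit", 7) != 0)
		return -1;
	if (opt_store(&o, "missing", 1) != -1)
		return -1;
	return o.limit;
}
```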
diff --git a/src/box/port.h b/src/box/port.h
index 43d0f9d..8714422 100644
--- a/src/box/port.h
+++ b/src/box/port.h
@@ -93,9 +93,9 @@ struct sql_value;
/** Port implementation used with vdbe memory variables. */
struct port_vdbemem {
- const struct port_vtab *vtab;
- struct sql_value *mem;
- uint32_t mem_count;
+ const struct port_vtab *vtab;
+ struct sql_value *mem;
+ uint32_t mem_count;
};
static_assert(sizeof(struct port_vdbemem) <= sizeof(struct port),
diff --git a/src/box/raft.c b/src/box/raft.c
index 0b6c373..b752c97 100644
--- a/src/box/raft.c
+++ b/src/box/raft.c
@@ -338,20 +338,23 @@ raft_process_msg(const struct raft_request *req, uint32_t source)
assert(source > 0);
assert(source != instance_id);
if (req->term == 0 || req->state == 0) {
- diag_set(ClientError, ER_PROTOCOL, "Raft term and state can't "
+ diag_set(ClientError, ER_PROTOCOL,
+ "Raft term and state can't "
"be zero");
return -1;
}
if (req->state == RAFT_STATE_CANDIDATE &&
(req->vote != source || req->vclock == NULL)) {
- diag_set(ClientError, ER_PROTOCOL, "Candidate should always "
+ diag_set(ClientError, ER_PROTOCOL,
+ "Candidate should always "
"vote for self and provide its vclock");
return -1;
}
/* Outdated request. */
if (req->term < raft.volatile_term) {
say_info("RAFT: the message is ignored due to outdated term - "
- "current term is %u", raft.volatile_term);
+ "current term is %u",
+ raft.volatile_term);
return 0;
}
@@ -428,8 +431,8 @@ raft_process_msg(const struct raft_request *req, uint32_t source)
raft.vote_count += !was_set;
if (raft.vote_count < quorum) {
say_info("RAFT: accepted vote for self, vote "
- "count is %d/%d", raft.vote_count,
- quorum);
+ "count is %d/%d",
+ raft.vote_count, quorum);
break;
}
raft_sm_become_leader();
@@ -441,7 +444,8 @@ raft_process_msg(const struct raft_request *req, uint32_t source)
if (req->state != RAFT_STATE_LEADER) {
if (source == raft.leader) {
say_info("RAFT: the node %u has resigned from the "
- "leader role", raft.leader);
+ "leader role",
+ raft.leader);
raft_sm_schedule_new_election();
}
return 0;
@@ -457,7 +461,8 @@ raft_process_msg(const struct raft_request *req, uint32_t source)
*/
if (raft.leader != 0) {
say_warn("RAFT: conflicting leader detected in one term - "
- "known is %u, received %u", raft.leader, source);
+ "known is %u, received %u",
+ raft.leader, source);
return 0;
}
@@ -531,8 +536,7 @@ raft_write_request(const struct raft_request *req)
struct region *region = &fiber()->gc;
uint32_t svp = region_used(region);
struct xrow_header row;
- char buf[sizeof(struct journal_entry) +
- sizeof(struct xrow_header *)];
+ char buf[sizeof(struct journal_entry) + sizeof(struct xrow_header *)];
struct journal_entry *entry = (struct journal_entry *)buf;
entry->rows[0] = &row;
@@ -567,7 +571,7 @@ raft_worker_handle_io(void)
struct raft_request req;
if (raft_is_fully_on_disk()) {
-end_dump:
+ end_dump:
raft.is_write_in_progress = false;
/*
* The state machine is stable. Can see now, to what state to
@@ -634,8 +638,7 @@ raft_worker_handle_broadcast(void)
assert(raft.vote == instance_id);
req.vclock = &replicaset.vclock;
}
- replicaset_foreach(replica)
- relay_push_raft(replica->relay, &req);
+ replicaset_foreach(replica) relay_push_raft(replica->relay, &req);
raft.is_broadcast_scheduled = false;
}
@@ -820,8 +823,8 @@ raft_sm_wait_election_end(void)
(raft.state == RAFT_STATE_CANDIDATE &&
raft.volatile_vote == instance_id));
assert(raft.leader == 0);
- double election_timeout = raft.election_timeout +
- raft_new_random_election_shift();
+ double election_timeout =
+ raft.election_timeout + raft_new_random_election_shift();
ev_timer_set(&raft.timer, election_timeout, election_timeout);
ev_timer_start(loop(), &raft.timer);
}
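The `raft_sm_wait_election_end` hunk above computes `election_timeout + raft_new_random_election_shift()`: each candidate waits the base timeout plus a random shift, so two nodes rarely start elections at the same instant. A sketch of that jitter idea (the 10% bound here is an assumption for illustration, not the shift Tarantool actually uses):

```c
#include <assert.h>
#include <stdlib.h>

/* Uniform jitter in [0, 0.1 * base]. */
static double
election_shift(double base)
{
	return base * 0.1 * ((double)rand() / RAND_MAX);
}

double
jittered_timeout(double base)
{
	return base + election_shift(base);
}
```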
diff --git a/src/box/recovery.cc b/src/box/recovery.cc
index cd33e76..35693b9 100644
--- a/src/box/recovery.cc
+++ b/src/box/recovery.cc
@@ -83,17 +83,13 @@ struct recovery *
recovery_new(const char *wal_dirname, bool force_recovery,
const struct vclock *vclock)
{
- struct recovery *r = (struct recovery *)
- calloc(1, sizeof(*r));
+ struct recovery *r = (struct recovery *)calloc(1, sizeof(*r));
if (r == NULL) {
- tnt_raise(OutOfMemory, sizeof(*r), "malloc",
- "struct recovery");
+ tnt_raise(OutOfMemory, sizeof(*r), "malloc", "struct recovery");
}
- auto guard = make_scoped_guard([=]{
- free(r);
- });
+ auto guard = make_scoped_guard([=] { free(r); });
xdir_create(&r->wal_dir, wal_dirname, XLOG, &INSTANCE_UUID,
&xlog_opts_default);
@@ -152,8 +148,7 @@ recovery_close_log(struct recovery *r)
if (xlog_cursor_is_eof(&r->cursor)) {
say_info("done `%s'", r->cursor.name);
} else {
- say_warn("file `%s` wasn't correctly closed",
- r->cursor.name);
+ say_warn("file `%s` wasn't correctly closed", r->cursor.name);
}
xlog_cursor_close(&r->cursor, false);
trigger_run_xc(&r->on_close_log, NULL);
@@ -325,8 +320,7 @@ recover_remaining_wals(struct recovery *r, struct xstream *stream,
}
for (clock = vclockset_match(&r->wal_dir.index, &r->vclock);
- clock != NULL;
- clock = vclockset_next(&r->wal_dir.index, clock)) {
+ clock != NULL; clock = vclockset_next(&r->wal_dir.index, clock)) {
if (stop_vclock != NULL &&
clock->signature >= stop_vclock->signature) {
break;
@@ -345,7 +339,7 @@ recover_remaining_wals(struct recovery *r, struct xstream *stream,
say_info("recover from `%s'", r->cursor.name);
-recover_current_wal:
+ recover_current_wal:
recover_xlog(r, stream, stop_vclock);
}
@@ -364,7 +358,6 @@ recovery_finalize(struct recovery *r)
recovery_close_log(r);
}
-
/* }}} */
/* {{{ Local recovery: support of hot standby and replication relay */
@@ -405,9 +398,8 @@ public:
{
f = fiber();
events = 0;
- if ((size_t)snprintf(dir_path, sizeof(dir_path), "%s", wal_dir) >=
- sizeof(dir_path)) {
-
+ if ((size_t)snprintf(dir_path, sizeof(dir_path), "%s",
+ wal_dir) >= sizeof(dir_path)) {
panic("path too long: %s", wal_dir);
}
@@ -433,8 +425,7 @@ public:
* Note: .file_path valid iff file_stat is active.
*/
if (path && ev_is_active(&file_stat) &&
- strcmp(file_path, path) == 0) {
-
+ strcmp(file_path, path) == 0) {
return;
}
@@ -443,9 +434,8 @@ public:
if (path == NULL)
return;
- if ((size_t)snprintf(file_path, sizeof(file_path), "%s", path) >=
- sizeof(file_path)) {
-
+ if ((size_t)snprintf(file_path, sizeof(file_path), "%s",
+ path) >= sizeof(file_path)) {
panic("path too long: %s", path);
}
ev_stat_set(&file_stat, file_path, 0.0);
@@ -465,8 +455,7 @@ hot_standby_f(va_list ap)
WalSubscription subscription(r->wal_dir.dirname);
- while (! fiber_is_cancelled()) {
-
+ while (!fiber_is_cancelled()) {
/*
* Recover until there is no new stuff which appeared in
* the log dir while recovery was running.
@@ -491,7 +480,8 @@ hot_standby_f(va_list ap)
} while (end > start && !xlog_cursor_is_open(&r->cursor));
subscription.set_log_path(xlog_cursor_is_open(&r->cursor) ?
- r->cursor.name : NULL);
+ r->cursor.name :
+ NULL);
bool timed_out = false;
if (subscription.events == 0) {
@@ -505,7 +495,7 @@ hot_standby_f(va_list ap)
}
scan_dir = timed_out ||
- (subscription.events & WAL_EVENT_ROTATE) != 0;
+ (subscription.events & WAL_EVENT_ROTATE) != 0;
subscription.events = 0;
}
diff --git a/src/box/recovery.h b/src/box/recovery.h
index b8d8395..774e76e 100644
--- a/src/box/recovery.h
+++ b/src/box/recovery.h
@@ -73,7 +73,7 @@ recovery_delete(struct recovery *r);
* WAL directory.
*/
void
-recovery_scan(struct recovery *r, struct vclock *end_vclock,
+recovery_scan(struct recovery *r, struct vclock *end_vclock,
struct vclock *gc_vclock);
void
diff --git a/src/box/relay.cc b/src/box/relay.cc
index 096f455..941c60d 100644
--- a/src/box/relay.cc
+++ b/src/box/relay.cc
@@ -144,8 +144,8 @@ struct relay {
struct {
/* Align to prevent false-sharing with tx thread */
alignas(CACHELINE_SIZE)
- /** Known relay vclock. */
- struct vclock vclock;
+ /** Known relay vclock. */
+ struct vclock vclock;
/**
* True if the relay needs Raft updates. It can live fine
* without sending Raft updates, if it is a relay to an
@@ -155,7 +155,7 @@ struct relay {
} tx;
};
-struct diag*
+struct diag *
relay_get_diag(struct relay *relay)
{
return &relay->diag;
@@ -189,10 +189,10 @@ relay_send_row(struct xstream *stream, struct xrow_header *row);
struct relay *
relay_new(struct replica *replica)
{
- struct relay *relay = (struct relay *) calloc(1, sizeof(struct relay));
+ struct relay *relay = (struct relay *)calloc(1, sizeof(struct relay));
if (relay == NULL) {
diag_set(OutOfMemory, sizeof(struct relay), "malloc",
- "struct relay");
+ "struct relay");
return NULL;
}
relay->replica = replica;
@@ -206,7 +206,7 @@ relay_new(struct replica *replica)
static void
relay_start(struct relay *relay, int fd, uint64_t sync,
- void (*stream_write)(struct xstream *, struct xrow_header *))
+ void (*stream_write)(struct xstream *, struct xrow_header *))
{
xstream_create(&relay->stream, stream_write);
/*
@@ -256,8 +256,9 @@ static void
relay_stop(struct relay *relay)
{
struct relay_gc_msg *gc_msg, *next_gc_msg;
- stailq_foreach_entry_safe(gc_msg, next_gc_msg,
- &relay->pending_gc, in_pending) {
+ stailq_foreach_entry_safe(gc_msg, next_gc_msg, &relay->pending_gc,
+ in_pending)
+ {
free(gc_msg);
}
stailq_create(&relay->pending_gc);
@@ -290,7 +291,7 @@ relay_set_cord_name(int fd)
char name[FIBER_NAME_MAX];
struct sockaddr_storage peer;
socklen_t addrlen = sizeof(peer);
- if (getpeername(fd, ((struct sockaddr*)&peer), &addrlen) == 0) {
+ if (getpeername(fd, ((struct sockaddr *)&peer), &addrlen) == 0) {
snprintf(name, sizeof(name), "relay/%s",
sio_strfaddr((struct sockaddr *)&peer, addrlen));
} else {
@@ -315,9 +316,8 @@ relay_initial_join(int fd, uint64_t sync, struct vclock *vclock)
/* Freeze a read view in engines. */
struct engine_join_ctx ctx;
engine_prepare_join_xc(&ctx);
- auto join_guard = make_scoped_guard([&] {
- engine_complete_join(&ctx);
- });
+ auto join_guard =
+ make_scoped_guard([&] { engine_complete_join(&ctx); });
/*
* Sync WAL to make sure that all changes visible from
@@ -355,8 +355,8 @@ relay_final_join_f(va_list ap)
/* Send all WALs until stop_vclock */
assert(relay->stream.write != NULL);
- recover_remaining_wals(relay->r, &relay->stream,
- &relay->stop_vclock, true);
+ recover_remaining_wals(relay->r, &relay->stream, &relay->stop_vclock,
+ true);
assert(vclock_compare(&relay->r->vclock, &relay->stop_vclock) == 0);
return 0;
}
@@ -378,8 +378,8 @@ relay_final_join(int fd, uint64_t sync, struct vclock *start_vclock,
relay->r = recovery_new(wal_dir(), false, start_vclock);
vclock_copy(&relay->stop_vclock, stop_vclock);
- int rc = cord_costart(&relay->cord, "final_join",
- relay_final_join_f, relay);
+ int rc = cord_costart(&relay->cord, "final_join", relay_final_join_f,
+ relay);
if (rc == 0)
rc = cord_cojoin(&relay->cord);
if (rc != 0)
@@ -423,9 +423,8 @@ tx_status_update(struct cmsg *msg)
txn_limbo_ack(&txn_limbo, status->relay->replica->id,
vclock_get(&status->vclock, instance_id));
}
- static const struct cmsg_hop route[] = {
- {relay_status_update, NULL}
- };
+ static const struct cmsg_hop route[] = { { relay_status_update,
+ NULL } };
cmsg_init(msg, route);
cpipe_push(&status->relay->relay_pipe, msg);
}
@@ -444,9 +443,7 @@ tx_gc_advance(struct cmsg *msg)
static int
relay_on_close_log_f(struct trigger *trigger, void * /* event */)
{
- static const struct cmsg_hop route[] = {
- {tx_gc_advance, NULL}
- };
+ static const struct cmsg_hop route[] = { { tx_gc_advance, NULL } };
struct relay *relay = (struct relay *)trigger->data;
struct relay_gc_msg *m = (struct relay_gc_msg *)malloc(sizeof(*m));
if (m == NULL) {
@@ -477,7 +474,8 @@ static inline void
relay_schedule_pending_gc(struct relay *relay, const struct vclock *vclock)
{
struct relay_gc_msg *curr, *next, *gc_msg = NULL;
- stailq_foreach_entry_safe(curr, next, &relay->pending_gc, in_pending) {
+ stailq_foreach_entry_safe(curr, next, &relay->pending_gc, in_pending)
+ {
/*
* We may delete a WAL file only if its vclock is
* less than or equal to the vclock acknowledged by
@@ -548,8 +546,9 @@ relay_reader_f(va_list ap)
try {
while (!fiber_is_cancelled()) {
struct xrow_header xrow;
- coio_read_xrow_timeout_xc(&io, &ibuf, &xrow,
- replication_disconnect_timeout());
+ coio_read_xrow_timeout_xc(
+ &io, &ibuf, &xrow,
+ replication_disconnect_timeout());
/* vclock is followed while decoding, zeroing it. */
vclock_create(&relay->recv_vclock);
xrow_decode_vclock_xc(&xrow, &relay->recv_vclock);
@@ -706,8 +705,8 @@ relay_subscribe_f(va_list ap)
*/
while (!fiber_is_cancelled()) {
double timeout = replication_timeout;
- struct errinj *inj = errinj(ERRINJ_RELAY_REPORT_INTERVAL,
- ERRINJ_DOUBLE);
+ struct errinj *inj =
+ errinj(ERRINJ_RELAY_REPORT_INTERVAL, ERRINJ_DOUBLE);
if (inj != NULL && inj->dparam != 0)
timeout = inj->dparam;
@@ -741,9 +740,8 @@ relay_subscribe_f(va_list ap)
if (vclock_sum(&relay->status_msg.vclock) ==
vclock_sum(send_vclock))
continue;
- static const struct cmsg_hop route[] = {
- {tx_status_update, NULL}
- };
+ static const struct cmsg_hop route[] = { { tx_status_update,
+ NULL } };
cmsg_init(&relay->status_msg.msg, route);
vclock_copy(&relay->status_msg.vclock, send_vclock);
relay->status_msg.relay = relay;
@@ -775,8 +773,8 @@ relay_subscribe_f(va_list ap)
fiber_join(reader);
/* Destroy cpipe to tx. */
- cbus_unpair(&relay->tx_pipe, &relay->relay_pipe,
- NULL, NULL, cbus_process);
+ cbus_unpair(&relay->tx_pipe, &relay->relay_pipe, NULL, NULL,
+ cbus_process);
cbus_endpoint_destroy(&relay->endpoint, cbus_process);
relay_exit(relay);
@@ -817,8 +815,8 @@ relay_subscribe(struct replica *replica, int fd, uint64_t sync,
relay->id_filter = replica_id_filter;
- int rc = cord_costart(&relay->cord, "subscribe",
- relay_subscribe_f, relay);
+ int rc = cord_costart(&relay->cord, "subscribe", relay_subscribe_f,
+ relay);
if (rc == 0)
rc = cord_cojoin(&relay->cord);
if (rc != 0)
@@ -982,12 +980,11 @@ relay_send_row(struct xstream *stream, struct xrow_header *packet)
packet->replica_id != relay->replica->id ||
packet->lsn <= vclock_get(&relay->local_vclock_at_subscribe,
packet->replica_id)) {
- struct errinj *inj = errinj(ERRINJ_RELAY_BREAK_LSN,
- ERRINJ_INT);
+ struct errinj *inj = errinj(ERRINJ_RELAY_BREAK_LSN, ERRINJ_INT);
if (inj != NULL && packet->lsn == inj->iparam) {
packet->lsn = inj->iparam - 1;
say_warn("injected broken lsn: %lld",
- (long long) packet->lsn);
+ (long long)packet->lsn);
}
relay_send(relay, packet);
}
diff --git a/src/box/relay.h b/src/box/relay.h
index b32e2ea..6707bba 100644
--- a/src/box/relay.h
+++ b/src/box/relay.h
@@ -70,7 +70,7 @@ void
relay_delete(struct relay *relay);
/** Get last relay's diagnostic error */
-struct diag*
+struct diag *
relay_get_diag(struct relay *relay);
/** Return the current state of relay. */
diff --git a/src/box/replication.cc b/src/box/replication.cc
index c19f8c6..a4d0dae 100644
--- a/src/box/replication.cc
+++ b/src/box/replication.cc
@@ -47,13 +47,13 @@ uint32_t instance_id = REPLICA_ID_NIL;
struct tt_uuid INSTANCE_UUID;
struct tt_uuid REPLICASET_UUID;
-double replication_timeout = 1.0; /* seconds */
+double replication_timeout = 1.0; /* seconds */
double replication_connect_timeout = 30.0; /* seconds */
int replication_connect_quorum = REPLICATION_CONNECT_QUORUM_ALL;
double replication_sync_lag = 10.0; /* seconds */
int replication_synchro_quorum = 1;
double replication_synchro_timeout = 5.0; /* seconds */
-double replication_sync_timeout = 300.0; /* seconds */
+double replication_sync_timeout = 300.0; /* seconds */
bool replication_skip_conflict = false;
bool replication_anon = false;
@@ -65,11 +65,11 @@ replica_compare_by_uuid(const struct replica *a, const struct replica *b)
return tt_uuid_compare(&a->uuid, &b->uuid);
}
-rb_gen(MAYBE_UNUSED static, replica_hash_, replica_hash_t,
- struct replica, in_hash, replica_compare_by_uuid);
+rb_gen(MAYBE_UNUSED static, replica_hash_, replica_hash_t, struct replica,
+ in_hash, replica_compare_by_uuid);
-#define replica_hash_foreach_safe(hash, item, next) \
- for (item = replica_hash_first(hash); \
+#define replica_hash_foreach_safe(hash, item, next) \
+ for (item = replica_hash_first(hash); \
item != NULL && ((next = replica_hash_next(hash, item)) || 1); \
item = next)
@@ -109,8 +109,7 @@ replication_free(void)
* cbus upon shutdown, which could lead to segfaults.
* So cancel them.
*/
- replicaset_foreach(replica)
- relay_cancel(replica->relay);
+ replicaset_foreach(replica) relay_cancel(replica->relay);
diag_destroy(&replicaset.applier.diag);
}
@@ -120,12 +119,11 @@ replica_check_id(uint32_t replica_id)
{
if (replica_id == REPLICA_ID_NIL) {
diag_set(ClientError, ER_REPLICA_ID_IS_RESERVED,
- (unsigned) replica_id);
+ (unsigned)replica_id);
return -1;
}
if (replica_id >= VCLOCK_MAX) {
- diag_set(ClientError, ER_REPLICA_MAX,
- (unsigned) replica_id);
+ diag_set(ClientError, ER_REPLICA_MAX, (unsigned)replica_id);
return -1;
}
/*
@@ -140,7 +138,7 @@ replica_check_id(uint32_t replica_id)
*/
if (!replicaset.is_joining && replica_id == instance_id) {
diag_set(ClientError, ER_LOCAL_INSTANCE_ID_IS_READ_ONLY,
- (unsigned) replica_id);
+ (unsigned)replica_id);
return -1;
}
return 0;
@@ -161,8 +159,8 @@ replica_on_applier_state_f(struct trigger *trigger, void *event);
static struct replica *
replica_new(void)
{
- struct replica *replica = (struct replica *)
- malloc(sizeof(struct replica));
+ struct replica *replica =
+ (struct replica *)malloc(sizeof(struct replica));
if (replica == NULL) {
tnt_raise(OutOfMemory, sizeof(*replica), "malloc",
"struct replica");
@@ -178,8 +176,8 @@ replica_new(void)
replica->applier = NULL;
replica->gc = NULL;
rlist_create(&replica->in_anon);
- trigger_create(&replica->on_applier_state,
- replica_on_applier_state_f, NULL, NULL);
+ trigger_create(&replica->on_applier_state, replica_on_applier_state_f,
+ NULL, NULL);
replica->applier_sync_state = APPLIER_DISCONNECTED;
latch_create(&replica->order_latch);
return replica;
@@ -248,8 +246,8 @@ replica_set_id(struct replica *replica, uint32_t replica_id)
}
replicaset.replica_by_id[replica_id] = replica;
++replicaset.registered_count;
- say_info("assigned id %d to replica %s",
- replica->id, tt_uuid_str(&replica->uuid));
+ say_info("assigned id %d to replica %s", replica->id,
+ tt_uuid_str(&replica->uuid));
replica->anon = false;
}
@@ -306,8 +304,7 @@ replica_set_applier(struct replica *replica, struct applier *applier)
{
assert(replica->applier == NULL);
replica->applier = applier;
- trigger_add(&replica->applier->on_state,
- &replica->on_applier_state);
+ trigger_add(&replica->applier->on_state, &replica->on_applier_state);
}
void
@@ -449,8 +446,8 @@ static int
replica_on_applier_state_f(struct trigger *trigger, void *event)
{
(void)event;
- struct replica *replica = container_of(trigger,
- struct replica, on_applier_state);
+ struct replica *replica =
+ container_of(trigger, struct replica, on_applier_state);
switch (replica->applier->state) {
case APPLIER_INITIAL_JOIN:
replicaset.is_joining = true;
@@ -508,8 +505,9 @@ replicaset_update(struct applier **appliers, int count)
struct replica *replica, *next;
struct applier *applier;
- auto uniq_guard = make_scoped_guard([&]{
- replica_hash_foreach_safe(&uniq, replica, next) {
+ auto uniq_guard = make_scoped_guard([&] {
+ replica_hash_foreach_safe(&uniq, replica, next)
+ {
replica_hash_remove(&uniq, replica);
replica_clear_applier(replica);
replica_delete(replica);
@@ -555,8 +553,8 @@ replicaset_update(struct applier **appliers, int count)
/* Prune old appliers */
while (!rlist_empty(&replicaset.anon)) {
- replica = rlist_first_entry(&replicaset.anon,
- typeof(*replica), in_anon);
+ replica = rlist_first_entry(&replicaset.anon, typeof(*replica),
+ in_anon);
applier = replica->applier;
replica_clear_applier(replica);
rlist_del_entry(replica, in_anon);
@@ -564,7 +562,8 @@ replicaset_update(struct applier **appliers, int count)
applier_stop(applier);
applier_delete(applier);
}
- replicaset_foreach(replica) {
+ replicaset_foreach(replica)
+ {
if (replica->applier == NULL)
continue;
applier = replica->applier;
@@ -580,11 +579,12 @@ replicaset_update(struct applier **appliers, int count)
replicaset.applier.loading = 0;
replicaset.applier.synced = 0;
- replica_hash_foreach_safe(&uniq, replica, next) {
+ replica_hash_foreach_safe(&uniq, replica, next)
+ {
replica_hash_remove(&uniq, replica);
- struct replica *orig = replica_hash_search(&replicaset.hash,
- replica);
+ struct replica *orig =
+ replica_hash_search(&replicaset.hash, replica);
if (orig != NULL) {
/* Use existing struct replica */
replica_set_applier(orig, replica->applier);
@@ -603,7 +603,8 @@ replicaset_update(struct applier **appliers, int count)
rlist_swap(&replicaset.anon, &anon_replicas);
assert(replica_hash_first(&uniq) == NULL);
- replica_hash_foreach_safe(&replicaset.hash, replica, next) {
+ replica_hash_foreach_safe(&replicaset.hash, replica, next)
+ {
if (replica_is_orphan(replica)) {
replica_hash_remove(&replicaset.hash, replica);
replicaset.anon_count -= replica->anon;
@@ -633,8 +634,8 @@ struct applier_on_connect {
static int
applier_on_connect_f(struct trigger *trigger, void *event)
{
- struct applier_on_connect *on_connect = container_of(trigger,
- struct applier_on_connect, base);
+ struct applier_on_connect *on_connect =
+ container_of(trigger, struct applier_on_connect, base);
struct replicaset_connect_state *state = on_connect->state;
struct applier *applier = (struct applier *)event;
@@ -655,8 +656,7 @@ applier_on_connect_f(struct trigger *trigger, void *event)
}
void
-replicaset_connect(struct applier **appliers, int count,
- bool connect_quorum)
+replicaset_connect(struct applier **appliers, int count, bool connect_quorum)
{
if (count == 0) {
/* Cleanup the replica set. */
@@ -707,7 +707,8 @@ replicaset_connect(struct applier **appliers, int count,
struct applier *applier = appliers[i];
struct applier_on_connect *trigger = &triggers[i];
/* Register a trigger to wake us up when peer is connected */
- trigger_create(&trigger->base, applier_on_connect_f, NULL, NULL);
+ trigger_create(&trigger->base, applier_on_connect_f, NULL,
+ NULL);
trigger->state = &state;
trigger_add(&applier->on_state, &trigger->base);
/* Start background connection */
@@ -782,19 +783,20 @@ bool
replicaset_needs_rejoin(struct replica **master)
{
struct replica *leader = NULL;
- replicaset_foreach(replica) {
+ replicaset_foreach(replica)
+ {
struct applier *applier = replica->applier;
/*
* Skip the local instance, we shouldn't perform a
* check against our own gc vclock.
*/
- if (applier == NULL || tt_uuid_is_equal(&replica->uuid,
- &INSTANCE_UUID))
+ if (applier == NULL ||
+ tt_uuid_is_equal(&replica->uuid, &INSTANCE_UUID))
continue;
const struct ballot *ballot = &applier->ballot;
- if (vclock_compare(&ballot->gc_vclock,
- &replicaset.vclock) <= 0) {
+ if (vclock_compare(&ballot->gc_vclock, &replicaset.vclock) <=
+ 0) {
/*
* There's at least one master that still stores
* WALs needed by this instance. Proceed to local
@@ -804,11 +806,14 @@ replicaset_needs_rejoin(struct replica **master)
}
const char *uuid_str = tt_uuid_str(&replica->uuid);
- const char *addr_str = sio_strfaddr(&applier->addr,
- applier->addr_len);
- const char *local_vclock_str = vclock_to_string(&replicaset.vclock);
- const char *remote_vclock_str = vclock_to_string(&ballot->vclock);
- const char *gc_vclock_str = vclock_to_string(&ballot->gc_vclock);
+ const char *addr_str =
+ sio_strfaddr(&applier->addr, applier->addr_len);
+ const char *local_vclock_str =
+ vclock_to_string(&replicaset.vclock);
+ const char *remote_vclock_str =
+ vclock_to_string(&ballot->vclock);
+ const char *gc_vclock_str =
+ vclock_to_string(&ballot->gc_vclock);
say_info("can't follow %s at %s: required %s available %s",
uuid_str, addr_str, local_vclock_str, gc_vclock_str);
@@ -829,7 +834,7 @@ replicaset_needs_rejoin(struct replica **master)
/* Prefer a master with the max vclock. */
if (leader == NULL ||
vclock_sum(&ballot->vclock) >
- vclock_sum(&leader->applier->ballot.vclock))
+ vclock_sum(&leader->applier->ballot.vclock))
leader = replica;
}
if (leader == NULL)
@@ -843,7 +848,8 @@ void
replicaset_follow(void)
{
struct replica *replica;
- replicaset_foreach(replica) {
+ replicaset_foreach(replica)
+ {
/* Resume connected appliers. */
if (replica->applier != NULL)
applier_resume(replica->applier);
@@ -877,8 +883,8 @@ replicaset_sync(void)
*/
double deadline = ev_monotonic_now(loop()) + replication_sync_timeout;
while (replicaset.applier.synced < quorum &&
- replicaset.applier.connected +
- replicaset.applier.loading >= quorum) {
+ replicaset.applier.connected + replicaset.applier.loading >=
+ quorum) {
if (fiber_cond_wait_deadline(&replicaset.applier.cond,
deadline) != 0)
break;
@@ -957,7 +963,8 @@ static struct replica *
replicaset_round(bool skip_ro)
{
struct replica *leader = NULL;
- replicaset_foreach(replica) {
+ replicaset_foreach(replica)
+ {
struct applier *applier = replica->applier;
if (applier == NULL)
continue;
@@ -978,7 +985,8 @@ replicaset_round(bool skip_ro)
* Try to find a replica which has already left
* orphan mode.
*/
- if (applier->ballot.is_loading && ! leader->applier->ballot.is_loading)
+ if (applier->ballot.is_loading &&
+ !leader->applier->ballot.is_loading)
continue;
/*
* Choose the replica with the most advanced
@@ -986,12 +994,13 @@ replicaset_round(bool skip_ro)
* with the same vclock, prefer the one with
* the lowest uuid.
*/
- int cmp = vclock_compare_ignore0(&applier->ballot.vclock,
- &leader->applier->ballot.vclock);
+ int cmp =
+ vclock_compare_ignore0(&applier->ballot.vclock,
+ &leader->applier->ballot.vclock);
if (cmp < 0)
continue;
- if (cmp == 0 && tt_uuid_compare(&replica->uuid,
- &leader->uuid) > 0)
+ if (cmp == 0 &&
+ tt_uuid_compare(&replica->uuid, &leader->uuid) > 0)
continue;
leader = replica;
}
diff --git a/src/box/replication.h b/src/box/replication.h
index 3e46c59..3282b76 100644
--- a/src/box/replication.h
+++ b/src/box/replication.h
@@ -357,9 +357,9 @@ replicaset_first(void);
struct replica *
replicaset_next(struct replica *replica);
-#define replicaset_foreach(var) \
- for (struct replica *var = replicaset_first(); \
- var != NULL; var = replicaset_next(var))
+#define replicaset_foreach(var) \
+ for (struct replica *var = replicaset_first(); var != NULL; \
+ var = replicaset_next(var))
/**
* Set numeric replica-set-local id of remote replica.
@@ -380,7 +380,7 @@ void
replica_clear_applier(struct replica *replica);
void
-replica_set_applier(struct replica * replica, struct applier * applier);
+replica_set_applier(struct replica *replica, struct applier *applier);
/**
* Unregister \a relay from the \a replica.
@@ -421,8 +421,7 @@ replicaset_add_anon(const struct tt_uuid *replica_uuid);
* appliers have successfully connected.
*/
void
-replicaset_connect(struct applier **appliers, int count,
- bool connect_quorum);
+replicaset_connect(struct applier **appliers, int count, bool connect_quorum);
/**
* Check if the current instance fell too much behind its
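The realigned `replicaset_foreach` macro above is a for-based iteration macro that declares its cursor variable in the loop's own scope, so callers write `replicaset_foreach(replica) { ... }` with no prior declaration. A toy version of the same pattern over a plain linked list (not Tarantool's replica hash):

```c
#include <assert.h>
#include <stddef.h>

struct node {
	int value;
	struct node *next;
};

static struct node n2 = { 2, NULL };
static struct node n1 = { 1, &n2 };
static struct node *list_head = &n1;

static struct node *
list_first(void)
{
	return list_head;
}

static struct node *
list_next(struct node *n)
{
	return n->next;
}

/* The cursor `var` lives only inside the for statement. */
#define list_foreach(var) \
	for (struct node *var = list_first(); var != NULL; \
	     var = list_next(var))

int
list_sum(void)
{
	int sum = 0;
	list_foreach(n)
		sum += n->value;
	return sum;
}
```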
diff --git a/src/box/request.c b/src/box/request.c
index 994f2da..f47cd1b 100644
--- a/src/box/request.c
+++ b/src/box/request.c
@@ -91,9 +91,9 @@ request_create_from_tuple(struct request *request, struct space *space,
if (new_tuple == NULL) {
uint32_t size, key_size;
const char *data = tuple_data_range(old_tuple, &size);
- request->key = tuple_extract_key_raw(data, data + size,
- space->index[0]->def->key_def, MULTIKEY_NONE,
- &key_size);
+ request->key = tuple_extract_key_raw(
+ data, data + size, space->index[0]->def->key_def,
+ MULTIKEY_NONE, &key_size);
if (request->key == NULL)
return -1;
request->key_end = request->key + key_size;
@@ -151,8 +151,7 @@ request_handle_sequence(struct request *request, struct space *space)
* An automatically generated sequence inherits
* privileges of the space it is used with.
*/
- if (!seq->is_generated &&
- access_check_sequence(seq) != 0)
+ if (!seq->is_generated && access_check_sequence(seq) != 0)
return -1;
struct index *pk = space_index(space, 0);
@@ -199,10 +198,11 @@ request_handle_sequence(struct request *request, struct space *space)
mp_decode_nil(&key_end);
size_t buf_size = (request->tuple_end - request->tuple) +
- mp_sizeof_uint(UINT64_MAX);
+ mp_sizeof_uint(UINT64_MAX);
char *tuple = region_alloc(&fiber()->gc, buf_size);
if (tuple == NULL) {
- diag_set(OutOfMemory, buf_size, "region_alloc", "tuple");
+ diag_set(OutOfMemory, buf_size, "region_alloc",
+ "tuple");
return -1;
}
char *tuple_end = mp_encode_array(tuple, len);
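[Editor's note for reviewers: the rewrapping visible in these hunks (80-column reflow, right-aligned pointers like `struct replica *replica`, no space after casts, return types broken onto their own line as in `struct space *` / `schema_space(uint32_t id);`) is consistent with a style file along the following lines. This is only a hedged sketch to help read the diff; the actual 125-line src/box/.clang-format is introduced in patch 2/3 and may differ in details.]

```yaml
# Hypothetical sketch of the style driving this reformat (not the real file).
BasedOnStyle: LLVM
ColumnLimit: 80                  # lines rewrapped at 80 columns
UseTab: Always                   # kernel-style tabs
IndentWidth: 8
ContinuationIndentWidth: 8       # wrapped arguments indented one tab
BreakBeforeBraces: Linux         # function braces on their own line
AlwaysBreakAfterReturnType: All  # 'struct space *' above 'schema_space(...)'
PointerAlignment: Right          # 'struct replica *replica'
SpaceAfterCStyleCast: false      # '(struct space *)mh_...' with no gap
```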
diff --git a/src/box/schema.cc b/src/box/schema.cc
index 60e4a7f..a8d5425 100644
--- a/src/box/schema.cc
+++ b/src/box/schema.cc
@@ -81,7 +81,7 @@ bool
space_is_system(struct space *space)
{
return space->def->id > BOX_SYSTEM_ID_MIN &&
- space->def->id < BOX_SYSTEM_ID_MAX;
+ space->def->id < BOX_SYSTEM_ID_MAX;
}
/** Return space by its number */
@@ -91,15 +91,15 @@ space_by_id(uint32_t id)
mh_int_t space = mh_i32ptr_find(spaces, id, NULL);
if (space == mh_end(spaces))
return NULL;
- return (struct space *) mh_i32ptr_node(spaces, space)->val;
+ return (struct space *)mh_i32ptr_node(spaces, space)->val;
}
/** Return space by its name */
struct space *
space_by_name(const char *name)
{
- mh_int_t space = mh_strnptr_find_inp(spaces_by_name, name,
- strlen(name));
+ mh_int_t space =
+ mh_strnptr_find_inp(spaces_by_name, name, strlen(name));
if (space == mh_end(spaces_by_name))
return NULL;
return (struct space *)mh_strnptr_node(spaces_by_name, space)->val;
@@ -133,20 +133,21 @@ space_foreach(int (*func)(struct space *sp, void *udata), void *udata)
space = space_by_id(BOX_SPACE_ID);
struct index *pk = space ? space_index(space, 0) : NULL;
if (pk) {
- struct iterator *it = index_create_iterator(pk, ITER_GE,
- key, 1);
+ struct iterator *it =
+ index_create_iterator(pk, ITER_GE, key, 1);
if (it == NULL)
return -1;
int rc;
struct tuple *tuple;
while ((rc = iterator_next(it, &tuple)) == 0 && tuple != NULL) {
uint32_t id;
- if (tuple_field_u32(tuple, BOX_SPACE_FIELD_ID, &id) != 0)
+ if (tuple_field_u32(tuple, BOX_SPACE_FIELD_ID, &id) !=
+ 0)
continue;
space = space_cache_find(id);
if (space == NULL)
continue;
- if (! space_is_system(space))
+ if (!space_is_system(space))
break;
rc = func(space, udata);
if (rc != 0)
@@ -157,8 +158,9 @@ space_foreach(int (*func)(struct space *sp, void *udata), void *udata)
return -1;
}
- mh_foreach(spaces, i) {
- space = (struct space *) mh_i32ptr_node(spaces, i)->val;
+ mh_foreach(spaces, i)
+ {
+ space = (struct space *)mh_i32ptr_node(spaces, i)->val;
if (space_is_system(space))
continue;
if (func(space, udata) != 0)
@@ -180,14 +182,15 @@ space_cache_replace(struct space *old_space, struct space *new_space)
* don't need to do so for @spaces cache.
*/
struct space *old_space_by_name = NULL;
- if (old_space != NULL && strcmp(space_name(old_space),
- space_name(new_space)) != 0) {
+ if (old_space != NULL &&
+ strcmp(space_name(old_space), space_name(new_space)) != 0) {
const char *name = space_name(old_space);
mh_int_t k = mh_strnptr_find_inp(spaces_by_name, name,
strlen(name));
assert(k != mh_end(spaces_by_name));
- old_space_by_name = (struct space *)
- mh_strnptr_node(spaces_by_name, k)->val;
+ old_space_by_name = (struct space *)mh_strnptr_node(
+ spaces_by_name, k)
+ ->val;
mh_strnptr_del(spaces_by_name, k, NULL);
}
/*
@@ -202,8 +205,8 @@ space_cache_replace(struct space *old_space, struct space *new_space)
panic_syserror("Out of memory for the data "
"dictionary cache.");
}
- struct space *old_space_by_id = p_old != NULL ?
- (struct space *)p_old->val : NULL;
+ struct space *old_space_by_id =
+ p_old != NULL ? (struct space *)p_old->val : NULL;
assert(old_space_by_id == old_space);
(void)old_space_by_id;
/*
@@ -213,7 +216,8 @@ space_cache_replace(struct space *old_space, struct space *new_space)
uint32_t name_len = strlen(name);
uint32_t name_hash = mh_strn_hash(name, name_len);
const struct mh_strnptr_node_t node_s = { name, name_len,
- name_hash, new_space };
+ name_hash,
+ new_space };
struct mh_strnptr_node_t old_s, *p_old_s = &old_s;
k = mh_strnptr_put(spaces_by_name, &node_s, &p_old_s, NULL);
if (k == mh_end(spaces_by_name)) {
@@ -249,8 +253,8 @@ space_cache_replace(struct space *old_space, struct space *new_space)
}
space_cache_version++;
- if (trigger_run(&on_alter_space, new_space != NULL ?
- new_space : old_space) != 0) {
+ if (trigger_run(&on_alter_space,
+ new_space != NULL ? new_space : old_space) != 0) {
diag_log();
panic("Can't update space cache");
}
@@ -261,23 +265,19 @@ space_cache_replace(struct space *old_space, struct space *new_space)
/** A wrapper around space_new() for data dictionary spaces. */
static void
-sc_space_new(uint32_t id, const char *name,
- struct key_part_def *key_parts,
- uint32_t key_part_count,
- struct trigger *replace_trigger)
+sc_space_new(uint32_t id, const char *name, struct key_part_def *key_parts,
+ uint32_t key_part_count, struct trigger *replace_trigger)
{
struct key_def *key_def = key_def_new(key_parts, key_part_count, false);
if (key_def == NULL)
diag_raise();
auto key_def_guard =
make_scoped_guard([=] { key_def_delete(key_def); });
- struct index_def *index_def = index_def_new(id, /* space id */
- 0 /* index id */,
- "primary", /* name */
- strlen("primary"),
- TREE /* index type */,
- &index_opts_default,
- key_def, NULL);
+ struct index_def *index_def =
+ index_def_new(id, /* space id */
+ 0 /* index id */, "primary", /* name */
+ strlen("primary"), TREE /* index type */,
+ &index_opts_default, key_def, NULL);
if (index_def == NULL)
diag_raise();
auto index_def_guard =
@@ -307,8 +307,8 @@ sc_space_new(uint32_t id, const char *name,
}
int
-schema_find_id(uint32_t system_space_id, uint32_t index_id,
- const char *name, uint32_t len, uint32_t *object_id)
+schema_find_id(uint32_t system_space_id, uint32_t index_id, const char *name,
+ uint32_t len, uint32_t *object_id)
{
if (len > BOX_NAME_MAX) {
*object_id = BOX_ID_NIL;
@@ -318,8 +318,8 @@ schema_find_id(uint32_t system_space_id, uint32_t index_id,
if (space == NULL)
return -1;
if (!space_is_memtx(space)) {
- diag_set(ClientError, ER_UNSUPPORTED,
- space->engine->name, "system data");
+ diag_set(ClientError, ER_UNSUPPORTED, space->engine->name,
+ "system data");
return -1;
}
struct index *index = index_find(space, index_id);
@@ -492,9 +492,8 @@ schema_init(void)
def = space_def_new_xc(BOX_VINYL_DEFERRED_DELETE_ID, ADMIN, 0,
name, strlen(name), engine,
strlen(engine), &opts, NULL, 0);
- auto def_guard = make_scoped_guard([=] {
- space_def_delete(def);
- });
+ auto def_guard =
+ make_scoped_guard([=] { space_def_delete(def); });
RLIST_HEAD(key_list);
struct space *space = space_new_xc(def, &key_list);
space_cache_replace(NULL, space);
@@ -516,8 +515,8 @@ schema_free(void)
while (mh_size(spaces) > 0) {
mh_int_t i = mh_first(spaces);
- struct space *space = (struct space *)
- mh_i32ptr_node(spaces, i)->val;
+ struct space *space =
+ (struct space *)mh_i32ptr_node(spaces, i)->val;
space_cache_replace(space, NULL);
space_delete(space);
}
@@ -526,8 +525,8 @@ schema_free(void)
while (mh_size(funcs) > 0) {
mh_int_t i = mh_first(funcs);
- struct func *func = ((struct func *)
- mh_i32ptr_node(funcs, i)->val);
+ struct func *func =
+ ((struct func *)mh_i32ptr_node(funcs, i)->val);
func_cache_delete(func->def->fid);
func_delete(func);
}
@@ -535,8 +534,8 @@ schema_free(void)
while (mh_size(sequences) > 0) {
mh_int_t i = mh_first(sequences);
- struct sequence *seq = ((struct sequence *)
- mh_i32ptr_node(sequences, i)->val);
+ struct sequence *seq =
+ ((struct sequence *)mh_i32ptr_node(sequences, i)->val);
sequence_cache_delete(seq->def->id);
}
mh_i32ptr_delete(sequences);
@@ -550,14 +549,15 @@ func_cache_insert(struct func *func)
const struct mh_i32ptr_node_t node = { func->def->fid, func };
mh_int_t k1 = mh_i32ptr_put(funcs, &node, NULL, NULL);
if (k1 == mh_end(funcs)) {
-error:
+ error:
panic_syserror("Out of memory for the data "
"dictionary cache (stored function).");
}
size_t def_name_len = strlen(func->def->name);
uint32_t name_hash = mh_strn_hash(func->def->name, def_name_len);
- const struct mh_strnptr_node_t strnode = {
- func->def->name, def_name_len, name_hash, func };
+ const struct mh_strnptr_node_t strnode = { func->def->name,
+ def_name_len, name_hash,
+ func };
mh_int_t k2 = mh_strnptr_put(funcs_by_name, &strnode, NULL, NULL);
if (k2 == mh_end(funcs_by_name)) {
mh_i32ptr_del(funcs, k1, NULL);
@@ -571,8 +571,7 @@ func_cache_delete(uint32_t fid)
mh_int_t k = mh_i32ptr_find(funcs, fid, NULL);
if (k == mh_end(funcs))
return;
- struct func *func = (struct func *)
- mh_i32ptr_node(funcs, k)->val;
+ struct func *func = (struct func *)mh_i32ptr_node(funcs, k)->val;
mh_i32ptr_del(funcs, k, NULL);
k = mh_strnptr_find_inp(funcs_by_name, func->def->name,
strlen(func->def->name));
@@ -586,7 +585,7 @@ func_by_id(uint32_t fid)
mh_int_t func = mh_i32ptr_find(funcs, fid, NULL);
if (func == mh_end(funcs))
return NULL;
- return (struct func *) mh_i32ptr_node(funcs, func)->val;
+ return (struct func *)mh_i32ptr_node(funcs, func)->val;
}
struct func *
@@ -595,7 +594,7 @@ func_by_name(const char *name, uint32_t name_len)
mh_int_t func = mh_strnptr_find_inp(funcs_by_name, name, name_len);
if (func == mh_end(funcs_by_name))
return NULL;
- return (struct func *) mh_strnptr_node(funcs_by_name, func)->val;
+ return (struct func *)mh_strnptr_node(funcs_by_name, func)->val;
}
int
@@ -607,8 +606,8 @@ schema_find_grants(const char *type, uint32_t id, bool *out)
/** "object" index */
if (!space_is_memtx(priv)) {
- diag_set(ClientError, ER_UNSUPPORTED,
- priv->engine->name, "system data");
+ diag_set(ClientError, ER_UNSUPPORTED, priv->engine->name,
+ "system data");
return -1;
}
struct index *index = index_find(priv, 2);
@@ -638,7 +637,7 @@ sequence_by_id(uint32_t id)
mh_int_t k = mh_i32ptr_find(sequences, id, NULL);
if (k == mh_end(sequences))
return NULL;
- return (struct sequence *) mh_i32ptr_node(sequences, k)->val;
+ return (struct sequence *)mh_i32ptr_node(sequences, k)->val;
}
struct sequence *
@@ -682,39 +681,34 @@ schema_find_name(enum schema_object_type type, uint32_t object_id)
case SC_ENTITY_ROLE:
case SC_ENTITY_USER:
return "";
- case SC_SPACE:
- {
- struct space *space = space_by_id(object_id);
- if (space == NULL)
- break;
- return space->def->name;
- }
- case SC_FUNCTION:
- {
- struct func *func = func_by_id(object_id);
- if (func == NULL)
- break;
- return func->def->name;
- }
- case SC_SEQUENCE:
- {
- struct sequence *seq = sequence_by_id(object_id);
- if (seq == NULL)
- break;
- return seq->def->name;
- }
+ case SC_SPACE: {
+ struct space *space = space_by_id(object_id);
+ if (space == NULL)
+ break;
+ return space->def->name;
+ }
+ case SC_FUNCTION: {
+ struct func *func = func_by_id(object_id);
+ if (func == NULL)
+ break;
+ return func->def->name;
+ }
+ case SC_SEQUENCE: {
+ struct sequence *seq = sequence_by_id(object_id);
+ if (seq == NULL)
+ break;
+ return seq->def->name;
+ }
case SC_ROLE:
- case SC_USER:
- {
- struct user *role = user_by_id(object_id);
- if (role == NULL)
- break;
- return role->def->name;
- }
+ case SC_USER: {
+ struct user *role = user_by_id(object_id);
+ if (role == NULL)
+ break;
+ return role->def->name;
+ }
default:
break;
}
assert(false);
return "(nil)";
}
-
diff --git a/src/box/schema.h b/src/box/schema.h
index 25ac6f1..6b22039 100644
--- a/src/box/schema.h
+++ b/src/box/schema.h
@@ -169,8 +169,8 @@ schema_init(void);
void
schema_free(void);
-struct space *schema_space(uint32_t id);
-
+struct space *
+schema_space(uint32_t id);
/**
* Check whether or not an object has grants on it (restrict
@@ -241,18 +241,17 @@ struct on_access_denied_ctx {
/** Global grants to classes of objects. */
struct entity_access {
- struct access space[BOX_USER_MAX];
- struct access function[BOX_USER_MAX];
- struct access user[BOX_USER_MAX];
- struct access role[BOX_USER_MAX];
- struct access sequence[BOX_USER_MAX];
+ struct access space[BOX_USER_MAX];
+ struct access function[BOX_USER_MAX];
+ struct access user[BOX_USER_MAX];
+ struct access role[BOX_USER_MAX];
+ struct access sequence[BOX_USER_MAX];
};
/** A single instance of the global entities. */
extern struct entity_access entity_access;
-static inline
-struct access *
+static inline struct access *
entity_access_get(enum schema_object_type type)
{
switch (type) {
diff --git a/src/box/schema_def.c b/src/box/schema_def.c
index b974703..4f315c6 100644
--- a/src/box/schema_def.c
+++ b/src/box/schema_def.c
@@ -75,14 +75,14 @@ schema_object_type(const char *name)
* name, and they are case-sensitive, so be case-sensitive
* here too.
*/
- int n_strs = sizeof(object_type_strs)/sizeof(*object_type_strs);
+ int n_strs = sizeof(object_type_strs) / sizeof(*object_type_strs);
int index = strindex(object_type_strs, name, n_strs);
- return (enum schema_object_type) (index == n_strs ? 0 : index);
+ return (enum schema_object_type)(index == n_strs ? 0 : index);
}
const char *
schema_object_name(enum schema_object_type type)
{
- assert((int) type < (int) schema_object_type_MAX);
+ assert((int)type < (int)schema_object_type_MAX);
return object_type_strs[type];
}
diff --git a/src/box/schema_def.h b/src/box/schema_def.h
index f86cd42..9dedf73 100644
--- a/src/box/schema_def.h
+++ b/src/box/schema_def.h
@@ -316,9 +316,9 @@ enum schema_object_type {
/** SQL Storage engine. */
enum sql_storage_engine {
- SQL_STORAGE_ENGINE_MEMTX = 0,
- SQL_STORAGE_ENGINE_VINYL = 1,
- sql_storage_engine_MAX = 2
+ SQL_STORAGE_ENGINE_MEMTX = 0,
+ SQL_STORAGE_ENGINE_VINYL = 1,
+ sql_storage_engine_MAX = 2
};
extern const char *sql_storage_engine_strs[];
diff --git a/src/box/sequence.c b/src/box/sequence.c
index 4afbc26..4bdc090 100644
--- a/src/box/sequence.c
+++ b/src/box/sequence.c
@@ -92,8 +92,8 @@ sequence_data_extent_alloc(void *ctx)
(void)ctx;
void *ret = mempool_alloc(&sequence_data_extent_pool);
if (ret == NULL)
- diag_set(OutOfMemory, SEQUENCE_DATA_EXTENT_SIZE,
- "mempool", "sequence_data_extent");
+ diag_set(OutOfMemory, SEQUENCE_DATA_EXTENT_SIZE, "mempool",
+ "sequence_data_extent");
return ret;
}
@@ -166,11 +166,11 @@ sequence_set(struct sequence *seq, int64_t value)
struct sequence_data new_data, old_data;
new_data.id = key;
new_data.value = value;
- if (light_sequence_replace(&sequence_data_index, hash,
- new_data, &old_data) != light_sequence_end)
+ if (light_sequence_replace(&sequence_data_index, hash, new_data,
+ &old_data) != light_sequence_end)
return 0;
- if (light_sequence_insert(&sequence_data_index, hash,
- new_data) != light_sequence_end)
+ if (light_sequence_insert(&sequence_data_index, hash, new_data) !=
+ light_sequence_end)
return 0;
return -1;
}
@@ -189,7 +189,8 @@ sequence_update(struct sequence *seq, int64_t value)
if ((seq->def->step > 0 && value > data.value) ||
(seq->def->step < 0 && value < data.value)) {
if (light_sequence_replace(&sequence_data_index, hash,
- new_data, &data) == light_sequence_end)
+ new_data,
+ &data) == light_sequence_end)
unreachable();
}
} else {
@@ -246,8 +247,8 @@ done:
assert(value >= def->min && value <= def->max);
new_data.id = key;
new_data.value = value;
- if (light_sequence_replace(&sequence_data_index, hash,
- new_data, &old_data) == light_sequence_end)
+ if (light_sequence_replace(&sequence_data_index, hash, new_data,
+ &old_data) == light_sequence_end)
unreachable();
*result = value;
return 0;
@@ -272,25 +273,23 @@ access_check_sequence(struct sequence *seq)
user_access_t access = PRIV_U | PRIV_W;
user_access_t sequence_access = access & ~cr->universal_access;
- sequence_access &= ~entity_access_get(SC_SEQUENCE)[cr->auth_token].effective;
+ sequence_access &=
+ ~entity_access_get(SC_SEQUENCE)[cr->auth_token].effective;
if (sequence_access &&
/* Check for missing Usage access, ignore owner rights. */
(sequence_access & PRIV_U ||
/* Check for missing specific access, respect owner rights. */
(seq->def->uid != cr->uid &&
sequence_access & ~seq->access[cr->auth_token].effective))) {
-
/* Access violation, report error. */
struct user *user = user_find(cr->uid);
if (user != NULL) {
if (!(cr->universal_access & PRIV_U)) {
- diag_set(AccessDeniedError,
- priv_name(PRIV_U),
+ diag_set(AccessDeniedError, priv_name(PRIV_U),
schema_object_name(SC_UNIVERSE), "",
user->def->name);
} else {
- diag_set(AccessDeniedError,
- priv_name(access),
+ diag_set(AccessDeniedError, priv_name(access),
schema_object_name(SC_SEQUENCE),
seq->def->name, user->def->name);
}
@@ -308,19 +307,18 @@ struct sequence_data_iterator {
char tuple[0];
};
-#define SEQUENCE_TUPLE_BUF_SIZE (mp_sizeof_array(2) + \
- 2 * mp_sizeof_uint(UINT64_MAX))
+#define SEQUENCE_TUPLE_BUF_SIZE \
+ (mp_sizeof_array(2) + 2 * mp_sizeof_uint(UINT64_MAX))
static int
-sequence_data_iterator_next(struct snapshot_iterator *base,
- const char **data, uint32_t *size)
+sequence_data_iterator_next(struct snapshot_iterator *base, const char **data,
+ uint32_t *size)
{
struct sequence_data_iterator *iter =
(struct sequence_data_iterator *)base;
- struct sequence_data *sd =
- light_sequence_iterator_get_and_next(&sequence_data_index,
- &iter->iter);
+ struct sequence_data *sd = light_sequence_iterator_get_and_next(
+ &sequence_data_index, &iter->iter);
if (sd == NULL) {
*data = NULL;
return 0;
@@ -329,9 +327,8 @@ sequence_data_iterator_next(struct snapshot_iterator *base,
char *buf_end = iter->tuple;
buf_end = mp_encode_array(buf_end, 2);
buf_end = mp_encode_uint(buf_end, sd->id);
- buf_end = (sd->value >= 0 ?
- mp_encode_uint(buf_end, sd->value) :
- mp_encode_int(buf_end, sd->value));
+ buf_end = (sd->value >= 0 ? mp_encode_uint(buf_end, sd->value) :
+ mp_encode_int(buf_end, sd->value));
assert(buf_end <= iter->tuple + SEQUENCE_TUPLE_BUF_SIZE);
*data = iter->tuple;
*size = buf_end - iter->tuple;
@@ -351,8 +348,8 @@ sequence_data_iterator_free(struct snapshot_iterator *base)
struct snapshot_iterator *
sequence_data_iterator_create(void)
{
- struct sequence_data_iterator *iter = calloc(1, sizeof(*iter) +
- SEQUENCE_TUPLE_BUF_SIZE);
+ struct sequence_data_iterator *iter =
+ calloc(1, sizeof(*iter) + SEQUENCE_TUPLE_BUF_SIZE);
if (iter == NULL) {
diag_set(OutOfMemory, sizeof(*iter) + SEQUENCE_TUPLE_BUF_SIZE,
"malloc", "sequence_data_iterator");
@@ -377,8 +374,8 @@ sequence_get_value(struct sequence *seq, int64_t *result)
diag_set(ClientError, ER_SEQUENCE_NOT_STARTED, seq->def->name);
return -1;
}
- struct sequence_data data = light_sequence_get(&sequence_data_index,
- pos);
+ struct sequence_data data =
+ light_sequence_get(&sequence_data_index, pos);
*result = data.value;
return 0;
}
diff --git a/src/box/service_engine.c b/src/box/service_engine.c
index 5a33a73..e59fc1a 100644
--- a/src/box/service_engine.c
+++ b/src/box/service_engine.c
@@ -66,12 +66,10 @@ service_engine_create_space(struct engine *engine, struct space_def *def,
free(space);
return NULL;
}
- struct tuple_format *format =
- tuple_format_new(&tuple_format_runtime->vtab, NULL, keys,
- key_count, def->fields, def->field_count,
- def->exact_field_count, def->dict,
- def->opts.is_temporary,
- def->opts.is_ephemeral);
+ struct tuple_format *format = tuple_format_new(
+ &tuple_format_runtime->vtab, NULL, keys, key_count, def->fields,
+ def->field_count, def->exact_field_count, def->dict,
+ def->opts.is_temporary, def->opts.is_ephemeral);
if (format == NULL) {
free(space);
return NULL;
diff --git a/src/box/session.cc b/src/box/session.cc
index 7ba7235..98714f5 100644
--- a/src/box/session.cc
+++ b/src/box/session.cc
@@ -39,12 +39,7 @@
#include "sql_stmt_cache.h"
const char *session_type_strs[] = {
- "background",
- "binary",
- "console",
- "repl",
- "applier",
- "unknown",
+ "background", "binary", "console", "repl", "applier", "unknown",
};
static struct session_vtab generic_session_vtab = {
@@ -98,8 +93,8 @@ session_on_stop(struct trigger *trigger, void * /* event */)
static int
closed_session_push(struct session *session, struct port *port)
{
- (void) session;
- (void) port;
+ (void)session;
+ (void)port;
diag_set(ClientError, ER_SESSION_CLOSED);
return -1;
}
@@ -129,7 +124,7 @@ struct session *
session_create(enum session_type type)
{
struct session *session =
- (struct session *) mempool_alloc(&session_pool);
+ (struct session *)mempool_alloc(&session_pool);
if (session == NULL) {
diag_set(OutOfMemory, session_pool.objsize, "mempool",
"new slab");
@@ -168,9 +163,8 @@ session_create_on_demand(void)
struct session *s = session_create(SESSION_TYPE_BACKGROUND);
if (s == NULL)
return NULL;
- s->fiber_on_stop = {
- RLIST_LINK_INITIALIZER, session_on_stop, NULL, NULL
- };
+ s->fiber_on_stop = { RLIST_LINK_INITIALIZER, session_on_stop, NULL,
+ NULL };
/* Add a trigger to destroy session on fiber stop */
trigger_add(&fiber()->on_stop, &s->fiber_on_stop);
credentials_reset(&s->credentials, admin_user);
@@ -270,8 +264,7 @@ session_find(uint64_t sid)
mh_int_t k = mh_i64ptr_find(session_registry, sid, NULL);
if (k == mh_end(session_registry))
return NULL;
- return (struct session *)
- mh_i64ptr_node(session_registry, k)->val;
+ return (struct session *)mh_i64ptr_node(session_registry, k)->val;
}
extern "C" void
@@ -305,8 +298,7 @@ access_check_session(struct user *user)
*/
if (!(universe.access[user->auth_token].effective & PRIV_S)) {
diag_set(AccessDeniedError, priv_name(PRIV_S),
- schema_object_name(SC_UNIVERSE), "",
- user->def->name);
+ schema_object_name(SC_UNIVERSE), "", user->def->name);
return -1;
}
return 0;
@@ -325,12 +317,12 @@ access_check_universe_object(user_access_t access,
* The user may not exist already, if deleted
* from a different connection.
*/
- int denied_access = access & ((credentials->universal_access
- & access) ^ access);
+ int denied_access =
+ access &
+ ((credentials->universal_access & access) ^ access);
struct user *user = user_find(credentials->uid);
if (user != NULL) {
- diag_set(AccessDeniedError,
- priv_name(denied_access),
+ diag_set(AccessDeniedError, priv_name(denied_access),
schema_object_name(object_type), object_name,
user->def->name);
} else {
@@ -355,7 +347,7 @@ access_check_universe(user_access_t access)
int
generic_session_push(struct session *session, struct port *port)
{
- (void) port;
+ (void)port;
const char *name =
tt_sprintf("Session '%s'", session_type_strs[session->type]);
diag_set(ClientError, ER_UNSUPPORTED, name, "push()");
@@ -365,13 +357,13 @@ generic_session_push(struct session *session, struct port *port)
int
generic_session_fd(struct session *session)
{
- (void) session;
+ (void)session;
return -1;
}
int64_t
generic_session_sync(struct session *session)
{
- (void) session;
+ (void)session;
return 0;
}
diff --git a/src/box/session.h b/src/box/session.h
index 833a457..69a1e85 100644
--- a/src/box/session.h
+++ b/src/box/session.h
@@ -135,16 +135,14 @@ struct session_vtab {
* @retval 0 Success.
* @retval -1 Error.
*/
- int
- (*push)(struct session *session, struct port *port);
+ int (*push)(struct session *session, struct port *port);
/**
* Get session file descriptor if exists.
* @param session Session to get descriptor from.
* @retval -1 No fd.
* @retval >=0 Found fd.
*/
- int
- (*fd)(struct session *session);
+ int (*fd)(struct session *session);
/**
* For iproto requests, we set sync to the value of packet
* sync. Since the session may be reused between many
@@ -152,8 +150,7 @@ struct session_vtab {
* of the request, and gets distorted after the first
* yield. For other sessions it is 0.
*/
- int64_t
- (*sync)(struct session *session);
+ int64_t (*sync)(struct session *session);
};
extern struct session_vtab session_vtab_registry[];
diff --git a/src/box/session_settings.c b/src/box/session_settings.c
index dbbbf24..5bef498 100644
--- a/src/box/session_settings.c
+++ b/src/box/session_settings.c
@@ -42,16 +42,11 @@ struct session_setting session_settings[SESSION_SETTING_COUNT] = {};
/** Corresponding names of session settings. */
const char *session_setting_strs[SESSION_SETTING_COUNT] = {
- "error_marshaling_enabled",
- "sql_default_engine",
- "sql_defer_foreign_keys",
- "sql_full_column_names",
- "sql_full_metadata",
- "sql_parser_debug",
- "sql_recursive_triggers",
- "sql_reverse_unordered_selects",
- "sql_select_debug",
- "sql_vdbe_debug",
+ "error_marshaling_enabled", "sql_default_engine",
+ "sql_defer_foreign_keys", "sql_full_column_names",
+ "sql_full_metadata", "sql_parser_debug",
+ "sql_recursive_triggers", "sql_reverse_unordered_selects",
+ "sql_select_debug", "sql_vdbe_debug",
};
struct session_settings_index {
@@ -105,8 +100,7 @@ session_settings_next(int *sid, const char *key, bool is_eq, bool is_including)
for (; i < SESSION_SETTING_COUNT; ++i) {
const char *name = session_setting_strs[i];
int cmp = strcmp(name, key);
- if ((cmp == 0 && is_including) ||
- (cmp > 0 && !is_eq)) {
+ if ((cmp == 0 && is_including) || (cmp > 0 && !is_eq)) {
*sid = i;
return 0;
}
@@ -128,8 +122,7 @@ session_settings_prev(int *sid, const char *key, bool is_eq, bool is_including)
for (; i >= 0; --i) {
const char *name = session_setting_strs[i];
int cmp = strcmp(name, key);
- if ((cmp == 0 && is_including) ||
- (cmp < 0 && !is_eq)) {
+ if ((cmp == 0 && is_including) || (cmp < 0 && !is_eq)) {
*sid = i;
return 0;
}
@@ -238,9 +231,9 @@ session_settings_index_get(struct index *base, const char *key,
uint32_t part_count, struct tuple **result)
{
struct session_settings_index *index =
- (struct session_settings_index *) base;
+ (struct session_settings_index *)base;
assert(part_count == 1);
- (void) part_count;
+ (void)part_count;
uint32_t len;
key = mp_decode_str(&key, &len);
key = tt_cstr(key, len);
@@ -265,7 +258,7 @@ static const struct index_vtab session_settings_index_vtab = {
/* .update_def = */ generic_index_update_def,
/* .depends_on_pk = */ generic_index_depends_on_pk,
/* .def_change_requires_rebuild = */
- generic_index_def_change_requires_rebuild,
+ generic_index_def_change_requires_rebuild,
/* .size = */ generic_index_size,
/* .bsize = */ generic_index_bsize,
/* .min = */ generic_index_min,
@@ -276,7 +269,7 @@ static const struct index_vtab session_settings_index_vtab = {
/* .replace = */ generic_index_replace,
/* .create_iterator = */ session_settings_index_create_iterator,
/* .create_snapshot_iterator = */
- generic_index_create_snapshot_iterator,
+ generic_index_create_snapshot_iterator,
/* .stat = */ generic_index_stat,
/* .compact = */ generic_index_compact,
/* .reset_stat = */ generic_index_reset_stat,
@@ -443,7 +436,8 @@ const struct space_vtab session_settings_space_vtab = {
};
int
-session_setting_find(const char *name) {
+session_setting_find(const char *name)
+{
int sid = 0;
if (session_settings_next(&sid, name, true, true) == 0)
return sid;
@@ -465,7 +459,7 @@ session_setting_error_marshaling_enabled_get(int id, const char **mp_pair,
size_t size = mp_sizeof_array(2) + mp_sizeof_str(name_len) +
mp_sizeof_bool(value);
- char *pos = (char*)static_alloc(size);
+ char *pos = (char *)static_alloc(size);
assert(pos != NULL);
char *pos_end = mp_encode_array(pos, 2);
pos_end = mp_encode_str(pos_end, name, name_len);
diff --git a/src/box/space.c b/src/box/space.c
index 6d1d771..54435fd 100644
--- a/src/box/space.c
+++ b/src/box/space.c
@@ -73,8 +73,8 @@ access_check_space(struct space *space, user_access_t access)
/* Check for missing USAGE access, ignore owner rights. */
(space_access & PRIV_U ||
/* Check for missing specific access, respect owner rights. */
- (space->def->uid != cr->uid &&
- space_access & ~space->access[cr->auth_token].effective))) {
+ (space->def->uid != cr->uid &&
+ space_access & ~space->access[cr->auth_token].effective))) {
/*
* Report access violation. Throw "no such user"
* error if there is no user with this id.
@@ -84,13 +84,11 @@ access_check_space(struct space *space, user_access_t access)
struct user *user = user_find(cr->uid);
if (user != NULL) {
if (!(cr->universal_access & PRIV_U)) {
- diag_set(AccessDeniedError,
- priv_name(PRIV_U),
+ diag_set(AccessDeniedError, priv_name(PRIV_U),
schema_object_name(SC_UNIVERSE), "",
user->def->name);
} else {
- diag_set(AccessDeniedError,
- priv_name(access),
+ diag_set(AccessDeniedError, priv_name(access),
schema_object_name(SC_SPACE),
space->def->name, user->def->name);
}
@@ -120,8 +118,8 @@ space_create(struct space *space, struct engine *engine,
{
if (!rlist_empty(key_list)) {
/* Primary key must go first. */
- struct index_def *pk = rlist_first_entry(key_list,
- struct index_def, link);
+ struct index_def *pk =
+ rlist_first_entry(key_list, struct index_def, link);
assert(pk->iid == 0);
(void)pk;
}
@@ -152,11 +150,13 @@ space_create(struct space *space, struct engine *engine,
goto fail;
/* Create indexes and fill the index map. */
- space->index_map = (struct index **)
- calloc(index_count + index_id_max + 1, sizeof(struct index *));
+ space->index_map = (struct index **)calloc(
+ index_count + index_id_max + 1, sizeof(struct index *));
if (space->index_map == NULL) {
- diag_set(OutOfMemory, (index_count + index_id_max + 1) *
- sizeof(struct index *), "malloc", "index_map");
+ diag_set(OutOfMemory,
+ (index_count + index_id_max + 1) *
+ sizeof(struct index *),
+ "malloc", "index_map");
goto fail;
}
space->index = space->index_map + index_id_max + 1;
@@ -195,8 +195,9 @@ space_create(struct space *space, struct engine *engine,
continue;
for (int j = 0; j < (int)space->index_count; j++) {
struct index *other = space->index[j];
- if (i != j && bit_test(space->check_unique_constraint_map,
- other->def->iid) &&
+ if (i != j &&
+ bit_test(space->check_unique_constraint_map,
+ other->def->iid) &&
key_def_contains(index->def->key_def,
other->def->key_def)) {
bit_clear(space->check_unique_constraint_map,
@@ -412,9 +413,9 @@ after_old_tuple_lookup:;
old_data_end = old_data + old_size;
new_data = xrow_update_execute(request->tuple,
request->tuple_end, old_data,
- old_data_end,
- space->format, &new_size,
- request->index_base, NULL);
+ old_data_end, space->format,
+ &new_size, request->index_base,
+ NULL);
if (new_data == NULL)
return -1;
new_data_end = new_data + new_size;
@@ -434,10 +435,9 @@ after_old_tuple_lookup:;
*/
new_data = request->tuple;
new_data_end = request->tuple_end;
- if (xrow_update_check_ops(request->ops,
- request->ops_end,
- space->format,
- request->index_base) != 0)
+ if (xrow_update_check_ops(
+ request->ops, request->ops_end,
+ space->format, request->index_base) != 0)
return -1;
break;
}
@@ -456,8 +456,8 @@ after_old_tuple_lookup:;
struct tuple *new_tuple = NULL;
if (new_data != NULL) {
- new_tuple = tuple_new(tuple_format_runtime,
- new_data, new_data_end);
+ new_tuple =
+ tuple_new(tuple_format_runtime, new_data, new_data_end);
if (new_tuple == NULL)
return -1;
tuple_ref(new_tuple);
@@ -511,12 +511,12 @@ after_old_tuple_lookup:;
* We don't allow to change the value of the primary key
* in the same statement.
*/
- if (pk != NULL && request_changed &&
- old_tuple != NULL && new_tuple != NULL &&
+ if (pk != NULL && request_changed && old_tuple != NULL &&
+ new_tuple != NULL &&
tuple_compare(old_tuple, HINT_NONE, new_tuple, HINT_NONE,
pk->def->key_def) != 0) {
- diag_set(ClientError, ER_CANT_UPDATE_PRIMARY_KEY,
- pk->def->name, space->def->name);
+ diag_set(ClientError, ER_CANT_UPDATE_PRIMARY_KEY, pk->def->name,
+ space->def->name);
rc = -1;
goto out;
}
@@ -526,8 +526,8 @@ after_old_tuple_lookup:;
* Fix the request to conform.
*/
if (request_changed)
- rc = request_create_from_tuple(request, space,
- old_tuple, new_tuple);
+ rc = request_create_from_tuple(request, space, old_tuple,
+ new_tuple);
out:
if (new_tuple != NULL)
tuple_unref(new_tuple);
@@ -535,8 +535,8 @@ out:
}
int
-space_execute_dml(struct space *space, struct txn *txn,
- struct request *request, struct tuple **result)
+space_execute_dml(struct space *space, struct txn *txn, struct request *request,
+ struct tuple **result)
{
if (unlikely(space->sequence != NULL) &&
(request->type == IPROTO_INSERT ||
@@ -565,13 +565,13 @@ space_execute_dml(struct space *space, struct txn *txn,
switch (request->type) {
case IPROTO_INSERT:
case IPROTO_REPLACE:
- if (space->vtab->execute_replace(space, txn,
- request, result) != 0)
+ if (space->vtab->execute_replace(space, txn, request, result) !=
+ 0)
return -1;
break;
case IPROTO_UPDATE:
- if (space->vtab->execute_update(space, txn,
- request, result) != 0)
+ if (space->vtab->execute_update(space, txn, request, result) !=
+ 0)
return -1;
if (*result != NULL && request->index_id != 0) {
/*
@@ -583,8 +583,8 @@ space_execute_dml(struct space *space, struct txn *txn,
}
break;
case IPROTO_DELETE:
- if (space->vtab->execute_delete(space, txn,
- request, result) != 0)
+ if (space->vtab->execute_delete(space, txn, request, result) !=
+ 0)
return -1;
if (*result != NULL && request->index_id != 0)
request_rebind_to_primary_key(request, space, *result);
@@ -606,14 +606,14 @@ space_add_ck_constraint(struct space *space, struct ck_constraint *ck)
rlist_add_entry(&space->ck_constraint, ck, link);
if (space->ck_constraint_trigger == NULL) {
struct trigger *ck_trigger =
- (struct trigger *) malloc(sizeof(*ck_trigger));
+ (struct trigger *)malloc(sizeof(*ck_trigger));
if (ck_trigger == NULL) {
diag_set(OutOfMemory, sizeof(*ck_trigger), "malloc",
"ck_trigger");
return -1;
}
trigger_create(ck_trigger, ck_constraint_on_replace_trigger,
- NULL, (trigger_f0) free);
+ NULL, (trigger_f0)free);
trigger_add(&space->on_replace, ck_trigger);
space->ck_constraint_trigger = ck_trigger;
}
@@ -640,7 +640,7 @@ space_find_constraint_id(struct space *space, const char *name)
mh_int_t pos = mh_strnptr_find_inp(ids, name, len);
if (pos == mh_end(ids))
return NULL;
- return (struct constraint_id *) mh_strnptr_node(ids, pos)->val;
+ return (struct constraint_id *)mh_strnptr_node(ids, pos)->val;
}
int
@@ -650,7 +650,7 @@ space_add_constraint_id(struct space *space, struct constraint_id *id)
struct mh_strnptr_t *ids = space->constraint_ids;
uint32_t len = strlen(id->name);
uint32_t hash = mh_strn_hash(id->name, len);
- const struct mh_strnptr_node_t name_node = {id->name, len, hash, id};
+ const struct mh_strnptr_node_t name_node = { id->name, len, hash, id };
if (mh_strnptr_put(ids, &name_node, NULL, NULL) == mh_end(ids)) {
diag_set(OutOfMemory, sizeof(name_node), "malloc", "node");
return -1;
@@ -665,8 +665,8 @@ space_pop_constraint_id(struct space *space, const char *name)
uint32_t len = strlen(name);
mh_int_t pos = mh_strnptr_find_inp(ids, name, len);
assert(pos != mh_end(ids));
- struct constraint_id *id = (struct constraint_id *)
- mh_strnptr_node(ids, pos)->val;
+ struct constraint_id *id =
+ (struct constraint_id *)mh_strnptr_node(ids, pos)->val;
mh_strnptr_del(ids, pos, NULL);
return id;
}
diff --git a/src/box/space.h b/src/box/space.h
index 7cfba65..a74c900 100644
--- a/src/box/space.h
+++ b/src/box/space.h
@@ -60,12 +60,12 @@ struct space_vtab {
/** Return binary size of a space. */
size_t (*bsize)(struct space *);
- int (*execute_replace)(struct space *, struct txn *,
- struct request *, struct tuple **result);
- int (*execute_delete)(struct space *, struct txn *,
- struct request *, struct tuple **result);
- int (*execute_update)(struct space *, struct txn *,
- struct request *, struct tuple **result);
+ int (*execute_replace)(struct space *, struct txn *, struct request *,
+ struct tuple **result);
+ int (*execute_delete)(struct space *, struct txn *, struct request *,
+ struct tuple **result);
+ int (*execute_update)(struct space *, struct txn *, struct request *,
+ struct tuple **result);
int (*execute_upsert)(struct space *, struct txn *, struct request *);
int (*ephemeral_replace)(struct space *, const char *, const char *);
@@ -140,8 +140,7 @@ struct space_vtab {
* Notify the engine about the changed space,
* before it's done, to prepare 'new_space' object.
*/
- int (*prepare_alter)(struct space *old_space,
- struct space *new_space);
+ int (*prepare_alter)(struct space *old_space, struct space *new_space);
/**
* Called right after removing a space from the cache.
* The engine should abort all transactions involving
@@ -253,7 +252,10 @@ space_create(struct space *space, struct engine *engine,
/** Get space ordinal number. */
static inline uint32_t
-space_id(struct space *space) { return space->def->id; }
+space_id(struct space *space)
+{
+ return space->def->id;
+}
/** Get space name. */
static inline const char *
@@ -302,7 +304,7 @@ space_index(struct space *space, uint32_t id)
static inline struct index *
space_index_by_name(struct space *space, const char *index_name)
{
- for(uint32_t i = 0; i < space->index_count; i++) {
+ for (uint32_t i = 0; i < space->index_count; i++) {
struct index *index = space->index[i];
if (strcmp(index_name, index->def->name) == 0)
return index;
@@ -391,8 +393,8 @@ access_check_space(struct space *space, user_access_t access);
* Execute a DML request on the given space.
*/
int
-space_execute_dml(struct space *space, struct txn *txn,
- struct request *request, struct tuple **result);
+space_execute_dml(struct space *space, struct txn *txn, struct request *request,
+ struct tuple **result);
static inline int
space_ephemeral_replace(struct space *space, const char *tuple,
@@ -466,8 +468,8 @@ space_swap_index(struct space *old_space, struct space *new_space,
uint32_t old_index_id, uint32_t new_index_id)
{
assert(old_space->vtab == new_space->vtab);
- return new_space->vtab->swap_index(old_space, new_space,
- old_index_id, new_index_id);
+ return new_space->vtab->swap_index(old_space, new_space, old_index_id,
+ new_index_id);
}
static inline int
@@ -484,11 +486,17 @@ space_invalidate(struct space *space)
}
static inline bool
-space_is_memtx(struct space *space) { return space->engine->id == 0; }
+space_is_memtx(struct space *space)
+{
+ return space->engine->id == 0;
+}
/** Return true if space is run under vinyl engine. */
static inline bool
-space_is_vinyl(struct space *space) { return strcmp(space->engine->name, "vinyl") == 0; }
+space_is_vinyl(struct space *space)
+{
+ return strcmp(space->engine->name, "vinyl") == 0;
+}
struct field_def;
/**
@@ -566,20 +574,33 @@ space_pop_constraint_id(struct space *space, const char *name);
/*
* Virtual method stubs.
*/
-size_t generic_space_bsize(struct space *);
-int generic_space_ephemeral_replace(struct space *, const char *, const char *);
-int generic_space_ephemeral_delete(struct space *, const char *);
-int generic_space_ephemeral_rowid_next(struct space *, uint64_t *);
-void generic_init_system_space(struct space *);
-void generic_init_ephemeral_space(struct space *);
-int generic_space_check_index_def(struct space *, struct index_def *);
-int generic_space_add_primary_key(struct space *space);
-void generic_space_drop_primary_key(struct space *space);
-int generic_space_check_format(struct space *, struct tuple_format *);
-int generic_space_build_index(struct space *, struct index *,
- struct tuple_format *, bool);
-int generic_space_prepare_alter(struct space *, struct space *);
-void generic_space_invalidate(struct space *);
+size_t
+generic_space_bsize(struct space *);
+int
+generic_space_ephemeral_replace(struct space *, const char *, const char *);
+int
+generic_space_ephemeral_delete(struct space *, const char *);
+int
+generic_space_ephemeral_rowid_next(struct space *, uint64_t *);
+void
+generic_init_system_space(struct space *);
+void
+generic_init_ephemeral_space(struct space *);
+int
+generic_space_check_index_def(struct space *, struct index_def *);
+int
+generic_space_add_primary_key(struct space *space);
+void
+generic_space_drop_primary_key(struct space *space);
+int
+generic_space_check_format(struct space *, struct tuple_format *);
+int
+generic_space_build_index(struct space *, struct index *, struct tuple_format *,
+ bool);
+int
+generic_space_prepare_alter(struct space *, struct space *);
+void
+generic_space_invalidate(struct space *);
#if defined(__cplusplus)
} /* extern "C" */
@@ -629,9 +650,9 @@ index_find_unique_xc(struct space *space, uint32_t index_id)
static inline struct index *
index_find_system_xc(struct space *space, uint32_t index_id)
{
- if (! space_is_memtx(space)) {
- tnt_raise(ClientError, ER_UNSUPPORTED,
- space->engine->name, "system data");
+ if (!space_is_memtx(space)) {
+ tnt_raise(ClientError, ER_UNSUPPORTED, space->engine->name,
+ "system data");
}
return index_find_xc(space, index_id);
}
diff --git a/src/box/space_def.c b/src/box/space_def.c
index 83566bf..ec45955 100644
--- a/src/box/space_def.c
+++ b/src/box/space_def.c
@@ -76,8 +76,8 @@ space_def_sizeof(uint32_t name_len, const struct field_def *fields,
*fields_offset = small_align(sizeof(struct space_def) + name_len + 1,
alignof(typeof(fields[0])));
*names_offset = *fields_offset + field_count * sizeof(struct field_def);
- *def_expr_offset = small_align(*names_offset + field_strs_size,
- alignof(uint64_t));
+ *def_expr_offset =
+ small_align(*names_offset + field_strs_size, alignof(uint64_t));
return *def_expr_offset + def_exprs_size;
}
@@ -110,7 +110,7 @@ space_def_dup(const struct space_def *src)
size_t size = space_def_sizeof(strlen(src->name), src->fields,
src->field_count, &strs_offset,
&fields_offset, &def_expr_offset);
- struct space_def *ret = (struct space_def *) malloc(size);
+ struct space_def *ret = (struct space_def *)malloc(size);
if (ret == NULL) {
diag_set(OutOfMemory, size, "malloc", "ret");
return NULL;
@@ -131,7 +131,7 @@ space_def_dup(const struct space_def *src)
struct Expr *e = src->fields[i].default_value_expr;
if (e != NULL) {
char *expr_pos_old = expr_pos;
- (void) expr_pos_old;
+ (void)expr_pos_old;
e = sql_expr_dup(sql_get(), e, 0, &expr_pos);
assert(e != NULL);
/* Note: due to SQL legacy
@@ -156,16 +156,15 @@ space_def_dup(const struct space_def *src)
struct space_def *
space_def_new(uint32_t id, uint32_t uid, uint32_t exact_field_count,
- const char *name, uint32_t name_len,
- const char *engine_name, uint32_t engine_len,
- const struct space_opts *opts, const struct field_def *fields,
- uint32_t field_count)
+ const char *name, uint32_t name_len, const char *engine_name,
+ uint32_t engine_len, const struct space_opts *opts,
+ const struct field_def *fields, uint32_t field_count)
{
uint32_t strs_offset, fields_offset, def_expr_offset;
size_t size = space_def_sizeof(name_len, fields, field_count,
&strs_offset, &fields_offset,
&def_expr_offset);
- struct space_def *def = (struct space_def *) malloc(size);
+ struct space_def *def = (struct space_def *)malloc(size);
if (def == NULL) {
diag_set(OutOfMemory, size, "malloc", "def");
return NULL;
@@ -212,7 +211,7 @@ space_def_new(uint32_t id, uint32_t uid, uint32_t exact_field_count,
struct Expr *e = def->fields[i].default_value_expr;
if (e != NULL) {
char *expr_pos_old = expr_pos;
- (void) expr_pos_old;
+ (void)expr_pos_old;
e = sql_expr_dup(sql_get(), e, 0, &expr_pos);
assert(e != NULL);
/* Note: due to SQL legacy
@@ -235,7 +234,7 @@ space_def_new(uint32_t id, uint32_t uid, uint32_t exact_field_count,
return def;
}
-struct space_def*
+struct space_def *
space_def_new_ephemeral(uint32_t exact_field_count, struct field_def *fields)
{
struct space_opts opts = space_opts_default;
@@ -246,11 +245,9 @@ space_def_new_ephemeral(uint32_t exact_field_count, struct field_def *fields)
fields = (struct field_def *)&field_def_default;
field_count = 0;
}
- struct space_def *space_def = space_def_new(0, 0, exact_field_count,
- "ephemeral",
- strlen("ephemeral"),
- "memtx", strlen("memtx"),
- &opts, fields, field_count);
+ struct space_def *space_def = space_def_new(
+ 0, 0, exact_field_count, "ephemeral", strlen("ephemeral"),
+ "memtx", strlen("memtx"), &opts, fields, field_count);
return space_def;
}
diff --git a/src/box/space_def.h b/src/box/space_def.h
index 198242d..2fa28d1 100644
--- a/src/box/space_def.h
+++ b/src/box/space_def.h
@@ -47,7 +47,7 @@ struct space_opts {
* made to a space are replicated.
*/
uint32_t group_id;
- /**
+ /**
* The space is a temporary:
* - it is empty at server start
* - changes are not written to WAL
@@ -171,10 +171,9 @@ space_def_dup(const struct space_def *src);
*/
struct space_def *
space_def_new(uint32_t id, uint32_t uid, uint32_t exact_field_count,
- const char *name, uint32_t name_len,
- const char *engine_name, uint32_t engine_len,
- const struct space_opts *opts, const struct field_def *fields,
- uint32_t field_count);
+ const char *name, uint32_t name_len, const char *engine_name,
+ uint32_t engine_len, const struct space_opts *opts,
+ const struct field_def *fields, uint32_t field_count);
/**
* Create a new ephemeral space definition.
@@ -220,10 +219,9 @@ space_def_dup_xc(const struct space_def *src)
static inline struct space_def *
space_def_new_xc(uint32_t id, uint32_t uid, uint32_t exact_field_count,
- const char *name, uint32_t name_len,
- const char *engine_name, uint32_t engine_len,
- const struct space_opts *opts, const struct field_def *fields,
- uint32_t field_count)
+ const char *name, uint32_t name_len, const char *engine_name,
+ uint32_t engine_len, const struct space_opts *opts,
+ const struct field_def *fields, uint32_t field_count)
{
struct space_def *ret = space_def_new(id, uid, exact_field_count, name,
name_len, engine_name, engine_len,
diff --git a/src/box/sysview.c b/src/box/sysview.c
index 00c320b..9f2fc40 100644
--- a/src/box/sysview.c
+++ b/src/box/sysview.c
@@ -74,7 +74,7 @@ struct sysview_iterator {
static inline struct sysview_iterator *
sysview_iterator(struct iterator *ptr)
{
- return (struct sysview_iterator *) ptr;
+ return (struct sysview_iterator *)ptr;
}
static void
@@ -150,8 +150,8 @@ sysview_index_create_iterator(struct index *base, enum iterator_type type,
}
static int
-sysview_index_get(struct index *base, const char *key,
- uint32_t part_count, struct tuple **result)
+sysview_index_get(struct index *base, const char *key, uint32_t part_count,
+ struct tuple **result)
{
struct sysview_index *index = (struct sysview_index *)base;
struct space *source = space_cache_find(index->source_space_id);
@@ -185,7 +185,7 @@ static const struct index_vtab sysview_index_vtab = {
/* .update_def = */ generic_index_update_def,
/* .depends_on_pk = */ generic_index_depends_on_pk,
/* .def_change_requires_rebuild = */
- generic_index_def_change_requires_rebuild,
+ generic_index_def_change_requires_rebuild,
/* .size = */ generic_index_size,
/* .bsize = */ generic_index_bsize,
/* .min = */ generic_index_min,
@@ -196,7 +196,7 @@ static const struct index_vtab sysview_index_vtab = {
/* .replace = */ generic_index_replace,
/* .create_iterator = */ sysview_index_create_iterator,
/* .create_snapshot_iterator = */
- generic_index_create_snapshot_iterator,
+ generic_index_create_snapshot_iterator,
/* .stat = */ generic_index_stat,
/* .compact = */ generic_index_compact,
/* .reset_stat = */ generic_index_reset_stat,
@@ -295,8 +295,7 @@ vspace_filter(struct space *source, struct tuple *tuple)
* Allow access for space owners and users with any
* privilege for the space.
*/
- return (PRIV_WRDA & effective ||
- space->def->uid == cr->uid);
+ return (PRIV_WRDA & effective || space->def->uid == cr->uid);
}
static bool
@@ -363,15 +362,14 @@ vfunc_filter(struct space *source, struct tuple *tuple)
return true; /* read access to _func space */
uint32_t name_len;
- const char *name = tuple_field_str(tuple, BOX_FUNC_FIELD_NAME,
- &name_len);
+ const char *name =
+ tuple_field_str(tuple, BOX_FUNC_FIELD_NAME, &name_len);
if (name == NULL)
return false;
struct func *func = func_by_name(name, name_len);
assert(func != NULL);
user_access_t effective = func->access[cr->auth_token].effective;
- return func->def->uid == cr->uid ||
- ((PRIV_WRDA | PRIV_X) & effective);
+ return func->def->uid == cr->uid || ((PRIV_WRDA | PRIV_X) & effective);
}
static bool
@@ -405,8 +403,8 @@ vsequence_filter(struct space *source, struct tuple *tuple)
static bool
vcollation_filter(struct space *source, struct tuple *tuple)
{
- (void) source;
- (void) tuple;
+ (void)source;
+ (void)tuple;
return true;
}
@@ -462,17 +460,16 @@ sysview_space_create_index(struct space *space, struct index_def *def)
filter = vcollation_filter;
break;
default:
- diag_set(ClientError, ER_MODIFY_INDEX,
- def->name, space_name(space),
- "unknown space for system view");
+ diag_set(ClientError, ER_MODIFY_INDEX, def->name,
+ space_name(space), "unknown space for system view");
return NULL;
}
struct sysview_index *index =
(struct sysview_index *)calloc(1, sizeof(*index));
if (index == NULL) {
- diag_set(OutOfMemory, sizeof(*index),
- "malloc", "struct sysview_index");
+ diag_set(OutOfMemory, sizeof(*index), "malloc",
+ "struct sysview_index");
return NULL;
}
if (index_create(&index->base, (struct engine *)sysview,
@@ -525,8 +522,7 @@ sysview_engine_create_space(struct engine *engine, struct space_def *def,
{
struct space *space = (struct space *)calloc(1, sizeof(*space));
if (space == NULL) {
- diag_set(OutOfMemory, sizeof(*space),
- "malloc", "struct space");
+ diag_set(OutOfMemory, sizeof(*space), "malloc", "struct space");
return NULL;
}
int key_count = 0;
@@ -542,18 +538,17 @@ sysview_engine_create_space(struct engine *engine, struct space_def *def,
free(space);
return NULL;
}
- struct tuple_format *format =
- tuple_format_new(NULL, NULL, keys, key_count, def->fields,
- def->field_count, def->exact_field_count,
- def->dict, def->opts.is_temporary,
- def->opts.is_ephemeral);
+ struct tuple_format *format = tuple_format_new(
+ NULL, NULL, keys, key_count, def->fields, def->field_count,
+ def->exact_field_count, def->dict, def->opts.is_temporary,
+ def->opts.is_ephemeral);
if (format == NULL) {
free(space);
return NULL;
}
tuple_format_ref(format);
- if (space_create(space, engine, &sysview_space_vtab,
- def, key_list, format) != 0) {
+ if (space_create(space, engine, &sysview_space_vtab, def, key_list,
+ format) != 0) {
free(space);
return NULL;
}
@@ -595,8 +590,8 @@ sysview_engine_new(void)
{
struct sysview_engine *sysview = calloc(1, sizeof(*sysview));
if (sysview == NULL) {
- diag_set(OutOfMemory, sizeof(*sysview),
- "malloc", "struct sysview_engine");
+ diag_set(OutOfMemory, sizeof(*sysview), "malloc",
+ "struct sysview_engine");
return NULL;
}
diff --git a/src/box/tuple.c b/src/box/tuple.c
index f396547..88b9e32 100644
--- a/src/box/tuple.c
+++ b/src/box/tuple.c
@@ -60,7 +60,8 @@ static void
runtime_tuple_delete(struct tuple_format *format, struct tuple *tuple);
static struct tuple *
-runtime_tuple_new(struct tuple_format *format, const char *data, const char *end);
+runtime_tuple_new(struct tuple_format *format, const char *data,
+ const char *end);
/** A virtual method table for tuple_format_runtime */
static struct tuple_format_vtab tuple_format_runtime_vtab = {
@@ -71,9 +72,11 @@ static struct tuple_format_vtab tuple_format_runtime_vtab = {
};
static struct tuple *
-runtime_tuple_new(struct tuple_format *format, const char *data, const char *end)
+runtime_tuple_new(struct tuple_format *format, const char *data,
+ const char *end)
{
- assert(format->vtab.tuple_delete == tuple_format_runtime_vtab.tuple_delete);
+ assert(format->vtab.tuple_delete ==
+ tuple_format_runtime_vtab.tuple_delete);
mp_tuple_assert(data, end);
struct tuple *tuple = NULL;
@@ -93,10 +96,9 @@ runtime_tuple_new(struct tuple_format *format, const char *data, const char *end
size_t data_len = end - data;
size_t total = sizeof(struct tuple) + field_map_size + data_len;
- tuple = (struct tuple *) smalloc(&runtime_alloc, total);
+ tuple = (struct tuple *)smalloc(&runtime_alloc, total);
if (tuple == NULL) {
- diag_set(OutOfMemory, (unsigned) total,
- "malloc", "tuple");
+ diag_set(OutOfMemory, (unsigned)total, "malloc", "tuple");
goto end;
}
@@ -106,7 +108,7 @@ runtime_tuple_new(struct tuple_format *format, const char *data, const char *end
tuple_format_ref(format);
tuple->data_offset = data_offset;
tuple->is_dirty = false;
- char *raw = (char *) tuple + data_offset;
+ char *raw = (char *)tuple + data_offset;
field_map_build(&builder, raw - field_map_size);
memcpy(raw, data, data_len);
say_debug("%s(%zu) = %p", __func__, data_len, tuple);
@@ -118,7 +120,8 @@ end:
static void
runtime_tuple_delete(struct tuple_format *format, struct tuple *tuple)
{
- assert(format->vtab.tuple_delete == tuple_format_runtime_vtab.tuple_delete);
+ assert(format->vtab.tuple_delete ==
+ tuple_format_runtime_vtab.tuple_delete);
say_debug("%s(%p)", __func__, tuple);
assert(tuple->refs == 0);
size_t total = tuple_size(tuple);
@@ -239,10 +242,11 @@ bigref_list_increase_capacity(void)
capacity = MIN(capacity * BIGREF_FACTOR, BIGREF_MAX_CAPACITY);
else
panic("Too many big references");
- refs = (uint32_t *) realloc(refs, capacity * sizeof(*refs));
+ refs = (uint32_t *)realloc(refs, capacity * sizeof(*refs));
if (refs == NULL) {
- panic("failed to reallocate %zu bytes: Cannot allocate "\
- "memory.", capacity * sizeof(*refs));
+ panic("failed to reallocate %zu bytes: Cannot allocate "
+ "memory.",
+ capacity * sizeof(*refs));
}
for (uint16_t i = bigref_list.capacity; i < capacity; ++i)
refs[i] = i + 1;
@@ -269,7 +273,7 @@ void
tuple_ref_slow(struct tuple *tuple)
{
assert(tuple->is_bigref || tuple->refs == TUPLE_REF_MAX);
- if (! tuple->is_bigref) {
+ if (!tuple->is_bigref) {
tuple->ref_index = bigref_list_new_index();
tuple->is_bigref = true;
bigref_list.refs[tuple->ref_index] = TUPLE_REF_MAX;
@@ -284,7 +288,7 @@ tuple_unref_slow(struct tuple *tuple)
{
assert(tuple->is_bigref &&
bigref_list.refs[tuple->ref_index] > TUPLE_REF_MAX);
- if(--bigref_list.refs[tuple->ref_index] == TUPLE_REF_MAX) {
+ if (--bigref_list.refs[tuple->ref_index] == TUPLE_REF_MAX) {
bigref_list.refs[tuple->ref_index] = bigref_list.vacant_index;
bigref_list.vacant_index = tuple->ref_index;
tuple->ref_index = TUPLE_REF_MAX;
@@ -304,9 +308,9 @@ tuple_init(field_name_hash_f hash)
/*
* Create a format for runtime tuples
*/
- tuple_format_runtime = tuple_format_new(&tuple_format_runtime_vtab, NULL,
- NULL, 0, NULL, 0, 0, NULL, false,
- false);
+ tuple_format_runtime = tuple_format_new(&tuple_format_runtime_vtab,
+ NULL, NULL, 0, NULL, 0, 0, NULL,
+ false, false);
if (tuple_format_runtime == NULL)
return -1;
@@ -331,8 +335,8 @@ tuple_init(field_name_hash_f hash)
void
tuple_arena_create(struct slab_arena *arena, struct quota *quota,
- uint64_t arena_max_size, uint32_t slab_size,
- bool dontdump, const char *arena_name)
+ uint64_t arena_max_size, uint32_t slab_size, bool dontdump,
+ const char *arena_name)
{
/*
* Ensure that quota is a multiple of slab_size, to
@@ -340,24 +344,25 @@ tuple_arena_create(struct slab_arena *arena, struct quota *quota,
*/
size_t prealloc = small_align(arena_max_size, slab_size);
- /*
+ /*
* Skip from coredump if requested.
*/
- int flags = SLAB_ARENA_PRIVATE;
- if (dontdump)
- flags |= SLAB_ARENA_DONTDUMP;
+ int flags = SLAB_ARENA_PRIVATE;
+ if (dontdump)
+ flags |= SLAB_ARENA_DONTDUMP;
say_info("mapping %zu bytes for %s tuple arena...", prealloc,
arena_name);
if (slab_arena_create(arena, quota, prealloc, slab_size, flags) != 0) {
if (errno == ENOMEM) {
- panic("failed to preallocate %zu bytes: Cannot "\
- "allocate memory, check option '%s_memory' in box.cfg(..)", prealloc,
- arena_name);
+ panic("failed to preallocate %zu bytes: Cannot "
+ "allocate memory, check option '%s_memory' in box.cfg(..)",
+ prealloc, arena_name);
} else {
- panic_syserror("failed to preallocate %zu bytes for %s"\
- " tuple arena", prealloc, arena_name);
+ panic_syserror("failed to preallocate %zu bytes for %s"
+ " tuple arena",
+ prealloc, arena_name);
}
}
@@ -506,7 +511,7 @@ tuple_field_raw_by_full_path(struct tuple_format *format, const char *tuple,
json_lexer_create(&lexer, path, path_len, TUPLE_INDEX_BASE);
if (json_lexer_next_token(&lexer, &token) != 0)
return NULL;
- switch(token.type) {
+ switch (token.type) {
case JSON_TOKEN_NUM: {
fieldno = token.num;
break;
@@ -514,7 +519,7 @@ tuple_field_raw_by_full_path(struct tuple_format *format, const char *tuple,
case JSON_TOKEN_STR: {
/* First part of a path is a field name. */
uint32_t name_hash;
- if (path_len == (uint32_t) token.len) {
+ if (path_len == (uint32_t)token.len) {
name_hash = path_hash;
} else {
/*
@@ -537,22 +542,19 @@ tuple_field_raw_by_full_path(struct tuple_format *format, const char *tuple,
}
return tuple_field_raw_by_path(format, tuple, field_map, fieldno,
path + lexer.offset,
- path_len - lexer.offset,
- NULL, MULTIKEY_NONE);
+ path_len - lexer.offset, NULL,
+ MULTIKEY_NONE);
}
uint32_t
tuple_raw_multikey_count(struct tuple_format *format, const char *data,
- const uint32_t *field_map,
- struct key_def *key_def)
+ const uint32_t *field_map, struct key_def *key_def)
{
assert(key_def->is_multikey);
- const char *array_raw =
- tuple_field_raw_by_path(format, data, field_map,
- key_def->multikey_fieldno,
- key_def->multikey_path,
- key_def->multikey_path_len,
- NULL, MULTIKEY_NONE);
+ const char *array_raw = tuple_field_raw_by_path(
+ format, data, field_map, key_def->multikey_fieldno,
+ key_def->multikey_path, key_def->multikey_path_len, NULL,
+ MULTIKEY_NONE);
if (array_raw == NULL)
return 0;
enum mp_type type = mp_typeof(*array_raw);
@@ -576,9 +578,8 @@ box_tuple_format_t *
box_tuple_format_new(struct key_def **keys, uint16_t key_count)
{
box_tuple_format_t *format =
- tuple_format_new(&tuple_format_runtime_vtab, NULL,
- keys, key_count, NULL, 0, 0, NULL, false,
- false);
+ tuple_format_new(&tuple_format_runtime_vtab, NULL, keys,
+ key_count, NULL, 0, 0, NULL, false, false);
if (format != NULL)
tuple_format_ref(format);
return format;
@@ -651,11 +652,11 @@ box_tuple_iterator_t *
box_tuple_iterator(box_tuple_t *tuple)
{
assert(tuple != NULL);
- struct tuple_iterator *it = (struct tuple_iterator *)
- mempool_alloc(&tuple_iterator_pool);
+ struct tuple_iterator *it =
+ (struct tuple_iterator *)mempool_alloc(&tuple_iterator_pool);
if (it == NULL) {
- diag_set(OutOfMemory, tuple_iterator_pool.objsize,
- "mempool", "new slab");
+ diag_set(OutOfMemory, tuple_iterator_pool.objsize, "mempool",
+ "new slab");
return NULL;
}
tuple_ref(tuple);
@@ -702,9 +703,9 @@ box_tuple_update(box_tuple_t *tuple, const char *expr, const char *expr_end)
struct region *region = &fiber()->gc;
size_t used = region_used(region);
struct tuple_format *format = tuple_format(tuple);
- const char *new_data =
- xrow_update_execute(expr, expr_end, old_data, old_data + bsize,
- format, &new_size, 1, NULL);
+ const char *new_data = xrow_update_execute(expr, expr_end, old_data,
+ old_data + bsize, format,
+ &new_size, 1, NULL);
if (new_data == NULL) {
region_truncate(region, used);
return NULL;
@@ -724,9 +725,9 @@ box_tuple_upsert(box_tuple_t *tuple, const char *expr, const char *expr_end)
struct region *region = &fiber()->gc;
size_t used = region_used(region);
struct tuple_format *format = tuple_format(tuple);
- const char *new_data =
- xrow_upsert_execute(expr, expr_end, old_data, old_data + bsize,
- format, &new_size, 1, false, NULL);
+ const char *new_data = xrow_upsert_execute(expr, expr_end, old_data,
+ old_data + bsize, format,
+ &new_size, 1, false, NULL);
if (new_data == NULL) {
region_truncate(region, used);
return NULL;
diff --git a/src/box/tuple.h b/src/box/tuple.h
index 53ae690..0d1dbac 100644
--- a/src/box/tuple.h
+++ b/src/box/tuple.h
@@ -69,8 +69,8 @@ tuple_free(void);
*/
void
tuple_arena_create(struct slab_arena *arena, struct quota *quota,
- uint64_t arena_max_size, uint32_t slab_size,
- bool dontdump, const char *arena_name);
+ uint64_t arena_max_size, uint32_t slab_size, bool dontdump,
+ const char *arena_name);
void
tuple_arena_destroy(struct slab_arena *arena);
@@ -297,8 +297,7 @@ box_tuple_upsert(box_tuple_t *tuple, const char *expr, const char *expr_end);
*
* Each 'off_i' is the offset to the i-th indexed field.
*/
-struct PACKED tuple
-{
+struct PACKED tuple {
union {
/** Reference counter. */
uint16_t refs;
@@ -349,7 +348,7 @@ tuple_size(struct tuple *tuple)
static inline const char *
tuple_data(struct tuple *tuple)
{
- return (const char *) tuple + tuple->data_offset;
+ return (const char *)tuple + tuple->data_offset;
}
/**
@@ -371,7 +370,7 @@ static inline const char *
tuple_data_range(struct tuple *tuple, uint32_t *p_size)
{
*p_size = tuple->bsize;
- return (const char *) tuple + tuple->data_offset;
+ return (const char *)tuple + tuple->data_offset;
}
/**
@@ -524,7 +523,7 @@ tuple_validate(struct tuple_format *format, struct tuple *tuple)
static inline const uint32_t *
tuple_field_map(struct tuple *tuple)
{
- return (const uint32_t *) ((const char *) tuple + tuple->data_offset);
+ return (const uint32_t *)((const char *)tuple + tuple->data_offset);
}
/**
@@ -647,7 +646,7 @@ tuple_field_raw_by_path(struct tuple_format *format, const char *tuple,
*/
goto parse;
}
-offset_slot_access:
+ offset_slot_access:
/* Indexed field */
offset = field_map_get_offset(field_map, offset_slot,
multikey_idx);
@@ -656,7 +655,7 @@ offset_slot_access:
tuple += offset;
} else {
uint32_t field_count;
-parse:
+ parse:
ERROR_INJECT(ERRINJ_TUPLE_FIELD, return NULL);
field_count = mp_decode_array(&tuple);
if (unlikely(fieldno >= field_count))
@@ -686,8 +685,8 @@ static inline const char *
tuple_field_raw(struct tuple_format *format, const char *tuple,
const uint32_t *field_map, uint32_t field_no)
{
- return tuple_field_raw_by_path(format, tuple, field_map, field_no,
- NULL, 0, NULL, MULTIKEY_NONE);
+ return tuple_field_raw_by_path(format, tuple, field_map, field_no, NULL,
+ 0, NULL, MULTIKEY_NONE);
}
/**
@@ -737,8 +736,8 @@ tuple_field_raw_by_full_path(struct tuple_format *format, const char *tuple,
*/
static inline const char *
tuple_field_raw_by_part(struct tuple_format *format, const char *data,
- const uint32_t *field_map,
- struct key_part *part, int multikey_idx)
+ const uint32_t *field_map, struct key_part *part,
+ int multikey_idx)
{
if (unlikely(part->format_epoch != format->epoch)) {
assert(format->epoch != 0);
@@ -837,7 +836,7 @@ tuple_rewind(struct tuple_iterator *it, struct tuple *tuple)
uint32_t bsize;
const char *data = tuple_data_range(tuple, &bsize);
it->pos = data;
- (void) mp_decode_array(&it->pos); /* Skip array header */
+ (void)mp_decode_array(&it->pos); /* Skip array header */
it->fieldno = 0;
it->end = data + bsize;
}
@@ -921,8 +920,8 @@ mp_tuple_assert(const char *tuple, const char *tuple_end)
mp_next(&tuple);
#endif
assert(tuple == tuple_end);
- (void) tuple;
- (void) tuple_end;
+ (void)tuple;
+ (void)tuple_end;
}
static inline const char *
@@ -1158,4 +1157,3 @@ tuple_field_u32_xc(struct tuple *tuple, uint32_t fieldno)
#endif /* defined(__cplusplus) */
#endif /* TARANTOOL_BOX_TUPLE_H_INCLUDED */
-
diff --git a/src/box/tuple_bloom.c b/src/box/tuple_bloom.c
index 420a7c6..88dc6ed 100644
--- a/src/box/tuple_bloom.c
+++ b/src/box/tuple_bloom.c
@@ -51,7 +51,7 @@ struct tuple_bloom_builder *
tuple_bloom_builder_new(uint32_t part_count)
{
size_t size = sizeof(struct tuple_bloom_builder) +
- part_count * sizeof(struct tuple_hash_array);
+ part_count * sizeof(struct tuple_hash_array);
struct tuple_bloom_builder *builder = malloc(size);
if (builder == NULL) {
diag_set(OutOfMemory, size, "malloc", "tuple bloom builder");
@@ -89,8 +89,8 @@ tuple_hash_array_add(struct tuple_hash_array *hash_arr, uint32_t hash)
}
if (hash_arr->count >= hash_arr->capacity) {
uint32_t capacity = MAX(hash_arr->capacity * 2, 1024U);
- uint32_t *values = realloc(hash_arr->values,
- capacity * sizeof(*values));
+ uint32_t *values =
+ realloc(hash_arr->values, capacity * sizeof(*values));
if (values == NULL) {
diag_set(OutOfMemory, capacity * sizeof(*values),
"malloc", "tuple hash array");
@@ -116,9 +116,8 @@ tuple_bloom_builder_add(struct tuple_bloom_builder *builder,
uint32_t total_size = 0;
for (uint32_t i = 0; i < key_def->part_count; i++) {
- total_size += tuple_hash_key_part(&h, &carry, tuple,
- &key_def->parts[i],
- multikey_idx);
+ total_size += tuple_hash_key_part(
+ &h, &carry, tuple, &key_def->parts[i], multikey_idx);
uint32_t hash = PMurHash32_Result(h, carry, total_size);
if (tuple_hash_array_add(&builder->parts[i], hash) != 0)
return -1;
@@ -153,8 +152,8 @@ struct tuple_bloom *
tuple_bloom_new(struct tuple_bloom_builder *builder, double fpr)
{
uint32_t part_count = builder->part_count;
- size_t size = sizeof(struct tuple_bloom) +
- part_count * sizeof(struct bloom);
+ size_t size =
+ sizeof(struct tuple_bloom) + part_count * sizeof(struct bloom);
struct tuple_bloom *bloom = malloc(size);
if (bloom == NULL) {
diag_set(OutOfMemory, size, "malloc", "tuple bloom");
@@ -218,9 +217,8 @@ tuple_bloom_maybe_has(const struct tuple_bloom *bloom, struct tuple *tuple,
uint32_t total_size = 0;
for (uint32_t i = 0; i < key_def->part_count; i++) {
- total_size += tuple_hash_key_part(&h, &carry, tuple,
- &key_def->parts[i],
- multikey_idx);
+ total_size += tuple_hash_key_part(
+ &h, &carry, tuple, &key_def->parts[i], multikey_idx);
uint32_t hash = PMurHash32_Result(h, carry, total_size);
if (!bloom_maybe_has(&bloom->parts[i], hash))
return false;
@@ -229,9 +227,8 @@ tuple_bloom_maybe_has(const struct tuple_bloom *bloom, struct tuple *tuple,
}
bool
-tuple_bloom_maybe_has_key(const struct tuple_bloom *bloom,
- const char *key, uint32_t part_count,
- struct key_def *key_def)
+tuple_bloom_maybe_has_key(const struct tuple_bloom *bloom, const char *key,
+ uint32_t part_count, struct key_def *key_def)
{
if (bloom->is_legacy) {
if (part_count < key_def->part_count)
@@ -321,11 +318,11 @@ struct tuple_bloom *
tuple_bloom_decode(const char **data)
{
uint32_t part_count = mp_decode_array(data);
- struct tuple_bloom *bloom = malloc(sizeof(*bloom) +
- part_count * sizeof(*bloom->parts));
+ struct tuple_bloom *bloom =
+ malloc(sizeof(*bloom) + part_count * sizeof(*bloom->parts));
if (bloom == NULL) {
- diag_set(OutOfMemory, sizeof(*bloom) +
- part_count * sizeof(*bloom->parts),
+ diag_set(OutOfMemory,
+ sizeof(*bloom) + part_count * sizeof(*bloom->parts),
"malloc", "tuple bloom");
return NULL;
}
@@ -346,8 +343,8 @@ tuple_bloom_decode(const char **data)
struct tuple_bloom *
tuple_bloom_decode_legacy(const char **data)
{
- struct tuple_bloom *bloom = malloc(sizeof(*bloom) +
- sizeof(*bloom->parts));
+ struct tuple_bloom *bloom =
+ malloc(sizeof(*bloom) + sizeof(*bloom->parts));
if (bloom == NULL) {
diag_set(OutOfMemory, sizeof(*bloom) + sizeof(*bloom->parts),
"malloc", "tuple bloom");
diff --git a/src/box/tuple_bloom.h b/src/box/tuple_bloom.h
index 1b7e4ac..ca6bfc4 100644
--- a/src/box/tuple_bloom.h
+++ b/src/box/tuple_bloom.h
@@ -189,9 +189,8 @@ tuple_bloom_maybe_has(const struct tuple_bloom *bloom, struct tuple *tuple,
* the bloom, false if there is definitely no such tuple
*/
bool
-tuple_bloom_maybe_has_key(const struct tuple_bloom *bloom,
- const char *key, uint32_t part_count,
- struct key_def *key_def);
+tuple_bloom_maybe_has_key(const struct tuple_bloom *bloom, const char *key,
+ uint32_t part_count, struct key_def *key_def);
/**
* Return the size of a tuple bloom filter when encoded.
diff --git a/src/box/tuple_compare.cc b/src/box/tuple_compare.cc
index bb786cc..4cc20c2 100644
--- a/src/box/tuple_compare.cc
+++ b/src/box/tuple_compare.cc
@@ -175,8 +175,8 @@ mp_compare_double_any_int(double lhs, const char *rhs, enum mp_type rhs_type,
}
static int
-mp_compare_double_any_number(double lhs, const char *rhs,
- enum mp_type rhs_type, int k)
+mp_compare_double_any_number(double lhs, const char *rhs, enum mp_type rhs_type,
+ int k)
{
double v;
if (rhs_type == MP_FLOAT)
@@ -225,7 +225,6 @@ mp_compare_decimal(const char *lhs, const char *rhs)
assert(ret != NULL);
(void)ret;
return decimal_compare(&lhs_dec, &rhs_dec);
-
}
static int
@@ -234,32 +233,27 @@ mp_compare_decimal_any_number(decimal_t *lhs, const char *rhs,
{
decimal_t rhs_dec;
switch (rhs_type) {
- case MP_FLOAT:
- {
+ case MP_FLOAT: {
double d = mp_decode_float(&rhs);
decimal_from_double(&rhs_dec, d);
break;
}
- case MP_DOUBLE:
- {
+ case MP_DOUBLE: {
double d = mp_decode_double(&rhs);
decimal_from_double(&rhs_dec, d);
break;
}
- case MP_INT:
- {
+ case MP_INT: {
int64_t num = mp_decode_int(&rhs);
decimal_from_int64(&rhs_dec, num);
break;
}
- case MP_UINT:
- {
+ case MP_UINT: {
uint64_t num = mp_decode_uint(&rhs);
decimal_from_uint64(&rhs_dec, num);
break;
}
- case MP_EXT:
- {
+ case MP_EXT: {
int8_t ext_type;
uint32_t len = mp_decode_extl(&rhs, &ext_type);
switch (ext_type) {
@@ -297,8 +291,8 @@ mp_compare_number_with_type(const char *lhs, enum mp_type lhs_type,
switch (ext_type) {
case MP_DECIMAL:
return mp_compare_decimal_any_number(
- decimal_unpack(&rhs, len, &dec), lhs, lhs_type, -1
- );
+ decimal_unpack(&rhs, len, &dec), lhs, lhs_type,
+ -1);
default:
unreachable();
}
@@ -309,32 +303,28 @@ mp_compare_number_with_type(const char *lhs, enum mp_type lhs_type,
switch (ext_type) {
case MP_DECIMAL:
return mp_compare_decimal_any_number(
- decimal_unpack(&lhs, len, &dec), rhs, rhs_type, 1
- );
+ decimal_unpack(&lhs, len, &dec), rhs, rhs_type,
+ 1);
default:
unreachable();
}
}
if (rhs_type == MP_FLOAT) {
- return mp_compare_double_any_number(
- mp_decode_float(&rhs), lhs, lhs_type, -1
- );
+ return mp_compare_double_any_number(mp_decode_float(&rhs), lhs,
+ lhs_type, -1);
}
if (rhs_type == MP_DOUBLE) {
- return mp_compare_double_any_number(
- mp_decode_double(&rhs), lhs, lhs_type, -1
- );
+ return mp_compare_double_any_number(mp_decode_double(&rhs), lhs,
+ lhs_type, -1);
}
assert(rhs_type == MP_INT || rhs_type == MP_UINT);
if (lhs_type == MP_FLOAT) {
- return mp_compare_double_any_int(
- mp_decode_float(&lhs), rhs, rhs_type, 1
- );
+ return mp_compare_double_any_int(mp_decode_float(&lhs), rhs,
+ rhs_type, 1);
}
if (lhs_type == MP_DOUBLE) {
- return mp_compare_double_any_int(
- mp_decode_double(&lhs), rhs, rhs_type, 1
- );
+ return mp_compare_double_any_int(mp_decode_double(&lhs), rhs,
+ rhs_type, 1);
}
assert(lhs_type == MP_INT || lhs_type == MP_UINT);
return mp_compare_integer_with_type(lhs, lhs_type, rhs, rhs_type);
@@ -343,8 +333,8 @@ mp_compare_number_with_type(const char *lhs, enum mp_type lhs_type,
static inline int
mp_compare_number(const char *lhs, const char *rhs)
{
- return mp_compare_number_with_type(lhs, mp_typeof(*lhs),
- rhs, mp_typeof(*rhs));
+ return mp_compare_number_with_type(lhs, mp_typeof(*lhs), rhs,
+ mp_typeof(*rhs));
}
static inline int
@@ -407,11 +397,11 @@ mp_compare_scalar_with_type(const char *field_a, enum mp_type a_type,
const char *field_b, enum mp_type b_type)
{
enum mp_class a_class = mp_classof(a_type) < mp_class_max ?
- mp_classof(a_type) :
- mp_extension_class(field_a);
+ mp_classof(a_type) :
+ mp_extension_class(field_a);
enum mp_class b_class = mp_classof(b_type) < mp_class_max ?
- mp_classof(b_type) :
- mp_extension_class(field_b);
+ mp_classof(b_type) :
+ mp_extension_class(field_b);
if (a_class != b_class)
return COMPARE_RESULT(a_class, b_class);
mp_compare_f cmp = mp_class_comparators[a_class];
@@ -447,16 +437,16 @@ mp_compare_scalar_coll(const char *field_a, const char *field_b,
* @retval >0 if field_a > field_b
*/
static int
-tuple_compare_field(const char *field_a, const char *field_b,
- int8_t type, struct coll *coll)
+tuple_compare_field(const char *field_a, const char *field_b, int8_t type,
+ struct coll *coll)
{
switch (type) {
case FIELD_TYPE_UNSIGNED:
return mp_compare_uint(field_a, field_b);
case FIELD_TYPE_STRING:
return coll != NULL ?
- mp_compare_str_coll(field_a, field_b, coll) :
- mp_compare_str(field_a, field_b);
+ mp_compare_str_coll(field_a, field_b, coll) :
+ mp_compare_str(field_a, field_b);
case FIELD_TYPE_INTEGER:
return mp_compare_integer_with_type(field_a,
mp_typeof(*field_a),
@@ -472,8 +462,8 @@ tuple_compare_field(const char *field_a, const char *field_b,
return mp_compare_bin(field_a, field_b);
case FIELD_TYPE_SCALAR:
return coll != NULL ?
- mp_compare_scalar_coll(field_a, field_b, coll) :
- mp_compare_scalar(field_a, field_b);
+ mp_compare_scalar_coll(field_a, field_b, coll) :
+ mp_compare_scalar(field_a, field_b);
case FIELD_TYPE_DECIMAL:
return mp_compare_decimal(field_a, field_b);
case FIELD_TYPE_UUID:
@@ -494,14 +484,14 @@ tuple_compare_field_with_type(const char *field_a, enum mp_type a_type,
return mp_compare_uint(field_a, field_b);
case FIELD_TYPE_STRING:
return coll != NULL ?
- mp_compare_str_coll(field_a, field_b, coll) :
- mp_compare_str(field_a, field_b);
+ mp_compare_str_coll(field_a, field_b, coll) :
+ mp_compare_str(field_a, field_b);
case FIELD_TYPE_INTEGER:
- return mp_compare_integer_with_type(field_a, a_type,
- field_b, b_type);
+ return mp_compare_integer_with_type(field_a, a_type, field_b,
+ b_type);
case FIELD_TYPE_NUMBER:
- return mp_compare_number_with_type(field_a, a_type,
- field_b, b_type);
+ return mp_compare_number_with_type(field_a, a_type, field_b,
+ b_type);
case FIELD_TYPE_DOUBLE:
return mp_compare_double(field_a, field_b);
case FIELD_TYPE_BOOLEAN:
@@ -510,12 +500,12 @@ tuple_compare_field_with_type(const char *field_a, enum mp_type a_type,
return mp_compare_bin(field_a, field_b);
case FIELD_TYPE_SCALAR:
return coll != NULL ?
- mp_compare_scalar_coll(field_a, field_b, coll) :
- mp_compare_scalar_with_type(field_a, a_type,
- field_b, b_type);
+ mp_compare_scalar_coll(field_a, field_b, coll) :
+ mp_compare_scalar_with_type(field_a, a_type,
+ field_b, b_type);
case FIELD_TYPE_DECIMAL:
- return mp_compare_number_with_type(field_a, a_type,
- field_b, b_type);
+ return mp_compare_number_with_type(field_a, a_type, field_b,
+ b_type);
case FIELD_TYPE_UUID:
return mp_compare_uuid(field_a, field_b);
default:
@@ -524,8 +514,8 @@ tuple_compare_field_with_type(const char *field_a, enum mp_type a_type,
}
}
-template<bool is_nullable, bool has_optional_parts, bool has_json_paths,
- bool is_multikey>
+template <bool is_nullable, bool has_optional_parts, bool has_json_paths,
+ bool is_multikey>
static inline int
tuple_compare_slowpath(struct tuple *tuple_a, hint_t tuple_a_hint,
struct tuple *tuple_b, hint_t tuple_b_hint,
@@ -536,8 +526,8 @@ tuple_compare_slowpath(struct tuple *tuple_a, hint_t tuple_a_hint,
assert(is_nullable == key_def->is_nullable);
assert(has_optional_parts == key_def->has_optional_parts);
assert(key_def->is_multikey == is_multikey);
- assert(!is_multikey || (tuple_a_hint != HINT_NONE &&
- tuple_b_hint != HINT_NONE));
+ assert(!is_multikey ||
+ (tuple_a_hint != HINT_NONE && tuple_b_hint != HINT_NONE));
int rc = 0;
if (!is_multikey && (rc = hint_cmp(tuple_a_hint, tuple_b_hint)) != 0)
return rc;
@@ -553,7 +543,7 @@ tuple_compare_slowpath(struct tuple *tuple_a, hint_t tuple_a_hint,
assert(!has_optional_parts);
mp_decode_array(&tuple_a_raw);
mp_decode_array(&tuple_b_raw);
- if (! is_nullable) {
+ if (!is_nullable) {
return tuple_compare_field(tuple_a_raw, tuple_b_raw,
part->type, part->coll);
}
@@ -563,7 +553,7 @@ tuple_compare_slowpath(struct tuple *tuple_a, hint_t tuple_a_hint,
return b_type == MP_NIL ? 0 : -1;
else if (b_type == MP_NIL)
return 1;
- return tuple_compare_field_with_type(tuple_a_raw, a_type,
+ return tuple_compare_field_with_type(tuple_a_raw, a_type,
tuple_b_raw, b_type,
part->type, part->coll);
}
@@ -604,7 +594,7 @@ tuple_compare_slowpath(struct tuple *tuple_a, hint_t tuple_a_hint,
}
assert(has_optional_parts ||
(field_a != NULL && field_b != NULL));
- if (! is_nullable) {
+ if (!is_nullable) {
rc = tuple_compare_field(field_a, field_b, part->type,
part->coll);
if (rc != 0)
@@ -681,8 +671,8 @@ tuple_compare_slowpath(struct tuple *tuple_a, hint_t tuple_a_hint,
return 0;
}
-template<bool is_nullable, bool has_optional_parts, bool has_json_paths,
- bool is_multikey>
+template <bool is_nullable, bool has_optional_parts, bool has_json_paths,
+ bool is_multikey>
static inline int
tuple_compare_with_key_slowpath(struct tuple *tuple, hint_t tuple_hint,
const char *key, uint32_t part_count,
@@ -695,8 +685,8 @@ tuple_compare_with_key_slowpath(struct tuple *tuple, hint_t tuple_hint,
assert(key != NULL || part_count == 0);
assert(part_count <= key_def->part_count);
assert(key_def->is_multikey == is_multikey);
- assert(!is_multikey || (tuple_hint != HINT_NONE &&
- key_hint == HINT_NONE));
+ assert(!is_multikey ||
+ (tuple_hint != HINT_NONE && key_hint == HINT_NONE));
int rc = 0;
if (!is_multikey && (rc = hint_cmp(tuple_hint, key_hint)) != 0)
return rc;
@@ -719,7 +709,7 @@ tuple_compare_with_key_slowpath(struct tuple *tuple, hint_t tuple_hint,
field = tuple_field_raw(format, tuple_raw, field_map,
part->fieldno);
}
- if (! is_nullable) {
+ if (!is_nullable) {
return tuple_compare_field(field, key, part->type,
part->coll);
}
@@ -754,7 +744,7 @@ tuple_compare_with_key_slowpath(struct tuple *tuple, hint_t tuple_hint,
field = tuple_field_raw(format, tuple_raw, field_map,
part->fieldno);
}
- if (! is_nullable) {
+ if (!is_nullable) {
rc = tuple_compare_field(field, key, part->type,
part->coll);
if (rc != 0)
@@ -783,7 +773,7 @@ tuple_compare_with_key_slowpath(struct tuple *tuple, hint_t tuple_hint,
return 0;
}
-template<bool is_nullable>
+template <bool is_nullable>
static inline int
key_compare_parts(const char *key_a, const char *key_b, uint32_t part_count,
struct key_def *key_def)
@@ -792,7 +782,7 @@ key_compare_parts(const char *key_a, const char *key_b, uint32_t part_count,
assert((key_a != NULL && key_b != NULL) || part_count == 0);
struct key_part *part = key_def->parts;
if (likely(part_count == 1)) {
- if (! is_nullable) {
+ if (!is_nullable) {
return tuple_compare_field(key_a, key_b, part->type,
part->coll);
}
@@ -813,7 +803,7 @@ key_compare_parts(const char *key_a, const char *key_b, uint32_t part_count,
struct key_part *end = part + part_count;
int rc;
for (; part < end; ++part, mp_next(&key_a), mp_next(&key_b)) {
- if (! is_nullable) {
+ if (!is_nullable) {
rc = tuple_compare_field(key_a, key_b, part->type,
part->coll);
if (rc != 0)
@@ -839,7 +829,7 @@ key_compare_parts(const char *key_a, const char *key_b, uint32_t part_count,
return 0;
}
-template<bool is_nullable, bool has_optional_parts>
+template <bool is_nullable, bool has_optional_parts>
static inline int
tuple_compare_with_key_sequential(struct tuple *tuple, hint_t tuple_hint,
const char *key, uint32_t part_count,
@@ -885,8 +875,8 @@ tuple_compare_with_key_sequential(struct tuple *tuple, hint_t tuple_hint,
}
int
-key_compare(const char *key_a, hint_t key_a_hint,
- const char *key_b, hint_t key_b_hint, struct key_def *key_def)
+key_compare(const char *key_a, hint_t key_a_hint, const char *key_b,
+ hint_t key_b_hint, struct key_def *key_def)
{
int rc = hint_cmp(key_a_hint, key_b_hint);
if (rc != 0)
@@ -897,7 +887,7 @@ key_compare(const char *key_a, hint_t key_a_hint,
assert(part_count_b <= key_def->part_count);
uint32_t part_count = MIN(part_count_a, part_count_b);
assert(part_count <= key_def->part_count);
- if (! key_def->is_nullable) {
+ if (!key_def->is_nullable) {
return key_compare_parts<false>(key_a, key_b, part_count,
key_def);
} else {
@@ -960,7 +950,7 @@ tuple_compare_sequential(struct tuple *tuple_a, hint_t tuple_a_hint,
if (!has_optional_parts || i < fc_b)
mp_next(&key_b);
}
- if (! was_null_met)
+ if (!was_null_met)
return 0;
end = key_def->parts + key_def->part_count;
for (; part < end; ++part, ++i, mp_next(&key_a), mp_next(&key_b)) {
@@ -970,8 +960,7 @@ tuple_compare_sequential(struct tuple *tuple_a, hint_t tuple_a_hint,
* not be absent or be null.
*/
assert(i < fc_a && i < fc_b);
- rc = tuple_compare_field(key_a, key_b, part->type,
- part->coll);
+ rc = tuple_compare_field(key_a, key_b, part->type, part->coll);
if (rc != 0)
return rc;
}
@@ -1036,20 +1025,17 @@ field_compare_and_next<FIELD_TYPE_STRING>(const char **field_a,
/* Tuple comparator */
namespace /* local symbols */ {
-template <int IDX, int TYPE, int ...MORE_TYPES> struct FieldCompare { };
+template <int IDX, int TYPE, int... MORE_TYPES> struct FieldCompare {};
/**
* Common case.
*/
-template <int IDX, int TYPE, int IDX2, int TYPE2, int ...MORE_TYPES>
-struct FieldCompare<IDX, TYPE, IDX2, TYPE2, MORE_TYPES...>
-{
- inline static int compare(struct tuple *tuple_a,
- struct tuple *tuple_b,
+template <int IDX, int TYPE, int IDX2, int TYPE2, int... MORE_TYPES>
+struct FieldCompare<IDX, TYPE, IDX2, TYPE2, MORE_TYPES...> {
+ inline static int compare(struct tuple *tuple_a, struct tuple *tuple_b,
struct tuple_format *format_a,
struct tuple_format *format_b,
- const char *field_a,
- const char *field_b)
+ const char *field_a, const char *field_b)
{
int r;
/* static if */
@@ -1067,21 +1053,15 @@ struct FieldCompare<IDX, TYPE, IDX2, TYPE2, MORE_TYPES...>
tuple_field_map(tuple_b),
IDX2);
}
- return FieldCompare<IDX2, TYPE2, MORE_TYPES...>::
- compare(tuple_a, tuple_b, format_a,
- format_b, field_a, field_b);
+ return FieldCompare<IDX2, TYPE2, MORE_TYPES...>::compare(
+ tuple_a, tuple_b, format_a, format_b, field_a, field_b);
}
};
-template <int IDX, int TYPE>
-struct FieldCompare<IDX, TYPE>
-{
- inline static int compare(struct tuple *,
- struct tuple *,
- struct tuple_format *,
- struct tuple_format *,
- const char *field_a,
- const char *field_b)
+template <int IDX, int TYPE> struct FieldCompare<IDX, TYPE> {
+ inline static int compare(struct tuple *, struct tuple *,
+ struct tuple_format *, struct tuple_format *,
+ const char *field_a, const char *field_b)
{
return field_compare<TYPE>(&field_a, &field_b);
}
@@ -1090,9 +1070,7 @@ struct FieldCompare<IDX, TYPE>
/**
* header
*/
-template <int IDX, int TYPE, int ...MORE_TYPES>
-struct TupleCompare
-{
+template <int IDX, int TYPE, int... MORE_TYPES> struct TupleCompare {
static int compare(struct tuple *tuple_a, hint_t tuple_a_hint,
struct tuple *tuple_b, hint_t tuple_b_hint,
struct key_def *)
@@ -1107,13 +1085,12 @@ struct TupleCompare
tuple_field_map(tuple_a), IDX);
field_b = tuple_field_raw(format_b, tuple_data(tuple_b),
tuple_field_map(tuple_b), IDX);
- return FieldCompare<IDX, TYPE, MORE_TYPES...>::
- compare(tuple_a, tuple_b, format_a,
- format_b, field_a, field_b);
+ return FieldCompare<IDX, TYPE, MORE_TYPES...>::compare(
+ tuple_a, tuple_b, format_a, format_b, field_a, field_b);
}
};
-template <int TYPE, int ...MORE_TYPES>
+template <int TYPE, int... MORE_TYPES>
struct TupleCompare<0, TYPE, MORE_TYPES...> {
static int compare(struct tuple *tuple_a, hint_t tuple_a_hint,
struct tuple *tuple_b, hint_t tuple_b_hint,
@@ -1128,8 +1105,8 @@ struct TupleCompare<0, TYPE, MORE_TYPES...> {
const char *field_b = tuple_data(tuple_b);
mp_decode_array(&field_a);
mp_decode_array(&field_b);
- return FieldCompare<0, TYPE, MORE_TYPES...>::compare(tuple_a, tuple_b,
- format_a, format_b, field_a, field_b);
+ return FieldCompare<0, TYPE, MORE_TYPES...>::compare(
+ tuple_a, tuple_b, format_a, format_b, field_a, field_b);
}
};
} /* end of anonymous namespace */
@@ -1170,11 +1147,13 @@ static const comparator_signature cmp_arr[] = {
/* {{{ tuple_compare_with_key */
template <int TYPE>
-static inline int field_compare_with_key(const char **field, const char **key);
+static inline int
+field_compare_with_key(const char **field, const char **key);
template <>
inline int
-field_compare_with_key<FIELD_TYPE_UNSIGNED>(const char **field, const char **key)
+field_compare_with_key<FIELD_TYPE_UNSIGNED>(const char **field,
+ const char **key)
{
return mp_compare_uint(*field, *key);
}
@@ -1210,7 +1189,7 @@ field_compare_with_key_and_next<FIELD_TYPE_UNSIGNED>(const char **field_a,
template <>
inline int
field_compare_with_key_and_next<FIELD_TYPE_STRING>(const char **field_a,
- const char **field_b)
+ const char **field_b)
{
uint32_t size_a, size_b;
size_a = mp_decode_strl(field_a);
@@ -1226,18 +1205,17 @@ field_compare_with_key_and_next<FIELD_TYPE_STRING>(const char **field_a,
/* Tuple with key comparator */
namespace /* local symbols */ {
-template <int FLD_ID, int IDX, int TYPE, int ...MORE_TYPES>
+template <int FLD_ID, int IDX, int TYPE, int... MORE_TYPES>
struct FieldCompareWithKey {};
/**
* common
*/
-template <int FLD_ID, int IDX, int TYPE, int IDX2, int TYPE2, int ...MORE_TYPES>
-struct FieldCompareWithKey<FLD_ID, IDX, TYPE, IDX2, TYPE2, MORE_TYPES...>
-{
- inline static int
- compare(struct tuple *tuple, const char *key, uint32_t part_count,
- struct key_def *key_def, struct tuple_format *format,
- const char *field)
+template <int FLD_ID, int IDX, int TYPE, int IDX2, int TYPE2, int... MORE_TYPES>
+struct FieldCompareWithKey<FLD_ID, IDX, TYPE, IDX2, TYPE2, MORE_TYPES...> {
+ inline static int compare(struct tuple *tuple, const char *key,
+ uint32_t part_count, struct key_def *key_def,
+ struct tuple_format *format,
+ const char *field)
{
int r;
/* static if */
@@ -1253,19 +1231,19 @@ struct FieldCompareWithKey<FLD_ID, IDX, TYPE, IDX2, TYPE2, MORE_TYPES...>
tuple_field_map(tuple), IDX2);
mp_next(&key);
}
- return FieldCompareWithKey<FLD_ID + 1, IDX2, TYPE2, MORE_TYPES...>::
- compare(tuple, key, part_count,
- key_def, format, field);
+ return FieldCompareWithKey<FLD_ID + 1, IDX2, TYPE2,
+ MORE_TYPES...>::compare(tuple, key,
+ part_count,
+ key_def,
+ format,
+ field);
}
};
template <int FLD_ID, int IDX, int TYPE>
struct FieldCompareWithKey<FLD_ID, IDX, TYPE> {
- inline static int compare(struct tuple *,
- const char *key,
- uint32_t,
- struct key_def *,
- struct tuple_format *,
+ inline static int compare(struct tuple *, const char *key, uint32_t,
+ struct key_def *, struct tuple_format *,
const char *field)
{
return field_compare_with_key<TYPE>(&field, &key);
@@ -1275,13 +1253,11 @@ struct FieldCompareWithKey<FLD_ID, IDX, TYPE> {
/**
* header
*/
-template <int FLD_ID, int IDX, int TYPE, int ...MORE_TYPES>
-struct TupleCompareWithKey
-{
- static int
- compare(struct tuple *tuple, hint_t tuple_hint,
- const char *key, uint32_t part_count,
- hint_t key_hint, struct key_def *key_def)
+template <int FLD_ID, int IDX, int TYPE, int... MORE_TYPES>
+struct TupleCompareWithKey {
+ static int compare(struct tuple *tuple, hint_t tuple_hint,
+ const char *key, uint32_t part_count,
+ hint_t key_hint, struct key_def *key_def)
{
/* Part count can be 0 in wildcard searches. */
if (part_count == 0)
@@ -1290,18 +1266,19 @@ struct TupleCompareWithKey
if (rc != 0)
return rc;
struct tuple_format *format = tuple_format(tuple);
- const char *field = tuple_field_raw(format, tuple_data(tuple),
- tuple_field_map(tuple),
- IDX);
- return FieldCompareWithKey<FLD_ID, IDX, TYPE, MORE_TYPES...>::
- compare(tuple, key, part_count,
- key_def, format, field);
+ const char *field = tuple_field_raw(
+ format, tuple_data(tuple), tuple_field_map(tuple), IDX);
+ return FieldCompareWithKey<FLD_ID, IDX, TYPE,
+ MORE_TYPES...>::compare(tuple, key,
+ part_count,
+ key_def,
+ format,
+ field);
}
};
-template <int TYPE, int ...MORE_TYPES>
-struct TupleCompareWithKey<0, 0, TYPE, MORE_TYPES...>
-{
+template <int TYPE, int... MORE_TYPES>
+struct TupleCompareWithKey<0, 0, TYPE, MORE_TYPES...> {
static int compare(struct tuple *tuple, hint_t tuple_hint,
const char *key, uint32_t part_count,
hint_t key_hint, struct key_def *key_def)
@@ -1315,16 +1292,14 @@ struct TupleCompareWithKey<0, 0, TYPE, MORE_TYPES...>
struct tuple_format *format = tuple_format(tuple);
const char *field = tuple_data(tuple);
mp_decode_array(&field);
- return FieldCompareWithKey<0, 0, TYPE, MORE_TYPES...>::
- compare(tuple, key, part_count,
- key_def, format, field);
+ return FieldCompareWithKey<0, 0, TYPE, MORE_TYPES...>::compare(
+ tuple, key, part_count, key_def, format, field);
}
};
} /* end of anonymous namespace */
-struct comparator_with_key_signature
-{
+struct comparator_with_key_signature {
tuple_compare_with_key_t f;
uint32_t p[64];
};
@@ -1360,7 +1335,7 @@ static const comparator_with_key_signature cmp_wk_arr[] = {
* and the primary key. So its tail parts are taken from primary
* index key definition.
*/
-template<bool is_nullable>
+template <bool is_nullable>
static inline int
func_index_compare(struct tuple *tuple_a, hint_t tuple_a_hint,
struct tuple *tuple_b, hint_t tuple_b_hint,
@@ -1418,13 +1393,14 @@ func_index_compare(struct tuple *tuple_a, hint_t tuple_a_hint,
* functional key memory and is compared with the given key by
* using the functional index key definition.
*/
-template<bool is_nullable>
+template <bool is_nullable>
static inline int
func_index_compare_with_key(struct tuple *tuple, hint_t tuple_hint,
const char *key, uint32_t part_count,
hint_t key_hint, struct key_def *key_def)
{
- (void)tuple; (void)key_hint;
+ (void)tuple;
+ (void)key_hint;
assert(key_def->for_func_index);
assert(is_nullable == key_def->is_nullable);
const char *tuple_key = (const char *)tuple_hint;
@@ -1497,30 +1473,30 @@ func_index_compare_with_key(struct tuple *tuple, hint_t tuple_hint,
* Note: comparison hint only makes sense for non-multikey
* indexes.
*/
-#define HINT_BITS (sizeof(hint_t) * CHAR_BIT)
-#define HINT_CLASS_BITS 4
-#define HINT_VALUE_BITS (HINT_BITS - HINT_CLASS_BITS)
+#define HINT_BITS (sizeof(hint_t) * CHAR_BIT)
+#define HINT_CLASS_BITS 4
+#define HINT_VALUE_BITS (HINT_BITS - HINT_CLASS_BITS)
/** Number of bytes that fit in a hint value. */
-#define HINT_VALUE_BYTES (HINT_VALUE_BITS / CHAR_BIT)
+#define HINT_VALUE_BYTES (HINT_VALUE_BITS / CHAR_BIT)
/** Max unsigned integer that can be stored in a hint value. */
-#define HINT_VALUE_MAX ((1ULL << HINT_VALUE_BITS) - 1)
+#define HINT_VALUE_MAX ((1ULL << HINT_VALUE_BITS) - 1)
/**
* Max and min signed integer numbers that fit in a hint value.
* For numbers > MAX and < MIN we store MAX and MIN, respectively.
*/
-#define HINT_VALUE_INT_MAX ((1LL << (HINT_VALUE_BITS - 1)) - 1)
-#define HINT_VALUE_INT_MIN (-(1LL << (HINT_VALUE_BITS - 1)))
+#define HINT_VALUE_INT_MAX ((1LL << (HINT_VALUE_BITS - 1)) - 1)
+#define HINT_VALUE_INT_MIN (-(1LL << (HINT_VALUE_BITS - 1)))
/**
* Max and min floating point numbers whose integral parts fit
* in a hint value. Note, we can't compare a floating point number
* with HINT_VALUE_INT_{MIN,MAX} because of rounding errors.
*/
-#define HINT_VALUE_DOUBLE_MAX (exp2(HINT_VALUE_BITS - 1) - 1)
-#define HINT_VALUE_DOUBLE_MIN (-exp2(HINT_VALUE_BITS - 1))
+#define HINT_VALUE_DOUBLE_MAX (exp2(HINT_VALUE_BITS - 1) - 1)
+#define HINT_VALUE_DOUBLE_MIN (-exp2(HINT_VALUE_BITS - 1))
/*
* HINT_CLASS_BITS should be big enough to store any mp_class value.
@@ -1552,7 +1528,8 @@ static inline hint_t
hint_uint(uint64_t u)
{
uint64_t val = (u >= (uint64_t)HINT_VALUE_INT_MAX ?
- HINT_VALUE_MAX : u - HINT_VALUE_INT_MIN);
+ HINT_VALUE_MAX :
+ u - HINT_VALUE_INT_MIN);
return hint_create(MP_CLASS_NUMBER, val);
}
@@ -1586,8 +1563,8 @@ hint_decimal(decimal_t *dec)
{
uint64_t val = 0;
int64_t num;
- if (decimal_to_int64(dec, &num) &&
- num >= HINT_VALUE_INT_MIN && num <= HINT_VALUE_INT_MAX) {
+ if (decimal_to_int64(dec, &num) && num >= HINT_VALUE_INT_MIN &&
+ num <= HINT_VALUE_INT_MAX) {
val = num - HINT_VALUE_INT_MIN;
} else if (!(dec->bits & DECNEG)) {
val = HINT_VALUE_MAX;
@@ -1697,13 +1674,11 @@ field_hint_number(const char *field)
return hint_double(mp_decode_float(&field));
case MP_DOUBLE:
return hint_double(mp_decode_double(&field));
- case MP_EXT:
- {
+ case MP_EXT: {
int8_t ext_type;
uint32_t len = mp_decode_extl(&field, &ext_type);
switch (ext_type) {
- case MP_DECIMAL:
- {
+ case MP_DECIMAL: {
decimal_t dec;
return hint_decimal(decimal_unpack(&field, len, &dec));
}
@@ -1724,8 +1699,7 @@ field_hint_decimal(const char *field)
int8_t ext_type;
uint32_t len = mp_decode_extl(&field, &ext_type);
switch (ext_type) {
- case MP_DECIMAL:
- {
+ case MP_DECIMAL: {
decimal_t dec;
return hint_decimal(decimal_unpack(&field, len, &dec));
}
@@ -1751,7 +1725,7 @@ field_hint_string(const char *field, struct coll *coll)
assert(mp_typeof(*field) == MP_STR);
uint32_t len = mp_decode_strl(&field);
return coll == NULL ? hint_str(field, len) :
- hint_str_coll(field, len, coll);
+ hint_str_coll(field, len, coll);
}
static inline hint_t
@@ -1766,7 +1740,7 @@ static inline hint_t
field_hint_scalar(const char *field, struct coll *coll)
{
uint32_t len;
- switch(mp_typeof(*field)) {
+ switch (mp_typeof(*field)) {
case MP_BOOL:
return hint_bool(mp_decode_bool(&field));
case MP_UINT:
@@ -1780,17 +1754,15 @@ field_hint_scalar(const char *field, struct coll *coll)
case MP_STR:
len = mp_decode_strl(&field);
return coll == NULL ? hint_str(field, len) :
- hint_str_coll(field, len, coll);
+ hint_str_coll(field, len, coll);
case MP_BIN:
len = mp_decode_binl(&field);
return hint_bin(field, len);
- case MP_EXT:
- {
+ case MP_EXT: {
int8_t ext_type;
uint32_t len = mp_decode_extl(&field, &ext_type);
switch (ext_type) {
- case MP_DECIMAL:
- {
+ case MP_DECIMAL: {
decimal_t dec;
return hint_decimal(decimal_unpack(&field, len, &dec));
}
@@ -1852,8 +1824,8 @@ static hint_t
tuple_hint(struct tuple *tuple, struct key_def *key_def)
{
assert(!key_def->is_multikey);
- const char *field = tuple_field_by_part(tuple, key_def->parts,
- MULTIKEY_NONE);
+ const char *field =
+ tuple_field_by_part(tuple, key_def->parts, MULTIKEY_NONE);
if (is_nullable && field == NULL)
return hint_nil();
return field_hint<type, is_nullable>(field, key_def->parts->coll);
@@ -1862,9 +1834,9 @@ tuple_hint(struct tuple *tuple, struct key_def *key_def)
static hint_t
key_hint_stub(const char *key, uint32_t part_count, struct key_def *key_def)
{
- (void) key;
- (void) part_count;
- (void) key_def;
+ (void)key;
+ (void)part_count;
+ (void)key_def;
/*
* Multikey hint for tuple is an index of the key in
* array, it always must be defined. While
@@ -1882,13 +1854,13 @@ key_hint_stub(const char *key, uint32_t part_count, struct key_def *key_def)
static hint_t
key_hint_stub(struct tuple *tuple, struct key_def *key_def)
{
- (void) tuple;
- (void) key_def;
+ (void)tuple;
+ (void)key_def;
unreachable();
return HINT_NONE;
}
-template<enum field_type type, bool is_nullable>
+template <enum field_type type, bool is_nullable>
static void
key_def_set_hint_func(struct key_def *def)
{
@@ -1896,7 +1868,7 @@ key_def_set_hint_func(struct key_def *def)
def->tuple_hint = tuple_hint<type, is_nullable>;
}
-template<enum field_type type>
+template <enum field_type type>
static void
key_def_set_hint_func(struct key_def *def)
{
@@ -1996,57 +1968,63 @@ key_def_set_compare_func_fast(struct key_def *def)
}
if (cmp == NULL) {
cmp = is_sequential ?
- tuple_compare_sequential<false, false> :
- tuple_compare_slowpath<false, false, false, false>;
+ tuple_compare_sequential<false, false> :
+ tuple_compare_slowpath<false, false, false, false>;
}
if (cmp_wk == NULL) {
- cmp_wk = is_sequential ?
- tuple_compare_with_key_sequential<false, false> :
- tuple_compare_with_key_slowpath<false, false,
- false, false>;
+ cmp_wk =
+ is_sequential ?
+ tuple_compare_with_key_sequential<false, false> :
+ tuple_compare_with_key_slowpath<false, false,
+ false, false>;
}
def->tuple_compare = cmp;
def->tuple_compare_with_key = cmp_wk;
}
-template<bool is_nullable, bool has_optional_parts>
+template <bool is_nullable, bool has_optional_parts>
static void
key_def_set_compare_func_plain(struct key_def *def)
{
assert(!def->has_json_paths);
if (key_def_is_sequential(def)) {
- def->tuple_compare = tuple_compare_sequential
- <is_nullable, has_optional_parts>;
- def->tuple_compare_with_key = tuple_compare_with_key_sequential
- <is_nullable, has_optional_parts>;
+ def->tuple_compare =
+ tuple_compare_sequential<is_nullable,
+ has_optional_parts>;
+ def->tuple_compare_with_key =
+ tuple_compare_with_key_sequential<is_nullable,
+ has_optional_parts>;
} else {
- def->tuple_compare = tuple_compare_slowpath
- <is_nullable, has_optional_parts, false, false>;
- def->tuple_compare_with_key = tuple_compare_with_key_slowpath
- <is_nullable, has_optional_parts, false, false>;
+ def->tuple_compare =
+ tuple_compare_slowpath<is_nullable, has_optional_parts,
+ false, false>;
+ def->tuple_compare_with_key = tuple_compare_with_key_slowpath<
+ is_nullable, has_optional_parts, false, false>;
}
}
-template<bool is_nullable, bool has_optional_parts>
+template <bool is_nullable, bool has_optional_parts>
static void
key_def_set_compare_func_json(struct key_def *def)
{
assert(def->has_json_paths);
if (def->is_multikey) {
- def->tuple_compare = tuple_compare_slowpath
- <is_nullable, has_optional_parts, true, true>;
- def->tuple_compare_with_key = tuple_compare_with_key_slowpath
- <is_nullable, has_optional_parts, true, true>;
+ def->tuple_compare =
+ tuple_compare_slowpath<is_nullable, has_optional_parts,
+ true, true>;
+ def->tuple_compare_with_key = tuple_compare_with_key_slowpath<
+ is_nullable, has_optional_parts, true, true>;
} else {
- def->tuple_compare = tuple_compare_slowpath
- <is_nullable, has_optional_parts, true, false>;
- def->tuple_compare_with_key = tuple_compare_with_key_slowpath
- <is_nullable, has_optional_parts, true, false>;
+ def->tuple_compare =
+ tuple_compare_slowpath<is_nullable, has_optional_parts,
+ true, false>;
+ def->tuple_compare_with_key = tuple_compare_with_key_slowpath<
+ is_nullable, has_optional_parts, true, false>;
}
}
-template<bool is_nullable>
+template <bool is_nullable>
static void
key_def_set_compare_func_for_func_index(struct key_def *def)
{
@@ -2063,8 +2041,8 @@ key_def_set_compare_func(struct key_def *def)
key_def_set_compare_func_for_func_index<true>(def);
else
key_def_set_compare_func_for_func_index<false>(def);
- } else if (!key_def_has_collation(def) &&
- !def->is_nullable && !def->has_json_paths) {
+ } else if (!key_def_has_collation(def) && !def->is_nullable &&
+ !def->has_json_paths) {
key_def_set_compare_func_fast(def);
} else if (!def->has_json_paths) {
if (def->is_nullable && def->has_optional_parts) {
diff --git a/src/box/tuple_convert.c b/src/box/tuple_convert.c
index 5cc268a..256a3d6 100644
--- a/src/box/tuple_convert.c
+++ b/src/box/tuple_convert.c
@@ -52,10 +52,10 @@ tuple_to_obuf(struct tuple *tuple, struct obuf *buf)
int
append_output(void *arg, unsigned char *buf, size_t len)
{
- (void) arg;
+ (void)arg;
char *buf_out = region_alloc(&fiber()->gc, len + 1);
if (!buf_out) {
- diag_set(OutOfMemory, len , "region", "tuple_to_yaml");
+ diag_set(OutOfMemory, len, "region", "tuple_to_yaml");
return 0;
}
memcpy(buf_out, buf, len);
@@ -71,8 +71,9 @@ encode_table(yaml_emitter_t *emitter, const char **data)
{
yaml_event_t ev;
yaml_mapping_style_t yaml_style = YAML_FLOW_MAPPING_STYLE;
- if (!yaml_mapping_start_event_initialize(&ev, NULL, NULL, 0, yaml_style)
- || !yaml_emitter_emit(emitter, &ev)) {
+ if (!yaml_mapping_start_event_initialize(&ev, NULL, NULL, 0,
+ yaml_style) ||
+ !yaml_emitter_emit(emitter, &ev)) {
diag_set(SystemError, "failed to init event libyaml");
return 0;
}
@@ -94,15 +95,14 @@ encode_table(yaml_emitter_t *emitter, const char **data)
return 1;
}
-
static int
encode_array(yaml_emitter_t *emitter, const char **data)
{
yaml_event_t ev;
yaml_sequence_style_t yaml_style = YAML_FLOW_SEQUENCE_STYLE;
if (!yaml_sequence_start_event_initialize(&ev, NULL, NULL, 0,
- yaml_style) ||
- !yaml_emitter_emit(emitter, &ev)) {
+ yaml_style) ||
+ !yaml_emitter_emit(emitter, &ev)) {
diag_set(SystemError, "failed to init event libyaml");
return 0;
}
@@ -110,7 +110,7 @@ encode_array(yaml_emitter_t *emitter, const char **data)
uint32_t size = mp_decode_array(data);
for (uint32_t i = 0; i < size; i++) {
if (!encode_node(emitter, data))
- return 0;
+ return 0;
}
if (!yaml_sequence_end_event_initialize(&ev) ||
@@ -136,16 +136,16 @@ encode_node(yaml_emitter_t *emitter, const char **data)
yaml_scalar_style_t style = YAML_PLAIN_SCALAR_STYLE;
char buf[FPCONV_G_FMT_BUFSIZE];
int type = mp_typeof(**data);
- switch(type) {
+ switch (type) {
case MP_UINT:
len = snprintf(buf, sizeof(buf), "%llu",
- (unsigned long long) mp_decode_uint(data));
+ (unsigned long long)mp_decode_uint(data));
buf[len] = 0;
str = buf;
break;
case MP_INT:
len = snprintf(buf, sizeof(buf), "%lld",
- (long long) mp_decode_int(data));
+ (long long)mp_decode_int(data));
buf[len] = 0;
str = buf;
break;
@@ -177,7 +177,7 @@ encode_node(yaml_emitter_t *emitter, const char **data)
style = YAML_ANY_SCALAR_STYLE;
/* Binary or not UTF8 */
binlen = base64_bufsize(len, 0);
- bin = (char *) malloc(binlen);
+ bin = (char *)malloc(binlen);
if (bin == NULL) {
diag_set(OutOfMemory, binlen, "malloc",
"tuple_to_yaml");
@@ -186,7 +186,7 @@ encode_node(yaml_emitter_t *emitter, const char **data)
binlen = base64_encode(str, len, bin, binlen, 0);
str = bin;
len = binlen;
- tag = (yaml_char_t *) LUAYAML_TAG_PREFIX "binary";
+ tag = (yaml_char_t *)LUAYAML_TAG_PREFIX "binary";
break;
case MP_BOOL:
if (mp_decode_bool(data)) {
@@ -266,7 +266,7 @@ tuple_to_yaml(struct tuple *tuple)
yaml_emitter_delete(&emitter);
size_t total_len = region_used(&fiber()->gc) - used;
- char *buf = (char *) region_join(&fiber()->gc, total_len);
+ char *buf = (char *)region_join(&fiber()->gc, total_len);
if (buf == NULL) {
diag_set(OutOfMemory, total_len, "region", "tuple_to_yaml");
return NULL;
diff --git a/src/box/tuple_dictionary.c b/src/box/tuple_dictionary.c
index a8ea13a..4998bac 100644
--- a/src/box/tuple_dictionary.c
+++ b/src/box/tuple_dictionary.c
@@ -52,8 +52,8 @@ struct mh_strnu32_node_t {
#define mh_arg_t void *
#define mh_hash(a, arg) ((a)->hash)
#define mh_hash_key(a, arg) mh_hash(a, arg)
-#define mh_cmp(a, b, arg) ((a)->len != (b)->len || \
- memcmp((a)->str, (b)->str, (a)->len))
+#define mh_cmp(a, b, arg) \
+ ((a)->len != (b)->len || memcmp((a)->str, (b)->str, (a)->len))
#define mh_cmp_key(a, b, arg) mh_cmp(a, b, arg)
#define MH_SOURCE 1
#include "salad/mhash.h" /* Create mh_strnu32_t hash. */
@@ -100,22 +100,18 @@ tuple_dictionary_set_name(struct tuple_dictionary *dict, const char *name,
{
assert(fieldno < dict->name_count);
uint32_t name_hash = field_name_hash(name, name_len);
- struct mh_strnu32_key_t key = {
- name, name_len, name_hash
- };
+ struct mh_strnu32_key_t key = { name, name_len, name_hash };
mh_int_t rc = mh_strnu32_find(dict->hash, &key, NULL);
if (rc != mh_end(dict->hash)) {
- diag_set(ClientError, ER_SPACE_FIELD_IS_DUPLICATE,
- name);
+ diag_set(ClientError, ER_SPACE_FIELD_IS_DUPLICATE, name);
return -1;
}
- struct mh_strnu32_node_t name_node = {
- name, name_len, name_hash, fieldno
- };
+ struct mh_strnu32_node_t name_node = { name, name_len, name_hash,
+ fieldno };
rc = mh_strnu32_put(dict->hash, &name_node, NULL, NULL);
/* Memory was reserved in new(). */
assert(rc != mh_end(dict->hash));
- (void) rc;
+ (void)rc;
return 0;
}
@@ -125,8 +121,7 @@ tuple_dictionary_new(const struct field_def *fields, uint32_t field_count)
struct tuple_dictionary *dict =
(struct tuple_dictionary *)calloc(1, sizeof(*dict));
if (dict == NULL) {
- diag_set(OutOfMemory, sizeof(*dict), "malloc",
- "dict");
+ diag_set(OutOfMemory, sizeof(*dict), "malloc", "dict");
return NULL;
}
dict->refs = 1;
@@ -137,24 +132,24 @@ tuple_dictionary_new(const struct field_def *fields, uint32_t field_count)
uint32_t total = names_offset;
for (uint32_t i = 0; i < field_count; ++i)
total += strlen(fields[i].name) + 1;
- dict->names = (char **) malloc(total);
+ dict->names = (char **)malloc(total);
if (dict->names == NULL) {
diag_set(OutOfMemory, total, "malloc", "dict->names");
goto err_memory;
}
dict->hash = mh_strnu32_new();
if (dict->hash == NULL) {
- diag_set(OutOfMemory, sizeof(*dict->hash),
- "mh_strnu32_new", "dict->hash");
+ diag_set(OutOfMemory, sizeof(*dict->hash), "mh_strnu32_new",
+ "dict->hash");
goto err_hash;
}
if (mh_strnu32_reserve(dict->hash, field_count, NULL) != 0) {
- diag_set(OutOfMemory, field_count *
- sizeof(struct mh_strnu32_node_t), "mh_strnu32_reserve",
- "dict->hash");
+ diag_set(OutOfMemory,
+ field_count * sizeof(struct mh_strnu32_node_t),
+ "mh_strnu32_reserve", "dict->hash");
goto err_name;
}
- char *pos = (char *) dict->names + names_offset;
+ char *pos = (char *)dict->names + names_offset;
for (uint32_t i = 0; i < field_count; ++i) {
int len = strlen(fields[i].name);
memcpy(pos, fields[i].name, len);
@@ -208,7 +203,7 @@ tuple_fieldno_by_name(struct tuple_dictionary *dict, const char *name,
struct mh_strnu32_t *hash = dict->hash;
if (hash == NULL)
return -1;
- struct mh_strnu32_key_t key = {name, name_len, name_hash};
+ struct mh_strnu32_key_t key = { name, name_len, name_hash };
mh_int_t rc = mh_strnu32_find(hash, &key, NULL);
if (rc == mh_end(hash))
return -1;
diff --git a/src/box/tuple_extract_key.cc b/src/box/tuple_extract_key.cc
index c1ad392..7bb48a8 100644
--- a/src/box/tuple_extract_key.cc
+++ b/src/box/tuple_extract_key.cc
@@ -64,10 +64,10 @@ tuple_extract_key_sequential_raw(const char *data, const char *data_end,
assert(field_end - field_start <= data_end - data);
bsize += field_end - field_start;
- char *key = (char *) region_alloc(&fiber()->gc, bsize);
+ char *key = (char *)region_alloc(&fiber()->gc, bsize);
if (key == NULL) {
diag_set(OutOfMemory, bsize, "region",
- "tuple_extract_key_raw_sequential");
+ "tuple_extract_key_raw_sequential");
return NULL;
}
char *key_buf = mp_encode_array(key, key_def->part_count);
@@ -96,11 +96,8 @@ tuple_extract_key_sequential(struct tuple *tuple, struct key_def *key_def,
assert(has_optional_parts == key_def->has_optional_parts);
const char *data = tuple_data(tuple);
const char *data_end = data + tuple->bsize;
- return tuple_extract_key_sequential_raw<has_optional_parts>(data,
- data_end,
- key_def,
- multikey_idx,
- key_size);
+ return tuple_extract_key_sequential_raw<has_optional_parts>(
+ data, data_end, key_def, multikey_idx, key_size);
}
/**
@@ -156,8 +153,8 @@ tuple_extract_key_slowpath(struct tuple *tuple, struct key_def *key_def,
* minimize tuple_field_raw() calls.
*/
for (; i < part_count - 1; i++) {
- if (!key_def_parts_are_sequential
- <has_json_paths>(key_def, i)) {
+ if (!key_def_parts_are_sequential<
+ has_json_paths>(key_def, i)) {
/*
* End of sequential part.
*/
@@ -176,7 +173,7 @@ tuple_extract_key_slowpath(struct tuple *tuple, struct key_def *key_def,
bsize += end - field;
}
- char *key = (char *) region_alloc(&fiber()->gc, bsize);
+ char *key = (char *)region_alloc(&fiber()->gc, bsize);
if (key == NULL) {
diag_set(OutOfMemory, bsize, "region", "tuple_extract_key");
return NULL;
@@ -208,8 +205,8 @@ tuple_extract_key_slowpath(struct tuple *tuple, struct key_def *key_def,
* minimize tuple_field_raw() calls.
*/
for (; i < part_count - 1; i++) {
- if (!key_def_parts_are_sequential
- <has_json_paths>(key_def, i)) {
+ if (!key_def_parts_are_sequential<
+ has_json_paths>(key_def, i)) {
/*
* End of sequential part.
*/
@@ -255,7 +252,7 @@ tuple_extract_key_slowpath_raw(const char *data, const char *data_end,
assert(!key_def->for_func_index);
assert(mp_sizeof_nil() == 1);
/* allocate buffer with maximal possible size */
- char *key = (char *) region_alloc(&fiber()->gc, data_end - data);
+ char *key = (char *)region_alloc(&fiber()->gc, data_end - data);
if (key == NULL) {
diag_set(OutOfMemory, data_end - data, "region",
"tuple_extract_key_raw");
@@ -268,7 +265,7 @@ tuple_extract_key_slowpath_raw(const char *data, const char *data_end,
* A tuple can not be empty - at least a pk always exists.
*/
assert(field_count > 0);
- (void) field_count;
+ (void)field_count;
const char *field0_end = field0;
mp_next(&field0_end);
const char *field = field0;
@@ -278,8 +275,8 @@ tuple_extract_key_slowpath_raw(const char *data, const char *data_end,
uint32_t fieldno = key_def->parts[i].fieldno;
uint32_t null_count = 0;
for (; i < key_def->part_count - 1; i++) {
- if (!key_def_parts_are_sequential
- <has_json_paths>(key_def, i))
+ if (!key_def_parts_are_sequential<has_json_paths>(
+ key_def, i))
break;
}
const struct key_part *part = &key_def->parts[i];
@@ -363,7 +360,7 @@ tuple_extract_key_slowpath_raw(const char *data, const char *data_end,
/**
* Initialize tuple_extract_key() and tuple_extract_key_raw()
*/
-template<bool contains_sequential_parts, bool has_optional_parts>
+template <bool contains_sequential_parts, bool has_optional_parts>
static void
key_def_set_extract_func_plain(struct key_def *def)
{
@@ -372,43 +369,50 @@ key_def_set_extract_func_plain(struct key_def *def)
assert(!def->for_func_index);
if (key_def_is_sequential(def)) {
assert(contains_sequential_parts || def->part_count == 1);
- def->tuple_extract_key = tuple_extract_key_sequential
- <has_optional_parts>;
- def->tuple_extract_key_raw = tuple_extract_key_sequential_raw
- <has_optional_parts>;
+ def->tuple_extract_key =
+ tuple_extract_key_sequential<has_optional_parts>;
+ def->tuple_extract_key_raw =
+ tuple_extract_key_sequential_raw<has_optional_parts>;
} else {
- def->tuple_extract_key = tuple_extract_key_slowpath
- <contains_sequential_parts,
- has_optional_parts, false, false>;
- def->tuple_extract_key_raw = tuple_extract_key_slowpath_raw
- <has_optional_parts, false>;
+ def->tuple_extract_key =
+ tuple_extract_key_slowpath<contains_sequential_parts,
+ has_optional_parts, false,
+ false>;
+ def->tuple_extract_key_raw =
+ tuple_extract_key_slowpath_raw<has_optional_parts,
+ false>;
}
}
-template<bool contains_sequential_parts, bool has_optional_parts>
+template <bool contains_sequential_parts, bool has_optional_parts>
static void
key_def_set_extract_func_json(struct key_def *def)
{
assert(def->has_json_paths);
assert(!def->for_func_index);
if (def->is_multikey) {
- def->tuple_extract_key = tuple_extract_key_slowpath
- <contains_sequential_parts,
- has_optional_parts, true, true>;
+ def->tuple_extract_key =
+ tuple_extract_key_slowpath<contains_sequential_parts,
+ has_optional_parts, true,
+ true>;
} else {
- def->tuple_extract_key = tuple_extract_key_slowpath
- <contains_sequential_parts,
- has_optional_parts, true, false>;
+ def->tuple_extract_key =
+ tuple_extract_key_slowpath<contains_sequential_parts,
+ has_optional_parts, true,
+ false>;
}
- def->tuple_extract_key_raw = tuple_extract_key_slowpath_raw
- <has_optional_parts, true>;
+ def->tuple_extract_key_raw =
+ tuple_extract_key_slowpath_raw<has_optional_parts, true>;
}
static char *
tuple_extract_key_stub(struct tuple *tuple, struct key_def *key_def,
- int multikey_idx, uint32_t *key_size)
+ int multikey_idx, uint32_t *key_size)
{
- (void)tuple; (void)key_def; (void)multikey_idx; (void)key_size;
+ (void)tuple;
+ (void)key_def;
+ (void)multikey_idx;
+ (void)key_size;
unreachable();
return NULL;
}
@@ -418,8 +422,11 @@ tuple_extract_key_raw_stub(const char *data, const char *data_end,
struct key_def *key_def, int multikey_idx,
uint32_t *key_size)
{
- (void)data; (void)data_end;
- (void)key_def; (void)multikey_idx; (void)key_size;
+ (void)data;
+ (void)data_end;
+ (void)key_def;
+ (void)multikey_idx;
+ (void)key_size;
unreachable();
return NULL;
}
@@ -467,9 +474,8 @@ tuple_key_contains_null(struct tuple *tuple, struct key_def *def,
const uint32_t *field_map = tuple_field_map(tuple);
for (struct key_part *part = def->parts, *end = part + def->part_count;
part < end; ++part) {
- const char *field = tuple_field_raw_by_part(format, data,
- field_map, part,
- multikey_idx);
+ const char *field = tuple_field_raw_by_part(
+ format, data, field_map, part, multikey_idx);
if (field == NULL || mp_typeof(*field) == MP_NIL)
return true;
}
@@ -482,8 +488,8 @@ tuple_validate_key_parts(struct key_def *key_def, struct tuple *tuple)
assert(!key_def->is_multikey);
for (uint32_t idx = 0; idx < key_def->part_count; idx++) {
struct key_part *part = &key_def->parts[idx];
- const char *field = tuple_field_by_part(tuple, part,
- MULTIKEY_NONE);
+ const char *field =
+ tuple_field_by_part(tuple, part, MULTIKEY_NONE);
if (field == NULL) {
if (key_part_is_nullable(part))
continue;
diff --git a/src/box/tuple_format.c b/src/box/tuple_format.c
index 9b817d3..7545315 100644
--- a/src/box/tuple_format.c
+++ b/src/box/tuple_format.c
@@ -55,18 +55,16 @@ tuple_format1_field_by_format2_field(struct tuple_format *format1,
{
struct region *region = &fiber()->gc;
size_t region_svp = region_used(region);
- uint32_t path_len = json_tree_snprint_path(NULL, 0,
- &format2_field->token, TUPLE_INDEX_BASE);
+ uint32_t path_len = json_tree_snprint_path(
+ NULL, 0, &format2_field->token, TUPLE_INDEX_BASE);
char *path = region_alloc(region, path_len + 1);
if (path == NULL)
panic("Can not allocate memory for path");
json_tree_snprint_path(path, path_len + 1, &format2_field->token,
TUPLE_INDEX_BASE);
- struct tuple_field *format1_field =
- json_tree_lookup_path_entry(&format1->fields,
- &format1->fields.root, path,
- path_len, TUPLE_INDEX_BASE,
- struct tuple_field, token);
+ struct tuple_field *format1_field = json_tree_lookup_path_entry(
+ &format1->fields, &format1->fields.root, path, path_len,
+ TUPLE_INDEX_BASE, struct tuple_field, token);
region_truncate(region, region_svp);
return format1_field;
}
@@ -84,7 +82,8 @@ tuple_format_cmp(const struct tuple_format *format1,
struct tuple_field *field_a;
json_tree_foreach_entry_preorder(field_a, &a->fields.root,
- struct tuple_field, token) {
+ struct tuple_field, token)
+ {
struct tuple_field *field_b =
tuple_format1_field_by_format2_field(b, field_a);
if (field_a->type != field_b->type)
@@ -93,10 +92,10 @@ tuple_format_cmp(const struct tuple_format *format1,
return (int)field_a->coll_id - (int)field_b->coll_id;
if (field_a->nullable_action != field_b->nullable_action)
return (int)field_a->nullable_action -
- (int)field_b->nullable_action;
+ (int)field_b->nullable_action;
if (field_a->is_key_part != field_b->is_key_part)
return (int)field_a->is_key_part -
- (int)field_b->is_key_part;
+ (int)field_b->is_key_part;
}
return 0;
@@ -105,9 +104,8 @@ tuple_format_cmp(const struct tuple_format *format1,
static uint32_t
tuple_format_hash(struct tuple_format *format)
{
-#define TUPLE_FIELD_MEMBER_HASH(field, member, h, carry, size) \
- PMurHash32_Process(&h, &carry, &field->member, \
- sizeof(field->member)); \
+#define TUPLE_FIELD_MEMBER_HASH(field, member, h, carry, size) \
+ PMurHash32_Process(&h, &carry, &field->member, sizeof(field->member)); \
size += sizeof(field->member);
uint32_t h = 13;
@@ -115,7 +113,8 @@ tuple_format_hash(struct tuple_format *format)
uint32_t size = 0;
struct tuple_field *f;
json_tree_foreach_entry_preorder(f, &format->fields.root,
- struct tuple_field, token) {
+ struct tuple_field, token)
+ {
TUPLE_FIELD_MEMBER_HASH(f, type, h, carry, size)
TUPLE_FIELD_MEMBER_HASH(f, coll_id, h, carry, size)
TUPLE_FIELD_MEMBER_HASH(f, nullable_action, h, carry, size)
@@ -190,7 +189,8 @@ tuple_format_field_by_id(struct tuple_format *format, uint32_t id)
{
struct tuple_field *field;
json_tree_foreach_entry_preorder(field, &format->fields.root,
- struct tuple_field, token) {
+ struct tuple_field, token)
+ {
if (field->id == id)
return field;
}
@@ -205,9 +205,9 @@ static int
tuple_field_ensure_child_compatibility(struct tuple_field *parent,
struct tuple_field *child)
{
- enum field_type expected_type =
- child->token.type == JSON_TOKEN_STR ?
- FIELD_TYPE_MAP : FIELD_TYPE_ARRAY;
+ enum field_type expected_type = child->token.type == JSON_TOKEN_STR ?
+ FIELD_TYPE_MAP :
+ FIELD_TYPE_ARRAY;
if (field_type1_contains_type2(parent->type, expected_type)) {
parent->type = expected_type;
} else {
@@ -285,10 +285,9 @@ tuple_format_add_field(struct tuple_format *format, uint32_t fieldno,
field->token.type != JSON_TOKEN_END) {
if (tuple_field_ensure_child_compatibility(parent, field) != 0)
goto fail;
- struct tuple_field *next =
- json_tree_lookup_entry(tree, &parent->token,
- &field->token,
- struct tuple_field, token);
+ struct tuple_field *next = json_tree_lookup_entry(
+ tree, &parent->token, &field->token, struct tuple_field,
+ token);
if (next == NULL) {
field->id = format->total_field_count++;
rc = json_tree_add(tree, &parent->token, &field->token);
@@ -355,10 +354,9 @@ tuple_format_use_key_part(struct tuple_format *format, uint32_t field_count,
int *current_slot, char **path_pool)
{
assert(part->fieldno < tuple_format_field_count(format));
- struct tuple_field *field =
- tuple_format_add_field(format, part->fieldno, part->path,
- part->path_len, is_sequential,
- current_slot, path_pool);
+ struct tuple_field *field = tuple_format_add_field(
+ format, part->fieldno, part->path, part->path_len,
+ is_sequential, current_slot, path_pool);
if (field == NULL)
return -1;
/*
@@ -398,11 +396,9 @@ tuple_format_use_key_part(struct tuple_format *format, uint32_t field_count,
* with field's one, then the part type is more strict
* and the part type must be used in tuple_format.
*/
- if (field_type1_contains_type2(field->type,
- part->type)) {
+ if (field_type1_contains_type2(field->type, part->type)) {
field->type = part->type;
- } else if (!field_type1_contains_type2(part->type,
- field->type)) {
+ } else if (!field_type1_contains_type2(part->type, field->type)) {
int errcode;
if (!field->is_key_part)
errcode = ER_FORMAT_MISMATCH_INDEX_PART;
@@ -422,13 +418,12 @@ tuple_format_use_key_part(struct tuple_format *format, uint32_t field_count,
* definitions.
*/
static int
-tuple_format_create(struct tuple_format *format, struct key_def * const *keys,
+tuple_format_create(struct tuple_format *format, struct key_def *const *keys,
uint16_t key_count, const struct field_def *fields,
uint32_t field_count)
{
- format->min_field_count =
- tuple_format_min_field_count(keys, key_count, fields,
- field_count);
+ format->min_field_count = tuple_format_min_field_count(
+ keys, key_count, fields, field_count);
if (tuple_format_field_count(format) == 0) {
format->field_map_size = 0;
return 0;
@@ -443,8 +438,9 @@ tuple_format_create(struct tuple_format *format, struct key_def * const *keys,
if (cid != COLL_NONE) {
struct coll_id *coll_id = coll_by_id(cid);
if (coll_id == NULL) {
- diag_set(ClientError,ER_WRONG_COLLATION_OPTIONS,
- i + 1, "collation was not found by ID");
+ diag_set(ClientError,
+ ER_WRONG_COLLATION_OPTIONS, i + 1,
+ "collation was not found by ID");
return -1;
}
coll = coll_id->coll;
@@ -470,16 +466,16 @@ tuple_format_create(struct tuple_format *format, struct key_def * const *keys,
const struct key_part *parts_end = part + key_def->part_count;
for (; part < parts_end; part++) {
- if (tuple_format_use_key_part(format, field_count, part,
- is_sequential,
- &current_slot,
- &path_pool) != 0)
+ if (tuple_format_use_key_part(
+ format, field_count, part, is_sequential,
+ &current_slot, &path_pool) != 0)
return -1;
}
}
- assert(tuple_format_field(format, 0)->offset_slot == TUPLE_OFFSET_SLOT_NIL
- || json_token_is_multikey(&tuple_format_field(format, 0)->token));
+ assert(tuple_format_field(format, 0)->offset_slot ==
+ TUPLE_OFFSET_SLOT_NIL ||
+ json_token_is_multikey(&tuple_format_field(format, 0)->token));
size_t field_map_size = -current_slot * sizeof(uint32_t);
if (field_map_size > INT16_MAX) {
/** tuple->data_offset is 15 bits */
@@ -492,14 +488,15 @@ tuple_format_create(struct tuple_format *format, struct key_def * const *keys,
size_t required_fields_sz = bitmap_size(format->total_field_count);
format->required_fields = calloc(1, required_fields_sz);
if (format->required_fields == NULL) {
- diag_set(OutOfMemory, required_fields_sz,
- "malloc", "required field bitmap");
+ diag_set(OutOfMemory, required_fields_sz, "malloc",
+ "required field bitmap");
return -1;
}
struct tuple_field *field;
uint32_t *required_fields = format->required_fields;
json_tree_foreach_entry_preorder(field, &format->fields.root,
- struct tuple_field, token) {
+ struct tuple_field, token)
+ {
/*
* In the case of the multikey index,
* required_fields is overridden with local for
@@ -521,7 +518,7 @@ tuple_format_create(struct tuple_format *format, struct key_def * const *keys,
calloc(1, required_fields_sz);
if (multikey_required_fields == NULL) {
diag_set(OutOfMemory, required_fields_sz,
- "malloc", "required field bitmap");
+ "malloc", "required field bitmap");
return -1;
}
field->multikey_required_fields =
@@ -545,17 +542,17 @@ static int
tuple_format_register(struct tuple_format *format)
{
if (recycled_format_ids != FORMAT_ID_NIL) {
-
- format->id = (uint16_t) recycled_format_ids;
- recycled_format_ids = (intptr_t) tuple_formats[recycled_format_ids];
+ format->id = (uint16_t)recycled_format_ids;
+ recycled_format_ids =
+ (intptr_t)tuple_formats[recycled_format_ids];
} else {
if (formats_size == formats_capacity) {
- uint32_t new_capacity = formats_capacity ?
- formats_capacity * 2 : 16;
+ uint32_t new_capacity =
+ formats_capacity ? formats_capacity * 2 : 16;
struct tuple_format **formats;
- formats = (struct tuple_format **)
- realloc(tuple_formats, new_capacity *
- sizeof(tuple_formats[0]));
+ formats = (struct tuple_format **)realloc(
+ tuple_formats,
+ new_capacity * sizeof(tuple_formats[0]));
if (formats == NULL) {
diag_set(OutOfMemory,
sizeof(struct tuple_format), "malloc",
@@ -567,13 +564,13 @@ tuple_format_register(struct tuple_format *format)
tuple_formats = formats;
}
uint32_t formats_size_max = FORMAT_ID_MAX + 1;
- struct errinj *inj = errinj(ERRINJ_TUPLE_FORMAT_COUNT,
- ERRINJ_INT);
+ struct errinj *inj =
+ errinj(ERRINJ_TUPLE_FORMAT_COUNT, ERRINJ_INT);
if (inj != NULL && inj->iparam > 0)
formats_size_max = inj->iparam;
if (formats_size >= formats_size_max) {
diag_set(ClientError, ER_TUPLE_FORMAT_LIMIT,
- (unsigned) formats_capacity);
+ (unsigned)formats_capacity);
return -1;
}
format->id = formats_size++;
@@ -587,7 +584,7 @@ tuple_format_deregister(struct tuple_format *format)
{
if (format->id == FORMAT_ID_NIL)
return;
- tuple_formats[format->id] = (struct tuple_format *) recycled_format_ids;
+ tuple_formats[format->id] = (struct tuple_format *)recycled_format_ids;
recycled_format_ids = format->id;
format->id = FORMAT_ID_NIL;
}
@@ -601,7 +598,8 @@ tuple_format_destroy_fields(struct tuple_format *format)
{
struct tuple_field *field, *tmp;
json_tree_foreach_entry_safe(field, &format->fields.root,
- struct tuple_field, token, tmp) {
+ struct tuple_field, token, tmp)
+ {
json_tree_del(&format->fields, &field->token);
tuple_field_delete(field);
}
@@ -609,7 +607,7 @@ tuple_format_destroy_fields(struct tuple_format *format)
}
static struct tuple_format *
-tuple_format_alloc(struct key_def * const *keys, uint16_t key_count,
+tuple_format_alloc(struct key_def *const *keys, uint16_t key_count,
uint32_t space_field_count, struct tuple_dictionary *dict)
{
/* Size of area to store JSON paths data. */
@@ -623,8 +621,8 @@ tuple_format_alloc(struct key_def * const *keys, uint16_t key_count,
const struct key_part *part = key_def->parts;
const struct key_part *pend = part + key_def->part_count;
for (; part < pend; part++) {
- index_field_count = MAX(index_field_count,
- part->fieldno + 1);
+ index_field_count =
+ MAX(index_field_count, part->fieldno + 1);
path_pool_size += part->path_len;
}
}
@@ -717,11 +715,10 @@ tuple_format_reuse(struct tuple_format **p_format)
struct tuple_format *format = *p_format;
assert(format->is_ephemeral);
assert(format->is_temporary);
- mh_int_t key = mh_tuple_format_find(tuple_formats_hash, format,
- NULL);
+ mh_int_t key = mh_tuple_format_find(tuple_formats_hash, format, NULL);
if (key != mh_end(tuple_formats_hash)) {
- struct tuple_format **entry = mh_tuple_format_node(
- tuple_formats_hash, key);
+ struct tuple_format **entry =
+ mh_tuple_format_node(tuple_formats_hash, key);
tuple_format_destroy(format);
free(format);
*p_format = *entry;
@@ -741,9 +738,9 @@ tuple_format_add_to_hash(struct tuple_format *format)
{
assert(format->is_ephemeral);
assert(format->is_temporary);
- mh_int_t key = mh_tuple_format_put(tuple_formats_hash,
- (const struct tuple_format **)&format,
- NULL, NULL);
+ mh_int_t key = mh_tuple_format_put(
+ tuple_formats_hash, (const struct tuple_format **)&format, NULL,
+ NULL);
if (key == mh_end(tuple_formats_hash)) {
diag_set(OutOfMemory, 0, "tuple_format_add_to_hash",
"tuple formats hash entry");
@@ -771,7 +768,7 @@ tuple_format_delete(struct tuple_format *format)
struct tuple_format *
tuple_format_new(struct tuple_format_vtab *vtab, void *engine,
- struct key_def * const *keys, uint16_t key_count,
+ struct key_def *const *keys, uint16_t key_count,
const struct field_def *space_fields,
uint32_t space_field_count, uint32_t exact_field_count,
struct tuple_dictionary *dict, bool is_temporary,
@@ -816,7 +813,8 @@ tuple_format1_can_store_format2_tuples(struct tuple_format *format1,
return false;
struct tuple_field *field1;
json_tree_foreach_entry_preorder(field1, &format1->fields.root,
- struct tuple_field, token) {
+ struct tuple_field, token)
+ {
struct tuple_field *field2 =
tuple_format1_field_by_format2_field(format2, field1);
/*
@@ -839,7 +837,7 @@ tuple_format1_can_store_format2_tuples(struct tuple_format *format1,
else
return false;
}
- if (! field_type1_contains_type2(field1->type, field2->type))
+ if (!field_type1_contains_type2(field1->type, field2->type))
return false;
/*
* Do not allow transition from nullable to non-nullable:
@@ -858,8 +856,8 @@ tuple_field_map_create(struct tuple_format *format, const char *tuple,
bool validate, struct field_map_builder *builder)
{
struct region *region = &fiber()->gc;
- if (field_map_builder_create(builder, format->field_map_size,
- region) != 0)
+ if (field_map_builder_create(builder, format->field_map_size, region) !=
+ 0)
return -1;
if (tuple_format_field_count(format) == 0)
return 0; /* Nothing to initialize */
@@ -876,22 +874,23 @@ tuple_field_map_create(struct tuple_format *format, const char *tuple,
if (entry.field == NULL)
continue;
if (entry.field->offset_slot != TUPLE_OFFSET_SLOT_NIL &&
- field_map_builder_set_slot(builder, entry.field->offset_slot,
- entry.data - tuple, entry.multikey_idx,
- entry.multikey_count, region) != 0)
+ field_map_builder_set_slot(
+ builder, entry.field->offset_slot,
+ entry.data - tuple, entry.multikey_idx,
+ entry.multikey_count, region) != 0)
return -1;
}
return entry.data == NULL ? 0 : -1;
}
uint32_t
-tuple_format_min_field_count(struct key_def * const *keys, uint16_t key_count,
+tuple_format_min_field_count(struct key_def *const *keys, uint16_t key_count,
const struct field_def *space_fields,
uint32_t space_field_count)
{
uint32_t min_field_count = 0;
for (uint32_t i = 0; i < space_field_count; ++i) {
- if (! space_fields[i].is_nullable)
+ if (!space_fields[i].is_nullable)
min_field_count = i + 1;
}
for (uint32_t i = 0; i < key_count; ++i) {
@@ -911,8 +910,8 @@ tuple_format_init()
{
tuple_formats_hash = mh_tuple_format_new();
if (tuple_formats_hash == NULL) {
- diag_set(OutOfMemory, sizeof(struct mh_tuple_format_t), "malloc",
- "tuple format hash");
+ diag_set(OutOfMemory, sizeof(struct mh_tuple_format_t),
+ "malloc", "tuple format hash");
return -1;
}
return 0;
@@ -924,8 +923,8 @@ tuple_format_free()
{
/* Clear recycled ids. */
while (recycled_format_ids != FORMAT_ID_NIL) {
- uint16_t id = (uint16_t) recycled_format_ids;
- recycled_format_ids = (intptr_t) tuple_formats[id];
+ uint16_t id = (uint16_t)recycled_format_ids;
+ recycled_format_ids = (intptr_t)tuple_formats[id];
tuple_formats[id] = NULL;
}
for (struct tuple_format **format = tuple_formats;
@@ -964,8 +963,8 @@ tuple_format_iterator_create(struct tuple_format_iterator *it,
if (validate && format->exact_field_count > 0 &&
format->exact_field_count != *defined_field_count) {
diag_set(ClientError, ER_EXACT_FIELD_COUNT,
- (unsigned) *defined_field_count,
- (unsigned) format->exact_field_count);
+ (unsigned)*defined_field_count,
+ (unsigned)format->exact_field_count);
return -1;
}
it->parent = &format->fields.root;
@@ -981,19 +980,20 @@ tuple_format_iterator_create(struct tuple_format_iterator *it,
if (validate)
it->required_fields_sz = bitmap_size(format->total_field_count);
uint32_t total_sz = frames_sz + 2 * it->required_fields_sz;
- struct mp_frame *frames = region_aligned_alloc(region, total_sz,
- alignof(frames[0]));
+ struct mp_frame *frames =
+ region_aligned_alloc(region, total_sz, alignof(frames[0]));
if (frames == NULL) {
diag_set(OutOfMemory, total_sz, "region",
"tuple_format_iterator");
return -1;
}
mp_stack_create(&it->stack, format->fields_depth, frames);
- bool key_parts_only =
- (flags & TUPLE_FORMAT_ITERATOR_KEY_PARTS_ONLY) != 0;
- *defined_field_count = MIN(*defined_field_count, key_parts_only ?
- format->index_field_count :
- tuple_format_field_count(format));
+ bool key_parts_only = (flags & TUPLE_FORMAT_ITERATOR_KEY_PARTS_ONLY) !=
+ 0;
+ *defined_field_count = MIN(*defined_field_count,
+ key_parts_only ?
+ format->index_field_count :
+ tuple_format_field_count(format));
mp_stack_push(&it->stack, MP_ARRAY, *defined_field_count);
if (validate) {
@@ -1066,15 +1066,16 @@ tuple_format_iterator_next(struct tuple_format_iterator *it,
* all required fields are present.
*/
if (it->flags & TUPLE_FORMAT_ITERATOR_VALIDATE &&
- tuple_format_required_fields_validate(it->format,
- it->multikey_required_fields,
- it->required_fields_sz) != 0)
+ tuple_format_required_fields_validate(
+ it->format, it->multikey_required_fields,
+ it->required_fields_sz) != 0)
return -1;
}
}
entry->parent =
it->parent != &it->format->fields.root ?
- json_tree_entry(it->parent, struct tuple_field, token) : NULL;
+ json_tree_entry(it->parent, struct tuple_field, token) :
+ NULL;
/*
* Use the top frame of the stack and the
* current data offset to prepare the JSON token
@@ -1105,8 +1106,8 @@ tuple_format_iterator_next(struct tuple_format_iterator *it,
struct tuple_field *field =
json_tree_lookup_entry(&it->format->fields, it->parent, &token,
struct tuple_field, token);
- if (it->flags & TUPLE_FORMAT_ITERATOR_KEY_PARTS_ONLY &&
- field != NULL && !field->is_key_part)
+ if (it->flags & TUPLE_FORMAT_ITERATOR_KEY_PARTS_ONLY && field != NULL &&
+ !field->is_key_part)
field = NULL;
entry->field = field;
entry->data = it->pos;
@@ -1127,9 +1128,8 @@ tuple_format_iterator_next(struct tuple_format_iterator *it,
enum mp_type type = mp_typeof(*it->pos);
if ((type == MP_ARRAY || type == MP_MAP) &&
!mp_stack_is_full(&it->stack) && field != NULL) {
- uint32_t size = type == MP_ARRAY ?
- mp_decode_array(&it->pos) :
- mp_decode_map(&it->pos);
+ uint32_t size = type == MP_ARRAY ? mp_decode_array(&it->pos) :
+ mp_decode_map(&it->pos);
entry->count = size;
mp_stack_push(&it->stack, type, size);
if (json_token_is_multikey(&field->token)) {
@@ -1169,19 +1169,21 @@ tuple_format_iterator_next(struct tuple_format_iterator *it,
* defined in format.
*/
bool is_nullable = tuple_field_is_nullable(field);
- if (!field_mp_type_is_compatible(field->type, entry->data, is_nullable) != 0) {
- diag_set(ClientError, ER_FIELD_TYPE,
- tuple_field_path(field),
+ if (!field_mp_type_is_compatible(field->type, entry->data,
+ is_nullable) != 0) {
+ diag_set(ClientError, ER_FIELD_TYPE, tuple_field_path(field),
field_type_strs[field->type]);
return -1;
}
- bit_clear(it->multikey_frame != NULL ?
- it->multikey_required_fields : it->required_fields, field->id);
+ bit_clear(it->multikey_frame != NULL ? it->multikey_required_fields :
+ it->required_fields,
+ field->id);
return 0;
eof:
if (it->flags & TUPLE_FORMAT_ITERATOR_VALIDATE &&
tuple_format_required_fields_validate(it->format,
- it->required_fields, it->required_fields_sz) != 0)
+ it->required_fields,
+ it->required_fields_sz) != 0)
return -1;
entry->data = NULL;
return 0;
diff --git a/src/box/tuple_format.h b/src/box/tuple_format.h
index 021072d..a25132f 100644
--- a/src/box/tuple_format.h
+++ b/src/box/tuple_format.h
@@ -49,7 +49,7 @@ void
tuple_format_free();
enum { FORMAT_ID_MAX = UINT16_MAX - 1, FORMAT_ID_NIL = UINT16_MAX };
-enum { FORMAT_REF_MAX = INT32_MAX};
+enum { FORMAT_REF_MAX = INT32_MAX };
/*
* We don't pass TUPLE_INDEX_BASE around dynamically all the time,
@@ -74,29 +74,26 @@ struct tuple_format_vtab {
* Free allocated tuple using engine-specific
* memory allocator.
*/
- void
- (*tuple_delete)(struct tuple_format *format, struct tuple *tuple);
+ void (*tuple_delete)(struct tuple_format *format, struct tuple *tuple);
/**
* Allocates a new tuple on the same allocator
* and with the same format.
*/
- struct tuple*
- (*tuple_new)(struct tuple_format *format, const char *data,
- const char *end);
+ struct tuple *(*tuple_new)(struct tuple_format *format,
+ const char *data, const char *end);
/**
* Free a tuple_chunk allocated for given tuple and
* data.
*/
- void
- (*tuple_chunk_delete)(struct tuple_format *format,
- const char *data);
+ void (*tuple_chunk_delete)(struct tuple_format *format,
+ const char *data);
/**
* Allocate a new tuple_chunk for given tuple and data and
* return a pointer to it's data section.
*/
- const char *
- (*tuple_chunk_new)(struct tuple_format *format, struct tuple *tuple,
- const char *data, uint32_t data_sz);
+ const char *(*tuple_chunk_new)(struct tuple_format *format,
+ struct tuple *tuple, const char *data,
+ uint32_t data_sz);
};
/** Tuple field meta information for tuple_format. */
@@ -272,8 +269,8 @@ tuple_format_field_by_path(struct tuple_format *format, uint32_t fieldno,
assert(root != NULL);
if (path == NULL)
return root;
- return json_tree_lookup_path_entry(&format->fields, &root->token,
- path, path_len, TUPLE_INDEX_BASE,
+ return json_tree_lookup_path_entry(&format->fields, &root->token, path,
+ path_len, TUPLE_INDEX_BASE,
struct tuple_field, token);
}
@@ -338,7 +335,7 @@ tuple_format_unref(struct tuple_format *format)
*/
struct tuple_format *
tuple_format_new(struct tuple_format_vtab *vtab, void *engine,
- struct key_def * const *keys, uint16_t key_count,
+ struct key_def *const *keys, uint16_t key_count,
const struct field_def *space_fields,
uint32_t space_field_count, uint32_t exact_field_count,
struct tuple_dictionary *dict, bool is_temporary,
@@ -370,7 +367,7 @@ tuple_format1_can_store_format2_tuples(struct tuple_format *format1,
* @retval Minimal field count.
*/
uint32_t
-tuple_format_min_field_count(struct key_def * const *keys, uint16_t key_count,
+tuple_format_min_field_count(struct key_def *const *keys, uint16_t key_count,
const struct field_def *space_fields,
uint32_t space_field_count);
@@ -436,20 +433,19 @@ tuple_field_map_create(struct tuple_format *format, const char *tuple,
int
tuple_format_init();
-
/** Tuple format iterator flags to configure parse mode. */
enum {
/**
* This flag is set for iterator that should perform tuple
* validation to conform the specified format.
*/
- TUPLE_FORMAT_ITERATOR_VALIDATE = 1 << 0,
+ TUPLE_FORMAT_ITERATOR_VALIDATE = 1 << 0,
/**
* This flag is set for iterator that should skip the
* tuple fields that are not marked as key_parts in
* format::fields tree.
*/
- TUPLE_FORMAT_ITERATOR_KEY_PARTS_ONLY = 1 << 1,
+ TUPLE_FORMAT_ITERATOR_KEY_PARTS_ONLY = 1 << 1,
};
/**
diff --git a/src/box/tuple_hash.cc b/src/box/tuple_hash.cc
index 39f89a6..3d26dd0 100644
--- a/src/box/tuple_hash.cc
+++ b/src/box/tuple_hash.cc
@@ -38,9 +38,7 @@
/* Tuple and key hasher */
namespace {
-enum {
- HASH_SEED = 13U
-};
+enum { HASH_SEED = 13U };
template <int TYPE>
static inline uint32_t
@@ -57,7 +55,7 @@ field_hash(uint32_t *ph, uint32_t *pcarry, const char **field)
const char *f = *field;
uint32_t size;
mp_next(field);
- size = *field - f; /* calculate the size of field */
+ size = *field - f; /* calculate the size of field */
assert(size < INT32_MAX);
PMurHash32_Process(ph, pcarry, f, size);
return size;
@@ -81,30 +79,28 @@ field_hash<FIELD_TYPE_STRING>(uint32_t *ph, uint32_t *pcarry,
return size;
}
-template <int TYPE, int ...MORE_TYPES> struct KeyFieldHash {};
+template <int TYPE, int... MORE_TYPES> struct KeyFieldHash {};
-template <int TYPE, int TYPE2, int ...MORE_TYPES>
+template <int TYPE, int TYPE2, int... MORE_TYPES>
struct KeyFieldHash<TYPE, TYPE2, MORE_TYPES...> {
- static void hash(uint32_t *ph, uint32_t *pcarry,
- const char **pfield, uint32_t *ptotal_size)
+ static void hash(uint32_t *ph, uint32_t *pcarry, const char **pfield,
+ uint32_t *ptotal_size)
{
*ptotal_size += field_hash<TYPE>(ph, pcarry, pfield);
- KeyFieldHash<TYPE2, MORE_TYPES...>::
- hash(ph, pcarry, pfield, ptotal_size);
+ KeyFieldHash<TYPE2, MORE_TYPES...>::hash(ph, pcarry, pfield,
+ ptotal_size);
}
};
-template <int TYPE>
-struct KeyFieldHash<TYPE> {
- static void hash(uint32_t *ph, uint32_t *pcarry,
- const char **pfield, uint32_t *ptotal_size)
+template <int TYPE> struct KeyFieldHash<TYPE> {
+ static void hash(uint32_t *ph, uint32_t *pcarry, const char **pfield,
+ uint32_t *ptotal_size)
{
*ptotal_size += field_hash<TYPE>(ph, pcarry, pfield);
}
};
-template <int TYPE, int ...MORE_TYPES>
-struct KeyHash {
+template <int TYPE, int... MORE_TYPES> struct KeyHash {
static uint32_t hash(const char *key, struct key_def *)
{
uint32_t h = HASH_SEED;
@@ -116,33 +112,31 @@ struct KeyHash {
}
};
-template <>
-struct KeyHash<FIELD_TYPE_UNSIGNED> {
+template <> struct KeyHash<FIELD_TYPE_UNSIGNED> {
static uint32_t hash(const char *key, struct key_def *key_def)
{
uint64_t val = mp_decode_uint(&key);
- (void) key_def;
+ (void)key_def;
if (likely(val <= UINT32_MAX))
return val;
- return ((uint32_t)((val)>>33^(val)^(val)<<11));
+ return ((uint32_t)((val) >> 33 ^ (val) ^ (val) << 11));
}
};
-template <int TYPE, int ...MORE_TYPES> struct TupleFieldHash { };
+template <int TYPE, int... MORE_TYPES> struct TupleFieldHash {};
-template <int TYPE, int TYPE2, int ...MORE_TYPES>
+template <int TYPE, int TYPE2, int... MORE_TYPES>
struct TupleFieldHash<TYPE, TYPE2, MORE_TYPES...> {
static void hash(const char **pfield, uint32_t *ph, uint32_t *pcarry,
uint32_t *ptotal_size)
{
*ptotal_size += field_hash<TYPE>(ph, pcarry, pfield);
- TupleFieldHash<TYPE2, MORE_TYPES...>::
- hash(pfield, ph, pcarry, ptotal_size);
+ TupleFieldHash<TYPE2, MORE_TYPES...>::hash(pfield, ph, pcarry,
+ ptotal_size);
}
};
-template <int TYPE>
-struct TupleFieldHash<TYPE> {
+template <int TYPE> struct TupleFieldHash<TYPE> {
static void hash(const char **pfield, uint32_t *ph, uint32_t *pcarry,
uint32_t *ptotal_size)
{
@@ -150,44 +144,40 @@ struct TupleFieldHash<TYPE> {
}
};
-template <int TYPE, int ...MORE_TYPES>
-struct TupleHash
-{
+template <int TYPE, int... MORE_TYPES> struct TupleHash {
static uint32_t hash(struct tuple *tuple, struct key_def *key_def)
{
assert(!key_def->is_multikey);
uint32_t h = HASH_SEED;
uint32_t carry = 0;
uint32_t total_size = 0;
- const char *field = tuple_field_by_part(tuple,
- key_def->parts,
- MULTIKEY_NONE);
- TupleFieldHash<TYPE, MORE_TYPES...>::
- hash(&field, &h, &carry, &total_size);
+ const char *field = tuple_field_by_part(tuple, key_def->parts,
+ MULTIKEY_NONE);
+ TupleFieldHash<TYPE, MORE_TYPES...>::hash(&field, &h, &carry,
+ &total_size);
return PMurHash32_Result(h, carry, total_size);
}
};
-template <>
-struct TupleHash<FIELD_TYPE_UNSIGNED> {
- static uint32_t hash(struct tuple *tuple, struct key_def *key_def)
+template <> struct TupleHash<FIELD_TYPE_UNSIGNED> {
+ static uint32_t hash(struct tuple *tuple, struct key_def *key_def)
{
assert(!key_def->is_multikey);
- const char *field = tuple_field_by_part(tuple,
- key_def->parts,
- MULTIKEY_NONE);
+ const char *field = tuple_field_by_part(tuple, key_def->parts,
+ MULTIKEY_NONE);
uint64_t val = mp_decode_uint(&field);
if (likely(val <= UINT32_MAX))
return val;
- return ((uint32_t)((val)>>33^(val)^(val)<<11));
+ return ((uint32_t)((val) >> 33 ^ (val) ^ (val) << 11));
}
};
-}; /* namespace { */
+}; // namespace
-#define HASHER(...) \
- { KeyHash<__VA_ARGS__>::hash, TupleHash<__VA_ARGS__>::hash, \
- { __VA_ARGS__, UINT32_MAX } },
+#define HASHER(...) \
+ { KeyHash<__VA_ARGS__>::hash, \
+ TupleHash<__VA_ARGS__>::hash, \
+ { __VA_ARGS__, UINT32_MAX } },
struct hasher_signature {
key_hash_t kf;
@@ -199,20 +189,32 @@ struct hasher_signature {
* field1 type, field2 type, ...
*/
static const hasher_signature hash_arr[] = {
- HASHER(FIELD_TYPE_UNSIGNED)
- HASHER(FIELD_TYPE_STRING)
- HASHER(FIELD_TYPE_UNSIGNED, FIELD_TYPE_UNSIGNED)
- HASHER(FIELD_TYPE_STRING , FIELD_TYPE_UNSIGNED)
- HASHER(FIELD_TYPE_UNSIGNED, FIELD_TYPE_STRING)
- HASHER(FIELD_TYPE_STRING , FIELD_TYPE_STRING)
- HASHER(FIELD_TYPE_UNSIGNED, FIELD_TYPE_UNSIGNED, FIELD_TYPE_UNSIGNED)
- HASHER(FIELD_TYPE_STRING , FIELD_TYPE_UNSIGNED, FIELD_TYPE_UNSIGNED)
- HASHER(FIELD_TYPE_UNSIGNED, FIELD_TYPE_STRING , FIELD_TYPE_UNSIGNED)
- HASHER(FIELD_TYPE_STRING , FIELD_TYPE_STRING , FIELD_TYPE_UNSIGNED)
- HASHER(FIELD_TYPE_UNSIGNED, FIELD_TYPE_UNSIGNED, FIELD_TYPE_STRING)
- HASHER(FIELD_TYPE_STRING , FIELD_TYPE_UNSIGNED, FIELD_TYPE_STRING)
- HASHER(FIELD_TYPE_UNSIGNED, FIELD_TYPE_STRING , FIELD_TYPE_STRING)
- HASHER(FIELD_TYPE_STRING , FIELD_TYPE_STRING , FIELD_TYPE_STRING)
+ HASHER(FIELD_TYPE_UNSIGNED) HASHER(FIELD_TYPE_STRING) HASHER(
+ FIELD_TYPE_UNSIGNED,
+ FIELD_TYPE_UNSIGNED) HASHER(FIELD_TYPE_STRING,
+ FIELD_TYPE_UNSIGNED)
+ HASHER(FIELD_TYPE_UNSIGNED, FIELD_TYPE_STRING) HASHER(
+ FIELD_TYPE_STRING,
+ FIELD_TYPE_STRING) HASHER(FIELD_TYPE_UNSIGNED,
+ FIELD_TYPE_UNSIGNED,
+ FIELD_TYPE_UNSIGNED)
+ HASHER(FIELD_TYPE_STRING, FIELD_TYPE_UNSIGNED,
+ FIELD_TYPE_UNSIGNED) HASHER(FIELD_TYPE_UNSIGNED,
+ FIELD_TYPE_STRING,
+ FIELD_TYPE_UNSIGNED)
+ HASHER(FIELD_TYPE_STRING, FIELD_TYPE_STRING,
+ FIELD_TYPE_UNSIGNED) HASHER(FIELD_TYPE_UNSIGNED,
+ FIELD_TYPE_UNSIGNED,
+ FIELD_TYPE_STRING)
+ HASHER(FIELD_TYPE_STRING,
+ FIELD_TYPE_UNSIGNED,
+ FIELD_TYPE_STRING)
+ HASHER(FIELD_TYPE_UNSIGNED,
+ FIELD_TYPE_STRING,
+ FIELD_TYPE_STRING)
+ HASHER(FIELD_TYPE_STRING,
+ FIELD_TYPE_STRING,
+ FIELD_TYPE_STRING)
};
#undef HASHER
@@ -225,7 +227,8 @@ uint32_t
key_hash_slowpath(const char *key, struct key_def *key_def);
void
-key_def_set_hash_func(struct key_def *key_def) {
+key_def_set_hash_func(struct key_def *key_def)
+{
if (key_def->is_nullable || key_def->has_json_paths)
goto slowpath;
/*
@@ -252,7 +255,8 @@ key_def_set_hash_func(struct key_def *key_def) {
break;
}
}
- if (i == key_def->part_count && hash_arr[k].p[i] == UINT32_MAX){
+ if (i == key_def->part_count &&
+ hash_arr[k].p[i] == UINT32_MAX) {
key_def->tuple_hash = hash_arr[k].tf;
key_def->key_hash = hash_arr[k].kf;
return;
@@ -304,8 +308,8 @@ tuple_hash_field(uint32_t *ph1, uint32_t *pcarry, const char **field,
*/
double iptr;
double val = mp_typeof(**field) == MP_FLOAT ?
- mp_decode_float(field) :
- mp_decode_double(field);
+ mp_decode_float(field) :
+ mp_decode_double(field);
if (!isfinite(val) || modf(val, &iptr) != 0 ||
val < -exp2(63) || val >= exp2(64)) {
size = *field - f;
@@ -323,7 +327,7 @@ tuple_hash_field(uint32_t *ph1, uint32_t *pcarry, const char **field,
}
default:
mp_next(field);
- size = *field - f; /* calculate the size of field */
+ size = *field - f; /* calculate the size of field */
/*
* (!) All other fields hashed **including** MsgPack format
* identifier (e.g. 0xcc). This was done **intentionally**
@@ -396,12 +400,14 @@ tuple_hash_slowpath(struct tuple *tuple, struct key_def *key_def)
if (prev_fieldno + 1 != key_def->parts[part_id].fieldno) {
struct key_part *part = &key_def->parts[part_id];
if (has_json_paths) {
- field = tuple_field_raw_by_part(format, tuple_raw,
+ field = tuple_field_raw_by_part(format,
+ tuple_raw,
field_map, part,
MULTIKEY_NONE);
} else {
- field = tuple_field_raw(format, tuple_raw, field_map,
- part->fieldno);
+ field = tuple_field_raw(format, tuple_raw,
+ field_map,
+ part->fieldno);
}
}
if (has_optional_parts && (field == NULL || field >= end)) {
diff --git a/src/box/txn.c b/src/box/txn.c
index 4f5484e..5f27e3b 100644
--- a/src/box/txn.c
+++ b/src/box/txn.c
@@ -45,7 +45,7 @@ double too_long_threshold;
int64_t txn_last_psn = 0;
/* Txn cache. */
-static struct stailq txn_cache = {NULL, &txn_cache.first};
+static struct stailq txn_cache = { NULL, &txn_cache.first };
static int
txn_on_stop(struct trigger *trigger, void *event);
@@ -159,7 +159,8 @@ txn_rollback_to_svp(struct txn *txn, struct stailq_entry *svp)
struct stailq rollback;
stailq_cut_tail(&txn->stmts, svp, &rollback);
stailq_reverse(&rollback);
- stailq_foreach_entry(stmt, &rollback, next) {
+ stailq_foreach_entry(stmt, &rollback, next)
+ {
txn_rollback_one_stmt(txn, stmt);
if (stmt->row != NULL && stmt->row->replica_id == 0) {
assert(txn->n_new_rows > 0);
@@ -213,16 +214,15 @@ inline static void
txn_free(struct txn *txn)
{
struct tx_read_tracker *tracker, *tmp;
- rlist_foreach_entry_safe(tracker, &txn->read_set,
- in_read_set, tmp) {
+ rlist_foreach_entry_safe(tracker, &txn->read_set, in_read_set, tmp) {
rlist_del(&tracker->in_reader_list);
rlist_del(&tracker->in_read_set);
}
assert(rlist_empty(&txn->read_set));
struct tx_conflict_tracker *entry, *next;
- rlist_foreach_entry_safe(entry, &txn->conflict_list,
- in_conflict_list, next) {
+ rlist_foreach_entry_safe(entry, &txn->conflict_list, in_conflict_list,
+ next) {
rlist_del(&entry->in_conflict_list);
rlist_del(&entry->in_conflicted_by_list);
}
@@ -237,8 +237,7 @@ txn_free(struct txn *txn)
rlist_del(&txn->in_read_view_txs);
struct txn_stmt *stmt;
- stailq_foreach_entry(stmt, &txn->stmts, next)
- txn_stmt_destroy(stmt);
+ stailq_foreach_entry(stmt, &txn->stmts, next) txn_stmt_destroy(stmt);
/* Truncate region up to struct txn size. */
region_truncate(&txn->region, sizeof(struct txn));
@@ -249,7 +248,7 @@ struct txn *
txn_begin(void)
{
static int64_t tsn = 0;
- assert(! in_txn());
+ assert(!in_txn());
struct txn *txn = txn_new();
if (txn == NULL)
return NULL;
@@ -419,7 +418,7 @@ txn_commit_stmt(struct txn *txn, struct request *request)
stmt->does_require_old_tuple = true;
int rc = 0;
- if(!space_is_temporary(stmt->space)) {
+ if (!space_is_temporary(stmt->space)) {
rc = trigger_run(&stmt->space->on_replace, txn);
} else {
/*
@@ -545,7 +544,8 @@ txn_complete(struct txn *txn)
if (stop_tm - txn->start_tm > too_long_threshold) {
int n_rows = txn->n_new_rows + txn->n_applier_rows;
say_warn_ratelimited("too long WAL write: %d rows at "
- "LSN %lld: %.3f sec", n_rows,
+ "LSN %lld: %.3f sec",
+ n_rows,
txn->signature - n_rows + 1,
stop_tm - txn->start_tm);
}
@@ -622,7 +622,8 @@ txn_journal_entry_new(struct txn *txn)
struct xrow_header **local_row = req->rows + txn->n_applier_rows;
bool is_sync = false;
- stailq_foreach_entry(stmt, &txn->stmts, next) {
+ stailq_foreach_entry(stmt, &txn->stmts, next)
+ {
if (stmt->has_triggers) {
txn_init_triggers(txn);
rlist_splice(&txn->on_commit, &stmt->on_commit);
@@ -737,8 +738,8 @@ txn_prepare(struct txn *txn)
struct tx_conflict_tracker *entry, *next;
/* Handle conflicts. */
- rlist_foreach_entry_safe(entry, &txn->conflict_list,
- in_conflict_list, next) {
+ rlist_foreach_entry_safe(entry, &txn->conflict_list, in_conflict_list,
+ next) {
assert(entry->breaker == txn);
memtx_tx_handle_conflict(txn, entry->victim);
rlist_del(&entry->in_conflict_list);
@@ -786,12 +787,12 @@ txn_commit_nop(struct txn *txn)
static int
txn_limbo_on_rollback(struct trigger *trig, void *event)
{
- (void) event;
- struct txn *txn = (struct txn *) event;
+ (void)event;
+ struct txn *txn = (struct txn *)event;
/* Check whether limbo has performed the cleanup. */
if (txn->signature != TXN_SIGNATURE_ROLLBACK)
return 0;
- struct txn_limbo_entry *entry = (struct txn_limbo_entry *) trig->data;
+ struct txn_limbo_entry *entry = (struct txn_limbo_entry *)trig->data;
txn_limbo_abort(&txn_limbo, entry);
return 0;
}
@@ -861,8 +862,7 @@ txn_commit_async(struct txn *txn)
* Set a trigger to abort waiting for confirm on
* WAL write failure.
*/
- trigger_create(trig, txn_limbo_on_rollback,
- limbo_entry, NULL);
+ trigger_create(trig, txn_limbo_on_rollback, limbo_entry, NULL);
txn_on_rollback(txn, trig);
}
@@ -1049,7 +1049,7 @@ box_txn_commit(void)
* Do nothing if transaction is not started,
* it's the same as BEGIN + COMMIT.
*/
- if (! txn)
+ if (!txn)
return 0;
if (txn->in_sub_stmt) {
diag_set(ClientError, ER_COMMIT_IN_SUB_STMT);
@@ -1090,7 +1090,7 @@ box_txn_alloc(size_t size)
long l;
};
return region_aligned_alloc(&txn->region, size,
- alignof(union natural_align));
+ alignof(union natural_align));
}
struct txn_savepoint *
@@ -1160,8 +1160,10 @@ box_txn_rollback_to_savepoint(box_txn_savepoint_t *svp)
diag_set(ClientError, ER_NO_TRANSACTION);
return -1;
}
- struct txn_stmt *stmt = svp->stmt == NULL ? NULL :
- stailq_entry(svp->stmt, struct txn_stmt, next);
+ struct txn_stmt *stmt =
+ svp->stmt == NULL ?
+ NULL :
+ stailq_entry(svp->stmt, struct txn_stmt, next);
if (stmt != NULL && stmt->space == NULL && stmt->row == NULL) {
/*
* The statement at which this savepoint was
@@ -1188,10 +1190,12 @@ txn_savepoint_release(struct txn_savepoint *svp)
struct txn *txn = in_txn();
assert(txn != NULL);
/* Make sure that savepoint hasn't been released yet. */
- struct txn_stmt *stmt = svp->stmt == NULL ? NULL :
- stailq_entry(svp->stmt, struct txn_stmt, next);
+ struct txn_stmt *stmt =
+ svp->stmt == NULL ?
+ NULL :
+ stailq_entry(svp->stmt, struct txn_stmt, next);
assert(stmt == NULL || (stmt->space != NULL && stmt->row != NULL));
- (void) stmt;
+ (void)stmt;
/*
* Discard current savepoint alongside with all
* created after it savepoints.
@@ -1203,9 +1207,9 @@ txn_savepoint_release(struct txn_savepoint *svp)
static int
txn_on_stop(struct trigger *trigger, void *event)
{
- (void) trigger;
- (void) event;
- txn_rollback(in_txn()); /* doesn't yield or fail */
+ (void)trigger;
+ (void)event;
+ txn_rollback(in_txn()); /* doesn't yield or fail */
fiber_gc();
return 0;
}
@@ -1230,8 +1234,8 @@ txn_on_stop(struct trigger *trigger, void *event)
static int
txn_on_yield(struct trigger *trigger, void *event)
{
- (void) trigger;
- (void) event;
+ (void)trigger;
+ (void)event;
struct txn *txn = in_txn();
assert(txn != NULL);
assert(!txn_has_flag(txn, TXN_CAN_YIELD));
diff --git a/src/box/txn_limbo.c b/src/box/txn_limbo.c
index 3655338..24475cc 100644
--- a/src/box/txn_limbo.c
+++ b/src/box/txn_limbo.c
@@ -83,8 +83,8 @@ txn_limbo_append(struct txn_limbo *limbo, uint32_t id, struct txn *txn)
}
}
size_t size;
- struct txn_limbo_entry *e = region_alloc_object(&txn->region,
- typeof(*e), &size);
+ struct txn_limbo_entry *e =
+ region_alloc_object(&txn->region, typeof(*e), &size);
if (e == NULL) {
diag_set(OutOfMemory, size, "region_alloc_object", "e");
return NULL;
@@ -103,7 +103,7 @@ txn_limbo_remove(struct txn_limbo *limbo, struct txn_limbo_entry *entry)
{
assert(!rlist_empty(&entry->in_queue));
assert(txn_limbo_first_entry(limbo) == entry);
- (void) limbo;
+ (void)limbo;
rlist_del_entry(entry, in_queue);
}
@@ -140,7 +140,7 @@ txn_limbo_assign_remote_lsn(struct txn_limbo *limbo,
assert(entry->lsn == -1);
assert(lsn > 0);
assert(txn_has_flag(entry->txn, TXN_WAIT_ACK));
- (void) limbo;
+ (void)limbo;
entry->lsn = lsn;
}
@@ -164,8 +164,7 @@ txn_limbo_assign_local_lsn(struct txn_limbo *limbo,
struct vclock_iterator iter;
vclock_iterator_init(&iter, &limbo->vclock);
int ack_count = 0;
- vclock_foreach(&iter, vc)
- ack_count += vc.lsn >= lsn;
+ vclock_foreach(&iter, vc) ack_count += vc.lsn >= lsn;
assert(ack_count >= entry->ack_count);
entry->ack_count = ack_count;
}
@@ -233,8 +232,8 @@ txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry)
txn_limbo_write_rollback(limbo, entry->lsn);
struct txn_limbo_entry *e, *tmp;
- rlist_foreach_entry_safe_reverse(e, &limbo->queue,
- in_queue, tmp) {
+ rlist_foreach_entry_safe_reverse(e, &limbo->queue, in_queue, tmp)
+ {
e->txn->signature = TXN_SIGNATURE_QUORUM_TIMEOUT;
txn_limbo_abort(limbo, e);
txn_clear_flag(e->txn, TXN_WAIT_SYNC);
@@ -291,9 +290,9 @@ txn_limbo_write_synchro(struct txn_limbo *limbo, uint32_t type, int64_t lsn)
assert(lsn > 0);
struct synchro_request req = {
- .type = type,
- .replica_id = limbo->instance_id,
- .lsn = lsn,
+ .type = type,
+ .replica_id = limbo->instance_id,
+ .lsn = lsn,
};
/*
@@ -302,8 +301,7 @@ txn_limbo_write_synchro(struct txn_limbo *limbo, uint32_t type, int64_t lsn)
*/
struct synchro_body_bin body;
struct xrow_header row;
- char buf[sizeof(struct journal_entry) +
- sizeof(struct xrow_header *)];
+ char buf[sizeof(struct journal_entry) + sizeof(struct xrow_header *)];
struct journal_entry *entry = (struct journal_entry *)buf;
entry->rows[0] = &row;
@@ -325,8 +323,8 @@ txn_limbo_write_synchro(struct txn_limbo *limbo, uint32_t type, int64_t lsn)
* Or retry automatically with some period.
*/
panic("Could not write a synchro request to WAL: "
- "lsn = %lld, type = %s\n", lsn,
- iproto_type_name(type));
+ "lsn = %lld, type = %s\n",
+ lsn, iproto_type_name(type));
}
}
@@ -404,7 +402,8 @@ txn_limbo_read_rollback(struct txn_limbo *limbo, int64_t lsn)
assert(limbo->instance_id != REPLICA_ID_NIL);
struct txn_limbo_entry *e, *tmp;
struct txn_limbo_entry *last_rollback = NULL;
- rlist_foreach_entry_reverse(e, &limbo->queue, in_queue) {
+ rlist_foreach_entry_reverse(e, &limbo->queue, in_queue)
+ {
if (!txn_has_flag(e->txn, TXN_WAIT_ACK))
continue;
if (e->lsn < lsn)
@@ -413,7 +412,8 @@ txn_limbo_read_rollback(struct txn_limbo *limbo, int64_t lsn)
}
if (last_rollback == NULL)
return;
- rlist_foreach_entry_safe_reverse(e, &limbo->queue, in_queue, tmp) {
+ rlist_foreach_entry_safe_reverse(e, &limbo->queue, in_queue, tmp)
+ {
txn_limbo_abort(limbo, e);
txn_clear_flag(e->txn, TXN_WAIT_SYNC);
txn_clear_flag(e->txn, TXN_WAIT_ACK);
diff --git a/src/box/user.cc b/src/box/user.cc
index 5042fb1..fbb526d 100644
--- a/src/box/user.cc
+++ b/src/box/user.cc
@@ -62,14 +62,13 @@ user_map_calc_idx(uint8_t auth_token, uint8_t *bit_no)
return auth_token / UMAP_INT_BITS;
}
-
/** Set a bit in the user map - add a user. */
static inline void
user_map_set(struct user_map *map, uint8_t auth_token)
{
uint8_t bit_no;
int idx = user_map_calc_idx(auth_token, &bit_no);
- map->m[idx] |= ((umap_int_t) 1) << bit_no;
+ map->m[idx] |= ((umap_int_t)1) << bit_no;
}
/** Clear a bit in the user map - remove a user. */
@@ -78,7 +77,7 @@ user_map_clear(struct user_map *map, uint8_t auth_token)
{
uint8_t bit_no;
int idx = user_map_calc_idx(auth_token, &bit_no);
- map->m[idx] &= ~(((umap_int_t) 1) << bit_no);
+ map->m[idx] &= ~(((umap_int_t)1) << bit_no);
}
/* Check if a bit is set in the user map. */
@@ -87,7 +86,7 @@ user_map_is_set(struct user_map *map, uint8_t auth_token)
{
uint8_t bit_no;
int idx = user_map_calc_idx(auth_token, &bit_no);
- return map->m[idx] & (((umap_int_t) 1) << bit_no);
+ return map->m[idx] & (((umap_int_t)1) << bit_no);
}
/**
@@ -112,16 +111,15 @@ user_map_minus(struct user_map *lhs, struct user_map *rhs)
}
/** Iterate over users in the set of users. */
-struct user_map_iterator
-{
+struct user_map_iterator {
struct bit_iterator it;
};
static void
user_map_iterator_init(struct user_map_iterator *it, struct user_map *map)
{
- bit_iterator_init(&it->it, map->m,
- USER_MAP_SIZE * sizeof(umap_int_t), true);
+ bit_iterator_init(&it->it, map->m, USER_MAP_SIZE * sizeof(umap_int_t),
+ true);
}
static struct user *
@@ -216,66 +214,55 @@ access_find(enum schema_object_type object_type, uint32_t object_id)
{
struct access *access = NULL;
switch (object_type) {
- case SC_UNIVERSE:
- {
+ case SC_UNIVERSE: {
access = universe.access;
break;
}
- case SC_ENTITY_SPACE:
- {
+ case SC_ENTITY_SPACE: {
access = entity_access.space;
break;
}
- case SC_ENTITY_FUNCTION:
- {
+ case SC_ENTITY_FUNCTION: {
access = entity_access.function;
break;
}
- case SC_ENTITY_USER:
- {
+ case SC_ENTITY_USER: {
access = entity_access.user;
break;
}
- case SC_ENTITY_ROLE:
- {
+ case SC_ENTITY_ROLE: {
access = entity_access.role;
break;
}
- case SC_ENTITY_SEQUENCE:
- {
+ case SC_ENTITY_SEQUENCE: {
access = entity_access.sequence;
break;
}
- case SC_SPACE:
- {
+ case SC_SPACE: {
struct space *space = space_by_id(object_id);
if (space)
access = space->access;
break;
}
- case SC_FUNCTION:
- {
+ case SC_FUNCTION: {
struct func *func = func_by_id(object_id);
if (func)
access = func->access;
break;
}
- case SC_USER:
- {
+ case SC_USER: {
struct user *user = user_by_id(object_id);
if (user)
access = user->access;
break;
}
- case SC_ROLE:
- {
+ case SC_ROLE: {
struct user *role = user_by_id(object_id);
if (role)
access = role->access;
break;
}
- case SC_SEQUENCE:
- {
+ case SC_SEQUENCE: {
struct sequence *seq = sequence_by_id(object_id);
if (seq)
access = seq->access;
@@ -287,7 +274,6 @@ access_find(enum schema_object_type object_type, uint32_t object_id)
return access;
}
-
/**
* Reset effective access of the user in the
* corresponding objects.
@@ -299,9 +285,9 @@ user_set_effective_access(struct user *user)
privset_ifirst(&user->privs, &it);
struct priv_def *priv;
while ((priv = privset_inext(&it)) != NULL) {
- struct access *object = access_find(priv->object_type,
- priv->object_id);
- /* Protect against a concurrent drop. */
+ struct access *object =
+ access_find(priv->object_type, priv->object_id);
+ /* Protect against a concurrent drop. */
if (object == NULL)
continue;
struct access *access = &object[user->auth_token];
@@ -340,7 +326,7 @@ user_reload_privs(struct user *user)
/** Primary key - by user id */
if (!space_is_memtx(space)) {
diag_set(ClientError, ER_UNSUPPORTED,
- space->engine->name, "system data");
+ space->engine->name, "system data");
return -1;
}
struct index *index = index_find(space, 0);
@@ -348,8 +334,8 @@ user_reload_privs(struct user *user)
return -1;
mp_encode_uint(key, user->def->uid);
- struct iterator *it = index_create_iterator(index, ITER_EQ,
- key, 1);
+ struct iterator *it =
+ index_create_iterator(index, ITER_EQ, key, 1);
if (it == NULL)
return -1;
IteratorGuard iter_guard(it);
@@ -365,7 +351,8 @@ user_reload_privs(struct user *user)
* Skip role grants, we're only
* interested in real objects.
*/
- if (priv.object_type != SC_ROLE || !(priv.access & PRIV_X))
+ if (priv.object_type != SC_ROLE ||
+ !(priv.access & PRIV_X))
if (user_grant_priv(user, &priv) != 0)
return -1;
if (iterator_next(it, &tuple) != 0)
@@ -418,11 +405,11 @@ auth_token_get(void)
{
uint8_t bit_no = 0;
while (min_token_idx < USER_MAP_SIZE) {
- bit_no = __builtin_ffs(tokens[min_token_idx]);
+ bit_no = __builtin_ffs(tokens[min_token_idx]);
if (bit_no)
break;
min_token_idx++;
- }
+ }
if (bit_no == 0 || bit_no > BOX_USER_MAX) {
/* A cap on the number of users was reached.
* Check for BOX_USER_MAX to cover case when
@@ -430,12 +417,12 @@ auth_token_get(void)
*/
tnt_raise(LoggedError, ER_USER_MAX, BOX_USER_MAX);
}
- /*
+ /*
* find-first-set returns bit index starting from 1,
* or 0 if no bit is set. Rebase the index to offset 0.
*/
bit_no--;
- tokens[min_token_idx] ^= ((umap_int_t) 1) << bit_no;
+ tokens[min_token_idx] ^= ((umap_int_t)1) << bit_no;
int auth_token = min_token_idx * UMAP_INT_BITS + bit_no;
assert(auth_token < UINT8_MAX);
return auth_token;
@@ -450,7 +437,7 @@ auth_token_put(uint8_t auth_token)
{
uint8_t bit_no;
int idx = user_map_calc_idx(auth_token, &bit_no);
- tokens[idx] |= ((umap_int_t) 1) << bit_no;
+ tokens[idx] |= ((umap_int_t)1) << bit_no;
if (idx < min_token_idx)
min_token_idx = idx;
}
@@ -481,8 +468,8 @@ user_cache_delete(uint32_t uid)
{
mh_int_t k = mh_i32ptr_find(user_registry, uid, NULL);
if (k != mh_end(user_registry)) {
- struct user *user = (struct user *)
- mh_i32ptr_node(user_registry, k)->val;
+ struct user *user =
+ (struct user *)mh_i32ptr_node(user_registry, k)->val;
assert(user->auth_token > ADMIN);
auth_token_put(user->auth_token);
assert(user_map_is_empty(&user->roles));
@@ -505,7 +492,7 @@ user_by_id(uint32_t uid)
mh_int_t k = mh_i32ptr_find(user_registry, uid, NULL);
if (k == mh_end(user_registry))
return NULL;
- return (struct user *) mh_i32ptr_node(user_registry, k)->val;
+ return (struct user *)mh_i32ptr_node(user_registry, k)->val;
}
struct user *
@@ -521,7 +508,7 @@ user_find(uint32_t uid)
struct user *
user_find_by_token(uint8_t auth_token)
{
- return &users[auth_token];
+ return &users[auth_token];
}
/** Find user by name. */
@@ -537,7 +524,7 @@ user_find_by_name(const char *name, uint32_t len)
return user;
}
diag_set(ClientError, ER_NO_SUCH_USER,
- tt_cstr(name, MIN((uint32_t) BOX_INVALID_NAME_MAX, len)));
+ tt_cstr(name, MIN((uint32_t)BOX_INVALID_NAME_MAX, len)));
return NULL;
}
@@ -557,7 +544,7 @@ user_cache_init(void)
*/
size_t name_len = strlen("guest");
size_t sz = user_def_sizeof(name_len);
- struct user_def *def = (struct user_def *) calloc(1, sz);
+ struct user_def *def = (struct user_def *)calloc(1, sz);
if (def == NULL)
tnt_raise(OutOfMemory, sz, "malloc", "def");
/* Free def in a case of exception. */
@@ -570,11 +557,11 @@ user_cache_init(void)
guest_def_guard.is_active = false;
/* 0 is the auth token and user id by default. */
assert(user->def->uid == GUEST && user->auth_token == GUEST);
- (void) user;
+ (void)user;
name_len = strlen("admin");
sz = user_def_sizeof(name_len);
- def = (struct user_def *) calloc(1, sz);
+ def = (struct user_def *)calloc(1, sz);
if (def == NULL)
tnt_raise(OutOfMemory, sz, "malloc", "def");
auto admin_def_guard = make_scoped_guard([=] { free(def); });
@@ -628,7 +615,7 @@ role_check(struct user *grantee, struct user *role)
struct user_map transitive_closure = user_map_nil;
user_map_set(&transitive_closure, grantee->auth_token);
struct user_map current_layer = transitive_closure;
- while (! user_map_is_empty(&current_layer)) {
+ while (!user_map_is_empty(&current_layer)) {
/*
* As long as we're traversing a directed
* acyclic graph, we're bound to end at some
@@ -647,10 +634,9 @@ role_check(struct user *grantee, struct user *role)
* Check if the role is in the list of roles to which the
* grantee is granted.
*/
- if (user_map_is_set(&transitive_closure,
- role->auth_token)) {
- diag_set(ClientError, ER_ROLE_LOOP,
- role->def->name, grantee->def->name);
+ if (user_map_is_set(&transitive_closure, role->auth_token)) {
+ diag_set(ClientError, ER_ROLE_LOOP, role->def->name,
+ grantee->def->name);
return -1;
}
return 0;
@@ -704,7 +690,7 @@ rebuild_effective_grants(struct user *grantee)
* Propagate effective privileges from the nodes
* with no incoming edges to the remaining nodes.
*/
- while (! user_map_is_empty(&current_layer)) {
+ while (!user_map_is_empty(&current_layer)) {
struct user_map postponed = user_map_nil;
struct user_map next_layer = user_map_nil;
user_map_iterator_init(&it, &current_layer);
@@ -737,7 +723,6 @@ rebuild_effective_grants(struct user *grantee)
return 0;
}
-
/**
* Update verges in the graph of dependencies.
* Grant all effective privileges of the role to whoever
diff --git a/src/box/user.h b/src/box/user.h
index 9ed52c4..4600ba8 100644
--- a/src/box/user.h
+++ b/src/box/user.h
@@ -51,7 +51,7 @@ extern struct universe universe;
typedef unsigned int umap_int_t;
enum {
UMAP_INT_BITS = CHAR_BIT * sizeof(umap_int_t),
- USER_MAP_SIZE = (BOX_USER_MAX + UMAP_INT_BITS - 1)/UMAP_INT_BITS
+ USER_MAP_SIZE = (BOX_USER_MAX + UMAP_INT_BITS - 1) / UMAP_INT_BITS
};
struct user_map {
@@ -70,8 +70,7 @@ user_map_is_empty(struct user_map *map)
typedef rb_tree(struct priv_def) privset_t;
rb_proto(, privset_, privset_t, struct priv_def);
-struct user
-{
+struct user {
struct user_def *def;
/**
* An id in privileges array to quickly find a
diff --git a/src/box/user_def.c b/src/box/user_def.c
index 4d9821a..ef1dd70 100644
--- a/src/box/user_def.c
+++ b/src/box/user_def.c
@@ -36,25 +36,12 @@ const char *
priv_name(user_access_t access)
{
static const char *priv_name_strs[] = {
- "Read",
- "Write",
- "Execute",
- "Session",
- "Usage",
- "Create",
- "Drop",
- "Alter",
- "Reference",
- "Trigger",
- "Insert",
- "Update",
- "Delete",
- "Grant",
- "Revoke",
+ "Read", "Write", "Execute", "Session", "Usage",
+ "Create", "Drop", "Alter", "Reference", "Trigger",
+ "Insert", "Update", "Delete", "Grant", "Revoke",
};
- int bit_no = __builtin_ffs((int) access);
- if (bit_no > 0 && bit_no <= (int) lengthof(priv_name_strs))
+ int bit_no = __builtin_ffs((int)access);
+ if (bit_no > 0 && bit_no <= (int)lengthof(priv_name_strs))
return priv_name_strs[bit_no - 1];
return "Any";
}
-
diff --git a/src/box/user_def.h b/src/box/user_def.h
index 486a4ae..a82d5f3 100644
--- a/src/box/user_def.h
+++ b/src/box/user_def.h
@@ -31,7 +31,7 @@
* SUCH DAMAGE.
*/
#include "schema_def.h" /* for SCHEMA_OBJECT_TYPE */
-#include "scramble.h" /* for SCRAMBLE_SIZE */
+#include "scramble.h" /* for SCRAMBLE_SIZE */
#define RB_COMPACT 1
#include "small/rb.h"
#include "small/rlist.h"
@@ -102,7 +102,7 @@ enum priv_type {
/* Never granted, but used internally. */
PRIV_REVOKE = 16384,
/* all bits */
- PRIV_ALL = ~((user_access_t) 0),
+ PRIV_ALL = ~((user_access_t)0),
};
/**
@@ -180,7 +180,7 @@ user_def_sizeof(uint32_t name_len)
enum {
BOX_SYSTEM_USER_ID_MIN = 0,
GUEST = 0,
- ADMIN = 1,
+ ADMIN = 1,
PUBLIC = 2, /* role */
SUPER = 31, /* role */
BOX_SYSTEM_USER_ID_MAX = PUBLIC
diff --git a/src/box/vclock.c b/src/box/vclock.c
index 90ae275..00e4563 100644
--- a/src/box/vclock.c
+++ b/src/box/vclock.c
@@ -60,9 +60,10 @@ vclock_snprint(char *buf, int size, const struct vclock *vclock)
const char *sep = "";
struct vclock_iterator it;
vclock_iterator_init(&it, vclock);
- vclock_foreach(&it, replica) {
- SNPRINT(total, snprintf, buf, size, "%s%u: %lld",
- sep, (unsigned)replica.id, (long long)replica.lsn);
+ vclock_foreach(&it, replica)
+ {
+ SNPRINT(total, snprintf, buf, size, "%s%u: %lld", sep,
+ (unsigned)replica.id, (long long)replica.lsn);
sep = ", ";
}
@@ -86,78 +87,78 @@ vclock_from_string(struct vclock *vclock, const char *str)
long long lsn;
const char *p = str;
- begin:
- if (*p == '{') {
- ++p;
- goto key;
- } else if (isblank(*p)) {
- ++p;
- goto begin;
- }
- goto error;
- key:
- if (isdigit(*p)) {
- errno = 0;
- replica_id = strtol(p, (char **) &p, 10);
- if (errno != 0 || replica_id < 0 || replica_id >= VCLOCK_MAX)
- goto error;
- goto sep;
- } else if (*p == '}') {
- ++p;
- goto end;
- } else if (isblank(*p)) {
- ++p;
- goto key;
- }
- goto error;
- sep:
- if (*p == ':') {
- ++p;
- goto val;
- } else if (isblank(*p)) {
- ++p;
- goto sep;
- }
- goto error;
- val:
- if (isblank(*p)) {
- ++p;
- goto val;
- } else if (isdigit(*p)) {
- errno = 0;
- lsn = strtoll(p, (char **) &p, 10);
- if (errno != 0 || lsn < 0 || lsn > INT64_MAX ||
- replica_id >= VCLOCK_MAX ||
- vclock_get(vclock, replica_id) > 0)
- goto error;
- vclock->map |= 1 << replica_id;
- vclock->lsn[replica_id] = lsn;
- goto comma;
- }
- goto error;
- comma:
- if (isspace(*p)) {
- ++p;
- goto comma;
- } else if (*p == '}') {
- ++p;
- goto end;
- } else if (*p == ',') {
- ++p;
- goto key;
- }
- goto error;
- end:
- if (*p == '\0') {
- vclock->signature = vclock_calc_sum(vclock);
- return 0;
- } else if (isblank(*p)) {
- ++p;
- goto end;
- }
- /* goto error; */
- error:
- return p - str + 1; /* error */
+begin:
+ if (*p == '{') {
+ ++p;
+ goto key;
+ } else if (isblank(*p)) {
+ ++p;
+ goto begin;
+ }
+ goto error;
+key:
+ if (isdigit(*p)) {
+ errno = 0;
+ replica_id = strtol(p, (char **)&p, 10);
+ if (errno != 0 || replica_id < 0 || replica_id >= VCLOCK_MAX)
+ goto error;
+ goto sep;
+ } else if (*p == '}') {
+ ++p;
+ goto end;
+ } else if (isblank(*p)) {
+ ++p;
+ goto key;
+ }
+ goto error;
+sep:
+ if (*p == ':') {
+ ++p;
+ goto val;
+ } else if (isblank(*p)) {
+ ++p;
+ goto sep;
+ }
+ goto error;
+val:
+ if (isblank(*p)) {
+ ++p;
+ goto val;
+ } else if (isdigit(*p)) {
+ errno = 0;
+ lsn = strtoll(p, (char **)&p, 10);
+ if (errno != 0 || lsn < 0 || lsn > INT64_MAX ||
+ replica_id >= VCLOCK_MAX ||
+ vclock_get(vclock, replica_id) > 0)
+ goto error;
+ vclock->map |= 1 << replica_id;
+ vclock->lsn[replica_id] = lsn;
+ goto comma;
+ }
+ goto error;
+comma:
+ if (isspace(*p)) {
+ ++p;
+ goto comma;
+ } else if (*p == '}') {
+ ++p;
+ goto end;
+ } else if (*p == ',') {
+ ++p;
+ goto key;
+ }
+ goto error;
+end:
+ if (*p == '\0') {
+ vclock->signature = vclock_calc_sum(vclock);
+ return 0;
+ } else if (isblank(*p)) {
+ ++p;
+ goto end;
+ }
+ /* goto error; */
+error:
+ return p - str + 1; /* error */
}
static int
diff --git a/src/box/vclock.h b/src/box/vclock.h
index 5865f74..bb7eac8 100644
--- a/src/box/vclock.h
+++ b/src/box/vclock.h
@@ -98,8 +98,7 @@ struct vclock_c {
int64_t lsn;
};
-struct vclock_iterator
-{
+struct vclock_iterator {
struct bit_iterator it;
const struct vclock *vclock;
};
@@ -116,14 +115,13 @@ vclock_iterator_next(struct vclock_iterator *it)
{
struct vclock_c c = { 0, 0 };
size_t id = bit_iterator_next(&it->it);
- c.id = id == SIZE_MAX ? (int) VCLOCK_MAX : id;
+ c.id = id == SIZE_MAX ? (int)VCLOCK_MAX : id;
if (c.id < VCLOCK_MAX)
c.lsn = it->vclock->lsn[c.id];
return c;
}
-
-#define vclock_foreach(it, var) \
+#define vclock_foreach(it, var) \
for (struct vclock_c var = vclock_iterator_next(it); \
(var).id < VCLOCK_MAX; (var) = vclock_iterator_next(it))
@@ -215,8 +213,8 @@ vclock_copy(struct vclock *dst, const struct vclock *src)
* undefined result if zero passed.
*/
unsigned int max_pos = VCLOCK_MAX - bit_clz_u32(src->map | 0x01);
- memcpy(dst, src, offsetof(struct vclock, lsn) +
- sizeof(*dst->lsn) * max_pos);
+ memcpy(dst, src,
+ offsetof(struct vclock, lsn) + sizeof(*dst->lsn) * max_pos);
}
static inline uint32_t
@@ -237,8 +235,7 @@ vclock_calc_sum(const struct vclock *vclock)
int64_t sum = 0;
struct vclock_iterator it;
vclock_iterator_init(&it, vclock);
- vclock_foreach(&it, replica)
- sum += replica.lsn;
+ vclock_foreach(&it, replica) sum += replica.lsn;
return sum;
}
@@ -268,8 +265,8 @@ vclock_merge(struct vclock *dst, struct vclock *diff)
{
struct vclock_iterator it;
vclock_iterator_init(&it, diff);
- vclock_foreach(&it, item)
- vclock_follow(dst, item.id, vclock_get(dst, item.id) + item.lsn);
+ vclock_foreach(&it, item) vclock_follow(
+ dst, item.id, vclock_get(dst, item.id) + item.lsn);
vclock_create(diff);
}
@@ -377,8 +374,8 @@ vclock_lex_compare(const struct vclock *a, const struct vclock *b)
vclock_map_t map = a->map | b->map;
struct bit_iterator it;
bit_iterator_init(&it, &map, sizeof(map), true);
- for(size_t replica_id = bit_iterator_next(&it); replica_id < VCLOCK_MAX;
- replica_id = bit_iterator_next(&it)) {
+ for (size_t replica_id = bit_iterator_next(&it);
+ replica_id < VCLOCK_MAX; replica_id = bit_iterator_next(&it)) {
int64_t lsn_a = vclock_get(a, replica_id);
int64_t lsn_b = vclock_get(b, replica_id);
if (lsn_a < lsn_b)
@@ -406,7 +403,7 @@ vclock_min_ignore0(struct vclock *a, const struct vclock *b)
if (replica_id == 0)
replica_id = bit_iterator_next(&it);
- for( ; replica_id < VCLOCK_MAX; replica_id = bit_iterator_next(&it)) {
+ for (; replica_id < VCLOCK_MAX; replica_id = bit_iterator_next(&it)) {
int64_t lsn_a = vclock_get(a, replica_id);
int64_t lsn_b = vclock_get(b, replica_id);
if (lsn_a <= lsn_b)
diff --git a/src/box/vinyl.c b/src/box/vinyl.c
index 7e56299..0035581 100644
--- a/src/box/vinyl.c
+++ b/src/box/vinyl.c
@@ -104,7 +104,7 @@ struct vy_env {
/** Memory pool for index iterator. */
struct mempool iterator_pool;
/** Memory quota */
- struct vy_quota quota;
+ struct vy_quota quota;
/** Statement environment. */
struct vy_stmt_env stmt_env;
/** Common LSM tree environment. */
@@ -134,14 +134,14 @@ struct vy_env {
/** Mask passed to vy_gc(). */
enum {
/** Delete incomplete runs. */
- VY_GC_INCOMPLETE = 1 << 0,
+ VY_GC_INCOMPLETE = 1 << 0,
/** Delete dropped runs. */
- VY_GC_DROPPED = 1 << 1,
+ VY_GC_DROPPED = 1 << 1,
};
static void
-vy_gc(struct vy_env *env, struct vy_recovery *recovery,
- unsigned int gc_mask, int64_t gc_lsn);
+vy_gc(struct vy_env *env, struct vy_recovery *recovery, unsigned int gc_mask,
+ int64_t gc_lsn);
struct vinyl_iterator {
struct iterator base;
@@ -259,8 +259,9 @@ vy_info_append_regulator(struct vy_env *env, struct info_handler *h)
info_append_int(h, "write_rate", r->write_rate);
info_append_int(h, "dump_bandwidth", r->dump_bandwidth);
info_append_int(h, "dump_watermark", r->dump_watermark);
- info_append_int(h, "rate_limit", vy_quota_get_rate_limit(r->quota,
- VY_QUOTA_CONSUMER_TX));
+ info_append_int(h, "rate_limit",
+ vy_quota_get_rate_limit(r->quota,
+ VY_QUOTA_CONSUMER_TX));
info_table_end(h); /* regulator */
}
@@ -419,9 +420,12 @@ vinyl_index_stat(struct index *index, struct info_handler *h)
info_table_begin(h, "compaction");
info_append_int(h, "count", stat->disk.compaction.count);
info_append_double(h, "time", stat->disk.compaction.time);
- vy_info_append_disk_stmt_counter(h, "input", &stat->disk.compaction.input);
- vy_info_append_disk_stmt_counter(h, "output", &stat->disk.compaction.output);
- vy_info_append_disk_stmt_counter(h, "queue", &stat->disk.compaction.queue);
+ vy_info_append_disk_stmt_counter(h, "input",
+ &stat->disk.compaction.input);
+ vy_info_append_disk_stmt_counter(h, "output",
+ &stat->disk.compaction.output);
+ vy_info_append_disk_stmt_counter(h, "queue",
+ &stat->disk.compaction.queue);
info_table_end(h); /* compaction */
info_append_int(h, "index_size", lsm->page_index_size);
info_append_int(h, "bloom_size", lsm->bloom_size);
@@ -504,7 +508,7 @@ vinyl_engine_memory_stat(struct engine *engine, struct engine_memory_stat *stat)
struct vy_env *env = vy_env(engine);
stat->data += lsregion_used(&env->mem_env.allocator) -
- env->mem_env.tree_extent_size;
+ env->mem_env.tree_extent_size;
stat->index += env->mem_env.tree_extent_size;
stat->index += env->lsm_env.bloom_size;
stat->index += env->lsm_env.page_index_size;
@@ -563,7 +567,7 @@ vy_lsm_find(struct space *space, uint32_t iid)
* Wrapper around vy_lsm_find() which ensures that
* the found index is unique.
*/
-static struct vy_lsm *
+static struct vy_lsm *
vy_lsm_find_unique(struct space *space, uint32_t index_id)
{
struct vy_lsm *lsm = vy_lsm_find(space, index_id);
@@ -578,8 +582,8 @@ static int
vinyl_engine_check_space_def(struct space_def *def)
{
if (def->opts.is_temporary) {
- diag_set(ClientError, ER_ALTER_SPACE,
- def->name, "engine does not support temporary flag");
+ diag_set(ClientError, ER_ALTER_SPACE, def->name,
+ "engine does not support temporary flag");
return -1;
}
return 0;
@@ -592,8 +596,7 @@ vinyl_engine_create_space(struct engine *engine, struct space_def *def,
struct vy_env *env = vy_env(engine);
struct space *space = malloc(sizeof(*space));
if (space == NULL) {
- diag_set(OutOfMemory, sizeof(*space),
- "malloc", "struct space");
+ diag_set(OutOfMemory, sizeof(*space), "malloc", "struct space");
return NULL;
}
@@ -625,8 +628,8 @@ vinyl_engine_create_space(struct engine *engine, struct space_def *def,
}
tuple_format_ref(format);
- if (space_create(space, engine, &vinyl_space_vtab,
- def, key_list, format) != 0) {
+ if (space_create(space, engine, &vinyl_space_vtab, def, key_list,
+ format) != 0) {
tuple_format_unref(format);
free(space);
return NULL;
@@ -647,8 +650,8 @@ static int
vinyl_space_check_index_def(struct space *space, struct index_def *index_def)
{
if (index_def->type != TREE) {
- diag_set(ClientError, ER_INDEX_TYPE,
- index_def->name, space_name(space));
+ diag_set(ClientError, ER_INDEX_TYPE, index_def->name,
+ space_name(space));
return -1;
}
@@ -663,8 +666,8 @@ vinyl_space_check_index_def(struct space *space, struct index_def *index_def)
struct key_part *part = &key_def->parts[i];
if (part->type <= FIELD_TYPE_ANY ||
part->type >= FIELD_TYPE_ARRAY) {
- diag_set(ClientError, ER_MODIFY_INDEX,
- index_def->name, space_name(space),
+ diag_set(ClientError, ER_MODIFY_INDEX, index_def->name,
+ space_name(space),
tt_sprintf("field type '%s' is not supported",
field_type_strs[part->type]));
return -1;
@@ -694,8 +697,8 @@ vinyl_space_create_index(struct space *space, struct index_def *index_def)
if (lsm == NULL)
return NULL;
- if (index_create(&lsm->base, &env->base,
- &vinyl_index_vtab, index_def) != 0) {
+ if (index_create(&lsm->base, &env->base, &vinyl_index_vtab,
+ index_def) != 0) {
vy_lsm_delete(lsm);
return NULL;
}
@@ -1143,8 +1146,8 @@ vinyl_space_swap_index(struct space *old_space, struct space *new_space,
* Swap the two indexes between the two spaces,
* but leave tuple formats.
*/
- generic_space_swap_index(old_space, new_space,
- old_index_id, new_index_id);
+ generic_space_swap_index(old_space, new_space, old_index_id,
+ new_index_id);
SWAP(old_lsm, new_lsm);
SWAP(old_lsm->mem_format, new_lsm->mem_format);
@@ -1201,8 +1204,8 @@ vinyl_index_bsize(struct index *index)
* they are only needed for building the index.
*/
struct vy_lsm *lsm = vy_lsm(index);
- ssize_t bsize = vy_lsm_mem_tree_size(lsm) +
- lsm->page_index_size + lsm->bloom_size;
+ ssize_t bsize = vy_lsm_mem_tree_size(lsm) + lsm->page_index_size +
+ lsm->bloom_size;
if (lsm->index_id > 0)
bsize += lsm->stat.disk.count.bytes;
return bsize;
@@ -1266,8 +1269,8 @@ vy_is_committed(struct vy_env *env, struct vy_lsm *lsm)
*/
static int
vy_get_by_secondary_tuple(struct vy_lsm *lsm, struct vy_tx *tx,
- const struct vy_read_view **rv,
- struct vy_entry entry, struct vy_entry *result)
+ const struct vy_read_view **rv, struct vy_entry entry,
+ struct vy_entry *result)
{
int rc = 0;
assert(lsm->index_id > 0);
@@ -1306,8 +1309,8 @@ vy_get_by_secondary_tuple(struct vy_lsm *lsm, struct vy_tx *tx,
struct vy_entry full_entry;
if (pk_entry.stmt != NULL) {
vy_stmt_foreach_entry(full_entry, pk_entry.stmt, lsm->cmp_def) {
- if (vy_entry_compare(full_entry, entry,
- lsm->cmp_def) == 0) {
+ if (vy_entry_compare(full_entry, entry, lsm->cmp_def) ==
+ 0) {
match = true;
break;
}
@@ -1354,8 +1357,8 @@ vy_get_by_secondary_tuple(struct vy_lsm *lsm, struct vy_tx *tx,
}
if ((*rv)->vlsn == INT64_MAX) {
- vy_cache_add(&lsm->pk->cache, pk_entry,
- vy_entry_none(), key, ITER_EQ);
+ vy_cache_add(&lsm->pk->cache, pk_entry, vy_entry_none(), key,
+ ITER_EQ);
}
vy_stmt_counter_acct_tuple(&lsm->pk->stat.get, pk_entry.stmt);
@@ -1378,8 +1381,7 @@ out:
* @param -1 Memory error or read error.
*/
static int
-vy_get(struct vy_lsm *lsm, struct vy_tx *tx,
- const struct vy_read_view **rv,
+vy_get(struct vy_lsm *lsm, struct vy_tx *tx, const struct vy_read_view **rv,
struct tuple *key_stmt, struct tuple **result)
{
double start_time = ev_monotonic_now(loop());
@@ -1407,8 +1409,8 @@ vy_get(struct vy_lsm *lsm, struct vy_tx *tx,
if (vy_point_lookup(lsm, tx, rv, key, &partial) != 0)
return -1;
if (lsm->index_id > 0 && partial.stmt != NULL) {
- rc = vy_get_by_secondary_tuple(lsm, tx, rv,
- partial, &entry);
+ rc = vy_get_by_secondary_tuple(lsm, tx, rv, partial,
+ &entry);
tuple_unref(partial.stmt);
if (rc != 0)
return -1;
@@ -1416,8 +1418,8 @@ vy_get(struct vy_lsm *lsm, struct vy_tx *tx,
entry = partial;
}
if ((*rv)->vlsn == INT64_MAX) {
- vy_cache_add(&lsm->cache, entry,
- vy_entry_none(), key, ITER_EQ);
+ vy_cache_add(&lsm->cache, entry, vy_entry_none(), key,
+ ITER_EQ);
}
goto out;
}
@@ -1472,12 +1474,11 @@ out:
*/
static int
vy_get_by_raw_key(struct vy_lsm *lsm, struct vy_tx *tx,
- const struct vy_read_view **rv,
- const char *key_raw, uint32_t part_count,
- struct tuple **result)
+ const struct vy_read_view **rv, const char *key_raw,
+ uint32_t part_count, struct tuple **result)
{
- struct tuple *key = vy_key_new(lsm->env->key_format,
- key_raw, part_count);
+ struct tuple *key =
+ vy_key_new(lsm->env->key_format, key_raw, part_count);
if (key == NULL)
return -1;
int rc = vy_get(lsm, tx, rv, key, result);
@@ -1512,15 +1513,15 @@ vy_check_is_unique_primary(struct vy_tx *tx, const struct vy_read_view **rv,
return -1;
if (found != NULL) {
tuple_unref(found);
- diag_set(ClientError, ER_TUPLE_FOUND,
- index_name, space_name);
+ diag_set(ClientError, ER_TUPLE_FOUND, index_name, space_name);
return -1;
}
return 0;
}
static int
-vy_check_is_unique_secondary_one(struct vy_tx *tx, const struct vy_read_view **rv,
+vy_check_is_unique_secondary_one(struct vy_tx *tx,
+ const struct vy_read_view **rv,
const char *space_name, const char *index_name,
struct vy_lsm *lsm, struct tuple *stmt,
int multikey_idx)
@@ -1532,9 +1533,8 @@ vy_check_is_unique_secondary_one(struct vy_tx *tx, const struct vy_read_view **r
if (lsm->key_def->is_nullable &&
tuple_key_contains_null(stmt, lsm->key_def, multikey_idx))
return 0;
- struct tuple *key = vy_stmt_extract_key(stmt, lsm->key_def,
- lsm->env->key_format,
- multikey_idx);
+ struct tuple *key = vy_stmt_extract_key(
+ stmt, lsm->key_def, lsm->env->key_format, multikey_idx);
if (key == NULL)
return -1;
struct tuple *found;
@@ -1560,8 +1560,7 @@ vy_check_is_unique_secondary_one(struct vy_tx *tx, const struct vy_read_view **r
}
if (found != NULL) {
tuple_unref(found);
- diag_set(ClientError, ER_TUPLE_FOUND,
- index_name, space_name);
+ diag_set(ClientError, ER_TUPLE_FOUND, index_name, space_name);
return -1;
}
return 0;
@@ -1587,14 +1586,14 @@ vy_check_is_unique_secondary(struct vy_tx *tx, const struct vy_read_view **rv,
{
assert(lsm->opts.is_unique);
if (!lsm->cmp_def->is_multikey) {
- return vy_check_is_unique_secondary_one(tx, rv,
- space_name, index_name, lsm, stmt,
- MULTIKEY_NONE);
+ return vy_check_is_unique_secondary_one(tx, rv, space_name,
+ index_name, lsm, stmt,
+ MULTIKEY_NONE);
}
int count = tuple_multikey_count(stmt, lsm->cmp_def);
for (int i = 0; i < count; ++i) {
- if (vy_check_is_unique_secondary_one(tx, rv,
- space_name, index_name, lsm, stmt, i) != 0)
+ if (vy_check_is_unique_secondary_one(
+ tx, rv, space_name, index_name, lsm, stmt, i) != 0)
return -1;
}
return 0;
@@ -1616,9 +1615,8 @@ vy_check_is_unique_secondary(struct vy_tx *tx, const struct vy_read_view **rv,
* @retval -1 Duplicate is found or read error occurred.
*/
static int
-vy_check_is_unique(struct vy_env *env, struct vy_tx *tx,
- struct space *space, struct tuple *stmt,
- uint64_t column_mask)
+vy_check_is_unique(struct vy_env *env, struct vy_tx *tx, struct space *space,
+ struct tuple *stmt, uint64_t column_mask)
{
assert(space->index_count > 0);
assert(vy_stmt_type(stmt) == IPROTO_INSERT ||
@@ -1641,8 +1639,8 @@ vy_check_is_unique(struct vy_env *env, struct vy_tx *tx,
vy_stmt_type(stmt) == IPROTO_INSERT) {
struct vy_lsm *lsm = vy_lsm(space->index[0]);
if (vy_check_is_unique_primary(tx, rv, space_name(space),
- index_name_by_id(space, 0),
- lsm, stmt) != 0)
+ index_name_by_id(space, 0), lsm,
+ stmt) != 0)
return -1;
}
@@ -1678,8 +1676,7 @@ vy_check_is_unique(struct vy_env *env, struct vy_tx *tx,
* in the diagnostics area.
*/
static inline int
-vy_unique_key_validate(struct vy_lsm *lsm, const char *key,
- uint32_t part_count)
+vy_unique_key_validate(struct vy_lsm *lsm, const char *key, uint32_t part_count)
{
assert(lsm->opts.is_unique);
assert(key != NULL || part_count == 0);
@@ -1694,8 +1691,8 @@ vy_unique_key_validate(struct vy_lsm *lsm, const char *key,
*/
uint32_t original_part_count = lsm->key_def->part_count;
if (original_part_count != part_count) {
- diag_set(ClientError, ER_EXACT_MATCH,
- original_part_count, part_count);
+ diag_set(ClientError, ER_EXACT_MATCH, original_part_count,
+ part_count);
return -1;
}
const char *key_end;
@@ -1740,8 +1737,8 @@ vy_delete(struct vy_env *env, struct vy_tx *tx, struct txn_stmt *stmt,
* - if deletion is done by a secondary index.
*/
if (lsm->index_id > 0 || !rlist_empty(&space->on_replace)) {
- if (vy_get_by_raw_key(lsm, tx, vy_tx_read_view(tx),
- key, part_count, &stmt->old_tuple) != 0)
+ if (vy_get_by_raw_key(lsm, tx, vy_tx_read_view(tx), key,
+ part_count, &stmt->old_tuple) != 0)
return -1;
if (stmt->old_tuple == NULL)
return 0;
@@ -1763,8 +1760,8 @@ vy_delete(struct vy_env *env, struct vy_tx *tx, struct txn_stmt *stmt,
}
} else {
assert(lsm->index_id == 0);
- delete = vy_stmt_new_delete(pk->env->key_format,
- request->key, request->key_end);
+ delete = vy_stmt_new_delete(pk->env->key_format, request->key,
+ request->key_end);
if (delete == NULL)
return -1;
if (space->index_count > 1)
@@ -1797,8 +1794,8 @@ vy_check_update(struct space *space, const struct vy_lsm *pk,
uint64_t column_mask)
{
if (!key_update_can_be_skipped(pk->key_def->column_mask, column_mask) &&
- vy_stmt_compare(old_tuple, HINT_NONE, new_tuple,
- HINT_NONE, pk->key_def) != 0) {
+ vy_stmt_compare(old_tuple, HINT_NONE, new_tuple, HINT_NONE,
+ pk->key_def) != 0) {
diag_set(ClientError, ER_CANT_UPDATE_PRIMARY_KEY,
index_name_by_id(space, pk->index_id),
space_name(space));
@@ -1821,8 +1818,8 @@ vy_perform_update(struct vy_env *env, struct vy_tx *tx, struct txn_stmt *stmt,
assert(stmt->old_tuple != NULL);
assert(stmt->new_tuple != NULL);
- if (vy_check_is_unique(env, tx, space, stmt->new_tuple,
- column_mask) != 0)
+ if (vy_check_is_unique(env, tx, space, stmt->new_tuple, column_mask) !=
+ 0)
return -1;
vy_stmt_set_flags(stmt->new_tuple, VY_STMT_UPDATE);
@@ -1832,8 +1829,8 @@ vy_perform_update(struct vy_env *env, struct vy_tx *tx, struct txn_stmt *stmt,
if (space->index_count == 1)
return 0;
- struct tuple *delete = vy_stmt_new_surrogate_delete(pk->mem_format,
- stmt->old_tuple);
+ struct tuple *delete =
+ vy_stmt_new_surrogate_delete(pk->mem_format, stmt->old_tuple);
if (delete == NULL)
return -1;
@@ -1884,8 +1881,8 @@ vy_update(struct vy_env *env, struct vy_tx *tx, struct txn_stmt *stmt,
if (vy_unique_key_validate(lsm, key, part_count))
return -1;
- if (vy_get_by_raw_key(lsm, tx, vy_tx_read_view(tx),
- key, part_count, &stmt->old_tuple) != 0)
+ if (vy_get_by_raw_key(lsm, tx, vy_tx_read_view(tx), key, part_count,
+ &stmt->old_tuple) != 0)
return -1;
/* Nothing to update. */
if (stmt->old_tuple == NULL)
@@ -1910,8 +1907,8 @@ vy_update(struct vy_env *env, struct vy_tx *tx, struct txn_stmt *stmt,
*/
if (tuple_validate_raw(pk->mem_format, new_tuple))
return -1;
- stmt->new_tuple = vy_stmt_new_replace(pk->mem_format, new_tuple,
- new_tuple_end);
+ stmt->new_tuple =
+ vy_stmt_new_replace(pk->mem_format, new_tuple, new_tuple_end);
if (stmt->new_tuple == NULL)
return -1;
if (vy_check_update(space, pk, stmt->old_tuple, stmt->new_tuple,
@@ -1966,9 +1963,8 @@ vy_insert_first_upsert(struct vy_env *env, struct vy_tx *tx,
* @retval -1 Memory error.
*/
static int
-vy_lsm_upsert(struct vy_tx *tx, struct vy_lsm *lsm,
- const char *tuple, const char *tuple_end,
- const char *expr, const char *expr_end)
+vy_lsm_upsert(struct vy_tx *tx, struct vy_lsm *lsm, const char *tuple,
+ const char *tuple_end, const char *expr, const char *expr_end)
{
assert(tx == NULL || tx->state == VINYL_TX_READY);
struct tuple *vystmt;
@@ -2006,7 +2002,7 @@ request_normalize_ops(struct request *request)
ops_end = mp_encode_array(ops_end, op_len);
uint32_t op_name_len;
- const char *op_name = mp_decode_str(&pos, &op_name_len);
+ const char *op_name = mp_decode_str(&pos, &op_name_len);
ops_end = mp_encode_str(ops_end, op_name, op_name_len);
int field_no;
@@ -2122,9 +2118,9 @@ vy_upsert(struct vy_env *env, struct vy_tx *tx, struct txn_stmt *stmt,
* to delete old tuples from secondary indexes.
*/
/* Find the old tuple using the primary key. */
- struct tuple *key = vy_stmt_extract_key_raw(tuple, tuple_end,
- pk->key_def, pk->env->key_format,
- MULTIKEY_NONE);
+ struct tuple *key =
+ vy_stmt_extract_key_raw(tuple, tuple_end, pk->key_def,
+ pk->env->key_format, MULTIKEY_NONE);
if (key == NULL)
return -1;
int rc = vy_get(pk, tx, vy_tx_read_view(tx), key, &stmt->old_tuple);
@@ -2136,8 +2132,8 @@ vy_upsert(struct vy_env *env, struct vy_tx *tx, struct txn_stmt *stmt,
* turns into INSERT.
*/
if (stmt->old_tuple == NULL) {
- stmt->new_tuple = vy_stmt_new_insert(pk->mem_format,
- tuple, tuple_end);
+ stmt->new_tuple =
+ vy_stmt_new_insert(pk->mem_format, tuple, tuple_end);
if (stmt->new_tuple == NULL)
return -1;
return vy_insert_first_upsert(env, tx, space, stmt->new_tuple);
@@ -2159,8 +2155,8 @@ vy_upsert(struct vy_env *env, struct vy_tx *tx, struct txn_stmt *stmt,
if (tuple_validate_raw(pk->mem_format, new_tuple))
return -1;
new_tuple_end = new_tuple + new_size;
- stmt->new_tuple = vy_stmt_new_replace(pk->mem_format, new_tuple,
- new_tuple_end);
+ stmt->new_tuple =
+ vy_stmt_new_replace(pk->mem_format, new_tuple, new_tuple_end);
if (stmt->new_tuple == NULL)
return -1;
if (vy_check_update(space, pk, stmt->old_tuple, stmt->new_tuple,
@@ -2263,8 +2259,8 @@ vy_replace(struct vy_env *env, struct vy_tx *tx, struct txn_stmt *stmt,
* need to pass the old tuple to trigger callbacks.
*/
if (!rlist_empty(&space->on_replace)) {
- if (vy_get(pk, tx, vy_tx_read_view(tx),
- stmt->new_tuple, &stmt->old_tuple) != 0)
+ if (vy_get(pk, tx, vy_tx_read_view(tx), stmt->new_tuple,
+ &stmt->old_tuple) != 0)
return -1;
if (stmt->old_tuple == NULL) {
/*
@@ -2365,7 +2361,7 @@ vinyl_space_execute_update(struct space *space, struct txn *txn,
static int
vinyl_space_execute_upsert(struct space *space, struct txn *txn,
- struct request *request)
+ struct request *request)
{
struct vy_env *env = vy_env(space->engine);
struct vy_tx *tx = txn->engine_tx;
@@ -2391,8 +2387,7 @@ vinyl_engine_prepare(struct engine *engine, struct txn *txn)
struct vy_tx *tx = txn->engine_tx;
assert(tx != NULL);
- if (tx->write_size > 0 &&
- vinyl_check_wal(env, "DML") != 0)
+ if (tx->write_size > 0 && vinyl_check_wal(env, "DML") != 0)
return -1;
/*
@@ -2401,16 +2396,16 @@ vinyl_engine_prepare(struct engine *engine, struct txn *txn)
* available for the admin to track the lag so let the applier
* wait as long as necessary for memory dump to complete.
*/
- double timeout = (tx->is_applier_session ?
- TIMEOUT_INFINITY : env->timeout);
+ double timeout =
+ (tx->is_applier_session ? TIMEOUT_INFINITY : env->timeout);
/*
* Reserve quota needed by the transaction before allocating
* memory. Since this may yield, which opens a time window for
* the transaction to be sent to read view or aborted, we call
* it before checking for conflicts.
*/
- if (vy_quota_use(&env->quota, VY_QUOTA_CONSUMER_TX,
- tx->write_size, timeout) != 0)
+ if (vy_quota_use(&env->quota, VY_QUOTA_CONSUMER_TX, tx->write_size,
+ timeout) != 0)
return -1;
size_t mem_used_before = lsregion_used(&env->mem_env.allocator);
@@ -2419,8 +2414,8 @@ vinyl_engine_prepare(struct engine *engine, struct txn *txn)
size_t mem_used_after = lsregion_used(&env->mem_env.allocator);
assert(mem_used_after >= mem_used_before);
- vy_quota_adjust(&env->quota, VY_QUOTA_CONSUMER_TX,
- tx->write_size, mem_used_after - mem_used_before);
+ vy_quota_adjust(&env->quota, VY_QUOTA_CONSUMER_TX, tx->write_size,
+ mem_used_after - mem_used_before);
vy_regulator_check_dump_watermark(&env->regulator);
return rc;
}
@@ -2522,8 +2517,8 @@ vy_env_trigger_dump_cb(struct vy_regulator *regulator)
}
static void
-vy_env_dump_complete_cb(struct vy_scheduler *scheduler,
- int64_t dump_generation, double dump_duration)
+vy_env_dump_complete_cb(struct vy_scheduler *scheduler, int64_t dump_generation,
+ double dump_duration)
{
struct vy_env *env = container_of(scheduler, struct vy_env, scheduler);
@@ -2557,8 +2552,8 @@ vy_squash_schedule(struct vy_lsm *lsm, struct vy_entry entry,
void /* struct vy_env */ *arg);
static struct vy_env *
-vy_env_new(const char *path, size_t memory,
- int read_threads, int write_threads, bool force_recovery)
+vy_env_new(const char *path, size_t memory, int read_threads, int write_threads,
+ bool force_recovery)
{
struct vy_env *e = malloc(sizeof(*e));
if (unlikely(e == NULL)) {
@@ -2571,8 +2566,7 @@ vy_env_new(const char *path, size_t memory,
e->force_recovery = force_recovery;
e->path = strdup(path);
if (e->path == NULL) {
- diag_set(OutOfMemory, strlen(path),
- "malloc", "env->path");
+ diag_set(OutOfMemory, strlen(path), "malloc", "env->path");
goto error_path;
}
@@ -2586,22 +2580,20 @@ vy_env_new(const char *path, size_t memory,
vy_stmt_env_create(&e->stmt_env);
vy_mem_env_create(&e->mem_env, memory);
vy_scheduler_create(&e->scheduler, write_threads,
- vy_env_dump_complete_cb,
- &e->run_env, &e->xm->read_views);
+ vy_env_dump_complete_cb, &e->run_env,
+ &e->xm->read_views);
- if (vy_lsm_env_create(&e->lsm_env, e->path,
- &e->scheduler.generation,
- e->stmt_env.key_format,
- vy_squash_schedule, e) != 0)
+ if (vy_lsm_env_create(&e->lsm_env, e->path, &e->scheduler.generation,
+ e->stmt_env.key_format, vy_squash_schedule,
+ e) != 0)
goto error_lsm_env;
vy_quota_create(&e->quota, memory, vy_env_quota_exceeded_cb);
- vy_regulator_create(&e->regulator, &e->quota,
- vy_env_trigger_dump_cb);
+ vy_regulator_create(&e->regulator, &e->quota, vy_env_trigger_dump_cb);
struct slab_cache *slab_cache = cord_slab_cache();
mempool_create(&e->iterator_pool, slab_cache,
- sizeof(struct vinyl_iterator));
+ sizeof(struct vinyl_iterator));
vy_cache_env_create(&e->cache_env, slab_cache);
vy_run_env_create(&e->run_env, read_threads);
vy_log_init(e->path);
@@ -2655,8 +2647,8 @@ vy_env_complete_recovery(struct vy_env *env)
}
struct engine *
-vinyl_engine_new(const char *dir, size_t memory,
- int read_threads, int write_threads, bool force_recovery)
+vinyl_engine_new(const char *dir, size_t memory, int read_threads,
+ int write_threads, bool force_recovery)
{
struct vy_env *env = vy_env_new(dir, memory, read_threads,
write_threads, force_recovery);
@@ -2734,7 +2726,7 @@ vinyl_engine_set_snap_io_rate_limit(struct engine *engine, double limit)
static int
vinyl_engine_begin_checkpoint(struct engine *engine, bool is_scheduled)
{
- (void) is_scheduled;
+ (void)is_scheduled;
struct vy_env *env = vy_env(engine);
assert(env->status == VINYL_ONLINE);
/*
@@ -2750,8 +2742,7 @@ vinyl_engine_begin_checkpoint(struct engine *engine, bool is_scheduled)
}
static int
-vinyl_engine_wait_checkpoint(struct engine *engine,
- const struct vclock *vclock)
+vinyl_engine_wait_checkpoint(struct engine *engine, const struct vclock *vclock)
{
struct vy_env *env = vy_env(engine);
assert(env->status == VINYL_ONLINE);
@@ -2941,8 +2932,8 @@ vy_join_add_space(struct space *space, void *arg)
return 0;
struct vy_join_entry *entry = malloc(sizeof(*entry));
if (entry == NULL) {
- diag_set(OutOfMemory, sizeof(*entry),
- "malloc", "struct vy_join_entry");
+ diag_set(OutOfMemory, sizeof(*entry), "malloc",
+ "struct vy_join_entry");
return -1;
}
entry->space_id = space_id(space);
@@ -2961,8 +2952,8 @@ vinyl_engine_prepare_join(struct engine *engine, void **arg)
(void)engine;
struct vy_join_ctx *ctx = malloc(sizeof(*ctx));
if (ctx == NULL) {
- diag_set(OutOfMemory, sizeof(*ctx),
- "malloc", "struct vy_join_ctx");
+ diag_set(OutOfMemory, sizeof(*ctx), "malloc",
+ "struct vy_join_ctx");
return -1;
}
rlist_create(&ctx->entries);
@@ -2975,8 +2966,8 @@ vinyl_engine_prepare_join(struct engine *engine, void **arg)
}
static int
-vy_join_send_tuple(struct xstream *stream, uint32_t space_id,
- const char *data, size_t size)
+vy_join_send_tuple(struct xstream *stream, uint32_t space_id, const char *data,
+ size_t size)
{
struct request_replace_body body;
request_replace_body_create(&body, space_id);
@@ -3007,8 +2998,8 @@ vinyl_engine_join(struct engine *engine, void *arg, struct xstream *stream)
uint32_t size;
const char *data;
while ((rc = it->next(it, &data, &size)) == 0 && data != NULL) {
- if (vy_join_send_tuple(stream, entry->space_id,
- data, size) != 0)
+ if (vy_join_send_tuple(stream, entry->space_id, data,
+ size) != 0)
return -1;
}
if (rc != 0)
@@ -3043,8 +3034,7 @@ vinyl_engine_complete_join(struct engine *engine, void *arg)
* next log rotation.
*/
static void
-vy_gc_run(struct vy_env *env,
- struct vy_lsm_recovery_info *lsm_info,
+vy_gc_run(struct vy_env *env, struct vy_lsm_recovery_info *lsm_info,
struct vy_run_recovery_info *run_info)
{
/* Try to delete files. */
@@ -3072,8 +3062,7 @@ vy_gc_run(struct vy_env *env,
static void
vy_gc_lsm(struct vy_lsm_recovery_info *lsm_info)
{
- assert(lsm_info->drop_lsn >= 0 ||
- lsm_info->create_lsn < 0);
+ assert(lsm_info->drop_lsn >= 0 || lsm_info->create_lsn < 0);
vy_log_tx_begin();
if (lsm_info->drop_lsn < 0) {
@@ -3097,8 +3086,7 @@ vy_gc_lsm(struct vy_lsm_recovery_info *lsm_info)
vy_log_drop_run(run_info->id, run_info->gc_lsn);
}
}
- if (rlist_empty(&lsm_info->ranges) &&
- rlist_empty(&lsm_info->runs))
+ if (rlist_empty(&lsm_info->ranges) && rlist_empty(&lsm_info->runs))
vy_log_forget_lsm(lsm_info->id);
vy_log_tx_try_commit();
}
@@ -3111,8 +3099,8 @@ vy_gc_lsm(struct vy_lsm_recovery_info *lsm_info)
* @param gc_lsn LSN of the oldest checkpoint to save.
*/
static void
-vy_gc(struct vy_env *env, struct vy_recovery *recovery,
- unsigned int gc_mask, int64_t gc_lsn)
+vy_gc(struct vy_env *env, struct vy_recovery *recovery, unsigned int gc_mask,
+ int64_t gc_lsn)
{
int loops = 0;
struct vy_lsm_recovery_info *lsm_info;
@@ -3246,8 +3234,8 @@ struct vy_squash_queue {
};
static struct vy_squash *
-vy_squash_new(struct mempool *pool, struct vy_env *env,
- struct vy_lsm *lsm, struct vy_entry entry)
+vy_squash_new(struct mempool *pool, struct vy_env *env, struct vy_lsm *lsm,
+ struct vy_entry entry)
{
struct vy_squash *squash;
squash = mempool_alloc(pool);
@@ -3319,7 +3307,8 @@ vy_squash_process(struct vy_squash *squash)
uint8_t n_upserts = 0;
while (!vy_mem_tree_iterator_is_invalid(&mem_itr)) {
struct vy_entry mem_entry;
- mem_entry = *vy_mem_tree_iterator_get_elem(&mem->tree, &mem_itr);
+ mem_entry =
+ *vy_mem_tree_iterator_get_elem(&mem->tree, &mem_itr);
if (vy_entry_compare(result, mem_entry, lsm->cmp_def) != 0 ||
vy_stmt_type(mem_entry.stmt) != IPROTO_UPSERT)
break;
@@ -3367,8 +3356,7 @@ vy_squash_queue_new(void)
sq->fiber = NULL;
fiber_cond_create(&sq->cond);
stailq_create(&sq->queue);
- mempool_create(&sq->pool, cord_slab_cache(),
- sizeof(struct vy_squash));
+ mempool_create(&sq->pool, cord_slab_cache(), sizeof(struct vy_squash));
return sq;
}
@@ -3414,8 +3402,8 @@ vy_squash_schedule(struct vy_lsm *lsm, struct vy_entry entry, void *arg)
struct vy_env *env = arg;
struct vy_squash_queue *sq = env->squash_queue;
- say_verbose("%s: schedule upsert optimization for %s",
- vy_lsm_name(lsm), vy_stmt_str(entry.stmt));
+ say_verbose("%s: schedule upsert optimization for %s", vy_lsm_name(lsm),
+ vy_stmt_str(entry.stmt));
/* Start the upsert squashing fiber on demand. */
if (sq->fiber == NULL) {
@@ -3443,8 +3431,8 @@ static int
vinyl_iterator_on_tx_destroy(struct trigger *trigger, void *event)
{
(void)event;
- struct vinyl_iterator *it = container_of(trigger,
- struct vinyl_iterator, on_tx_destroy);
+ struct vinyl_iterator *it =
+ container_of(trigger, struct vinyl_iterator, on_tx_destroy);
it->tx = NULL;
return 0;
}
@@ -3647,12 +3635,12 @@ vinyl_index_create_iterator(struct index *base, enum iterator_type type,
struct vinyl_iterator *it = mempool_alloc(&env->iterator_pool);
if (it == NULL) {
- diag_set(OutOfMemory, sizeof(struct vinyl_iterator),
- "mempool", "struct vinyl_iterator");
+ diag_set(OutOfMemory, sizeof(struct vinyl_iterator), "mempool",
+ "struct vinyl_iterator");
return NULL;
}
- it->key = vy_entry_key_new(lsm->env->key_format, lsm->cmp_def,
- key, part_count);
+ it->key = vy_entry_key_new(lsm->env->key_format, lsm->cmp_def, key,
+ part_count);
if (it->key.stmt == NULL) {
mempool_free(&env->iterator_pool, it);
return NULL;
@@ -3671,8 +3659,8 @@ vinyl_index_create_iterator(struct index *base, enum iterator_type type,
* Register a trigger that will abort this iterator
* when the transaction ends.
*/
- trigger_create(&it->on_tx_destroy,
- vinyl_iterator_on_tx_destroy, NULL, NULL);
+ trigger_create(&it->on_tx_destroy, vinyl_iterator_on_tx_destroy,
+ NULL, NULL);
trigger_add(&tx->on_destroy, &it->on_tx_destroy);
} else {
tx = &it->tx_autocommit;
@@ -3687,8 +3675,8 @@ vinyl_index_create_iterator(struct index *base, enum iterator_type type,
}
static int
-vinyl_snapshot_iterator_next(struct snapshot_iterator *base,
- const char **data, uint32_t *size)
+vinyl_snapshot_iterator_next(struct snapshot_iterator *base, const char **data,
+ uint32_t *size)
{
assert(base->next == vinyl_snapshot_iterator_next);
struct vinyl_snapshot_iterator *it =
@@ -3734,8 +3722,8 @@ vinyl_index_create_snapshot_iterator(struct index *base)
free(it);
return NULL;
}
- vy_read_iterator_open(&it->iterator, lsm, NULL,
- ITER_ALL, lsm->env->empty_key,
+ vy_read_iterator_open(&it->iterator, lsm, NULL, ITER_ALL,
+ lsm->env->empty_key,
(const struct vy_read_view **)&it->rv);
/*
* The index may be dropped while we are reading it.
@@ -3747,8 +3735,8 @@ vinyl_index_create_snapshot_iterator(struct index *base)
}
static int
-vinyl_index_get(struct index *index, const char *key,
- uint32_t part_count, struct tuple **ret)
+vinyl_index_get(struct index *index, const char *key, uint32_t part_count,
+ struct tuple **ret)
{
assert(index->def->opts.is_unique);
assert(index->def->key_def->part_count == part_count);
@@ -3756,8 +3744,9 @@ vinyl_index_get(struct index *index, const char *key,
struct vy_lsm *lsm = vy_lsm(index);
struct vy_env *env = vy_env(index->engine);
struct vy_tx *tx = in_txn() ? in_txn()->engine_tx : NULL;
- const struct vy_read_view **rv = (tx != NULL ? vy_tx_read_view(tx) :
- &env->xm->p_global_read_view);
+ const struct vy_read_view **rv = (tx != NULL ?
+ vy_tx_read_view(tx) :
+ &env->xm->p_global_read_view);
if (tx != NULL && tx->state == VINYL_TX_ABORT) {
diag_set(ClientError, ER_TRANSACTION_CONFLICT);
@@ -3832,14 +3821,14 @@ vy_build_on_replace(struct trigger *trigger, void *event)
/* Check key uniqueness if necessary. */
if (ctx->check_unique_constraint && stmt->new_tuple != NULL &&
vy_check_is_unique_secondary(tx, vy_tx_read_view(tx),
- ctx->space_name, ctx->index_name,
- lsm, stmt->new_tuple) != 0)
+ ctx->space_name, ctx->index_name, lsm,
+ stmt->new_tuple) != 0)
goto err;
/* Forward the statement to the new LSM tree. */
if (stmt->old_tuple != NULL) {
- struct tuple *delete = vy_stmt_new_surrogate_delete(format,
- stmt->old_tuple);
+ struct tuple *delete =
+ vy_stmt_new_surrogate_delete(format, stmt->old_tuple);
if (delete == NULL)
goto err;
int rc = vy_tx_set(tx, lsm, delete);
@@ -3850,8 +3839,8 @@ vy_build_on_replace(struct trigger *trigger, void *event)
if (stmt->new_tuple != NULL) {
uint32_t data_len;
const char *data = tuple_data_range(stmt->new_tuple, &data_len);
- struct tuple *insert = vy_stmt_new_insert(format, data,
- data + data_len);
+ struct tuple *insert =
+ vy_stmt_new_insert(format, data, data + data_len);
if (insert == NULL)
goto err;
int rc = vy_tx_set(tx, lsm, insert);
@@ -3880,11 +3869,11 @@ err:
* being built.
*/
static int
-vy_build_insert_stmt(struct vy_lsm *lsm, struct vy_mem *mem,
- struct tuple *stmt, int64_t lsn)
+vy_build_insert_stmt(struct vy_lsm *lsm, struct vy_mem *mem, struct tuple *stmt,
+ int64_t lsn)
{
- struct tuple *region_stmt = vy_stmt_dup_lsregion(stmt,
- &mem->env->allocator, mem->generation);
+ struct tuple *region_stmt = vy_stmt_dup_lsregion(
+ stmt, &mem->env->allocator, mem->generation);
if (region_stmt == NULL)
return -1;
vy_stmt_set_lsn(region_stmt, lsn);
@@ -3920,8 +3909,8 @@ vy_build_insert_tuple(struct vy_env *env, struct vy_lsm *lsm,
/* Reallocate the new tuple using the new space format. */
uint32_t data_len;
const char *data = tuple_data_range(tuple, &data_len);
- struct tuple *stmt = vy_stmt_new_replace(new_format, data,
- data + data_len);
+ struct tuple *stmt =
+ vy_stmt_new_replace(new_format, data, data + data_len);
if (stmt == NULL)
return -1;
@@ -3946,9 +3935,9 @@ vy_build_insert_tuple(struct vy_env *env, struct vy_lsm *lsm,
*/
if (check_unique_constraint) {
vy_mem_pin(mem);
- rc = vy_check_is_unique_secondary(NULL,
- &env->xm->p_committed_read_view,
- space_name, index_name, lsm, stmt);
+ rc = vy_check_is_unique_secondary(
+ NULL, &env->xm->p_committed_read_view, space_name,
+ index_name, lsm, stmt);
vy_mem_unpin(mem);
if (rc != 0) {
tuple_unref(stmt);
@@ -4023,19 +4012,19 @@ vy_build_recover_stmt(struct vy_lsm *lsm, struct vy_lsm *pk,
if (type == IPROTO_REPLACE || type == IPROTO_INSERT) {
uint32_t data_len;
const char *data = tuple_data_range(mem_stmt, &data_len);
- insert = vy_stmt_new_insert(lsm->mem_format,
- data, data + data_len);
+ insert = vy_stmt_new_insert(lsm->mem_format, data,
+ data + data_len);
if (insert == NULL)
goto err;
} else if (type == IPROTO_UPSERT) {
- struct tuple *new_tuple = vy_apply_upsert(mem_stmt, old_tuple,
- pk->cmp_def, true);
+ struct tuple *new_tuple =
+ vy_apply_upsert(mem_stmt, old_tuple, pk->cmp_def, true);
if (new_tuple == NULL)
goto err;
uint32_t data_len;
const char *data = tuple_data_range(new_tuple, &data_len);
- insert = vy_stmt_new_insert(lsm->mem_format,
- data, data + data_len);
+ insert = vy_stmt_new_insert(lsm->mem_format, data,
+ data + data_len);
tuple_unref(new_tuple);
if (insert == NULL)
goto err;
@@ -4105,7 +4094,8 @@ vy_build_recover(struct vy_env *env, struct vy_lsm *lsm, struct vy_lsm *pk)
size_t mem_used_before, mem_used_after;
mem_used_before = lsregion_used(&env->mem_env.allocator);
- rlist_foreach_entry_reverse(mem, &pk->sealed, in_sealed) {
+ rlist_foreach_entry_reverse(mem, &pk->sealed, in_sealed)
+ {
rc = vy_build_recover_mem(lsm, pk, mem);
if (rc != 0)
break;
@@ -4236,12 +4226,10 @@ vinyl_space_build_index(struct space *src_space, struct index *new_index,
* in which case we would insert an outdated tuple.
*/
if (vy_stmt_lsn(tuple) <= build_lsn) {
- rc = vy_build_insert_tuple(env, new_lsm,
- space_name(src_space),
- new_index->def->name,
- new_format,
- check_unique_constraint,
- tuple);
+ rc = vy_build_insert_tuple(
+ env, new_lsm, space_name(src_space),
+ new_index->def->name, new_format,
+ check_unique_constraint, tuple);
if (rc != 0)
break;
}
@@ -4455,18 +4443,16 @@ vy_deferred_delete_on_replace(struct trigger *trigger, void *event)
* the LSM tree.
*/
size_t size;
- struct trigger *on_commit =
- region_alloc_object(&txn->region, typeof(*on_commit),
- &size);
+ struct trigger *on_commit = region_alloc_object(
+ &txn->region, typeof(*on_commit), &size);
if (on_commit == NULL) {
diag_set(OutOfMemory, size, "region_alloc_object",
"on_commit");
rc = -1;
break;
}
- struct trigger *on_rollback =
- region_alloc_object(&txn->region, typeof(*on_rollback),
- &size);
+ struct trigger *on_rollback = region_alloc_object(
+ &txn->region, typeof(*on_rollback), &size);
if (on_rollback == NULL) {
diag_set(OutOfMemory, size, "region_alloc_object",
"on_rollback");
@@ -4474,8 +4460,10 @@ vy_deferred_delete_on_replace(struct trigger *trigger, void *event)
break;
}
vy_mem_pin(mem);
- trigger_create(on_commit, vy_deferred_delete_on_commit, mem, NULL);
- trigger_create(on_rollback, vy_deferred_delete_on_rollback, mem, NULL);
+ trigger_create(on_commit, vy_deferred_delete_on_commit, mem,
+ NULL);
+ trigger_create(on_rollback, vy_deferred_delete_on_rollback, mem,
+ NULL);
txn_on_commit(txn, on_commit);
txn_on_rollback(txn, on_rollback);
}
@@ -4557,7 +4545,7 @@ static const struct index_vtab vinyl_index_vtab = {
/* .update_def = */ vinyl_index_update_def,
/* .depends_on_pk = */ vinyl_index_depends_on_pk,
/* .def_change_requires_rebuild = */
- vinyl_index_def_change_requires_rebuild,
+ vinyl_index_def_change_requires_rebuild,
/* .size = */ vinyl_index_size,
/* .bsize = */ vinyl_index_bsize,
/* .min = */ generic_index_min,
@@ -4568,7 +4556,7 @@ static const struct index_vtab vinyl_index_vtab = {
/* .replace = */ generic_index_replace,
/* .create_iterator = */ vinyl_index_create_iterator,
/* .create_snapshot_iterator = */
- vinyl_index_create_snapshot_iterator,
+ vinyl_index_create_snapshot_iterator,
/* .stat = */ vinyl_index_stat,
/* .compact = */ vinyl_index_compact,
/* .reset_stat = */ vinyl_index_reset_stat,
diff --git a/src/box/vinyl.h b/src/box/vinyl.h
index 2a3e8f1..715c49b 100644
--- a/src/box/vinyl.h
+++ b/src/box/vinyl.h
@@ -42,8 +42,8 @@ struct info_handler;
struct engine;
struct engine *
-vinyl_engine_new(const char *dir, size_t memory,
- int read_threads, int write_threads, bool force_recovery);
+vinyl_engine_new(const char *dir, size_t memory, int read_threads,
+ int write_threads, bool force_recovery);
/**
* Vinyl engine statistics (box.stat.vinyl()).
@@ -94,12 +94,12 @@ vinyl_engine_set_snap_io_rate_limit(struct engine *engine, double limit);
#include "diag.h"
static inline struct engine *
-vinyl_engine_new_xc(const char *dir, size_t memory,
- int read_threads, int write_threads, bool force_recovery)
+vinyl_engine_new_xc(const char *dir, size_t memory, int read_threads,
+ int write_threads, bool force_recovery)
{
struct engine *vinyl;
- vinyl = vinyl_engine_new(dir, memory, read_threads,
- write_threads, force_recovery);
+ vinyl = vinyl_engine_new(dir, memory, read_threads, write_threads,
+ force_recovery);
if (vinyl == NULL)
diag_raise();
return vinyl;
diff --git a/src/box/vy_cache.c b/src/box/vy_cache.c
index 7007d0e..e0bcafd 100644
--- a/src/box/vy_cache.c
+++ b/src/box/vy_cache.c
@@ -35,7 +35,7 @@
#include "vy_history.h"
#ifndef CT_ASSERT_G
-#define CT_ASSERT_G(e) typedef char CONCAT(__ct_assert_, __LINE__)[(e) ? 1 :-1]
+#define CT_ASSERT_G(e) typedef char CONCAT(__ct_assert_, __LINE__)[(e) ? 1 : -1]
#endif
CT_ASSERT_G(BOX_INDEX_PART_MAX <= UINT8_MAX);
@@ -143,8 +143,8 @@ vy_cache_create(struct vy_cache *cache, struct vy_cache_env *env,
cache->is_primary = is_primary;
cache->version = 1;
vy_cache_tree_create(&cache->cache_tree, cmp_def,
- vy_cache_tree_page_alloc,
- vy_cache_tree_page_free, env);
+ vy_cache_tree_page_alloc, vy_cache_tree_page_free,
+ env);
}
void
@@ -153,9 +153,8 @@ vy_cache_destroy(struct vy_cache *cache)
struct vy_cache_tree_iterator itr =
vy_cache_tree_iterator_first(&cache->cache_tree);
while (!vy_cache_tree_iterator_is_invalid(&itr)) {
- struct vy_cache_node **node =
- vy_cache_tree_iterator_get_elem(&cache->cache_tree,
- &itr);
+ struct vy_cache_node **node = vy_cache_tree_iterator_get_elem(
+ &cache->cache_tree, &itr);
assert(node != NULL && *node != NULL);
vy_cache_node_delete(cache->env, *node);
vy_cache_tree_iterator_next(&cache->cache_tree, &itr);
@@ -186,8 +185,7 @@ vy_cache_gc_step(struct vy_cache_env *env)
}
if (node->flags & VY_CACHE_RIGHT_LINKED) {
struct vy_cache_tree_iterator next = itr;
- vy_cache_tree_iterator_next(&cache->cache_tree,
- &next);
+ vy_cache_tree_iterator_next(&cache->cache_tree, &next);
struct vy_cache_node **next_node =
vy_cache_tree_iterator_get_elem(tree, &next);
assert((*next_node)->flags & VY_CACHE_LEFT_LINKED);
@@ -225,9 +223,8 @@ vy_cache_env_set_quota(struct vy_cache_env *env, size_t quota)
}
void
-vy_cache_add(struct vy_cache *cache, struct vy_entry curr,
- struct vy_entry prev, struct vy_entry key,
- enum iterator_type order)
+vy_cache_add(struct vy_cache *cache, struct vy_entry curr, struct vy_entry prev,
+ struct vy_entry key, enum iterator_type order)
{
if (cache->env->mem_quota == 0) {
/* Cache is disabled. */
@@ -269,14 +266,14 @@ vy_cache_add(struct vy_cache *cache, struct vy_entry curr,
* sequence of statements that is equal to the key.
*/
boundary_level = vy_stmt_key_part_count(key.stmt,
- cache->cmp_def);
+ cache->cmp_def);
}
} else {
assert(prev.stmt != NULL);
if (order == ITER_EQ || order == ITER_REQ) {
/* that is the last statement that is equal to key */
boundary_level = vy_stmt_key_part_count(key.stmt,
- cache->cmp_def);
+ cache->cmp_def);
} else {
/* that is the last statement */
boundary_level = 0;
@@ -295,14 +292,12 @@ vy_cache_add(struct vy_cache *cache, struct vy_entry curr,
assert(vy_stmt_type(curr.stmt) == IPROTO_INSERT ||
vy_stmt_type(curr.stmt) == IPROTO_REPLACE);
- assert(prev.stmt == NULL ||
- vy_stmt_type(prev.stmt) == IPROTO_INSERT ||
+ assert(prev.stmt == NULL || vy_stmt_type(prev.stmt) == IPROTO_INSERT ||
vy_stmt_type(prev.stmt) == IPROTO_REPLACE);
cache->version++;
/* Insert/replace new node to the tree */
- struct vy_cache_node *node =
- vy_cache_node_new(cache->env, cache, curr);
+ struct vy_cache_node *node = vy_cache_node_new(cache->env, cache, curr);
if (node == NULL) {
/* memory error, let's live without a cache */
return;
@@ -335,7 +330,7 @@ vy_cache_add(struct vy_cache *cache, struct vy_entry curr,
/* The flag that must be set in the inserted chain node */
uint32_t flag = direction > 0 ? VY_CACHE_LEFT_LINKED :
- VY_CACHE_RIGHT_LINKED;
+ VY_CACHE_RIGHT_LINKED;
#ifndef NDEBUG
/**
@@ -405,15 +400,16 @@ vy_cache_add(struct vy_cache *cache, struct vy_entry curr,
if (replaced != NULL) {
prev_node->flags = replaced->flags;
prev_node->left_boundary_level = replaced->left_boundary_level;
- prev_node->right_boundary_level = replaced->right_boundary_level;
+ prev_node->right_boundary_level =
+ replaced->right_boundary_level;
vy_cache_node_delete(cache->env, replaced);
}
/* Set proper flags */
node->flags |= flag;
/* Set inverted flag in the previous node */
- prev_node->flags |= (VY_CACHE_LEFT_LINKED |
- VY_CACHE_RIGHT_LINKED) ^ flag;
+ prev_node->flags |= (VY_CACHE_LEFT_LINKED | VY_CACHE_RIGHT_LINKED) ^
+ flag;
}
struct vy_entry
@@ -526,8 +522,8 @@ static inline bool
vy_cache_iterator_is_stop(struct vy_cache_iterator *itr,
struct vy_cache_node *node)
{
- uint8_t key_level = vy_stmt_key_part_count(itr->key.stmt,
- itr->cache->cmp_def);
+ uint8_t key_level =
+ vy_stmt_key_part_count(itr->key.stmt, itr->cache->cmp_def);
/* select{} is actually an EQ iterator with part_count == 0 */
bool iter_is_eq = itr->iterator_type == ITER_EQ || key_level == 0;
if (iterator_direction(itr->iterator_type) > 0) {
@@ -556,8 +552,8 @@ static inline bool
vy_cache_iterator_is_end_stop(struct vy_cache_iterator *itr,
struct vy_cache_node *last_node)
{
- uint8_t key_level = vy_stmt_key_part_count(itr->key.stmt,
- itr->cache->cmp_def);
+ uint8_t key_level =
+ vy_stmt_key_part_count(itr->key.stmt, itr->cache->cmp_def);
/* select{} is actually an EQ iterator with part_count == 0 */
bool iter_is_eq = itr->iterator_type == ITER_EQ || key_level == 0;
if (iterator_direction(itr->iterator_type) > 0) {
@@ -658,16 +654,17 @@ vy_cache_iterator_seek(struct vy_cache_iterator *itr, struct vy_entry last)
if (last.stmt != NULL) {
key = last;
iterator_type = iterator_direction(itr->iterator_type) > 0 ?
- ITER_GT : ITER_LT;
+ ITER_GT :
+ ITER_LT;
}
bool exact = false;
if (!vy_stmt_is_empty_key(key.stmt)) {
- itr->curr_pos = iterator_type == ITER_EQ ||
- iterator_type == ITER_GE ||
- iterator_type == ITER_LT ?
- vy_cache_tree_lower_bound(tree, key, &exact) :
- vy_cache_tree_upper_bound(tree, key, &exact);
+ itr->curr_pos =
+ iterator_type == ITER_EQ || iterator_type == ITER_GE ||
+ iterator_type == ITER_LT ?
+ vy_cache_tree_lower_bound(tree, key, &exact) :
+ vy_cache_tree_upper_bound(tree, key, &exact);
} else if (iterator_type == ITER_LE) {
itr->curr_pos = vy_cache_tree_invalid_iterator();
} else {
@@ -734,7 +731,9 @@ vy_cache_iterator_skip(struct vy_cache_iterator *itr, struct vy_entry last,
if (itr->search_started &&
(itr->curr.stmt == NULL || last.stmt == NULL ||
iterator_direction(itr->iterator_type) *
- vy_entry_compare(itr->curr, last, itr->cache->cmp_def) > 0))
+ vy_entry_compare(itr->curr, last,
+ itr->cache->cmp_def) >
+ 0))
return 0;
vy_history_cleanup(history);
@@ -806,7 +805,7 @@ vy_cache_iterator_restore(struct vy_cache_iterator *itr, struct vy_entry last,
if (cmp < 0 || (cmp == 0 && !key_belongs))
break;
if (vy_stmt_lsn(node->entry.stmt) <=
- (**itr->read_view).vlsn) {
+ (**itr->read_view).vlsn) {
itr->curr_pos = pos;
if (itr->curr.stmt != NULL)
tuple_unref(itr->curr.stmt);
diff --git a/src/box/vy_cache.h b/src/box/vy_cache.h
index 9f08391..76547ab 100644
--- a/src/box/vy_cache.h
+++ b/src/box/vy_cache.h
@@ -71,8 +71,8 @@ struct vy_cache_node {
* Internal comparator (1) for BPS tree.
*/
static inline int
-vy_cache_tree_cmp(struct vy_cache_node *a,
- struct vy_cache_node *b, struct key_def *cmp_def)
+vy_cache_tree_cmp(struct vy_cache_node *a, struct vy_cache_node *b,
+ struct key_def *cmp_def)
{
return vy_entry_compare(a->entry, b->entry, cmp_def);
}
@@ -201,9 +201,8 @@ vy_cache_destroy(struct vy_cache *cache);
* +1 - forward, -1 - backward.
*/
void
-vy_cache_add(struct vy_cache *cache, struct vy_entry curr,
- struct vy_entry prev, struct vy_entry key,
- enum iterator_type order);
+vy_cache_add(struct vy_cache *cache, struct vy_entry curr, struct vy_entry prev,
+ struct vy_entry key, enum iterator_type order);
/**
* Find value in cache.
@@ -223,7 +222,6 @@ void
vy_cache_on_write(struct vy_cache *cache, struct vy_entry entry,
struct vy_entry *deleted);
-
/**
* Cache iterator
*/
diff --git a/src/box/vy_history.c b/src/box/vy_history.c
index c3717b2..235af51 100644
--- a/src/box/vy_history.c
+++ b/src/box/vy_history.c
@@ -82,8 +82,8 @@ vy_history_apply(struct vy_history *history, struct key_def *cmp_def,
return 0;
struct vy_entry curr = vy_entry_none();
- struct vy_history_node *node = rlist_last_entry(&history->stmts,
- struct vy_history_node, link);
+ struct vy_history_node *node =
+ rlist_last_entry(&history->stmts, struct vy_history_node, link);
if (vy_history_is_terminal(history)) {
if (!keep_delete &&
vy_stmt_type(node->entry.stmt) == IPROTO_DELETE) {
@@ -103,8 +103,8 @@ vy_history_apply(struct vy_history *history, struct key_def *cmp_def,
node = rlist_prev_entry_safe(node, &history->stmts, link);
}
while (node != NULL) {
- struct vy_entry entry = vy_entry_apply_upsert(node->entry, curr,
- cmp_def, true);
+ struct vy_entry entry =
+ vy_entry_apply_upsert(node->entry, curr, cmp_def, true);
++*upserts_applied;
if (curr.stmt != NULL)
tuple_unref(curr.stmt);
diff --git a/src/box/vy_history.h b/src/box/vy_history.h
index b25c27f..a7f33e9 100644
--- a/src/box/vy_history.h
+++ b/src/box/vy_history.h
@@ -97,8 +97,8 @@ vy_history_is_terminal(struct vy_history *history)
{
if (rlist_empty(&history->stmts))
return false;
- struct vy_history_node *node = rlist_last_entry(&history->stmts,
- struct vy_history_node, link);
+ struct vy_history_node *node =
+ rlist_last_entry(&history->stmts, struct vy_history_node, link);
assert(vy_stmt_type(node->entry.stmt) == IPROTO_REPLACE ||
vy_stmt_type(node->entry.stmt) == IPROTO_DELETE ||
vy_stmt_type(node->entry.stmt) == IPROTO_INSERT ||
@@ -116,8 +116,8 @@ vy_history_last_stmt(struct vy_history *history)
if (rlist_empty(&history->stmts))
return vy_entry_none();
/* Newest statement is at the head of the list. */
- struct vy_history_node *node = rlist_first_entry(&history->stmts,
- struct vy_history_node, link);
+ struct vy_history_node *node = rlist_first_entry(
+ &history->stmts, struct vy_history_node, link);
return node->entry;
}
diff --git a/src/box/vy_log.c b/src/box/vy_log.c
index 06b2596..c7da9c0 100644
--- a/src/box/vy_log.c
+++ b/src/box/vy_log.c
@@ -250,70 +250,64 @@ vy_log_record_snprint(char *buf, int size, const struct vy_log_record *record)
SNPRINT(total, snprintf, buf, size, "%s{",
vy_log_type_name[record->type]);
if (record->lsm_id > 0)
- SNPRINT(total, snprintf, buf, size, "%s=%"PRIi64", ",
+ SNPRINT(total, snprintf, buf, size, "%s=%" PRIi64 ", ",
vy_log_key_name[VY_LOG_KEY_LSM_ID], record->lsm_id);
if (record->range_id > 0)
- SNPRINT(total, snprintf, buf, size, "%s=%"PRIi64", ",
- vy_log_key_name[VY_LOG_KEY_RANGE_ID],
- record->range_id);
+ SNPRINT(total, snprintf, buf, size, "%s=%" PRIi64 ", ",
+ vy_log_key_name[VY_LOG_KEY_RANGE_ID], record->range_id);
if (record->run_id > 0)
- SNPRINT(total, snprintf, buf, size, "%s=%"PRIi64", ",
- vy_log_key_name[VY_LOG_KEY_RUN_ID],
- record->run_id);
+ SNPRINT(total, snprintf, buf, size, "%s=%" PRIi64 ", ",
+ vy_log_key_name[VY_LOG_KEY_RUN_ID], record->run_id);
if (record->begin != NULL) {
- SNPRINT(total, snprintf, buf, size, "%s=",
- vy_log_key_name[VY_LOG_KEY_BEGIN]);
+ SNPRINT(total, snprintf, buf, size,
+ "%s=", vy_log_key_name[VY_LOG_KEY_BEGIN]);
SNPRINT(total, mp_snprint, buf, size, record->begin);
SNPRINT(total, snprintf, buf, size, ", ");
}
if (record->end != NULL) {
- SNPRINT(total, snprintf, buf, size, "%s=",
- vy_log_key_name[VY_LOG_KEY_END]);
+ SNPRINT(total, snprintf, buf, size,
+ "%s=", vy_log_key_name[VY_LOG_KEY_END]);
SNPRINT(total, mp_snprint, buf, size, record->end);
SNPRINT(total, snprintf, buf, size, ", ");
}
if (record->index_id > 0)
- SNPRINT(total, snprintf, buf, size, "%s=%"PRIu32", ",
+ SNPRINT(total, snprintf, buf, size, "%s=%" PRIu32 ", ",
vy_log_key_name[VY_LOG_KEY_INDEX_ID], record->index_id);
if (record->space_id > 0)
- SNPRINT(total, snprintf, buf, size, "%s=%"PRIu32", ",
+ SNPRINT(total, snprintf, buf, size, "%s=%" PRIu32 ", ",
vy_log_key_name[VY_LOG_KEY_SPACE_ID], record->space_id);
if (record->group_id > 0)
- SNPRINT(total, snprintf, buf, size, "%s=%"PRIu32", ",
+ SNPRINT(total, snprintf, buf, size, "%s=%" PRIu32 ", ",
vy_log_key_name[VY_LOG_KEY_GROUP_ID], record->group_id);
if (record->key_parts != NULL) {
- SNPRINT(total, snprintf, buf, size, "%s=",
- vy_log_key_name[VY_LOG_KEY_DEF]);
+ SNPRINT(total, snprintf, buf, size,
+ "%s=", vy_log_key_name[VY_LOG_KEY_DEF]);
SNPRINT(total, key_def_snprint_parts, buf, size,
record->key_parts, record->key_part_count);
SNPRINT(total, snprintf, buf, size, ", ");
}
if (record->slice_id > 0)
- SNPRINT(total, snprintf, buf, size, "%s=%"PRIi64", ",
- vy_log_key_name[VY_LOG_KEY_SLICE_ID],
- record->slice_id);
+ SNPRINT(total, snprintf, buf, size, "%s=%" PRIi64 ", ",
+ vy_log_key_name[VY_LOG_KEY_SLICE_ID], record->slice_id);
if (record->create_lsn > 0)
- SNPRINT(total, snprintf, buf, size, "%s=%"PRIi64", ",
+ SNPRINT(total, snprintf, buf, size, "%s=%" PRIi64 ", ",
vy_log_key_name[VY_LOG_KEY_CREATE_LSN],
record->create_lsn);
if (record->modify_lsn > 0)
- SNPRINT(total, snprintf, buf, size, "%s=%"PRIi64", ",
+ SNPRINT(total, snprintf, buf, size, "%s=%" PRIi64 ", ",
vy_log_key_name[VY_LOG_KEY_MODIFY_LSN],
record->modify_lsn);
if (record->drop_lsn > 0)
- SNPRINT(total, snprintf, buf, size, "%s=%"PRIi64", ",
- vy_log_key_name[VY_LOG_KEY_DROP_LSN],
- record->drop_lsn);
+ SNPRINT(total, snprintf, buf, size, "%s=%" PRIi64 ", ",
+ vy_log_key_name[VY_LOG_KEY_DROP_LSN], record->drop_lsn);
if (record->dump_lsn > 0)
- SNPRINT(total, snprintf, buf, size, "%s=%"PRIi64", ",
- vy_log_key_name[VY_LOG_KEY_DUMP_LSN],
- record->dump_lsn);
+ SNPRINT(total, snprintf, buf, size, "%s=%" PRIi64 ", ",
+ vy_log_key_name[VY_LOG_KEY_DUMP_LSN], record->dump_lsn);
if (record->gc_lsn > 0)
- SNPRINT(total, snprintf, buf, size, "%s=%"PRIi64", ",
- vy_log_key_name[VY_LOG_KEY_GC_LSN],
- record->gc_lsn);
+ SNPRINT(total, snprintf, buf, size, "%s=%" PRIi64 ", ",
+ vy_log_key_name[VY_LOG_KEY_GC_LSN], record->gc_lsn);
if (record->dump_count > 0)
- SNPRINT(total, snprintf, buf, size, "%s=%"PRIu32", ",
+ SNPRINT(total, snprintf, buf, size, "%s=%" PRIu32 ", ",
vy_log_key_name[VY_LOG_KEY_DUMP_COUNT],
record->dump_count);
SNPRINT(total, snprintf, buf, size, "}");
@@ -554,8 +548,7 @@ vy_log_record_encode(const struct vy_log_record *record,
* Return 0 on success, -1 on failure.
*/
static int
-vy_log_record_decode(struct vy_log_record *record,
- struct xrow_header *row)
+vy_log_record_decode(struct vy_log_record *record, struct xrow_header *row)
{
char *buf;
@@ -624,16 +617,15 @@ vy_log_record_decode(struct vy_log_record *record,
struct region *region = &fiber()->gc;
uint32_t part_count = mp_decode_array(&pos);
size_t size;
- struct key_part_def *parts =
- region_alloc_array(region, typeof(parts[0]),
- part_count, &size);
+ struct key_part_def *parts = region_alloc_array(
+ region, typeof(parts[0]), part_count, &size);
if (parts == NULL) {
diag_set(OutOfMemory, size,
"region_alloc_array", "parts");
return -1;
}
- if (key_def_decode_parts(parts, part_count, &pos,
- NULL, 0, region) != 0) {
+ if (key_def_decode_parts(parts, part_count, &pos, NULL,
+ 0, region) != 0) {
diag_log();
diag_set(ClientError, ER_INVALID_VYLOG_FILE,
"Bad record: failed to decode "
@@ -705,8 +697,8 @@ vy_log_record_dup(struct region *pool, const struct vy_log_record *src)
{
size_t used = region_used(pool);
size_t size;
- struct vy_log_record *dst = region_alloc_object(pool, typeof(*dst),
- &size);
+ struct vy_log_record *dst =
+ region_alloc_object(pool, typeof(*dst), &size);
if (dst == NULL) {
diag_set(OutOfMemory, size, "region_alloc_object", "dst");
goto err;
@@ -769,8 +761,7 @@ vy_log_init(const char *dir)
diag_create(&vy_log.tx_diag);
wal_init_vy_log();
fiber_cond_create(&vy_log.flusher_cond);
- vy_log.flusher = fiber_new("vinyl.vylog_flusher",
- vy_log_flusher_f);
+ vy_log.flusher = fiber_new("vinyl.vylog_flusher", vy_log_flusher_f);
if (vy_log.flusher == NULL)
panic("failed to allocate vylog flusher fiber");
fiber_wakeup(vy_log.flusher);
@@ -814,8 +805,7 @@ vy_log_tx_flush(struct vy_log_tx *tx)
int tx_size = 0;
struct vy_log_record *record;
- stailq_foreach_entry(record, &tx->records, in_tx)
- tx_size++;
+ stailq_foreach_entry(record, &tx->records, in_tx) tx_size++;
size_t used = region_used(&fiber()->gc);
@@ -835,7 +825,8 @@ vy_log_tx_flush(struct vy_log_tx *tx)
* Encode buffered records.
*/
int i = 0;
- stailq_foreach_entry(record, &tx->records, in_tx) {
+ stailq_foreach_entry(record, &tx->records, in_tx)
+ {
if (record->gc_lsn == VY_LOG_GC_LSN_CURRENT)
record->gc_lsn = vy_log_signature();
assert(i < tx_size);
@@ -880,8 +871,8 @@ vy_log_flush(void)
int rc = 0;
while (!stailq_empty(&pending)) {
- struct vy_log_tx *tx = stailq_first_entry(&pending,
- struct vy_log_tx, in_pending);
+ struct vy_log_tx *tx = stailq_first_entry(
+ &pending, struct vy_log_tx, in_pending);
rc = vy_log_tx_flush(tx);
if (rc != 0)
break;
@@ -952,8 +943,7 @@ vy_log_open(struct xlog *xlog)
goto fail;
}
- if (xdir_create_xlog(&vy_log.dir, xlog,
- &vy_log.last_checkpoint) < 0)
+ if (xdir_create_xlog(&vy_log.dir, xlog, &vy_log.last_checkpoint) < 0)
goto fail;
struct xrow_header row;
@@ -1113,7 +1103,8 @@ vy_log_end_recovery(void)
* recovery - we will need them for garbage collection.
*/
struct vy_log_tx *tx;
- stailq_foreach_entry(tx, &vy_log.pending_tx, in_pending) {
+ stailq_foreach_entry(tx, &vy_log.pending_tx, in_pending)
+ {
struct vy_log_record *record;
stailq_foreach_entry(record, &tx->records, in_tx)
vy_recovery_process_record(vy_log.recovery, record);
@@ -1156,8 +1147,8 @@ vy_log_rotate(const struct vclock *vclock)
return 0;
assert(signature > prev_signature);
- say_verbose("rotating vylog %lld => %lld",
- (long long)prev_signature, (long long)signature);
+ say_verbose("rotating vylog %lld => %lld", (long long)prev_signature,
+ (long long)signature);
/*
* Lock out all concurrent log writers while we are rotating it.
@@ -1321,8 +1312,8 @@ vy_log_write(const struct vy_log_record *record)
return;
assert(vy_log.tx != NULL);
- struct vy_log_record *tx_record = vy_log_record_dup(&vy_log.tx->region,
- record);
+ struct vy_log_record *tx_record =
+ vy_log_record_dup(&vy_log.tx->region, record);
if (tx_record == NULL) {
diag_move(diag_get(), &vy_log.tx_diag);
vy_log.tx_failed = true;
@@ -1343,8 +1334,8 @@ vy_recovery_index_id_hash(uint32_t space_id, uint32_t index_id)
/** Lookup an LSM tree in vy_recovery::index_id_hash map. */
struct vy_lsm_recovery_info *
-vy_recovery_lsm_by_index_id(struct vy_recovery *recovery,
- uint32_t space_id, uint32_t index_id)
+vy_recovery_lsm_by_index_id(struct vy_recovery *recovery, uint32_t space_id,
+ uint32_t index_id)
{
int64_t key = vy_recovery_index_id_hash(space_id, index_id);
struct mh_i64ptr_t *h = recovery->index_id_hash;
@@ -1413,7 +1404,8 @@ vy_recovery_alloc_key_parts(const struct key_part_def *key_parts,
uint32_t new_parts_sz = sizeof(*key_parts) * key_part_count;
for (uint32_t i = 0; i < key_part_count; i++) {
new_parts_sz += key_parts[i].path != NULL ?
- strlen(key_parts[i].path) + 1 : 0;
+ strlen(key_parts[i].path) + 1 :
+ 0;
}
struct key_part_def *new_parts = malloc(new_parts_sz);
if (new_parts == NULL) {
@@ -1459,8 +1451,8 @@ vy_recovery_do_create_lsm(struct vy_recovery *recovery, int64_t id,
}
struct vy_lsm_recovery_info *lsm = malloc(sizeof(*lsm));
if (lsm == NULL) {
- diag_set(OutOfMemory, sizeof(*lsm),
- "malloc", "struct vy_lsm_recovery_info");
+ diag_set(OutOfMemory, sizeof(*lsm), "malloc",
+ "struct vy_lsm_recovery_info");
return NULL;
}
lsm->key_parts = vy_recovery_alloc_key_parts(key_parts, key_part_count);
@@ -1521,8 +1513,7 @@ vy_recovery_do_create_lsm(struct vy_recovery *recovery, int64_t id,
*/
static int
vy_recovery_prepare_lsm(struct vy_recovery *recovery, int64_t id,
- uint32_t space_id, uint32_t index_id,
- uint32_t group_id,
+ uint32_t space_id, uint32_t index_id, uint32_t group_id,
const struct key_part_def *key_parts,
uint32_t key_part_count)
{
@@ -1570,8 +1561,8 @@ vy_recovery_create_lsm(struct vy_recovery *recovery, int64_t id,
}
} else {
lsm = vy_recovery_do_create_lsm(recovery, id, space_id,
- index_id, group_id,
- key_parts, key_part_count);
+ index_id, group_id, key_parts,
+ key_part_count);
if (lsm == NULL)
return -1;
lsm->dump_lsn = dump_lsn;
@@ -1656,9 +1647,10 @@ vy_recovery_forget_lsm(struct vy_recovery *recovery, int64_t id)
struct mh_i64ptr_t *h = recovery->lsm_hash;
mh_int_t k = mh_i64ptr_find(h, id, NULL);
if (k == mh_end(h)) {
- diag_set(ClientError, ER_INVALID_VYLOG_FILE,
- tt_sprintf("LSM tree %lld forgotten but not registered",
- (long long)id));
+ diag_set(
+ ClientError, ER_INVALID_VYLOG_FILE,
+ tt_sprintf("LSM tree %lld forgotten but not registered",
+ (long long)id));
return -1;
}
struct vy_lsm_recovery_info *lsm = mh_i64ptr_node(h, k)->val;
@@ -1682,8 +1674,7 @@ vy_recovery_forget_lsm(struct vy_recovery *recovery, int64_t id)
* Returns 0 on success, -1 if ID not found or LSM tree is dropped.
*/
static int
-vy_recovery_dump_lsm(struct vy_recovery *recovery,
- int64_t id, int64_t dump_lsn)
+vy_recovery_dump_lsm(struct vy_recovery *recovery, int64_t id, int64_t dump_lsn)
{
struct vy_lsm_recovery_info *lsm;
lsm = vy_recovery_lookup_lsm(recovery, id);
@@ -1706,8 +1697,8 @@ vy_recovery_do_create_run(struct vy_recovery *recovery, int64_t run_id)
{
struct vy_run_recovery_info *run = malloc(sizeof(*run));
if (run == NULL) {
- diag_set(OutOfMemory, sizeof(*run),
- "malloc", "struct vy_run_recovery_info");
+ diag_set(OutOfMemory, sizeof(*run), "malloc",
+ "struct vy_run_recovery_info");
return NULL;
}
struct mh_i64ptr_t *h = recovery->run_hash;
@@ -1748,8 +1739,8 @@ vy_recovery_prepare_run(struct vy_recovery *recovery, int64_t lsm_id,
if (lsm == NULL) {
diag_set(ClientError, ER_INVALID_VYLOG_FILE,
tt_sprintf("Run %lld created for unregistered "
- "LSM tree %lld", (long long)run_id,
- (long long)lsm_id));
+ "LSM tree %lld",
+ (long long)run_id, (long long)lsm_id));
return -1;
}
if (vy_recovery_lookup_run(recovery, run_id) != NULL) {
@@ -1784,8 +1775,8 @@ vy_recovery_create_run(struct vy_recovery *recovery, int64_t lsm_id,
if (lsm == NULL) {
diag_set(ClientError, ER_INVALID_VYLOG_FILE,
tt_sprintf("Run %lld created for unregistered "
- "LSM tree %lld", (long long)run_id,
- (long long)lsm_id));
+ "LSM tree %lld",
+ (long long)run_id, (long long)lsm_id));
return -1;
}
struct vy_run_recovery_info *run;
@@ -1883,8 +1874,8 @@ vy_recovery_insert_range(struct vy_recovery *recovery, int64_t lsm_id,
if (lsm == NULL) {
diag_set(ClientError, ER_INVALID_VYLOG_FILE,
tt_sprintf("Range %lld created for unregistered "
- "LSM tree %lld", (long long)range_id,
- (long long)lsm_id));
+ "LSM tree %lld",
+ (long long)range_id, (long long)lsm_id));
return -1;
}
@@ -1903,8 +1894,8 @@ vy_recovery_insert_range(struct vy_recovery *recovery, int64_t lsm_id,
struct vy_range_recovery_info *range = malloc(size);
if (range == NULL) {
- diag_set(OutOfMemory, size,
- "malloc", "struct vy_range_recovery_info");
+ diag_set(OutOfMemory, size, "malloc",
+ "struct vy_range_recovery_info");
return -1;
}
struct mh_i64ptr_t *h = recovery->range_hash;
@@ -1971,8 +1962,8 @@ vy_recovery_delete_range(struct vy_recovery *recovery, int64_t range_id)
*/
static int
vy_recovery_insert_slice(struct vy_recovery *recovery, int64_t range_id,
- int64_t run_id, int64_t slice_id,
- const char *begin, const char *end)
+ int64_t run_id, int64_t slice_id, const char *begin,
+ const char *end)
{
if (vy_recovery_lookup_slice(recovery, slice_id) != NULL) {
diag_set(ClientError, ER_INVALID_VYLOG_FILE,
@@ -1985,8 +1976,8 @@ vy_recovery_insert_slice(struct vy_recovery *recovery, int64_t range_id,
if (range == NULL) {
diag_set(ClientError, ER_INVALID_VYLOG_FILE,
tt_sprintf("Slice %lld created for unregistered "
- "range %lld", (long long)slice_id,
- (long long)range_id));
+ "range %lld",
+ (long long)slice_id, (long long)range_id));
return -1;
}
struct vy_run_recovery_info *run;
@@ -1994,8 +1985,8 @@ vy_recovery_insert_slice(struct vy_recovery *recovery, int64_t range_id,
if (run == NULL) {
diag_set(ClientError, ER_INVALID_VYLOG_FILE,
tt_sprintf("Slice %lld created for unregistered "
- "run %lld", (long long)slice_id,
- (long long)run_id));
+ "run %lld",
+ (long long)slice_id, (long long)run_id));
return -1;
}
@@ -2014,8 +2005,8 @@ vy_recovery_insert_slice(struct vy_recovery *recovery, int64_t range_id,
struct vy_slice_recovery_info *slice = malloc(size);
if (slice == NULL) {
- diag_set(OutOfMemory, size,
- "malloc", "struct vy_slice_recovery_info");
+ diag_set(OutOfMemory, size, "malloc",
+ "struct vy_slice_recovery_info");
return -1;
}
struct mh_i64ptr_t *h = recovery->slice_hash;
@@ -2128,21 +2119,23 @@ vy_recovery_process_record(struct vy_recovery *recovery,
switch (record->type) {
case VY_LOG_PREPARE_LSM:
rc = vy_recovery_prepare_lsm(recovery, record->lsm_id,
- record->space_id, record->index_id,
- record->group_id, record->key_parts,
- record->key_part_count);
+ record->space_id, record->index_id,
+ record->group_id,
+ record->key_parts,
+ record->key_part_count);
break;
case VY_LOG_CREATE_LSM:
- rc = vy_recovery_create_lsm(recovery, record->lsm_id,
- record->space_id, record->index_id,
- record->group_id, record->key_parts,
- record->key_part_count, record->create_lsn,
- record->modify_lsn, record->dump_lsn);
+ rc = vy_recovery_create_lsm(
+ recovery, record->lsm_id, record->space_id,
+ record->index_id, record->group_id, record->key_parts,
+ record->key_part_count, record->create_lsn,
+ record->modify_lsn, record->dump_lsn);
break;
case VY_LOG_MODIFY_LSM:
rc = vy_recovery_modify_lsm(recovery, record->lsm_id,
- record->key_parts, record->key_part_count,
- record->modify_lsn);
+ record->key_parts,
+ record->key_part_count,
+ record->modify_lsn);
break;
case VY_LOG_DROP_LSM:
rc = vy_recovery_drop_lsm(recovery, record->lsm_id,
@@ -2153,7 +2146,8 @@ vy_recovery_process_record(struct vy_recovery *recovery,
break;
case VY_LOG_INSERT_RANGE:
rc = vy_recovery_insert_range(recovery, record->lsm_id,
- record->range_id, record->begin, record->end);
+ record->range_id, record->begin,
+ record->end);
break;
case VY_LOG_DELETE_RANGE:
rc = vy_recovery_delete_range(recovery, record->range_id);
@@ -2184,7 +2178,7 @@ vy_recovery_process_record(struct vy_recovery *recovery,
break;
case VY_LOG_DUMP_LSM:
rc = vy_recovery_dump_lsm(recovery, record->lsm_id,
- record->dump_lsn);
+ record->dump_lsn);
break;
case VY_LOG_TRUNCATE_LSM:
/* Not used anymore, ignore. */
@@ -2247,8 +2241,8 @@ vy_recovery_build_index_id_hash(struct vy_recovery *recovery)
uint32_t space_id = lsm->space_id;
uint32_t index_id = lsm->index_id;
struct vy_lsm_recovery_info *hashed_lsm;
- hashed_lsm = vy_recovery_lsm_by_index_id(recovery,
- space_id, index_id);
+ hashed_lsm = vy_recovery_lsm_by_index_id(recovery, space_id,
+ index_id);
/*
* If there's no LSM tree for these space_id/index_id
* or it was dropped, simply replace it with the latest
@@ -2257,7 +2251,8 @@ vy_recovery_build_index_id_hash(struct vy_recovery *recovery)
if (hashed_lsm == NULL ||
(hashed_lsm->drop_lsn >= 0 && lsm->create_lsn >= 0)) {
struct mh_i64ptr_node_t node;
- node.key = vy_recovery_index_id_hash(space_id, index_id);
+ node.key =
+ vy_recovery_index_id_hash(space_id, index_id);
node.val = lsm;
if (mh_i64ptr_put(h, &node, NULL, NULL) == mh_end(h)) {
diag_set(OutOfMemory, 0, "mh_i64ptr_put",
@@ -2305,8 +2300,8 @@ vy_recovery_new_f(va_list ap)
struct vy_recovery *recovery = malloc(sizeof(*recovery));
if (recovery == NULL) {
- diag_set(OutOfMemory, sizeof(*recovery),
- "malloc", "struct vy_recovery");
+ diag_set(OutOfMemory, sizeof(*recovery), "malloc",
+ "struct vy_recovery");
goto fail;
}
@@ -2324,10 +2319,8 @@ vy_recovery_new_f(va_list ap)
recovery->range_hash = mh_i64ptr_new();
recovery->run_hash = mh_i64ptr_new();
recovery->slice_hash = mh_i64ptr_new();
- if (recovery->index_id_hash == NULL ||
- recovery->lsm_hash == NULL ||
- recovery->range_hash == NULL ||
- recovery->run_hash == NULL ||
+ if (recovery->index_id_hash == NULL || recovery->lsm_hash == NULL ||
+ recovery->range_hash == NULL || recovery->run_hash == NULL ||
recovery->slice_hash == NULL) {
diag_set(OutOfMemory, 0, "mh_i64ptr_new", "mh_i64ptr_t");
goto fail_free;
@@ -2445,8 +2438,8 @@ vy_recovery_delete(struct vy_recovery *recovery)
struct vy_run_recovery_info *run, *next_run;
rlist_foreach_entry_safe(lsm, &recovery->lsms, in_recovery, next_lsm) {
- rlist_foreach_entry_safe(range, &lsm->ranges,
- in_lsm, next_range) {
+ rlist_foreach_entry_safe(range, &lsm->ranges, in_lsm,
+ next_range) {
rlist_foreach_entry_safe(slice, &range->slices,
in_range, next_slice)
free(slice);
@@ -2495,8 +2488,8 @@ vy_log_append_lsm(struct xlog *xlog, struct vy_lsm_recovery_info *lsm)
struct vy_log_record record;
vy_log_record_init(&record);
- record.type = lsm->create_lsn < 0 ?
- VY_LOG_PREPARE_LSM : VY_LOG_CREATE_LSM;
+ record.type = lsm->create_lsn < 0 ? VY_LOG_PREPARE_LSM :
+ VY_LOG_CREATE_LSM;
record.lsm_id = lsm->id;
record.index_id = lsm->index_id;
record.space_id = lsm->space_id;
@@ -2548,7 +2541,8 @@ vy_log_append_lsm(struct xlog *xlog, struct vy_lsm_recovery_info *lsm)
* while we are supposed to return slices in chronological
* order, so use reverse iterator.
*/
- rlist_foreach_entry_reverse(slice, &range->slices, in_range) {
+ rlist_foreach_entry_reverse(slice, &range->slices, in_range)
+ {
vy_log_record_init(&record);
record.type = VY_LOG_INSERT_SLICE;
record.range_id = range->id;
@@ -2612,8 +2606,7 @@ vy_log_create(const struct vclock *vclock, struct vy_recovery *recovery)
});
/* Finalize the new xlog. */
- if (xlog_flush(&xlog) < 0 ||
- xlog_sync(&xlog) < 0 ||
+ if (xlog_flush(&xlog) < 0 || xlog_sync(&xlog) < 0 ||
xlog_rename(&xlog) < 0)
goto err_write_xlog;
@@ -2626,8 +2619,7 @@ err_write_xlog:
/* Delete the unfinished xlog. */
assert(xlog_is_open(&xlog));
if (unlink(xlog.filename) < 0)
- say_syserror("failed to delete file '%s'",
- xlog.filename);
+ say_syserror("failed to delete file '%s'", xlog.filename);
xlog_close(&xlog, false);
err_create_xlog:
diff --git a/src/box/vy_log.h b/src/box/vy_log.h
index 298a8ed..bf1fd42 100644
--- a/src/box/vy_log.h
+++ b/src/box/vy_log.h
@@ -69,22 +69,22 @@ enum vy_log_record_type {
* After rotation, it also stores space_id, index_id, group_id,
* key_def, create_lsn, modify_lsn, dump_lsn.
*/
- VY_LOG_CREATE_LSM = 0,
+ VY_LOG_CREATE_LSM = 0,
/**
* Drop an LSM tree.
* Requires vy_log_record::lsm_id, drop_lsn.
*/
- VY_LOG_DROP_LSM = 1,
+ VY_LOG_DROP_LSM = 1,
/**
* Insert a new range into an LSM tree.
* Requires vy_log_record::lsm_id, range_id, begin, end.
*/
- VY_LOG_INSERT_RANGE = 2,
+ VY_LOG_INSERT_RANGE = 2,
/**
* Delete a vinyl range and all its runs.
* Requires vy_log_record::range_id.
*/
- VY_LOG_DELETE_RANGE = 3,
+ VY_LOG_DELETE_RANGE = 3,
/**
* Prepare a vinyl run file.
* Requires vy_log_record::lsm_id, run_id.
@@ -93,14 +93,14 @@ enum vy_log_record_type {
* It is needed to keep track of unfinished due to errors run
* files so that we could remove them after recovery.
*/
- VY_LOG_PREPARE_RUN = 4,
+ VY_LOG_PREPARE_RUN = 4,
/**
* Commit a vinyl run file creation.
* Requires vy_log_record::lsm_id, run_id, dump_lsn, dump_count.
*
* Written after a run file was successfully created.
*/
- VY_LOG_CREATE_RUN = 5,
+ VY_LOG_CREATE_RUN = 5,
/**
* Drop a vinyl run.
* Requires vy_log_record::run_id, gc_lsn.
@@ -113,7 +113,7 @@ enum vy_log_record_type {
* deleted, but not "forgotten" are not expunged from the log
* on rotation.
*/
- VY_LOG_DROP_RUN = 6,
+ VY_LOG_DROP_RUN = 6,
/**
* Forget a vinyl run.
* Requires vy_log_record::run_id.
@@ -124,22 +124,22 @@ enum vy_log_record_type {
* run. Information about "forgotten" runs is not included in
* the new log on rotation.
*/
- VY_LOG_FORGET_RUN = 7,
+ VY_LOG_FORGET_RUN = 7,
/**
* Insert a run slice into a range.
* Requires vy_log_record::range_id, run_id, slice_id, begin, end.
*/
- VY_LOG_INSERT_SLICE = 8,
+ VY_LOG_INSERT_SLICE = 8,
/**
* Delete a run slice.
* Requires vy_log_record::slice_id.
*/
- VY_LOG_DELETE_SLICE = 9,
+ VY_LOG_DELETE_SLICE = 9,
/**
* Log LSM tree dump. Used to update max LSN stored on disk.
* Requires vy_log_record::lsm_id, dump_lsn.
*/
- VY_LOG_DUMP_LSM = 10,
+ VY_LOG_DUMP_LSM = 10,
/**
* We don't split vylog into snapshot and log - all records
* are written to the same file. Since we need to load a
@@ -150,7 +150,7 @@ enum vy_log_record_type {
*
* See also: @only_checkpoint argument of vy_recovery_new().
*/
- VY_LOG_SNAPSHOT = 11,
+ VY_LOG_SNAPSHOT = 11,
/**
* When we used LSN for identifying LSM trees in vylog, we
* couldn't simply recreate an LSM tree on space truncation,
@@ -164,12 +164,12 @@ enum vy_log_record_type {
* 'truncate' records - this will result in replay of all
* WAL records written after truncation.
*/
- VY_LOG_TRUNCATE_LSM = 12,
+ VY_LOG_TRUNCATE_LSM = 12,
/**
* Modify key definition of an LSM tree.
* Requires vy_log_record::lsm_id, key_def, modify_lsn.
*/
- VY_LOG_MODIFY_LSM = 13,
+ VY_LOG_MODIFY_LSM = 13,
/**
* Forget an LSM tree.
* Requires vy_log_record::lsm_id.
@@ -179,7 +179,7 @@ enum vy_log_record_type {
* so the LSM tree is not needed any longer and can be removed
* from vylog on the next rotation.
*/
- VY_LOG_FORGET_LSM = 14,
+ VY_LOG_FORGET_LSM = 14,
/**
* Prepare a new LSM tree for building.
* Requires vy_log_record::lsm_id, index_id, space_id, group_id,
@@ -195,7 +195,7 @@ enum vy_log_record_type {
* for building. Once the index has been built, we write
* a VY_LOG_CREATE_LSM record to commit it.
*/
- VY_LOG_PREPARE_LSM = 15,
+ VY_LOG_PREPARE_LSM = 15,
/**
* This record denotes the beginning of a rebootstrap section.
* A rebootstrap section ends either by another record of this
@@ -211,12 +211,12 @@ enum vy_log_record_type {
* record as dropped in the rotated vylog. If rebootstrap fails,
* we write VY_LOG_ABORT_REBOOTSTRAP on recovery.
*/
- VY_LOG_REBOOTSTRAP = 16,
+ VY_LOG_REBOOTSTRAP = 16,
/**
* This record is written on recovery if rebootstrap failed.
* See also VY_LOG_REBOOTSTRAP.
*/
- VY_LOG_ABORT_REBOOTSTRAP = 17,
+ VY_LOG_ABORT_REBOOTSTRAP = 17,
vy_log_record_type_MAX
};
@@ -577,12 +577,12 @@ enum vy_recovery_flag {
* i.e. get a consistent view of vinyl database at the time
* of the last checkpoint.
*/
- VY_RECOVERY_LOAD_CHECKPOINT = 1 << 0,
+ VY_RECOVERY_LOAD_CHECKPOINT = 1 << 0,
/**
* Consider the last attempt to rebootstrap aborted even if
* there's no VY_LOG_ABORT_REBOOTSTRAP record.
*/
- VY_RECOVERY_ABORT_REBOOTSTRAP = 1 << 1,
+ VY_RECOVERY_ABORT_REBOOTSTRAP = 1 << 1,
};
/**
@@ -608,8 +608,8 @@ vy_recovery_delete(struct vy_recovery *recovery);
* Returns NULL if the LSM tree was not found.
*/
struct vy_lsm_recovery_info *
-vy_recovery_lsm_by_index_id(struct vy_recovery *recovery,
- uint32_t space_id, uint32_t index_id);
+vy_recovery_lsm_by_index_id(struct vy_recovery *recovery, uint32_t space_id,
+ uint32_t index_id);
/**
* Initialize a log record with default values.
@@ -688,8 +688,8 @@ vy_log_drop_lsm(int64_t id, int64_t drop_lsn)
/** Helper to log a vinyl range insertion. */
static inline void
-vy_log_insert_range(int64_t lsm_id, int64_t range_id,
- const char *begin, const char *end)
+vy_log_insert_range(int64_t lsm_id, int64_t range_id, const char *begin,
+ const char *end)
{
struct vy_log_record record;
vy_log_record_init(&record);
@@ -726,8 +726,8 @@ vy_log_prepare_run(int64_t lsm_id, int64_t run_id)
/** Helper to log a vinyl run creation. */
static inline void
-vy_log_create_run(int64_t lsm_id, int64_t run_id,
- int64_t dump_lsn, uint32_t dump_count)
+vy_log_create_run(int64_t lsm_id, int64_t run_id, int64_t dump_lsn,
+ uint32_t dump_count)
{
struct vy_log_record record;
vy_log_record_init(&record);
diff --git a/src/box/vy_lsm.c b/src/box/vy_lsm.c
index 1f67bea..2abe27a 100644
--- a/src/box/vy_lsm.c
+++ b/src/box/vy_lsm.c
@@ -73,8 +73,7 @@ static const int64_t VY_MAX_RANGE_SIZE = 2LL * 1024 * 1024 * 1024;
int
vy_lsm_env_create(struct vy_lsm_env *env, const char *path,
int64_t *p_generation, struct tuple_format *key_format,
- vy_upsert_thresh_cb upsert_thresh_cb,
- void *upsert_thresh_arg)
+ vy_upsert_thresh_cb upsert_thresh_cb, void *upsert_thresh_arg)
{
env->empty_key.hint = HINT_NONE;
env->empty_key.stmt = vy_key_new(key_format, NULL, 0);
@@ -105,8 +104,8 @@ const char *
vy_lsm_name(struct vy_lsm *lsm)
{
char *buf = tt_static_buf();
- snprintf(buf, TT_STATIC_BUF_LEN, "%u/%u",
- (unsigned)lsm->space_id, (unsigned)lsm->index_id);
+ snprintf(buf, TT_STATIC_BUF_LEN, "%u/%u", (unsigned)lsm->space_id,
+ (unsigned)lsm->index_id);
return buf;
}
@@ -134,8 +133,8 @@ vy_lsm_new(struct vy_lsm_env *lsm_env, struct vy_cache_env *cache_env,
struct vy_lsm *lsm = calloc(1, sizeof(struct vy_lsm));
if (lsm == NULL) {
- diag_set(OutOfMemory, sizeof(struct vy_lsm),
- "calloc", "struct vy_lsm");
+ diag_set(OutOfMemory, sizeof(struct vy_lsm), "calloc",
+ "struct vy_lsm");
goto fail;
}
lsm->env = lsm_env;
@@ -167,9 +166,8 @@ vy_lsm_new(struct vy_lsm_env *lsm_env, struct vy_cache_env *cache_env,
*/
lsm->disk_format = lsm_env->key_format;
- lsm->pk_in_cmp_def = key_def_find_pk_in_cmp_def(lsm->cmp_def,
- pk->key_def,
- &fiber()->gc);
+ lsm->pk_in_cmp_def = key_def_find_pk_in_cmp_def(
+ lsm->cmp_def, pk->key_def, &fiber()->gc);
if (lsm->pk_in_cmp_def == NULL)
goto fail_pk_in_cmp_def;
}
@@ -182,8 +180,7 @@ vy_lsm_new(struct vy_lsm_env *lsm_env, struct vy_cache_env *cache_env,
if (lsm->run_hist == NULL)
goto fail_run_hist;
- lsm->mem = vy_mem_new(mem_env, cmp_def, format,
- *lsm->env->p_generation,
+ lsm->mem = vy_mem_new(mem_env, cmp_def, format, *lsm->env->p_generation,
space_cache_version);
if (lsm->mem == NULL)
goto fail_mem;
@@ -255,7 +252,7 @@ vy_lsm_delete(struct vy_lsm *lsm)
lsm->env->lsm_count--;
lsm->env->compaction_queue_size -=
- lsm->stat.disk.compaction.queue.bytes;
+ lsm->stat.disk.compaction.queue.bytes;
if (lsm->index_id == 0)
lsm->env->compacted_data_size -=
lsm->stat.disk.last_level_count.bytes;
@@ -292,8 +289,8 @@ vy_lsm_create(struct vy_lsm *lsm)
/* Make LSM tree directory. */
int rc;
char path[PATH_MAX];
- vy_lsm_snprint_path(path, sizeof(path), lsm->env->path,
- lsm->space_id, lsm->index_id);
+ vy_lsm_snprint_path(path, sizeof(path), lsm->env->path, lsm->space_id,
+ lsm->index_id);
char *path_sep = path;
while (*path_sep == '/') {
/* Don't create root */
@@ -305,7 +302,7 @@ vy_lsm_create(struct vy_lsm *lsm)
rc = mkdir(path, 0777);
if (rc == -1 && errno != EEXIST) {
diag_set(SystemError, "failed to create directory '%s'",
- path);
+ path);
*path_sep = '/';
return -1;
}
@@ -314,8 +311,7 @@ vy_lsm_create(struct vy_lsm *lsm)
}
rc = mkdir(path, 0777);
if (rc == -1 && errno != EEXIST) {
- diag_set(SystemError, "failed to create directory '%s'",
- path);
+ diag_set(SystemError, "failed to create directory '%s'", path);
return -1;
}
@@ -338,8 +334,8 @@ vy_lsm_create(struct vy_lsm *lsm)
/* Write the new LSM tree record to vylog. */
vy_log_tx_begin();
- vy_log_prepare_lsm(id, lsm->space_id, lsm->index_id,
- lsm->group_id, lsm->key_def);
+ vy_log_prepare_lsm(id, lsm->space_id, lsm->index_id, lsm->group_id,
+ lsm->key_def);
vy_log_insert_range(id, range->id, NULL, NULL);
vy_log_tx_try_commit();
@@ -370,9 +366,8 @@ vy_lsm_recover_run(struct vy_lsm *lsm, struct vy_run_recovery_info *run_info,
if (vy_run_recover(run, lsm->env->path, lsm->space_id, lsm->index_id,
lsm->cmp_def) != 0 &&
(!force_recovery ||
- vy_run_rebuild_index(run, lsm->env->path,
- lsm->space_id, lsm->index_id,
- lsm->cmp_def, lsm->key_def,
+ vy_run_rebuild_index(run, lsm->env->path, lsm->space_id,
+ lsm->index_id, lsm->cmp_def, lsm->key_def,
lsm->disk_format, &lsm->opts) != 0)) {
vy_run_unref(run);
return NULL;
@@ -403,16 +398,14 @@ vy_lsm_recover_slice(struct vy_lsm *lsm, struct vy_range *range,
struct vy_run *run;
if (slice_info->begin != NULL) {
- begin = vy_entry_key_from_msgpack(lsm->env->key_format,
- lsm->cmp_def,
- slice_info->begin);
+ begin = vy_entry_key_from_msgpack(
+ lsm->env->key_format, lsm->cmp_def, slice_info->begin);
if (begin.stmt == NULL)
goto out;
}
if (slice_info->end != NULL) {
end = vy_entry_key_from_msgpack(lsm->env->key_format,
- lsm->cmp_def,
- slice_info->end);
+ lsm->cmp_def, slice_info->end);
if (end.stmt == NULL)
goto out;
}
@@ -424,8 +417,7 @@ vy_lsm_recover_slice(struct vy_lsm *lsm, struct vy_range *range,
goto out;
}
- run = vy_lsm_recover_run(lsm, slice_info->run,
- run_env, force_recovery);
+ run = vy_lsm_recover_run(lsm, slice_info->run, run_env, force_recovery);
if (run == NULL)
goto out;
@@ -452,16 +444,14 @@ vy_lsm_recover_range(struct vy_lsm *lsm,
struct vy_range *range = NULL;
if (range_info->begin != NULL) {
- begin = vy_entry_key_from_msgpack(lsm->env->key_format,
- lsm->cmp_def,
- range_info->begin);
+ begin = vy_entry_key_from_msgpack(
+ lsm->env->key_format, lsm->cmp_def, range_info->begin);
if (begin.stmt == NULL)
goto out;
}
if (range_info->end != NULL) {
end = vy_entry_key_from_msgpack(lsm->env->key_format,
- lsm->cmp_def,
- range_info->end);
+ lsm->cmp_def, range_info->end);
if (end.stmt == NULL)
goto out;
}
@@ -483,9 +473,10 @@ vy_lsm_recover_range(struct vy_lsm *lsm,
* order, so use reverse iterator.
*/
struct vy_slice_recovery_info *slice_info;
- rlist_foreach_entry_reverse(slice_info, &range_info->slices, in_range) {
- if (vy_lsm_recover_slice(lsm, range, slice_info,
- run_env, force_recovery) == NULL) {
+ rlist_foreach_entry_reverse(slice_info, &range_info->slices, in_range)
+ {
+ if (vy_lsm_recover_slice(lsm, range, slice_info, run_env,
+ force_recovery) == NULL) {
vy_range_delete(range);
range = NULL;
goto out;
@@ -502,8 +493,8 @@ out:
int
vy_lsm_recover(struct vy_lsm *lsm, struct vy_recovery *recovery,
- struct vy_run_env *run_env, int64_t lsn,
- bool is_checkpoint_recovery, bool force_recovery)
+ struct vy_run_env *run_env, int64_t lsn,
+ bool is_checkpoint_recovery, bool force_recovery)
{
assert(lsm->id < 0);
assert(lsm->commit_lsn < 0);
@@ -523,8 +514,8 @@ vy_lsm_recover(struct vy_lsm *lsm, struct vy_recovery *recovery,
* Look up the last incarnation of the LSM tree in vylog.
*/
struct vy_lsm_recovery_info *lsm_info;
- lsm_info = vy_recovery_lsm_by_index_id(recovery,
- lsm->space_id, lsm->index_id);
+ lsm_info = vy_recovery_lsm_by_index_id(recovery, lsm->space_id,
+ lsm->index_id);
if (is_checkpoint_recovery) {
if (lsm_info == NULL || lsm_info->create_lsn < 0) {
/*
@@ -549,9 +540,9 @@ vy_lsm_recover(struct vy_lsm *lsm, struct vy_recovery *recovery,
}
}
- if (lsm_info == NULL || (lsm_info->prepared == NULL &&
- lsm_info->create_lsn >= 0 &&
- lsn > lsm_info->create_lsn)) {
+ if (lsm_info == NULL ||
+ (lsm_info->prepared == NULL && lsm_info->create_lsn >= 0 &&
+ lsn > lsm_info->create_lsn)) {
/*
* If we failed to log LSM tree creation before restart,
* we won't find it in the log on recovery. This is OK as
@@ -646,7 +637,8 @@ vy_lsm_recover(struct vy_lsm *lsm, struct vy_recovery *recovery,
*/
struct vy_range *range, *prev = NULL;
for (range = vy_range_tree_first(&lsm->range_tree); range != NULL;
- prev = range, range = vy_range_tree_next(&lsm->range_tree, range)) {
+ prev = range,
+ range = vy_range_tree_next(&lsm->range_tree, range)) {
if (prev == NULL && range->begin.stmt != NULL) {
diag_set(ClientError, ER_INVALID_VYLOG_FILE,
tt_sprintf("Range %lld is leftmost but "
@@ -659,12 +651,12 @@ vy_lsm_recover(struct vy_lsm *lsm, struct vy_recovery *recovery,
(prev->end.stmt == NULL || range->begin.stmt == NULL ||
(cmp = vy_entry_compare(prev->end, range->begin,
lsm->cmp_def)) != 0)) {
- const char *errmsg = cmp > 0 ?
- "Nearby ranges %lld and %lld overlap" :
- "Keys between ranges %lld and %lld not spanned";
+ const char *errmsg =
+ cmp > 0 ?
+ "Nearby ranges %lld and %lld overlap" :
+ "Keys between ranges %lld and %lld not spanned";
diag_set(ClientError, ER_INVALID_VYLOG_FILE,
- tt_sprintf(errmsg,
- (long long)prev->id,
+ tt_sprintf(errmsg, (long long)prev->id,
(long long)range->id));
return -1;
}
@@ -690,8 +682,11 @@ vy_lsm_recover(struct vy_lsm *lsm, struct vy_recovery *recovery,
int64_t
vy_lsm_generation(struct vy_lsm *lsm)
{
- struct vy_mem *oldest = rlist_empty(&lsm->sealed) ? lsm->mem :
- rlist_last_entry(&lsm->sealed, struct vy_mem, in_sealed);
+ struct vy_mem *oldest = rlist_empty(&lsm->sealed) ?
+ lsm->mem :
+ rlist_last_entry(&lsm->sealed,
+ struct vy_mem,
+ in_sealed);
return oldest->generation;
}
@@ -722,8 +717,10 @@ vy_lsm_range_size(struct vy_lsm *lsm)
* create four times more than that for better smoothing.
*/
int range_count = 4 * vy_lsm_dumps_per_compaction(lsm);
- int64_t range_size = range_count == 0 ? 0 :
- lsm->stat.disk.last_level_count.bytes / range_count;
+ int64_t range_size =
+ range_count == 0 ?
+ 0 :
+ lsm->stat.disk.last_level_count.bytes / range_count;
range_size = MAX(range_size, VY_MIN_RANGE_SIZE);
range_size = MIN(range_size, VY_MAX_RANGE_SIZE);
return range_size;
@@ -798,7 +795,7 @@ vy_lsm_add_range(struct vy_lsm *lsm, struct vy_range *range)
void
vy_lsm_remove_range(struct vy_lsm *lsm, struct vy_range *range)
{
- assert(! heap_node_is_stray(&range->heap_node));
+ assert(!heap_node_is_stray(&range->heap_node));
vy_range_heap_delete(&lsm->range_heap, range);
vy_range_tree_remove(&lsm->range_tree, range);
lsm->range_count--;
@@ -813,8 +810,8 @@ vy_lsm_acct_range(struct vy_lsm *lsm, struct vy_range *range)
&range->compaction_queue);
lsm->env->compaction_queue_size += range->compaction_queue.bytes;
if (!rlist_empty(&range->slices)) {
- struct vy_slice *slice = rlist_last_entry(&range->slices,
- struct vy_slice, in_range);
+ struct vy_slice *slice = rlist_last_entry(
+ &range->slices, struct vy_slice, in_range);
vy_disk_stmt_counter_add(&lsm->stat.disk.last_level_count,
&slice->count);
if (lsm->index_id == 0)
@@ -831,8 +828,8 @@ vy_lsm_unacct_range(struct vy_lsm *lsm, struct vy_range *range)
&range->compaction_queue);
lsm->env->compaction_queue_size -= range->compaction_queue.bytes;
if (!rlist_empty(&range->slices)) {
- struct vy_slice *slice = rlist_last_entry(&range->slices,
- struct vy_slice, in_range);
+ struct vy_slice *slice = rlist_last_entry(
+ &range->slices, struct vy_slice, in_range);
vy_disk_stmt_counter_sub(&lsm->stat.disk.last_level_count,
&slice->count);
if (lsm->index_id == 0)
@@ -890,8 +887,8 @@ vy_lsm_delete_mem(struct vy_lsm *lsm, struct vy_mem *mem)
}
int
-vy_lsm_set(struct vy_lsm *lsm, struct vy_mem *mem,
- struct vy_entry entry, struct tuple **region_stmt)
+vy_lsm_set(struct vy_lsm *lsm, struct vy_mem *mem, struct vy_entry entry,
+ struct tuple **region_stmt)
{
uint32_t format_id = entry.stmt->format_id;
@@ -907,9 +904,8 @@ vy_lsm_set(struct vy_lsm *lsm, struct vy_mem *mem,
* while other LSM trees still use the old space format.
*/
if (*region_stmt == NULL || (*region_stmt)->format_id != format_id) {
- *region_stmt = vy_stmt_dup_lsregion(entry.stmt,
- &mem->env->allocator,
- mem->generation);
+ *region_stmt = vy_stmt_dup_lsregion(
+ entry.stmt, &mem->env->allocator, mem->generation);
if (*region_stmt == NULL)
return -1;
}
@@ -982,7 +978,8 @@ vy_lsm_commit_upsert(struct vy_lsm *lsm, struct vy_mem *mem,
older = vy_mem_older_lsn(mem, entry);
assert(older.stmt != NULL &&
vy_stmt_type(older.stmt) == IPROTO_UPSERT &&
- vy_stmt_n_upserts(older.stmt) == VY_UPSERT_THRESHOLD - 1);
+ vy_stmt_n_upserts(older.stmt) ==
+ VY_UPSERT_THRESHOLD - 1);
#endif
if (lsm->env->upsert_thresh_cb == NULL) {
/* Squash callback is not installed. */
@@ -994,7 +991,7 @@ vy_lsm_commit_upsert(struct vy_lsm *lsm, struct vy_mem *mem,
dup.stmt = vy_stmt_dup(entry.stmt);
if (dup.stmt != NULL) {
lsm->env->upsert_thresh_cb(lsm, dup,
- lsm->env->upsert_thresh_arg);
+ lsm->env->upsert_thresh_arg);
tuple_unref(dup.stmt);
}
/*
@@ -1015,8 +1012,8 @@ vy_lsm_commit_upsert(struct vy_lsm *lsm, struct vy_mem *mem,
assert(older.stmt == NULL ||
vy_stmt_type(older.stmt) != IPROTO_UPSERT);
struct vy_entry upserted;
- upserted = vy_entry_apply_upsert(entry, older,
- lsm->cmp_def, false);
+ upserted = vy_entry_apply_upsert(entry, older, lsm->cmp_def,
+ false);
lsm->stat.upsert.applied++;
if (upserted.stmt == NULL) {
@@ -1041,10 +1038,8 @@ vy_lsm_commit_upsert(struct vy_lsm *lsm, struct vy_mem *mem,
upserted_lsn != vy_stmt_lsn(older.stmt));
assert(vy_stmt_type(upserted.stmt) == IPROTO_REPLACE);
- struct tuple *region_stmt =
- vy_stmt_dup_lsregion(upserted.stmt,
- &mem->env->allocator,
- mem->generation);
+ struct tuple *region_stmt = vy_stmt_dup_lsregion(
+ upserted.stmt, &mem->env->allocator, mem->generation);
if (region_stmt == NULL) {
/* OOM */
tuple_unref(upserted.stmt);
@@ -1058,7 +1053,8 @@ vy_lsm_commit_upsert(struct vy_lsm *lsm, struct vy_mem *mem,
* now we replacing one statement with another, the
* vy_lsm_set() cannot fail.
*/
- assert(rc == 0); (void)rc;
+ assert(rc == 0);
+ (void)rc;
tuple_unref(upserted.stmt);
upserted.stmt = region_stmt;
vy_mem_commit_stmt(mem, upserted);
@@ -1094,9 +1090,9 @@ vy_lsm_rollback_stmt(struct vy_lsm *lsm, struct vy_mem *mem,
}
int
-vy_lsm_find_range_intersection(struct vy_lsm *lsm,
- const char *min_key, const char *max_key,
- struct vy_range **begin, struct vy_range **end)
+vy_lsm_find_range_intersection(struct vy_lsm *lsm, const char *min_key,
+ const char *max_key, struct vy_range **begin,
+ struct vy_range **end)
{
struct tuple_format *key_format = lsm->env->key_format;
struct vy_entry entry;
@@ -1161,7 +1157,8 @@ vy_lsm_split_range(struct vy_lsm *lsm, struct vy_range *range)
* so to preserve the order of the slices list, we have
* to iterate backward.
*/
- rlist_foreach_entry_reverse(slice, &range->slices, in_range) {
+ rlist_foreach_entry_reverse(slice, &range->slices, in_range)
+ {
if (vy_slice_cut(slice, vy_log_next_id(), part->begin,
part->end, lsm->cmp_def,
&new_slice) != 0)
@@ -1187,9 +1184,10 @@ vy_lsm_split_range(struct vy_lsm *lsm, struct vy_range *range)
tuple_data_or_null(part->begin.stmt),
tuple_data_or_null(part->end.stmt));
rlist_foreach_entry(slice, &part->slices, in_range)
- vy_log_insert_slice(part->id, slice->run->id, slice->id,
- tuple_data_or_null(slice->begin.stmt),
- tuple_data_or_null(slice->end.stmt));
+ vy_log_insert_slice(
+ part->id, slice->run->id, slice->id,
+ tuple_data_or_null(slice->begin.stmt),
+ tuple_data_or_null(slice->end.stmt));
}
if (vy_log_tx_commit() < 0)
goto fail;
@@ -1224,8 +1222,8 @@ fail:
tuple_unref(split_key.stmt);
diag_log();
- say_error("%s: failed to split range %s",
- vy_lsm_name(lsm), vy_range_str(range));
+ say_error("%s: failed to split range %s", vy_lsm_name(lsm),
+ vy_range_str(range));
return false;
}
@@ -1237,8 +1235,8 @@ vy_lsm_coalesce_range(struct vy_lsm *lsm, struct vy_range *range)
vy_lsm_range_size(lsm), &first, &last))
return false;
- struct vy_range *result = vy_range_new(vy_log_next_id(),
- first->begin, last->end, lsm->cmp_def);
+ struct vy_range *result = vy_range_new(vy_log_next_id(), first->begin,
+ last->end, lsm->cmp_def);
if (result == NULL)
goto fail_range;
@@ -1259,9 +1257,10 @@ vy_lsm_coalesce_range(struct vy_lsm *lsm, struct vy_range *range)
vy_log_delete_slice(slice->id);
vy_log_delete_range(it->id);
rlist_foreach_entry(slice, &it->slices, in_range) {
- vy_log_insert_slice(result->id, slice->run->id, slice->id,
- tuple_data_or_null(slice->begin.stmt),
- tuple_data_or_null(slice->end.stmt));
+ vy_log_insert_slice(
+ result->id, slice->run->id, slice->id,
+ tuple_data_or_null(slice->begin.stmt),
+ tuple_data_or_null(slice->end.stmt));
}
}
if (vy_log_tx_commit() < 0)
@@ -1273,7 +1272,8 @@ vy_lsm_coalesce_range(struct vy_lsm *lsm, struct vy_range *range)
*/
it = first;
while (it != end) {
- struct vy_range *next = vy_range_tree_next(&lsm->range_tree, it);
+ struct vy_range *next =
+ vy_range_tree_next(&lsm->range_tree, it);
vy_lsm_unacct_range(lsm, it);
vy_lsm_remove_range(lsm, it);
rlist_splice(&result->slices, &it->slices);
@@ -1295,16 +1295,16 @@ vy_lsm_coalesce_range(struct vy_lsm *lsm, struct vy_range *range)
vy_lsm_add_range(lsm, result);
lsm->range_tree_version++;
- say_info("%s: coalesced ranges %s",
- vy_lsm_name(lsm), vy_range_str(result));
+ say_info("%s: coalesced ranges %s", vy_lsm_name(lsm),
+ vy_range_str(result));
return true;
fail_commit:
vy_range_delete(result);
fail_range:
diag_log();
- say_error("%s: failed to coalesce range %s",
- vy_lsm_name(lsm), vy_range_str(range));
+ say_error("%s: failed to coalesce range %s", vy_lsm_name(lsm),
+ vy_range_str(range));
return false;
}
diff --git a/src/box/vy_lsm.h b/src/box/vy_lsm.h
index 3b553ea..c8e82d9 100644
--- a/src/box/vy_lsm.h
+++ b/src/box/vy_lsm.h
@@ -61,8 +61,8 @@ struct vy_recovery;
struct vy_run;
struct vy_run_env;
-typedef void
-(*vy_upsert_thresh_cb)(struct vy_lsm *lsm, struct vy_entry entry, void *arg);
+typedef void (*vy_upsert_thresh_cb)(struct vy_lsm *lsm, struct vy_entry entry,
+ void *arg);
/** Common LSM tree environment. */
struct vy_lsm_env {
@@ -442,8 +442,8 @@ vy_lsm_create(struct vy_lsm *lsm);
*/
int
vy_lsm_recover(struct vy_lsm *lsm, struct vy_recovery *recovery,
- struct vy_run_env *run_env, int64_t lsn,
- bool is_checkpoint_recovery, bool force_recovery);
+ struct vy_run_env *run_env, int64_t lsn,
+ bool is_checkpoint_recovery, bool force_recovery);
/**
* Return generation of in-memory data stored in an LSM tree
@@ -547,9 +547,9 @@ vy_lsm_delete_mem(struct vy_lsm *lsm, struct vy_mem *mem);
* On memory allocation error returns -1 and sets diag.
*/
int
-vy_lsm_find_range_intersection(struct vy_lsm *lsm,
- const char *min_key, const char *max_key,
- struct vy_range **begin, struct vy_range **end);
+vy_lsm_find_range_intersection(struct vy_lsm *lsm, const char *min_key,
+ const char *max_key, struct vy_range **begin,
+ struct vy_range **end);
/**
* Split a range if it has grown too big, return true if the range
@@ -597,8 +597,8 @@ vy_lsm_force_compaction(struct vy_lsm *lsm);
* @retval -1 Memory error.
*/
int
-vy_lsm_set(struct vy_lsm *lsm, struct vy_mem *mem,
- struct vy_entry entry, struct tuple **region_stmt);
+vy_lsm_set(struct vy_lsm *lsm, struct vy_mem *mem, struct vy_entry entry,
+ struct tuple **region_stmt);
/**
* Confirm that the statement stays in the in-memory index of
diff --git a/src/box/vy_mem.c b/src/box/vy_mem.c
index 98027e7..d9c4f4b 100644
--- a/src/box/vy_mem.c
+++ b/src/box/vy_mem.c
@@ -53,8 +53,8 @@ vy_mem_env_create(struct vy_mem_env *env, size_t memory)
{
/* Vinyl memory is limited by vy_quota. */
quota_init(&env->quota, QUOTA_MAX);
- tuple_arena_create(&env->arena, &env->quota, memory,
- SLAB_SIZE, false, "vinyl");
+ tuple_arena_create(&env->arena, &env->quota, memory, SLAB_SIZE, false,
+ "vinyl");
lsregion_create(&env->allocator, &env->arena);
env->tree_extent_size = 0;
}
@@ -73,7 +73,7 @@ vy_mem_env_destroy(struct vy_mem_env *env)
static void *
vy_mem_tree_extent_alloc(void *ctx)
{
- struct vy_mem *mem = (struct vy_mem *) ctx;
+ struct vy_mem *mem = (struct vy_mem *)ctx;
struct vy_mem_env *env = mem->env;
void *ret = lsregion_aligned_alloc(&env->allocator,
VY_MEM_TREE_EXTENT_SIZE,
@@ -103,8 +103,8 @@ vy_mem_new(struct vy_mem_env *env, struct key_def *cmp_def,
{
struct vy_mem *index = calloc(1, sizeof(*index));
if (!index) {
- diag_set(OutOfMemory, sizeof(*index),
- "malloc", "struct vy_mem");
+ diag_set(OutOfMemory, sizeof(*index), "malloc",
+ "struct vy_mem");
return NULL;
}
index->env = env;
@@ -114,8 +114,7 @@ vy_mem_new(struct vy_mem_env *env, struct key_def *cmp_def,
index->space_cache_version = space_cache_version;
index->format = format;
tuple_format_ref(format);
- vy_mem_tree_create(&index->tree, cmp_def,
- vy_mem_tree_extent_alloc,
+ vy_mem_tree_create(&index->tree, cmp_def, vy_mem_tree_extent_alloc,
vy_mem_tree_extent_free, index);
rlist_create(&index->in_sealed);
fiber_cond_create(&index->pin_cond);
@@ -166,9 +165,9 @@ vy_mem_insert_upsert(struct vy_mem *mem, struct vy_entry entry)
if (vy_mem_tree_insert_get_iterator(&mem->tree, entry, &replaced,
&inserted) != 0)
return -1;
- assert(! vy_mem_tree_iterator_is_invalid(&inserted));
- assert(vy_entry_is_equal(entry,
- *vy_mem_tree_iterator_get_elem(&mem->tree, &inserted)));
+ assert(!vy_mem_tree_iterator_is_invalid(&inserted));
+ assert(vy_entry_is_equal(
+ entry, *vy_mem_tree_iterator_get_elem(&mem->tree, &inserted)));
if (replaced.stmt == NULL)
mem->count.rows++;
mem->count.bytes += size;
@@ -194,8 +193,8 @@ vy_mem_insert_upsert(struct vy_mem *mem, struct vy_entry entry)
* UPSERTs subsequence.
*/
vy_mem_tree_iterator_next(&mem->tree, &inserted);
- struct vy_entry *older = vy_mem_tree_iterator_get_elem(&mem->tree,
- &inserted);
+ struct vy_entry *older =
+ vy_mem_tree_iterator_get_elem(&mem->tree, &inserted);
if (older == NULL || vy_stmt_type(older->stmt) != IPROTO_UPSERT ||
vy_entry_compare(entry, *older, mem->cmp_def) != 0)
return 0;
@@ -265,7 +264,7 @@ vy_mem_rollback_stmt(struct vy_mem *mem, struct vy_entry entry)
assert(!vy_stmt_is_refable(entry.stmt));
int rc = vy_mem_tree_delete(&mem->tree, entry);
assert(rc == 0);
- (void) rc;
+ (void)rc;
/* We can't free memory in case of rollback. */
mem->count.rows--;
mem->version++;
@@ -289,8 +288,8 @@ vy_mem_iterator_step(struct vy_mem_iterator *itr)
vy_mem_tree_iterator_next(&itr->mem->tree, &itr->curr_pos);
if (vy_mem_tree_iterator_is_invalid(&itr->curr_pos))
return 1;
- itr->curr = *vy_mem_tree_iterator_get_elem(&itr->mem->tree,
- &itr->curr_pos);
+ itr->curr =
+ *vy_mem_tree_iterator_get_elem(&itr->mem->tree, &itr->curr_pos);
return 0;
}
@@ -307,9 +306,9 @@ vy_mem_iterator_find_lsn(struct vy_mem_iterator *itr)
{
/* Skip to the first statement visible in the read view. */
assert(!vy_mem_tree_iterator_is_invalid(&itr->curr_pos));
- assert(vy_entry_is_equal(itr->curr,
- *vy_mem_tree_iterator_get_elem(&itr->mem->tree,
- &itr->curr_pos)));
+ assert(vy_entry_is_equal(
+ itr->curr, *vy_mem_tree_iterator_get_elem(&itr->mem->tree,
+ &itr->curr_pos)));
struct key_def *cmp_def = itr->mem->cmp_def;
while (vy_stmt_lsn(itr->curr.stmt) > (**itr->read_view).vlsn ||
vy_stmt_flags(itr->curr.stmt) & VY_STMT_SKIP_READ) {
@@ -353,11 +352,11 @@ vy_mem_iterator_find_lsn(struct vy_mem_iterator *itr)
struct vy_mem_tree_key tree_key;
tree_key.entry = itr->curr;
tree_key.lsn = (**itr->read_view).vlsn;
- itr->curr_pos = vy_mem_tree_lower_bound(&itr->mem->tree,
- &tree_key, NULL);
+ itr->curr_pos =
+ vy_mem_tree_lower_bound(&itr->mem->tree, &tree_key, NULL);
assert(!vy_mem_tree_iterator_is_invalid(&itr->curr_pos));
- itr->curr = *vy_mem_tree_iterator_get_elem(&itr->mem->tree,
- &itr->curr_pos);
+ itr->curr =
+ *vy_mem_tree_iterator_get_elem(&itr->mem->tree, &itr->curr_pos);
/* Skip VY_STMT_SKIP_READ statements, if any. */
while (vy_stmt_flags(itr->curr.stmt) & VY_STMT_SKIP_READ) {
@@ -390,7 +389,8 @@ vy_mem_iterator_seek(struct vy_mem_iterator *itr, struct vy_entry last)
if (last.stmt != NULL) {
key = last;
iterator_type = iterator_direction(itr->iterator_type) > 0 ?
- ITER_GT : ITER_LT;
+ ITER_GT :
+ ITER_LT;
}
bool exact = false;
@@ -400,16 +400,14 @@ vy_mem_iterator_seek(struct vy_mem_iterator *itr, struct vy_entry last)
tree_key.lsn = INT64_MAX - 1;
if (!vy_stmt_is_empty_key(key.stmt)) {
if (iterator_type == ITER_LE || iterator_type == ITER_GT) {
- itr->curr_pos =
- vy_mem_tree_upper_bound(&itr->mem->tree,
- &tree_key, &exact);
+ itr->curr_pos = vy_mem_tree_upper_bound(
+ &itr->mem->tree, &tree_key, &exact);
} else {
assert(iterator_type == ITER_EQ ||
iterator_type == ITER_GE ||
iterator_type == ITER_LT);
- itr->curr_pos =
- vy_mem_tree_lower_bound(&itr->mem->tree,
- &tree_key, &exact);
+ itr->curr_pos = vy_mem_tree_lower_bound(
+ &itr->mem->tree, &tree_key, &exact);
}
} else if (iterator_type == ITER_LE) {
itr->curr_pos = vy_mem_tree_invalid_iterator();
@@ -422,8 +420,8 @@ vy_mem_iterator_seek(struct vy_mem_iterator *itr, struct vy_entry last)
vy_mem_tree_iterator_prev(&itr->mem->tree, &itr->curr_pos);
if (vy_mem_tree_iterator_is_invalid(&itr->curr_pos))
return 1;
- itr->curr = *vy_mem_tree_iterator_get_elem(&itr->mem->tree,
- &itr->curr_pos);
+ itr->curr =
+ *vy_mem_tree_iterator_get_elem(&itr->mem->tree, &itr->curr_pos);
if (itr->iterator_type == ITER_EQ &&
((last.stmt == NULL && !exact) ||
(last.stmt != NULL &&
@@ -439,9 +437,10 @@ vy_mem_iterator_seek(struct vy_mem_iterator *itr, struct vy_entry last)
/* {{{ vy_mem_iterator API implementation */
void
-vy_mem_iterator_open(struct vy_mem_iterator *itr, struct vy_mem_iterator_stat *stat,
- struct vy_mem *mem, enum iterator_type iterator_type,
- struct vy_entry key, const struct vy_read_view **rv)
+vy_mem_iterator_open(struct vy_mem_iterator *itr,
+ struct vy_mem_iterator_stat *stat, struct vy_mem *mem,
+ enum iterator_type iterator_type, struct vy_entry key,
+ const struct vy_read_view **rv)
{
itr->stat = stat;
@@ -472,9 +471,9 @@ vy_mem_iterator_next_key(struct vy_mem_iterator *itr)
return 1;
assert(itr->mem->version == itr->version);
assert(!vy_mem_tree_iterator_is_invalid(&itr->curr_pos));
- assert(vy_entry_is_equal(itr->curr,
- *vy_mem_tree_iterator_get_elem(&itr->mem->tree,
- &itr->curr_pos)));
+ assert(vy_entry_is_equal(
+ itr->curr, *vy_mem_tree_iterator_get_elem(&itr->mem->tree,
+ &itr->curr_pos)));
struct key_def *cmp_def = itr->mem->cmp_def;
struct vy_entry prev = itr->curr;
@@ -512,9 +511,9 @@ vy_mem_iterator_next_lsn(struct vy_mem_iterator *itr)
return 1;
assert(itr->mem->version == itr->version);
assert(!vy_mem_tree_iterator_is_invalid(&itr->curr_pos));
- assert(vy_entry_is_equal(itr->curr,
- *vy_mem_tree_iterator_get_elem(&itr->mem->tree,
- &itr->curr_pos)));
+ assert(vy_entry_is_equal(
+ itr->curr, *vy_mem_tree_iterator_get_elem(&itr->mem->tree,
+ &itr->curr_pos)));
struct key_def *cmp_def = itr->mem->cmp_def;
struct vy_mem_tree_iterator next_pos = itr->curr_pos;
@@ -555,8 +554,7 @@ vy_mem_iterator_get_history(struct vy_mem_iterator *itr,
}
NODISCARD int
-vy_mem_iterator_next(struct vy_mem_iterator *itr,
- struct vy_history *history)
+vy_mem_iterator_next(struct vy_mem_iterator *itr, struct vy_history *history)
{
vy_history_cleanup(history);
if (vy_mem_iterator_next_key(itr) == 0)
@@ -577,7 +575,9 @@ vy_mem_iterator_skip(struct vy_mem_iterator *itr, struct vy_entry last,
if (itr->search_started &&
(itr->curr.stmt == NULL || last.stmt == NULL ||
iterator_direction(itr->iterator_type) *
- vy_entry_compare(itr->curr, last, itr->mem->cmp_def) > 0))
+ vy_entry_compare(itr->curr, last,
+ itr->mem->cmp_def) >
+ 0))
return 0;
vy_history_cleanup(history);
@@ -614,9 +614,8 @@ vy_mem_stream_next(struct vy_stmt_stream *virt_stream, struct vy_entry *ret)
assert(virt_stream->iface->next == vy_mem_stream_next);
struct vy_mem_stream *stream = (struct vy_mem_stream *)virt_stream;
- struct vy_entry *res =
- vy_mem_tree_iterator_get_elem(&stream->mem->tree,
- &stream->curr_pos);
+ struct vy_entry *res = vy_mem_tree_iterator_get_elem(&stream->mem->tree,
+ &stream->curr_pos);
if (res == NULL) {
*ret = vy_entry_none();
} else {
diff --git a/src/box/vy_mem.h b/src/box/vy_mem.h
index 4f06c75..0591ad8 100644
--- a/src/box/vy_mem.h
+++ b/src/box/vy_mem.h
@@ -87,8 +87,7 @@ struct vy_mem_tree_key {
* Internal. Extracted to speed up BPS tree.
*/
static int
-vy_mem_tree_cmp(struct vy_entry a, struct vy_entry b,
- struct key_def *cmp_def)
+vy_mem_tree_cmp(struct vy_entry a, struct vy_entry b, struct key_def *cmp_def)
{
int res = vy_entry_compare(a, b, cmp_def);
if (res)
@@ -370,9 +369,10 @@ struct vy_mem_iterator {
* Open an iterator over in-memory tree.
*/
void
-vy_mem_iterator_open(struct vy_mem_iterator *itr, struct vy_mem_iterator_stat *stat,
- struct vy_mem *mem, enum iterator_type iterator_type,
- struct vy_entry key, const struct vy_read_view **rv);
+vy_mem_iterator_open(struct vy_mem_iterator *itr,
+ struct vy_mem_iterator_stat *stat, struct vy_mem *mem,
+ enum iterator_type iterator_type, struct vy_entry key,
+ const struct vy_read_view **rv);
/**
* Advance a mem iterator to the next key.
@@ -380,8 +380,7 @@ vy_mem_iterator_open(struct vy_mem_iterator *itr, struct vy_mem_iterator_stat *s
* Returns 0 on success, -1 on memory allocation error.
*/
NODISCARD int
-vy_mem_iterator_next(struct vy_mem_iterator *itr,
- struct vy_history *history);
+vy_mem_iterator_next(struct vy_mem_iterator *itr, struct vy_history *history);
/**
* Advance a mem iterator to the key following @last.
diff --git a/src/box/vy_point_lookup.c b/src/box/vy_point_lookup.c
index 80b5c59..51e4645 100644
--- a/src/box/vy_point_lookup.c
+++ b/src/box/vy_point_lookup.c
@@ -58,8 +58,7 @@ vy_point_lookup_scan_txw(struct vy_lsm *lsm, struct vy_tx *tx,
if (tx == NULL)
return 0;
lsm->stat.txw.iterator.lookup++;
- struct txv *txv =
- write_set_search_key(&tx->write_set, lsm, key);
+ struct txv *txv = write_set_search_key(&tx->write_set, lsm, key);
assert(txv == NULL || txv->lsm == lsm);
if (txv == NULL)
return 0;
@@ -92,19 +91,18 @@ vy_point_lookup_scan_cache(struct vy_lsm *lsm, const struct vy_read_view **rv,
*/
static int
vy_point_lookup_scan_mem(struct vy_lsm *lsm, struct vy_mem *mem,
- const struct vy_read_view **rv,
- struct vy_entry key, struct vy_history *history)
+ const struct vy_read_view **rv, struct vy_entry key,
+ struct vy_history *history)
{
struct vy_mem_iterator mem_itr;
- vy_mem_iterator_open(&mem_itr, &lsm->stat.memory.iterator,
- mem, ITER_EQ, key, rv);
+ vy_mem_iterator_open(&mem_itr, &lsm->stat.memory.iterator, mem, ITER_EQ,
+ key, rv);
struct vy_history mem_history;
vy_history_create(&mem_history, &lsm->env->history_node_pool);
int rc = vy_mem_iterator_next(&mem_itr, &mem_history);
vy_history_splice(history, &mem_history);
vy_mem_iterator_close(&mem_itr);
return rc;
-
}
/**
@@ -142,8 +140,8 @@ vy_point_lookup_scan_slice(struct vy_lsm *lsm, struct vy_slice *slice,
* format in vy_mem.
*/
struct vy_run_iterator run_itr;
- vy_run_iterator_open(&run_itr, &lsm->stat.disk.iterator, slice,
- ITER_EQ, key, rv, lsm->cmp_def, lsm->key_def,
+ vy_run_iterator_open(&run_itr, &lsm->stat.disk.iterator, slice, ITER_EQ,
+ key, rv, lsm->cmp_def, lsm->key_def,
lsm->disk_format);
struct vy_history slice_history;
vy_history_create(&slice_history, &lsm->env->history_node_pool);
@@ -163,14 +161,13 @@ static int
vy_point_lookup_scan_slices(struct vy_lsm *lsm, const struct vy_read_view **rv,
struct vy_entry key, struct vy_history *history)
{
- struct vy_range *range = vy_range_tree_find_by_key(&lsm->range_tree,
- ITER_EQ, key);
+ struct vy_range *range =
+ vy_range_tree_find_by_key(&lsm->range_tree, ITER_EQ, key);
assert(range != NULL);
int slice_count = range->slice_count;
size_t size;
- struct vy_slice **slices =
- region_alloc_array(&fiber()->gc, typeof(slices[0]), slice_count,
- &size);
+ struct vy_slice **slices = region_alloc_array(
+ &fiber()->gc, typeof(slices[0]), slice_count, &size);
if (slices == NULL) {
diag_set(OutOfMemory, size, "region_alloc_array", "slices");
return -1;
@@ -185,8 +182,8 @@ vy_point_lookup_scan_slices(struct vy_lsm *lsm, const struct vy_read_view **rv,
int rc = 0;
for (i = 0; i < slice_count; i++) {
if (rc == 0 && !vy_history_is_terminal(history))
- rc = vy_point_lookup_scan_slice(lsm, slices[i],
- rv, key, history);
+ rc = vy_point_lookup_scan_slice(lsm, slices[i], rv, key,
+ history);
vy_slice_unpin(slices[i]);
}
return rc;
@@ -194,8 +191,8 @@ vy_point_lookup_scan_slices(struct vy_lsm *lsm, const struct vy_read_view **rv,
int
vy_point_lookup(struct vy_lsm *lsm, struct vy_tx *tx,
- const struct vy_read_view **rv,
- struct vy_entry key, struct vy_entry *ret)
+ const struct vy_read_view **rv, struct vy_entry key,
+ struct vy_entry *ret)
{
/* All key parts must be set for a point lookup. */
assert(vy_stmt_is_full_key(key.stmt, lsm->cmp_def));
@@ -284,8 +281,8 @@ done:
if (rc == 0) {
int upserts_applied;
- rc = vy_history_apply(&history, lsm->cmp_def,
- false, &upserts_applied, ret);
+ rc = vy_history_apply(&history, lsm->cmp_def, false,
+ &upserts_applied, ret);
lsm->stat.upsert.applied += upserts_applied;
}
vy_history_cleanup(&history);
@@ -319,8 +316,8 @@ vy_point_lookup_mem(struct vy_lsm *lsm, const struct vy_read_view **rv,
done:
if (rc == 0) {
int upserts_applied;
- rc = vy_history_apply(&history, lsm->cmp_def,
- true, &upserts_applied, ret);
+ rc = vy_history_apply(&history, lsm->cmp_def, true,
+ &upserts_applied, ret);
lsm->stat.upsert.applied += upserts_applied;
}
out:
diff --git a/src/box/vy_point_lookup.h b/src/box/vy_point_lookup.h
index b4092ee..b33312c 100644
--- a/src/box/vy_point_lookup.h
+++ b/src/box/vy_point_lookup.h
@@ -67,8 +67,8 @@ struct vy_read_view;
*/
int
vy_point_lookup(struct vy_lsm *lsm, struct vy_tx *tx,
- const struct vy_read_view **rv,
- struct vy_entry key, struct vy_entry *ret);
+ const struct vy_read_view **rv, struct vy_entry key,
+ struct vy_entry *ret);
/**
* Look up a tuple by key in memory.
diff --git a/src/box/vy_quota.c b/src/box/vy_quota.c
index f1ac8dd..3c7078e 100644
--- a/src/box/vy_quota.c
+++ b/src/box/vy_quota.c
@@ -54,8 +54,7 @@ static const double VY_QUOTA_TIMER_PERIOD = 0.1;
/**
* Bit mask of resources used by a particular consumer type.
*/
-static unsigned
-vy_quota_consumer_resource_map[] = {
+static unsigned vy_quota_consumer_resource_map[] = {
/**
* Transaction throttling pursues two goals. First, it is
* capping memory consumption rate so that the hard memory
@@ -100,7 +99,7 @@ vy_rate_limit_is_applicable(enum vy_quota_consumer_type consumer_type,
enum vy_quota_resource_type resource_type)
{
return (vy_quota_consumer_resource_map[consumer_type] &
- (1 << resource_type)) != 0;
+ (1 << resource_type)) != 0;
}
/**
@@ -300,8 +299,8 @@ vy_quota_release(struct vy_quota *q, size_t size)
}
int
-vy_quota_use(struct vy_quota *q, enum vy_quota_consumer_type type,
- size_t size, double timeout)
+vy_quota_use(struct vy_quota *q, enum vy_quota_consumer_type type, size_t size,
+ double timeout)
{
/*
* Fail early if the configured memory limit never allows
@@ -342,8 +341,8 @@ vy_quota_use(struct vy_quota *q, enum vy_quota_consumer_type type,
double wait_time = ev_monotonic_now(loop()) - wait_start;
if (wait_time > q->too_long_threshold) {
say_warn_ratelimited("waited for %zu bytes of vinyl memory "
- "quota for too long: %.3f sec", size,
- wait_time);
+ "quota for too long: %.3f sec",
+ size, wait_time);
}
vy_quota_do_use(q, type, size);
diff --git a/src/box/vy_quota.h b/src/box/vy_quota.h
index bd7d4e0..15507aa 100644
--- a/src/box/vy_quota.h
+++ b/src/box/vy_quota.h
@@ -107,8 +107,7 @@ vy_rate_limit_refill(struct vy_rate_limit *rl, double time)
rl->value = MIN((ssize_t)value, SSIZE_MAX);
}
-typedef void
-(*vy_quota_exceeded_f)(struct vy_quota *quota);
+typedef void (*vy_quota_exceeded_f)(struct vy_quota *quota);
/**
* Apart from memory usage accounting and limiting, vy_quota is
@@ -311,8 +310,8 @@ vy_quota_release(struct vy_quota *q, size_t size);
* account while estimating the size of a memory allocation.
*/
int
-vy_quota_use(struct vy_quota *q, enum vy_quota_consumer_type type,
- size_t size, double timeout);
+vy_quota_use(struct vy_quota *q, enum vy_quota_consumer_type type, size_t size,
+ double timeout);
/**
* Adjust quota after allocating memory.
diff --git a/src/box/vy_range.c b/src/box/vy_range.c
index 4ff8521..586b3ef 100644
--- a/src/box/vy_range.c
+++ b/src/box/vy_range.c
@@ -78,8 +78,7 @@ vy_range_tree_key_cmp(struct vy_entry entry, struct vy_range *range)
struct vy_range *
vy_range_tree_find_by_key(vy_range_tree_t *tree,
- enum iterator_type iterator_type,
- struct vy_entry key)
+ enum iterator_type iterator_type, struct vy_entry key)
{
if (vy_stmt_is_empty_key(key.stmt)) {
switch (iterator_type) {
@@ -180,8 +179,8 @@ vy_range_new(int64_t id, struct vy_entry begin, struct vy_entry end,
{
struct vy_range *range = calloc(1, sizeof(*range));
if (range == NULL) {
- diag_set(OutOfMemory, sizeof(*range),
- "malloc", "struct vy_range");
+ diag_set(OutOfMemory, sizeof(*range), "malloc",
+ "struct vy_range");
return NULL;
}
range->id = id;
@@ -429,8 +428,8 @@ void
vy_range_update_dumps_per_compaction(struct vy_range *range)
{
if (!rlist_empty(&range->slices)) {
- struct vy_slice *slice = rlist_last_entry(&range->slices,
- struct vy_slice, in_range);
+ struct vy_slice *slice = rlist_last_entry(
+ &range->slices, struct vy_slice, in_range);
range->dumps_per_compaction = slice->run->dump_count;
} else {
range->dumps_per_compaction = 0;
@@ -470,12 +469,13 @@ vy_range_needs_split(struct vy_range *range, int64_t range_size,
/* Find the median key in the oldest run (approximately). */
struct vy_page_info *mid_page;
- mid_page = vy_run_page_info(slice->run, slice->first_page_no +
- (slice->last_page_no -
- slice->first_page_no) / 2);
+ mid_page = vy_run_page_info(
+ slice->run,
+ slice->first_page_no +
+ (slice->last_page_no - slice->first_page_no) / 2);
- struct vy_page_info *first_page = vy_run_page_info(slice->run,
- slice->first_page_no);
+ struct vy_page_info *first_page =
+ vy_run_page_info(slice->run, slice->first_page_no);
/* No point in splitting if a new range is going to be empty. */
if (key_compare(first_page->min_key, first_page->min_key_hint,
diff --git a/src/box/vy_range.h b/src/box/vy_range.h
index 2eb843b..0a6d62b 100644
--- a/src/box/vy_range.h
+++ b/src/box/vy_range.h
@@ -171,8 +171,8 @@ vy_range_tree_key_cmp(struct vy_entry entry, struct vy_range *range);
typedef rb_tree(struct vy_range) vy_range_tree_t;
rb_gen_ext_key(MAYBE_UNUSED static inline, vy_range_tree_, vy_range_tree_t,
- struct vy_range, tree_node, vy_range_tree_cmp,
- struct vy_entry, vy_range_tree_key_cmp);
+ struct vy_range, tree_node, vy_range_tree_cmp, struct vy_entry,
+ vy_range_tree_key_cmp);
/**
* Find the first range in which a given key should be looked up.
diff --git a/src/box/vy_read_iterator.c b/src/box/vy_read_iterator.c
index 4097969..b35e90c 100644
--- a/src/box/vy_read_iterator.c
+++ b/src/box/vy_read_iterator.c
@@ -70,8 +70,8 @@ vy_read_iterator_reserve(struct vy_read_iterator *itr, uint32_t capacity)
return 0;
struct vy_read_src *new_src = calloc(capacity, sizeof(*new_src));
if (new_src == NULL) {
- diag_set(OutOfMemory, capacity * sizeof(*new_src),
- "calloc", "new_src");
+ diag_set(OutOfMemory, capacity * sizeof(*new_src), "calloc",
+ "new_src");
return -1;
}
memcpy(new_src, itr->src, itr->src_count * sizeof(*new_src));
@@ -148,15 +148,15 @@ vy_read_iterator_range_is_done(struct vy_read_iterator *itr,
int dir = iterator_direction(itr->iterator_type);
if (dir > 0 && range->end.stmt != NULL &&
- (next.stmt == NULL || vy_entry_compare(next, range->end,
- cmp_def) >= 0) &&
+ (next.stmt == NULL ||
+ vy_entry_compare(next, range->end, cmp_def) >= 0) &&
(itr->iterator_type != ITER_EQ ||
vy_entry_compare(itr->key, range->end, cmp_def) >= 0))
return true;
if (dir < 0 && range->begin.stmt != NULL &&
- (next.stmt == NULL || vy_entry_compare(next, range->begin,
- cmp_def) < 0) &&
+ (next.stmt == NULL ||
+ vy_entry_compare(next, range->begin, cmp_def) < 0) &&
(itr->iterator_type != ITER_REQ ||
vy_entry_compare(itr->key, range->begin, cmp_def) <= 0))
return true;
@@ -175,8 +175,8 @@ vy_read_iterator_range_is_done(struct vy_read_iterator *itr,
* NULL denotes the statement following the last one.
*/
static inline int
-vy_read_iterator_cmp_stmt(struct vy_read_iterator *itr,
- struct vy_entry a, struct vy_entry b)
+vy_read_iterator_cmp_stmt(struct vy_read_iterator *itr, struct vy_entry a,
+ struct vy_entry b)
{
if (a.stmt == NULL && b.stmt != NULL)
return 1;
@@ -185,7 +185,7 @@ vy_read_iterator_cmp_stmt(struct vy_read_iterator *itr,
if (a.stmt == NULL && b.stmt == NULL)
return 0;
return iterator_direction(itr->iterator_type) *
- vy_entry_compare(a, b, itr->lsm->cmp_def);
+ vy_entry_compare(a, b, itr->lsm->cmp_def);
}
/**
@@ -205,10 +205,10 @@ vy_read_iterator_is_exact_match(struct vy_read_iterator *itr,
* in case the key is found in memory.
*/
return itr->last.stmt == NULL && entry.stmt != NULL &&
- (type == ITER_EQ || type == ITER_REQ ||
- type == ITER_GE || type == ITER_LE) &&
- vy_stmt_is_full_key(itr->key.stmt, cmp_def) &&
- vy_entry_compare(entry, itr->key, cmp_def) == 0;
+ (type == ITER_EQ || type == ITER_REQ || type == ITER_GE ||
+ type == ITER_LE) &&
+ vy_stmt_is_full_key(itr->key.stmt, cmp_def) &&
+ vy_entry_compare(entry, itr->key, cmp_def) == 0;
}
/**
@@ -220,8 +220,8 @@ vy_read_iterator_is_exact_match(struct vy_read_iterator *itr,
*/
static void
vy_read_iterator_evaluate_src(struct vy_read_iterator *itr,
- struct vy_read_src *src,
- struct vy_entry *next, bool *stop)
+ struct vy_read_src *src, struct vy_entry *next,
+ bool *stop)
{
uint32_t src_id = src - itr->src;
struct vy_entry entry = vy_history_last_stmt(&src->history);
@@ -268,8 +268,8 @@ vy_read_iterator_evaluate_src(struct vy_read_iterator *itr,
*/
static NODISCARD int
-vy_read_iterator_scan_txw(struct vy_read_iterator *itr,
- struct vy_entry *next, bool *stop)
+vy_read_iterator_scan_txw(struct vy_read_iterator *itr, struct vy_entry *next,
+ bool *stop)
{
struct vy_read_src *src = &itr->src[itr->txw_src];
struct vy_txw_iterator *src_itr = &src->txw_iterator;
@@ -297,19 +297,20 @@ vy_read_iterator_scan_txw(struct vy_read_iterator *itr,
}
static NODISCARD int
-vy_read_iterator_scan_cache(struct vy_read_iterator *itr,
- struct vy_entry *next, bool *stop)
+vy_read_iterator_scan_cache(struct vy_read_iterator *itr, struct vy_entry *next,
+ bool *stop)
{
bool is_interval = false;
struct vy_read_src *src = &itr->src[itr->cache_src];
struct vy_cache_iterator *src_itr = &src->cache_iterator;
- int rc = vy_cache_iterator_restore(src_itr, itr->last,
- &src->history, &is_interval);
+ int rc = vy_cache_iterator_restore(src_itr, itr->last, &src->history,
+ &is_interval);
if (rc == 0) {
if (!src->is_started || itr->cache_src >= itr->skipped_src) {
rc = vy_cache_iterator_skip(src_itr, itr->last,
- &src->history, &is_interval);
+ &src->history,
+ &is_interval);
} else if (src->front_id == itr->prev_front_id) {
rc = vy_cache_iterator_next(src_itr, &src->history,
&is_interval);
@@ -365,8 +366,7 @@ vy_read_iterator_scan_disk(struct vy_read_iterator *itr, uint32_t disk_src,
assert(disk_src >= itr->disk_src && disk_src < itr->src_count);
if (!src->is_started || disk_src >= itr->skipped_src)
- rc = vy_run_iterator_skip(src_itr, itr->last,
- &src->history);
+ rc = vy_run_iterator_skip(src_itr, itr->last, &src->history);
else if (src->front_id == itr->prev_front_id)
rc = vy_run_iterator_next(src_itr, &src->history);
src->is_started = true;
@@ -391,8 +391,8 @@ vy_read_iterator_next_range(struct vy_read_iterator *itr);
static NODISCARD int
vy_read_iterator_advance(struct vy_read_iterator *itr)
{
- if (itr->last.stmt != NULL && (itr->iterator_type == ITER_EQ ||
- itr->iterator_type == ITER_REQ) &&
+ if (itr->last.stmt != NULL &&
+ (itr->iterator_type == ITER_EQ || itr->iterator_type == ITER_REQ) &&
vy_stmt_is_full_key(itr->key.stmt, itr->lsm->cmp_def)) {
/*
* There may be one statement at max satisfying
@@ -507,7 +507,7 @@ done:
* and respects statement order.
*/
if (itr->last.stmt != NULL && next.stmt != NULL) {
- assert(vy_read_iterator_cmp_stmt(itr, next, itr->last) > 0);
+ assert(vy_read_iterator_cmp_stmt(itr, next, itr->last) > 0);
}
#endif
if (itr->need_check_eq && next.stmt != NULL &&
@@ -520,8 +520,8 @@ static void
vy_read_iterator_add_tx(struct vy_read_iterator *itr)
{
assert(itr->tx != NULL);
- enum iterator_type iterator_type = (itr->iterator_type != ITER_REQ ?
- itr->iterator_type : ITER_LE);
+ enum iterator_type iterator_type =
+ (itr->iterator_type != ITER_REQ ? itr->iterator_type : ITER_LE);
struct vy_txw_iterator_stat *stat = &itr->lsm->stat.txw.iterator;
struct vy_read_src *sub_src = vy_read_iterator_add_src(itr);
vy_txw_iterator_open(&sub_src->txw_iterator, stat, itr->tx, itr->lsm,
@@ -531,19 +531,18 @@ vy_read_iterator_add_tx(struct vy_read_iterator *itr)
static void
vy_read_iterator_add_cache(struct vy_read_iterator *itr)
{
- enum iterator_type iterator_type = (itr->iterator_type != ITER_REQ ?
- itr->iterator_type : ITER_LE);
+ enum iterator_type iterator_type =
+ (itr->iterator_type != ITER_REQ ? itr->iterator_type : ITER_LE);
struct vy_read_src *sub_src = vy_read_iterator_add_src(itr);
- vy_cache_iterator_open(&sub_src->cache_iterator,
- &itr->lsm->cache, iterator_type,
- itr->key, itr->read_view);
+ vy_cache_iterator_open(&sub_src->cache_iterator, &itr->lsm->cache,
+ iterator_type, itr->key, itr->read_view);
}
static void
vy_read_iterator_add_mem(struct vy_read_iterator *itr)
{
- enum iterator_type iterator_type = (itr->iterator_type != ITER_REQ ?
- itr->iterator_type : ITER_LE);
+ enum iterator_type iterator_type =
+ (itr->iterator_type != ITER_REQ ? itr->iterator_type : ITER_LE);
struct vy_lsm *lsm = itr->lsm;
struct vy_read_src *sub_src;
@@ -557,9 +556,8 @@ vy_read_iterator_add_mem(struct vy_read_iterator *itr)
rlist_foreach_entry(mem, &lsm->sealed, in_sealed) {
sub_src = vy_read_iterator_add_src(itr);
vy_mem_iterator_open(&sub_src->mem_iterator,
- &lsm->stat.memory.iterator,
- mem, iterator_type, itr->key,
- itr->read_view);
+ &lsm->stat.memory.iterator, mem,
+ iterator_type, itr->key, itr->read_view);
}
}
@@ -567,8 +565,8 @@ static void
vy_read_iterator_add_disk(struct vy_read_iterator *itr)
{
assert(itr->curr_range != NULL);
- enum iterator_type iterator_type = (itr->iterator_type != ITER_REQ ?
- itr->iterator_type : ITER_LE);
+ enum iterator_type iterator_type =
+ (itr->iterator_type != ITER_REQ ? itr->iterator_type : ITER_LE);
struct vy_lsm *lsm = itr->lsm;
struct vy_slice *slice;
/*
@@ -580,9 +578,9 @@ vy_read_iterator_add_disk(struct vy_read_iterator *itr)
struct vy_read_src *sub_src = vy_read_iterator_add_src(itr);
vy_run_iterator_open(&sub_src->run_iterator,
&lsm->stat.disk.iterator, slice,
- iterator_type, itr->key,
- itr->read_view, lsm->cmp_def,
- lsm->key_def, lsm->disk_format);
+ iterator_type, itr->key, itr->read_view,
+ lsm->cmp_def, lsm->key_def,
+ lsm->disk_format);
}
}
@@ -648,7 +646,8 @@ vy_read_iterator_open(struct vy_read_iterator *itr, struct vy_lsm *lsm,
* in this case.
*/
itr->iterator_type = iterator_direction(iterator_type) > 0 ?
- ITER_GE : ITER_LE;
+ ITER_GE :
+ ITER_LE;
}
if (iterator_type == ITER_ALL)
@@ -664,7 +663,6 @@ vy_read_iterator_open(struct vy_read_iterator *itr, struct vy_lsm *lsm,
*/
itr->need_check_eq = true;
}
-
}
/**
@@ -681,10 +679,9 @@ vy_read_iterator_restore(struct vy_read_iterator *itr)
itr->mem_list_version = itr->lsm->mem_list_version;
itr->range_tree_version = itr->lsm->range_tree_version;
- itr->curr_range = vy_range_tree_find_by_key(&itr->lsm->range_tree,
- itr->iterator_type,
- itr->last.stmt != NULL ?
- itr->last : itr->key);
+ itr->curr_range = vy_range_tree_find_by_key(
+ &itr->lsm->range_tree, itr->iterator_type,
+ itr->last.stmt != NULL ? itr->last : itr->key);
itr->range_version = itr->curr_range->version;
if (itr->tx != NULL) {
@@ -714,9 +711,10 @@ vy_read_iterator_next_range(struct vy_read_iterator *itr)
assert(range != NULL);
while (true) {
- range = dir > 0 ?
- vy_range_tree_next(&itr->lsm->range_tree, range) :
- vy_range_tree_prev(&itr->lsm->range_tree, range);
+ range = dir > 0 ? vy_range_tree_next(&itr->lsm->range_tree,
+ range) :
+ vy_range_tree_prev(&itr->lsm->range_tree,
+ range);
assert(range != NULL);
if (itr->last.stmt == NULL)
@@ -725,13 +723,13 @@ vy_read_iterator_next_range(struct vy_read_iterator *itr)
* We could skip an entire range due to the cache.
* Make sure the next statement falls in the range.
*/
- if (dir > 0 && (range->end.stmt == NULL ||
- vy_entry_compare(itr->last, range->end,
- cmp_def) < 0))
+ if (dir > 0 &&
+ (range->end.stmt == NULL ||
+ vy_entry_compare(itr->last, range->end, cmp_def) < 0))
break;
- if (dir < 0 && (range->begin.stmt == NULL ||
- vy_entry_compare(itr->last, range->begin,
- cmp_def) > 0))
+ if (dir < 0 &&
+ (range->begin.stmt == NULL ||
+ vy_entry_compare(itr->last, range->begin, cmp_def) > 0))
break;
}
itr->curr_range = range;
@@ -768,8 +766,8 @@ vy_read_iterator_apply_history(struct vy_read_iterator *itr,
}
int upserts_applied = 0;
- int rc = vy_history_apply(&history, lsm->cmp_def,
- true, &upserts_applied, ret);
+ int rc = vy_history_apply(&history, lsm->cmp_def, true,
+ &upserts_applied, ret);
lsm->stat.upsert.applied += upserts_applied;
vy_history_cleanup(&history);
@@ -787,18 +785,18 @@ vy_read_iterator_track_read(struct vy_read_iterator *itr, struct vy_entry entry)
if (entry.stmt == NULL) {
entry = (itr->iterator_type == ITER_EQ ||
- itr->iterator_type == ITER_REQ ?
- itr->key : itr->lsm->env->empty_key);
+ itr->iterator_type == ITER_REQ ?
+ itr->key :
+ itr->lsm->env->empty_key);
}
int rc;
if (iterator_direction(itr->iterator_type) >= 0) {
rc = vy_tx_track(itr->tx, itr->lsm, itr->key,
- itr->iterator_type != ITER_GT,
- entry, true);
+ itr->iterator_type != ITER_GT, entry, true);
} else {
- rc = vy_tx_track(itr->tx, itr->lsm, entry, true,
- itr->key, itr->iterator_type != ITER_LT);
+ rc = vy_tx_track(itr->tx, itr->lsm, entry, true, itr->key,
+ itr->iterator_type != ITER_LT);
}
return rc;
}
@@ -853,8 +851,8 @@ vy_read_iterator_cache_add(struct vy_read_iterator *itr, struct vy_entry entry)
itr->last_cached = vy_entry_none();
return;
}
- vy_cache_add(&itr->lsm->cache, entry, itr->last_cached,
- itr->key, itr->iterator_type);
+ vy_cache_add(&itr->lsm->cache, entry, itr->last_cached, itr->key,
+ itr->iterator_type);
if (entry.stmt != NULL)
tuple_ref(entry.stmt);
if (itr->last_cached.stmt != NULL)
diff --git a/src/box/vy_read_set.c b/src/box/vy_read_set.c
index 431b24f..58a67c8 100644
--- a/src/box/vy_read_set.c
+++ b/src/box/vy_read_set.c
@@ -118,8 +118,7 @@ vy_tx_conflict_iterator_next(struct vy_tx_conflict_iterator *it)
assert(left == NULL || left->lsm == curr->lsm);
assert(right == NULL || right->lsm == curr->lsm);
- int cmp_right = vy_entry_compare(it->key, last->right,
- cmp_def);
+ int cmp_right = vy_entry_compare(it->key, last->right, cmp_def);
if (cmp_right == 0 && !last->right_belongs)
cmp_right = 1;
@@ -138,8 +137,8 @@ vy_tx_conflict_iterator_next(struct vy_tx_conflict_iterator *it)
/* Optimize comparison out. */
cmp_left = cmp_right;
} else {
- cmp_left = vy_entry_compare(it->key, curr->left,
- cmp_def);
+ cmp_left =
+ vy_entry_compare(it->key, curr->left, cmp_def);
if (cmp_left == 0 && !curr->left_belongs)
cmp_left = -1;
}
@@ -166,8 +165,8 @@ vy_tx_conflict_iterator_next(struct vy_tx_conflict_iterator *it)
/* Optimize comparison out. */
cmp_right = cmp_left;
} else if (curr != last) {
- cmp_right = vy_entry_compare(it->key, curr->right,
- cmp_def);
+ cmp_right =
+ vy_entry_compare(it->key, curr->right, cmp_def);
if (cmp_right == 0 && !curr->right_belongs)
cmp_right = 1;
}
diff --git a/src/box/vy_regulator.c b/src/box/vy_regulator.c
index 8ec7e25..f4b1a4b 100644
--- a/src/box/vy_regulator.c
+++ b/src/box/vy_regulator.c
@@ -105,11 +105,11 @@ vy_regulator_trigger_dump(struct vy_regulator *regulator)
* write_rate dump_bandwidth
*/
struct vy_quota *quota = regulator->quota;
- size_t mem_left = (quota->used < quota->limit ?
- quota->limit - quota->used : 0);
+ size_t mem_left =
+ (quota->used < quota->limit ? quota->limit - quota->used : 0);
size_t mem_used = quota->used;
- size_t max_write_rate = (double)mem_left / (mem_used + 1) *
- regulator->dump_bandwidth;
+ size_t max_write_rate =
+ (double)mem_left / (mem_used + 1) * regulator->dump_bandwidth;
max_write_rate = MIN(max_write_rate, regulator->dump_bandwidth);
vy_quota_set_rate_limit(quota, VY_QUOTA_RESOURCE_MEMORY,
max_write_rate);
@@ -144,8 +144,8 @@ vy_regulator_update_write_rate(struct vy_regulator *regulator)
size_t rate_avg = regulator->write_rate;
size_t rate_curr = (used_curr - used_last) / VY_REGULATOR_TIMER_PERIOD;
- double weight = 1 - exp(-VY_REGULATOR_TIMER_PERIOD /
- VY_WRITE_RATE_AVG_WIN);
+ double weight =
+ 1 - exp(-VY_REGULATOR_TIMER_PERIOD / VY_WRITE_RATE_AVG_WIN);
rate_avg = (1 - weight) * rate_avg + weight * rate_curr;
regulator->write_rate = rate_avg;
@@ -178,15 +178,15 @@ vy_regulator_update_dump_watermark(struct vy_regulator *regulator)
*/
size_t write_rate = regulator->write_rate_max * 3 / 2;
regulator->dump_watermark =
- (double)quota->limit * regulator->dump_bandwidth /
- (regulator->dump_bandwidth + write_rate + 1);
+ (double)quota->limit * regulator->dump_bandwidth /
+ (regulator->dump_bandwidth + write_rate + 1);
/*
* It doesn't make sense to set the watermark below 50%
* of the memory limit because the write rate can exceed
* the dump bandwidth under no circumstances.
*/
- regulator->dump_watermark = MAX(regulator->dump_watermark,
- quota->limit / 2);
+ regulator->dump_watermark =
+ MAX(regulator->dump_watermark, quota->limit / 2);
}
static void
@@ -209,17 +209,17 @@ vy_regulator_create(struct vy_regulator *regulator, struct vy_quota *quota,
enum { KB = 1024, MB = KB * KB };
static int64_t dump_bandwidth_buckets[] = {
100 * KB, 200 * KB, 300 * KB, 400 * KB, 500 * KB, 600 * KB,
- 700 * KB, 800 * KB, 900 * KB, 1 * MB, 2 * MB, 3 * MB,
- 4 * MB, 5 * MB, 6 * MB, 7 * MB, 8 * MB, 9 * MB,
- 10 * MB, 15 * MB, 20 * MB, 25 * MB, 30 * MB, 35 * MB,
- 40 * MB, 45 * MB, 50 * MB, 55 * MB, 60 * MB, 65 * MB,
- 70 * MB, 75 * MB, 80 * MB, 85 * MB, 90 * MB, 95 * MB,
+ 700 * KB, 800 * KB, 900 * KB, 1 * MB, 2 * MB, 3 * MB,
+ 4 * MB, 5 * MB, 6 * MB, 7 * MB, 8 * MB, 9 * MB,
+ 10 * MB, 15 * MB, 20 * MB, 25 * MB, 30 * MB, 35 * MB,
+ 40 * MB, 45 * MB, 50 * MB, 55 * MB, 60 * MB, 65 * MB,
+ 70 * MB, 75 * MB, 80 * MB, 85 * MB, 90 * MB, 95 * MB,
100 * MB, 200 * MB, 300 * MB, 400 * MB, 500 * MB, 600 * MB,
700 * MB, 800 * MB, 900 * MB,
};
memset(regulator, 0, sizeof(*regulator));
- regulator->dump_bandwidth_hist = histogram_new(dump_bandwidth_buckets,
- lengthof(dump_bandwidth_buckets));
+ regulator->dump_bandwidth_hist = histogram_new(
+ dump_bandwidth_buckets, lengthof(dump_bandwidth_buckets));
if (regulator->dump_bandwidth_hist == NULL)
panic("failed to allocate dump bandwidth histogram");
@@ -262,8 +262,8 @@ vy_regulator_check_dump_watermark(struct vy_regulator *regulator)
}
void
-vy_regulator_dump_complete(struct vy_regulator *regulator,
- size_t mem_dumped, double dump_duration)
+vy_regulator_dump_complete(struct vy_regulator *regulator, size_t mem_dumped,
+ double dump_duration)
{
regulator->dump_in_progress = false;
@@ -430,7 +430,7 @@ vy_regulator_update_rate_limit(struct vy_regulator *regulator,
recent->compaction_time += compaction_time;
double rate = 0.75 * compaction_threads * recent->dump_input /
- recent->compaction_time;
+ recent->compaction_time;
/*
* We can't simply use (size_t)MIN(rate, SIZE_MAX) to cast
* the rate from double to size_t here, because on a 64-bit
diff --git a/src/box/vy_regulator.h b/src/box/vy_regulator.h
index 5131ac5..5ceeb34 100644
--- a/src/box/vy_regulator.h
+++ b/src/box/vy_regulator.h
@@ -45,8 +45,7 @@ struct histogram;
struct vy_quota;
struct vy_regulator;
-typedef int
-(*vy_trigger_dump_f)(struct vy_regulator *regulator);
+typedef int (*vy_trigger_dump_f)(struct vy_regulator *regulator);
/**
* The regulator is supposed to keep track of vinyl memory usage
@@ -153,8 +152,8 @@ vy_regulator_quota_exceeded(struct vy_regulator *regulator);
* Notify the regulator about memory dump completion.
*/
void
-vy_regulator_dump_complete(struct vy_regulator *regulator,
- size_t mem_dumped, double dump_duration);
+vy_regulator_dump_complete(struct vy_regulator *regulator, size_t mem_dumped,
+ double dump_duration);
/**
* Set memory limit and update the dump watermark accordingly.
diff --git a/src/box/vy_run.c b/src/box/vy_run.c
index b9822dc..4b6151d 100644
--- a/src/box/vy_run.c
+++ b/src/box/vy_run.c
@@ -45,18 +45,15 @@
#include "xrow.h"
#include "vy_history.h"
-static const uint64_t vy_page_info_key_map = (1 << VY_PAGE_INFO_OFFSET) |
- (1 << VY_PAGE_INFO_SIZE) |
- (1 << VY_PAGE_INFO_UNPACKED_SIZE) |
- (1 << VY_PAGE_INFO_ROW_COUNT) |
- (1 << VY_PAGE_INFO_MIN_KEY) |
- (1 << VY_PAGE_INFO_ROW_INDEX_OFFSET);
-
-static const uint64_t vy_run_info_key_map = (1 << VY_RUN_INFO_MIN_KEY) |
- (1 << VY_RUN_INFO_MAX_KEY) |
- (1 << VY_RUN_INFO_MIN_LSN) |
- (1 << VY_RUN_INFO_MAX_LSN) |
- (1 << VY_RUN_INFO_PAGE_COUNT);
+static const uint64_t vy_page_info_key_map =
+ (1 << VY_PAGE_INFO_OFFSET) | (1 << VY_PAGE_INFO_SIZE) |
+ (1 << VY_PAGE_INFO_UNPACKED_SIZE) | (1 << VY_PAGE_INFO_ROW_COUNT) |
+ (1 << VY_PAGE_INFO_MIN_KEY) | (1 << VY_PAGE_INFO_ROW_INDEX_OFFSET);
+
+static const uint64_t vy_run_info_key_map =
+ (1 << VY_RUN_INFO_MIN_KEY) | (1 << VY_RUN_INFO_MAX_KEY) |
+ (1 << VY_RUN_INFO_MIN_LSN) | (1 << VY_RUN_INFO_MAX_LSN) |
+ (1 << VY_RUN_INFO_PAGE_COUNT);
/** xlog meta type for .run files */
#define XLOG_META_TYPE_RUN "RUN"
@@ -65,10 +62,10 @@ static const uint64_t vy_run_info_key_map = (1 << VY_RUN_INFO_MIN_KEY) |
#define XLOG_META_TYPE_INDEX "INDEX"
const char *vy_file_suffix[] = {
- "index", /* VY_FILE_INDEX */
- "index" inprogress_suffix, /* VY_FILE_INDEX_INPROGRESS */
- "run", /* VY_FILE_RUN */
- "run" inprogress_suffix, /* VY_FILE_RUN_INPROGRESS */
+ "index", /* VY_FILE_INDEX */
+ "index" inprogress_suffix, /* VY_FILE_INDEX_INPROGRESS */
+ "run", /* VY_FILE_RUN */
+ "run" inprogress_suffix, /* VY_FILE_RUN_INPROGRESS */
};
/* sync run and index files very 16 MB */
@@ -127,8 +124,8 @@ vy_run_reader_f(va_list ap)
struct cbus_endpoint endpoint;
cpipe_create(&reader->tx_pipe, "tx_prio");
- cbus_endpoint_create(&endpoint, cord_name(cord()),
- fiber_schedule_cb, fiber());
+ cbus_endpoint_create(&endpoint, cord_name(cord()), fiber_schedule_cb,
+ fiber());
cbus_loop(&endpoint);
cbus_endpoint_destroy(&endpoint, cbus_process);
cpipe_destroy(&reader->tx_pipe);
@@ -142,8 +139,8 @@ vy_run_env_start_readers(struct vy_run_env *env)
assert(env->reader_pool == NULL);
assert(env->reader_pool_size > 0);
- env->reader_pool = calloc(env->reader_pool_size,
- sizeof(*env->reader_pool));
+ env->reader_pool =
+ calloc(env->reader_pool_size, sizeof(*env->reader_pool));
if (env->reader_pool == NULL)
panic("failed to allocate vinyl reader thread pool");
@@ -152,8 +149,8 @@ vy_run_env_start_readers(struct vy_run_env *env)
char name[FIBER_NAME_MAX];
snprintf(name, sizeof(name), "vinyl.reader.%d", i);
- if (cord_costart(&reader->cord, name,
- vy_run_reader_f, reader) != 0)
+ if (cord_costart(&reader->cord, name, vy_run_reader_f,
+ reader) != 0)
panic("failed to start vinyl reader thread");
cpipe_create(&reader->reader_pipe, name);
}
@@ -226,8 +223,8 @@ vy_run_env_coio_call(struct vy_run_env *env, struct cbus_call_msg *msg,
/* Post the task to the reader thread. */
bool cancellable = fiber_set_cancellable(false);
- int rc = cbus_call(&reader->reader_pipe, &reader->tx_pipe,
- msg, func, NULL, TIMEOUT_INFINITY);
+ int rc = cbus_call(&reader->reader_pipe, &reader->tx_pipe, msg, func,
+ NULL, TIMEOUT_INFINITY);
fiber_set_cancellable(cancellable);
if (rc != 0)
return -1;
@@ -353,8 +350,8 @@ vy_page_index_find_page(struct vy_run *run, struct vy_entry key,
{
if (itype == ITER_EQ)
itype = ITER_GE; /* One day it'll become obsolete */
- assert(itype == ITER_GE || itype == ITER_GT ||
- itype == ITER_LE || itype == ITER_LT);
+ assert(itype == ITER_GE || itype == ITER_GT || itype == ITER_LE ||
+ itype == ITER_LT);
int dir = iterator_direction(itype);
*equal_key = false;
@@ -388,9 +385,8 @@ vy_page_index_find_page(struct vy_run *run, struct vy_entry key,
do {
int32_t mid = range[0] + (range[1] - range[0]) / 2;
struct vy_page_info *info = vy_run_page_info(run, mid);
- int cmp = vy_entry_compare_with_raw_key(key, info->min_key,
- info->min_key_hint,
- cmp_def);
+ int cmp = vy_entry_compare_with_raw_key(
+ key, info->min_key, info->min_key_hint, cmp_def);
if (is_lower_bound)
range[cmp <= 0] = mid;
else
@@ -417,8 +413,8 @@ vy_slice_new(int64_t id, struct vy_run *run, struct vy_entry begin,
{
struct vy_slice *slice = malloc(sizeof(*slice));
if (slice == NULL) {
- diag_set(OutOfMemory, sizeof(*slice),
- "malloc", "struct vy_slice");
+ diag_set(OutOfMemory, sizeof(*slice), "malloc",
+ "struct vy_slice");
return NULL;
}
memset(slice, 0, sizeof(*slice));
@@ -444,17 +440,15 @@ vy_slice_new(int64_t id, struct vy_run *run, struct vy_entry begin,
if (slice->begin.stmt == NULL) {
slice->first_page_no = 0;
} else {
- slice->first_page_no =
- vy_page_index_find_page(run, slice->begin, cmp_def,
- ITER_GE, &unused);
+ slice->first_page_no = vy_page_index_find_page(
+ run, slice->begin, cmp_def, ITER_GE, &unused);
assert(slice->first_page_no < run->info.page_count);
}
if (slice->end.stmt == NULL) {
slice->last_page_no = run->info.page_count - 1;
} else {
- slice->last_page_no =
- vy_page_index_find_page(run, slice->end, cmp_def,
- ITER_LT, &unused);
+ slice->last_page_no = vy_page_index_find_page(
+ run, slice->end, cmp_def, ITER_LT, &unused);
if (slice->last_page_no == run->info.page_count) {
/* It's an empty slice */
slice->first_page_no = 0;
@@ -467,10 +461,10 @@ vy_slice_new(int64_t id, struct vy_run *run, struct vy_entry begin,
uint32_t run_pages = run->info.page_count;
uint32_t slice_pages = slice->last_page_no - slice->first_page_no + 1;
slice->count.pages = slice_pages;
- slice->count.rows = DIV_ROUND_UP(run->count.rows *
- slice_pages, run_pages);
- slice->count.bytes = DIV_ROUND_UP(run->count.bytes *
- slice_pages, run_pages);
+ slice->count.rows =
+ DIV_ROUND_UP(run->count.rows * slice_pages, run_pages);
+ slice->count.bytes =
+ DIV_ROUND_UP(run->count.bytes * slice_pages, run_pages);
slice->count.bytes_compressed = DIV_ROUND_UP(
run->count.bytes_compressed * slice_pages, run_pages);
return slice;
@@ -509,14 +503,14 @@ vy_slice_cut(struct vy_slice *slice, int64_t id, struct vy_entry begin,
/* begin = MAX(begin, slice->begin) */
if (slice->begin.stmt != NULL &&
- (begin.stmt == NULL || vy_entry_compare(begin, slice->begin,
- cmp_def) < 0))
+ (begin.stmt == NULL ||
+ vy_entry_compare(begin, slice->begin, cmp_def) < 0))
begin = slice->begin;
/* end = MIN(end, slice->end) */
if (slice->end.stmt != NULL &&
- (end.stmt == NULL || vy_entry_compare(end, slice->end,
- cmp_def) > 0))
+ (end.stmt == NULL ||
+ vy_entry_compare(end, slice->end, cmp_def) > 0))
end = slice->end;
*result = vy_slice_new(id, slice->run, begin, end, cmp_def);
@@ -569,8 +563,8 @@ vy_page_info_decode(struct vy_page_info *page, const struct xrow_header *xrow,
if (page->min_key == NULL)
return -1;
part_count = mp_decode_array(&key_beg);
- page->min_key_hint = key_hint(key_beg, part_count,
- cmp_def);
+ page->min_key_hint =
+ key_hint(key_beg, part_count, cmp_def);
break;
case VY_PAGE_INFO_UNPACKED_SIZE:
page->unpacked_size = mp_decode_uint(&pos);
@@ -633,8 +627,7 @@ vy_stmt_stat_decode(struct vy_stmt_stat *stat, const char **data)
* @retval -1 error (check diag)
*/
int
-vy_run_info_decode(struct vy_run_info *run_info,
- const struct xrow_header *xrow,
+vy_run_info_decode(struct vy_run_info *run_info, const struct xrow_header *xrow,
const char *filename)
{
assert(xrow->type == VY_INDEX_RUN_INFO);
@@ -707,8 +700,7 @@ vy_page_new(const struct vy_page_info *page_info)
{
struct vy_page *page = malloc(sizeof(*page));
if (page == NULL) {
- diag_set(OutOfMemory, sizeof(*page),
- "load_page", "page cache");
+ diag_set(OutOfMemory, sizeof(*page), "load_page", "page cache");
return NULL;
}
page->unpacked_size = page_info->unpacked_size;
@@ -723,8 +715,8 @@ vy_page_new(const struct vy_page_info *page_info)
page->data = (char *)malloc(page_info->unpacked_size);
if (page->data == NULL) {
- diag_set(OutOfMemory, page_info->unpacked_size,
- "malloc", "page->data");
+ diag_set(OutOfMemory, page_info->unpacked_size, "malloc",
+ "page->data");
free(page->row_index);
free(page);
return NULL;
@@ -748,14 +740,14 @@ vy_page_delete(struct vy_page *page)
}
static int
-vy_page_xrow(struct vy_page *page, uint32_t stmt_no,
- struct xrow_header *xrow)
+vy_page_xrow(struct vy_page *page, uint32_t stmt_no, struct xrow_header *xrow)
{
assert(stmt_no < page->row_count);
const char *data = page->data + page->row_index[stmt_no];
- const char *data_end = stmt_no + 1 < page->row_count ?
- page->data + page->row_index[stmt_no + 1] :
- page->data + page->unpacked_size;
+ const char *data_end =
+ stmt_no + 1 < page->row_count ?
+ page->data + page->row_index[stmt_no + 1] :
+ page->data + page->unpacked_size;
return xrow_header_decode(xrow, &data, data_end, false);
}
@@ -772,8 +764,8 @@ vy_page_xrow(struct vy_page *page, uint32_t stmt_no,
* @retval NULL Memory error.
*/
static struct vy_entry
-vy_page_stmt(struct vy_page *page, uint32_t stmt_no,
- struct key_def *cmp_def, struct tuple_format *format)
+vy_page_stmt(struct vy_page *page, uint32_t stmt_no, struct key_def *cmp_def,
+ struct tuple_format *format)
{
struct xrow_header xrow;
if (vy_page_xrow(page, stmt_no, &xrow) != 0)
@@ -802,12 +794,12 @@ vy_page_find_key(struct vy_page *page, struct vy_entry key,
uint32_t end = page->row_count;
*equal_key = false;
/* for upper bound we change zero comparison result to -1 */
- int zero_cmp = (iterator_type == ITER_GT ||
- iterator_type == ITER_LE ? -1 : 0);
+ int zero_cmp =
+ (iterator_type == ITER_GT || iterator_type == ITER_LE ? -1 : 0);
while (beg != end) {
uint32_t mid = beg + (end - beg) / 2;
- struct vy_entry fnd_key = vy_page_stmt(page, mid, cmp_def,
- format);
+ struct vy_entry fnd_key =
+ vy_page_stmt(page, mid, cmp_def, format);
if (fnd_key.stmt == NULL)
return end;
int cmp = vy_entry_compare(fnd_key, key, cmp_def);
@@ -898,11 +890,12 @@ vy_page_read(struct vy_page *page, const struct vy_page_info *page_info,
diag_set(OutOfMemory, page_info->size, "region gc", "page");
return -1;
}
- ssize_t readen = fio_pread(run->fd, data, page_info->size,
- page_info->offset);
+ ssize_t readen =
+ fio_pread(run->fd, data, page_info->size, page_info->offset);
ERROR_INJECT(ERRINJ_VYRUN_DATA_READ, {
readen = -1;
- errno = EIO;});
+ errno = EIO;
+ });
if (readen < 0) {
diag_set(SystemError, "failed to read from file");
goto error;
@@ -944,7 +937,8 @@ vy_page_read(struct vy_page *page, const struct vy_page_info *page_info,
region_truncate(&fiber()->gc, region_svp);
ERROR_INJECT(ERRINJ_VY_READ_PAGE, {
diag_set(ClientError, ER_INJECTION, "vinyl page read");
- return -1;});
+ return -1;
+ });
return 0;
error:
region_truncate(&fiber()->gc, region_svp);
@@ -987,10 +981,9 @@ vy_page_read_cb(struct cbus_call_msg *base)
if (vy_page_read(task->page, task->page_info, task->run, zdctx) != 0)
return -1;
if (task->key.stmt != NULL) {
- task->pos_in_page = vy_page_find_key(task->page, task->key,
- task->cmp_def, task->format,
- task->iterator_type,
- &task->equal_found);
+ task->pos_in_page = vy_page_find_key(
+ task->page, task->key, task->cmp_def, task->format,
+ task->iterator_type, &task->equal_found);
}
return 0;
}
@@ -1013,8 +1006,7 @@ vy_run_iterator_load_page(struct vy_run_iterator *itr, uint32_t page_no,
/* Check cache */
struct vy_page *page = NULL;
- if (itr->curr_page != NULL &&
- itr->curr_page->page_no == page_no) {
+ if (itr->curr_page != NULL && itr->curr_page->page_no == page_no) {
page = itr->curr_page;
} else if (itr->prev_page != NULL &&
itr->prev_page->page_no == page_no) {
@@ -1024,7 +1016,8 @@ vy_run_iterator_load_page(struct vy_run_iterator *itr, uint32_t page_no,
if (page != NULL) {
if (key.stmt != NULL)
*pos_in_page = vy_page_find_key(page, key, itr->cmp_def,
- itr->format, iterator_type,
+ itr->format,
+ iterator_type,
equal_found);
*result = page;
return 0;
@@ -1039,8 +1032,8 @@ vy_run_iterator_load_page(struct vy_run_iterator *itr, uint32_t page_no,
/* Read page data from the disk */
struct vy_page_read_task *task = mempool_alloc(&env->read_task_pool);
if (task == NULL) {
- diag_set(OutOfMemory, sizeof(*task),
- "mempool", "vy_page_read_task");
+ diag_set(OutOfMemory, sizeof(*task), "mempool",
+ "vy_page_read_task");
vy_page_delete(page);
return -1;
}
@@ -1092,8 +1085,7 @@ vy_run_iterator_load_page(struct vy_run_iterator *itr, uint32_t page_no,
*/
static NODISCARD int
vy_run_iterator_read(struct vy_run_iterator *itr,
- struct vy_run_iterator_pos pos,
- struct vy_entry *ret)
+ struct vy_run_iterator_pos pos, struct vy_entry *ret)
{
struct vy_page *page;
bool equal_found;
@@ -1125,9 +1117,8 @@ vy_run_iterator_search(struct vy_run_iterator *itr,
enum iterator_type iterator_type, struct vy_entry key,
struct vy_run_iterator_pos *pos, bool *equal_key)
{
- pos->page_no = vy_page_index_find_page(itr->slice->run, key,
- itr->cmp_def, iterator_type,
- equal_key);
+ pos->page_no = vy_page_index_find_page(
+ itr->slice->run, key, itr->cmp_def, iterator_type, equal_key);
if (pos->page_no == itr->slice->run->info.page_count)
return 1;
bool equal_in_page;
@@ -1287,7 +1278,7 @@ vy_run_iterator_do_seek(struct vy_run_iterator *itr,
enum iterator_type iterator_type, struct vy_entry key)
{
struct vy_run *run = itr->slice->run;
- struct vy_run_iterator_pos end_pos = {run->info.page_count, 0};
+ struct vy_run_iterator_pos end_pos = { run->info.page_count, 0 };
bool equal_found = false;
if (!vy_stmt_is_empty_key(key.stmt)) {
int rc = vy_run_iterator_search(itr, iterator_type, key,
@@ -1372,7 +1363,8 @@ vy_run_iterator_seek(struct vy_run_iterator *itr, struct vy_entry last,
if (iterator_type == ITER_EQ)
check_eq = true;
iterator_type = iterator_direction(iterator_type) > 0 ?
- ITER_GT : ITER_LT;
+ ITER_GT :
+ ITER_LT;
key = last;
}
@@ -1442,8 +1434,8 @@ vy_run_iterator_seek(struct vy_run_iterator *itr, struct vy_entry last,
return -1;
/* Check EQ constraint if necessary. */
- if (check_eq && vy_entry_compare(itr->curr, itr->key,
- itr->cmp_def) != 0)
+ if (check_eq &&
+ vy_entry_compare(itr->curr, itr->key, itr->cmp_def) != 0)
goto not_found;
/* Skip statements invisible from the iterator read view. */
@@ -1462,11 +1454,10 @@ not_found:
void
vy_run_iterator_open(struct vy_run_iterator *itr,
- struct vy_run_iterator_stat *stat,
- struct vy_slice *slice, enum iterator_type iterator_type,
- struct vy_entry key, const struct vy_read_view **rv,
- struct key_def *cmp_def, struct key_def *key_def,
- struct tuple_format *format)
+ struct vy_run_iterator_stat *stat, struct vy_slice *slice,
+ enum iterator_type iterator_type, struct vy_entry key,
+ const struct vy_read_view **rv, struct key_def *cmp_def,
+ struct key_def *key_def, struct tuple_format *format)
{
itr->stat = stat;
itr->cmp_def = cmp_def;
@@ -1581,8 +1572,7 @@ next:
}
NODISCARD int
-vy_run_iterator_next(struct vy_run_iterator *itr,
- struct vy_history *history)
+vy_run_iterator_next(struct vy_run_iterator *itr, struct vy_history *history)
{
vy_history_cleanup(history);
struct vy_entry entry;
@@ -1610,7 +1600,8 @@ vy_run_iterator_skip(struct vy_run_iterator *itr, struct vy_entry last,
if (itr->search_started &&
(itr->curr.stmt == NULL || last.stmt == NULL ||
iterator_direction(itr->iterator_type) *
- vy_entry_compare(itr->curr, last, itr->cmp_def) > 0))
+ vy_entry_compare(itr->curr, last, itr->cmp_def) >
+ 0))
return 0;
vy_history_cleanup(history);
@@ -1656,12 +1647,12 @@ vy_run_acct_page(struct vy_run *run, struct vy_page_info *page)
}
int
-vy_run_recover(struct vy_run *run, const char *dir,
- uint32_t space_id, uint32_t iid, struct key_def *cmp_def)
+vy_run_recover(struct vy_run *run, const char *dir, uint32_t space_id,
+ uint32_t iid, struct key_def *cmp_def)
{
char path[PATH_MAX];
- vy_run_snprint_path(path, sizeof(path), dir,
- space_id, iid, run->id, VY_FILE_INDEX);
+ vy_run_snprint_path(path, sizeof(path), dir, space_id, iid, run->id,
+ VY_FILE_INDEX);
struct xlog_cursor cursor;
ERROR_INJECT_COUNTDOWN(ERRINJ_VY_RUN_OPEN, {
@@ -1685,15 +1676,15 @@ vy_run_recover(struct vy_run *run, const char *dir,
if (rc != 0) {
if (rc > 0)
- diag_set(ClientError, ER_INVALID_INDEX_FILE,
- path, "Unexpected end of file");
+ diag_set(ClientError, ER_INVALID_INDEX_FILE, path,
+ "Unexpected end of file");
goto fail_close;
}
rc = xlog_cursor_next_row(&cursor, &xrow);
if (rc != 0) {
if (rc > 0)
- diag_set(ClientError, ER_INVALID_INDEX_FILE,
- path, "Unexpected end of file");
+ diag_set(ClientError, ER_INVALID_INDEX_FILE, path,
+ "Unexpected end of file");
goto fail_close;
}
@@ -1708,8 +1699,8 @@ vy_run_recover(struct vy_run *run, const char *dir,
goto fail_close;
/* Allocate buffer for page info. */
- run->page_info = calloc(run->info.page_count,
- sizeof(struct vy_page_info));
+ run->page_info =
+ calloc(run->info.page_count, sizeof(struct vy_page_info));
if (run->page_info == NULL) {
diag_set(OutOfMemory,
run->info.page_count * sizeof(struct vy_page_info),
@@ -1756,14 +1747,14 @@ vy_run_recover(struct vy_run *run, const char *dir,
xlog_cursor_close(&cursor, false);
/* Prepare data file for reading. */
- vy_run_snprint_path(path, sizeof(path), dir,
- space_id, iid, run->id, VY_FILE_RUN);
+ vy_run_snprint_path(path, sizeof(path), dir, space_id, iid, run->id,
+ VY_FILE_RUN);
if (xlog_cursor_open(&cursor, path))
goto fail;
meta = &cursor.meta;
if (strcmp(meta->filetype, XLOG_META_TYPE_RUN) != 0) {
- diag_set(ClientError, ER_INVALID_XLOG_TYPE,
- XLOG_META_TYPE_RUN, meta->filetype);
+ diag_set(ClientError, ER_INVALID_XLOG_TYPE, XLOG_META_TYPE_RUN,
+ meta->filetype);
goto fail_close;
}
run->fd = cursor.fd;
@@ -1786,11 +1777,12 @@ vy_run_dump_stmt(struct vy_entry entry, struct xlog *data_xlog,
bool is_primary)
{
struct xrow_header xrow;
- int rc = (is_primary ?
- vy_stmt_encode_primary(entry.stmt, key_def, 0, &xrow) :
- vy_stmt_encode_secondary(entry.stmt, key_def,
- vy_entry_multikey_idx(entry, key_def),
- &xrow));
+ int rc =
+ (is_primary ?
+ vy_stmt_encode_primary(entry.stmt, key_def, 0, &xrow) :
+ vy_stmt_encode_secondary(
+ entry.stmt, key_def,
+ vy_entry_multikey_idx(entry, key_def), &xrow));
if (rc != 0)
return -1;
@@ -1819,8 +1811,7 @@ vy_row_index_encode(const uint32_t *row_index, uint32_t row_count,
memset(xrow, 0, sizeof(*xrow));
xrow->type = VY_RUN_ROW_INDEX;
- size_t size = mp_sizeof_map(1) +
- mp_sizeof_uint(VY_ROW_INDEX_DATA) +
+ size_t size = mp_sizeof_map(1) + mp_sizeof_uint(VY_ROW_INDEX_DATA) +
mp_sizeof_bin(sizeof(uint32_t) * row_count);
char *pos = region_alloc(&fiber()->gc, size);
if (pos == NULL) {
@@ -1845,13 +1836,12 @@ vy_row_index_encode(const uint32_t *row_index, uint32_t row_count,
static inline int
vy_run_alloc_page_info(struct vy_run *run, uint32_t *page_info_capacity)
{
- uint32_t cap = *page_info_capacity > 0 ?
- *page_info_capacity * 2 : 16;
- struct vy_page_info *page_info = realloc(run->page_info,
- cap * sizeof(*page_info));
+ uint32_t cap = *page_info_capacity > 0 ? *page_info_capacity * 2 : 16;
+ struct vy_page_info *page_info =
+ realloc(run->page_info, cap * sizeof(*page_info));
if (page_info == NULL) {
- diag_set(OutOfMemory, cap * sizeof(*page_info),
- "realloc", "struct vy_page_info");
+ diag_set(OutOfMemory, cap * sizeof(*page_info), "realloc",
+ "struct vy_page_info");
return -1;
}
run->page_info = page_info;
@@ -1886,15 +1876,13 @@ vy_page_info_encode(const struct vy_page_info *page_info,
/* calc tuple size */
uint32_t size;
/* 3 items: page offset, size, and map */
- size = mp_sizeof_map(6) +
- mp_sizeof_uint(VY_PAGE_INFO_OFFSET) +
+ size = mp_sizeof_map(6) + mp_sizeof_uint(VY_PAGE_INFO_OFFSET) +
mp_sizeof_uint(page_info->offset) +
mp_sizeof_uint(VY_PAGE_INFO_SIZE) +
mp_sizeof_uint(page_info->size) +
mp_sizeof_uint(VY_PAGE_INFO_ROW_COUNT) +
mp_sizeof_uint(page_info->row_count) +
- mp_sizeof_uint(VY_PAGE_INFO_MIN_KEY) +
- min_key_size +
+ mp_sizeof_uint(VY_PAGE_INFO_MIN_KEY) + min_key_size +
mp_sizeof_uint(VY_PAGE_INFO_UNPACKED_SIZE) +
mp_sizeof_uint(page_info->unpacked_size) +
mp_sizeof_uint(VY_PAGE_INFO_ROW_INDEX_OFFSET) +
@@ -1938,15 +1926,11 @@ vy_page_info_encode(const struct vy_page_info *page_info,
static size_t
vy_stmt_stat_sizeof(const struct vy_stmt_stat *stat)
{
- return mp_sizeof_map(4) +
- mp_sizeof_uint(IPROTO_INSERT) +
- mp_sizeof_uint(IPROTO_REPLACE) +
- mp_sizeof_uint(IPROTO_DELETE) +
- mp_sizeof_uint(IPROTO_UPSERT) +
- mp_sizeof_uint(stat->inserts) +
- mp_sizeof_uint(stat->replaces) +
- mp_sizeof_uint(stat->deletes) +
- mp_sizeof_uint(stat->upserts);
+ return mp_sizeof_map(4) + mp_sizeof_uint(IPROTO_INSERT) +
+ mp_sizeof_uint(IPROTO_REPLACE) + mp_sizeof_uint(IPROTO_DELETE) +
+ mp_sizeof_uint(IPROTO_UPSERT) + mp_sizeof_uint(stat->inserts) +
+ mp_sizeof_uint(stat->replaces) + mp_sizeof_uint(stat->deletes) +
+ mp_sizeof_uint(stat->upserts);
}
/** Encode statement statistics to @buf and return advanced @buf. */
@@ -1976,8 +1960,7 @@ vy_stmt_stat_encode(const struct vy_stmt_stat *stat, char *buf)
* @retval -1 on error, check diag
*/
static int
-vy_run_info_encode(const struct vy_run_info *run_info,
- struct xrow_header *xrow)
+vy_run_info_encode(const struct vy_run_info *run_info, struct xrow_header *xrow)
{
const char *tmp;
tmp = run_info->min_key;
@@ -2045,19 +2028,19 @@ vy_run_info_encode(const struct vy_run_info *run_info,
* Write run index to file.
*/
static int
-vy_run_write_index(struct vy_run *run, const char *dirpath,
- uint32_t space_id, uint32_t iid)
+vy_run_write_index(struct vy_run *run, const char *dirpath, uint32_t space_id,
+ uint32_t iid)
{
char path[PATH_MAX];
- vy_run_snprint_path(path, sizeof(path), dirpath,
- space_id, iid, run->id, VY_FILE_INDEX);
+ vy_run_snprint_path(path, sizeof(path), dirpath, space_id, iid, run->id,
+ VY_FILE_INDEX);
say_info("writing `%s'", path);
struct xlog index_xlog;
struct xlog_meta meta;
- xlog_meta_create(&meta, XLOG_META_TYPE_INDEX, &INSTANCE_UUID,
- NULL, NULL);
+ xlog_meta_create(&meta, XLOG_META_TYPE_INDEX, &INSTANCE_UUID, NULL,
+ NULL);
struct xlog_opts opts = xlog_opts_default;
opts.rate_limit = run->env->snap_io_rate_limit;
opts.sync_interval = VY_RUN_SYNC_INTERVAL;
@@ -2092,8 +2075,7 @@ vy_run_write_index(struct vy_run *run, const char *dirpath,
return -1;
});
- if (xlog_flush(&index_xlog) < 0 ||
- xlog_rename(&index_xlog) < 0)
+ if (xlog_flush(&index_xlog) < 0 || xlog_rename(&index_xlog) < 0)
goto fail;
xlog_close(&index_xlog, false);
@@ -2154,8 +2136,7 @@ vy_run_writer_create_xlog(struct vy_run_writer *writer)
VY_FILE_RUN);
say_info("writing `%s'", path);
struct xlog_meta meta;
- xlog_meta_create(&meta, XLOG_META_TYPE_RUN, &INSTANCE_UUID,
- NULL, NULL);
+ xlog_meta_create(&meta, XLOG_META_TYPE_RUN, &INSTANCE_UUID, NULL, NULL);
struct xlog_opts opts = xlog_opts_default;
opts.rate_limit = writer->run->env->snap_io_rate_limit;
opts.sync_interval = VY_RUN_SYNC_INTERVAL;
@@ -2181,12 +2162,13 @@ vy_run_writer_start_page(struct vy_run_writer *writer,
if (run->info.page_count >= writer->page_info_capacity &&
vy_run_alloc_page_info(run, &writer->page_info_capacity) != 0)
return -1;
- const char *key = vy_stmt_is_key(first_entry.stmt) ?
- tuple_data(first_entry.stmt) :
- tuple_extract_key(first_entry.stmt, writer->cmp_def,
- vy_entry_multikey_idx(first_entry,
- writer->cmp_def),
- NULL);
+ const char *key =
+ vy_stmt_is_key(first_entry.stmt) ?
+ tuple_data(first_entry.stmt) :
+ tuple_extract_key(first_entry.stmt, writer->cmp_def,
+ vy_entry_multikey_idx(
+ first_entry, writer->cmp_def),
+ NULL);
if (key == NULL)
return -1;
if (run->info.page_count == 0) {
@@ -2196,8 +2178,8 @@ vy_run_writer_start_page(struct vy_run_writer *writer,
return -1;
}
struct vy_page_info *page = run->page_info + run->info.page_count;
- if (vy_page_info_create(page, writer->data_xlog.offset,
- key, writer->cmp_def) != 0)
+ if (vy_page_info_create(page, writer->data_xlog.offset, key,
+ writer->cmp_def) != 0)
return -1;
xlog_tx_begin(&writer->data_xlog);
return 0;
@@ -2230,8 +2212,8 @@ vy_run_writer_write_to_page(struct vy_run_writer *writer, struct vy_entry entry)
return -1;
}
*offset = page->unpacked_size;
- if (vy_run_dump_stmt(entry, &writer->data_xlog, page,
- writer->cmp_def, writer->iid == 0) != 0)
+ if (vy_run_dump_stmt(entry, &writer->data_xlog, page, writer->cmp_def,
+ writer->iid == 0) != 0)
return -1;
int64_t lsn = vy_stmt_lsn(entry.stmt);
run->info.min_lsn = MIN(run->info.min_lsn, lsn);
@@ -2336,12 +2318,14 @@ vy_run_writer_commit(struct vy_run_writer *writer)
}
assert(writer->last.stmt != NULL);
- const char *key = vy_stmt_is_key(writer->last.stmt) ?
- tuple_data(writer->last.stmt) :
- tuple_extract_key(writer->last.stmt, writer->cmp_def,
- vy_entry_multikey_idx(writer->last,
- writer->cmp_def),
- NULL);
+ const char *key =
+ vy_stmt_is_key(writer->last.stmt) ?
+ tuple_data(writer->last.stmt) :
+ tuple_extract_key(
+ writer->last.stmt, writer->cmp_def,
+ vy_entry_multikey_idx(writer->last,
+ writer->cmp_def),
+ NULL);
if (key == NULL)
goto out;
@@ -2361,13 +2345,13 @@ vy_run_writer_commit(struct vy_run_writer *writer)
goto out;
if (writer->bloom != NULL) {
- run->info.bloom = tuple_bloom_new(writer->bloom,
- writer->bloom_fpr);
+ run->info.bloom =
+ tuple_bloom_new(writer->bloom, writer->bloom_fpr);
if (run->info.bloom == NULL)
goto out;
}
- if (vy_run_write_index(run, writer->dirpath,
- writer->space_id, writer->iid) != 0)
+ if (vy_run_write_index(run, writer->dirpath, writer->space_id,
+ writer->iid) != 0)
goto out;
run->fd = writer->data_xlog.fd;
@@ -2385,10 +2369,10 @@ vy_run_writer_abort(struct vy_run_writer *writer)
}
int
-vy_run_rebuild_index(struct vy_run *run, const char *dir,
- uint32_t space_id, uint32_t iid,
- struct key_def *cmp_def, struct key_def *key_def,
- struct tuple_format *format, const struct index_opts *opts)
+vy_run_rebuild_index(struct vy_run *run, const char *dir, uint32_t space_id,
+ uint32_t iid, struct key_def *cmp_def,
+ struct key_def *key_def, struct tuple_format *format,
+ const struct index_opts *opts)
{
assert(run->info.bloom == NULL);
assert(run->page_info == NULL);
@@ -2397,8 +2381,8 @@ vy_run_rebuild_index(struct vy_run *run, const char *dir,
struct xlog_cursor cursor;
char path[PATH_MAX];
- vy_run_snprint_path(path, sizeof(path), dir,
- space_id, iid, run->id, VY_FILE_RUN);
+ vy_run_snprint_path(path, sizeof(path), dir, space_id, iid, run->id,
+ VY_FILE_RUN);
say_info("rebuilding index for `%s'", path);
if (xlog_cursor_open(&cursor, path))
@@ -2445,16 +2429,17 @@ vy_run_rebuild_index(struct vy_run *run, const char *dir,
if (tuple == NULL)
goto close_err;
if (bloom_builder != NULL) {
- struct vy_entry entry = {tuple, HINT_NONE};
+ struct vy_entry entry = { tuple, HINT_NONE };
if (vy_bloom_builder_add(bloom_builder, entry,
key_def) != 0) {
tuple_unref(tuple);
goto close_err;
}
}
- key = vy_stmt_is_key(tuple) ? tuple_data(tuple) :
- tuple_extract_key(tuple, cmp_def,
- MULTIKEY_NONE, NULL);
+ key = vy_stmt_is_key(tuple) ?
+ tuple_data(tuple) :
+ tuple_extract_key(tuple, cmp_def,
+ MULTIKEY_NONE, NULL);
if (prev_tuple != NULL)
tuple_unref(prev_tuple);
prev_tuple = tuple;
@@ -2478,8 +2463,8 @@ vy_run_rebuild_index(struct vy_run *run, const char *dir,
}
struct vy_page_info *info;
info = run->page_info + run->info.page_count;
- if (vy_page_info_create(info, page_offset,
- page_min_key, cmp_def) != 0)
+ if (vy_page_info_create(info, page_offset, page_min_key,
+ cmp_def) != 0)
goto close_err;
info->row_count = page_row_count;
info->size = next_page_offset - page_offset;
@@ -2509,8 +2494,8 @@ vy_run_rebuild_index(struct vy_run *run, const char *dir,
xlog_cursor_close(&cursor, true);
if (bloom_builder != NULL) {
- run->info.bloom = tuple_bloom_new(bloom_builder,
- opts->bloom_fpr);
+ run->info.bloom =
+ tuple_bloom_new(bloom_builder, opts->bloom_fpr);
if (run->info.bloom == NULL)
goto close_err;
tuple_bloom_builder_delete(bloom_builder);
@@ -2518,11 +2503,10 @@ vy_run_rebuild_index(struct vy_run *run, const char *dir,
}
/* New run index is ready for write, unlink old file if exists */
- vy_run_snprint_path(path, sizeof(path), dir,
- space_id, iid, run->id, VY_FILE_INDEX);
+ vy_run_snprint_path(path, sizeof(path), dir, space_id, iid, run->id,
+ VY_FILE_INDEX);
if (unlink(path) < 0 && errno != ENOENT) {
- diag_set(SystemError, "failed to unlink file '%s'",
- path);
+ diag_set(SystemError, "failed to unlink file '%s'", path);
goto close_err;
}
if (vy_run_write_index(run, dir, space_id, iid) != 0)
@@ -2543,17 +2527,19 @@ close_err:
}
int
-vy_run_remove_files(const char *dir, uint32_t space_id,
- uint32_t iid, int64_t run_id)
+vy_run_remove_files(const char *dir, uint32_t space_id, uint32_t iid,
+ int64_t run_id)
{
- ERROR_INJECT(ERRINJ_VY_GC,
- {say_error("error injection: vinyl run %lld not deleted",
- (long long)run_id); return -1;});
+ ERROR_INJECT(ERRINJ_VY_GC, {
+ say_error("error injection: vinyl run %lld not deleted",
+ (long long)run_id);
+ return -1;
+ });
int ret = 0;
char path[PATH_MAX];
for (int type = 0; type < vy_file_MAX; type++) {
- vy_run_snprint_path(path, sizeof(path), dir,
- space_id, iid, run_id, type);
+ vy_run_snprint_path(path, sizeof(path), dir, space_id, iid,
+ run_id, type);
if (coio_unlink(path) < 0) {
if (errno != ENOENT) {
say_syserror("error while removing %s", path);
@@ -2619,8 +2605,7 @@ vy_slice_stream_search(struct vy_stmt_stream *virt_stream)
bool unused;
stream->pos_in_page = vy_page_find_key(stream->page,
stream->slice->begin,
- stream->cmp_def,
- stream->format,
+ stream->cmp_def, stream->format,
ITER_GE, &unused);
if (stream->pos_in_page == stream->page->row_count) {
@@ -2679,8 +2664,8 @@ vy_slice_stream_next(struct vy_stmt_stream *virt_stream, struct vy_entry *ret)
stream->pos_in_page++;
/* Check whether the position is out of page */
- struct vy_page_info *page_info = vy_run_page_info(stream->slice->run,
- stream->page_no);
+ struct vy_page_info *page_info =
+ vy_run_page_info(stream->slice->run, stream->page_no);
if (stream->pos_in_page >= page_info->row_count) {
/**
* Out of page. Free page, move the position to the next page
diff --git a/src/box/vy_run.h b/src/box/vy_run.h
index 9618d85..1164faf 100644
--- a/src/box/vy_run.h
+++ b/src/box/vy_run.h
@@ -380,8 +380,8 @@ vy_run_unref(struct vy_run *run)
* @return - 0 on success, -1 on fail
*/
int
-vy_run_recover(struct vy_run *run, const char *dir,
- uint32_t space_id, uint32_t iid, struct key_def *cmp_def);
+vy_run_recover(struct vy_run *run, const char *dir, uint32_t space_id,
+ uint32_t iid, struct key_def *cmp_def);
/**
* Rebuild run index
@@ -396,10 +396,9 @@ vy_run_recover(struct vy_run *run, const char *dir,
* @return - 0 on success, -1 on fail
*/
int
-vy_run_rebuild_index(struct vy_run *run, const char *dir,
- uint32_t space_id, uint32_t iid,
- struct key_def *cmp_def, struct key_def *key_def,
- struct tuple_format *format,
+vy_run_rebuild_index(struct vy_run *run, const char *dir, uint32_t space_id,
+ uint32_t iid, struct key_def *cmp_def,
+ struct key_def *key_def, struct tuple_format *format,
const struct index_opts *opts);
enum vy_file_type {
@@ -413,29 +412,28 @@ enum vy_file_type {
extern const char *vy_file_suffix[];
static inline int
-vy_lsm_snprint_path(char *buf, int size, const char *dir,
- uint32_t space_id, uint32_t iid)
+vy_lsm_snprint_path(char *buf, int size, const char *dir, uint32_t space_id,
+ uint32_t iid)
{
- return snprintf(buf, size, "%s/%u/%u",
- dir, (unsigned)space_id, (unsigned)iid);
+ return snprintf(buf, size, "%s/%u/%u", dir, (unsigned)space_id,
+ (unsigned)iid);
}
static inline int
vy_run_snprint_filename(char *buf, int size, int64_t run_id,
enum vy_file_type type)
{
- return snprintf(buf, size, "%020lld.%s",
- (long long)run_id, vy_file_suffix[type]);
+ return snprintf(buf, size, "%020lld.%s", (long long)run_id,
+ vy_file_suffix[type]);
}
static inline int
-vy_run_snprint_path(char *buf, int size, const char *dir,
- uint32_t space_id, uint32_t iid,
- int64_t run_id, enum vy_file_type type)
+vy_run_snprint_path(char *buf, int size, const char *dir, uint32_t space_id,
+ uint32_t iid, int64_t run_id, enum vy_file_type type)
{
int total = 0;
- SNPRINT(total, vy_lsm_snprint_path, buf, size,
- dir, (unsigned)space_id, (unsigned)iid);
+ SNPRINT(total, vy_lsm_snprint_path, buf, size, dir, (unsigned)space_id,
+ (unsigned)iid);
SNPRINT(total, snprintf, buf, size, "/");
SNPRINT(total, vy_run_snprint_filename, buf, size, run_id, type);
return total;
@@ -447,8 +445,8 @@ vy_run_snprint_path(char *buf, int size, const char *dir,
* failed.
*/
int
-vy_run_remove_files(const char *dir, uint32_t space_id,
- uint32_t iid, int64_t run_id);
+vy_run_remove_files(const char *dir, uint32_t space_id, uint32_t iid,
+ int64_t run_id);
/**
* Allocate a new run slice.
@@ -518,11 +516,10 @@ vy_slice_cut(struct vy_slice *slice, int64_t id, struct vy_entry begin,
*/
void
vy_run_iterator_open(struct vy_run_iterator *itr,
- struct vy_run_iterator_stat *stat,
- struct vy_slice *slice, enum iterator_type iterator_type,
- struct vy_entry key, const struct vy_read_view **rv,
- struct key_def *cmp_def, struct key_def *key_def,
- struct tuple_format *format);
+ struct vy_run_iterator_stat *stat, struct vy_slice *slice,
+ enum iterator_type iterator_type, struct vy_entry key,
+ const struct vy_read_view **rv, struct key_def *cmp_def,
+ struct key_def *key_def, struct tuple_format *format);
/**
* Advance a run iterator to the next key.
@@ -530,8 +527,7 @@ vy_run_iterator_open(struct vy_run_iterator *itr,
* Returns 0 on success, -1 on memory allocation or IO error.
*/
NODISCARD int
-vy_run_iterator_next(struct vy_run_iterator *itr,
- struct vy_history *history);
+vy_run_iterator_next(struct vy_run_iterator *itr, struct vy_history *history);
/**
* Advance a run iterator to the key following @last.
diff --git a/src/box/vy_scheduler.c b/src/box/vy_scheduler.c
index b641dd9..c2cc463 100644
--- a/src/box/vy_scheduler.c
+++ b/src/box/vy_scheduler.c
@@ -60,15 +60,19 @@
#include "trivia/util.h"
/* Min and max values for vy_scheduler::timeout. */
-#define VY_SCHEDULER_TIMEOUT_MIN 1
-#define VY_SCHEDULER_TIMEOUT_MAX 60
+#define VY_SCHEDULER_TIMEOUT_MIN 1
+#define VY_SCHEDULER_TIMEOUT_MAX 60
static int vy_worker_f(va_list);
static int vy_scheduler_f(va_list);
-static void vy_task_execute_f(struct cmsg *);
-static void vy_task_complete_f(struct cmsg *);
-static void vy_deferred_delete_batch_process_f(struct cmsg *);
-static void vy_deferred_delete_batch_free_f(struct cmsg *);
+static void
+vy_task_execute_f(struct cmsg *);
+static void
+vy_task_complete_f(struct cmsg *);
+static void
+vy_deferred_delete_batch_process_f(struct cmsg *);
+static void
+vy_deferred_delete_batch_free_f(struct cmsg *);
static const struct cmsg_hop vy_task_execute_route[] = {
{ vy_task_execute_f, NULL },
@@ -222,7 +226,7 @@ struct vy_task {
};
static const struct vy_deferred_delete_handler_iface
-vy_task_deferred_delete_iface;
+ vy_task_deferred_delete_iface;
/**
* Allocate a new task to be executed by a worker thread.
@@ -237,8 +241,8 @@ vy_task_new(struct vy_scheduler *scheduler, struct vy_worker *worker,
{
struct vy_task *task = calloc(1, sizeof(*task));
if (task == NULL) {
- diag_set(OutOfMemory, sizeof(*task),
- "malloc", "struct vy_task");
+ diag_set(OutOfMemory, sizeof(*task), "malloc",
+ "struct vy_task");
return NULL;
}
memset(task, 0, sizeof(*task));
@@ -432,8 +436,8 @@ vy_scheduler_create(struct vy_scheduler *scheduler, int write_threads,
scheduler->read_views = read_views;
scheduler->run_env = run_env;
- scheduler->scheduler_fiber = fiber_new("vinyl.scheduler",
- vy_scheduler_f);
+ scheduler->scheduler_fiber =
+ fiber_new("vinyl.scheduler", vy_scheduler_f);
if (scheduler->scheduler_fiber == NULL)
panic("failed to allocate vinyl scheduler fiber");
@@ -455,10 +459,9 @@ vy_scheduler_create(struct vy_scheduler *scheduler, int write_threads,
assert(write_threads > 1);
int dump_threads = MAX(1, write_threads / 4);
int compaction_threads = write_threads - dump_threads;
- vy_worker_pool_create(&scheduler->dump_pool,
- "dump", dump_threads);
- vy_worker_pool_create(&scheduler->compaction_pool,
- "compaction", compaction_threads);
+ vy_worker_pool_create(&scheduler->dump_pool, "dump", dump_threads);
+ vy_worker_pool_create(&scheduler->compaction_pool, "compaction",
+ compaction_threads);
stailq_create(&scheduler->processed_tasks);
@@ -515,8 +518,8 @@ vy_scheduler_on_delete_lsm(struct trigger *trigger, void *event)
{
struct vy_lsm *lsm = event;
struct vy_scheduler *scheduler = trigger->data;
- assert(! heap_node_is_stray(&lsm->in_dump));
- assert(! heap_node_is_stray(&lsm->in_compaction));
+ assert(!heap_node_is_stray(&lsm->in_dump));
+ assert(!heap_node_is_stray(&lsm->in_compaction));
vy_dump_heap_delete(&scheduler->dump_heap, lsm);
vy_compaction_heap_delete(&scheduler->compaction_heap, lsm);
trigger_clear(trigger);
@@ -552,8 +555,8 @@ vy_scheduler_add_lsm(struct vy_scheduler *scheduler, struct vy_lsm *lsm)
static void
vy_scheduler_update_lsm(struct vy_scheduler *scheduler, struct vy_lsm *lsm)
{
- assert(! heap_node_is_stray(&lsm->in_dump));
- assert(! heap_node_is_stray(&lsm->in_compaction));
+ assert(!heap_node_is_stray(&lsm->in_dump));
+ assert(!heap_node_is_stray(&lsm->in_compaction));
vy_dump_heap_update(&scheduler->dump_heap, lsm);
vy_compaction_heap_update(&scheduler->compaction_heap, lsm);
}
@@ -675,8 +678,8 @@ vy_scheduler_complete_dump(struct vy_scheduler *scheduler)
scheduler->dump_start = now;
scheduler->dump_generation = min_generation;
scheduler->stat.dump_count++;
- scheduler->dump_complete_cb(scheduler,
- min_generation - 1, dump_duration);
+ scheduler->dump_complete_cb(scheduler, min_generation - 1,
+ dump_duration);
fiber_cond_signal(&scheduler->dump_cond);
}
@@ -698,7 +701,8 @@ vy_scheduler_begin_checkpoint(struct vy_scheduler *scheduler, bool is_scheduled)
struct error *e = diag_last_error(&scheduler->diag);
diag_set_error(diag_get(), e);
say_error("cannot checkpoint vinyl, "
- "scheduler is throttled with: %s", e->errmsg);
+ "scheduler is throttled with: %s",
+ e->errmsg);
return -1;
}
say_info("scheduler is unthrottled due to manual checkpoint "
@@ -796,9 +800,11 @@ vy_run_discard(struct vy_run *run)
vy_run_unref(run);
- ERROR_INJECT(ERRINJ_VY_RUN_DISCARD,
- {say_error("error injection: run %lld not discarded",
- (long long)run_id); return;});
+ ERROR_INJECT(ERRINJ_VY_RUN_DISCARD, {
+ say_error("error injection: run %lld not discarded",
+ (long long)run_id);
+ return;
+ });
vy_log_tx_begin();
/*
@@ -865,8 +871,8 @@ vy_deferred_delete_process_one(struct space *deferred_delete_space,
return -1;
struct tuple *unused;
- if (space_execute_dml(deferred_delete_space, txn,
- &request, &unused) != 0) {
+ if (space_execute_dml(deferred_delete_space, txn, &request, &unused) !=
+ 0) {
txn_rollback_stmt(txn);
return -1;
}
@@ -885,8 +891,8 @@ vy_deferred_delete_process_one(struct space *deferred_delete_space,
static void
vy_deferred_delete_batch_process_f(struct cmsg *cmsg)
{
- struct vy_deferred_delete_batch *batch = container_of(cmsg,
- struct vy_deferred_delete_batch, cmsg);
+ struct vy_deferred_delete_batch *batch =
+ container_of(cmsg, struct vy_deferred_delete_batch, cmsg);
struct vy_task *task = batch->task;
struct vy_lsm *pk = task->lsm;
@@ -936,8 +942,8 @@ fail:
static void
vy_deferred_delete_batch_free_f(struct cmsg *cmsg)
{
- struct vy_deferred_delete_batch *batch = container_of(cmsg,
- struct vy_deferred_delete_batch, cmsg);
+ struct vy_deferred_delete_batch *batch =
+ container_of(cmsg, struct vy_deferred_delete_batch, cmsg);
struct vy_task *task = batch->task;
for (int i = 0; i < batch->count; i++) {
struct vy_deferred_delete_stmt *stmt = &batch->stmt[i];
@@ -992,8 +998,8 @@ vy_task_deferred_delete_process(struct vy_deferred_delete_handler *handler,
{
enum { MAX_IN_PROGRESS = 10 };
- struct vy_task *task = container_of(handler, struct vy_task,
- deferred_delete_handler);
+ struct vy_task *task =
+ container_of(handler, struct vy_task, deferred_delete_handler);
struct vy_deferred_delete_batch *batch = task->deferred_delete_batch;
/*
@@ -1036,18 +1042,18 @@ vy_task_deferred_delete_process(struct vy_deferred_delete_handler *handler,
static void
vy_task_deferred_delete_destroy(struct vy_deferred_delete_handler *handler)
{
- struct vy_task *task = container_of(handler, struct vy_task,
- deferred_delete_handler);
+ struct vy_task *task =
+ container_of(handler, struct vy_task, deferred_delete_handler);
vy_task_deferred_delete_flush(task);
while (task->deferred_delete_in_progress > 0)
fiber_sleep(TIMEOUT_INFINITY);
}
static const struct vy_deferred_delete_handler_iface
-vy_task_deferred_delete_iface = {
- .process = vy_task_deferred_delete_process,
- .destroy = vy_task_deferred_delete_destroy,
-};
+ vy_task_deferred_delete_iface = {
+ .process = vy_task_deferred_delete_process,
+ .destroy = vy_task_deferred_delete_destroy,
+ };
static int
vy_task_write_run(struct vy_task *task, bool no_compression)
@@ -1057,17 +1063,17 @@ vy_task_write_run(struct vy_task *task, bool no_compression)
struct vy_lsm *lsm = task->lsm;
struct vy_stmt_stream *wi = task->wi;
- ERROR_INJECT(ERRINJ_VY_RUN_WRITE,
- {diag_set(ClientError, ER_INJECTION,
- "vinyl dump"); return -1;});
+ ERROR_INJECT(ERRINJ_VY_RUN_WRITE, {
+ diag_set(ClientError, ER_INJECTION, "vinyl dump");
+ return -1;
+ });
ERROR_INJECT_SLEEP(ERRINJ_VY_RUN_WRITE_DELAY);
struct vy_run_writer writer;
if (vy_run_writer_create(&writer, task->new_run, lsm->env->path,
- lsm->space_id, lsm->index_id,
- task->cmp_def, task->key_def,
- task->page_size, task->bloom_fpr,
- no_compression) != 0)
+ lsm->space_id, lsm->index_id, task->cmp_def,
+ task->key_def, task->page_size,
+ task->bloom_fpr, no_compression) != 0)
goto fail;
if (wi->iface->start(wi) != 0)
@@ -1076,8 +1082,8 @@ vy_task_write_run(struct vy_task *task, bool no_compression)
int loops = 0;
struct vy_entry entry = vy_entry_none();
while ((rc = wi->iface->next(wi, &entry)) == 0 && entry.stmt != NULL) {
- struct errinj *inj = errinj(ERRINJ_VY_RUN_WRITE_STMT_TIMEOUT,
- ERRINJ_DOUBLE);
+ struct errinj *inj =
+ errinj(ERRINJ_VY_RUN_WRITE_STMT_TIMEOUT, ERRINJ_DOUBLE);
if (inj != NULL && inj->dparam > 0)
thread_sleep(inj->dparam);
@@ -1158,8 +1164,8 @@ vy_task_dump_complete(struct vy_task *task)
* Figure out which ranges intersect the new run.
*/
if (vy_lsm_find_range_intersection(lsm, new_run->info.min_key,
- new_run->info.max_key,
- &begin_range, &end_range) != 0)
+ new_run->info.max_key, &begin_range,
+ &end_range) != 0)
goto fail;
/*
@@ -1173,8 +1179,8 @@ vy_task_dump_complete(struct vy_task *task)
}
for (range = begin_range, i = 0; range != end_range;
range = vy_range_tree_next(&lsm->range_tree, range), i++) {
- slice = vy_slice_new(vy_log_next_id(), new_run,
- range->begin, range->end, lsm->cmp_def);
+ slice = vy_slice_new(vy_log_next_id(), new_run, range->begin,
+ range->end, lsm->cmp_def);
if (slice == NULL)
goto fail_free_slices;
@@ -1473,12 +1479,12 @@ vy_task_compaction_complete(struct vy_task *task)
* as a result of compaction.
*/
RLIST_HEAD(unused_runs);
- for (slice = first_slice; ; slice = rlist_next_entry(slice, in_range)) {
+ for (slice = first_slice;; slice = rlist_next_entry(slice, in_range)) {
slice->run->compacted_slice_count++;
if (slice == last_slice)
break;
}
- for (slice = first_slice; ; slice = rlist_next_entry(slice, in_range)) {
+ for (slice = first_slice;; slice = rlist_next_entry(slice, in_range)) {
run = slice->run;
if (run->compacted_slice_count == run->slice_count)
rlist_add_entry(&unused_runs, run, in_unused);
@@ -1491,7 +1497,7 @@ vy_task_compaction_complete(struct vy_task *task)
* Log change in metadata.
*/
vy_log_tx_begin();
- for (slice = first_slice; ; slice = rlist_next_entry(slice, in_range)) {
+ for (slice = first_slice;; slice = rlist_next_entry(slice, in_range)) {
vy_log_delete_slice(slice->id);
if (slice == last_slice)
break;
@@ -1552,7 +1558,7 @@ vy_task_compaction_complete(struct vy_task *task)
if (new_slice != NULL)
vy_range_add_slice_before(range, new_slice, first_slice);
vy_disk_stmt_counter_reset(&compaction_input);
- for (slice = first_slice; ; slice = next_slice) {
+ for (slice = first_slice;; slice = next_slice) {
next_slice = rlist_next_entry(slice, in_range);
vy_range_remove_slice(range, slice);
rlist_add_entry(&compacted_slices, slice, in_range);
@@ -1564,8 +1570,8 @@ vy_task_compaction_complete(struct vy_task *task)
vy_range_update_compaction_priority(range, &lsm->opts);
vy_range_update_dumps_per_compaction(range);
vy_lsm_acct_range(lsm, range);
- vy_lsm_acct_compaction(lsm, compaction_time,
- &compaction_input, &compaction_output);
+ vy_lsm_acct_compaction(lsm, compaction_time, &compaction_input,
+ &compaction_output);
scheduler->stat.compaction_input += compaction_input.bytes;
scheduler->stat.compaction_output += compaction_output.bytes;
scheduler->stat.compaction_time += compaction_time;
@@ -1575,8 +1581,8 @@ vy_task_compaction_complete(struct vy_task *task)
*/
rlist_foreach_entry(run, &unused_runs, in_unused)
vy_lsm_remove_run(lsm, run);
- rlist_foreach_entry_safe(slice, &compacted_slices,
- in_range, next_slice) {
+ rlist_foreach_entry_safe(slice, &compacted_slices, in_range,
+ next_slice) {
vy_slice_wait_pinned(slice);
vy_slice_delete(slice);
}
@@ -1588,8 +1594,8 @@ vy_task_compaction_complete(struct vy_task *task)
vy_range_heap_insert(&lsm->range_heap, range);
vy_scheduler_update_lsm(scheduler, lsm);
- say_info("%s: completed compacting range %s",
- vy_lsm_name(lsm), vy_range_str(range));
+ say_info("%s: completed compacting range %s", vy_lsm_name(lsm),
+ vy_range_str(range));
return 0;
}
@@ -1605,8 +1611,8 @@ vy_task_compaction_abort(struct vy_task *task)
struct error *e = diag_last_error(&task->diag);
error_log(e);
- say_error("%s: failed to compact range %s",
- vy_lsm_name(lsm), vy_range_str(range));
+ say_error("%s: failed to compact range %s", vy_lsm_name(lsm),
+ vy_range_str(range));
vy_run_discard(task->new_run);
@@ -1635,8 +1641,8 @@ vy_task_compaction_new(struct vy_scheduler *scheduler, struct vy_worker *worker,
return 0;
}
- struct vy_task *task = vy_task_new(scheduler, worker, lsm,
- &compaction_ops);
+ struct vy_task *task =
+ vy_task_new(scheduler, worker, lsm, &compaction_ops);
if (task == NULL)
goto err_task;
@@ -1646,10 +1652,10 @@ vy_task_compaction_new(struct vy_scheduler *scheduler, struct vy_worker *worker,
struct vy_stmt_stream *wi;
bool is_last_level = (range->compaction_priority == range->slice_count);
- wi = vy_write_iterator_new(task->cmp_def, lsm->index_id == 0,
- is_last_level, scheduler->read_views,
- lsm->index_id > 0 ? NULL :
- &task->deferred_delete_handler);
+ wi = vy_write_iterator_new(
+ task->cmp_def, lsm->index_id == 0, is_last_level,
+ scheduler->read_views,
+ lsm->index_id > 0 ? NULL : &task->deferred_delete_handler);
if (wi == NULL)
goto err_wi;
@@ -1657,11 +1663,11 @@ vy_task_compaction_new(struct vy_scheduler *scheduler, struct vy_worker *worker,
int32_t dump_count = 0;
int n = range->compaction_priority;
rlist_foreach_entry(slice, &range->slices, in_range) {
- if (vy_write_iterator_new_slice(wi, slice,
- lsm->disk_format) != 0)
+ if (vy_write_iterator_new_slice(wi, slice, lsm->disk_format) !=
+ 0)
goto err_wi_sub;
- new_run->dump_lsn = MAX(new_run->dump_lsn,
- slice->run->dump_lsn);
+ new_run->dump_lsn =
+ MAX(new_run->dump_lsn, slice->run->dump_lsn);
dump_count += slice->run->dump_count;
/* Remember the slices we are compacting. */
if (task->first_slice == NULL)
@@ -1702,7 +1708,7 @@ vy_task_compaction_new(struct vy_scheduler *scheduler, struct vy_worker *worker,
say_info("%s: started compacting range %s, runs %d/%d",
vy_lsm_name(lsm), vy_range_str(range),
- range->compaction_priority, range->slice_count);
+ range->compaction_priority, range->slice_count);
*p_task = task;
return 0;
@@ -1783,8 +1789,8 @@ static void
vy_task_complete_f(struct cmsg *cmsg)
{
struct vy_task *task = container_of(cmsg, struct vy_task, cmsg);
- stailq_add_tail_entry(&task->scheduler->processed_tasks,
- task, in_processed);
+ stailq_add_tail_entry(&task->scheduler->processed_tasks, task,
+ in_processed);
fiber_cond_signal(&task->scheduler->scheduler_cond);
}
@@ -1892,7 +1898,8 @@ vy_scheduler_peek_compaction(struct vy_scheduler *scheduler,
struct vy_worker *worker = NULL;
retry:
*ptask = NULL;
- struct vy_lsm *lsm = vy_compaction_heap_top(&scheduler->compaction_heap);
+ struct vy_lsm *lsm =
+ vy_compaction_heap_top(&scheduler->compaction_heap);
if (lsm == NULL)
goto no_task; /* nothing to do */
if (vy_lsm_compaction_priority(lsm) <= 1)
@@ -1908,7 +1915,7 @@ retry:
}
if (*ptask == NULL)
goto retry; /* LSM tree dropped or range split/coalesced */
- return 0; /* new task */
+ return 0; /* new task */
no_task:
if (worker != NULL)
vy_worker_pool_put(worker);
@@ -1939,7 +1946,6 @@ fail:
assert(!diag_is_empty(diag_get()));
diag_move(diag_get(), &scheduler->diag);
return -1;
-
}
static int
@@ -1956,12 +1962,11 @@ vy_task_complete(struct vy_task *task)
goto fail; /* ->execute failed */
}
ERROR_INJECT(ERRINJ_VY_TASK_COMPLETE, {
- diag_set(ClientError, ER_INJECTION,
- "vinyl task completion");
- diag_move(diag_get(), diag);
- goto fail; });
- if (task->ops->complete &&
- task->ops->complete(task) != 0) {
+ diag_set(ClientError, ER_INJECTION, "vinyl task completion");
+ diag_move(diag_get(), diag);
+ goto fail;
+ });
+ if (task->ops->complete && task->ops->complete(task) != 0) {
assert(!diag_is_empty(diag_get()));
diag_move(diag_get(), diag);
goto fail;
@@ -1992,7 +1997,8 @@ vy_scheduler_f(va_list va)
/* Complete and delete all processed tasks. */
stailq_foreach_entry_safe(task, next, &processed_tasks,
- in_processed) {
+ in_processed)
+ {
if (vy_task_complete(task) != 0)
tasks_failed++;
else
@@ -2035,7 +2041,7 @@ vy_scheduler_f(va_list va)
fiber_reschedule();
continue;
-error:
+ error:
/* Abort pending checkpoint. */
fiber_cond_signal(&scheduler->dump_cond);
/*
diff --git a/src/box/vy_scheduler.h b/src/box/vy_scheduler.h
index f487b42..68f35ae 100644
--- a/src/box/vy_scheduler.h
+++ b/src/box/vy_scheduler.h
@@ -53,9 +53,9 @@ struct vy_run_env;
struct vy_worker;
struct vy_scheduler;
-typedef void
-(*vy_scheduler_dump_complete_f)(struct vy_scheduler *scheduler,
- int64_t dump_generation, double dump_duration);
+typedef void (*vy_scheduler_dump_complete_f)(struct vy_scheduler *scheduler,
+ int64_t dump_generation,
+ double dump_duration);
struct vy_worker_pool {
/** Name of the pool. Used for naming threads. */
diff --git a/src/box/vy_stmt.c b/src/box/vy_stmt.c
index 92e0aa1..c2be567 100644
--- a/src/box/vy_stmt.c
+++ b/src/box/vy_stmt.c
@@ -33,7 +33,7 @@
#include <stdlib.h>
#include <string.h>
-#include <sys/uio.h> /* struct iovec */
+#include <sys/uio.h> /* struct iovec */
#include <pmatomic.h> /* for refs */
#include "diag.h"
@@ -171,7 +171,7 @@ vy_stmt_alloc(struct tuple_format *format, uint32_t data_offset, uint32_t bsize)
uint32_t total_size = data_offset + bsize;
if (unlikely(total_size > env->max_tuple_size)) {
diag_set(ClientError, ER_VINYL_MAX_TUPLE_SIZE,
- (unsigned) total_size);
+ (unsigned)total_size);
error_log(diag_last_error(diag_get()));
return NULL;
}
@@ -190,8 +190,9 @@ vy_stmt_alloc(struct tuple_format *format, uint32_t data_offset, uint32_t bsize)
diag_set(OutOfMemory, total_size, "malloc", "struct vy_stmt");
return NULL;
}
- say_debug("vy_stmt_alloc(format = %d data_offset = %u, bsize = %u) = %p",
- format->id, data_offset, bsize, tuple);
+ say_debug(
+ "vy_stmt_alloc(format = %d data_offset = %u, bsize = %u) = %p",
+ format->id, data_offset, bsize, tuple);
tuple->refs = 1;
tuple->format_id = tuple_format_id(format);
if (cord_is_main())
@@ -213,8 +214,8 @@ vy_stmt_dup(struct tuple *stmt)
* tuple field map. This map can be simple memcopied from
* the original tuple.
*/
- struct tuple *res = vy_stmt_alloc(tuple_format(stmt),
- stmt->data_offset, stmt->bsize);
+ struct tuple *res = vy_stmt_alloc(tuple_format(stmt), stmt->data_offset,
+ stmt->bsize);
if (res == NULL)
return NULL;
assert(tuple_size(res) == tuple_size(stmt));
@@ -238,8 +239,8 @@ vy_stmt_dup_lsregion(struct tuple *stmt, struct lsregion *lsregion,
if (type == IPROTO_UPSERT)
alloc_size += align;
- mem_stmt = lsregion_aligned_alloc(lsregion, alloc_size, align,
- alloc_id);
+ mem_stmt =
+ lsregion_aligned_alloc(lsregion, alloc_size, align, alloc_id);
if (mem_stmt == NULL) {
diag_set(OutOfMemory, alloc_size, "lsregion_aligned_alloc",
"mem_stmt");
@@ -279,11 +280,12 @@ vy_key_new(struct tuple_format *format, const char *key, uint32_t part_count)
/* Allocate stmt */
uint32_t key_size = key_end - key;
uint32_t bsize = mp_sizeof_array(part_count) + key_size;
- struct tuple *stmt = vy_stmt_alloc(format, sizeof(struct vy_stmt), bsize);
+ struct tuple *stmt =
+ vy_stmt_alloc(format, sizeof(struct vy_stmt), bsize);
if (stmt == NULL)
return NULL;
/* Copy MsgPack data */
- char *raw = (char *) stmt + sizeof(struct vy_stmt);
+ char *raw = (char *)stmt + sizeof(struct vy_stmt);
char *data = mp_encode_array(raw, part_count);
memcpy(data, key, key_size);
assert(data + key_size == raw + bsize);
@@ -312,8 +314,8 @@ vy_key_dup(const char *key)
*/
static struct tuple *
vy_stmt_new_with_ops(struct tuple_format *format, const char *tuple_begin,
- const char *tuple_end, struct iovec *ops,
- int op_count, enum iproto_type type)
+ const char *tuple_end, struct iovec *ops, int op_count,
+ enum iproto_type type)
{
mp_tuple_assert(tuple_begin, tuple_end);
@@ -350,18 +352,17 @@ vy_stmt_new_with_ops(struct tuple_format *format, const char *tuple_begin,
*/
size_t mpsize = (tuple_end - tuple_begin);
size_t bsize = mpsize + ops_size;
- stmt = vy_stmt_alloc(format, sizeof(struct vy_stmt) +
- field_map_size, bsize);
+ stmt = vy_stmt_alloc(format, sizeof(struct vy_stmt) + field_map_size,
+ bsize);
if (stmt == NULL)
goto end;
/* Copy MsgPack data */
- char *raw = (char *) tuple_data(stmt);
+ char *raw = (char *)tuple_data(stmt);
char *wpos = raw;
field_map_build(&builder, wpos - field_map_size);
memcpy(wpos, tuple_begin, mpsize);
wpos += mpsize;
- for (struct iovec *op = ops, *end = ops + op_count;
- op != end; ++op) {
+ for (struct iovec *op = ops, *end = ops + op_count; op != end; ++op) {
memcpy(wpos, op->iov_base, op->iov_len);
wpos += op->iov_len;
}
@@ -376,32 +377,32 @@ vy_stmt_new_upsert(struct tuple_format *format, const char *tuple_begin,
const char *tuple_end, struct iovec *operations,
uint32_t ops_cnt)
{
- return vy_stmt_new_with_ops(format, tuple_begin, tuple_end,
- operations, ops_cnt, IPROTO_UPSERT);
+ return vy_stmt_new_with_ops(format, tuple_begin, tuple_end, operations,
+ ops_cnt, IPROTO_UPSERT);
}
struct tuple *
vy_stmt_new_replace(struct tuple_format *format, const char *tuple_begin,
const char *tuple_end)
{
- return vy_stmt_new_with_ops(format, tuple_begin, tuple_end,
- NULL, 0, IPROTO_REPLACE);
+ return vy_stmt_new_with_ops(format, tuple_begin, tuple_end, NULL, 0,
+ IPROTO_REPLACE);
}
struct tuple *
vy_stmt_new_insert(struct tuple_format *format, const char *tuple_begin,
const char *tuple_end)
{
- return vy_stmt_new_with_ops(format, tuple_begin, tuple_end,
- NULL, 0, IPROTO_INSERT);
+ return vy_stmt_new_with_ops(format, tuple_begin, tuple_end, NULL, 0,
+ IPROTO_INSERT);
}
struct tuple *
vy_stmt_new_delete(struct tuple_format *format, const char *tuple_begin,
const char *tuple_end)
{
- return vy_stmt_new_with_ops(format, tuple_begin, tuple_end,
- NULL, 0, IPROTO_DELETE);
+ return vy_stmt_new_with_ops(format, tuple_begin, tuple_end, NULL, 0,
+ IPROTO_DELETE);
}
struct tuple *
@@ -415,7 +416,8 @@ vy_stmt_replace_from_upsert(struct tuple *upsert)
/* Copy statement data excluding UPSERT operations */
struct tuple_format *format = tuple_format(upsert);
- struct tuple *replace = vy_stmt_alloc(format, upsert->data_offset, bsize);
+ struct tuple *replace =
+ vy_stmt_alloc(format, upsert->data_offset, bsize);
if (replace == NULL)
return NULL;
/* Copy both data and field_map. */
@@ -453,8 +455,8 @@ vy_stmt_new_surrogate_delete_raw(struct tuple_format *format,
uint32_t field_count;
struct tuple_format_iterator it;
if (tuple_format_iterator_create(&it, format, src_data,
- TUPLE_FORMAT_ITERATOR_KEY_PARTS_ONLY, &field_count,
- region) != 0)
+ TUPLE_FORMAT_ITERATOR_KEY_PARTS_ONLY,
+ &field_count, region) != 0)
goto out;
char *pos = mp_encode_array(data, field_count);
struct tuple_format_iterator_entry entry;
@@ -480,8 +482,9 @@ vy_stmt_new_surrogate_delete_raw(struct tuple_format *format,
uint32_t offset_slot = entry.field->offset_slot;
if (offset_slot != TUPLE_OFFSET_SLOT_NIL &&
field_map_builder_set_slot(&builder, offset_slot,
- pos - data, entry.multikey_idx,
- entry.multikey_count, region) != 0)
+ pos - data, entry.multikey_idx,
+ entry.multikey_count,
+ region) != 0)
goto out;
/* Copy field data. */
if (entry.field->type == FIELD_TYPE_ARRAY) {
@@ -502,7 +505,7 @@ vy_stmt_new_surrogate_delete_raw(struct tuple_format *format,
bsize);
if (stmt == NULL)
goto out;
- char *stmt_data = (char *) tuple_data(stmt);
+ char *stmt_data = (char *)tuple_data(stmt);
char *stmt_field_map_begin = stmt_data - field_map_size;
memcpy(stmt_data, data, bsize);
field_map_build(&builder, stmt_field_map_begin);
@@ -519,8 +522,8 @@ vy_stmt_extract_key(struct tuple *stmt, struct key_def *key_def,
{
struct region *region = &fiber()->gc;
size_t region_svp = region_used(region);
- const char *key_raw = tuple_extract_key(stmt, key_def,
- multikey_idx, NULL);
+ const char *key_raw =
+ tuple_extract_key(stmt, key_def, multikey_idx, NULL);
if (key_raw == NULL)
return NULL;
uint32_t part_count = mp_decode_array(&key_raw);
@@ -551,34 +554,36 @@ vy_stmt_extract_key_raw(const char *data, const char *data_end,
}
int
-vy_bloom_builder_add(struct tuple_bloom_builder *builder,
- struct vy_entry entry, struct key_def *key_def)
+vy_bloom_builder_add(struct tuple_bloom_builder *builder, struct vy_entry entry,
+ struct key_def *key_def)
{
struct tuple *stmt = entry.stmt;
if (vy_stmt_is_key(stmt)) {
const char *data = tuple_data(stmt);
uint32_t part_count = mp_decode_array(&data);
- return tuple_bloom_builder_add_key(builder, data,
- part_count, key_def);
+ return tuple_bloom_builder_add_key(builder, data, part_count,
+ key_def);
} else {
return tuple_bloom_builder_add(builder, stmt, key_def,
- vy_entry_multikey_idx(entry, key_def));
+ vy_entry_multikey_idx(entry,
+ key_def));
}
}
bool
-vy_bloom_maybe_has(const struct tuple_bloom *bloom,
- struct vy_entry entry, struct key_def *key_def)
+vy_bloom_maybe_has(const struct tuple_bloom *bloom, struct vy_entry entry,
+ struct key_def *key_def)
{
struct tuple *stmt = entry.stmt;
if (vy_stmt_is_key(stmt)) {
const char *data = tuple_data(stmt);
uint32_t part_count = mp_decode_array(&data);
- return tuple_bloom_maybe_has_key(bloom, data,
- part_count, key_def);
+ return tuple_bloom_maybe_has_key(bloom, data, part_count,
+ key_def);
} else {
return tuple_bloom_maybe_has(bloom, stmt, key_def,
- vy_entry_multikey_idx(entry, key_def));
+ vy_entry_multikey_idx(entry,
+ key_def));
}
}
@@ -652,9 +657,9 @@ vy_stmt_encode_primary(struct tuple *value, struct key_def *key_def,
switch (type) {
case IPROTO_DELETE:
extracted = vy_stmt_is_key(value) ?
- tuple_data_range(value, &size) :
- tuple_extract_key(value, key_def,
- MULTIKEY_NONE, &size);
+ tuple_data_range(value, &size) :
+ tuple_extract_key(value, key_def,
+ MULTIKEY_NONE, &size);
if (extracted == NULL)
return -1;
request.key = extracted;
@@ -696,10 +701,10 @@ vy_stmt_encode_secondary(struct tuple *value, struct key_def *cmp_def,
memset(&request, 0, sizeof(request));
request.type = type;
uint32_t size;
- const char *extracted = vy_stmt_is_key(value) ?
- tuple_data_range(value, &size) :
- tuple_extract_key(value, cmp_def,
- multikey_idx, &size);
+ const char *extracted =
+ vy_stmt_is_key(value) ?
+ tuple_data_range(value, &size) :
+ tuple_extract_key(value, cmp_def, multikey_idx, &size);
if (extracted == NULL)
return -1;
if (type == IPROTO_REPLACE || type == IPROTO_INSERT) {
@@ -733,15 +738,15 @@ vy_stmt_decode(struct xrow_header *xrow, struct tuple_format *format)
switch (request.type) {
case IPROTO_DELETE:
/* Always use key format for DELETE statements. */
- stmt = vy_stmt_new_with_ops(env->key_format,
- request.key, request.key_end,
- NULL, 0, IPROTO_DELETE);
+ stmt = vy_stmt_new_with_ops(env->key_format, request.key,
+ request.key_end, NULL, 0,
+ IPROTO_DELETE);
break;
case IPROTO_INSERT:
case IPROTO_REPLACE:
stmt = vy_stmt_new_with_ops(format, request.tuple,
- request.tuple_end,
- NULL, 0, request.type);
+ request.tuple_end, NULL, 0,
+ request.type);
break;
case IPROTO_UPSERT:
ops.iov_base = (char *)request.ops;
@@ -781,14 +786,14 @@ vy_stmt_snprint(char *buf, int size, struct tuple *stmt)
}
SNPRINT(total, snprintf, buf, size, "%s(",
iproto_type_name(vy_stmt_type(stmt)));
- SNPRINT(total, mp_snprint, buf, size, tuple_data(stmt));
+ SNPRINT(total, mp_snprint, buf, size, tuple_data(stmt));
if (vy_stmt_type(stmt) == IPROTO_UPSERT) {
SNPRINT(total, snprintf, buf, size, ", ops=");
SNPRINT(total, mp_snprint, buf, size,
vy_stmt_upsert_ops(stmt, &mp_size));
}
SNPRINT(total, snprintf, buf, size, ", lsn=%lld)",
- (long long) vy_stmt_lsn(stmt));
+ (long long)vy_stmt_lsn(stmt));
return total;
}
diff --git a/src/box/vy_stmt.h b/src/box/vy_stmt.h
index 2521923..2638761 100644
--- a/src/box/vy_stmt.h
+++ b/src/box/vy_stmt.h
@@ -117,7 +117,7 @@ enum {
* secondary indexes. It makes the write iterator generate
* DELETE statements for them during compaction.
*/
- VY_STMT_DEFERRED_DELETE = 1 << 0,
+ VY_STMT_DEFERRED_DELETE = 1 << 0,
/**
* Statements that have this flag set are ignored by the
* read iterator.
@@ -127,7 +127,7 @@ enum {
* the older a source, the older statements it stores for a
* particular key.
*/
- VY_STMT_SKIP_READ = 1 << 1,
+ VY_STMT_SKIP_READ = 1 << 1,
/**
* This flag is set for those REPLACE statements that were
* generated by UPDATE operations. It is used by the write
@@ -135,12 +135,12 @@ enum {
* indexes so that they can get annihilated with DELETEs on
* compaction. It is never written to disk.
*/
- VY_STMT_UPDATE = 1 << 2,
+ VY_STMT_UPDATE = 1 << 2,
/**
* Bit mask of all statement flags.
*/
- VY_STMT_FLAGS_ALL = (VY_STMT_DEFERRED_DELETE | VY_STMT_SKIP_READ |
- VY_STMT_UPDATE),
+ VY_STMT_FLAGS_ALL =
+ (VY_STMT_DEFERRED_DELETE | VY_STMT_SKIP_READ | VY_STMT_UPDATE),
};
/**
@@ -172,7 +172,7 @@ enum {
struct vy_stmt {
struct tuple base;
int64_t lsn;
- uint8_t type; /* IPROTO_INSERT/REPLACE/UPSERT/DELETE */
+ uint8_t type; /* IPROTO_INSERT/REPLACE/UPSERT/DELETE */
uint8_t flags;
/**
* Offsets array concatenated with MessagePack fields
@@ -185,28 +185,28 @@ struct vy_stmt {
static inline int64_t
vy_stmt_lsn(struct tuple *stmt)
{
- return ((struct vy_stmt *) stmt)->lsn;
+ return ((struct vy_stmt *)stmt)->lsn;
}
/** Set LSN of the vinyl statement. */
static inline void
vy_stmt_set_lsn(struct tuple *stmt, int64_t lsn)
{
- ((struct vy_stmt *) stmt)->lsn = lsn;
+ ((struct vy_stmt *)stmt)->lsn = lsn;
}
/** Get type of the vinyl statement. */
static inline enum iproto_type
vy_stmt_type(struct tuple *stmt)
{
- return (enum iproto_type)((struct vy_stmt *) stmt)->type;
+ return (enum iproto_type)((struct vy_stmt *)stmt)->type;
}
/** Set type of the vinyl statement. */
static inline void
vy_stmt_set_type(struct tuple *stmt, enum iproto_type type)
{
- ((struct vy_stmt *) stmt)->type = type;
+ ((struct vy_stmt *)stmt)->type = type;
}
/** Get flags of the vinyl statement. */
@@ -383,8 +383,7 @@ vy_stmt_hint(struct tuple *stmt, struct key_def *key_def)
* formats (key or tuple) and using comparison hints.
*/
static inline int
-vy_stmt_compare(struct tuple *a, hint_t a_hint,
- struct tuple *b, hint_t b_hint,
+vy_stmt_compare(struct tuple *a, hint_t a_hint, struct tuple *b, hint_t b_hint,
struct key_def *key_def)
{
bool a_is_tuple = !vy_stmt_is_key(a);
@@ -403,8 +402,8 @@ vy_stmt_compare(struct tuple *a, hint_t a_hint,
a_hint, key_def);
} else {
assert(!a_is_tuple && !b_is_tuple);
- return key_compare(tuple_data(a), a_hint,
- tuple_data(b), b_hint, key_def);
+ return key_compare(tuple_data(a), a_hint, tuple_data(b), b_hint,
+ key_def);
}
}
@@ -419,9 +418,8 @@ vy_stmt_compare_with_raw_key(struct tuple *stmt, hint_t stmt_hint,
{
if (!vy_stmt_is_key(stmt)) {
uint32_t part_count = mp_decode_array(&key);
- return tuple_compare_with_key(stmt, stmt_hint, key,
- part_count, key_hint,
- key_def);
+ return tuple_compare_with_key(stmt, stmt_hint, key, part_count,
+ key_hint, key_def);
}
return key_compare(tuple_data(stmt), stmt_hint, key, key_hint, key_def);
}
@@ -465,8 +463,8 @@ vy_key_dup(const char *key);
* @retval NULL Memory or fields format error.
*/
struct tuple *
-vy_stmt_new_surrogate_delete_raw(struct tuple_format *format,
- const char *data, const char *data_end);
+vy_stmt_new_surrogate_delete_raw(struct tuple_format *format, const char *data,
+ const char *data_end);
/** @copydoc vy_stmt_new_surrogate_delete_raw. */
static inline struct tuple *
@@ -489,7 +487,7 @@ vy_stmt_new_surrogate_delete(struct tuple_format *format, struct tuple *tuple)
*/
struct tuple *
vy_stmt_new_replace(struct tuple_format *format, const char *tuple,
- const char *tuple_end);
+ const char *tuple_end);
/**
* Create the INSERT statement from raw MessagePack data.
@@ -519,7 +517,7 @@ struct tuple *
vy_stmt_new_delete(struct tuple_format *format, const char *tuple_begin,
const char *tuple_end);
- /**
+/**
* Create the UPSERT statement from raw MessagePack data.
* @param tuple_begin MessagePack data that contain an array of fields WITH the
* array header.
@@ -533,9 +531,9 @@ vy_stmt_new_delete(struct tuple_format *format, const char *tuple_begin,
* @retval not NULL Success.
*/
struct tuple *
-vy_stmt_new_upsert(struct tuple_format *format,
- const char *tuple_begin, const char *tuple_end,
- struct iovec *operations, uint32_t ops_cnt);
+vy_stmt_new_upsert(struct tuple_format *format, const char *tuple_begin,
+ const char *tuple_end, struct iovec *operations,
+ uint32_t ops_cnt);
/**
* Create REPLACE statement from UPSERT statement.
@@ -623,16 +621,16 @@ vy_stmt_extract_key_raw(const char *data, const char *data_end,
* See tuple_bloom_builder_add() for more details.
*/
int
-vy_bloom_builder_add(struct tuple_bloom_builder *builder,
- struct vy_entry entry, struct key_def *key_def);
+vy_bloom_builder_add(struct tuple_bloom_builder *builder, struct vy_entry entry,
+ struct key_def *key_def);
/**
* Check if a statement hash is present in a bloom filter.
* See tuple_bloom_maybe_has() for more details.
*/
bool
-vy_bloom_maybe_has(const struct tuple_bloom *bloom,
- struct vy_entry entry, struct key_def *key_def);
+vy_bloom_maybe_has(const struct tuple_bloom *bloom, struct vy_entry entry,
+ struct key_def *key_def);
/**
* Encode vy_stmt for a primary key as xrow_header
@@ -742,12 +740,11 @@ vy_entry_compare(struct vy_entry a, struct vy_entry b, struct key_def *key_def)
* (msgpack array).
*/
static inline int
-vy_entry_compare_with_raw_key(struct vy_entry entry,
- const char *key, hint_t key_hint,
- struct key_def *key_def)
+vy_entry_compare_with_raw_key(struct vy_entry entry, const char *key,
+ hint_t key_hint, struct key_def *key_def)
{
- return vy_stmt_compare_with_raw_key(entry.stmt, entry.hint,
- key, key_hint, key_def);
+ return vy_stmt_compare_with_raw_key(entry.stmt, entry.hint, key,
+ key_hint, key_def);
}
/**
@@ -764,15 +761,18 @@ vy_entry_compare_with_raw_key(struct vy_entry entry,
*
* entry.stmt is set to src_stmt on each iteration.
*/
-#define vy_stmt_foreach_entry(entry, src_stmt, key_def) \
- for (uint32_t multikey_idx = 0, \
- multikey_count = !(key_def)->is_multikey ? 1 : \
- tuple_multikey_count((src_stmt), (key_def)); \
- multikey_idx < multikey_count && \
- (((entry).stmt = (src_stmt)), \
- ((entry).hint = !(key_def)->is_multikey ? \
- vy_stmt_hint((src_stmt), (key_def)) : \
- multikey_idx), true); \
+#define vy_stmt_foreach_entry(entry, src_stmt, key_def) \
+ for (uint32_t multikey_idx = 0, \
+ multikey_count = !(key_def)->is_multikey ? \
+ 1 : \
+ tuple_multikey_count( \
+ (src_stmt), (key_def)); \
+ multikey_idx < multikey_count && \
+ (((entry).stmt = (src_stmt)), \
+ ((entry).hint = !(key_def)->is_multikey ? \
+ vy_stmt_hint((src_stmt), (key_def)) : \
+ multikey_idx), \
+ true); \
++multikey_idx)
#if defined(__cplusplus)
diff --git a/src/box/vy_stmt_stream.h b/src/box/vy_stmt_stream.h
index 08e4d5f..c1e0589 100644
--- a/src/box/vy_stmt_stream.h
+++ b/src/box/vy_stmt_stream.h
@@ -48,20 +48,18 @@ struct vy_stmt_stream;
/**
* Start streaming
*/
-typedef NODISCARD int
-(*vy_stream_start_f)(struct vy_stmt_stream *virt_stream);
+typedef NODISCARD int (*vy_stream_start_f)(struct vy_stmt_stream *virt_stream);
/**
* Get next tuple from a stream.
*/
-typedef NODISCARD int
-(*vy_stream_next_f)(struct vy_stmt_stream *virt_stream, struct vy_entry *ret);
+typedef NODISCARD int (*vy_stream_next_f)(struct vy_stmt_stream *virt_stream,
+ struct vy_entry *ret);
/**
* Close the stream.
*/
-typedef void
-(*vy_stream_close_f)(struct vy_stmt_stream *virt_stream);
+typedef void (*vy_stream_close_f)(struct vy_stmt_stream *virt_stream);
/**
* The interface description for streams over run and mem.
diff --git a/src/box/vy_tx.c b/src/box/vy_tx.c
index ff63cd7..0fb9eed 100644
--- a/src/box/vy_tx.c
+++ b/src/box/vy_tx.c
@@ -103,8 +103,8 @@ vy_tx_manager_new(void)
{
struct vy_tx_manager *xm = calloc(1, sizeof(*xm));
if (xm == NULL) {
- diag_set(OutOfMemory, sizeof(*xm),
- "malloc", "struct vy_tx_manager");
+ diag_set(OutOfMemory, sizeof(*xm), "malloc",
+ "struct vy_tx_manager");
return NULL;
}
@@ -113,8 +113,8 @@ vy_tx_manager_new(void)
vy_global_read_view_create((struct vy_read_view *)&xm->global_read_view,
INT64_MAX);
xm->p_global_read_view = &xm->global_read_view;
- vy_global_read_view_create((struct vy_read_view *)&xm->committed_read_view,
- MAX_LSN - 1);
+ vy_global_read_view_create(
+ (struct vy_read_view *)&xm->committed_read_view, MAX_LSN - 1);
xm->p_committed_read_view = &xm->committed_read_view;
struct slab_cache *slab_cache = cord_slab_cache();
@@ -171,15 +171,13 @@ vy_tx_manager_read_view(struct vy_tx_manager *xm)
if ((xm->last_prepared_tx == NULL && rv->vlsn == xm->lsn) ||
(xm->last_prepared_tx != NULL &&
rv->vlsn == MAX_LSN + xm->last_prepared_tx->psn)) {
-
rv->refs++;
- return rv;
+ return rv;
}
}
rv = mempool_alloc(&xm->read_view_mempool);
if (rv == NULL) {
- diag_set(OutOfMemory, sizeof(*rv),
- "mempool", "read view");
+ diag_set(OutOfMemory, sizeof(*rv), "mempool", "read view");
return NULL;
}
if (xm->last_prepared_tx != NULL) {
@@ -196,7 +194,7 @@ vy_tx_manager_read_view(struct vy_tx_manager *xm)
void
vy_tx_manager_destroy_read_view(struct vy_tx_manager *xm,
- struct vy_read_view *rv)
+ struct vy_read_view *rv)
{
if (rv == xm->p_global_read_view)
return;
@@ -268,16 +266,16 @@ vy_read_interval_unacct(struct vy_read_interval *interval)
}
static struct vy_read_interval *
-vy_read_interval_new(struct vy_tx *tx, struct vy_lsm *lsm,
- struct vy_entry left, bool left_belongs,
- struct vy_entry right, bool right_belongs)
+vy_read_interval_new(struct vy_tx *tx, struct vy_lsm *lsm, struct vy_entry left,
+ bool left_belongs, struct vy_entry right,
+ bool right_belongs)
{
struct vy_tx_manager *xm = tx->xm;
struct vy_read_interval *interval;
interval = mempool_alloc(&xm->read_interval_mempool);
if (interval == NULL) {
- diag_set(OutOfMemory, sizeof(*interval),
- "mempool", "struct vy_read_interval");
+ diag_set(OutOfMemory, sizeof(*interval), "mempool",
+ "struct vy_read_interval");
return NULL;
}
interval->tx = tx;
@@ -343,8 +341,7 @@ vy_tx_destroy(struct vy_tx *tx)
vy_tx_manager_destroy_read_view(tx->xm, tx->read_view);
struct txv *v, *tmp;
- stailq_foreach_entry_safe(v, tmp, &tx->log, next_in_log)
- txv_delete(v);
+ stailq_foreach_entry_safe(v, tmp, &tx->log, next_in_log) txv_delete(v);
vy_tx_read_set_iter(&tx->read_set, NULL, vy_tx_read_set_free_cb, NULL);
rlist_del_entry(tx, in_writers);
@@ -484,8 +481,8 @@ vy_tx_write_prepare(struct txv *v)
* @retval -1 Memory error.
*/
static int
-vy_tx_write(struct vy_lsm *lsm, struct vy_mem *mem,
- struct vy_entry entry, struct tuple **region_stmt)
+vy_tx_write(struct vy_lsm *lsm, struct vy_mem *mem, struct vy_entry entry,
+ struct tuple **region_stmt)
{
assert(vy_stmt_is_refable(entry.stmt));
assert(*region_stmt == NULL || !vy_stmt_is_refable(*region_stmt));
@@ -511,7 +508,7 @@ vy_tx_write(struct vy_lsm *lsm, struct vy_mem *mem,
vy_stmt_type(applied.stmt);
assert(applied_type == IPROTO_REPLACE ||
applied_type == IPROTO_INSERT);
- (void) applied_type;
+ (void)applied_type;
int rc = vy_lsm_set(lsm, mem, applied,
region_stmt);
tuple_unref(applied.stmt);
@@ -569,8 +566,8 @@ vy_tx_handle_deferred_delete(struct vy_tx *tx, struct txv *v)
/* Look up the tuple overwritten by this statement. */
struct vy_entry overwritten;
- if (vy_point_lookup_mem(pk, &tx->xm->p_global_read_view,
- v->entry, &overwritten) != 0)
+ if (vy_point_lookup_mem(pk, &tx->xm->p_global_read_view, v->entry,
+ &overwritten) != 0)
return -1;
if (overwritten.stmt == NULL) {
@@ -596,8 +593,8 @@ vy_tx_handle_deferred_delete(struct vy_tx *tx, struct txv *v)
}
struct tuple *delete_stmt;
- delete_stmt = vy_stmt_new_surrogate_delete(pk->mem_format,
- overwritten.stmt);
+ delete_stmt =
+ vy_stmt_new_surrogate_delete(pk->mem_format, overwritten.stmt);
tuple_unref(overwritten.stmt);
if (delete_stmt == NULL)
return -1;
@@ -641,7 +638,7 @@ vy_tx_handle_deferred_delete(struct vy_tx *tx, struct txv *v)
*/
assert(vy_stmt_type(stmt) == IPROTO_REPLACE);
assert(vy_stmt_type(other->entry.stmt) ==
- IPROTO_REPLACE);
+ IPROTO_REPLACE);
other->is_nop = true;
continue;
}
@@ -704,7 +701,8 @@ vy_tx_prepare(struct vy_tx *tx)
/* repsert - REPLACE/UPSERT */
struct tuple *delete = NULL, *repsert = NULL;
MAYBE_UNUSED uint32_t current_space_id = 0;
- stailq_foreach_entry(v, &tx->log, next_in_log) {
+ stailq_foreach_entry(v, &tx->log, next_in_log)
+ {
struct vy_lsm *lsm = v->lsm;
if (lsm->index_id == 0) {
/* The beginning of the new txn_stmt is met. */
@@ -755,8 +753,9 @@ vy_tx_prepare(struct vy_tx *tx)
*/
uint8_t flags = vy_stmt_flags(v->entry.stmt);
if (flags & VY_STMT_DEFERRED_DELETE) {
- vy_stmt_set_flags(v->entry.stmt, flags &
- ~VY_STMT_DEFERRED_DELETE);
+ vy_stmt_set_flags(
+ v->entry.stmt,
+ flags & ~VY_STMT_DEFERRED_DELETE);
}
}
@@ -780,8 +779,8 @@ vy_tx_prepare(struct vy_tx *tx)
/* In secondary indexes only REPLACE/DELETE can be written. */
vy_stmt_set_lsn(v->entry.stmt, MAX_LSN + tx->psn);
- struct tuple **region_stmt =
- (type == IPROTO_DELETE) ? &delete : &repsert;
+ struct tuple **region_stmt = (type == IPROTO_DELETE) ? &delete :
+ &repsert;
if (vy_tx_write(lsm, v->mem, v->entry, region_stmt) != 0)
return -1;
v->region_stmt = *region_stmt;
@@ -809,7 +808,8 @@ vy_tx_commit(struct vy_tx *tx, int64_t lsn)
/* Fix LSNs of the records and commit changes. */
struct txv *v;
- stailq_foreach_entry(v, &tx->log, next_in_log) {
+ stailq_foreach_entry(v, &tx->log, next_in_log)
+ {
if (v->region_stmt != NULL) {
struct vy_entry entry;
entry.stmt = v->region_stmt;
@@ -858,7 +858,8 @@ vy_tx_rollback_after_prepare(struct vy_tx *tx)
xm->last_prepared_tx = NULL;
struct txv *v;
- stailq_foreach_entry(v, &tx->log, next_in_log) {
+ stailq_foreach_entry(v, &tx->log, next_in_log)
+ {
if (v->region_stmt != NULL) {
struct vy_entry entry;
entry.stmt = v->region_stmt;
@@ -908,8 +909,7 @@ vy_tx_begin_statement(struct vy_tx *tx, struct space *space, void **savepoint)
void
vy_tx_rollback_statement(struct vy_tx *tx, void *svp)
{
- if (tx->state == VINYL_TX_ABORT ||
- tx->state == VINYL_TX_COMMIT)
+ if (tx->state == VINYL_TX_ABORT || tx->state == VINYL_TX_COMMIT)
return;
assert(tx->state == VINYL_TX_READY);
@@ -919,7 +919,8 @@ vy_tx_rollback_statement(struct vy_tx *tx, void *svp)
/* Rollback statements in LIFO order. */
stailq_reverse(&tail);
struct txv *v, *tmp;
- stailq_foreach_entry_safe(v, tmp, &tail, next_in_log) {
+ stailq_foreach_entry_safe(v, tmp, &tail, next_in_log)
+ {
write_set_remove(&tx->write_set, v);
if (v->overwritten != NULL) {
/* Restore overwritten statement. */
@@ -935,9 +936,8 @@ vy_tx_rollback_statement(struct vy_tx *tx, void *svp)
}
int
-vy_tx_track(struct vy_tx *tx, struct vy_lsm *lsm,
- struct vy_entry left, bool left_belongs,
- struct vy_entry right, bool right_belongs)
+vy_tx_track(struct vy_tx *tx, struct vy_lsm *lsm, struct vy_entry left,
+ bool left_belongs, struct vy_entry right, bool right_belongs)
{
if (vy_tx_is_in_read_view(tx)) {
/* No point in tracking reads. */
@@ -945,8 +945,8 @@ vy_tx_track(struct vy_tx *tx, struct vy_lsm *lsm,
}
struct vy_read_interval *new_interval;
- new_interval = vy_read_interval_new(tx, lsm, left, left_belongs,
- right, right_belongs);
+ new_interval = vy_read_interval_new(tx, lsm, left, left_belongs, right,
+ right_belongs);
if (new_interval == NULL)
return -1;
@@ -1006,7 +1006,8 @@ vy_tx_track(struct vy_tx *tx, struct vy_lsm *lsm,
}
struct vy_read_interval *next_interval;
stailq_foreach_entry_safe(interval, next_interval, &merge,
- in_merge) {
+ in_merge)
+ {
vy_tx_read_set_remove(&tx->read_set, interval);
vy_lsm_read_set_remove(&lsm->read_set, interval);
vy_read_interval_delete(interval);
@@ -1059,14 +1060,12 @@ vy_tx_set_entry(struct vy_tx *tx, struct vy_lsm *lsm, struct vy_entry entry)
if (old != NULL && vy_stmt_type(entry.stmt) == IPROTO_UPSERT) {
assert(lsm->index_id == 0);
uint8_t old_type = vy_stmt_type(old->entry.stmt);
- assert(old_type == IPROTO_UPSERT ||
- old_type == IPROTO_INSERT ||
- old_type == IPROTO_REPLACE ||
- old_type == IPROTO_DELETE);
- (void) old_type;
-
- applied = vy_entry_apply_upsert(entry, old->entry,
- lsm->cmp_def, true);
+ assert(old_type == IPROTO_UPSERT || old_type == IPROTO_INSERT ||
+ old_type == IPROTO_REPLACE || old_type == IPROTO_DELETE);
+ (void)old_type;
+
+ applied = vy_entry_apply_upsert(entry, old->entry, lsm->cmp_def,
+ true);
lsm->stat.upsert.applied++;
if (applied.stmt == NULL)
return -1;
@@ -1096,8 +1095,8 @@ vy_tx_set_entry(struct vy_tx *tx, struct vy_lsm *lsm, struct vy_entry entry)
*/
if (vy_stmt_flags(old->entry.stmt) & VY_STMT_DEFERRED_DELETE) {
uint8_t flags = vy_stmt_flags(entry.stmt);
- vy_stmt_set_flags(entry.stmt, flags |
- VY_STMT_DEFERRED_DELETE);
+ vy_stmt_set_flags(entry.stmt,
+ flags | VY_STMT_DEFERRED_DELETE);
}
}
@@ -1179,9 +1178,9 @@ vy_tx_manager_abort_writers_for_ro(struct vy_tx_manager *xm)
void
vy_txw_iterator_open(struct vy_txw_iterator *itr,
- struct vy_txw_iterator_stat *stat,
- struct vy_tx *tx, struct vy_lsm *lsm,
- enum iterator_type iterator_type, struct vy_entry key)
+ struct vy_txw_iterator_stat *stat, struct vy_tx *tx,
+ struct vy_lsm *lsm, enum iterator_type iterator_type,
+ struct vy_entry key)
{
itr->stat = stat;
itr->tx = tx;
@@ -1210,7 +1209,8 @@ vy_txw_iterator_seek(struct vy_txw_iterator *itr, struct vy_entry last)
if (last.stmt != NULL) {
key = last;
iterator_type = iterator_direction(iterator_type) > 0 ?
- ITER_GT : ITER_LT;
+ ITER_GT :
+ ITER_LT;
}
struct vy_lsm *lsm = itr->lsm;
@@ -1230,9 +1230,11 @@ vy_txw_iterator_seek(struct vy_txw_iterator *itr, struct vy_entry last)
struct txv *next;
if (iterator_type == ITER_LE ||
iterator_type == ITER_GT)
- next = write_set_next(&itr->tx->write_set, txv);
+ next = write_set_next(
+ &itr->tx->write_set, txv);
else
- next = write_set_prev(&itr->tx->write_set, txv);
+ next = write_set_prev(
+ &itr->tx->write_set, txv);
if (next == NULL || next->lsm != lsm)
break;
if (vy_entry_compare(key, next->entry,
@@ -1260,8 +1262,7 @@ vy_txw_iterator_seek(struct vy_txw_iterator *itr, struct vy_entry last)
}
NODISCARD int
-vy_txw_iterator_next(struct vy_txw_iterator *itr,
- struct vy_history *history)
+vy_txw_iterator_next(struct vy_txw_iterator *itr, struct vy_history *history)
{
vy_history_cleanup(history);
if (!itr->search_started) {
@@ -1273,9 +1274,11 @@ vy_txw_iterator_next(struct vy_txw_iterator *itr,
if (itr->curr_txv == NULL)
return 0;
if (itr->iterator_type == ITER_LE || itr->iterator_type == ITER_LT)
- itr->curr_txv = write_set_prev(&itr->tx->write_set, itr->curr_txv);
+ itr->curr_txv =
+ write_set_prev(&itr->tx->write_set, itr->curr_txv);
else
- itr->curr_txv = write_set_next(&itr->tx->write_set, itr->curr_txv);
+ itr->curr_txv =
+ write_set_next(&itr->tx->write_set, itr->curr_txv);
if (itr->curr_txv != NULL && itr->curr_txv->lsm != itr->lsm)
itr->curr_txv = NULL;
if (itr->curr_txv != NULL && itr->iterator_type == ITER_EQ &&
@@ -1305,8 +1308,9 @@ vy_txw_iterator_skip(struct vy_txw_iterator *itr, struct vy_entry last,
if (itr->search_started &&
(itr->curr_txv == NULL || last.stmt == NULL ||
iterator_direction(itr->iterator_type) *
- vy_entry_compare(itr->curr_txv->entry, last,
- itr->lsm->cmp_def) > 0))
+ vy_entry_compare(itr->curr_txv->entry, last,
+ itr->lsm->cmp_def) >
+ 0))
return 0;
vy_history_cleanup(history);
diff --git a/src/box/vy_tx.h b/src/box/vy_tx.h
index 4fac5f6..ba9be3d 100644
--- a/src/box/vy_tx.h
+++ b/src/box/vy_tx.h
@@ -128,7 +128,8 @@ write_set_key_cmp(struct write_set_key *a, struct txv *b);
typedef rb_tree(struct txv) write_set_t;
rb_gen_ext_key(MAYBE_UNUSED static inline, write_set_, write_set_t, struct txv,
- in_set, write_set_cmp, struct write_set_key *, write_set_key_cmp);
+ in_set, write_set_cmp, struct write_set_key *,
+ write_set_key_cmp);
static inline struct txv *
write_set_search_key(write_set_t *tree, struct vy_lsm *lsm,
@@ -296,7 +297,7 @@ vy_tx_manager_read_view(struct vy_tx_manager *xm);
/** Dereference and possibly destroy a read view. */
void
vy_tx_manager_destroy_read_view(struct vy_tx_manager *xm,
- struct vy_read_view *rv);
+ struct vy_read_view *rv);
/**
* Abort all rw transactions that affect the given space
@@ -309,7 +310,7 @@ vy_tx_manager_destroy_read_view(struct vy_tx_manager *xm,
*/
void
vy_tx_manager_abort_writers_for_ddl(struct vy_tx_manager *xm,
- struct space *space, bool *need_wal_sync);
+ struct space *space, bool *need_wal_sync);
/**
* Abort all local rw transactions that haven't reached WAL yet.
@@ -386,9 +387,8 @@ vy_tx_rollback_statement(struct vy_tx *tx, void *svp);
* @retval -1 Memory error.
*/
int
-vy_tx_track(struct vy_tx *tx, struct vy_lsm *lsm,
- struct vy_entry left, bool left_belongs,
- struct vy_entry right, bool right_belongs);
+vy_tx_track(struct vy_tx *tx, struct vy_lsm *lsm, struct vy_entry left,
+ bool left_belongs, struct vy_entry right, bool right_belongs);
/**
* Remember a point read in the conflict manager index.
@@ -453,9 +453,9 @@ struct vy_txw_iterator {
*/
void
vy_txw_iterator_open(struct vy_txw_iterator *itr,
- struct vy_txw_iterator_stat *stat,
- struct vy_tx *tx, struct vy_lsm *lsm,
- enum iterator_type iterator_type, struct vy_entry key);
+ struct vy_txw_iterator_stat *stat, struct vy_tx *tx,
+ struct vy_lsm *lsm, enum iterator_type iterator_type,
+ struct vy_entry key);
/**
* Advance a txw iterator to the next key.
@@ -463,8 +463,7 @@ vy_txw_iterator_open(struct vy_txw_iterator *itr,
* Returns 0 on success, -1 on memory allocation error.
*/
NODISCARD int
-vy_txw_iterator_next(struct vy_txw_iterator *itr,
- struct vy_history *history);
+vy_txw_iterator_next(struct vy_txw_iterator *itr, struct vy_history *history);
/**
* Advance a txw iterator to the key following @last.
diff --git a/src/box/vy_upsert.c b/src/box/vy_upsert.c
index 797492c..4ddbe26e 100644
--- a/src/box/vy_upsert.c
+++ b/src/box/vy_upsert.c
@@ -47,19 +47,17 @@
* @retval -1 - memory error
*/
static int
-vy_upsert_try_to_squash(struct tuple_format *format,
- const char *key_mp, const char *key_mp_end,
- const char *old_serie, const char *old_serie_end,
- const char *new_serie, const char *new_serie_end,
- struct tuple **result_stmt)
+vy_upsert_try_to_squash(struct tuple_format *format, const char *key_mp,
+ const char *key_mp_end, const char *old_serie,
+ const char *old_serie_end, const char *new_serie,
+ const char *new_serie_end, struct tuple **result_stmt)
{
*result_stmt = NULL;
size_t squashed_size;
- const char *squashed =
- xrow_upsert_squash(old_serie, old_serie_end,
- new_serie, new_serie_end, format,
- &squashed_size, 0);
+ const char *squashed = xrow_upsert_squash(old_serie, old_serie_end,
+ new_serie, new_serie_end,
+ format, &squashed_size, 0);
if (squashed == NULL)
return 0;
/* Successful squash! */
@@ -67,8 +65,8 @@ vy_upsert_try_to_squash(struct tuple_format *format,
operations[0].iov_base = (void *)squashed;
operations[0].iov_len = squashed_size;
- *result_stmt = vy_stmt_new_upsert(format, key_mp, key_mp_end,
- operations, 1);
+ *result_stmt =
+ vy_stmt_new_upsert(format, key_mp, key_mp_end, operations, 1);
if (*result_stmt == NULL)
return -1;
return 0;
@@ -119,21 +117,20 @@ vy_apply_upsert(struct tuple *new_stmt, struct tuple *old_stmt,
uint8_t old_type = vy_stmt_type(old_stmt);
uint64_t column_mask = COLUMN_MASK_FULL;
result_mp = xrow_upsert_execute(new_ops, new_ops_end, result_mp,
- result_mp_end, format, &mp_size,
- 0, suppress_error, &column_mask);
+ result_mp_end, format, &mp_size, 0,
+ suppress_error, &column_mask);
if (result_mp == NULL) {
region_truncate(region, region_svp);
return NULL;
}
result_mp_end = result_mp + mp_size;
if (old_type != IPROTO_UPSERT) {
- assert(old_type == IPROTO_INSERT ||
- old_type == IPROTO_REPLACE);
+ assert(old_type == IPROTO_INSERT || old_type == IPROTO_REPLACE);
/*
* UPDATE case: return the updated old stmt.
*/
- result_stmt = vy_stmt_new_replace(format, result_mp,
- result_mp_end);
+ result_stmt =
+ vy_stmt_new_replace(format, result_mp, result_mp_end);
region_truncate(region, region_svp);
if (result_stmt == NULL)
return NULL; /* OOM */
@@ -154,8 +151,8 @@ vy_apply_upsert(struct tuple *new_stmt, struct tuple *old_stmt,
* UPSERT + UPSERT case: combine operations
*/
assert(old_ops_end - old_ops > 0);
- if (vy_upsert_try_to_squash(format, result_mp, result_mp_end,
- old_ops, old_ops_end, new_ops, new_ops_end,
+ if (vy_upsert_try_to_squash(format, result_mp, result_mp_end, old_ops,
+ old_ops_end, new_ops, new_ops_end,
&result_stmt) != 0) {
region_truncate(region, region_svp);
return NULL;
@@ -195,8 +192,8 @@ check_key:
* Check that key hasn't been changed after applying operations.
*/
if (!key_update_can_be_skipped(cmp_def->column_mask, column_mask) &&
- vy_stmt_compare(old_stmt, HINT_NONE, result_stmt,
- HINT_NONE, cmp_def) != 0) {
+ vy_stmt_compare(old_stmt, HINT_NONE, result_stmt, HINT_NONE,
+ cmp_def) != 0) {
/*
* Key has been changed: ignore this UPSERT and
* @retval the old stmt.
diff --git a/src/box/vy_upsert.h b/src/box/vy_upsert.h
index 9b585e0..6a0daa7 100644
--- a/src/box/vy_upsert.h
+++ b/src/box/vy_upsert.h
@@ -73,8 +73,8 @@ vy_entry_apply_upsert(struct vy_entry new_entry, struct vy_entry old_entry,
{
struct vy_entry result;
result.hint = old_entry.stmt != NULL ? old_entry.hint : new_entry.hint;
- result.stmt = vy_apply_upsert(new_entry.stmt, old_entry.stmt,
- cmp_def, suppress_error);
+ result.stmt = vy_apply_upsert(new_entry.stmt, old_entry.stmt, cmp_def,
+ suppress_error);
return result.stmt != NULL ? result : vy_entry_none();
}
diff --git a/src/box/vy_write_iterator.c b/src/box/vy_write_iterator.c
index 78a52ae..4f99960 100644
--- a/src/box/vy_write_iterator.c
+++ b/src/box/vy_write_iterator.c
@@ -109,8 +109,9 @@ vy_write_history_new(struct vy_entry entry, struct vy_write_history *next)
return NULL;
}
h->entry = entry;
- assert(next == NULL || (next->entry.stmt != NULL &&
- vy_stmt_lsn(next->entry.stmt) > vy_stmt_lsn(entry.stmt)));
+ assert(next == NULL ||
+ (next->entry.stmt != NULL &&
+ vy_stmt_lsn(next->entry.stmt) > vy_stmt_lsn(entry.stmt)));
h->next = next;
vy_stmt_ref_if_possible(entry.stmt);
return h;
@@ -237,8 +238,8 @@ heap_less(heap_t *heap, struct vy_write_src *src1, struct vy_write_src *src2)
* Virtual sources use 0 for LSN, so they are ordered
* last automatically.
*/
- int64_t lsn1 = src1->is_end_of_key ? 0 : vy_stmt_lsn(src1->entry.stmt);
- int64_t lsn2 = src2->is_end_of_key ? 0 : vy_stmt_lsn(src2->entry.stmt);
+ int64_t lsn1 = src1->is_end_of_key ? 0 : vy_stmt_lsn(src1->entry.stmt);
+ int64_t lsn2 = src2->is_end_of_key ? 0 : vy_stmt_lsn(src2->entry.stmt);
if (lsn1 != lsn2)
return lsn1 > lsn2;
@@ -251,7 +252,6 @@ heap_less(heap_t *heap, struct vy_write_src *src1, struct vy_write_src *src2)
*/
return (vy_stmt_type(src1->entry.stmt) == IPROTO_DELETE ? 1 : 0) <
(vy_stmt_type(src2->entry.stmt) == IPROTO_DELETE ? 1 : 0);
-
}
/**
@@ -262,10 +262,10 @@ heap_less(heap_t *heap, struct vy_write_src *src1, struct vy_write_src *src2)
static struct vy_write_src *
vy_write_iterator_new_src(struct vy_write_iterator *stream)
{
- struct vy_write_src *res = (struct vy_write_src *) malloc(sizeof(*res));
+ struct vy_write_src *res = (struct vy_write_src *)malloc(sizeof(*res));
if (res == NULL) {
- diag_set(OutOfMemory, sizeof(*res),
- "malloc", "vinyl write stream");
+ diag_set(OutOfMemory, sizeof(*res), "malloc",
+ "vinyl write stream");
return NULL;
}
heap_node_create(&res->heap_node);
@@ -275,7 +275,6 @@ vy_write_iterator_new_src(struct vy_write_iterator *stream)
return res;
}
-
/** Close a stream, remove it from the write iterator and delete. */
static void
vy_write_iterator_delete_src(struct vy_write_iterator *stream,
@@ -310,8 +309,8 @@ vy_write_iterator_add_src(struct vy_write_iterator *stream,
rc = vy_source_heap_insert(&stream->src_heap, src);
if (rc != 0) {
- diag_set(OutOfMemory, sizeof(void *),
- "malloc", "vinyl write stream heap");
+ diag_set(OutOfMemory, sizeof(void *), "malloc",
+ "vinyl write stream heap");
goto stop;
}
return 0;
@@ -326,7 +325,7 @@ stop:
*/
static void
vy_write_iterator_remove_src(struct vy_write_iterator *stream,
- struct vy_write_src *src)
+ struct vy_write_src *src)
{
if (heap_node_is_stray(&src->heap_node))
return; /* already removed */
@@ -362,7 +361,7 @@ vy_write_iterator_new(struct key_def *cmp_def, bool is_primary,
size_t size = sizeof(struct vy_write_iterator) +
count * sizeof(struct vy_read_view_stmt);
struct vy_write_iterator *stream =
- (struct vy_write_iterator *) calloc(1, size);
+ (struct vy_write_iterator *)calloc(1, size);
if (stream == NULL) {
diag_set(OutOfMemory, size, "malloc", "write stream");
return NULL;
@@ -409,8 +408,8 @@ vy_write_iterator_start(struct vy_stmt_stream *vstream)
if (vy_write_iterator_add_src(stream, src) != 0)
goto fail;
#ifndef NDEBUG
- struct errinj *inj =
- errinj(ERRINJ_VY_WRITE_ITERATOR_START_FAIL, ERRINJ_BOOL);
+ struct errinj *inj = errinj(ERRINJ_VY_WRITE_ITERATOR_START_FAIL,
+ ERRINJ_BOOL);
if (inj != NULL && inj->bparam) {
inj->bparam = false;
diag_set(OutOfMemory, 666, "malloc", "struct vy_stmt");
@@ -447,7 +446,7 @@ vy_write_iterator_stop(struct vy_stmt_stream *vstream)
stream->deferred_delete = vy_entry_none();
}
struct vy_deferred_delete_handler *handler =
- stream->deferred_delete_handler;
+ stream->deferred_delete_handler;
if (handler != NULL) {
handler->iface->destroy(handler);
stream->deferred_delete_handler = NULL;
@@ -556,8 +555,7 @@ vy_write_iterator_push_rv(struct vy_write_iterator *stream,
assert(current_rv_i < stream->rv_count);
struct vy_read_view_stmt *rv = &stream->read_views[current_rv_i];
assert(rv->vlsn >= vy_stmt_lsn(entry.stmt));
- struct vy_write_history *h =
- vy_write_history_new(entry, rv->history);
+ struct vy_write_history *h = vy_write_history_new(entry, rv->history);
if (h == NULL)
return -1;
rv->history = h;
@@ -627,7 +625,7 @@ vy_write_iterator_deferred_delete(struct vy_write_iterator *stream,
*/
if (stream->deferred_delete.stmt != NULL) {
struct vy_deferred_delete_handler *handler =
- stream->deferred_delete_handler;
+ stream->deferred_delete_handler;
if (handler != NULL && vy_stmt_type(stmt) != IPROTO_DELETE &&
handler->iface->process(handler, stmt,
stream->deferred_delete.stmt) != 0)
@@ -669,8 +667,8 @@ vy_write_iterator_deferred_delete(struct vy_write_iterator *stream,
* @retval -1 Memory error.
*/
static NODISCARD int
-vy_write_iterator_build_history(struct vy_write_iterator *stream,
- int *count, bool *is_first_insert)
+vy_write_iterator_build_history(struct vy_write_iterator *stream, int *count,
+ bool *is_first_insert)
{
*count = 0;
*is_first_insert = false;
@@ -695,8 +693,8 @@ vy_write_iterator_build_history(struct vy_write_iterator *stream,
end_of_key_src.entry = src->entry;
int rc = vy_source_heap_insert(&stream->src_heap, &end_of_key_src);
if (rc) {
- diag_set(OutOfMemory, sizeof(void *),
- "malloc", "vinyl write stream heap");
+ diag_set(OutOfMemory, sizeof(void *), "malloc",
+ "vinyl write stream heap");
return rc;
}
vy_stmt_ref_if_possible(src->entry.stmt);
@@ -710,7 +708,8 @@ vy_write_iterator_build_history(struct vy_write_iterator *stream,
int64_t merge_until_lsn = vy_write_iterator_get_vlsn(stream, 1);
while (true) {
- *is_first_insert = vy_stmt_type(src->entry.stmt) == IPROTO_INSERT;
+ *is_first_insert = vy_stmt_type(src->entry.stmt) ==
+ IPROTO_INSERT;
if (!stream->is_primary &&
(vy_stmt_flags(src->entry.stmt) & VY_STMT_UPDATE) != 0) {
@@ -756,9 +755,8 @@ vy_write_iterator_build_history(struct vy_write_iterator *stream,
*/
current_rv_i++;
current_rv_lsn = merge_until_lsn;
- merge_until_lsn =
- vy_write_iterator_get_vlsn(stream,
- current_rv_i + 1);
+ merge_until_lsn = vy_write_iterator_get_vlsn(
+ stream, current_rv_i + 1);
}
/*
@@ -787,11 +785,10 @@ vy_write_iterator_build_history(struct vy_write_iterator *stream,
vy_stmt_type(src->entry.stmt) == IPROTO_DELETE) {
current_rv_i++;
current_rv_lsn = merge_until_lsn;
- merge_until_lsn =
- vy_write_iterator_get_vlsn(stream,
- current_rv_i + 1);
+ merge_until_lsn = vy_write_iterator_get_vlsn(
+ stream, current_rv_i + 1);
}
-next_lsn:
+ next_lsn:
rc = vy_write_iterator_merge_step(stream);
if (rc != 0)
break;
@@ -845,8 +842,7 @@ vy_read_view_merge(struct vy_write_iterator *stream, struct vy_entry prev,
* by a read view if it is preceded by another DELETE for
* the same key.
*/
- if (prev.stmt != NULL &&
- vy_stmt_type(prev.stmt) == IPROTO_DELETE &&
+ if (prev.stmt != NULL && vy_stmt_type(prev.stmt) == IPROTO_DELETE &&
vy_stmt_type(h->entry.stmt) == IPROTO_DELETE) {
vy_write_history_destroy(h);
rv->history = NULL;
@@ -871,13 +867,13 @@ vy_read_view_merge(struct vy_write_iterator *stream, struct vy_entry prev,
* it, whether is_last_level is true or not.
*/
if (vy_stmt_type(h->entry.stmt) == IPROTO_UPSERT &&
- (stream->is_last_level || (prev.stmt != NULL &&
- vy_stmt_type(prev.stmt) != IPROTO_UPSERT))) {
+ (stream->is_last_level ||
+ (prev.stmt != NULL && vy_stmt_type(prev.stmt) != IPROTO_UPSERT))) {
assert(!stream->is_last_level || prev.stmt == NULL ||
vy_stmt_type(prev.stmt) != IPROTO_UPSERT);
struct vy_entry applied;
- applied = vy_entry_apply_upsert(h->entry, prev,
- stream->cmp_def, false);
+ applied = vy_entry_apply_upsert(h->entry, prev, stream->cmp_def,
+ false);
if (applied.stmt == NULL)
return -1;
vy_stmt_unref_if_possible(h->entry.stmt);
@@ -1034,7 +1030,8 @@ vy_write_iterator_build_read_views(struct vy_write_iterator *stream, int *count)
for (; rv >= &stream->read_views[0]; --rv) {
if (rv->history == NULL)
continue;
- if (vy_read_view_merge(stream, prev, rv, is_first_insert) != 0) {
+ if (vy_read_view_merge(stream, prev, rv, is_first_insert) !=
+ 0) {
rc = -1;
goto cleanup;
}
@@ -1123,4 +1120,3 @@ static const struct vy_stmt_stream_iface vy_slice_stream_iface = {
.stop = vy_write_iterator_stop,
.close = vy_write_iterator_close
};
-
diff --git a/src/box/vy_write_iterator.h b/src/box/vy_write_iterator.h
index e217160..41884b0 100644
--- a/src/box/vy_write_iterator.h
+++ b/src/box/vy_write_iterator.h
@@ -215,16 +215,16 @@ struct vy_slice;
*
* @sa VY_STMT_DEFERRED_DELETE.
*/
-typedef int
-(*vy_deferred_delete_process_f)(struct vy_deferred_delete_handler *handler,
- struct tuple *old_stmt, struct tuple *new_stmt);
+typedef int (*vy_deferred_delete_process_f)(
+ struct vy_deferred_delete_handler *handler, struct tuple *old_stmt,
+ struct tuple *new_stmt);
/**
* Callack invoked by the write iterator to destroy a deferred
* DELETE handler when the iteration is stopped.
*/
-typedef void
-(*vy_deferred_delete_destroy_f)(struct vy_deferred_delete_handler *handler);
+typedef void (*vy_deferred_delete_destroy_f)(
+ struct vy_deferred_delete_handler *handler);
struct vy_deferred_delete_handler_iface {
vy_deferred_delete_process_f process;
@@ -269,4 +269,3 @@ vy_write_iterator_new_slice(struct vy_stmt_stream *stream,
struct tuple_format *disk_format);
#endif /* INCLUDES_TARANTOOL_BOX_VY_WRITE_STREAM_H */
-
diff --git a/src/box/wal.c b/src/box/wal.c
index 84abaa7..7210995 100644
--- a/src/box/wal.c
+++ b/src/box/wal.c
@@ -80,8 +80,7 @@ wal_write_none(struct journal *, struct journal_entry *);
* members used mainly in tx thread go first, wal thread members
* following.
*/
-struct wal_writer
-{
+struct wal_writer {
struct journal base;
/* ----------------- tx ------------------- */
wal_on_garbage_collection_f on_garbage_collection;
@@ -214,8 +213,8 @@ static void
tx_complete_batch(struct cmsg *msg);
static struct cmsg_hop wal_request_route[] = {
- {wal_write_to_disk, &wal_writer_singleton.tx_prio_pipe},
- {tx_complete_batch, NULL},
+ { wal_write_to_disk, &wal_writer_singleton.tx_prio_pipe },
+ { tx_complete_batch, NULL },
};
static void
@@ -231,7 +230,7 @@ wal_msg_create(struct wal_msg *batch)
static struct wal_msg *
wal_msg(struct cmsg *msg)
{
- return msg->route == wal_request_route ? (struct wal_msg *) msg : NULL;
+ return msg->route == wal_request_route ? (struct wal_msg *)msg : NULL;
}
/** Write a request to a log in a single transaction. */
@@ -249,7 +248,7 @@ xlog_write_entry(struct xlog *l, struct journal_entry *entry)
if (inj != NULL && inj->iparam == (*row)->lsn) {
(*row)->lsn = inj->iparam - 1;
say_warn("injected broken lsn: %lld",
- (long long) (*row)->lsn);
+ (long long)(*row)->lsn);
}
if (xlog_write_row(l, *row) < 0) {
/*
@@ -314,7 +313,7 @@ wal_begin_rollback(void)
static void
wal_complete_rollback(struct cmsg *base)
{
- (void) base;
+ (void)base;
/* WAL-thread can try writing transactions again. */
wal_writer_singleton.is_in_rollback = false;
}
@@ -329,16 +328,14 @@ tx_complete_rollback(void)
* transactions to rollback are collected, the last entry
* will be exactly, well, the last entry.
*/
- if (stailq_last_entry(&writer->rollback, struct journal_entry,
- fifo) != writer->last_entry)
+ if (stailq_last_entry(&writer->rollback, struct journal_entry, fifo) !=
+ writer->last_entry)
return;
stailq_reverse(&writer->rollback);
tx_schedule_queue(&writer->rollback);
/* TX-thread can try sending transactions to WAL again. */
stailq_create(&writer->rollback);
- static struct cmsg_hop route[] = {
- {wal_complete_rollback, NULL}
- };
+ static struct cmsg_hop route[] = { { wal_complete_rollback, NULL } };
static struct cmsg msg;
cmsg_init(&msg, route);
cpipe_push(&writer->wal_pipe, &msg);
@@ -356,20 +353,21 @@ static void
tx_complete_batch(struct cmsg *msg)
{
struct wal_writer *writer = &wal_writer_singleton;
- struct wal_msg *batch = (struct wal_msg *) msg;
+ struct wal_msg *batch = (struct wal_msg *)msg;
/*
* Move the rollback list to the writer first, since
* wal_msg memory disappears after the first
* iteration of tx_schedule_queue loop.
*/
- if (! stailq_empty(&batch->rollback)) {
+ if (!stailq_empty(&batch->rollback)) {
stailq_concat(&writer->rollback, &batch->rollback);
tx_complete_rollback();
}
/* Update the tx vclock to the latest written by wal. */
vclock_copy(&replicaset.vclock, &batch->vclock);
tx_schedule_queue(&batch->commit);
- mempool_free(&writer->msg_pool, container_of(msg, struct wal_msg, base));
+ mempool_free(&writer->msg_pool,
+ container_of(msg, struct wal_msg, base));
}
/**
@@ -417,10 +415,9 @@ wal_writer_create(struct wal_writer *writer, enum wal_mode wal_mode,
writer->wal_max_size = wal_max_size;
journal_create(&writer->base,
- wal_mode == WAL_NONE ?
- wal_write_none_async : wal_write_async,
- wal_mode == WAL_NONE ?
- wal_write_none : wal_write);
+ wal_mode == WAL_NONE ? wal_write_none_async :
+ wal_write_async,
+ wal_mode == WAL_NONE ? wal_write_none : wal_write);
struct xlog_opts opts = xlog_opts_default;
opts.sync_is_async = true;
@@ -463,8 +460,8 @@ wal_open_f(struct cbus_call_msg *msg)
{
(void)msg;
struct wal_writer *writer = &wal_writer_singleton;
- const char *path = xdir_format_filename(&writer->wal_dir,
- vclock_sum(&writer->vclock), NONE);
+ const char *path = xdir_format_filename(
+ &writer->wal_dir, vclock_sum(&writer->vclock), NONE);
assert(!xlog_is_open(&writer->current_wal));
return xlog_open(&writer->current_wal, path, &writer->wal_dir.opts);
}
@@ -475,8 +472,8 @@ wal_open_f(struct cbus_call_msg *msg)
static int
wal_open(struct wal_writer *writer)
{
- const char *path = xdir_format_filename(&writer->wal_dir,
- vclock_sum(&writer->vclock), NONE);
+ const char *path = xdir_format_filename(
+ &writer->wal_dir, vclock_sum(&writer->vclock), NONE);
if (access(path, F_OK) != 0) {
if (errno == ENOENT) {
/* No WAL, nothing to do. */
@@ -528,8 +525,8 @@ wal_open(struct wal_writer *writer)
}
int
-wal_init(enum wal_mode wal_mode, const char *wal_dirname,
- int64_t wal_max_size, const struct tt_uuid *instance_uuid,
+wal_init(enum wal_mode wal_mode, const char *wal_dirname, int64_t wal_max_size,
+ const struct tt_uuid *instance_uuid,
wal_on_garbage_collection_f on_garbage_collection,
wal_on_checkpoint_threshold_f on_checkpoint_threshold)
{
@@ -590,14 +587,14 @@ wal_free(void)
}
struct wal_vclock_msg {
- struct cbus_call_msg base;
- struct vclock vclock;
+ struct cbus_call_msg base;
+ struct vclock vclock;
};
static int
wal_sync_f(struct cbus_call_msg *data)
{
- struct wal_vclock_msg *msg = (struct wal_vclock_msg *) data;
+ struct wal_vclock_msg *msg = (struct wal_vclock_msg *)data;
struct wal_writer *writer = &wal_writer_singleton;
if (writer->is_in_rollback) {
/* We're rolling back a failed write. */
@@ -629,8 +626,8 @@ wal_sync(struct vclock *vclock)
}
bool cancellable = fiber_set_cancellable(false);
struct wal_vclock_msg msg;
- int rc = cbus_call(&writer->wal_pipe, &writer->tx_prio_pipe,
- &msg.base, wal_sync_f, NULL, TIMEOUT_INFINITY);
+ int rc = cbus_call(&writer->wal_pipe, &writer->tx_prio_pipe, &msg.base,
+ wal_sync_f, NULL, TIMEOUT_INFINITY);
fiber_set_cancellable(cancellable);
if (vclock != NULL)
vclock_copy(vclock, &msg.vclock);
@@ -640,7 +637,7 @@ wal_sync(struct vclock *vclock)
static int
wal_begin_checkpoint_f(struct cbus_call_msg *data)
{
- struct wal_checkpoint *msg = (struct wal_checkpoint *) data;
+ struct wal_checkpoint *msg = (struct wal_checkpoint *)data;
struct wal_writer *writer = &wal_writer_singleton;
if (writer->is_in_rollback) {
/*
@@ -656,8 +653,7 @@ wal_begin_checkpoint_f(struct cbus_call_msg *data)
*/
if (xlog_is_open(&writer->current_wal) &&
vclock_sum(&writer->current_wal.meta.vclock) !=
- vclock_sum(&writer->vclock)) {
-
+ vclock_sum(&writer->vclock)) {
xlog_close(&writer->current_wal, false);
/*
* The next WAL will be created on the first write.
@@ -702,7 +698,7 @@ wal_begin_checkpoint(struct wal_checkpoint *checkpoint)
static int
wal_commit_checkpoint_f(struct cbus_call_msg *data)
{
- struct wal_checkpoint *msg = (struct wal_checkpoint *) data;
+ struct wal_checkpoint *msg = (struct wal_checkpoint *)data;
struct wal_writer *writer = &wal_writer_singleton;
/*
* Now, once checkpoint has been created, we can update
@@ -730,9 +726,8 @@ wal_commit_checkpoint(struct wal_checkpoint *checkpoint)
return;
}
bool cancellable = fiber_set_cancellable(false);
- cbus_call(&writer->wal_pipe, &writer->tx_prio_pipe,
- &checkpoint->base, wal_commit_checkpoint_f, NULL,
- TIMEOUT_INFINITY);
+ cbus_call(&writer->wal_pipe, &writer->tx_prio_pipe, &checkpoint->base,
+ wal_commit_checkpoint_f, NULL, TIMEOUT_INFINITY);
fiber_set_cancellable(cancellable);
}
@@ -760,14 +755,12 @@ wal_set_checkpoint_threshold(int64_t threshold)
struct wal_set_checkpoint_threshold_msg msg;
msg.checkpoint_threshold = threshold;
bool cancellable = fiber_set_cancellable(false);
- cbus_call(&writer->wal_pipe, &writer->tx_prio_pipe,
- &msg.base, wal_set_checkpoint_threshold_f, NULL,
- TIMEOUT_INFINITY);
+ cbus_call(&writer->wal_pipe, &writer->tx_prio_pipe, &msg.base,
+ wal_set_checkpoint_threshold_f, NULL, TIMEOUT_INFINITY);
fiber_set_cancellable(cancellable);
}
-struct wal_gc_msg
-{
+struct wal_gc_msg {
struct cbus_call_msg base;
const struct vclock *vclock;
};
@@ -938,8 +931,8 @@ out:
};
struct tx_notify_gc_msg *msg = malloc(sizeof(*msg));
if (msg != NULL) {
- if (xdir_first_vclock(&writer->wal_dir,
- &msg->vclock) < 0)
+ if (xdir_first_vclock(&writer->wal_dir, &msg->vclock) <
+ 0)
vclock_copy(&msg->vclock, &writer->vclock);
cmsg_init(&msg->base, route);
cpipe_push(&writer->tx_prio_pipe, &msg->base);
@@ -955,14 +948,13 @@ out:
*/
static void
wal_assign_lsn(struct vclock *vclock_diff, struct vclock *base,
- struct xrow_header **row,
- struct xrow_header **end)
+ struct xrow_header **row, struct xrow_header **end)
{
int64_t tsn = 0;
struct xrow_header **start = row;
struct xrow_header **first_glob_row = row;
/** Assign LSN to all local rows. */
- for ( ; row < end; row++) {
+ for (; row < end; row++) {
if ((*row)->replica_id == 0) {
/*
* All rows representing local space data
@@ -975,8 +967,9 @@ wal_assign_lsn(struct vclock *vclock_diff, struct vclock *base,
if ((*row)->group_id != GROUP_LOCAL)
(*row)->replica_id = instance_id;
- (*row)->lsn = vclock_inc(vclock_diff, (*row)->replica_id) +
- vclock_get(base, (*row)->replica_id);
+ (*row)->lsn =
+ vclock_inc(vclock_diff, (*row)->replica_id) +
+ vclock_get(base, (*row)->replica_id);
/*
* Use lsn of the first global row as
* transaction id.
@@ -991,19 +984,22 @@ wal_assign_lsn(struct vclock *vclock_diff, struct vclock *base,
(*row)->tsn = tsn == 0 ? (*start)->lsn : tsn;
(*row)->is_commit = row == end - 1;
} else {
- int64_t diff = (*row)->lsn - vclock_get(base, (*row)->replica_id);
- if (diff <= vclock_get(vclock_diff,
- (*row)->replica_id)) {
+ int64_t diff = (*row)->lsn -
+ vclock_get(base, (*row)->replica_id);
+ if (diff <=
+ vclock_get(vclock_diff, (*row)->replica_id)) {
say_crit("Attempt to write a broken LSN to WAL:"
" replica id: %d, confirmed lsn: %d,"
- " new lsn %d", (*row)->replica_id,
+ " new lsn %d",
+ (*row)->replica_id,
vclock_get(base, (*row)->replica_id) +
- vclock_get(vclock_diff,
- (*row)->replica_id),
- (*row)->lsn);
+ vclock_get(vclock_diff,
+ (*row)->replica_id),
+ (*row)->lsn);
assert(false);
} else {
- vclock_follow(vclock_diff, (*row)->replica_id, diff);
+ vclock_follow(vclock_diff, (*row)->replica_id,
+ diff);
}
}
}
@@ -1021,7 +1017,7 @@ static void
wal_write_to_disk(struct cmsg *msg)
{
struct wal_writer *writer = &wal_writer_singleton;
- struct wal_msg *wal_msg = (struct wal_msg *) msg;
+ struct wal_msg *wal_msg = (struct wal_msg *)msg;
struct error *error;
/*
@@ -1090,11 +1086,12 @@ wal_write_to_disk(struct cmsg *msg)
int rc;
struct journal_entry *entry;
struct stailq_entry *last_committed = NULL;
- stailq_foreach_entry(entry, &wal_msg->commit, fifo) {
- wal_assign_lsn(&vclock_diff, &writer->vclock,
- entry->rows, entry->rows + entry->n_rows);
- entry->res = vclock_sum(&vclock_diff) +
- vclock_sum(&writer->vclock);
+ stailq_foreach_entry(entry, &wal_msg->commit, fifo)
+ {
+ wal_assign_lsn(&vclock_diff, &writer->vclock, entry->rows,
+ entry->rows + entry->n_rows);
+ entry->res =
+ vclock_sum(&vclock_diff) + vclock_sum(&writer->vclock);
rc = xlog_write_entry(l, entry);
if (rc < 0)
goto done;
@@ -1160,8 +1157,7 @@ done:
if (!stailq_empty(&rollback)) {
/* Update status of the successfully committed requests. */
- stailq_foreach_entry(entry, &rollback, fifo)
- entry->res = -1;
+ stailq_foreach_entry(entry, &rollback, fifo) entry->res = -1;
/* Rollback unprocessed requests */
stailq_concat(&wal_msg->rollback, &rollback);
wal_begin_rollback();
@@ -1175,7 +1171,7 @@ done:
static int
wal_writer_f(va_list ap)
{
- (void) ap;
+ (void)ap;
struct wal_writer *writer = &wal_writer_singleton;
/** Initialize eio in this thread */
@@ -1199,11 +1195,11 @@ wal_writer_f(va_list ap)
*/
if (writer->wal_mode != WAL_NONE &&
(!xlog_is_open(&writer->current_wal) ||
- vclock_compare(&writer->vclock,
- &writer->current_wal.meta.vclock) > 0)) {
+ vclock_compare(&writer->vclock, &writer->current_wal.meta.vclock) >
+ 0)) {
struct xlog l;
- if (xdir_create_xlog(&writer->wal_dir, &l,
- &writer->vclock) == 0)
+ if (xdir_create_xlog(&writer->wal_dir, &l, &writer->vclock) ==
+ 0)
xlog_close(&l, false);
else
diag_log();
@@ -1226,13 +1222,11 @@ wal_writer_f(va_list ap)
static int
wal_write_async(struct journal *journal, struct journal_entry *entry)
{
- struct wal_writer *writer = (struct wal_writer *) journal;
+ struct wal_writer *writer = (struct wal_writer *)journal;
- ERROR_INJECT(ERRINJ_WAL_IO, {
- goto fail;
- });
+ ERROR_INJECT(ERRINJ_WAL_IO, { goto fail; });
- if (! stailq_empty(&writer->rollback)) {
+ if (!stailq_empty(&writer->rollback)) {
/*
* The writer rollback queue is not empty,
* roll back this transaction immediately.
@@ -1250,13 +1244,12 @@ wal_write_async(struct journal *journal, struct journal_entry *entry)
if (!stailq_empty(&writer->wal_pipe.input) &&
(batch = wal_msg(stailq_first_entry(&writer->wal_pipe.input,
struct cmsg, fifo)))) {
-
stailq_add_tail_entry(&batch->commit, entry, fifo);
} else {
batch = (struct wal_msg *)mempool_alloc(&writer->msg_pool);
if (batch == NULL) {
- diag_set(OutOfMemory, sizeof(struct wal_msg),
- "region", "struct wal_msg");
+ diag_set(OutOfMemory, sizeof(struct wal_msg), "region",
+ "struct wal_msg");
goto fail;
}
wal_msg_create(batch);
@@ -1305,10 +1298,9 @@ wal_write(struct journal *journal, struct journal_entry *entry)
}
static int
-wal_write_none_async(struct journal *journal,
- struct journal_entry *entry)
+wal_write_none_async(struct journal *journal, struct journal_entry *entry)
{
- struct wal_writer *writer = (struct wal_writer *) journal;
+ struct wal_writer *writer = (struct wal_writer *)journal;
struct vclock vclock_diff;
vclock_create(&vclock_diff);
@@ -1333,8 +1325,7 @@ wal_init_vy_log(void)
xlog_clear(&vy_log_writer.xlog);
}
-struct wal_write_vy_log_msg
-{
+struct wal_write_vy_log_msg {
struct cbus_call_msg base;
struct journal_entry *entry;
};
@@ -1345,7 +1336,7 @@ wal_write_vy_log_f(struct cbus_call_msg *msg)
struct journal_entry *entry =
((struct wal_write_vy_log_msg *)msg)->entry;
- if (! xlog_is_open(&vy_log_writer.xlog)) {
+ if (!xlog_is_open(&vy_log_writer.xlog)) {
if (vy_log_open(&vy_log_writer.xlog) < 0)
return -1;
}
@@ -1364,11 +1355,10 @@ wal_write_vy_log(struct journal_entry *entry)
{
struct wal_writer *writer = &wal_writer_singleton;
struct wal_write_vy_log_msg msg;
- msg.entry= entry;
+ msg.entry = entry;
bool cancellable = fiber_set_cancellable(false);
- int rc = cbus_call(&writer->wal_pipe, &writer->tx_prio_pipe,
- &msg.base, wal_write_vy_log_f, NULL,
- TIMEOUT_INFINITY);
+ int rc = cbus_call(&writer->wal_pipe, &writer->tx_prio_pipe, &msg.base,
+ wal_write_vy_log_f, NULL, TIMEOUT_INFINITY);
fiber_set_cancellable(cancellable);
return rc;
}
@@ -1376,7 +1366,7 @@ wal_write_vy_log(struct journal_entry *entry)
static int
wal_rotate_vy_log_f(struct cbus_call_msg *msg)
{
- (void) msg;
+ (void)msg;
if (xlog_is_open(&vy_log_writer.xlog))
xlog_close(&vy_log_writer.xlog, false);
return 0;
@@ -1419,7 +1409,7 @@ wal_watcher_notify(struct wal_watcher *watcher, unsigned events)
static void
wal_watcher_notify_perform(struct cmsg *cmsg)
{
- struct wal_watcher_msg *msg = (struct wal_watcher_msg *) cmsg;
+ struct wal_watcher_msg *msg = (struct wal_watcher_msg *)cmsg;
struct wal_watcher *watcher = msg->watcher;
unsigned events = msg->events;
@@ -1429,7 +1419,7 @@ wal_watcher_notify_perform(struct cmsg *cmsg)
static void
wal_watcher_notify_complete(struct cmsg *cmsg)
{
- struct wal_watcher_msg *msg = (struct wal_watcher_msg *) cmsg;
+ struct wal_watcher_msg *msg = (struct wal_watcher_msg *)cmsg;
struct wal_watcher *watcher = msg->watcher;
cmsg->route = NULL;
@@ -1452,7 +1442,7 @@ wal_watcher_notify_complete(struct cmsg *cmsg)
static void
wal_watcher_attach(void *arg)
{
- struct wal_watcher *watcher = (struct wal_watcher *) arg;
+ struct wal_watcher *watcher = (struct wal_watcher *)arg;
struct wal_writer *writer = &wal_writer_singleton;
assert(rlist_empty(&watcher->next));
@@ -1468,7 +1458,7 @@ wal_watcher_attach(void *arg)
static void
wal_watcher_detach(void *arg)
{
- struct wal_watcher *watcher = (struct wal_watcher *) arg;
+ struct wal_watcher *watcher = (struct wal_watcher *)arg;
assert(!rlist_empty(&watcher->next));
rlist_del_entry(watcher, next);
@@ -1489,10 +1479,10 @@ wal_set_watcher(struct wal_watcher *watcher, const char *name,
watcher->pending_events = 0;
assert(lengthof(watcher->route) == 2);
- watcher->route[0] = (struct cmsg_hop)
- { wal_watcher_notify_perform, &watcher->wal_pipe };
- watcher->route[1] = (struct cmsg_hop)
- { wal_watcher_notify_complete, NULL };
+ watcher->route[0] = (struct cmsg_hop){ wal_watcher_notify_perform,
+ &watcher->wal_pipe };
+ watcher->route[1] =
+ (struct cmsg_hop){ wal_watcher_notify_complete, NULL };
cbus_pair("wal", name, &watcher->wal_pipe, &watcher->watcher_pipe,
wal_watcher_attach, watcher, process_cb);
}
@@ -1515,7 +1505,6 @@ wal_notify_watchers(struct wal_writer *writer, unsigned events)
wal_watcher_notify(watcher, events);
}
-
/**
* After fork, the WAL writer thread disappears.
* Make sure that atexit() handlers in the child do
diff --git a/src/box/wal.h b/src/box/wal.h
index 581306f..70db02f 100644
--- a/src/box/wal.h
+++ b/src/box/wal.h
@@ -81,8 +81,8 @@ typedef void (*wal_on_checkpoint_threshold_f)(void);
* Start WAL thread and initialize WAL writer.
*/
int
-wal_init(enum wal_mode wal_mode, const char *wal_dirname,
- int64_t wal_max_size, const struct tt_uuid *instance_uuid,
+wal_init(enum wal_mode wal_mode, const char *wal_dirname, int64_t wal_max_size,
+ const struct tt_uuid *instance_uuid,
wal_on_garbage_collection_f on_garbage_collection,
wal_on_checkpoint_threshold_f on_checkpoint_threshold);
@@ -113,9 +113,9 @@ struct wal_watcher_msg {
enum wal_event {
/** A row is written to the current WAL. */
- WAL_EVENT_WRITE = (1 << 0),
+ WAL_EVENT_WRITE = (1 << 0),
/** A new WAL is created. */
- WAL_EVENT_ROTATE = (1 << 1),
+ WAL_EVENT_ROTATE = (1 << 1),
};
struct wal_watcher {
diff --git a/src/box/xlog.c b/src/box/xlog.c
index 974f460..4c60382 100644
--- a/src/box/xlog.c
+++ b/src/box/xlog.c
@@ -54,9 +54,9 @@
* for a while. Define it manually if necessary.
*/
#ifdef HAVE_FALLOCATE
-# ifndef FALLOC_FL_KEEP_SIZE
-# define FALLOC_FL_KEEP_SIZE 0x01
-# endif
+#ifndef FALLOC_FL_KEEP_SIZE
+#define FALLOC_FL_KEEP_SIZE 0x01
+#endif
#endif /* HAVE_FALLOCATE */
/*
@@ -67,9 +67,12 @@
*/
typedef uint32_t log_magic_t;
-static const log_magic_t row_marker = mp_bswap_u32(0xd5ba0bab); /* host byte order */
-static const log_magic_t zrow_marker = mp_bswap_u32(0xd5ba0bba); /* host byte order */
-static const log_magic_t eof_marker = mp_bswap_u32(0xd510aded); /* host byte order */
+static const log_magic_t row_marker =
+ mp_bswap_u32(0xd5ba0bab); /* host byte order */
+static const log_magic_t zrow_marker =
+ mp_bswap_u32(0xd5ba0bba); /* host byte order */
+static const log_magic_t eof_marker =
+ mp_bswap_u32(0xd510aded); /* host byte order */
enum {
/**
@@ -121,8 +124,7 @@ static const char v12[] = "0.12";
void
xlog_meta_create(struct xlog_meta *meta, const char *filetype,
const struct tt_uuid *instance_uuid,
- const struct vclock *vclock,
- const struct vclock *prev_vclock)
+ const struct vclock *vclock, const struct vclock *prev_vclock)
{
snprintf(meta->filetype, sizeof(meta->filetype), "%s", filetype);
meta->instance_uuid = *instance_uuid;
@@ -154,9 +156,7 @@ xlog_meta_format(const struct xlog_meta *meta, char *buf, int size)
int total = 0;
SNPRINT(total, snprintf, buf, size,
"%s\n"
- "%s\n"
- VERSION_KEY ": %s\n"
- INSTANCE_UUID_KEY ": %s\n",
+ "%s\n" VERSION_KEY ": %s\n" INSTANCE_UUID_KEY ": %s\n",
meta->filetype, v13, PACKAGE_VERSION,
tt_uuid_str(&meta->instance_uuid));
if (vclock_is_set(&meta->vclock)) {
@@ -188,8 +188,10 @@ parse_vclock(const char *val, const char *val_end, struct vclock *vclock)
size_t off = vclock_from_string(vclock, str);
ERROR_INJECT(ERRINJ_XLOG_META, { off = 1; });
if (off != 0) {
- diag_set(XlogError, "invalid vclock at "
- "offset %zd", off);
+ diag_set(XlogError,
+ "invalid vclock at "
+ "offset %zd",
+ off);
return -1;
}
return 0;
@@ -211,12 +213,11 @@ xlog_meta_key_equal(const char *key, const char *key_end, const char *str)
* @retval 1 if buffer hasn't enough data
*/
static ssize_t
-xlog_meta_parse(struct xlog_meta *meta, const char **data,
- const char *data_end)
+xlog_meta_parse(struct xlog_meta *meta, const char **data, const char *data_end)
{
memset(meta, 0, sizeof(*meta));
- const char *end = (const char *)memmem(*data, data_end - *data,
- "\n\n", 2);
+ const char *end =
+ (const char *)memmem(*data, data_end - *data, "\n\n", 2);
if (end == NULL)
return 1;
++end; /* include the trailing \n to simplify the checks */
@@ -226,7 +227,7 @@ xlog_meta_parse(struct xlog_meta *meta, const char **data,
* Parse filetype, i.e "SNAP" or "XLOG"
*/
const char *eol = (const char *)memchr(pos, '\n', end - pos);
- if (eol == end || (eol - pos) >= (ptrdiff_t) sizeof(meta->filetype)) {
+ if (eol == end || (eol - pos) >= (ptrdiff_t)sizeof(meta->filetype)) {
diag_set(XlogError, "failed to parse xlog type string");
return -1;
}
@@ -240,7 +241,7 @@ xlog_meta_parse(struct xlog_meta *meta, const char **data,
*/
char version[10];
eol = (const char *)memchr(pos, '\n', end - pos);
- if (eol == end || (eol - pos) >= (ptrdiff_t) sizeof(version)) {
+ if (eol == end || (eol - pos) >= (ptrdiff_t)sizeof(version)) {
diag_set(XlogError, "failed to parse xlog version string");
return -1;
}
@@ -250,9 +251,8 @@ xlog_meta_parse(struct xlog_meta *meta, const char **data,
assert(pos <= end);
if (strncmp(version, v12, sizeof(v12)) != 0 &&
strncmp(version, v13, sizeof(v13)) != 0) {
- diag_set(XlogError,
- "unsupported file format version %s",
- version);
+ diag_set(XlogError, "unsupported file format version %s",
+ version);
return -1;
}
@@ -266,8 +266,7 @@ xlog_meta_parse(struct xlog_meta *meta, const char **data,
eol = (const char *)memchr(pos, '\n', end - pos);
assert(eol <= end);
const char *key = pos;
- const char *key_end = (const char *)
- memchr(key, ':', eol - key);
+ const char *key_end = (const char *)memchr(key, ':', eol - key);
if (key_end == NULL) {
diag_set(XlogError, "can't extract meta value");
return -1;
@@ -286,14 +285,17 @@ xlog_meta_parse(struct xlog_meta *meta, const char **data,
* Instance: <uuid>
*/
if (val_end - val != UUID_STR_LEN) {
- diag_set(XlogError, "can't parse instance UUID");
+ diag_set(XlogError,
+ "can't parse instance UUID");
return -1;
}
char uuid[UUID_STR_LEN + 1];
memcpy(uuid, val, UUID_STR_LEN);
uuid[UUID_STR_LEN] = '\0';
- if (tt_uuid_from_string(uuid, &meta->instance_uuid) != 0) {
- diag_set(XlogError, "can't parse instance UUID");
+ if (tt_uuid_from_string(uuid, &meta->instance_uuid) !=
+ 0) {
+ diag_set(XlogError,
+ "can't parse instance UUID");
return -1;
}
} else if (xlog_meta_key_equal(key, key_end, VCLOCK_KEY)) {
@@ -422,7 +424,7 @@ xdir_index_file(struct xdir *dir, int64_t signature)
* Append the clock describing the file to the
* directory index.
*/
- struct vclock *vclock = (struct vclock *) malloc(sizeof(*vclock));
+ struct vclock *vclock = (struct vclock *)malloc(sizeof(*vclock));
if (vclock == NULL) {
diag_set(OutOfMemory, sizeof(*vclock), "malloc", "vclock");
xlog_cursor_close(&cursor, false);
@@ -452,8 +454,8 @@ xdir_open_cursor(struct xdir *dir, int64_t signature,
struct xlog_meta *meta = &cursor->meta;
if (strcmp(meta->filetype, dir->filetype) != 0) {
xlog_cursor_close(cursor, false);
- diag_set(ClientError, ER_INVALID_XLOG_TYPE,
- dir->filetype, meta->filetype);
+ diag_set(ClientError, ER_INVALID_XLOG_TYPE, dir->filetype,
+ meta->filetype);
return -1;
}
if (!tt_uuid_is_nil(dir->instance_uuid) &&
@@ -521,15 +523,15 @@ xdir_open_cursor(struct xdir *dir, int64_t signature,
int
xdir_scan(struct xdir *dir, bool is_dir_required)
{
- DIR *dh = opendir(dir->dirname); /* log dir */
- int64_t *signatures = NULL; /* log file names */
+ DIR *dh = opendir(dir->dirname); /* log dir */
+ int64_t *signatures = NULL; /* log file names */
size_t s_count = 0, s_capacity = 0;
if (dh == NULL) {
if (!is_dir_required && errno == ENOENT)
return 0;
diag_set(SystemError, "error reading directory '%s'",
- dir->dirname);
+ dir->dirname);
return -1;
}
@@ -569,8 +571,8 @@ xdir_scan(struct xdir *dir, bool is_dir_required)
char *dot;
long long signature = strtoll(dent->d_name, &dot, 10);
- if (ext != dot ||
- signature == LLONG_MAX || signature == LLONG_MIN) {
+ if (ext != dot || signature == LLONG_MAX ||
+ signature == LLONG_MIN) {
say_warn("can't parse `%s', skipping", dent->d_name);
continue;
}
@@ -578,10 +580,10 @@ xdir_scan(struct xdir *dir, bool is_dir_required)
if (s_count == s_capacity) {
s_capacity = s_capacity > 0 ? 2 * s_capacity : 16;
size_t size = sizeof(*signatures) * s_capacity;
- signatures = (int64_t *) realloc(signatures, size);
+ signatures = (int64_t *)realloc(signatures, size);
if (signatures == NULL) {
- diag_set(OutOfMemory,
- size, "realloc", "signatures array");
+ diag_set(OutOfMemory, size, "realloc",
+ "signatures array");
goto exit;
}
}
@@ -612,7 +614,8 @@ xdir_scan(struct xdir *dir, bool is_dir_required)
/*
* force_recovery must not affect OOM
*/
- struct error *e = diag_last_error(&fiber()->diag);
+ struct error *e =
+ diag_last_error(&fiber()->diag);
if (!dir->force_recovery ||
type_assignable(&type_OutOfMemory, e->type))
goto exit;
@@ -621,8 +624,7 @@ xdir_scan(struct xdir *dir, bool is_dir_required)
}
i++;
} else {
- assert(s_old == s_new && i < s_count &&
- vclock != NULL);
+ assert(s_old == s_new && i < s_count && vclock != NULL);
vclock = vclockset_next(&dir->index, vclock);
i++;
}
@@ -638,10 +640,10 @@ exit:
int
xdir_check(struct xdir *dir)
{
- DIR *dh = opendir(dir->dirname); /* log dir */
+ DIR *dh = opendir(dir->dirname); /* log dir */
if (dh == NULL) {
diag_set(SystemError, "error reading directory '%s'",
- dir->dirname);
+ dir->dirname);
return -1;
}
closedir(dh);
@@ -650,12 +652,11 @@ xdir_check(struct xdir *dir)
const char *
xdir_format_filename(struct xdir *dir, int64_t signature,
- enum log_suffix suffix)
+ enum log_suffix suffix)
{
- return tt_snprintf(PATH_MAX, "%s/%020lld%s%s",
- dir->dirname, (long long) signature,
- dir->filename_ext, suffix == INPROGRESS ?
- inprogress_suffix : "");
+ return tt_snprintf(PATH_MAX, "%s/%020lld%s%s", dir->dirname,
+ (long long)signature, dir->filename_ext,
+ suffix == INPROGRESS ? inprogress_suffix : "");
}
static void
@@ -734,7 +735,6 @@ xdir_add_vclock(struct xdir *xdir, const struct vclock *vclock)
/* }}} */
-
/* {{{ struct xlog */
int
@@ -754,8 +754,7 @@ xlog_rename(struct xlog *l)
if (rename(filename, new_filename) != 0) {
say_syserror("can't rename %s to %s", filename, new_filename);
- diag_set(SystemError, "failed to rename '%s' file",
- filename);
+ diag_set(SystemError, "failed to rename '%s' file", filename);
return -1;
}
l->is_inprogress = false;
@@ -824,7 +823,8 @@ xlog_create(struct xlog *xlog, const char *name, int flags,
xlog->meta = *meta;
xlog->is_inprogress = true;
- snprintf(xlog->filename, sizeof(xlog->filename), "%s%s", name, inprogress_suffix);
+ snprintf(xlog->filename, sizeof(xlog->filename), "%s%s", name,
+ inprogress_suffix);
/* Make directory if needed (gh-5090). */
if (mkdirpath(xlog->filename) != 0) {
@@ -927,7 +927,7 @@ xlog_open(struct xlog *xlog, const char *name, const struct xlog_opts *opts)
goto err_read;
}
if (rc != sizeof(magic) || load_u32(magic) != eof_marker) {
-no_eof:
+ no_eof:
xlog->offset = fio_lseek(xlog->fd, 0, SEEK_END);
if (xlog->offset < 0) {
diag_set(SystemError, "failed to seek file '%s'",
@@ -991,12 +991,12 @@ xdir_create_xlog(struct xdir *dir, struct xlog *xlog,
prev_vclock = vclockset_last(&dir->index);
struct xlog_meta meta;
- xlog_meta_create(&meta, dir->filetype, dir->instance_uuid,
- vclock, prev_vclock);
+ xlog_meta_create(&meta, dir->filetype, dir->instance_uuid, vclock,
+ prev_vclock);
const char *filename = xdir_format_filename(dir, signature, NONE);
- if (xlog_create(xlog, filename, dir->open_wflags, &meta,
- &dir->opts) != 0)
+ if (xlog_create(xlog, filename, dir->open_wflags, &meta, &dir->opts) !=
+ 0)
return -1;
/* Rename xlog file */
@@ -1071,8 +1071,7 @@ xlog_tx_write_plain(struct xlog *log)
struct iovec *iov;
size_t offset = XLOG_FIXHEADER_SIZE;
for (iov = log->obuf.iov; iov->iov_len; ++iov) {
- crc32c = crc32_calc(crc32c,
- (char *)iov->iov_base + offset,
+ crc32c = crc32_calc(crc32c, (char *)iov->iov_base + offset,
iov->iov_len - offset);
offset = 0;
}
@@ -1095,7 +1094,8 @@ xlog_tx_write_plain(struct xlog *log)
return -1;
});
- ssize_t written = fio_writevn(log->fd, log->obuf.iov, log->obuf.pos + 1);
+ ssize_t written =
+ fio_writevn(log->fd, log->obuf.iov, log->obuf.pos + 1);
if (written < 0) {
diag_set(SystemError, "failed to write to '%s' file",
log->filename);
@@ -1112,8 +1112,7 @@ xlog_tx_write_plain(struct xlog *log)
static off_t
xlog_tx_write_zstd(struct xlog *log)
{
- char *fixheader = (char *)obuf_alloc(&log->zbuf,
- XLOG_FIXHEADER_SIZE);
+ char *fixheader = (char *)obuf_alloc(&log->zbuf, XLOG_FIXHEADER_SIZE);
uint32_t crc32c = 0;
struct iovec *iov;
@@ -1127,11 +1126,11 @@ xlog_tx_write_zstd(struct xlog *log)
void *zdst = obuf_reserve(&log->zbuf, zmax_size);
if (!zdst) {
diag_set(OutOfMemory, zmax_size, "runtime arena",
- "compression buffer");
+ "compression buffer");
goto error;
}
- size_t (*fcompress)(ZSTD_CCtx *, void *, size_t,
- const void *, size_t);
+ size_t (*fcompress)(ZSTD_CCtx *, void *, size_t, const void *,
+ size_t);
/*
* If it's the last iov or the last
* log has 0 bytes, end the stream.
@@ -1185,8 +1184,7 @@ xlog_tx_write_zstd(struct xlog *log)
});
ssize_t written;
- written = fio_writevn(log->fd, log->zbuf.iov,
- log->zbuf.pos + 1);
+ written = fio_writevn(log->fd, log->zbuf.iov, log->zbuf.pos + 1);
if (written < 0) {
diag_set(SystemError, "failed to write to '%s' file",
log->filename);
@@ -1200,9 +1198,9 @@ error:
}
/* file syncing and posix_fadvise() should be rounded by a page boundary */
-#define SYNC_MASK (4096 - 1)
-#define SYNC_ROUND_DOWN(size) ((size) & ~(4096 - 1))
-#define SYNC_ROUND_UP(size) (SYNC_ROUND_DOWN(size + SYNC_MASK))
+#define SYNC_MASK (4096 - 1)
+#define SYNC_ROUND_DOWN(size) ((size) & ~(4096 - 1))
+#define SYNC_ROUND_UP(size) (SYNC_ROUND_DOWN(size + SYNC_MASK))
/**
* Writes xlog batch to file
@@ -1234,7 +1232,8 @@ xlog_tx_write(struct xlog *log)
if (written < 0) {
if (lseek(log->fd, log->offset, SEEK_SET) < 0 ||
ftruncate(log->fd, log->offset) != 0)
- panic_syserror("failed to truncate xlog after write error");
+ panic_syserror(
+ "failed to truncate xlog after write error");
log->allocated = 0;
return -1;
}
@@ -1245,17 +1244,18 @@ xlog_tx_write(struct xlog *log)
log->offset += written;
log->rows += log->tx_rows;
log->tx_rows = 0;
- if ((log->opts.sync_interval && log->offset >=
- (off_t)(log->synced_size + log->opts.sync_interval)) ||
- (log->opts.rate_limit && log->offset >=
- (off_t)(log->synced_size + log->opts.rate_limit))) {
+ if ((log->opts.sync_interval &&
+ log->offset >=
+ (off_t)(log->synced_size + log->opts.sync_interval)) ||
+ (log->opts.rate_limit &&
+ log->offset >= (off_t)(log->synced_size + log->opts.rate_limit))) {
off_t sync_from = SYNC_ROUND_DOWN(log->synced_size);
- size_t sync_len = SYNC_ROUND_UP(log->offset) -
- sync_from;
+ size_t sync_len = SYNC_ROUND_UP(log->offset) - sync_from;
if (log->opts.rate_limit > 0) {
double throttle_time;
- throttle_time = (double)sync_len / log->opts.rate_limit -
- (ev_monotonic_time() - log->sync_time);
+ throttle_time =
+ (double)sync_len / log->opts.rate_limit -
+ (ev_monotonic_time() - log->sync_time);
if (throttle_time > 0)
ev_sleep(throttle_time);
}
@@ -1263,8 +1263,8 @@ xlog_tx_write(struct xlog *log)
#ifdef HAVE_SYNC_FILE_RANGE
sync_file_range(log->fd, sync_from, sync_len,
SYNC_FILE_RANGE_WAIT_BEFORE |
- SYNC_FILE_RANGE_WRITE |
- SYNC_FILE_RANGE_WAIT_AFTER);
+ SYNC_FILE_RANGE_WRITE |
+ SYNC_FILE_RANGE_WAIT_AFTER);
#else
fdatasync(log->fd);
#endif /* HAVE_SYNC_FILE_RANGE */
@@ -1277,8 +1277,8 @@ xlog_tx_write(struct xlog *log)
say_syserror("posix_fadvise, fd=%i", log->fd);
}
#else
- (void) sync_from;
- (void) sync_len;
+ (void)sync_from;
+ (void)sync_len;
#endif /* HAVE_POSIX_FADVISE */
}
log->synced_size = log->offset;
@@ -1303,7 +1303,7 @@ xlog_write_row(struct xlog *log, const struct xrow_header *packet)
if (obuf_size(&log->obuf) == 0) {
if (!obuf_alloc(&log->obuf, XLOG_FIXHEADER_SIZE)) {
diag_set(OutOfMemory, XLOG_FIXHEADER_SIZE,
- "runtime arena", "xlog tx output buffer");
+ "runtime arena", "xlog tx output buffer");
return -1;
}
}
@@ -1319,8 +1319,8 @@ xlog_write_row(struct xlog *log, const struct xrow_header *packet)
return -1;
}
for (int i = 0; i < iovcnt; ++i) {
- struct errinj *inj = errinj(ERRINJ_WAL_WRITE_PARTIAL,
- ERRINJ_INT);
+ struct errinj *inj =
+ errinj(ERRINJ_WAL_WRITE_PARTIAL, ERRINJ_INT);
if (inj != NULL && inj->iparam >= 0 &&
obuf_size(&log->obuf) > (size_t)inj->iparam) {
diag_set(ClientError, ER_INJECTION,
@@ -1331,7 +1331,7 @@ xlog_write_row(struct xlog *log, const struct xrow_header *packet)
if (obuf_dup(&log->obuf, iov[i].iov_base, iov[i].iov_len) <
iov[i].iov_len) {
diag_set(OutOfMemory, XLOG_FIXHEADER_SIZE,
- "runtime arena", "xlog tx output buffer");
+ "runtime arena", "xlog tx output buffer");
obuf_rollback_to_svp(&log->obuf, &svp);
return -1;
}
@@ -1404,11 +1404,10 @@ xlog_flush(struct xlog *log)
static int
sync_cb(eio_req *req)
{
- int fd = (intptr_t) req->data;
+ int fd = (intptr_t)req->data;
if (req->result) {
errno = req->errorno;
- say_syserror("%s: fsync() failed",
- fio_filename(fd));
+ say_syserror("%s: fsync() failed", fio_filename(fd));
errno = 0;
}
close(fd);
@@ -1424,7 +1423,7 @@ xlog_sync(struct xlog *l)
say_syserror("%s: dup() failed", l->filename);
return -1;
}
- eio_fsync(fd, 0, sync_cb, (void *) (intptr_t) fd);
+ eio_fsync(fd, 0, sync_cb, (void *)(intptr_t)fd);
} else if (fsync(l->fd) < 0) {
say_syserror("%s: fsync failed", l->filename);
return -1;
@@ -1503,7 +1502,7 @@ xlog_atfork(struct xlog *xlog)
/* {{{ struct xlog_cursor */
-#define XLOG_READ_AHEAD (1 << 14)
+#define XLOG_READ_AHEAD (1 << 14)
/**
* Ensure that at least count bytes are in read buffer
@@ -1531,8 +1530,7 @@ xlog_cursor_ensure(struct xlog_cursor *cursor, size_t count)
return -1;
}
ssize_t readen;
- readen = fio_pread(cursor->fd, dst, to_load,
- cursor->read_offset);
+ readen = fio_pread(cursor->fd, dst, to_load, cursor->read_offset);
struct errinj *inj = errinj(ERRINJ_XLOG_READ, ERRINJ_INT);
if (inj != NULL && inj->iparam >= 0 &&
inj->iparam < cursor->read_offset) {
@@ -1540,15 +1538,14 @@ xlog_cursor_ensure(struct xlog_cursor *cursor, size_t count)
errno = EIO;
};
if (readen < 0) {
- diag_set(SystemError, "failed to read '%s' file",
- cursor->name);
+ diag_set(SystemError, "failed to read '%s' file", cursor->name);
return -1;
}
/* ibuf_reserve() has been called above, ibuf_alloc() must not fail */
assert((size_t)readen <= to_load);
ibuf_alloc(&cursor->rbuf, readen);
cursor->read_offset += readen;
- return ibuf_used(&cursor->rbuf) >= count ? 0: 1;
+ return ibuf_used(&cursor->rbuf) >= count ? 0 : 1;
}
/**
@@ -1562,8 +1559,8 @@ static int
xlog_cursor_decompress(char **rows, char *rows_end, const char **data,
const char *data_end, ZSTD_DStream *zdctx)
{
- ZSTD_inBuffer input = {*data, (size_t)(data_end - *data), 0};
- ZSTD_outBuffer output = {*rows, (size_t)(rows_end - *rows), 0};
+ ZSTD_inBuffer input = { *data, (size_t)(data_end - *data), 0 };
+ ZSTD_outBuffer output = { *rows, (size_t)(rows_end - *rows), 0 };
while (input.pos < input.size && output.pos < output.size) {
size_t rc = ZSTD_decompressStream(zdctx, &output, &input);
@@ -1576,7 +1573,7 @@ xlog_cursor_decompress(char **rows, char *rows_end, const char **data,
*rows = (char *)output.dst + output.pos;
*data = (char *)input.src + input.pos;
}
- return input.pos == input.size ? 0: 1;
+ return input.pos == input.size ? 0 : 1;
}
/**
@@ -1610,8 +1607,8 @@ struct xlog_fixheader {
* @retval count of bytes left to parse header
*/
static ssize_t
-xlog_fixheader_decode(struct xlog_fixheader *fixheader,
- const char **data, const char *data_end)
+xlog_fixheader_decode(struct xlog_fixheader *fixheader, const char **data,
+ const char *data_end)
{
if (data_end - *data < (ptrdiff_t)XLOG_FIXHEADER_SIZE)
return XLOG_FIXHEADER_SIZE - (data_end - *data);
@@ -1620,8 +1617,7 @@ xlog_fixheader_decode(struct xlog_fixheader *fixheader,
/* Decode magic */
fixheader->magic = load_u32(pos);
- if (fixheader->magic != row_marker &&
- fixheader->magic != zrow_marker) {
+ if (fixheader->magic != row_marker && fixheader->magic != zrow_marker) {
diag_set(XlogError, "invalid magic: 0x%x", fixheader->magic);
return -1;
}
@@ -1671,8 +1667,8 @@ xlog_fixheader_decode(struct xlog_fixheader *fixheader,
}
int
-xlog_tx_decode(const char *data, const char *data_end,
- char *rows, char *rows_end, ZSTD_DStream *zdctx)
+xlog_tx_decode(const char *data, const char *data_end, char *rows,
+ char *rows_end, ZSTD_DStream *zdctx)
{
/* Decode fixheader */
struct xlog_fixheader fixheader;
@@ -1681,14 +1677,16 @@ xlog_tx_decode(const char *data, const char *data_end,
/* Check that buffer has enough bytes */
if (data + fixheader.len != data_end) {
- diag_set(XlogError, "invalid compressed length: "
- "expected %zd, got %u",
- data_end - data, fixheader.len);
+ diag_set(XlogError,
+ "invalid compressed length: "
+ "expected %zd, got %u",
+ data_end - data, fixheader.len);
return -1;
}
ERROR_INJECT(ERRINJ_XLOG_GARBAGE, {
- *((char *)data + fixheader.len / 2) = ~*((char *)data + fixheader.len / 2);
+ *((char *)data + fixheader.len / 2) =
+ ~*((char *)data + fixheader.len / 2);
});
/* Validate checksum */
@@ -1700,9 +1698,10 @@ xlog_tx_decode(const char *data, const char *data_end,
/* Copy uncompressed rows */
if (fixheader.magic == row_marker) {
if (rows_end - rows != (ptrdiff_t)fixheader.len) {
- diag_set(XlogError, "invalid unpacked length: "
- "expected %zd, got %u",
- rows_end - data, fixheader.len);
+ diag_set(XlogError,
+ "invalid unpacked length: "
+ "expected %zd, got %u",
+ rows_end - data, fixheader.len);
return -1;
}
memcpy(rows, data, fixheader.len);
@@ -1712,14 +1711,16 @@ xlog_tx_decode(const char *data, const char *data_end,
/* Decompress zstd rows */
assert(fixheader.magic == zrow_marker);
ZSTD_initDStream(zdctx);
- int rc = xlog_cursor_decompress(&rows, rows_end, &data, data_end,
- zdctx);
+ int rc =
+ xlog_cursor_decompress(&rows, rows_end, &data, data_end, zdctx);
if (rc < 0) {
return -1;
} else if (rc > 0) {
- diag_set(XlogError, "invalid decompressed length: "
- "expected %zd, got %zd", rows_end - data,
- rows_end - data + XLOG_TX_AUTOCOMMIT_THRESHOLD);
+ diag_set(XlogError,
+ "invalid decompressed length: "
+ "expected %zd, got %zd",
+ rows_end - data,
+ rows_end - data + XLOG_TX_AUTOCOMMIT_THRESHOLD);
return -1;
}
@@ -1733,9 +1734,8 @@ xlog_tx_decode(const char *data, const char *data_end,
* @retval >0 how many bytes we will have for continue
*/
ssize_t
-xlog_tx_cursor_create(struct xlog_tx_cursor *tx_cursor,
- const char **data, const char *data_end,
- ZSTD_DStream *zdctx)
+xlog_tx_cursor_create(struct xlog_tx_cursor *tx_cursor, const char **data,
+ const char *data_end, ZSTD_DStream *zdctx)
{
const char *rpos = *data;
struct xlog_fixheader fixheader;
@@ -1749,7 +1749,8 @@ xlog_tx_cursor_create(struct xlog_tx_cursor *tx_cursor,
return fixheader.len - (data_end - rpos);
ERROR_INJECT(ERRINJ_XLOG_GARBAGE, {
- *((char *)rpos + fixheader.len / 2) = ~*((char *)rpos + fixheader.len / 2);
+ *((char *)rpos + fixheader.len / 2) =
+ ~*((char *)rpos + fixheader.len / 2);
});
/* Validate checksum */
@@ -1764,8 +1765,8 @@ xlog_tx_cursor_create(struct xlog_tx_cursor *tx_cursor,
if (fixheader.magic == row_marker) {
void *dst = ibuf_alloc(&tx_cursor->rows, fixheader.len);
if (dst == NULL) {
- diag_set(OutOfMemory, fixheader.len,
- "runtime", "xlog rows buffer");
+ diag_set(OutOfMemory, fixheader.len, "runtime",
+ "xlog rows buffer");
ibuf_destroy(&tx_cursor->rows);
return -1;
}
@@ -1783,7 +1784,7 @@ xlog_tx_cursor_create(struct xlog_tx_cursor *tx_cursor,
if (ibuf_reserve(&tx_cursor->rows,
XLOG_TX_AUTOCOMMIT_THRESHOLD) == NULL) {
diag_set(OutOfMemory, XLOG_TX_AUTOCOMMIT_THRESHOLD,
- "runtime", "xlog output buffer");
+ "runtime", "xlog output buffer");
ibuf_destroy(&tx_cursor->rows);
return -1;
}
@@ -1801,13 +1802,12 @@ xlog_tx_cursor_create(struct xlog_tx_cursor *tx_cursor,
int
xlog_tx_cursor_next_row(struct xlog_tx_cursor *tx_cursor,
- struct xrow_header *xrow)
+ struct xrow_header *xrow)
{
if (ibuf_used(&tx_cursor->rows) == 0)
return 1;
/* Return row from xlog tx buffer */
- int rc = xrow_header_decode(xrow,
- (const char **)&tx_cursor->rows.rpos,
+ int rc = xrow_header_decode(xrow, (const char **)&tx_cursor->rows.rpos,
(const char *)tx_cursor->rows.wpos, false);
if (rc != 0) {
diag_set(XlogError, "can't parse row");
@@ -1897,9 +1897,10 @@ eof_found:
if (rc < 0)
return -1;
if (rc == 0) {
- diag_set(XlogError, "%s: has some data after "
- "eof marker at %lld", i->name,
- xlog_cursor_pos(i));
+ diag_set(XlogError,
+ "%s: has some data after "
+ "eof marker at %lld",
+ i->name, xlog_cursor_pos(i));
return -1;
}
i->state = XLOG_CURSOR_EOF;
@@ -1921,8 +1922,8 @@ xlog_cursor_next_row(struct xlog_cursor *cursor, struct xrow_header *xrow)
}
int
-xlog_cursor_next(struct xlog_cursor *cursor,
- struct xrow_header *xrow, bool force_recovery)
+xlog_cursor_next(struct xlog_cursor *cursor, struct xrow_header *xrow,
+ bool force_recovery)
{
assert(xlog_cursor_is_open(cursor));
while (true) {
@@ -1932,15 +1933,13 @@ xlog_cursor_next(struct xlog_cursor *cursor,
break;
if (rc < 0) {
struct error *e = diag_last_error(diag_get());
- if (!force_recovery ||
- e->type != &type_XlogError)
+ if (!force_recovery || e->type != &type_XlogError)
return -1;
say_error("can't decode row: %s", e->errmsg);
}
while ((rc = xlog_cursor_next_tx(cursor)) < 0) {
struct error *e = diag_last_error(diag_get());
- if (!force_recovery ||
- e->type != &type_XlogError)
+ if (!force_recovery || e->type != &type_XlogError)
return -1;
say_error("can't open tx: %s", e->errmsg);
if ((rc = xlog_cursor_find_tx_magic(cursor)) < 0)
@@ -1970,13 +1969,14 @@ xlog_cursor_openfd(struct xlog_cursor *i, int fd, const char *name)
rc = xlog_cursor_ensure(i, XLOG_META_LEN_MAX);
if (rc == -1)
goto error;
- rc = xlog_meta_parse(&i->meta,
- (const char **)&i->rbuf.rpos,
+ rc = xlog_meta_parse(&i->meta, (const char **)&i->rbuf.rpos,
(const char *)i->rbuf.wpos);
if (rc == -1)
goto error;
if (rc > 0) {
- diag_set(XlogError, "Unexpected end of file, run with 'force_recovery = true'");
+ diag_set(
+ XlogError,
+ "Unexpected end of file, run with 'force_recovery = true'");
goto error;
}
snprintf(i->name, sizeof(i->name), "%s", name);
@@ -2021,14 +2021,13 @@ xlog_cursor_openmem(struct xlog_cursor *i, const char *data, size_t size,
void *dst = ibuf_alloc(&i->rbuf, size);
if (dst == NULL) {
diag_set(OutOfMemory, size, "runtime",
- "xlog cursor read buffer");
+ "xlog cursor read buffer");
goto error;
}
memcpy(dst, data, size);
i->read_offset = size;
int rc;
- rc = xlog_meta_parse(&i->meta,
- (const char **)&i->rbuf.rpos,
+ rc = xlog_meta_parse(&i->meta, (const char **)&i->rbuf.rpos,
(const char *)i->rbuf.wpos);
if (rc < 0)
goto error;
@@ -2061,8 +2060,8 @@ xlog_cursor_close(struct xlog_cursor *i, bool reuse_fd)
if (i->state == XLOG_CURSOR_TX)
xlog_tx_cursor_destroy(&i->tx_cursor);
ZSTD_freeDStream(i->zdctx);
- i->state = (i->state == XLOG_CURSOR_EOF ?
- XLOG_CURSOR_EOF_CLOSED : XLOG_CURSOR_CLOSED);
+ i->state = (i->state == XLOG_CURSOR_EOF ? XLOG_CURSOR_EOF_CLOSED :
+ XLOG_CURSOR_CLOSED);
/*
* Do not trash the cursor object since the caller might
* still want to access its state and/or meta information.
diff --git a/src/box/xlog.h b/src/box/xlog.h
index 5b1f42c..cb7c2d3 100644
--- a/src/box/xlog.h
+++ b/src/box/xlog.h
@@ -94,9 +94,9 @@ extern const struct xlog_opts xlog_opts_default;
* but an xlog object sees only those files which match its type.
*/
enum xdir_type {
- SNAP, /* memtx snapshot */
- XLOG, /* write ahead log */
- VYLOG, /* vinyl metadata log */
+ SNAP, /* memtx snapshot */
+ XLOG, /* write ahead log */
+ VYLOG, /* vinyl metadata log */
};
/**
@@ -323,8 +323,7 @@ struct xlog_meta {
void
xlog_meta_create(struct xlog_meta *meta, const char *filetype,
const struct tt_uuid *instance_uuid,
- const struct vclock *vclock,
- const struct vclock *prev_vclock);
+ const struct vclock *vclock, const struct vclock *prev_vclock);
/* }}} */
@@ -455,7 +454,6 @@ xlog_create(struct xlog *xlog, const char *name, int flags,
int
xlog_open(struct xlog *xlog, const char *name, const struct xlog_opts *opts);
-
/**
* Reset an xlog object without opening it.
* The object is in limbo state: it doesn't hold
@@ -465,7 +463,6 @@ xlog_open(struct xlog *xlog, const char *name, const struct xlog_opts *opts);
void
xlog_clear(struct xlog *xlog);
-
/** Returns true if the xlog file is open. */
static inline bool
xlog_is_open(struct xlog *l)
@@ -531,7 +528,6 @@ xlog_tx_rollback(struct xlog *log);
ssize_t
xlog_flush(struct xlog *log);
-
/**
* Sync a log file. The exact action is defined
* by xdir flags.
@@ -563,8 +559,7 @@ xlog_atfork(struct xlog *xlog);
/**
* xlog tx iterator
*/
-struct xlog_tx_cursor
-{
+struct xlog_tx_cursor {
/** rows buffer */
struct ibuf rows;
/** tx size */
@@ -580,9 +575,8 @@ struct xlog_tx_cursor
* @retval >0 how many additional bytes should be read to parse tx
*/
ssize_t
-xlog_tx_cursor_create(struct xlog_tx_cursor *cursor,
- const char **data, const char *data_end,
- ZSTD_DStream *zdctx);
+xlog_tx_cursor_create(struct xlog_tx_cursor *cursor, const char **data,
+ const char *data_end, ZSTD_DStream *zdctx);
/**
* Destroy xlog tx cursor and free all associated memory
@@ -598,7 +592,8 @@ xlog_tx_cursor_destroy(struct xlog_tx_cursor *tx_cursor);
* @retval -1 for error
*/
int
-xlog_tx_cursor_next_row(struct xlog_tx_cursor *tx_cursor, struct xrow_header *xrow);
+xlog_tx_cursor_next_row(struct xlog_tx_cursor *tx_cursor,
+ struct xrow_header *xrow);
/**
* Return current tx cursor position
@@ -624,9 +619,8 @@ xlog_tx_cursor_pos(struct xlog_tx_cursor *tx_cursor)
* @retval -1 error, check diag
*/
int
-xlog_tx_decode(const char *data, const char *data_end,
- char *rows, char *rows_end,
- ZSTD_DStream *zdctx);
+xlog_tx_decode(const char *data, const char *data_end, char *rows,
+ char *rows_end, ZSTD_DStream *zdctx);
/* }}} */
@@ -762,8 +756,8 @@ xlog_cursor_next_row(struct xlog_cursor *cursor, struct xrow_header *xrow);
* @retval -1 for error
*/
int
-xlog_cursor_next(struct xlog_cursor *cursor,
- struct xrow_header *xrow, bool force_recovery);
+xlog_cursor_next(struct xlog_cursor *cursor, struct xrow_header *xrow,
+ bool force_recovery);
/**
* Move to the next xlog tx
@@ -875,8 +869,8 @@ xlog_cursor_open_xc(struct xlog_cursor *cursor, const char *name)
* @copydoc xlog_cursor_next
*/
static inline int
-xlog_cursor_next_xc(struct xlog_cursor *cursor,
- struct xrow_header *xrow, bool force_recovery)
+xlog_cursor_next_xc(struct xlog_cursor *cursor, struct xrow_header *xrow,
+ bool force_recovery)
{
int rc = xlog_cursor_next(cursor, xrow, force_recovery);
if (rc == -1)
diff --git a/src/box/xrow.c b/src/box/xrow.c
index da5c6ff..4237bb5 100644
--- a/src/box/xrow.c
+++ b/src/box/xrow.c
@@ -46,15 +46,16 @@
#include "mpstream/mpstream.h"
static_assert(IPROTO_DATA < 0x7f && IPROTO_METADATA < 0x7f &&
- IPROTO_SQL_INFO < 0x7f, "encoded IPROTO_BODY keys must fit into "\
+ IPROTO_SQL_INFO < 0x7f,
+ "encoded IPROTO_BODY keys must fit into "
"one byte");
static inline uint32_t
mp_sizeof_vclock_ignore0(const struct vclock *vclock)
{
uint32_t size = vclock_size_ignore0(vclock);
- return mp_sizeof_map(size) + size * (mp_sizeof_uint(UINT32_MAX) +
- mp_sizeof_uint(UINT64_MAX));
+ return mp_sizeof_map(size) +
+ size * (mp_sizeof_uint(UINT32_MAX) + mp_sizeof_uint(UINT64_MAX));
}
static inline char *
@@ -67,7 +68,7 @@ mp_encode_vclock_ignore0(char *data, const struct vclock *vclock)
replica = vclock_iterator_next(&it);
if (replica.id == 0)
replica = vclock_iterator_next(&it);
- for ( ; replica.id < VCLOCK_MAX; replica = vclock_iterator_next(&it)) {
+ for (; replica.id < VCLOCK_MAX; replica = vclock_iterator_next(&it)) {
data = mp_encode_uint(data, replica.id);
data = mp_encode_uint(data, replica.lsn);
}
@@ -104,7 +105,9 @@ mp_decode_vclock_ignore0(const char **data, struct vclock *vclock)
*
* The format is similar to the xxd utility.
*/
-void dump_row_hex(const char *start, const char *end) {
+void
+dump_row_hex(const char *start, const char *end)
+{
if (!say_log_level_is_enabled(S_VERBOSE))
return;
@@ -116,7 +119,8 @@ void dump_row_hex(const char *start, const char *end) {
char *pos = buf;
pos += snprintf(pos, buf_end - pos, "%08lX: ", cur - start);
for (size_t i = 0; i < 16; ++i) {
- pos += snprintf(pos, buf_end - pos, "%02X ", (unsigned char)*cur++);
+ pos += snprintf(pos, buf_end - pos, "%02X ",
+ (unsigned char)*cur++);
if (cur >= end || pos == buf_end)
break;
}
@@ -125,10 +129,11 @@ void dump_row_hex(const char *start, const char *end) {
}
}
-#define xrow_on_decode_err(start, end, what, desc_str) do {\
- diag_set(ClientError, what, desc_str);\
- dump_row_hex(start, end);\
-} while (0);
+#define xrow_on_decode_err(start, end, what, desc_str) \
+ do { \
+ diag_set(ClientError, what, desc_str); \
+ dump_row_hex(start, end); \
+ } while (0);
int
xrow_header_decode(struct xrow_header *header, const char **pos,
@@ -136,10 +141,11 @@ xrow_header_decode(struct xrow_header *header, const char **pos,
{
memset(header, 0, sizeof(struct xrow_header));
const char *tmp = *pos;
- const char * const start = *pos;
+ const char *const start = *pos;
if (mp_check(&tmp, end) != 0) {
-error:
- xrow_on_decode_err(start, end, ER_INVALID_MSGPACK, "packet header");
+ error:
+ xrow_on_decode_err(start, end, ER_INVALID_MSGPACK,
+ "packet header");
return -1;
}
@@ -206,15 +212,17 @@ error:
if (*pos < end && header->type != IPROTO_NOP) {
const char *body = *pos;
if (mp_check(pos, end)) {
- xrow_on_decode_err(start, end, ER_INVALID_MSGPACK, "packet body");
+ xrow_on_decode_err(start, end, ER_INVALID_MSGPACK,
+ "packet body");
return -1;
}
header->bodycnt = 1;
- header->body[0].iov_base = (void *) body;
+ header->body[0].iov_base = (void *)body;
header->body[0].iov_len = *pos - body;
}
if (end_is_exact && *pos < end) {
- xrow_on_decode_err(start,end, ER_INVALID_MSGPACK, "packet body");
+ xrow_on_decode_err(start, end, ER_INVALID_MSGPACK,
+ "packet body");
return -1;
}
return 0;
@@ -240,14 +248,14 @@ xrow_header_encode(const struct xrow_header *header, uint64_t sync,
struct iovec *out, size_t fixheader_len)
{
/* allocate memory for sign + header */
- out->iov_base = region_alloc(&fiber()->gc, XROW_HEADER_LEN_MAX +
- fixheader_len);
+ out->iov_base =
+ region_alloc(&fiber()->gc, XROW_HEADER_LEN_MAX + fixheader_len);
if (out->iov_base == NULL) {
diag_set(OutOfMemory, XROW_HEADER_LEN_MAX + fixheader_len,
"gc arena", "xrow header encode");
return -1;
}
- char *data = (char *) out->iov_base + fixheader_len;
+ char *data = (char *)out->iov_base + fixheader_len;
/* Header */
char *d = data + 1; /* Skip 1 byte for MP_MAP */
@@ -323,7 +331,7 @@ xrow_header_encode(const struct xrow_header *header, uint64_t sync,
}
assert(d <= data + XROW_HEADER_LEN_MAX);
mp_encode_map(data, map_size);
- out->iov_len = d - (char *) out->iov_base;
+ out->iov_len = d - (char *)out->iov_base;
out++;
memcpy(out, header->body, sizeof(*out) * header->bodycnt);
@@ -339,18 +347,18 @@ xrow_encode_uuid(char *pos, const struct tt_uuid *in)
/* m_ - msgpack meta, k_ - key, v_ - value */
struct PACKED iproto_header_bin {
- uint8_t m_len; /* MP_UINT32 */
- uint32_t v_len; /* length */
- uint8_t m_header; /* MP_MAP */
- uint8_t k_code; /* IPROTO_REQUEST_TYPE */
- uint8_t m_code; /* MP_UINT32 */
- uint32_t v_code; /* response status */
- uint8_t k_sync; /* IPROTO_SYNC */
- uint8_t m_sync; /* MP_UINT64 */
- uint64_t v_sync; /* sync */
- uint8_t k_schema_version; /* IPROTO_SCHEMA_VERSION */
- uint8_t m_schema_version; /* MP_UINT32 */
- uint32_t v_schema_version; /* schema_version */
+ uint8_t m_len; /* MP_UINT32 */
+ uint32_t v_len; /* length */
+ uint8_t m_header; /* MP_MAP */
+ uint8_t k_code; /* IPROTO_REQUEST_TYPE */
+ uint8_t m_code; /* MP_UINT32 */
+ uint32_t v_code; /* response status */
+ uint8_t k_sync; /* IPROTO_SYNC */
+ uint8_t m_sync; /* MP_UINT64 */
+ uint64_t v_sync; /* sync */
+ uint8_t k_schema_version; /* IPROTO_SCHEMA_VERSION */
+ uint8_t m_schema_version; /* MP_UINT32 */
+ uint32_t v_schema_version; /* schema_version */
};
static_assert(sizeof(struct iproto_header_bin) == IPROTO_HEADER_LEN,
@@ -378,18 +386,18 @@ iproto_header_encode(char *out, uint32_t type, uint64_t sync,
}
struct PACKED iproto_body_bin {
- uint8_t m_body; /* MP_MAP */
- uint8_t k_data; /* IPROTO_DATA or errors */
- uint8_t m_data; /* MP_STR or MP_ARRAY */
- uint32_t v_data_len; /* string length of array size */
+ uint8_t m_body; /* MP_MAP */
+ uint8_t k_data; /* IPROTO_DATA or errors */
+ uint8_t m_data; /* MP_STR or MP_ARRAY */
+ uint32_t v_data_len; /* string length of array size */
};
static_assert(sizeof(struct iproto_body_bin) + IPROTO_HEADER_LEN ==
- IPROTO_SELECT_HEADER_LEN, "size of the prepared select");
+ IPROTO_SELECT_HEADER_LEN,
+ "size of the prepared select");
-static const struct iproto_body_bin iproto_body_bin = {
- 0x81, IPROTO_DATA, 0xdd, 0
-};
+static const struct iproto_body_bin iproto_body_bin = { 0x81, IPROTO_DATA, 0xdd,
+ 0 };
/** Return a 4-byte numeric error code, with status flags. */
static inline uint32_t
@@ -417,12 +425,12 @@ iproto_reply_vclock(struct obuf *out, const struct vclock *vclock,
uint64_t sync, uint32_t schema_version)
{
size_t max_size = IPROTO_HEADER_LEN + mp_sizeof_map(1) +
- mp_sizeof_uint(UINT32_MAX) + mp_sizeof_vclock_ignore0(vclock);
+ mp_sizeof_uint(UINT32_MAX) +
+ mp_sizeof_vclock_ignore0(vclock);
char *buf = obuf_reserve(out, max_size);
if (buf == NULL) {
- diag_set(OutOfMemory, max_size,
- "obuf_alloc", "buf");
+ diag_set(OutOfMemory, max_size, "obuf_alloc", "buf");
return -1;
}
@@ -437,30 +445,30 @@ iproto_reply_vclock(struct obuf *out, const struct vclock *vclock,
size - IPROTO_HEADER_LEN);
char *ptr = obuf_alloc(out, size);
- (void) ptr;
+ (void)ptr;
assert(ptr == buf);
return 0;
}
int
-iproto_reply_vote(struct obuf *out, const struct ballot *ballot,
- uint64_t sync, uint32_t schema_version)
+iproto_reply_vote(struct obuf *out, const struct ballot *ballot, uint64_t sync,
+ uint32_t schema_version)
{
- size_t max_size = IPROTO_HEADER_LEN + mp_sizeof_map(1) +
+ size_t max_size =
+ IPROTO_HEADER_LEN + mp_sizeof_map(1) +
mp_sizeof_uint(UINT32_MAX) + mp_sizeof_map(5) +
mp_sizeof_uint(UINT32_MAX) + mp_sizeof_bool(ballot->is_ro) +
- mp_sizeof_uint(UINT32_MAX) + mp_sizeof_bool(ballot->is_loading) +
- mp_sizeof_uint(IPROTO_BALLOT_IS_ANON) +
- mp_sizeof_bool(ballot->is_anon) +
mp_sizeof_uint(UINT32_MAX) +
+ mp_sizeof_bool(ballot->is_loading) +
+ mp_sizeof_uint(IPROTO_BALLOT_IS_ANON) +
+ mp_sizeof_bool(ballot->is_anon) + mp_sizeof_uint(UINT32_MAX) +
mp_sizeof_vclock_ignore0(&ballot->vclock) +
mp_sizeof_uint(UINT32_MAX) +
mp_sizeof_vclock_ignore0(&ballot->gc_vclock);
char *buf = obuf_reserve(out, max_size);
if (buf == NULL) {
- diag_set(OutOfMemory, max_size,
- "obuf_alloc", "buf");
+ diag_set(OutOfMemory, max_size, "obuf_alloc", "buf");
return -1;
}
@@ -485,7 +493,7 @@ iproto_reply_vote(struct obuf *out, const struct ballot *ballot,
size - IPROTO_HEADER_LEN);
char *ptr = obuf_alloc(out, size);
- (void) ptr;
+ (void)ptr;
assert(ptr == buf);
return 0;
}
@@ -560,7 +568,7 @@ iproto_write_error(int fd, const struct error *e, uint32_t schema_version,
ssize_t unused;
unused = write(fd, header, sizeof(header));
unused = write(fd, payload, payload_size);
- (void) unused;
+ (void)unused;
cleanup:
region_truncate(region, region_svp);
}
@@ -580,7 +588,7 @@ iproto_prepare_header(struct obuf *buf, struct obuf_svp *svp, size_t size)
}
*svp = obuf_create_svp(buf);
ptr = obuf_alloc(buf, size);
- assert(ptr != NULL);
+ assert(ptr != NULL);
return 0;
}
@@ -588,10 +596,9 @@ void
iproto_reply_select(struct obuf *buf, struct obuf_svp *svp, uint64_t sync,
uint32_t schema_version, uint32_t count)
{
- char *pos = (char *) obuf_svp_to_ptr(buf, svp);
+ char *pos = (char *)obuf_svp_to_ptr(buf, svp);
iproto_header_encode(pos, IPROTO_OK, sync, schema_version,
- obuf_size(buf) - svp->used -
- IPROTO_HEADER_LEN);
+ obuf_size(buf) - svp->used - IPROTO_HEADER_LEN);
struct iproto_body_bin body = iproto_body_bin;
body.v_data_len = mp_bswap_u32(count);
@@ -603,18 +610,19 @@ int
xrow_decode_sql(const struct xrow_header *row, struct sql_request *request)
{
if (row->bodycnt == 0) {
- diag_set(ClientError, ER_INVALID_MSGPACK, "missing request body");
+ diag_set(ClientError, ER_INVALID_MSGPACK,
+ "missing request body");
return 1;
}
assert(row->bodycnt == 1);
- const char *data = (const char *) row->body[0].iov_base;
+ const char *data = (const char *)row->body[0].iov_base;
const char *end = data + row->body[0].iov_len;
assert((end - data) > 0);
if (mp_typeof(*data) != MP_MAP || mp_check_map(data, end) > 0) {
-error:
- xrow_on_decode_err(row->body[0].iov_base, end, ER_INVALID_MSGPACK,
- "packet body");
+ error:
+ xrow_on_decode_err(row->body[0].iov_base, end,
+ ER_INVALID_MSGPACK, "packet body");
return -1;
}
@@ -626,12 +634,12 @@ error:
uint8_t key = *data;
if (key != IPROTO_SQL_BIND && key != IPROTO_SQL_TEXT &&
key != IPROTO_STMT_ID) {
- mp_check(&data, end); /* skip the key */
- mp_check(&data, end); /* skip the value */
+ mp_check(&data, end); /* skip the key */
+ mp_check(&data, end); /* skip the value */
continue;
}
- const char *value = ++data; /* skip the key */
- if (mp_check(&data, end) != 0) /* check the value */
+ const char *value = ++data; /* skip the key */
+ if (mp_check(&data, end) != 0) /* check the value */
goto error;
if (key == IPROTO_SQL_BIND)
request->bind = value;
@@ -641,17 +649,17 @@ error:
request->stmt_id = value;
}
if (request->sql_text != NULL && request->stmt_id != NULL) {
- xrow_on_decode_err(row->body[0].iov_base, end, ER_INVALID_MSGPACK,
- "SQL text and statement id are incompatible "\
+ xrow_on_decode_err(row->body[0].iov_base, end,
+ ER_INVALID_MSGPACK,
+ "SQL text and statement id are incompatible "
"options in one request: choose one");
return -1;
}
if (request->sql_text == NULL && request->stmt_id == NULL) {
- xrow_on_decode_err(row->body[0].iov_base, end,
- ER_MISSING_REQUEST_FIELD,
- tt_sprintf("%s or %s",
- iproto_key_name(IPROTO_SQL_TEXT),
- iproto_key_name(IPROTO_STMT_ID)));
+ xrow_on_decode_err(
+ row->body[0].iov_base, end, ER_MISSING_REQUEST_FIELD,
+ tt_sprintf("%s or %s", iproto_key_name(IPROTO_SQL_TEXT),
+ iproto_key_name(IPROTO_STMT_ID)));
return -1;
}
if (data != end)
@@ -663,7 +671,7 @@ void
iproto_reply_sql(struct obuf *buf, struct obuf_svp *svp, uint64_t sync,
uint32_t schema_version)
{
- char *pos = (char *) obuf_svp_to_ptr(buf, svp);
+ char *pos = (char *)obuf_svp_to_ptr(buf, svp);
iproto_header_encode(pos, IPROTO_OK, sync, schema_version,
obuf_size(buf) - svp->used - IPROTO_HEADER_LEN);
}
@@ -672,7 +680,7 @@ void
iproto_reply_chunk(struct obuf *buf, struct obuf_svp *svp, uint64_t sync,
uint32_t schema_version)
{
- char *pos = (char *) obuf_svp_to_ptr(buf, svp);
+ char *pos = (char *)obuf_svp_to_ptr(buf, svp);
iproto_header_encode(pos, IPROTO_CHUNK, sync, schema_version,
obuf_size(buf) - svp->used - IPROTO_HEADER_LEN);
struct iproto_body_bin body = iproto_body_bin;
@@ -695,20 +703,20 @@ xrow_decode_dml(struct xrow_header *row, struct request *request,
goto done;
assert(row->bodycnt == 1);
- const char *data = start = (const char *) row->body[0].iov_base;
+ const char *data = start = (const char *)row->body[0].iov_base;
end = data + row->body[0].iov_len;
assert((end - data) > 0);
if (mp_typeof(*data) != MP_MAP || mp_check_map(data, end) > 0) {
-error:
- xrow_on_decode_err(row->body[0].iov_base, end, ER_INVALID_MSGPACK,
- "packet body");
+ error:
+ xrow_on_decode_err(row->body[0].iov_base, end,
+ ER_INVALID_MSGPACK, "packet body");
return -1;
}
uint32_t size = mp_decode_map(&data);
for (uint32_t i = 0; i < size; i++) {
- if (! iproto_dml_body_has_key(data, end)) {
+ if (!iproto_dml_body_has_key(data, end)) {
if (mp_check(&data, end) != 0 ||
mp_check(&data, end) != 0)
goto error;
@@ -716,8 +724,7 @@ error:
}
uint64_t key = mp_decode_uint(&data);
const char *value = data;
- if (mp_check(&data, end) ||
- key >= IPROTO_KEY_MAX ||
+ if (mp_check(&data, end) || key >= IPROTO_KEY_MAX ||
iproto_key_type[key] != mp_typeof(*value))
goto error;
key_map &= ~iproto_key_bit(key);
@@ -761,13 +768,13 @@ error:
}
}
if (data != end) {
- xrow_on_decode_err(row->body[0].iov_base, end, ER_INVALID_MSGPACK,
- "packet end");
+ xrow_on_decode_err(row->body[0].iov_base, end,
+ ER_INVALID_MSGPACK, "packet end");
return -1;
}
done:
if (key_map) {
- enum iproto_key key = (enum iproto_key) bit_ctz_u64(key_map);
+ enum iproto_key key = (enum iproto_key)bit_ctz_u64(key_map);
xrow_on_decode_err(start, end, ER_MISSING_REQUEST_FIELD,
iproto_key_name(key));
return -1;
@@ -779,14 +786,14 @@ static int
request_snprint(char *buf, int size, const struct request *request)
{
int total = 0;
- SNPRINT(total, snprintf, buf, size, "{type: '%s', "
- "replica_id: %u, lsn: %lld, "
- "space_id: %u, index_id: %u",
- iproto_type_name(request->type),
- (unsigned) request->header->replica_id,
- (long long) request->header->lsn,
- (unsigned) request->space_id,
- (unsigned) request->index_id);
+ SNPRINT(total, snprintf, buf, size,
+ "{type: '%s', "
+ "replica_id: %u, lsn: %lld, "
+ "space_id: %u, index_id: %u",
+ iproto_type_name(request->type),
+ (unsigned)request->header->replica_id,
+ (long long)request->header->lsn, (unsigned)request->space_id,
+ (unsigned)request->index_id);
if (request->key != NULL) {
SNPRINT(total, snprintf, buf, size, ", key: ");
SNPRINT(total, mp_snprint, buf, size, request->key);
@@ -822,14 +829,14 @@ xrow_encode_dml(const struct request *request, struct region *region,
uint32_t ops_len = request->ops_end - request->ops;
uint32_t tuple_meta_len = request->tuple_meta_end - request->tuple_meta;
uint32_t tuple_len = request->tuple_end - request->tuple;
- uint32_t len = MAP_LEN_MAX + key_len + ops_len + tuple_meta_len +
- tuple_len;
- char *begin = (char *) region_alloc(region, len);
+ uint32_t len =
+ MAP_LEN_MAX + key_len + ops_len + tuple_meta_len + tuple_len;
+ char *begin = (char *)region_alloc(region, len);
if (begin == NULL) {
diag_set(OutOfMemory, len, "region_alloc", "begin");
return -1;
}
- char *pos = begin + 1; /* skip 1 byte for MP_MAP */
+ char *pos = begin + 1; /* skip 1 byte for MP_MAP */
int map_size = 0;
if (request->space_id) {
pos = mp_encode_uint(pos, IPROTO_SPACE_ID);
@@ -883,8 +890,7 @@ xrow_encode_dml(const struct request *request, struct region *region,
}
void
-xrow_encode_synchro(struct xrow_header *row,
- struct synchro_body_bin *body,
+xrow_encode_synchro(struct xrow_header *row, struct synchro_body_bin *body,
const struct synchro_request *req)
{
/*
@@ -918,8 +924,8 @@ xrow_decode_synchro(const struct xrow_header *row, struct synchro_request *req)
assert(row->bodycnt == 1);
- const char * const data = (const char *)row->body[0].iov_base;
- const char * const end = data + row->body[0].iov_len;
+ const char *const data = (const char *)row->body[0].iov_base;
+ const char *const end = data + row->body[0].iov_len;
const char *d = data;
if (mp_check(&d, end) != 0 || mp_typeof(*data) != MP_MAP) {
xrow_on_decode_err(data, end, ER_INVALID_MSGPACK,
@@ -967,8 +973,8 @@ xrow_encode_raft(struct xrow_header *row, struct region *region,
* the term is too old.
*/
int map_size = 1;
- size_t size = mp_sizeof_uint(IPROTO_RAFT_TERM) +
- mp_sizeof_uint(r->term);
+ size_t size =
+ mp_sizeof_uint(IPROTO_RAFT_TERM) + mp_sizeof_uint(r->term);
if (r->vote != 0) {
++map_size;
size += mp_sizeof_uint(IPROTO_RAFT_VOTE) +
@@ -1033,8 +1039,7 @@ xrow_decode_raft(const struct xrow_header *row, struct raft_request *r,
const char *end = begin + row->body[0].iov_len;
const char *pos = begin;
uint32_t map_size = mp_decode_map(&pos);
- for (uint32_t i = 0; i < map_size; ++i)
- {
+ for (uint32_t i = 0; i < map_size; ++i) {
if (mp_typeof(*pos) != MP_UINT)
goto bad_msgpack;
uint64_t key = mp_decode_uint(&pos);
@@ -1085,7 +1090,7 @@ xrow_to_iovec(const struct xrow_header *row, struct iovec *out)
len += out[i].iov_len;
/* Encode length */
- char *data = (char *) out[0].iov_base;
+ char *data = (char *)out[0].iov_base;
*(data++) = 0xce; /* MP_UINT32 */
store_u32(data, mp_bswap_u32(len));
@@ -1103,14 +1108,14 @@ xrow_decode_call(const struct xrow_header *row, struct call_request *request)
}
assert(row->bodycnt == 1);
- const char *data = (const char *) row->body[0].iov_base;
+ const char *data = (const char *)row->body[0].iov_base;
const char *end = data + row->body[0].iov_len;
assert((end - data) > 0);
if (mp_typeof(*data) != MP_MAP || mp_check_map(data, end) > 0) {
-error:
- xrow_on_decode_err(row->body[0].iov_base, end, ER_INVALID_MSGPACK,
- "packet body");
+ error:
+ xrow_on_decode_err(row->body[0].iov_base, end,
+ ER_INVALID_MSGPACK, "packet body");
return -1;
}
@@ -1149,20 +1154,21 @@ error:
}
}
if (data != end) {
- xrow_on_decode_err(row->body[0].iov_base, end, ER_INVALID_MSGPACK,
- "packet end");
+ xrow_on_decode_err(row->body[0].iov_base, end,
+ ER_INVALID_MSGPACK, "packet end");
return -1;
}
if (row->type == IPROTO_EVAL) {
if (request->expr == NULL) {
- xrow_on_decode_err(row->body[0].iov_base, end, ER_MISSING_REQUEST_FIELD,
+ xrow_on_decode_err(row->body[0].iov_base, end,
+ ER_MISSING_REQUEST_FIELD,
iproto_key_name(IPROTO_EXPR));
return -1;
}
} else if (request->name == NULL) {
- assert(row->type == IPROTO_CALL_16 ||
- row->type == IPROTO_CALL);
- xrow_on_decode_err(row->body[0].iov_base, end, ER_MISSING_REQUEST_FIELD,
+ assert(row->type == IPROTO_CALL_16 || row->type == IPROTO_CALL);
+ xrow_on_decode_err(row->body[0].iov_base, end,
+ ER_MISSING_REQUEST_FIELD,
iproto_key_name(IPROTO_FUNCTION_NAME));
return -1;
}
@@ -1184,14 +1190,14 @@ xrow_decode_auth(const struct xrow_header *row, struct auth_request *request)
}
assert(row->bodycnt == 1);
- const char *data = (const char *) row->body[0].iov_base;
+ const char *data = (const char *)row->body[0].iov_base;
const char *end = data + row->body[0].iov_len;
assert((end - data) > 0);
if (mp_typeof(*data) != MP_MAP || mp_check_map(data, end) > 0) {
-error:
- xrow_on_decode_err(row->body[0].iov_base, end, ER_INVALID_MSGPACK,
- "packet body");
+ error:
+ xrow_on_decode_err(row->body[0].iov_base, end,
+ ER_INVALID_MSGPACK, "packet body");
return -1;
}
@@ -1223,17 +1229,19 @@ error:
}
}
if (data != end) {
- xrow_on_decode_err(row->body[0].iov_base, end, ER_INVALID_MSGPACK,
- "packet end");
+ xrow_on_decode_err(row->body[0].iov_base, end,
+ ER_INVALID_MSGPACK, "packet end");
return -1;
}
if (request->user_name == NULL) {
- xrow_on_decode_err(row->body[0].iov_base, end, ER_MISSING_REQUEST_FIELD,
+ xrow_on_decode_err(row->body[0].iov_base, end,
+ ER_MISSING_REQUEST_FIELD,
iproto_key_name(IPROTO_USER_NAME));
return -1;
}
if (request->scramble == NULL) {
- xrow_on_decode_err(row->body[0].iov_base, end, ER_MISSING_REQUEST_FIELD,
+ xrow_on_decode_err(row->body[0].iov_base, end,
+ ER_MISSING_REQUEST_FIELD,
iproto_key_name(IPROTO_TUPLE));
return -1;
}
@@ -1242,14 +1250,14 @@ error:
int
xrow_encode_auth(struct xrow_header *packet, const char *salt, size_t salt_len,
- const char *login, size_t login_len,
- const char *password, size_t password_len)
+ const char *login, size_t login_len, const char *password,
+ size_t password_len)
{
assert(login != NULL);
memset(packet, 0, sizeof(*packet));
size_t buf_size = XROW_BODY_LEN_MAX + login_len + SCRAMBLE_SIZE;
- char *buf = (char *) region_alloc(&fiber()->gc, buf_size);
+ char *buf = (char *)region_alloc(&fiber()->gc, buf_size);
if (buf == NULL) {
diag_set(OutOfMemory, buf_size, "region_alloc", "buf");
return -1;
@@ -1259,9 +1267,9 @@ xrow_encode_auth(struct xrow_header *packet, const char *salt, size_t salt_len,
d = mp_encode_map(d, password != NULL ? 2 : 1);
d = mp_encode_uint(d, IPROTO_USER_NAME);
d = mp_encode_str(d, login, login_len);
- if (password != NULL) { /* password can be omitted */
+ if (password != NULL) { /* password can be omitted */
assert(salt_len >= SCRAMBLE_SIZE); /* greetingbuf_decode */
- (void) salt_len;
+ (void)salt_len;
char scramble[SCRAMBLE_SIZE];
scramble_prepare(scramble, salt, password, password_len);
d = mp_encode_uint(d, IPROTO_TUPLE);
@@ -1289,11 +1297,11 @@ xrow_decode_error(struct xrow_header *row)
if (row->bodycnt == 0)
goto error;
- pos = (char *) row->body[0].iov_base;
+ pos = (char *)row->body[0].iov_base;
if (mp_check(&pos, pos + row->body[0].iov_len))
goto error;
- pos = (char *) row->body[0].iov_base;
+ pos = (char *)row->body[0].iov_base;
if (mp_typeof(*pos) != MP_MAP)
goto error;
map_size = mp_decode_map(&pos);
@@ -1314,7 +1322,8 @@ xrow_decode_error(struct xrow_header *row)
uint32_t len;
const char *str = mp_decode_str(&pos, &len);
if (!is_stack_parsed) {
- snprintf(error, sizeof(error), "%.*s", len, str);
+ snprintf(error, sizeof(error), "%.*s", len,
+ str);
box_error_set(__FILE__, __LINE__, code, error);
}
} else if (key == IPROTO_ERROR) {
@@ -1356,7 +1365,7 @@ xrow_decode_ballot(struct xrow_header *row, struct ballot *ballot)
goto err;
assert(row->bodycnt == 1);
- const char *data = start = (const char *) row->body[0].iov_base;
+ const char *data = start = (const char *)row->body[0].iov_base;
end = data + row->body[0].iov_len;
const char *tmp = data;
if (mp_check(&tmp, end) != 0 || mp_typeof(*data) != MP_MAP)
@@ -1402,8 +1411,8 @@ xrow_decode_ballot(struct xrow_header *row, struct ballot *ballot)
ballot->is_anon = mp_decode_bool(&data);
break;
case IPROTO_BALLOT_VCLOCK:
- if (mp_decode_vclock_ignore0(&data,
- &ballot->vclock) != 0)
+ if (mp_decode_vclock_ignore0(&data, &ballot->vclock) !=
+ 0)
goto err;
break;
case IPROTO_BALLOT_GC_VCLOCK:
@@ -1427,12 +1436,11 @@ xrow_encode_register(struct xrow_header *row,
const struct vclock *vclock)
{
memset(row, 0, sizeof(*row));
- size_t size = mp_sizeof_map(2) +
- mp_sizeof_uint(IPROTO_INSTANCE_UUID) +
+ size_t size = mp_sizeof_map(2) + mp_sizeof_uint(IPROTO_INSTANCE_UUID) +
mp_sizeof_str(UUID_STR_LEN) +
mp_sizeof_uint(IPROTO_VCLOCK) +
mp_sizeof_vclock_ignore0(vclock);
- char *buf = (char *) region_alloc(&fiber()->gc, size);
+ char *buf = (char *)region_alloc(&fiber()->gc, size);
if (buf == NULL) {
diag_set(OutOfMemory, size, "region_alloc", "buf");
return -1;
@@ -1459,9 +1467,8 @@ xrow_encode_subscribe(struct xrow_header *row,
uint32_t id_filter)
{
memset(row, 0, sizeof(*row));
- size_t size = XROW_BODY_LEN_MAX +
- mp_sizeof_vclock_ignore0(vclock);
- char *buf = (char *) region_alloc(&fiber()->gc, size);
+ size_t size = XROW_BODY_LEN_MAX + mp_sizeof_vclock_ignore0(vclock);
+ char *buf = (char *)region_alloc(&fiber()->gc, size);
if (buf == NULL) {
diag_set(OutOfMemory, size, "region_alloc", "buf");
return -1;
@@ -1483,8 +1490,7 @@ xrow_encode_subscribe(struct xrow_header *row,
data = mp_encode_uint(data, IPROTO_ID_FILTER);
data = mp_encode_array(data, filter_size);
struct bit_iterator it;
- bit_iterator_init(&it, &id_filter, sizeof(id_filter),
- true);
+ bit_iterator_init(&it, &id_filter, sizeof(id_filter), true);
for (size_t id = bit_iterator_next(&it); id < VCLOCK_MAX;
id = bit_iterator_next(&it)) {
data = mp_encode_uint(data, id);
@@ -1501,15 +1507,14 @@ xrow_encode_subscribe(struct xrow_header *row,
int
xrow_decode_subscribe(struct xrow_header *row, struct tt_uuid *replicaset_uuid,
struct tt_uuid *instance_uuid, struct vclock *vclock,
- uint32_t *version_id, bool *anon,
- uint32_t *id_filter)
+ uint32_t *version_id, bool *anon, uint32_t *id_filter)
{
if (row->bodycnt == 0) {
diag_set(ClientError, ER_INVALID_MSGPACK, "request body");
return -1;
}
assert(row->bodycnt == 1);
- const char * const data = (const char *) row->body[0].iov_base;
+ const char *const data = (const char *)row->body[0].iov_base;
const char *end = data + row->body[0].iov_len;
const char *d = data;
if (mp_check(&d, end) != 0 || mp_typeof(*data) != MP_MAP) {
@@ -1536,8 +1541,8 @@ xrow_decode_subscribe(struct xrow_header *row, struct tt_uuid *replicaset_uuid,
if (replicaset_uuid == NULL)
goto skip;
if (xrow_decode_uuid(&d, replicaset_uuid) != 0) {
- xrow_on_decode_err(data, end, ER_INVALID_MSGPACK,
- "UUID");
+ xrow_on_decode_err(data, end,
+ ER_INVALID_MSGPACK, "UUID");
return -1;
}
break;
@@ -1545,8 +1550,8 @@ xrow_decode_subscribe(struct xrow_header *row, struct tt_uuid *replicaset_uuid,
if (instance_uuid == NULL)
goto skip;
if (xrow_decode_uuid(&d, instance_uuid) != 0) {
- xrow_on_decode_err(data, end, ER_INVALID_MSGPACK,
- "UUID");
+ xrow_on_decode_err(data, end,
+ ER_INVALID_MSGPACK, "UUID");
return -1;
}
break;
@@ -1554,7 +1559,8 @@ xrow_decode_subscribe(struct xrow_header *row, struct tt_uuid *replicaset_uuid,
if (vclock == NULL)
goto skip;
if (mp_decode_vclock_ignore0(&d, vclock) != 0) {
- xrow_on_decode_err(data, end, ER_INVALID_MSGPACK,
+ xrow_on_decode_err(data, end,
+ ER_INVALID_MSGPACK,
"invalid VCLOCK");
return -1;
}
@@ -1563,7 +1569,8 @@ xrow_decode_subscribe(struct xrow_header *row, struct tt_uuid *replicaset_uuid,
if (version_id == NULL)
goto skip;
if (mp_typeof(*d) != MP_UINT) {
- xrow_on_decode_err(data, end, ER_INVALID_MSGPACK,
+ xrow_on_decode_err(data, end,
+ ER_INVALID_MSGPACK,
"invalid VERSION");
return -1;
}
@@ -1573,7 +1580,8 @@ xrow_decode_subscribe(struct xrow_header *row, struct tt_uuid *replicaset_uuid,
if (anon == NULL)
goto skip;
if (mp_typeof(*d) != MP_BOOL) {
- xrow_on_decode_err(data, end, ER_INVALID_MSGPACK,
+ xrow_on_decode_err(data, end,
+ ER_INVALID_MSGPACK,
"invalid REPLICA_ANON flag");
return -1;
}
@@ -1583,7 +1591,9 @@ xrow_decode_subscribe(struct xrow_header *row, struct tt_uuid *replicaset_uuid,
if (id_filter == NULL)
goto skip;
if (mp_typeof(*d) != MP_ARRAY) {
-id_filter_decode_err: xrow_on_decode_err(data, end, ER_INVALID_MSGPACK,
+ id_filter_decode_err:
+ xrow_on_decode_err(data, end,
+ ER_INVALID_MSGPACK,
"invalid ID_FILTER");
return -1;
}
@@ -1597,7 +1607,8 @@ id_filter_decode_err: xrow_on_decode_err(data, end, ER_INVALID_MSGPACK,
*id_filter |= 1 << val;
}
break;
- default: skip:
+ default:
+ skip:
mp_next(&d); /* value */
}
}
@@ -1610,7 +1621,7 @@ xrow_encode_join(struct xrow_header *row, const struct tt_uuid *instance_uuid)
memset(row, 0, sizeof(*row));
size_t size = 64;
- char *buf = (char *) region_alloc(&fiber()->gc, size);
+ char *buf = (char *)region_alloc(&fiber()->gc, size);
if (buf == NULL) {
diag_set(OutOfMemory, size, "region_alloc", "buf");
return -1;
@@ -1636,7 +1647,7 @@ xrow_encode_vclock(struct xrow_header *row, const struct vclock *vclock)
/* Add vclock to response body */
size_t size = 8 + mp_sizeof_vclock_ignore0(vclock);
- char *buf = (char *) region_alloc(&fiber()->gc, size);
+ char *buf = (char *)region_alloc(&fiber()->gc, size);
if (buf == NULL) {
diag_set(OutOfMemory, size, "region_alloc", "buf");
return -1;
@@ -1659,12 +1670,11 @@ xrow_encode_subscribe_response(struct xrow_header *row,
const struct vclock *vclock)
{
memset(row, 0, sizeof(*row));
- size_t size = mp_sizeof_map(2) +
- mp_sizeof_uint(IPROTO_VCLOCK) +
+ size_t size = mp_sizeof_map(2) + mp_sizeof_uint(IPROTO_VCLOCK) +
mp_sizeof_vclock_ignore0(vclock) +
mp_sizeof_uint(IPROTO_CLUSTER_UUID) +
mp_sizeof_str(UUID_STR_LEN);
- char *buf = (char *) region_alloc(&fiber()->gc, size);
+ char *buf = (char *)region_alloc(&fiber()->gc, size);
if (buf == NULL) {
diag_set(OutOfMemory, size, "region_alloc", "buf");
return -1;
@@ -1698,8 +1708,9 @@ greeting_encode(char *greetingbuf, uint32_t version_id,
{
int h = IPROTO_GREETING_SIZE / 2;
int r = snprintf(greetingbuf, h + 1, "Tarantool %u.%u.%u (Binary) ",
- version_id_major(version_id), version_id_minor(version_id),
- version_id_patch(version_id));
+ version_id_major(version_id),
+ version_id_minor(version_id),
+ version_id_patch(version_id));
assert(r + UUID_STR_LEN < h);
tt_uuid_to_string(uuid, greetingbuf + r);
@@ -1726,17 +1737,19 @@ greeting_decode(const char *greetingbuf, struct greeting *greeting)
int h = IPROTO_GREETING_SIZE / 2;
const char *pos = greetingbuf + strlen("Tarantool ");
const char *end = greetingbuf + h;
- for (; pos < end && *pos == ' '; ++pos); /* skip spaces */
+ for (; pos < end && *pos == ' '; ++pos)
+ ; /* skip spaces */
/* Extract a version string - a string until ' ' */
char version[20];
- const char *vend = (const char *) memchr(pos, ' ', end - pos);
+ const char *vend = (const char *)memchr(pos, ' ', end - pos);
if (vend == NULL || (size_t)(vend - pos) >= sizeof(version))
return -1;
memcpy(version, pos, vend - pos);
version[vend - pos] = '\0';
pos = vend + 1;
- for (; pos < end && *pos == ' '; ++pos); /* skip spaces */
+ for (; pos < end && *pos == ' '; ++pos)
+ ; /* skip spaces */
/* Parse a version string - 1.6.6-83-gc6b2129 or 1.6.7 */
unsigned major, minor, patch;
@@ -1746,7 +1759,7 @@ greeting_decode(const char *greetingbuf, struct greeting *greeting)
if (*pos == '(') {
/* Extract protocol name - a string between (parentheses) */
- vend = (const char *) memchr(pos + 1, ')', end - pos);
+ vend = (const char *)memchr(pos + 1, ')', end - pos);
if (!vend || (vend - pos - 1) > GREETING_PROTOCOL_LEN_MAX)
return -1;
memcpy(greeting->protocol, pos + 1, vend - pos - 1);
@@ -1759,10 +1772,12 @@ greeting_decode(const char *greetingbuf, struct greeting *greeting)
if (greeting->version_id >= version_id(1, 6, 7)) {
if (*(pos++) != ' ')
return -1;
- for (; pos < end && *pos == ' '; ++pos); /* spaces */
+ for (; pos < end && *pos == ' '; ++pos)
+ ; /* spaces */
if (end - pos < UUID_STR_LEN)
return -1;
- if (tt_uuid_from_strl(pos, UUID_STR_LEN, &greeting->uuid))
+ if (tt_uuid_from_strl(pos, UUID_STR_LEN,
+ &greeting->uuid))
return -1;
}
} else if (greeting->version_id < version_id(1, 6, 7)) {
@@ -1773,10 +1788,10 @@ greeting_decode(const char *greetingbuf, struct greeting *greeting)
}
/* Decode salt for binary protocol */
- greeting->salt_len = base64_decode(greetingbuf + h, h - 1,
- greeting->salt,
- sizeof(greeting->salt));
- if (greeting->salt_len < SCRAMBLE_SIZE || greeting->salt_len >= (uint32_t)h)
+ greeting->salt_len = base64_decode(
+ greetingbuf + h, h - 1, greeting->salt, sizeof(greeting->salt));
+ if (greeting->salt_len < SCRAMBLE_SIZE ||
+ greeting->salt_len >= (uint32_t)h)
return -1;
return 0;
diff --git a/src/box/xrow.h b/src/box/xrow.h
index 25985ad..7fcc672 100644
--- a/src/box/xrow.h
+++ b/src/box/xrow.h
@@ -250,8 +250,7 @@ struct PACKED synchro_body_bin {
* @param req Request parameters.
*/
void
-xrow_encode_synchro(struct xrow_header *row,
- struct synchro_body_bin *body,
+xrow_encode_synchro(struct xrow_header *row, struct synchro_body_bin *body,
const struct synchro_request *req);
/**
@@ -429,8 +428,7 @@ xrow_encode_subscribe(struct xrow_header *row,
int
xrow_decode_subscribe(struct xrow_header *row, struct tt_uuid *replicaset_uuid,
struct tt_uuid *instance_uuid, struct vclock *vclock,
- uint32_t *version_id, bool *anon,
- uint32_t *id_filter);
+ uint32_t *version_id, bool *anon, uint32_t *id_filter);
/**
* Encode JOIN command.
@@ -510,8 +508,8 @@ xrow_decode_vclock(struct xrow_header *row, struct vclock *vclock)
*/
int
xrow_encode_subscribe_response(struct xrow_header *row,
- const struct tt_uuid *replicaset_uuid,
- const struct vclock *vclock);
+ const struct tt_uuid *replicaset_uuid,
+ const struct vclock *vclock);
/**
* Decode a response to subscribe request.
@@ -632,8 +630,8 @@ iproto_reply_vclock(struct obuf *out, const struct vclock *vclock,
* @retval -1 Memory error.
*/
int
-iproto_reply_vote(struct obuf *out, const struct ballot *ballot,
- uint64_t sync, uint32_t schema_version);
+iproto_reply_vote(struct obuf *out, const struct ballot *ballot, uint64_t sync,
+ uint32_t schema_version);
/**
* Write an error packet int output buffer. Doesn't throw if out
@@ -781,7 +779,7 @@ xrow_decode_error(struct xrow_header *row);
* @return Previous LSN value.
*/
static inline int64_t
-vclock_follow_xrow(struct vclock* vclock, const struct xrow_header *row)
+vclock_follow_xrow(struct vclock *vclock, const struct xrow_header *row)
{
assert(row);
assert(row->replica_id < VCLOCK_MAX);
@@ -793,10 +791,9 @@ vclock_follow_xrow(struct vclock* vclock, const struct xrow_header *row)
/* Never confirm LSN out of order. */
panic("LSN for %u is used twice or COMMIT order is broken: "
"confirmed: %lld, new: %lld, req: %s",
- (unsigned) row->replica_id,
- (long long) vclock_get(vclock, row->replica_id),
- (long long) row->lsn,
- req_str);
+ (unsigned)row->replica_id,
+ (long long)vclock_get(vclock, row->replica_id),
+ (long long)row->lsn, req_str);
}
return vclock_follow(vclock, row->replica_id, row->lsn);
}
@@ -853,8 +850,7 @@ xrow_encode_dml_xc(const struct request *request, struct region *region,
/** @copydoc xrow_decode_call. */
static inline void
-xrow_decode_call_xc(const struct xrow_header *row,
- struct call_request *request)
+xrow_decode_call_xc(const struct xrow_header *row, struct call_request *request)
{
if (xrow_decode_call(row, request) != 0)
diag_raise();
@@ -862,8 +858,7 @@ xrow_decode_call_xc(const struct xrow_header *row,
/** @copydoc xrow_decode_auth. */
static inline void
-xrow_decode_auth_xc(const struct xrow_header *row,
- struct auth_request *request)
+xrow_decode_auth_xc(const struct xrow_header *row, struct auth_request *request)
{
if (xrow_decode_auth(row, request) != 0)
diag_raise();
@@ -891,8 +886,8 @@ xrow_decode_ballot_xc(struct xrow_header *row, struct ballot *ballot)
/** @copydoc xrow_encode_register. */
static inline void
xrow_encode_register_xc(struct xrow_header *row,
- const struct tt_uuid *instance_uuid,
- const struct vclock *vclock)
+ const struct tt_uuid *instance_uuid,
+ const struct vclock *vclock)
{
if (xrow_encode_register(row, instance_uuid, vclock) != 0)
diag_raise();
@@ -906,8 +901,8 @@ xrow_encode_subscribe_xc(struct xrow_header *row,
const struct vclock *vclock, bool anon,
uint32_t id_filter)
{
- if (xrow_encode_subscribe(row, replicaset_uuid, instance_uuid,
- vclock, anon, id_filter) != 0)
+ if (xrow_encode_subscribe(row, replicaset_uuid, instance_uuid, vclock,
+ anon, id_filter) != 0)
diag_raise();
}
@@ -919,9 +914,8 @@ xrow_decode_subscribe_xc(struct xrow_header *row,
uint32_t *replica_version_id, bool *anon,
uint32_t *id_filter)
{
- if (xrow_decode_subscribe(row, replicaset_uuid, instance_uuid,
- vclock, replica_version_id, anon,
- id_filter) != 0)
+ if (xrow_decode_subscribe(row, replicaset_uuid, instance_uuid, vclock,
+ replica_version_id, anon, id_filter) != 0)
diag_raise();
}
@@ -1007,7 +1001,7 @@ iproto_reply_vclock_xc(struct obuf *out, const struct vclock *vclock,
/** @copydoc iproto_reply_vote. */
static inline void
iproto_reply_vote_xc(struct obuf *out, const struct ballot *ballot,
- uint64_t sync, uint32_t schema_version)
+ uint64_t sync, uint32_t schema_version)
{
if (iproto_reply_vote(out, ballot, sync, schema_version) != 0)
diag_raise();
diff --git a/src/box/xrow_io.cc b/src/box/xrow_io.cc
index 4870798..f746b52 100644
--- a/src/box/xrow_io.cc
+++ b/src/box/xrow_io.cc
@@ -44,21 +44,20 @@ coio_read_xrow(struct ev_io *coio, struct ibuf *in, struct xrow_header *row)
/* Read length */
if (mp_typeof(*in->rpos) != MP_UINT) {
- tnt_raise(ClientError, ER_INVALID_MSGPACK,
- "packet length");
+ tnt_raise(ClientError, ER_INVALID_MSGPACK, "packet length");
}
ssize_t to_read = mp_check_uint(in->rpos, in->wpos);
if (to_read > 0)
coio_breadn(coio, in, to_read);
- uint32_t len = mp_decode_uint((const char **) &in->rpos);
+ uint32_t len = mp_decode_uint((const char **)&in->rpos);
/* Read header and body */
to_read = len - ibuf_used(in);
if (to_read > 0)
coio_breadn(coio, in, to_read);
- xrow_header_decode_xc(row, (const char **) &in->rpos, in->rpos + len,
+ xrow_header_decode_xc(row, (const char **)&in->rpos, in->rpos + len,
true);
}
@@ -75,26 +74,24 @@ coio_read_xrow_timeout_xc(struct ev_io *coio, struct ibuf *in,
/* Read length */
if (mp_typeof(*in->rpos) != MP_UINT) {
- tnt_raise(ClientError, ER_INVALID_MSGPACK,
- "packet length");
+ tnt_raise(ClientError, ER_INVALID_MSGPACK, "packet length");
}
ssize_t to_read = mp_check_uint(in->rpos, in->wpos);
if (to_read > 0)
coio_breadn_timeout(coio, in, to_read, delay);
coio_timeout_update(&start, &delay);
- uint32_t len = mp_decode_uint((const char **) &in->rpos);
+ uint32_t len = mp_decode_uint((const char **)&in->rpos);
/* Read header and body */
to_read = len - ibuf_used(in);
if (to_read > 0)
coio_breadn_timeout(coio, in, to_read, delay);
- xrow_header_decode_xc(row, (const char **) &in->rpos, in->rpos + len,
+ xrow_header_decode_xc(row, (const char **)&in->rpos, in->rpos + len,
true);
}
-
void
coio_write_xrow(struct ev_io *coio, const struct xrow_header *row)
{
@@ -102,4 +99,3 @@ coio_write_xrow(struct ev_io *coio, const struct xrow_header *row)
int iovcnt = xrow_to_iovec_xc(row, iov);
coio_writev(coio, iov, iovcnt, 0);
}
-
diff --git a/src/box/xrow_io.h b/src/box/xrow_io.h
index 0eb7a8a..eedad3d 100644
--- a/src/box/xrow_io.h
+++ b/src/box/xrow_io.h
@@ -48,7 +48,6 @@ coio_read_xrow_timeout_xc(struct ev_io *coio, struct ibuf *in,
void
coio_write_xrow(struct ev_io *coio, const struct xrow_header *row);
-
#if defined(__cplusplus)
} /* extern "C" */
#endif
diff --git a/src/box/xrow_update.c b/src/box/xrow_update.c
index 0493c0d..68e1395 100644
--- a/src/box/xrow_update.c
+++ b/src/box/xrow_update.c
@@ -102,8 +102,7 @@
*/
/** Update internal state */
-struct xrow_update
-{
+struct xrow_update {
/** Operations array. */
struct xrow_update_op *ops;
/** Length of ops. */
@@ -168,9 +167,8 @@ xrow_update_read_ops(struct xrow_update *update, const char *expr,
}
int size = update->op_count * sizeof(update->ops[0]);
- update->ops = (struct xrow_update_op *)
- region_aligned_alloc(&fiber()->gc, size,
- alignof(struct xrow_update_op));
+ update->ops = (struct xrow_update_op *)region_aligned_alloc(
+ &fiber()->gc, size, alignof(struct xrow_update_op));
if (update->ops == NULL) {
diag_set(OutOfMemory, size, "region_aligned_alloc",
"update->ops");
@@ -253,7 +251,7 @@ xrow_update_read_ops(struct xrow_update *update, const char *expr,
if (opcode == '!')
++field_count_hint;
else if (opcode == '#')
- field_count_hint -= (int32_t) op->arg.del.count;
+ field_count_hint -= (int32_t)op->arg.del.count;
if (opcode == '!' || opcode == '#')
/*
@@ -349,7 +347,7 @@ xrow_update_finish(struct xrow_update *update, struct tuple_format *format,
uint32_t *p_tuple_len)
{
uint32_t tuple_len = xrow_update_array_sizeof(&update->root);
- char *buffer = (char *) region_alloc(&fiber()->gc, tuple_len);
+ char *buffer = (char *)region_alloc(&fiber()->gc, tuple_len);
if (buffer == NULL) {
diag_set(OutOfMemory, tuple_len, "region_alloc", "buffer");
return NULL;
@@ -371,7 +369,7 @@ xrow_update_check_ops(const char *expr, const char *expr_end,
}
const char *
-xrow_update_execute(const char *expr,const char *expr_end,
+xrow_update_execute(const char *expr, const char *expr_end,
const char *old_data, const char *old_data_end,
struct tuple_format *format, uint32_t *p_tuple_len,
int index_base, uint64_t *column_mask)
@@ -394,7 +392,7 @@ xrow_update_execute(const char *expr,const char *expr_end,
}
const char *
-xrow_upsert_execute(const char *expr,const char *expr_end,
+xrow_upsert_execute(const char *expr, const char *expr_end,
const char *old_data, const char *old_data_end,
struct tuple_format *format, uint32_t *p_tuple_len,
int index_base, bool suppress_error, uint64_t *column_mask)
@@ -417,19 +415,18 @@ xrow_upsert_execute(const char *expr,const char *expr_end,
}
const char *
-xrow_upsert_squash(const char *expr1, const char *expr1_end,
- const char *expr2, const char *expr2_end,
- struct tuple_format *format, size_t *result_size,
- int index_base)
+xrow_upsert_squash(const char *expr1, const char *expr1_end, const char *expr2,
+ const char *expr2_end, struct tuple_format *format,
+ size_t *result_size, int index_base)
{
- const char *expr[2] = {expr1, expr2};
- const char *expr_end[2] = {expr1_end, expr2_end};
+ const char *expr[2] = { expr1, expr2 };
+ const char *expr_end[2] = { expr1_end, expr2_end };
struct xrow_update update[2];
struct tuple_dictionary *dict = format->dict;
for (int j = 0; j < 2; j++) {
xrow_update_init(&update[j], index_base);
- if (xrow_update_read_ops(&update[j], expr[j], expr_end[j],
- dict, 0) != 0)
+ if (xrow_update_read_ops(&update[j], expr[j], expr_end[j], dict,
+ 0) != 0)
return NULL;
mp_decode_array(&expr[j]);
int32_t prev_field_no = index_base - 1;
@@ -454,8 +451,8 @@ xrow_upsert_squash(const char *expr1, const char *expr1_end,
}
size_t possible_size = expr1_end - expr1 + expr2_end - expr2;
const uint32_t space_for_arr_tag = 5;
- char *buf = (char *) region_alloc(&fiber()->gc,
- possible_size + space_for_arr_tag);
+ char *buf = (char *)region_alloc(&fiber()->gc,
+ possible_size + space_for_arr_tag);
if (buf == NULL) {
diag_set(OutOfMemory, possible_size + space_for_arr_tag,
"region_alloc", "buf");
@@ -465,16 +462,16 @@ xrow_upsert_squash(const char *expr1, const char *expr1_end,
char *res_ops = buf + space_for_arr_tag;
uint32_t res_count = 0; /* number of resulting operations */
- uint32_t op_count[2] = {update[0].op_count, update[1].op_count};
- uint32_t op_no[2] = {0, 0};
+ uint32_t op_count[2] = { update[0].op_count, update[1].op_count };
+ uint32_t op_no[2] = { 0, 0 };
struct json_tree *format_tree = &format->fields;
struct json_token *root = &format_tree->root;
struct json_token token;
token.type = JSON_TOKEN_NUM;
while (op_no[0] < op_count[0] || op_no[1] < op_count[1]) {
res_count++;
- struct xrow_update_op *op[2] = {update[0].ops + op_no[0],
- update[1].ops + op_no[1]};
+ struct xrow_update_op *op[2] = { update[0].ops + op_no[0],
+ update[1].ops + op_no[1] };
/*
* from:
* 0 - take op from first update,
@@ -482,11 +479,13 @@ xrow_upsert_squash(const char *expr1, const char *expr1_end,
* 2 - merge both ops
*/
uint32_t from;
- uint32_t has[2] = {op_no[0] < op_count[0], op_no[1] < op_count[1]};
+ uint32_t has[2] = { op_no[0] < op_count[0],
+ op_no[1] < op_count[1] };
assert(has[0] || has[1]);
if (has[0] && has[1]) {
from = op[0]->field_no < op[1]->field_no ? 0 :
- op[0]->field_no > op[1]->field_no ? 1 : 2;
+ op[0]->field_no > op[1]->field_no ? 1 :
+ 2;
} else {
assert(has[0] != has[1]);
from = has[1];
@@ -527,21 +526,20 @@ xrow_upsert_squash(const char *expr1, const char *expr1_end,
*/
if (op[0]->opcode == '=') {
if (xrow_mp_read_arg_arith(op[0], &op[0]->arg.set.value,
- &arith) != 0)
+ &arith) != 0)
return NULL;
} else {
arith = op[0]->arg.arith;
}
struct xrow_update_op res;
- if (xrow_update_arith_make(op[1], arith,
- &res.arg.arith) != 0)
+ if (xrow_update_arith_make(op[1], arith, &res.arg.arith) != 0)
return NULL;
res_ops = mp_encode_array(res_ops, 3);
- res_ops = mp_encode_str(res_ops,
- (const char *)&op[0]->opcode, 1);
+ res_ops =
+ mp_encode_str(res_ops, (const char *)&op[0]->opcode, 1);
token.num = op[0]->field_no;
- res_ops = mp_encode_uint(res_ops, token.num +
- update[0].index_base);
+ res_ops = mp_encode_uint(res_ops,
+ token.num + update[0].index_base);
struct json_token *this_node =
json_tree_lookup(format_tree, root, &token);
xrow_update_op_store_arith(&res, format_tree, this_node, NULL,
@@ -554,8 +552,7 @@ xrow_upsert_squash(const char *expr1, const char *expr1_end,
}
assert(op_no[0] == op_count[0] && op_no[1] == op_count[1]);
assert(expr[0] == expr_end[0] && expr[1] == expr_end[1]);
- char *arr_start = buf + space_for_arr_tag -
- mp_sizeof_array(res_count);
+ char *arr_start = buf + space_for_arr_tag - mp_sizeof_array(res_count);
mp_encode_array(arr_start, res_count);
*result_size = res_ops - arr_start;
return arr_start;
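Several hunks above are purely cosmetic, e.g. `{expr1, expr2}` becoming `{ expr1, expr2 }` (the spacing produced when `Cpp11BracedListStyle` is disabled in the `.clang-format` file). A minimal sketch, not part of the patch, showing that the two spellings initialize identical objects:

```c
#include <string.h>
#include <assert.h>

/* Toy arrays (not from the patch): the brace spacing is whitespace
 * only, so both initializers produce the same contents. */
static const char *compact[2] = {"expr1", "expr2"};
static const char *spaced[2] = { "expr1", "expr2" };

static int
braced_spacing_equivalent(void)
{
	return strcmp(compact[0], spaced[0]) == 0 &&
	       strcmp(compact[1], spaced[1]) == 0;
}
```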
diff --git a/src/box/xrow_update.h b/src/box/xrow_update.h
index d48c379..281c2fb 100644
--- a/src/box/xrow_update.h
+++ b/src/box/xrow_update.h
@@ -51,7 +51,7 @@ xrow_update_check_ops(const char *expr, const char *expr_end,
struct tuple_format *format, int index_base);
const char *
-xrow_update_execute(const char *expr,const char *expr_end,
+xrow_update_execute(const char *expr, const char *expr_end,
const char *old_data, const char *old_data_end,
struct tuple_format *format, uint32_t *p_new_size,
int index_base, uint64_t *column_mask);
@@ -60,8 +60,7 @@ const char *
xrow_upsert_execute(const char *expr, const char *expr_end,
const char *old_data, const char *old_data_end,
struct tuple_format *format, uint32_t *p_new_size,
- int index_base, bool suppress_error,
- uint64_t *column_mask);
+ int index_base, bool suppress_error, uint64_t *column_mask);
/**
* Try to merge two update/upsert expressions to an equivalent one.
@@ -74,10 +73,9 @@ xrow_upsert_execute(const char *expr, const char *expr_end,
* If it isn't possible to merge expressions NULL is returned.
*/
const char *
-xrow_upsert_squash(const char *expr1, const char *expr1_end,
- const char *expr2, const char *expr2_end,
- struct tuple_format *format, size_t *result_size,
- int index_base);
+xrow_upsert_squash(const char *expr1, const char *expr1_end, const char *expr2,
+ const char *expr2_end, struct tuple_format *format,
+ size_t *result_size, int index_base);
#if defined(__cplusplus)
} /* extern "C" */
diff --git a/src/box/xrow_update_array.c b/src/box/xrow_update_array.c
index 717466b..f90e539 100644
--- a/src/box/xrow_update_array.c
+++ b/src/box/xrow_update_array.c
@@ -47,8 +47,8 @@ xrow_update_op_prepare_num_token(struct xrow_update_op *op)
if (op->is_token_consumed && xrow_update_op_next_token(op) != 0)
return -1;
if (op->token_type != JSON_TOKEN_NUM) {
- return xrow_update_err(op, "can't update an array by a "\
- "non-numeric index");
+ return xrow_update_err(op, "can't update an array by a "
+ "non-numeric index");
}
return 0;
}
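The hunk above also drops the stray `\` after the first half of the split error message. The backslash is redundant: adjacent string literals are concatenated during translation regardless of line continuations. A small standalone sketch (the message mirrors the one in `xrow_update_op_prepare_num_token`):

```c
#include <string.h>
#include <assert.h>

/* With the trailing '\' removed, the two literals on consecutive
 * lines still merge into a single string constant. */
static const char *err_msg = "can't update an array by a "
			     "non-numeric index";
```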
@@ -117,9 +117,10 @@ xrow_update_array_item_split(struct region *region,
struct xrow_update_array_item *prev, size_t size,
size_t offset)
{
- (void) size;
- struct xrow_update_array_item *next = (struct xrow_update_array_item *)
- xrow_update_alloc(region, sizeof(*next));
+ (void)size;
+ struct xrow_update_array_item *next =
+ (struct xrow_update_array_item *)xrow_update_alloc(
+ region, sizeof(*next));
if (next == NULL)
return NULL;
assert(offset > 0 && prev->tail_size > 0);
@@ -175,8 +176,9 @@ xrow_update_array_create(struct xrow_update_field *field, const char *header,
field->array.rope = xrow_update_rope_new(region);
if (field->array.rope == NULL)
return -1;
- struct xrow_update_array_item *item = (struct xrow_update_array_item *)
- xrow_update_alloc(region, sizeof(*item));
+ struct xrow_update_array_item *item =
+ (struct xrow_update_array_item *)xrow_update_alloc(
+ region, sizeof(*item));
if (item == NULL)
return -1;
if (data == data_end)
@@ -206,8 +208,9 @@ xrow_update_array_create_with_child(struct xrow_update_field *field,
struct xrow_update_rope *rope = xrow_update_rope_new(region);
if (rope == NULL)
return -1;
- struct xrow_update_array_item *item = (struct xrow_update_array_item *)
- xrow_update_alloc(region, sizeof(*item));
+ struct xrow_update_array_item *item =
+ (struct xrow_update_array_item *)xrow_update_alloc(
+ region, sizeof(*item));
if (item == NULL)
return -1;
const char *end = first_field_end;
@@ -219,8 +222,8 @@ xrow_update_array_create_with_child(struct xrow_update_field *field,
end - first_field_end);
if (xrow_update_rope_append(rope, item, field_no) != 0)
return -1;
- item = (struct xrow_update_array_item *)
- xrow_update_alloc(region, sizeof(*item));
+ item = (struct xrow_update_array_item *)xrow_update_alloc(
+ region, sizeof(*item));
if (item == NULL)
return -1;
first_field = end;
@@ -292,10 +295,12 @@ xrow_update_array_store(struct xrow_update_field *field,
for (; node != NULL; node = xrow_update_rope_iter_next(&it)) {
struct xrow_update_array_item *item =
xrow_update_rope_leaf_data(node);
- next_node = json_tree_lookup(format_tree, this_node, &token);
+ next_node = json_tree_lookup(format_tree, this_node,
+ &token);
uint32_t field_count = xrow_update_rope_leaf_size(node);
- out += xrow_update_field_store(&item->field, format_tree,
- next_node, out, out_end);
+ out += xrow_update_field_store(&item->field,
+ format_tree, next_node,
+ out, out_end);
assert(item->tail_size == 0 || field_count > 1);
memcpy(out, item->field.data + item->field.size,
item->tail_size);
@@ -304,7 +309,7 @@ xrow_update_array_store(struct xrow_update_field *field,
total_field_count += field_count;
}
}
- (void) total_field_count;
+ (void)total_field_count;
assert(xrow_update_rope_size(field->array.rope) == total_field_count);
assert(out <= out_end);
return out - out_begin;
@@ -332,8 +337,8 @@ xrow_update_op_do_array_insert(struct xrow_update_op *op,
if (xrow_update_op_adjust_field_no(op, size + 1) != 0)
return -1;
- item = (struct xrow_update_array_item *)
- xrow_update_alloc(rope->ctx, sizeof(*item));
+ item = (struct xrow_update_array_item *)xrow_update_alloc(
+ rope->ctx, sizeof(*item));
if (item == NULL)
return -1;
xrow_update_array_item_create(item, XUPDATE_NOP, op->arg.set.value,
@@ -351,7 +356,7 @@ xrow_update_op_do_array_set(struct xrow_update_op *op,
return -1;
/* Interpret '=' for n + 1 field as insert. */
- if (op->field_no == (int32_t) xrow_update_rope_size(rope))
+ if (op->field_no == (int32_t)xrow_update_rope_size(rope))
return xrow_update_op_do_array_insert(op, field);
struct xrow_update_array_item *item =
@@ -396,7 +401,7 @@ xrow_update_op_do_array_delete(struct xrow_update_op *op,
if (xrow_update_op_adjust_field_no(op, size) != 0)
return -1;
uint32_t delete_count = op->arg.del.count;
- if ((uint64_t) op->field_no + delete_count > size)
+ if ((uint64_t)op->field_no + delete_count > size)
delete_count = size - op->field_no;
assert(delete_count > 0);
for (uint32_t u = delete_count; u != 0; --u)
@@ -404,29 +409,29 @@ xrow_update_op_do_array_delete(struct xrow_update_op *op,
return 0;
}
-#define DO_SCALAR_OP_GENERIC(op_type) \
-int \
-xrow_update_op_do_array_##op_type(struct xrow_update_op *op, \
- struct xrow_update_field *field) \
-{ \
- if (xrow_update_op_prepare_num_token(op) != 0) \
- return -1; \
- struct xrow_update_array_item *item = \
- xrow_update_array_extract_item(field, op); \
- if (item == NULL) \
- return -1; \
- if (!xrow_update_op_is_term(op)) { \
- op->is_token_consumed = true; \
- return xrow_update_op_do_field_##op_type(op, &item->field); \
- } \
- if (item->field.type != XUPDATE_NOP) \
- return xrow_update_err_double(op); \
- if (xrow_update_op_do_##op_type(op, item->field.data) != 0) \
- return -1; \
- item->field.type = XUPDATE_SCALAR; \
- item->field.scalar.op = op; \
- return 0; \
-}
+#define DO_SCALAR_OP_GENERIC(op_type) \
+ int xrow_update_op_do_array_##op_type(struct xrow_update_op *op, \
+ struct xrow_update_field *field) \
+ { \
+ if (xrow_update_op_prepare_num_token(op) != 0) \
+ return -1; \
+ struct xrow_update_array_item *item = \
+ xrow_update_array_extract_item(field, op); \
+ if (item == NULL) \
+ return -1; \
+ if (!xrow_update_op_is_term(op)) { \
+ op->is_token_consumed = true; \
+ return xrow_update_op_do_field_##op_type( \
+ op, &item->field); \
+ } \
+ if (item->field.type != XUPDATE_NOP) \
+ return xrow_update_err_double(op); \
+ if (xrow_update_op_do_##op_type(op, item->field.data) != 0) \
+ return -1; \
+ item->field.type = XUPDATE_SCALAR; \
+ item->field.scalar.op = op; \
+ return 0; \
+ }
DO_SCALAR_OP_GENERIC(arith)
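The `DO_SCALAR_OP_GENERIC` rewrite above indents the macro body one level and realigns the continuation backslashes; the expanded token stream is unchanged, since leading whitespace inside a macro definition is insignificant. A hypothetical miniature of the same pattern:

```c
#include <assert.h>

/* Indented macro body in the new style: the tabs inside the
 * definition do not affect the generated function. */
#define DECLARE_DOUBLE(name)                  \
	int double_##name(int x)              \
	{                                     \
		return 2 * x;                 \
	}

DECLARE_DOUBLE(demo)
```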
diff --git a/src/box/xrow_update_bar.c b/src/box/xrow_update_bar.c
index 796f340..28ff572 100644
--- a/src/box/xrow_update_bar.c
+++ b/src/box/xrow_update_bar.c
@@ -66,8 +66,7 @@ xrow_update_bar_finish(struct xrow_update_field *field)
*/
static inline int
xrow_update_bar_locate(struct xrow_update_op *op,
- struct xrow_update_field *field,
- int *key_len_or_index)
+ struct xrow_update_field *field, int *key_len_or_index)
{
/*
* Bar update is not flat by definition. It always has a
@@ -88,7 +87,6 @@ xrow_update_bar_locate(struct xrow_update_op *op,
struct json_token token;
while ((rc = json_lexer_next_token(&op->lexer, &token)) == 0 &&
token.type != JSON_TOKEN_END) {
-
switch (token.type) {
case JSON_TOKEN_NUM:
field->bar.parent = pos;
@@ -186,11 +184,11 @@ xrow_update_bar_locate_opt(struct xrow_update_op *op,
if (token.type == JSON_TOKEN_NUM) {
const char *tmp = field->bar.parent;
if (mp_typeof(*tmp) != MP_ARRAY) {
- return xrow_update_err(op, "can not access by index a "\
- "non-array field");
+ return xrow_update_err(op, "can not access by index a "
+ "non-array field");
}
uint32_t size = mp_decode_array(&tmp);
- if ((uint32_t) token.num > size)
+ if ((uint32_t)token.num > size)
return xrow_update_err_no_such_field(op);
/*
* The updated point is in an array, its position
@@ -199,7 +197,7 @@ xrow_update_bar_locate_opt(struct xrow_update_op *op,
* to append a new array element. The following
* code tries to find the array's end.
*/
- assert((uint32_t) token.num == size);
+ assert((uint32_t)token.num == size);
if (field->bar.parent == field->data) {
/*
* Optimization for the case when the path
@@ -220,8 +218,8 @@ xrow_update_bar_locate_opt(struct xrow_update_op *op,
field->bar.new_key = token.str;
field->bar.new_key_len = token.len;
if (mp_typeof(*field->bar.parent) != MP_MAP) {
- return xrow_update_err(op, "can not access by key a "\
- "non-map field");
+ return xrow_update_err(op, "can not access by key a "
+ "non-map field");
}
}
return 0;
@@ -306,19 +304,18 @@ xrow_update_op_do_nop_delete(struct xrow_update_op *op,
return xrow_update_bar_finish(field);
}
-#define DO_NOP_OP_GENERIC(op_type) \
-int \
-xrow_update_op_do_nop_##op_type(struct xrow_update_op *op, \
- struct xrow_update_field *field) \
-{ \
- assert(field->type == XUPDATE_NOP); \
- int key_len_or_index; \
- if (xrow_update_bar_locate(op, field, &key_len_or_index) != 0) \
- return -1; \
- if (xrow_update_op_do_##op_type(op, field->bar.point) != 0) \
- return -1; \
- return xrow_update_bar_finish(field); \
-}
+#define DO_NOP_OP_GENERIC(op_type) \
+ int xrow_update_op_do_nop_##op_type(struct xrow_update_op *op, \
+ struct xrow_update_field *field) \
+ { \
+ assert(field->type == XUPDATE_NOP); \
+ int key_len_or_index; \
+ if (xrow_update_bar_locate(op, field, &key_len_or_index) != 0) \
+ return -1; \
+ if (xrow_update_op_do_##op_type(op, field->bar.point) != 0) \
+ return -1; \
+ return xrow_update_bar_finish(field); \
+ }
DO_NOP_OP_GENERIC(arith)
@@ -328,17 +325,16 @@ DO_NOP_OP_GENERIC(splice)
#undef DO_NOP_OP_GENERIC
-#define DO_BAR_OP_GENERIC(op_type) \
-int \
-xrow_update_op_do_bar_##op_type(struct xrow_update_op *op, \
- struct xrow_update_field *field) \
-{ \
- assert(field->type == XUPDATE_BAR); \
- field = xrow_update_route_branch(field, op); \
- if (field == NULL) \
- return -1; \
- return xrow_update_op_do_field_##op_type(op, field); \
-}
+#define DO_BAR_OP_GENERIC(op_type) \
+ int xrow_update_op_do_bar_##op_type(struct xrow_update_op *op, \
+ struct xrow_update_field *field) \
+ { \
+ assert(field->type == XUPDATE_BAR); \
+ field = xrow_update_route_branch(field, op); \
+ if (field == NULL) \
+ return -1; \
+ return xrow_update_op_do_field_##op_type(op, field); \
+ }
DO_BAR_OP_GENERIC(insert)
@@ -358,7 +354,7 @@ uint32_t
xrow_update_bar_sizeof(struct xrow_update_field *field)
{
assert(field->type == XUPDATE_BAR);
- switch(field->bar.op->opcode) {
+ switch (field->bar.op->opcode) {
case '!': {
const char *parent = field->bar.parent;
uint32_t size = field->size + field->bar.op->new_field_len;
@@ -401,10 +397,10 @@ xrow_update_bar_store(struct xrow_update_field *field,
struct json_token *this_node, char *out, char *out_end)
{
assert(field->type == XUPDATE_BAR);
- (void) out_end;
+ (void)out_end;
struct xrow_update_op *op = field->bar.op;
char *out_saved = out;
- switch(op->opcode) {
+ switch (op->opcode) {
case '!': {
const char *pos = field->bar.parent;
uint32_t before_parent = pos - field->data;
diff --git a/src/box/xrow_update_field.c b/src/box/xrow_update_field.c
index 1095ece..6f9118b 100644
--- a/src/box/xrow_update_field.c
+++ b/src/box/xrow_update_field.c
@@ -85,8 +85,9 @@ int
xrow_update_err_no_such_field(const struct xrow_update_op *op)
{
if (op->lexer.src == NULL) {
- diag_set(ClientError, ER_NO_SUCH_FIELD_NO, op->field_no +
- (op->field_no >= 0 ? TUPLE_INDEX_BASE : 0));
+ diag_set(ClientError, ER_NO_SUCH_FIELD_NO,
+ op->field_no +
+ (op->field_no >= 0 ? TUPLE_INDEX_BASE : 0));
return -1;
}
diag_set(ClientError, ER_NO_SUCH_FIELD_NAME,
@@ -132,7 +133,7 @@ xrow_update_field_store(struct xrow_update_field *field,
struct json_token *this_node, char *out, char *out_end)
{
struct xrow_update_op *op;
- switch(field->type) {
+ switch (field->type) {
case XUPDATE_NOP:
assert(out_end - out >= field->size);
memcpy(out, field->data, field->size);
@@ -188,7 +189,7 @@ xrow_mp_read_arg_arith(struct xrow_update_op *op, const char **expr,
{
int8_t ext_type;
uint32_t len;
- switch(mp_typeof(**expr)) {
+ switch (mp_typeof(**expr)) {
case MP_UINT:
ret->type = XUPDATE_TYPE_INT;
int96_set_unsigned(&ret->int96, mp_decode_uint(expr));
@@ -237,10 +238,10 @@ static int
xrow_update_read_arg_set(struct xrow_update_op *op, const char **expr,
int index_base)
{
- (void) index_base;
+ (void)index_base;
op->arg.set.value = *expr;
mp_next(expr);
- op->arg.set.length = (uint32_t) (*expr - op->arg.set.value);
+ op->arg.set.length = (uint32_t)(*expr - op->arg.set.value);
return 0;
}
@@ -248,9 +249,9 @@ static int
xrow_update_read_arg_delete(struct xrow_update_op *op, const char **expr,
int index_base)
{
- (void) index_base;
+ (void)index_base;
if (mp_typeof(**expr) == MP_UINT) {
- op->arg.del.count = (uint32_t) mp_decode_uint(expr);
+ op->arg.del.count = (uint32_t)mp_decode_uint(expr);
if (op->arg.del.count != 0)
return 0;
return xrow_update_err(op, "cannot delete 0 fields");
@@ -262,7 +263,7 @@ static int
xrow_update_read_arg_arith(struct xrow_update_op *op, const char **expr,
int index_base)
{
- (void) index_base;
+ (void)index_base;
return xrow_mp_read_arg_arith(op, expr, &op->arg.arith);
}
@@ -270,7 +271,7 @@ static int
xrow_update_read_arg_bit(struct xrow_update_op *op, const char **expr,
int index_base)
{
- (void) index_base;
+ (void)index_base;
return xrow_update_mp_read_uint(op, expr, &op->arg.bit.val);
}
@@ -373,7 +374,7 @@ xrow_update_arith_make(struct xrow_update_op *op,
lowest_type = arg2.type;
if (lowest_type == XUPDATE_TYPE_INT) {
- switch(opcode) {
+ switch (opcode) {
case '+':
int96_add(&arg1.int96, &arg2.int96);
break;
@@ -393,7 +394,7 @@ xrow_update_arith_make(struct xrow_update_op *op,
double a = xrow_update_arg_arith_to_double(arg1);
double b = xrow_update_arg_arith_to_double(arg2);
double c;
- switch(opcode) {
+ switch (opcode) {
case '+':
c = a + b;
break;
@@ -417,13 +418,13 @@ xrow_update_arith_make(struct xrow_update_op *op,
return 0;
} else {
decimal_t a, b, c;
- if (! xrow_update_arg_arith_to_decimal(arg1, &a) ||
- ! xrow_update_arg_arith_to_decimal(arg2, &b)) {
- return xrow_update_err_arg_type(op, "a number "\
- "convertible to "\
- "decimal");
+ if (!xrow_update_arg_arith_to_decimal(arg1, &a) ||
+ !xrow_update_arg_arith_to_decimal(arg2, &b)) {
+ return xrow_update_err_arg_type(op, "a number "
+ "convertible to "
+ "decimal");
}
- switch(opcode) {
+ switch (opcode) {
case '+':
if (decimal_add(&c, &a, &b) == NULL)
return xrow_update_err_decimal_overflow(op);
@@ -482,7 +483,7 @@ xrow_update_op_do_splice(struct xrow_update_op *op, const char *old)
{
struct xrow_update_arg_splice *arg = &op->arg.splice;
int32_t str_len = 0;
- if (xrow_update_mp_read_str(op, &old, (uint32_t *) &str_len, &old) != 0)
+ if (xrow_update_mp_read_str(op, &old, (uint32_t *)&str_len, &old) != 0)
return -1;
if (arg->offset < 0) {
@@ -520,9 +521,9 @@ xrow_update_op_store_set(struct xrow_update_op *op,
struct json_token *this_node, const char *in,
char *out)
{
- (void) format_tree;
- (void) this_node;
- (void) in;
+ (void)format_tree;
+ (void)this_node;
+ (void)in;
memcpy(out, op->arg.set.value, op->arg.set.length);
return op->arg.set.length;
}
@@ -533,19 +534,19 @@ xrow_update_op_store_arith(struct xrow_update_op *op,
struct json_token *this_node, const char *in,
char *out)
{
- (void) format_tree;
- (void) in;
+ (void)format_tree;
+ (void)in;
char *begin = out;
struct xrow_update_arg_arith *arg = &op->arg.arith;
switch (arg->type) {
case XUPDATE_TYPE_INT:
if (int96_is_uint64(&arg->int96)) {
- out = mp_encode_uint(
- out, int96_extract_uint64(&arg->int96));
+ out = mp_encode_uint(out,
+ int96_extract_uint64(&arg->int96));
} else {
assert(int96_is_neg_int64(&arg->int96));
out = mp_encode_int(
- out, int96_extract_neg_int64( &arg->int96));
+ out, int96_extract_neg_int64(&arg->int96));
}
break;
case XUPDATE_TYPE_DOUBLE:
@@ -555,7 +556,8 @@ xrow_update_op_store_arith(struct xrow_update_op *op,
if (this_node != NULL) {
enum field_type type =
json_tree_entry(this_node, struct tuple_field,
- token)->type;
+ token)
+ ->type;
if (type == FIELD_TYPE_DOUBLE) {
out = mp_encode_double(out, arg->flt);
break;
@@ -577,9 +579,9 @@ xrow_update_op_store_bit(struct xrow_update_op *op,
struct json_token *this_node, const char *in,
char *out)
{
- (void) format_tree;
- (void) this_node;
- (void) in;
+ (void)format_tree;
+ (void)this_node;
+ (void)in;
char *end = mp_encode_uint(out, op->arg.bit.val);
return end - out;
}
@@ -590,13 +592,13 @@ xrow_update_op_store_splice(struct xrow_update_op *op,
struct json_token *this_node, const char *in,
char *out)
{
- (void) format_tree;
- (void) this_node;
+ (void)format_tree;
+ (void)this_node;
struct xrow_update_arg_splice *arg = &op->arg.splice;
- uint32_t new_str_len = arg->offset + arg->paste_length +
- arg->tail_length;
+ uint32_t new_str_len =
+ arg->offset + arg->paste_length + arg->tail_length;
char *begin = out;
- (void) mp_decode_strl(&in);
+ (void)mp_decode_strl(&in);
out = mp_encode_strl(out, new_str_len);
/* Copy field head. */
memcpy(out, in, arg->offset);
@@ -614,27 +616,27 @@ xrow_update_op_store_splice(struct xrow_update_op *op,
static const struct xrow_update_op_meta op_set = {
xrow_update_read_arg_set, xrow_update_op_do_field_set,
- (xrow_update_op_store_f) xrow_update_op_store_set, 3
+ (xrow_update_op_store_f)xrow_update_op_store_set, 3
};
static const struct xrow_update_op_meta op_insert = {
xrow_update_read_arg_set, xrow_update_op_do_field_insert,
- (xrow_update_op_store_f) xrow_update_op_store_set, 3
+ (xrow_update_op_store_f)xrow_update_op_store_set, 3
};
static const struct xrow_update_op_meta op_arith = {
xrow_update_read_arg_arith, xrow_update_op_do_field_arith,
- (xrow_update_op_store_f) xrow_update_op_store_arith, 3
+ (xrow_update_op_store_f)xrow_update_op_store_arith, 3
};
static const struct xrow_update_op_meta op_bit = {
xrow_update_read_arg_bit, xrow_update_op_do_field_bit,
- (xrow_update_op_store_f) xrow_update_op_store_bit, 3
+ (xrow_update_op_store_f)xrow_update_op_store_bit, 3
};
static const struct xrow_update_op_meta op_splice = {
xrow_update_read_arg_splice, xrow_update_op_do_field_splice,
- (xrow_update_op_store_f) xrow_update_op_store_splice, 5
+ (xrow_update_op_store_f)xrow_update_op_store_splice, 5
};
static const struct xrow_update_op_meta op_delete = {
xrow_update_read_arg_delete, xrow_update_op_do_field_delete,
- (xrow_update_op_store_f) NULL, 3
+ (xrow_update_op_store_f)NULL, 3
};
static inline const struct xrow_update_op_meta *
@@ -689,13 +691,15 @@ xrow_update_op_decode(struct xrow_update_op *op, int op_num, int index_base,
struct tuple_dictionary *dict, const char **expr)
{
if (mp_typeof(**expr) != MP_ARRAY) {
- diag_set(ClientError, ER_ILLEGAL_PARAMS, "update operation "
+ diag_set(ClientError, ER_ILLEGAL_PARAMS,
+ "update operation "
"must be an array {op,..}");
return -1;
}
uint32_t len, arg_count = mp_decode_array(expr);
if (arg_count < 1) {
- diag_set(ClientError, ER_ILLEGAL_PARAMS, "update operation "\
+ diag_set(ClientError, ER_ILLEGAL_PARAMS,
+ "update operation "
"must be an array {op,..}, got empty array");
return -1;
}
@@ -710,7 +714,7 @@ xrow_update_op_decode(struct xrow_update_op *op, int op_num, int index_base,
return -1;
op->opcode = *opcode;
if (arg_count != op->meta->arg_count) {
- const char *str = tt_sprintf("wrong number of arguments, "\
+ const char *str = tt_sprintf("wrong number of arguments, "
"expected %u, got %u",
op->meta->arg_count, arg_count);
diag_set(ClientError, ER_UNKNOWN_UPDATE_OP, op_num, str);
@@ -724,7 +728,7 @@ xrow_update_op_decode(struct xrow_update_op *op, int op_num, int index_base,
op->token_type = JSON_TOKEN_NUM;
op->is_token_consumed = false;
int32_t field_no = 0;
- switch(mp_typeof(**expr)) {
+ switch (mp_typeof(**expr)) {
case MP_INT:
case MP_UINT: {
json_lexer_create(&op->lexer, NULL, 0, 0);
@@ -744,9 +748,9 @@ xrow_update_op_decode(struct xrow_update_op *op, int op_num, int index_base,
const char *path = mp_decode_str(expr, &len);
uint32_t field_no, hash = field_name_hash(path, len);
json_lexer_create(&op->lexer, path, len, TUPLE_INDEX_BASE);
- if (tuple_fieldno_by_name(dict, path, len, hash,
- &field_no) == 0) {
- op->field_no = (int32_t) field_no;
+ if (tuple_fieldno_by_name(dict, path, len, hash, &field_no) ==
+ 0) {
+ op->field_no = (int32_t)field_no;
op->lexer.offset = len;
break;
}
@@ -762,7 +766,7 @@ xrow_update_op_decode(struct xrow_update_op *op, int op_num, int index_base,
hash = field_name_hash(token.str, token.len);
if (tuple_fieldno_by_name(dict, token.str, token.len,
hash, &field_no) == 0) {
- op->field_no = (int32_t) field_no;
+ op->field_no = (int32_t)field_no;
break;
}
FALLTHROUGH;
diff --git a/src/box/xrow_update_field.h b/src/box/xrow_update_field.h
index 193df58..3d5c39b 100644
--- a/src/box/xrow_update_field.h
+++ b/src/box/xrow_update_field.h
@@ -76,9 +76,9 @@ struct xrow_update_arg_del {
*/
enum xrow_update_arith_type {
XUPDATE_TYPE_DECIMAL = 0, /* MP_EXT + MP_DECIMAL */
- XUPDATE_TYPE_DOUBLE = 1, /* MP_DOUBLE */
- XUPDATE_TYPE_FLOAT = 2, /* MP_FLOAT */
- XUPDATE_TYPE_INT = 3 /* MP_INT/MP_UINT */
+ XUPDATE_TYPE_DOUBLE = 1, /* MP_DOUBLE */
+ XUPDATE_TYPE_FLOAT = 2, /* MP_FLOAT */
+ XUPDATE_TYPE_INT = 3 /* MP_INT/MP_UINT */
};
/**
@@ -143,19 +143,16 @@ union xrow_update_arg {
struct xrow_update_arg_splice splice;
};
-typedef int
-(*xrow_update_op_read_arg_f)(struct xrow_update_op *op, const char **expr,
- int index_base);
+typedef int (*xrow_update_op_read_arg_f)(struct xrow_update_op *op,
+ const char **expr, int index_base);
-typedef int
-(*xrow_update_op_do_f)(struct xrow_update_op *op,
- struct xrow_update_field *field);
+typedef int (*xrow_update_op_do_f)(struct xrow_update_op *op,
+ struct xrow_update_field *field);
-typedef uint32_t
-(*xrow_update_op_store_f)(struct xrow_update_op *op,
- struct json_tree *format_tree,
- struct json_token *this_node, const char *in,
- char *out);
+typedef uint32_t (*xrow_update_op_store_f)(struct xrow_update_op *op,
+ struct json_tree *format_tree,
+ struct json_token *this_node,
+ const char *in, char *out);
/**
* A set of functions and properties to initialize, do and store
@@ -482,39 +479,31 @@ xrow_update_field_store(struct xrow_update_field *field,
* etc. Each complex type has basic operations of the same
* signature: insert, set, delete, arith, bit, splice.
*/
-#define OP_DECL_GENERIC(type) \
-int \
-xrow_update_op_do_##type##_insert(struct xrow_update_op *op, \
- struct xrow_update_field *field); \
- \
-int \
-xrow_update_op_do_##type##_set(struct xrow_update_op *op, \
- struct xrow_update_field *field); \
- \
-int \
-xrow_update_op_do_##type##_delete(struct xrow_update_op *op, \
- struct xrow_update_field *field); \
- \
-int \
-xrow_update_op_do_##type##_arith(struct xrow_update_op *op, \
- struct xrow_update_field *field); \
- \
-int \
-xrow_update_op_do_##type##_bit(struct xrow_update_op *op, \
- struct xrow_update_field *field); \
- \
-int \
-xrow_update_op_do_##type##_splice(struct xrow_update_op *op, \
- struct xrow_update_field *field); \
- \
-uint32_t \
-xrow_update_##type##_sizeof(struct xrow_update_field *field); \
- \
-uint32_t \
-xrow_update_##type##_store(struct xrow_update_field *field, \
- struct json_tree *format_tree, \
- struct json_token *this_node, char *out, \
- char *out_end);
+#define OP_DECL_GENERIC(type) \
+ int xrow_update_op_do_##type##_insert( \
+ struct xrow_update_op *op, struct xrow_update_field *field); \
+ \
+ int xrow_update_op_do_##type##_set(struct xrow_update_op *op, \
+ struct xrow_update_field *field); \
+ \
+ int xrow_update_op_do_##type##_delete( \
+ struct xrow_update_op *op, struct xrow_update_field *field); \
+ \
+ int xrow_update_op_do_##type##_arith(struct xrow_update_op *op, \
+ struct xrow_update_field *field); \
+ \
+ int xrow_update_op_do_##type##_bit(struct xrow_update_op *op, \
+ struct xrow_update_field *field); \
+ \
+ int xrow_update_op_do_##type##_splice( \
+ struct xrow_update_op *op, struct xrow_update_field *field); \
+ \
+ uint32_t xrow_update_##type##_sizeof(struct xrow_update_field *field); \
+ \
+ uint32_t xrow_update_##type##_store(struct xrow_update_field *field, \
+ struct json_tree *format_tree, \
+ struct json_token *this_node, \
+ char *out, char *out_end);
/* }}} xrow_update_field */
@@ -666,27 +655,26 @@ OP_DECL_GENERIC(route)
* fit ~10k update tree depth - incredible number, even though the
* real limit is 4k due to limited number of operations.
*/
-#define OP_DECL_GENERIC(op_type) \
-static inline int \
-xrow_update_op_do_field_##op_type(struct xrow_update_op *op, \
- struct xrow_update_field *field) \
-{ \
- switch (field->type) { \
- case XUPDATE_ARRAY: \
- return xrow_update_op_do_array_##op_type(op, field); \
- case XUPDATE_NOP: \
- return xrow_update_op_do_nop_##op_type(op, field); \
- case XUPDATE_BAR: \
- return xrow_update_op_do_bar_##op_type(op, field); \
- case XUPDATE_ROUTE: \
- return xrow_update_op_do_route_##op_type(op, field); \
- case XUPDATE_MAP: \
- return xrow_update_op_do_map_##op_type(op, field); \
- default: \
- unreachable(); \
- } \
- return 0; \
-}
+#define OP_DECL_GENERIC(op_type) \
+ static inline int xrow_update_op_do_field_##op_type( \
+ struct xrow_update_op *op, struct xrow_update_field *field) \
+ { \
+ switch (field->type) { \
+ case XUPDATE_ARRAY: \
+ return xrow_update_op_do_array_##op_type(op, field); \
+ case XUPDATE_NOP: \
+ return xrow_update_op_do_nop_##op_type(op, field); \
+ case XUPDATE_BAR: \
+ return xrow_update_op_do_bar_##op_type(op, field); \
+ case XUPDATE_ROUTE: \
+ return xrow_update_op_do_route_##op_type(op, field); \
+ case XUPDATE_MAP: \
+ return xrow_update_op_do_map_##op_type(op, field); \
+ default: \
+ unreachable(); \
+ } \
+ return 0; \
+ }
OP_DECL_GENERIC(insert)
@@ -758,15 +746,15 @@ xrow_update_err_double(const struct xrow_update_op *op)
static inline int
xrow_update_err_bad_json(const struct xrow_update_op *op, int pos)
{
- return xrow_update_err(op, tt_sprintf("invalid JSON in position %d",
- pos));
+ return xrow_update_err(op,
+ tt_sprintf("invalid JSON in position %d", pos));
}
static inline int
xrow_update_err_delete1(const struct xrow_update_op *op)
{
- return xrow_update_err(op, "can delete only 1 field from a map in a "\
- "row");
+ return xrow_update_err(op, "can delete only 1 field from a map in a "
+ "row");
}
static inline int
diff --git a/src/box/xrow_update_map.c b/src/box/xrow_update_map.c
index 57fb27f..4bd7c94 100644
--- a/src/box/xrow_update_map.c
+++ b/src/box/xrow_update_map.c
@@ -83,8 +83,7 @@ xrow_update_map_create_item(struct xrow_update_field *field,
item->key_len = key_len;
item->field.type = type;
item->field.data = data;
- item->field.size = data_size,
- item->tail_size = tail_size;
+ item->field.size = data_size, item->tail_size = tail_size;
/*
* Each time a new item is created it is stored in the
* head of update map item list. It helps in case the
@@ -123,8 +122,8 @@ xrow_update_map_extract_opt_item(struct xrow_update_field *field,
if (xrow_update_op_next_token(op) != 0)
return -1;
if (op->token_type != JSON_TOKEN_STR) {
- return xrow_update_err(op, "can't update a map by not "\
- "a string key");
+ return xrow_update_err(op, "can't update a map by not "
+ "a string key");
}
}
struct stailq *items = &field->map.items;
@@ -136,7 +135,8 @@ xrow_update_map_extract_opt_item(struct xrow_update_field *field,
* passing this key, so it should be here for all except
* first updates.
*/
- stailq_foreach_entry(i, items, in_items) {
+ stailq_foreach_entry(i, items, in_items)
+ {
if (i->key != NULL && i->key_len == op->key_len &&
memcmp(i->key, op->key, i->key_len) == 0) {
*res = i;
@@ -149,12 +149,13 @@ xrow_update_map_extract_opt_item(struct xrow_update_field *field,
*/
uint32_t key_len, i_tail_size;
const char *pos, *key, *end, *tmp, *begin;
- stailq_foreach_entry(i, items, in_items) {
+ stailq_foreach_entry(i, items, in_items)
+ {
begin = i->field.data + i->field.size;
pos = begin;
end = begin + i->tail_size;
i_tail_size = 0;
- while(pos < end) {
+ while (pos < end) {
if (mp_typeof(*pos) != MP_STR) {
mp_next(&pos);
mp_next(&pos);
@@ -309,28 +310,28 @@ xrow_update_op_do_map_delete(struct xrow_update_op *op,
return 0;
}
-#define DO_SCALAR_OP_GENERIC(op_type) \
-int \
-xrow_update_op_do_map_##op_type(struct xrow_update_op *op, \
- struct xrow_update_field *field) \
-{ \
- assert(field->type == XUPDATE_MAP); \
- struct xrow_update_map_item *item = \
- xrow_update_map_extract_item(field, op); \
- if (item == NULL) \
- return -1; \
- if (!xrow_update_op_is_term(op)) { \
- op->is_token_consumed = true; \
- return xrow_update_op_do_field_##op_type(op, &item->field); \
- } \
- if (item->field.type != XUPDATE_NOP) \
- return xrow_update_err_double(op); \
- if (xrow_update_op_do_##op_type(op, item->field.data) != 0) \
- return -1; \
- item->field.type = XUPDATE_SCALAR; \
- item->field.scalar.op = op; \
- return 0; \
-}
+#define DO_SCALAR_OP_GENERIC(op_type) \
+ int xrow_update_op_do_map_##op_type(struct xrow_update_op *op, \
+ struct xrow_update_field *field) \
+ { \
+ assert(field->type == XUPDATE_MAP); \
+ struct xrow_update_map_item *item = \
+ xrow_update_map_extract_item(field, op); \
+ if (item == NULL) \
+ return -1; \
+ if (!xrow_update_op_is_term(op)) { \
+ op->is_token_consumed = true; \
+ return xrow_update_op_do_field_##op_type( \
+ op, &item->field); \
+ } \
+ if (item->field.type != XUPDATE_NOP) \
+ return xrow_update_err_double(op); \
+ if (xrow_update_op_do_##op_type(op, item->field.data) != 0) \
+ return -1; \
+ item->field.type = XUPDATE_SCALAR; \
+ item->field.scalar.op = op; \
+ return 0; \
+ }
DO_SCALAR_OP_GENERIC(arith)
@@ -349,9 +350,8 @@ xrow_update_map_create(struct xrow_update_field *field, const char *header,
stailq_create(&field->map.items);
if (field_count == 0)
return 0;
- struct xrow_update_map_item *first =
- xrow_update_map_new_item(field, XUPDATE_NOP, NULL, 0, data, 0,
- data_end - data);
+ struct xrow_update_map_item *first = xrow_update_map_new_item(
+ field, XUPDATE_NOP, NULL, 0, data, 0, data_end - data);
return first != NULL ? 0 : -1;
}
@@ -418,7 +418,8 @@ xrow_update_map_sizeof(struct xrow_update_field *field)
assert(field->type == XUPDATE_MAP);
uint32_t res = mp_sizeof_map(field->map.size);
struct xrow_update_map_item *i;
- stailq_foreach_entry(i, &field->map.items, in_items) {
+ stailq_foreach_entry(i, &field->map.items, in_items)
+ {
res += i->tail_size;
if (i->key != NULL) {
res += mp_sizeof_str(i->key_len) +
@@ -442,25 +443,25 @@ xrow_update_map_store(struct xrow_update_field *field,
* others. The first cycle doesn't save unchanged tails.
*/
if (this_node == NULL) {
- stailq_foreach_entry(i, &field->map.items, in_items) {
+ stailq_foreach_entry(i, &field->map.items, in_items)
+ {
if (i->key != NULL) {
out = mp_encode_str(out, i->key, i->key_len);
- out += xrow_update_field_store(&i->field, NULL,
- NULL, out,
- out_end);
+ out += xrow_update_field_store(
+ &i->field, NULL, NULL, out, out_end);
}
}
} else {
struct json_token token;
token.type = JSON_TOKEN_STR;
struct json_token *next_node;
- stailq_foreach_entry(i, &field->map.items, in_items) {
+ stailq_foreach_entry(i, &field->map.items, in_items)
+ {
if (i->key != NULL) {
token.str = i->key;
token.len = i->key_len;
next_node = json_tree_lookup(format_tree,
- this_node,
- &token);
+ this_node, &token);
out = mp_encode_str(out, i->key, i->key_len);
out += xrow_update_field_store(&i->field,
format_tree,
@@ -469,7 +470,8 @@ xrow_update_map_store(struct xrow_update_field *field,
}
}
}
- stailq_foreach_entry(i, &field->map.items, in_items) {
+ stailq_foreach_entry(i, &field->map.items, in_items)
+ {
memcpy(out, i->field.data + i->field.size, i->tail_size);
out += i->tail_size;
}
diff --git a/src/box/xrow_update_route.c b/src/box/xrow_update_route.c
index 0352aec..ee23bbe 100644
--- a/src/box/xrow_update_route.c
+++ b/src/box/xrow_update_route.c
@@ -151,8 +151,8 @@ xrow_update_route_branch_map(struct xrow_update_field *next_hop,
mp_next(&end);
mp_next(&end);
}
- if (xrow_update_map_create(next_hop, parent, data, end,
- field_count) != 0)
+ if (xrow_update_map_create(next_hop, parent, data, end, field_count) !=
+ 0)
return -1;
return op->meta->do_op(op, next_hop);
}
@@ -209,7 +209,7 @@ xrow_update_route_branch(struct xrow_update_field *field,
}
if (json_token_cmp(&old_token, &new_token) != 0)
break;
- switch(new_token.type) {
+ switch (new_token.type) {
case JSON_TOKEN_NUM:
rc = tuple_field_go_to_index(&parent, new_token.num);
break;
@@ -281,8 +281,8 @@ xrow_update_route_branch(struct xrow_update_field *field,
if (type == MP_ARRAY) {
if (new_token.type != JSON_TOKEN_NUM) {
- xrow_update_err(new_op, "can not update array by "\
- "non-integer index");
+ xrow_update_err(new_op, "can not update array by "
+ "non-integer index");
return NULL;
}
new_op->is_token_consumed = false;
@@ -293,8 +293,8 @@ xrow_update_route_branch(struct xrow_update_field *field,
return NULL;
} else if (type == MP_MAP) {
if (new_token.type != JSON_TOKEN_STR) {
- xrow_update_err(new_op, "can not update map by "\
- "non-string key");
+ xrow_update_err(new_op, "can not update map by "
+ "non-string key");
return NULL;
}
new_op->is_token_consumed = false;
@@ -327,7 +327,8 @@ xrow_update_route_branch(struct xrow_update_field *field,
* the route is just followed, via a lexer offset increase.
*/
static struct xrow_update_field *
-xrow_update_route_next(struct xrow_update_field *field, struct xrow_update_op *op)
+xrow_update_route_next(struct xrow_update_field *field,
+ struct xrow_update_op *op)
{
assert(field->type == XUPDATE_ROUTE);
assert(!xrow_update_op_is_term(op));
@@ -346,17 +347,17 @@ xrow_update_route_next(struct xrow_update_field *field, struct xrow_update_op *o
return xrow_update_route_branch(field, op);
}
-#define DO_SCALAR_OP_GENERIC(op_type) \
-int \
-xrow_update_op_do_route_##op_type(struct xrow_update_op *op, \
- struct xrow_update_field *field) \
-{ \
- assert(field->type == XUPDATE_ROUTE); \
- struct xrow_update_field *next_hop = xrow_update_route_next(field, op); \
- if (next_hop == NULL) \
- return -1; \
- return xrow_update_op_do_field_##op_type(op, next_hop); \
-}
+#define DO_SCALAR_OP_GENERIC(op_type) \
+ int xrow_update_op_do_route_##op_type(struct xrow_update_op *op, \
+ struct xrow_update_field *field) \
+ { \
+ assert(field->type == XUPDATE_ROUTE); \
+ struct xrow_update_field *next_hop = \
+ xrow_update_route_next(field, op); \
+ if (next_hop == NULL) \
+ return -1; \
+ return xrow_update_op_do_field_##op_type(op, next_hop); \
+ }
DO_SCALAR_OP_GENERIC(set)
@@ -383,9 +384,9 @@ xrow_update_route_store(struct xrow_update_field *field,
struct json_token *this_node, char *out, char *out_end)
{
if (this_node != NULL) {
- this_node = json_tree_lookup_path(
- format_tree, this_node, field->route.path,
- field->route.path_len, 0);
+ this_node = json_tree_lookup_path(format_tree, this_node,
+ field->route.path,
+ field->route.path_len, 0);
}
char *saved_out = out;
int before_hop = field->route.next_hop->data - field->data;
--
1.8.3.1