Hi! Thanks for the patch!
 
box_issue_promote() and box_issue_demote() need fine-grained locking anyway.
Otherwise it’s possible that a PROMOTE is already issued, but not yet written to the WAL,
and some outdated request is applied by the applier at that exact moment.
 
You should take the lock before the WAL write, and release it only after txn_limbo_apply.
 
No need to guard every limbo function there is, but we have to guard everything that
writes PROMOTE/DEMOTE.
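 
Roughly like this (just a sketch on top of the helpers you introduce below,
assuming txn_limbo_write_promote() is what does the WAL write; the raft
state handling is elided):
 
static void
box_issue_promote(uint32_t prev_leader_id, int64_t promote_lsn)
{
	struct raft *raft = box_raft();
	struct synchro_request req = {
		.type = IPROTO_RAFT_PROMOTE,
		.replica_id = prev_leader_id,
		.origin_id = instance_id,
		.lsn = promote_lsn,
		.term = raft->term,
	};
	/* Take the lock before the WAL write... */
	txn_limbo_begin(&txn_limbo);
	txn_limbo_write_promote(&txn_limbo, req.lsn, req.term);
	/* ...and release it only after txn_limbo_apply(). */
	txn_limbo_apply(&txn_limbo, &req);
	txn_limbo_commit(&txn_limbo);
}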
 
Thursday, December 30, 2021, 23:24 +03:00 from Cyrill Gorcunov <gorcunov@gmail.com>:
 
Limbo terms tracking is shared between appliers, and when
one of the appliers is waiting for a write to complete inside
the journal_write() routine, another one may need to read the
term value to figure out if a promote request is valid to
apply. Due to cooperative multitasking, access to the terms is
not serialized, so we need to be sure that other fibers read
up-to-date terms (i.e. ones already written to the WAL).

For this sake we use a latching mechanism: while one fiber
holds the lock for an update, other readers wait until
the operation is complete.

For example, here is a call graph of two appliers:

applier 1
---------
applier_apply_tx
  (promote term = 3
   current max term = 2)
  applier_synchro_filter_tx
  apply_synchro_row
    journal_write
      (sleeping)

at this moment another applier comes in with obsolete
data and term 2

                              applier 2
                              ---------
                              applier_apply_tx
                                (term 2)
                                applier_synchro_filter_tx
                                  txn_limbo_is_replica_outdated -> false
                                journal_write (sleep)

applier 1
---------
journal wakes up
  apply_synchro_row_cb
    set max term to 3

So applier 2 didn't notice that term 3 had already been seen
and wrote obsolete data. With locking, applier 2 will
wait until applier 1 has finished its write.
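
Schematically, the applier side becomes the following (a sketch using
the helpers introduced below; the journal entry setup and journal_write()
call are condensed into a hypothetical write_synchro_row() helper):

static int
apply_synchro_row(struct xrow_header *row)
{
	struct synchro_request req;
	if (xrow_decode_synchro(row, &req) != 0)
		return -1;
	/* Applier 2 will sleep here until applier 1 is done. */
	txn_limbo_begin(&txn_limbo);
	/*
	 * write_synchro_row() is a stand-in for the journal entry
	 * setup + journal_write(); the fiber may yield inside, but
	 * the latch is held across the write.
	 */
	if (write_synchro_row(row) != 0) {
		txn_limbo_rollback(&txn_limbo);
		return -1;
	}
	/* Update the terms only after the row has reached the WAL. */
	txn_limbo_apply(&txn_limbo, &req);
	txn_limbo_commit(&txn_limbo);
	return 0;
}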

We introduce the following helpers:

1) txn_limbo_begin: takes the lock;
2) txn_limbo_commit and txn_limbo_rollback: simply release
   the lock, but have different names for better semantics;
3) txn_limbo_process: a general function which uses the begin
   and commit helpers internally (see the sketch below);
4) txn_limbo_apply: does the real job of processing the
   request; it implies that txn_limbo_begin has been called.
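
In particular, txn_limbo_process() becomes, roughly, a trivial wrapper:

void
txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req)
{
	txn_limbo_begin(limbo);
	txn_limbo_apply(limbo, req);
	txn_limbo_commit(limbo);
}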

Testing such an in-flight condition won't be easy, so we introduce
the "box.info.synchro.queue.waiters" field, which represents the current
number of fibers waiting for the limbo to finish request processing.
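
The counter is exported in src/box/lua/info.c next to the other queue
fields, schematically:

	lua_pushnumber(L, txn_limbo.promote_latch_cnt);
	lua_setfield(L, -2, "waiters");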

@TarantoolBot document
Title: synchronous replication changes

`box.info.synchro.queue` gets a new field: `waiters`. It represents
the current number of fibers waiting for the synchronous transaction
processing to complete.

Part-of #6036

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
---
 src/box/applier.cc | 12 ++++++++---
 src/box/lua/info.c | 4 +++-
 src/box/txn_limbo.c | 18 ++++++++++++++--
 src/box/txn_limbo.h | 52 ++++++++++++++++++++++++++++++++++++++++-----
 4 files changed, 75 insertions(+), 11 deletions(-)
 
 
 
diff --git a/src/box/txn_limbo.h b/src/box/txn_limbo.h
index 53e52f676..42d572595 100644
--- a/src/box/txn_limbo.h
+++ b/src/box/txn_limbo.h
 
 
@@ -216,7 +225,7 @@ txn_limbo_last_entry(struct txn_limbo *limbo)
  * @a replica_id.
  */
 static inline uint64_t
-txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
+txn_limbo_replica_term(struct txn_limbo *limbo, uint32_t replica_id)
 {
 
You’ve forgotten to lock the latch here, I guess.
 
 	return vclock_get(&limbo->promote_term_map, replica_id);
 }
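 
Something like this would do, I suppose (a sketch, symmetric with
txn_limbo_is_replica_outdated() below):
 
static inline uint64_t
txn_limbo_replica_term(struct txn_limbo *limbo, uint32_t replica_id)
{
	latch_lock(&limbo->promote_latch);
	uint64_t term = vclock_get(&limbo->promote_term_map, replica_id);
	latch_unlock(&limbo->promote_latch);
	return term;
}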
@@ -226,11 +235,14 @@ txn_limbo_replica_term(const struct txn_limbo *limbo, uint32_t replica_id)
  * data from it. The check is only valid when elections are enabled.
  */
 static inline bool
-txn_limbo_is_replica_outdated(const struct txn_limbo *limbo,
+txn_limbo_is_replica_outdated(struct txn_limbo *limbo,
 			      uint32_t replica_id)
 {
-	return txn_limbo_replica_term(limbo, replica_id) <
-		limbo->promote_greatest_term;
+	latch_lock(&limbo->promote_latch);
+	uint64_t v = vclock_get(&limbo->promote_term_map, replica_id);
+	bool res = v < limbo->promote_greatest_term;
+	latch_unlock(&limbo->promote_latch);
+	return res;
 }
 
 /**
@@ -300,7 +312,37 @@ txn_limbo_ack(struct txn_limbo *limbo, uint32_t replica_id, int64_t lsn);
 int
 txn_limbo_wait_complete(struct txn_limbo *limbo, struct txn_limbo_entry *entry);
 
-/** Execute a synchronous replication request. */
+/**
+ * Initiate execution of a synchronous replication request.
+ */
+static inline void
+txn_limbo_begin(struct txn_limbo *limbo)
+{
+	limbo->promote_latch_cnt++;
+	latch_lock(&limbo->promote_latch);
 
I suppose you should decrease the latch_cnt right after acquiring the lock.
 
Otherwise you count the sole "limbo user" together with the "limbo waiters".
 
+}
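 
I mean something like this (a sketch):
 
static inline void
txn_limbo_begin(struct txn_limbo *limbo)
{
	limbo->promote_latch_cnt++;
	latch_lock(&limbo->promote_latch);
	/* Once we hold the latch we are the limbo user, not a waiter. */
	limbo->promote_latch_cnt--;
}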
+
+/** Commit a synchronous replication request. */
+static inline void
+txn_limbo_commit(struct txn_limbo *limbo)
+{
+	latch_unlock(&limbo->promote_latch);
+	limbo->promote_latch_cnt--;
+}
+
+/** Rollback a synchronous replication request. */
+static inline void
+txn_limbo_rollback(struct txn_limbo *limbo)
+{
+	latch_unlock(&limbo->promote_latch);
 
If you don’t want to decrease the counter right after latch_lock(), you should decrease it
here, as well as in txn_limbo_commit().
 
+}
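 
I.e., a sketch of this alternative:
 
static inline void
txn_limbo_rollback(struct txn_limbo *limbo)
{
	latch_unlock(&limbo->promote_latch);
	limbo->promote_latch_cnt--;
}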
+
+/** Apply a synchronous replication request after processing stage. */
+void
+txn_limbo_apply(struct txn_limbo *limbo,
+		const struct synchro_request *req);
+
+/** Process a synchronous replication request. */
 void
 txn_limbo_process(struct txn_limbo *limbo, const struct synchro_request *req);
 
--
2.31.1