[Tarantool-patches] [RFC] on downstream.lag design

Cyrill Gorcunov gorcunov at gmail.com
Wed Jun 2 01:02:49 MSK 2021


Guys, I would like to discuss option 3 of the downstream.lag
proposal.

Quoting https://github.com/tarantool/tarantool/issues/5447
---
Option 3
Downstream.lag is updated constantly until there are non-received ACKs.
It becomes 0 when no ACKs to wait for. The difference with the option 2
is that the update is literally continuous - each read of downstream.lag
shows a bigger value until an ACK is received and corrects it.

Pros: with long transactions it won't freeze for seconds, and would show the truth when not 0.
Cons: the same as in the option 2. Also it is more complex to implement.
---

Here is the code flow I have in mind:

~~~
master (stage 1)
================

TX                        WAL                         RELAY
--                        ---                         -----
txn_commit
  txn_limbo_append
  journal_write       --> wal_write
    fiber_yield()           ...
                            [xrow.tm = 1]
                            wal_write_to_disk
                              wal_watcher_notify -->  recover_remaining_wals
                      <--   fiber_up()                  recover_xlog
                                                          relay_send_row
                                                            relay_send [xrow.tm = 1, lag = arm to count]
                                                            {remember in relay's wal_st}
  txn_limbo_wait_complete                                        |
    (stage 1 complete,                                           |
     waiting for data from                                       |
     replica, to gather ACKs)                                    |
                                                                 |
                                                                 |
replica (stage 2)                                                |
=================   +--------------------------------------------+
                   /
TX                /               WAL                         RELAY
--               |                ---                         -----
           [xrow.tm = 1]
                 |
                 V

applier_apply_tx
  apply_plain_tx
    txn_commit_try_async
      journal_write_try_async --> wal_write_async
                                    wal_write_to_disk
                                      wal_watcher_notify -->  recover_remaining_wals
                                                                recover_xlog
                                                                  relay_send_row
                                                                  (filtered out)
    applier_txn_wal_write_cb
      [xrow.tm = 1] -> {remember in wal_st}

finally transfer comes to applier_writer_f

applier_writer_f
  xrow_encode_vclock
    {encode [xrow.tm = 1] from wal_st}
    coio_write_xrow  -
                      \
                       \
master (stage 3)       |
================       |
                       |
RELAY                  |
-----                 /
relay_reader_f    <--+
  receive ack [xrow.tm = 1]
    modify_relay_lag() (to implement)
      armed value from stage 1 minus xrow.tm

~~~
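
To make the "{remember in relay's wal_st}" step a bit more concrete, here
is a minimal sketch of the kind of per-relay bookkeeping this flow implies;
the struct and field names below are hypothetical, not taken from the
actual code:

~~~
/*
 * Hypothetical per-relay bookkeeping for the "{remember in relay's
 * wal_st}" step above; the struct and field names are made up for
 * illustration and do not come from the source tree.
 */
struct relay_lag_state {
        /*
         * xrow.tm of the first synchronous row sent to the replica
         * and not yet ACKed; 0 while nothing is pending.
         */
        double armed_tm;
        /*
         * Last computed lag, frozen between ACKs; reported as
         * downstream.lag while armed_tm == 0.
         */
        double last_lag;
};
~~~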

Once txn_commit() runs, the pre-send stage is the relay thread being
woken up by the WAL thread: there we catch the rows to be sent, and if
there is a sync transaction we remember the timestamp of its first row
somewhere in the relay structure. This timestamp is assigned by the WAL
thread itself right before flushing the data to disk.
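
The stage 1 arming could then look roughly like the sketch below;
relay_lag_arm() and its arguments are hypothetical names, the real hook
would sit on the relay_send_row()/relay_send() path:

~~~
#include <stdbool.h>

/* Same hypothetical state as in the sketch after the diagram. */
struct relay_lag_state {
        double armed_tm;
        double last_lag;
};

/*
 * Stage 1, relay send path: remember the timestamp of the first
 * pending synchronous row; later rows do not re-arm the counter
 * until the pending one gets ACKed.
 */
static void
relay_lag_arm(struct relay_lag_state *st, bool row_is_sync, double row_tm)
{
        if (row_is_sync && row_tm != 0 && st->armed_tm == 0)
                st->armed_tm = row_tm;
}
~~~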

If the user starts reading box.info().relay.downstream.lag, they will
see an increasing value, [ev_now - xrow.tm], until an ACK is received.
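
The read side under this scheme could be sketched like this, with
ev_now_seconds() standing in for the event loop's notion of "now" and
relay_lag_get() being a hypothetical helper:

~~~
/* Same hypothetical state as in the sketches above. */
struct relay_lag_state {
        double armed_tm;
        double last_lag;
};

/* Event-loop current time in seconds; stands in for ev_now here. */
double ev_now_seconds(void);

/*
 * Value reported as downstream.lag: it grows on every read while an
 * ACK is pending and stays at the last frozen value otherwise.
 */
static double
relay_lag_get(const struct relay_lag_state *st)
{
        if (st->armed_tm != 0)
                return ev_now_seconds() - st->armed_tm;
        return st->last_lag;
}
~~~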

Once the ACK is obtained, the lag is set to some positive value
[ev_now - xrow.tm]. This value remains immutable until a new sync
transaction is sent. On a new sync transaction we do the same -- assign
the value from xrow.tm and count the time until an ACK is received.
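
And the ACK side, i.e. the modify_relay_lag() step from stage 3, sketched
with a hypothetical relay_lag_on_ack() helper:

~~~
/* Same hypothetical state as in the sketches above. */
struct relay_lag_state {
        double armed_tm;
        double last_lag;
};

double ev_now_seconds(void);

/*
 * Stage 3, ACK received in relay_reader_f(): fix the lag as
 * [ev_now - xrow.tm] of the ACKed row and disarm, so the value
 * stays put until a new sync transaction arms the counter again.
 */
static void
relay_lag_on_ack(struct relay_lag_state *st, double acked_tm)
{
        if (st->armed_tm == 0)
                return;
        st->last_lag = ev_now_seconds() - acked_tm;
        st->armed_tm = 0;
}
~~~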

