[tarantool-patches] [PATCH] Relay logs from wal thread

Георгий Кириченко georgy at tarantool.org
Thu Jan 31 10:15:49 MSK 2019


On Tuesday, January 29, 2019 1:44:16 PM MSK Vladimir Davydov wrote:
> On Tue, Jan 29, 2019 at 12:49:53PM +0300, Георгий Кириченко wrote:
> > On Tuesday, January 29, 2019 11:53:18 AM MSK Vladimir Davydov wrote:
> > > On Tue, Jan 22, 2019 at 05:18:36PM +0300, Georgy Kirichenko wrote:
> > > > Use the wal thread to relay rows. Relay writer and reader fibers live
> > > > in the wal thread. The writer fiber sends rows to the peer from wal
> > > > memory when possible, or spawns a cord to recover rows from a file.
> > > > 
> > > > The wal writer stores rows into two memory buffers, swapping them
> > > > when the buffer threshold is reached. Memory buffers are split into
> > > > chunks of rows with the same server id, each chunk bounded by the
> > > > chunk threshold. To locate a position in wal memory, all chunks are
> > > > indexed by the wal_mem_index array. Each wal_mem_index entry contains
> > > > the corresponding replica_id and the vclock just before the first row
> > > > of the chunk, as well as the chunk's buffer number, position and size
> > > > in memory.
> > > > 
> > > > Closes: #3794
> > > 
> > > It's too early to comment on the code. Here are some general issues that
> > > I think need to be addressed first:
> > >  - Moving relay code to WAL looks ugly as it turns the code into a messy
> > >    bundle. Let's try to hide relay behind an interface. Currently, we
> > >    only need one method - write() - but in future we might need more,
> > >    for instance prepare() or commit(). Maybe we could reuse the journal
> > >    struct for this?
> > 
> > It was done because WAL should control relays' ACKs in the case of
> > synchronous replication. So, in the likely case, forwarding wals and
> > reading ACKs should be done in the wal thread.
> 
> I don't deny that.
> 
> > And if I left this code in the relay, then wal-relay communication
> > would be too complicated (it would expose internal structures like
> > sockets, memory buffers, vclock, etc.).
> 
> Not necessarily. You could hide it all behind a vtab. For now, vtab
> would be trivial - we only need write() virtual method (ACKs can be
> received by a fiber started by this method when relay switches to sync).
> Later on, we could extend it with methods that would allow WAL to wait
> for remote commit.
I do not think wal should wait for remote commit; rather, the relay should
inform wal about its state. For example, wal should track the state of a
relay even if the relay is reading from a file in a different cord (and
might exit at any time).
So we would also add some v-functions to request memory chunks, promote the
in-wal relay vclock, or attach to sync mode as you suggested.
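
To make that concrete, the interface as seen from wal could look roughly
like this (all names here are illustrative, not taken from the patch):

/*
 * Hypothetical relay vtab registered in wal. Only write() was
 * discussed so far; the others are the extensions mentioned above.
 */
struct wal_relay_vtab {
        /* Forward a freshly written row to the peer. */
        int (*write)(struct wal_relay *relay, struct xrow_header *row);
        /* Hand the relay the next in-memory chunk for its vclock. */
        int (*request_chunk)(struct wal_relay *relay,
                             const struct vclock *vclock);
        /* Promote the in-wal relay vclock after an ACK. */
        void (*promote_vclock)(struct wal_relay *relay,
                               const struct vclock *vclock);
        /* Try to attach a file-reading relay to sync mode. */
        int (*attach_sync)(struct wal_relay *relay);
};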
> 
> > Also I planned some future changes to move all wal-processing logic
> > into wal.c too.
> 
> Please don't. We try to split the code into modules so that it's easier
> for understanding. Mixing relay/recovery/WAL in one module doesn't sound
> like a good idea to me.
> 
> > >  - Memory index looks confusing to me. Why would we need it at all?
> > >    AFAIU it is only used to quickly position relay after switching to
> > >    memory buffer, but it's not a big deal to linearly scan the buffer
> > >    instead when that happens - it's a rare event and scanning memory
> > >    should be fast enough. At least, we should start with it, because
> > >    that would be clear and simple, and build any indexing later in
> > >    separate patches provided we realize that we really need it, which I
> > >    doubt.
> > 
> > It allows us not to decode all logs as many times as we have relays
> > connected.
> I don't understand. When a relay is in sync, we should simply be writing a
> stream of rows to a socket. No decoding or positioning should be
> necessary. Am I missing something?
If a relay blocks for some time, then you stop writing and later resume
from the position the relay reached before, and that position can be
identified only by vclock, because the memory buffer might be reallocated
at any time while the relay sleeps.
Also, let's imagine that switching a relay over to wal takes time dT,
during which wal writes N rows. So, if you do not store logs in memory,
you would never be able to switch this relay into sync mode. So we should
keep some rows backed up in memory and allow relays to navigate in them.
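
To illustrate, with the wal_mem_index from the commit message, resuming a
relay is a lookup over chunk headers by vclock (field names follow the
commit message; the lookup helper itself is only a sketch):

/* One wal_mem_index entry per chunk, as in the commit message. */
struct wal_mem_index {
        uint32_t replica_id;    /* server id of the chunk's rows */
        struct vclock vclock;   /* vclock just before the first row */
        int buf_no;             /* which of the two buffers */
        size_t pos;             /* chunk offset in that buffer */
        size_t size;            /* chunk size in bytes */
};

/*
 * Sketch: find the last chunk whose starting vclock the relay has
 * already replicated (incomparable vclocks ignored for brevity).
 */
static struct wal_mem_index *
wal_mem_find(struct wal_mem_index *index, int count,
             const struct vclock *relay_vclock)
{
        struct wal_mem_index *found = NULL;
        for (int i = 0; i < count; i++) {
                if (vclock_compare(&index[i].vclock, relay_vclock) > 0)
                        break;
                found = &index[i];
        }
        return found;
}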

Another reason why I rejected direct writing to the relay: in that case wal
would be in charge of tracking relays' state, which looks ugly to me.
> 
> > Also it eliminates having to follow the corresponding vclock for each row.
> 
> But you know the vclock of each row in WAL and you can pass it to relay.
> No reason to decode anything AFAIU.
I know it only when I write an entry to wal. If the relay yielded for some
time, the only possibility is to store the memory buffer's starting vclock
and then decode each row one-by-one to follow the vclock.
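
For comparison, the index-less variant would have to do roughly the
following on every wakeup, for every lagging relay (buf and decode_row()
are hypothetical stand-ins for the buffer layout and xrow decoding):

/*
 * Sketch: replay the buffer from its starting vclock, decoding
 * every row just to advance a running vclock until the relay
 * position is reached.
 */
const char *pos = buf->data;
const char *end = buf->data + buf->used;
struct vclock cur;
vclock_copy(&cur, &buf->start_vclock);
while (pos < end && vclock_compare(&cur, relay_vclock) < 0) {
        struct row_hdr row;
        pos = decode_row(pos, &row);    /* hypothetical decoder */
        vclock_follow(&cur, row.replica_id, row.lsn);
}
/* pos now points at the first row the relay has not yet seen. */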
> 
> > >  - We shouldn't stop/start relay thread when switching to/from memory
> > >    source. Instead, on SUBSCRIBE, we should always start a relay
> > >    thread, as we do now, and implement some machinery to suspend/resume
> > >    it and switch to memory/disk when it falls in/out of the memory
> > >    buffer.
> > 
> > In the good case (we do not have any outdated replicas) the relay thread
> > never even gets a chance to run.
> 
> Starting/stopping relay thread on every lag looks just ugly. We have to
> start it anyway when a replica subscribes (because it's highly unlikely
> to fall in the in-memory buffer). I don't see any reason to stop it once
> we are in sync - we can switch to async anytime.
The relay cord is not started on every lag, but only if the relay falls out
of the wal memory buffer. The buffer should be large enough to compensate
for all hardware fluctuations, and this guarantees that any replica able to
keep up with the master's rows would live in wal memory without spawning
threads.
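
Schematically, the decision on SUBSCRIBE (and whenever a relay falls
behind) is just the following; the helper names are made up:

/*
 * If the replica's vclock is still covered by the wal memory
 * buffer, serve it from the wal thread; otherwise spawn a cord
 * that recovers rows from xlog files and may re-attach later.
 */
if (vclock_compare(&wal_mem_start_vclock, relay_vclock) <= 0) {
        wal_relay_from_memory(relay);   /* stay in the wal thread */
} else {
        cord_costart(&relay->cord, "relay",
                     relay_recover_from_file_f, relay);
}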
> 
> > >    All that machinery should live in relay.cc.
> > 
> > This leads to overly complicated dependencies between wal and relay. At
> > least wal would have to expose its internal structures in a header file.
> 
> How's that? WAL should only know about write() method. Relay registers
> the method in WAL. Everything else is done by relay, including switching
> between sync and async mode.
A write method alone is not enough.
> 
> > >  - When in sync mode, WAL should write directly to the relay socket,
> > >    without using any intermediate buffer. Buffer should only be used if
> > >    the socket is blocked or the relay is running in asynchronous mode.
> > >    There's no need in extra memory copying otherwise.
> > 
> > Please explain why. I do not think this can improve anything. Also I think
> 
> That would eliminate extra memory copying in the best case scenario.
Let me explain: we will form the in-memory data once, and it will be used
both for to-file writes and as a relay source.
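
That is, the intended write path is, schematically (function names
invented for the example):

wal_mem_append(&wal_mem, entry);        /* encode once into memory */
wal_file_write(&wal_mem);               /* consumer 1: the xlog file */
wal_relay_push(&wal_mem);               /* consumer 2: in-sync relays */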
> 
> > that this memory buffer should be used for writing to files as well.
> > In any case we should have a memory buffer to store wals, because you do
> > not know whether the relay will block or not. But the biggest issue is to
> > transfer state from
> As I suggested below, let's try to maintain a buffer per each relay, not
> a single buffer in WAL. Then we wouldn't need to write anything to
> memory buffers if all relays are in sync and no socket is blocked. This
> is the best case scenario, but we should converge to it pretty quickly
> provided the network doesn't lag much. This would also allow us to hide
> this logic in relay.cc.
Above I described why it is not possible to avoid having a wal memory
buffer: under constant load a relay would not be able to attach to sync
mode as you suggested. So we have to maintain the wal memory buffer.
There are also situations in which your model would not be functional.
For example, if wal writes a lot (more than the tcp write buffer within
some time slot), each call will block the socket and push the relay out of
sync.
And if we maintain per-relay buffers, we have to copy the data as many
times as we have relays, even for file-reading relays, to give them a
chance to catch up with the wal.
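
Just to spell out the cost of per-relay buffers (illustrative; the entry
fields are hypothetical, ibuf is the small library buffer):

/* Every written entry is copied once per relay... */
for (int i = 0; i < relay_count; i++) {
        void *dst = ibuf_alloc(&relays[i].buf, entry->size);
        memcpy(dst, entry->data, entry->size);
}
/* ...instead of a single shared copy in the wal memory buffer. */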
> 
> > a just-blocked sync relay to a reader detached from memory.
> > 
> > >  - Maintaining a single memory buffer in WAL would complicate relay sync
> > >    interface implementation. Maybe we could use a separate buffer per
> > >    each relay? That would be less efficient, obviously, because it would
> > >    mean extra memory copying and usage, but that would simplify the code
> > >    a great deal. Besides, we would need to use the buffers only for
> > >    async relays - once a relay is switched to sync mode, we could write
> > >    to its socket directly. So I think we should at least consider this
> > >    option.
> > >  
> > >  - You removed the code that tried to advance GC consumers as rarely as
> > >    possible, namely only when a WAL file is closed. I'd try to save it
> > >    somehow, because without it every ACK would result in GC going to WAL
> > >    if relay is running in async mode (because only WAL knows about WAL
> > >    files; for others it's just a continuous stream). When a relay is
> > >    running in sync mode, advancing GC on each ACK seems to be OK,
> > >    because WAL won't be invoked then (as we never remove WAL files
> > >    created after the last checkpoint).
> > 
> > I think we should move wal gc into the wal thread.
> 
> You can try, but until then I think we should try to leave that code
> intact.
> 
> > >  - I really don't like code duplication. I think we should reuse the
> > >    code used for sending rows and receiving ACKs between sync and async
> > >    modes. Hiding the sync relay implementation behind an interface would
> > >    allow us to do that.
> > >  
> > >  - I don't see how this patch depends on #1025 (sending rows in
> > >    batches). Let's implement relaying from memory first and do #1025
> > >    later, when we see the full picture.
