Using a trigger on vclock change to determine the state would be
CPU-consuming, so I am currently reworking the previous patch so that
we can yield from a fiber and wait for a specific LSN from a specific
replica. A possible use case: committing a transaction and waiting for
it to apply on all replicas. The way I am going to implement it is
pretty much what Kostja suggested:

«...wait_lsn() could add the server_id, lsn that is being waited for
to a sorted list, and whenever we update the replicaset vclock for
this lsn we also look at the top of the list; if it is not empty, and
if the current lsn is greater than the top, we could pop the value
from the list and send a notification to the waiter».
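Roughly, the shape I have in mind in C is below. This is only a sketch:
struct lsn_waiter, wait_lsn_register() and on_vclock_advance() are
invented names, and the real patch would park the calling fiber (with
a timeout) and wake it with fiber_wakeup() instead of printing.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

enum { MAX_REPLICAS = 32 }; /* stand-in for VCLOCK_MAX */

/* One parked wait_lsn() call. */
struct lsn_waiter {
	uint32_t replica_id;     /* whose LSN we are waiting for */
	int64_t lsn;             /* wake up once this LSN is reached */
	struct lsn_waiter *next; /* list is kept sorted by lsn */
	/* the real patch would also keep the sleeping fiber here */
};

/* One sorted list per replica id. */
static struct lsn_waiter *waiters[MAX_REPLICAS];

/*
 * First half of wait_lsn(): insert the waiter keeping the list
 * sorted; the caller would then yield with a timeout.
 */
static void
wait_lsn_register(struct lsn_waiter *w)
{
	struct lsn_waiter **pos = &waiters[w->replica_id];
	while (*pos != NULL && (*pos)->lsn <= w->lsn)
		pos = &(*pos)->next;
	w->next = *pos;
	*pos = w;
}

/*
 * Called from the place where the replicaset vclock is advanced:
 * pop every waiter whose LSN has been reached and notify it.
 */
static void
on_vclock_advance(uint32_t replica_id, int64_t new_lsn)
{
	struct lsn_waiter **head = &waiters[replica_id];
	while (*head != NULL && (*head)->lsn <= new_lsn) {
		struct lsn_waiter *w = *head;
		*head = w->next;
		printf("notify waiter: replica %u reached lsn %lld\n",
		       (unsigned)w->replica_id, (long long)w->lsn);
		free(w); /* real code: fiber_wakeup(w->fiber); */
	}
}

int
main(void)
{
	struct lsn_waiter *w = calloc(1, sizeof(*w));
	w->replica_id = 2;
	w->lsn = 100;
	wait_lsn_register(w);
	on_vclock_advance(2, 99);  /* too early, nothing happens */
	on_vclock_advance(2, 105); /* wakes the waiter */
	return 0;
}

Since the list is sorted, the check on each vclock update is O(1)
unless some waiters are actually ready, which is the whole point of
avoiding a trigger that fires on every change.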
Anyway, there are still some questions to discuss:

1. Do we need the wait_lsn_any() method mentioned in
   https://github.com/tarantool/tarantool/issues/3808 ? I don't see
   how it can be useful.
2. What should be done in case of failure (reaching the timeout)?
   Simply returning an error seems like the best choice to me, so
   that the user can later decide what to do with this information.

Another issue: during the last discussion in the mailing list it was
mentioned that we wouldn't need this feature at all if we had
synchronous replication. Any thoughts on this matter?

Monday, November 18, 2019, 12:31 +03:00 from Konstantin Osipov
<kostja.osipov@gmail.com>:

* Georgy Kirichenko <georgy@tarantool.org> [19/11/16 23:37]:

> > What is wrong with GC and how exactly do you want to "fix" it?
> We have discussed some points with you verbally (about 3-4 months
> ago). The main point is: the way the information is processed is
> weird:

> 1. WAL has the full information about the wal directory (xlogs
> and their boundaries)

This is not strictly necessary. It saves us one xdir_scan() in
xdir_collect_garbage(), which is perhaps the main historical reason
it's there.

We even have to make an effort to maintain this state in WAL:
- we call xdir_scan() in wal_enable()
- we call xdir_add_vclock() whenever we open/close the next xlog.

The second reason it was done in WAL was to not block the tx thread,
but later we had latency spikes in the WAL thread as well, so we
added XDIR_GC_ASYNC to fix those, and this second reason is a
non-reason any more.

Finally, the third reason WAL does it is the wal_fallocate()
function, which removes files if we're out of space. Instead of going
back to the GC subsystem and asking it to remove a file, the
implementation took the short route and removes the file directly in
the WAL subsystem, notifying GC after the fact.

As you can see, all these reasons are accidental. Technically any
subsystem (WAL, GC) can remove these files if we add xdir_scan() to
xdir_collect_garbage().

The GC subsystem is responsible for all the old files, so it should
be dealing with them.

The fix is to add xdir_scan() to xdir_collect_garbage(), and change
wal_fallocate() to send a message to GC asking it to remove some
data, rather than kick the chair out from under GC by calling
xdir_collect_garbage(XDIR_GC_REMOVE_ONE). One issue with fixing it
this way is what you would do in wal_fallocate() after you send the
message: you would have to have wal_fallocate_soft(), which sends the
message asynchronously, so as not to stall WAL, and
wal_fallocate_hard(), which would stall WAL until there is a response
from TX about extra space. A lot more work.
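To make the shape of that fix concrete, here is a self-contained toy
model (simplified types and signatures, not the real xlog.c API):
collect_garbage rescans the directory itself, so nobody else has to
maintain the file list for it.

#include <stdint.h>
#include <stdio.h>

/*
 * Toy model: an xlog directory is a sorted list of file signatures
 * (the vclock signature each xlog file starts from).
 */
struct toy_xdir {
	int64_t files[16];
	int nfiles;
};

/*
 * Re-read the directory from disk; faked here. In the real code this
 * is the xdir_scan() whose addition makes WAL's in-memory copy of
 * the directory state unnecessary.
 */
static void
toy_xdir_scan(struct toy_xdir *dir)
{
	/* pretend readdir() found three xlogs */
	dir->files[0] = 100;
	dir->files[1] = 200;
	dir->files[2] = 300;
	dir->nfiles = 3;
}

/*
 * Remove every file no consumer needs any more. File i covers rows
 * [files[i], files[i+1]), so it is removable when the next file also
 * starts at or before the lowest signature still in use.
 */
static void
toy_xdir_collect_garbage(struct toy_xdir *dir, int64_t lowest_needed)
{
	toy_xdir_scan(dir); /* the proposed addition */
	int kept = 0;
	for (int i = 0; i < dir->nfiles; i++) {
		if (i + 1 < dir->nfiles &&
		    dir->files[i + 1] <= lowest_needed)
			printf("unlink xlog %lld\n",
			       (long long)dir->files[i]);
		else
			dir->files[kept++] = dir->files[i];
	}
	dir->nfiles = kept;
}

int
main(void)
{
	struct toy_xdir dir;
	/* the oldest consumer is at signature 250: only the
	 * 100-file is fully behind it and gets unlinked */
	toy_xdir_collect_garbage(&dir, 250);
	return 0;
}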
Even though WAL contains some of the GC state, it's neither an owner
of it nor a consumer: it is only a producer of GC state, and it
updates GC state by sending notifications about the files that it
creates and closes. The consumers are engines, checkpoints, backups,
relays.

BTW, I don't think in-memory replication is a new consumer of GC
state - it doesn't act like a standard consumer:

 * a usual consumer may need multiple xlog files, because it can be
   at a position way behind the current xlog; in-memory replication
   is almost always pointing to the current xlog; there may be rare
   cases when it depends on the previous xlogs, when the xlog size is
   small or there was a recent rotation.

 * in the case of standard consumers, each consumer is at its own
   position, while for in-memory replication all relays are more or
   less at the same position - at least it doesn't make any logical
   sense to advance each relay's position independently.

I remember having suggested that, and I don't remember why using a
single consumer for all in-memory relays did not work out for you.
The idea is that whenever a relay switches to memory mode it
unsubscribes from GC, and whenever it is back in file mode, it
subscribes to GC again. In order to avoid any races, in-memory WAL as
a consumer keeps a reference to a few WALs.

The alternative is to move the GC subsystem entirely to WAL. This
could perhaps also work and even be cleaner than centralizing GC in
TX. Either way I don't see it as a blocker for in-memory WAL - I
think in-memory WAL can work with GC being either in WAL or in TX,
it's just that the messages the threads exchange become a bit more
complicated.

> 2. WAL process the wal directory cleanup

As I wrote above, there are two reasons for this, both historical:
- we wanted to avoid TX stalls;
- we have wal_fallocate(), a feature which was implemented "lazily",
  so it just removes the files under GC's feet and notifies GC after
  the fact.

GC, logically, controls the WAL dir, and WAL is only a producer of
WAL files.

> 3. We filter out all this information while relaying (as a relay
> has only a stream of rows)

The GC subscription is not interested in the stream of rows. It is
interested in a stream of files. A file is represented in GC as a
vclock, and a row is identified by a vclock, but that doesn't mean
they are the same thing.

This is why I really dislike your idea of calling gc_advance on row
events.

> 4. We try to restore some of this information using the
> on_close_log recovery trigger.

No, it's not "restoring" the information. It's passing the right
event about the consumer - the file event - to the GC.

> 5. We send recovered boundaries to TX and the tx thread
> reconstructs the relay order, losing the real relay vclocks (as
> they are mapped to the local xlog history)

I don't quite get what you mean here. Could you elaborate? I think
there is no "reconstruction". There are two types of events: the
events updating replicaset_vclock are needed for replication
monitoring, and they happen often. The action upon this event is very
cheap - you simply vclock_advance(replicaset_vclock).

The second type of event is when a relay or backup or engine stops
using an xlog file. It is also represented by a vclock, but it is not
as cheap to process as the first kind, because gc_advance() is not
cheap - it's an rbtree search.

You keep trying to merge the two streams into a single stream, and I
keep asking to keep the two streams separate. There are of course the
standard pluses and minuses of using a centralized "event bus" for
all these events - with a single bus, as you suggest, things become
simpler for the producer, but the consumers have to do more work to
filter out the unnecessary events.

> 6. TX sends the oldest vclock back to wal

> 7. There are some issues with making a consumer inactive. For
> instance, a deactivated consumer could survive if a deleted xlog
> was already sent by an applier but not reported yet (I do not even
> know how this could be fixed in the current design).

I don't want to argue whether it's weird or not - it's subjective. I
agree GC state is distributed now, and it's better if it is
centralized.

This could be achieved either by moving the WAL xdir state to tx,
making sure tx controls it, or by moving the entire GC to WAL. Moving
GC state to WAL seems the cleaner approach, but I'm fine either way.

> Maybe it is working, but I'm afraid this was done without any
> thought about future development (I mean synchronous replication).
> Let me explain why.
> 1. WAL should receive all relay states as soon as possible.

Agree, but that is a different stream of events - sync replication
events. File events are routed to the GC subsystem, sync replication
events are routed to the RAFT subsystem in WAL.

> 2. The set of relay vclocks is enough to perform garbage collection
> (as we could form a vclock which is the lower bound of the set)

This is thanks to the fact that each file is unequivocally defined by
its vclock boundaries, which is accidental.

> So I wish garbage collection were implemented using direct
> relay-to-wal reporting. Under these circumstances I needed to
> implement a structure (I named it a matrix clock - mclock) which is
> able to contain the relay vclocks and evaluate a vclock which is
> the lower bound of n members of the mclock.
> The mclock could be used to get the n-majority vclock as well as
> the lowest boundary of all vclocks alive.
> The mclock is already implemented, as well as the new gc design
> (wal knows about all relay vclocks and the first vclock locked by
> TX - checkpoint or join read view).

The idea of a vclock matrix is totally fine & dandy for bsync. Using
it for GC seems like a huge overkill.
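For reference, the lower-bound/majority computation a matrix clock
performs is roughly the following (a toy sketch with invented names,
not the actual mclock patch): one vclock row per relay, and per
component the k-th largest value across rows gives the vclock reached
by at least k relays. k equal to the number of relays yields the GC
lower bound; k equal to a majority yields the quorum vclock sync
replication would need.

#include <stdint.h>
#include <stdio.h>

enum { NCOMPONENTS = 4, NRELAYS = 3 };

/* Toy matrix clock: one vclock row per relay. */
struct toy_mclock {
	int64_t row[NRELAYS][NCOMPONENTS];
};

/*
 * Compute the vclock that at least k relays have reached: per
 * component, the k-th largest value across all rows.
 */
static void
toy_mclock_nth(const struct toy_mclock *m, int k,
	       int64_t out[NCOMPONENTS])
{
	for (int c = 0; c < NCOMPONENTS; c++) {
		int64_t col[NRELAYS];
		for (int r = 0; r < NRELAYS; r++)
			col[r] = m->row[r][c];
		/* tiny selection sort, descending */
		for (int i = 0; i < NRELAYS; i++)
			for (int j = i + 1; j < NRELAYS; j++)
				if (col[j] > col[i]) {
					int64_t t = col[i];
					col[i] = col[j];
					col[j] = t;
				}
		out[c] = col[k - 1]; /* k-th largest */
	}
}

int
main(void)
{
	struct toy_mclock m = { .row = {
		{ 5, 8, 3, 0 },
		{ 7, 6, 3, 1 },
		{ 6, 9, 2, 0 },
	} };
	int64_t lower[NCOMPONENTS], quorum[NCOMPONENTS];
	toy_mclock_nth(&m, NRELAYS, lower);          /* {5, 6, 2, 0} */
	toy_mclock_nth(&m, NRELAYS / 2 + 1, quorum); /* {6, 8, 3, 0} */
	for (int c = 0; c < NCOMPONENTS; c++)
		printf("component %d: lower=%lld quorum=%lld\n", c,
		       (long long)lower[c], (long long)quorum[c]);
	return 0;
}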
As to the fact that you have more patches on branches, I think it's
better to finish in-memory replication first - it's a huge
performance boost for replicated set-ups, and it reduces the latency,
too.

--
Konstantin Osipov, Moscow, Russia
https://scylladb.com

--
Maria Khaydich