From: Vladislav Shpilevoy via Tarantool-patches
Reply-To: Vladislav Shpilevoy
To: Cyrill Gorcunov
Cc: Mons Anderson, tml
Date: Fri, 19 Mar 2021 23:17:22 +0100
Message-ID: <42ef5aa6-c491-9930-8f14-f848adb236eb@tarantool.org>
References: <20210318184138.1077807-1-gorcunov@gmail.com>
 <20210318184138.1077807-2-gorcunov@gmail.com>
Subject: Re: [Tarantool-patches] [PATCH 1/2] gc/xlog: delay xlog cleanup
 until relays are subscribed

>> If we knew the topology, we could tell which nodes will relay to
>> whom, and would be able to detect when a replica needs to keep the
>> logs and when it doesn't. Not even the entire topology - just the
>> info from the adjacent nodes: from our own box.cfg.replication,
>> and from box.cfg.replication of these nodes.
>>
>> I still don't understand what was wrong with implementing some
>> kind of topology discovery for this 2-level topology tree. For
>> instance, when the applier on a replica is connected, the remote
>> node sends us a flag saying whether it is going to connect back.
>> If the flag is false, we don't keep the logs for that instance.
>
> There is nothing wrong with it, and I think we should do it. Could
> you please elaborate the details? You mean to extend the applier
> protocol data, so that it would send not only the vclock but also
> a flag saying whether it is going to set up a relay?

Never mind, it won't work for certain topologies. The only way is to
have it in _cluster, and make _cluster a synchronous space. But this
is a big separate task, so probably a timeout is fine for now.

>> See 13 comments below.
>>
>>> @TarantoolBot document
>>> Title: Add wal_cleanup_delay configuration parameter
>>
>> 1. There are 2 issues with the name. Firstly, 'cleanup' conflicts
>> with the existing 'gc' name, which is already exposed in
>> box.info.gc. It is not called box.info.cleanup, so I would propose
>> to use 'gc' here too.
>>
>> The other issue I described in the first patchset's discussion: it
>> is not really a delay, because it is simply ignored when the
>> replication feels like it. It must either have a 'replication'
>> prefix, to designate that it is not just a fixed timeout to keep
>> the WALs and that it depends on the replication, or at least it
>> must mention "max", to designate that it is not an exact strict
>> timeout for keeping the logs.
>
> Vlad, personally I don't mind naming this option whatever you like,
> just gimme a name and I'll use it.

I still don't like the name, but I am outvoted. We won't even add
'max' to it. The current name stays, unfortunately.

>>> +	}
>>> +}
>>> @@ -3045,6 +3055,7 @@ box_cfg_xc(void)
>>>  		bootstrap(&instance_uuid, &replicaset_uuid,
>>>  			  &is_bootstrap_leader);
>>>  	}
>>> +	gc_delay_unref();
>>
>> 5. Why?
>
> This is for the case where you don't have replicas at all: the
> instance needs to unref itself from the counting, so that the
> counter shrinks to zero and GC is enabled.

Let's add a comment about this.

>>> +void
>>> +gc_delay_unref(void)
>>> +{
>>> +	if (gc.cleanup_is_paused) {
>>> +		assert(gc.delay_ref > 0);
>>> +		gc.delay_ref--;
>>
>> 11. If it is not paused and GC started earlier due to a timeout,
>> the user will see 'delay_ref' in box.info even after all the
>> replicas are connected.
>
> Yes, and this is gonna be a sign that we exited due to the timeout.
> Such output should not be treated as an error, but if you prefer,
> I can zap the counter for this case.

I would prefer not to have delay_ref in the public monitoring at
all, but you should ask Mons about it now.
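[Editor's note: since the "zap the counter" option comes up twice in this
thread, here is a minimal sketch of what that variant of gc_delay_unref()
could look like. The gc.cleanup_is_paused and gc.delay_ref fields are
taken from the quoted patch; the resume branch and the gc.delay_cond
condition variable are assumptions added for illustration, not the
actual patch code.]

	void
	gc_delay_unref(void)
	{
		if (gc.cleanup_is_paused) {
			assert(gc.delay_ref > 0);
			gc.delay_ref--;
			if (gc.delay_ref == 0) {
				/* The last replica subscribed: resume GC. */
				gc.cleanup_is_paused = false;
				fiber_cond_signal(&gc.delay_cond);
			}
			return;
		}
		/*
		 * GC was already resumed by the timeout. Zap the stale
		 * counter so that box.info does not keep showing a
		 * non-zero delay_ref after all replicas have connected.
		 */
		gc.delay_ref = 0;
	}

[Whether the counter should be zapped at all is exactly the monitoring
question deferred to Mons above.]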
>>> +	if (!gc.cleanup_is_paused) {
>>> +		int64_t scheduled = gc.cleanup_scheduled;
>>> +		while (gc.cleanup_completed < scheduled)
>>> +			fiber_cond_wait(&gc.cleanup_cond);
>>> +	}
>>
>> 12. The function is called 'wait_cleanup', but it does not really
>> wait in case GC is paused. It looks wrong.
>
> Hard to say. The paused state is rather an internal state, and when
> someone is waiting for cleanup to complete while cleanup is turned
> off, I consider it pretty natural to exit immediately. And I moved
> this "if" into the function itself simply because we may use this
> helper in the future, and we should not get stuck forever "waiting"
> if cleanup is disabled.
>
> I can move this "if" to the caller side though, if you prefer.

Probably this would be better, because the function is called 'wait
cleanup', not 'try to wait or return immediately'. The comment in the
place where the function is called says it guarantees that

	by the time box.snapshot() returns, all outdated checkpoint
	files have been removed

It is misleading now.

>>> +static void
>>> +relay_gc_delay_unref(struct cmsg *msg)
>>> +{
>>> +	(void)msg;
>>> +	gc_delay_unref();
>>
>> 13. You don't need a separate callback, and don't need to call it
>> from the relay thread. Relay already works with GC - it calls
>> gc_consumer_register() before starting the thread. You can do the
>> unref in the same place. Starting from the consumer registration,
>> the logs are going to be kept anyway.
>>
>> Looks like it would be simpler, if it works.
>
> Actually, initially I did it this way, right before creating the
> relay thread. But I think this is not good, and here is why: when
> the relay is starting, it may fail in a number of places (the
> thread itself is not created; the thread is created, but then fiber
> creation failed with an exception), and I think we should decrement
> the reference only when we are pretty sure that there won't be new
> errors inside the relay cycle.

Why? New errors won't break anything. They can also happen after you
did the unref in your patch.

> What would happen if, say, the relay fiber triggers an error?
> iproto will write an error, and, as far as I understand, the
> replica will try to reconnect. Thus we should keep the logs until
> the relay is subscribed, for sure.

The GC consumer keeps the logs while the replica tries to reconnect.

But here is what I don't understand now: how does it work if the
struct replica objects create the consumer objects right in
replica_set_id()?

Look at relay_subscribe(). If the replica is not anon, then
'replica->id != REPLICA_ID_NIL' is true (because there is an
assertion). It means replica_set_id() was already called, and this
means replica->gc is already not NULL. Therefore the check
"replica->gc == NULL && !replica->anon" is never true. Am I missing
something?

Doesn't it mean all the _cluster nodes have a GC consumer on the
current node, and nothing is deleted until all the nodes connect?
Regardless of your patch.

I feel like I am missing something important, because before the
patch we somehow got xlog gap errors, which means the consumers are
dropped somewhere. Can you investigate?
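[Editor's note: to make the relay_subscribe() reasoning and comment 13
easier to follow, here is a sketch of the subscribe path with the unref
placed next to the consumer registration, as the comment suggests. It is
paraphrased from memory, not copied from the tree; the exact signature
and the gc_consumer_register() arguments should be treated as
assumptions.]

	void
	relay_subscribe(struct replica *replica, int fd, uint64_t sync,
			const struct vclock *replica_clock)
	{
		assert(replica->anon || replica->id != REPLICA_ID_NIL);
		/*
		 * The check discussed above: if replica_set_id() has
		 * already created the consumer, replica->gc != NULL
		 * here, and this branch never fires for a registered
		 * replica.
		 */
		if (replica->gc == NULL && !replica->anon) {
			replica->gc = gc_consumer_register(
				replica_clock, "replica %s",
				tt_uuid_str(&replica->uuid));
			if (replica->gc == NULL)
				diag_raise();
		}
		/*
		 * From this point on the consumer pins the xlogs, so
		 * the startup delay reference could be dropped right
		 * here: even if the relay fails to start later, the
		 * logs survive the reconnect thanks to the consumer.
		 */
		gc_delay_unref();
		/* ... start the relay thread as before ... */
	}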