From mboxrd@z Thu Jan 1 00:00:00 1970
To: Oleg Babin, tarantool-patches@dev.tarantool.org, yaroslav.dynnikov@tarantool.org
References: <837d2697-5165-23d6-72bf-f7533af10864@tarantool.org>
Message-ID: <85b54259-b7c2-475b-a5ed-da4fec44e7ca@tarantool.org>
Date: Sat, 27 Feb 2021 00:58:06 +0100
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Subject: Re: [Tarantool-patches] [PATCH vshard 11/11] router: introduce map_callrw()
List-Id: Tarantool development patches
From: Vladislav Shpilevoy via Tarantool-patches
Reply-To: Vladislav Shpilevoy
>>> On 23.02.2021 03:15, Vladislav Shpilevoy wrote:
>>>> Closes #147
>>>
>>> Will read-only map-reduce functions be done in the scope of a separate
>>> issue/patch?
>>>
>>> I know about #173, but it seems we need to keep the information about a
>>> map_callro function.
>>
>> Perhaps there will be a new ticket, yes. Until 173 is fixed, ro refs seem
>> pointless. The implementation depends on how exactly 173 is fixed.
>>
>> At this moment I haven't even designed it yet. I am thinking about whether
>> I need to account ro storage refs separately from rw refs in the scheduler
>> or not. The implementation heavily depends on this.
>>
>> If I go for considering ro refs separate, it adds a third group of
>> operations to the scheduler, complicating it significantly, and obviously
>> a second type of refs to the refs module.
>>
>> It would allow sending buckets and marking them as GARBAGE/SENT while
>> keeping ro refs. But it would complicate the garbage collector a bit, as
>> it would need to consider storage ro refs.
>>
>> On the other hand, it would heavily complicate the code in the scheduler.
>> Like really heavily, I suppose, but I can be wrong.
>>
>> Also the win will be zeroed if we ever implement rebalancing of writable
>> buckets. Besides, the main reason I am more into a unified type of refs is
>> that they are used solely for map-reduce. Which means they are taken on
>> all storages. This means that if you have a SENDING readable bucket on one
>> storage, you have a RECEIVING non-readable bucket on another storage.
>> Which makes such ro refs pointless for a full cluster scan. You simply
>> won't be able to take an ro ref on the RECEIVING storage.
>>
>> If we go for unified refs, it would allow keeping the scheduler relatively
>> simple, but it requires an urgent fix of 173. Otherwise you can't be sure
>> your data is consistent. Even now you can't be sure with normal requests,
>> which terrifies me and raises a huge question - why the fuck does nobody
>> care? I think I already had a couple of nightmares about 173.
>
> Yes, it's a problem, but rebalancing is not a very frequent operation. So
> in some cases routeall() was enough. Anyway, we didn't have any
> alternatives. But map-reduce is a really frequent operation - any request
> over a secondary index and you have to scan the whole cluster.

Have you tried building secondary indexes over buckets? There is an
algorithm, in case it is something perf-sensitive. You can store a secondary
index in another space and shard it independently. And there are ways to
deal with the inability to atomically update it and the main space together.
So almost always you will have at most 2 network hops to at most 2 nodes to
find the primary key. Regardless of cluster size.

>> Another issue - ro refs won't stop the rebalancer from working and don't
>> even participate in the scheduler, if we fix 173 in the simplest way -
>> just invalidate all refs on the replica if a bucket starts moving. If the
>> rebalancer works actively, nearly all your ro map-reduces will return
>> errors during data migration, because buckets will move constantly. No
>> throttling via the scheduler.
>>
>> One way to go - manually switch all replica weights to the defaults for
>> the time of the data migration. So map_callro() will go to the master
>> nodes and will participate in the scheduling.
>>
>> Another way to go - don't care and fix 173 in the simplest way. Weights
>> are used super rarely anyway, AFAIK.
>>
>> A third way to go - implement some smarter fix of 173. I have many ideas
>> here. But none of them are quick.
>>
>>>> diff --git a/vshard/router/init.lua b/vshard/router/init.lua
>>>> index 97bcb0a..8abd77f 100644
>>>> --- a/vshard/router/init.lua
>>>> +++ b/vshard/router/init.lua
>>>> @@ -44,6 +44,11 @@ if not M then
>>>>            module_version = 0,
>>>>            -- Number of router which require collecting lua garbage.
>>>>            collect_lua_garbage_cnt = 0,
>>>> +
>>>> +        ----------------------- Map-Reduce -----------------------
>>>> +        -- Storage Ref ID. It must be unique for each ref request
>>>> +        -- and therefore is global and monotonically growing.
>>>> +        ref_id = 0,
>>>
>>> Maybe 0ULL?
>>
>> It wouldn't break anyway - doubles are precise up to 2^53. But an integer
>> should be faster, I hope.
>>
>> Changed to 0ULL.

I reverted it back. I asked Igor and he reminded me that it is cdata. So it
involves heavy stuff with metatables and shit. It is cheaper to simply
increment the plain number. I didn't measure it, though.

>>>> +router_map_callrw = function(router, func, args, opts)
>>>> +    local replicasets = router.replicasets
>>>
>>> It would be great to filter out here replicasets with bucket_count = 0
>>> and weight = 0.
>>
>> Information on the router may be outdated. Even if bucket_count is 0, it
>> may still mean the discovery simply hasn't got there yet. Or the discovery
>> is disabled or dead due to a bug.
>>
>> Weight 0 also does not say anything certain about buckets on the storage.
>> It can be that the config on the router is outdated, or it is outdated on
>> the storage. Or someone simply does not set the weights in router configs
>> because they are not used there. Or the weights really are 0 and set
>> everywhere, but rebalancing is in progress - buckets move off the storage
>> slowly, and the scheduler allows squeezing in some map-reduces in the
>> meantime.
>>
>>> In case such "dummy" replicasets are disabled, we get a "connection
>>> refused" error.
>>
>> I need more info here. What is a 'disabled' replicaset? And why would 0
>> discovered buckets or 0 weight lead to a refused connection?
>>
>> In case this is some cartridge-specific shit, this is bad. As you can see
>> above, I can't ignore such replicasets. I need to send requests to them
>> anyway.
>>
>> There is a workaround though - even if an error has occurred, continue
>> execution if, even without the failed storages, I still cover all the
>> buckets. Then having 'disabled' replicasets in the config would result in
>> some unnecessary faulty requests on each map call, but it would work.
>>
>> Although I don't know how to 'return' such errors. I don't want to log
>> them on each request, and we don't have a concept of a 'warning' object or
>> anything like that. A weird option - in case of this uncertain success,
>> return the result, the error, and the uuid. So the function could return:
>>
>> - nil, err[, uuid] - fail;
>> - res              - success;
>> - res, err[, uuid] - success, but with a suspicious issue;
>
> It seems I didn't fully state the problem. Maybe it's not even a relevant
> issue, and users who do this just create their own problems. But there was
> a case when our customers added a new replicaset (in fact a single
> instance) to the cluster. This instance didn't have any data and had
> weight = 0. Then they just turned this instance off after some time. And
> all requests that perform map-reduce started to fail with a "Connection
> refused" error. It raises the question: "Why do our requests fail if we
> disable an instance that doesn't have any data?"
>
> Yes, the requirement of weight = 0 is obviously not enough - because if
> rebalancing is in progress, a replicaset with weight = 0 could still
> contain some data.
>
> Considering an opportunity to finish requests if someone failed - I'm not
> sure that it's really needed. Usually we don't need a partial result
> (moreover, it adds some workload).

I want to emphasize - it won't be partial. It will be a full result. In case
the 'disabled' node does not have any data, the requests to the other nodes
will see that they covered all the buckets. So the loss of this 'disabled'
node is not critical. Map-Reduce, at least as I understand it, is about
providing access to all buckets.
You can succeed at this even if you didn't manage to look into every node,
provided the nodes that didn't respond didn't have any data. But it sounds
like a crutch for an outdated config. If this is not something regular that
I need to support, I'd better leave it as is for now. A normal fail.
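For illustration, the three-way return convention proposed above could look
roughly like this on the caller side. This is only a sketch: 'process' and
the log messages are hypothetical, and the call shape just mirrors the
patch's router_map_callrw(), not a finalized API.

```lua
local log = require('log')

-- Sketch of consuming the proposed return convention:
--   nil, err[, uuid] - fail;
--   res              - success;
--   res, err[, uuid] - success, but with a suspicious issue.
local res, err, uuid = router.map_callrw('my_func', {}, {timeout = 10})
if res == nil then
    -- Hard failure: full bucket coverage was not achieved.
    log.error('map_callrw failed on %s: %s', uuid, err)
elseif err ~= nil then
    -- All buckets were covered, but some storage misbehaved.
    log.warn('map_callrw ok, but %s is suspicious: %s', uuid, err)
    process(res)  -- 'process' is a hypothetical consumer function.
else
    process(res)
end
```

The point of the third branch is that the caller can still use the full
result while deciding for itself whether the per-storage warning matters.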