From: Vladislav Shpilevoy
Date: Tue, 22 Dec 2020 17:39:25 +0100
Subject: Re: [Tarantool-patches] [PATCH v2 2/2] base64: Improve decoder performance
To: Sergey Nikiforov, tarantool-patches@dev.tarantool.org
Cc: Alexander Turenko

>>>>>> This test now crashes:
>>>>>>
>>>>>> ====================
>>>>>> --- a/test/unit/base64.c
>>>>>> +++ b/test/unit/base64.c
>>>>>> @@ -7,7 +7,7 @@ static void
>>>>>>  base64_test(const char *str, int options, const char *no_symbols,
>>>>>>  	    int no_symbols_len)
>>>>>>  {
>>>>>> -	plan(3 + no_symbols_len);
>>>>>> +	plan(4 + no_symbols_len);
>>>>>>
>>>>>>  	int len = strlen(str);
>>>>>>  	int base64_buflen = base64_bufsize(len + 1, options);
>>>>>> @@ -34,6 +34,11 @@ base64_test(const char *str, int options, const char *no_symbols,
>>>>>>  	free(base64_buf);
>>>>>>  	free(strbuf);
>>>>>>
>>>>>> +	const char *in = "sIIpHw==";
>>>>>> +	int in_len = strlen(in);
>>>>>> +	rc = base64_decode(in, in_len, NULL, 0);
>>>>>> +	is(rc, 0, "no space in out buffer");
>>>>>> +
>>>>>>  	check_plan();
>>>>>>  }
>>>>>> ====================
>>>>>>
>>>>>> It didn't crash while the checks were in place.
>>>>>
>>>>> I knew about this "problem" even before the first patch version. Glad someone noticed. More on that below.
>>>>>
>>>>>> I would suggest to add this test to the previous commit. Because technically it is also a 'buffer overrun'. And it will crash on bug 3069 even without ASAN.
>>>>>
>>>>> I do not think this is a good idea.
>>>>>
>>>>> Both the old code and your proposed change (below) behave like this: if the output buffer size is 0, stop everything and return zero. No input chars are decoded even when this is possible (no "stored" bits in the state - aka "step_a" - and only one valid input char with 6 new bits). My "optimized" code already handles such a situation properly. And this is "free" (no extra if's).
>>>>>
>>>>> It is of course possible to handle the "zero output buffer size" situation properly, but it would require an additional "if" at the beginning of base64_decode_block() and some logic - and in every case except "step_a with no more than one useful input char" some input characters would be lost anyway. The existing API of base64_decode() completely ignores the situation when the output buffer is too small (it is easy to calculate the worst-case output buffer size) - not-yet-processed input data is ignored in such a case. No reasonable programmer would use "0" as the output buffer size. Such a requirement is even DOCUMENTED in base64.h.
>>>>>
>>>>> What would you say?
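
For reference, the worst-case size mentioned above follows directly from the encoding ratio. A minimal sketch (the helper name is illustrative, not an actual base64.h API):

====================
/*
 * Worst-case decoded size: every 4 base64 characters produce at most
 * 3 bytes, so in_len * 3 / 4 (integer division) is always enough,
 * even when the input ends with '=' padding.
 */
static int
base64_decode_bufsize(int in_len)
{
	return in_len * 3 / 4;
}
====================

The assertion suggested below would then be a one-liner at the top of base64_decode(): assert(out_len >= base64_decode_bufsize(in_len)).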
>>>>
>>>> I don't mind leaving it as is. But then I suggest to add an assertion that the out-buffer length >= the in-buffer length * 3 / 4. If it crashes anywhere in the tests, it will mean the function is used not only with buffers of the given size. And then you can't assume the buffer may be not big enough to fit the whole output, but is never 0.
>>>
>>> This would break code which accepts a base64 string from whatever source and decodes it into a fixed-size buffer - extra characters are silently ignored now.
>>>
>>>> If it does not crash, it means you can drop the out-buffer length parameter, as it does not matter. Also you need to validate all the usages of base64_decode(), just in case.
>>>
>>> I believe that overhauling the API is outside of the task assigned to me.
>>
>> The problem is that this whole 'optimization' is also outside of the task assigned to you. But you decided to do it anyway. So let's do it properly then.
>
> I already did. The code is now faster, fixes several bugs, and introduces no new ones.

You did it in 2 other commits, which look good. This commit is your own initiative, not related to any bugs. Talking of 'no new ones' - there are no proofs, really. We can only *hope* that there are no new bugs and that the existing tests cover this code well enough.

>> For example, move this check one line below, after the curr_byte |= ? I don't know if it helps, though.
>
> It would help if we added 3 new states like "step_b and there was not enough space in the output buffer on the last iteration" AND (if something is still present in the input buffer) somehow reported to the caller the final in_base64 offset.
>
>> Since you decided to change this function, I would suggest to make it work properly for all inputs.
>
> It does - as long as out_len is large enough.
>
>> And even better - expose it in base64.h and test it separately.
>
> What is the point in wasting more time on this? Right now we do not need to process base64 input in chunks.

The point is that you want to optimize this function, but for some reason you don't want to go all the way. I can even ask the same question, look: what is the point in wasting more time on this? Right now base64 is not present in any flamegraphs as something expensive.

>> It was initially taken from here:
>> https://github.com/libb64/libb64/blob/master/src/cdecode.c
>> in this commit:
>> https://github.com/tarantool/tarantool/commit/4e707c0f989434b74eb3a859a1b26822a87c42e2#diff-4fd3a92107ebfe91b691f08cd53a84eccbcf89f1a7bad47e20f950d6c4214e6cR216
>>
>> The readme of libb64 says it uses 'coroutines' implemented via switch-while-case. Which means the function is supposed to be called multiple times. Although in the source repository it does not accept an output buffer length at all.
>
> So clearly someone just quickly added protection against output buffer overrun. And introduced a bug.

Now you have quickly optimized it, leaving some dead code hanging here (the state variable). Don't you think that these 2 cases look very similar?

> libb64 required a properly sized output buffer unconditionally; our current implementation instead supports ignoring "extra" input data for a fixed-size output buffer.

Yes, libb64 does require a full buffer. When it was ported to our code base, the bug was introduced. The reason I have shown you libb64 is that originally this function was supposed not to lose data. It simply couldn't lose it due to the output buffer being too short. But the user must have been able to receive the 'input' data in chunks, get an 'out' buffer of a big enough size, and fill it. If some last bits of the 'input' were not translated, they were saved into the 'state'. In our code the state is not needed now. It is dead code, if you don't want to fix it to work properly in chunks.
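
To make the pattern under discussion concrete, here is a condensed, self-contained sketch of a libb64-style resumable decoder (illustrative names, not the exact tarantool base64.c; like the original libb64 it assumes the output buffer is big enough). The switch jumps back into the middle of the loop, so the next call resumes exactly where the previous chunk ended:

====================
enum step { step_a, step_b, step_c, step_d };

struct state {
	enum step step;
	char byte;	/* partially assembled output byte */
};

static int
value(char c)
{
	if (c >= 'A' && c <= 'Z') return c - 'A';
	if (c >= 'a' && c <= 'z') return c - 'a' + 26;
	if (c >= '0' && c <= '9') return c - '0' + 52;
	if (c == '+') return 62;
	if (c == '/') return 63;
	return -1;	/* '=', whitespace, garbage - skipped */
}

static int
decode_chunk(const char *in, int in_len, char *out, struct state *s)
{
	const char *p = in;
	char *o = out;
	int v;
	switch (s->step) {
	while (1) {
	case step_a:
		do {
			if (p == in + in_len) {
				s->step = step_a;
				return o - out;
			}
			v = value(*p++);
		} while (v < 0);
		s->byte = v << 2;	/* 6 bits stored, no output yet */
	case step_b:
		do {
			if (p == in + in_len) {
				s->step = step_b;
				return o - out;
			}
			v = value(*p++);
		} while (v < 0);
		*o++ = s->byte | (v >> 4);
		s->byte = (v & 0x0f) << 4;
	case step_c:
		do {
			if (p == in + in_len) {
				s->step = step_c;
				return o - out;
			}
			v = value(*p++);
		} while (v < 0);
		*o++ = s->byte | (v >> 2);
		s->byte = (v & 0x03) << 6;
	case step_d:
		do {
			if (p == in + in_len) {
				s->step = step_d;
				return o - out;
			}
			v = value(*p++);
		} while (v < 0);
		*o++ = s->byte | v;
	}
	}
}
====================

A caller would initialize struct state s = {step_a, 0} once and then call decode_chunk() per received chunk; feeding the same input split at any byte boundary yields the same output. That is the reenterability property argued about below.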
>> The reenterability means the function should be allowed to process all the input step by step, and must not lose data. Otherwise it makes no sense to have this 'state machine' via 'coroutines', and it also can be removed in order to pursue more performance. You don't even need the state variable.
>
> The state is being read and written only once, so we would gain literally nothing. It does not lose data if the output buffer is properly sized.

I never said it loses data when it is properly sized.

> Removing the state variable would make it harder for someone to implement processing input data in chunks, if we ever need this.

Then fix it :) I don't understand, really. You want to optimize this code - let's optimize it. Delete the state, delete the dead code. What you did now is a half-measure. Your patch even adds more dead code, because you added 3! new places where you update the state. Which is useless now anyway.

If somebody needs to implement chunked parsing, I am sure they will be able to google libb64 and take the source code from there again. Or use the git history to find the original code. Or take another implementation, because there is no proof that libb64 is the fastest solution written in C. Anyway, a non-chunked function would be enough for most of the cases, and would be faster and simpler for sure.

> And it is extremely unlikely that we would ever process input data in chunks WITHOUT a properly sized output buffer. The worst-case output buffer size is easily calculated from the input data size.
>
>> The bottom line: you either need to fix the function not to lose data + keep it reenterable + test it properly; or remove the state entirely, because if it loses information, it is useless anyway.
>
> It only loses information in "atypical" cases when the output buffer is not sized properly.

Which means the function can't be called again on the same input buffer. And that makes the state useless in all cases, not only in some.

But I see we are not getting anywhere here. You don't really need an LGTM from me on this patch, if you don't want to finish it. I am not strictly against these changes, because *probably* they don't add new bugs, and they seem to be a tiny bit better for perf. I only don't like that it is not finished. You can ask 2 other people who make decisions to review it in its current state and still get it committed. For example, Nikita, or Alexander Tu., or Alexander L.

As for the other 2 commits - everything looks fine to me except the comment I gave about the same test being run multiple times. Fix that, I will give my LGTM, and you will need just +1 LGTM for the first 2 commits.
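
For comparison, a stateless single-shot variant of the kind argued for above could look like the sketch below (hypothetical code, not the patch under review; it reuses the value() lookup from the earlier sketch and keeps the current contract that extra input is silently dropped when the output buffer is full):

====================
static int
base64_decode_simple(const char *in, int in_len, char *out, int out_len)
{
	char *o = out;
	char *end = out + out_len;
	unsigned bits = 0;	/* bit accumulator */
	int nbits = 0;		/* number of pending bits in it */
	for (const char *p = in; p < in + in_len; p++) {
		int v = value(*p);
		if (v < 0)
			continue;	/* skip '=', whitespace, garbage */
		bits = bits << 6 | (unsigned)v;
		nbits += 6;
		if (nbits >= 8) {
			nbits -= 8;
			if (o == end)
				break;	/* out buffer full: extra input dropped */
			*o++ = (char)(bits >> nbits);
		}
	}
	return o - out;
}
====================

With no resumable state there is nothing to carry across calls, which is what makes such a variant both simpler and easier to optimize.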