Date: Sun, 24 Feb 2019 21:22:51 +0300
From: Vladimir Davydov
Subject: Re: [tarantool-patches] [PATCH v3 1/7] memtx: introduce universal iterator_pool
Message-ID: <20190224182251.d65st3ncjheabeuf@esperanza>
References: <236d59ddf2ed9bb9c9e112763ca2dbd27424482a.1550849496.git.kshcherbatov@tarantool.org>
 <20190222183725.GD1691@chai>
 <20190224065622.wzutg7sgzviknqdf@esperanza>
 <20190224171504.GA17349@chai>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20190224171504.GA17349@chai>
To: Konstantin Osipov
Cc: tarantool-patches@freelists.org, Kirill Shcherbatov

On Sun, Feb 24, 2019 at 08:15:04PM +0300, Konstantin Osipov wrote:
> * Vladimir Davydov [19/02/24 10:01]:
> > On Fri, Feb 22, 2019 at 09:37:25PM +0300, Konstantin Osipov wrote:
> > > * Kirill Shcherbatov [19/02/22 19:29]:
> > > > Memtx uses separate mempools for iterators of different types.
> > > > Due to the fact that there will be more iterators of different
> > > > sizes in a series of upcoming changes, let's always allocate the
> > > > iterator of the largest size.
> > >
> > > If rtree iterator is the one which is largest, let's use a
> > > separate pool for it.
> > >
> > > In general mempools are rather cheap. Each mempool takes a slab
> > > for ~100 objects and uses no slabs if there are no objects (e.g.
> > > if rtree index is not used, there is no mempool memory for it).
> >
> > But I'd rather prefer to use the same mempool for all kinds of iterator
> > objects to simplify the code. Take a look at how those mempools are
> > initialized on demand. IMO it looks ugly. Do we really want to save
> > those 500 bytes that much to put up with that complexity?
>
> Just like in the recent bps tree performance issue, you don't
> pessimise the code since you never really know how it's going to
> be used.

Oh come on, what pessimization are you talking about in this particular
case? How many iterators can be out there simultaneously? A hundred, a
thousand? 500 bytes of overhead per iterator doesn't seem like much,
especially considering that each of those iterators is likely to be
backed by a fiber with a 16 KB stack: even a thousand of them would
waste about 500 KB against roughly 16 MB of fiber stacks.

Regarding the bps tree performance issue, I see nothing wrong there. We
found an issue and we will surely fix it. There was no point in thinking
about such a minor optimization before we actually hit the problem.

My point is that we should strive to write simple and reliable code
first and optimize it only when there is a demand; otherwise we risk
turning the code into an unmaintainable mess for no good reason.
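
To be concrete, here is a minimal sketch of the on-demand initialization
pattern I mean, written against the small allocator's mempool API
(mempool_create(), mempool_alloc(), mempool_is_initialized()). The
struct and function names below are illustrative only, not the exact
ones from the source tree:

#include <stddef.h>
#include <small/mempool.h>
#include <small/slab_cache.h>

struct memtx_engine_lazy_sketch {
	struct slab_cache slab_cache;
	/* One pool, and one lazy-init check, per iterator type. */
	struct mempool tree_iterator_pool;
	struct mempool hash_iterator_pool;
	struct mempool rtree_iterator_pool;
};

static void *
memtx_rtree_iterator_alloc(struct memtx_engine_lazy_sketch *memtx,
			   size_t size)
{
	/*
	 * The pool is created on first use, so an instance without
	 * rtree indexes never allocates a slab for it, at the price
	 * of repeating this check in every iterator constructor.
	 */
	if (!mempool_is_initialized(&memtx->rtree_iterator_pool))
		mempool_create(&memtx->rtree_iterator_pool,
			       &memtx->slab_cache, size);
	return mempool_alloc(&memtx->rtree_iterator_pool);
}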
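
And here is a sketch of the alternative I'd prefer: a single pool shared
by all iterator types and sized for the largest iterator struct. Again,
the names and the size constant are assumptions for illustration, not
the patch's actual code:

#include <assert.h>
#include <stddef.h>
#include <small/mempool.h>
#include <small/slab_cache.h>

enum {
	/* Assumed upper bound on the size of any memtx iterator. */
	MEMTX_ITERATOR_SIZE_MAX = 2048,
};

struct memtx_engine_shared_sketch {
	struct slab_cache slab_cache;
	/* Serves tree, hash, bitset and rtree iterators alike. */
	struct mempool iterator_pool;
};

static void
memtx_iterator_pool_create(struct memtx_engine_shared_sketch *memtx)
{
	/* Created unconditionally at engine start, used everywhere. */
	mempool_create(&memtx->iterator_pool, &memtx->slab_cache,
		       MEMTX_ITERATOR_SIZE_MAX);
}

static void *
memtx_iterator_alloc(struct memtx_engine_shared_sketch *memtx, size_t size)
{
	/* Any iterator, whatever its real size, takes a max-size slot. */
	assert(size <= MEMTX_ITERATOR_SIZE_MAX);
	(void)size;
	return mempool_alloc(&memtx->iterator_pool);
}

The lazy-init checks simply disappear, because the one pool can be
created up front without wasting memory on index types that are never
used.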