From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Date: Wed, 6 Feb 2019 14:49:05 +0300
From: Vladimir Davydov
Subject: Re: [tarantool-patches] Re: [PATCH 7/9] vinyl: randomize range compaction to avoid IO load spikes
Message-ID: <20190206114905.qq6wwcdexal4gy3j@esperanza>
References: <44f34fbaf09af5d1054f2e4843a77e095afe1e71.1548017258.git.vdavydov.dev@gmail.com>
 <20190122125458.cutoz5rtfd2sb6el@esperanza>
 <20190205173958.GG6811@chai>
 <20190206085302.3xzjz2udfvdin5ld@esperanza>
 <20190206104419.GD19953@chai>
 <20190206105244.c4gkhb3xsn2pkqmp@esperanza>
 <20190206110609.GA24382@chai>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20190206110609.GA24382@chai>
To: Konstantin Osipov
Cc: tarantool-patches@freelists.org
List-ID:

On Wed, Feb 06, 2019 at 02:06:09PM +0300, Konstantin Osipov wrote:
> * Vladimir Davydov [19/02/06 13:57]:
> > On Wed, Feb 06, 2019 at 01:44:19PM +0300, Konstantin Osipov wrote:
> > > * Vladimir Davydov [19/02/06 13:31]:
> > > > Over how many dumps? What do we do after restart, when there's no
> > > > history and perhaps even no level 1?
> > >
> > > I am thinking about something along these lines:
> > > f(n+1) = (f(n) + x*k)(1+k) - where k is the weight used to scale
> > > the next input
> > What should k be equal to?
>
> I don't know.

Neither do I. That's why I don't want to involve any kind of averaging.

> > Also, what should we do after restart when there's no history?
>
> Seed the function with the top run size.
> If there is no top run, seed with 0.

This way the moving average will grow very reluctantly - we'll need on
the order of 1/k dumps to adapt. During that time, compaction priority
calculation will be unstable.

> > Why is using the last level size as reference bad?
>
> Because you don't know if it's last or not?

We know which run is last. Provided the workload is stable, i.e. has
stopped growing its dataset, it will be roughly the same.
Besides, the last level run size changes only on major compaction,
which is infrequent. After a major compaction, it's OK to use a
different first level size - the point is that, in order not to break
the LSM algorithm, we have to maintain stable level sizing between
major compactions.
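For illustration, here is a minimal sketch of the moving-average seeding
idea debated above. It assumes a conventional exponential moving average,
f(n+1) = (1-k)*f(n) + k*x, which is one plausible reading of the formula
quoted in the thread; the function name, the sample sizes, and the value
k = 0.25 are made up for the example and are not Tarantool code.

```python
def update_estimate(estimate, run_size, k=0.25):
    """One exponential-moving-average step over dump run sizes.

    With no history (e.g. right after restart), seed the estimate
    with the first observed run size instead of averaging.
    """
    if estimate is None:
        return float(run_size)
    return (1 - k) * estimate + k * run_size

# Workload shifts from ~100 MB dumps to ~400 MB dumps.
estimate = None
for run_size in [100, 100, 400, 400, 400, 400]:
    estimate = update_estimate(estimate, run_size)
    print(f"estimate = {estimate:.2f} MB")
```

With k = 0.25 the estimate only reaches ~305 MB after four 400 MB dumps,
which illustrates the objection above: the average adapts over roughly
1/k dumps, and a compaction priority derived from it stays off during
that window.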