Date: Mon, 21 May 2018 10:17:52 +0300
From: Vladimir Davydov
Subject: Re: [PATCH] memtx: free tuples asynchronously when primary index is dropped
Message-ID: <20180521071752.cy7c5bqupfqs5mvx@esperanza>
References: <20180520213118.GA14364@atlas>
In-Reply-To: <20180520213118.GA14364@atlas>
To: Konstantin Osipov
Cc: tarantool-patches@freelists.org

On Mon, May 21, 2018 at 12:31:18AM +0300, Konstantin Osipov wrote:
> * Vladimir Davydov [18/05/20 18:07]:
> > When a memtx space is dropped or truncated, we have to unreference all
> > tuples stored in it. Currently, we do it synchronously, thus blocking
> > the tx thread. If a space is big, tx thread may remain blocked for
> > several seconds, which is unacceptable. This patch makes drop/truncate
> > hand actual work to a background fiber.
> >
> > Before this patch, drop of a space with 10M 64-byte records took more
> > than 0.5 seconds. After this patch, it takes less than 1 millisecond.
> >
> > Closes #3408
>
> This is a duplicate of https://github.com/tarantool/tarantool/issues/444
>
> It's OK to push.
>
> You can test it either using error injection or by adding metrics.

I'll add error injection.

> Choice of the constant - 128 iterations of the iterator loop per
> yield - is puzzling. Did you do any math? How much does a fiber
> yield cost?

The choice of 128 was arbitrary. After you pointed that out, I did some
testing. If we want to keep the max latency below 0.1 ms, which seems
reasonable, it is enough to yield every 1000 tuples. The diff is below.

diff --git a/src/box/memtx_hash.c b/src/box/memtx_hash.c
index 7e4c0474..55131740 100644
--- a/src/box/memtx_hash.c
+++ b/src/box/memtx_hash.c
@@ -139,7 +139,8 @@ memtx_hash_index_free(struct memtx_hash_index *index)
 static void
 memtx_hash_index_destroy_f(struct memtx_gc_task *task)
 {
-	enum { YIELD_LOOPS = 128 };
+	/* Yield every 1K tuples to keep latency < 0.1 ms. */
+	enum { YIELD_LOOPS = 1000 };
 
 	struct memtx_hash_index *index = container_of(task,
 			struct memtx_hash_index, gc_task);
diff --git a/src/box/memtx_tree.c b/src/box/memtx_tree.c
index 97452c5f..c72b8fa8 100644
--- a/src/box/memtx_tree.c
+++ b/src/box/memtx_tree.c
@@ -311,7 +311,8 @@ memtx_tree_index_free(struct memtx_tree_index *index)
 static void
 memtx_tree_index_destroy_f(struct memtx_gc_task *task)
 {
-	enum { YIELD_LOOPS = 128 };
+	/* Yield every 1K tuples to keep latency < 0.1 ms. */
+	enum { YIELD_LOOPS = 1000 };
 
 	struct memtx_tree_index *index = container_of(task,
 			struct memtx_tree_index, gc_task);
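
For reference, below is a minimal standalone sketch of the batching idea,
not code from the patch: tuple_unref() is replaced with a dummy refcount
decrement and the fiber yield with sched_yield(), so it only gives a rough
feel for how long a YIELD_LOOPS batch takes relative to the 0.1 ms budget.

/*
 * Standalone sketch of the yield-every-N-tuples pattern (illustrative
 * only, not part of the patch). The real destroy loop unreferences and
 * frees tuples; here each "tuple" is just a refcount decrement, so the
 * measured batch time is only a rough lower bound on the real pause.
 */
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

enum { YIELD_LOOPS = 1000, NBATCHES = 10000 };

struct fake_tuple {
	int refs;
};

int
main(void)
{
	long ntuples = (long)YIELD_LOOPS * NBATCHES;
	struct fake_tuple *tuples = calloc(ntuples, sizeof(*tuples));
	if (tuples == NULL)
		return 1;
	for (long i = 0; i < ntuples; i++)
		tuples[i].refs = 1;

	double max_ms = 0;
	for (int b = 0; b < NBATCHES; b++) {
		struct timespec t0, t1;
		clock_gettime(CLOCK_MONOTONIC, &t0);
		for (int i = 0; i < YIELD_LOOPS; i++) {
			/* Stand-in for tuple_unref() + tuple free. */
			tuples[(long)b * YIELD_LOOPS + i].refs--;
		}
		clock_gettime(CLOCK_MONOTONIC, &t1);
		double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
			    (t1.tv_nsec - t0.tv_nsec) / 1e6;
		if (ms > max_ms)
			max_ms = ms;
		/* Stand-in for the fiber yield between batches. */
		sched_yield();
	}
	printf("max time per %d-tuple batch: %.4f ms\n", YIELD_LOOPS, max_ms);
	free(tuples);
	return 0;
}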