From: Vladimir Davydov <vdavydov.dev@gmail.com>
To: kostja@tarantool.org
Cc: tarantool-patches@freelists.org
Subject: [PATCH v2 2/2] memtx: run garbage collection on demand
Date: Tue, 22 May 2018 20:25:31 +0300	[thread overview]
Message-ID: <40168332447f753d7ef7d32857ca7d5b0d9ed900.1527009486.git.vdavydov.dev@gmail.com> (raw)
In-Reply-To: <cover.1527009486.git.vdavydov.dev@gmail.com>

When a memtx space is dropped or truncated, freeing the tuples stored
in it is delegated to a background fiber so as not to block the caller
(and the tx thread) for too long. It turns out this doesn't work well
for ephemeral spaces, which share the destruction code with normal
spaces: the user may issue a lot of complex SQL SELECT statements that
create a lot of ephemeral spaces without ever yielding, so the garbage
collection fiber never gets a chance to clean up. There's a test that
emulates this, 2.0:test/sql-tap/gh-3083-ephemeral-unref-tuples.test.lua.
For this test to pass, let's run the garbage collection procedure on
demand, i.e. whenever any of the memtx allocation functions fails to
allocate memory.
Follow-up #3408
---
 src/box/memtx_engine.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/src/box/memtx_engine.c b/src/box/memtx_engine.c
index 9bde98a0..b4c1582a 100644
--- a/src/box/memtx_engine.c
+++ b/src/box/memtx_engine.c
@@ -1070,7 +1070,13 @@ memtx_tuple_new(struct tuple_format *format, const char *data, const char *end)
 		return NULL;
 	}
-	struct memtx_tuple *memtx_tuple = smalloc(&memtx->alloc, total);
+	struct memtx_tuple *memtx_tuple;
+	while ((memtx_tuple = smalloc(&memtx->alloc, total)) == NULL) {
+		bool stop;
+		memtx_engine_run_gc(memtx, &stop);
+		if (stop)
+			break;
+	}
 	if (memtx_tuple == NULL) {
 		diag_set(OutOfMemory, total, "slab allocator", "memtx_tuple");
 		return NULL;
@@ -1151,7 +1157,13 @@ memtx_index_extent_alloc(void *ctx)
 			 "mempool", "new slab");
 		return NULL;
 	});
-	void *ret = mempool_alloc(&memtx->index_extent_pool);
+	void *ret;
+	while ((ret = mempool_alloc(&memtx->index_extent_pool)) == NULL) {
+		bool stop;
+		memtx_engine_run_gc(memtx, &stop);
+		if (stop)
+			break;
+	}
 	if (ret == NULL)
 		diag_set(OutOfMemory, MEMTX_EXTENT_SIZE,
 			 "mempool", "new slab");
@@ -1184,6 +1196,10 @@ memtx_index_extent_reserve(struct memtx_engine *memtx, int num)
 	while (memtx->num_reserved_extents < num) {
 		void *ext = mempool_alloc(&memtx->index_extent_pool);
 		if (ext == NULL) {
+			bool stop;
+			memtx_engine_run_gc(memtx, &stop);
+			if (!stop)
+				continue;
 			diag_set(OutOfMemory, MEMTX_EXTENT_SIZE,
 				 "mempool", "new slab");
 			return -1;
-- 
2.11.0
next prev parent reply	other threads:[~2018-05-22 17:25 UTC|newest]

Thread overview: 7+ messages
2018-05-22 17:25 [PATCH v2 0/2] Follow-up on async memtx index cleanup Vladimir Davydov
2018-05-22 17:25 ` [PATCH v2 1/2] memtx: rework background garbage collection procedure Vladimir Davydov
2018-05-23 17:56   ` Konstantin Osipov
2018-05-24  6:13     ` Vladimir Davydov
2018-05-22 17:25 ` [PATCH v2 2/2] memtx: run garbage collection on demand Vladimir Davydov [this message]
2018-05-23 17:58   ` Konstantin Osipov
2018-05-24  6:15     ` Vladimir Davydov