From: Olga Arkhangelskaia <arkholga@tarantool.org>
To: tarantool-patches@dev.tarantool.org
Subject: [Tarantool-patches] [PATCH rfc v2] memtx: fix out of memory handling for rtree
Date: Mon, 9 Dec 2019 16:48:51 +0300
Message-ID: <20191209134851.14462-1-arkholga@tarantool.org>

When tarantool tries to recover an rtree index from a snapshot and the
memtx_memory value is lower than it was when the snapshot was created,
the server crashes with a segmentation fault. This happens because there
is no out-of-memory error handling in the rtree library; in other words,
the result of the malloc operation is never checked.

To prevent this, we reserve memory before the rtree replace operation.
If there is not enough memory to reserve, the server fails gracefully
with a "Failed to allocate" error message.

Closes #4619
---
Branch: https://github.com/tarantool/tarantool/tree/OKriw/gh-RTREE-doesnt-handle-OOM-properly
Issue: https://github.com/tarantool/tarantool/issues/4619
v1: https://lists.tarantool.org/pipermail/tarantool-patches/2019-November/012391.html

Changes in v2:
- changed the error handling: extents are now reserved before entering the rtree lib
- changed the test
- changed the commit message

 src/box/memtx_engine.h |  12 +++++
 src/box/memtx_rtree.c  |   7 +++
 src/box/memtx_space.c  |  12 -----
 test/box/cfg.result    | 101 +++++++++++++++++++++++++++++++++++++++++
 test/box/cfg.test.lua  |  28 ++++++++++++
 5 files changed, 148 insertions(+), 12 deletions(-)

diff --git a/src/box/memtx_engine.h b/src/box/memtx_engine.h
index fcf595e79..fc6aa083e 100644
--- a/src/box/memtx_engine.h
+++ b/src/box/memtx_engine.h
@@ -87,6 +87,18 @@ enum memtx_recovery_state {
 /** Memtx extents pool, available to statistics.
  */
 extern struct mempool memtx_index_extent_pool;
+enum memtx_reserve_extents_num {
+	/**
+	 * This number is calculated based on the
+	 * max (realistic) number of insertions
+	 * a deletion from a B-tree or an R-tree
+	 * can lead to, and, as a result, the max
+	 * number of new block allocations.
+	 */
+	RESERVE_EXTENTS_BEFORE_DELETE = 8,
+	RESERVE_EXTENTS_BEFORE_REPLACE = 16
+};
+
 /**
  * The size of the biggest memtx iterator. Used with
  * mempool_create. This is the size of the block that will be
diff --git a/src/box/memtx_rtree.c b/src/box/memtx_rtree.c
index 8badad797..799438241 100644
--- a/src/box/memtx_rtree.c
+++ b/src/box/memtx_rtree.c
@@ -227,6 +227,13 @@ memtx_rtree_index_replace(struct index *base, struct tuple *old_tuple,
 	(void)mode;
 	struct memtx_rtree_index *index = (struct memtx_rtree_index *)base;
 	struct rtree_rect rect;
+	/*
+	 * There is no error handling in the rtree lib. We need to be
+	 * sure that the allocation does not fail.
+	 */
+	struct memtx_engine *memtx = (struct memtx_engine *)base->engine;
+	if (memtx_index_extent_reserve(memtx,
+				       RESERVE_EXTENTS_BEFORE_REPLACE) != 0)
+		return -1;
 	if (new_tuple) {
 		if (extract_rectangle(&rect, new_tuple, base->def) != 0)
 			return -1;
diff --git a/src/box/memtx_space.c b/src/box/memtx_space.c
index cf29cf328..300417001 100644
--- a/src/box/memtx_space.c
+++ b/src/box/memtx_space.c
@@ -103,18 +103,6 @@ memtx_space_replace_no_keys(struct space *space, struct tuple *old_tuple,
 	return -1;
 }
 
-enum {
-	/**
-	 * This number is calculated based on the
-	 * max (realistic) number of insertions
-	 * a deletion from a B-tree or an R-tree
-	 * can lead to, and, as a result, the max
-	 * number of new block allocations.
-	 */
-	RESERVE_EXTENTS_BEFORE_DELETE = 8,
-	RESERVE_EXTENTS_BEFORE_REPLACE = 16
-};
-
 /**
  * A short-cut version of replace() used during bulk load
  * from snapshot.
diff --git a/test/box/cfg.result b/test/box/cfg.result
index 5370bb870..4ed8c9efe 100644
--- a/test/box/cfg.result
+++ b/test/box/cfg.result
@@ -580,3 +580,104 @@ test_run:cmd("cleanup server cfg_tester6")
  | ---
  | - true
  | ...
+
+--
+-- gh-4619-RTREE-doesn't-handle-OOM-properly
+--
+test_run:cmd('create server rtree with script = "box/lua/cfg_test1.lua"')
+ | ---
+ | - true
+ | ...
+test_run:cmd("start server rtree")
+ | ---
+ | - true
+ | ...
+test_run:cmd('switch rtree')
+ | ---
+ | - true
+ | ...
+box.cfg{memtx_memory = 3221225472}
+ | ---
+ | ...
+math = require("math")
+ | ---
+ | ...
+rtreespace = box.schema.create_space('rtree', {if_not_exists = true})
+ | ---
+ | ...
+rtreespace:create_index('pk', {if_not_exists = true})
+ | ---
+ | - unique: true
+ |   parts:
+ |   - type: unsigned
+ |     is_nullable: false
+ |     fieldno: 1
+ |   id: 0
+ |   space_id: 512
+ |   type: TREE
+ |   name: pk
+ | ...
+rtreespace:create_index('target', {type='rtree', dimension = 3, parts={2, 'array'}, unique = false, if_not_exists = true})
+ | ---
+ | - parts:
+ |   - type: array
+ |     is_nullable: false
+ |     fieldno: 2
+ |   dimension: 3
+ |   id: 1
+ |   type: RTREE
+ |   space_id: 512
+ |   name: target
+ | ...
+count = 2e6
+ | ---
+ | ...
+for i = 1, count do box.space.rtree:insert{i, {(i + 1) -\
+    math.floor((i + 1)/7000) * 7000, (i + 2) - math.floor((i + 2)/7000) * 7000,\
+    (i + 3) - math.floor((i + 3)/7000) * 7000}} end
+ | ---
+ | ...
+rtreespace:count()
+ | ---
+ | - 2000000
+ | ...
+box.snapshot()
+ | ---
+ | - ok
+ | ...
+test_run:cmd('switch default')
+ | ---
+ | - true
+ | ...
+test_run:cmd("stop server rtree")
+ | ---
+ | - true
+ | ...
+test_run:cmd("start server rtree with crash_expected=True")
+ | ---
+ | - false
+ | ...
+fio = require('fio')
+ | ---
+ | ...
+fh = fio.open(fio.pathjoin(fio.cwd(), 'cfg_test1.log'), {'O_RDONLY'})
+ | ---
+ | ...
+size = fh:seek(0, 'SEEK_END')
+ | ---
+ | ...
+fh:seek(-256, 'SEEK_END')
+ | ---
+ | - 11155
+ | ...
+line = fh:read(256)
+ | ---
+ | ...
+fh:close()
+ | ---
+ | - true
+ | ...
+string.match(line, 'Failed to allocate') ~= nil
+ | ---
+ | - true
+ | ...
diff --git a/test/box/cfg.test.lua b/test/box/cfg.test.lua
index 56ccb6767..b13f10c65 100644
--- a/test/box/cfg.test.lua
+++ b/test/box/cfg.test.lua
@@ -141,3 +141,31 @@ test_run:cmd("start server cfg_tester6")
 test_run:grep_log('cfg_tester6', 'set \'vinyl_memory\' configuration option to 1073741824', 1000)
 test_run:cmd("stop server cfg_tester6")
 test_run:cmd("cleanup server cfg_tester6")
+
+--
+-- gh-4619-RTREE-doesn't-handle-OOM-properly
+--
+test_run:cmd('create server rtree with script = "box/lua/cfg_test1.lua"')
+test_run:cmd("start server rtree")
+test_run:cmd('switch rtree')
+box.cfg{memtx_memory = 3221225472}
+math = require("math")
+rtreespace = box.schema.create_space('rtree', {if_not_exists = true})
+rtreespace:create_index('pk', {if_not_exists = true})
+rtreespace:create_index('target', {type='rtree', dimension = 3, parts={2, 'array'}, unique = false, if_not_exists = true})
+count = 2e6
+for i = 1, count do box.space.rtree:insert{i, {(i + 1) -\
+    math.floor((i + 1)/7000) * 7000, (i + 2) - math.floor((i + 2)/7000) * 7000,\
+    (i + 3) - math.floor((i + 3)/7000) * 7000}} end
+rtreespace:count()
+box.snapshot()
+test_run:cmd('switch default')
+test_run:cmd("stop server rtree")
+test_run:cmd("start server rtree with crash_expected=True")
+fio = require('fio')
+fh = fio.open(fio.pathjoin(fio.cwd(), 'cfg_test1.log'), {'O_RDONLY'})
+size = fh:seek(0, 'SEEK_END')
+fh:seek(-256, 'SEEK_END')
+line = fh:read(256)
+fh:close()
+string.match(line, 'Failed to allocate') ~= nil
-- 
2.20.1 (Apple Git-117)
Thread overview: 3+ messages
2019-12-09 13:48 Olga Arkhangelskaia [this message]
2019-12-09 14:07 ` Konstantin Osipov
2019-12-19  9:06 ` Olga Arkhangelskaia