From mboxrd@z Thu Jan  1 00:00:00 1970
From: Georgy Kirichenko
Subject: [tarantool-patches] [PATCH 08/10] Use mempool to alloc wal messages
Date: Fri, 19 Apr 2019 15:44:04 +0300
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Reply-To: tarantool-patches@freelists.org
List-Id: tarantool-patches
To: tarantool-patches@freelists.org
Cc: Georgy Kirichenko

Don't use the fiber gc region to allocate wal messages. This relaxes
the coupling between the fiber life cycle and transaction processing.

Prerequisites: #1254
---
 src/box/wal.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/src/box/wal.c b/src/box/wal.c
index 6ccc1220a..f0352e938 100644
--- a/src/box/wal.c
+++ b/src/box/wal.c
@@ -89,6 +89,8 @@ struct wal_writer
 	struct stailq rollback;
 	/** A pipe from 'tx' thread to 'wal' */
 	struct cpipe wal_pipe;
+	/** A memory pool for messages. */
+	struct mempool msg_pool;
 	/* ----------------- wal ------------------- */
 	/** A setting from instance configuration - rows_per_wal */
 	int64_t wal_max_rows;
@@ -287,6 +289,7 @@ tx_schedule_commit(struct cmsg *msg)
 	/* Update the tx vclock to the latest written by wal. */
 	vclock_copy(&replicaset.vclock, &batch->vclock);
 	tx_schedule_queue(&batch->commit);
+	mempool_free(&writer->msg_pool, container_of(msg, struct wal_msg, base));
 }
 
 static void
@@ -308,6 +311,9 @@ tx_schedule_rollback(struct cmsg *msg)
 		trigger_run(&req->on_error, NULL);
 	tx_schedule_queue(&writer->rollback);
 	stailq_create(&writer->rollback);
+	if (msg != &writer->in_rollback)
+		mempool_free(&writer->msg_pool,
+			     container_of(msg, struct wal_msg, base));
 }
@@ -378,6 +384,9 @@ wal_writer_create(struct wal_writer *writer, enum wal_mode wal_mode,
 	writer->on_garbage_collection = on_garbage_collection;
 	writer->on_checkpoint_threshold = on_checkpoint_threshold;
+
+	mempool_create(&writer->msg_pool, &cord()->slabc,
+		       sizeof(struct wal_msg));
 }
 
 /** Destroy a WAL writer structure. */
@@ -1158,8 +1167,7 @@ wal_write(struct journal *journal, struct journal_entry *entry)
 		stailq_add_tail_entry(&batch->commit, entry, fifo);
 	} else {
-		batch = (struct wal_msg *)
-			region_alloc(&fiber()->gc, sizeof(struct wal_msg));
+		batch = (struct wal_msg *)mempool_alloc(&writer->msg_pool);
 		if (batch == NULL) {
 			diag_set(OutOfMemory, sizeof(struct wal_msg),
				 "region", "struct wal_msg");
-- 
2.21.0