From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sat, 6 Jul 2019 02:01:36 +0300
From: Konstantin Osipov
Subject: [tarantool-patches] Re: [PATCH 1/2] swim: pool IO tasks
Message-ID: <20190705230136.GD30966@atlas>
To: Vladislav Shpilevoy
Cc: tarantool-patches@freelists.org

* Vladislav Shpilevoy [19/07/06 01:39]:
> +
> +/**
> + * All the SWIM instances and their members use the same objects
> + * to send data - tasks. Each task is ~1.5KB, and on the one
> + * hand it would be a waste of memory to keep preallocated tasks
> + * for each member. On the other hand it would be too slow to
> + * allocate and delete ~1.5KB on each interaction, ~3KB on each
> + * round step. Here is a pool of free tasks shared among all
> + * SWIM instances to avoid allocations, while not keeping a
> + * separate task for each member.
> + */
> +static struct stailq swim_task_pool;
> +/** Number of pooled tasks. */
> +static int swim_task_pool_size = 0;

These should be thread-local. Why not use mempool?

-- 
Konstantin Osipov, Moscow, Russia
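P.S. For illustration only, a minimal sketch of what a thread-local,
mempool-backed variant could look like. It assumes Tarantool's
small/mempool API (mempool_create/mempool_alloc/mempool_free) and
cord_slab_cache() from fiber.h; struct swim_task is the one from the
patch, and the swim_task_pool_take/swim_task_pool_put helper names are
hypothetical, not part of the patch:

    #include "small/mempool.h"
    #include "fiber.h" /* cord_slab_cache() */

    /*
     * Sketch: a cord-local pool of swim_tasks backed by mempool
     * instead of a hand-rolled stailq plus counter. The pool
     * object is per-thread, so no locking is needed, and mempool
     * already tracks the number of allocated objects.
     */
    static __thread struct mempool swim_task_pool;
    static __thread bool swim_task_pool_is_ready;

    static inline struct swim_task *
    swim_task_pool_take(void)
    {
            if (! swim_task_pool_is_ready) {
                    /* Lazily create the pool in the calling cord. */
                    mempool_create(&swim_task_pool, cord_slab_cache(),
                                   sizeof(struct swim_task));
                    swim_task_pool_is_ready = true;
            }
            /* May return NULL on OOM, as mempool_alloc() does. */
            return (struct swim_task *) mempool_alloc(&swim_task_pool);
    }

    static inline void
    swim_task_pool_put(struct swim_task *task)
    {
            mempool_free(&swim_task_pool, task);
    }

This way there is no global mutable state shared between threads, and
the size cap / free-list bookkeeping from the patch is delegated to
mempool.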