Tarantool development patches archive
From: Vladislav Shpilevoy <v.shpilevoy@tarantool.org>
To: Konstantin Osipov <kostja@tarantool.org>
Cc: tarantool-patches@freelists.org
Subject: [tarantool-patches] Re: [PATCH 1/2] swim: pool IO tasks
Date: Tue, 9 Jul 2019 00:13:13 +0200
Message-ID: <050a79f5-847a-f184-2197-87cc789cfeab@tarantool.org>
In-Reply-To: <20190708215432.GB7873@atlas>

>>>>> Why not use mempool?
>>>>
>>>> Because 1) it is overkill, 2) I don't want to depend on the
>>>> slab allocator, and 3) it just does not fit this case, according
>>>> to the mempool description from mempool.h:
>>>>
>>>>     "Good for allocating tons of small objects of the same size."
>>>
>>> It is also quite decent for allocating many fairly large objects.
>>> The key point is that the objects are all of the same size. You
>>> can set up the mempool with the right slab size, and in that case
>>> it will do exactly what you want.
>>>
>>
>> And again - 'many' usually won't be the case. We will have 0-2 SWIMs
>> in 99% of cases: one internal SWIM for box, and one external created
>> by a user. We will rarely have more than 10 cached tasks.
>>
>> But ok, as you wish. This place is not as critical for me as thread
>> locality of the pool. At least we reuse existing code. Thread
>> locality still looks like a pointless waste of memory to me.
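
For illustration only - a minimal sketch of what the mempool variant
could look like, assuming the API from small/mempool.h and the
cord-local slab cache from fiber.h; the contents of struct swim_task
here are hypothetical, not the real layout:

    #include <small/mempool.h>
    #include "fiber.h"

    /* Hypothetical task layout, just to give mempool a size. */
    struct swim_task {
            char packet[1500];
    };

    static struct mempool swim_task_mempool;

    static void
    swim_task_mempool_create(void)
    {
            /*
             * All tasks have the same size - exactly the case
             * mempool is designed for. Slabs come from the
             * cord-local slab cache.
             */
            mempool_create(&swim_task_mempool, cord_slab_cache(),
                           sizeof(struct swim_task));
    }

    static struct swim_task *
    swim_task_alloc(void)
    {
            return mempool_alloc(&swim_task_mempool);
    }

    static void
    swim_task_free(struct swim_task *task)
    {
            mempool_free(&swim_task_mempool, task);
    }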
> 
> OK, wait a second. I thought you were going to have a single task
> instance for each member, which means a couple of dozen instances
> even in a two-node cluster. Am I wrong?
> 

Yes, you misunderstood something. The whole point of this commit is
to make the number of tasks independent of the number of members.
On the master branch the number of tasks per SWIM instance grows
linearly with the cluster size - it is terrible. Both 1 and 2 tasks
per member are linear.

But the network load per instance does not depend on the number of
members, so we can make the number of tasks independent of the cluster
size as well. I am trying to reuse a small set of tasks for all members
of one instance. Just imagine: a SWIM instance knows about 500 members,
and manages to work with them using just 2-10 messages in flight at
a time.
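
To make the idea concrete, here is a rough sketch of the pooling
scheme, not the actual patch code; all names and the cache cap are
illustrative, and the list type is rlist from small/rlist.h:

    #include <stdlib.h>
    #include <small/rlist.h>

    /* Upper bound on cached tasks; does not depend on member count. */
    enum { SWIM_TASK_POOL_MAX_CACHED = 10 };

    struct swim_task {
            /* Link in the pool's list of free tasks. */
            struct rlist in_pool;
            /* ... packet buffer, completion callback, etc. ... */
    };

    struct swim_task_pool {
            /* Cached free tasks, ready for reuse. */
            struct rlist cache;
            int cache_size;
    };

    static void
    swim_task_pool_create(struct swim_task_pool *pool)
    {
            rlist_create(&pool->cache);
            pool->cache_size = 0;
    }

    static struct swim_task *
    swim_task_pool_take(struct swim_task_pool *pool)
    {
            if (!rlist_empty(&pool->cache)) {
                    --pool->cache_size;
                    return rlist_shift_entry(&pool->cache,
                                             struct swim_task, in_pool);
            }
            /* Cache is empty - fall back to a fresh allocation. */
            return malloc(sizeof(struct swim_task));
    }

    static void
    swim_task_pool_put(struct swim_task_pool *pool, struct swim_task *task)
    {
            if (pool->cache_size < SWIM_TASK_POOL_MAX_CACHED) {
                    rlist_add_entry(&pool->cache, task, in_pool);
                    ++pool->cache_size;
            } else {
                    free(task);
            }
    }

With such a cap the number of live tasks is bounded by the number of
in-flight messages plus the cache size, no matter how many members the
instance knows about.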


Thread overview: 13+ messages
2019-07-05 22:40 [tarantool-patches] [PATCH 0/2] SWIM micro optimizations Vladislav Shpilevoy
2019-07-05 22:40 ` [tarantool-patches] [PATCH 1/2] swim: pool IO tasks Vladislav Shpilevoy
2019-07-05 23:01   ` [tarantool-patches] " Konstantin Osipov
2019-07-06 21:00     ` Vladislav Shpilevoy
2019-07-08  8:25       ` Konstantin Osipov
2019-07-08 18:31         ` Vladislav Shpilevoy
2019-07-08 21:54           ` Konstantin Osipov
2019-07-08 22:13             ` Vladislav Shpilevoy [this message]
2019-07-08 23:08               ` Konstantin Osipov
2019-07-09 19:43                 ` Vladislav Shpilevoy
2019-07-09 22:24                   ` Konstantin Osipov
2019-07-05 22:40 ` [tarantool-patches] [PATCH 2/2] swim: optimize struct swim_task layout Vladislav Shpilevoy
2019-07-05 23:02   ` [tarantool-patches] " Konstantin Osipov
