Tarantool development patches archive
From: Oleg Babin via Tarantool-patches <tarantool-patches@dev.tarantool.org>
To: Vladislav Shpilevoy <v.shpilevoy@tarantool.org>,
	tarantool-patches@dev.tarantool.org,
	yaroslav.dynnikov@tarantool.org
Subject: Re: [Tarantool-patches] [PATCH vshard 5/6] router: introduce automatic master discovery
Date: Tue, 6 Jul 2021 11:54:21 +0300	[thread overview]
Message-ID: <7aa15d86-2ec9-c517-5553-e37f7370f49b@tarantool.org> (raw)
In-Reply-To: <07f4dc34-0453-ce22-747e-6c50a2bcac12@tarantool.org>

Thanks for your answers. Patch LGTM. I also asked Yaroslav and I hope he 
will take a look.

On 05.07.2021 23:53, Vladislav Shpilevoy wrote:
>>>>>     diff --git a/vshard/replicaset.lua b/vshard/replicaset.lua
>>>>> index fa048c9..7747258 100644
>>>>> --- a/vshard/replicaset.lua
>>>>> +++ b/vshard/replicaset.lua
>>>>> @@ -345,18 +377,30 @@ local function replicaset_master_call(replicaset, func, args, opts)
>>>>>         assert(opts == nil or type(opts) == 'table')
>>>>>         assert(type(func) == 'string', 'function name')
>>>>>         assert(args == nil or type(args) == 'table', 'function arguments')
>>>>> -    local conn, err = replicaset_connect_master(replicaset)
>>>>> -    if not conn then
>>>>> -        return nil, err
>>>>> -    end
>>>>> -    if not opts then
>>>>> -        opts = {timeout = replicaset.master.net_timeout}
>>>>> -    elseif not opts.timeout then
>>>>> -        opts = table.copy(opts)
>>>>> -        opts.timeout = replicaset.master.net_timeout
>>>>> +    local master = replicaset.master
>>>>> +    if not master then
>>>>> +        opts = opts and table.copy(opts) or {}
>>>>> +        if opts.is_async then
>>>>> +            return nil, lerror.vshard(lerror.code.MISSING_MASTER,
>>>>> +                                      replicaset.uuid)
>>>>> +        end
>>>> Could / should we wake up master discovery here if "auto" is specified?
>>> We can. But I am afraid that under a considerable RW request load we
>>> might wake it up too often. And if there are many routers, they might
>>> create a storm of `vshard.storage._call('info')` requests, putting a
>>> notable load on the storages. So I decided it is not safe until
>>> subscriptions are implemented.
>> But we can't simply fail async requests when the master is unknown.
> There is not much of a choice for now. You will fail them even if the master
> is here, but disconnected. Because netbox fails is_async immediately if the
> connection is neither in the active nor in the fetch_schema state.
>
> If it must be solved, I would better do it in a generic way for both
> situations, separately. But it is not trivial. Because you start getting a
> concept of is_async + timeout, so that the requests wouldn't hang there for
> too long overflowing their queue. This means you would need a heap of
> requests sorted by their deadlines, and you would flush them in an on_connect
> trigger, which AFAIR is not suitable for making calls for some reason, which
> is a separate issue (but I might be wrong).
>
> I wouldn't want to do that all in scope of master discovery ticket.

Yes, I agree. I know about the error in the case when netbox is not connected.

Ok, let it work the same way as a netbox async call.
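Just to make the idea concrete, here is a minimal sketch of the deadline queue
you describe: async calls are queued while the connection is unavailable and
flushed from an on-connect handler. All names here (`pending`, `enqueue`,
`flush_pending`) are hypothetical, not vshard API.

    local fiber = require('fiber')

    -- Requests queued while the master connection is unavailable.
    local pending = {}

    local function enqueue(func, args, timeout)
        table.insert(pending, {
            deadline = fiber.clock() + timeout,
            func = func,
            args = args,
        })
        -- A real implementation would use a binary heap; sorting a small
        -- table is enough to show the idea.
        table.sort(pending, function(a, b) return a.deadline < b.deadline end)
    end

    -- Supposed to be called from an on-connect trigger.
    local function flush_pending(conn)
        local now = fiber.clock()
        for _, req in ipairs(pending) do
            if req.deadline > now then
                conn:call(req.func, req.args, {is_async = true})
            end
            -- Expired requests are simply dropped here; a real version
            -- would return a timeout error to their callers.
        end
        pending = {}
    end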


>
>> And we even don't have exposed replicaset_wait_connected/replicaset_wait_master
>> to call before the master is known. As a result, for several seconds we will get failed requests.
> I didn't expose them yet because I am very reluctant towards exposing
> anything until there is an explicit request from a user. Don't want to
> overload the API.
>
>> Also I saw a TODO in the test that "RO requests should be able to go to replicas", and I expect questions
>> about RTO 0 on read requests from our customers.
> What do you mean as "RTO 0 on read requests"?
>
We faced this when developing the CRUD module. To perform requests we need to
know the space schema, but remote schema changes become visible only after a
request that triggers a schema refetch. So we had a working cluster, yet the
first portion of requests failed because of an outdated schema.

Will we have something similar here: the master is not discovered yet, but
replicas are available, and read requests fail anyway? If so, please file at
least an issue to fix it.

As for the term itself, RTO means "recovery time objective".

>> I don't know whether this feature will be used by cartridge, but it could create a small inconvenience for our customers
>> in tests, because they would have to inject timeouts until the master is discovered (it's a rather artificial example, but there were
>> some similar questions when discovery started to work by chunks).
>>>>>> +local function locate_masters(replicasets)
>>>>> +    local is_all_done = true
>>>>> +    local is_all_nop = true
>>>>> +    local last_err
>>>>> +    for _, replicaset in pairs(replicasets) do
>>>>> +        local is_done, is_nop, err = replicaset_locate_master(replicaset)
>>>> I think we should log the result of master discovery, especially if an error occurred.
>>> It is logged by the caller. See master_search_f(), init.lua:1067.
>> Hmm... Yes, but I think such an approach with last_err could lose some errors.
> Yes, that is also done on purpose. I don't want to clog the logs with too
> many errors. For instance, imagine you have 100 replicasets with 3 nodes
> each, each polled every 0.5s. And there is no network. Then it can produce
> about 600 errors per second. I tried to make the clogging less severe by showing
> only the last error. And the idea is that you fix these errors one by one until
> the log has no errors and discovery works fine.
>
> For example, first you fix the network if none of the futures was created. Then
> you fix storage.call('info') not existing due to an old version, and so on.
> In the log you will see the next error to fix.
>
> I also wanted not to log the same error too often. For example, check that the
> message didn't change in the master search fiber, and log it only on change
> or once per, say, 10 seconds. But I decided to wait and see if it explodes
> anywhere before I complicate the code even more.
>
> Do you think it is not needed? I am not sure, and can switch to logging
> everything.

Yes, sounds reasonable. Feel free to ignore the following thoughts, I'm just
sharing my idea.

It's not good to spam the logs with errors, but maybe we could aggregate the
error messages and print them once after each step. E.g. at the step where we
call `pcall(conn.call, conn, func, args, async_opts)` we could save the URIs or
UUIDs of the bad hosts and print them in a single message.
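A rough sketch of what I mean, reusing replicaset_locate_master() from the
patch. The wrapper name `locate_masters_logged` is purely illustrative, and it
assumes the replicasets table is keyed by UUID.

    local log = require('log')

    local function locate_masters_logged(replicasets)
        local failed_uuids = {}
        local last_err
        for uuid, replicaset in pairs(replicasets) do
            local _, _, err = replicaset_locate_master(replicaset)
            if err ~= nil then
                table.insert(failed_uuids, uuid)
                last_err = err
            end
        end
        if last_err ~= nil then
            -- One message per discovery step instead of one per replicaset.
            log.error('Master discovery failed on %s, last error: %s',
                      table.concat(failed_uuids, ', '), last_err)
        end
    end

That would keep it to one error line per discovery iteration while still
naming every replicaset that needs attention.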


>>>>> diff --git a/vshard/router/init.lua b/vshard/router/init.lua
>>>>> index 5e2a96b..9407ccd 100644
>>>>> --- a/vshard/router/init.lua
>>>>> +++ b/vshard/router/init.lua
>>>>> @@ -1030,6 +1032,93 @@ local function failover_f(router)
>>> <...>
>>>
>>>>> +--
>>>>> +-- Master discovery background function. It is supposed to notice master changes
>>>>> +-- and find new masters in the replicasets, which are configured for that.
>>>>> +--
>>>>> +-- XXX: due to polling the search might notice master change not right when it
>>>> Is this TODO related to https://github.com/tarantool/vshard/issues/284 ?
>>> Yes, that is one of the "solutions", although I would rather call it a crutch.
>>> I prefer to fix it by doing fair subscriptions. And for that I at least need a
>>> way to get netbox call result asynchronously, without having to call wait_result()
>>> anywhere.
>> Does it request some changes in netbox/iproto? I mean https://github.com/tarantool/tarantool/issues/6107
> 6107 is not about the protocol changes. It is only about
> the interface on the server and in netbox. For vshard it
> would be enough to patch netbox. I want to be able to do
> something like that:
>
> 	local subscribe_complete
>
> 	local function subscribe_push(conn, msg)
> 		log.info('Push: %s', msg)
> 		-- ...
> 	end
>
> 	local function subscribe_start(conn)
> 		conn:call('subscribe', {
> 			is_async = true,
> 			on_complete = subscribe_complete,
> 			on_complete_ctx = conn,
> 			on_push = subscribe_push,
> 			on_push_ctx = conn,
> 		})
> 	end
>
> 	local function subscribe_complete(conn, res)
> 		log.info('Subscription is done: %s, retry', res)
> 		-- ...
> 		subscribe_start(conn)
> 	end
>
> 	subscribe_start(conn)
>
> 'Subscribe' in vshard would be a function on the storage
> which returns when the storage's state changes in any way. I could
> manage these subscriptions on the client in a very cheap way
> without fibers. Then I could make multiple subscriptions or
> one big sub for all of the storage's details. Wouldn't matter. And
> it would cost almost nothing on the storage too if 6107
> were done.

I did something similar for one project: "subscribe" reads from a special
channel and performs box.session.push(). Ok, I hope it will be done later.
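Roughly like this (a minimal sketch, assuming a hypothetical `events` channel
that is filled elsewhere on the storage, not the actual code from that project):

    local fiber = require('fiber')

    -- State change events are put into this channel by whatever code
    -- notices the changes on the storage.
    local events = fiber.channel(100)

    -- Long-running call: the client keeps it open with is_async and
    -- receives every state change as a push message.
    local function subscribe()
        while true do
            local event = events:get()
            if event == nil then
                -- The channel was closed, finish the subscription.
                return
            end
            box.session.push(event)
        end
    end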




Thread overview: 27+ messages
2021-07-01 22:09 [Tarantool-patches] [PATCH vshard 0/6] Master discovery Vladislav Shpilevoy via Tarantool-patches
2021-07-01 22:09 ` [Tarantool-patches] [PATCH vshard 1/6] replicaset: introduce netbox_wait_connected() Vladislav Shpilevoy via Tarantool-patches
2021-07-02 11:46   ` Oleg Babin via Tarantool-patches
2021-07-01 22:09 ` [Tarantool-patches] [PATCH vshard 2/6] test: sort some table prints Vladislav Shpilevoy via Tarantool-patches
2021-07-02 11:46   ` Oleg Babin via Tarantool-patches
2021-07-01 22:09 ` [Tarantool-patches] [PATCH vshard 3/6] storage: introduce vshard.storage._call('info') Vladislav Shpilevoy via Tarantool-patches
2021-07-02 11:46   ` Oleg Babin via Tarantool-patches
2021-07-01 22:09 ` [Tarantool-patches] [PATCH vshard 4/6] config: introduce master 'auto' replicaset option Vladislav Shpilevoy via Tarantool-patches
2021-07-02 11:47   ` Oleg Babin via Tarantool-patches
2021-07-02 21:32     ` Vladislav Shpilevoy via Tarantool-patches
2021-07-05  9:23       ` Oleg Babin via Tarantool-patches
2021-07-01 22:09 ` [Tarantool-patches] [PATCH vshard 5/6] router: introduce automatic master discovery Vladislav Shpilevoy via Tarantool-patches
2021-07-02 11:48   ` Oleg Babin via Tarantool-patches
2021-07-02 21:35     ` Vladislav Shpilevoy via Tarantool-patches
2021-07-05  9:24       ` Oleg Babin via Tarantool-patches
2021-07-05 20:53         ` Vladislav Shpilevoy via Tarantool-patches
2021-07-06  8:54           ` Oleg Babin via Tarantool-patches [this message]
2021-07-06 21:19             ` Vladislav Shpilevoy via Tarantool-patches
2021-07-01 22:09 ` [Tarantool-patches] [PATCH vshard 6/6] router: update master using a hint from storage Vladislav Shpilevoy via Tarantool-patches
2021-07-02 11:49   ` Oleg Babin via Tarantool-patches
2021-07-02 21:36     ` Vladislav Shpilevoy via Tarantool-patches
2021-07-05  9:24       ` Oleg Babin via Tarantool-patches
2021-07-05 20:53         ` Vladislav Shpilevoy via Tarantool-patches
2021-07-06  8:55           ` Oleg Babin via Tarantool-patches
2021-07-02 21:36 ` [Tarantool-patches] [PATCH vshard 7/6] util: truncate too long fiber name Vladislav Shpilevoy via Tarantool-patches
2021-07-05  9:23   ` Oleg Babin via Tarantool-patches
2021-08-03 21:55 ` [Tarantool-patches] [PATCH vshard 0/6] Master discovery Vladislav Shpilevoy via Tarantool-patches
