[Tarantool-patches] [PATCH vshard 5/6] router: introduce automatic master discovery

Oleg Babin olegrok at tarantool.org
Tue Jul 6 11:54:21 MSK 2021


Thanks for your answers. Patch LGTM. I also asked Yaroslav and I hope he 
will take a look.

On 05.07.2021 23:53, Vladislav Shpilevoy wrote:
>>>>>     diff --git a/vshard/replicaset.lua b/vshard/replicaset.lua
>>>>> index fa048c9..7747258 100644
>>>>> --- a/vshard/replicaset.lua
>>>>> +++ b/vshard/replicaset.lua
>>>>> @@ -345,18 +377,30 @@ local function replicaset_master_call(replicaset, func, args, opts)
>>>>>         assert(opts == nil or type(opts) == 'table')
>>>>>         assert(type(func) == 'string', 'function name')
>>>>>         assert(args == nil or type(args) == 'table', 'function arguments')
>>>>> -    local conn, err = replicaset_connect_master(replicaset)
>>>>> -    if not conn then
>>>>> -        return nil, err
>>>>> -    end
>>>>> -    if not opts then
>>>>> -        opts = {timeout = replicaset.master.net_timeout}
>>>>> -    elseif not opts.timeout then
>>>>> -        opts = table.copy(opts)
>>>>> -        opts.timeout = replicaset.master.net_timeout
>>>>> +    local master = replicaset.master
>>>>> +    if not master then
>>>>> +        opts = opts and table.copy(opts) or {}
>>>>> +        if opts.is_async then
>>>>> +            return nil, lerror.vshard(lerror.code.MISSING_MASTER,
>>>>> +                                      replicaset.uuid)
>>>>> +        end
>>>> Could/should we wake up master discovery here if "auto" is specified?
>>> We can. But I am afraid that under a considerable RW request load we
>>> might wake it up too often. And if there are many routers, they might
>>> create a storm of `vshard.storage._call('info')` requests, putting a
>>> notable load on the storages. So I decided it is not safe until the
>>> subscriptions are implemented.
>> But we can't simply fail async requests when the master is unknown.
> There is not much of a choice for now. You will fail them even if the master
> is here, but disconnected, because netbox fails is_async requests immediately
> if the connection is neither in the active nor in the fetch_schema state.
>
> If it must be solved, I would rather do it in a generic way for both
> situations, separately. But it is not trivial, because you start getting a
> concept of is_async + timeout so that the requests wouldn't hang there for
> too long and overflow the queue. This means you would need a heap of requests
> sorted by their deadlines, and you would flush them in an on_connect trigger,
> which AFAIR is not suitable for making calls for some reason, which is a
> separate issue (but I might be wrong).
>
> I wouldn't want to do all of that in the scope of the master discovery ticket.

Yes, I agree. I know about the error in the case when netbox is not
connected.

OK, let it work the same way as a netbox async call.
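
Just to write down how I understand the deadline heap idea - a rough
sketch, where all the names are hypothetical and nothing of it is in the
patch:

    local fiber = require('fiber')

    -- Requests issued while the master is unknown are parked here. A real
    -- implementation would keep a heap ordered by deadline, not a plain
    -- table.
    local pending = {}

    local function defer_master_call(func, args, opts)
        table.insert(pending, {
            deadline = fiber.clock() + ((opts and opts.timeout) or 10),
            func = func,
            args = args,
            opts = opts,
        })
    end

    -- Supposed to be fired from an on_connect trigger once the master
    -- connection becomes active: still valid requests are re-issued on
    -- the live connection.
    local function flush_pending(conn)
        local now = fiber.clock()
        local reqs = pending
        pending = {}
        for _, req in ipairs(reqs) do
            if req.deadline > now then
                pcall(conn.call, conn, req.func, req.args, req.opts)
            end
            -- An expired request would be failed with a TimedOut error
            -- here.
        end
    end

For is_async the future also has to be handed back to the caller somehow,
which is exactly the non-trivial part you mentioned.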


>
>> And we don't even have replicaset_wait_connected/replicaset_wait_master
>> exposed to call before the master is known. As a result, for several
>> seconds we will get failed requests.
> I didn't expose them yet because I am very reluctant to expose
> anything until there is an explicit request from a user. I don't want to
> overload the API.
>
>> Also I saw a TODO in the test that "RO requests should be able to go to
>> replicas", and I expect questions about RTO 0 on read requests from our
>> customers.
> What do you mean by "RTO 0 on read requests"?
>
We faced it when we developed the CRUD module. To perform requests we need
to know the space schema. But all remote schema changes are visible only
after a request that triggers a schema refetch. So we had a working cluster,
but the first portion of requests failed because of an outdated schema.

Will we have something similar when we haven't discovered the master yet,
but have replicas, and our read requests fail? If so, please file at least
an issue to fix it.

As for the definition itself, RTO means "recovery time objective".


>> I don't know whether this feature will be used by cartridge, but it could
>> create a small inconvenience for our customers in tests, because they would
>> have to inject timeouts until the master is discovered (it's a rather
>> artificial example, but there were similar questions when discovery started
>> to work by chunks).
>>>>> +local function locate_masters(replicasets)
>>>>> +    local is_all_done = true
>>>>> +    local is_all_nop = true
>>>>> +    local last_err
>>>>> +    for _, replicaset in pairs(replicasets) do
>>>>> +        local is_done, is_nop, err = replicaset_locate_master(replicaset)
>>>> I think we should log the result of master discovery, especially if an error occurred.
>>> It is logged by the caller. See master_search_f(), init.lua:1067.
>> Hmm... Yes, but I think such an approach with last_err could lose some errors.
> Yes, that is also done on purpose. I don't want to clog the logs with too
> many errors. For instance, imagine you have 100 replicasets with 3 nodes
> each, and each node is polled every 0.5s. And there is no network. Then it
> can produce about 600 errors per second. I tried to make the clogging less
> severe by showing only the last error. The idea is that you fix these errors
> one by one until the log has no errors and discovery works fine.
>
> For example, first you fix the network if none of the futures was created.
> Then you fix storage.call('info') not existing due to an old version, etc.
> In the log you will see the next error to fix.
>
> I also wanted not to log the same error too often. For example, check that
> the message didn't change in the master search fiber, and log it only on
> change or once per, say, 10 seconds. But I decided to wait and see if it
> explodes anywhere before I complicate the code even more.
>
> Do you think it is not needed? I am not sure, and can switch to logging of
> everything.

Yes, sounds reasonable. Feel free to ignore the following thoughts, I am
just trying to share my idea.

It's not good to spam the logs with errors, but maybe we could aggregate the
error messages and print them after each step. I mean, e.g., in the step
where we call `pcall(conn.call, conn, func, args, async_opts)` we could save
the URIs or UUIDs of the bad hosts and print them in a single message.
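
Something like this rough sketch, where replicaset_locate_master is the
function from the patch and the rest is made up by me:

    local log = require('log')

    -- Collect the UUIDs of replicasets that failed during one discovery
    -- step and report them in a single log message instead of one message
    -- per host.
    local function locate_masters_and_report(replicasets)
        local bad_uuids = {}
        local last_err
        for _, replicaset in pairs(replicasets) do
            local _, _, err = replicaset_locate_master(replicaset)
            if err ~= nil then
                last_err = err
                table.insert(bad_uuids, replicaset.uuid)
            end
        end
        if last_err ~= nil then
            log.error('Master discovery failed on replicasets %s, ' ..
                      'last error: %s', table.concat(bad_uuids, ', '),
                      last_err)
        end
    end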


>>>>> diff --git a/vshard/router/init.lua b/vshard/router/init.lua
>>>>> index 5e2a96b..9407ccd 100644
>>>>> --- a/vshard/router/init.lua
>>>>> +++ b/vshard/router/init.lua
>>>>> @@ -1030,6 +1032,93 @@ local function failover_f(router)
>>> <...>
>>>
>>>>> +--
>>>>> +-- Master discovery background function. It is supposed to notice master changes
>>>>> +-- and find new masters in the replicasets, which are configured for that.
>>>>> +--
>>>>> +-- XXX: due to polling the search might notice master change not right when it
>>>> Is this TODO related to https://github.com/tarantool/vshard/issues/284 ?
>>> Yes, that is one of the "solutions", although I would rather call it a crutch.
>>> I prefer to fix it by doing fair subscriptions. And for that I at least need
>>> a way to get a netbox call result asynchronously, without having to call
>>> wait_result() anywhere.
>> Does it require some changes in netbox/iproto? I mean https://github.com/tarantool/tarantool/issues/6107
> 6107 is not about the protocol changes. It is only about
> the interface on the server and in netbox. For vshard it
> would be enough to patch netbox. I want to be able to do
> something like that:
>
> 	local subscribe_complete
>
> 	local function subscribe_push(conn, msg)
> 		log.info('Push: %s', msg)
> 		-- ...
> 	end
>
> 	local function subscribe_start(conn)
> 		conn:call('subscribe', {}, {
> 			is_async = true,
> 			on_complete = subscribe_complete,
> 			on_complete_ctx = conn,
> 			on_push = subscribe_push,
> 			on_push_ctx = conn,
> 		})
> 	end
>
> 	subscribe_complete = function(conn, res)
> 		log.info('Subscription is done: %s, retry', res)
> 		-- ...
> 		subscribe_start(conn)
> 	end
>
> 	subscribe_start(conn)
>
> 'Subscribe' in vshard would be a function on the storage
> which returns when the storage's state changes in any way. I could
> manage these subscriptions on the client in a very cheap way
> without fibers. Then I could make multiple subscriptions or
> one big sub for all the storage's details. It wouldn't matter. And
> it would cost almost nothing on the storage too if 6107
> was done.

I did something similar for one project: "subscribe" reads from a special
channel and performs box.session.push(). OK, I hope it will be done later.
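
Roughly like this, simplified from that project (the channel name and
capacity are arbitrary):

    local fiber = require('fiber')

    -- The storage internals put state-change events into this channel.
    local state_channel = fiber.channel(100)

    -- The 'subscribe' function blocks on the channel and streams every
    -- event to the client via box.session.push() until the channel is
    -- closed, which finishes the long-running call.
    local function subscribe()
        while true do
            local event = state_channel:get()
            if event == nil then
                return
            end
            box.session.push(event)
        end
    end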



