<HTML><BODY><div class="js-helper js-readmsg-msg"><style type="text/css"></style><div><div id="style_15918278771621420503_BODY"><div class="cl_967816"><div class="js-helper_mr_css_attr js-readmsg-msg_mr_css_attr"><style type="text/css"></style><div><div id="style_15918055770995187402_BODY_mr_css_attr"><div class="cl_848282_mr_css_attr"><div>The links were inlined under the words.</div><div>I didn't take into account that this doesn't work in all clients.</div><div><a href="https://github.com/tarantool/tarantool/issues/3776" rel="noopener noreferrer" target="_blank">https://github.com/tarantool/tarantool/issues/3776</a></div><div><a href="https://github.com/tarantool/tarantool/issues/4646" rel="noopener noreferrer" target="_blank">https://github.com/tarantool/tarantool/issues/4646</a></div><div><a href="https://github.com/tarantool/tarantool/issues/4910" rel="noopener noreferrer" target="_blank">https://github.com/tarantool/tarantool/issues/4910</a></div><div> </div><div>As for vtab/states, that is of course a separate question; I have written down how I see it.</div><div> </div><div><div>iproto_msg_decode is exactly the place where one can be separated from the other.<br>Or is that not what you mean?
Among other things, the proposal is precisely about executing<br>iproto_msg_decode without waiting for tx (currently tx takes too<br>active a part in the network interaction process).</div><div> </div><div>Besides, since tx has to take part in the process eventually anyway,<br>while it is "busy" we want to cut off any connections in iproto, at<br>the very least so that the descriptor leak itself does not happen.</div><div> </div><div>--<br>Ilya Kosarev</div></div><div class="mail-quote-collapse"><blockquote style="border-left:1px solid #0857A6;margin:10px;padding:0 0 0 10px;"><span data-email="kostja@scylladb.com" data-name="Konstantin Osipov" data-quote-id="1710415181220173657" data-timestamp="1590764100" data-type="sender"><span>Friday, May 29, 2020, 17:55 +03:00 from Konstantin Osipov <<a rel="noopener noreferrer">kostja@scylladb.com</a>>:<br> </span></span><div data-quote-id="1710415181220173657" data-type="body"><div><div id=""><div class="js-helper_mr_css_attr_mr_css_attr js-readmsg-msg_mr_css_attr_mr_css_attr"><style type="text/css"></style><div><div id="style_15907641160401306017_BODY_mr_css_attr_mr_css_attr">* Ilya Kosarev <<a>i.kosarev@tarantool.org</a>> [20/05/29 16:49]:<br><br>Ilya, this would have been hard to follow even in Russian,<br>and all the more so in English.<br><br>There are no links to the "mentioned tickets".<br><br>"vtab is overkill", that is presumably a reference<br>to my comment in the ticket?<br>You could have discussed it with me directly.<br><br><br>Overall, the question here is not rfc vs vtab, but how to separate,<br>in the traffic, connections from replicas, which must be accepted<br>during bootstrap, from connections from clients.<br><br>Today the protocol makes no such distinction.<br><br>The letter says nothing about this.<br><br> <div class="mail-quote-collapse">><br>> Hello everyone!<br>> <br>> It is well known that tarantool processes connections through the<br>> iproto subsystem. 
Due to some problems, roughly described in the mentioned<br>> tickets, it turns out that this subsystem's behavior should be<br>> reconsidered in some aspects.<br>> <br>> The proposed changes are supposed to solve at least the following<br>> problems. The first one is a descriptor rlimit violation when some<br>> clients perform enough requests while the tx thread is unresponsive.<br>> According to Yaroslav, 12 vshard routers reconnecting every 10.5<br>> seconds for 15 minutes are enough for recovery to die with a «can't<br>> initialize storage: error reading directory: too many open files»<br>> error.<br>> The second one is dirty reads and other problems when tx can respond<br>> although bootstrap is not finished.<br>> <br>> The solution is basically to give iproto more freedom, at least in<br>> some cases. As far as I see, it can be implemented using a humble<br>> state machine. The alternative is a vtab, and it seems like overkill<br>> here, since it is less transparent and there can be only 2 options<br>> for each request: to process it or to reject it. To start with, we<br>> can use 2 states to solve the first problem, which seems to be the<br>> more painful one, and then introduce new states to solve the second<br>> problem and possibly some more. These states may be called the "solo"<br>> & "assist" states. The "assist" state mostly implies the current<br>> iproto behavior and should be the basic one, while the "solo" state<br>> is intended to be enabled by the tx thread when it is going to become<br>> unresponsive for a considerable time (for example, while building<br>> secondary keys). The "solo" state means that iproto won't communicate<br>> with tx and will simply answer any request from anyone with a reply<br>> that tx is busy. The alternative is some kind of heartbeat from tx to<br>> iproto to let iproto decide whether it needs to change its state<br>> itself, but that also seems like overkill. 
If the user, for example, loads tx<br>> so much that it can't communicate with iproto, that is their own<br>> problem.<br>> <br>> This approach is needed because right now iproto can only accept<br>> connections, thereby consuming sockets when the tx thread can't<br>> answer. tx currently needs to prepare the greeting, and only then can<br>> iproto send it. It works the same way with all other requests: tx<br>> needs to prepare the answer and then iproto processes it.<br>> The proposed approach allows iproto itself to close connections or<br>> ask them to wait in the "solo" state. This will solve the leaking<br>> descriptors problem. Later, more states can be added, where iproto,<br>> for example, will answer only DML requests by itself (while tx is not<br>> ready for them). This idea is partly implemented, and it shows<br>> satisfactory behavior with an unresponsive tx.<br>> <br>> There is one thing that causes trouble: using output buffers with a<br>> thread-local slab_cache. Currently an obuf's slab_cache belongs to tx,<br>> while the proposed changes mean that both tx & iproto have to be able<br>> to use them depending on state & request type. I am currently<br>> searching for the best approach here. One option is to use more obufs<br>> (4 instead of 2), 2 of them belonging to the tx thread and 2 of them<br>> belonging to the iproto thread.<br>> <br>> It is also debatable whether connections to iproto in the "solo"<br>> state should be closed or should retry their requests after some<br>> timeout. I propose to close them, though there is an opinion that<br>> this is not the right behavior. Still, I think it is more transparent<br>> and understandable for users to reconnect by themselves, especially<br>> since this unresponsive-tx state might last for quite a long time.<br>> <br>> --<br>> Ilya Kosarev</div><br>--<br>Konstantin Osipov, Moscow, Russia</div></div></div></div></div></div></blockquote></div><div><div> </div></div></div></div></div></div><div> </div></div></div></div></div><div> </div></BODY></HTML>