<HTML><BODY><div>> We should update libev properly.<br> <br>It does seem reasonable. </div><hr><div><br>There was a bug in libev that caused some stress tests to fail<br>on macOS with an error indicating a lack of file descriptors. The flag that was<br>supposed to fix the issue (DARWIN_UNLIMITED_SELECT) was defined<br>too late in earlier versions of libev.</div><div><br>More precisely, it was defined after including time.h, which in<br>turn contained this line:</div><div><br>#include <sys/_select.h> /* select() prototype */</div><div><br>And <sys/_select.h> contained this:<br>#if defined(_DARWIN_C_SOURCE) || defined(_DARWIN_UNLIMITED_SELECT)<br> __DARWIN_EXTSN_C(select)</div><div><br>So the unlimited-select flag did not take effect as intended.<br>This was fixed in libev 4.25 along with other changes.<br> <br>Closes #3867<br>Closes #4673<br>---<br>Issues:<br><a href="https://github.com/tarantool/tarantool/issues/3867">https://github.com/tarantool/tarantool/issues/3867</a> <br><a href="https://github.com/tarantool/tarantool/issues/4673">https://github.com/tarantool/tarantool/issues/4673</a> <br>Branch:<br><a href="https://github.com/tarantool/tarantool/compare/eljashm/gh-3867-libev-update">https://github.com/tarantool/tarantool/compare/eljashm/gh-3867-libev-update</a> <br> <br> src/lib/core/fiber.c | 4 +-<br> third_party/libev/CVS/Entries | 62 +-<br> third_party/libev/Changes | 101 +++<br> third_party/libev/Makefile.am | 3 +-<br> third_party/libev/README | 3 +-<br> third_party/libev/Symbols.ev | 2 +-<br> third_party/libev/configure.ac | 6 +-<br> third_party/libev/ev++.h | 220 +++---<br> third_party/libev/ev.3 | 373 +++++++---<br> third_party/libev/ev.c | 1131 ++++++++++++++++++++++---------<br> third_party/libev/ev.h | 225 +++---<br> third_party/libev/ev.pod | 316 +++++++--<br> third_party/libev/ev_epoll.c | 69 +-<br> third_party/libev/ev_iouring.c | 694 +++++++++++++++++++<br> third_party/libev/ev_kqueue.c | 24 +-<br> third_party/libev/ev_linuxaio.c | 620 +++++++++++++++++<br> 
third_party/libev/ev_poll.c | 33 +-<br> third_party/libev/ev_port.c | 13 +-<br> third_party/libev/ev_select.c | 12 +-<br> third_party/libev/ev_vars.h | 51 +-<br> third_party/libev/ev_win32.c | 4 +-<br> third_party/libev/ev_wrap.h | 72 ++<br> third_party/libev/libev.m4 | 7 +-<br> third_party/libev/update_ev_c | 1 +<br> 24 files changed, 3222 insertions(+), 824 deletions(-)<br> create mode 100644 third_party/libev/ev_iouring.c<br> create mode 100644 third_party/libev/ev_linuxaio.c<br> <br>diff --git a/src/lib/core/fiber.c b/src/lib/core/fiber.c<br>index ada7972cb..2bf6a3333 100644<br>--- a/src/lib/core/fiber.c<br>+++ b/src/lib/core/fiber.c<br>@@ -1463,7 +1463,7 @@ cord_start(struct cord *cord, const char *name, void *(*f)(void *), void *arg)<br> struct cord_thread_arg ct_arg = { cord, name, f, arg, false,<br> PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER };<br> tt_pthread_mutex_lock(&ct_arg.start_mutex);<br>- cord->loop = ev_loop_new(EVFLAG_AUTO | EVFLAG_ALLOCFD);<br>+ cord->loop = ev_loop_new(EVFLAG_AUTO | EVFLAG_NOTIMERFD);<br> if (cord->loop == NULL) {<br> diag_set(OutOfMemory, 0, "ev_loop_new", "ev_loop");<br> goto end;<br>@@ -1701,7 +1701,7 @@ fiber_init(int (*invoke)(fiber_func f, va_list ap))<br> stack_direction = check_stack_direction(__builtin_frame_address(0));<br> fiber_invoke = invoke;<br> main_thread_id = pthread_self();<br>- main_cord.loop = ev_default_loop(EVFLAG_AUTO | EVFLAG_ALLOCFD);<br>+ main_cord.loop = ev_default_loop(EVFLAG_AUTO | EVFLAG_NOTIMERFD);<br> cord_create(&main_cord, "main");<br> }<br> <br>diff --git a/third_party/libev/CVS/Entries b/third_party/libev/CVS/Entries<br>index 3c8541193..497df4295 100644<br>--- a/third_party/libev/CVS/Entries<br>+++ b/third_party/libev/CVS/Entries<br>@@ -1,31 +1,33 @@<br>-/Makefile.am/1.9/Mon Aug 17 17:43:15 2015//<br>-/README/1.21/Mon Aug 17 17:43:15 2015//<br>-/README.embed/1.29/Mon Aug 17 17:43:15 2015//<br>-/Symbols.ev/1.14/Mon Aug 17 17:43:15 2015//<br>-/Symbols.event/1.4/Mon Aug 17 17:43:15 
2015//<br>-/autogen.sh/1.3/Mon Aug 17 17:43:15 2015//<br>-/ev_poll.c/1.39/Mon Aug 17 17:43:15 2015//<br>-/ev_port.c/1.28/Mon Aug 17 17:43:15 2015//<br>-/ev_select.c/1.55/Mon Aug 17 17:43:15 2015//<br>-/event.c/1.52/Mon Aug 17 17:43:15 2015//<br>-/event.h/1.26/Mon Aug 17 17:43:15 2015//<br>-/event_compat.h/1.8/Mon Aug 17 17:43:15 2015//<br>-/import_libevent/1.29/Mon Aug 17 17:43:15 2015//<br>-/update_ev_c/1.2/Mon Aug 17 17:43:15 2015//<br>-/update_ev_wrap/1.6/Mon Aug 17 17:43:15 2015//<br>-/update_symbols/1.1/Mon Aug 17 17:43:15 2015//<br>-/Changes/1.307/Sun Oct 4 10:12:28 2015//<br>-/LICENSE/1.11/Sun Oct 4 10:12:28 2015//<br>-/configure.ac/1.40/Sun Oct 4 10:12:28 2015//<br>-/ev++.h/1.62/Sun Oct 4 10:12:28 2015//<br>-/ev.3/1.103/Sun Oct 4 10:12:28 2015//<br>-/ev.c/1.477/Sun Oct 4 10:12:28 2015//<br>-/ev.h/1.183/Sun Oct 4 10:12:28 2015//<br>-/ev.pod/1.435/Sun Oct 4 10:12:28 2015//<br>-/ev_epoll.c/1.68/Sun Oct 4 10:12:28 2015//<br>-/ev_kqueue.c/1.55/Sun Oct 4 10:12:28 2015//<br>-/ev_vars.h/1.58/Sun Oct 4 10:12:28 2015//<br>-/ev_win32.c/1.16/Sun Oct 4 10:12:28 2015//<br>-/ev_wrap.h/1.38/Sun Oct 4 10:12:28 2015//<br>-/libev.m4/1.16/Sun Oct 4 10:12:28 2015//<br>+/Changes/1.365/Result of merge+Thu Feb 27 07:26:50 2020//<br>+/LICENSE/1.11/Wed Feb 19 13:30:17 2020//<br>+/Makefile.am/1.11/Thu Feb 27 07:26:50 2020//<br>+/README/1.22/Thu Feb 27 07:26:50 2020//<br>+/README.embed/1.29/Wed Feb 19 13:30:17 2020//<br>+/Symbols.ev/1.15/Thu Feb 27 07:26:50 2020//<br>+/Symbols.event/1.4/Wed Feb 19 13:30:17 2020//<br>+/autogen.sh/1.3/Wed Feb 19 13:30:17 2020//<br>+/configure.ac/1.45/Result of merge+Thu Feb 27 07:26:50 2020//<br>+/ev++.h/1.68/Thu Feb 27 07:26:50 2020//<br>+/ev.3/1.120/Result of merge+Thu Feb 27 07:26:50 2020//<br>+/ev.c/1.528/Result of merge+Thu Feb 27 07:26:50 2020//<br>+/ev.h/1.204/Result of merge+Thu Feb 27 07:26:50 2020//<br>+/ev.pod/1.464/Result of merge//<br>+/ev_epoll.c/1.82/Result of merge+Thu Feb 27 07:26:50 2020//<br>+/ev_iouring.c/1.21/Wed Jan 22 02:20:47 
2020//<br>+/ev_kqueue.c/1.61/Result of merge//<br>+/ev_linuxaio.c/1.53/Fri Dec 27 16:12:55 2019//<br>+/ev_poll.c/1.48/Result of merge+Thu Feb 27 07:26:50 2020//<br>+/ev_port.c/1.33/Result of merge//<br>+/ev_select.c/1.58/Result of merge//<br>+/ev_vars.h/1.67/Thu Feb 27 07:26:50 2020//<br>+/ev_win32.c/1.21/Result of merge//<br>+/ev_wrap.h/1.44/Thu Feb 27 07:26:50 2020//<br>+/event.c/1.52/Wed Feb 19 13:30:17 2020//<br>+/event.h/1.26/Wed Feb 19 13:30:17 2020//<br>+/event_compat.h/1.8/Wed Feb 19 13:30:17 2020//<br>+/import_libevent/1.29/Wed Feb 19 13:30:17 2020//<br>+/libev.m4/1.18/Thu Feb 27 07:26:50 2020//<br>+/update_ev_c/1.3/Thu Feb 27 07:26:50 2020//<br>+/update_ev_wrap/1.6/Wed Feb 19 13:30:17 2020//<br>+/update_symbols/1.1/Wed Feb 19 13:30:17 2020//<br> D<br>diff --git a/third_party/libev/Changes b/third_party/libev/Changes<br>index bb1e6d43d..80e70ae6d 100644<br>--- a/third_party/libev/Changes<br>+++ b/third_party/libev/Changes<br>@@ -1,5 +1,106 @@<br> Revision history for libev, a high-performance and full-featured event loop.<br> <br>+TODO: for next ABI/API change, consider moving EV__IOFDSSET into io->fd instead and provide a getter.<br>+TODO: document EV_TSTAMP_T<br>+<br>+4.32 (EV only)<br>+ - the 4.31 timerfd code wrongly changes the priority of the signal<br>+ fd watcher, which is usually harmless unless signal fds are<br>+ also used (found via cpan tester service).<br>+ - the documentation wrongly claimed that user may modify fd and events<br>+ members in io watchers when the watcher was stopped<br>+ (found by b_jonas).<br>+ - new ev_io_modify mutator which changes only the events member,<br>+ which can be faster. also added ev::io::set (int events) method<br>+ to ev++.h.<br>+ - officially allow a zero events mask for io watchers. 
this should<br>+ work with older libev versions as well but was not officially<br>+ allowed before.<br>+ - do not wake up every minute when timerfd is used to detect timejumps.<br>+ - do not wake up every minute when periodics are disabled and we have<br>+ a monotonic clock.<br>+ - support a lot more "uncommon" compile time configurations,<br>+ such as ev_embed enabled but ev_timer disabled.<br>+ - use a start/stop wrapper class to reduce code duplication in<br>+ ev++.h and make it needlessly more c++-y.<br>+ - the linux aio backend is no longer compiled in by default.<br>+ - update to libecb version 0x00010008.<br>+<br>+4.31 Fri Dec 20 21:58:29 CET 2019<br>+ - handle backends with minimum wait time a bit better by not<br>+ waiting in the presence of already-expired timers<br>+ (behaviour reported by Felipe Gasper).<br>+ - new feature: use timerfd to detect timejumps quickly,<br>+ can be disabled with the new EVFLAG_NOTIMERFD loop flag.<br>+ - document EV_USE_SIGNALFD feature macro.<br>+<br>+4.30 (EV only)<br>+ - change non-autoconf test for __kernel_rwf_t by testing<br>+ LINUX_VERSION_CODE, the most direct test I could find.<br>+ - fix a bug in the io_uring backend that polled the wrong<br>+ backend fd, causing it to not work in many cases.<br>+<br>+4.29 (EV only)<br>+ - add io uring autoconf and non-autoconf detection.<br>+ - disable io_uring when some header files are too old.<br>+<br>+4.28 (EV only)<br>+ - linuxaio backend resulted in random memory corruption<br>+ when loop is forked.<br>+ - linuxaio backend might have tried to cancel an iocb<br>+ multiple times (was unable to trigger this).<br>+ - linuxaio backend now employs a generation counter to<br>+ avoid handling spurious events from cancelled requests.<br>+ - io_cancel can return EINTR, deal with it. 
also, assume<br>+ io_submit also returns EINTR.<br>+ - fix some other minor bugs in linuxaio backend.<br>+ - ev_tstamp type can now be overriden by defining EV_TSTAMP_T.<br>+ - cleanup: replace expect_true/false and noinline by their<br>+ libecb counterparts.<br>+ - move syscall infrastructure from ev_linuxaio.c to ev.c.<br>+ - prepare io_uring integration.<br>+ - tweak ev_floor.<br>+ - epoll, poll, win32 Sleep and other places that use millisecond<br>+ reslution now all try to round up times.<br>+ - solaris port backend didn't compile.<br>+ - abstract time constants into their macros, for more flexibility.<br>+<br>+4.27 Thu Jun 27 22:43:44 CEST 2019<br>+ - linux aio backend almost completely rewritten to work around its<br>+ limitations.<br>+ - linux aio backend now requires linux 4.19+.<br>+ - epoll backend now mandatory for linux aio backend.<br>+ - fail assertions more aggressively on invalid fd's detected<br>+ in the event loop, do not just silently fd_kill in case of<br>+ user error.<br>+ - ev_io_start/ev_io_stop now verify the watcher fd using<br>+ a syscall when EV_VERIFY is 2 or higher.<br>+<br>+4.26 (EV only)<br>+ - update to libecb 0x00010006.<br>+ - new experimental linux aio backend (linux 4.18+).<br>+ - removed redundant 0-ptr check in ev_once.<br>+ - updated/extended ev_set_allocator documentation.<br>+ - replaced EMPTY2 macro by array_needsize_noinit.<br>+ - minor code cleanups.<br>+ - epoll backend now uses epoll_create1 also after fork.<br>+<br>+4.25 Fri Dec 21 07:49:20 CET 2018<br>+ - INCOMPATIBLE CHANGE: EV_THROW was renamed to EV_NOEXCEPT<br>+ (EV_THROW still provided) and now uses noexcept on C++11 or newer.<br>+ - move the darwin select workaround higher in ev.c, as newer versions of<br>+ darwin managed to break their broken select even more.<br>+ - ANDROID => __ANDROID__ (reported by enh@google.com).<br>+ - disable epoll_create1 on android because it has broken header files<br>+ and google is unwilling to fix them (reported by 
enh@google.com).<br>+ - avoid a minor compilation warning on win32.<br>+ - c++: remove deprecated dynamic throw() specifications.<br>+ - c++: improve the (unsupported) bad_loop exception class.<br>+ - backport perl ev_periodic example to C, untested.<br>+ - update libecb, biggets change is to include a memory fence<br>+ in ECB_MEMORY_FENCE_RELEASE on x86/amd64.<br>+ - minor autoconf/automake modernisation.<br>+<br> 4.24 Wed Dec 28 05:19:55 CET 2016<br> - bump version to 4.24, as the release tarball inexplicably<br> didn't have the right version in ev.h, even though the cvs-tagged<br>diff --git a/third_party/libev/Makefile.am b/third_party/libev/Makefile.am<br>index 059305bc3..2814622d8 100644<br>--- a/third_party/libev/Makefile.am<br>+++ b/third_party/libev/Makefile.am<br>@@ -4,7 +4,8 @@ VERSION_INFO = 4:0:0<br> <br> EXTRA_DIST = LICENSE Changes libev.m4 autogen.sh \<br> ev_vars.h ev_wrap.h \<br>- ev_epoll.c ev_select.c ev_poll.c ev_kqueue.c ev_port.c ev_win32.c \<br>+ ev_epoll.c ev_select.c ev_poll.c ev_kqueue.c ev_port.c ev_linuxaio.c ev_iouring.c \<br>+ ev_win32.c \<br> ev.3 ev.pod Symbols.ev Symbols.event<br> <br> man_MANS = ev.3<br>diff --git a/third_party/libev/README b/third_party/libev/README<br>index 31f619387..fca5fdf1a 100644<br>--- a/third_party/libev/README<br>+++ b/third_party/libev/README<br>@@ -18,7 +18,8 @@ ABOUT<br> - extensive and detailed, readable documentation (not doxygen garbage).<br> - fully supports fork, can detect fork in various ways and automatically<br> re-arms kernel mechanisms that do not support fork.<br>- - highly optimised select, poll, epoll, kqueue and event ports backends.<br>+ - highly optimised select, poll, linux epoll, linux aio, bsd kqueue<br>+ and solaris event ports backends.<br> - filesystem object (path) watching (with optional linux inotify support).<br> - wallclock-based times (using absolute time, cron-like).<br> - relative timers/timeouts (handle time jumps).<br>diff --git a/third_party/libev/Symbols.ev 
b/third_party/libev/Symbols.ev<br>index 7a29a75cb..fe169fa06 100644<br>--- a/third_party/libev/Symbols.ev<br>+++ b/third_party/libev/Symbols.ev<br>@@ -13,10 +13,10 @@ ev_clear_pending<br> ev_default_loop<br> ev_default_loop_ptr<br> ev_depth<br>+ev_embeddable_backends<br> ev_embed_start<br> ev_embed_stop<br> ev_embed_sweep<br>-ev_embeddable_backends<br> ev_feed_event<br> ev_feed_fd_event<br> ev_feed_signal<br>diff --git a/third_party/libev/configure.ac b/third_party/libev/configure.ac<br>index 2590f8fd6..fb9311fd4 100644<br>--- a/third_party/libev/configure.ac<br>+++ b/third_party/libev/configure.ac<br>@@ -1,11 +1,11 @@<br>-AC_INIT<br>+dnl also update ev.h!<br>+AC_INIT([libev], [4.31])<br> <br> orig_CFLAGS="$CFLAGS"<br> <br> AC_CONFIG_SRCDIR([ev_epoll.c])<br>+AM_INIT_AUTOMAKE<br> <br>-dnl also update ev.h!<br>-AM_INIT_AUTOMAKE(libev,4.24)<br> AC_CONFIG_HEADERS([config.h])<br> AM_MAINTAINER_MODE<br> <br>diff --git a/third_party/libev/ev++.h b/third_party/libev/ev++.h<br>index 4f0a36ab0..22dfcf58d 100644<br>--- a/third_party/libev/ev++.h<br>+++ b/third_party/libev/ev++.h<br>@@ -1,7 +1,7 @@<br> /*<br> * libev simple C++ wrapper classes<br> *<br>- * Copyright (c) 2007,2008,2010 Marc Alexander Lehmann <libev@schmorp.de><br>+ * Copyright (c) 2007,2008,2010,2018,2020 Marc Alexander Lehmann <libev@schmorp.de><br> * All rights reserved.<br> *<br> * Redistribution and use in source and binary forms, with or without modifica-<br>@@ -113,13 +113,13 @@ namespace ev {<br> <br> struct bad_loop<br> #if EV_USE_STDEXCEPT<br>- : std::runtime_error<br>+ : std::exception<br> #endif<br> {<br> #if EV_USE_STDEXCEPT<br>- bad_loop ()<br>- : std::runtime_error ("libev event loop cannot be initialized, bad value of LIBEV_FLAGS?")<br>+ const char *what () const EV_NOEXCEPT<br> {<br>+ return "libev event loop cannot be initialized, bad value of LIBEV_FLAGS?";<br> }<br> #endif<br> };<br>@@ -142,14 +142,14 @@ namespace ev {<br> <br> struct loop_ref<br> {<br>- loop_ref (EV_P) throw ()<br>+ loop_ref 
(EV_P) EV_NOEXCEPT<br> #if EV_MULTIPLICITY<br> : EV_AX (EV_A)<br> #endif<br> {<br> }<br> <br>- bool operator == (const loop_ref &other) const throw ()<br>+ bool operator == (const loop_ref &other) const EV_NOEXCEPT<br> {<br> #if EV_MULTIPLICITY<br> return EV_AX == other.EV_AX;<br>@@ -158,7 +158,7 @@ namespace ev {<br> #endif<br> }<br> <br>- bool operator != (const loop_ref &other) const throw ()<br>+ bool operator != (const loop_ref &other) const EV_NOEXCEPT<br> {<br> #if EV_MULTIPLICITY<br> return ! (*this == other);<br>@@ -168,27 +168,27 @@ namespace ev {<br> }<br> <br> #if EV_MULTIPLICITY<br>- bool operator == (const EV_P) const throw ()<br>+ bool operator == (const EV_P) const EV_NOEXCEPT<br> {<br> return this->EV_AX == EV_A;<br> }<br> <br>- bool operator != (const EV_P) const throw ()<br>+ bool operator != (const EV_P) const EV_NOEXCEPT<br> {<br>- return (*this == EV_A);<br>+ return ! (*this == EV_A);<br> }<br> <br>- operator struct ev_loop * () const throw ()<br>+ operator struct ev_loop * () const EV_NOEXCEPT<br> {<br> return EV_AX;<br> }<br> <br>- operator const struct ev_loop * () const throw ()<br>+ operator const struct ev_loop * () const EV_NOEXCEPT<br> {<br> return EV_AX;<br> }<br> <br>- bool is_default () const throw ()<br>+ bool is_default () const EV_NOEXCEPT<br> {<br> return EV_AX == ev_default_loop (0);<br> }<br>@@ -200,7 +200,7 @@ namespace ev {<br> ev_run (EV_AX_ flags);<br> }<br> <br>- void unloop (how_t how = ONE) throw ()<br>+ void unloop (how_t how = ONE) EV_NOEXCEPT<br> {<br> ev_break (EV_AX_ how);<br> }<br>@@ -211,74 +211,74 @@ namespace ev {<br> ev_run (EV_AX_ flags);<br> }<br> <br>- void break_loop (how_t how = ONE) throw ()<br>+ void break_loop (how_t how = ONE) EV_NOEXCEPT<br> {<br> ev_break (EV_AX_ how);<br> }<br> <br>- void post_fork () throw ()<br>+ void post_fork () EV_NOEXCEPT<br> {<br> ev_loop_fork (EV_AX);<br> }<br> <br>- unsigned int backend () const throw ()<br>+ unsigned int backend () const EV_NOEXCEPT<br> {<br> return 
ev_backend (EV_AX);<br> }<br> <br>- tstamp now () const throw ()<br>+ tstamp now () const EV_NOEXCEPT<br> {<br> return ev_now (EV_AX);<br> }<br> <br>- void ref () throw ()<br>+ void ref () EV_NOEXCEPT<br> {<br> ev_ref (EV_AX);<br> }<br> <br>- void unref () throw ()<br>+ void unref () EV_NOEXCEPT<br> {<br> ev_unref (EV_AX);<br> }<br> <br> #if EV_FEATURE_API<br>- unsigned int iteration () const throw ()<br>+ unsigned int iteration () const EV_NOEXCEPT<br> {<br> return ev_iteration (EV_AX);<br> }<br> <br>- unsigned int depth () const throw ()<br>+ unsigned int depth () const EV_NOEXCEPT<br> {<br> return ev_depth (EV_AX);<br> }<br> <br>- void set_io_collect_interval (tstamp interval) throw ()<br>+ void set_io_collect_interval (tstamp interval) EV_NOEXCEPT<br> {<br> ev_set_io_collect_interval (EV_AX_ interval);<br> }<br> <br>- void set_timeout_collect_interval (tstamp interval) throw ()<br>+ void set_timeout_collect_interval (tstamp interval) EV_NOEXCEPT<br> {<br> ev_set_timeout_collect_interval (EV_AX_ interval);<br> }<br> #endif<br> <br> // function callback<br>- void once (int fd, int events, tstamp timeout, void (*cb)(int, void *), void *arg = 0) throw ()<br>+ void once (int fd, int events, tstamp timeout, void (*cb)(int, void *), void *arg = 0) EV_NOEXCEPT<br> {<br> ev_once (EV_AX_ fd, events, timeout, cb, arg);<br> }<br> <br> // method callback<br> template<class K, void (K::*method)(int)><br>- void once (int fd, int events, tstamp timeout, K *object) throw ()<br>+ void once (int fd, int events, tstamp timeout, K *object) EV_NOEXCEPT<br> {<br> once (fd, events, timeout, method_thunk<K, method>, object);<br> }<br> <br> // default method == operator ()<br> template<class K><br>- void once (int fd, int events, tstamp timeout, K *object) throw ()<br>+ void once (int fd, int events, tstamp timeout, K *object) EV_NOEXCEPT<br> {<br> once (fd, events, timeout, method_thunk<K, &K::operator ()>, object);<br> }<br>@@ -292,7 +292,7 @@ namespace ev {<br> <br> // no-argument 
method callback<br> template<class K, void (K::*method)()><br>- void once (int fd, int events, tstamp timeout, K *object) throw ()<br>+ void once (int fd, int events, tstamp timeout, K *object) EV_NOEXCEPT<br> {<br> once (fd, events, timeout, method_noargs_thunk<K, method>, object);<br> }<br>@@ -306,7 +306,7 @@ namespace ev {<br> <br> // simpler function callback<br> template<void (*cb)(int)><br>- void once (int fd, int events, tstamp timeout) throw ()<br>+ void once (int fd, int events, tstamp timeout) EV_NOEXCEPT<br> {<br> once (fd, events, timeout, simpler_func_thunk<cb>);<br> }<br>@@ -320,7 +320,7 @@ namespace ev {<br> <br> // simplest function callback<br> template<void (*cb)()><br>- void once (int fd, int events, tstamp timeout) throw ()<br>+ void once (int fd, int events, tstamp timeout) EV_NOEXCEPT<br> {<br> once (fd, events, timeout, simplest_func_thunk<cb>);<br> }<br>@@ -332,12 +332,12 @@ namespace ev {<br> ();<br> }<br> <br>- void feed_fd_event (int fd, int revents) throw ()<br>+ void feed_fd_event (int fd, int revents) EV_NOEXCEPT<br> {<br> ev_feed_fd_event (EV_AX_ fd, revents);<br> }<br> <br>- void feed_signal_event (int signum) throw ()<br>+ void feed_signal_event (int signum) EV_NOEXCEPT<br> {<br> ev_feed_signal_event (EV_AX_ signum);<br> }<br>@@ -352,14 +352,14 @@ namespace ev {<br> struct dynamic_loop : loop_ref<br> {<br> <br>- dynamic_loop (unsigned int flags = AUTO) throw (bad_loop)<br>+ dynamic_loop (unsigned int flags = AUTO)<br> : loop_ref (ev_loop_new (flags))<br> {<br> if (!EV_AX)<br> throw bad_loop ();<br> }<br> <br>- ~dynamic_loop () throw ()<br>+ ~dynamic_loop () EV_NOEXCEPT<br> {<br> ev_loop_destroy (EV_AX);<br> EV_AX = 0;<br>@@ -376,7 +376,7 @@ namespace ev {<br> <br> struct default_loop : loop_ref<br> {<br>- default_loop (unsigned int flags = AUTO) throw (bad_loop)<br>+ default_loop (unsigned int flags = AUTO)<br> #if EV_MULTIPLICITY<br> : loop_ref (ev_default_loop (flags))<br> #endif<br>@@ -396,7 +396,7 @@ namespace ev {<br> 
default_loop &operator = (const default_loop &);<br> };<br> <br>- inline loop_ref get_default_loop () throw ()<br>+ inline loop_ref get_default_loop () EV_NOEXCEPT<br> {<br> #if EV_MULTIPLICITY<br> return ev_default_loop (0);<br>@@ -421,17 +421,35 @@ namespace ev {<br> template<class ev_watcher, class watcher><br> struct base : ev_watcher<br> {<br>+ // scoped pause/unpause of a watcher<br>+ struct freeze_guard<br>+ {<br>+ watcher &w;<br>+ bool active;<br>+<br>+ freeze_guard (watcher *self) EV_NOEXCEPT<br>+ : w (*self), active (w.is_active ())<br>+ {<br>+ if (active) w.stop ();<br>+ }<br>+<br>+ ~freeze_guard ()<br>+ {<br>+ if (active) w.start ();<br>+ }<br>+ };<br>+<br> #if EV_MULTIPLICITY<br> EV_PX;<br> <br> // loop set<br>- void set (EV_P) throw ()<br>+ void set (EV_P) EV_NOEXCEPT<br> {<br> this->EV_A = EV_A;<br> }<br> #endif<br> <br>- base (EV_PX) throw ()<br>+ base (EV_PX) EV_NOEXCEPT<br> #if EV_MULTIPLICITY<br> : EV_A (EV_A)<br> #endif<br>@@ -439,7 +457,7 @@ namespace ev {<br> ev_init (this, 0);<br> }<br> <br>- void set_ (const void *data, void (*cb)(EV_P_ ev_watcher *w, int revents)) throw ()<br>+ void set_ (const void *data, void (*cb)(EV_P_ ev_watcher *w, int revents)) EV_NOEXCEPT<br> {<br> this->data = (void *)data;<br> ev_set_cb (static_cast<ev_watcher *>(this), cb);<br>@@ -447,7 +465,7 @@ namespace ev {<br> <br> // function callback<br> template<void (*function)(watcher &w, int)><br>- void set (void *data = 0) throw ()<br>+ void set (void *data = 0) EV_NOEXCEPT<br> {<br> set_ (data, function_thunk<function>);<br> }<br>@@ -461,14 +479,14 @@ namespace ev {<br> <br> // method callback<br> template<class K, void (K::*method)(watcher &w, int)><br>- void set (K *object) throw ()<br>+ void set (K *object) EV_NOEXCEPT<br> {<br> set_ (object, method_thunk<K, method>);<br> }<br> <br> // default method == operator ()<br> template<class K><br>- void set (K *object) throw ()<br>+ void set (K *object) EV_NOEXCEPT<br> {<br> set_ (object, method_thunk<K, &K::operator 
()>);<br> }<br>@@ -482,7 +500,7 @@ namespace ev {<br> <br> // no-argument callback<br> template<class K, void (K::*method)()><br>- void set (K *object) throw ()<br>+ void set (K *object) EV_NOEXCEPT<br> {<br> set_ (object, method_noargs_thunk<K, method>);<br> }<br>@@ -501,76 +519,76 @@ namespace ev {<br> (static_cast<ev_watcher *>(this), events);<br> }<br> <br>- bool is_active () const throw ()<br>+ bool is_active () const EV_NOEXCEPT<br> {<br> return ev_is_active (static_cast<const ev_watcher *>(this));<br> }<br> <br>- bool is_pending () const throw ()<br>+ bool is_pending () const EV_NOEXCEPT<br> {<br> return ev_is_pending (static_cast<const ev_watcher *>(this));<br> }<br> <br>- void feed_event (int revents) throw ()<br>+ void feed_event (int revents) EV_NOEXCEPT<br> {<br> ev_feed_event (EV_A_ static_cast<ev_watcher *>(this), revents);<br> }<br> };<br> <br>- inline tstamp now (EV_P) throw ()<br>+ inline tstamp now (EV_P) EV_NOEXCEPT<br> {<br> return ev_now (EV_A);<br> }<br> <br>- inline void delay (tstamp interval) throw ()<br>+ inline void delay (tstamp interval) EV_NOEXCEPT<br> {<br> ev_sleep (interval);<br> }<br> <br>- inline int version_major () throw ()<br>+ inline int version_major () EV_NOEXCEPT<br> {<br> return ev_version_major ();<br> }<br> <br>- inline int version_minor () throw ()<br>+ inline int version_minor () EV_NOEXCEPT<br> {<br> return ev_version_minor ();<br> }<br> <br>- inline unsigned int supported_backends () throw ()<br>+ inline unsigned int supported_backends () EV_NOEXCEPT<br> {<br> return ev_supported_backends ();<br> }<br> <br>- inline unsigned int recommended_backends () throw ()<br>+ inline unsigned int recommended_backends () EV_NOEXCEPT<br> {<br> return ev_recommended_backends ();<br> }<br> <br>- inline unsigned int embeddable_backends () throw ()<br>+ inline unsigned int embeddable_backends () EV_NOEXCEPT<br> {<br> return ev_embeddable_backends ();<br> }<br> <br>- inline void set_allocator (void *(*cb)(void *ptr, long size) throw 
()) throw ()<br>+ inline void set_allocator (void *(*cb)(void *ptr, long size) EV_NOEXCEPT) EV_NOEXCEPT<br> {<br> ev_set_allocator (cb);<br> }<br> <br>- inline void set_syserr_cb (void (*cb)(const char *msg) throw ()) throw ()<br>+ inline void set_syserr_cb (void (*cb)(const char *msg) EV_NOEXCEPT) EV_NOEXCEPT<br> {<br> ev_set_syserr_cb (cb);<br> }<br> <br> #if EV_MULTIPLICITY<br> #define EV_CONSTRUCT(cppstem,cstem) \<br>- (EV_PX = get_default_loop ()) throw () \<br>+ (EV_PX = get_default_loop ()) EV_NOEXCEPT \<br> : base<ev_ ## cstem, cppstem> (EV_A) \<br> { \<br> }<br> #else<br> #define EV_CONSTRUCT(cppstem,cstem) \<br>- () throw () \<br>+ () EV_NOEXCEPT \<br> { \<br> }<br> #endif<br>@@ -581,19 +599,19 @@ namespace ev {<br> \<br> struct cppstem : base<ev_ ## cstem, cppstem> \<br> { \<br>- void start () throw () \<br>+ void start () EV_NOEXCEPT \<br> { \<br> ev_ ## cstem ## _start (EV_A_ static_cast<ev_ ## cstem *>(this)); \<br> } \<br> \<br>- void stop () throw () \<br>+ void stop () EV_NOEXCEPT \<br> { \<br> ev_ ## cstem ## _stop (EV_A_ static_cast<ev_ ## cstem *>(this)); \<br> } \<br> \<br> cppstem EV_CONSTRUCT(cppstem,cstem) \<br> \<br>- ~cppstem () throw () \<br>+ ~cppstem () EV_NOEXCEPT \<br> { \<br> stop (); \<br> } \<br>@@ -612,23 +630,19 @@ namespace ev {<br> };<br> <br> EV_BEGIN_WATCHER (io, io)<br>- void set (int fd, int events) throw ()<br>+ void set (int fd, int events) EV_NOEXCEPT<br> {<br>- int active = is_active ();<br>- if (active) stop ();<br>+ freeze_guard freeze (this);<br> ev_io_set (static_cast<ev_io *>(this), fd, events);<br>- if (active) start ();<br> }<br> <br>- void set (int events) throw ()<br>+ void set (int events) EV_NOEXCEPT<br> {<br>- int active = is_active ();<br>- if (active) stop ();<br>- ev_io_set (static_cast<ev_io *>(this), fd, events);<br>- if (active) start ();<br>+ freeze_guard freeze (this);<br>+ ev_io_modify (static_cast<ev_io *>(this), events);<br> }<br> <br>- void start (int fd, int events) throw ()<br>+ void start (int 
fd, int events) EV_NOEXCEPT<br> {<br> set (fd, events);<br> start ();<br>@@ -636,21 +650,19 @@ namespace ev {<br> EV_END_WATCHER (io, io)<br> <br> EV_BEGIN_WATCHER (timer, timer)<br>- void set (ev_tstamp after, ev_tstamp repeat = 0.) throw ()<br>+ void set (ev_tstamp after, ev_tstamp repeat = 0.) EV_NOEXCEPT<br> {<br>- int active = is_active ();<br>- if (active) stop ();<br>+ freeze_guard freeze (this);<br> ev_timer_set (static_cast<ev_timer *>(this), after, repeat);<br>- if (active) start ();<br> }<br> <br>- void start (ev_tstamp after, ev_tstamp repeat = 0.) throw ()<br>+ void start (ev_tstamp after, ev_tstamp repeat = 0.) EV_NOEXCEPT<br> {<br> set (after, repeat);<br> start ();<br> }<br> <br>- void again () throw ()<br>+ void again () EV_NOEXCEPT<br> {<br> ev_timer_again (EV_A_ static_cast<ev_timer *>(this));<br> }<br>@@ -663,21 +675,19 @@ namespace ev {<br> <br> #if EV_PERIODIC_ENABLE<br> EV_BEGIN_WATCHER (periodic, periodic)<br>- void set (ev_tstamp at, ev_tstamp interval = 0.) throw ()<br>+ void set (ev_tstamp at, ev_tstamp interval = 0.) EV_NOEXCEPT<br> {<br>- int active = is_active ();<br>- if (active) stop ();<br>+ freeze_guard freeze (this);<br> ev_periodic_set (static_cast<ev_periodic *>(this), at, interval, 0);<br>- if (active) start ();<br> }<br> <br>- void start (ev_tstamp at, ev_tstamp interval = 0.) throw ()<br>+ void start (ev_tstamp at, ev_tstamp interval = 0.) 
EV_NOEXCEPT<br> {<br> set (at, interval);<br> start ();<br> }<br> <br>- void again () throw ()<br>+ void again () EV_NOEXCEPT<br> {<br> ev_periodic_again (EV_A_ static_cast<ev_periodic *>(this));<br> }<br>@@ -686,15 +696,13 @@ namespace ev {<br> <br> #if EV_SIGNAL_ENABLE<br> EV_BEGIN_WATCHER (sig, signal)<br>- void set (int signum) throw ()<br>+ void set (int signum) EV_NOEXCEPT<br> {<br>- int active = is_active ();<br>- if (active) stop ();<br>+ freeze_guard freeze (this);<br> ev_signal_set (static_cast<ev_signal *>(this), signum);<br>- if (active) start ();<br> }<br> <br>- void start (int signum) throw ()<br>+ void start (int signum) EV_NOEXCEPT<br> {<br> set (signum);<br> start ();<br>@@ -704,15 +712,13 @@ namespace ev {<br> <br> #if EV_CHILD_ENABLE<br> EV_BEGIN_WATCHER (child, child)<br>- void set (int pid, int trace = 0) throw ()<br>+ void set (int pid, int trace = 0) EV_NOEXCEPT<br> {<br>- int active = is_active ();<br>- if (active) stop ();<br>+ freeze_guard freeze (this);<br> ev_child_set (static_cast<ev_child *>(this), pid, trace);<br>- if (active) start ();<br> }<br> <br>- void start (int pid, int trace = 0) throw ()<br>+ void start (int pid, int trace = 0) EV_NOEXCEPT<br> {<br> set (pid, trace);<br> start ();<br>@@ -722,22 +728,20 @@ namespace ev {<br> <br> #if EV_STAT_ENABLE<br> EV_BEGIN_WATCHER (stat, stat)<br>- void set (const char *path, ev_tstamp interval = 0.) throw ()<br>+ void set (const char *path, ev_tstamp interval = 0.) EV_NOEXCEPT<br> {<br>- int active = is_active ();<br>- if (active) stop ();<br>+ freeze_guard freeze (this);<br> ev_stat_set (static_cast<ev_stat *>(this), path, interval);<br>- if (active) start ();<br> }<br> <br>- void start (const char *path, ev_tstamp interval = 0.) throw ()<br>+ void start (const char *path, ev_tstamp interval = 0.) 
EV_NOEXCEPT<br> {<br> stop ();<br> set (path, interval);<br> start ();<br> }<br> <br>- void update () throw ()<br>+ void update () EV_NOEXCEPT<br> {<br> ev_stat_stat (EV_A_ static_cast<ev_stat *>(this));<br> }<br>@@ -746,33 +750,31 @@ namespace ev {<br> <br> #if EV_IDLE_ENABLE<br> EV_BEGIN_WATCHER (idle, idle)<br>- void set () throw () { }<br>+ void set () EV_NOEXCEPT { }<br> EV_END_WATCHER (idle, idle)<br> #endif<br> <br> #if EV_PREPARE_ENABLE<br> EV_BEGIN_WATCHER (prepare, prepare)<br>- void set () throw () { }<br>+ void set () EV_NOEXCEPT { }<br> EV_END_WATCHER (prepare, prepare)<br> #endif<br> <br> #if EV_CHECK_ENABLE<br> EV_BEGIN_WATCHER (check, check)<br>- void set () throw () { }<br>+ void set () EV_NOEXCEPT { }<br> EV_END_WATCHER (check, check)<br> #endif<br> <br> #if EV_EMBED_ENABLE<br> EV_BEGIN_WATCHER (embed, embed)<br>- void set_embed (struct ev_loop *embedded_loop) throw ()<br>+ void set_embed (struct ev_loop *embedded_loop) EV_NOEXCEPT<br> {<br>- int active = is_active ();<br>- if (active) stop ();<br>+ freeze_guard freeze (this);<br> ev_embed_set (static_cast<ev_embed *>(this), embedded_loop);<br>- if (active) start ();<br> }<br> <br>- void start (struct ev_loop *embedded_loop) throw ()<br>+ void start (struct ev_loop *embedded_loop) EV_NOEXCEPT<br> {<br> set (embedded_loop);<br> start ();<br>@@ -787,18 +789,18 @@ namespace ev {<br> <br> #if EV_FORK_ENABLE<br> EV_BEGIN_WATCHER (fork, fork)<br>- void set () throw () { }<br>+ void set () EV_NOEXCEPT { }<br> EV_END_WATCHER (fork, fork)<br> #endif<br> <br> #if EV_ASYNC_ENABLE<br> EV_BEGIN_WATCHER (async, async)<br>- void send () throw ()<br>+ void send () EV_NOEXCEPT<br> {<br> ev_async_send (EV_A_ static_cast<ev_async *>(this));<br> }<br> <br>- bool async_pending () throw ()<br>+ bool async_pending () EV_NOEXCEPT<br> {<br> return ev_async_pending (static_cast<ev_async *>(this));<br> }<br>diff --git a/third_party/libev/ev.3 b/third_party/libev/ev.3<br>index 5b2599e9b..985af854c 100644<br>--- 
a/third_party/libev/ev.3<br>+++ b/third_party/libev/ev.3<br>@@ -1,4 +1,4 @@<br>-.\" Automatically generated by Pod::Man 2.28 (Pod::Simple 3.30)<br>+.\" Automatically generated by Pod::Man 4.11 (Pod::Simple 3.35)<br> .\"<br> .\" Standard preamble:<br> .\" ========================================================================<br>@@ -46,7 +46,7 @@<br> .ie \n(.g .ds Aq \(aq<br> .el .ds Aq '<br> .\"<br>-.\" If the F register is turned on, we'll generate index entries on stderr for<br>+.\" If the F register is >0, we'll generate index entries on stderr for<br> .\" titles (.TH), headers (.SH), subsections (.SS), items (.Ip), and index<br> .\" entries marked with X<> in POD. Of course, you'll have to process the<br> .\" output yourself in some meaningful fashion.<br>@@ -56,12 +56,12 @@<br> ..<br> .nr rF 0<br> .if \n(.g .if rF .nr rF 1<br>-.if (\n(rF:(\n(.g==0)) \{<br>-. if \nF \{<br>+.if (\n(rF:(\n(.g==0)) \{\<br>+. if \nF \{\<br> . de IX<br> . tm Index:\\$1\t\\n%\t"\\$2"<br> ..<br>-. if !\nF==2 \{<br>+. if !\nF==2 \{\<br> . nr % 0<br> . nr F 2<br> . \}<br>@@ -133,7 +133,7 @@<br> .\" ========================================================================<br> .\"<br> .IX Title "LIBEV 3"<br>-.TH LIBEV 3 "2016-11-16" "libev-4.23" "libev - high performance full featured event loop"<br>+.TH LIBEV 3 "2020-01-22" "libev-4.31" "libev - high performance full featured event loop"<br> .\" For nroff, turn off justification. 
Always turn off hyphenation; it makes<br> .\" way too many mistakes in technical documents.<br> .if n .ad l<br>@@ -242,10 +242,10 @@ details of the event, and then hand it over to libev by \fIstarting\fR the<br> watcher.<br> .SS "\s-1FEATURES\s0"<br> .IX Subsection "FEATURES"<br>-Libev supports \f(CW\*(C`select\*(C'\fR, \f(CW\*(C`poll\*(C'\fR, the Linux-specific \f(CW\*(C`epoll\*(C'\fR, the<br>-BSD-specific \f(CW\*(C`kqueue\*(C'\fR and the Solaris-specific event port mechanisms<br>-for file descriptor events (\f(CW\*(C`ev_io\*(C'\fR), the Linux \f(CW\*(C`inotify\*(C'\fR interface<br>-(for \f(CW\*(C`ev_stat\*(C'\fR), Linux eventfd/signalfd (for faster and cleaner<br>+Libev supports \f(CW\*(C`select\*(C'\fR, \f(CW\*(C`poll\*(C'\fR, the Linux-specific aio and \f(CW\*(C`epoll\*(C'\fR<br>+interfaces, the BSD-specific \f(CW\*(C`kqueue\*(C'\fR and the Solaris-specific event port<br>+mechanisms for file descriptor events (\f(CW\*(C`ev_io\*(C'\fR), the Linux \f(CW\*(C`inotify\*(C'\fR<br>+interface (for \f(CW\*(C`ev_stat\*(C'\fR), Linux eventfd/signalfd (for faster and cleaner<br> inter-thread wakeup (\f(CW\*(C`ev_async\*(C'\fR)/signal handling (\f(CW\*(C`ev_signal\*(C'\fR)) relative<br> timers (\f(CW\*(C`ev_timer\*(C'\fR), absolute timers with customised rescheduling<br> (\f(CW\*(C`ev_periodic\*(C'\fR), synchronous signals (\f(CW\*(C`ev_signal\*(C'\fR), process status<br>@@ -293,9 +293,13 @@ it will print a diagnostic message and abort (via the \f(CW\*(C`assert\*(C'\fR m<br> so \f(CW\*(C`NDEBUG\*(C'\fR will disable this checking): these are programming errors in<br> the libev caller and need to be fixed there.<br> .PP<br>-Libev also has a few internal error-checking \f(CW\*(C`assert\*(C'\fRions, and also has<br>-extensive consistency checking code. 
These do not trigger under normal<br>-circumstances, as they indicate either a bug in libev or worse.<br>+Via the \f(CW\*(C`EV_FREQUENT\*(C'\fR macro you can compile in and/or enable extensive<br>+consistency checking code inside libev that can be used to check for<br>+internal inconsistencies, usually caused by application bugs.<br>+.PP<br>+Libev also has a few internal error-checking \f(CW\*(C`assert\*(C'\fRions. These do not<br>+trigger under normal circumstances, as they indicate either a bug in libev<br>+or worse.<br> .SH "GLOBAL FUNCTIONS"<br> .IX Header "GLOBAL FUNCTIONS"<br> These functions can be called anytime, even before initialising the<br>@@ -394,13 +398,35 @@ You could override this function in high-availability programs to, say,<br> free some memory if it cannot allocate memory, to use a special allocator,<br> or even to sleep a while and retry until some memory is available.<br> .Sp<br>+Example: The following is the \f(CW\*(C`realloc\*(C'\fR function that libev itself uses<br>+which should work with \f(CW\*(C`realloc\*(C'\fR and \f(CW\*(C`free\*(C'\fR functions of all kinds and<br>+is probably a good basis for your own implementation.<br>+.Sp<br>+.Vb 5<br>+\& static void *<br>+\& ev_realloc_emul (void *ptr, long size) EV_NOEXCEPT<br>+\& {<br>+\& if (size)<br>+\& return realloc (ptr, size);<br>+\&<br>+\& free (ptr);<br>+\& return 0;<br>+\& }<br>+.Ve<br>+.Sp<br> Example: Replace the libev allocator with one that waits a bit and then<br>-retries (example requires a standards-compliant \f(CW\*(C`realloc\*(C'\fR).<br>+retries.<br> .Sp<br>-.Vb 6<br>+.Vb 8<br> \& static void *<br> \& persistent_realloc (void *ptr, size_t size)<br> \& {<br>+\& if (!size)<br>+\& {<br>+\& free (ptr);<br>+\& return 0;<br>+\& }<br>+\&<br> \& for (;;)<br> \& {<br> \& void *newptr = realloc (ptr, size);<br>@@ -538,9 +564,10 @@ make libev check for a fork in each iteration by enabling this flag.<br> This works by calling \f(CW\*(C`getpid ()\*(C'\fR on every iteration of the 
loop,<br> and thus this might slow down your event loop if you do a lot of loop<br> iterations and little real work, but is usually not noticeable (on my<br>-GNU/Linux system for example, \f(CW\*(C`getpid\*(C'\fR is actually a simple 5\-insn sequence<br>-without a system call and thus \fIvery\fR fast, but my GNU/Linux system also has<br>-\&\f(CW\*(C`pthread_atfork\*(C'\fR which is even faster).<br>+GNU/Linux system for example, \f(CW\*(C`getpid\*(C'\fR is actually a simple 5\-insn<br>+sequence without a system call and thus \fIvery\fR fast, but my GNU/Linux<br>+system also has \f(CW\*(C`pthread_atfork\*(C'\fR which is even faster). (Update: glibc<br>+version 2.25 apparently removed the \f(CW\*(C`getpid\*(C'\fR optimisation again).<br> .Sp<br> The big advantage of this flag is that you can forget about fork (and<br> forget about forgetting to tell libev about forking, although you still<br>@@ -581,12 +608,21 @@ unblocking the signals.<br> .Sp<br> It's also required by \s-1POSIX\s0 in a threaded program, as libev calls<br> \&\f(CW\*(C`sigprocmask\*(C'\fR, whose behaviour is officially unspecified.<br>-.Sp<br>-This flag's behaviour will become the default in future versions of libev.<br>+.ie n .IP """EVFLAG_NOTIMERFD""" 4<br>+.el .IP "\f(CWEVFLAG_NOTIMERFD\fR" 4<br>+.IX Item "EVFLAG_NOTIMERFD"<br>+When this flag is specified, libev will avoid using a \f(CW\*(C`timerfd\*(C'\fR to<br>+detect time jumps. 
It will still be able to detect time jumps, but takes<br>+longer and has a lower accuracy in doing so, but saves a file descriptor<br>+per loop.<br>+.Sp<br>+The current implementation only tries to use a \f(CW\*(C`timerfd\*(C'\fR when the first<br>+\&\f(CW\*(C`ev_periodic\*(C'\fR watcher is started and falls back on other methods if it<br>+cannot be created, but this behaviour might change in the future.<br> .ie n .IP """EVBACKEND_SELECT"" (value 1, portable select backend)" 4<br> .el .IP "\f(CWEVBACKEND_SELECT\fR (value 1, portable select backend)" 4<br> .IX Item "EVBACKEND_SELECT (value 1, portable select backend)"<br>-This is your standard \fIselect\fR\|(2) backend. Not \fIcompletely\fR standard, as<br>+This is your standard \fBselect\fR\|(2) backend. Not \fIcompletely\fR standard, as<br> libev tries to roll its own fd_set with no limits on the number of fds,<br> but if that fails, expect a fairly low limit on the number of fds when<br> using this backend. It doesn't scale too well (O(highest_fd)), but its<br>@@ -605,7 +641,7 @@ This backend maps \f(CW\*(C`EV_READ\*(C'\fR to the \f(CW\*(C`readfds\*(C'\fR set<br> .ie n .IP """EVBACKEND_POLL"" (value 2, poll backend, available everywhere except on windows)" 4<br> .el .IP "\f(CWEVBACKEND_POLL\fR (value 2, poll backend, available everywhere except on windows)" 4<br> .IX Item "EVBACKEND_POLL (value 2, poll backend, available everywhere except on windows)"<br>-And this is your standard \fIpoll\fR\|(2) backend. It's more complicated<br>+And this is your standard \fBpoll\fR\|(2) backend. It's more complicated<br> than select, but handles sparse fds better and has no artificial<br> limit on the number of fds you can use (except it will slow down<br> considerably with a lot of inactive fds). 
It scales similarly to select,<br>@@ -617,7 +653,7 @@ This backend maps \f(CW\*(C`EV_READ\*(C'\fR to \f(CW\*(C`POLLIN | POLLERR | POLL<br> .ie n .IP """EVBACKEND_EPOLL"" (value 4, Linux)" 4<br> .el .IP "\f(CWEVBACKEND_EPOLL\fR (value 4, Linux)" 4<br> .IX Item "EVBACKEND_EPOLL (value 4, Linux)"<br>-Use the linux-specific \fIepoll\fR\|(7) interface (for both pre\- and post\-2.6.9<br>+Use the Linux-specific \fBepoll\fR\|(7) interface (for both pre\- and post\-2.6.9<br> kernels).<br> .Sp<br> For few fds, this backend is a bit little slower than poll and select, but<br>@@ -673,22 +709,65 @@ faster than epoll for maybe up to a hundred file descriptors, depending on<br> the usage. So sad.<br> .Sp<br> While nominally embeddable in other event loops, this feature is broken in<br>-all kernel versions tested so far.<br>+a lot of kernel revisions, but probably(!) works in current versions.<br>+.Sp<br>+This backend maps \f(CW\*(C`EV_READ\*(C'\fR and \f(CW\*(C`EV_WRITE\*(C'\fR in the same way as<br>+\&\f(CW\*(C`EVBACKEND_POLL\*(C'\fR.<br>+.ie n .IP """EVBACKEND_LINUXAIO"" (value 64, Linux)" 4<br>+.el .IP "\f(CWEVBACKEND_LINUXAIO\fR (value 64, Linux)" 4<br>+.IX Item "EVBACKEND_LINUXAIO (value 64, Linux)"<br>+Use the Linux-specific Linux \s-1AIO\s0 (\fInot\fR \f(CWaio(7)\fR but \f(CWio_submit(2)\fR) event interface available in post\-4.18 kernels (but libev<br>+only tries to use it in 4.19+).<br>+.Sp<br>+This is another Linux train wreck of an event interface.<br>+.Sp<br>+If this backend works for you (as of this writing, it was very<br>+experimental), it is the best event interface available on Linux and might<br>+be well worth enabling it \- if it isn't available in your kernel this will<br>+be detected and this backend will be skipped.<br>+.Sp<br>+This backend can batch oneshot requests and supports a user-space ring<br>+buffer to receive events. 
It also doesn't suffer from most of the design<br>+problems of epoll (such as not being able to remove event sources from<br>+the epoll set), and generally sounds too good to be true. Because, this<br>+being the Linux kernel, of course it suffers from a whole new set of<br>+limitations, forcing you to fall back to epoll, inheriting all its design<br>+issues.<br>+.Sp<br>+For one, it is not easily embeddable (but probably could be done using<br>+an event fd at some extra overhead). It also is subject to a system wide<br>+limit that can be configured in \fI/proc/sys/fs/aio\-max\-nr\fR. If no \s-1AIO\s0<br>+requests are left, this backend will be skipped during initialisation, and<br>+will switch to epoll when the loop is active.<br>+.Sp<br>+Most problematic in practice, however, is that not all file descriptors<br>+work with it. For example, in Linux 5.1, \s-1TCP\s0 sockets, pipes, event fds,<br>+files, \fI/dev/null\fR and many others are supported, but ttys do not work<br>+properly (a known bug that the kernel developers don't care about, see<br>+<https://lore.kernel.org/patchwork/patch/1047453/>), so this is not<br>+(yet?) a generic event polling interface.<br>+.Sp<br>+Overall, it seems the Linux developers just don't want it to have a<br>+generic event handling mechanism other than \f(CW\*(C`select\*(C'\fR or \f(CW\*(C`poll\*(C'\fR.<br>+.Sp<br>+To work around all these problems, the current version of libev uses its<br>+epoll backend as a fallback for file descriptor types that do not work. 
Or<br>+falls back completely to epoll if the kernel acts up.<br> .Sp<br> This backend maps \f(CW\*(C`EV_READ\*(C'\fR and \f(CW\*(C`EV_WRITE\*(C'\fR in the same way as<br> \&\f(CW\*(C`EVBACKEND_POLL\*(C'\fR.<br> .ie n .IP """EVBACKEND_KQUEUE"" (value 8, most \s-1BSD\s0 clones)" 4<br> .el .IP "\f(CWEVBACKEND_KQUEUE\fR (value 8, most \s-1BSD\s0 clones)" 4<br> .IX Item "EVBACKEND_KQUEUE (value 8, most BSD clones)"<br>-Kqueue deserves special mention, as at the time of this writing, it<br>-was broken on all BSDs except NetBSD (usually it doesn't work reliably<br>-with anything but sockets and pipes, except on Darwin, where of course<br>-it's completely useless). Unlike epoll, however, whose brokenness<br>-is by design, these kqueue bugs can (and eventually will) be fixed<br>-without \s-1API\s0 changes to existing programs. For this reason it's not being<br>-\&\*(L"auto-detected\*(R" unless you explicitly specify it in the flags (i.e. using<br>-\&\f(CW\*(C`EVBACKEND_KQUEUE\*(C'\fR) or libev was compiled on a known-to-be-good (\-enough)<br>-system like NetBSD.<br>+Kqueue deserves special mention, as at the time this backend was<br>+implemented, it was broken on all BSDs except NetBSD (usually it doesn't<br>+work reliably with anything but sockets and pipes, except on Darwin,<br>+where of course it's completely useless). Unlike epoll, however, whose<br>+brokenness is by design, these kqueue bugs can be (and mostly have been)<br>+fixed without \s-1API\s0 changes to existing programs. For this reason it's not<br>+being \*(L"auto-detected\*(R" on all platforms unless you explicitly specify it<br>+in the flags (i.e. 
using \f(CW\*(C`EVBACKEND_KQUEUE\*(C'\fR) or libev was compiled on a<br>+known-to-be-good (\-enough) system like NetBSD.<br> .Sp<br> You still can embed kqueue into a normal poll or select backend and use it<br> only for sockets (after having made sure that sockets work with kqueue on<br>@@ -699,7 +778,7 @@ kernel is more efficient (which says nothing about its actual speed, of<br> course). While stopping, setting and starting an I/O watcher does never<br> cause an extra system call as with \f(CW\*(C`EVBACKEND_EPOLL\*(C'\fR, it still adds up to<br> two event changes per incident. Support for \f(CW\*(C`fork ()\*(C'\fR is very bad (you<br>-might have to leak fd's on fork, but it's more sane than epoll) and it<br>+might have to leak fds on fork, but it's more sane than epoll) and it<br> drops fds silently in similarly hard-to-detect cases.<br> .Sp<br> This backend usually performs well under most conditions.<br>@@ -787,6 +866,14 @@ used if available.<br> .Vb 1<br> \& struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_KQUEUE);<br> .Ve<br>+.Sp<br>+Example: Similarly, on linux, you might want to take advantage of the<br>+linux aio backend if possible, but fall back to something else if that<br>+isn't available.<br>+.Sp<br>+.Vb 1<br>+\& struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_LINUXAIO);<br>+.Ve<br> .RE<br> .IP "ev_loop_destroy (loop)" 4<br> .IX Item "ev_loop_destroy (loop)"<br>@@ -1264,8 +1351,9 @@ with a watcher-specific start function (\f(CW\*(C`ev_TYPE_start (loop, watcher<br> corresponding stop function (\f(CW\*(C`ev_TYPE_stop (loop, watcher *)\*(C'\fR.<br> .PP<br> As long as your watcher is active (has been started but not stopped) you<br>-must not touch the values stored in it. Most specifically you must never<br>-reinitialise it or call its \f(CW\*(C`ev_TYPE_set\*(C'\fR macro.<br>+must not touch the values stored in it except when explicitly documented<br>+otherwise. 
Most specifically you must never reinitialise it or call its<br>+\&\f(CW\*(C`ev_TYPE_set\*(C'\fR macro.<br> .PP<br> Each and every callback receives the event loop pointer as first, the<br> registered watcher structure as second, and a bitset of received events as<br>@@ -1366,7 +1454,7 @@ bug in your program.<br> Libev will usually signal a few \*(L"dummy\*(R" events together with an error, for<br> example it might indicate that a fd is readable or writable, and if your<br> callbacks is well-written it can just attempt the operation and cope with<br>-the error from \fIread()\fR or \fIwrite()\fR. This will not work in multi-threaded<br>+the error from \fBread()\fR or \fBwrite()\fR. This will not work in multi-threaded<br> programs, though, as the fd could already be closed and reused for another<br> thing, so beware.<br> .SS "\s-1GENERIC WATCHER FUNCTIONS\s0"<br>@@ -1578,7 +1666,7 @@ Many event loops support \fIwatcher priorities\fR, which are usually small<br> integers that influence the ordering of event callback invocation<br> between watchers in some way, all else being equal.<br> .PP<br>-In libev, Watcher priorities can be set using \f(CW\*(C`ev_set_priority\*(C'\fR. See its<br>+In libev, watcher priorities can be set using \f(CW\*(C`ev_set_priority\*(C'\fR. See its<br> description for the more technical details such as the actual priority<br> range.<br> .PP<br>@@ -1682,14 +1770,17 @@ This section describes each watcher in detail, but will not repeat<br> information given in the last section. 
Any initialisation/set macros,<br> functions and members specific to the watcher type are explained.<br> .PP<br>-Members are additionally marked with either \fI[read\-only]\fR, meaning that,<br>-while the watcher is active, you can look at the member and expect some<br>-sensible content, but you must not modify it (you can modify it while the<br>-watcher is stopped to your hearts content), or \fI[read\-write]\fR, which<br>+Most members are additionally marked with either \fI[read\-only]\fR, meaning<br>+that, while the watcher is active, you can look at the member and expect<br>+some sensible content, but you must not modify it (you can modify it while<br>+the watcher is stopped to your heart's content), or \fI[read\-write]\fR, which<br> means you can expect it to have some sensible content while the watcher<br> is active, but you can also modify it. Modifying it may not do something<br> sensible or take immediate effect (or do anything at all), but libev will<br> not crash or malfunction in any way.<br>+.PP<br>+In any case, the documentation for each member will explain what the<br>+effects are, and if there are any additional access restrictions.<br> .ie n .SS """ev_io"" \- is this file descriptor readable or writable?"<br> .el .SS "\f(CWev_io\fP \- is this file descriptor readable or writable?"<br> .IX Subsection "ev_io - is this file descriptor readable or writable?"<br>@@ -1727,13 +1818,13 @@ But really, best use non-blocking mode.<br> \fIThe special problem of disappearing file descriptors\fR<br> .IX Subsection "The special problem of disappearing file descriptors"<br> .PP<br>-Some backends (e.g. kqueue, epoll) need to be told about closing a file<br>-descriptor (either due to calling \f(CW\*(C`close\*(C'\fR explicitly or any other means,<br>-such as \f(CW\*(C`dup2\*(C'\fR). The reason is that you register interest in some file<br>-descriptor, but when it goes away, the operating system will silently drop<br>-this interest. 
If another file descriptor with the same number then is<br>-registered with libev, there is no efficient way to see that this is, in<br>-fact, a different file descriptor.<br>+Some backends (e.g. kqueue, epoll, linuxaio) need to be told about closing<br>+a file descriptor (either due to calling \f(CW\*(C`close\*(C'\fR explicitly or any other<br>+means, such as \f(CW\*(C`dup2\*(C'\fR). The reason is that you register interest in some<br>+file descriptor, but when it goes away, the operating system will silently<br>+drop this interest. If another file descriptor with the same number then<br>+is registered with libev, there is no efficient way to see that this is,<br>+in fact, a different file descriptor.<br> .PP<br> To avoid having to explicitly tell libev about such cases, libev follows<br> the following policy: Each time \f(CW\*(C`ev_io_set\*(C'\fR is being called, libev<br>@@ -1795,9 +1886,10 @@ reuse the same code path.<br> \fIThe special problem of fork\fR<br> .IX Subsection "The special problem of fork"<br> .PP<br>-Some backends (epoll, kqueue) do not support \f(CW\*(C`fork ()\*(C'\fR at all or exhibit<br>-useless behaviour. Libev fully supports fork, but needs to be told about<br>-it in the child if you want to continue to use it in the child.<br>+Some backends (epoll, kqueue, linuxaio, iouring) do not support \f(CW\*(C`fork ()\*(C'\fR<br>+at all or exhibit useless behaviour. Libev fully supports fork, but needs<br>+to be told about it in the child if you want to continue to use it in the<br>+child.<br> .PP<br> To support fork in your child processes, you have to call \f(CW\*(C`ev_loop_fork<br> ()\*(C'\fR after a fork in the child, enable \f(CW\*(C`EVFLAG_FORKCHECK\*(C'\fR, or resort to<br>@@ -1812,13 +1904,13 @@ sent a \s-1SIGPIPE,\s0 which, by default, aborts your program. 
For most programs<br> this is sensible behaviour, for daemons, this is usually undesirable.<br> .PP<br> So when you encounter spurious, unexplained daemon exits, make sure you<br>-ignore \s-1SIGPIPE \s0(and maybe make sure you log the exit status of your daemon<br>+ignore \s-1SIGPIPE\s0 (and maybe make sure you log the exit status of your daemon<br> somewhere, as that would have given you a big clue).<br> .PP<br>-\fIThe special problem of \fIaccept()\fIing when you can't\fR<br>+\fIThe special problem of \f(BIaccept()\fIing when you can't\fR<br> .IX Subsection "The special problem of accept()ing when you can't"<br> .PP<br>-Many implementations of the \s-1POSIX \s0\f(CW\*(C`accept\*(C'\fR function (for example,<br>+Many implementations of the \s-1POSIX\s0 \f(CW\*(C`accept\*(C'\fR function (for example,<br> found in post\-2004 Linux) have the peculiar behaviour of not removing a<br> connection from the pending queue in all error cases.<br> .PP<br>@@ -1864,14 +1956,33 @@ opportunity for a DoS attack.<br> .IX Item "ev_io_set (ev_io *, int fd, int events)"<br> .PD<br> Configures an \f(CW\*(C`ev_io\*(C'\fR watcher. 
The \f(CW\*(C`fd\*(C'\fR is the file descriptor to<br>-receive events for and \f(CW\*(C`events\*(C'\fR is either \f(CW\*(C`EV_READ\*(C'\fR, \f(CW\*(C`EV_WRITE\*(C'\fR or<br>-\&\f(CW\*(C`EV_READ | EV_WRITE\*(C'\fR, to express the desire to receive the given events.<br>-.IP "int fd [read\-only]" 4<br>-.IX Item "int fd [read-only]"<br>-The file descriptor being watched.<br>-.IP "int events [read\-only]" 4<br>-.IX Item "int events [read-only]"<br>-The events being watched.<br>+receive events for and \f(CW\*(C`events\*(C'\fR is either \f(CW\*(C`EV_READ\*(C'\fR, \f(CW\*(C`EV_WRITE\*(C'\fR, both<br>+\&\f(CW\*(C`EV_READ | EV_WRITE\*(C'\fR or \f(CW0\fR, to express the desire to receive the given<br>+events.<br>+.Sp<br>+Note that setting the \f(CW\*(C`events\*(C'\fR to \f(CW0\fR and starting the watcher is<br>+supported, but not specially optimized \- if your program sometimes happens<br>+to generate this combination this is fine, but if it is easy to avoid<br>+starting an io watcher watching for no events you should do so.<br>+.IP "ev_io_modify (ev_io *, int events)" 4<br>+.IX Item "ev_io_modify (ev_io *, int events)"<br>+Similar to \f(CW\*(C`ev_io_set\*(C'\fR, but only changes the event mask. Using this might<br>+be faster with some backends, as libev can assume that the \f(CW\*(C`fd\*(C'\fR still<br>+refers to the same underlying file description, something it cannot do<br>+when using \f(CW\*(C`ev_io_set\*(C'\fR.<br>+.IP "int fd [no\-modify]" 4<br>+.IX Item "int fd [no-modify]"<br>+The file descriptor being watched. While it can be read at any time, you<br>+must not modify this member even when the watcher is stopped \- always use<br>+\&\f(CW\*(C`ev_io_set\*(C'\fR for that.<br>+.IP "int events [no\-modify]" 4<br>+.IX Item "int events [no-modify]"<br>+The set of events the fd is being watched for, among other flags. 
Remember<br>+that this is a bit set \- to test for \f(CW\*(C`EV_READ\*(C'\fR, use \f(CW\*(C`w\->events &<br>+EV_READ\*(C'\fR, and similarly for \f(CW\*(C`EV_WRITE\*(C'\fR.<br>+.Sp<br>+As with \f(CW\*(C`fd\*(C'\fR, you must not modify this member even when the watcher is<br>+stopped, always use \f(CW\*(C`ev_io_set\*(C'\fR or \f(CW\*(C`ev_io_modify\*(C'\fR for that.<br> .PP<br> \fIExamples\fR<br> .IX Subsection "Examples"<br>@@ -2252,11 +2363,11 @@ deterministic behaviour in this case (you can do nothing against<br> .IP "ev_timer_set (ev_timer *, ev_tstamp after, ev_tstamp repeat)" 4<br> .IX Item "ev_timer_set (ev_timer *, ev_tstamp after, ev_tstamp repeat)"<br> .PD<br>-Configure the timer to trigger after \f(CW\*(C`after\*(C'\fR seconds. If \f(CW\*(C`repeat\*(C'\fR<br>-is \f(CW0.\fR, then it will automatically be stopped once the timeout is<br>-reached. If it is positive, then the timer will automatically be<br>-configured to trigger again \f(CW\*(C`repeat\*(C'\fR seconds later, again, and again,<br>-until stopped manually.<br>+Configure the timer to trigger after \f(CW\*(C`after\*(C'\fR seconds (fractional and<br>+negative values are supported). If \f(CW\*(C`repeat\*(C'\fR is \f(CW0.\fR, then it will<br>+automatically be stopped once the timeout is reached. If it is positive,<br>+then the timer will automatically be configured to trigger again \f(CW\*(C`repeat\*(C'\fR<br>+seconds later, again, and again, until stopped manually.<br> .Sp<br> The timer itself will do a best-effort at avoiding drift, that is, if<br> you configure a timer to trigger every 10 seconds, then it will normally<br>@@ -2363,8 +2474,8 @@ it, as it uses a relative timeout).<br> .PP<br> \&\f(CW\*(C`ev_periodic\*(C'\fR watchers can also be used to implement vastly more complex<br> timers, such as triggering an event on each \*(L"midnight, local time\*(R", or<br>-other complicated rules. 
This cannot be done with \f(CW\*(C`ev_timer\*(C'\fR watchers, as<br>-those cannot react to time jumps.<br>+other complicated rules. This cannot easily be done with \f(CW\*(C`ev_timer\*(C'\fR<br>+watchers, as those cannot react to time jumps.<br> .PP<br> As with timers, the callback is guaranteed to be invoked only when the<br> point in time where it is supposed to trigger has passed. If multiple<br>@@ -2435,7 +2546,7 @@ ignored. Instead, each time the periodic watcher gets scheduled, the<br> reschedule callback will be called with the watcher as first, and the<br> current time as second argument.<br> .Sp<br>-\&\s-1NOTE: \s0\fIThis callback \s-1MUST NOT\s0 stop or destroy any periodic watcher, ever,<br>+\&\s-1NOTE:\s0 \fIThis callback \s-1MUST NOT\s0 stop or destroy any periodic watcher, ever,<br> or make \s-1ANY\s0 other event loop modifications whatsoever, unless explicitly<br> allowed by documentation here\fR.<br> .Sp<br>@@ -2459,14 +2570,34 @@ It must return the next time to trigger, based on the passed time value<br> will usually be called just before the callback will be triggered, but<br> might be called at other times, too.<br> .Sp<br>-\&\s-1NOTE: \s0\fIThis callback must always return a time that is higher than or<br>+\&\s-1NOTE:\s0 \fIThis callback must always return a time that is higher than or<br> equal to the passed \f(CI\*(C`now\*(C'\fI value\fR.<br> .Sp<br> This can be used to create very complex timers, such as a timer that<br>-triggers on \*(L"next midnight, local time\*(R". To do this, you would calculate the<br>-next midnight after \f(CW\*(C`now\*(C'\fR and return the timestamp value for this. How<br>-you do this is, again, up to you (but it is not trivial, which is the main<br>-reason I omitted it as an example).<br>+triggers on \*(L"next midnight, local time\*(R". To do this, you would calculate<br>+the next midnight after \f(CW\*(C`now\*(C'\fR and return the timestamp value for<br>+this. 
Here is a (completely untested, no error checking) example on how to<br>+do this:<br>+.Sp<br>+.Vb 1<br>+\& #include <time.h><br>+\&<br>+\& static ev_tstamp<br>+\& my_rescheduler (ev_periodic *w, ev_tstamp now)<br>+\& {<br>+\& time_t tnow = (time_t)now;<br>+\& struct tm tm;<br>+\& localtime_r (&tnow, &tm);<br>+\&<br>+\& tm.tm_sec = tm.tm_min = tm.tm_hour = 0; // midnight current day<br>+\& ++tm.tm_mday; // midnight next day<br>+\&<br>+\& return mktime (&tm);<br>+\& }<br>+.Ve<br>+.Sp<br>+Note: this code might run into trouble on days that have more than two<br>+midnights (beginning and end).<br> .RE<br> .RS 4<br> .RE<br>@@ -2594,7 +2725,7 @@ to install a fork handler with \f(CW\*(C`pthread_atfork\*(C'\fR that resets it.<br> catch fork calls done by libraries (such as the libc) as well.<br> .PP<br> In current versions of libev, the signal will not be blocked indefinitely<br>-unless you use the \f(CW\*(C`signalfd\*(C'\fR \s-1API \s0(\f(CW\*(C`EV_SIGNALFD\*(C'\fR). While this reduces<br>+unless you use the \f(CW\*(C`signalfd\*(C'\fR \s-1API\s0 (\f(CW\*(C`EV_SIGNALFD\*(C'\fR). While this reduces<br> the window of opportunity for problems, it will not go away, as libev<br> \&\fIhas\fR to modify the signal mask, at least temporarily.<br> .PP<br>@@ -3646,8 +3777,8 @@ notification, and the callback being invoked.<br> .SH "OTHER FUNCTIONS"<br> .IX Header "OTHER FUNCTIONS"<br> There are some other functions of possible interest. Described. Here. Now.<br>-.IP "ev_once (loop, int fd, int events, ev_tstamp timeout, callback)" 4<br>-.IX Item "ev_once (loop, int fd, int events, ev_tstamp timeout, callback)"<br>+.IP "ev_once (loop, int fd, int events, ev_tstamp timeout, callback, arg)" 4<br>+.IX Item "ev_once (loop, int fd, int events, ev_tstamp timeout, callback, arg)"<br> This function combines a simple timer and an I/O watcher, calls your<br> callback on whichever event happens first and automatically stops both<br> watchers. 
This is useful if you want to wait for a single event on an fd<br>@@ -4107,15 +4238,15 @@ libev sources can be compiled as \*(C+. Therefore, code that uses the C \s-1API\<br> will work fine.<br> .PP<br> Proper exception specifications might have to be added to callbacks passed<br>-to libev: exceptions may be thrown only from watcher callbacks, all<br>-other callbacks (allocator, syserr, loop acquire/release and periodic<br>-reschedule callbacks) must not throw exceptions, and might need a \f(CW\*(C`throw<br>-()\*(C'\fR specification. If you have code that needs to be compiled as both C<br>-and \*(C+ you can use the \f(CW\*(C`EV_THROW\*(C'\fR macro for this:<br>+to libev: exceptions may be thrown only from watcher callbacks, all other<br>+callbacks (allocator, syserr, loop acquire/release and periodic reschedule<br>+callbacks) must not throw exceptions, and might need a \f(CW\*(C`noexcept\*(C'\fR<br>+specification. If you have code that needs to be compiled as both C and<br>+\&\*(C+ you can use the \f(CW\*(C`EV_NOEXCEPT\*(C'\fR macro for this:<br> .PP<br> .Vb 6<br> \& static void<br>-\& fatal_error (const char *msg) EV_THROW<br>+\& fatal_error (const char *msg) EV_NOEXCEPT<br> \& {<br> \& perror (msg);<br> \& abort ();<br>@@ -4289,6 +4420,9 @@ method.<br> .Sp<br> For \f(CW\*(C`ev::embed\*(C'\fR watchers this method is called \f(CW\*(C`set_embed\*(C'\fR, to avoid<br> clashing with the \f(CW\*(C`set (loop)\*(C'\fR method.<br>+.Sp<br>+For \f(CW\*(C`ev::io\*(C'\fR watchers there is an additional \f(CW\*(C`set\*(C'\fR method that accepts a<br>+new event mask only, and internally calls \f(CW\*(C`ev_io_modify\*(C'\fR.<br> .IP "w\->start ()" 4<br> .IX Item "w->start ()"<br> Starts the watcher. Note that there is no \f(CW\*(C`loop\*(C'\fR argument, as the<br>@@ -4499,7 +4633,7 @@ configuration (no autoconf):<br> .PP<br> This will automatically include \fIev.h\fR, too, and should be done in a<br> single C source file only to provide the function implementations. 
To use<br>-it, do the same for \fIev.h\fR in all files wishing to use this \s-1API \s0(best<br>+it, do the same for \fIev.h\fR in all files wishing to use this \s-1API\s0 (best<br> done by writing a wrapper around \fIev.h\fR that you can include instead and<br> where you can put other configuration options):<br> .PP<br>@@ -4523,11 +4657,13 @@ in your include path (e.g. in libev/ when using \-Ilibev):<br> \&<br> \& ev_win32.c required on win32 platforms only<br> \&<br>-\& ev_select.c only when select backend is enabled (which is enabled by default)<br>-\& ev_poll.c only when poll backend is enabled (disabled by default)<br>-\& ev_epoll.c only when the epoll backend is enabled (disabled by default)<br>-\& ev_kqueue.c only when the kqueue backend is enabled (disabled by default)<br>-\& ev_port.c only when the solaris port backend is enabled (disabled by default)<br>+\& ev_select.c only when select backend is enabled<br>+\& ev_poll.c only when poll backend is enabled<br>+\& ev_epoll.c only when the epoll backend is enabled<br>+\& ev_linuxaio.c only when the linux aio backend is enabled<br>+\& ev_iouring.c only when the linux io_uring backend is enabled<br>+\& ev_kqueue.c only when the kqueue backend is enabled<br>+\& ev_port.c only when the solaris port backend is enabled<br> .Ve<br> .PP<br> \&\fIev.c\fR includes the backend files directly when enabled, so you only need<br>@@ -4582,7 +4718,7 @@ to redefine them before including \fIev.h\fR without breaking compatibility<br> to a compiled library. All other symbols change the \s-1ABI,\s0 which means all<br> users of libev and the libev code itself must be compiled with compatible<br> settings.<br>-.IP "\s-1EV_COMPAT3 \s0(h)" 4<br>+.IP "\s-1EV_COMPAT3\s0 (h)" 4<br> .IX Item "EV_COMPAT3 (h)"<br> Backwards compatibility is a major concern for libev. 
This is why this<br> release of libev comes with wrappers for the functions and symbols that<br>@@ -4597,7 +4733,7 @@ typedef in that case.<br> In some future version, the default for \f(CW\*(C`EV_COMPAT3\*(C'\fR will become \f(CW0\fR,<br> and in some even more future version the compatibility code will be<br> removed completely.<br>-.IP "\s-1EV_STANDALONE \s0(h)" 4<br>+.IP "\s-1EV_STANDALONE\s0 (h)" 4<br> .IX Item "EV_STANDALONE (h)"<br> Must always be \f(CW1\fR if you do not use autoconf configuration, which<br> keeps libev from including \fIconfig.h\fR, and it also defines dummy<br>@@ -4655,6 +4791,27 @@ available and will probe for kernel support at runtime. This will improve<br> \&\f(CW\*(C`ev_signal\*(C'\fR and \f(CW\*(C`ev_async\*(C'\fR performance and reduce resource consumption.<br> If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc<br> 2.7 or newer, otherwise disabled.<br>+.IP "\s-1EV_USE_SIGNALFD\s0" 4<br>+.IX Item "EV_USE_SIGNALFD"<br>+If defined to be \f(CW1\fR, then libev will assume that \f(CW\*(C`signalfd ()\*(C'\fR is<br>+available and will probe for kernel support at runtime. This enables<br>+the use of \s-1EVFLAG_SIGNALFD\s0 for faster and simpler signal handling. If<br>+undefined, it will be enabled if the headers indicate GNU/Linux + Glibc<br>+2.7 or newer, otherwise disabled.<br>+.IP "\s-1EV_USE_TIMERFD\s0" 4<br>+.IX Item "EV_USE_TIMERFD"<br>+If defined to be \f(CW1\fR, then libev will assume that \f(CW\*(C`timerfd ()\*(C'\fR is<br>+available and will probe for kernel support at runtime. This allows<br>+libev to detect time jumps accurately. 
If undefined, it will be enabled<br>+if the headers indicate GNU/Linux + Glibc 2.8 or newer and define<br>+\&\f(CW\*(C`TFD_TIMER_CANCEL_ON_SET\*(C'\fR, otherwise disabled.<br>+.IP "\s-1EV_USE_EVENTFD\s0" 4<br>+.IX Item "EV_USE_EVENTFD"<br>+If defined to be \f(CW1\fR, then libev will assume that \f(CW\*(C`eventfd ()\*(C'\fR is<br>+available and will probe for kernel support at runtime. This will improve<br>+\&\f(CW\*(C`ev_signal\*(C'\fR and \f(CW\*(C`ev_async\*(C'\fR performance and reduce resource consumption.<br>+If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc<br>+2.7 or newer, otherwise disabled.<br> .IP "\s-1EV_USE_SELECT\s0" 4<br> .IX Item "EV_USE_SELECT"<br> If undefined or defined to be \f(CW1\fR, libev will compile in support for the<br>@@ -4716,6 +4873,17 @@ If defined to be \f(CW1\fR, libev will compile in support for the Linux<br> otherwise another method will be used as fallback. This is the preferred<br> backend for GNU/Linux systems. If undefined, it will be enabled if the<br> headers indicate GNU/Linux + Glibc 2.4 or newer, otherwise disabled.<br>+.IP "\s-1EV_USE_LINUXAIO\s0" 4<br>+.IX Item "EV_USE_LINUXAIO"<br>+If defined to be \f(CW1\fR, libev will compile in support for the Linux aio<br>+backend (\f(CW\*(C`EV_USE_EPOLL\*(C'\fR must also be enabled). If undefined, it will be<br>+enabled on linux, otherwise disabled.<br>+.IP "\s-1EV_USE_IOURING\s0" 4<br>+.IX Item "EV_USE_IOURING"<br>+If defined to be \f(CW1\fR, libev will compile in support for the Linux<br>+io_uring backend (\f(CW\*(C`EV_USE_EPOLL\*(C'\fR must also be enabled). Due to it's<br>+current limitations it has to be requested explicitly. 
If undefined, it<br>+will be enabled on linux, otherwise disabled.<br> .IP "\s-1EV_USE_KQUEUE\s0" 4<br> .IX Item "EV_USE_KQUEUE"<br> If defined to be \f(CW1\fR, libev will compile in support for the \s-1BSD\s0 style<br>@@ -4765,21 +4933,21 @@ watchers.<br> .Sp<br> In the absence of this define, libev will use \f(CW\*(C`sig_atomic_t volatile\*(C'\fR<br> (from \fIsignal.h\fR), which is usually good enough on most platforms.<br>-.IP "\s-1EV_H \s0(h)" 4<br>+.IP "\s-1EV_H\s0 (h)" 4<br> .IX Item "EV_H (h)"<br> The name of the \fIev.h\fR header file used to include it. The default if<br> undefined is \f(CW"ev.h"\fR in \fIevent.h\fR, \fIev.c\fR and \fIev++.h\fR. This can be<br> used to virtually rename the \fIev.h\fR header file in case of conflicts.<br>-.IP "\s-1EV_CONFIG_H \s0(h)" 4<br>+.IP "\s-1EV_CONFIG_H\s0 (h)" 4<br> .IX Item "EV_CONFIG_H (h)"<br> If \f(CW\*(C`EV_STANDALONE\*(C'\fR isn't \f(CW1\fR, this variable can be used to override<br> \&\fIev.c\fR's idea of where to find the \fIconfig.h\fR file, similarly to<br> \&\f(CW\*(C`EV_H\*(C'\fR, above.<br>-.IP "\s-1EV_EVENT_H \s0(h)" 4<br>+.IP "\s-1EV_EVENT_H\s0 (h)" 4<br> .IX Item "EV_EVENT_H (h)"<br> Similarly to \f(CW\*(C`EV_H\*(C'\fR, this macro can be used to override \fIevent.c\fR's idea<br> of how the \fIevent.h\fR header can be found, the default is \f(CW"event.h"\fR.<br>-.IP "\s-1EV_PROTOTYPES \s0(h)" 4<br>+.IP "\s-1EV_PROTOTYPES\s0 (h)" 4<br> .IX Item "EV_PROTOTYPES (h)"<br> If defined to be \f(CW0\fR, then \fIev.h\fR will not define any function<br> prototypes, but still define all the structs and other symbols. This is<br>@@ -4982,6 +5150,9 @@ called once per loop, which can slow down libev. If set to \f(CW3\fR, then the<br> verification code will be called very frequently, which will slow down<br> libev considerably.<br> .Sp<br>+Verification errors are reported via C's \f(CW\*(C`assert\*(C'\fR mechanism, so if you<br>+disable that (e.g. 
by defining \f(CW\*(C`NDEBUG\*(C'\fR) then no errors will be reported.<br>+.Sp<br> The default is \f(CW1\fR, unless \f(CW\*(C`EV_FEATURES\*(C'\fR overrides it, in which case it<br> will be \f(CW0\fR.<br> .IP "\s-1EV_COMMON\s0" 4<br>@@ -4998,10 +5169,10 @@ For example, the perl \s-1EV\s0 module uses something like this:<br> \& SV *self; /* contains this struct */ \e<br> \& SV *cb_sv, *fh /* note no trailing ";" */<br> .Ve<br>-.IP "\s-1EV_CB_DECLARE \s0(type)" 4<br>+.IP "\s-1EV_CB_DECLARE\s0 (type)" 4<br> .IX Item "EV_CB_DECLARE (type)"<br> .PD 0<br>-.IP "\s-1EV_CB_INVOKE \s0(watcher, revents)" 4<br>+.IP "\s-1EV_CB_INVOKE\s0 (watcher, revents)" 4<br> .IX Item "EV_CB_INVOKE (watcher, revents)"<br> .IP "ev_set_cb (ev, cb)" 4<br> .IX Item "ev_set_cb (ev, cb)"<br>@@ -5014,7 +5185,7 @@ avoid the \f(CW\*(C`struct ev_loop *\*(C'\fR as first argument in all cases, or<br> method calls instead of plain function calls in \*(C+.<br> .SS "\s-1EXPORTED API SYMBOLS\s0"<br> .IX Subsection "EXPORTED API SYMBOLS"<br>-If you need to re-export the \s-1API \s0(e.g. via a \s-1DLL\s0) and you need a list of<br>+If you need to re-export the \s-1API\s0 (e.g. 
via a \s-1DLL\s0) and you need a list of<br> exported symbols, you can use the provided \fISymbol.*\fR files which list<br> all public symbols, one per line:<br> .PP<br>@@ -5256,7 +5427,7 @@ a loop.<br> .IX Subsection "select is buggy"<br> .PP<br> All that's left is \f(CW\*(C`select\*(C'\fR, and of course Apple found a way to fuck this<br>-one up as well: On \s-1OS/X, \s0\f(CW\*(C`select\*(C'\fR actively limits the number of file<br>+one up as well: On \s-1OS/X,\s0 \f(CW\*(C`select\*(C'\fR actively limits the number of file<br> descriptors you can pass in to 1024 \- your program suddenly crashes when<br> you use more.<br> .PP<br>diff --git a/third_party/libev/ev.c b/third_party/libev/ev.c<br>index 6a2648591..9a4d19905 100644<br>--- a/third_party/libev/ev.c<br>+++ b/third_party/libev/ev.c<br>@@ -1,7 +1,7 @@<br> /*<br> * libev event processing core, watcher management<br> *<br>- * Copyright (c) 2007,2008,2009,2010,2011,2012,2013 Marc Alexander Lehmann <libev@schmorp.de><br>+ * Copyright (c) 2007-2019 Marc Alexander Lehmann <libev@schmorp.de><br> * All rights reserved.<br> *<br> * Redistribution and use in source and binary forms, with or without modifica-<br>@@ -117,6 +117,24 @@<br> # define EV_USE_EPOLL 0<br> # endif<br> <br>+# if HAVE_LINUX_AIO_ABI_H<br>+# ifndef EV_USE_LINUXAIO<br>+# define EV_USE_LINUXAIO 0 /* was: EV_FEATURE_BACKENDS, always off by default */<br>+# endif<br>+# else<br>+# undef EV_USE_LINUXAIO<br>+# define EV_USE_LINUXAIO 0<br>+# endif<br>+ <br>+# if HAVE_LINUX_FS_H && HAVE_SYS_TIMERFD_H && HAVE_KERNEL_RWF_T<br>+# ifndef EV_USE_IOURING<br>+# define EV_USE_IOURING EV_FEATURE_BACKENDS<br>+# endif<br>+# else<br>+# undef EV_USE_IOURING<br>+# define EV_USE_IOURING 0<br>+# endif<br>+ <br> # if HAVE_KQUEUE && HAVE_SYS_EVENT_H<br> # ifndef EV_USE_KQUEUE<br> # define EV_USE_KQUEUE EV_FEATURE_BACKENDS<br>@@ -161,9 +179,28 @@<br> # undef EV_USE_EVENTFD<br> # define EV_USE_EVENTFD 0<br> # endif<br>- <br>+<br>+# if HAVE_SYS_TIMERFD_H<br>+# ifndef 
EV_USE_TIMERFD<br>+# define EV_USE_TIMERFD EV_FEATURE_OS<br>+# endif<br>+# else<br>+# undef EV_USE_TIMERFD<br>+# define EV_USE_TIMERFD 0<br>+# endif<br>+<br> #endif<br> <br>+/* OS X, in its infinite idiocy, actually HARDCODES<br>+ * a limit of 1024 into their select. Where people have brains,<br>+ * OS X engineers apparently have a vacuum. Or maybe they were<br>+ * ordered to have a vacuum, or they do anything for money.<br>+ * This might help. Or not.<br>+ * Note that this must be defined early, as other include files<br>+ * will rely on this define as well.<br>+ */<br>+#define _DARWIN_UNLIMITED_SELECT 1<br>+<br> #include <stdlib.h><br> #include <string.h><br> #include <fcntl.h><br>@@ -211,14 +248,6 @@<br> # undef EV_AVOID_STDIO<br> #endif<br> <br>-/* OS X, in its infinite idiocy, actually HARDCODES<br>- * a limit of 1024 into their select. Where people have brains,<br>- * OS X engineers apparently have a vacuum. Or maybe they were<br>- * ordered to have a vacuum, or they do anything for money.<br>- * This might help. 
Or not.<br>- */<br>-#define _DARWIN_UNLIMITED_SELECT 1<br>-<br> /* this block tries to deduce configuration from header-defined symbols and defaults */<br> <br> /* try to deduce the maximum number of signals on this platform */<br>@@ -315,6 +344,22 @@<br> # define EV_USE_PORT 0<br> #endif<br> <br>+#ifndef EV_USE_LINUXAIO<br>+# if __linux /* libev currently assumes linux/aio_abi.h is always available on linux */<br>+# define EV_USE_LINUXAIO 0 /* was: 1, always off by default */<br>+# else<br>+# define EV_USE_LINUXAIO 0<br>+# endif<br>+#endif<br>+<br>+#ifndef EV_USE_IOURING<br>+# if __linux /* later checks might disable again */<br>+# define EV_USE_IOURING 1<br>+# else<br>+# define EV_USE_IOURING 0<br>+# endif<br>+#endif<br>+<br> #ifndef EV_USE_INOTIFY<br> # if __linux && (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 4))<br> # define EV_USE_INOTIFY EV_FEATURE_OS<br>@@ -347,6 +392,14 @@<br> # endif<br> #endif<br> <br>+#ifndef EV_USE_TIMERFD<br>+# if __linux && (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 8))<br>+# define EV_USE_TIMERFD EV_FEATURE_OS<br>+# else<br>+# define EV_USE_TIMERFD 0<br>+# endif<br>+#endif<br>+<br> #if 0 /* debugging */<br> # define EV_VERIFY 3<br> # define EV_USE_4HEAP 1<br>@@ -365,7 +418,7 @@<br> # define EV_HEAP_CACHE_AT EV_FEATURE_DATA<br> #endif<br> <br>-#ifdef ANDROID<br>+#ifdef __ANDROID__<br> /* supposedly, android doesn't typedef fd_mask */<br> # undef EV_USE_SELECT<br> # define EV_USE_SELECT 0<br>@@ -389,6 +442,7 @@<br> # define clock_gettime(id, ts) syscall (SYS_clock_gettime, (id), (ts))<br> # undef EV_USE_MONOTONIC<br> # define EV_USE_MONOTONIC 1<br>+# define EV_NEED_SYSCALL 1<br> # else<br> # undef EV_USE_CLOCK_SYSCALL<br> # define EV_USE_CLOCK_SYSCALL 0<br>@@ -412,6 +466,14 @@<br> # define EV_USE_INOTIFY 0<br> #endif<br> <br>+#if __linux && EV_USE_IOURING<br>+# include <linux/version.h><br>+# if LINUX_VERSION_CODE < KERNEL_VERSION(4,14,0)<br>+# undef EV_USE_IOURING<br>+# define EV_USE_IOURING 0<br>+# 
endif<br>+#endif<br>+<br> #if !EV_USE_NANOSLEEP<br> /* hp-ux has it in sys/time.h, which we unconditionally include above */<br> # if !defined _WIN32 && !defined __hpux<br>@@ -419,6 +481,31 @@<br> # endif<br> #endif<br> <br>+#if EV_USE_LINUXAIO<br>+# include <sys/syscall.h><br>+# if SYS_io_getevents && EV_USE_EPOLL /* linuxaio backend requires epoll backend */<br>+# define EV_NEED_SYSCALL 1<br>+# else<br>+# undef EV_USE_LINUXAIO<br>+# define EV_USE_LINUXAIO 0<br>+# endif<br>+#endif<br>+<br>+#if EV_USE_IOURING<br>+# include <sys/syscall.h><br>+# if !SYS_io_uring_setup && __linux && !__alpha<br>+# define SYS_io_uring_setup 425<br>+# define SYS_io_uring_enter 426<br>+# define SYS_io_uring_wregister 427<br>+# endif<br>+# if SYS_io_uring_setup && EV_USE_EPOLL /* iouring backend requires epoll backend */<br>+# define EV_NEED_SYSCALL 1<br>+# else<br>+# undef EV_USE_IOURING<br>+# define EV_USE_IOURING 0<br>+# endif<br>+#endif<br>+<br> #if EV_USE_INOTIFY<br> # include <sys/statfs.h><br> # include <sys/inotify.h><br>@@ -430,7 +517,7 @@<br> #endif<br> <br> #if EV_USE_EVENTFD<br>-/* our minimum requirement is glibc 2.7 which has the stub, but not the header */<br>+/* our minimum requirement is glibc 2.7 which has the stub, but not the full header */<br> # include <stdint.h><br> # ifndef EFD_NONBLOCK<br> # define EFD_NONBLOCK O_NONBLOCK<br>@@ -446,7 +533,7 @@ EV_CPP(extern "C") int (eventfd) (unsigned int initval, int flags);<br> #endif<br> <br> #if EV_USE_SIGNALFD<br>-/* our minimum requirement is glibc 2.7 which has the stub, but not the header */<br>+/* our minimum requirement is glibc 2.7 which has the stub, but not the full header */<br> # include <stdint.h><br> # ifndef SFD_NONBLOCK<br> # define SFD_NONBLOCK O_NONBLOCK<br>@@ -458,7 +545,7 @@ EV_CPP(extern "C") int (eventfd) (unsigned int initval, int flags);<br> # define SFD_CLOEXEC 02000000<br> # endif<br> # endif<br>-EV_CPP (extern "C") int signalfd (int fd, const sigset_t *mask, int flags);<br>+EV_CPP (extern "C") int 
(signalfd) (int fd, const sigset_t *mask, int flags);<br> <br> struct signalfd_siginfo<br> {<br>@@ -467,7 +554,17 @@ struct signalfd_siginfo<br> };<br> #endif<br> <br>-/**/<br>+/* for timerfd, libev core requires TFD_TIMER_CANCEL_ON_SET &c */<br>+#if EV_USE_TIMERFD<br>+# include <sys/timerfd.h><br>+/* timerfd is only used for periodics */<br>+# if !(defined (TFD_TIMER_CANCEL_ON_SET) && defined (TFD_CLOEXEC) && defined (TFD_NONBLOCK)) || !EV_PERIODIC_ENABLE<br>+# undef EV_USE_TIMERFD<br>+# define EV_USE_TIMERFD 0<br>+# endif<br>+#endif<br>+<br>+/*****************************************************************************/<br> <br> #if EV_VERIFY >= 3<br> # define EV_FREQUENT_CHECK ev_verify (EV_A)<br>@@ -482,18 +579,34 @@ struct signalfd_siginfo<br> #define MIN_INTERVAL 0.0001220703125 /* 1/2**13, good till 4000 */<br> /*#define MIN_INTERVAL 0.00000095367431640625 /* 1/2**20, good till 2200 */<br> <br>-#define MIN_TIMEJUMP 1. /* minimum timejump that gets detected (if monotonic clock available) */<br>-#define MAX_BLOCKTIME 59.743 /* never wait longer than this time (to detect time jumps) */<br>+#define MIN_TIMEJUMP 1. /* minimum timejump that gets detected (if monotonic clock available) */<br>+#define MAX_BLOCKTIME 59.743 /* never wait longer than this time (to detect time jumps) */<br>+#define MAX_BLOCKTIME2 1500001.07 /* same, but when timerfd is used to detect jumps, also safe delay to not overflow */<br> <br>-#define EV_TV_SET(tv,t) do { tv.tv_sec = (long)t; tv.tv_usec = (long)((t - tv.tv_sec) * 1e6); } while (0)<br>-#define EV_TS_SET(ts,t) do { ts.tv_sec = (long)t; ts.tv_nsec = (long)((t - ts.tv_sec) * 1e9); } while (0)<br>+/* find a portable timestamp that is "always" in the future but fits into time_t.<br>+ * this is quite hard, and we are mostly guessing - we handle 32 bit signed/unsigned time_t,<br>+ * and sizes larger than 32 bit, and maybe the unlikely floating point time_t */<br>+#define EV_TSTAMP_HUGE \<br>+ (sizeof (time_t) >= 8 ? 10000000000000. 
\<br>+ : 0 < (time_t)4294967295 ? 4294967295. \<br>+ : 2147483647.) \<br>+<br>+#ifndef EV_TS_CONST<br>+# define EV_TS_CONST(nv) nv<br>+# define EV_TS_TO_MSEC(a) a * 1e3 + 0.9999<br>+# define EV_TS_FROM_USEC(us) us * 1e-6<br>+# define EV_TV_SET(tv,t) do { tv.tv_sec = (long)t; tv.tv_usec = (long)((t - tv.tv_sec) * 1e6); } while (0)<br>+# define EV_TS_SET(ts,t) do { ts.tv_sec = (long)t; ts.tv_nsec = (long)((t - ts.tv_sec) * 1e9); } while (0)<br>+# define EV_TV_GET(tv) ((tv).tv_sec + (tv).tv_usec * 1e-6)<br>+# define EV_TS_GET(ts) ((ts).tv_sec + (ts).tv_nsec * 1e-9)<br>+#endif<br> <br> /* the following is ecb.h embedded into libev - use update_ev_c to update from an external copy */<br> /* ECB.H BEGIN */<br> /*<br> * libecb - http://software.schmorp.de/pkg/libecb<br> *<br>- * Copyright (©) 2009-2015 Marc Alexander Lehmann <libecb@schmorp.de><br>+ * Copyright (©) 2009-2015,2018-2020 Marc Alexander Lehmann <libecb@schmorp.de><br> * Copyright (©) 2011 Emanuele Giaquinta<br> * All rights reserved.<br> *<br>@@ -534,15 +647,23 @@ struct signalfd_siginfo<br> #define ECB_H<br> <br> /* 16 bits major, 16 bits minor */<br>-#define ECB_VERSION 0x00010005<br>+#define ECB_VERSION 0x00010008<br> <br>-#ifdef _WIN32<br>+#include <string.h> /* for memcpy */<br>+<br>+#if defined (_WIN32) && !defined (__MINGW32__)<br> typedef signed char int8_t;<br> typedef unsigned char uint8_t;<br>+ typedef signed char int_fast8_t;<br>+ typedef unsigned char uint_fast8_t;<br> typedef signed short int16_t;<br> typedef unsigned short uint16_t;<br>+ typedef signed int int_fast16_t;<br>+ typedef unsigned int uint_fast16_t;<br> typedef signed int int32_t;<br> typedef unsigned int uint32_t;<br>+ typedef signed int int_fast32_t;<br>+ typedef unsigned int uint_fast32_t;<br> #if __GNUC__<br> typedef signed long long int64_t;<br> typedef unsigned long long uint64_t;<br>@@ -550,6 +671,8 @@ struct signalfd_siginfo<br> typedef signed __int64 int64_t;<br> typedef unsigned __int64 uint64_t;<br> #endif<br>+ typedef 
int64_t int_fast64_t;<br>+ typedef uint64_t uint_fast64_t;<br> #ifdef _WIN64<br> #define ECB_PTRSIZE 8<br> typedef uint64_t uintptr_t;<br>@@ -571,6 +694,14 @@ struct signalfd_siginfo<br> #define ECB_GCC_AMD64 (__amd64 || __amd64__ || __x86_64 || __x86_64__)<br> #define ECB_MSVC_AMD64 (_M_AMD64 || _M_X64)<br> <br>+#ifndef ECB_OPTIMIZE_SIZE<br>+ #if __OPTIMIZE_SIZE__<br>+ #define ECB_OPTIMIZE_SIZE 1<br>+ #else<br>+ #define ECB_OPTIMIZE_SIZE 0<br>+ #endif<br>+#endif<br>+<br> /* work around x32 idiocy by defining proper macros */<br> #if ECB_GCC_AMD64 || ECB_MSVC_AMD64<br> #if _ILP32<br>@@ -609,6 +740,8 @@ struct signalfd_siginfo<br> <br> #define ECB_CPP (__cplusplus+0)<br> #define ECB_CPP11 (__cplusplus >= 201103L)<br>+#define ECB_CPP14 (__cplusplus >= 201402L)<br>+#define ECB_CPP17 (__cplusplus >= 201703L)<br> <br> #if ECB_CPP<br> #define ECB_C 0<br>@@ -620,6 +753,7 @@ struct signalfd_siginfo<br> <br> #define ECB_C99 (ECB_STDC_VERSION >= 199901L)<br> #define ECB_C11 (ECB_STDC_VERSION >= 201112L)<br>+#define ECB_C17 (ECB_STDC_VERSION >= 201710L)<br> <br> #if ECB_CPP<br> #define ECB_EXTERN_C extern "C"<br>@@ -655,14 +789,15 @@ struct signalfd_siginfo<br> <br> #ifndef ECB_MEMORY_FENCE<br> #if ECB_GCC_VERSION(2,5) || defined __INTEL_COMPILER || (__llvm__ && __GNUC__) || __SUNPRO_C >= 0x5110 || __SUNPRO_CC >= 0x5110<br>+ #define ECB_MEMORY_FENCE_RELAXED __asm__ __volatile__ ("" : : : "memory")<br> #if __i386 || __i386__<br> #define ECB_MEMORY_FENCE __asm__ __volatile__ ("lock; orb $0, -1(%%esp)" : : : "memory")<br> #define ECB_MEMORY_FENCE_ACQUIRE __asm__ __volatile__ ("" : : : "memory")<br>- #define ECB_MEMORY_FENCE_RELEASE __asm__ __volatile__ ("")<br>+ #define ECB_MEMORY_FENCE_RELEASE __asm__ __volatile__ ("" : : : "memory")<br> #elif ECB_GCC_AMD64<br> #define ECB_MEMORY_FENCE __asm__ __volatile__ ("mfence" : : : "memory")<br> #define ECB_MEMORY_FENCE_ACQUIRE __asm__ __volatile__ ("" : : : "memory")<br>- #define ECB_MEMORY_FENCE_RELEASE __asm__ __volatile__ ("")<br>+ 
#define ECB_MEMORY_FENCE_RELEASE __asm__ __volatile__ ("" : : : "memory")<br> #elif __powerpc__ || __ppc__ || __powerpc64__ || __ppc64__<br> #define ECB_MEMORY_FENCE __asm__ __volatile__ ("sync" : : : "memory")<br> #elif defined __ARM_ARCH_2__ \<br>@@ -714,12 +849,14 @@ struct signalfd_siginfo<br> #define ECB_MEMORY_FENCE __atomic_thread_fence (__ATOMIC_SEQ_CST)<br> #define ECB_MEMORY_FENCE_ACQUIRE __atomic_thread_fence (__ATOMIC_ACQUIRE)<br> #define ECB_MEMORY_FENCE_RELEASE __atomic_thread_fence (__ATOMIC_RELEASE)<br>+ #define ECB_MEMORY_FENCE_RELAXED __atomic_thread_fence (__ATOMIC_RELAXED)<br> <br> #elif ECB_CLANG_EXTENSION(c_atomic)<br> /* see comment below (stdatomic.h) about the C11 memory model. */<br> #define ECB_MEMORY_FENCE __c11_atomic_thread_fence (__ATOMIC_SEQ_CST)<br> #define ECB_MEMORY_FENCE_ACQUIRE __c11_atomic_thread_fence (__ATOMIC_ACQUIRE)<br> #define ECB_MEMORY_FENCE_RELEASE __c11_atomic_thread_fence (__ATOMIC_RELEASE)<br>+ #define ECB_MEMORY_FENCE_RELAXED __c11_atomic_thread_fence (__ATOMIC_RELAXED)<br> <br> #elif ECB_GCC_VERSION(4,4) || defined __INTEL_COMPILER || defined __clang__<br> #define ECB_MEMORY_FENCE __sync_synchronize ()<br>@@ -739,9 +876,10 @@ struct signalfd_siginfo<br> #define ECB_MEMORY_FENCE MemoryBarrier () /* actually just xchg on x86... 
scary */<br> #elif __SUNPRO_C >= 0x5110 || __SUNPRO_CC >= 0x5110<br> #include <mbarrier.h><br>- #define ECB_MEMORY_FENCE __machine_rw_barrier ()<br>- #define ECB_MEMORY_FENCE_ACQUIRE __machine_r_barrier ()<br>- #define ECB_MEMORY_FENCE_RELEASE __machine_w_barrier ()<br>+ #define ECB_MEMORY_FENCE __machine_rw_barrier ()<br>+ #define ECB_MEMORY_FENCE_ACQUIRE __machine_acq_barrier ()<br>+ #define ECB_MEMORY_FENCE_RELEASE __machine_rel_barrier ()<br>+ #define ECB_MEMORY_FENCE_RELAXED __compiler_barrier ()<br> #elif __xlC__<br> #define ECB_MEMORY_FENCE __sync ()<br> #endif<br>@@ -752,15 +890,9 @@ struct signalfd_siginfo<br> /* we assume that these memory fences work on all variables/all memory accesses, */<br> /* not just C11 atomics and atomic accesses */<br> #include <stdatomic.h><br>- /* Unfortunately, neither gcc 4.7 nor clang 3.1 generate any instructions for */<br>- /* any fence other than seq_cst, which isn't very efficient for us. */<br>- /* Why that is, we don't know - either the C11 memory model is quite useless */<br>- /* for most usages, or gcc and clang have a bug */<br>- /* I *currently* lean towards the latter, and inefficiently implement */<br>- /* all three of ecb's fences as a seq_cst fence */<br>- /* Update, gcc-4.8 generates mfence for all c++ fences, but nothing */<br>- /* for all __atomic_thread_fence's except seq_cst */<br> #define ECB_MEMORY_FENCE atomic_thread_fence (memory_order_seq_cst)<br>+ #define ECB_MEMORY_FENCE_ACQUIRE atomic_thread_fence (memory_order_acquire)<br>+ #define ECB_MEMORY_FENCE_RELEASE atomic_thread_fence (memory_order_release)<br> #endif<br> #endif<br> <br>@@ -790,6 +922,10 @@ struct signalfd_siginfo<br> #define ECB_MEMORY_FENCE_RELEASE ECB_MEMORY_FENCE<br> #endif<br> <br>+#if !defined ECB_MEMORY_FENCE_RELAXED && defined ECB_MEMORY_FENCE<br>+ #define ECB_MEMORY_FENCE_RELAXED ECB_MEMORY_FENCE /* very heavy-handed */<br>+#endif<br>+<br> /*****************************************************************************/<br> <br> 
#if ECB_CPP<br>@@ -1081,6 +1217,44 @@ ecb_inline ecb_const uint32_t ecb_rotr32 (uint32_t x, unsigned int count) { retu<br> ecb_inline ecb_const uint64_t ecb_rotl64 (uint64_t x, unsigned int count) { return (x >> (64 - count)) | (x << count); }<br> ecb_inline ecb_const uint64_t ecb_rotr64 (uint64_t x, unsigned int count) { return (x << (64 - count)) | (x >> count); }<br> <br>+#if ECB_CPP<br>+<br>+inline uint8_t ecb_ctz (uint8_t v) { return ecb_ctz32 (v); }<br>+inline uint16_t ecb_ctz (uint16_t v) { return ecb_ctz32 (v); }<br>+inline uint32_t ecb_ctz (uint32_t v) { return ecb_ctz32 (v); }<br>+inline uint64_t ecb_ctz (uint64_t v) { return ecb_ctz64 (v); }<br>+<br>+inline bool ecb_is_pot (uint8_t v) { return ecb_is_pot32 (v); }<br>+inline bool ecb_is_pot (uint16_t v) { return ecb_is_pot32 (v); }<br>+inline bool ecb_is_pot (uint32_t v) { return ecb_is_pot32 (v); }<br>+inline bool ecb_is_pot (uint64_t v) { return ecb_is_pot64 (v); }<br>+<br>+inline int ecb_ld (uint8_t v) { return ecb_ld32 (v); }<br>+inline int ecb_ld (uint16_t v) { return ecb_ld32 (v); }<br>+inline int ecb_ld (uint32_t v) { return ecb_ld32 (v); }<br>+inline int ecb_ld (uint64_t v) { return ecb_ld64 (v); }<br>+<br>+inline int ecb_popcount (uint8_t v) { return ecb_popcount32 (v); }<br>+inline int ecb_popcount (uint16_t v) { return ecb_popcount32 (v); }<br>+inline int ecb_popcount (uint32_t v) { return ecb_popcount32 (v); }<br>+inline int ecb_popcount (uint64_t v) { return ecb_popcount64 (v); }<br>+<br>+inline uint8_t ecb_bitrev (uint8_t v) { return ecb_bitrev8 (v); }<br>+inline uint16_t ecb_bitrev (uint16_t v) { return ecb_bitrev16 (v); }<br>+inline uint32_t ecb_bitrev (uint32_t v) { return ecb_bitrev32 (v); }<br>+<br>+inline uint8_t ecb_rotl (uint8_t v, unsigned int count) { return ecb_rotl8 (v, count); }<br>+inline uint16_t ecb_rotl (uint16_t v, unsigned int count) { return ecb_rotl16 (v, count); }<br>+inline uint32_t ecb_rotl (uint32_t v, unsigned int count) { return ecb_rotl32 (v, count); }<br>+inline 
uint64_t ecb_rotl (uint64_t v, unsigned int count) { return ecb_rotl64 (v, count); }<br>+<br>+inline uint8_t ecb_rotr (uint8_t v, unsigned int count) { return ecb_rotr8 (v, count); }<br>+inline uint16_t ecb_rotr (uint16_t v, unsigned int count) { return ecb_rotr16 (v, count); }<br>+inline uint32_t ecb_rotr (uint32_t v, unsigned int count) { return ecb_rotr32 (v, count); }<br>+inline uint64_t ecb_rotr (uint64_t v, unsigned int count) { return ecb_rotr64 (v, count); }<br>+<br>+#endif<br>+<br> #if ECB_GCC_VERSION(4,3) || (ECB_CLANG_BUILTIN(__builtin_bswap32) && ECB_CLANG_BUILTIN(__builtin_bswap64))<br> #if ECB_GCC_VERSION(4,8) || ECB_CLANG_BUILTIN(__builtin_bswap16)<br> #define ecb_bswap16(x) __builtin_bswap16 (x)<br>@@ -1161,6 +1335,78 @@ ecb_inline ecb_const ecb_bool ecb_big_endian (void) { return ecb_byteorder_he<br> ecb_inline ecb_const ecb_bool ecb_little_endian (void);<br> ecb_inline ecb_const ecb_bool ecb_little_endian (void) { return ecb_byteorder_helper () == 0x44332211; }<br> <br>+/*****************************************************************************/<br>+/* unaligned load/store */<br>+<br>+ecb_inline uint_fast16_t ecb_be_u16_to_host (uint_fast16_t v) { return ecb_little_endian () ? ecb_bswap16 (v) : v; }<br>+ecb_inline uint_fast32_t ecb_be_u32_to_host (uint_fast32_t v) { return ecb_little_endian () ? ecb_bswap32 (v) : v; }<br>+ecb_inline uint_fast64_t ecb_be_u64_to_host (uint_fast64_t v) { return ecb_little_endian () ? ecb_bswap64 (v) : v; }<br>+<br>+ecb_inline uint_fast16_t ecb_le_u16_to_host (uint_fast16_t v) { return ecb_big_endian () ? ecb_bswap16 (v) : v; }<br>+ecb_inline uint_fast32_t ecb_le_u32_to_host (uint_fast32_t v) { return ecb_big_endian () ? ecb_bswap32 (v) : v; }<br>+ecb_inline uint_fast64_t ecb_le_u64_to_host (uint_fast64_t v) { return ecb_big_endian () ? 
ecb_bswap64 (v) : v; }<br>+<br>+ecb_inline uint_fast16_t ecb_peek_u16_u (const void *ptr) { uint16_t v; memcpy (&v, ptr, sizeof (v)); return v; }<br>+ecb_inline uint_fast32_t ecb_peek_u32_u (const void *ptr) { uint32_t v; memcpy (&v, ptr, sizeof (v)); return v; }<br>+ecb_inline uint_fast64_t ecb_peek_u64_u (const void *ptr) { uint64_t v; memcpy (&v, ptr, sizeof (v)); return v; }<br>+<br>+ecb_inline uint_fast16_t ecb_peek_be_u16_u (const void *ptr) { return ecb_be_u16_to_host (ecb_peek_u16_u (ptr)); }<br>+ecb_inline uint_fast32_t ecb_peek_be_u32_u (const void *ptr) { return ecb_be_u32_to_host (ecb_peek_u32_u (ptr)); }<br>+ecb_inline uint_fast64_t ecb_peek_be_u64_u (const void *ptr) { return ecb_be_u64_to_host (ecb_peek_u64_u (ptr)); }<br>+<br>+ecb_inline uint_fast16_t ecb_peek_le_u16_u (const void *ptr) { return ecb_le_u16_to_host (ecb_peek_u16_u (ptr)); }<br>+ecb_inline uint_fast32_t ecb_peek_le_u32_u (const void *ptr) { return ecb_le_u32_to_host (ecb_peek_u32_u (ptr)); }<br>+ecb_inline uint_fast64_t ecb_peek_le_u64_u (const void *ptr) { return ecb_le_u64_to_host (ecb_peek_u64_u (ptr)); }<br>+<br>+ecb_inline uint_fast16_t ecb_host_to_be_u16 (uint_fast16_t v) { return ecb_little_endian () ? ecb_bswap16 (v) : v; }<br>+ecb_inline uint_fast32_t ecb_host_to_be_u32 (uint_fast32_t v) { return ecb_little_endian () ? ecb_bswap32 (v) : v; }<br>+ecb_inline uint_fast64_t ecb_host_to_be_u64 (uint_fast64_t v) { return ecb_little_endian () ? ecb_bswap64 (v) : v; }<br>+<br>+ecb_inline uint_fast16_t ecb_host_to_le_u16 (uint_fast16_t v) { return ecb_big_endian () ? ecb_bswap16 (v) : v; }<br>+ecb_inline uint_fast32_t ecb_host_to_le_u32 (uint_fast32_t v) { return ecb_big_endian () ? ecb_bswap32 (v) : v; }<br>+ecb_inline uint_fast64_t ecb_host_to_le_u64 (uint_fast64_t v) { return ecb_big_endian () ? 
ecb_bswap64 (v) : v; }<br>+<br>+ecb_inline void ecb_poke_u16_u (void *ptr, uint16_t v) { memcpy (ptr, &v, sizeof (v)); }<br>+ecb_inline void ecb_poke_u32_u (void *ptr, uint32_t v) { memcpy (ptr, &v, sizeof (v)); }<br>+ecb_inline void ecb_poke_u64_u (void *ptr, uint64_t v) { memcpy (ptr, &v, sizeof (v)); }<br>+<br>+ecb_inline void ecb_poke_be_u16_u (void *ptr, uint_fast16_t v) { ecb_poke_u16_u (ptr, ecb_host_to_be_u16 (v)); }<br>+ecb_inline void ecb_poke_be_u32_u (void *ptr, uint_fast32_t v) { ecb_poke_u32_u (ptr, ecb_host_to_be_u32 (v)); }<br>+ecb_inline void ecb_poke_be_u64_u (void *ptr, uint_fast64_t v) { ecb_poke_u64_u (ptr, ecb_host_to_be_u64 (v)); }<br>+ <br>+ecb_inline void ecb_poke_le_u16_u (void *ptr, uint_fast16_t v) { ecb_poke_u16_u (ptr, ecb_host_to_le_u16 (v)); }<br>+ecb_inline void ecb_poke_le_u32_u (void *ptr, uint_fast32_t v) { ecb_poke_u32_u (ptr, ecb_host_to_le_u32 (v)); }<br>+ecb_inline void ecb_poke_le_u64_u (void *ptr, uint_fast64_t v) { ecb_poke_u64_u (ptr, ecb_host_to_le_u64 (v)); }<br>+<br>+#if ECB_CPP<br>+<br>+inline uint8_t ecb_bswap (uint8_t v) { return v; }<br>+inline uint16_t ecb_bswap (uint16_t v) { return ecb_bswap16 (v); }<br>+inline uint32_t ecb_bswap (uint32_t v) { return ecb_bswap32 (v); }<br>+inline uint64_t ecb_bswap (uint64_t v) { return ecb_bswap64 (v); }<br>+<br>+template<typename T> inline T ecb_be_to_host (T v) { return ecb_little_endian () ? ecb_bswap (v) : v; }<br>+template<typename T> inline T ecb_le_to_host (T v) { return ecb_big_endian () ? 
ecb_bswap (v) : v; }<br>+template<typename T> inline T ecb_peek (const void *ptr) { return *(const T *)ptr; }<br>+template<typename T> inline T ecb_peek_be (const void *ptr) { return ecb_be_to_host (ecb_peek <T> (ptr)); }<br>+template<typename T> inline T ecb_peek_le (const void *ptr) { return ecb_le_to_host (ecb_peek <T> (ptr)); }<br>+template<typename T> inline T ecb_peek_u (const void *ptr) { T v; memcpy (&v, ptr, sizeof (v)); return v; }<br>+template<typename T> inline T ecb_peek_be_u (const void *ptr) { return ecb_be_to_host (ecb_peek_u<T> (ptr)); }<br>+template<typename T> inline T ecb_peek_le_u (const void *ptr) { return ecb_le_to_host (ecb_peek_u<T> (ptr)); }<br>+<br>+template<typename T> inline T ecb_host_to_be (T v) { return ecb_little_endian () ? ecb_bswap (v) : v; }<br>+template<typename T> inline T ecb_host_to_le (T v) { return ecb_big_endian () ? ecb_bswap (v) : v; }<br>+template<typename T> inline void ecb_poke (void *ptr, T v) { *(T *)ptr = v; }<br>+template<typename T> inline void ecb_poke_be (void *ptr, T v) { return ecb_poke <T> (ptr, ecb_host_to_be (v)); }<br>+template<typename T> inline void ecb_poke_le (void *ptr, T v) { return ecb_poke <T> (ptr, ecb_host_to_le (v)); }<br>+template<typename T> inline void ecb_poke_u (void *ptr, T v) { memcpy (ptr, &v, sizeof (v)); }<br>+template<typename T> inline void ecb_poke_be_u (void *ptr, T v) { return ecb_poke_u<T> (ptr, ecb_host_to_be (v)); }<br>+template<typename T> inline void ecb_poke_le_u (void *ptr, T v) { return ecb_poke_u<T> (ptr, ecb_host_to_le (v)); }<br>+<br>+#endif<br>+<br>+/*****************************************************************************/<br>+<br> #if ECB_GCC_VERSION(3,0) || ECB_C99<br> #define ecb_mod(m,n) ((m) % (n) + ((m) % (n) < 0 ? 
(n) : 0))<br> #else<br>@@ -1194,6 +1440,8 @@ ecb_inline ecb_const ecb_bool ecb_little_endian (void) { return ecb_byteorder_he<br> #define ecb_array_length(name) (sizeof (name) / sizeof (name [0]))<br> #endif<br> <br>+/*****************************************************************************/<br>+<br> ecb_function_ ecb_const uint32_t ecb_binary16_to_binary32 (uint32_t x);<br> ecb_function_ ecb_const uint32_t<br> ecb_binary16_to_binary32 (uint32_t x)<br>@@ -1311,7 +1559,6 @@ ecb_binary32_to_binary16 (uint32_t x)<br> || (defined __arm__ && (defined __ARM_EABI__ || defined __EABI__ || defined __VFP_FP__ || defined _WIN32_WCE || defined __ANDROID__)) \<br> || defined __aarch64__<br> #define ECB_STDFP 1<br>- #include <string.h> /* for memcpy */<br> #else<br> #define ECB_STDFP 0<br> #endif<br>@@ -1506,7 +1753,7 @@ ecb_binary32_to_binary16 (uint32_t x)<br> #if ECB_MEMORY_FENCE_NEEDS_PTHREADS<br> /* if your architecture doesn't need memory fences, e.g. because it is<br> * single-cpu/core, or if you use libev in a project that doesn't use libev<br>- * from multiple threads, then you can define ECB_AVOID_PTHREADS when compiling<br>+ * from multiple threads, then you can define ECB_NO_THREADS when compiling<br> * libev, in which cases the memory fences become nops.<br> * alternatively, you can remove this #error and link against libpthread,<br> * which will then provide the memory fences.<br>@@ -1529,9 +1776,75 @@ ecb_binary32_to_binary16 (uint32_t x)<br> #if EV_FEATURE_CODE<br> # define inline_speed ecb_inline<br> #else<br>-# define inline_speed noinline static<br>+# define inline_speed ecb_noinline static<br> #endif<br> <br>+/*****************************************************************************/<br>+/* raw syscall wrappers */<br>+<br>+#if EV_NEED_SYSCALL<br>+<br>+#include <sys/syscall.h><br>+<br>+/*<br>+ * define some syscall wrappers for common architectures<br>+ * this is mostly for nice looks during debugging, not performance.<br>+ * our syscalls return < 0, 
not == -1, on error. which is good<br>+ * enough for linux aio.<br>+ * TODO: arm is also common nowadays, maybe even mips and x86<br>+ * TODO: after implementing this, it suddenly looks like overkill, but its hard to remove...<br>+ */<br>+#if __GNUC__ && __linux && ECB_AMD64 && !EV_FEATURE_CODE<br>+ /* the costly errno access probably kills this for size optimisation */<br>+<br>+ #define ev_syscall(nr,narg,arg1,arg2,arg3,arg4,arg5,arg6) \<br>+ ({ \<br>+ long res; \<br>+ register unsigned long r6 __asm__ ("r9" ); \<br>+ register unsigned long r5 __asm__ ("r8" ); \<br>+ register unsigned long r4 __asm__ ("r10"); \<br>+ register unsigned long r3 __asm__ ("rdx"); \<br>+ register unsigned long r2 __asm__ ("rsi"); \<br>+ register unsigned long r1 __asm__ ("rdi"); \<br>+ if (narg >= 6) r6 = (unsigned long)(arg6); \<br>+ if (narg >= 5) r5 = (unsigned long)(arg5); \<br>+ if (narg >= 4) r4 = (unsigned long)(arg4); \<br>+ if (narg >= 3) r3 = (unsigned long)(arg3); \<br>+ if (narg >= 2) r2 = (unsigned long)(arg2); \<br>+ if (narg >= 1) r1 = (unsigned long)(arg1); \<br>+ __asm__ __volatile__ ( \<br>+ "syscall\n\t" \<br>+ : "=a" (res) \<br>+ : "0" (nr), "r" (r1), "r" (r2), "r" (r3), "r" (r4), "r" (r5) \<br>+ : "cc", "r11", "cx", "memory"); \<br>+ errno = -res; \<br>+ res; \<br>+ })<br>+<br>+#endif<br>+<br>+#ifdef ev_syscall<br>+ #define ev_syscall0(nr) ev_syscall (nr, 0, 0, 0, 0, 0, 0, 0)<br>+ #define ev_syscall1(nr,arg1) ev_syscall (nr, 1, arg1, 0, 0, 0, 0, 0)<br>+ #define ev_syscall2(nr,arg1,arg2) ev_syscall (nr, 2, arg1, arg2, 0, 0, 0, 0)<br>+ #define ev_syscall3(nr,arg1,arg2,arg3) ev_syscall (nr, 3, arg1, arg2, arg3, 0, 0, 0)<br>+ #define ev_syscall4(nr,arg1,arg2,arg3,arg4) ev_syscall (nr, 3, arg1, arg2, arg3, arg4, 0, 0)<br>+ #define ev_syscall5(nr,arg1,arg2,arg3,arg4,arg5) ev_syscall (nr, 5, arg1, arg2, arg3, arg4, arg5, 0)<br>+ #define ev_syscall6(nr,arg1,arg2,arg3,arg4,arg5,arg6) ev_syscall (nr, 6, arg1, arg2, arg3, arg4, arg5,arg6)<br>+#else<br>+ #define ev_syscall0(nr) 
syscall (nr)<br>+ #define ev_syscall1(nr,arg1) syscall (nr, arg1)<br>+ #define ev_syscall2(nr,arg1,arg2) syscall (nr, arg1, arg2)<br>+ #define ev_syscall3(nr,arg1,arg2,arg3) syscall (nr, arg1, arg2, arg3)<br>+ #define ev_syscall4(nr,arg1,arg2,arg3,arg4) syscall (nr, arg1, arg2, arg3, arg4)<br>+ #define ev_syscall5(nr,arg1,arg2,arg3,arg4,arg5) syscall (nr, arg1, arg2, arg3, arg4, arg5)<br>+ #define ev_syscall6(nr,arg1,arg2,arg3,arg4,arg5,arg6) syscall (nr, arg1, arg2, arg3, arg4, arg5,arg6)<br>+#endif<br>+<br>+#endif<br>+<br>+/*****************************************************************************/<br>+<br> #define NUMPRI (EV_MAXPRI - EV_MINPRI + 1)<br> <br> #if EV_MINPRI == EV_MAXPRI<br>@@ -1540,8 +1853,7 @@ ecb_binary32_to_binary16 (uint32_t x)<br> # define ABSPRI(w) (((W)w)->priority - EV_MINPRI)<br> #endif<br> <br>-#define EMPTY /* required for microsofts broken pseudo-c compiler */<br>-#define EMPTY2(a,b) /* used to suppress some warnings */<br>+#define EMPTY /* required for microsofts broken pseudo-c compiler */<br> <br> typedef ev_watcher *W;<br> typedef ev_watcher_list *WL;<br>@@ -1576,6 +1888,10 @@ static EV_ATOMIC_T have_monotonic; /* did clock_gettime (CLOCK_MONOTONIC) work?<br> <br> /*****************************************************************************/<br> <br>+#if EV_USE_LINUXAIO<br>+# include <linux/aio_abi.h> /* probably only needed for aio_context_t */<br>+#endif<br>+<br> /* define a suitable floor function (only used by periodics atm) */<br> <br> #if EV_USE_FLOOR<br>@@ -1586,7 +1902,7 @@ static EV_ATOMIC_T have_monotonic; /* did clock_gettime (CLOCK_MONOTONIC) work?<br> #include <float.h><br> <br> /* a floor() replacement function, should be independent of ev_tstamp type */<br>-noinline<br>+ecb_noinline<br> static ev_tstamp<br> ev_floor (ev_tstamp v)<br> {<br>@@ -1597,26 +1913,26 @@ ev_floor (ev_tstamp v)<br> const ev_tstamp shift = sizeof (unsigned long) >= 8 ? 18446744073709551616. 
: 4294967296.;<br> #endif<br> <br>- /* argument too large for an unsigned long? */<br>- if (expect_false (v >= shift))<br>+ /* special treatment for negative arguments */<br>+ if (ecb_expect_false (v < 0.))<br>+ {<br>+ ev_tstamp f = -ev_floor (-v);<br>+<br>+ return f - (f == v ? 0 : 1);<br>+ }<br>+<br>+ /* argument too large for an unsigned long? then reduce it */<br>+ if (ecb_expect_false (v >= shift))<br> {<br> ev_tstamp f;<br> <br> if (v == v - 1.)<br>- return v; /* very large number */<br>+ return v; /* very large numbers are assumed to be integer */<br> <br> f = shift * ev_floor (v * (1. / shift));<br> return f + ev_floor (v - f);<br> }<br> <br>- /* special treatment for negative args? */<br>- if (expect_false (v < 0.))<br>- {<br>- ev_tstamp f = -ev_floor (-v);<br>-<br>- return f - (f == v ? 0 : 1);<br>- }<br>-<br> /* fits into an unsigned long */<br> return (unsigned long)v;<br> }<br>@@ -1629,7 +1945,7 @@ ev_floor (ev_tstamp v)<br> # include <sys/utsname.h><br> #endif<br> <br>-noinline ecb_cold<br>+ecb_noinline ecb_cold<br> static unsigned int<br> ev_linux_version (void)<br> {<br>@@ -1669,7 +1985,7 @@ ev_linux_version (void)<br> /*****************************************************************************/<br> <br> #if EV_AVOID_STDIO<br>-noinline ecb_cold<br>+ecb_noinline ecb_cold<br> static void<br> ev_printerr (const char *msg)<br> {<br>@@ -1677,16 +1993,16 @@ ev_printerr (const char *msg)<br> }<br> #endif<br> <br>-static void (*syserr_cb)(const char *msg) EV_THROW;<br>+static void (*syserr_cb)(const char *msg) EV_NOEXCEPT;<br> <br> ecb_cold<br> void<br>-ev_set_syserr_cb (void (*cb)(const char *msg) EV_THROW) EV_THROW<br>+ev_set_syserr_cb (void (*cb)(const char *msg) EV_NOEXCEPT) EV_NOEXCEPT<br> {<br> syserr_cb = cb;<br> }<br> <br>-noinline ecb_cold<br>+ecb_noinline ecb_cold<br> static void<br> ev_syserr (const char *msg)<br> {<br>@@ -1710,7 +2026,7 @@ ev_syserr (const char *msg)<br> }<br> <br> static void *<br>-ev_realloc_emul (void *ptr, long size) 
EV_THROW<br>+ev_realloc_emul (void *ptr, long size) EV_NOEXCEPT<br> {<br> /* some systems, notably openbsd and darwin, fail to properly<br> * implement realloc (x, 0) (as required by both ansi c-89 and<br>@@ -1726,11 +2042,11 @@ ev_realloc_emul (void *ptr, long size) EV_THROW<br> return 0;<br> }<br> <br>-static void *(*alloc)(void *ptr, long size) EV_THROW = ev_realloc_emul;<br>+static void *(*alloc)(void *ptr, long size) EV_NOEXCEPT = ev_realloc_emul;<br> <br> ecb_cold<br> void<br>-ev_set_allocator (void *(*cb)(void *ptr, long size) EV_THROW) EV_THROW<br>+ev_set_allocator (void *(*cb)(void *ptr, long size) EV_NOEXCEPT) EV_NOEXCEPT<br> {<br> alloc = cb;<br> }<br>@@ -1767,8 +2083,8 @@ typedef struct<br> WL head;<br> unsigned char events; /* the events watched for */<br> unsigned char reify; /* flag set when this ANFD needs reification (EV_ANFD_REIFY, EV__IOFDSET) */<br>- unsigned char emask; /* the epoll backend stores the actual kernel mask in here */<br>- unsigned char unused;<br>+ unsigned char emask; /* some backends store the actual kernel mask in here */<br>+ unsigned char eflags; /* flags field for use by backends */<br> #if EV_USE_EPOLL<br> unsigned int egen; /* generation counter to counter epoll bugs */<br> #endif<br>@@ -1832,12 +2148,7 @@ typedef struct<br> <br> #else<br> <br>-#ifdef EV_API_STATIC<br>- static ev_tstamp ev_rt_now = 0;<br>-#else<br>- ev_tstamp ev_rt_now = 0;<br>-#endif<br>-<br>+ EV_API_DECL ev_tstamp ev_rt_now = EV_TS_CONST (0.); /* needs to be initialised to make it a definition despite extern */<br> #define VAR(name,decl) static decl;<br> #include "ev_vars.h"<br> #undef VAR<br>@@ -1847,8 +2158,8 @@ typedef struct<br> #endif<br> <br> #if EV_FEATURE_API<br>-# define EV_RELEASE_CB if (expect_false (release_cb)) release_cb (EV_A)<br>-# define EV_ACQUIRE_CB if (expect_false (acquire_cb)) acquire_cb (EV_A)<br>+# define EV_RELEASE_CB if (ecb_expect_false (release_cb)) release_cb (EV_A)<br>+# define EV_ACQUIRE_CB if (ecb_expect_false 
(acquire_cb)) acquire_cb (EV_A)<br> # define EV_INVOKE_PENDING invoke_cb (EV_A)<br> #else<br> # define EV_RELEASE_CB (void)0<br>@@ -1862,20 +2173,22 @@ typedef struct<br> <br> #ifndef EV_HAVE_EV_TIME<br> ev_tstamp<br>-ev_time (void) EV_THROW<br>+ev_time (void) EV_NOEXCEPT<br> {<br> #if EV_USE_REALTIME<br>- if (expect_true (have_realtime))<br>+ if (ecb_expect_true (have_realtime))<br> {<br> struct timespec ts;<br> clock_gettime (CLOCK_REALTIME, &ts);<br>- return ts.tv_sec + ts.tv_nsec * 1e-9;<br>+ return EV_TS_GET (ts);<br> }<br> #endif<br> <br>- struct timeval tv;<br>- gettimeofday (&tv, 0);<br>- return tv.tv_sec + tv.tv_usec * 1e-6;<br>+ {<br>+ struct timeval tv;<br>+ gettimeofday (&tv, 0);<br>+ return EV_TV_GET (tv);<br>+ }<br> }<br> #endif<br> <br>@@ -1883,11 +2196,11 @@ inline_size ev_tstamp<br> get_clock (void)<br> {<br> #if EV_USE_MONOTONIC<br>- if (expect_true (have_monotonic))<br>+ if (ecb_expect_true (have_monotonic))<br> {<br> struct timespec ts;<br> clock_gettime (CLOCK_MONOTONIC, &ts);<br>- return ts.tv_sec + ts.tv_nsec * 1e-9;<br>+ return EV_TS_GET (ts);<br> }<br> #endif<br> <br>@@ -1896,28 +2209,28 @@ get_clock (void)<br> <br> #if EV_MULTIPLICITY<br> ev_tstamp<br>-ev_now (EV_P) EV_THROW<br>+ev_now (EV_P) EV_NOEXCEPT<br> {<br> return ev_rt_now;<br> }<br> #endif<br> <br> ev_tstamp<br>-ev_monotonic_now (EV_P) EV_THROW<br>+ev_monotonic_now (EV_P) EV_NOEXCEPT<br> {<br> return mn_now;<br> }<br> <br> ev_tstamp<br>-ev_monotonic_time (void) EV_THROW<br>+ev_monotonic_time (void) EV_NOEXCEPT<br> {<br> return get_clock();<br> }<br> <br> void<br>-ev_sleep (ev_tstamp delay) EV_THROW<br>+ev_sleep (ev_tstamp delay) EV_NOEXCEPT<br> {<br>- if (delay > 0.)<br>+ if (delay > EV_TS_CONST (0.))<br> {<br> #if EV_USE_NANOSLEEP<br> struct timespec ts;<br>@@ -1925,7 +2238,9 @@ ev_sleep (ev_tstamp delay) EV_THROW<br> EV_TS_SET (ts, delay);<br> nanosleep (&ts, 0);<br> #elif defined _WIN32<br>- Sleep ((unsigned long)(delay * 1e3));<br>+ /* maybe this should round up, as ms is very 
low resolution */<br>+ /* compared to select (µs) or nanosleep (ns) */<br>+ Sleep ((unsigned long)(EV_TS_TO_MSEC (delay)));<br> #else<br> struct timeval tv;<br> <br>@@ -1965,7 +2280,7 @@ array_nextsize (int elem, int cur, int cnt)<br> return ncur;<br> }<br> <br>-noinline ecb_cold<br>+ecb_noinline ecb_cold<br> static void *<br> array_realloc (int elem, void *base, int *cur, int cnt)<br> {<br>@@ -1973,16 +2288,18 @@ array_realloc (int elem, void *base, int *cur, int cnt)<br> return ev_realloc (base, elem * *cur);<br> }<br> <br>-#define array_init_zero(base,count) \<br>- memset ((void *)(base), 0, sizeof (*(base)) * (count))<br>+#define array_needsize_noinit(base,offset,count)<br>+<br>+#define array_needsize_zerofill(base,offset,count) \<br>+ memset ((void *)(base + offset), 0, sizeof (*(base)) * (count))<br> <br> #define array_needsize(type,base,cur,cnt,init) \<br>- if (expect_false ((cnt) > (cur))) \<br>+ if (ecb_expect_false ((cnt) > (cur))) \<br> { \<br> ecb_unused int ocur_ = (cur); \<br> (base) = (type *)array_realloc \<br> (sizeof (type), (base), &(cur), (cnt)); \<br>- init ((base) + (ocur_), (cur) - ocur_); \<br>+ init ((base), ocur_, ((cur) - ocur_)); \<br> }<br> <br> #if 0<br>@@ -2001,25 +2318,25 @@ array_realloc (int elem, void *base, int *cur, int cnt)<br> /*****************************************************************************/<br> <br> /* dummy callback for pending events */<br>-noinline<br>+ecb_noinline<br> static void<br> pendingcb (EV_P_ ev_prepare *w, int revents)<br> {<br> }<br> <br>-noinline<br>+ecb_noinline<br> void<br>-ev_feed_event (EV_P_ void *w, int revents) EV_THROW<br>+ev_feed_event (EV_P_ void *w, int revents) EV_NOEXCEPT<br> {<br> W w_ = (W)w;<br> int pri = ABSPRI (w_);<br> <br>- if (expect_false (w_->pending))<br>+ if (ecb_expect_false (w_->pending))<br> pendings [pri][w_->pending - 1].events |= revents;<br> else<br> {<br> w_->pending = ++pendingcnt [pri];<br>- array_needsize (ANPENDING, pendings [pri], pendingmax [pri], 
w_->pending, EMPTY2);<br>+ array_needsize (ANPENDING, pendings [pri], pendingmax [pri], w_->pending, array_needsize_noinit);<br> pendings [pri][w_->pending - 1].w = w_;<br> pendings [pri][w_->pending - 1].events = revents;<br> }<br>@@ -2030,7 +2347,7 @@ ev_feed_event (EV_P_ void *w, int revents) EV_THROW<br> inline_speed void<br> feed_reverse (EV_P_ W w)<br> {<br>- array_needsize (W, rfeeds, rfeedmax, rfeedcnt + 1, EMPTY2);<br>+ array_needsize (W, rfeeds, rfeedmax, rfeedcnt + 1, array_needsize_noinit);<br> rfeeds [rfeedcnt++] = w;<br> }<br> <br>@@ -2075,12 +2392,12 @@ fd_event (EV_P_ int fd, int revents)<br> {<br> ANFD *anfd = anfds + fd;<br> <br>- if (expect_true (!anfd->reify))<br>+ if (ecb_expect_true (!anfd->reify))<br> fd_event_nocheck (EV_A_ fd, revents);<br> }<br> <br> void<br>-ev_feed_fd_event (EV_P_ int fd, int revents) EV_THROW<br>+ev_feed_fd_event (EV_P_ int fd, int revents) EV_NOEXCEPT<br> {<br> if (fd >= 0 && fd < anfdmax)<br> fd_event_nocheck (EV_A_ fd, revents);<br>@@ -2093,8 +2410,20 @@ fd_reify (EV_P)<br> {<br> int i;<br> <br>+ /* most backends do not modify the fdchanges list in backend_modfiy.<br>+ * except io_uring, which has fixed-size buffers which might force us<br>+ * to handle events in backend_modify, causing fdchanges to be amended,<br>+ * which could result in an endless loop.<br>+ * to avoid this, we do not dynamically handle fds that were added<br>+ * during fd_reify. 
that means that for those backends, fdchangecnt<br>+ * might be non-zero during poll, which must cause them to not block.<br>+ * to not put too much of a burden on other backends, this detail<br>+ * needs to be handled in the backend.<br>+ */<br>+ int changecnt = fdchangecnt;<br>+<br> #if EV_SELECT_IS_WINSOCKET || EV_USE_IOCP<br>- for (i = 0; i < fdchangecnt; ++i)<br>+ for (i = 0; i < changecnt; ++i)<br> {<br> int fd = fdchanges [i];<br> ANFD *anfd = anfds + fd;<br>@@ -2118,7 +2447,7 @@ fd_reify (EV_P)<br> }<br> #endif<br> <br>- for (i = 0; i < fdchangecnt; ++i)<br>+ for (i = 0; i < changecnt; ++i)<br> {<br> int fd = fdchanges [i];<br> ANFD *anfd = anfds + fd;<br>@@ -2127,9 +2456,9 @@ fd_reify (EV_P)<br> unsigned char o_events = anfd->events;<br> unsigned char o_reify = anfd->reify;<br> <br>- anfd->reify = 0;<br>+ anfd->reify = 0;<br> <br>- /*if (expect_true (o_reify & EV_ANFD_REIFY)) probably a deoptimisation */<br>+ /*if (ecb_expect_true (o_reify & EV_ANFD_REIFY)) probably a deoptimisation */<br> {<br> anfd->events = 0;<br> <br>@@ -2144,7 +2473,14 @@ fd_reify (EV_P)<br> backend_modify (EV_A_ fd, o_events, anfd->events);<br> }<br> <br>- fdchangecnt = 0;<br>+ /* normally, fdchangecnt hasn't changed. 
if it has, then new fds have been added.<br>+ * this is a rare case (see beginning comment in this function), so we copy them to the<br>+ * front and hope the backend handles this case.<br>+ */<br>+ if (ecb_expect_false (fdchangecnt != changecnt))<br>+ memmove (fdchanges, fdchanges + changecnt, (fdchangecnt - changecnt) * sizeof (*fdchanges));<br>+<br>+ fdchangecnt -= changecnt;<br> }<br> <br> /* something about the given fd changed */<br>@@ -2153,12 +2489,12 @@ void<br> fd_change (EV_P_ int fd, int flags)<br> {<br> unsigned char reify = anfds [fd].reify;<br>- anfds [fd].reify |= flags;<br>+ anfds [fd].reify = reify | flags;<br> <br>- if (expect_true (!reify))<br>+ if (ecb_expect_true (!reify))<br> {<br> ++fdchangecnt;<br>- array_needsize (int, fdchanges, fdchangemax, fdchangecnt, EMPTY2);<br>+ array_needsize (int, fdchanges, fdchangemax, fdchangecnt, array_needsize_noinit);<br> fdchanges [fdchangecnt - 1] = fd;<br> }<br> }<br>@@ -2188,7 +2524,7 @@ fd_valid (int fd)<br> }<br> <br> /* called on EBADF to verify fds */<br>-noinline ecb_cold<br>+ecb_noinline ecb_cold<br> static void<br> fd_ebadf (EV_P)<br> {<br>@@ -2201,7 +2537,7 @@ fd_ebadf (EV_P)<br> }<br> <br> /* called on ENOMEM in select/poll to kill some fds and retry */<br>-noinline ecb_cold<br>+ecb_noinline ecb_cold<br> static void<br> fd_enomem (EV_P)<br> {<br>@@ -2216,7 +2552,7 @@ fd_enomem (EV_P)<br> }<br> <br> /* usually called after fork if backend needs to re-arm all fds from scratch */<br>-noinline<br>+ecb_noinline<br> static void<br> fd_rearm_all (EV_P)<br> {<br>@@ -2280,19 +2616,19 @@ downheap (ANHE *heap, int N, int k)<br> ANHE *pos = heap + DHEAP * (k - HEAP0) + HEAP0 + 1;<br> <br> /* find minimum child */<br>- if (expect_true (pos + DHEAP - 1 < E))<br>+ if (ecb_expect_true (pos + DHEAP - 1 < E))<br> {<br> /* fast path */ (minpos = pos + 0), (minat = ANHE_at (*minpos));<br>- if ( ANHE_at (pos [1]) < minat) (minpos = pos + 1), (minat = ANHE_at (*minpos));<br>- if ( ANHE_at (pos [2]) < minat) (minpos = 
pos + 2), (minat = ANHE_at (*minpos));<br>- if ( ANHE_at (pos [3]) < minat) (minpos = pos + 3), (minat = ANHE_at (*minpos));<br>+ if ( minat > ANHE_at (pos [1])) (minpos = pos + 1), (minat = ANHE_at (*minpos));<br>+ if ( minat > ANHE_at (pos [2])) (minpos = pos + 2), (minat = ANHE_at (*minpos));<br>+ if ( minat > ANHE_at (pos [3])) (minpos = pos + 3), (minat = ANHE_at (*minpos));<br> }<br> else if (pos < E)<br> {<br> /* slow path */ (minpos = pos + 0), (minat = ANHE_at (*minpos));<br>- if (pos + 1 < E && ANHE_at (pos [1]) < minat) (minpos = pos + 1), (minat = ANHE_at (*minpos));<br>- if (pos + 2 < E && ANHE_at (pos [2]) < minat) (minpos = pos + 2), (minat = ANHE_at (*minpos));<br>- if (pos + 3 < E && ANHE_at (pos [3]) < minat) (minpos = pos + 3), (minat = ANHE_at (*minpos));<br>+ if (pos + 1 < E && minat > ANHE_at (pos [1])) (minpos = pos + 1), (minat = ANHE_at (*minpos));<br>+ if (pos + 2 < E && minat > ANHE_at (pos [2])) (minpos = pos + 2), (minat = ANHE_at (*minpos));<br>+ if (pos + 3 < E && minat > ANHE_at (pos [3])) (minpos = pos + 3), (minat = ANHE_at (*minpos));<br> }<br> else<br> break;<br>@@ -2310,7 +2646,7 @@ downheap (ANHE *heap, int N, int k)<br> ev_active (ANHE_w (he)) = k;<br> }<br> <br>-#else /* 4HEAP */<br>+#else /* not 4HEAP */<br> <br> #define HEAP0 1<br> #define HPARENT(k) ((k) >> 1)<br>@@ -2392,7 +2728,7 @@ reheap (ANHE *heap, int N)<br> <br> /*****************************************************************************/<br> <br>-/* associate signal watchers to a signal signal */<br>+/* associate signal watchers to a signal */<br> typedef struct<br> {<br> EV_ATOMIC_T pending;<br>@@ -2466,7 +2802,7 @@ evpipe_write (EV_P_ EV_ATOMIC_T *flag)<br> {<br> ECB_MEMORY_FENCE; /* push out the write before this function was called, acquire flag */<br> <br>- if (expect_true (*flag))<br>+ if (ecb_expect_true (*flag))<br> return;<br> <br> *flag = 1;<br>@@ -2497,7 +2833,7 @@ evpipe_write (EV_P_ EV_ATOMIC_T *flag)<br> #ifdef _WIN32<br> WSABUF buf;<br> DWORD 
sent;<br>- buf.buf = &buf;<br>+ buf.buf = (char *)&buf;<br> buf.len = 1;<br> WSASend (EV_FD_TO_WIN32_HANDLE (evpipe [1]), &buf, 1, &sent, 0, 0, 0);<br> #else<br>@@ -2553,7 +2889,7 @@ pipecb (EV_P_ ev_io *iow, int revents)<br> ECB_MEMORY_FENCE;<br> <br> for (i = EV_NSIG - 1; i--; )<br>- if (expect_false (signals [i].pending))<br>+ if (ecb_expect_false (signals [i].pending))<br> ev_feed_signal_event (EV_A_ i + 1);<br> }<br> #endif<br>@@ -2579,7 +2915,7 @@ pipecb (EV_P_ ev_io *iow, int revents)<br> /*****************************************************************************/<br> <br> void<br>-ev_feed_signal (int signum) EV_THROW<br>+ev_feed_signal (int signum) EV_NOEXCEPT<br> {<br> #if EV_MULTIPLICITY<br> EV_P;<br>@@ -2604,13 +2940,13 @@ ev_sighandler (int signum)<br> ev_feed_signal (signum);<br> }<br> <br>-noinline<br>+ecb_noinline<br> void<br>-ev_feed_signal_event (EV_P_ int signum) EV_THROW<br>+ev_feed_signal_event (EV_P_ int signum) EV_NOEXCEPT<br> {<br> WL w;<br> <br>- if (expect_false (signum <= 0 || signum >= EV_NSIG))<br>+ if (ecb_expect_false (signum <= 0 || signum >= EV_NSIG))<br> return;<br> <br> --signum;<br>@@ -2619,7 +2955,7 @@ ev_feed_signal_event (EV_P_ int signum) EV_THROW<br> /* it is permissible to try to feed a signal to the wrong loop */<br> /* or, likely more useful, feeding a signal nobody is waiting for */<br> <br>- if (expect_false (signals [signum].loop != EV_A))<br>+ if (ecb_expect_false (signals [signum].loop != EV_A))<br> return;<br> #endif<br> <br>@@ -2713,6 +3049,57 @@ childcb (EV_P_ ev_signal *sw, int revents)<br> <br> /*****************************************************************************/<br> <br>+#if EV_USE_TIMERFD<br>+<br>+static void periodics_reschedule (EV_P);<br>+<br>+static void<br>+timerfdcb (EV_P_ ev_io *iow, int revents)<br>+{<br>+ struct itimerspec its = { 0 };<br>+<br>+ its.it_value.tv_sec = ev_rt_now + (int)MAX_BLOCKTIME2;<br>+ timerfd_settime (timerfd, TFD_TIMER_ABSTIME | TFD_TIMER_CANCEL_ON_SET, &its, 
0);<br>+<br>+ ev_rt_now = ev_time ();<br>+ /* periodics_reschedule only needs ev_rt_now */<br>+ /* but maybe in the future we want the full treatment. */<br>+ /*<br>+ now_floor = EV_TS_CONST (0.);<br>+ time_update (EV_A_ EV_TSTAMP_HUGE);<br>+ */<br>+#if EV_PERIODIC_ENABLE<br>+ periodics_reschedule (EV_A);<br>+#endif<br>+}<br>+<br>+ecb_noinline ecb_cold<br>+static void<br>+evtimerfd_init (EV_P)<br>+{<br>+ if (!ev_is_active (&timerfd_w))<br>+ {<br>+ timerfd = timerfd_create (CLOCK_REALTIME, TFD_NONBLOCK | TFD_CLOEXEC);<br>+<br>+ if (timerfd >= 0)<br>+ {<br>+ fd_intern (timerfd); /* just to be sure */<br>+<br>+ ev_io_init (&timerfd_w, timerfdcb, timerfd, EV_READ);<br>+ ev_set_priority (&timerfd_w, EV_MINPRI);<br>+ ev_io_start (EV_A_ &timerfd_w);<br>+ ev_unref (EV_A); /* watcher should not keep loop alive */<br>+<br>+ /* (re-) arm timer */<br>+ timerfdcb (EV_A_ 0, 0);<br>+ }<br>+ }<br>+}<br>+<br>+#endif<br>+<br>+/*****************************************************************************/<br>+<br> #if EV_USE_IOCP<br> # include "ev_iocp.c"<br> #endif<br>@@ -2725,6 +3112,12 @@ childcb (EV_P_ ev_signal *sw, int revents)<br> #if EV_USE_EPOLL<br> # include "ev_epoll.c"<br> #endif<br>+#if EV_USE_LINUXAIO<br>+# include "ev_linuxaio.c"<br>+#endif<br>+#if EV_USE_IOURING<br>+# include "ev_iouring.c"<br>+#endif<br> #if EV_USE_POLL<br> # include "ev_poll.c"<br> #endif<br>@@ -2733,13 +3126,13 @@ childcb (EV_P_ ev_signal *sw, int revents)<br> #endif<br> <br> ecb_cold int<br>-ev_version_major (void) EV_THROW<br>+ev_version_major (void) EV_NOEXCEPT<br> {<br> return EV_VERSION_MAJOR;<br> }<br> <br> ecb_cold int<br>-ev_version_minor (void) EV_THROW<br>+ev_version_minor (void) EV_NOEXCEPT<br> {<br> return EV_VERSION_MINOR;<br> }<br>@@ -2758,22 +3151,24 @@ enable_secure (void)<br> <br> ecb_cold<br> unsigned int<br>-ev_supported_backends (void) EV_THROW<br>+ev_supported_backends (void) EV_NOEXCEPT<br> {<br> unsigned int flags = 0;<br> <br>- if (EV_USE_PORT ) flags |= EVBACKEND_PORT;<br>- 
if (EV_USE_KQUEUE) flags |= EVBACKEND_KQUEUE;<br>- if (EV_USE_EPOLL ) flags |= EVBACKEND_EPOLL;<br>- if (EV_USE_POLL ) flags |= EVBACKEND_POLL;<br>- if (EV_USE_SELECT) flags |= EVBACKEND_SELECT;<br>- <br>+ if (EV_USE_PORT ) flags |= EVBACKEND_PORT;<br>+ if (EV_USE_KQUEUE ) flags |= EVBACKEND_KQUEUE;<br>+ if (EV_USE_EPOLL ) flags |= EVBACKEND_EPOLL;<br>+ if (EV_USE_LINUXAIO ) flags |= EVBACKEND_LINUXAIO;<br>+ if (EV_USE_IOURING && ev_linux_version () >= 0x050601) flags |= EVBACKEND_IOURING; /* 5.6.1+ */<br>+ if (EV_USE_POLL ) flags |= EVBACKEND_POLL;<br>+ if (EV_USE_SELECT ) flags |= EVBACKEND_SELECT;<br>+<br> return flags;<br> }<br> <br> ecb_cold<br> unsigned int<br>-ev_recommended_backends (void) EV_THROW<br>+ev_recommended_backends (void) EV_NOEXCEPT<br> {<br> unsigned int flags = ev_supported_backends ();<br> <br>@@ -2791,73 +3186,84 @@ ev_recommended_backends (void) EV_THROW<br> flags &= ~EVBACKEND_POLL; /* poll return value is unusable (http://forums.freebsd.org/archive/index.php/t-10270.html) */<br> #endif<br> <br>+ /* TODO: linuxaio is very experimental */<br>+#if !EV_RECOMMEND_LINUXAIO<br>+ flags &= ~EVBACKEND_LINUXAIO;<br>+#endif<br>+ /* TODO: linuxaio is super experimental */<br>+#if !EV_RECOMMEND_IOURING<br>+ flags &= ~EVBACKEND_IOURING;<br>+#endif<br>+<br> return flags;<br> }<br> <br> ecb_cold<br> unsigned int<br>-ev_embeddable_backends (void) EV_THROW<br>+ev_embeddable_backends (void) EV_NOEXCEPT<br> {<br>- int flags = EVBACKEND_EPOLL | EVBACKEND_KQUEUE | EVBACKEND_PORT;<br>+ int flags = EVBACKEND_EPOLL | EVBACKEND_KQUEUE | EVBACKEND_PORT | EVBACKEND_IOURING;<br> <br> /* epoll embeddability broken on all linux versions up to at least 2.6.23 */<br> if (ev_linux_version () < 0x020620) /* disable it on linux < 2.6.32 */<br> flags &= ~EVBACKEND_EPOLL;<br> <br>+ /* EVBACKEND_LINUXAIO is theoretically embeddable, but suffers from a performance overhead */<br>+<br> return flags;<br> }<br> <br> unsigned int<br>-ev_backend (EV_P) EV_THROW<br>+ev_backend (EV_P) 
EV_NOEXCEPT<br> {<br> return backend;<br> }<br> <br> #if EV_FEATURE_API<br> unsigned int<br>-ev_iteration (EV_P) EV_THROW<br>+ev_iteration (EV_P) EV_NOEXCEPT<br> {<br> return loop_count;<br> }<br> <br> unsigned int<br>-ev_depth (EV_P) EV_THROW<br>+ev_depth (EV_P) EV_NOEXCEPT<br> {<br> return loop_depth;<br> }<br> <br> void<br>-ev_set_io_collect_interval (EV_P_ ev_tstamp interval) EV_THROW<br>+ev_set_io_collect_interval (EV_P_ ev_tstamp interval) EV_NOEXCEPT<br> {<br> io_blocktime = interval;<br> }<br> <br> void<br>-ev_set_timeout_collect_interval (EV_P_ ev_tstamp interval) EV_THROW<br>+ev_set_timeout_collect_interval (EV_P_ ev_tstamp interval) EV_NOEXCEPT<br> {<br> timeout_blocktime = interval;<br> }<br> <br> void<br>-ev_set_userdata (EV_P_ void *data) EV_THROW<br>+ev_set_userdata (EV_P_ void *data) EV_NOEXCEPT<br> {<br> userdata = data;<br> }<br> <br> void *<br>-ev_userdata (EV_P) EV_THROW<br>+ev_userdata (EV_P) EV_NOEXCEPT<br> {<br> return userdata;<br> }<br> <br> void<br>-ev_set_invoke_pending_cb (EV_P_ ev_loop_callback invoke_pending_cb) EV_THROW<br>+ev_set_invoke_pending_cb (EV_P_ ev_loop_callback invoke_pending_cb) EV_NOEXCEPT<br> {<br> invoke_cb = invoke_pending_cb;<br> }<br> <br> void<br>-ev_set_loop_release_cb (EV_P_ void (*release)(EV_P) EV_THROW, void (*acquire)(EV_P) EV_THROW) EV_THROW<br>+ev_set_loop_release_cb (EV_P_ void (*release)(EV_P) EV_NOEXCEPT, void (*acquire)(EV_P) EV_NOEXCEPT) EV_NOEXCEPT<br> {<br> release_cb = release;<br> acquire_cb = acquire;<br>@@ -2865,9 +3271,9 @@ ev_set_loop_release_cb (EV_P_ void (*release)(EV_P) EV_THROW, void (*acquire)(EV<br> #endif<br> <br> /* initialise a loop structure, must be zero-initialised */<br>-noinline ecb_cold<br>+ecb_noinline ecb_cold<br> static void<br>-loop_init (EV_P_ unsigned int flags) EV_THROW<br>+loop_init (EV_P_ unsigned int flags) EV_NOEXCEPT<br> {<br> if (!backend)<br> {<br>@@ -2930,30 +3336,39 @@ loop_init (EV_P_ unsigned int flags) EV_THROW<br> #if EV_USE_SIGNALFD<br> sigfd = flags & 
EVFLAG_SIGNALFD ? -2 : -1;<br> #endif<br>+#if EV_USE_TIMERFD<br>+ timerfd = flags & EVFLAG_NOTIMERFD ? -1 : -2;<br>+#endif<br> <br> if (!(flags & EVBACKEND_MASK))<br> flags |= ev_recommended_backends ();<br> <br>- if (flags & EVFLAG_ALLOCFD)<br>+ if (flags & EVFLAG_NOTIMERFD)<br> if (evpipe_alloc(EV_A) < 0)<br> return;<br> #if EV_USE_IOCP<br>- if (!backend && (flags & EVBACKEND_IOCP )) backend = iocp_init (EV_A_ flags);<br>+ if (!backend && (flags & EVBACKEND_IOCP )) backend = iocp_init (EV_A_ flags);<br> #endif<br> #if EV_USE_PORT<br>- if (!backend && (flags & EVBACKEND_PORT )) backend = port_init (EV_A_ flags);<br>+ if (!backend && (flags & EVBACKEND_PORT )) backend = port_init (EV_A_ flags);<br> #endif<br> #if EV_USE_KQUEUE<br>- if (!backend && (flags & EVBACKEND_KQUEUE)) backend = kqueue_init (EV_A_ flags);<br>+ if (!backend && (flags & EVBACKEND_KQUEUE )) backend = kqueue_init (EV_A_ flags);<br>+#endif<br>+#if EV_USE_IOURING<br>+ if (!backend && (flags & EVBACKEND_IOURING )) backend = iouring_init (EV_A_ flags);<br>+#endif<br>+#if EV_USE_LINUXAIO<br>+ if (!backend && (flags & EVBACKEND_LINUXAIO)) backend = linuxaio_init (EV_A_ flags);<br> #endif<br> #if EV_USE_EPOLL<br>- if (!backend && (flags & EVBACKEND_EPOLL )) backend = epoll_init (EV_A_ flags);<br>+ if (!backend && (flags & EVBACKEND_EPOLL )) backend = epoll_init (EV_A_ flags);<br> #endif<br> #if EV_USE_POLL<br>- if (!backend && (flags & EVBACKEND_POLL )) backend = poll_init (EV_A_ flags);<br>+ if (!backend && (flags & EVBACKEND_POLL )) backend = poll_init (EV_A_ flags);<br> #endif<br> #if EV_USE_SELECT<br>- if (!backend && (flags & EVBACKEND_SELECT)) backend = select_init (EV_A_ flags);<br>+ if (!backend && (flags & EVBACKEND_SELECT )) backend = select_init (EV_A_ flags);<br> #endif<br> <br> ev_prepare_init (&pending_w, pendingcb);<br>@@ -2961,7 +3376,7 @@ loop_init (EV_P_ unsigned int flags) EV_THROW<br> #if EV_SIGNAL_ENABLE || EV_ASYNC_ENABLE<br> ev_init (&pipe_w, pipecb);<br> ev_set_priority (&pipe_w, 
EV_MAXPRI);<br>- if (flags & EVFLAG_ALLOCFD)<br>+ if (flags & EVFLAG_NOTIMERFD)<br> {<br> ev_io_set (&pipe_w, evpipe [0] < 0 ? evpipe [1] : evpipe [0], EV_READ);<br> ev_io_start (EV_A_ &pipe_w);<br>@@ -2986,7 +3401,7 @@ ev_loop_destroy (EV_P)<br> <br> #if EV_CLEANUP_ENABLE<br> /* queue cleanup watchers (and execute them) */<br>- if (expect_false (cleanupcnt))<br>+ if (ecb_expect_false (cleanupcnt))<br> {<br> queue_events (EV_A_ (W *)cleanups, cleanupcnt, EV_CLEANUP);<br> EV_INVOKE_PENDING;<br>@@ -3015,6 +3430,11 @@ ev_loop_destroy (EV_P)<br> close (sigfd);<br> #endif<br> <br>+#if EV_USE_TIMERFD<br>+ if (ev_is_active (&timerfd_w))<br>+ close (timerfd);<br>+#endif<br>+<br> #if EV_USE_INOTIFY<br> if (fs_fd >= 0)<br> close (fs_fd);<br>@@ -3024,22 +3444,28 @@ ev_loop_destroy (EV_P)<br> close (backend_fd);<br> <br> #if EV_USE_IOCP<br>- if (backend == EVBACKEND_IOCP ) iocp_destroy (EV_A);<br>+ if (backend == EVBACKEND_IOCP ) iocp_destroy (EV_A);<br> #endif<br> #if EV_USE_PORT<br>- if (backend == EVBACKEND_PORT ) port_destroy (EV_A);<br>+ if (backend == EVBACKEND_PORT ) port_destroy (EV_A);<br> #endif<br> #if EV_USE_KQUEUE<br>- if (backend == EVBACKEND_KQUEUE) kqueue_destroy (EV_A);<br>+ if (backend == EVBACKEND_KQUEUE ) kqueue_destroy (EV_A);<br>+#endif<br>+#if EV_USE_IOURING<br>+ if (backend == EVBACKEND_IOURING ) iouring_destroy (EV_A);<br>+#endif<br>+#if EV_USE_LINUXAIO<br>+ if (backend == EVBACKEND_LINUXAIO) linuxaio_destroy (EV_A);<br> #endif<br> #if EV_USE_EPOLL<br>- if (backend == EVBACKEND_EPOLL ) epoll_destroy (EV_A);<br>+ if (backend == EVBACKEND_EPOLL ) epoll_destroy (EV_A);<br> #endif<br> #if EV_USE_POLL<br>- if (backend == EVBACKEND_POLL ) poll_destroy (EV_A);<br>+ if (backend == EVBACKEND_POLL ) poll_destroy (EV_A);<br> #endif<br> #if EV_USE_SELECT<br>- if (backend == EVBACKEND_SELECT) select_destroy (EV_A);<br>+ if (backend == EVBACKEND_SELECT ) select_destroy (EV_A);<br> #endif<br> <br> for (i = NUMPRI; i--; )<br>@@ -3091,34 +3517,62 @@ inline_size 
void<br> loop_fork (EV_P)<br> {<br> #if EV_USE_PORT<br>- if (backend == EVBACKEND_PORT ) port_fork (EV_A);<br>+ if (backend == EVBACKEND_PORT ) port_fork (EV_A);<br> #endif<br> #if EV_USE_KQUEUE<br>- if (backend == EVBACKEND_KQUEUE) kqueue_fork (EV_A);<br>+ if (backend == EVBACKEND_KQUEUE ) kqueue_fork (EV_A);<br>+#endif<br>+#if EV_USE_IOURING<br>+ if (backend == EVBACKEND_IOURING ) iouring_fork (EV_A);<br>+#endif<br>+#if EV_USE_LINUXAIO<br>+ if (backend == EVBACKEND_LINUXAIO) linuxaio_fork (EV_A);<br> #endif<br> #if EV_USE_EPOLL<br>- if (backend == EVBACKEND_EPOLL ) epoll_fork (EV_A);<br>+ if (backend == EVBACKEND_EPOLL ) epoll_fork (EV_A);<br> #endif<br> #if EV_USE_INOTIFY<br> infy_fork (EV_A);<br> #endif<br> <br>-#if EV_SIGNAL_ENABLE || EV_ASYNC_ENABLE<br>- if (ev_is_active (&pipe_w) && postfork != 2)<br>+ if (postfork != 2)<br> {<br>- /* pipe_write_wanted must be false now, so modifying fd vars should be safe */<br>-<br>- ev_ref (EV_A);<br>- ev_io_stop (EV_A_ &pipe_w);<br>-<br>- if (evpipe [0] >= 0)<br>- EV_WIN32_CLOSE_FD (evpipe [0]);<br>+ #if EV_USE_SIGNALFD<br>+ /* surprisingly, nothing needs to be done for signalfd, accoridng to docs, it does the right thing on fork */<br>+ #endif<br>+ <br>+ #if EV_USE_TIMERFD<br>+ if (ev_is_active (&timerfd_w))<br>+ {<br>+ ev_ref (EV_A);<br>+ ev_io_stop (EV_A_ &timerfd_w);<br> <br>- evpipe_init (EV_A);<br>- /* iterate over everything, in case we missed something before */<br>- ev_feed_event (EV_A_ &pipe_w, EV_CUSTOM);<br>+ close (timerfd);<br>+ timerfd = -2;<br>+ <br>+ evtimerfd_init (EV_A);<br>+ /* reschedule periodics, in case we missed something */<br>+ ev_feed_event (EV_A_ &timerfd_w, EV_CUSTOM);<br>+ }<br>+ #endif<br>+ <br>+ #if EV_SIGNAL_ENABLE || EV_ASYNC_ENABLE<br>+ if (ev_is_active (&pipe_w))<br>+ {<br>+ /* pipe_write_wanted must be false now, so modifying fd vars should be safe */<br>+ <br>+ ev_ref (EV_A);<br>+ ev_io_stop (EV_A_ &pipe_w);<br>+ <br>+ if (evpipe [0] >= 0)<br>+ EV_WIN32_CLOSE_FD (evpipe [0]);<br>+ 
<br>+ evpipe_init (EV_A);<br>+ /* iterate over everything, in case we missed something before */<br>+ ev_feed_event (EV_A_ &pipe_w, EV_CUSTOM);<br>+ }<br>+ #endif<br> }<br>-#endif<br> <br> postfork = 0;<br> }<br>@@ -3127,7 +3581,7 @@ loop_fork (EV_P)<br> <br> ecb_cold<br> struct ev_loop *<br>-ev_loop_new (unsigned int flags) EV_THROW<br>+ev_loop_new (unsigned int flags) EV_NOEXCEPT<br> {<br> EV_P = (struct ev_loop *)ev_malloc (sizeof (struct ev_loop));<br> <br>@@ -3144,7 +3598,7 @@ ev_loop_new (unsigned int flags) EV_THROW<br> #endif /* multiplicity */<br> <br> #if EV_VERIFY<br>-noinline ecb_cold<br>+ecb_noinline ecb_cold<br> static void<br> verify_watcher (EV_P_ W w)<br> {<br>@@ -3154,7 +3608,7 @@ verify_watcher (EV_P_ W w)<br> assert (("libev: pending watcher not on pending queue", pendings [ABSPRI (w)][w->pending - 1].w == w));<br> }<br> <br>-noinline ecb_cold<br>+ecb_noinline ecb_cold<br> static void<br> verify_heap (EV_P_ ANHE *heap, int N)<br> {<br>@@ -3170,7 +3624,7 @@ verify_heap (EV_P_ ANHE *heap, int N)<br> }<br> }<br> <br>-noinline ecb_cold<br>+ecb_noinline ecb_cold<br> static void<br> array_verify (EV_P_ W *ws, int cnt)<br> {<br>@@ -3184,7 +3638,7 @@ array_verify (EV_P_ W *ws, int cnt)<br> <br> #if EV_FEATURE_API<br> void ecb_cold<br>-ev_verify (EV_P) EV_THROW<br>+ev_verify (EV_P) EV_NOEXCEPT<br> {<br> #if EV_VERIFY<br> int i;<br>@@ -3275,7 +3729,7 @@ struct ev_loop *<br> #else<br> int<br> #endif<br>-ev_default_loop (unsigned int flags) EV_THROW<br>+ev_default_loop (unsigned int flags) EV_NOEXCEPT<br> {<br> if (!ev_default_loop_ptr)<br> {<br>@@ -3304,7 +3758,7 @@ ev_default_loop (unsigned int flags) EV_THROW<br> }<br> <br> void<br>-ev_loop_fork (EV_P) EV_THROW<br>+ev_loop_fork (EV_P) EV_NOEXCEPT<br> {<br> postfork = 1;<br> }<br>@@ -3318,7 +3772,7 @@ ev_invoke (EV_P_ void *w, int revents)<br> }<br> <br> unsigned int<br>-ev_pending_count (EV_P) EV_THROW<br>+ev_pending_count (EV_P) EV_NOEXCEPT<br> {<br> int pri;<br> unsigned int count = 0;<br>@@ -3329,16 
+3783,17 @@ ev_pending_count (EV_P) EV_THROW<br> return count;<br> }<br> <br>-noinline<br>+ecb_noinline<br> void<br> ev_invoke_pending (EV_P)<br> {<br> pendingpri = NUMPRI;<br> <br>- while (pendingpri) /* pendingpri possibly gets modified in the inner loop */<br>+ do<br> {<br> --pendingpri;<br> <br>+ /* pendingpri possibly gets modified in the inner loop */<br> while (pendingcnt [pendingpri])<br> {<br> ANPENDING *p = pendings [pendingpri] + --pendingcnt [pendingpri];<br>@@ -3348,6 +3803,7 @@ ev_invoke_pending (EV_P)<br> EV_FREQUENT_CHECK;<br> }<br> }<br>+ while (pendingpri);<br> }<br> <br> #if EV_IDLE_ENABLE<br>@@ -3356,7 +3812,7 @@ ev_invoke_pending (EV_P)<br> inline_size void<br> idle_reify (EV_P)<br> {<br>- if (expect_false (idleall))<br>+ if (ecb_expect_false (idleall))<br> {<br> int pri;<br> <br>@@ -3396,7 +3852,7 @@ timers_reify (EV_P)<br> if (ev_at (w) < mn_now)<br> ev_at (w) = mn_now;<br> <br>- assert (("libev: negative ev_timer repeat value found while processing timers", w->repeat > 0.));<br>+ assert (("libev: negative ev_timer repeat value found while processing timers", w->repeat > EV_TS_CONST (0.)));<br> <br> ANHE_at_cache (timers [HEAP0]);<br> downheap (timers, timercnt, HEAP0);<br>@@ -3415,7 +3871,7 @@ timers_reify (EV_P)<br> <br> #if EV_PERIODIC_ENABLE<br> <br>-noinline<br>+ecb_noinline<br> static void<br> periodic_recalc (EV_P_ ev_periodic *w)<br> {<br>@@ -3428,7 +3884,7 @@ periodic_recalc (EV_P_ ev_periodic *w)<br> ev_tstamp nat = at + w->interval;<br> <br> /* when resolution fails us, we use ev_rt_now */<br>- if (expect_false (nat == at))<br>+ if (ecb_expect_false (nat == at))<br> {<br> at = ev_rt_now;<br> break;<br>@@ -3484,7 +3940,7 @@ periodics_reify (EV_P)<br> <br> /* simply recalculate all periodics */<br> /* TODO: maybe ensure that at least one event happens when jumping forward? 
*/<br>-noinline ecb_cold<br>+ecb_noinline ecb_cold<br> static void<br> periodics_reschedule (EV_P)<br> {<br>@@ -3508,7 +3964,7 @@ periodics_reschedule (EV_P)<br> #endif<br> <br> /* adjust all timers by a given offset */<br>-noinline ecb_cold<br>+ecb_noinline ecb_cold<br> static void<br> timers_reschedule (EV_P_ ev_tstamp adjust)<br> {<br>@@ -3528,7 +3984,7 @@ inline_speed void<br> time_update (EV_P_ ev_tstamp max_block)<br> {<br> #if EV_USE_MONOTONIC<br>- if (expect_true (have_monotonic))<br>+ if (ecb_expect_true (have_monotonic))<br> {<br> int i;<br> ev_tstamp odiff = rtmn_diff;<br>@@ -3537,7 +3993,7 @@ time_update (EV_P_ ev_tstamp max_block)<br> <br> /* only fetch the realtime clock every 0.5*MIN_TIMEJUMP seconds */<br> /* interpolate in the meantime */<br>- if (expect_true (mn_now - now_floor < MIN_TIMEJUMP * .5))<br>+ if (ecb_expect_true (mn_now - now_floor < EV_TS_CONST (MIN_TIMEJUMP * .5)))<br> {<br> ev_rt_now = rtmn_diff + mn_now;<br> return;<br>@@ -3561,7 +4017,7 @@ time_update (EV_P_ ev_tstamp max_block)<br> <br> diff = odiff - rtmn_diff;<br> <br>- if (expect_true ((diff < 0. ? -diff : diff) < MIN_TIMEJUMP))<br>+ if (ecb_expect_true ((diff < EV_TS_CONST (0.) ? -diff : diff) < EV_TS_CONST (MIN_TIMEJUMP)))<br> return; /* all is well */<br> <br> ev_rt_now = ev_time ();<br>@@ -3580,7 +4036,7 @@ time_update (EV_P_ ev_tstamp max_block)<br> {<br> ev_rt_now = ev_time ();<br> <br>- if (expect_false (mn_now > ev_rt_now || ev_rt_now > mn_now + max_block + MIN_TIMEJUMP))<br>+ if (ecb_expect_false (mn_now > ev_rt_now || ev_rt_now > mn_now + max_block + EV_TS_CONST (MIN_TIMEJUMP)))<br> {<br> /* adjust timers. 
this is easy, as the offset is the same for all of them */<br> timers_reschedule (EV_A_ ev_rt_now - mn_now);<br>@@ -3613,8 +4069,8 @@ ev_run (EV_P_ int flags)<br> #endif<br> <br> #ifndef _WIN32<br>- if (expect_false (curpid)) /* penalise the forking check even more */<br>- if (expect_false (getpid () != curpid))<br>+ if (ecb_expect_false (curpid)) /* penalise the forking check even more */<br>+ if (ecb_expect_false (getpid () != curpid))<br> {<br> curpid = getpid ();<br> postfork = 1;<br>@@ -3623,7 +4079,7 @@ ev_run (EV_P_ int flags)<br> <br> #if EV_FORK_ENABLE<br> /* we might have forked, so queue fork handlers */<br>- if (expect_false (postfork))<br>+ if (ecb_expect_false (postfork))<br> if (forkcnt)<br> {<br> queue_events (EV_A_ (W *)forks, forkcnt, EV_FORK);<br>@@ -3633,18 +4089,18 @@ ev_run (EV_P_ int flags)<br> <br> #if EV_PREPARE_ENABLE<br> /* queue prepare watchers (and execute them) */<br>- if (expect_false (preparecnt))<br>+ if (ecb_expect_false (preparecnt))<br> {<br> queue_events (EV_A_ (W *)prepares, preparecnt, EV_PREPARE);<br> EV_INVOKE_PENDING;<br> }<br> #endif<br> <br>- if (expect_false (loop_done))<br>+ if (ecb_expect_false (loop_done))<br> break;<br> <br> /* we might have forked, so reify kernel state if necessary */<br>- if (expect_false (postfork))<br>+ if (ecb_expect_false (postfork))<br> loop_fork (EV_A);<br> <br> /* update fd-related kernel structures */<br>@@ -3659,16 +4115,28 @@ ev_run (EV_P_ int flags)<br> ev_tstamp prev_mn_now = mn_now;<br> <br> /* update time to cancel out callback processing overhead */<br>- time_update (EV_A_ 1e100);<br>+ time_update (EV_A_ EV_TS_CONST (EV_TSTAMP_HUGE));<br> <br> /* from now on, we want a pipe-wake-up */<br> pipe_write_wanted = 1;<br> <br> ECB_MEMORY_FENCE; /* make sure pipe_write_wanted is visible before we check for potential skips */<br> <br>- if (expect_true (!(flags & EVRUN_NOWAIT || idleall || !activecnt || pipe_write_skipped)))<br>+ if (ecb_expect_true (!(flags & EVRUN_NOWAIT || idleall || 
!activecnt || pipe_write_skipped)))<br> {<br>- waittime = MAX_BLOCKTIME;<br>+ waittime = EV_TS_CONST (MAX_BLOCKTIME);<br>+<br>+#if EV_USE_TIMERFD<br>+ /* sleep a lot longer when we can reliably detect timejumps */<br>+ if (ecb_expect_true (timerfd >= 0))<br>+ waittime = EV_TS_CONST (MAX_BLOCKTIME2);<br>+#endif<br>+#if !EV_PERIODIC_ENABLE<br>+ /* without periodics but with monotonic clock there is no need */<br>+ /* for any time jump detection, so sleep longer */<br>+ if (ecb_expect_true (have_monotonic))<br>+ waittime = EV_TS_CONST (MAX_BLOCKTIME2);<br>+#endif<br> <br> if (timercnt)<br> {<br>@@ -3685,23 +4153,28 @@ ev_run (EV_P_ int flags)<br> #endif<br> <br> /* don't let timeouts decrease the waittime below timeout_blocktime */<br>- if (expect_false (waittime < timeout_blocktime))<br>+ if (ecb_expect_false (waittime < timeout_blocktime))<br> waittime = timeout_blocktime;<br> <br>- /* at this point, we NEED to wait, so we have to ensure */<br>- /* to pass a minimum nonzero value to the backend */<br>- if (expect_false (waittime < backend_mintime))<br>- waittime = backend_mintime;<br>+ /* now there are two more special cases left, either we have<br>+ * already-expired timers, so we should not sleep, or we have timers<br>+ * that expire very soon, in which case we need to wait for a minimum<br>+ * amount of time for some event loop backends.<br>+ */<br>+ if (ecb_expect_false (waittime < backend_mintime))<br>+ waittime = waittime <= EV_TS_CONST (0.)<br>+ ? 
EV_TS_CONST (0.)<br>+ : backend_mintime;<br> <br> /* extra check because io_blocktime is commonly 0 */<br>- if (expect_false (io_blocktime))<br>+ if (ecb_expect_false (io_blocktime))<br> {<br> sleeptime = io_blocktime - (mn_now - prev_mn_now);<br> <br> if (sleeptime > waittime - backend_mintime)<br> sleeptime = waittime - backend_mintime;<br> <br>- if (expect_true (sleeptime > 0.))<br>+ if (ecb_expect_true (sleeptime > EV_TS_CONST (0.)))<br> {<br> ev_sleep (sleeptime);<br> waittime -= sleeptime;<br>@@ -3725,7 +4198,6 @@ ev_run (EV_P_ int flags)<br> ev_feed_event (EV_A_ &pipe_w, EV_CUSTOM);<br> }<br> <br>-<br> /* update ev_rt_now, do magic */<br> time_update (EV_A_ waittime + sleeptime);<br> }<br>@@ -3743,13 +4215,13 @@ ev_run (EV_P_ int flags)<br> <br> #if EV_CHECK_ENABLE<br> /* queue check watchers, to be executed first */<br>- if (expect_false (checkcnt))<br>+ if (ecb_expect_false (checkcnt))<br> queue_events (EV_A_ (W *)checks, checkcnt, EV_CHECK);<br> #endif<br> <br> EV_INVOKE_PENDING;<br> }<br>- while (expect_true (<br>+ while (ecb_expect_true (<br> activecnt<br> && !loop_done<br> && !(flags & (EVRUN_ONCE | EVRUN_NOWAIT))<br>@@ -3766,43 +4238,43 @@ ev_run (EV_P_ int flags)<br> }<br> <br> void<br>-ev_break (EV_P_ int how) EV_THROW<br>+ev_break (EV_P_ int how) EV_NOEXCEPT<br> {<br> loop_done = how;<br> }<br> <br> void<br>-ev_ref (EV_P) EV_THROW<br>+ev_ref (EV_P) EV_NOEXCEPT<br> {<br> ++activecnt;<br> }<br> <br> void<br>-ev_unref (EV_P) EV_THROW<br>+ev_unref (EV_P) EV_NOEXCEPT<br> {<br> --activecnt;<br> }<br> <br> int<br>-ev_activecnt (EV_P) EV_THROW<br>+ev_activecnt (EV_P) EV_NOEXCEPT<br> {<br> return activecnt;<br> }<br> <br> void<br>-ev_now_update (EV_P) EV_THROW<br>+ev_now_update (EV_P) EV_NOEXCEPT<br> {<br>- time_update (EV_A_ 1e100);<br>+ time_update (EV_A_ EV_TSTAMP_HUGE);<br> }<br> <br> void<br>-ev_suspend (EV_P) EV_THROW<br>+ev_suspend (EV_P) EV_NOEXCEPT<br> {<br> ev_now_update (EV_A);<br> }<br> <br> void<br>-ev_resume (EV_P) EV_THROW<br>+ev_resume 
(EV_P) EV_NOEXCEPT<br> {<br> ev_tstamp mn_prev = mn_now;<br> <br>@@ -3829,7 +4301,7 @@ wlist_del (WL *head, WL elem)<br> {<br> while (*head)<br> {<br>- if (expect_true (*head == elem))<br>+ if (ecb_expect_true (*head == elem))<br> {<br> *head = elem->next;<br> break;<br>@@ -3851,12 +4323,12 @@ clear_pending (EV_P_ W w)<br> }<br> <br> int<br>-ev_clear_pending (EV_P_ void *w) EV_THROW<br>+ev_clear_pending (EV_P_ void *w) EV_NOEXCEPT<br> {<br> W w_ = (W)w;<br> int pending = w_->pending;<br> <br>- if (expect_true (pending))<br>+ if (ecb_expect_true (pending))<br> {<br> ANPENDING *p = pendings [ABSPRI (w_)] + pending - 1;<br> p->w = (W)&pending_w;<br>@@ -3893,22 +4365,25 @@ ev_stop (EV_P_ W w)<br> <br> /*****************************************************************************/<br> <br>-noinline<br>+ecb_noinline<br> void<br>-ev_io_start (EV_P_ ev_io *w) EV_THROW<br>+ev_io_start (EV_P_ ev_io *w) EV_NOEXCEPT<br> {<br> int fd = w->fd;<br> <br>- if (expect_false (ev_is_active (w)))<br>+ if (ecb_expect_false (ev_is_active (w)))<br> return;<br> <br> assert (("libev: ev_io_start called with negative fd", fd >= 0));<br> assert (("libev: ev_io_start called with illegal event mask", !(w->events & ~(EV__IOFDSET | EV_READ | EV_WRITE))));<br> <br>+#if EV_VERIFY >= 2<br>+ assert (("libev: ev_io_start called on watcher with invalid fd", fd_valid (fd)));<br>+#endif<br> EV_FREQUENT_CHECK;<br> <br> ev_start (EV_A_ (W)w, 1);<br>- array_needsize (ANFD, anfds, anfdmax, fd + 1, array_init_zero);<br>+ array_needsize (ANFD, anfds, anfdmax, fd + 1, array_needsize_zerofill);<br> wlist_add (&anfds[fd].head, (WL)w);<br> <br> /* common bug, apparently */<br>@@ -3920,16 +4395,19 @@ ev_io_start (EV_P_ ev_io *w) EV_THROW<br> EV_FREQUENT_CHECK;<br> }<br> <br>-noinline<br>+ecb_noinline<br> void<br>-ev_io_stop (EV_P_ ev_io *w) EV_THROW<br>+ev_io_stop (EV_P_ ev_io *w) EV_NOEXCEPT<br> {<br> clear_pending (EV_A_ (W)w);<br>- if (expect_false (!ev_is_active (w)))<br>+ if (ecb_expect_false (!ev_is_active 
(w)))<br> return;<br> <br> assert (("libev: ev_io_stop called with illegal fd (must stay constant after start!)", w->fd >= 0 && w->fd < anfdmax));<br> <br>+#if EV_VERIFY >= 2<br>+ assert (("libev: ev_io_stop called on watcher with invalid fd", fd_valid (w->fd)));<br>+#endif<br> EV_FREQUENT_CHECK;<br> <br> wlist_del (&anfds[w->fd].head, (WL)w);<br>@@ -3947,7 +4425,7 @@ ev_io_stop (EV_P_ ev_io *w) EV_THROW<br> * backend is properly updated.<br> */<br> void noinline<br>-ev_io_closing (EV_P_ int fd, int revents) EV_THROW<br>+ev_io_closing (EV_P_ int fd, int revents) EV_NOEXCEPT<br> {<br> ev_io *w;<br> if (fd < 0 || fd >= anfdmax)<br>@@ -3960,11 +4438,11 @@ ev_io_closing (EV_P_ int fd, int revents) EV_THROW<br> }<br> }<br> <br>-noinline<br>+ecb_noinline<br> void<br>-ev_timer_start (EV_P_ ev_timer *w) EV_THROW<br>+ev_timer_start (EV_P_ ev_timer *w) EV_NOEXCEPT<br> {<br>- if (expect_false (ev_is_active (w)))<br>+ if (ecb_expect_false (ev_is_active (w)))<br> return;<br> <br> ev_at (w) += mn_now;<br>@@ -3975,7 +4453,7 @@ ev_timer_start (EV_P_ ev_timer *w) EV_THROW<br> <br> ++timercnt;<br> ev_start (EV_A_ (W)w, timercnt + HEAP0 - 1);<br>- array_needsize (ANHE, timers, timermax, ev_active (w) + 1, EMPTY2);<br>+ array_needsize (ANHE, timers, timermax, ev_active (w) + 1, array_needsize_noinit);<br> ANHE_w (timers [ev_active (w)]) = (WT)w;<br> ANHE_at_cache (timers [ev_active (w)]);<br> upheap (timers, ev_active (w));<br>@@ -3985,12 +4463,12 @@ ev_timer_start (EV_P_ ev_timer *w) EV_THROW<br> /*assert (("libev: internal timer heap corruption", timers [ev_active (w)] == (WT)w));*/<br> }<br> <br>-noinline<br>+ecb_noinline<br> void<br>-ev_timer_stop (EV_P_ ev_timer *w) EV_THROW<br>+ev_timer_stop (EV_P_ ev_timer *w) EV_NOEXCEPT<br> {<br> clear_pending (EV_A_ (W)w);<br>- if (expect_false (!ev_is_active (w)))<br>+ if (ecb_expect_false (!ev_is_active (w)))<br> return;<br> <br> EV_FREQUENT_CHECK;<br>@@ -4002,7 +4480,7 @@ ev_timer_stop (EV_P_ ev_timer *w) EV_THROW<br> <br> --timercnt;<br> 
<br>- if (expect_true (active < timercnt + HEAP0))<br>+ if (ecb_expect_true (active < timercnt + HEAP0))<br> {<br> timers [active] = timers [timercnt + HEAP0];<br> adjustheap (timers, timercnt, active);<br>@@ -4016,9 +4494,9 @@ ev_timer_stop (EV_P_ ev_timer *w) EV_THROW<br> EV_FREQUENT_CHECK;<br> }<br> <br>-noinline<br>+ecb_noinline<br> void<br>-ev_timer_again (EV_P_ ev_timer *w) EV_THROW<br>+ev_timer_again (EV_P_ ev_timer *w) EV_NOEXCEPT<br> {<br> EV_FREQUENT_CHECK;<br> <br>@@ -4045,19 +4523,24 @@ ev_timer_again (EV_P_ ev_timer *w) EV_THROW<br> }<br> <br> ev_tstamp<br>-ev_timer_remaining (EV_P_ ev_timer *w) EV_THROW<br>+ev_timer_remaining (EV_P_ ev_timer *w) EV_NOEXCEPT<br> {<br>- return ev_at (w) - (ev_is_active (w) ? mn_now : 0.);<br>+ return ev_at (w) - (ev_is_active (w) ? mn_now : EV_TS_CONST (0.));<br> }<br> <br> #if EV_PERIODIC_ENABLE<br>-noinline<br>+ecb_noinline<br> void<br>-ev_periodic_start (EV_P_ ev_periodic *w) EV_THROW<br>+ev_periodic_start (EV_P_ ev_periodic *w) EV_NOEXCEPT<br> {<br>- if (expect_false (ev_is_active (w)))<br>+ if (ecb_expect_false (ev_is_active (w)))<br> return;<br> <br>+#if EV_USE_TIMERFD<br>+ if (timerfd == -2)<br>+ evtimerfd_init (EV_A);<br>+#endif<br>+<br> if (w->reschedule_cb)<br> ev_at (w) = w->reschedule_cb (w, ev_rt_now);<br> else if (w->interval)<br>@@ -4072,7 +4555,7 @@ ev_periodic_start (EV_P_ ev_periodic *w) EV_THROW<br> <br> ++periodiccnt;<br> ev_start (EV_A_ (W)w, periodiccnt + HEAP0 - 1);<br>- array_needsize (ANHE, periodics, periodicmax, ev_active (w) + 1, EMPTY2);<br>+ array_needsize (ANHE, periodics, periodicmax, ev_active (w) + 1, array_needsize_noinit);<br> ANHE_w (periodics [ev_active (w)]) = (WT)w;<br> ANHE_at_cache (periodics [ev_active (w)]);<br> upheap (periodics, ev_active (w));<br>@@ -4082,12 +4565,12 @@ ev_periodic_start (EV_P_ ev_periodic *w) EV_THROW<br> /*assert (("libev: internal periodic heap corruption", ANHE_w (periodics [ev_active (w)]) == (WT)w));*/<br> }<br> <br>-noinline<br>+ecb_noinline<br> 
void<br>-ev_periodic_stop (EV_P_ ev_periodic *w) EV_THROW<br>+ev_periodic_stop (EV_P_ ev_periodic *w) EV_NOEXCEPT<br> {<br> clear_pending (EV_A_ (W)w);<br>- if (expect_false (!ev_is_active (w)))<br>+ if (ecb_expect_false (!ev_is_active (w)))<br> return;<br> <br> EV_FREQUENT_CHECK;<br>@@ -4099,7 +4582,7 @@ ev_periodic_stop (EV_P_ ev_periodic *w) EV_THROW<br> <br> --periodiccnt;<br> <br>- if (expect_true (active < periodiccnt + HEAP0))<br>+ if (ecb_expect_true (active < periodiccnt + HEAP0))<br> {<br> periodics [active] = periodics [periodiccnt + HEAP0];<br> adjustheap (periodics, periodiccnt, active);<br>@@ -4111,9 +4594,9 @@ ev_periodic_stop (EV_P_ ev_periodic *w) EV_THROW<br> EV_FREQUENT_CHECK;<br> }<br> <br>-noinline<br>+ecb_noinline<br> void<br>-ev_periodic_again (EV_P_ ev_periodic *w) EV_THROW<br>+ev_periodic_again (EV_P_ ev_periodic *w) EV_NOEXCEPT<br> {<br> /* TODO: use adjustheap and recalculation */<br> ev_periodic_stop (EV_A_ w);<br>@@ -4127,11 +4610,11 @@ ev_periodic_again (EV_P_ ev_periodic *w) EV_THROW<br> <br> #if EV_SIGNAL_ENABLE<br> <br>-noinline<br>+ecb_noinline<br> void<br>-ev_signal_start (EV_P_ ev_signal *w) EV_THROW<br>+ev_signal_start (EV_P_ ev_signal *w) EV_NOEXCEPT<br> {<br>- if (expect_false (ev_is_active (w)))<br>+ if (ecb_expect_false (ev_is_active (w)))<br> return;<br> <br> assert (("libev: ev_signal_start called with illegal signal number", w->signum > 0 && w->signum < EV_NSIG));<br>@@ -4210,12 +4693,12 @@ ev_signal_start (EV_P_ ev_signal *w) EV_THROW<br> EV_FREQUENT_CHECK;<br> }<br> <br>-noinline<br>+ecb_noinline<br> void<br>-ev_signal_stop (EV_P_ ev_signal *w) EV_THROW<br>+ev_signal_stop (EV_P_ ev_signal *w) EV_NOEXCEPT<br> {<br> clear_pending (EV_A_ (W)w);<br>- if (expect_false (!ev_is_active (w)))<br>+ if (ecb_expect_false (!ev_is_active (w)))<br> return;<br> <br> EV_FREQUENT_CHECK;<br>@@ -4253,12 +4736,12 @@ ev_signal_stop (EV_P_ ev_signal *w) EV_THROW<br> #if EV_CHILD_ENABLE<br> <br> void<br>-ev_child_start (EV_P_ ev_child *w) 
EV_THROW<br>+ev_child_start (EV_P_ ev_child *w) EV_NOEXCEPT<br> {<br> #if EV_MULTIPLICITY<br> assert (("libev: child watchers are only supported in the default loop", loop == ev_default_loop_ptr));<br> #endif<br>- if (expect_false (ev_is_active (w)))<br>+ if (ecb_expect_false (ev_is_active (w)))<br> return;<br> <br> EV_FREQUENT_CHECK;<br>@@ -4270,10 +4753,10 @@ ev_child_start (EV_P_ ev_child *w) EV_THROW<br> }<br> <br> void<br>-ev_child_stop (EV_P_ ev_child *w) EV_THROW<br>+ev_child_stop (EV_P_ ev_child *w) EV_NOEXCEPT<br> {<br> clear_pending (EV_A_ (W)w);<br>- if (expect_false (!ev_is_active (w)))<br>+ if (ecb_expect_false (!ev_is_active (w)))<br> return;<br> <br> EV_FREQUENT_CHECK;<br>@@ -4297,14 +4780,14 @@ ev_child_stop (EV_P_ ev_child *w) EV_THROW<br> #define NFS_STAT_INTERVAL 30.1074891 /* for filesystems potentially failing inotify */<br> #define MIN_STAT_INTERVAL 0.1074891<br> <br>-noinline static void stat_timer_cb (EV_P_ ev_timer *w_, int revents);<br>+ecb_noinline static void stat_timer_cb (EV_P_ ev_timer *w_, int revents);<br> <br> #if EV_USE_INOTIFY<br> <br> /* the * 2 is to allow for alignment padding, which for some reason is >> 8 */<br> # define EV_INOTIFY_BUFSIZE (sizeof (struct inotify_event) * 2 + NAME_MAX)<br> <br>-noinline<br>+ecb_noinline<br> static void<br> infy_add (EV_P_ ev_stat *w)<br> {<br>@@ -4379,7 +4862,7 @@ infy_add (EV_P_ ev_stat *w)<br> if (ev_is_active (&w->timer)) ev_unref (EV_A);<br> }<br> <br>-noinline<br>+ecb_noinline<br> static void<br> infy_del (EV_P_ ev_stat *w)<br> {<br>@@ -4397,7 +4880,7 @@ infy_del (EV_P_ ev_stat *w)<br> inotify_rm_watch (fs_fd, wd);<br> }<br> <br>-noinline<br>+ecb_noinline<br> static void<br> infy_wd (EV_P_ int slot, int wd, struct inotify_event *ev)<br> {<br>@@ -4545,7 +5028,7 @@ infy_fork (EV_P)<br> #endif<br> <br> void<br>-ev_stat_stat (EV_P_ ev_stat *w) EV_THROW<br>+ev_stat_stat (EV_P_ ev_stat *w) EV_NOEXCEPT<br> {<br> if (lstat (w->path, &w->attr) < 0)<br> w->attr.st_nlink = 0;<br>@@ -4553,7 +5036,7 
@@ ev_stat_stat (EV_P_ ev_stat *w) EV_THROW<br> w->attr.st_nlink = 1;<br> }<br> <br>-noinline<br>+ecb_noinline<br> static void<br> stat_timer_cb (EV_P_ ev_timer *w_, int revents)<br> {<br>@@ -4604,9 +5087,9 @@ stat_timer_cb (EV_P_ ev_timer *w_, int revents)<br> }<br> <br> void<br>-ev_stat_start (EV_P_ ev_stat *w) EV_THROW<br>+ev_stat_start (EV_P_ ev_stat *w) EV_NOEXCEPT<br> {<br>- if (expect_false (ev_is_active (w)))<br>+ if (ecb_expect_false (ev_is_active (w)))<br> return;<br> <br> ev_stat_stat (EV_A_ w);<br>@@ -4635,10 +5118,10 @@ ev_stat_start (EV_P_ ev_stat *w) EV_THROW<br> }<br> <br> void<br>-ev_stat_stop (EV_P_ ev_stat *w) EV_THROW<br>+ev_stat_stop (EV_P_ ev_stat *w) EV_NOEXCEPT<br> {<br> clear_pending (EV_A_ (W)w);<br>- if (expect_false (!ev_is_active (w)))<br>+ if (ecb_expect_false (!ev_is_active (w)))<br> return;<br> <br> EV_FREQUENT_CHECK;<br>@@ -4661,9 +5144,9 @@ ev_stat_stop (EV_P_ ev_stat *w) EV_THROW<br> <br> #if EV_IDLE_ENABLE<br> void<br>-ev_idle_start (EV_P_ ev_idle *w) EV_THROW<br>+ev_idle_start (EV_P_ ev_idle *w) EV_NOEXCEPT<br> {<br>- if (expect_false (ev_is_active (w)))<br>+ if (ecb_expect_false (ev_is_active (w)))<br> return;<br> <br> pri_adjust (EV_A_ (W)w);<br>@@ -4676,7 +5159,7 @@ ev_idle_start (EV_P_ ev_idle *w) EV_THROW<br> ++idleall;<br> ev_start (EV_A_ (W)w, active);<br> <br>- array_needsize (ev_idle *, idles [ABSPRI (w)], idlemax [ABSPRI (w)], active, EMPTY2);<br>+ array_needsize (ev_idle *, idles [ABSPRI (w)], idlemax [ABSPRI (w)], active, array_needsize_noinit);<br> idles [ABSPRI (w)][active - 1] = w;<br> }<br> <br>@@ -4684,10 +5167,10 @@ ev_idle_start (EV_P_ ev_idle *w) EV_THROW<br> }<br> <br> void<br>-ev_idle_stop (EV_P_ ev_idle *w) EV_THROW<br>+ev_idle_stop (EV_P_ ev_idle *w) EV_NOEXCEPT<br> {<br> clear_pending (EV_A_ (W)w);<br>- if (expect_false (!ev_is_active (w)))<br>+ if (ecb_expect_false (!ev_is_active (w)))<br> return;<br> <br> EV_FREQUENT_CHECK;<br>@@ -4708,25 +5191,25 @@ ev_idle_stop (EV_P_ ev_idle *w) EV_THROW<br> <br> 
#if EV_PREPARE_ENABLE<br> void<br>-ev_prepare_start (EV_P_ ev_prepare *w) EV_THROW<br>+ev_prepare_start (EV_P_ ev_prepare *w) EV_NOEXCEPT<br> {<br>- if (expect_false (ev_is_active (w)))<br>+ if (ecb_expect_false (ev_is_active (w)))<br> return;<br> <br> EV_FREQUENT_CHECK;<br> <br> ev_start (EV_A_ (W)w, ++preparecnt);<br>- array_needsize (ev_prepare *, prepares, preparemax, preparecnt, EMPTY2);<br>+ array_needsize (ev_prepare *, prepares, preparemax, preparecnt, array_needsize_noinit);<br> prepares [preparecnt - 1] = w;<br> <br> EV_FREQUENT_CHECK;<br> }<br> <br> void<br>-ev_prepare_stop (EV_P_ ev_prepare *w) EV_THROW<br>+ev_prepare_stop (EV_P_ ev_prepare *w) EV_NOEXCEPT<br> {<br> clear_pending (EV_A_ (W)w);<br>- if (expect_false (!ev_is_active (w)))<br>+ if (ecb_expect_false (!ev_is_active (w)))<br> return;<br> <br> EV_FREQUENT_CHECK;<br>@@ -4746,25 +5229,25 @@ ev_prepare_stop (EV_P_ ev_prepare *w) EV_THROW<br> <br> #if EV_CHECK_ENABLE<br> void<br>-ev_check_start (EV_P_ ev_check *w) EV_THROW<br>+ev_check_start (EV_P_ ev_check *w) EV_NOEXCEPT<br> {<br>- if (expect_false (ev_is_active (w)))<br>+ if (ecb_expect_false (ev_is_active (w)))<br> return;<br> <br> EV_FREQUENT_CHECK;<br> <br> ev_start (EV_A_ (W)w, ++checkcnt);<br>- array_needsize (ev_check *, checks, checkmax, checkcnt, EMPTY2);<br>+ array_needsize (ev_check *, checks, checkmax, checkcnt, array_needsize_noinit);<br> checks [checkcnt - 1] = w;<br> <br> EV_FREQUENT_CHECK;<br> }<br> <br> void<br>-ev_check_stop (EV_P_ ev_check *w) EV_THROW<br>+ev_check_stop (EV_P_ ev_check *w) EV_NOEXCEPT<br> {<br> clear_pending (EV_A_ (W)w);<br>- if (expect_false (!ev_is_active (w)))<br>+ if (ecb_expect_false (!ev_is_active (w)))<br> return;<br> <br> EV_FREQUENT_CHECK;<br>@@ -4783,9 +5266,9 @@ ev_check_stop (EV_P_ ev_check *w) EV_THROW<br> #endif<br> <br> #if EV_EMBED_ENABLE<br>-noinline<br>+ecb_noinline<br> void<br>-ev_embed_sweep (EV_P_ ev_embed *w) EV_THROW<br>+ev_embed_sweep (EV_P_ ev_embed *w) EV_NOEXCEPT<br> {<br> ev_run 
(w->other, EVRUN_NOWAIT);<br> }<br>@@ -4817,6 +5300,7 @@ embed_prepare_cb (EV_P_ ev_prepare *prepare, int revents)<br> }<br> }<br> <br>+#if EV_FORK_ENABLE<br> static void<br> embed_fork_cb (EV_P_ ev_fork *fork_w, int revents)<br> {<br>@@ -4833,6 +5317,7 @@ embed_fork_cb (EV_P_ ev_fork *fork_w, int revents)<br> <br> ev_embed_start (EV_A_ w);<br> }<br>+#endif<br> <br> #if 0<br> static void<br>@@ -4843,9 +5328,9 @@ embed_idle_cb (EV_P_ ev_idle *idle, int revents)<br> #endif<br> <br> void<br>-ev_embed_start (EV_P_ ev_embed *w) EV_THROW<br>+ev_embed_start (EV_P_ ev_embed *w) EV_NOEXCEPT<br> {<br>- if (expect_false (ev_is_active (w)))<br>+ if (ecb_expect_false (ev_is_active (w)))<br> return;<br> <br> {<br>@@ -4863,8 +5348,10 @@ ev_embed_start (EV_P_ ev_embed *w) EV_THROW<br> ev_set_priority (&w->prepare, EV_MINPRI);<br> ev_prepare_start (EV_A_ &w->prepare);<br> <br>+#if EV_FORK_ENABLE<br> ev_fork_init (&w->fork, embed_fork_cb);<br> ev_fork_start (EV_A_ &w->fork);<br>+#endif<br> <br> /*ev_idle_init (&w->idle, e,bed_idle_cb);*/<br> <br>@@ -4874,17 +5361,19 @@ ev_embed_start (EV_P_ ev_embed *w) EV_THROW<br> }<br> <br> void<br>-ev_embed_stop (EV_P_ ev_embed *w) EV_THROW<br>+ev_embed_stop (EV_P_ ev_embed *w) EV_NOEXCEPT<br> {<br> clear_pending (EV_A_ (W)w);<br>- if (expect_false (!ev_is_active (w)))<br>+ if (ecb_expect_false (!ev_is_active (w)))<br> return;<br> <br> EV_FREQUENT_CHECK;<br> <br> ev_io_stop (EV_A_ &w->io);<br> ev_prepare_stop (EV_A_ &w->prepare);<br>+#if EV_FORK_ENABLE<br> ev_fork_stop (EV_A_ &w->fork);<br>+#endif<br> <br> ev_stop (EV_A_ (W)w);<br> <br>@@ -4894,25 +5383,25 @@ ev_embed_stop (EV_P_ ev_embed *w) EV_THROW<br> <br> #if EV_FORK_ENABLE<br> void<br>-ev_fork_start (EV_P_ ev_fork *w) EV_THROW<br>+ev_fork_start (EV_P_ ev_fork *w) EV_NOEXCEPT<br> {<br>- if (expect_false (ev_is_active (w)))<br>+ if (ecb_expect_false (ev_is_active (w)))<br> return;<br> <br> EV_FREQUENT_CHECK;<br> <br> ev_start (EV_A_ (W)w, ++forkcnt);<br>- array_needsize (ev_fork *, forks, 
forkmax, forkcnt, EMPTY2);<br>+ array_needsize (ev_fork *, forks, forkmax, forkcnt, array_needsize_noinit);<br> forks [forkcnt - 1] = w;<br> <br> EV_FREQUENT_CHECK;<br> }<br> <br> void<br>-ev_fork_stop (EV_P_ ev_fork *w) EV_THROW<br>+ev_fork_stop (EV_P_ ev_fork *w) EV_NOEXCEPT<br> {<br> clear_pending (EV_A_ (W)w);<br>- if (expect_false (!ev_is_active (w)))<br>+ if (ecb_expect_false (!ev_is_active (w)))<br> return;<br> <br> EV_FREQUENT_CHECK;<br>@@ -4932,15 +5421,15 @@ ev_fork_stop (EV_P_ ev_fork *w) EV_THROW<br> <br> #if EV_CLEANUP_ENABLE<br> void<br>-ev_cleanup_start (EV_P_ ev_cleanup *w) EV_THROW<br>+ev_cleanup_start (EV_P_ ev_cleanup *w) EV_NOEXCEPT<br> {<br>- if (expect_false (ev_is_active (w)))<br>+ if (ecb_expect_false (ev_is_active (w)))<br> return;<br> <br> EV_FREQUENT_CHECK;<br> <br> ev_start (EV_A_ (W)w, ++cleanupcnt);<br>- array_needsize (ev_cleanup *, cleanups, cleanupmax, cleanupcnt, EMPTY2);<br>+ array_needsize (ev_cleanup *, cleanups, cleanupmax, cleanupcnt, array_needsize_noinit);<br> cleanups [cleanupcnt - 1] = w;<br> <br> /* cleanup watchers should never keep a refcount on the loop */<br>@@ -4949,10 +5438,10 @@ ev_cleanup_start (EV_P_ ev_cleanup *w) EV_THROW<br> }<br> <br> void<br>-ev_cleanup_stop (EV_P_ ev_cleanup *w) EV_THROW<br>+ev_cleanup_stop (EV_P_ ev_cleanup *w) EV_NOEXCEPT<br> {<br> clear_pending (EV_A_ (W)w);<br>- if (expect_false (!ev_is_active (w)))<br>+ if (ecb_expect_false (!ev_is_active (w)))<br> return;<br> <br> EV_FREQUENT_CHECK;<br>@@ -4973,9 +5462,9 @@ ev_cleanup_stop (EV_P_ ev_cleanup *w) EV_THROW<br> <br> #if EV_ASYNC_ENABLE<br> void<br>-ev_async_start (EV_P_ ev_async *w) EV_THROW<br>+ev_async_start (EV_P_ ev_async *w) EV_NOEXCEPT<br> {<br>- if (expect_false (ev_is_active (w)))<br>+ if (ecb_expect_false (ev_is_active (w)))<br> return;<br> <br> w->sent = 0;<br>@@ -4985,17 +5474,17 @@ ev_async_start (EV_P_ ev_async *w) EV_THROW<br> EV_FREQUENT_CHECK;<br> <br> ev_start (EV_A_ (W)w, ++asynccnt);<br>- array_needsize (ev_async *, 
asyncs, asyncmax, asynccnt, EMPTY2);<br>+ array_needsize (ev_async *, asyncs, asyncmax, asynccnt, array_needsize_noinit);<br> asyncs [asynccnt - 1] = w;<br> <br> EV_FREQUENT_CHECK;<br> }<br> <br> void<br>-ev_async_stop (EV_P_ ev_async *w) EV_THROW<br>+ev_async_stop (EV_P_ ev_async *w) EV_NOEXCEPT<br> {<br> clear_pending (EV_A_ (W)w);<br>- if (expect_false (!ev_is_active (w)))<br>+ if (ecb_expect_false (!ev_is_active (w)))<br> return;<br> <br> EV_FREQUENT_CHECK;<br>@@ -5013,7 +5502,7 @@ ev_async_stop (EV_P_ ev_async *w) EV_THROW<br> }<br> <br> void<br>-ev_async_send (EV_P_ ev_async *w) EV_THROW<br>+ev_async_send (EV_P_ ev_async *w) EV_NOEXCEPT<br> {<br> w->sent = 1;<br> evpipe_write (EV_A_ &async_pending);<br>@@ -5060,16 +5549,10 @@ once_cb_to (EV_P_ ev_timer *w, int revents)<br> }<br> <br> void<br>-ev_once (EV_P_ int fd, int events, ev_tstamp timeout, void (*cb)(int revents, void *arg), void *arg) EV_THROW<br>+ev_once (EV_P_ int fd, int events, ev_tstamp timeout, void (*cb)(int revents, void *arg), void *arg) EV_NOEXCEPT<br> {<br> struct ev_once *once = (struct ev_once *)ev_malloc (sizeof (struct ev_once));<br> <br>- if (expect_false (!once))<br>- {<br>- cb (EV_ERROR | EV_READ | EV_WRITE | EV_TIMER, arg);<br>- return;<br>- }<br>-<br> once->cb = cb;<br> once->arg = arg;<br> <br>@@ -5093,7 +5576,7 @@ ev_once (EV_P_ int fd, int events, ev_tstamp timeout, void (*cb)(int revents, vo<br> #if EV_WALK_ENABLE<br> ecb_cold<br> void<br>-ev_walk (EV_P_ int types, void (*cb)(EV_P_ int type, void *w)) EV_THROW<br>+ev_walk (EV_P_ int types, void (*cb)(EV_P_ int type, void *w)) EV_NOEXCEPT<br> {<br> int i, j;<br> ev_watcher_list *wl, *wn;<br>diff --git a/third_party/libev/ev.h b/third_party/libev/ev.h<br>index d42e2df47..c0e17143b 100644<br>--- a/third_party/libev/ev.h<br>+++ b/third_party/libev/ev.h<br>@@ -1,7 +1,7 @@<br> /*<br> * libev native API header<br> *<br>- * Copyright (c) 2007,2008,2009,2010,2011,2012,2015 Marc Alexander Lehmann <libev@schmorp.de><br>+ * Copyright (c) 
2007-2020 Marc Alexander Lehmann <libev@schmorp.de><br> * All rights reserved.<br> *<br> * Redistribution and use in source and binary forms, with or without modifica-<br>@@ -48,14 +48,13 @@<br> * due to non-throwing" warnings.<br> * # define EV_THROW noexcept<br> */<br>-# define EV_THROW<br>-# else<br>-# define EV_THROW throw ()<br>+# define EV_NOEXCEPT<br> # endif<br> #else<br> # define EV_CPP(x)<br>-# define EV_THROW<br>+# define EV_NOEXCEPT<br> #endif<br>+#define EV_THROW EV_NOEXCEPT /* pre-4.25, do not use in new code */<br> <br> EV_CPP(extern "C" {)<br> <br>@@ -155,7 +154,10 @@ EV_CPP(extern "C" {)<br> <br> /*****************************************************************************/<br> <br>-typedef double ev_tstamp;<br>+#ifndef EV_TSTAMP_T<br>+# define EV_TSTAMP_T double<br>+#endif<br>+typedef EV_TSTAMP_T ev_tstamp;<br> <br> #include <string.h> /* for memmove */<br> <br>@@ -216,7 +218,7 @@ struct ev_loop;<br> /*****************************************************************************/<br> <br> #define EV_VERSION_MAJOR 4<br>-#define EV_VERSION_MINOR 24<br>+#define EV_VERSION_MINOR 32<br> <br> /* eventmask, revents, events... 
*/<br> enum {<br>@@ -344,7 +346,7 @@ typedef struct ev_periodic<br> <br> ev_tstamp offset; /* rw */<br> ev_tstamp interval; /* rw */<br>- ev_tstamp (*reschedule_cb)(struct ev_periodic *w, ev_tstamp now) EV_THROW; /* rw */<br>+ ev_tstamp (*reschedule_cb)(struct ev_periodic *w, ev_tstamp now) EV_NOEXCEPT; /* rw */<br> } ev_periodic;<br> <br> /* invoked when the given signal has been received */<br>@@ -393,14 +395,12 @@ typedef struct ev_stat<br> } ev_stat;<br> #endif<br> <br>-#if EV_IDLE_ENABLE<br> /* invoked when the nothing else needs to be done, keeps the process from blocking */<br> /* revent EV_IDLE */<br> typedef struct ev_idle<br> {<br> EV_WATCHER (ev_idle)<br> } ev_idle;<br>-#endif<br> <br> /* invoked for each run of the mainloop, just before the blocking call */<br> /* you can still change events in any way you like */<br>@@ -417,23 +417,19 @@ typedef struct ev_check<br> EV_WATCHER (ev_check)<br> } ev_check;<br> <br>-#if EV_FORK_ENABLE<br> /* the callback gets invoked before check in the child process when a fork was detected */<br> /* revent EV_FORK */<br> typedef struct ev_fork<br> {<br> EV_WATCHER (ev_fork)<br> } ev_fork;<br>-#endif<br> <br>-#if EV_CLEANUP_ENABLE<br> /* is invoked just before the loop gets destroyed */<br> /* revent EV_CLEANUP */<br> typedef struct ev_cleanup<br> {<br> EV_WATCHER (ev_cleanup)<br> } ev_cleanup;<br>-#endif<br> <br> #if EV_EMBED_ENABLE<br> /* used to embed an event loop inside another */<br>@@ -443,16 +439,18 @@ typedef struct ev_embed<br> EV_WATCHER (ev_embed)<br> <br> struct ev_loop *other; /* ro */<br>+#undef EV_IO_ENABLE<br>+#define EV_IO_ENABLE 1<br> ev_io io; /* private */<br>+#undef EV_PREPARE_ENABLE<br>+#define EV_PREPARE_ENABLE 1<br> ev_prepare prepare; /* private */<br> ev_check check; /* unused */<br> ev_timer timer; /* unused */<br> ev_periodic periodic; /* unused */<br> ev_idle idle; /* unused */<br> ev_fork fork; /* private */<br>-#if EV_CLEANUP_ENABLE<br> ev_cleanup cleanup; /* unused */<br>-#endif<br> } 
ev_embed;<br> #endif<br> <br>@@ -505,42 +503,44 @@ union ev_any_watcher<br> /* flag bits for ev_default_loop and ev_loop_new */<br> enum {<br> /* the default */<br>- EVFLAG_AUTO = 0x00000000U, /* not quite a mask */<br>+ EVFLAG_AUTO = 0x00000000U, /* not quite a mask */<br> /* flag bits */<br>- EVFLAG_NOENV = 0x01000000U, /* do NOT consult environment */<br>- EVFLAG_FORKCHECK = 0x02000000U, /* check for a fork in each iteration */<br>+ EVFLAG_NOENV = 0x01000000U, /* do NOT consult environment */<br>+ EVFLAG_FORKCHECK = 0x02000000U, /* check for a fork in each iteration */<br> /* debugging/feature disable */<br>- EVFLAG_NOINOTIFY = 0x00100000U, /* do not attempt to use inotify */<br>+ EVFLAG_NOINOTIFY = 0x00100000U, /* do not attempt to use inotify */<br> #if EV_COMPAT3<br>- EVFLAG_NOSIGFD = 0, /* compatibility to pre-3.9 */<br>+ EVFLAG_NOSIGFD = 0, /* compatibility to pre-3.9 */<br> #endif<br>- EVFLAG_SIGNALFD = 0x00200000U, /* attempt to use signalfd */<br>- EVFLAG_NOSIGMASK = 0x00400000U, /* avoid modifying the signal mask */<br>- EVFLAG_ALLOCFD = 0x00800000U /* preallocate event pipe descriptors */<br>+ EVFLAG_SIGNALFD = 0x00200000U, /* attempt to use signalfd */<br>+ EVFLAG_NOSIGMASK = 0x00400000U, /* avoid modifying the signal mask */<br>+ EVFLAG_NOTIMERFD = 0x00800000U /* avoid creating a timerfd */<br> };<br> <br> /* method bits to be ored together */<br> enum {<br>- EVBACKEND_SELECT = 0x00000001U, /* available just about anywhere */<br>- EVBACKEND_POLL = 0x00000002U, /* !win, !aix, broken on osx */<br>- EVBACKEND_EPOLL = 0x00000004U, /* linux */<br>- EVBACKEND_KQUEUE = 0x00000008U, /* bsd, broken on osx */<br>- EVBACKEND_DEVPOLL = 0x00000010U, /* solaris 8 */ /* NYI */<br>- EVBACKEND_PORT = 0x00000020U, /* solaris 10 */<br>- EVBACKEND_ALL = 0x0000003FU, /* all known backends */<br>- EVBACKEND_MASK = 0x0000FFFFU /* all future backends */<br>+ EVBACKEND_SELECT = 0x00000001U, /* available just about anywhere */<br>+ EVBACKEND_POLL = 0x00000002U, /* !win, !aix, 
broken on osx */<br>+ EVBACKEND_EPOLL = 0x00000004U, /* linux */<br>+ EVBACKEND_KQUEUE = 0x00000008U, /* bsd, broken on osx */<br>+ EVBACKEND_DEVPOLL = 0x00000010U, /* solaris 8 */ /* NYI */<br>+ EVBACKEND_PORT = 0x00000020U, /* solaris 10 */<br>+ EVBACKEND_LINUXAIO = 0x00000040U, /* linux AIO, 4.19+ */<br>+ EVBACKEND_IOURING = 0x00000080U, /* linux io_uring, 5.1+ */<br>+ EVBACKEND_ALL = 0x000000FFU, /* all known backends */<br>+ EVBACKEND_MASK = 0x0000FFFFU /* all future backends */<br> };<br> <br> #if EV_PROTOTYPES<br>-EV_API_DECL int ev_version_major (void) EV_THROW;<br>-EV_API_DECL int ev_version_minor (void) EV_THROW;<br>+EV_API_DECL int ev_version_major (void) EV_NOEXCEPT;<br>+EV_API_DECL int ev_version_minor (void) EV_NOEXCEPT;<br> <br>-EV_API_DECL unsigned int ev_supported_backends (void) EV_THROW;<br>-EV_API_DECL unsigned int ev_recommended_backends (void) EV_THROW;<br>-EV_API_DECL unsigned int ev_embeddable_backends (void) EV_THROW;<br>+EV_API_DECL unsigned int ev_supported_backends (void) EV_NOEXCEPT;<br>+EV_API_DECL unsigned int ev_recommended_backends (void) EV_NOEXCEPT;<br>+EV_API_DECL unsigned int ev_embeddable_backends (void) EV_NOEXCEPT;<br> <br>-EV_API_DECL ev_tstamp ev_time (void) EV_THROW;<br>-EV_API_DECL void ev_sleep (ev_tstamp delay) EV_THROW; /* sleep for a while */<br>+EV_API_DECL ev_tstamp ev_time (void) EV_NOEXCEPT;<br>+EV_API_DECL void ev_sleep (ev_tstamp delay) EV_NOEXCEPT; /* sleep for a while */<br> <br> /* Sets the allocation function to use, works like realloc.<br> * It is used to allocate and free memory.<br>@@ -548,26 +548,26 @@ EV_API_DECL void ev_sleep (ev_tstamp delay) EV_THROW; /* sleep for a while */<br> * or take some potentially destructive action.<br> * The default is your system realloc function.<br> */<br>-EV_API_DECL void ev_set_allocator (void *(*cb)(void *ptr, long size) EV_THROW) EV_THROW;<br>+EV_API_DECL void ev_set_allocator (void *(*cb)(void *ptr, long size) EV_NOEXCEPT) EV_NOEXCEPT;<br> <br> /* set the callback 
function to call on a<br> * retryable syscall error<br> * (such as failed select, poll, epoll_wait)<br> */<br>-EV_API_DECL void ev_set_syserr_cb (void (*cb)(const char *msg) EV_THROW) EV_THROW;<br>+EV_API_DECL void ev_set_syserr_cb (void (*cb)(const char *msg) EV_NOEXCEPT) EV_NOEXCEPT;<br> <br> #if EV_MULTIPLICITY<br> <br> /* the default loop is the only one that handles signals and child watchers */<br> /* you can call this as often as you like */<br>-EV_API_DECL struct ev_loop *ev_default_loop (unsigned int flags EV_CPP (= 0)) EV_THROW;<br>+EV_API_DECL struct ev_loop *ev_default_loop (unsigned int flags EV_CPP (= 0)) EV_NOEXCEPT;<br> <br> #ifdef EV_API_STATIC<br> EV_API_DECL struct ev_loop *ev_default_loop_ptr;<br> #endif<br> <br> EV_INLINE struct ev_loop *<br>-ev_default_loop_uc_ (void) EV_THROW<br>+ev_default_loop_uc_ (void) EV_NOEXCEPT<br> {<br> extern struct ev_loop *ev_default_loop_ptr;<br> <br>@@ -575,39 +575,39 @@ ev_default_loop_uc_ (void) EV_THROW<br> }<br> <br> EV_INLINE int<br>-ev_is_default_loop (EV_P) EV_THROW<br>+ev_is_default_loop (EV_P) EV_NOEXCEPT<br> {<br> return EV_A == EV_DEFAULT_UC;<br> }<br> <br> /* create and destroy alternative loops that don't handle signals */<br>-EV_API_DECL struct ev_loop *ev_loop_new (unsigned int flags EV_CPP (= 0)) EV_THROW;<br>+EV_API_DECL struct ev_loop *ev_loop_new (unsigned int flags EV_CPP (= 0)) EV_NOEXCEPT;<br> <br>-EV_API_DECL ev_tstamp ev_now (EV_P) EV_THROW; /* time w.r.t. timers and the eventloop, updated after each poll */<br>+EV_API_DECL ev_tstamp ev_now (EV_P) EV_NOEXCEPT; /* time w.r.t. 
timers and the eventloop, updated after each poll */<br> <br> #else<br> <br>-EV_API_DECL int ev_default_loop (unsigned int flags EV_CPP (= 0)) EV_THROW; /* returns true when successful */<br>+EV_API_DECL int ev_default_loop (unsigned int flags EV_CPP (= 0)) EV_NOEXCEPT; /* returns true when successful */<br> <br> EV_API_DECL ev_tstamp ev_rt_now;<br> <br> EV_INLINE ev_tstamp<br>-ev_now (void) EV_THROW<br>+ev_now (void) EV_NOEXCEPT<br> {<br> return ev_rt_now;<br> }<br> <br> /* looks weird, but ev_is_default_loop (EV_A) still works if this exists */<br> EV_INLINE int<br>-ev_is_default_loop (void) EV_THROW<br>+ev_is_default_loop (void) EV_NOEXCEPT<br> {<br> return 1;<br> }<br> <br> #endif /* multiplicity */<br> <br>-EV_API_DECL ev_tstamp ev_monotonic_time (void) EV_THROW;<br>-EV_API_DECL ev_tstamp ev_monotonic_now (EV_P) EV_THROW;<br>+EV_API_DECL ev_tstamp ev_monotonic_time (void) EV_NOEXCEPT;<br>+EV_API_DECL ev_tstamp ev_monotonic_now (EV_P) EV_NOEXCEPT;<br> <br> /* destroy event loops, also works for the default loop */<br> EV_API_DECL void ev_loop_destroy (EV_P);<br>@@ -616,17 +616,17 @@ EV_API_DECL void ev_loop_destroy (EV_P);<br> /* when you want to re-use it in the child */<br> /* you can call it in either the parent or the child */<br> /* you can actually call it at any time, anywhere :) */<br>-EV_API_DECL void ev_loop_fork (EV_P) EV_THROW;<br>+EV_API_DECL void ev_loop_fork (EV_P) EV_NOEXCEPT;<br> <br>-EV_API_DECL unsigned int ev_backend (EV_P) EV_THROW; /* backend in use by loop */<br>+EV_API_DECL unsigned int ev_backend (EV_P) EV_NOEXCEPT; /* backend in use by loop */<br> <br>-EV_API_DECL void ev_now_update (EV_P) EV_THROW; /* update event loop time */<br>+EV_API_DECL void ev_now_update (EV_P) EV_NOEXCEPT; /* update event loop time */<br> <br> #if EV_WALK_ENABLE<br> /* walk (almost) all watchers in the loop of a given type, invoking the */<br> /* callback on every such watcher. 
The callback might stop the watcher, */<br> /* but do nothing else with the loop */<br>-EV_API_DECL void ev_walk (EV_P_ int types, void (*cb)(EV_P_ int type, void *w)) EV_THROW;<br>+EV_API_DECL void ev_walk (EV_P_ int types, void (*cb)(EV_P_ int type, void *w)) EV_NOEXCEPT;<br> #endif<br> <br> #endif /* prototypes */<br>@@ -646,46 +646,47 @@ enum {<br> <br> #if EV_PROTOTYPES<br> EV_API_DECL int ev_run (EV_P_ int flags EV_CPP (= 0));<br>-EV_API_DECL void ev_break (EV_P_ int how EV_CPP (= EVBREAK_ONE)) EV_THROW; /* break out of the loop */<br>+EV_API_DECL void ev_break (EV_P_ int how EV_CPP (= EVBREAK_ONE)) EV_NOEXCEPT; /* break out of the loop */<br> <br> /*<br> * ref/unref can be used to add or remove a refcount on the mainloop. every watcher<br> * keeps one reference. if you have a long-running watcher you never unregister that<br> * should not keep ev_loop from running, unref() after starting, and ref() before stopping.<br> */<br>-EV_API_DECL void ev_ref (EV_P) EV_THROW;<br>-EV_API_DECL void ev_unref (EV_P) EV_THROW;<br>+EV_API_DECL void ev_ref (EV_P) EV_NOEXCEPT;<br>+EV_API_DECL void ev_unref (EV_P) EV_NOEXCEPT;<br> <br> /*<br> * convenience function, wait for a single event, without registering an event watcher<br> * if timeout is < 0, do wait indefinitely<br> */<br>-EV_API_DECL void ev_once (EV_P_ int fd, int events, ev_tstamp timeout, void (*cb)(int revents, void *arg), void *arg) EV_THROW;<br>+EV_API_DECL void ev_once (EV_P_ int fd, int events, ev_tstamp timeout, void (*cb)(int revents, void *arg), void *arg) EV_NOEXCEPT;<br>+<br>+EV_API_DECL void ev_invoke_pending (EV_P); /* invoke all pending watchers */<br> <br> # if EV_FEATURE_API<br>-EV_API_DECL unsigned int ev_iteration (EV_P) EV_THROW; /* number of loop iterations */<br>-EV_API_DECL unsigned int ev_depth (EV_P) EV_THROW; /* #ev_loop enters - #ev_loop leaves */<br>-EV_API_DECL void ev_verify (EV_P) EV_THROW; /* abort if loop data corrupted */<br>+EV_API_DECL unsigned int ev_iteration (EV_P) 
EV_NOEXCEPT; /* number of loop iterations */<br>+EV_API_DECL unsigned int ev_depth (EV_P) EV_NOEXCEPT; /* #ev_loop enters - #ev_loop leaves */<br>+EV_API_DECL void ev_verify (EV_P) EV_NOEXCEPT; /* abort if loop data corrupted */<br> <br>-EV_API_DECL void ev_set_io_collect_interval (EV_P_ ev_tstamp interval) EV_THROW; /* sleep at least this time, default 0 */<br>-EV_API_DECL void ev_set_timeout_collect_interval (EV_P_ ev_tstamp interval) EV_THROW; /* sleep at least this time, default 0 */<br>+EV_API_DECL void ev_set_io_collect_interval (EV_P_ ev_tstamp interval) EV_NOEXCEPT; /* sleep at least this time, default 0 */<br>+EV_API_DECL void ev_set_timeout_collect_interval (EV_P_ ev_tstamp interval) EV_NOEXCEPT; /* sleep at least this time, default 0 */<br> <br> /* advanced stuff for threading etc. support, see docs */<br>-EV_API_DECL void ev_set_userdata (EV_P_ void *data) EV_THROW;<br>-EV_API_DECL void *ev_userdata (EV_P) EV_THROW;<br>+EV_API_DECL void ev_set_userdata (EV_P_ void *data) EV_NOEXCEPT;<br>+EV_API_DECL void *ev_userdata (EV_P) EV_NOEXCEPT;<br> typedef void (*ev_loop_callback)(EV_P);<br>-EV_API_DECL void ev_set_invoke_pending_cb (EV_P_ ev_loop_callback invoke_pending_cb) EV_THROW;<br>+EV_API_DECL void ev_set_invoke_pending_cb (EV_P_ ev_loop_callback invoke_pending_cb) EV_NOEXCEPT;<br> /* C++ doesn't allow the use of the ev_loop_callback typedef here, so we need to spell it out */<br>-EV_API_DECL void ev_set_loop_release_cb (EV_P_ void (*release)(EV_P) EV_THROW, void (*acquire)(EV_P) EV_THROW) EV_THROW;<br>+EV_API_DECL void ev_set_loop_release_cb (EV_P_ void (*release)(EV_P) EV_NOEXCEPT, void (*acquire)(EV_P) EV_NOEXCEPT) EV_NOEXCEPT;<br> <br>-EV_API_DECL unsigned int ev_pending_count (EV_P) EV_THROW; /* number of pending events, if any */<br>-EV_API_DECL void ev_invoke_pending (EV_P); /* invoke all pending watchers */<br>+EV_API_DECL unsigned int ev_pending_count (EV_P) EV_NOEXCEPT; /* number of pending events, if any */<br> <br> /*<br> * stop/start the 
timer handling.<br> */<br>-EV_API_DECL void ev_suspend (EV_P) EV_THROW;<br>-EV_API_DECL void ev_resume (EV_P) EV_THROW;<br>+EV_API_DECL void ev_suspend (EV_P) EV_NOEXCEPT;<br>+EV_API_DECL void ev_resume (EV_P) EV_NOEXCEPT;<br> #endif<br> <br> #endif<br>@@ -699,6 +700,7 @@ EV_API_DECL void ev_resume (EV_P) EV_THROW;<br> ev_set_cb ((ev), cb_); \<br> } while (0)<br> <br>+#define ev_io_modify(ev,events_) do { (ev)->events = (ev)->events & EV__IOFDSET | (events_); } while (0)<br> #define ev_io_set(ev,fd_,events_) do { (ev)->fd = (fd_); (ev)->events = (events_) | EV__IOFDSET; } while (0)<br> #define ev_timer_set(ev,after_,repeat_) do { ((ev_watcher_time *)(ev))->at = (after_); (ev)->repeat = (repeat_); } while (0)<br> #define ev_periodic_set(ev,ofs_,ival_,rcb_) do { (ev)->offset = (ofs_); (ev)->interval = (ival_); (ev)->reschedule_cb = (rcb_); } while (0)<br>@@ -744,6 +746,7 @@ EV_API_DECL void ev_resume (EV_P) EV_THROW;<br> #define ev_periodic_at(ev) (+((ev_watcher_time *)(ev))->at)<br> <br> #ifndef ev_set_cb<br>+/* memmove is used here to avoid strict aliasing violations, and hopefully is optimized out by any reasonable compiler */<br> # define ev_set_cb(ev,cb_) (ev_cb_ (ev) = (cb_), memmove (&((ev_watcher *)(ev))->cb, &ev_cb_ (ev), sizeof (ev_cb_ (ev))))<br> #endif<br> <br>@@ -753,18 +756,18 @@ EV_API_DECL void ev_resume (EV_P) EV_THROW;<br> <br> /* feeds an event into a watcher as if the event actually occurred */<br> /* accepts any ev_watcher type */<br>-EV_API_DECL int ev_activecnt (EV_P) EV_THROW;<br>-EV_API_DECL void ev_feed_event (EV_P_ void *w, int revents) EV_THROW;<br>-EV_API_DECL void ev_feed_fd_event (EV_P_ int fd, int revents) EV_THROW;<br>+EV_API_DECL int ev_activecnt (EV_P) EV_NOEXCEPT;<br>+EV_API_DECL void ev_feed_event (EV_P_ void *w, int revents) EV_NOEXCEPT;<br>+EV_API_DECL void ev_feed_fd_event (EV_P_ int fd, int revents) EV_NOEXCEPT;<br> #if EV_SIGNAL_ENABLE<br>-EV_API_DECL void ev_feed_signal (int signum) EV_THROW;<br>-EV_API_DECL void 
ev_feed_signal_event (EV_P_ int signum) EV_THROW;<br>+EV_API_DECL void ev_feed_signal (int signum) EV_NOEXCEPT;<br>+EV_API_DECL void ev_feed_signal_event (EV_P_ int signum) EV_NOEXCEPT;<br> #endif<br> EV_API_DECL void ev_invoke (EV_P_ void *w, int revents);<br>-EV_API_DECL int ev_clear_pending (EV_P_ void *w) EV_THROW;<br>+EV_API_DECL int ev_clear_pending (EV_P_ void *w) EV_NOEXCEPT;<br> <br>-EV_API_DECL void ev_io_start (EV_P_ ev_io *w) EV_THROW;<br>-EV_API_DECL void ev_io_stop (EV_P_ ev_io *w) EV_THROW;<br>+EV_API_DECL void ev_io_start (EV_P_ ev_io *w) EV_NOEXCEPT;<br>+EV_API_DECL void ev_io_stop (EV_P_ ev_io *w) EV_NOEXCEPT;<br> <br> /*<br> * Fd is about to close. Make sure that libev won't do anything funny<br>@@ -772,75 +775,75 @@ EV_API_DECL void ev_io_stop (EV_P_ ev_io *w) EV_THROW;<br> * prior to close().<br> * Note: if fd was reused and added again it just works.<br> */<br>-EV_API_DECL void ev_io_closing (EV_P_ int fd, int revents) EV_THROW;<br>+EV_API_DECL void ev_io_closing (EV_P_ int fd, int revents) EV_NOEXCEPT;<br> <br>-EV_API_DECL void ev_timer_start (EV_P_ ev_timer *w) EV_THROW;<br>-EV_API_DECL void ev_timer_stop (EV_P_ ev_timer *w) EV_THROW;<br>+EV_API_DECL void ev_timer_start (EV_P_ ev_timer *w) EV_NOEXCEPT;<br>+EV_API_DECL void ev_timer_stop (EV_P_ ev_timer *w) EV_NOEXCEPT;<br> /* stops if active and no repeat, restarts if active and repeating, starts if inactive and repeating */<br>-EV_API_DECL void ev_timer_again (EV_P_ ev_timer *w) EV_THROW;<br>+EV_API_DECL void ev_timer_again (EV_P_ ev_timer *w) EV_NOEXCEPT;<br> /* return remaining time */<br>-EV_API_DECL ev_tstamp ev_timer_remaining (EV_P_ ev_timer *w) EV_THROW;<br>+EV_API_DECL ev_tstamp ev_timer_remaining (EV_P_ ev_timer *w) EV_NOEXCEPT;<br> <br> #if EV_PERIODIC_ENABLE<br>-EV_API_DECL void ev_periodic_start (EV_P_ ev_periodic *w) EV_THROW;<br>-EV_API_DECL void ev_periodic_stop (EV_P_ ev_periodic *w) EV_THROW;<br>-EV_API_DECL void ev_periodic_again (EV_P_ ev_periodic *w) 
EV_THROW;<br>+EV_API_DECL void ev_periodic_start (EV_P_ ev_periodic *w) EV_NOEXCEPT;<br>+EV_API_DECL void ev_periodic_stop (EV_P_ ev_periodic *w) EV_NOEXCEPT;<br>+EV_API_DECL void ev_periodic_again (EV_P_ ev_periodic *w) EV_NOEXCEPT;<br> #endif<br> <br> /* only supported in the default loop */<br> #if EV_SIGNAL_ENABLE<br>-EV_API_DECL void ev_signal_start (EV_P_ ev_signal *w) EV_THROW;<br>-EV_API_DECL void ev_signal_stop (EV_P_ ev_signal *w) EV_THROW;<br>+EV_API_DECL void ev_signal_start (EV_P_ ev_signal *w) EV_NOEXCEPT;<br>+EV_API_DECL void ev_signal_stop (EV_P_ ev_signal *w) EV_NOEXCEPT;<br> #endif<br> <br> /* only supported in the default loop */<br> # if EV_CHILD_ENABLE<br>-EV_API_DECL void ev_child_start (EV_P_ ev_child *w) EV_THROW;<br>-EV_API_DECL void ev_child_stop (EV_P_ ev_child *w) EV_THROW;<br>+EV_API_DECL void ev_child_start (EV_P_ ev_child *w) EV_NOEXCEPT;<br>+EV_API_DECL void ev_child_stop (EV_P_ ev_child *w) EV_NOEXCEPT;<br> # endif<br> <br> # if EV_STAT_ENABLE<br>-EV_API_DECL void ev_stat_start (EV_P_ ev_stat *w) EV_THROW;<br>-EV_API_DECL void ev_stat_stop (EV_P_ ev_stat *w) EV_THROW;<br>-EV_API_DECL void ev_stat_stat (EV_P_ ev_stat *w) EV_THROW;<br>+EV_API_DECL void ev_stat_start (EV_P_ ev_stat *w) EV_NOEXCEPT;<br>+EV_API_DECL void ev_stat_stop (EV_P_ ev_stat *w) EV_NOEXCEPT;<br>+EV_API_DECL void ev_stat_stat (EV_P_ ev_stat *w) EV_NOEXCEPT;<br> # endif<br> <br> # if EV_IDLE_ENABLE<br>-EV_API_DECL void ev_idle_start (EV_P_ ev_idle *w) EV_THROW;<br>-EV_API_DECL void ev_idle_stop (EV_P_ ev_idle *w) EV_THROW;<br>+EV_API_DECL void ev_idle_start (EV_P_ ev_idle *w) EV_NOEXCEPT;<br>+EV_API_DECL void ev_idle_stop (EV_P_ ev_idle *w) EV_NOEXCEPT;<br> # endif<br> <br> #if EV_PREPARE_ENABLE<br>-EV_API_DECL void ev_prepare_start (EV_P_ ev_prepare *w) EV_THROW;<br>-EV_API_DECL void ev_prepare_stop (EV_P_ ev_prepare *w) EV_THROW;<br>+EV_API_DECL void ev_prepare_start (EV_P_ ev_prepare *w) EV_NOEXCEPT;<br>+EV_API_DECL void ev_prepare_stop (EV_P_ ev_prepare *w) 
EV_NOEXCEPT;<br> #endif<br> <br> #if EV_CHECK_ENABLE<br>-EV_API_DECL void ev_check_start (EV_P_ ev_check *w) EV_THROW;<br>-EV_API_DECL void ev_check_stop (EV_P_ ev_check *w) EV_THROW;<br>+EV_API_DECL void ev_check_start (EV_P_ ev_check *w) EV_NOEXCEPT;<br>+EV_API_DECL void ev_check_stop (EV_P_ ev_check *w) EV_NOEXCEPT;<br> #endif<br> <br> # if EV_FORK_ENABLE<br>-EV_API_DECL void ev_fork_start (EV_P_ ev_fork *w) EV_THROW;<br>-EV_API_DECL void ev_fork_stop (EV_P_ ev_fork *w) EV_THROW;<br>+EV_API_DECL void ev_fork_start (EV_P_ ev_fork *w) EV_NOEXCEPT;<br>+EV_API_DECL void ev_fork_stop (EV_P_ ev_fork *w) EV_NOEXCEPT;<br> # endif<br> <br> # if EV_CLEANUP_ENABLE<br>-EV_API_DECL void ev_cleanup_start (EV_P_ ev_cleanup *w) EV_THROW;<br>-EV_API_DECL void ev_cleanup_stop (EV_P_ ev_cleanup *w) EV_THROW;<br>+EV_API_DECL void ev_cleanup_start (EV_P_ ev_cleanup *w) EV_NOEXCEPT;<br>+EV_API_DECL void ev_cleanup_stop (EV_P_ ev_cleanup *w) EV_NOEXCEPT;<br> # endif<br> <br> # if EV_EMBED_ENABLE<br> /* only supported when loop to be embedded is in fact embeddable */<br>-EV_API_DECL void ev_embed_start (EV_P_ ev_embed *w) EV_THROW;<br>-EV_API_DECL void ev_embed_stop (EV_P_ ev_embed *w) EV_THROW;<br>-EV_API_DECL void ev_embed_sweep (EV_P_ ev_embed *w) EV_THROW;<br>+EV_API_DECL void ev_embed_start (EV_P_ ev_embed *w) EV_NOEXCEPT;<br>+EV_API_DECL void ev_embed_stop (EV_P_ ev_embed *w) EV_NOEXCEPT;<br>+EV_API_DECL void ev_embed_sweep (EV_P_ ev_embed *w) EV_NOEXCEPT;<br> # endif<br> <br> # if EV_ASYNC_ENABLE<br>-EV_API_DECL void ev_async_start (EV_P_ ev_async *w) EV_THROW;<br>-EV_API_DECL void ev_async_stop (EV_P_ ev_async *w) EV_THROW;<br>-EV_API_DECL void ev_async_send (EV_P_ ev_async *w) EV_THROW;<br>+EV_API_DECL void ev_async_start (EV_P_ ev_async *w) EV_NOEXCEPT;<br>+EV_API_DECL void ev_async_stop (EV_P_ ev_async *w) EV_NOEXCEPT;<br>+EV_API_DECL void ev_async_send (EV_P_ ev_async *w) EV_NOEXCEPT;<br> # endif<br> <br> #if EV_COMPAT3<br>diff --git a/third_party/libev/ev.pod 
b/third_party/libev/ev.pod<br>index 633b87ea5..e4eeb5073 100644<br>--- a/third_party/libev/ev.pod<br>+++ b/third_party/libev/ev.pod<br>@@ -107,10 +107,10 @@ watcher.<br> <br> =head2 FEATURES<br> <br>-Libev supports C<select>, C<poll>, the Linux-specific C<epoll>, the<br>-BSD-specific C<kqueue> and the Solaris-specific event port mechanisms<br>-for file descriptor events (C<ev_io>), the Linux C<inotify> interface<br>-(for C<ev_stat>), Linux eventfd/signalfd (for faster and cleaner<br>+Libev supports C<select>, C<poll>, the Linux-specific aio and C<epoll><br>+interfaces, the BSD-specific C<kqueue> and the Solaris-specific event port<br>+mechanisms for file descriptor events (C<ev_io>), the Linux C<inotify><br>+interface (for C<ev_stat>), Linux eventfd/signalfd (for faster and cleaner<br> inter-thread wakeup (C<ev_async>)/signal handling (C<ev_signal>)) relative<br> timers (C<ev_timer>), absolute timers with customised rescheduling<br> (C<ev_periodic>), synchronous signals (C<ev_signal>), process status<br>@@ -161,9 +161,13 @@ it will print a diagnostic message and abort (via the C<assert> mechanism,<br> so C<NDEBUG> will disable this checking): these are programming errors in<br> the libev caller and need to be fixed there.<br> <br>-Libev also has a few internal error-checking C<assert>ions, and also has<br>-extensive consistency checking code. These do not trigger under normal<br>-circumstances, as they indicate either a bug in libev or worse.<br>+Via the C<EV_FREQUENT> macro you can compile in and/or enable extensive<br>+consistency checking code inside libev that can be used to check for<br>+internal inconsistencies, usually caused by application bugs.<br>+<br>+Libev also has a few internal error-checking C<assert>ions.
These do not<br>+trigger under normal circumstances, as they indicate either a bug in libev<br>+or worse.<br> <br> <br> =head1 GLOBAL FUNCTIONS<br>@@ -267,12 +271,32 @@ You could override this function in high-availability programs to, say,<br> free some memory if it cannot allocate memory, to use a special allocator,<br> or even to sleep a while and retry until some memory is available.<br> <br>+Example: The following is the C<realloc> function that libev itself uses<br>+which should work with C<realloc> and C<free> functions of all kinds and<br>+is probably a good basis for your own implementation.<br>+<br>+ static void *<br>+ ev_realloc_emul (void *ptr, long size) EV_NOEXCEPT<br>+ {<br>+ if (size)<br>+ return realloc (ptr, size);<br>+<br>+ free (ptr);<br>+ return 0;<br>+ }<br>+<br> Example: Replace the libev allocator with one that waits a bit and then<br>-retries (example requires a standards-compliant C<realloc>).<br>+retries.<br> <br> static void *<br> persistent_realloc (void *ptr, size_t size)<br> {<br>+ if (!size)<br>+ {<br>+ free (ptr);<br>+ return 0;<br>+ }<br>+<br> for (;;)<br> {<br> void *newptr = realloc (ptr, size);<br>@@ -413,9 +437,10 @@ make libev check for a fork in each iteration by enabling this flag.<br> This works by calling C<getpid ()> on every iteration of the loop,<br> and thus this might slow down your event loop if you do a lot of loop<br> iterations and little real work, but is usually not noticeable (on my<br>-GNU/Linux system for example, C<getpid> is actually a simple 5-insn sequence<br>-without a system call and thus I<very> fast, but my GNU/Linux system also has<br>-C<pthread_atfork> which is even faster).<br>+GNU/Linux system for example, C<getpid> is actually a simple 5-insn<br>+sequence without a system call and thus I<very> fast, but my GNU/Linux<br>+system also has C<pthread_atfork> which is even faster). 
(Update: glibc<br>+versions 2.25 apparently removed the C<getpid> optimisation again).<br> <br> The big advantage of this flag is that you can forget about fork (and<br> forget about forgetting to tell libev about forking, although you still<br>@@ -457,7 +482,16 @@ unblocking the signals.<br> It's also required by POSIX in a threaded program, as libev calls<br> C<sigprocmask>, whose behaviour is officially unspecified.<br> <br>-This flag's behaviour will become the default in future versions of libev.<br>+=item C<EVFLAG_NOTIMERFD><br>+<br>+When this flag is specified, the libev will avoid using a C<timerfd> to<br>+detect time jumps. It will still be able to detect time jumps, but takes<br>+longer and has a lower accuracy in doing so, but saves a file descriptor<br>+per loop.<br>+<br>+The current implementation only tries to use a C<timerfd> when the first<br>+C<ev_periodic> watcher is started and falls back on other methods if it<br>+cannot be created, but this behaviour might change in the future.<br> <br> =item C<EVBACKEND_SELECT> (value 1, portable select backend)<br> <br>@@ -492,7 +526,7 @@ C<EV_WRITE> to C<POLLOUT | POLLERR | POLLHUP>.<br> <br> =item C<EVBACKEND_EPOLL> (value 4, Linux)<br> <br>-Use the linux-specific epoll(7) interface (for both pre- and post-2.6.9<br>+Use the Linux-specific epoll(7) interface (for both pre- and post-2.6.9<br> kernels).<br> <br> For few fds, this backend is a bit little slower than poll and select, but<br>@@ -548,22 +582,66 @@ faster than epoll for maybe up to a hundred file descriptors, depending on<br> the usage. So sad.<br> <br> While nominally embeddable in other event loops, this feature is broken in<br>-all kernel versions tested so far.<br>+a lot of kernel revisions, but probably(!) 
works in current versions.<br>+<br>+This backend maps C<EV_READ> and C<EV_WRITE> in the same way as<br>+C<EVBACKEND_POLL>.<br>+<br>+=item C<EVBACKEND_LINUXAIO> (value 64, Linux)<br>+<br>+Use the Linux-specific Linux AIO (I<not> C<< aio(7) >> but C<<<br>+io_submit(2) >>) event interface available in post-4.18 kernels (but libev<br>+only tries to use it in 4.19+).<br>+<br>+This is another Linux train wreck of an event interface.<br>+<br>+If this backend works for you (as of this writing, it was very<br>+experimental), it is the best event interface available on Linux and might<br>+be well worth enabling it - if it isn't available in your kernel this will<br>+be detected and this backend will be skipped.<br>+<br>+This backend can batch oneshot requests and supports a user-space ring<br>+buffer to receive events. It also doesn't suffer from most of the design<br>+problems of epoll (such as not being able to remove event sources from<br>+the epoll set), and generally sounds too good to be true. Because, this<br>+being the Linux kernel, of course it suffers from a whole new set of<br>+limitations, forcing you to fall back to epoll, inheriting all its design<br>+issues.<br>+<br>+For one, it is not easily embeddable (but probably could be done using<br>+an event fd at some extra overhead). It also is subject to a system wide<br>+limit that can be configured in F</proc/sys/fs/aio-max-nr>. If no AIO<br>+requests are left, this backend will be skipped during initialisation, and<br>+will switch to epoll when the loop is active.<br>+<br>+Most problematic in practice, however, is that not all file descriptors<br>+work with it. For example, in Linux 5.1, TCP sockets, pipes, event fds,<br>+files, F</dev/null> and many others are supported, but ttys do not work<br>+properly (a known bug that the kernel developers don't care about, see<br>+L<https://lore.kernel.org/patchwork/patch/1047453/>), so this is not<br>+(yet?) 
a generic event polling interface.<br>+<br>+Overall, it seems the Linux developers just don't want it to have a<br>+generic event handling mechanism other than C<select> or C<poll>.<br>+<br>+To work around all these problems, the current version of libev uses its<br>+epoll backend as a fallback for file descriptor types that do not work. Or<br>+falls back completely to epoll if the kernel acts up.<br> <br> This backend maps C<EV_READ> and C<EV_WRITE> in the same way as<br> C<EVBACKEND_POLL>.<br> <br> =item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)<br> <br>-Kqueue deserves special mention, as at the time of this writing, it<br>-was broken on all BSDs except NetBSD (usually it doesn't work reliably<br>-with anything but sockets and pipes, except on Darwin, where of course<br>-it's completely useless). Unlike epoll, however, whose brokenness<br>-is by design, these kqueue bugs can (and eventually will) be fixed<br>-without API changes to existing programs. For this reason it's not being<br>-"auto-detected" unless you explicitly specify it in the flags (i.e. using<br>-C<EVBACKEND_KQUEUE>) or libev was compiled on a known-to-be-good (-enough)<br>-system like NetBSD.<br>+Kqueue deserves special mention, as at the time this backend was<br>+implemented, it was broken on all BSDs except NetBSD (usually it doesn't<br>+work reliably with anything but sockets and pipes, except on Darwin,<br>+where of course it's completely useless). Unlike epoll, however, whose<br>+brokenness is by design, these kqueue bugs can be (and mostly have been)<br>+fixed without API changes to existing programs. For this reason it's not<br>+being "auto-detected" on all platforms unless you explicitly specify it<br>+in the flags (i.e.
using C<EVBACKEND_KQUEUE>) or libev was compiled on a<br>+known-to-be-good (-enough) system like NetBSD.<br> <br> You still can embed kqueue into a normal poll or select backend and use it<br> only for sockets (after having made sure that sockets work with kqueue on<br>@@ -574,7 +652,7 @@ kernel is more efficient (which says nothing about its actual speed, of<br> course). While stopping, setting and starting an I/O watcher does never<br> cause an extra system call as with C<EVBACKEND_EPOLL>, it still adds up to<br> two event changes per incident. Support for C<fork ()> is very bad (you<br>-might have to leak fd's on fork, but it's more sane than epoll) and it<br>+might have to leak fds on fork, but it's more sane than epoll) and it<br> drops fds silently in similarly hard-to-detect cases.<br> <br> This backend usually performs well under most conditions.<br>@@ -659,6 +737,12 @@ used if available.<br> <br> struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_KQUEUE);<br> <br>+Example: Similarly, on linux, you might want to take advantage of the<br>+linux aio backend if possible, but fall back to something else if that<br>+isn't available.<br>+<br>+ struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_LINUXAIO);<br>+<br>=item ev_loop_destroy (loop)<br> <br> Destroys an event loop object (frees all memory and kernel state<br>@@ -1136,8 +1220,9 @@ with a watcher-specific start function (C<< ev_TYPE_start (loop, watcher<br> corresponding stop function (C<< ev_TYPE_stop (loop, watcher *) >>.<br> <br> As long as your watcher is active (has been started but not stopped) you<br>-must not touch the values stored in it. Most specifically you must never<br>-reinitialise it or call its C<ev_TYPE_set> macro.<br>+must not touch the values stored in it except when explicitly documented
Most specifically you must never reinitialise it or call its<br>+C<ev_TYPE_set> macro.<br> <br> Each and every callback receives the event loop pointer as first, the<br> registered watcher structure as second, and a bitset of received events as<br>@@ -1462,7 +1547,7 @@ Many event loops support I<watcher priorities>, which are usually small<br> integers that influence the ordering of event callback invocation<br> between watchers in some way, all else being equal.<br> <br>-In libev, Watcher priorities can be set using C<ev_set_priority>. See its<br>+In libev, watcher priorities can be set using C<ev_set_priority>. See its<br> description for the more technical details such as the actual priority<br> range.<br> <br>@@ -1566,15 +1651,18 @@ This section describes each watcher in detail, but will not repeat<br> information given in the last section. Any initialisation/set macros,<br> functions and members specific to the watcher type are explained.<br> <br>-Members are additionally marked with either I<[read-only]>, meaning that,<br>-while the watcher is active, you can look at the member and expect some<br>-sensible content, but you must not modify it (you can modify it while the<br>-watcher is stopped to your hearts content), or I<[read-write]>, which<br>-means you can expect it to have some sensible content while the watcher<br>-is active, but you can also modify it. Modifying it may not do something<br>+Most members are additionally marked with either I<[read-only]>, meaning<br>+that, while the watcher is active, you can look at the member and expect<br>+some sensible content, but you must not modify it (you can modify it while<br>+the watcher is stopped to your heart's content), or I<[read-write]>, which<br>+means you can expect it to have some sensible content while the watcher is<br>+active, but you can also modify it (within the same thread as the event<br>+loop, i.e. without creating data races). 
Modifying it may not do something<br> sensible or take immediate effect (or do anything at all), but libev will<br> not crash or malfunction in any way.<br> <br>+In any case, the documentation for each member will explain what the<br>+effects are, and if there are any additional access restrictions.<br> <br> =head2 C<ev_io> - is this file descriptor readable or writable?<br> <br>@@ -1611,13 +1699,13 @@ But really, best use non-blocking mode.<br> <br> =head3 The special problem of disappearing file descriptors<br> <br>-Some backends (e.g. kqueue, epoll) need to be told about closing a file<br>-descriptor (either due to calling C<close> explicitly or any other means,<br>-such as C<dup2>). The reason is that you register interest in some file<br>-descriptor, but when it goes away, the operating system will silently drop<br>-this interest. If another file descriptor with the same number then is<br>-registered with libev, there is no efficient way to see that this is, in<br>-fact, a different file descriptor.<br>+Some backends (e.g. kqueue, epoll, linuxaio) need to be told about closing<br>+a file descriptor (either due to calling C<close> explicitly or any other<br>+means, such as C<dup2>). The reason is that you register interest in some<br>+file descriptor, but when it goes away, the operating system will silently<br>+drop this interest. If another file descriptor with the same number then<br>+is registered with libev, there is no efficient way to see that this is,<br>+in fact, a different file descriptor.<br> <br> To avoid having to explicitly tell libev about such cases, libev follows<br> the following policy: Each time C<ev_io_set> is being called, libev<br>@@ -1676,9 +1764,10 @@ reuse the same code path.<br> <br> =head3 The special problem of fork<br> <br>-Some backends (epoll, kqueue) do not support C<fork ()> at all or exhibit<br>-useless behaviour. 
Libev fully supports fork, but needs to be told about<br>-it in the child if you want to continue to use it in the child.<br>+Some backends (epoll, kqueue, linuxaio, iouring) do not support C<fork ()><br>+at all or exhibit useless behaviour. Libev fully supports fork, but needs<br>+to be told about it in the child if you want to continue to use it in the<br>+child.<br> <br> To support fork in your child processes, you have to call C<ev_loop_fork<br> ()> after a fork in the child, enable C<EVFLAG_FORKCHECK>, or resort to<br>@@ -1743,16 +1832,36 @@ opportunity for a DoS attack.<br> =item ev_io_set (ev_io *, int fd, int events)<br> <br> Configures an C<ev_io> watcher. The C<fd> is the file descriptor to<br>-receive events for and C<events> is either C<EV_READ>, C<EV_WRITE> or<br>-C<EV_READ | EV_WRITE>, to express the desire to receive the given events.<br>+receive events for and C<events> is either C<EV_READ>, C<EV_WRITE>, both<br>+C<EV_READ | EV_WRITE> or C<0>, to express the desire to receive the given<br>+events.<br>+<br>+Note that setting the C<events> to C<0> and starting the watcher is<br>+supported, but not specially optimized - if your program sometimes happens<br>+to generate this combination this is fine, but if it is easy to avoid<br>+starting an io watcher watching for no events you should do so.<br>+<br>+=item ev_io_modify (ev_io *, int events)<br>+<br>+Similar to C<ev_io_set>, but only changes the requested events. Using this<br>+might be faster with some backends, as libev can assume that the C<fd><br>+still refers to the same underlying file description, something it cannot<br>+do when using C<ev_io_set>.<br> <br>-=item int fd [read-only]<br>+=item int fd [no-modify]<br> <br>-The file descriptor being watched.<br>+The file descriptor being watched. 
While it can be read at any time, you<br>+must not modify this member even when the watcher is stopped - always use<br>+C<ev_io_set> for that.<br> <br>-=item int events [read-only]<br>+=item int events [no-modify]<br> <br>-The events being watched.<br>+The set of events the fd is being watched for, among other flags. Remember<br>+that this is a bit set - to test for C<EV_READ>, use C<< w->events &<br>+EV_READ >>, and similarly for C<EV_WRITE>.<br>+<br>+As with C<fd>, you must not modify this member even when the watcher is<br>+stopped, always use C<ev_io_set> or C<ev_io_modify> for that.<br> <br> =back<br> <br>@@ -2115,11 +2224,11 @@ C<SIGSTOP>).<br> <br> =item ev_timer_set (ev_timer *, ev_tstamp after, ev_tstamp repeat)<br> <br>-Configure the timer to trigger after C<after> seconds. If C<repeat><br>-is C<0.>, then it will automatically be stopped once the timeout is<br>-reached. If it is positive, then the timer will automatically be<br>-configured to trigger again C<repeat> seconds later, again, and again,<br>-until stopped manually.<br>+Configure the timer to trigger after C<after> seconds (fractional and<br>+negative values are supported). If C<repeat> is C<0.>, then it will<br>+automatically be stopped once the timeout is reached. If it is positive,<br>+then the timer will automatically be configured to trigger again C<repeat><br>+seconds later, again, and again, until stopped manually.<br> <br> The timer itself will do a best-effort at avoiding drift, that is, if<br> you configure a timer to trigger every 10 seconds, then it will normally<br>@@ -2226,8 +2335,8 @@ it, as it uses a relative timeout).<br> <br> C<ev_periodic> watchers can also be used to implement vastly more complex<br> timers, such as triggering an event on each "midnight, local time", or<br>-other complicated rules. This cannot be done with C<ev_timer> watchers, as<br>-those cannot react to time jumps.<br>+other complicated rules. 
This cannot easily be done with C<ev_timer><br>+watchers, as those cannot react to time jumps.<br> <br> As with timers, the callback is guaranteed to be invoked only when the<br> point in time where it is supposed to trigger has passed. If multiple<br>@@ -2323,10 +2432,28 @@ NOTE: I<< This callback must always return a time that is higher than or<br> equal to the passed C<now> value >>.<br> <br> This can be used to create very complex timers, such as a timer that<br>-triggers on "next midnight, local time". To do this, you would calculate the<br>-next midnight after C<now> and return the timestamp value for this. How<br>-you do this is, again, up to you (but it is not trivial, which is the main<br>-reason I omitted it as an example).<br>+triggers on "next midnight, local time". To do this, you would calculate<br>+the next midnight after C<now> and return the timestamp value for<br>+this. Here is a (completely untested, no error checking) example on how to<br>+do this:<br>+<br>+ #include <time.h><br>+<br>+ static ev_tstamp<br>+ my_rescheduler (ev_periodic *w, ev_tstamp now)<br>+ {<br>+ time_t tnow = (time_t)now;<br>+ struct tm tm;<br>+ localtime_r (&tnow, &tm);<br>+<br>+ tm.tm_sec = tm.tm_min = tm.tm_hour = 0; // midnight current day<br>+ ++tm.tm_mday; // midnight next day<br>+<br>+ return mktime (&tm);<br>+ }<br>+<br>+Note: this code might run into trouble on days that have more than two<br>+midnights (beginning and end).<br> <br> =back<br> <br>@@ -3519,7 +3646,7 @@ There are some other functions of possible interest. Described. Here. Now.<br> <br> =over 4<br> <br>-=item ev_once (loop, int fd, int events, ev_tstamp timeout, callback)<br>+=item ev_once (loop, int fd, int events, ev_tstamp timeout, callback, arg)<br> <br> This function combines a simple timer and an I/O watcher, calls your<br> callback on whichever event happens first and automatically stops both<br>@@ -3961,14 +4088,14 @@ libev sources can be compiled as C++. 
Therefore, code that uses the C API<br> will work fine.<br> <br> Proper exception specifications might have to be added to callbacks passed<br>-to libev: exceptions may be thrown only from watcher callbacks, all<br>-other callbacks (allocator, syserr, loop acquire/release and periodic<br>-reschedule callbacks) must not throw exceptions, and might need a C<throw<br>-()> specification. If you have code that needs to be compiled as both C<br>-and C++ you can use the C<EV_THROW> macro for this:<br>+to libev: exceptions may be thrown only from watcher callbacks, all other<br>+callbacks (allocator, syserr, loop acquire/release and periodic reschedule<br>+callbacks) must not throw exceptions, and might need a C<noexcept><br>+specification. If you have code that needs to be compiled as both C and<br>+C++ you can use the C<EV_NOEXCEPT> macro for this:<br> <br> static void<br>- fatal_error (const char *msg) EV_THROW<br>+ fatal_error (const char *msg) EV_NOEXCEPT<br> {<br> perror (msg);<br> abort ();<br>@@ -4142,6 +4269,9 @@ method.<br> <br> For C<ev::embed> watchers this method is called C<set_embed>, to avoid<br> clashing with the C<set (loop)> method.<br> <br>+For C<ev::io> watchers there is an additional C<set> method that accepts a<br>+new event mask only, and internally calls C<ev_io_modify>.<br>+<br> =item w->start ()<br> <br> Starts the watcher. Note that there is no C<loop> argument, as the<br>@@ -4388,11 +4518,13 @@ in your include path (e.g. 
in libev/ when using -Ilibev):<br> <br> ev_win32.c required on win32 platforms only<br> <br>- ev_select.c only when select backend is enabled (which is enabled by default)<br>- ev_poll.c only when poll backend is enabled (disabled by default)<br>- ev_epoll.c only when the epoll backend is enabled (disabled by default)<br>- ev_kqueue.c only when the kqueue backend is enabled (disabled by default)<br>- ev_port.c only when the solaris port backend is enabled (disabled by default)<br>+ ev_select.c only when select backend is enabled<br>+ ev_poll.c only when poll backend is enabled<br>+ ev_epoll.c only when the epoll backend is enabled<br>+ ev_linuxaio.c only when the linux aio backend is enabled<br>+ ev_iouring.c only when the linux io_uring backend is enabled<br>+ ev_kqueue.c only when the kqueue backend is enabled<br>+ ev_port.c only when the solaris port backend is enabled<br> <br> F<ev.c> includes the backend files directly when enabled, so you only need<br> to compile this single file.<br>@@ -4521,6 +4653,30 @@ C<ev_signal> and C<ev_async> performance and reduce resource consumption.<br> If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc<br> 2.7 or newer, otherwise disabled.<br> <br>+=item EV_USE_SIGNALFD<br>+<br>+If defined to be C<1>, then libev will assume that C<signalfd ()> is<br>+available and will probe for kernel support at runtime. This enables<br>+the use of EVFLAG_SIGNALFD for faster and simpler signal handling. If<br>+undefined, it will be enabled if the headers indicate GNU/Linux + Glibc<br>+2.7 or newer, otherwise disabled.<br>+<br>+=item EV_USE_TIMERFD<br>+<br>+If defined to be C<1>, then libev will assume that C<timerfd ()> is<br>+available and will probe for kernel support at runtime. This allows<br>+libev to detect time jumps accurately. 
If undefined, it will be enabled<br>+if the headers indicate GNU/Linux + Glibc 2.8 or newer and define<br>+C<TFD_TIMER_CANCEL_ON_SET>, otherwise disabled.<br>+<br>+=item EV_USE_EVENTFD<br>+<br>+If defined to be C<1>, then libev will assume that C<eventfd ()> is<br>+available and will probe for kernel support at runtime. This will improve<br>+C<ev_signal> and C<ev_async> performance and reduce resource consumption.<br>+If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc<br>+2.7 or newer, otherwise disabled.<br>+<br> =item EV_USE_SELECT<br> <br> If undefined or defined to be C<1>, libev will compile in support for the<br>@@ -4591,6 +4747,19 @@ otherwise another method will be used as fallback. This is the preferred<br> backend for GNU/Linux systems. If undefined, it will be enabled if the<br> headers indicate GNU/Linux + Glibc 2.4 or newer, otherwise disabled.<br> <br>+=item EV_USE_LINUXAIO<br>+<br>+If defined to be C<1>, libev will compile in support for the Linux aio<br>+backend (C<EV_USE_EPOLL> must also be enabled). If undefined, it will be<br>+enabled on linux, otherwise disabled.<br>+<br>+=item EV_USE_IOURING<br>+<br>+If defined to be C<1>, libev will compile in support for the Linux<br>+io_uring backend (C<EV_USE_EPOLL> must also be enabled). Due to its<br>+current limitations it has to be requested explicitly. If undefined, it<br>+will be enabled on linux, otherwise disabled.<br>+<br> =item EV_USE_KQUEUE<br> <br> If defined to be C<1>, libev will compile in support for the BSD style<br>@@ -4877,6 +5046,9 @@ called once per loop, which can slow down libev. If set to C<3>, then the<br> verification code will be called very frequently, which will slow down<br> libev considerably.<br> <br>+Verification errors are reported via C's C<assert> mechanism, so if you<br>+disable that (e.g. 
by defining C<NDEBUG>) then no errors will be reported.<br>+<br> The default is C<1>, unless C<EV_FEATURES> overrides it, in which case it<br> will be C<0>.<br> <br>diff --git a/third_party/libev/ev_epoll.c b/third_party/libev/ev_epoll.c<br>index df118a6fe..346b4196b 100644<br>--- a/third_party/libev/ev_epoll.c<br>+++ b/third_party/libev/ev_epoll.c<br>@@ -1,7 +1,7 @@<br> /*<br> * libev epoll fd activity backend<br> *<br>- * Copyright (c) 2007,2008,2009,2010,2011 Marc Alexander Lehmann <libev@schmorp.de><br>+ * Copyright (c) 2007,2008,2009,2010,2011,2016,2017,2019 Marc Alexander Lehmann <libev@schmorp.de><br> * All rights reserved.<br> *<br> * Redistribution and use in source and binary forms, with or without modifica-<br>@@ -93,10 +93,10 @@ epoll_modify (EV_P_ int fd, int oev, int nev)<br> ev.events = (nev & EV_READ ? EPOLLIN : 0)<br> | (nev & EV_WRITE ? EPOLLOUT : 0);<br> <br>- if (expect_true (!epoll_ctl (backend_fd, oev && oldmask != nev ? EPOLL_CTL_MOD : EPOLL_CTL_ADD, fd, &ev)))<br>+ if (ecb_expect_true (!epoll_ctl (backend_fd, oev && oldmask != nev ? 
EPOLL_CTL_MOD : EPOLL_CTL_ADD, fd, &ev)))<br> return;<br> <br>- if (expect_true (errno == ENOENT))<br>+ if (ecb_expect_true (errno == ENOENT))<br> {<br> /* if ENOENT then the fd went away, so try to do the right thing */<br> if (!nev)<br>@@ -105,7 +105,7 @@ epoll_modify (EV_P_ int fd, int oev, int nev)<br> if (!epoll_ctl (backend_fd, EPOLL_CTL_ADD, fd, &ev))<br> return;<br> }<br>- else if (expect_true (errno == EEXIST))<br>+ else if (ecb_expect_true (errno == EEXIST))<br> {<br> /* EEXIST means we ignored a previous DEL, but the fd is still active */<br> /* if the kernel mask is the same as the new mask, we assume it hasn't changed */<br>@@ -115,7 +115,7 @@ epoll_modify (EV_P_ int fd, int oev, int nev)<br> if (!epoll_ctl (backend_fd, EPOLL_CTL_MOD, fd, &ev))<br> return;<br> }<br>- else if (expect_true (errno == EPERM))<br>+ else if (ecb_expect_true (errno == EPERM))<br> {<br> /* EPERM means the fd is always ready, but epoll is too snobbish */<br> /* to handle it, unlike select or poll. */<br>@@ -124,7 +124,7 @@ epoll_modify (EV_P_ int fd, int oev, int nev)<br> /* add fd to epoll_eperms, if not already inside */<br> if (!(oldmask & EV_EMASK_EPERM))<br> {<br>- array_needsize (int, epoll_eperms, epoll_epermmax, epoll_epermcnt + 1, EMPTY2);<br>+ array_needsize (int, epoll_eperms, epoll_epermmax, epoll_epermcnt + 1, array_needsize_noinit);<br> epoll_eperms [epoll_epermcnt++] = fd;<br> }<br> <br>@@ -144,16 +144,16 @@ epoll_poll (EV_P_ ev_tstamp timeout)<br> int i;<br> int eventcnt;<br> <br>- if (expect_false (epoll_epermcnt))<br>- timeout = 0.;<br>+ if (ecb_expect_false (epoll_epermcnt))<br>+ timeout = EV_TS_CONST (0.);<br> <br> /* epoll wait times cannot be larger than (LONG_MAX - 999UL) / HZ msecs, which is below */<br> /* the default libev max wait time, however. 
*/<br> EV_RELEASE_CB;<br>- eventcnt = epoll_wait (backend_fd, epoll_events, epoll_eventmax, timeout * 1e3);<br>+ eventcnt = epoll_wait (backend_fd, epoll_events, epoll_eventmax, EV_TS_TO_MSEC (timeout));<br> EV_ACQUIRE_CB;<br> <br>- if (expect_false (eventcnt < 0))<br>+ if (ecb_expect_false (eventcnt < 0))<br> {<br> if (errno != EINTR)<br> ev_syserr ("(libev) epoll_wait");<br>@@ -176,14 +176,14 @@ epoll_poll (EV_P_ ev_tstamp timeout)<br> * other spurious notifications will be found by epoll_ctl, below<br> * we assume that fd is always in range, as we never shrink the anfds array<br> */<br>- if (expect_false ((uint32_t)anfds [fd].egen != (uint32_t)(ev->data.u64 >> 32)))<br>+ if (ecb_expect_false ((uint32_t)anfds [fd].egen != (uint32_t)(ev->data.u64 >> 32)))<br> {<br> /* recreate kernel state */<br> postfork |= 2;<br> continue;<br> }<br> <br>- if (expect_false (got & ~want))<br>+ if (ecb_expect_false (got & ~want))<br> {<br> anfds [fd].emask = want;<br> <br>@@ -195,6 +195,8 @@ epoll_poll (EV_P_ ev_tstamp timeout)<br> * above with the gencounter check (== our fd is not the event fd), and<br> * partially here, when epoll_ctl returns an error (== a child has the fd<br> * but we closed it).<br>+ * note: for events such as POLLHUP, where we can't know whether it refers<br>+ * to EV_READ or EV_WRITE, we might issue redundant EPOLL_CTL_MOD calls.<br> */<br> ev->events = (want & EV_READ ? EPOLLIN : 0)<br> | (want & EV_WRITE ? 
EPOLLOUT : 0);<br>@@ -212,7 +214,7 @@ epoll_poll (EV_P_ ev_tstamp timeout)<br> }<br> <br> /* if the receive array was full, increase its size */<br>- if (expect_false (eventcnt == epoll_eventmax))<br>+ if (ecb_expect_false (eventcnt == epoll_eventmax))<br> {<br> ev_free (epoll_events);<br> epoll_eventmax = array_nextsize (sizeof (struct epoll_event), epoll_eventmax, epoll_eventmax + 1);<br>@@ -235,23 +237,34 @@ epoll_poll (EV_P_ ev_tstamp timeout)<br> }<br> }<br> <br>-inline_size<br>-int<br>-epoll_init (EV_P_ int flags)<br>+static int<br>+epoll_epoll_create (void)<br> {<br>-#ifdef EPOLL_CLOEXEC<br>- backend_fd = epoll_create1 (EPOLL_CLOEXEC);<br>+ int fd;<br> <br>- if (backend_fd < 0 && (errno == EINVAL || errno == ENOSYS))<br>+#if defined EPOLL_CLOEXEC && !defined __ANDROID__<br>+ fd = epoll_create1 (EPOLL_CLOEXEC);<br>+<br>+ if (fd < 0 && (errno == EINVAL || errno == ENOSYS))<br> #endif<br>- backend_fd = epoll_create (256);<br>+ {<br>+ fd = epoll_create (256);<br> <br>- if (backend_fd < 0)<br>- return 0;<br>+ if (fd >= 0)<br>+ fcntl (fd, F_SETFD, FD_CLOEXEC);<br>+ }<br>+<br>+ return fd;<br>+}<br> <br>- fcntl (backend_fd, F_SETFD, FD_CLOEXEC);<br>+inline_size<br>+int<br>+epoll_init (EV_P_ int flags)<br>+{<br>+ if ((backend_fd = epoll_epoll_create ()) < 0)<br>+ return 0;<br> <br>- backend_mintime = 1e-3; /* epoll does sometimes return early, this is just to avoid the worst */<br>+ backend_mintime = EV_TS_CONST (1e-3); /* epoll does sometimes return early, this is just to avoid the worst */<br> backend_modify = epoll_modify;<br> backend_poll = epoll_poll;<br> <br>@@ -269,17 +282,15 @@ epoll_destroy (EV_P)<br> array_free (epoll_eperm, EMPTY);<br> }<br> <br>-inline_size<br>-void<br>+ecb_cold<br>+static void<br> epoll_fork (EV_P)<br> {<br> close (backend_fd);<br> <br>- while ((backend_fd = epoll_create (256)) < 0)<br>+ while ((backend_fd = epoll_epoll_create ()) < 0)<br> ev_syserr ("(libev) epoll_create");<br> <br>- fcntl (backend_fd, F_SETFD, FD_CLOEXEC);<br>-<br> 
fd_rearm_all (EV_A);<br> }<br> <br>diff --git a/third_party/libev/ev_iouring.c b/third_party/libev/ev_iouring.c<br>new file mode 100644<br>index 000000000..bfd3de65f<br>--- /dev/null<br>+++ b/third_party/libev/ev_iouring.c<br>@@ -0,0 +1,694 @@<br>+/*<br>+ * libev linux io_uring fd activity backend<br>+ *<br>+ * Copyright (c) 2019-2020 Marc Alexander Lehmann <libev@schmorp.de><br>+ * All rights reserved.<br>+ *<br>+ * Redistribution and use in source and binary forms, with or without modifica-<br>+ * tion, are permitted provided that the following conditions are met:<br>+ *<br>+ * 1. Redistributions of source code must retain the above copyright notice,<br>+ * this list of conditions and the following disclaimer.<br>+ *<br>+ * 2. Redistributions in binary form must reproduce the above copyright<br>+ * notice, this list of conditions and the following disclaimer in the<br>+ * documentation and/or other materials provided with the distribution.<br>+ *<br>+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED<br>+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER-<br>+ * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO<br>+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE-<br>+ * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,<br>+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;<br>+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,<br>+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH-<br>+ * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED<br>+ * OF THE POSSIBILITY OF SUCH DAMAGE.<br>+ *<br>+ * Alternatively, the contents of this file may be used under the terms of<br>+ * the GNU General Public License ("GPL") version 2 or any later version,<br>+ * in which case the provisions of the GPL are applicable instead of<br>+ * the above. 
If you wish to allow the use of your version of this file<br>+ * only under the terms of the GPL and not to allow others to use your<br>+ * version of this file under the BSD license, indicate your decision<br>+ * by deleting the provisions above and replace them with the notice<br>+ * and other provisions required by the GPL. If you do not delete the<br>+ * provisions above, a recipient may use your version of this file under<br>+ * either the BSD or the GPL.<br>+ */<br>+<br>+/*<br>+ * general notes about linux io_uring:<br>+ *<br>+ * a) it's the best interface I have seen so far. on linux.<br>+ * b) best is not necessarily very good.<br>+ * c) it's better than the aio mess, doesn't suffer from the fork problems<br>+ * of linux aio or epoll and so on and so on. and you could do event stuff<br>+ * without any syscalls. what's not to like?<br>+ * d) ok, it's vastly more complex, but that's ok, really.<br>+ * e) why two mmaps instead of one? one would be more space-efficient,<br>+ * and I can't see what benefit two would have (other than being<br>+ * somehow resizable/relocatable, but that's apparently not possible).<br>+ * f) hmm, it's practically undebuggable (gdb can't access the memory, and<br>+ * the bizarre way structure offsets are communicated makes it hard to<br>+ * just print the ring buffer heads, even *iff* the memory were visible<br>+ * in gdb. but then, that's also ok, really.<br>+ * g) well, you cannot specify a timeout when waiting for events. no,<br>+ * seriously, the interface doesn't support a timeout. never seen _that_<br>+ * before. sure, you can use a timerfd, but that's another syscall<br>+ * you could have avoided. overall, this bizarre omission smells<br>+ * like a µ-optimisation by the io_uring author for his personal<br>+ * applications, to the detriment of everybody else who just wants<br>+ * an event loop. 
but, umm, ok, if that's all, it could be worse.<br>+ * (from what I gather from the author Jens Axboe, it simply didn't<br>+ * occur to him, and he made good on it by adding an unlimited number<br>+ * of timeouts later :).<br>+ * h) initially there was a hardcoded limit of 4096 outstanding events.<br>+ * later versions not only bump this to 32k, but also can handle<br>+ * an unlimited amount of events, so this only affects the batch size.<br>+ * i) unlike linux aio, you *can* register more than the limit<br>+ * of fd events. while early versions of io_uring signalled an overflow<br>+ * and you ended up getting wet. 5.5+ does not do this anymore.<br>+ * j) but, oh my! it had exactly the same bugs as the linux aio backend,<br>+ * where some undocumented poll combinations just fail. fortunately,<br>+ * after finally reaching the author, he was more than willing to fix<br>+ * this probably in 5.6+.<br>+ * k) overall, the *API* itself is, I dare to say, not a total trainwreck.<br>+ * once the bugs are fixed (probably in 5.6+), it will be without<br>+ * competition.<br>+ */<br>+<br>+/* TODO: use internal TIMEOUT */<br>+/* TODO: take advantage of single mmap, NODROP etc. 
*/<br>+/* TODO: resize cq/sq size independently */<br>+<br>+#include <sys/timerfd.h><br>+#include <sys/mman.h><br>+#include <poll.h><br>+#include <stdint.h><br>+<br>+#define IOURING_INIT_ENTRIES 32<br>+<br>+/*****************************************************************************/<br>+/* syscall wrapdadoop - this section has the raw api/abi definitions */<br>+<br>+#include <linux/fs.h><br>+#include <linux/types.h><br>+<br>+/* mostly directly taken from the kernel or documentation */<br>+<br>+struct io_uring_sqe<br>+{<br>+ __u8 opcode;<br>+ __u8 flags;<br>+ __u16 ioprio;<br>+ __s32 fd;<br>+ union {<br>+ __u64 off;<br>+ __u64 addr2;<br>+ };<br>+ __u64 addr;<br>+ __u32 len;<br>+ union {<br>+ __kernel_rwf_t rw_flags;<br>+ __u32 fsync_flags;<br>+ __u16 poll_events;<br>+ __u32 sync_range_flags;<br>+ __u32 msg_flags;<br>+ __u32 timeout_flags;<br>+ __u32 accept_flags;<br>+ __u32 cancel_flags;<br>+ __u32 open_flags;<br>+ __u32 statx_flags;<br>+ };<br>+ __u64 user_data;<br>+ union {<br>+ __u16 buf_index;<br>+ __u64 __pad2[3];<br>+ };<br>+};<br>+<br>+struct io_uring_cqe<br>+{<br>+ __u64 user_data;<br>+ __s32 res;<br>+ __u32 flags;<br>+};<br>+<br>+struct io_sqring_offsets<br>+{<br>+ __u32 head;<br>+ __u32 tail;<br>+ __u32 ring_mask;<br>+ __u32 ring_entries;<br>+ __u32 flags;<br>+ __u32 dropped;<br>+ __u32 array;<br>+ __u32 resv1;<br>+ __u64 resv2;<br>+};<br>+<br>+struct io_cqring_offsets<br>+{<br>+ __u32 head;<br>+ __u32 tail;<br>+ __u32 ring_mask;<br>+ __u32 ring_entries;<br>+ __u32 overflow;<br>+ __u32 cqes;<br>+ __u64 resv[2];<br>+};<br>+<br>+struct io_uring_params<br>+{<br>+ __u32 sq_entries;<br>+ __u32 cq_entries;<br>+ __u32 flags;<br>+ __u32 sq_thread_cpu;<br>+ __u32 sq_thread_idle;<br>+ __u32 features;<br>+ __u32 resv[4];<br>+ struct io_sqring_offsets sq_off;<br>+ struct io_cqring_offsets cq_off;<br>+};<br>+<br>+#define IORING_SETUP_CQSIZE 0x00000008<br>+<br>+#define IORING_OP_POLL_ADD 6<br>+#define IORING_OP_POLL_REMOVE 7<br>+#define IORING_OP_TIMEOUT 
11<br>+#define IORING_OP_TIMEOUT_REMOVE 12<br>+<br>+/* relative or absolute, reference clock is CLOCK_MONOTONIC */<br>+struct iouring_kernel_timespec<br>+{<br>+ int64_t tv_sec;<br>+ long long tv_nsec;<br>+};<br>+<br>+#define IORING_TIMEOUT_ABS 0x00000001<br>+<br>+#define IORING_ENTER_GETEVENTS 0x01<br>+<br>+#define IORING_OFF_SQ_RING 0x00000000ULL<br>+#define IORING_OFF_CQ_RING 0x08000000ULL<br>+#define IORING_OFF_SQES 0x10000000ULL<br>+<br>+#define IORING_FEAT_SINGLE_MMAP 0x00000001<br>+#define IORING_FEAT_NODROP 0x00000002<br>+#define IORING_FEAT_SUBMIT_STABLE 0x00000004<br>+<br>+inline_size<br>+int<br>+evsys_io_uring_setup (unsigned entries, struct io_uring_params *params)<br>+{<br>+ return ev_syscall2 (SYS_io_uring_setup, entries, params);<br>+}<br>+<br>+inline_size<br>+int<br>+evsys_io_uring_enter (int fd, unsigned to_submit, unsigned min_complete, unsigned flags, const sigset_t *sig, size_t sigsz)<br>+{<br>+ return ev_syscall6 (SYS_io_uring_enter, fd, to_submit, min_complete, flags, sig, sigsz);<br>+}<br>+<br>+/*****************************************************************************/<br>+/* actual backend implementation */<br>+<br>+/* we hope that volatile will make the compiler access these variables only once */<br>+#define EV_SQ_VAR(name) *(volatile unsigned *)((char *)iouring_sq_ring + iouring_sq_ ## name)<br>+#define EV_CQ_VAR(name) *(volatile unsigned *)((char *)iouring_cq_ring + iouring_cq_ ## name)<br>+<br>+/* the index array */<br>+#define EV_SQ_ARRAY ((unsigned *)((char *)iouring_sq_ring + iouring_sq_array))<br>+<br>+/* the submit/completion queue entries */<br>+#define EV_SQES ((struct io_uring_sqe *) iouring_sqes)<br>+#define EV_CQES ((struct io_uring_cqe *)((char *)iouring_cq_ring + iouring_cq_cqes))<br>+<br>+inline_speed<br>+int<br>+iouring_enter (EV_P_ ev_tstamp timeout)<br>+{<br>+ int res;<br>+<br>+ EV_RELEASE_CB;<br>+<br>+ res = evsys_io_uring_enter (iouring_fd, iouring_to_submit, 1,<br>+ timeout > EV_TS_CONST (0.) ? 
IORING_ENTER_GETEVENTS : 0, 0, 0);<br>+<br>+ assert (("libev: io_uring_enter did not consume all sqes", (res < 0 || res == iouring_to_submit)));<br>+<br>+ iouring_to_submit = 0;<br>+<br>+ EV_ACQUIRE_CB;<br>+<br>+ return res;<br>+}<br>+<br>+/* TODO: can we move things around so we don't need this forward-reference? */<br>+static void<br>+iouring_poll (EV_P_ ev_tstamp timeout);<br>+<br>+static<br>+struct io_uring_sqe *<br>+iouring_sqe_get (EV_P)<br>+{<br>+ unsigned tail;<br>+ <br>+ for (;;)<br>+ {<br>+ tail = EV_SQ_VAR (tail);<br>+<br>+ if (ecb_expect_true (tail + 1 - EV_SQ_VAR (head) <= EV_SQ_VAR (ring_entries)))<br>+ break; /* whats the problem, we have free sqes */<br>+<br>+ /* queue full, need to flush and possibly handle some events */<br>+<br>+#if EV_FEATURE_CODE<br>+ /* first we ask the kernel nicely, most often this frees up some sqes */<br>+ int res = iouring_enter (EV_A_ EV_TS_CONST (0.));<br>+<br>+ ECB_MEMORY_FENCE_ACQUIRE; /* better safe than sorry */<br>+<br>+ if (res >= 0)<br>+ continue; /* yes, it worked, try again */<br>+#endif<br>+<br>+ /* some problem, possibly EBUSY - do the full poll and let it handle any issues */<br>+<br>+ iouring_poll (EV_A_ EV_TS_CONST (0.));<br>+ /* iouring_poll should have done ECB_MEMORY_FENCE_ACQUIRE for us */<br>+ }<br>+<br>+ /*assert (("libev: io_uring queue full after flush", tail + 1 - EV_SQ_VAR (head) <= EV_SQ_VAR (ring_entries)));*/<br>+<br>+ return EV_SQES + (tail & EV_SQ_VAR (ring_mask));<br>+}<br>+<br>+inline_size<br>+struct io_uring_sqe *<br>+iouring_sqe_submit (EV_P_ struct io_uring_sqe *sqe)<br>+{<br>+ unsigned idx = sqe - EV_SQES;<br>+<br>+ EV_SQ_ARRAY [idx] = idx;<br>+ ECB_MEMORY_FENCE_RELEASE;<br>+ ++EV_SQ_VAR (tail);<br>+ /*ECB_MEMORY_FENCE_RELEASE; /* for the time being we assume this is not needed */<br>+ ++iouring_to_submit;<br>+}<br>+<br>+/*****************************************************************************/<br>+<br>+/* when the timerfd expires we simply note the fact,<br>+ * as the purpose of 
the timerfd is to wake us up, nothing else.<br>+ * the next iteration should re-set it.<br>+ */<br>+static void<br>+iouring_tfd_cb (EV_P_ struct ev_io *w, int revents)<br>+{<br>+ iouring_tfd_to = EV_TSTAMP_HUGE;<br>+}<br>+<br>+/* called for full and partial cleanup */<br>+ecb_cold<br>+static int<br>+iouring_internal_destroy (EV_P)<br>+{<br>+ close (iouring_tfd);<br>+ close (iouring_fd);<br>+<br>+ if (iouring_sq_ring != MAP_FAILED) munmap (iouring_sq_ring, iouring_sq_ring_size);<br>+ if (iouring_cq_ring != MAP_FAILED) munmap (iouring_cq_ring, iouring_cq_ring_size);<br>+ if (iouring_sqes != MAP_FAILED) munmap (iouring_sqes , iouring_sqes_size );<br>+<br>+ if (ev_is_active (&iouring_tfd_w))<br>+ {<br>+ ev_ref (EV_A);<br>+ ev_io_stop (EV_A_ &iouring_tfd_w);<br>+ }<br>+}<br>+<br>+ecb_cold<br>+static int<br>+iouring_internal_init (EV_P)<br>+{<br>+ struct io_uring_params params = { 0 };<br>+<br>+ iouring_to_submit = 0;<br>+<br>+ iouring_tfd = -1;<br>+ iouring_sq_ring = MAP_FAILED;<br>+ iouring_cq_ring = MAP_FAILED;<br>+ iouring_sqes = MAP_FAILED;<br>+<br>+ if (!have_monotonic) /* cannot really happen, but what if11 */<br>+ return -1;<br>+<br>+ for (;;)<br>+ {<br>+ iouring_fd = evsys_io_uring_setup (iouring_entries, &params);<br>+<br>+ if (iouring_fd >= 0)<br>+ break; /* yippie */<br>+<br>+ if (errno != EINVAL)<br>+ return -1; /* we failed */<br>+<br>+#if TODO<br>+ if ((~params.features) & (IORING_FEAT_NODROP | IORING_FEATURE_SINGLE_MMAP | IORING_FEAT_SUBMIT_STABLE))<br>+ return -1; /* we require the above features */<br>+#endif<br>+<br>+ /* EINVAL: lots of possible reasons, but maybe<br>+ * it is because we hit the unqueryable hardcoded size limit<br>+ */<br>+<br>+ /* we hit the limit already, give up */<br>+ if (iouring_max_entries)<br>+ return -1;<br>+<br>+ /* first time we hit EINVAL? 
assume we hit the limit, so go back and retry */<br>+ iouring_entries >>= 1;<br>+ iouring_max_entries = iouring_entries;<br>+ }<br>+<br>+ iouring_sq_ring_size = params.sq_off.array + params.sq_entries * sizeof (unsigned);<br>+ iouring_cq_ring_size = params.cq_off.cqes + params.cq_entries * sizeof (struct io_uring_cqe);<br>+ iouring_sqes_size = params.sq_entries * sizeof (struct io_uring_sqe);<br>+<br>+ iouring_sq_ring = mmap (0, iouring_sq_ring_size, PROT_READ | PROT_WRITE,<br>+ MAP_SHARED | MAP_POPULATE, iouring_fd, IORING_OFF_SQ_RING);<br>+ iouring_cq_ring = mmap (0, iouring_cq_ring_size, PROT_READ | PROT_WRITE,<br>+ MAP_SHARED | MAP_POPULATE, iouring_fd, IORING_OFF_CQ_RING);<br>+ iouring_sqes = mmap (0, iouring_sqes_size, PROT_READ | PROT_WRITE,<br>+ MAP_SHARED | MAP_POPULATE, iouring_fd, IORING_OFF_SQES);<br>+<br>+ if (iouring_sq_ring == MAP_FAILED || iouring_cq_ring == MAP_FAILED || iouring_sqes == MAP_FAILED)<br>+ return -1;<br>+<br>+ iouring_sq_head = params.sq_off.head;<br>+ iouring_sq_tail = params.sq_off.tail;<br>+ iouring_sq_ring_mask = params.sq_off.ring_mask;<br>+ iouring_sq_ring_entries = params.sq_off.ring_entries;<br>+ iouring_sq_flags = params.sq_off.flags;<br>+ iouring_sq_dropped = params.sq_off.dropped;<br>+ iouring_sq_array = params.sq_off.array;<br>+<br>+ iouring_cq_head = params.cq_off.head;<br>+ iouring_cq_tail = params.cq_off.tail;<br>+ iouring_cq_ring_mask = params.cq_off.ring_mask;<br>+ iouring_cq_ring_entries = params.cq_off.ring_entries;<br>+ iouring_cq_overflow = params.cq_off.overflow;<br>+ iouring_cq_cqes = params.cq_off.cqes;<br>+<br>+ iouring_tfd = timerfd_create (CLOCK_MONOTONIC, TFD_CLOEXEC);<br>+<br>+ if (iouring_tfd < 0)<br>+ return iouring_tfd;<br>+<br>+ iouring_tfd_to = EV_TSTAMP_HUGE;<br>+<br>+ return 0;<br>+}<br>+<br>+ecb_cold<br>+static void<br>+iouring_fork (EV_P)<br>+{<br>+ iouring_internal_destroy (EV_A);<br>+<br>+ while (iouring_internal_init (EV_A) < 0)<br>+ ev_syserr ("(libev) io_uring_setup");<br>+<br>+ fd_rearm_all 
(EV_A);<br>+<br>+ ev_io_stop (EV_A_ &iouring_tfd_w);<br>+ ev_io_set (EV_A_ &iouring_tfd_w, iouring_tfd, EV_READ);<br>+ ev_io_start (EV_A_ &iouring_tfd_w);<br>+}<br>+<br>+/*****************************************************************************/<br>+<br>+static void<br>+iouring_modify (EV_P_ int fd, int oev, int nev)<br>+{<br>+ if (oev)<br>+ {<br>+ /* we assume the sqe's are all "properly" initialised */<br>+ struct io_uring_sqe *sqe = iouring_sqe_get (EV_A);<br>+ sqe->opcode = IORING_OP_POLL_REMOVE;<br>+ sqe->fd = fd;<br>+ /* Jens Axboe notified me that user_data is not what is documented, but is<br>+ * some kind of unique ID that has to match, otherwise the request cannot<br>+ * be removed. Since we don't *really* have that, we pass in the old<br>+ * generation counter - if that fails, too bad, it will hopefully be removed<br>+ * at close time and then be ignored. */<br>+ sqe->addr = (uint32_t)fd | ((__u64)(uint32_t)anfds [fd].egen << 32);<br>+ sqe->user_data = (uint64_t)-1;<br>+ iouring_sqe_submit (EV_A_ sqe);<br>+<br>+ /* increment generation counter to avoid handling old events */<br>+ ++anfds [fd].egen;<br>+ }<br>+<br>+ if (nev)<br>+ {<br>+ struct io_uring_sqe *sqe = iouring_sqe_get (EV_A);<br>+ sqe->opcode = IORING_OP_POLL_ADD;<br>+ sqe->fd = fd;<br>+ sqe->addr = 0;<br>+ sqe->user_data = (uint32_t)fd | ((__u64)(uint32_t)anfds [fd].egen << 32);<br>+ sqe->poll_events =<br>+ (nev & EV_READ ? POLLIN : 0)<br>+ | (nev & EV_WRITE ? 
POLLOUT : 0);<br>+ iouring_sqe_submit (EV_A_ sqe);<br>+ }<br>+}<br>+<br>+inline_size<br>+void<br>+iouring_tfd_update (EV_P_ ev_tstamp timeout)<br>+{<br>+ ev_tstamp tfd_to = mn_now + timeout;<br>+<br>+ /* we assume there will be many iterations per timer change, so<br>+ * we only re-set the timerfd when we have to because its expiry<br>+ * is too late.<br>+ */<br>+ if (ecb_expect_false (tfd_to < iouring_tfd_to))<br>+ {<br>+ struct itimerspec its;<br>+<br>+ iouring_tfd_to = tfd_to;<br>+ EV_TS_SET (its.it_interval, 0.);<br>+ EV_TS_SET (its.it_value, tfd_to);<br>+<br>+ if (timerfd_settime (iouring_tfd, TFD_TIMER_ABSTIME, &its, 0) < 0)<br>+ assert (("libev: iouring timerfd_settime failed", 0));<br>+ }<br>+}<br>+<br>+inline_size<br>+void<br>+iouring_process_cqe (EV_P_ struct io_uring_cqe *cqe)<br>+{<br>+ int fd = cqe->user_data & 0xffffffffU;<br>+ uint32_t gen = cqe->user_data >> 32;<br>+ int res = cqe->res;<br>+<br>+ /* user_data -1 is a remove that we are not atm. interested in */<br>+ if (cqe->user_data == (uint64_t)-1)<br>+ return;<br>+<br>+ assert (("libev: io_uring fd must be in-bounds", fd >= 0 && fd < anfdmax));<br>+<br>+ /* documentation lies, of course. the result value is NOT like<br>+ * normal syscalls, but like linux raw syscalls, i.e. negative<br>+ * error numbers. fortunate, as otherwise there would be no way<br>+ * to get error codes at all. 
still, why not document this?<br>+ */<br>+<br>+ /* ignore event if generation doesn't match */<br>+ /* other than skipping removal events, */<br>+ /* this should actually be very rare */<br>+ if (ecb_expect_false (gen != (uint32_t)anfds [fd].egen))<br>+ return;<br>+<br>+ if (ecb_expect_false (res < 0))<br>+ {<br>+ /*TODO: EINVAL handling (was something failed with this fd)*/<br>+<br>+ if (res == -EBADF)<br>+ {<br>+ assert (("libev: event loop rejected bad fd", res != -EBADF));<br>+ fd_kill (EV_A_ fd);<br>+ }<br>+ else<br>+ {<br>+ errno = -res;<br>+ ev_syserr ("(libev) IORING_OP_POLL_ADD");<br>+ }<br>+<br>+ return;<br>+ }<br>+<br>+ /* feed events, we do not expect or handle POLLNVAL */<br>+ fd_event (<br>+ EV_A_<br>+ fd,<br>+ (res & (POLLOUT | POLLERR | POLLHUP) ? EV_WRITE : 0)<br>+ | (res & (POLLIN | POLLERR | POLLHUP) ? EV_READ : 0)<br>+ );<br>+<br>+ /* io_uring is oneshot, so we need to re-arm the fd next iteration */<br>+ /* this also means we usually have to do at least one syscall per iteration */<br>+ anfds [fd].events = 0;<br>+ fd_change (EV_A_ fd, EV_ANFD_REIFY);<br>+}<br>+<br>+/* called when the event queue overflows */<br>+ecb_cold<br>+static void<br>+iouring_overflow (EV_P)<br>+{<br>+ /* we have two options, resize the queue (by tearing down<br>+ * everything and recreating it, or living with it<br>+ * and polling.<br>+ * we implement this by resizing the queue, and, if that fails,<br>+ * we just recreate the state on every failure, which<br>+ * kind of is a very inefficient poll.<br>+ * one danger is, due to the bios toward lower fds,<br>+ * we will only really get events for those, so<br>+ * maybe we need a poll() fallback, after all.<br>+ */<br>+ /*EV_CQ_VAR (overflow) = 0;*/ /* need to do this if we keep the state and poll manually */<br>+<br>+ fd_rearm_all (EV_A);<br>+<br>+ /* we double the size until we hit the hard-to-probe maximum */<br>+ if (!iouring_max_entries)<br>+ {<br>+ iouring_entries <<= 1;<br>+ iouring_fork (EV_A);<br>+ }<br>+ else<br>+ 
{<br>+ /* we hit the kernel limit, we should fall back to something else.<br>+ * we can either poll() a few times and hope for the best,<br>+ * poll always, or switch to epoll.<br>+ * TODO: is this necessary with newer kernels?<br>+ */<br>+<br>+ iouring_internal_destroy (EV_A);<br>+<br>+ /* this should make it so that on return, we don't call any uring functions */<br>+ iouring_to_submit = 0;<br>+<br>+ for (;;)<br>+ {<br>+ backend = epoll_init (EV_A_ 0);<br>+<br>+ if (backend)<br>+ break;<br>+<br>+ ev_syserr ("(libev) iouring switch to epoll");<br>+ }<br>+ }<br>+}<br>+<br>+/* handle any events in the completion queue, return true if there were any */<br>+static int<br>+iouring_handle_cq (EV_P)<br>+{<br>+ unsigned head, tail, mask;<br>+ <br>+ head = EV_CQ_VAR (head);<br>+ ECB_MEMORY_FENCE_ACQUIRE;<br>+ tail = EV_CQ_VAR (tail);<br>+<br>+ if (head == tail)<br>+ return 0;<br>+<br>+ /* it can only overflow if we have events, yes, yes? */<br>+ if (ecb_expect_false (EV_CQ_VAR (overflow)))<br>+ {<br>+ iouring_overflow (EV_A);<br>+ return 1;<br>+ }<br>+<br>+ mask = EV_CQ_VAR (ring_mask);<br>+<br>+ do<br>+ iouring_process_cqe (EV_A_ &EV_CQES [head++ & mask]);<br>+ while (head != tail);<br>+<br>+ EV_CQ_VAR (head) = head;<br>+ ECB_MEMORY_FENCE_RELEASE;<br>+<br>+ return 1;<br>+}<br>+<br>+static void<br>+iouring_poll (EV_P_ ev_tstamp timeout)<br>+{<br>+ /* if we have events, no need for extra syscalls, but we might have to queue events */<br>+ /* we also clar the timeout if there are outstanding fdchanges */<br>+ /* the latter should only happen if both the sq and cq are full, most likely */<br>+ /* because we have a lot of event sources that immediately complete */<br>+ /* TODO: fdchacngecnt is always 0 because fd_reify does not have two buffers yet */<br>+ if (iouring_handle_cq (EV_A) || fdchangecnt)<br>+ timeout = EV_TS_CONST (0.);<br>+ else<br>+ /* no events, so maybe wait for some */<br>+ iouring_tfd_update (EV_A_ timeout);<br>+<br>+ /* only enter the kernel if we have 
something to submit, or we need to wait */<br>+ if (timeout || iouring_to_submit)<br>+ {<br>+ int res = iouring_enter (EV_A_ timeout);<br>+<br>+ if (ecb_expect_false (res < 0))<br>+ if (errno == EINTR)<br>+ /* ignore */;<br>+ else if (errno == EBUSY)<br>+ /* cq full, cannot submit - should be rare because we flush the cq first, so simply ignore */;<br>+ else<br>+ ev_syserr ("(libev) iouring setup");<br>+ else<br>+ iouring_handle_cq (EV_A);<br>+ }<br>+}<br>+<br>+inline_size<br>+int<br>+iouring_init (EV_P_ int flags)<br>+{<br>+ iouring_entries = IOURING_INIT_ENTRIES;<br>+ iouring_max_entries = 0;<br>+<br>+ if (iouring_internal_init (EV_A) < 0)<br>+ {<br>+ iouring_internal_destroy (EV_A);<br>+ return 0;<br>+ }<br>+<br>+ ev_io_init (&iouring_tfd_w, iouring_tfd_cb, iouring_tfd, EV_READ);<br>+ ev_set_priority (&iouring_tfd_w, EV_MINPRI);<br>+ ev_io_start (EV_A_ &iouring_tfd_w);<br>+ ev_unref (EV_A); /* watcher should not keep loop alive */<br>+<br>+ backend_modify = iouring_modify;<br>+ backend_poll = iouring_poll;<br>+<br>+ return EVBACKEND_IOURING;<br>+}<br>+<br>+inline_size<br>+void<br>+iouring_destroy (EV_P)<br>+{<br>+ iouring_internal_destroy (EV_A);<br>+}<br>+<br>diff --git a/third_party/libev/ev_kqueue.c b/third_party/libev/ev_kqueue.c<br>index 0c05ab9e7..69c5147f1 100644<br>--- a/third_party/libev/ev_kqueue.c<br>+++ b/third_party/libev/ev_kqueue.c<br>@@ -1,7 +1,7 @@<br> /*<br> * libev kqueue backend<br> *<br>- * Copyright (c) 2007,2008,2009,2010,2011,2012,2013 Marc Alexander Lehmann <libev@schmorp.de><br>+ * Copyright (c) 2007,2008,2009,2010,2011,2012,2013,2016,2019 Marc Alexander Lehmann <libev@schmorp.de><br> * All rights reserved.<br> *<br> * Redistribution and use in source and binary forms, with or without modifica-<br>@@ -48,7 +48,7 @@ void<br> kqueue_change (EV_P_ int fd, int filter, int flags, int fflags)<br> {<br> ++kqueue_changecnt;<br>- array_needsize (struct kevent, kqueue_changes, kqueue_changemax, kqueue_changecnt, EMPTY2);<br>+ array_needsize 
(struct kevent, kqueue_changes, kqueue_changemax, kqueue_changecnt, array_needsize_noinit);<br> <br> EV_SET (&kqueue_changes [kqueue_changecnt - 1], fd, filter, flags, fflags, 0, 0);<br> }<br>@@ -103,10 +103,10 @@ kqueue_poll (EV_P_ ev_tstamp timeout)<br> EV_ACQUIRE_CB;<br> kqueue_changecnt = 0;<br> <br>- if (expect_false (res < 0))<br>+ if (ecb_expect_false (res < 0))<br> {<br> if (errno != EINTR)<br>- ev_syserr ("(libev) kevent");<br>+ ev_syserr ("(libev) kqueue kevent");<br> <br> return;<br> }<br>@@ -115,7 +115,7 @@ kqueue_poll (EV_P_ ev_tstamp timeout)<br> {<br> int fd = kqueue_events [i].ident;<br> <br>- if (expect_false (kqueue_events [i].flags & EV_ERROR))<br>+ if (ecb_expect_false (kqueue_events [i].flags & EV_ERROR))<br> {<br> int err = kqueue_events [i].data;<br> <br>@@ -129,10 +129,16 @@ kqueue_poll (EV_P_ ev_tstamp timeout)<br> if (fd_valid (fd))<br> kqueue_modify (EV_A_ fd, 0, anfds [fd].events);<br> else<br>- fd_kill (EV_A_ fd);<br>+ {<br>+ assert (("libev: kqueue found invalid fd", 0));<br>+ fd_kill (EV_A_ fd);<br>+ }<br> }<br> else /* on all other errors, we error out on the fd */<br>- fd_kill (EV_A_ fd);<br>+ {<br>+ assert (("libev: kqueue found invalid fd", 0));<br>+ fd_kill (EV_A_ fd);<br>+ }<br> }<br> }<br> else<br>@@ -145,7 +151,7 @@ kqueue_poll (EV_P_ ev_tstamp timeout)<br> );<br> }<br> <br>- if (expect_false (res == kqueue_eventmax))<br>+ if (ecb_expect_false (res == kqueue_eventmax))<br> {<br> ev_free (kqueue_events);<br> kqueue_eventmax = array_nextsize (sizeof (struct kevent), kqueue_eventmax, kqueue_eventmax + 1);<br>@@ -164,7 +170,7 @@ kqueue_init (EV_P_ int flags)<br> <br> fcntl (backend_fd, F_SETFD, FD_CLOEXEC); /* not sure if necessary, hopefully doesn't hurt */<br> <br>- backend_mintime = 1e-9; /* apparently, they did the right thing in freebsd */<br>+ backend_mintime = EV_TS_CONST (1e-9); /* apparently, they did the right thing in freebsd */<br> backend_modify = kqueue_modify;<br> backend_poll = kqueue_poll;<br> <br>diff --git 
a/third_party/libev/ev_linuxaio.c b/third_party/libev/ev_linuxaio.c<br>new file mode 100644<br>index 000000000..4687a703e<br>--- /dev/null<br>+++ b/third_party/libev/ev_linuxaio.c<br>@@ -0,0 +1,620 @@<br>+/*<br>+ * libev linux aio fd activity backend<br>+ *<br>+ * Copyright (c) 2019 Marc Alexander Lehmann <libev@schmorp.de><br>+ * All rights reserved.<br>+ *<br>+ * Redistribution and use in source and binary forms, with or without modifica-<br>+ * tion, are permitted provided that the following conditions are met:<br>+ *<br>+ * 1. Redistributions of source code must retain the above copyright notice,<br>+ * this list of conditions and the following disclaimer.<br>+ *<br>+ * 2. Redistributions in binary form must reproduce the above copyright<br>+ * notice, this list of conditions and the following disclaimer in the<br>+ * documentation and/or other materials provided with the distribution.<br>+ *<br>+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED<br>+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER-<br>+ * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO<br>+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE-<br>+ * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,<br>+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;<br>+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,<br>+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH-<br>+ * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED<br>+ * OF THE POSSIBILITY OF SUCH DAMAGE.<br>+ *<br>+ * Alternatively, the contents of this file may be used under the terms of<br>+ * the GNU General Public License ("GPL") version 2 or any later version,<br>+ * in which case the provisions of the GPL are applicable instead of<br>+ * the above. 
If you wish to allow the use of your version of this file<br>+ * only under the terms of the GPL and not to allow others to use your<br>+ * version of this file under the BSD license, indicate your decision<br>+ * by deleting the provisions above and replace them with the notice<br>+ * and other provisions required by the GPL. If you do not delete the<br>+ * provisions above, a recipient may use your version of this file under<br>+ * either the BSD or the GPL.<br>+ */<br>+<br>+/*<br>+ * general notes about linux aio:<br>+ *<br>+ * a) at first, the linux aio IOCB_CMD_POLL functionality introduced in<br>+ * 4.18 looks too good to be true: both watchers and events can be<br>+ * batched, and events can even be handled in userspace using<br>+ * a ring buffer shared with the kernel. watchers can be canceled<br>+ * regardless of whether the fd has been closed. no problems with fork.<br>+ * ok, the ring buffer is 200% undocumented (there isn't even a<br>+ * header file), but otherwise, it's pure bliss!<br>+ * b) ok, watchers are one-shot, so you have to re-arm active ones<br>+ * on every iteration. so much for syscall-less event handling,<br>+ * but at least these re-arms can be batched, no big deal, right?<br>+ * c) well, linux as usual: the documentation lies to you: io_submit<br>+ * sometimes returns EINVAL because the kernel doesn't feel like<br>+ * handling your poll mask - ttys can be polled for POLLOUT,<br>+ * POLLOUT|POLLIN, but polling for POLLIN fails. just great,<br>+ * so we have to fall back to something else (hello, epoll),<br>+ * but at least the fallback can be slow, because these are<br>+ * exceptional cases, right?<br>+ * d) hmm, you have to tell the kernel the maximum number of watchers<br>+ * you want to queue when initialising the aio context. but of<br>+ * course the real limit is magically calculated in the kernel, and<br>+ * is often higher then we asked for. 
so we just have to destroy<br>+ * the aio context and re-create it a bit larger if we hit the limit.<br>+ * (starts to remind you of epoll? well, it's a bit more deterministic<br>+ * and less gambling, but still ugly as hell).<br>+ * e) that's when you find out you can also hit an arbitrary system-wide<br>+ * limit. or the kernel simply doesn't want to handle your watchers.<br>+ * what the fuck do we do then? you guessed it, in the middle<br>+ * of event handling we have to switch to 100% epoll polling. and<br>+ * that better is as fast as normal epoll polling, so you practically<br>+ * have to use the normal epoll backend with all its quirks.<br>+ * f) end result of this train wreck: it inherits all the disadvantages<br>+ * from epoll, while adding a number on its own. why even bother to use<br>+ * it? because if conditions are right and your fds are supported and you<br>+ * don't hit a limit, this backend is actually faster, doesn't gamble with<br>+ * your fds, batches watchers and events and doesn't require costly state<br>+ * recreates. well, until it does.<br>+ * g) all of this makes this backend use almost twice as much code as epoll.<br>+ * which in turn uses twice as much code as poll. and that#s not counting<br>+ * the fact that this backend also depends on the epoll backend, making<br>+ * it three times as much code as poll, or kqueue.<br>+ * h) bleah. why can't linux just do kqueue. 
sure kqueue is ugly, but by now<br>+ * it's clear that whatever linux comes up with is far, far, far worse.<br>+ */<br>+<br>+#include <sys/time.h> /* actually linux/time.h, but we must assume they are compatible */<br>+#include <poll.h><br>+#include <linux/aio_abi.h><br>+<br>+/*****************************************************************************/<br>+/* syscall wrapdadoop - this section has the raw api/abi definitions */<br>+<br>+#include <sys/syscall.h> /* no glibc wrappers */<br>+<br>+/* aio_abi.h is not versioned in any way, so we cannot test for its existance */<br>+#define IOCB_CMD_POLL 5<br>+<br>+/* taken from linux/fs/aio.c. yup, that's a .c file.<br>+ * not only is this totally undocumented, not even the source code<br>+ * can tell you what the future semantics of compat_features and<br>+ * incompat_features are, or what header_length actually is for.<br>+ */<br>+#define AIO_RING_MAGIC 0xa10a10a1<br>+#define EV_AIO_RING_INCOMPAT_FEATURES 0<br>+struct aio_ring<br>+{<br>+ unsigned id; /* kernel internal index number */<br>+ unsigned nr; /* number of io_events */<br>+ unsigned head; /* Written to by userland or by kernel. 
*/<br>+ unsigned tail;<br>+<br>+ unsigned magic;<br>+ unsigned compat_features;<br>+ unsigned incompat_features;<br>+ unsigned header_length; /* size of aio_ring */<br>+<br>+ struct io_event io_events[0];<br>+};<br>+<br>+inline_size<br>+int<br>+evsys_io_setup (unsigned nr_events, aio_context_t *ctx_idp)<br>+{<br>+ return ev_syscall2 (SYS_io_setup, nr_events, ctx_idp);<br>+}<br>+<br>+inline_size<br>+int<br>+evsys_io_destroy (aio_context_t ctx_id)<br>+{<br>+ return ev_syscall1 (SYS_io_destroy, ctx_id);<br>+}<br>+<br>+inline_size<br>+int<br>+evsys_io_submit (aio_context_t ctx_id, long nr, struct iocb *cbp[])<br>+{<br>+ return ev_syscall3 (SYS_io_submit, ctx_id, nr, cbp);<br>+}<br>+<br>+inline_size<br>+int<br>+evsys_io_cancel (aio_context_t ctx_id, struct iocb *cbp, struct io_event *result)<br>+{<br>+ return ev_syscall3 (SYS_io_cancel, ctx_id, cbp, result);<br>+}<br>+<br>+inline_size<br>+int<br>+evsys_io_getevents (aio_context_t ctx_id, long min_nr, long nr, struct io_event *events, struct timespec *timeout)<br>+{<br>+ return ev_syscall5 (SYS_io_getevents, ctx_id, min_nr, nr, events, timeout);<br>+}<br>+<br>+/*****************************************************************************/<br>+/* actual backed implementation */<br>+<br>+ecb_cold<br>+static int<br>+linuxaio_nr_events (EV_P)<br>+{<br>+ /* we start with 16 iocbs and incraese from there<br>+ * that's tiny, but the kernel has a rather low system-wide<br>+ * limit that can be reached quickly, so let's be parsimonious<br>+ * with this resource.<br>+ * Rest assured, the kernel generously rounds up small and big numbers<br>+ * in different ways (but doesn't seem to charge you for it).<br>+ * The 15 here is because the kernel usually has a power of two as aio-max-nr,<br>+ * and this helps to take advantage of that limit.<br>+ */<br>+<br>+ /* we try to fill 4kB pages exactly.<br>+ * the ring buffer header is 32 bytes, every io event is 32 bytes.<br>+ * the kernel takes the io requests number, doubles it, adds 2<br>+ 
* and adds the ring buffer.<br>+ * the way we use this is by starting low, and then roughly doubling the<br>+ * size each time we hit a limit.<br>+ */<br>+<br>+ int requests = 15 << linuxaio_iteration;<br>+ int one_page = (4096<br>+ / sizeof (struct io_event) ) / 2; /* how many fit into one page */<br>+ int first_page = ((4096 - sizeof (struct aio_ring))<br>+ / sizeof (struct io_event) - 2) / 2; /* how many fit into the first page */<br>+<br>+ /* if everything fits into one page, use count exactly */<br>+ if (requests > first_page)<br>+ /* otherwise, round down to full pages and add the first page */<br>+ requests = requests / one_page * one_page + first_page;<br>+<br>+ return requests;<br>+}<br>+<br>+/* we use out own wrapper structure in case we ever want to do something "clever" */<br>+typedef struct aniocb<br>+{<br>+ struct iocb io;<br>+ /*int inuse;*/<br>+} *ANIOCBP;<br>+<br>+inline_size<br>+void<br>+linuxaio_array_needsize_iocbp (ANIOCBP *base, int offset, int count)<br>+{<br>+ while (count--)<br>+ {<br>+ /* TODO: quite the overhead to allocate every iocb separately, maybe use our own allocator? 
*/<br>+ ANIOCBP iocb = (ANIOCBP)ev_malloc (sizeof (*iocb));<br>+<br>+ /* full zero initialise is probably not required at the moment, but<br>+ * this is not well documented, so we better do it.<br>+ */<br>+ memset (iocb, 0, sizeof (*iocb));<br>+<br>+ iocb->io.aio_lio_opcode = IOCB_CMD_POLL;<br>+ iocb->io.aio_fildes = offset;<br>+<br>+ base [offset++] = iocb;<br>+ }<br>+}<br>+<br>+ecb_cold<br>+static void<br>+linuxaio_free_iocbp (EV_P)<br>+{<br>+ while (linuxaio_iocbpmax--)<br>+ ev_free (linuxaio_iocbps [linuxaio_iocbpmax]);<br>+<br>+ linuxaio_iocbpmax = 0; /* next resize will completely reallocate the array, at some overhead */<br>+}<br>+<br>+static void<br>+linuxaio_modify (EV_P_ int fd, int oev, int nev)<br>+{<br>+ array_needsize (ANIOCBP, linuxaio_iocbps, linuxaio_iocbpmax, fd + 1, linuxaio_array_needsize_iocbp);<br>+ ANIOCBP iocb = linuxaio_iocbps [fd];<br>+ ANFD *anfd = &anfds [fd];<br>+<br>+ if (ecb_expect_false (iocb->io.aio_reqprio < 0))<br>+ {<br>+ /* we handed this fd over to epoll, so undo this first */<br>+ /* we do it manually because the optimisations on epoll_modify won't do us any good */<br>+ epoll_ctl (backend_fd, EPOLL_CTL_DEL, fd, 0);<br>+ anfd->emask = 0;<br>+ iocb->io.aio_reqprio = 0;<br>+ }<br>+ else if (ecb_expect_false (iocb->io.aio_buf))<br>+ {<br>+ /* iocb active, so cancel it first before resubmit */<br>+ /* this assumes we only ever get one call per fd per loop iteration */<br>+ for (;;)<br>+ {<br>+ /* on all relevant kernels, io_cancel fails with EINPROGRESS on "success" */<br>+ if (ecb_expect_false (evsys_io_cancel (linuxaio_ctx, &iocb->io, (struct io_event *)0) == 0))<br>+ break;<br>+<br>+ if (ecb_expect_true (errno == EINPROGRESS))<br>+ break;<br>+<br>+ /* the EINPROGRESS test is for nicer error message. clumsy. 
*/<br>+ if (errno != EINTR)<br>+ {<br>+ assert (("libev: linuxaio unexpected io_cancel failed", errno != EINTR && errno != EINPROGRESS));<br>+ break;<br>+ }<br>+ }<br>+<br>+ /* increment generation counter to avoid handling old events */<br>+ ++anfd->egen;<br>+ }<br>+<br>+ iocb->io.aio_buf = (nev & EV_READ ? POLLIN : 0)<br>+ | (nev & EV_WRITE ? POLLOUT : 0);<br>+<br>+ if (nev)<br>+ {<br>+ iocb->io.aio_data = (uint32_t)fd | ((__u64)(uint32_t)anfd->egen << 32);<br>+<br>+ /* queue iocb up for io_submit */<br>+ /* this assumes we only ever get one call per fd per loop iteration */<br>+ ++linuxaio_submitcnt;<br>+ array_needsize (struct iocb *, linuxaio_submits, linuxaio_submitmax, linuxaio_submitcnt, array_needsize_noinit);<br>+ linuxaio_submits [linuxaio_submitcnt - 1] = &iocb->io;<br>+ }<br>+}<br>+<br>+static void<br>+linuxaio_epoll_cb (EV_P_ struct ev_io *w, int revents)<br>+{<br>+ epoll_poll (EV_A_ 0);<br>+}<br>+<br>+inline_speed<br>+void<br>+linuxaio_fd_rearm (EV_P_ int fd)<br>+{<br>+ anfds [fd].events = 0;<br>+ linuxaio_iocbps [fd]->io.aio_buf = 0;<br>+ fd_change (EV_A_ fd, EV_ANFD_REIFY);<br>+}<br>+<br>+static void<br>+linuxaio_parse_events (EV_P_ struct io_event *ev, int nr)<br>+{<br>+ while (nr)<br>+ {<br>+ int fd = ev->data & 0xffffffff;<br>+ uint32_t gen = ev->data >> 32;<br>+ int res = ev->res;<br>+<br>+ assert (("libev: iocb fd must be in-bounds", fd >= 0 && fd < anfdmax));<br>+<br>+ /* only accept events if generation counter matches */<br>+ if (ecb_expect_true (gen == (uint32_t)anfds [fd].egen))<br>+ {<br>+ /* feed events, we do not expect or handle POLLNVAL */<br>+ fd_event (<br>+ EV_A_<br>+ fd,<br>+ (res & (POLLOUT | POLLERR | POLLHUP) ? EV_WRITE : 0)<br>+ | (res & (POLLIN | POLLERR | POLLHUP) ? EV_READ : 0)<br>+ );<br>+<br>+ /* linux aio is oneshot: rearm fd. 
TODO: this does more work than strictly needed */<br>+ linuxaio_fd_rearm (EV_A_ fd);<br>+ }<br>+<br>+ --nr;<br>+ ++ev;<br>+ }<br>+}<br>+<br>+/* get any events from ring buffer, return true if any were handled */<br>+static int<br>+linuxaio_get_events_from_ring (EV_P)<br>+{<br>+ struct aio_ring *ring = (struct aio_ring *)linuxaio_ctx;<br>+ unsigned head, tail;<br>+<br>+ /* the kernel reads and writes both of these variables, */<br>+ /* as a C extension, we assume that volatile use here */<br>+ /* both makes reads atomic and once-only */<br>+ head = *(volatile unsigned *)&ring->head;<br>+ ECB_MEMORY_FENCE_ACQUIRE;<br>+ tail = *(volatile unsigned *)&ring->tail;<br>+<br>+ if (head == tail)<br>+ return 0;<br>+<br>+ /* parse all available events, but only once, to avoid starvation */<br>+ if (ecb_expect_true (tail > head)) /* normal case around */<br>+ linuxaio_parse_events (EV_A_ ring->io_events + head, tail - head);<br>+ else /* wrapped around */<br>+ {<br>+ linuxaio_parse_events (EV_A_ ring->io_events + head, ring->nr - head);<br>+ linuxaio_parse_events (EV_A_ ring->io_events, tail);<br>+ }<br>+<br>+ ECB_MEMORY_FENCE_RELEASE;<br>+ /* as an extension to C, we hope that the volatile will make this atomic and once-only */<br>+ *(volatile unsigned *)&ring->head = tail;<br>+<br>+ return 1;<br>+}<br>+<br>+inline_size<br>+int<br>+linuxaio_ringbuf_valid (EV_P)<br>+{<br>+ struct aio_ring *ring = (struct aio_ring *)linuxaio_ctx;<br>+<br>+ return ecb_expect_true (ring->magic == AIO_RING_MAGIC)<br>+ && ring->incompat_features == EV_AIO_RING_INCOMPAT_FEATURES<br>+ && ring->header_length == sizeof (struct aio_ring); /* TODO: or use it to find io_event[0]? 
*/<br>+}<br>+<br>+/* read at least one event from kernel, or timeout */<br>+inline_size<br>+void<br>+linuxaio_get_events (EV_P_ ev_tstamp timeout)<br>+{<br>+ struct timespec ts;<br>+ struct io_event ioev[8]; /* 256 octet stack space */<br>+ int want = 1; /* how many events to request */<br>+ int ringbuf_valid = linuxaio_ringbuf_valid (EV_A);<br>+<br>+ if (ecb_expect_true (ringbuf_valid))<br>+ {<br>+ /* if the ring buffer has any events, we don't wait or call the kernel at all */<br>+ if (linuxaio_get_events_from_ring (EV_A))<br>+ return;<br>+<br>+ /* if the ring buffer is empty, and we don't have a timeout, then don't call the kernel */<br>+ if (!timeout)<br>+ return;<br>+ }<br>+ else<br>+ /* no ringbuffer, request slightly larger batch */<br>+ want = sizeof (ioev) / sizeof (ioev [0]);<br>+<br>+ /* no events, so wait for some<br>+ * for fairness reasons, we do this in a loop, to fetch all events<br>+ */<br>+ for (;;)<br>+ {<br>+ int res;<br>+<br>+ EV_RELEASE_CB;<br>+<br>+ EV_TS_SET (ts, timeout);<br>+ res = evsys_io_getevents (linuxaio_ctx, 1, want, ioev, &ts);<br>+<br>+ EV_ACQUIRE_CB;<br>+<br>+ if (res < 0)<br>+ if (errno == EINTR)<br>+ /* ignored, retry */;<br>+ else<br>+ ev_syserr ("(libev) linuxaio io_getevents");<br>+ else if (res)<br>+ {<br>+ /* at least one event available, handle them */<br>+ linuxaio_parse_events (EV_A_ ioev, res);<br>+<br>+ if (ecb_expect_true (ringbuf_valid))<br>+ {<br>+ /* if we have a ring buffer, handle any remaining events in it */<br>+ linuxaio_get_events_from_ring (EV_A);<br>+<br>+ /* at this point, we should have handled all outstanding events */<br>+ break;<br>+ }<br>+ else if (res < want)<br>+ /* otherwise, if there were fewere events than we wanted, we assume there are no more */<br>+ break;<br>+ }<br>+ else<br>+ break; /* no events from the kernel, we are done */<br>+<br>+ timeout = EV_TS_CONST (0.); /* only wait in the first iteration */<br>+ }<br>+}<br>+<br>+inline_size<br>+int<br>+linuxaio_io_setup (EV_P)<br>+{<br>+ 
linuxaio_ctx = 0;<br>+ return evsys_io_setup (linuxaio_nr_events (EV_A), &linuxaio_ctx);<br>+}<br>+<br>+static void<br>+linuxaio_poll (EV_P_ ev_tstamp timeout)<br>+{<br>+ int submitted;<br>+<br>+ /* first phase: submit new iocbs */<br>+<br>+ /* io_submit might return less than the requested number of iocbs */<br>+ /* this is, afaics, only because of errors, but we go by the book and use a loop, */<br>+ /* which allows us to pinpoint the erroneous iocb */<br>+ for (submitted = 0; submitted < linuxaio_submitcnt; )<br>+ {<br>+ int res = evsys_io_submit (linuxaio_ctx, linuxaio_submitcnt - submitted, linuxaio_submits + submitted);<br>+<br>+ if (ecb_expect_false (res < 0))<br>+ if (errno == EINVAL)<br>+ {<br>+ /* This happens for unsupported fds, officially, but in my testing,<br>+ * also randomly happens for supported fds. We fall back to good old<br>+ * poll() here, under the assumption that this is a very rare case.<br>+ * See https://lore.kernel.org/patchwork/patch/1047453/ to see<br>+ * discussion about such a case (ttys) where polling for POLLIN<br>+ * fails but POLLIN|POLLOUT works.<br>+ */<br>+ struct iocb *iocb = linuxaio_submits [submitted];<br>+ epoll_modify (EV_A_ iocb->aio_fildes, 0, anfds [iocb->aio_fildes].events);<br>+ iocb->aio_reqprio = -1; /* mark iocb as epoll */<br>+<br>+ res = 1; /* skip this iocb - another iocb, another chance */<br>+ }<br>+ else if (errno == EAGAIN)<br>+ {<br>+ /* This happens when the ring buffer is full, or some other shit we<br>+ * don't know and isn't documented. Most likely because we have too<br>+ * many requests and linux aio can't be assed to handle them.<br>+ * In this case, we try to allocate a larger ring buffer, freeing<br>+ * ours first. This might fail, in which case we have to fall back to 100%<br>+ * epoll.<br>+ * God, how I hate linux not getting its act together. 
Ever.<br>+ */<br>+ evsys_io_destroy (linuxaio_ctx);<br>+ linuxaio_submitcnt = 0;<br>+<br>+ /* rearm all fds with active iocbs */<br>+ {<br>+ int fd;<br>+ for (fd = 0; fd < linuxaio_iocbpmax; ++fd)<br>+ if (linuxaio_iocbps [fd]->io.aio_buf)<br>+ linuxaio_fd_rearm (EV_A_ fd);<br>+ }<br>+<br>+ ++linuxaio_iteration;<br>+ if (linuxaio_io_setup (EV_A) < 0)<br>+ {<br>+ /* TODO: rearm all and recreate epoll backend from scratch */<br>+ /* TODO: might be more prudent? */<br>+<br>+ /* to bad, we can't get a new aio context, go 100% epoll */<br>+ linuxaio_free_iocbp (EV_A);<br>+ ev_io_stop (EV_A_ &linuxaio_epoll_w);<br>+ ev_ref (EV_A);<br>+ linuxaio_ctx = 0;<br>+<br>+ backend = EVBACKEND_EPOLL;<br>+ backend_modify = epoll_modify;<br>+ backend_poll = epoll_poll;<br>+ }<br>+<br>+ timeout = EV_TS_CONST (0.);<br>+ /* it's easiest to handle this mess in another iteration */<br>+ return;<br>+ }<br>+ else if (errno == EBADF)<br>+ {<br>+ assert (("libev: event loop rejected bad fd", errno != EBADF));<br>+ fd_kill (EV_A_ linuxaio_submits [submitted]->aio_fildes);<br>+<br>+ res = 1; /* skip this iocb */<br>+ }<br>+ else if (errno == EINTR) /* not seen in reality, not documented */<br>+ res = 0; /* silently ignore and retry */<br>+ else<br>+ {<br>+ ev_syserr ("(libev) linuxaio io_submit");<br>+ res = 0;<br>+ }<br>+<br>+ submitted += res;<br>+ }<br>+<br>+ linuxaio_submitcnt = 0;<br>+<br>+ /* second phase: fetch and parse events */<br>+<br>+ linuxaio_get_events (EV_A_ timeout);<br>+}<br>+<br>+inline_size<br>+int<br>+linuxaio_init (EV_P_ int flags)<br>+{<br>+ /* would be great to have a nice test for IOCB_CMD_POLL instead */<br>+ /* also: test some semi-common fd types, such as files and ttys in recommended_backends */<br>+ /* 4.18 introduced IOCB_CMD_POLL, 4.19 made epoll work, and we need that */<br>+ if (ev_linux_version () < 0x041300)<br>+ return 0;<br>+<br>+ if (!epoll_init (EV_A_ 0))<br>+ return 0;<br>+<br>+ linuxaio_iteration = 0;<br>+<br>+ if (linuxaio_io_setup (EV_A) < 0)<br>+ 
{<br>+ epoll_destroy (EV_A);<br>+ return 0;<br>+ }<br>+<br>+ ev_io_init (&linuxaio_epoll_w, linuxaio_epoll_cb, backend_fd, EV_READ);<br>+ ev_set_priority (&linuxaio_epoll_w, EV_MAXPRI);<br>+ ev_io_start (EV_A_ &linuxaio_epoll_w);<br>+ ev_unref (EV_A); /* watcher should not keep loop alive */<br>+<br>+ backend_modify = linuxaio_modify;<br>+ backend_poll = linuxaio_poll;<br>+<br>+ linuxaio_iocbpmax = 0;<br>+ linuxaio_iocbps = 0;<br>+<br>+ linuxaio_submits = 0;<br>+ linuxaio_submitmax = 0;<br>+ linuxaio_submitcnt = 0;<br>+<br>+ return EVBACKEND_LINUXAIO;<br>+}<br>+<br>+inline_size<br>+void<br>+linuxaio_destroy (EV_P)<br>+{<br>+ epoll_destroy (EV_A);<br>+ linuxaio_free_iocbp (EV_A);<br>+ evsys_io_destroy (linuxaio_ctx); /* fails in child, aio context is destroyed */<br>+}<br>+<br>+ecb_cold<br>+static void<br>+linuxaio_fork (EV_P)<br>+{<br>+ linuxaio_submitcnt = 0; /* all pointers were invalidated */<br>+ linuxaio_free_iocbp (EV_A); /* this frees all iocbs, which is very heavy-handed */<br>+ evsys_io_destroy (linuxaio_ctx); /* fails in child, aio context is destroyed */<br>+<br>+ linuxaio_iteration = 0; /* we start over in the child */<br>+<br>+ while (linuxaio_io_setup (EV_A) < 0)<br>+ ev_syserr ("(libev) linuxaio io_setup");<br>+<br>+ /* forking epoll should also effectively unregister all fds from the backend */<br>+ epoll_fork (EV_A);<br>+ /* epoll_fork already did this. 
hopefully */<br>+ /*fd_rearm_all (EV_A);*/<br>+<br>+ ev_io_stop (EV_A_ &linuxaio_epoll_w);<br>+ ev_io_set (EV_A_ &linuxaio_epoll_w, backend_fd, EV_READ);<br>+ ev_io_start (EV_A_ &linuxaio_epoll_w);<br>+}<br>+<br>diff --git a/third_party/libev/ev_poll.c b/third_party/libev/ev_poll.c<br>index bd742b07f..e5508ddb0 100644<br>--- a/third_party/libev/ev_poll.c<br>+++ b/third_party/libev/ev_poll.c<br>@@ -1,7 +1,7 @@<br> /*<br> * libev poll fd activity backend<br> *<br>- * Copyright (c) 2007,2008,2009,2010,2011 Marc Alexander Lehmann <libev@schmorp.de><br>+ * Copyright (c) 2007,2008,2009,2010,2011,2016,2019 Marc Alexander Lehmann <libev@schmorp.de><br> * All rights reserved.<br> *<br> * Redistribution and use in source and binary forms, with or without modifica-<br>@@ -41,10 +41,12 @@<br> <br> inline_size<br> void<br>-pollidx_init (int *base, int count)<br>+array_needsize_pollidx (int *base, int offset, int count)<br> {<br>- /* consider using memset (.., -1, ...), which is practically guaranteed<br>- * to work on all systems implementing poll */<br>+ /* using memset (.., -1, ...) 
is tempting, we we try<br>+ * to be ultraportable<br>+ */<br>+ base += offset;<br> while (count--)<br> *base++ = -1;<br> }<br>@@ -57,14 +59,14 @@ poll_modify (EV_P_ int fd, int oev, int nev)<br> if (oev == nev)<br> return;<br> <br>- array_needsize (int, pollidxs, pollidxmax, fd + 1, pollidx_init);<br>+ array_needsize (int, pollidxs, pollidxmax, fd + 1, array_needsize_pollidx);<br> <br> idx = pollidxs [fd];<br> <br> if (idx < 0) /* need to allocate a new pollfd */<br> {<br> pollidxs [fd] = idx = pollcnt++;<br>- array_needsize (struct pollfd, polls, pollmax, pollcnt, EMPTY2);<br>+ array_needsize (struct pollfd, polls, pollmax, pollcnt, array_needsize_noinit);<br> polls [idx].fd = fd;<br> }<br> <br>@@ -78,7 +80,7 @@ poll_modify (EV_P_ int fd, int oev, int nev)<br> {<br> pollidxs [fd] = -1;<br> <br>- if (expect_true (idx < --pollcnt))<br>+ if (ecb_expect_true (idx < --pollcnt))<br> {<br> polls [idx] = polls [pollcnt];<br> pollidxs [polls [idx].fd] = idx;<br>@@ -93,10 +95,10 @@ poll_poll (EV_P_ ev_tstamp timeout)<br> int res;<br> <br> EV_RELEASE_CB;<br>- res = poll (polls, pollcnt, timeout * 1e3);<br>+ res = poll (polls, pollcnt, EV_TS_TO_MSEC (timeout));<br> EV_ACQUIRE_CB;<br> <br>- if (expect_false (res < 0))<br>+ if (ecb_expect_false (res < 0))<br> {<br> if (errno == EBADF)<br> fd_ebadf (EV_A);<br>@@ -108,14 +110,17 @@ poll_poll (EV_P_ ev_tstamp timeout)<br> else<br> for (p = polls; res; ++p)<br> {<br>- assert (("libev: poll() returned illegal result, broken BSD kernel?", p < polls + pollcnt));<br>+ assert (("libev: poll returned illegal result, broken BSD kernel?", p < polls + pollcnt));<br> <br>- if (expect_false (p->revents)) /* this expect is debatable */<br>+ if (ecb_expect_false (p->revents)) /* this expect is debatable */<br> {<br> --res;<br> <br>- if (expect_false (p->revents & POLLNVAL))<br>- fd_kill (EV_A_ p->fd);<br>+ if (ecb_expect_false (p->revents & POLLNVAL))<br>+ {<br>+ assert (("libev: poll found invalid fd in poll set", 0));<br>+ fd_kill (EV_A_ 
p->fd);<br>+ }<br> else<br> fd_event (<br> EV_A_<br>@@ -131,7 +136,7 @@ inline_size<br> int<br> poll_init (EV_P_ int flags)<br> {<br>- backend_mintime = 1e-3;<br>+ backend_mintime = EV_TS_CONST (1e-3);<br> backend_modify = poll_modify;<br> backend_poll = poll_poll;<br> <br>diff --git a/third_party/libev/ev_port.c b/third_party/libev/ev_port.c<br>index c7b0b70c1..f4cd9d99c 100644<br>--- a/third_party/libev/ev_port.c<br>+++ b/third_party/libev/ev_port.c<br>@@ -1,7 +1,7 @@<br> /*<br> * libev solaris event port backend<br> *<br>- * Copyright (c) 2007,2008,2009,2010,2011 Marc Alexander Lehmann <libev@schmorp.de><br>+ * Copyright (c) 2007,2008,2009,2010,2011,2019 Marc Alexander Lehmann <libev@schmorp.de><br> * All rights reserved.<br> *<br> * Redistribution and use in source and binary forms, with or without modifica-<br>@@ -69,7 +69,10 @@ port_associate_and_check (EV_P_ int fd, int ev)<br> )<br> {<br> if (errno == EBADFD)<br>- fd_kill (EV_A_ fd);<br>+ {<br>+ assert (("libev: port_associate found invalid fd", errno != EBADFD));<br>+ fd_kill (EV_A_ fd);<br>+ }<br> else<br> ev_syserr ("(libev) port_associate");<br> }<br>@@ -129,7 +132,7 @@ port_poll (EV_P_ ev_tstamp timeout)<br> }<br> }<br> <br>- if (expect_false (nget == port_eventmax))<br>+ if (ecb_expect_false (nget == port_eventmax))<br> {<br> ev_free (port_events);<br> port_eventmax = array_nextsize (sizeof (port_event_t), port_eventmax, port_eventmax + 1);<br>@@ -151,11 +154,11 @@ port_init (EV_P_ int flags)<br> <br> /* if my reading of the opensolaris kernel sources are correct, then<br> * opensolaris does something very stupid: it checks if the time has already<br>- * elapsed and doesn't round up if that is the case,m otherwise it DOES round<br>+ * elapsed and doesn't round up if that is the case, otherwise it DOES round<br> * up. Since we can't know what the case is, we need to guess by using a<br> * "large enough" timeout. 
Normally, 1e-9 would be correct.<br> */<br>- backend_mintime = 1e-3; /* needed to compensate for port_getn returning early */<br>+ backend_mintime = EV_TS_CONST (1e-3); /* needed to compensate for port_getn returning early */<br> backend_modify = port_modify;<br> backend_poll = port_poll;<br> <br>diff --git a/third_party/libev/ev_select.c b/third_party/libev/ev_select.c<br>index ed1fc7ad9..b862c8113 100644<br>--- a/third_party/libev/ev_select.c<br>+++ b/third_party/libev/ev_select.c<br>@@ -108,7 +108,7 @@ select_modify (EV_P_ int fd, int oev, int nev)<br> int word = fd / NFDBITS;<br> fd_mask mask = 1UL << (fd % NFDBITS);<br> <br>- if (expect_false (vec_max <= word))<br>+ if (ecb_expect_false (vec_max <= word))<br> {<br> int new_max = word + 1;<br> <br>@@ -171,7 +171,7 @@ select_poll (EV_P_ ev_tstamp timeout)<br> #endif<br> EV_ACQUIRE_CB;<br> <br>- if (expect_false (res < 0))<br>+ if (ecb_expect_false (res < 0))<br> {<br> #if EV_SELECT_IS_WINSOCKET<br> errno = WSAGetLastError ();<br>@@ -197,7 +197,7 @@ select_poll (EV_P_ ev_tstamp timeout)<br> {<br> if (timeout)<br> {<br>- unsigned long ms = timeout * 1e3;<br>+ unsigned long ms = EV_TS_TO_MSEC (timeout);<br> Sleep (ms ? ms : 1);<br> }<br> <br>@@ -236,7 +236,7 @@ select_poll (EV_P_ ev_tstamp timeout)<br> if (FD_ISSET (handle, (fd_set *)vec_eo)) events |= EV_WRITE;<br> #endif<br> <br>- if (expect_true (events))<br>+ if (ecb_expect_true (events))<br> fd_event (EV_A_ fd, events);<br> }<br> }<br>@@ -262,7 +262,7 @@ select_poll (EV_P_ ev_tstamp timeout)<br> events |= word_r & mask ? EV_READ : 0;<br> events |= word_w & mask ? 
EV_WRITE : 0;<br> <br>- if (expect_true (events))<br>+ if (ecb_expect_true (events))<br> fd_event (EV_A_ word * NFDBITS + bit, events);<br> }<br> }<br>@@ -275,7 +275,7 @@ inline_size<br> int<br> select_init (EV_P_ int flags)<br> {<br>- backend_mintime = 1e-6;<br>+ backend_mintime = EV_TS_CONST (1e-6);<br> backend_modify = select_modify;<br> backend_poll = select_poll;<br> <br>diff --git a/third_party/libev/ev_vars.h b/third_party/libev/ev_vars.h<br>index 04d4db16f..fb0c58316 100644<br>--- a/third_party/libev/ev_vars.h<br>+++ b/third_party/libev/ev_vars.h<br>@@ -1,7 +1,7 @@<br> /*<br> * loop member variable declarations<br> *<br>- * Copyright (c) 2007,2008,2009,2010,2011,2012,2013 Marc Alexander Lehmann <libev@schmorp.de><br>+ * Copyright (c) 2007,2008,2009,2010,2011,2012,2013,2019 Marc Alexander Lehmann <libev@schmorp.de><br> * All rights reserved.<br> *<br> * Redistribution and use in source and binary forms, with or without modifica-<br>@@ -107,6 +107,46 @@ VARx(int, epoll_epermcnt)<br> VARx(int, epoll_epermmax)<br> #endif<br> <br>+#if EV_USE_LINUXAIO || EV_GENWRAP<br>+VARx(aio_context_t, linuxaio_ctx)<br>+VARx(int, linuxaio_iteration)<br>+VARx(struct aniocb **, linuxaio_iocbps)<br>+VARx(int, linuxaio_iocbpmax)<br>+VARx(struct iocb **, linuxaio_submits)<br>+VARx(int, linuxaio_submitcnt)<br>+VARx(int, linuxaio_submitmax)<br>+VARx(ev_io, linuxaio_epoll_w)<br>+#endif<br>+<br>+#if EV_USE_IOURING || EV_GENWRAP<br>+VARx(int, iouring_fd)<br>+VARx(unsigned, iouring_to_submit);<br>+VARx(int, iouring_entries)<br>+VARx(int, iouring_max_entries)<br>+VARx(void *, iouring_sq_ring)<br>+VARx(void *, iouring_cq_ring)<br>+VARx(void *, iouring_sqes)<br>+VARx(uint32_t, iouring_sq_ring_size)<br>+VARx(uint32_t, iouring_cq_ring_size)<br>+VARx(uint32_t, iouring_sqes_size)<br>+VARx(uint32_t, iouring_sq_head)<br>+VARx(uint32_t, iouring_sq_tail)<br>+VARx(uint32_t, iouring_sq_ring_mask)<br>+VARx(uint32_t, iouring_sq_ring_entries)<br>+VARx(uint32_t, iouring_sq_flags)<br>+VARx(uint32_t, 
iouring_sq_dropped)<br>+VARx(uint32_t, iouring_sq_array)<br>+VARx(uint32_t, iouring_cq_head)<br>+VARx(uint32_t, iouring_cq_tail)<br>+VARx(uint32_t, iouring_cq_ring_mask)<br>+VARx(uint32_t, iouring_cq_ring_entries)<br>+VARx(uint32_t, iouring_cq_overflow)<br>+VARx(uint32_t, iouring_cq_cqes)<br>+VARx(ev_tstamp, iouring_tfd_to)<br>+VARx(int, iouring_tfd)<br>+VARx(ev_io, iouring_tfd_w)<br>+#endif<br>+<br> #if EV_USE_KQUEUE || EV_GENWRAP<br> VARx(pid_t, kqueue_fd_pid)<br> VARx(struct kevent *, kqueue_changes)<br>@@ -187,6 +227,11 @@ VARx(ev_io, sigfd_w)<br> VARx(sigset_t, sigfd_set)<br> #endif<br> <br>+#if EV_USE_TIMERFD || EV_GENWRAP<br>+VARx(int, timerfd) /* timerfd for time jump detection */<br>+VARx(ev_io, timerfd_w)<br>+#endif<br>+<br> VARx(unsigned int, origflags) /* original loop flags */<br> <br> #if EV_FEATURE_API || EV_GENWRAP<br>@@ -195,8 +240,8 @@ VARx(unsigned int, loop_depth) /* #ev_run enters - #ev_run leaves */<br> <br> VARx(void *, userdata)<br> /* C++ doesn't support the ev_loop_callback typedef here. stinks. */<br>-VAR (release_cb, void (*release_cb)(EV_P) EV_THROW)<br>-VAR (acquire_cb, void (*acquire_cb)(EV_P) EV_THROW)<br>+VAR (release_cb, void (*release_cb)(EV_P) EV_NOEXCEPT)<br>+VAR (acquire_cb, void (*acquire_cb)(EV_P) EV_NOEXCEPT)<br> VAR (invoke_cb , ev_loop_callback invoke_cb)<br> #endif<br> <br>diff --git a/third_party/libev/ev_win32.c b/third_party/libev/ev_win32.c<br>index fd671356a..97344c3e1 100644<br>--- a/third_party/libev/ev_win32.c<br>+++ b/third_party/libev/ev_win32.c<br>@@ -154,8 +154,8 @@ ev_time (void)<br> ui.u.LowPart = ft.dwLowDateTime;<br> ui.u.HighPart = ft.dwHighDateTime;<br> <br>- /* msvc cannot convert ulonglong to double... yes, it is that sucky */<br>- return (LONGLONG)(ui.QuadPart - 116444736000000000) * 1e-7;<br>+ /* also, msvc cannot convert ulonglong to double... 
yes, it is that sucky */<br>+ return EV_TS_FROM_USEC (((LONGLONG)(ui.QuadPart - 116444736000000000) * 1e-1));<br> }<br> <br> #endif<br>diff --git a/third_party/libev/ev_wrap.h b/third_party/libev/ev_wrap.h<br>index ad989ea7d..45d793ced 100644<br>--- a/third_party/libev/ev_wrap.h<br>+++ b/third_party/libev/ev_wrap.h<br>@@ -44,12 +44,46 @@<br> #define invoke_cb ((loop)->invoke_cb)<br> #define io_blocktime ((loop)->io_blocktime)<br> #define iocp ((loop)->iocp)<br>+#define iouring_cq_cqes ((loop)->iouring_cq_cqes)<br>+#define iouring_cq_head ((loop)->iouring_cq_head)<br>+#define iouring_cq_overflow ((loop)->iouring_cq_overflow)<br>+#define iouring_cq_ring ((loop)->iouring_cq_ring)<br>+#define iouring_cq_ring_entries ((loop)->iouring_cq_ring_entries)<br>+#define iouring_cq_ring_mask ((loop)->iouring_cq_ring_mask)<br>+#define iouring_cq_ring_size ((loop)->iouring_cq_ring_size)<br>+#define iouring_cq_tail ((loop)->iouring_cq_tail)<br>+#define iouring_entries ((loop)->iouring_entries)<br>+#define iouring_fd ((loop)->iouring_fd)<br>+#define iouring_max_entries ((loop)->iouring_max_entries)<br>+#define iouring_sq_array ((loop)->iouring_sq_array)<br>+#define iouring_sq_dropped ((loop)->iouring_sq_dropped)<br>+#define iouring_sq_flags ((loop)->iouring_sq_flags)<br>+#define iouring_sq_head ((loop)->iouring_sq_head)<br>+#define iouring_sq_ring ((loop)->iouring_sq_ring)<br>+#define iouring_sq_ring_entries ((loop)->iouring_sq_ring_entries)<br>+#define iouring_sq_ring_mask ((loop)->iouring_sq_ring_mask)<br>+#define iouring_sq_ring_size ((loop)->iouring_sq_ring_size)<br>+#define iouring_sq_tail ((loop)->iouring_sq_tail)<br>+#define iouring_sqes ((loop)->iouring_sqes)<br>+#define iouring_sqes_size ((loop)->iouring_sqes_size)<br>+#define iouring_tfd ((loop)->iouring_tfd)<br>+#define iouring_tfd_to ((loop)->iouring_tfd_to)<br>+#define iouring_tfd_w ((loop)->iouring_tfd_w)<br>+#define iouring_to_submit ((loop)->iouring_to_submit)<br> #define kqueue_changecnt 
((loop)->kqueue_changecnt)<br> #define kqueue_changemax ((loop)->kqueue_changemax)<br> #define kqueue_changes ((loop)->kqueue_changes)<br> #define kqueue_eventmax ((loop)->kqueue_eventmax)<br> #define kqueue_events ((loop)->kqueue_events)<br> #define kqueue_fd_pid ((loop)->kqueue_fd_pid)<br>+#define linuxaio_ctx ((loop)->linuxaio_ctx)<br>+#define linuxaio_epoll_w ((loop)->linuxaio_epoll_w)<br>+#define linuxaio_iocbpmax ((loop)->linuxaio_iocbpmax)<br>+#define linuxaio_iocbps ((loop)->linuxaio_iocbps)<br>+#define linuxaio_iteration ((loop)->linuxaio_iteration)<br>+#define linuxaio_submitcnt ((loop)->linuxaio_submitcnt)<br>+#define linuxaio_submitmax ((loop)->linuxaio_submitmax)<br>+#define linuxaio_submits ((loop)->linuxaio_submits)<br> #define loop_count ((loop)->loop_count)<br> #define loop_depth ((loop)->loop_depth)<br> #define loop_done ((loop)->loop_done)<br>@@ -89,6 +123,8 @@<br> #define sigfd_w ((loop)->sigfd_w)<br> #define timeout_blocktime ((loop)->timeout_blocktime)<br> #define timercnt ((loop)->timercnt)<br>+#define timerfd ((loop)->timerfd)<br>+#define timerfd_w ((loop)->timerfd_w)<br> #define timermax ((loop)->timermax)<br> #define timers ((loop)->timers)<br> #define userdata ((loop)->userdata)<br>@@ -143,12 +179,46 @@<br> #undef invoke_cb<br> #undef io_blocktime<br> #undef iocp<br>+#undef iouring_cq_cqes<br>+#undef iouring_cq_head<br>+#undef iouring_cq_overflow<br>+#undef iouring_cq_ring<br>+#undef iouring_cq_ring_entries<br>+#undef iouring_cq_ring_mask<br>+#undef iouring_cq_ring_size<br>+#undef iouring_cq_tail<br>+#undef iouring_entries<br>+#undef iouring_fd<br>+#undef iouring_max_entries<br>+#undef iouring_sq_array<br>+#undef iouring_sq_dropped<br>+#undef iouring_sq_flags<br>+#undef iouring_sq_head<br>+#undef iouring_sq_ring<br>+#undef iouring_sq_ring_entries<br>+#undef iouring_sq_ring_mask<br>+#undef iouring_sq_ring_size<br>+#undef iouring_sq_tail<br>+#undef iouring_sqes<br>+#undef iouring_sqes_size<br>+#undef iouring_tfd<br>+#undef 
iouring_tfd_to<br>+#undef iouring_tfd_w<br>+#undef iouring_to_submit<br> #undef kqueue_changecnt<br> #undef kqueue_changemax<br> #undef kqueue_changes<br> #undef kqueue_eventmax<br> #undef kqueue_events<br> #undef kqueue_fd_pid<br>+#undef linuxaio_ctx<br>+#undef linuxaio_epoll_w<br>+#undef linuxaio_iocbpmax<br>+#undef linuxaio_iocbps<br>+#undef linuxaio_iteration<br>+#undef linuxaio_submitcnt<br>+#undef linuxaio_submitmax<br>+#undef linuxaio_submits<br> #undef loop_count<br> #undef loop_depth<br> #undef loop_done<br>@@ -188,6 +258,8 @@<br> #undef sigfd_w<br> #undef timeout_blocktime<br> #undef timercnt<br>+#undef timerfd<br>+#undef timerfd_w<br> #undef timermax<br> #undef timers<br> #undef userdata<br>diff --git a/third_party/libev/libev.m4 b/third_party/libev/libev.m4<br>index 439fbde2c..f859eff27 100644<br>--- a/third_party/libev/libev.m4<br>+++ b/third_party/libev/libev.m4<br>@@ -2,7 +2,8 @@ dnl this file is part of libev, do not make local modifications<br> dnl http://software.schmorp.de/pkg/libev<br> <br> dnl libev support<br>-AC_CHECK_HEADERS(sys/inotify.h sys/epoll.h sys/event.h port.h poll.h sys/select.h sys/eventfd.h sys/signalfd.h)<br>+AC_CHECK_HEADERS(sys/inotify.h sys/epoll.h sys/event.h port.h poll.h sys/timerfd.h)<br>+AC_CHECK_HEADERS(sys/select.h sys/eventfd.h sys/signalfd.h linux/aio_abi.h linux/fs.h)<br> <br> AC_CHECK_FUNCS(inotify_init epoll_ctl kqueue port_create poll select eventfd signalfd)<br> <br>@@ -35,6 +36,10 @@ AC_CHECK_FUNCS(nanosleep, [], [<br> fi<br> ])<br> <br>+AC_CHECK_TYPE(__kernel_rwf_t, [<br>+ AC_DEFINE(HAVE_KERNEL_RWF_T, 1, Define to 1 if linux/fs.h defined kernel_rwf_t)<br>+], [], [#include <linux/fs.h>])<br>+<br> if test -z "$LIBEV_M4_AVOID_LIBM"; then<br> LIBM=m<br> fi<br>diff --git a/third_party/libev/update_ev_c b/third_party/libev/update_ev_c<br>index b55fd7fb7..a80bfae23 100755<br>--- a/third_party/libev/update_ev_c<br>+++ b/third_party/libev/update_ev_c<br>@@ -2,6 +2,7 @@<br> <br> (<br> sed -ne '1,\%/\* ECB.H BEGIN \*/%p' 
ev.c<br>+ #perl -ne 'print unless /^#if ECB_CPP/ .. /^#endif/' <~/src/libecb/ecb.h<br> cat ~/src/libecb/ecb.h<br> sed -ne '\%/\* ECB.H END \*/%,$p' ev.c<br> ) >ev.c~ && mv ev.c~ ev.c<br>-- <br>2.24.0<br> <br> <br>--<br>Maria Khaydich<br> </div><blockquote style="border-left:1px solid #0857A6; margin:10px; padding:0 0 0 10px;">Monday, 17 February 2020, 13:00 +03:00 from Alexander Turenko <alexander.turenko@tarantool.org>:<br> <div id=""><div class="js-helper js-readmsg-msg"><div><div id="style_15819336202034693582_BODY">On Mon, Feb 17, 2020 at 10:40:52AM +0300, Konstantin Osipov wrote:<br>> * Alexander Turenko <alexander.turenko@tarantool.org> [20/02/15 23:22]:</div></div></div></div></blockquote><div><...><blockquote style="border-left:1px solid #0857A6; margin:10px; padding:0 0 0 10px;"><div><div class="js-helper js-readmsg-msg"><div><div>><br>> How to update libev<br>> ===================<br>><br>> Remove Tarantool patches (see cvs diff -U8).<br>> cvs up<br>> Add patches back.<br>><br>> Did the patch follow the procedure? If it did, it should clearly<br>> state that it updated libev, and to which version.<br><br>Agreed, it makes sense.<br><br>><br>><br>> ><br>> > | 4.25 Fri Dec 21 07:49:20 CET 2018<br>> > | <...><br>> > | - move the darwin select workaround higher in ev.c, as newer versions of<br>> > | darwin managed to break their broken select even more.<br>> ><br>> > <a href="http://cvs.schmorp.de/libev/Changes?view=markup" target="_blank">http://cvs.schmorp.de/libev/Changes?view=markup</a><br>><br>> :/<br>><br>> --<br>> Konstantin Osipov, Moscow, Russia</div></div></div></div></blockquote> <div> </div></div></BODY></HTML>