[Tarantool-patches] [PATCH] libev: update to version 4.32
Maria Khaydich
maria.khaydich at tarantool.org
Tue Mar 3 17:58:44 MSK 2020
> We should update libev properly.
It does seem reasonable.
----------------------------------------------------------------------
There was a bug in libev that caused some stress tests to fail
on macOS with an error indicating a lack of file descriptors.
The flag that was supposed to fix the issue
(_DARWIN_UNLIMITED_SELECT) was defined too late in earlier
versions of libev.
More precisely, it was defined after the inclusion of time.h,
which in turn contained this line:
#include <sys/_select.h> /* select() prototype */
And <sys/_select.h> had this:
#if defined(_DARWIN_C_SOURCE) || defined(_DARWIN_UNLIMITED_SELECT)
__DARWIN_EXTSN_C(select)
So the unlimited select flag did not take effect as intended.
This was fixed in libev 4.25 along with other changes.
Closes #3867
Closes #4673
---
Issues:
https://github.com/tarantool/tarantool/issues/3867
https://github.com/tarantool/tarantool/issues/4673
Branch:
https://github.com/tarantool/tarantool/compare/eljashm/gh-3867-libev-update
src/lib/core/fiber.c | 4 +-
third_party/libev/CVS/Entries | 62 +-
third_party/libev/Changes | 101 +++
third_party/libev/Makefile.am | 3 +-
third_party/libev/README | 3 +-
third_party/libev/Symbols.ev | 2 +-
third_party/libev/configure.ac | 6 +-
third_party/libev/ev++.h | 220 +++---
third_party/libev/ev.3 | 373 +++++++---
third_party/libev/ev.c | 1131 ++++++++++++++++++++++---------
third_party/libev/ev.h | 225 +++---
third_party/libev/ev.pod | 316 +++++++--
third_party/libev/ev_epoll.c | 69 +-
third_party/libev/ev_iouring.c | 694 +++++++++++++++++++
third_party/libev/ev_kqueue.c | 24 +-
third_party/libev/ev_linuxaio.c | 620 +++++++++++++++++
third_party/libev/ev_poll.c | 33 +-
third_party/libev/ev_port.c | 13 +-
third_party/libev/ev_select.c | 12 +-
third_party/libev/ev_vars.h | 51 +-
third_party/libev/ev_win32.c | 4 +-
third_party/libev/ev_wrap.h | 72 ++
third_party/libev/libev.m4 | 7 +-
third_party/libev/update_ev_c | 1 +
24 files changed, 3222 insertions(+), 824 deletions(-)
create mode 100644 third_party/libev/ev_iouring.c
create mode 100644 third_party/libev/ev_linuxaio.c
diff --git a/src/lib/core/fiber.c b/src/lib/core/fiber.c
index ada7972cb..2bf6a3333 100644
--- a/src/lib/core/fiber.c
+++ b/src/lib/core/fiber.c
@@ -1463,7 +1463,7 @@ cord_start(struct cord *cord, const char *name, void *(*f)(void *), void *arg)
struct cord_thread_arg ct_arg = { cord, name, f, arg, false,
PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER };
tt_pthread_mutex_lock(&ct_arg.start_mutex);
- cord->loop = ev_loop_new(EVFLAG_AUTO | EVFLAG_ALLOCFD);
+ cord->loop = ev_loop_new(EVFLAG_AUTO | EVFLAG_NOTIMERFD);
if (cord->loop == NULL) {
diag_set(OutOfMemory, 0, "ev_loop_new", "ev_loop");
goto end;
@@ -1701,7 +1701,7 @@ fiber_init(int (*invoke)(fiber_func f, va_list ap))
stack_direction = check_stack_direction(__builtin_frame_address(0));
fiber_invoke = invoke;
main_thread_id = pthread_self();
- main_cord.loop = ev_default_loop(EVFLAG_AUTO | EVFLAG_ALLOCFD);
+ main_cord.loop = ev_default_loop(EVFLAG_AUTO | EVFLAG_NOTIMERFD);
cord_create(&main_cord, "main");
}
diff --git a/third_party/libev/CVS/Entries b/third_party/libev/CVS/Entries
index 3c8541193..497df4295 100644
--- a/third_party/libev/CVS/Entries
+++ b/third_party/libev/CVS/Entries
@@ -1,31 +1,33 @@
-/Makefile.am/1.9/Mon Aug 17 17:43:15 2015//
-/README/1.21/Mon Aug 17 17:43:15 2015//
-/README.embed/1.29/Mon Aug 17 17:43:15 2015//
-/Symbols.ev/1.14/Mon Aug 17 17:43:15 2015//
-/Symbols.event/1.4/Mon Aug 17 17:43:15 2015//
-/autogen.sh/1.3/Mon Aug 17 17:43:15 2015//
-/ev_poll.c/1.39/Mon Aug 17 17:43:15 2015//
-/ev_port.c/1.28/Mon Aug 17 17:43:15 2015//
-/ev_select.c/1.55/Mon Aug 17 17:43:15 2015//
-/event.c/1.52/Mon Aug 17 17:43:15 2015//
-/event.h/1.26/Mon Aug 17 17:43:15 2015//
-/event_compat.h/1.8/Mon Aug 17 17:43:15 2015//
-/import_libevent/1.29/Mon Aug 17 17:43:15 2015//
-/update_ev_c/1.2/Mon Aug 17 17:43:15 2015//
-/update_ev_wrap/1.6/Mon Aug 17 17:43:15 2015//
-/update_symbols/1.1/Mon Aug 17 17:43:15 2015//
-/Changes/1.307/Sun Oct 4 10:12:28 2015//
-/LICENSE/1.11/Sun Oct 4 10:12:28 2015//
-/configure.ac/1.40/Sun Oct 4 10:12:28 2015//
-/ev++.h/1.62/Sun Oct 4 10:12:28 2015//
-/ev.3/1.103/Sun Oct 4 10:12:28 2015//
-/ev.c/1.477/Sun Oct 4 10:12:28 2015//
-/ev.h/1.183/Sun Oct 4 10:12:28 2015//
-/ev.pod/1.435/Sun Oct 4 10:12:28 2015//
-/ev_epoll.c/1.68/Sun Oct 4 10:12:28 2015//
-/ev_kqueue.c/1.55/Sun Oct 4 10:12:28 2015//
-/ev_vars.h/1.58/Sun Oct 4 10:12:28 2015//
-/ev_win32.c/1.16/Sun Oct 4 10:12:28 2015//
-/ev_wrap.h/1.38/Sun Oct 4 10:12:28 2015//
-/libev.m4/1.16/Sun Oct 4 10:12:28 2015//
+/Changes/1.365/Result of merge+Thu Feb 27 07:26:50 2020//
+/LICENSE/1.11/Wed Feb 19 13:30:17 2020//
+/Makefile.am/1.11/Thu Feb 27 07:26:50 2020//
+/README/1.22/Thu Feb 27 07:26:50 2020//
+/README.embed/1.29/Wed Feb 19 13:30:17 2020//
+/Symbols.ev/1.15/Thu Feb 27 07:26:50 2020//
+/Symbols.event/1.4/Wed Feb 19 13:30:17 2020//
+/autogen.sh/1.3/Wed Feb 19 13:30:17 2020//
+/configure.ac/1.45/Result of merge+Thu Feb 27 07:26:50 2020//
+/ev++.h/1.68/Thu Feb 27 07:26:50 2020//
+/ev.3/1.120/Result of merge+Thu Feb 27 07:26:50 2020//
+/ev.c/1.528/Result of merge+Thu Feb 27 07:26:50 2020//
+/ev.h/1.204/Result of merge+Thu Feb 27 07:26:50 2020//
+/ev.pod/1.464/Result of merge//
+/ev_epoll.c/1.82/Result of merge+Thu Feb 27 07:26:50 2020//
+/ev_iouring.c/1.21/Wed Jan 22 02:20:47 2020//
+/ev_kqueue.c/1.61/Result of merge//
+/ev_linuxaio.c/1.53/Fri Dec 27 16:12:55 2019//
+/ev_poll.c/1.48/Result of merge+Thu Feb 27 07:26:50 2020//
+/ev_port.c/1.33/Result of merge//
+/ev_select.c/1.58/Result of merge//
+/ev_vars.h/1.67/Thu Feb 27 07:26:50 2020//
+/ev_win32.c/1.21/Result of merge//
+/ev_wrap.h/1.44/Thu Feb 27 07:26:50 2020//
+/event.c/1.52/Wed Feb 19 13:30:17 2020//
+/event.h/1.26/Wed Feb 19 13:30:17 2020//
+/event_compat.h/1.8/Wed Feb 19 13:30:17 2020//
+/import_libevent/1.29/Wed Feb 19 13:30:17 2020//
+/libev.m4/1.18/Thu Feb 27 07:26:50 2020//
+/update_ev_c/1.3/Thu Feb 27 07:26:50 2020//
+/update_ev_wrap/1.6/Wed Feb 19 13:30:17 2020//
+/update_symbols/1.1/Wed Feb 19 13:30:17 2020//
D
diff --git a/third_party/libev/Changes b/third_party/libev/Changes
index bb1e6d43d..80e70ae6d 100644
--- a/third_party/libev/Changes
+++ b/third_party/libev/Changes
@@ -1,5 +1,106 @@
Revision history for libev, a high-performance and full-featured event loop.
+TODO: for next ABI/API change, consider moving EV__IOFDSSET into io->fd instead and provide a getter.
+TODO: document EV_TSTAMP_T
+
+4.32 (EV only)
+ - the 4.31 timerfd code wrongly changes the priority of the signal
+ fd watcher, which is usually harmless unless signal fds are
+ also used (found via cpan tester service).
+ - the documentation wrongly claimed that user may modify fd and events
+ members in io watchers when the watcher was stopped
+ (found by b_jonas).
+ - new ev_io_modify mutator which changes only the events member,
+ which can be faster. also added ev::io::set (int events) method
+ to ev++.h.
+ - officially allow a zero events mask for io watchers. this should
+ work with older libev versions as well but was not officially
+ allowed before.
+ - do not wake up every minute when timerfd is used to detect timejumps.
+ - do not wake up every minute when periodics are disabled and we have
+ a monotonic clock.
+ - support a lot more "uncommon" compile time configurations,
+ such as ev_embed enabled but ev_timer disabled.
+ - use a start/stop wrapper class to reduce code duplication in
+ ev++.h and make it needlessly more c++-y.
+ - the linux aio backend is no longer compiled in by default.
+ - update to libecb version 0x00010008.
+
+4.31 Fri Dec 20 21:58:29 CET 2019
+ - handle backends with minimum wait time a bit better by not
+ waiting in the presence of already-expired timers
+ (behaviour reported by Felipe Gasper).
+ - new feature: use timerfd to detect timejumps quickly,
+ can be disabled with the new EVFLAG_NOTIMERFD loop flag.
+ - document EV_USE_SIGNALFD feature macro.
+
+4.30 (EV only)
+ - change non-autoconf test for __kernel_rwf_t by testing
+ LINUX_VERSION_CODE, the most direct test I could find.
+ - fix a bug in the io_uring backend that polled the wrong
+ backend fd, causing it to not work in many cases.
+
+4.29 (EV only)
+ - add io uring autoconf and non-autoconf detection.
+ - disable io_uring when some header files are too old.
+
+4.28 (EV only)
+ - linuxaio backend resulted in random memory corruption
+ when loop is forked.
+ - linuxaio backend might have tried to cancel an iocb
+ multiple times (was unable to trigger this).
+ - linuxaio backend now employs a generation counter to
+ avoid handling spurious events from cancelled requests.
+ - io_cancel can return EINTR, deal with it. also, assume
+ io_submit also returns EINTR.
+ - fix some other minor bugs in linuxaio backend.
+ - ev_tstamp type can now be overriden by defining EV_TSTAMP_T.
+ - cleanup: replace expect_true/false and noinline by their
+ libecb counterparts.
+ - move syscall infrastructure from ev_linuxaio.c to ev.c.
+ - prepare io_uring integration.
+ - tweak ev_floor.
+ - epoll, poll, win32 Sleep and other places that use millisecond
+ reslution now all try to round up times.
+ - solaris port backend didn't compile.
+ - abstract time constants into their macros, for more flexibility.
+
+4.27 Thu Jun 27 22:43:44 CEST 2019
+ - linux aio backend almost completely rewritten to work around its
+ limitations.
+ - linux aio backend now requires linux 4.19+.
+ - epoll backend now mandatory for linux aio backend.
+ - fail assertions more aggressively on invalid fd's detected
+ in the event loop, do not just silently fd_kill in case of
+ user error.
+ - ev_io_start/ev_io_stop now verify the watcher fd using
+ a syscall when EV_VERIFY is 2 or higher.
+
+4.26 (EV only)
+ - update to libecb 0x00010006.
+ - new experimental linux aio backend (linux 4.18+).
+ - removed redundant 0-ptr check in ev_once.
+ - updated/extended ev_set_allocator documentation.
+ - replaced EMPTY2 macro by array_needsize_noinit.
+ - minor code cleanups.
+ - epoll backend now uses epoll_create1 also after fork.
+
+4.25 Fri Dec 21 07:49:20 CET 2018
+ - INCOMPATIBLE CHANGE: EV_THROW was renamed to EV_NOEXCEPT
+ (EV_THROW still provided) and now uses noexcept on C++11 or newer.
+ - move the darwin select workaround higher in ev.c, as newer versions of
+ darwin managed to break their broken select even more.
+ - ANDROID => __ANDROID__ (reported by enh at google.com).
+ - disable epoll_create1 on android because it has broken header files
+ and google is unwilling to fix them (reported by enh at google.com).
+ - avoid a minor compilation warning on win32.
+ - c++: remove deprecated dynamic throw() specifications.
+ - c++: improve the (unsupported) bad_loop exception class.
+ - backport perl ev_periodic example to C, untested.
+ - update libecb, biggets change is to include a memory fence
+ in ECB_MEMORY_FENCE_RELEASE on x86/amd64.
+ - minor autoconf/automake modernisation.
+
4.24 Wed Dec 28 05:19:55 CET 2016
- bump version to 4.24, as the release tarball inexplicably
didn't have the right version in ev.h, even though the cvs-tagged
diff --git a/third_party/libev/Makefile.am b/third_party/libev/Makefile.am
index 059305bc3..2814622d8 100644
--- a/third_party/libev/Makefile.am
+++ b/third_party/libev/Makefile.am
@@ -4,7 +4,8 @@ VERSION_INFO = 4:0:0
EXTRA_DIST = LICENSE Changes libev.m4 autogen.sh \
ev_vars.h ev_wrap.h \
- ev_epoll.c ev_select.c ev_poll.c ev_kqueue.c ev_port.c ev_win32.c \
+ ev_epoll.c ev_select.c ev_poll.c ev_kqueue.c ev_port.c ev_linuxaio.c ev_iouring.c \
+ ev_win32.c \
ev.3 ev.pod Symbols.ev Symbols.event
man_MANS = ev.3
diff --git a/third_party/libev/README b/third_party/libev/README
index 31f619387..fca5fdf1a 100644
--- a/third_party/libev/README
+++ b/third_party/libev/README
@@ -18,7 +18,8 @@ ABOUT
- extensive and detailed, readable documentation (not doxygen garbage).
- fully supports fork, can detect fork in various ways and automatically
re-arms kernel mechanisms that do not support fork.
- - highly optimised select, poll, epoll, kqueue and event ports backends.
+ - highly optimised select, poll, linux epoll, linux aio, bsd kqueue
+ and solaris event ports backends.
- filesystem object (path) watching (with optional linux inotify support).
- wallclock-based times (using absolute time, cron-like).
- relative timers/timeouts (handle time jumps).
diff --git a/third_party/libev/Symbols.ev b/third_party/libev/Symbols.ev
index 7a29a75cb..fe169fa06 100644
--- a/third_party/libev/Symbols.ev
+++ b/third_party/libev/Symbols.ev
@@ -13,10 +13,10 @@ ev_clear_pending
ev_default_loop
ev_default_loop_ptr
ev_depth
+ev_embeddable_backends
ev_embed_start
ev_embed_stop
ev_embed_sweep
-ev_embeddable_backends
ev_feed_event
ev_feed_fd_event
ev_feed_signal
diff --git a/third_party/libev/configure.ac b/third_party/libev/configure.ac
index 2590f8fd6..fb9311fd4 100644
--- a/third_party/libev/configure.ac
+++ b/third_party/libev/configure.ac
@@ -1,11 +1,11 @@
-AC_INIT
+dnl also update ev.h!
+AC_INIT([libev], [4.31])
orig_CFLAGS="$CFLAGS"
AC_CONFIG_SRCDIR([ev_epoll.c])
+AM_INIT_AUTOMAKE
-dnl also update ev.h!
-AM_INIT_AUTOMAKE(libev,4.24)
AC_CONFIG_HEADERS([config.h])
AM_MAINTAINER_MODE
diff --git a/third_party/libev/ev++.h b/third_party/libev/ev++.h
index 4f0a36ab0..22dfcf58d 100644
--- a/third_party/libev/ev++.h
+++ b/third_party/libev/ev++.h
@@ -1,7 +1,7 @@
/*
* libev simple C++ wrapper classes
*
- * Copyright (c) 2007,2008,2010 Marc Alexander Lehmann <libev at schmorp.de>
+ * Copyright (c) 2007,2008,2010,2018,2020 Marc Alexander Lehmann <libev at schmorp.de>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modifica-
@@ -113,13 +113,13 @@ namespace ev {
struct bad_loop
#if EV_USE_STDEXCEPT
- : std::runtime_error
+ : std::exception
#endif
{
#if EV_USE_STDEXCEPT
- bad_loop ()
- : std::runtime_error ("libev event loop cannot be initialized, bad value of LIBEV_FLAGS?")
+ const char *what () const EV_NOEXCEPT
{
+ return "libev event loop cannot be initialized, bad value of LIBEV_FLAGS?";
}
#endif
};
@@ -142,14 +142,14 @@ namespace ev {
struct loop_ref
{
- loop_ref (EV_P) throw ()
+ loop_ref (EV_P) EV_NOEXCEPT
#if EV_MULTIPLICITY
: EV_AX (EV_A)
#endif
{
}
- bool operator == (const loop_ref &other) const throw ()
+ bool operator == (const loop_ref &other) const EV_NOEXCEPT
{
#if EV_MULTIPLICITY
return EV_AX == other.EV_AX;
@@ -158,7 +158,7 @@ namespace ev {
#endif
}
- bool operator != (const loop_ref &other) const throw ()
+ bool operator != (const loop_ref &other) const EV_NOEXCEPT
{
#if EV_MULTIPLICITY
return ! (*this == other);
@@ -168,27 +168,27 @@ namespace ev {
}
#if EV_MULTIPLICITY
- bool operator == (const EV_P) const throw ()
+ bool operator == (const EV_P) const EV_NOEXCEPT
{
return this->EV_AX == EV_A;
}
- bool operator != (const EV_P) const throw ()
+ bool operator != (const EV_P) const EV_NOEXCEPT
{
- return (*this == EV_A);
+ return ! (*this == EV_A);
}
- operator struct ev_loop * () const throw ()
+ operator struct ev_loop * () const EV_NOEXCEPT
{
return EV_AX;
}
- operator const struct ev_loop * () const throw ()
+ operator const struct ev_loop * () const EV_NOEXCEPT
{
return EV_AX;
}
- bool is_default () const throw ()
+ bool is_default () const EV_NOEXCEPT
{
return EV_AX == ev_default_loop (0);
}
@@ -200,7 +200,7 @@ namespace ev {
ev_run (EV_AX_ flags);
}
- void unloop (how_t how = ONE) throw ()
+ void unloop (how_t how = ONE) EV_NOEXCEPT
{
ev_break (EV_AX_ how);
}
@@ -211,74 +211,74 @@ namespace ev {
ev_run (EV_AX_ flags);
}
- void break_loop (how_t how = ONE) throw ()
+ void break_loop (how_t how = ONE) EV_NOEXCEPT
{
ev_break (EV_AX_ how);
}
- void post_fork () throw ()
+ void post_fork () EV_NOEXCEPT
{
ev_loop_fork (EV_AX);
}
- unsigned int backend () const throw ()
+ unsigned int backend () const EV_NOEXCEPT
{
return ev_backend (EV_AX);
}
- tstamp now () const throw ()
+ tstamp now () const EV_NOEXCEPT
{
return ev_now (EV_AX);
}
- void ref () throw ()
+ void ref () EV_NOEXCEPT
{
ev_ref (EV_AX);
}
- void unref () throw ()
+ void unref () EV_NOEXCEPT
{
ev_unref (EV_AX);
}
#if EV_FEATURE_API
- unsigned int iteration () const throw ()
+ unsigned int iteration () const EV_NOEXCEPT
{
return ev_iteration (EV_AX);
}
- unsigned int depth () const throw ()
+ unsigned int depth () const EV_NOEXCEPT
{
return ev_depth (EV_AX);
}
- void set_io_collect_interval (tstamp interval) throw ()
+ void set_io_collect_interval (tstamp interval) EV_NOEXCEPT
{
ev_set_io_collect_interval (EV_AX_ interval);
}
- void set_timeout_collect_interval (tstamp interval) throw ()
+ void set_timeout_collect_interval (tstamp interval) EV_NOEXCEPT
{
ev_set_timeout_collect_interval (EV_AX_ interval);
}
#endif
// function callback
- void once (int fd, int events, tstamp timeout, void (*cb)(int, void *), void *arg = 0) throw ()
+ void once (int fd, int events, tstamp timeout, void (*cb)(int, void *), void *arg = 0) EV_NOEXCEPT
{
ev_once (EV_AX_ fd, events, timeout, cb, arg);
}
// method callback
template<class K, void (K::*method)(int)>
- void once (int fd, int events, tstamp timeout, K *object) throw ()
+ void once (int fd, int events, tstamp timeout, K *object) EV_NOEXCEPT
{
once (fd, events, timeout, method_thunk<K, method>, object);
}
// default method == operator ()
template<class K>
- void once (int fd, int events, tstamp timeout, K *object) throw ()
+ void once (int fd, int events, tstamp timeout, K *object) EV_NOEXCEPT
{
once (fd, events, timeout, method_thunk<K, &K::operator ()>, object);
}
@@ -292,7 +292,7 @@ namespace ev {
// no-argument method callback
template<class K, void (K::*method)()>
- void once (int fd, int events, tstamp timeout, K *object) throw ()
+ void once (int fd, int events, tstamp timeout, K *object) EV_NOEXCEPT
{
once (fd, events, timeout, method_noargs_thunk<K, method>, object);
}
@@ -306,7 +306,7 @@ namespace ev {
// simpler function callback
template<void (*cb)(int)>
- void once (int fd, int events, tstamp timeout) throw ()
+ void once (int fd, int events, tstamp timeout) EV_NOEXCEPT
{
once (fd, events, timeout, simpler_func_thunk<cb>);
}
@@ -320,7 +320,7 @@ namespace ev {
// simplest function callback
template<void (*cb)()>
- void once (int fd, int events, tstamp timeout) throw ()
+ void once (int fd, int events, tstamp timeout) EV_NOEXCEPT
{
once (fd, events, timeout, simplest_func_thunk<cb>);
}
@@ -332,12 +332,12 @@ namespace ev {
();
}
- void feed_fd_event (int fd, int revents) throw ()
+ void feed_fd_event (int fd, int revents) EV_NOEXCEPT
{
ev_feed_fd_event (EV_AX_ fd, revents);
}
- void feed_signal_event (int signum) throw ()
+ void feed_signal_event (int signum) EV_NOEXCEPT
{
ev_feed_signal_event (EV_AX_ signum);
}
@@ -352,14 +352,14 @@ namespace ev {
struct dynamic_loop : loop_ref
{
- dynamic_loop (unsigned int flags = AUTO) throw (bad_loop)
+ dynamic_loop (unsigned int flags = AUTO)
: loop_ref (ev_loop_new (flags))
{
if (!EV_AX)
throw bad_loop ();
}
- ~dynamic_loop () throw ()
+ ~dynamic_loop () EV_NOEXCEPT
{
ev_loop_destroy (EV_AX);
EV_AX = 0;
@@ -376,7 +376,7 @@ namespace ev {
struct default_loop : loop_ref
{
- default_loop (unsigned int flags = AUTO) throw (bad_loop)
+ default_loop (unsigned int flags = AUTO)
#if EV_MULTIPLICITY
: loop_ref (ev_default_loop (flags))
#endif
@@ -396,7 +396,7 @@ namespace ev {
default_loop &operator = (const default_loop &);
};
- inline loop_ref get_default_loop () throw ()
+ inline loop_ref get_default_loop () EV_NOEXCEPT
{
#if EV_MULTIPLICITY
return ev_default_loop (0);
@@ -421,17 +421,35 @@ namespace ev {
template<class ev_watcher, class watcher>
struct base : ev_watcher
{
+ // scoped pause/unpause of a watcher
+ struct freeze_guard
+ {
+ watcher &w;
+ bool active;
+
+ freeze_guard (watcher *self) EV_NOEXCEPT
+ : w (*self), active (w.is_active ())
+ {
+ if (active) w.stop ();
+ }
+
+ ~freeze_guard ()
+ {
+ if (active) w.start ();
+ }
+ };
+
#if EV_MULTIPLICITY
EV_PX;
// loop set
- void set (EV_P) throw ()
+ void set (EV_P) EV_NOEXCEPT
{
this->EV_A = EV_A;
}
#endif
- base (EV_PX) throw ()
+ base (EV_PX) EV_NOEXCEPT
#if EV_MULTIPLICITY
: EV_A (EV_A)
#endif
@@ -439,7 +457,7 @@ namespace ev {
ev_init (this, 0);
}
- void set_ (const void *data, void (*cb)(EV_P_ ev_watcher *w, int revents)) throw ()
+ void set_ (const void *data, void (*cb)(EV_P_ ev_watcher *w, int revents)) EV_NOEXCEPT
{
this->data = (void *)data;
ev_set_cb (static_cast<ev_watcher *>(this), cb);
@@ -447,7 +465,7 @@ namespace ev {
// function callback
template<void (*function)(watcher &w, int)>
- void set (void *data = 0) throw ()
+ void set (void *data = 0) EV_NOEXCEPT
{
set_ (data, function_thunk<function>);
}
@@ -461,14 +479,14 @@ namespace ev {
// method callback
template<class K, void (K::*method)(watcher &w, int)>
- void set (K *object) throw ()
+ void set (K *object) EV_NOEXCEPT
{
set_ (object, method_thunk<K, method>);
}
// default method == operator ()
template<class K>
- void set (K *object) throw ()
+ void set (K *object) EV_NOEXCEPT
{
set_ (object, method_thunk<K, &K::operator ()>);
}
@@ -482,7 +500,7 @@ namespace ev {
// no-argument callback
template<class K, void (K::*method)()>
- void set (K *object) throw ()
+ void set (K *object) EV_NOEXCEPT
{
set_ (object, method_noargs_thunk<K, method>);
}
@@ -501,76 +519,76 @@ namespace ev {
(static_cast<ev_watcher *>(this), events);
}
- bool is_active () const throw ()
+ bool is_active () const EV_NOEXCEPT
{
return ev_is_active (static_cast<const ev_watcher *>(this));
}
- bool is_pending () const throw ()
+ bool is_pending () const EV_NOEXCEPT
{
return ev_is_pending (static_cast<const ev_watcher *>(this));
}
- void feed_event (int revents) throw ()
+ void feed_event (int revents) EV_NOEXCEPT
{
ev_feed_event (EV_A_ static_cast<ev_watcher *>(this), revents);
}
};
- inline tstamp now (EV_P) throw ()
+ inline tstamp now (EV_P) EV_NOEXCEPT
{
return ev_now (EV_A);
}
- inline void delay (tstamp interval) throw ()
+ inline void delay (tstamp interval) EV_NOEXCEPT
{
ev_sleep (interval);
}
- inline int version_major () throw ()
+ inline int version_major () EV_NOEXCEPT
{
return ev_version_major ();
}
- inline int version_minor () throw ()
+ inline int version_minor () EV_NOEXCEPT
{
return ev_version_minor ();
}
- inline unsigned int supported_backends () throw ()
+ inline unsigned int supported_backends () EV_NOEXCEPT
{
return ev_supported_backends ();
}
- inline unsigned int recommended_backends () throw ()
+ inline unsigned int recommended_backends () EV_NOEXCEPT
{
return ev_recommended_backends ();
}
- inline unsigned int embeddable_backends () throw ()
+ inline unsigned int embeddable_backends () EV_NOEXCEPT
{
return ev_embeddable_backends ();
}
- inline void set_allocator (void *(*cb)(void *ptr, long size) throw ()) throw ()
+ inline void set_allocator (void *(*cb)(void *ptr, long size) EV_NOEXCEPT) EV_NOEXCEPT
{
ev_set_allocator (cb);
}
- inline void set_syserr_cb (void (*cb)(const char *msg) throw ()) throw ()
+ inline void set_syserr_cb (void (*cb)(const char *msg) EV_NOEXCEPT) EV_NOEXCEPT
{
ev_set_syserr_cb (cb);
}
#if EV_MULTIPLICITY
#define EV_CONSTRUCT(cppstem,cstem) \
- (EV_PX = get_default_loop ()) throw () \
+ (EV_PX = get_default_loop ()) EV_NOEXCEPT \
: base<ev_ ## cstem, cppstem> (EV_A) \
{ \
}
#else
#define EV_CONSTRUCT(cppstem,cstem) \
- () throw () \
+ () EV_NOEXCEPT \
{ \
}
#endif
@@ -581,19 +599,19 @@ namespace ev {
\
struct cppstem : base<ev_ ## cstem, cppstem> \
{ \
- void start () throw () \
+ void start () EV_NOEXCEPT \
{ \
ev_ ## cstem ## _start (EV_A_ static_cast<ev_ ## cstem *>(this)); \
} \
\
- void stop () throw () \
+ void stop () EV_NOEXCEPT \
{ \
ev_ ## cstem ## _stop (EV_A_ static_cast<ev_ ## cstem *>(this)); \
} \
\
cppstem EV_CONSTRUCT(cppstem,cstem) \
\
- ~cppstem () throw () \
+ ~cppstem () EV_NOEXCEPT \
{ \
stop (); \
} \
@@ -612,23 +630,19 @@ namespace ev {
};
EV_BEGIN_WATCHER (io, io)
- void set (int fd, int events) throw ()
+ void set (int fd, int events) EV_NOEXCEPT
{
- int active = is_active ();
- if (active) stop ();
+ freeze_guard freeze (this);
ev_io_set (static_cast<ev_io *>(this), fd, events);
- if (active) start ();
}
- void set (int events) throw ()
+ void set (int events) EV_NOEXCEPT
{
- int active = is_active ();
- if (active) stop ();
- ev_io_set (static_cast<ev_io *>(this), fd, events);
- if (active) start ();
+ freeze_guard freeze (this);
+ ev_io_modify (static_cast<ev_io *>(this), events);
}
- void start (int fd, int events) throw ()
+ void start (int fd, int events) EV_NOEXCEPT
{
set (fd, events);
start ();
@@ -636,21 +650,19 @@ namespace ev {
EV_END_WATCHER (io, io)
EV_BEGIN_WATCHER (timer, timer)
- void set (ev_tstamp after, ev_tstamp repeat = 0.) throw ()
+ void set (ev_tstamp after, ev_tstamp repeat = 0.) EV_NOEXCEPT
{
- int active = is_active ();
- if (active) stop ();
+ freeze_guard freeze (this);
ev_timer_set (static_cast<ev_timer *>(this), after, repeat);
- if (active) start ();
}
- void start (ev_tstamp after, ev_tstamp repeat = 0.) throw ()
+ void start (ev_tstamp after, ev_tstamp repeat = 0.) EV_NOEXCEPT
{
set (after, repeat);
start ();
}
- void again () throw ()
+ void again () EV_NOEXCEPT
{
ev_timer_again (EV_A_ static_cast<ev_timer *>(this));
}
@@ -663,21 +675,19 @@ namespace ev {
#if EV_PERIODIC_ENABLE
EV_BEGIN_WATCHER (periodic, periodic)
- void set (ev_tstamp at, ev_tstamp interval = 0.) throw ()
+ void set (ev_tstamp at, ev_tstamp interval = 0.) EV_NOEXCEPT
{
- int active = is_active ();
- if (active) stop ();
+ freeze_guard freeze (this);
ev_periodic_set (static_cast<ev_periodic *>(this), at, interval, 0);
- if (active) start ();
}
- void start (ev_tstamp at, ev_tstamp interval = 0.) throw ()
+ void start (ev_tstamp at, ev_tstamp interval = 0.) EV_NOEXCEPT
{
set (at, interval);
start ();
}
- void again () throw ()
+ void again () EV_NOEXCEPT
{
ev_periodic_again (EV_A_ static_cast<ev_periodic *>(this));
}
@@ -686,15 +696,13 @@ namespace ev {
#if EV_SIGNAL_ENABLE
EV_BEGIN_WATCHER (sig, signal)
- void set (int signum) throw ()
+ void set (int signum) EV_NOEXCEPT
{
- int active = is_active ();
- if (active) stop ();
+ freeze_guard freeze (this);
ev_signal_set (static_cast<ev_signal *>(this), signum);
- if (active) start ();
}
- void start (int signum) throw ()
+ void start (int signum) EV_NOEXCEPT
{
set (signum);
start ();
@@ -704,15 +712,13 @@ namespace ev {
#if EV_CHILD_ENABLE
EV_BEGIN_WATCHER (child, child)
- void set (int pid, int trace = 0) throw ()
+ void set (int pid, int trace = 0) EV_NOEXCEPT
{
- int active = is_active ();
- if (active) stop ();
+ freeze_guard freeze (this);
ev_child_set (static_cast<ev_child *>(this), pid, trace);
- if (active) start ();
}
- void start (int pid, int trace = 0) throw ()
+ void start (int pid, int trace = 0) EV_NOEXCEPT
{
set (pid, trace);
start ();
@@ -722,22 +728,20 @@ namespace ev {
#if EV_STAT_ENABLE
EV_BEGIN_WATCHER (stat, stat)
- void set (const char *path, ev_tstamp interval = 0.) throw ()
+ void set (const char *path, ev_tstamp interval = 0.) EV_NOEXCEPT
{
- int active = is_active ();
- if (active) stop ();
+ freeze_guard freeze (this);
ev_stat_set (static_cast<ev_stat *>(this), path, interval);
- if (active) start ();
}
- void start (const char *path, ev_tstamp interval = 0.) throw ()
+ void start (const char *path, ev_tstamp interval = 0.) EV_NOEXCEPT
{
stop ();
set (path, interval);
start ();
}
- void update () throw ()
+ void update () EV_NOEXCEPT
{
ev_stat_stat (EV_A_ static_cast<ev_stat *>(this));
}
@@ -746,33 +750,31 @@ namespace ev {
#if EV_IDLE_ENABLE
EV_BEGIN_WATCHER (idle, idle)
- void set () throw () { }
+ void set () EV_NOEXCEPT { }
EV_END_WATCHER (idle, idle)
#endif
#if EV_PREPARE_ENABLE
EV_BEGIN_WATCHER (prepare, prepare)
- void set () throw () { }
+ void set () EV_NOEXCEPT { }
EV_END_WATCHER (prepare, prepare)
#endif
#if EV_CHECK_ENABLE
EV_BEGIN_WATCHER (check, check)
- void set () throw () { }
+ void set () EV_NOEXCEPT { }
EV_END_WATCHER (check, check)
#endif
#if EV_EMBED_ENABLE
EV_BEGIN_WATCHER (embed, embed)
- void set_embed (struct ev_loop *embedded_loop) throw ()
+ void set_embed (struct ev_loop *embedded_loop) EV_NOEXCEPT
{
- int active = is_active ();
- if (active) stop ();
+ freeze_guard freeze (this);
ev_embed_set (static_cast<ev_embed *>(this), embedded_loop);
- if (active) start ();
}
- void start (struct ev_loop *embedded_loop) throw ()
+ void start (struct ev_loop *embedded_loop) EV_NOEXCEPT
{
set (embedded_loop);
start ();
@@ -787,18 +789,18 @@ namespace ev {
#if EV_FORK_ENABLE
EV_BEGIN_WATCHER (fork, fork)
- void set () throw () { }
+ void set () EV_NOEXCEPT { }
EV_END_WATCHER (fork, fork)
#endif
#if EV_ASYNC_ENABLE
EV_BEGIN_WATCHER (async, async)
- void send () throw ()
+ void send () EV_NOEXCEPT
{
ev_async_send (EV_A_ static_cast<ev_async *>(this));
}
- bool async_pending () throw ()
+ bool async_pending () EV_NOEXCEPT
{
return ev_async_pending (static_cast<ev_async *>(this));
}
diff --git a/third_party/libev/ev.3 b/third_party/libev/ev.3
index 5b2599e9b..985af854c 100644
--- a/third_party/libev/ev.3
+++ b/third_party/libev/ev.3
@@ -1,4 +1,4 @@
-.\" Automatically generated by Pod::Man 2.28 (Pod::Simple 3.30)
+.\" Automatically generated by Pod::Man 4.11 (Pod::Simple 3.35)
.\"
.\" Standard preamble:
.\" ========================================================================
@@ -46,7 +46,7 @@
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.\"
-.\" If the F register is turned on, we'll generate index entries on stderr for
+.\" If the F register is >0, we'll generate index entries on stderr for
.\" titles (.TH), headers (.SH), subsections (.SS), items (.Ip), and index
.\" entries marked with X<> in POD. Of course, you'll have to process the
.\" output yourself in some meaningful fashion.
@@ -56,12 +56,12 @@
..
.nr rF 0
.if \n(.g .if rF .nr rF 1
-.if (\n(rF:(\n(.g==0)) \{
-. if \nF \{
+.if (\n(rF:(\n(.g==0)) \{\
+. if \nF \{\
. de IX
. tm Index:\\$1\t\\n%\t"\\$2"
..
-. if !\nF==2 \{
+. if !\nF==2 \{\
. nr % 0
. nr F 2
. \}
@@ -133,7 +133,7 @@
.\" ========================================================================
.\"
.IX Title "LIBEV 3"
-.TH LIBEV 3 "2016-11-16" "libev-4.23" "libev - high performance full featured event loop"
+.TH LIBEV 3 "2020-01-22" "libev-4.31" "libev - high performance full featured event loop"
.\" For nroff, turn off justification. Always turn off hyphenation; it makes
.\" way too many mistakes in technical documents.
.if n .ad l
@@ -242,10 +242,10 @@ details of the event, and then hand it over to libev by \fIstarting\fR the
watcher.
.SS "\s-1FEATURES\s0"
.IX Subsection "FEATURES"
-Libev supports \f(CW\*(C`select\*(C'\fR, \f(CW\*(C`poll\*(C'\fR, the Linux-specific \f(CW\*(C`epoll\*(C'\fR, the
-BSD-specific \f(CW\*(C`kqueue\*(C'\fR and the Solaris-specific event port mechanisms
-for file descriptor events (\f(CW\*(C`ev_io\*(C'\fR), the Linux \f(CW\*(C`inotify\*(C'\fR interface
-(for \f(CW\*(C`ev_stat\*(C'\fR), Linux eventfd/signalfd (for faster and cleaner
+Libev supports \f(CW\*(C`select\*(C'\fR, \f(CW\*(C`poll\*(C'\fR, the Linux-specific aio and \f(CW\*(C`epoll\*(C'\fR
+interfaces, the BSD-specific \f(CW\*(C`kqueue\*(C'\fR and the Solaris-specific event port
+mechanisms for file descriptor events (\f(CW\*(C`ev_io\*(C'\fR), the Linux \f(CW\*(C`inotify\*(C'\fR
+interface (for \f(CW\*(C`ev_stat\*(C'\fR), Linux eventfd/signalfd (for faster and cleaner
inter-thread wakeup (\f(CW\*(C`ev_async\*(C'\fR)/signal handling (\f(CW\*(C`ev_signal\*(C'\fR)) relative
timers (\f(CW\*(C`ev_timer\*(C'\fR), absolute timers with customised rescheduling
(\f(CW\*(C`ev_periodic\*(C'\fR), synchronous signals (\f(CW\*(C`ev_signal\*(C'\fR), process status
@@ -293,9 +293,13 @@ it will print a diagnostic message and abort (via the \f(CW\*(C`assert\*(C'\fR m
so \f(CW\*(C`NDEBUG\*(C'\fR will disable this checking): these are programming errors in
the libev caller and need to be fixed there.
.PP
-Libev also has a few internal error-checking \f(CW\*(C`assert\*(C'\fRions, and also has
-extensive consistency checking code. These do not trigger under normal
-circumstances, as they indicate either a bug in libev or worse.
+Via the \f(CW\*(C`EV_FREQUENT\*(C'\fR macro you can compile in and/or enable extensive
+consistency checking code inside libev that can be used to check for
+internal inconsistencies, usually caused by application bugs.
+.PP
+Libev also has a few internal error-checking \f(CW\*(C`assert\*(C'\fRions. These do not
+trigger under normal circumstances, as they indicate either a bug in libev
+or worse.
.SH "GLOBAL FUNCTIONS"
.IX Header "GLOBAL FUNCTIONS"
These functions can be called anytime, even before initialising the
@@ -394,13 +398,35 @@ You could override this function in high-availability programs to, say,
free some memory if it cannot allocate memory, to use a special allocator,
or even to sleep a while and retry until some memory is available.
.Sp
+Example: The following is the \f(CW\*(C`realloc\*(C'\fR function that libev itself uses
+which should work with \f(CW\*(C`realloc\*(C'\fR and \f(CW\*(C`free\*(C'\fR functions of all kinds and
+is probably a good basis for your own implementation.
+.Sp
+.Vb 5
+\& static void *
+\& ev_realloc_emul (void *ptr, long size) EV_NOEXCEPT
+\& {
+\& if (size)
+\& return realloc (ptr, size);
+\&
+\& free (ptr);
+\& return 0;
+\& }
+.Ve
+.Sp
Example: Replace the libev allocator with one that waits a bit and then
-retries (example requires a standards-compliant \f(CW\*(C`realloc\*(C'\fR).
+retries.
.Sp
-.Vb 6
+.Vb 8
\& static void *
\& persistent_realloc (void *ptr, size_t size)
\& {
+\& if (!size)
+\& {
+\& free (ptr);
+\& return 0;
+\& }
+\&
\& for (;;)
\& {
\& void *newptr = realloc (ptr, size);
@@ -538,9 +564,10 @@ make libev check for a fork in each iteration by enabling this flag.
This works by calling \f(CW\*(C`getpid ()\*(C'\fR on every iteration of the loop,
and thus this might slow down your event loop if you do a lot of loop
iterations and little real work, but is usually not noticeable (on my
-GNU/Linux system for example, \f(CW\*(C`getpid\*(C'\fR is actually a simple 5\-insn sequence
-without a system call and thus \fIvery\fR fast, but my GNU/Linux system also has
-\&\f(CW\*(C`pthread_atfork\*(C'\fR which is even faster).
+GNU/Linux system for example, \f(CW\*(C`getpid\*(C'\fR is actually a simple 5\-insn
+sequence without a system call and thus \fIvery\fR fast, but my GNU/Linux
+system also has \f(CW\*(C`pthread_atfork\*(C'\fR which is even faster). (Update: glibc
+version 2.25 apparently removed the \f(CW\*(C`getpid\*(C'\fR optimisation again.)
.Sp
The big advantage of this flag is that you can forget about fork (and
forget about forgetting to tell libev about forking, although you still
@@ -581,12 +608,21 @@ unblocking the signals.
.Sp
It's also required by \s-1POSIX\s0 in a threaded program, as libev calls
\&\f(CW\*(C`sigprocmask\*(C'\fR, whose behaviour is officially unspecified.
-.Sp
-This flag's behaviour will become the default in future versions of libev.
+.ie n .IP """EVFLAG_NOTIMERFD""" 4
+.el .IP "\f(CWEVFLAG_NOTIMERFD\fR" 4
+.IX Item "EVFLAG_NOTIMERFD"
+When this flag is specified, libev will avoid using a \f(CW\*(C`timerfd\*(C'\fR to
+detect time jumps. It will still be able to detect time jumps, although
+more slowly and with lower accuracy; in return, it saves a file descriptor
+per loop.
+.Sp
+The current implementation only tries to use a \f(CW\*(C`timerfd\*(C'\fR when the first
+\&\f(CW\*(C`ev_periodic\*(C'\fR watcher is started and falls back on other methods if it
+cannot be created, but this behaviour might change in the future.
.ie n .IP """EVBACKEND_SELECT"" (value 1, portable select backend)" 4
.el .IP "\f(CWEVBACKEND_SELECT\fR (value 1, portable select backend)" 4
.IX Item "EVBACKEND_SELECT (value 1, portable select backend)"
-This is your standard \fIselect\fR\|(2) backend. Not \fIcompletely\fR standard, as
+This is your standard \fBselect\fR\|(2) backend. Not \fIcompletely\fR standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
using this backend. It doesn't scale too well (O(highest_fd)), but its
@@ -605,7 +641,7 @@ This backend maps \f(CW\*(C`EV_READ\*(C'\fR to the \f(CW\*(C`readfds\*(C'\fR set
.ie n .IP """EVBACKEND_POLL"" (value 2, poll backend, available everywhere except on windows)" 4
.el .IP "\f(CWEVBACKEND_POLL\fR (value 2, poll backend, available everywhere except on windows)" 4
.IX Item "EVBACKEND_POLL (value 2, poll backend, available everywhere except on windows)"
-And this is your standard \fIpoll\fR\|(2) backend. It's more complicated
+And this is your standard \fBpoll\fR\|(2) backend. It's more complicated
than select, but handles sparse fds better and has no artificial
limit on the number of fds you can use (except it will slow down
considerably with a lot of inactive fds). It scales similarly to select,
@@ -617,7 +653,7 @@ This backend maps \f(CW\*(C`EV_READ\*(C'\fR to \f(CW\*(C`POLLIN | POLLERR | POLL
.ie n .IP """EVBACKEND_EPOLL"" (value 4, Linux)" 4
.el .IP "\f(CWEVBACKEND_EPOLL\fR (value 4, Linux)" 4
.IX Item "EVBACKEND_EPOLL (value 4, Linux)"
-Use the linux-specific \fIepoll\fR\|(7) interface (for both pre\- and post\-2.6.9
+Use the Linux-specific \fBepoll\fR\|(7) interface (for both pre\- and post\-2.6.9
kernels).
.Sp
For few fds, this backend is a bit little slower than poll and select, but
@@ -673,22 +709,65 @@ faster than epoll for maybe up to a hundred file descriptors, depending on
the usage. So sad.
.Sp
While nominally embeddable in other event loops, this feature is broken in
-all kernel versions tested so far.
+a lot of kernel revisions, but probably(!) works in current versions.
+.Sp
+This backend maps \f(CW\*(C`EV_READ\*(C'\fR and \f(CW\*(C`EV_WRITE\*(C'\fR in the same way as
+\&\f(CW\*(C`EVBACKEND_POLL\*(C'\fR.
+.ie n .IP """EVBACKEND_LINUXAIO"" (value 64, Linux)" 4
+.el .IP "\f(CWEVBACKEND_LINUXAIO\fR (value 64, Linux)" 4
+.IX Item "EVBACKEND_LINUXAIO (value 64, Linux)"
+Use the Linux-specific \s-1AIO\s0 (\fInot\fR \f(CWaio(7)\fR but \f(CWio_submit(2)\fR)
+event interface available in post\-4.18 kernels (but libev only tries to
+use it in 4.19+).
+.Sp
+This is another Linux train wreck of an event interface.
+.Sp
+If this backend works for you (as of this writing, it was very
+experimental), it is the best event interface available on Linux and might
+be well worth enabling it \- if it isn't available in your kernel this will
+be detected and this backend will be skipped.
+.Sp
+This backend can batch oneshot requests and supports a user-space ring
+buffer to receive events. It also doesn't suffer from most of the design
+problems of epoll (such as not being able to remove event sources from
+the epoll set), and generally sounds too good to be true. Because, this
+being the Linux kernel, of course it suffers from a whole new set of
+limitations, forcing you to fall back to epoll, inheriting all its design
+issues.
+.Sp
+For one, it is not easily embeddable (but probably could be done using
+an event fd at some extra overhead). It is also subject to a system-wide
+limit that can be configured in \fI/proc/sys/fs/aio\-max\-nr\fR. If no \s-1AIO\s0
+requests are left, this backend will be skipped during initialisation, and
+will switch to epoll when the loop is active.
+.Sp
+Most problematic in practice, however, is that not all file descriptors
+work with it. For example, in Linux 5.1, \s-1TCP\s0 sockets, pipes, event fds,
+files, \fI/dev/null\fR and many others are supported, but ttys do not work
+properly (a known bug that the kernel developers don't care about, see
+<https://lore.kernel.org/patchwork/patch/1047453/>), so this is not
+(yet?) a generic event polling interface.
+.Sp
+Overall, it seems the Linux developers just don't want it to have a
+generic event handling mechanism other than \f(CW\*(C`select\*(C'\fR or \f(CW\*(C`poll\*(C'\fR.
+.Sp
+To work around all these problems, the current version of libev uses its
+epoll backend as a fallback for file descriptor types that do not work,
+or falls back completely to epoll if the kernel acts up.
.Sp
This backend maps \f(CW\*(C`EV_READ\*(C'\fR and \f(CW\*(C`EV_WRITE\*(C'\fR in the same way as
\&\f(CW\*(C`EVBACKEND_POLL\*(C'\fR.
.ie n .IP """EVBACKEND_KQUEUE"" (value 8, most \s-1BSD\s0 clones)" 4
.el .IP "\f(CWEVBACKEND_KQUEUE\fR (value 8, most \s-1BSD\s0 clones)" 4
.IX Item "EVBACKEND_KQUEUE (value 8, most BSD clones)"
-Kqueue deserves special mention, as at the time of this writing, it
-was broken on all BSDs except NetBSD (usually it doesn't work reliably
-with anything but sockets and pipes, except on Darwin, where of course
-it's completely useless). Unlike epoll, however, whose brokenness
-is by design, these kqueue bugs can (and eventually will) be fixed
-without \s-1API\s0 changes to existing programs. For this reason it's not being
-\&\*(L"auto-detected\*(R" unless you explicitly specify it in the flags (i.e. using
-\&\f(CW\*(C`EVBACKEND_KQUEUE\*(C'\fR) or libev was compiled on a known-to-be-good (\-enough)
-system like NetBSD.
+Kqueue deserves special mention, as at the time this backend was
+implemented, it was broken on all BSDs except NetBSD (usually it doesn't
+work reliably with anything but sockets and pipes, except on Darwin,
+where of course it's completely useless). Unlike epoll, however, whose
+brokenness is by design, these kqueue bugs can be (and mostly have been)
+fixed without \s-1API\s0 changes to existing programs. For this reason it's not
+being \*(L"auto-detected\*(R" on all platforms unless you explicitly specify it
+in the flags (i.e. using \f(CW\*(C`EVBACKEND_KQUEUE\*(C'\fR) or libev was compiled on a
+known-to-be-good (\-enough) system like NetBSD.
.Sp
You still can embed kqueue into a normal poll or select backend and use it
only for sockets (after having made sure that sockets work with kqueue on
@@ -699,7 +778,7 @@ kernel is more efficient (which says nothing about its actual speed, of
course). While stopping, setting and starting an I/O watcher does never
cause an extra system call as with \f(CW\*(C`EVBACKEND_EPOLL\*(C'\fR, it still adds up to
two event changes per incident. Support for \f(CW\*(C`fork ()\*(C'\fR is very bad (you
-might have to leak fd's on fork, but it's more sane than epoll) and it
+might have to leak fds on fork, but it's more sane than epoll) and it
drops fds silently in similarly hard-to-detect cases.
.Sp
This backend usually performs well under most conditions.
@@ -787,6 +866,14 @@ used if available.
.Vb 1
\& struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_KQUEUE);
.Ve
+.Sp
+Example: Similarly, on Linux, you might want to take advantage of the
+Linux aio backend if possible, but fall back to something else if that
+isn't available.
+.Sp
+.Vb 1
+\& struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_LINUXAIO);
+.Ve
.RE
.IP "ev_loop_destroy (loop)" 4
.IX Item "ev_loop_destroy (loop)"
@@ -1264,8 +1351,9 @@ with a watcher-specific start function (\f(CW\*(C`ev_TYPE_start (loop, watcher
corresponding stop function (\f(CW\*(C`ev_TYPE_stop (loop, watcher *)\*(C'\fR.
.PP
As long as your watcher is active (has been started but not stopped) you
-must not touch the values stored in it. Most specifically you must never
-reinitialise it or call its \f(CW\*(C`ev_TYPE_set\*(C'\fR macro.
+must not touch the values stored in it except when explicitly documented
+otherwise. Most specifically you must never reinitialise it or call its
+\&\f(CW\*(C`ev_TYPE_set\*(C'\fR macro.
.PP
Each and every callback receives the event loop pointer as first, the
registered watcher structure as second, and a bitset of received events as
@@ -1366,7 +1454,7 @@ bug in your program.
Libev will usually signal a few \*(L"dummy\*(R" events together with an error, for
example it might indicate that a fd is readable or writable, and if your
callbacks is well-written it can just attempt the operation and cope with
-the error from \fIread()\fR or \fIwrite()\fR. This will not work in multi-threaded
+the error from \fBread()\fR or \fBwrite()\fR. This will not work in multi-threaded
programs, though, as the fd could already be closed and reused for another
thing, so beware.
.SS "\s-1GENERIC WATCHER FUNCTIONS\s0"
@@ -1578,7 +1666,7 @@ Many event loops support \fIwatcher priorities\fR, which are usually small
integers that influence the ordering of event callback invocation
between watchers in some way, all else being equal.
.PP
-In libev, Watcher priorities can be set using \f(CW\*(C`ev_set_priority\*(C'\fR. See its
+In libev, watcher priorities can be set using \f(CW\*(C`ev_set_priority\*(C'\fR. See its
description for the more technical details such as the actual priority
range.
.PP
@@ -1682,14 +1770,17 @@ This section describes each watcher in detail, but will not repeat
information given in the last section. Any initialisation/set macros,
functions and members specific to the watcher type are explained.
.PP
-Members are additionally marked with either \fI[read\-only]\fR, meaning that,
-while the watcher is active, you can look at the member and expect some
-sensible content, but you must not modify it (you can modify it while the
-watcher is stopped to your hearts content), or \fI[read\-write]\fR, which
+Most members are additionally marked with either \fI[read\-only]\fR, meaning
+that, while the watcher is active, you can look at the member and expect
+some sensible content, but you must not modify it (you can modify it while
+the watcher is stopped to your heart's content), or \fI[read\-write]\fR, which
means you can expect it to have some sensible content while the watcher
is active, but you can also modify it. Modifying it may not do something
sensible or take immediate effect (or do anything at all), but libev will
not crash or malfunction in any way.
+.PP
+In any case, the documentation for each member will explain what the
+effects are, and if there are any additional access restrictions.
.ie n .SS """ev_io"" \- is this file descriptor readable or writable?"
.el .SS "\f(CWev_io\fP \- is this file descriptor readable or writable?"
.IX Subsection "ev_io - is this file descriptor readable or writable?"
@@ -1727,13 +1818,13 @@ But really, best use non-blocking mode.
\fIThe special problem of disappearing file descriptors\fR
.IX Subsection "The special problem of disappearing file descriptors"
.PP
-Some backends (e.g. kqueue, epoll) need to be told about closing a file
-descriptor (either due to calling \f(CW\*(C`close\*(C'\fR explicitly or any other means,
-such as \f(CW\*(C`dup2\*(C'\fR). The reason is that you register interest in some file
-descriptor, but when it goes away, the operating system will silently drop
-this interest. If another file descriptor with the same number then is
-registered with libev, there is no efficient way to see that this is, in
-fact, a different file descriptor.
+Some backends (e.g. kqueue, epoll, linuxaio) need to be told about closing
+a file descriptor (either due to calling \f(CW\*(C`close\*(C'\fR explicitly or any other
+means, such as \f(CW\*(C`dup2\*(C'\fR). The reason is that you register interest in some
+file descriptor, but when it goes away, the operating system will silently
+drop this interest. If another file descriptor with the same number then
+is registered with libev, there is no efficient way to see that this is,
+in fact, a different file descriptor.
.PP
To avoid having to explicitly tell libev about such cases, libev follows
the following policy: Each time \f(CW\*(C`ev_io_set\*(C'\fR is being called, libev
@@ -1795,9 +1886,10 @@ reuse the same code path.
\fIThe special problem of fork\fR
.IX Subsection "The special problem of fork"
.PP
-Some backends (epoll, kqueue) do not support \f(CW\*(C`fork ()\*(C'\fR at all or exhibit
-useless behaviour. Libev fully supports fork, but needs to be told about
-it in the child if you want to continue to use it in the child.
+Some backends (epoll, kqueue, linuxaio, iouring) do not support \f(CW\*(C`fork ()\*(C'\fR
+at all or exhibit useless behaviour. Libev fully supports fork, but needs
+to be told about it in the child if you want to continue to use it in the
+child.
.PP
To support fork in your child processes, you have to call \f(CW\*(C`ev_loop_fork
()\*(C'\fR after a fork in the child, enable \f(CW\*(C`EVFLAG_FORKCHECK\*(C'\fR, or resort to
@@ -1812,13 +1904,13 @@ sent a \s-1SIGPIPE,\s0 which, by default, aborts your program. For most programs
this is sensible behaviour, for daemons, this is usually undesirable.
.PP
So when you encounter spurious, unexplained daemon exits, make sure you
-ignore \s-1SIGPIPE \s0(and maybe make sure you log the exit status of your daemon
+ignore \s-1SIGPIPE\s0 (and maybe make sure you log the exit status of your daemon
somewhere, as that would have given you a big clue).
.PP
-\fIThe special problem of \fIaccept()\fIing when you can't\fR
+\fIThe special problem of \f(BIaccept()\fIing when you can't\fR
.IX Subsection "The special problem of accept()ing when you can't"
.PP
-Many implementations of the \s-1POSIX \s0\f(CW\*(C`accept\*(C'\fR function (for example,
+Many implementations of the \s-1POSIX\s0 \f(CW\*(C`accept\*(C'\fR function (for example,
found in post\-2004 Linux) have the peculiar behaviour of not removing a
connection from the pending queue in all error cases.
.PP
@@ -1864,14 +1956,33 @@ opportunity for a DoS attack.
.IX Item "ev_io_set (ev_io *, int fd, int events)"
.PD
Configures an \f(CW\*(C`ev_io\*(C'\fR watcher. The \f(CW\*(C`fd\*(C'\fR is the file descriptor to
-receive events for and \f(CW\*(C`events\*(C'\fR is either \f(CW\*(C`EV_READ\*(C'\fR, \f(CW\*(C`EV_WRITE\*(C'\fR or
-\&\f(CW\*(C`EV_READ | EV_WRITE\*(C'\fR, to express the desire to receive the given events.
-.IP "int fd [read\-only]" 4
-.IX Item "int fd [read-only]"
-The file descriptor being watched.
-.IP "int events [read\-only]" 4
-.IX Item "int events [read-only]"
-The events being watched.
+receive events for and \f(CW\*(C`events\*(C'\fR is either \f(CW\*(C`EV_READ\*(C'\fR, \f(CW\*(C`EV_WRITE\*(C'\fR, their
+combination \f(CW\*(C`EV_READ | EV_WRITE\*(C'\fR, or \f(CW0\fR, to express the desire to
+receive the given events.
+.Sp
+Note that setting the \f(CW\*(C`events\*(C'\fR to \f(CW0\fR and starting the watcher is
+supported, but not specially optimized \- if your program sometimes happens
+to generate this combination this is fine, but if it is easy to avoid
+starting an io watcher watching for no events you should do so.
+.IP "ev_io_modify (ev_io *, int events)" 4
+.IX Item "ev_io_modify (ev_io *, int events)"
+Similar to \f(CW\*(C`ev_io_set\*(C'\fR, but only changes the event mask. Using this might
+be faster with some backends, as libev can assume that the \f(CW\*(C`fd\*(C'\fR still
+refers to the same underlying file description, something it cannot do
+when using \f(CW\*(C`ev_io_set\*(C'\fR.
+.IP "int fd [no\-modify]" 4
+.IX Item "int fd [no-modify]"
+The file descriptor being watched. While it can be read at any time, you
+must not modify this member even when the watcher is stopped \- always use
+\&\f(CW\*(C`ev_io_set\*(C'\fR for that.
+.IP "int events [no\-modify]" 4
+.IX Item "int events [no-modify]"
+The set of events the fd is being watched for, among other flags. Remember
+that this is a bit set \- to test for \f(CW\*(C`EV_READ\*(C'\fR, use \f(CW\*(C`w\->events &
+EV_READ\*(C'\fR, and similarly for \f(CW\*(C`EV_WRITE\*(C'\fR.
+.Sp
+As with \f(CW\*(C`fd\*(C'\fR, you must not modify this member even when the watcher is
+stopped, always use \f(CW\*(C`ev_io_set\*(C'\fR or \f(CW\*(C`ev_io_modify\*(C'\fR for that.
.PP
\fIExamples\fR
.IX Subsection "Examples"
@@ -2252,11 +2363,11 @@ deterministic behaviour in this case (you can do nothing against
.IP "ev_timer_set (ev_timer *, ev_tstamp after, ev_tstamp repeat)" 4
.IX Item "ev_timer_set (ev_timer *, ev_tstamp after, ev_tstamp repeat)"
.PD
-Configure the timer to trigger after \f(CW\*(C`after\*(C'\fR seconds. If \f(CW\*(C`repeat\*(C'\fR
-is \f(CW0.\fR, then it will automatically be stopped once the timeout is
-reached. If it is positive, then the timer will automatically be
-configured to trigger again \f(CW\*(C`repeat\*(C'\fR seconds later, again, and again,
-until stopped manually.
+Configure the timer to trigger after \f(CW\*(C`after\*(C'\fR seconds (fractional and
+negative values are supported). If \f(CW\*(C`repeat\*(C'\fR is \f(CW0.\fR, then it will
+automatically be stopped once the timeout is reached. If it is positive,
+then the timer will automatically be configured to trigger again \f(CW\*(C`repeat\*(C'\fR
+seconds later, again, and again, until stopped manually.
.Sp
The timer itself will do a best-effort at avoiding drift, that is, if
you configure a timer to trigger every 10 seconds, then it will normally
@@ -2363,8 +2474,8 @@ it, as it uses a relative timeout).
.PP
\&\f(CW\*(C`ev_periodic\*(C'\fR watchers can also be used to implement vastly more complex
timers, such as triggering an event on each \*(L"midnight, local time\*(R", or
-other complicated rules. This cannot be done with \f(CW\*(C`ev_timer\*(C'\fR watchers, as
-those cannot react to time jumps.
+other complicated rules. This cannot easily be done with \f(CW\*(C`ev_timer\*(C'\fR
+watchers, as those cannot react to time jumps.
.PP
As with timers, the callback is guaranteed to be invoked only when the
point in time where it is supposed to trigger has passed. If multiple
@@ -2435,7 +2546,7 @@ ignored. Instead, each time the periodic watcher gets scheduled, the
reschedule callback will be called with the watcher as first, and the
current time as second argument.
.Sp
-\&\s-1NOTE: \s0\fIThis callback \s-1MUST NOT\s0 stop or destroy any periodic watcher, ever,
+\&\s-1NOTE:\s0 \fIThis callback \s-1MUST NOT\s0 stop or destroy any periodic watcher, ever,
or make \s-1ANY\s0 other event loop modifications whatsoever, unless explicitly
allowed by documentation here\fR.
.Sp
@@ -2459,14 +2570,34 @@ It must return the next time to trigger, based on the passed time value
will usually be called just before the callback will be triggered, but
might be called at other times, too.
.Sp
-\&\s-1NOTE: \s0\fIThis callback must always return a time that is higher than or
+\&\s-1NOTE:\s0 \fIThis callback must always return a time that is higher than or
equal to the passed \f(CI\*(C`now\*(C'\fI value\fR.
.Sp
This can be used to create very complex timers, such as a timer that
-triggers on \*(L"next midnight, local time\*(R". To do this, you would calculate the
-next midnight after \f(CW\*(C`now\*(C'\fR and return the timestamp value for this. How
-you do this is, again, up to you (but it is not trivial, which is the main
-reason I omitted it as an example).
+triggers on \*(L"next midnight, local time\*(R". To do this, you would calculate
+the next midnight after \f(CW\*(C`now\*(C'\fR and return the timestamp value for
+this. Here is a (completely untested, no error checking) example on how to
+do this:
+.Sp
+.Vb 1
+\& #include <time.h>
+\&
+\& static ev_tstamp
+\& my_rescheduler (ev_periodic *w, ev_tstamp now)
+\& {
+\& time_t tnow = (time_t)now;
+\& struct tm tm;
+\& localtime_r (&tnow, &tm);
+\&
+\& tm.tm_sec = tm.tm_min = tm.tm_hour = 0; // midnight current day
+\& ++tm.tm_mday; // midnight next day
+\&
+\& return mktime (&tm);
+\& }
+.Ve
+.Sp
+Note: this code might run into trouble on days that have more than two
+midnights (beginning and end).
.RE
.RS 4
.RE
@@ -2594,7 +2725,7 @@ to install a fork handler with \f(CW\*(C`pthread_atfork\*(C'\fR that resets it.
catch fork calls done by libraries (such as the libc) as well.
.PP
In current versions of libev, the signal will not be blocked indefinitely
-unless you use the \f(CW\*(C`signalfd\*(C'\fR \s-1API \s0(\f(CW\*(C`EV_SIGNALFD\*(C'\fR). While this reduces
+unless you use the \f(CW\*(C`signalfd\*(C'\fR \s-1API\s0 (\f(CW\*(C`EV_SIGNALFD\*(C'\fR). While this reduces
the window of opportunity for problems, it will not go away, as libev
\&\fIhas\fR to modify the signal mask, at least temporarily.
.PP
@@ -3646,8 +3777,8 @@ notification, and the callback being invoked.
.SH "OTHER FUNCTIONS"
.IX Header "OTHER FUNCTIONS"
There are some other functions of possible interest. Described. Here. Now.
-.IP "ev_once (loop, int fd, int events, ev_tstamp timeout, callback)" 4
-.IX Item "ev_once (loop, int fd, int events, ev_tstamp timeout, callback)"
+.IP "ev_once (loop, int fd, int events, ev_tstamp timeout, callback, arg)" 4
+.IX Item "ev_once (loop, int fd, int events, ev_tstamp timeout, callback, arg)"
This function combines a simple timer and an I/O watcher, calls your
callback on whichever event happens first and automatically stops both
watchers. This is useful if you want to wait for a single event on an fd
@@ -4107,15 +4238,15 @@ libev sources can be compiled as \*(C+. Therefore, code that uses the C \s-1API\
will work fine.
.PP
Proper exception specifications might have to be added to callbacks passed
-to libev: exceptions may be thrown only from watcher callbacks, all
-other callbacks (allocator, syserr, loop acquire/release and periodic
-reschedule callbacks) must not throw exceptions, and might need a \f(CW\*(C`throw
-()\*(C'\fR specification. If you have code that needs to be compiled as both C
-and \*(C+ you can use the \f(CW\*(C`EV_THROW\*(C'\fR macro for this:
+to libev: exceptions may be thrown only from watcher callbacks, all other
+callbacks (allocator, syserr, loop acquire/release and periodic reschedule
+callbacks) must not throw exceptions, and might need a \f(CW\*(C`noexcept\*(C'\fR
+specification. If you have code that needs to be compiled as both C and
+\&\*(C+ you can use the \f(CW\*(C`EV_NOEXCEPT\*(C'\fR macro for this:
.PP
.Vb 6
\& static void
-\& fatal_error (const char *msg) EV_THROW
+\& fatal_error (const char *msg) EV_NOEXCEPT
\& {
\& perror (msg);
\& abort ();
@@ -4289,6 +4420,9 @@ method.
.Sp
For \f(CW\*(C`ev::embed\*(C'\fR watchers this method is called \f(CW\*(C`set_embed\*(C'\fR, to avoid
clashing with the \f(CW\*(C`set (loop)\*(C'\fR method.
+.Sp
+For \f(CW\*(C`ev::io\*(C'\fR watchers there is an additional \f(CW\*(C`set\*(C'\fR method that accepts a
+new event mask only, and internally calls \f(CW\*(C`ev_io_modify\*(C'\fR.
.IP "w\->start ()" 4
.IX Item "w->start ()"
Starts the watcher. Note that there is no \f(CW\*(C`loop\*(C'\fR argument, as the
@@ -4499,7 +4633,7 @@ configuration (no autoconf):
.PP
This will automatically include \fIev.h\fR, too, and should be done in a
single C source file only to provide the function implementations. To use
-it, do the same for \fIev.h\fR in all files wishing to use this \s-1API \s0(best
+it, do the same for \fIev.h\fR in all files wishing to use this \s-1API\s0 (best
done by writing a wrapper around \fIev.h\fR that you can include instead and
where you can put other configuration options):
.PP
@@ -4523,11 +4657,13 @@ in your include path (e.g. in libev/ when using \-Ilibev):
\&
\& ev_win32.c required on win32 platforms only
\&
-\& ev_select.c only when select backend is enabled (which is enabled by default)
-\& ev_poll.c only when poll backend is enabled (disabled by default)
-\& ev_epoll.c only when the epoll backend is enabled (disabled by default)
-\& ev_kqueue.c only when the kqueue backend is enabled (disabled by default)
-\& ev_port.c only when the solaris port backend is enabled (disabled by default)
+\& ev_select.c only when select backend is enabled
+\& ev_poll.c only when poll backend is enabled
+\& ev_epoll.c only when the epoll backend is enabled
+\& ev_linuxaio.c only when the linux aio backend is enabled
+\& ev_iouring.c only when the linux io_uring backend is enabled
+\& ev_kqueue.c only when the kqueue backend is enabled
+\& ev_port.c only when the solaris port backend is enabled
.Ve
.PP
\&\fIev.c\fR includes the backend files directly when enabled, so you only need
@@ -4582,7 +4718,7 @@ to redefine them before including \fIev.h\fR without breaking compatibility
to a compiled library. All other symbols change the \s-1ABI,\s0 which means all
users of libev and the libev code itself must be compiled with compatible
settings.
-.IP "\s-1EV_COMPAT3 \s0(h)" 4
+.IP "\s-1EV_COMPAT3\s0 (h)" 4
.IX Item "EV_COMPAT3 (h)"
Backwards compatibility is a major concern for libev. This is why this
release of libev comes with wrappers for the functions and symbols that
@@ -4597,7 +4733,7 @@ typedef in that case.
In some future version, the default for \f(CW\*(C`EV_COMPAT3\*(C'\fR will become \f(CW0\fR,
and in some even more future version the compatibility code will be
removed completely.
-.IP "\s-1EV_STANDALONE \s0(h)" 4
+.IP "\s-1EV_STANDALONE\s0 (h)" 4
.IX Item "EV_STANDALONE (h)"
Must always be \f(CW1\fR if you do not use autoconf configuration, which
keeps libev from including \fIconfig.h\fR, and it also defines dummy
@@ -4655,6 +4791,27 @@ available and will probe for kernel support at runtime. This will improve
\&\f(CW\*(C`ev_signal\*(C'\fR and \f(CW\*(C`ev_async\*(C'\fR performance and reduce resource consumption.
If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc
2.7 or newer, otherwise disabled.
+.IP "\s-1EV_USE_SIGNALFD\s0" 4
+.IX Item "EV_USE_SIGNALFD"
+If defined to be \f(CW1\fR, then libev will assume that \f(CW\*(C`signalfd ()\*(C'\fR is
+available and will probe for kernel support at runtime. This enables
+the use of \s-1EVFLAG_SIGNALFD\s0 for faster and simpler signal handling. If
+undefined, it will be enabled if the headers indicate GNU/Linux + Glibc
+2.7 or newer, otherwise disabled.
+.IP "\s-1EV_USE_TIMERFD\s0" 4
+.IX Item "EV_USE_TIMERFD"
+If defined to be \f(CW1\fR, then libev will assume that \f(CW\*(C`timerfd ()\*(C'\fR is
+available and will probe for kernel support at runtime. This allows
+libev to detect time jumps accurately. If undefined, it will be enabled
+if the headers indicate GNU/Linux + Glibc 2.8 or newer and define
+\&\f(CW\*(C`TFD_TIMER_CANCEL_ON_SET\*(C'\fR, otherwise disabled.
+.IP "\s-1EV_USE_EVENTFD\s0" 4
+.IX Item "EV_USE_EVENTFD"
+If defined to be \f(CW1\fR, then libev will assume that \f(CW\*(C`eventfd ()\*(C'\fR is
+available and will probe for kernel support at runtime. This will improve
+\&\f(CW\*(C`ev_signal\*(C'\fR and \f(CW\*(C`ev_async\*(C'\fR performance and reduce resource consumption.
+If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc
+2.7 or newer, otherwise disabled.
.IP "\s-1EV_USE_SELECT\s0" 4
.IX Item "EV_USE_SELECT"
If undefined or defined to be \f(CW1\fR, libev will compile in support for the
@@ -4716,6 +4873,17 @@ If defined to be \f(CW1\fR, libev will compile in support for the Linux
otherwise another method will be used as fallback. This is the preferred
backend for GNU/Linux systems. If undefined, it will be enabled if the
headers indicate GNU/Linux + Glibc 2.4 or newer, otherwise disabled.
+.IP "\s-1EV_USE_LINUXAIO\s0" 4
+.IX Item "EV_USE_LINUXAIO"
+If defined to be \f(CW1\fR, libev will compile in support for the Linux aio
+backend (\f(CW\*(C`EV_USE_EPOLL\*(C'\fR must also be enabled). If undefined, it will be
+enabled on Linux, otherwise disabled.
+.IP "\s-1EV_USE_IOURING\s0" 4
+.IX Item "EV_USE_IOURING"
+If defined to be \f(CW1\fR, libev will compile in support for the Linux
+io_uring backend (\f(CW\*(C`EV_USE_EPOLL\*(C'\fR must also be enabled). Due to its
+current limitations it has to be requested explicitly. If undefined, it
+will be enabled on Linux, otherwise disabled.
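In an embedded (non-autoconf) build, macros like these go in a wrapper header included before ev.h, as the \s-1EV_H\s0/\s-1EV_STANDALONE\s0 discussion elsewhere in this page suggests. A hypothetical sketch (the file name and the particular backend choices are examples, not requirements):

```c
/* myev_config.h -- hypothetical wrapper around ev.h */
#define EV_STANDALONE   1   /* no config.h; embedding build */
#define EV_USE_EPOLL    1   /* required by the two backends below */
#define EV_USE_LINUXAIO 1   /* compile in the Linux aio backend */
#define EV_USE_IOURING  1   /* io_uring must be requested explicitly */
#include "ev.h"
```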
.IP "\s-1EV_USE_KQUEUE\s0" 4
.IX Item "EV_USE_KQUEUE"
If defined to be \f(CW1\fR, libev will compile in support for the \s-1BSD\s0 style
@@ -4765,21 +4933,21 @@ watchers.
.Sp
In the absence of this define, libev will use \f(CW\*(C`sig_atomic_t volatile\*(C'\fR
(from \fIsignal.h\fR), which is usually good enough on most platforms.
-.IP "\s-1EV_H \s0(h)" 4
+.IP "\s-1EV_H\s0 (h)" 4
.IX Item "EV_H (h)"
The name of the \fIev.h\fR header file used to include it. The default if
undefined is \f(CW"ev.h"\fR in \fIevent.h\fR, \fIev.c\fR and \fIev++.h\fR. This can be
used to virtually rename the \fIev.h\fR header file in case of conflicts.
-.IP "\s-1EV_CONFIG_H \s0(h)" 4
+.IP "\s-1EV_CONFIG_H\s0 (h)" 4
.IX Item "EV_CONFIG_H (h)"
If \f(CW\*(C`EV_STANDALONE\*(C'\fR isn't \f(CW1\fR, this variable can be used to override
\&\fIev.c\fR's idea of where to find the \fIconfig.h\fR file, similarly to
\&\f(CW\*(C`EV_H\*(C'\fR, above.
-.IP "\s-1EV_EVENT_H \s0(h)" 4
+.IP "\s-1EV_EVENT_H\s0 (h)" 4
.IX Item "EV_EVENT_H (h)"
Similarly to \f(CW\*(C`EV_H\*(C'\fR, this macro can be used to override \fIevent.c\fR's idea
of how the \fIevent.h\fR header can be found, the default is \f(CW"event.h"\fR.
-.IP "\s-1EV_PROTOTYPES \s0(h)" 4
+.IP "\s-1EV_PROTOTYPES\s0 (h)" 4
.IX Item "EV_PROTOTYPES (h)"
If defined to be \f(CW0\fR, then \fIev.h\fR will not define any function
prototypes, but still define all the structs and other symbols. This is
@@ -4982,6 +5150,9 @@ called once per loop, which can slow down libev. If set to \f(CW3\fR, then the
verification code will be called very frequently, which will slow down
libev considerably.
.Sp
+Verification errors are reported via C's \f(CW\*(C`assert\*(C'\fR mechanism, so if you
+disable that (e.g. by defining \f(CW\*(C`NDEBUG\*(C'\fR) then no errors will be reported.
+.Sp
The default is \f(CW1\fR, unless \f(CW\*(C`EV_FEATURES\*(C'\fR overrides it, in which case it
will be \f(CW0\fR.
.IP "\s-1EV_COMMON\s0" 4
@@ -4998,10 +5169,10 @@ For example, the perl \s-1EV\s0 module uses something like this:
\& SV *self; /* contains this struct */ \e
\& SV *cb_sv, *fh /* note no trailing ";" */
.Ve
-.IP "\s-1EV_CB_DECLARE \s0(type)" 4
+.IP "\s-1EV_CB_DECLARE\s0 (type)" 4
.IX Item "EV_CB_DECLARE (type)"
.PD 0
-.IP "\s-1EV_CB_INVOKE \s0(watcher, revents)" 4
+.IP "\s-1EV_CB_INVOKE\s0 (watcher, revents)" 4
.IX Item "EV_CB_INVOKE (watcher, revents)"
.IP "ev_set_cb (ev, cb)" 4
.IX Item "ev_set_cb (ev, cb)"
@@ -5014,7 +5185,7 @@ avoid the \f(CW\*(C`struct ev_loop *\*(C'\fR as first argument in all cases, or
method calls instead of plain function calls in \*(C+.
.SS "\s-1EXPORTED API SYMBOLS\s0"
.IX Subsection "EXPORTED API SYMBOLS"
-If you need to re-export the \s-1API \s0(e.g. via a \s-1DLL\s0) and you need a list of
+If you need to re-export the \s-1API\s0 (e.g. via a \s-1DLL\s0) and you need a list of
exported symbols, you can use the provided \fISymbol.*\fR files which list
all public symbols, one per line:
.PP
@@ -5256,7 +5427,7 @@ a loop.
.IX Subsection "select is buggy"
.PP
All that's left is \f(CW\*(C`select\*(C'\fR, and of course Apple found a way to fuck this
-one up as well: On \s-1OS/X, \s0\f(CW\*(C`select\*(C'\fR actively limits the number of file
+one up as well: On \s-1OS/X,\s0 \f(CW\*(C`select\*(C'\fR actively limits the number of file
descriptors you can pass in to 1024 \- your program suddenly crashes when
you use more.
.PP
diff --git a/third_party/libev/ev.c b/third_party/libev/ev.c
index 6a2648591..9a4d19905 100644
--- a/third_party/libev/ev.c
+++ b/third_party/libev/ev.c
@@ -1,7 +1,7 @@
/*
* libev event processing core, watcher management
*
- * Copyright (c) 2007,2008,2009,2010,2011,2012,2013 Marc Alexander Lehmann <libev at schmorp.de>
+ * Copyright (c) 2007-2019 Marc Alexander Lehmann <libev at schmorp.de>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modifica-
@@ -117,6 +117,24 @@
# define EV_USE_EPOLL 0
# endif
+# if HAVE_LINUX_AIO_ABI_H
+# ifndef EV_USE_LINUXAIO
+# define EV_USE_LINUXAIO 0 /* was: EV_FEATURE_BACKENDS, always off by default */
+# endif
+# else
+# undef EV_USE_LINUXAIO
+# define EV_USE_LINUXAIO 0
+# endif
+
+# if HAVE_LINUX_FS_H && HAVE_SYS_TIMERFD_H && HAVE_KERNEL_RWF_T
+# ifndef EV_USE_IOURING
+# define EV_USE_IOURING EV_FEATURE_BACKENDS
+# endif
+# else
+# undef EV_USE_IOURING
+# define EV_USE_IOURING 0
+# endif
+
# if HAVE_KQUEUE && HAVE_SYS_EVENT_H
# ifndef EV_USE_KQUEUE
# define EV_USE_KQUEUE EV_FEATURE_BACKENDS
@@ -161,9 +179,28 @@
# undef EV_USE_EVENTFD
# define EV_USE_EVENTFD 0
# endif
-
+
+# if HAVE_SYS_TIMERFD_H
+# ifndef EV_USE_TIMERFD
+# define EV_USE_TIMERFD EV_FEATURE_OS
+# endif
+# else
+# undef EV_USE_TIMERFD
+# define EV_USE_TIMERFD 0
+# endif
+
#endif
+/* OS X, in its infinite idiocy, actually HARDCODES
+ * a limit of 1024 into their select. Where people have brains,
+ * OS X engineers apparently have a vacuum. Or maybe they were
+ * ordered to have a vacuum, or they do anything for money.
+ * This might help. Or not.
+ * Note that this must be defined early, as other include files
+ * will rely on this define as well.
+ */
+#define _DARWIN_UNLIMITED_SELECT 1
+
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
@@ -211,14 +248,6 @@
# undef EV_AVOID_STDIO
#endif
-/* OS X, in its infinite idiocy, actually HARDCODES
- * a limit of 1024 into their select. Where people have brains,
- * OS X engineers apparently have a vacuum. Or maybe they were
- * ordered to have a vacuum, or they do anything for money.
- * This might help. Or not.
- */
-#define _DARWIN_UNLIMITED_SELECT 1
-
/* this block tries to deduce configuration from header-defined symbols and defaults */
/* try to deduce the maximum number of signals on this platform */
@@ -315,6 +344,22 @@
# define EV_USE_PORT 0
#endif
+#ifndef EV_USE_LINUXAIO
+# if __linux /* libev currently assumes linux/aio_abi.h is always available on linux */
+# define EV_USE_LINUXAIO 0 /* was: 1, always off by default */
+# else
+# define EV_USE_LINUXAIO 0
+# endif
+#endif
+
+#ifndef EV_USE_IOURING
+# if __linux /* later checks might disable again */
+# define EV_USE_IOURING 1
+# else
+# define EV_USE_IOURING 0
+# endif
+#endif
+
#ifndef EV_USE_INOTIFY
# if __linux && (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 4))
# define EV_USE_INOTIFY EV_FEATURE_OS
@@ -347,6 +392,14 @@
# endif
#endif
+#ifndef EV_USE_TIMERFD
+# if __linux && (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 8))
+# define EV_USE_TIMERFD EV_FEATURE_OS
+# else
+# define EV_USE_TIMERFD 0
+# endif
+#endif
+
#if 0 /* debugging */
# define EV_VERIFY 3
# define EV_USE_4HEAP 1
@@ -365,7 +418,7 @@
# define EV_HEAP_CACHE_AT EV_FEATURE_DATA
#endif
-#ifdef ANDROID
+#ifdef __ANDROID__
/* supposedly, android doesn't typedef fd_mask */
# undef EV_USE_SELECT
# define EV_USE_SELECT 0
@@ -389,6 +442,7 @@
# define clock_gettime(id, ts) syscall (SYS_clock_gettime, (id), (ts))
# undef EV_USE_MONOTONIC
# define EV_USE_MONOTONIC 1
+# define EV_NEED_SYSCALL 1
# else
# undef EV_USE_CLOCK_SYSCALL
# define EV_USE_CLOCK_SYSCALL 0
@@ -412,6 +466,14 @@
# define EV_USE_INOTIFY 0
#endif
+#if __linux && EV_USE_IOURING
+# include <linux/version.h>
+# if LINUX_VERSION_CODE < KERNEL_VERSION(4,14,0)
+# undef EV_USE_IOURING
+# define EV_USE_IOURING 0
+# endif
+#endif
+
#if !EV_USE_NANOSLEEP
/* hp-ux has it in sys/time.h, which we unconditionally include above */
# if !defined _WIN32 && !defined __hpux
@@ -419,6 +481,31 @@
# endif
#endif
+#if EV_USE_LINUXAIO
+# include <sys/syscall.h>
+# if SYS_io_getevents && EV_USE_EPOLL /* linuxaio backend requires epoll backend */
+# define EV_NEED_SYSCALL 1
+# else
+# undef EV_USE_LINUXAIO
+# define EV_USE_LINUXAIO 0
+# endif
+#endif
+
+#if EV_USE_IOURING
+# include <sys/syscall.h>
+# if !SYS_io_uring_setup && __linux && !__alpha
+# define SYS_io_uring_setup 425
+# define SYS_io_uring_enter 426
+#   define SYS_io_uring_register 427
+# endif
+# if SYS_io_uring_setup && EV_USE_EPOLL /* iouring backend requires epoll backend */
+# define EV_NEED_SYSCALL 1
+# else
+# undef EV_USE_IOURING
+# define EV_USE_IOURING 0
+# endif
+#endif
+
#if EV_USE_INOTIFY
# include <sys/statfs.h>
# include <sys/inotify.h>
@@ -430,7 +517,7 @@
#endif
#if EV_USE_EVENTFD
-/* our minimum requirement is glibc 2.7 which has the stub, but not the header */
+/* our minimum requirement is glibc 2.7 which has the stub, but not the full header */
# include <stdint.h>
# ifndef EFD_NONBLOCK
# define EFD_NONBLOCK O_NONBLOCK
@@ -446,7 +533,7 @@ EV_CPP(extern "C") int (eventfd) (unsigned int initval, int flags);
#endif
#if EV_USE_SIGNALFD
-/* our minimum requirement is glibc 2.7 which has the stub, but not the header */
+/* our minimum requirement is glibc 2.7 which has the stub, but not the full header */
# include <stdint.h>
# ifndef SFD_NONBLOCK
# define SFD_NONBLOCK O_NONBLOCK
@@ -458,7 +545,7 @@ EV_CPP(extern "C") int (eventfd) (unsigned int initval, int flags);
# define SFD_CLOEXEC 02000000
# endif
# endif
-EV_CPP (extern "C") int signalfd (int fd, const sigset_t *mask, int flags);
+EV_CPP (extern "C") int (signalfd) (int fd, const sigset_t *mask, int flags);
struct signalfd_siginfo
{
@@ -467,7 +554,17 @@ struct signalfd_siginfo
};
#endif
-/**/
+/* for timerfd, libev core requires TFD_TIMER_CANCEL_ON_SET &c */
+#if EV_USE_TIMERFD
+# include <sys/timerfd.h>
+/* timerfd is only used for periodics */
+# if !(defined (TFD_TIMER_CANCEL_ON_SET) && defined (TFD_CLOEXEC) && defined (TFD_NONBLOCK)) || !EV_PERIODIC_ENABLE
+# undef EV_USE_TIMERFD
+# define EV_USE_TIMERFD 0
+# endif
+#endif
+
+/*****************************************************************************/
#if EV_VERIFY >= 3
# define EV_FREQUENT_CHECK ev_verify (EV_A)
@@ -482,18 +579,34 @@ struct signalfd_siginfo
#define MIN_INTERVAL 0.0001220703125 /* 1/2**13, good till 4000 */
/*#define MIN_INTERVAL 0.00000095367431640625 /* 1/2**20, good till 2200 */
-#define MIN_TIMEJUMP 1. /* minimum timejump that gets detected (if monotonic clock available) */
-#define MAX_BLOCKTIME 59.743 /* never wait longer than this time (to detect time jumps) */
+#define MIN_TIMEJUMP 1. /* minimum timejump that gets detected (if monotonic clock available) */
+#define MAX_BLOCKTIME 59.743 /* never wait longer than this time (to detect time jumps) */
+#define MAX_BLOCKTIME2 1500001.07 /* same, but when timerfd is used to detect jumps, also safe delay to not overflow */
-#define EV_TV_SET(tv,t) do { tv.tv_sec = (long)t; tv.tv_usec = (long)((t - tv.tv_sec) * 1e6); } while (0)
-#define EV_TS_SET(ts,t) do { ts.tv_sec = (long)t; ts.tv_nsec = (long)((t - ts.tv_sec) * 1e9); } while (0)
+/* find a portable timestamp that is "always" in the future but fits into time_t.
+ * this is quite hard, and we are mostly guessing - we handle 32 bit signed/unsigned time_t,
+ * and sizes larger than 32 bit, and maybe the unlikely floating point time_t */
+#define EV_TSTAMP_HUGE \
+ (sizeof (time_t) >= 8 ? 10000000000000. \
+ : 0 < (time_t)4294967295 ? 4294967295. \
+ : 2147483647.) \
+
+#ifndef EV_TS_CONST
+# define EV_TS_CONST(nv) nv
+# define EV_TS_TO_MSEC(a) a * 1e3 + 0.9999
+# define EV_TS_FROM_USEC(us) us * 1e-6
+# define EV_TV_SET(tv,t) do { tv.tv_sec = (long)t; tv.tv_usec = (long)((t - tv.tv_sec) * 1e6); } while (0)
+# define EV_TS_SET(ts,t) do { ts.tv_sec = (long)t; ts.tv_nsec = (long)((t - ts.tv_sec) * 1e9); } while (0)
+# define EV_TV_GET(tv) ((tv).tv_sec + (tv).tv_usec * 1e-6)
+# define EV_TS_GET(ts) ((ts).tv_sec + (ts).tv_nsec * 1e-9)
+#endif
/* the following is ecb.h embedded into libev - use update_ev_c to update from an external copy */
/* ECB.H BEGIN */
/*
* libecb - http://software.schmorp.de/pkg/libecb
*
- * Copyright (©) 2009-2015 Marc Alexander Lehmann <libecb at schmorp.de>
+ * Copyright (©) 2009-2015,2018-2020 Marc Alexander Lehmann <libecb at schmorp.de>
* Copyright (©) 2011 Emanuele Giaquinta
* All rights reserved.
*
@@ -534,15 +647,23 @@ struct signalfd_siginfo
#define ECB_H
/* 16 bits major, 16 bits minor */
-#define ECB_VERSION 0x00010005
+#define ECB_VERSION 0x00010008
-#ifdef _WIN32
+#include <string.h> /* for memcpy */
+
+#if defined (_WIN32) && !defined (__MINGW32__)
typedef signed char int8_t;
typedef unsigned char uint8_t;
+ typedef signed char int_fast8_t;
+ typedef unsigned char uint_fast8_t;
typedef signed short int16_t;
typedef unsigned short uint16_t;
+ typedef signed int int_fast16_t;
+ typedef unsigned int uint_fast16_t;
typedef signed int int32_t;
typedef unsigned int uint32_t;
+ typedef signed int int_fast32_t;
+ typedef unsigned int uint_fast32_t;
#if __GNUC__
typedef signed long long int64_t;
typedef unsigned long long uint64_t;
@@ -550,6 +671,8 @@ struct signalfd_siginfo
typedef signed __int64 int64_t;
typedef unsigned __int64 uint64_t;
#endif
+ typedef int64_t int_fast64_t;
+ typedef uint64_t uint_fast64_t;
#ifdef _WIN64
#define ECB_PTRSIZE 8
typedef uint64_t uintptr_t;
@@ -571,6 +694,14 @@ struct signalfd_siginfo
#define ECB_GCC_AMD64 (__amd64 || __amd64__ || __x86_64 || __x86_64__)
#define ECB_MSVC_AMD64 (_M_AMD64 || _M_X64)
+#ifndef ECB_OPTIMIZE_SIZE
+ #if __OPTIMIZE_SIZE__
+ #define ECB_OPTIMIZE_SIZE 1
+ #else
+ #define ECB_OPTIMIZE_SIZE 0
+ #endif
+#endif
+
/* work around x32 idiocy by defining proper macros */
#if ECB_GCC_AMD64 || ECB_MSVC_AMD64
#if _ILP32
@@ -609,6 +740,8 @@ struct signalfd_siginfo
#define ECB_CPP (__cplusplus+0)
#define ECB_CPP11 (__cplusplus >= 201103L)
+#define ECB_CPP14 (__cplusplus >= 201402L)
+#define ECB_CPP17 (__cplusplus >= 201703L)
#if ECB_CPP
#define ECB_C 0
@@ -620,6 +753,7 @@ struct signalfd_siginfo
#define ECB_C99 (ECB_STDC_VERSION >= 199901L)
#define ECB_C11 (ECB_STDC_VERSION >= 201112L)
+#define ECB_C17 (ECB_STDC_VERSION >= 201710L)
#if ECB_CPP
#define ECB_EXTERN_C extern "C"
@@ -655,14 +789,15 @@ struct signalfd_siginfo
#ifndef ECB_MEMORY_FENCE
#if ECB_GCC_VERSION(2,5) || defined __INTEL_COMPILER || (__llvm__ && __GNUC__) || __SUNPRO_C >= 0x5110 || __SUNPRO_CC >= 0x5110
+ #define ECB_MEMORY_FENCE_RELAXED __asm__ __volatile__ ("" : : : "memory")
#if __i386 || __i386__
#define ECB_MEMORY_FENCE __asm__ __volatile__ ("lock; orb $0, -1(%%esp)" : : : "memory")
#define ECB_MEMORY_FENCE_ACQUIRE __asm__ __volatile__ ("" : : : "memory")
- #define ECB_MEMORY_FENCE_RELEASE __asm__ __volatile__ ("")
+ #define ECB_MEMORY_FENCE_RELEASE __asm__ __volatile__ ("" : : : "memory")
#elif ECB_GCC_AMD64
#define ECB_MEMORY_FENCE __asm__ __volatile__ ("mfence" : : : "memory")
#define ECB_MEMORY_FENCE_ACQUIRE __asm__ __volatile__ ("" : : : "memory")
- #define ECB_MEMORY_FENCE_RELEASE __asm__ __volatile__ ("")
+ #define ECB_MEMORY_FENCE_RELEASE __asm__ __volatile__ ("" : : : "memory")
#elif __powerpc__ || __ppc__ || __powerpc64__ || __ppc64__
#define ECB_MEMORY_FENCE __asm__ __volatile__ ("sync" : : : "memory")
#elif defined __ARM_ARCH_2__ \
@@ -714,12 +849,14 @@ struct signalfd_siginfo
#define ECB_MEMORY_FENCE __atomic_thread_fence (__ATOMIC_SEQ_CST)
#define ECB_MEMORY_FENCE_ACQUIRE __atomic_thread_fence (__ATOMIC_ACQUIRE)
#define ECB_MEMORY_FENCE_RELEASE __atomic_thread_fence (__ATOMIC_RELEASE)
+ #define ECB_MEMORY_FENCE_RELAXED __atomic_thread_fence (__ATOMIC_RELAXED)
#elif ECB_CLANG_EXTENSION(c_atomic)
/* see comment below (stdatomic.h) about the C11 memory model. */
#define ECB_MEMORY_FENCE __c11_atomic_thread_fence (__ATOMIC_SEQ_CST)
#define ECB_MEMORY_FENCE_ACQUIRE __c11_atomic_thread_fence (__ATOMIC_ACQUIRE)
#define ECB_MEMORY_FENCE_RELEASE __c11_atomic_thread_fence (__ATOMIC_RELEASE)
+ #define ECB_MEMORY_FENCE_RELAXED __c11_atomic_thread_fence (__ATOMIC_RELAXED)
#elif ECB_GCC_VERSION(4,4) || defined __INTEL_COMPILER || defined __clang__
#define ECB_MEMORY_FENCE __sync_synchronize ()
@@ -739,9 +876,10 @@ struct signalfd_siginfo
#define ECB_MEMORY_FENCE MemoryBarrier () /* actually just xchg on x86... scary */
#elif __SUNPRO_C >= 0x5110 || __SUNPRO_CC >= 0x5110
#include <mbarrier.h>
- #define ECB_MEMORY_FENCE __machine_rw_barrier ()
- #define ECB_MEMORY_FENCE_ACQUIRE __machine_r_barrier ()
- #define ECB_MEMORY_FENCE_RELEASE __machine_w_barrier ()
+ #define ECB_MEMORY_FENCE __machine_rw_barrier ()
+ #define ECB_MEMORY_FENCE_ACQUIRE __machine_acq_barrier ()
+ #define ECB_MEMORY_FENCE_RELEASE __machine_rel_barrier ()
+ #define ECB_MEMORY_FENCE_RELAXED __compiler_barrier ()
#elif __xlC__
#define ECB_MEMORY_FENCE __sync ()
#endif
@@ -752,15 +890,9 @@ struct signalfd_siginfo
/* we assume that these memory fences work on all variables/all memory accesses, */
/* not just C11 atomics and atomic accesses */
#include <stdatomic.h>
- /* Unfortunately, neither gcc 4.7 nor clang 3.1 generate any instructions for */
- /* any fence other than seq_cst, which isn't very efficient for us. */
- /* Why that is, we don't know - either the C11 memory model is quite useless */
- /* for most usages, or gcc and clang have a bug */
- /* I *currently* lean towards the latter, and inefficiently implement */
- /* all three of ecb's fences as a seq_cst fence */
- /* Update, gcc-4.8 generates mfence for all c++ fences, but nothing */
- /* for all __atomic_thread_fence's except seq_cst */
#define ECB_MEMORY_FENCE atomic_thread_fence (memory_order_seq_cst)
+ #define ECB_MEMORY_FENCE_ACQUIRE atomic_thread_fence (memory_order_acquire)
+ #define ECB_MEMORY_FENCE_RELEASE atomic_thread_fence (memory_order_release)
#endif
#endif
@@ -790,6 +922,10 @@ struct signalfd_siginfo
#define ECB_MEMORY_FENCE_RELEASE ECB_MEMORY_FENCE
#endif
+#if !defined ECB_MEMORY_FENCE_RELAXED && defined ECB_MEMORY_FENCE
+ #define ECB_MEMORY_FENCE_RELAXED ECB_MEMORY_FENCE /* very heavy-handed */
+#endif
+
/*****************************************************************************/
#if ECB_CPP
@@ -1081,6 +1217,44 @@ ecb_inline ecb_const uint32_t ecb_rotr32 (uint32_t x, unsigned int count) { retu
ecb_inline ecb_const uint64_t ecb_rotl64 (uint64_t x, unsigned int count) { return (x >> (64 - count)) | (x << count); }
ecb_inline ecb_const uint64_t ecb_rotr64 (uint64_t x, unsigned int count) { return (x << (64 - count)) | (x >> count); }
+#if ECB_CPP
+
+inline uint8_t ecb_ctz (uint8_t v) { return ecb_ctz32 (v); }
+inline uint16_t ecb_ctz (uint16_t v) { return ecb_ctz32 (v); }
+inline uint32_t ecb_ctz (uint32_t v) { return ecb_ctz32 (v); }
+inline uint64_t ecb_ctz (uint64_t v) { return ecb_ctz64 (v); }
+
+inline bool ecb_is_pot (uint8_t v) { return ecb_is_pot32 (v); }
+inline bool ecb_is_pot (uint16_t v) { return ecb_is_pot32 (v); }
+inline bool ecb_is_pot (uint32_t v) { return ecb_is_pot32 (v); }
+inline bool ecb_is_pot (uint64_t v) { return ecb_is_pot64 (v); }
+
+inline int ecb_ld (uint8_t v) { return ecb_ld32 (v); }
+inline int ecb_ld (uint16_t v) { return ecb_ld32 (v); }
+inline int ecb_ld (uint32_t v) { return ecb_ld32 (v); }
+inline int ecb_ld (uint64_t v) { return ecb_ld64 (v); }
+
+inline int ecb_popcount (uint8_t v) { return ecb_popcount32 (v); }
+inline int ecb_popcount (uint16_t v) { return ecb_popcount32 (v); }
+inline int ecb_popcount (uint32_t v) { return ecb_popcount32 (v); }
+inline int ecb_popcount (uint64_t v) { return ecb_popcount64 (v); }
+
+inline uint8_t ecb_bitrev (uint8_t v) { return ecb_bitrev8 (v); }
+inline uint16_t ecb_bitrev (uint16_t v) { return ecb_bitrev16 (v); }
+inline uint32_t ecb_bitrev (uint32_t v) { return ecb_bitrev32 (v); }
+
+inline uint8_t ecb_rotl (uint8_t v, unsigned int count) { return ecb_rotl8 (v, count); }
+inline uint16_t ecb_rotl (uint16_t v, unsigned int count) { return ecb_rotl16 (v, count); }
+inline uint32_t ecb_rotl (uint32_t v, unsigned int count) { return ecb_rotl32 (v, count); }
+inline uint64_t ecb_rotl (uint64_t v, unsigned int count) { return ecb_rotl64 (v, count); }
+
+inline uint8_t ecb_rotr (uint8_t v, unsigned int count) { return ecb_rotr8 (v, count); }
+inline uint16_t ecb_rotr (uint16_t v, unsigned int count) { return ecb_rotr16 (v, count); }
+inline uint32_t ecb_rotr (uint32_t v, unsigned int count) { return ecb_rotr32 (v, count); }
+inline uint64_t ecb_rotr (uint64_t v, unsigned int count) { return ecb_rotr64 (v, count); }
+
+#endif
+
#if ECB_GCC_VERSION(4,3) || (ECB_CLANG_BUILTIN(__builtin_bswap32) && ECB_CLANG_BUILTIN(__builtin_bswap64))
#if ECB_GCC_VERSION(4,8) || ECB_CLANG_BUILTIN(__builtin_bswap16)
#define ecb_bswap16(x) __builtin_bswap16 (x)
@@ -1161,6 +1335,78 @@ ecb_inline ecb_const ecb_bool ecb_big_endian (void) { return ecb_byteorder_he
ecb_inline ecb_const ecb_bool ecb_little_endian (void);
ecb_inline ecb_const ecb_bool ecb_little_endian (void) { return ecb_byteorder_helper () == 0x44332211; }
+/*****************************************************************************/
+/* unaligned load/store */
+
+ecb_inline uint_fast16_t ecb_be_u16_to_host (uint_fast16_t v) { return ecb_little_endian () ? ecb_bswap16 (v) : v; }
+ecb_inline uint_fast32_t ecb_be_u32_to_host (uint_fast32_t v) { return ecb_little_endian () ? ecb_bswap32 (v) : v; }
+ecb_inline uint_fast64_t ecb_be_u64_to_host (uint_fast64_t v) { return ecb_little_endian () ? ecb_bswap64 (v) : v; }
+
+ecb_inline uint_fast16_t ecb_le_u16_to_host (uint_fast16_t v) { return ecb_big_endian () ? ecb_bswap16 (v) : v; }
+ecb_inline uint_fast32_t ecb_le_u32_to_host (uint_fast32_t v) { return ecb_big_endian () ? ecb_bswap32 (v) : v; }
+ecb_inline uint_fast64_t ecb_le_u64_to_host (uint_fast64_t v) { return ecb_big_endian () ? ecb_bswap64 (v) : v; }
+
+ecb_inline uint_fast16_t ecb_peek_u16_u (const void *ptr) { uint16_t v; memcpy (&v, ptr, sizeof (v)); return v; }
+ecb_inline uint_fast32_t ecb_peek_u32_u (const void *ptr) { uint32_t v; memcpy (&v, ptr, sizeof (v)); return v; }
+ecb_inline uint_fast64_t ecb_peek_u64_u (const void *ptr) { uint64_t v; memcpy (&v, ptr, sizeof (v)); return v; }
+
+ecb_inline uint_fast16_t ecb_peek_be_u16_u (const void *ptr) { return ecb_be_u16_to_host (ecb_peek_u16_u (ptr)); }
+ecb_inline uint_fast32_t ecb_peek_be_u32_u (const void *ptr) { return ecb_be_u32_to_host (ecb_peek_u32_u (ptr)); }
+ecb_inline uint_fast64_t ecb_peek_be_u64_u (const void *ptr) { return ecb_be_u64_to_host (ecb_peek_u64_u (ptr)); }
+
+ecb_inline uint_fast16_t ecb_peek_le_u16_u (const void *ptr) { return ecb_le_u16_to_host (ecb_peek_u16_u (ptr)); }
+ecb_inline uint_fast32_t ecb_peek_le_u32_u (const void *ptr) { return ecb_le_u32_to_host (ecb_peek_u32_u (ptr)); }
+ecb_inline uint_fast64_t ecb_peek_le_u64_u (const void *ptr) { return ecb_le_u64_to_host (ecb_peek_u64_u (ptr)); }
+
+ecb_inline uint_fast16_t ecb_host_to_be_u16 (uint_fast16_t v) { return ecb_little_endian () ? ecb_bswap16 (v) : v; }
+ecb_inline uint_fast32_t ecb_host_to_be_u32 (uint_fast32_t v) { return ecb_little_endian () ? ecb_bswap32 (v) : v; }
+ecb_inline uint_fast64_t ecb_host_to_be_u64 (uint_fast64_t v) { return ecb_little_endian () ? ecb_bswap64 (v) : v; }
+
+ecb_inline uint_fast16_t ecb_host_to_le_u16 (uint_fast16_t v) { return ecb_big_endian () ? ecb_bswap16 (v) : v; }
+ecb_inline uint_fast32_t ecb_host_to_le_u32 (uint_fast32_t v) { return ecb_big_endian () ? ecb_bswap32 (v) : v; }
+ecb_inline uint_fast64_t ecb_host_to_le_u64 (uint_fast64_t v) { return ecb_big_endian () ? ecb_bswap64 (v) : v; }
+
+ecb_inline void ecb_poke_u16_u (void *ptr, uint16_t v) { memcpy (ptr, &v, sizeof (v)); }
+ecb_inline void ecb_poke_u32_u (void *ptr, uint32_t v) { memcpy (ptr, &v, sizeof (v)); }
+ecb_inline void ecb_poke_u64_u (void *ptr, uint64_t v) { memcpy (ptr, &v, sizeof (v)); }
+
+ecb_inline void ecb_poke_be_u16_u (void *ptr, uint_fast16_t v) { ecb_poke_u16_u (ptr, ecb_host_to_be_u16 (v)); }
+ecb_inline void ecb_poke_be_u32_u (void *ptr, uint_fast32_t v) { ecb_poke_u32_u (ptr, ecb_host_to_be_u32 (v)); }
+ecb_inline void ecb_poke_be_u64_u (void *ptr, uint_fast64_t v) { ecb_poke_u64_u (ptr, ecb_host_to_be_u64 (v)); }
+
+ecb_inline void ecb_poke_le_u16_u (void *ptr, uint_fast16_t v) { ecb_poke_u16_u (ptr, ecb_host_to_le_u16 (v)); }
+ecb_inline void ecb_poke_le_u32_u (void *ptr, uint_fast32_t v) { ecb_poke_u32_u (ptr, ecb_host_to_le_u32 (v)); }
+ecb_inline void ecb_poke_le_u64_u (void *ptr, uint_fast64_t v) { ecb_poke_u64_u (ptr, ecb_host_to_le_u64 (v)); }
+
+#if ECB_CPP
+
+inline uint8_t ecb_bswap (uint8_t v) { return v; }
+inline uint16_t ecb_bswap (uint16_t v) { return ecb_bswap16 (v); }
+inline uint32_t ecb_bswap (uint32_t v) { return ecb_bswap32 (v); }
+inline uint64_t ecb_bswap (uint64_t v) { return ecb_bswap64 (v); }
+
+template<typename T> inline T ecb_be_to_host (T v) { return ecb_little_endian () ? ecb_bswap (v) : v; }
+template<typename T> inline T ecb_le_to_host (T v) { return ecb_big_endian () ? ecb_bswap (v) : v; }
+template<typename T> inline T ecb_peek (const void *ptr) { return *(const T *)ptr; }
+template<typename T> inline T ecb_peek_be (const void *ptr) { return ecb_be_to_host (ecb_peek <T> (ptr)); }
+template<typename T> inline T ecb_peek_le (const void *ptr) { return ecb_le_to_host (ecb_peek <T> (ptr)); }
+template<typename T> inline T ecb_peek_u (const void *ptr) { T v; memcpy (&v, ptr, sizeof (v)); return v; }
+template<typename T> inline T ecb_peek_be_u (const void *ptr) { return ecb_be_to_host (ecb_peek_u<T> (ptr)); }
+template<typename T> inline T ecb_peek_le_u (const void *ptr) { return ecb_le_to_host (ecb_peek_u<T> (ptr)); }
+
+template<typename T> inline T ecb_host_to_be (T v) { return ecb_little_endian () ? ecb_bswap (v) : v; }
+template<typename T> inline T ecb_host_to_le (T v) { return ecb_big_endian () ? ecb_bswap (v) : v; }
+template<typename T> inline void ecb_poke (void *ptr, T v) { *(T *)ptr = v; }
+template<typename T> inline void ecb_poke_be (void *ptr, T v) { return ecb_poke <T> (ptr, ecb_host_to_be (v)); }
+template<typename T> inline void ecb_poke_le (void *ptr, T v) { return ecb_poke <T> (ptr, ecb_host_to_le (v)); }
+template<typename T> inline void ecb_poke_u (void *ptr, T v) { memcpy (ptr, &v, sizeof (v)); }
+template<typename T> inline void ecb_poke_be_u (void *ptr, T v) { return ecb_poke_u<T> (ptr, ecb_host_to_be (v)); }
+template<typename T> inline void ecb_poke_le_u (void *ptr, T v) { return ecb_poke_u<T> (ptr, ecb_host_to_le (v)); }
+
+#endif
+
+/*****************************************************************************/
+
#if ECB_GCC_VERSION(3,0) || ECB_C99
#define ecb_mod(m,n) ((m) % (n) + ((m) % (n) < 0 ? (n) : 0))
#else
@@ -1194,6 +1440,8 @@ ecb_inline ecb_const ecb_bool ecb_little_endian (void) { return ecb_byteorder_he
#define ecb_array_length(name) (sizeof (name) / sizeof (name [0]))
#endif
+/*****************************************************************************/
+
ecb_function_ ecb_const uint32_t ecb_binary16_to_binary32 (uint32_t x);
ecb_function_ ecb_const uint32_t
ecb_binary16_to_binary32 (uint32_t x)
@@ -1311,7 +1559,6 @@ ecb_binary32_to_binary16 (uint32_t x)
|| (defined __arm__ && (defined __ARM_EABI__ || defined __EABI__ || defined __VFP_FP__ || defined _WIN32_WCE || defined __ANDROID__)) \
|| defined __aarch64__
#define ECB_STDFP 1
- #include <string.h> /* for memcpy */
#else
#define ECB_STDFP 0
#endif
@@ -1506,7 +1753,7 @@ ecb_binary32_to_binary16 (uint32_t x)
#if ECB_MEMORY_FENCE_NEEDS_PTHREADS
/* if your architecture doesn't need memory fences, e.g. because it is
* single-cpu/core, or if you use libev in a project that doesn't use libev
- * from multiple threads, then you can define ECB_AVOID_PTHREADS when compiling
+ * from multiple threads, then you can define ECB_NO_THREADS when compiling
* libev, in which cases the memory fences become nops.
* alternatively, you can remove this #error and link against libpthread,
* which will then provide the memory fences.
@@ -1529,9 +1776,75 @@ ecb_binary32_to_binary16 (uint32_t x)
#if EV_FEATURE_CODE
# define inline_speed ecb_inline
#else
-# define inline_speed noinline static
+# define inline_speed ecb_noinline static
#endif
+/*****************************************************************************/
+/* raw syscall wrappers */
+
+#if EV_NEED_SYSCALL
+
+#include <sys/syscall.h>
+
+/*
+ * define some syscall wrappers for common architectures
+ * this is mostly for nice looks during debugging, not performance.
+ * our syscalls return < 0, not == -1, on error. which is good
+ * enough for linux aio.
+ * TODO: arm is also common nowadays, maybe even mips and x86
+ * TODO: after implementing this, it suddenly looks like overkill, but its hard to remove...
+ */
+#if __GNUC__ && __linux && ECB_AMD64 && !EV_FEATURE_CODE
+ /* the costly errno access probably kills this for size optimisation */
+
+ #define ev_syscall(nr,narg,arg1,arg2,arg3,arg4,arg5,arg6) \
+ ({ \
+ long res; \
+ register unsigned long r6 __asm__ ("r9" ); \
+ register unsigned long r5 __asm__ ("r8" ); \
+ register unsigned long r4 __asm__ ("r10"); \
+ register unsigned long r3 __asm__ ("rdx"); \
+ register unsigned long r2 __asm__ ("rsi"); \
+ register unsigned long r1 __asm__ ("rdi"); \
+ if (narg >= 6) r6 = (unsigned long)(arg6); \
+ if (narg >= 5) r5 = (unsigned long)(arg5); \
+ if (narg >= 4) r4 = (unsigned long)(arg4); \
+ if (narg >= 3) r3 = (unsigned long)(arg3); \
+ if (narg >= 2) r2 = (unsigned long)(arg2); \
+ if (narg >= 1) r1 = (unsigned long)(arg1); \
+ __asm__ __volatile__ ( \
+ "syscall\n\t" \
+ : "=a" (res) \
+ : "0" (nr), "r" (r1), "r" (r2), "r" (r3), "r" (r4), "r" (r5) \
+ : "cc", "r11", "cx", "memory"); \
+ errno = -res; \
+ res; \
+ })
+
+#endif
+
+#ifdef ev_syscall
+ #define ev_syscall0(nr) ev_syscall (nr, 0, 0, 0, 0, 0, 0, 0)
+ #define ev_syscall1(nr,arg1) ev_syscall (nr, 1, arg1, 0, 0, 0, 0, 0)
+ #define ev_syscall2(nr,arg1,arg2) ev_syscall (nr, 2, arg1, arg2, 0, 0, 0, 0)
+ #define ev_syscall3(nr,arg1,arg2,arg3) ev_syscall (nr, 3, arg1, arg2, arg3, 0, 0, 0)
  #define ev_syscall4(nr,arg1,arg2,arg3,arg4) ev_syscall (nr, 4, arg1, arg2, arg3, arg4, 0, 0)
+ #define ev_syscall5(nr,arg1,arg2,arg3,arg4,arg5) ev_syscall (nr, 5, arg1, arg2, arg3, arg4, arg5, 0)
+ #define ev_syscall6(nr,arg1,arg2,arg3,arg4,arg5,arg6) ev_syscall (nr, 6, arg1, arg2, arg3, arg4, arg5,arg6)
+#else
+ #define ev_syscall0(nr) syscall (nr)
+ #define ev_syscall1(nr,arg1) syscall (nr, arg1)
+ #define ev_syscall2(nr,arg1,arg2) syscall (nr, arg1, arg2)
+ #define ev_syscall3(nr,arg1,arg2,arg3) syscall (nr, arg1, arg2, arg3)
+ #define ev_syscall4(nr,arg1,arg2,arg3,arg4) syscall (nr, arg1, arg2, arg3, arg4)
+ #define ev_syscall5(nr,arg1,arg2,arg3,arg4,arg5) syscall (nr, arg1, arg2, arg3, arg4, arg5)
+ #define ev_syscall6(nr,arg1,arg2,arg3,arg4,arg5,arg6) syscall (nr, arg1, arg2, arg3, arg4, arg5,arg6)
+#endif
+
+#endif
+
+/*****************************************************************************/
+
#define NUMPRI (EV_MAXPRI - EV_MINPRI + 1)
#if EV_MINPRI == EV_MAXPRI
@@ -1540,8 +1853,7 @@ ecb_binary32_to_binary16 (uint32_t x)
# define ABSPRI(w) (((W)w)->priority - EV_MINPRI)
#endif
-#define EMPTY /* required for microsofts broken pseudo-c compiler */
-#define EMPTY2(a,b) /* used to suppress some warnings */
+#define EMPTY /* required for microsofts broken pseudo-c compiler */
typedef ev_watcher *W;
typedef ev_watcher_list *WL;
@@ -1576,6 +1888,10 @@ static EV_ATOMIC_T have_monotonic; /* did clock_gettime (CLOCK_MONOTONIC) work?
/*****************************************************************************/
+#if EV_USE_LINUXAIO
+# include <linux/aio_abi.h> /* probably only needed for aio_context_t */
+#endif
+
/* define a suitable floor function (only used by periodics atm) */
#if EV_USE_FLOOR
@@ -1586,7 +1902,7 @@ static EV_ATOMIC_T have_monotonic; /* did clock_gettime (CLOCK_MONOTONIC) work?
#include <float.h>
/* a floor() replacement function, should be independent of ev_tstamp type */
-noinline
+ecb_noinline
static ev_tstamp
ev_floor (ev_tstamp v)
{
@@ -1597,26 +1913,26 @@ ev_floor (ev_tstamp v)
const ev_tstamp shift = sizeof (unsigned long) >= 8 ? 18446744073709551616. : 4294967296.;
#endif
- /* argument too large for an unsigned long? */
- if (expect_false (v >= shift))
+ /* special treatment for negative arguments */
+ if (ecb_expect_false (v < 0.))
+ {
+ ev_tstamp f = -ev_floor (-v);
+
+ return f - (f == v ? 0 : 1);
+ }
+
+ /* argument too large for an unsigned long? then reduce it */
+ if (ecb_expect_false (v >= shift))
{
ev_tstamp f;
if (v == v - 1.)
- return v; /* very large number */
+ return v; /* very large numbers are assumed to be integer */
f = shift * ev_floor (v * (1. / shift));
return f + ev_floor (v - f);
}
- /* special treatment for negative args? */
- if (expect_false (v < 0.))
- {
- ev_tstamp f = -ev_floor (-v);
-
- return f - (f == v ? 0 : 1);
- }
-
/* fits into an unsigned long */
return (unsigned long)v;
}
@@ -1629,7 +1945,7 @@ ev_floor (ev_tstamp v)
# include <sys/utsname.h>
#endif
-noinline ecb_cold
+ecb_noinline ecb_cold
static unsigned int
ev_linux_version (void)
{
@@ -1669,7 +1985,7 @@ ev_linux_version (void)
/*****************************************************************************/
#if EV_AVOID_STDIO
-noinline ecb_cold
+ecb_noinline ecb_cold
static void
ev_printerr (const char *msg)
{
@@ -1677,16 +1993,16 @@ ev_printerr (const char *msg)
}
#endif
-static void (*syserr_cb)(const char *msg) EV_THROW;
+static void (*syserr_cb)(const char *msg) EV_NOEXCEPT;
ecb_cold
void
-ev_set_syserr_cb (void (*cb)(const char *msg) EV_THROW) EV_THROW
+ev_set_syserr_cb (void (*cb)(const char *msg) EV_NOEXCEPT) EV_NOEXCEPT
{
syserr_cb = cb;
}
-noinline ecb_cold
+ecb_noinline ecb_cold
static void
ev_syserr (const char *msg)
{
@@ -1710,7 +2026,7 @@ ev_syserr (const char *msg)
}
static void *
-ev_realloc_emul (void *ptr, long size) EV_THROW
+ev_realloc_emul (void *ptr, long size) EV_NOEXCEPT
{
/* some systems, notably openbsd and darwin, fail to properly
* implement realloc (x, 0) (as required by both ansi c-89 and
@@ -1726,11 +2042,11 @@ ev_realloc_emul (void *ptr, long size) EV_THROW
return 0;
}
-static void *(*alloc)(void *ptr, long size) EV_THROW = ev_realloc_emul;
+static void *(*alloc)(void *ptr, long size) EV_NOEXCEPT = ev_realloc_emul;
ecb_cold
void
-ev_set_allocator (void *(*cb)(void *ptr, long size) EV_THROW) EV_THROW
+ev_set_allocator (void *(*cb)(void *ptr, long size) EV_NOEXCEPT) EV_NOEXCEPT
{
alloc = cb;
}
@@ -1767,8 +2083,8 @@ typedef struct
WL head;
unsigned char events; /* the events watched for */
unsigned char reify; /* flag set when this ANFD needs reification (EV_ANFD_REIFY, EV__IOFDSET) */
- unsigned char emask; /* the epoll backend stores the actual kernel mask in here */
- unsigned char unused;
+ unsigned char emask; /* some backends store the actual kernel mask in here */
+ unsigned char eflags; /* flags field for use by backends */
#if EV_USE_EPOLL
unsigned int egen; /* generation counter to counter epoll bugs */
#endif
@@ -1832,12 +2148,7 @@ typedef struct
#else
-#ifdef EV_API_STATIC
- static ev_tstamp ev_rt_now = 0;
-#else
- ev_tstamp ev_rt_now = 0;
-#endif
-
+ EV_API_DECL ev_tstamp ev_rt_now = EV_TS_CONST (0.); /* needs to be initialised to make it a definition despite extern */
#define VAR(name,decl) static decl;
#include "ev_vars.h"
#undef VAR
@@ -1847,8 +2158,8 @@ typedef struct
#endif
#if EV_FEATURE_API
-# define EV_RELEASE_CB if (expect_false (release_cb)) release_cb (EV_A)
-# define EV_ACQUIRE_CB if (expect_false (acquire_cb)) acquire_cb (EV_A)
+# define EV_RELEASE_CB if (ecb_expect_false (release_cb)) release_cb (EV_A)
+# define EV_ACQUIRE_CB if (ecb_expect_false (acquire_cb)) acquire_cb (EV_A)
# define EV_INVOKE_PENDING invoke_cb (EV_A)
#else
# define EV_RELEASE_CB (void)0
@@ -1862,20 +2173,22 @@ typedef struct
#ifndef EV_HAVE_EV_TIME
ev_tstamp
-ev_time (void) EV_THROW
+ev_time (void) EV_NOEXCEPT
{
#if EV_USE_REALTIME
- if (expect_true (have_realtime))
+ if (ecb_expect_true (have_realtime))
{
struct timespec ts;
clock_gettime (CLOCK_REALTIME, &ts);
- return ts.tv_sec + ts.tv_nsec * 1e-9;
+ return EV_TS_GET (ts);
}
#endif
- struct timeval tv;
- gettimeofday (&tv, 0);
- return tv.tv_sec + tv.tv_usec * 1e-6;
+ {
+ struct timeval tv;
+ gettimeofday (&tv, 0);
+ return EV_TV_GET (tv);
+ }
}
#endif
@@ -1883,11 +2196,11 @@ inline_size ev_tstamp
get_clock (void)
{
#if EV_USE_MONOTONIC
- if (expect_true (have_monotonic))
+ if (ecb_expect_true (have_monotonic))
{
struct timespec ts;
clock_gettime (CLOCK_MONOTONIC, &ts);
- return ts.tv_sec + ts.tv_nsec * 1e-9;
+ return EV_TS_GET (ts);
}
#endif
@@ -1896,28 +2209,28 @@ get_clock (void)
#if EV_MULTIPLICITY
ev_tstamp
-ev_now (EV_P) EV_THROW
+ev_now (EV_P) EV_NOEXCEPT
{
return ev_rt_now;
}
#endif
ev_tstamp
-ev_monotonic_now (EV_P) EV_THROW
+ev_monotonic_now (EV_P) EV_NOEXCEPT
{
return mn_now;
}
ev_tstamp
-ev_monotonic_time (void) EV_THROW
+ev_monotonic_time (void) EV_NOEXCEPT
{
return get_clock();
}
void
-ev_sleep (ev_tstamp delay) EV_THROW
+ev_sleep (ev_tstamp delay) EV_NOEXCEPT
{
- if (delay > 0.)
+ if (delay > EV_TS_CONST (0.))
{
#if EV_USE_NANOSLEEP
struct timespec ts;
@@ -1925,7 +2238,9 @@ ev_sleep (ev_tstamp delay) EV_THROW
EV_TS_SET (ts, delay);
nanosleep (&ts, 0);
#elif defined _WIN32
- Sleep ((unsigned long)(delay * 1e3));
+ /* maybe this should round up, as ms is very low resolution */
+ /* compared to select (µs) or nanosleep (ns) */
+ Sleep ((unsigned long)(EV_TS_TO_MSEC (delay)));
#else
struct timeval tv;
@@ -1965,7 +2280,7 @@ array_nextsize (int elem, int cur, int cnt)
return ncur;
}
-noinline ecb_cold
+ecb_noinline ecb_cold
static void *
array_realloc (int elem, void *base, int *cur, int cnt)
{
@@ -1973,16 +2288,18 @@ array_realloc (int elem, void *base, int *cur, int cnt)
return ev_realloc (base, elem * *cur);
}
-#define array_init_zero(base,count) \
- memset ((void *)(base), 0, sizeof (*(base)) * (count))
+#define array_needsize_noinit(base,offset,count)
+
+#define array_needsize_zerofill(base,offset,count) \
+ memset ((void *)(base + offset), 0, sizeof (*(base)) * (count))
#define array_needsize(type,base,cur,cnt,init) \
- if (expect_false ((cnt) > (cur))) \
+ if (ecb_expect_false ((cnt) > (cur))) \
{ \
ecb_unused int ocur_ = (cur); \
(base) = (type *)array_realloc \
(sizeof (type), (base), &(cur), (cnt)); \
- init ((base) + (ocur_), (cur) - ocur_); \
+ init ((base), ocur_, ((cur) - ocur_)); \
}
#if 0
@@ -2001,25 +2318,25 @@ array_realloc (int elem, void *base, int *cur, int cnt)
/*****************************************************************************/
/* dummy callback for pending events */
-noinline
+ecb_noinline
static void
pendingcb (EV_P_ ev_prepare *w, int revents)
{
}
-noinline
+ecb_noinline
void
-ev_feed_event (EV_P_ void *w, int revents) EV_THROW
+ev_feed_event (EV_P_ void *w, int revents) EV_NOEXCEPT
{
W w_ = (W)w;
int pri = ABSPRI (w_);
- if (expect_false (w_->pending))
+ if (ecb_expect_false (w_->pending))
pendings [pri][w_->pending - 1].events |= revents;
else
{
w_->pending = ++pendingcnt [pri];
- array_needsize (ANPENDING, pendings [pri], pendingmax [pri], w_->pending, EMPTY2);
+ array_needsize (ANPENDING, pendings [pri], pendingmax [pri], w_->pending, array_needsize_noinit);
pendings [pri][w_->pending - 1].w = w_;
pendings [pri][w_->pending - 1].events = revents;
}
@@ -2030,7 +2347,7 @@ ev_feed_event (EV_P_ void *w, int revents) EV_THROW
inline_speed void
feed_reverse (EV_P_ W w)
{
- array_needsize (W, rfeeds, rfeedmax, rfeedcnt + 1, EMPTY2);
+ array_needsize (W, rfeeds, rfeedmax, rfeedcnt + 1, array_needsize_noinit);
rfeeds [rfeedcnt++] = w;
}
@@ -2075,12 +2392,12 @@ fd_event (EV_P_ int fd, int revents)
{
ANFD *anfd = anfds + fd;
- if (expect_true (!anfd->reify))
+ if (ecb_expect_true (!anfd->reify))
fd_event_nocheck (EV_A_ fd, revents);
}
void
-ev_feed_fd_event (EV_P_ int fd, int revents) EV_THROW
+ev_feed_fd_event (EV_P_ int fd, int revents) EV_NOEXCEPT
{
if (fd >= 0 && fd < anfdmax)
fd_event_nocheck (EV_A_ fd, revents);
@@ -2093,8 +2410,20 @@ fd_reify (EV_P)
{
int i;
+ /* most backends do not modify the fdchanges list in backend_modify.
+ * except io_uring, which has fixed-size buffers which might force us
+ * to handle events in backend_modify, causing fdchanges to be amended,
+ * which could result in an endless loop.
+ * to avoid this, we do not dynamically handle fds that were added
+ * during fd_reify. that means that for those backends, fdchangecnt
+ * might be non-zero during poll, which must cause them to not block.
+ * to not put too much of a burden on other backends, this detail
+ * needs to be handled in the backend.
+ */
+ int changecnt = fdchangecnt;
+
#if EV_SELECT_IS_WINSOCKET || EV_USE_IOCP
- for (i = 0; i < fdchangecnt; ++i)
+ for (i = 0; i < changecnt; ++i)
{
int fd = fdchanges [i];
ANFD *anfd = anfds + fd;
@@ -2118,7 +2447,7 @@ fd_reify (EV_P)
}
#endif
- for (i = 0; i < fdchangecnt; ++i)
+ for (i = 0; i < changecnt; ++i)
{
int fd = fdchanges [i];
ANFD *anfd = anfds + fd;
@@ -2127,9 +2456,9 @@ fd_reify (EV_P)
unsigned char o_events = anfd->events;
unsigned char o_reify = anfd->reify;
- anfd->reify = 0;
+ anfd->reify = 0;
- /*if (expect_true (o_reify & EV_ANFD_REIFY)) probably a deoptimisation */
+ /*if (ecb_expect_true (o_reify & EV_ANFD_REIFY)) probably a deoptimisation */
{
anfd->events = 0;
@@ -2144,7 +2473,14 @@ fd_reify (EV_P)
backend_modify (EV_A_ fd, o_events, anfd->events);
}
- fdchangecnt = 0;
+ /* normally, fdchangecnt hasn't changed. if it has, then new fds have been added.
+ * this is a rare case (see beginning comment in this function), so we copy them to the
+ * front and hope the backend handles this case.
+ */
+ if (ecb_expect_false (fdchangecnt != changecnt))
+ memmove (fdchanges, fdchanges + changecnt, (fdchangecnt - changecnt) * sizeof (*fdchanges));
+
+ fdchangecnt -= changecnt;
}
/* something about the given fd changed */
@@ -2153,12 +2489,12 @@ void
fd_change (EV_P_ int fd, int flags)
{
unsigned char reify = anfds [fd].reify;
- anfds [fd].reify |= flags;
+ anfds [fd].reify = reify | flags;
- if (expect_true (!reify))
+ if (ecb_expect_true (!reify))
{
++fdchangecnt;
- array_needsize (int, fdchanges, fdchangemax, fdchangecnt, EMPTY2);
+ array_needsize (int, fdchanges, fdchangemax, fdchangecnt, array_needsize_noinit);
fdchanges [fdchangecnt - 1] = fd;
}
}
@@ -2188,7 +2524,7 @@ fd_valid (int fd)
}
/* called on EBADF to verify fds */
-noinline ecb_cold
+ecb_noinline ecb_cold
static void
fd_ebadf (EV_P)
{
@@ -2201,7 +2537,7 @@ fd_ebadf (EV_P)
}
/* called on ENOMEM in select/poll to kill some fds and retry */
-noinline ecb_cold
+ecb_noinline ecb_cold
static void
fd_enomem (EV_P)
{
@@ -2216,7 +2552,7 @@ fd_enomem (EV_P)
}
/* usually called after fork if backend needs to re-arm all fds from scratch */
-noinline
+ecb_noinline
static void
fd_rearm_all (EV_P)
{
@@ -2280,19 +2616,19 @@ downheap (ANHE *heap, int N, int k)
ANHE *pos = heap + DHEAP * (k - HEAP0) + HEAP0 + 1;
/* find minimum child */
- if (expect_true (pos + DHEAP - 1 < E))
+ if (ecb_expect_true (pos + DHEAP - 1 < E))
{
/* fast path */ (minpos = pos + 0), (minat = ANHE_at (*minpos));
- if ( ANHE_at (pos [1]) < minat) (minpos = pos + 1), (minat = ANHE_at (*minpos));
- if ( ANHE_at (pos [2]) < minat) (minpos = pos + 2), (minat = ANHE_at (*minpos));
- if ( ANHE_at (pos [3]) < minat) (minpos = pos + 3), (minat = ANHE_at (*minpos));
+ if ( minat > ANHE_at (pos [1])) (minpos = pos + 1), (minat = ANHE_at (*minpos));
+ if ( minat > ANHE_at (pos [2])) (minpos = pos + 2), (minat = ANHE_at (*minpos));
+ if ( minat > ANHE_at (pos [3])) (minpos = pos + 3), (minat = ANHE_at (*minpos));
}
else if (pos < E)
{
/* slow path */ (minpos = pos + 0), (minat = ANHE_at (*minpos));
- if (pos + 1 < E && ANHE_at (pos [1]) < minat) (minpos = pos + 1), (minat = ANHE_at (*minpos));
- if (pos + 2 < E && ANHE_at (pos [2]) < minat) (minpos = pos + 2), (minat = ANHE_at (*minpos));
- if (pos + 3 < E && ANHE_at (pos [3]) < minat) (minpos = pos + 3), (minat = ANHE_at (*minpos));
+ if (pos + 1 < E && minat > ANHE_at (pos [1])) (minpos = pos + 1), (minat = ANHE_at (*minpos));
+ if (pos + 2 < E && minat > ANHE_at (pos [2])) (minpos = pos + 2), (minat = ANHE_at (*minpos));
+ if (pos + 3 < E && minat > ANHE_at (pos [3])) (minpos = pos + 3), (minat = ANHE_at (*minpos));
}
else
break;
@@ -2310,7 +2646,7 @@ downheap (ANHE *heap, int N, int k)
ev_active (ANHE_w (he)) = k;
}
-#else /* 4HEAP */
+#else /* not 4HEAP */
#define HEAP0 1
#define HPARENT(k) ((k) >> 1)
@@ -2392,7 +2728,7 @@ reheap (ANHE *heap, int N)
/*****************************************************************************/
-/* associate signal watchers to a signal signal */
+/* associate signal watchers to a signal */
typedef struct
{
EV_ATOMIC_T pending;
@@ -2466,7 +2802,7 @@ evpipe_write (EV_P_ EV_ATOMIC_T *flag)
{
ECB_MEMORY_FENCE; /* push out the write before this function was called, acquire flag */
- if (expect_true (*flag))
+ if (ecb_expect_true (*flag))
return;
*flag = 1;
@@ -2497,7 +2833,7 @@ evpipe_write (EV_P_ EV_ATOMIC_T *flag)
#ifdef _WIN32
WSABUF buf;
DWORD sent;
- buf.buf = &buf;
+ buf.buf = (char *)&buf;
buf.len = 1;
WSASend (EV_FD_TO_WIN32_HANDLE (evpipe [1]), &buf, 1, &sent, 0, 0, 0);
#else
@@ -2553,7 +2889,7 @@ pipecb (EV_P_ ev_io *iow, int revents)
ECB_MEMORY_FENCE;
for (i = EV_NSIG - 1; i--; )
- if (expect_false (signals [i].pending))
+ if (ecb_expect_false (signals [i].pending))
ev_feed_signal_event (EV_A_ i + 1);
}
#endif
@@ -2579,7 +2915,7 @@ pipecb (EV_P_ ev_io *iow, int revents)
/*****************************************************************************/
void
-ev_feed_signal (int signum) EV_THROW
+ev_feed_signal (int signum) EV_NOEXCEPT
{
#if EV_MULTIPLICITY
EV_P;
@@ -2604,13 +2940,13 @@ ev_sighandler (int signum)
ev_feed_signal (signum);
}
-noinline
+ecb_noinline
void
-ev_feed_signal_event (EV_P_ int signum) EV_THROW
+ev_feed_signal_event (EV_P_ int signum) EV_NOEXCEPT
{
WL w;
- if (expect_false (signum <= 0 || signum >= EV_NSIG))
+ if (ecb_expect_false (signum <= 0 || signum >= EV_NSIG))
return;
--signum;
@@ -2619,7 +2955,7 @@ ev_feed_signal_event (EV_P_ int signum) EV_THROW
/* it is permissible to try to feed a signal to the wrong loop */
/* or, likely more useful, feeding a signal nobody is waiting for */
- if (expect_false (signals [signum].loop != EV_A))
+ if (ecb_expect_false (signals [signum].loop != EV_A))
return;
#endif
@@ -2713,6 +3049,57 @@ childcb (EV_P_ ev_signal *sw, int revents)
/*****************************************************************************/
+#if EV_USE_TIMERFD
+
+static void periodics_reschedule (EV_P);
+
+static void
+timerfdcb (EV_P_ ev_io *iow, int revents)
+{
+ struct itimerspec its = { 0 };
+
+ its.it_value.tv_sec = ev_rt_now + (int)MAX_BLOCKTIME2;
+ timerfd_settime (timerfd, TFD_TIMER_ABSTIME | TFD_TIMER_CANCEL_ON_SET, &its, 0);
+
+ ev_rt_now = ev_time ();
+ /* periodics_reschedule only needs ev_rt_now */
+ /* but maybe in the future we want the full treatment. */
+ /*
+ now_floor = EV_TS_CONST (0.);
+ time_update (EV_A_ EV_TSTAMP_HUGE);
+ */
+#if EV_PERIODIC_ENABLE
+ periodics_reschedule (EV_A);
+#endif
+}
+
+ecb_noinline ecb_cold
+static void
+evtimerfd_init (EV_P)
+{
+ if (!ev_is_active (&timerfd_w))
+ {
+ timerfd = timerfd_create (CLOCK_REALTIME, TFD_NONBLOCK | TFD_CLOEXEC);
+
+ if (timerfd >= 0)
+ {
+ fd_intern (timerfd); /* just to be sure */
+
+ ev_io_init (&timerfd_w, timerfdcb, timerfd, EV_READ);
+ ev_set_priority (&timerfd_w, EV_MINPRI);
+ ev_io_start (EV_A_ &timerfd_w);
+ ev_unref (EV_A); /* watcher should not keep loop alive */
+
+ /* (re-) arm timer */
+ timerfdcb (EV_A_ 0, 0);
+ }
+ }
+}
+
+#endif
+
+/*****************************************************************************/
+
#if EV_USE_IOCP
# include "ev_iocp.c"
#endif
@@ -2725,6 +3112,12 @@ childcb (EV_P_ ev_signal *sw, int revents)
#if EV_USE_EPOLL
# include "ev_epoll.c"
#endif
+#if EV_USE_LINUXAIO
+# include "ev_linuxaio.c"
+#endif
+#if EV_USE_IOURING
+# include "ev_iouring.c"
+#endif
#if EV_USE_POLL
# include "ev_poll.c"
#endif
@@ -2733,13 +3126,13 @@ childcb (EV_P_ ev_signal *sw, int revents)
#endif
ecb_cold int
-ev_version_major (void) EV_THROW
+ev_version_major (void) EV_NOEXCEPT
{
return EV_VERSION_MAJOR;
}
ecb_cold int
-ev_version_minor (void) EV_THROW
+ev_version_minor (void) EV_NOEXCEPT
{
return EV_VERSION_MINOR;
}
@@ -2758,22 +3151,24 @@ enable_secure (void)
ecb_cold
unsigned int
-ev_supported_backends (void) EV_THROW
+ev_supported_backends (void) EV_NOEXCEPT
{
unsigned int flags = 0;
- if (EV_USE_PORT ) flags |= EVBACKEND_PORT;
- if (EV_USE_KQUEUE) flags |= EVBACKEND_KQUEUE;
- if (EV_USE_EPOLL ) flags |= EVBACKEND_EPOLL;
- if (EV_USE_POLL ) flags |= EVBACKEND_POLL;
- if (EV_USE_SELECT) flags |= EVBACKEND_SELECT;
-
+ if (EV_USE_PORT ) flags |= EVBACKEND_PORT;
+ if (EV_USE_KQUEUE ) flags |= EVBACKEND_KQUEUE;
+ if (EV_USE_EPOLL ) flags |= EVBACKEND_EPOLL;
+ if (EV_USE_LINUXAIO ) flags |= EVBACKEND_LINUXAIO;
+ if (EV_USE_IOURING && ev_linux_version () >= 0x050601) flags |= EVBACKEND_IOURING; /* 5.6.1+ */
+ if (EV_USE_POLL ) flags |= EVBACKEND_POLL;
+ if (EV_USE_SELECT ) flags |= EVBACKEND_SELECT;
+
return flags;
}
ecb_cold
unsigned int
-ev_recommended_backends (void) EV_THROW
+ev_recommended_backends (void) EV_NOEXCEPT
{
unsigned int flags = ev_supported_backends ();
@@ -2791,73 +3186,84 @@ ev_recommended_backends (void) EV_THROW
flags &= ~EVBACKEND_POLL; /* poll return value is unusable (http://forums.freebsd.org/archive/index.php/t-10270.html) */
#endif
+ /* TODO: linuxaio is very experimental */
+#if !EV_RECOMMEND_LINUXAIO
+ flags &= ~EVBACKEND_LINUXAIO;
+#endif
+ /* TODO: iouring is super experimental */
+#if !EV_RECOMMEND_IOURING
+ flags &= ~EVBACKEND_IOURING;
+#endif
+
return flags;
}
ecb_cold
unsigned int
-ev_embeddable_backends (void) EV_THROW
+ev_embeddable_backends (void) EV_NOEXCEPT
{
- int flags = EVBACKEND_EPOLL | EVBACKEND_KQUEUE | EVBACKEND_PORT;
+ int flags = EVBACKEND_EPOLL | EVBACKEND_KQUEUE | EVBACKEND_PORT | EVBACKEND_IOURING;
/* epoll embeddability broken on all linux versions up to at least 2.6.23 */
if (ev_linux_version () < 0x020620) /* disable it on linux < 2.6.32 */
flags &= ~EVBACKEND_EPOLL;
+ /* EVBACKEND_LINUXAIO is theoretically embeddable, but suffers from a performance overhead */
+
return flags;
}
unsigned int
-ev_backend (EV_P) EV_THROW
+ev_backend (EV_P) EV_NOEXCEPT
{
return backend;
}
#if EV_FEATURE_API
unsigned int
-ev_iteration (EV_P) EV_THROW
+ev_iteration (EV_P) EV_NOEXCEPT
{
return loop_count;
}
unsigned int
-ev_depth (EV_P) EV_THROW
+ev_depth (EV_P) EV_NOEXCEPT
{
return loop_depth;
}
void
-ev_set_io_collect_interval (EV_P_ ev_tstamp interval) EV_THROW
+ev_set_io_collect_interval (EV_P_ ev_tstamp interval) EV_NOEXCEPT
{
io_blocktime = interval;
}
void
-ev_set_timeout_collect_interval (EV_P_ ev_tstamp interval) EV_THROW
+ev_set_timeout_collect_interval (EV_P_ ev_tstamp interval) EV_NOEXCEPT
{
timeout_blocktime = interval;
}
void
-ev_set_userdata (EV_P_ void *data) EV_THROW
+ev_set_userdata (EV_P_ void *data) EV_NOEXCEPT
{
userdata = data;
}
void *
-ev_userdata (EV_P) EV_THROW
+ev_userdata (EV_P) EV_NOEXCEPT
{
return userdata;
}
void
-ev_set_invoke_pending_cb (EV_P_ ev_loop_callback invoke_pending_cb) EV_THROW
+ev_set_invoke_pending_cb (EV_P_ ev_loop_callback invoke_pending_cb) EV_NOEXCEPT
{
invoke_cb = invoke_pending_cb;
}
void
-ev_set_loop_release_cb (EV_P_ void (*release)(EV_P) EV_THROW, void (*acquire)(EV_P) EV_THROW) EV_THROW
+ev_set_loop_release_cb (EV_P_ void (*release)(EV_P) EV_NOEXCEPT, void (*acquire)(EV_P) EV_NOEXCEPT) EV_NOEXCEPT
{
release_cb = release;
acquire_cb = acquire;
@@ -2865,9 +3271,9 @@ ev_set_loop_release_cb (EV_P_ void (*release)(EV_P) EV_THROW, void (*acquire)(EV
#endif
/* initialise a loop structure, must be zero-initialised */
-noinline ecb_cold
+ecb_noinline ecb_cold
static void
-loop_init (EV_P_ unsigned int flags) EV_THROW
+loop_init (EV_P_ unsigned int flags) EV_NOEXCEPT
{
if (!backend)
{
@@ -2930,30 +3336,39 @@ loop_init (EV_P_ unsigned int flags) EV_THROW
#if EV_USE_SIGNALFD
sigfd = flags & EVFLAG_SIGNALFD ? -2 : -1;
#endif
+#if EV_USE_TIMERFD
+ timerfd = flags & EVFLAG_NOTIMERFD ? -1 : -2;
+#endif
if (!(flags & EVBACKEND_MASK))
flags |= ev_recommended_backends ();
- if (flags & EVFLAG_ALLOCFD)
+ if (flags & EVFLAG_NOTIMERFD)
if (evpipe_alloc(EV_A) < 0)
return;
#if EV_USE_IOCP
- if (!backend && (flags & EVBACKEND_IOCP )) backend = iocp_init (EV_A_ flags);
+ if (!backend && (flags & EVBACKEND_IOCP )) backend = iocp_init (EV_A_ flags);
#endif
#if EV_USE_PORT
- if (!backend && (flags & EVBACKEND_PORT )) backend = port_init (EV_A_ flags);
+ if (!backend && (flags & EVBACKEND_PORT )) backend = port_init (EV_A_ flags);
#endif
#if EV_USE_KQUEUE
- if (!backend && (flags & EVBACKEND_KQUEUE)) backend = kqueue_init (EV_A_ flags);
+ if (!backend && (flags & EVBACKEND_KQUEUE )) backend = kqueue_init (EV_A_ flags);
+#endif
+#if EV_USE_IOURING
+ if (!backend && (flags & EVBACKEND_IOURING )) backend = iouring_init (EV_A_ flags);
+#endif
+#if EV_USE_LINUXAIO
+ if (!backend && (flags & EVBACKEND_LINUXAIO)) backend = linuxaio_init (EV_A_ flags);
#endif
#if EV_USE_EPOLL
- if (!backend && (flags & EVBACKEND_EPOLL )) backend = epoll_init (EV_A_ flags);
+ if (!backend && (flags & EVBACKEND_EPOLL )) backend = epoll_init (EV_A_ flags);
#endif
#if EV_USE_POLL
- if (!backend && (flags & EVBACKEND_POLL )) backend = poll_init (EV_A_ flags);
+ if (!backend && (flags & EVBACKEND_POLL )) backend = poll_init (EV_A_ flags);
#endif
#if EV_USE_SELECT
- if (!backend && (flags & EVBACKEND_SELECT)) backend = select_init (EV_A_ flags);
+ if (!backend && (flags & EVBACKEND_SELECT )) backend = select_init (EV_A_ flags);
#endif
ev_prepare_init (&pending_w, pendingcb);
@@ -2961,7 +3376,7 @@ loop_init (EV_P_ unsigned int flags) EV_THROW
#if EV_SIGNAL_ENABLE || EV_ASYNC_ENABLE
ev_init (&pipe_w, pipecb);
ev_set_priority (&pipe_w, EV_MAXPRI);
- if (flags & EVFLAG_ALLOCFD)
+ if (flags & EVFLAG_NOTIMERFD)
{
ev_io_set (&pipe_w, evpipe [0] < 0 ? evpipe [1] : evpipe [0], EV_READ);
ev_io_start (EV_A_ &pipe_w);
@@ -2986,7 +3401,7 @@ ev_loop_destroy (EV_P)
#if EV_CLEANUP_ENABLE
/* queue cleanup watchers (and execute them) */
- if (expect_false (cleanupcnt))
+ if (ecb_expect_false (cleanupcnt))
{
queue_events (EV_A_ (W *)cleanups, cleanupcnt, EV_CLEANUP);
EV_INVOKE_PENDING;
@@ -3015,6 +3430,11 @@ ev_loop_destroy (EV_P)
close (sigfd);
#endif
+#if EV_USE_TIMERFD
+ if (ev_is_active (&timerfd_w))
+ close (timerfd);
+#endif
+
#if EV_USE_INOTIFY
if (fs_fd >= 0)
close (fs_fd);
@@ -3024,22 +3444,28 @@ ev_loop_destroy (EV_P)
close (backend_fd);
#if EV_USE_IOCP
- if (backend == EVBACKEND_IOCP ) iocp_destroy (EV_A);
+ if (backend == EVBACKEND_IOCP ) iocp_destroy (EV_A);
#endif
#if EV_USE_PORT
- if (backend == EVBACKEND_PORT ) port_destroy (EV_A);
+ if (backend == EVBACKEND_PORT ) port_destroy (EV_A);
#endif
#if EV_USE_KQUEUE
- if (backend == EVBACKEND_KQUEUE) kqueue_destroy (EV_A);
+ if (backend == EVBACKEND_KQUEUE ) kqueue_destroy (EV_A);
+#endif
+#if EV_USE_IOURING
+ if (backend == EVBACKEND_IOURING ) iouring_destroy (EV_A);
+#endif
+#if EV_USE_LINUXAIO
+ if (backend == EVBACKEND_LINUXAIO) linuxaio_destroy (EV_A);
#endif
#if EV_USE_EPOLL
- if (backend == EVBACKEND_EPOLL ) epoll_destroy (EV_A);
+ if (backend == EVBACKEND_EPOLL ) epoll_destroy (EV_A);
#endif
#if EV_USE_POLL
- if (backend == EVBACKEND_POLL ) poll_destroy (EV_A);
+ if (backend == EVBACKEND_POLL ) poll_destroy (EV_A);
#endif
#if EV_USE_SELECT
- if (backend == EVBACKEND_SELECT) select_destroy (EV_A);
+ if (backend == EVBACKEND_SELECT ) select_destroy (EV_A);
#endif
for (i = NUMPRI; i--; )
@@ -3091,34 +3517,62 @@ inline_size void
loop_fork (EV_P)
{
#if EV_USE_PORT
- if (backend == EVBACKEND_PORT ) port_fork (EV_A);
+ if (backend == EVBACKEND_PORT ) port_fork (EV_A);
#endif
#if EV_USE_KQUEUE
- if (backend == EVBACKEND_KQUEUE) kqueue_fork (EV_A);
+ if (backend == EVBACKEND_KQUEUE ) kqueue_fork (EV_A);
+#endif
+#if EV_USE_IOURING
+ if (backend == EVBACKEND_IOURING ) iouring_fork (EV_A);
+#endif
+#if EV_USE_LINUXAIO
+ if (backend == EVBACKEND_LINUXAIO) linuxaio_fork (EV_A);
#endif
#if EV_USE_EPOLL
- if (backend == EVBACKEND_EPOLL ) epoll_fork (EV_A);
+ if (backend == EVBACKEND_EPOLL ) epoll_fork (EV_A);
#endif
#if EV_USE_INOTIFY
infy_fork (EV_A);
#endif
-#if EV_SIGNAL_ENABLE || EV_ASYNC_ENABLE
- if (ev_is_active (&pipe_w) && postfork != 2)
+ if (postfork != 2)
{
- /* pipe_write_wanted must be false now, so modifying fd vars should be safe */
-
- ev_ref (EV_A);
- ev_io_stop (EV_A_ &pipe_w);
-
- if (evpipe [0] >= 0)
- EV_WIN32_CLOSE_FD (evpipe [0]);
+ #if EV_USE_SIGNALFD
+ /* surprisingly, nothing needs to be done for signalfd, according to docs, it does the right thing on fork */
+ #endif
+
+ #if EV_USE_TIMERFD
+ if (ev_is_active (&timerfd_w))
+ {
+ ev_ref (EV_A);
+ ev_io_stop (EV_A_ &timerfd_w);
- evpipe_init (EV_A);
- /* iterate over everything, in case we missed something before */
- ev_feed_event (EV_A_ &pipe_w, EV_CUSTOM);
+ close (timerfd);
+ timerfd = -2;
+
+ evtimerfd_init (EV_A);
+ /* reschedule periodics, in case we missed something */
+ ev_feed_event (EV_A_ &timerfd_w, EV_CUSTOM);
+ }
+ #endif
+
+ #if EV_SIGNAL_ENABLE || EV_ASYNC_ENABLE
+ if (ev_is_active (&pipe_w))
+ {
+ /* pipe_write_wanted must be false now, so modifying fd vars should be safe */
+
+ ev_ref (EV_A);
+ ev_io_stop (EV_A_ &pipe_w);
+
+ if (evpipe [0] >= 0)
+ EV_WIN32_CLOSE_FD (evpipe [0]);
+
+ evpipe_init (EV_A);
+ /* iterate over everything, in case we missed something before */
+ ev_feed_event (EV_A_ &pipe_w, EV_CUSTOM);
+ }
+ #endif
}
-#endif
postfork = 0;
}
@@ -3127,7 +3581,7 @@ loop_fork (EV_P)
ecb_cold
struct ev_loop *
-ev_loop_new (unsigned int flags) EV_THROW
+ev_loop_new (unsigned int flags) EV_NOEXCEPT
{
EV_P = (struct ev_loop *)ev_malloc (sizeof (struct ev_loop));
@@ -3144,7 +3598,7 @@ ev_loop_new (unsigned int flags) EV_THROW
#endif /* multiplicity */
#if EV_VERIFY
-noinline ecb_cold
+ecb_noinline ecb_cold
static void
verify_watcher (EV_P_ W w)
{
@@ -3154,7 +3608,7 @@ verify_watcher (EV_P_ W w)
assert (("libev: pending watcher not on pending queue", pendings [ABSPRI (w)][w->pending - 1].w == w));
}
-noinline ecb_cold
+ecb_noinline ecb_cold
static void
verify_heap (EV_P_ ANHE *heap, int N)
{
@@ -3170,7 +3624,7 @@ verify_heap (EV_P_ ANHE *heap, int N)
}
}
-noinline ecb_cold
+ecb_noinline ecb_cold
static void
array_verify (EV_P_ W *ws, int cnt)
{
@@ -3184,7 +3638,7 @@ array_verify (EV_P_ W *ws, int cnt)
#if EV_FEATURE_API
void ecb_cold
-ev_verify (EV_P) EV_THROW
+ev_verify (EV_P) EV_NOEXCEPT
{
#if EV_VERIFY
int i;
@@ -3275,7 +3729,7 @@ struct ev_loop *
#else
int
#endif
-ev_default_loop (unsigned int flags) EV_THROW
+ev_default_loop (unsigned int flags) EV_NOEXCEPT
{
if (!ev_default_loop_ptr)
{
@@ -3304,7 +3758,7 @@ ev_default_loop (unsigned int flags) EV_THROW
}
void
-ev_loop_fork (EV_P) EV_THROW
+ev_loop_fork (EV_P) EV_NOEXCEPT
{
postfork = 1;
}
@@ -3318,7 +3772,7 @@ ev_invoke (EV_P_ void *w, int revents)
}
unsigned int
-ev_pending_count (EV_P) EV_THROW
+ev_pending_count (EV_P) EV_NOEXCEPT
{
int pri;
unsigned int count = 0;
@@ -3329,16 +3783,17 @@ ev_pending_count (EV_P) EV_THROW
return count;
}
-noinline
+ecb_noinline
void
ev_invoke_pending (EV_P)
{
pendingpri = NUMPRI;
- while (pendingpri) /* pendingpri possibly gets modified in the inner loop */
+ do
{
--pendingpri;
+ /* pendingpri possibly gets modified in the inner loop */
while (pendingcnt [pendingpri])
{
ANPENDING *p = pendings [pendingpri] + --pendingcnt [pendingpri];
@@ -3348,6 +3803,7 @@ ev_invoke_pending (EV_P)
EV_FREQUENT_CHECK;
}
}
+ while (pendingpri);
}
#if EV_IDLE_ENABLE
@@ -3356,7 +3812,7 @@ ev_invoke_pending (EV_P)
inline_size void
idle_reify (EV_P)
{
- if (expect_false (idleall))
+ if (ecb_expect_false (idleall))
{
int pri;
@@ -3396,7 +3852,7 @@ timers_reify (EV_P)
if (ev_at (w) < mn_now)
ev_at (w) = mn_now;
- assert (("libev: negative ev_timer repeat value found while processing timers", w->repeat > 0.));
+ assert (("libev: negative ev_timer repeat value found while processing timers", w->repeat > EV_TS_CONST (0.)));
ANHE_at_cache (timers [HEAP0]);
downheap (timers, timercnt, HEAP0);
@@ -3415,7 +3871,7 @@ timers_reify (EV_P)
#if EV_PERIODIC_ENABLE
-noinline
+ecb_noinline
static void
periodic_recalc (EV_P_ ev_periodic *w)
{
@@ -3428,7 +3884,7 @@ periodic_recalc (EV_P_ ev_periodic *w)
ev_tstamp nat = at + w->interval;
/* when resolution fails us, we use ev_rt_now */
- if (expect_false (nat == at))
+ if (ecb_expect_false (nat == at))
{
at = ev_rt_now;
break;
@@ -3484,7 +3940,7 @@ periodics_reify (EV_P)
/* simply recalculate all periodics */
/* TODO: maybe ensure that at least one event happens when jumping forward? */
-noinline ecb_cold
+ecb_noinline ecb_cold
static void
periodics_reschedule (EV_P)
{
@@ -3508,7 +3964,7 @@ periodics_reschedule (EV_P)
#endif
/* adjust all timers by a given offset */
-noinline ecb_cold
+ecb_noinline ecb_cold
static void
timers_reschedule (EV_P_ ev_tstamp adjust)
{
@@ -3528,7 +3984,7 @@ inline_speed void
time_update (EV_P_ ev_tstamp max_block)
{
#if EV_USE_MONOTONIC
- if (expect_true (have_monotonic))
+ if (ecb_expect_true (have_monotonic))
{
int i;
ev_tstamp odiff = rtmn_diff;
@@ -3537,7 +3993,7 @@ time_update (EV_P_ ev_tstamp max_block)
/* only fetch the realtime clock every 0.5*MIN_TIMEJUMP seconds */
/* interpolate in the meantime */
- if (expect_true (mn_now - now_floor < MIN_TIMEJUMP * .5))
+ if (ecb_expect_true (mn_now - now_floor < EV_TS_CONST (MIN_TIMEJUMP * .5)))
{
ev_rt_now = rtmn_diff + mn_now;
return;
@@ -3561,7 +4017,7 @@ time_update (EV_P_ ev_tstamp max_block)
diff = odiff - rtmn_diff;
- if (expect_true ((diff < 0. ? -diff : diff) < MIN_TIMEJUMP))
+ if (ecb_expect_true ((diff < EV_TS_CONST (0.) ? -diff : diff) < EV_TS_CONST (MIN_TIMEJUMP)))
return; /* all is well */
ev_rt_now = ev_time ();
@@ -3580,7 +4036,7 @@ time_update (EV_P_ ev_tstamp max_block)
{
ev_rt_now = ev_time ();
- if (expect_false (mn_now > ev_rt_now || ev_rt_now > mn_now + max_block + MIN_TIMEJUMP))
+ if (ecb_expect_false (mn_now > ev_rt_now || ev_rt_now > mn_now + max_block + EV_TS_CONST (MIN_TIMEJUMP)))
{
/* adjust timers. this is easy, as the offset is the same for all of them */
timers_reschedule (EV_A_ ev_rt_now - mn_now);
@@ -3613,8 +4069,8 @@ ev_run (EV_P_ int flags)
#endif
#ifndef _WIN32
- if (expect_false (curpid)) /* penalise the forking check even more */
- if (expect_false (getpid () != curpid))
+ if (ecb_expect_false (curpid)) /* penalise the forking check even more */
+ if (ecb_expect_false (getpid () != curpid))
{
curpid = getpid ();
postfork = 1;
@@ -3623,7 +4079,7 @@ ev_run (EV_P_ int flags)
#if EV_FORK_ENABLE
/* we might have forked, so queue fork handlers */
- if (expect_false (postfork))
+ if (ecb_expect_false (postfork))
if (forkcnt)
{
queue_events (EV_A_ (W *)forks, forkcnt, EV_FORK);
@@ -3633,18 +4089,18 @@ ev_run (EV_P_ int flags)
#if EV_PREPARE_ENABLE
/* queue prepare watchers (and execute them) */
- if (expect_false (preparecnt))
+ if (ecb_expect_false (preparecnt))
{
queue_events (EV_A_ (W *)prepares, preparecnt, EV_PREPARE);
EV_INVOKE_PENDING;
}
#endif
- if (expect_false (loop_done))
+ if (ecb_expect_false (loop_done))
break;
/* we might have forked, so reify kernel state if necessary */
- if (expect_false (postfork))
+ if (ecb_expect_false (postfork))
loop_fork (EV_A);
/* update fd-related kernel structures */
@@ -3659,16 +4115,28 @@ ev_run (EV_P_ int flags)
ev_tstamp prev_mn_now = mn_now;
/* update time to cancel out callback processing overhead */
- time_update (EV_A_ 1e100);
+ time_update (EV_A_ EV_TS_CONST (EV_TSTAMP_HUGE));
/* from now on, we want a pipe-wake-up */
pipe_write_wanted = 1;
ECB_MEMORY_FENCE; /* make sure pipe_write_wanted is visible before we check for potential skips */
- if (expect_true (!(flags & EVRUN_NOWAIT || idleall || !activecnt || pipe_write_skipped)))
+ if (ecb_expect_true (!(flags & EVRUN_NOWAIT || idleall || !activecnt || pipe_write_skipped)))
{
- waittime = MAX_BLOCKTIME;
+ waittime = EV_TS_CONST (MAX_BLOCKTIME);
+
+#if EV_USE_TIMERFD
+ /* sleep a lot longer when we can reliably detect timejumps */
+ if (ecb_expect_true (timerfd >= 0))
+ waittime = EV_TS_CONST (MAX_BLOCKTIME2);
+#endif
+#if !EV_PERIODIC_ENABLE
+ /* without periodics but with monotonic clock there is no need */
+ /* for any time jump detection, so sleep longer */
+ if (ecb_expect_true (have_monotonic))
+ waittime = EV_TS_CONST (MAX_BLOCKTIME2);
+#endif
if (timercnt)
{
@@ -3685,23 +4153,28 @@ ev_run (EV_P_ int flags)
#endif
/* don't let timeouts decrease the waittime below timeout_blocktime */
- if (expect_false (waittime < timeout_blocktime))
+ if (ecb_expect_false (waittime < timeout_blocktime))
waittime = timeout_blocktime;
- /* at this point, we NEED to wait, so we have to ensure */
- /* to pass a minimum nonzero value to the backend */
- if (expect_false (waittime < backend_mintime))
- waittime = backend_mintime;
+ /* now there are two more special cases left, either we have
+ * already-expired timers, so we should not sleep, or we have timers
+ * that expire very soon, in which case we need to wait for a minimum
+ * amount of time for some event loop backends.
+ */
+ if (ecb_expect_false (waittime < backend_mintime))
+ waittime = waittime <= EV_TS_CONST (0.)
+ ? EV_TS_CONST (0.)
+ : backend_mintime;
/* extra check because io_blocktime is commonly 0 */
- if (expect_false (io_blocktime))
+ if (ecb_expect_false (io_blocktime))
{
sleeptime = io_blocktime - (mn_now - prev_mn_now);
if (sleeptime > waittime - backend_mintime)
sleeptime = waittime - backend_mintime;
- if (expect_true (sleeptime > 0.))
+ if (ecb_expect_true (sleeptime > EV_TS_CONST (0.)))
{
ev_sleep (sleeptime);
waittime -= sleeptime;
@@ -3725,7 +4198,6 @@ ev_run (EV_P_ int flags)
ev_feed_event (EV_A_ &pipe_w, EV_CUSTOM);
}
-
/* update ev_rt_now, do magic */
time_update (EV_A_ waittime + sleeptime);
}
@@ -3743,13 +4215,13 @@ ev_run (EV_P_ int flags)
#if EV_CHECK_ENABLE
/* queue check watchers, to be executed first */
- if (expect_false (checkcnt))
+ if (ecb_expect_false (checkcnt))
queue_events (EV_A_ (W *)checks, checkcnt, EV_CHECK);
#endif
EV_INVOKE_PENDING;
}
- while (expect_true (
+ while (ecb_expect_true (
activecnt
&& !loop_done
&& !(flags & (EVRUN_ONCE | EVRUN_NOWAIT))
@@ -3766,43 +4238,43 @@ ev_run (EV_P_ int flags)
}
void
-ev_break (EV_P_ int how) EV_THROW
+ev_break (EV_P_ int how) EV_NOEXCEPT
{
loop_done = how;
}
void
-ev_ref (EV_P) EV_THROW
+ev_ref (EV_P) EV_NOEXCEPT
{
++activecnt;
}
void
-ev_unref (EV_P) EV_THROW
+ev_unref (EV_P) EV_NOEXCEPT
{
--activecnt;
}
int
-ev_activecnt (EV_P) EV_THROW
+ev_activecnt (EV_P) EV_NOEXCEPT
{
return activecnt;
}
void
-ev_now_update (EV_P) EV_THROW
+ev_now_update (EV_P) EV_NOEXCEPT
{
- time_update (EV_A_ 1e100);
+ time_update (EV_A_ EV_TSTAMP_HUGE);
}
void
-ev_suspend (EV_P) EV_THROW
+ev_suspend (EV_P) EV_NOEXCEPT
{
ev_now_update (EV_A);
}
void
-ev_resume (EV_P) EV_THROW
+ev_resume (EV_P) EV_NOEXCEPT
{
ev_tstamp mn_prev = mn_now;
@@ -3829,7 +4301,7 @@ wlist_del (WL *head, WL elem)
{
while (*head)
{
- if (expect_true (*head == elem))
+ if (ecb_expect_true (*head == elem))
{
*head = elem->next;
break;
@@ -3851,12 +4323,12 @@ clear_pending (EV_P_ W w)
}
int
-ev_clear_pending (EV_P_ void *w) EV_THROW
+ev_clear_pending (EV_P_ void *w) EV_NOEXCEPT
{
W w_ = (W)w;
int pending = w_->pending;
- if (expect_true (pending))
+ if (ecb_expect_true (pending))
{
ANPENDING *p = pendings [ABSPRI (w_)] + pending - 1;
p->w = (W)&pending_w;
@@ -3893,22 +4365,25 @@ ev_stop (EV_P_ W w)
/*****************************************************************************/
-noinline
+ecb_noinline
void
-ev_io_start (EV_P_ ev_io *w) EV_THROW
+ev_io_start (EV_P_ ev_io *w) EV_NOEXCEPT
{
int fd = w->fd;
- if (expect_false (ev_is_active (w)))
+ if (ecb_expect_false (ev_is_active (w)))
return;
assert (("libev: ev_io_start called with negative fd", fd >= 0));
assert (("libev: ev_io_start called with illegal event mask", !(w->events & ~(EV__IOFDSET | EV_READ | EV_WRITE))));
+#if EV_VERIFY >= 2
+ assert (("libev: ev_io_start called on watcher with invalid fd", fd_valid (fd)));
+#endif
EV_FREQUENT_CHECK;
ev_start (EV_A_ (W)w, 1);
- array_needsize (ANFD, anfds, anfdmax, fd + 1, array_init_zero);
+ array_needsize (ANFD, anfds, anfdmax, fd + 1, array_needsize_zerofill);
wlist_add (&anfds[fd].head, (WL)w);
/* common bug, apparently */
@@ -3920,16 +4395,19 @@ ev_io_start (EV_P_ ev_io *w) EV_THROW
EV_FREQUENT_CHECK;
}
-noinline
+ecb_noinline
void
-ev_io_stop (EV_P_ ev_io *w) EV_THROW
+ev_io_stop (EV_P_ ev_io *w) EV_NOEXCEPT
{
clear_pending (EV_A_ (W)w);
- if (expect_false (!ev_is_active (w)))
+ if (ecb_expect_false (!ev_is_active (w)))
return;
assert (("libev: ev_io_stop called with illegal fd (must stay constant after start!)", w->fd >= 0 && w->fd < anfdmax));
+#if EV_VERIFY >= 2
+ assert (("libev: ev_io_stop called on watcher with invalid fd", fd_valid (w->fd)));
+#endif
EV_FREQUENT_CHECK;
wlist_del (&anfds[w->fd].head, (WL)w);
@@ -3947,7 +4425,7 @@ ev_io_stop (EV_P_ ev_io *w) EV_THROW
* backend is properly updated.
*/
void noinline
-ev_io_closing (EV_P_ int fd, int revents) EV_THROW
+ev_io_closing (EV_P_ int fd, int revents) EV_NOEXCEPT
{
ev_io *w;
if (fd < 0 || fd >= anfdmax)
@@ -3960,11 +4438,11 @@ ev_io_closing (EV_P_ int fd, int revents) EV_THROW
}
}
-noinline
+ecb_noinline
void
-ev_timer_start (EV_P_ ev_timer *w) EV_THROW
+ev_timer_start (EV_P_ ev_timer *w) EV_NOEXCEPT
{
- if (expect_false (ev_is_active (w)))
+ if (ecb_expect_false (ev_is_active (w)))
return;
ev_at (w) += mn_now;
@@ -3975,7 +4453,7 @@ ev_timer_start (EV_P_ ev_timer *w) EV_THROW
++timercnt;
ev_start (EV_A_ (W)w, timercnt + HEAP0 - 1);
- array_needsize (ANHE, timers, timermax, ev_active (w) + 1, EMPTY2);
+ array_needsize (ANHE, timers, timermax, ev_active (w) + 1, array_needsize_noinit);
ANHE_w (timers [ev_active (w)]) = (WT)w;
ANHE_at_cache (timers [ev_active (w)]);
upheap (timers, ev_active (w));
@@ -3985,12 +4463,12 @@ ev_timer_start (EV_P_ ev_timer *w) EV_THROW
/*assert (("libev: internal timer heap corruption", timers [ev_active (w)] == (WT)w));*/
}
-noinline
+ecb_noinline
void
-ev_timer_stop (EV_P_ ev_timer *w) EV_THROW
+ev_timer_stop (EV_P_ ev_timer *w) EV_NOEXCEPT
{
clear_pending (EV_A_ (W)w);
- if (expect_false (!ev_is_active (w)))
+ if (ecb_expect_false (!ev_is_active (w)))
return;
EV_FREQUENT_CHECK;
@@ -4002,7 +4480,7 @@ ev_timer_stop (EV_P_ ev_timer *w) EV_THROW
--timercnt;
- if (expect_true (active < timercnt + HEAP0))
+ if (ecb_expect_true (active < timercnt + HEAP0))
{
timers [active] = timers [timercnt + HEAP0];
adjustheap (timers, timercnt, active);
@@ -4016,9 +4494,9 @@ ev_timer_stop (EV_P_ ev_timer *w) EV_THROW
EV_FREQUENT_CHECK;
}
-noinline
+ecb_noinline
void
-ev_timer_again (EV_P_ ev_timer *w) EV_THROW
+ev_timer_again (EV_P_ ev_timer *w) EV_NOEXCEPT
{
EV_FREQUENT_CHECK;
@@ -4045,19 +4523,24 @@ ev_timer_again (EV_P_ ev_timer *w) EV_THROW
}
ev_tstamp
-ev_timer_remaining (EV_P_ ev_timer *w) EV_THROW
+ev_timer_remaining (EV_P_ ev_timer *w) EV_NOEXCEPT
{
- return ev_at (w) - (ev_is_active (w) ? mn_now : 0.);
+ return ev_at (w) - (ev_is_active (w) ? mn_now : EV_TS_CONST (0.));
}
#if EV_PERIODIC_ENABLE
-noinline
+ecb_noinline
void
-ev_periodic_start (EV_P_ ev_periodic *w) EV_THROW
+ev_periodic_start (EV_P_ ev_periodic *w) EV_NOEXCEPT
{
- if (expect_false (ev_is_active (w)))
+ if (ecb_expect_false (ev_is_active (w)))
return;
+#if EV_USE_TIMERFD
+ if (timerfd == -2)
+ evtimerfd_init (EV_A);
+#endif
+
if (w->reschedule_cb)
ev_at (w) = w->reschedule_cb (w, ev_rt_now);
else if (w->interval)
@@ -4072,7 +4555,7 @@ ev_periodic_start (EV_P_ ev_periodic *w) EV_THROW
++periodiccnt;
ev_start (EV_A_ (W)w, periodiccnt + HEAP0 - 1);
- array_needsize (ANHE, periodics, periodicmax, ev_active (w) + 1, EMPTY2);
+ array_needsize (ANHE, periodics, periodicmax, ev_active (w) + 1, array_needsize_noinit);
ANHE_w (periodics [ev_active (w)]) = (WT)w;
ANHE_at_cache (periodics [ev_active (w)]);
upheap (periodics, ev_active (w));
@@ -4082,12 +4565,12 @@ ev_periodic_start (EV_P_ ev_periodic *w) EV_THROW
/*assert (("libev: internal periodic heap corruption", ANHE_w (periodics [ev_active (w)]) == (WT)w));*/
}
-noinline
+ecb_noinline
void
-ev_periodic_stop (EV_P_ ev_periodic *w) EV_THROW
+ev_periodic_stop (EV_P_ ev_periodic *w) EV_NOEXCEPT
{
clear_pending (EV_A_ (W)w);
- if (expect_false (!ev_is_active (w)))
+ if (ecb_expect_false (!ev_is_active (w)))
return;
EV_FREQUENT_CHECK;
@@ -4099,7 +4582,7 @@ ev_periodic_stop (EV_P_ ev_periodic *w) EV_THROW
--periodiccnt;
- if (expect_true (active < periodiccnt + HEAP0))
+ if (ecb_expect_true (active < periodiccnt + HEAP0))
{
periodics [active] = periodics [periodiccnt + HEAP0];
adjustheap (periodics, periodiccnt, active);
@@ -4111,9 +4594,9 @@ ev_periodic_stop (EV_P_ ev_periodic *w) EV_THROW
EV_FREQUENT_CHECK;
}
-noinline
+ecb_noinline
void
-ev_periodic_again (EV_P_ ev_periodic *w) EV_THROW
+ev_periodic_again (EV_P_ ev_periodic *w) EV_NOEXCEPT
{
/* TODO: use adjustheap and recalculation */
ev_periodic_stop (EV_A_ w);
@@ -4127,11 +4610,11 @@ ev_periodic_again (EV_P_ ev_periodic *w) EV_THROW
#if EV_SIGNAL_ENABLE
-noinline
+ecb_noinline
void
-ev_signal_start (EV_P_ ev_signal *w) EV_THROW
+ev_signal_start (EV_P_ ev_signal *w) EV_NOEXCEPT
{
- if (expect_false (ev_is_active (w)))
+ if (ecb_expect_false (ev_is_active (w)))
return;
assert (("libev: ev_signal_start called with illegal signal number", w->signum > 0 && w->signum < EV_NSIG));
@@ -4210,12 +4693,12 @@ ev_signal_start (EV_P_ ev_signal *w) EV_THROW
EV_FREQUENT_CHECK;
}
-noinline
+ecb_noinline
void
-ev_signal_stop (EV_P_ ev_signal *w) EV_THROW
+ev_signal_stop (EV_P_ ev_signal *w) EV_NOEXCEPT
{
clear_pending (EV_A_ (W)w);
- if (expect_false (!ev_is_active (w)))
+ if (ecb_expect_false (!ev_is_active (w)))
return;
EV_FREQUENT_CHECK;
@@ -4253,12 +4736,12 @@ ev_signal_stop (EV_P_ ev_signal *w) EV_THROW
#if EV_CHILD_ENABLE
void
-ev_child_start (EV_P_ ev_child *w) EV_THROW
+ev_child_start (EV_P_ ev_child *w) EV_NOEXCEPT
{
#if EV_MULTIPLICITY
assert (("libev: child watchers are only supported in the default loop", loop == ev_default_loop_ptr));
#endif
- if (expect_false (ev_is_active (w)))
+ if (ecb_expect_false (ev_is_active (w)))
return;
EV_FREQUENT_CHECK;
@@ -4270,10 +4753,10 @@ ev_child_start (EV_P_ ev_child *w) EV_THROW
}
void
-ev_child_stop (EV_P_ ev_child *w) EV_THROW
+ev_child_stop (EV_P_ ev_child *w) EV_NOEXCEPT
{
clear_pending (EV_A_ (W)w);
- if (expect_false (!ev_is_active (w)))
+ if (ecb_expect_false (!ev_is_active (w)))
return;
EV_FREQUENT_CHECK;
@@ -4297,14 +4780,14 @@ ev_child_stop (EV_P_ ev_child *w) EV_THROW
#define NFS_STAT_INTERVAL 30.1074891 /* for filesystems potentially failing inotify */
#define MIN_STAT_INTERVAL 0.1074891
-noinline static void stat_timer_cb (EV_P_ ev_timer *w_, int revents);
+ecb_noinline static void stat_timer_cb (EV_P_ ev_timer *w_, int revents);
#if EV_USE_INOTIFY
/* the * 2 is to allow for alignment padding, which for some reason is >> 8 */
# define EV_INOTIFY_BUFSIZE (sizeof (struct inotify_event) * 2 + NAME_MAX)
-noinline
+ecb_noinline
static void
infy_add (EV_P_ ev_stat *w)
{
@@ -4379,7 +4862,7 @@ infy_add (EV_P_ ev_stat *w)
if (ev_is_active (&w->timer)) ev_unref (EV_A);
}
-noinline
+ecb_noinline
static void
infy_del (EV_P_ ev_stat *w)
{
@@ -4397,7 +4880,7 @@ infy_del (EV_P_ ev_stat *w)
inotify_rm_watch (fs_fd, wd);
}
-noinline
+ecb_noinline
static void
infy_wd (EV_P_ int slot, int wd, struct inotify_event *ev)
{
@@ -4545,7 +5028,7 @@ infy_fork (EV_P)
#endif
void
-ev_stat_stat (EV_P_ ev_stat *w) EV_THROW
+ev_stat_stat (EV_P_ ev_stat *w) EV_NOEXCEPT
{
if (lstat (w->path, &w->attr) < 0)
w->attr.st_nlink = 0;
@@ -4553,7 +5036,7 @@ ev_stat_stat (EV_P_ ev_stat *w) EV_THROW
w->attr.st_nlink = 1;
}
-noinline
+ecb_noinline
static void
stat_timer_cb (EV_P_ ev_timer *w_, int revents)
{
@@ -4604,9 +5087,9 @@ stat_timer_cb (EV_P_ ev_timer *w_, int revents)
}
void
-ev_stat_start (EV_P_ ev_stat *w) EV_THROW
+ev_stat_start (EV_P_ ev_stat *w) EV_NOEXCEPT
{
- if (expect_false (ev_is_active (w)))
+ if (ecb_expect_false (ev_is_active (w)))
return;
ev_stat_stat (EV_A_ w);
@@ -4635,10 +5118,10 @@ ev_stat_start (EV_P_ ev_stat *w) EV_THROW
}
void
-ev_stat_stop (EV_P_ ev_stat *w) EV_THROW
+ev_stat_stop (EV_P_ ev_stat *w) EV_NOEXCEPT
{
clear_pending (EV_A_ (W)w);
- if (expect_false (!ev_is_active (w)))
+ if (ecb_expect_false (!ev_is_active (w)))
return;
EV_FREQUENT_CHECK;
@@ -4661,9 +5144,9 @@ ev_stat_stop (EV_P_ ev_stat *w) EV_THROW
#if EV_IDLE_ENABLE
void
-ev_idle_start (EV_P_ ev_idle *w) EV_THROW
+ev_idle_start (EV_P_ ev_idle *w) EV_NOEXCEPT
{
- if (expect_false (ev_is_active (w)))
+ if (ecb_expect_false (ev_is_active (w)))
return;
pri_adjust (EV_A_ (W)w);
@@ -4676,7 +5159,7 @@ ev_idle_start (EV_P_ ev_idle *w) EV_THROW
++idleall;
ev_start (EV_A_ (W)w, active);
- array_needsize (ev_idle *, idles [ABSPRI (w)], idlemax [ABSPRI (w)], active, EMPTY2);
+ array_needsize (ev_idle *, idles [ABSPRI (w)], idlemax [ABSPRI (w)], active, array_needsize_noinit);
idles [ABSPRI (w)][active - 1] = w;
}
@@ -4684,10 +5167,10 @@ ev_idle_start (EV_P_ ev_idle *w) EV_THROW
}
void
-ev_idle_stop (EV_P_ ev_idle *w) EV_THROW
+ev_idle_stop (EV_P_ ev_idle *w) EV_NOEXCEPT
{
clear_pending (EV_A_ (W)w);
- if (expect_false (!ev_is_active (w)))
+ if (ecb_expect_false (!ev_is_active (w)))
return;
EV_FREQUENT_CHECK;
@@ -4708,25 +5191,25 @@ ev_idle_stop (EV_P_ ev_idle *w) EV_THROW
#if EV_PREPARE_ENABLE
void
-ev_prepare_start (EV_P_ ev_prepare *w) EV_THROW
+ev_prepare_start (EV_P_ ev_prepare *w) EV_NOEXCEPT
{
- if (expect_false (ev_is_active (w)))
+ if (ecb_expect_false (ev_is_active (w)))
return;
EV_FREQUENT_CHECK;
ev_start (EV_A_ (W)w, ++preparecnt);
- array_needsize (ev_prepare *, prepares, preparemax, preparecnt, EMPTY2);
+ array_needsize (ev_prepare *, prepares, preparemax, preparecnt, array_needsize_noinit);
prepares [preparecnt - 1] = w;
EV_FREQUENT_CHECK;
}
void
-ev_prepare_stop (EV_P_ ev_prepare *w) EV_THROW
+ev_prepare_stop (EV_P_ ev_prepare *w) EV_NOEXCEPT
{
clear_pending (EV_A_ (W)w);
- if (expect_false (!ev_is_active (w)))
+ if (ecb_expect_false (!ev_is_active (w)))
return;
EV_FREQUENT_CHECK;
@@ -4746,25 +5229,25 @@ ev_prepare_stop (EV_P_ ev_prepare *w) EV_THROW
#if EV_CHECK_ENABLE
void
-ev_check_start (EV_P_ ev_check *w) EV_THROW
+ev_check_start (EV_P_ ev_check *w) EV_NOEXCEPT
{
- if (expect_false (ev_is_active (w)))
+ if (ecb_expect_false (ev_is_active (w)))
return;
EV_FREQUENT_CHECK;
ev_start (EV_A_ (W)w, ++checkcnt);
- array_needsize (ev_check *, checks, checkmax, checkcnt, EMPTY2);
+ array_needsize (ev_check *, checks, checkmax, checkcnt, array_needsize_noinit);
checks [checkcnt - 1] = w;
EV_FREQUENT_CHECK;
}
void
-ev_check_stop (EV_P_ ev_check *w) EV_THROW
+ev_check_stop (EV_P_ ev_check *w) EV_NOEXCEPT
{
clear_pending (EV_A_ (W)w);
- if (expect_false (!ev_is_active (w)))
+ if (ecb_expect_false (!ev_is_active (w)))
return;
EV_FREQUENT_CHECK;
@@ -4783,9 +5266,9 @@ ev_check_stop (EV_P_ ev_check *w) EV_THROW
#endif
#if EV_EMBED_ENABLE
-noinline
+ecb_noinline
void
-ev_embed_sweep (EV_P_ ev_embed *w) EV_THROW
+ev_embed_sweep (EV_P_ ev_embed *w) EV_NOEXCEPT
{
ev_run (w->other, EVRUN_NOWAIT);
}
@@ -4817,6 +5300,7 @@ embed_prepare_cb (EV_P_ ev_prepare *prepare, int revents)
}
}
+#if EV_FORK_ENABLE
static void
embed_fork_cb (EV_P_ ev_fork *fork_w, int revents)
{
@@ -4833,6 +5317,7 @@ embed_fork_cb (EV_P_ ev_fork *fork_w, int revents)
ev_embed_start (EV_A_ w);
}
+#endif
#if 0
static void
@@ -4843,9 +5328,9 @@ embed_idle_cb (EV_P_ ev_idle *idle, int revents)
#endif
void
-ev_embed_start (EV_P_ ev_embed *w) EV_THROW
+ev_embed_start (EV_P_ ev_embed *w) EV_NOEXCEPT
{
- if (expect_false (ev_is_active (w)))
+ if (ecb_expect_false (ev_is_active (w)))
return;
{
@@ -4863,8 +5348,10 @@ ev_embed_start (EV_P_ ev_embed *w) EV_THROW
ev_set_priority (&w->prepare, EV_MINPRI);
ev_prepare_start (EV_A_ &w->prepare);
+#if EV_FORK_ENABLE
ev_fork_init (&w->fork, embed_fork_cb);
ev_fork_start (EV_A_ &w->fork);
+#endif
/*ev_idle_init (&w->idle, e,bed_idle_cb);*/
@@ -4874,17 +5361,19 @@ ev_embed_start (EV_P_ ev_embed *w) EV_THROW
}
void
-ev_embed_stop (EV_P_ ev_embed *w) EV_THROW
+ev_embed_stop (EV_P_ ev_embed *w) EV_NOEXCEPT
{
clear_pending (EV_A_ (W)w);
- if (expect_false (!ev_is_active (w)))
+ if (ecb_expect_false (!ev_is_active (w)))
return;
EV_FREQUENT_CHECK;
ev_io_stop (EV_A_ &w->io);
ev_prepare_stop (EV_A_ &w->prepare);
+#if EV_FORK_ENABLE
ev_fork_stop (EV_A_ &w->fork);
+#endif
ev_stop (EV_A_ (W)w);
@@ -4894,25 +5383,25 @@ ev_embed_stop (EV_P_ ev_embed *w) EV_THROW
#if EV_FORK_ENABLE
void
-ev_fork_start (EV_P_ ev_fork *w) EV_THROW
+ev_fork_start (EV_P_ ev_fork *w) EV_NOEXCEPT
{
- if (expect_false (ev_is_active (w)))
+ if (ecb_expect_false (ev_is_active (w)))
return;
EV_FREQUENT_CHECK;
ev_start (EV_A_ (W)w, ++forkcnt);
- array_needsize (ev_fork *, forks, forkmax, forkcnt, EMPTY2);
+ array_needsize (ev_fork *, forks, forkmax, forkcnt, array_needsize_noinit);
forks [forkcnt - 1] = w;
EV_FREQUENT_CHECK;
}
void
-ev_fork_stop (EV_P_ ev_fork *w) EV_THROW
+ev_fork_stop (EV_P_ ev_fork *w) EV_NOEXCEPT
{
clear_pending (EV_A_ (W)w);
- if (expect_false (!ev_is_active (w)))
+ if (ecb_expect_false (!ev_is_active (w)))
return;
EV_FREQUENT_CHECK;
@@ -4932,15 +5421,15 @@ ev_fork_stop (EV_P_ ev_fork *w) EV_THROW
#if EV_CLEANUP_ENABLE
void
-ev_cleanup_start (EV_P_ ev_cleanup *w) EV_THROW
+ev_cleanup_start (EV_P_ ev_cleanup *w) EV_NOEXCEPT
{
- if (expect_false (ev_is_active (w)))
+ if (ecb_expect_false (ev_is_active (w)))
return;
EV_FREQUENT_CHECK;
ev_start (EV_A_ (W)w, ++cleanupcnt);
- array_needsize (ev_cleanup *, cleanups, cleanupmax, cleanupcnt, EMPTY2);
+ array_needsize (ev_cleanup *, cleanups, cleanupmax, cleanupcnt, array_needsize_noinit);
cleanups [cleanupcnt - 1] = w;
/* cleanup watchers should never keep a refcount on the loop */
@@ -4949,10 +5438,10 @@ ev_cleanup_start (EV_P_ ev_cleanup *w) EV_THROW
}
void
-ev_cleanup_stop (EV_P_ ev_cleanup *w) EV_THROW
+ev_cleanup_stop (EV_P_ ev_cleanup *w) EV_NOEXCEPT
{
clear_pending (EV_A_ (W)w);
- if (expect_false (!ev_is_active (w)))
+ if (ecb_expect_false (!ev_is_active (w)))
return;
EV_FREQUENT_CHECK;
@@ -4973,9 +5462,9 @@ ev_cleanup_stop (EV_P_ ev_cleanup *w) EV_THROW
#if EV_ASYNC_ENABLE
void
-ev_async_start (EV_P_ ev_async *w) EV_THROW
+ev_async_start (EV_P_ ev_async *w) EV_NOEXCEPT
{
- if (expect_false (ev_is_active (w)))
+ if (ecb_expect_false (ev_is_active (w)))
return;
w->sent = 0;
@@ -4985,17 +5474,17 @@ ev_async_start (EV_P_ ev_async *w) EV_THROW
EV_FREQUENT_CHECK;
ev_start (EV_A_ (W)w, ++asynccnt);
- array_needsize (ev_async *, asyncs, asyncmax, asynccnt, EMPTY2);
+ array_needsize (ev_async *, asyncs, asyncmax, asynccnt, array_needsize_noinit);
asyncs [asynccnt - 1] = w;
EV_FREQUENT_CHECK;
}
void
-ev_async_stop (EV_P_ ev_async *w) EV_THROW
+ev_async_stop (EV_P_ ev_async *w) EV_NOEXCEPT
{
clear_pending (EV_A_ (W)w);
- if (expect_false (!ev_is_active (w)))
+ if (ecb_expect_false (!ev_is_active (w)))
return;
EV_FREQUENT_CHECK;
@@ -5013,7 +5502,7 @@ ev_async_stop (EV_P_ ev_async *w) EV_THROW
}
void
-ev_async_send (EV_P_ ev_async *w) EV_THROW
+ev_async_send (EV_P_ ev_async *w) EV_NOEXCEPT
{
w->sent = 1;
evpipe_write (EV_A_ &async_pending);
@@ -5060,16 +5549,10 @@ once_cb_to (EV_P_ ev_timer *w, int revents)
}
void
-ev_once (EV_P_ int fd, int events, ev_tstamp timeout, void (*cb)(int revents, void *arg), void *arg) EV_THROW
+ev_once (EV_P_ int fd, int events, ev_tstamp timeout, void (*cb)(int revents, void *arg), void *arg) EV_NOEXCEPT
{
struct ev_once *once = (struct ev_once *)ev_malloc (sizeof (struct ev_once));
- if (expect_false (!once))
- {
- cb (EV_ERROR | EV_READ | EV_WRITE | EV_TIMER, arg);
- return;
- }
-
once->cb = cb;
once->arg = arg;
@@ -5093,7 +5576,7 @@ ev_once (EV_P_ int fd, int events, ev_tstamp timeout, void (*cb)(int revents, vo
#if EV_WALK_ENABLE
ecb_cold
void
-ev_walk (EV_P_ int types, void (*cb)(EV_P_ int type, void *w)) EV_THROW
+ev_walk (EV_P_ int types, void (*cb)(EV_P_ int type, void *w)) EV_NOEXCEPT
{
int i, j;
ev_watcher_list *wl, *wn;
diff --git a/third_party/libev/ev.h b/third_party/libev/ev.h
index d42e2df47..c0e17143b 100644
--- a/third_party/libev/ev.h
+++ b/third_party/libev/ev.h
@@ -1,7 +1,7 @@
/*
* libev native API header
*
- * Copyright (c) 2007,2008,2009,2010,2011,2012,2015 Marc Alexander Lehmann <libev at schmorp.de>
+ * Copyright (c) 2007-2020 Marc Alexander Lehmann <libev at schmorp.de>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modifica-
@@ -48,14 +48,13 @@
* due to non-throwing" warnings.
* # define EV_THROW noexcept
*/
-# define EV_THROW
-# else
-# define EV_THROW throw ()
+# define EV_NOEXCEPT
# endif
#else
# define EV_CPP(x)
-# define EV_THROW
+# define EV_NOEXCEPT
#endif
+#define EV_THROW EV_NOEXCEPT /* pre-4.25, do not use in new code */
EV_CPP(extern "C" {)
@@ -155,7 +154,10 @@ EV_CPP(extern "C" {)
/*****************************************************************************/
-typedef double ev_tstamp;
+#ifndef EV_TSTAMP_T
+# define EV_TSTAMP_T double
+#endif
+typedef EV_TSTAMP_T ev_tstamp;
#include <string.h> /* for memmove */
@@ -216,7 +218,7 @@ struct ev_loop;
/*****************************************************************************/
#define EV_VERSION_MAJOR 4
-#define EV_VERSION_MINOR 24
+#define EV_VERSION_MINOR 32
/* eventmask, revents, events... */
enum {
@@ -344,7 +346,7 @@ typedef struct ev_periodic
ev_tstamp offset; /* rw */
ev_tstamp interval; /* rw */
- ev_tstamp (*reschedule_cb)(struct ev_periodic *w, ev_tstamp now) EV_THROW; /* rw */
+ ev_tstamp (*reschedule_cb)(struct ev_periodic *w, ev_tstamp now) EV_NOEXCEPT; /* rw */
} ev_periodic;
/* invoked when the given signal has been received */
@@ -393,14 +395,12 @@ typedef struct ev_stat
} ev_stat;
#endif
-#if EV_IDLE_ENABLE
/* invoked when the nothing else needs to be done, keeps the process from blocking */
/* revent EV_IDLE */
typedef struct ev_idle
{
EV_WATCHER (ev_idle)
} ev_idle;
-#endif
/* invoked for each run of the mainloop, just before the blocking call */
/* you can still change events in any way you like */
@@ -417,23 +417,19 @@ typedef struct ev_check
EV_WATCHER (ev_check)
} ev_check;
-#if EV_FORK_ENABLE
/* the callback gets invoked before check in the child process when a fork was detected */
/* revent EV_FORK */
typedef struct ev_fork
{
EV_WATCHER (ev_fork)
} ev_fork;
-#endif
-#if EV_CLEANUP_ENABLE
/* is invoked just before the loop gets destroyed */
/* revent EV_CLEANUP */
typedef struct ev_cleanup
{
EV_WATCHER (ev_cleanup)
} ev_cleanup;
-#endif
#if EV_EMBED_ENABLE
/* used to embed an event loop inside another */
@@ -443,16 +439,18 @@ typedef struct ev_embed
EV_WATCHER (ev_embed)
struct ev_loop *other; /* ro */
+#undef EV_IO_ENABLE
+#define EV_IO_ENABLE 1
ev_io io; /* private */
+#undef EV_PREPARE_ENABLE
+#define EV_PREPARE_ENABLE 1
ev_prepare prepare; /* private */
ev_check check; /* unused */
ev_timer timer; /* unused */
ev_periodic periodic; /* unused */
ev_idle idle; /* unused */
ev_fork fork; /* private */
-#if EV_CLEANUP_ENABLE
ev_cleanup cleanup; /* unused */
-#endif
} ev_embed;
#endif
@@ -505,42 +503,44 @@ union ev_any_watcher
/* flag bits for ev_default_loop and ev_loop_new */
enum {
/* the default */
- EVFLAG_AUTO = 0x00000000U, /* not quite a mask */
+ EVFLAG_AUTO = 0x00000000U, /* not quite a mask */
/* flag bits */
- EVFLAG_NOENV = 0x01000000U, /* do NOT consult environment */
- EVFLAG_FORKCHECK = 0x02000000U, /* check for a fork in each iteration */
+ EVFLAG_NOENV = 0x01000000U, /* do NOT consult environment */
+ EVFLAG_FORKCHECK = 0x02000000U, /* check for a fork in each iteration */
/* debugging/feature disable */
- EVFLAG_NOINOTIFY = 0x00100000U, /* do not attempt to use inotify */
+ EVFLAG_NOINOTIFY = 0x00100000U, /* do not attempt to use inotify */
#if EV_COMPAT3
- EVFLAG_NOSIGFD = 0, /* compatibility to pre-3.9 */
+ EVFLAG_NOSIGFD = 0, /* compatibility to pre-3.9 */
#endif
- EVFLAG_SIGNALFD = 0x00200000U, /* attempt to use signalfd */
- EVFLAG_NOSIGMASK = 0x00400000U, /* avoid modifying the signal mask */
- EVFLAG_ALLOCFD = 0x00800000U /* preallocate event pipe descriptors */
+ EVFLAG_SIGNALFD = 0x00200000U, /* attempt to use signalfd */
+ EVFLAG_NOSIGMASK = 0x00400000U, /* avoid modifying the signal mask */
+ EVFLAG_NOTIMERFD = 0x00800000U /* avoid creating a timerfd */
};
/* method bits to be ored together */
enum {
- EVBACKEND_SELECT = 0x00000001U, /* available just about anywhere */
- EVBACKEND_POLL = 0x00000002U, /* !win, !aix, broken on osx */
- EVBACKEND_EPOLL = 0x00000004U, /* linux */
- EVBACKEND_KQUEUE = 0x00000008U, /* bsd, broken on osx */
- EVBACKEND_DEVPOLL = 0x00000010U, /* solaris 8 */ /* NYI */
- EVBACKEND_PORT = 0x00000020U, /* solaris 10 */
- EVBACKEND_ALL = 0x0000003FU, /* all known backends */
- EVBACKEND_MASK = 0x0000FFFFU /* all future backends */
+ EVBACKEND_SELECT = 0x00000001U, /* available just about anywhere */
+ EVBACKEND_POLL = 0x00000002U, /* !win, !aix, broken on osx */
+ EVBACKEND_EPOLL = 0x00000004U, /* linux */
+ EVBACKEND_KQUEUE = 0x00000008U, /* bsd, broken on osx */
+ EVBACKEND_DEVPOLL = 0x00000010U, /* solaris 8 */ /* NYI */
+ EVBACKEND_PORT = 0x00000020U, /* solaris 10 */
+ EVBACKEND_LINUXAIO = 0x00000040U, /* linux AIO, 4.19+ */
+ EVBACKEND_IOURING = 0x00000080U, /* linux io_uring, 5.1+ */
+ EVBACKEND_ALL = 0x000000FFU, /* all known backends */
+ EVBACKEND_MASK = 0x0000FFFFU /* all future backends */
};
#if EV_PROTOTYPES
-EV_API_DECL int ev_version_major (void) EV_THROW;
-EV_API_DECL int ev_version_minor (void) EV_THROW;
+EV_API_DECL int ev_version_major (void) EV_NOEXCEPT;
+EV_API_DECL int ev_version_minor (void) EV_NOEXCEPT;
-EV_API_DECL unsigned int ev_supported_backends (void) EV_THROW;
-EV_API_DECL unsigned int ev_recommended_backends (void) EV_THROW;
-EV_API_DECL unsigned int ev_embeddable_backends (void) EV_THROW;
+EV_API_DECL unsigned int ev_supported_backends (void) EV_NOEXCEPT;
+EV_API_DECL unsigned int ev_recommended_backends (void) EV_NOEXCEPT;
+EV_API_DECL unsigned int ev_embeddable_backends (void) EV_NOEXCEPT;
-EV_API_DECL ev_tstamp ev_time (void) EV_THROW;
-EV_API_DECL void ev_sleep (ev_tstamp delay) EV_THROW; /* sleep for a while */
+EV_API_DECL ev_tstamp ev_time (void) EV_NOEXCEPT;
+EV_API_DECL void ev_sleep (ev_tstamp delay) EV_NOEXCEPT; /* sleep for a while */
/* Sets the allocation function to use, works like realloc.
* It is used to allocate and free memory.
@@ -548,26 +548,26 @@ EV_API_DECL void ev_sleep (ev_tstamp delay) EV_THROW; /* sleep for a while */
* or take some potentially destructive action.
* The default is your system realloc function.
*/
-EV_API_DECL void ev_set_allocator (void *(*cb)(void *ptr, long size) EV_THROW) EV_THROW;
+EV_API_DECL void ev_set_allocator (void *(*cb)(void *ptr, long size) EV_NOEXCEPT) EV_NOEXCEPT;
/* set the callback function to call on a
* retryable syscall error
* (such as failed select, poll, epoll_wait)
*/
-EV_API_DECL void ev_set_syserr_cb (void (*cb)(const char *msg) EV_THROW) EV_THROW;
+EV_API_DECL void ev_set_syserr_cb (void (*cb)(const char *msg) EV_NOEXCEPT) EV_NOEXCEPT;
#if EV_MULTIPLICITY
/* the default loop is the only one that handles signals and child watchers */
/* you can call this as often as you like */
-EV_API_DECL struct ev_loop *ev_default_loop (unsigned int flags EV_CPP (= 0)) EV_THROW;
+EV_API_DECL struct ev_loop *ev_default_loop (unsigned int flags EV_CPP (= 0)) EV_NOEXCEPT;
#ifdef EV_API_STATIC
EV_API_DECL struct ev_loop *ev_default_loop_ptr;
#endif
EV_INLINE struct ev_loop *
-ev_default_loop_uc_ (void) EV_THROW
+ev_default_loop_uc_ (void) EV_NOEXCEPT
{
extern struct ev_loop *ev_default_loop_ptr;
@@ -575,39 +575,39 @@ ev_default_loop_uc_ (void) EV_THROW
}
EV_INLINE int
-ev_is_default_loop (EV_P) EV_THROW
+ev_is_default_loop (EV_P) EV_NOEXCEPT
{
return EV_A == EV_DEFAULT_UC;
}
/* create and destroy alternative loops that don't handle signals */
-EV_API_DECL struct ev_loop *ev_loop_new (unsigned int flags EV_CPP (= 0)) EV_THROW;
+EV_API_DECL struct ev_loop *ev_loop_new (unsigned int flags EV_CPP (= 0)) EV_NOEXCEPT;
-EV_API_DECL ev_tstamp ev_now (EV_P) EV_THROW; /* time w.r.t. timers and the eventloop, updated after each poll */
+EV_API_DECL ev_tstamp ev_now (EV_P) EV_NOEXCEPT; /* time w.r.t. timers and the eventloop, updated after each poll */
#else
-EV_API_DECL int ev_default_loop (unsigned int flags EV_CPP (= 0)) EV_THROW; /* returns true when successful */
+EV_API_DECL int ev_default_loop (unsigned int flags EV_CPP (= 0)) EV_NOEXCEPT; /* returns true when successful */
EV_API_DECL ev_tstamp ev_rt_now;
EV_INLINE ev_tstamp
-ev_now (void) EV_THROW
+ev_now (void) EV_NOEXCEPT
{
return ev_rt_now;
}
/* looks weird, but ev_is_default_loop (EV_A) still works if this exists */
EV_INLINE int
-ev_is_default_loop (void) EV_THROW
+ev_is_default_loop (void) EV_NOEXCEPT
{
return 1;
}
#endif /* multiplicity */
-EV_API_DECL ev_tstamp ev_monotonic_time (void) EV_THROW;
-EV_API_DECL ev_tstamp ev_monotonic_now (EV_P) EV_THROW;
+EV_API_DECL ev_tstamp ev_monotonic_time (void) EV_NOEXCEPT;
+EV_API_DECL ev_tstamp ev_monotonic_now (EV_P) EV_NOEXCEPT;
/* destroy event loops, also works for the default loop */
EV_API_DECL void ev_loop_destroy (EV_P);
@@ -616,17 +616,17 @@ EV_API_DECL void ev_loop_destroy (EV_P);
/* when you want to re-use it in the child */
/* you can call it in either the parent or the child */
/* you can actually call it at any time, anywhere :) */
-EV_API_DECL void ev_loop_fork (EV_P) EV_THROW;
+EV_API_DECL void ev_loop_fork (EV_P) EV_NOEXCEPT;
-EV_API_DECL unsigned int ev_backend (EV_P) EV_THROW; /* backend in use by loop */
+EV_API_DECL unsigned int ev_backend (EV_P) EV_NOEXCEPT; /* backend in use by loop */
-EV_API_DECL void ev_now_update (EV_P) EV_THROW; /* update event loop time */
+EV_API_DECL void ev_now_update (EV_P) EV_NOEXCEPT; /* update event loop time */
#if EV_WALK_ENABLE
/* walk (almost) all watchers in the loop of a given type, invoking the */
/* callback on every such watcher. The callback might stop the watcher, */
/* but do nothing else with the loop */
-EV_API_DECL void ev_walk (EV_P_ int types, void (*cb)(EV_P_ int type, void *w)) EV_THROW;
+EV_API_DECL void ev_walk (EV_P_ int types, void (*cb)(EV_P_ int type, void *w)) EV_NOEXCEPT;
#endif
#endif /* prototypes */
@@ -646,46 +646,47 @@ enum {
#if EV_PROTOTYPES
EV_API_DECL int ev_run (EV_P_ int flags EV_CPP (= 0));
-EV_API_DECL void ev_break (EV_P_ int how EV_CPP (= EVBREAK_ONE)) EV_THROW; /* break out of the loop */
+EV_API_DECL void ev_break (EV_P_ int how EV_CPP (= EVBREAK_ONE)) EV_NOEXCEPT; /* break out of the loop */
/*
* ref/unref can be used to add or remove a refcount on the mainloop. every watcher
* keeps one reference. if you have a long-running watcher you never unregister that
* should not keep ev_loop from running, unref() after starting, and ref() before stopping.
*/
-EV_API_DECL void ev_ref (EV_P) EV_THROW;
-EV_API_DECL void ev_unref (EV_P) EV_THROW;
+EV_API_DECL void ev_ref (EV_P) EV_NOEXCEPT;
+EV_API_DECL void ev_unref (EV_P) EV_NOEXCEPT;
/*
* convenience function, wait for a single event, without registering an event watcher
* if timeout is < 0, do wait indefinitely
*/
-EV_API_DECL void ev_once (EV_P_ int fd, int events, ev_tstamp timeout, void (*cb)(int revents, void *arg), void *arg) EV_THROW;
+EV_API_DECL void ev_once (EV_P_ int fd, int events, ev_tstamp timeout, void (*cb)(int revents, void *arg), void *arg) EV_NOEXCEPT;
+
+EV_API_DECL void ev_invoke_pending (EV_P); /* invoke all pending watchers */
# if EV_FEATURE_API
-EV_API_DECL unsigned int ev_iteration (EV_P) EV_THROW; /* number of loop iterations */
-EV_API_DECL unsigned int ev_depth (EV_P) EV_THROW; /* #ev_loop enters - #ev_loop leaves */
-EV_API_DECL void ev_verify (EV_P) EV_THROW; /* abort if loop data corrupted */
+EV_API_DECL unsigned int ev_iteration (EV_P) EV_NOEXCEPT; /* number of loop iterations */
+EV_API_DECL unsigned int ev_depth (EV_P) EV_NOEXCEPT; /* #ev_loop enters - #ev_loop leaves */
+EV_API_DECL void ev_verify (EV_P) EV_NOEXCEPT; /* abort if loop data corrupted */
-EV_API_DECL void ev_set_io_collect_interval (EV_P_ ev_tstamp interval) EV_THROW; /* sleep at least this time, default 0 */
-EV_API_DECL void ev_set_timeout_collect_interval (EV_P_ ev_tstamp interval) EV_THROW; /* sleep at least this time, default 0 */
+EV_API_DECL void ev_set_io_collect_interval (EV_P_ ev_tstamp interval) EV_NOEXCEPT; /* sleep at least this time, default 0 */
+EV_API_DECL void ev_set_timeout_collect_interval (EV_P_ ev_tstamp interval) EV_NOEXCEPT; /* sleep at least this time, default 0 */
/* advanced stuff for threading etc. support, see docs */
-EV_API_DECL void ev_set_userdata (EV_P_ void *data) EV_THROW;
-EV_API_DECL void *ev_userdata (EV_P) EV_THROW;
+EV_API_DECL void ev_set_userdata (EV_P_ void *data) EV_NOEXCEPT;
+EV_API_DECL void *ev_userdata (EV_P) EV_NOEXCEPT;
typedef void (*ev_loop_callback)(EV_P);
-EV_API_DECL void ev_set_invoke_pending_cb (EV_P_ ev_loop_callback invoke_pending_cb) EV_THROW;
+EV_API_DECL void ev_set_invoke_pending_cb (EV_P_ ev_loop_callback invoke_pending_cb) EV_NOEXCEPT;
/* C++ doesn't allow the use of the ev_loop_callback typedef here, so we need to spell it out */
-EV_API_DECL void ev_set_loop_release_cb (EV_P_ void (*release)(EV_P) EV_THROW, void (*acquire)(EV_P) EV_THROW) EV_THROW;
+EV_API_DECL void ev_set_loop_release_cb (EV_P_ void (*release)(EV_P) EV_NOEXCEPT, void (*acquire)(EV_P) EV_NOEXCEPT) EV_NOEXCEPT;
-EV_API_DECL unsigned int ev_pending_count (EV_P) EV_THROW; /* number of pending events, if any */
-EV_API_DECL void ev_invoke_pending (EV_P); /* invoke all pending watchers */
+EV_API_DECL unsigned int ev_pending_count (EV_P) EV_NOEXCEPT; /* number of pending events, if any */
/*
* stop/start the timer handling.
*/
-EV_API_DECL void ev_suspend (EV_P) EV_THROW;
-EV_API_DECL void ev_resume (EV_P) EV_THROW;
+EV_API_DECL void ev_suspend (EV_P) EV_NOEXCEPT;
+EV_API_DECL void ev_resume (EV_P) EV_NOEXCEPT;
#endif
#endif
@@ -699,6 +700,7 @@ EV_API_DECL void ev_resume (EV_P) EV_THROW;
ev_set_cb ((ev), cb_); \
} while (0)
+#define ev_io_modify(ev,events_) do { (ev)->events = (ev)->events & EV__IOFDSET | (events_); } while (0)
#define ev_io_set(ev,fd_,events_) do { (ev)->fd = (fd_); (ev)->events = (events_) | EV__IOFDSET; } while (0)
#define ev_timer_set(ev,after_,repeat_) do { ((ev_watcher_time *)(ev))->at = (after_); (ev)->repeat = (repeat_); } while (0)
#define ev_periodic_set(ev,ofs_,ival_,rcb_) do { (ev)->offset = (ofs_); (ev)->interval = (ival_); (ev)->reschedule_cb = (rcb_); } while (0)
@@ -744,6 +746,7 @@ EV_API_DECL void ev_resume (EV_P) EV_THROW;
#define ev_periodic_at(ev) (+((ev_watcher_time *)(ev))->at)
#ifndef ev_set_cb
+/* memmove is used here to avoid strict aliasing violations, and hopefully is optimized out by any reasonable compiler */
# define ev_set_cb(ev,cb_) (ev_cb_ (ev) = (cb_), memmove (&((ev_watcher *)(ev))->cb, &ev_cb_ (ev), sizeof (ev_cb_ (ev))))
#endif
@@ -753,18 +756,18 @@ EV_API_DECL void ev_resume (EV_P) EV_THROW;
/* feeds an event into a watcher as if the event actually occurred */
/* accepts any ev_watcher type */
-EV_API_DECL int ev_activecnt (EV_P) EV_THROW;
-EV_API_DECL void ev_feed_event (EV_P_ void *w, int revents) EV_THROW;
-EV_API_DECL void ev_feed_fd_event (EV_P_ int fd, int revents) EV_THROW;
+EV_API_DECL int ev_activecnt (EV_P) EV_NOEXCEPT;
+EV_API_DECL void ev_feed_event (EV_P_ void *w, int revents) EV_NOEXCEPT;
+EV_API_DECL void ev_feed_fd_event (EV_P_ int fd, int revents) EV_NOEXCEPT;
#if EV_SIGNAL_ENABLE
-EV_API_DECL void ev_feed_signal (int signum) EV_THROW;
-EV_API_DECL void ev_feed_signal_event (EV_P_ int signum) EV_THROW;
+EV_API_DECL void ev_feed_signal (int signum) EV_NOEXCEPT;
+EV_API_DECL void ev_feed_signal_event (EV_P_ int signum) EV_NOEXCEPT;
#endif
EV_API_DECL void ev_invoke (EV_P_ void *w, int revents);
-EV_API_DECL int ev_clear_pending (EV_P_ void *w) EV_THROW;
+EV_API_DECL int ev_clear_pending (EV_P_ void *w) EV_NOEXCEPT;
-EV_API_DECL void ev_io_start (EV_P_ ev_io *w) EV_THROW;
-EV_API_DECL void ev_io_stop (EV_P_ ev_io *w) EV_THROW;
+EV_API_DECL void ev_io_start (EV_P_ ev_io *w) EV_NOEXCEPT;
+EV_API_DECL void ev_io_stop (EV_P_ ev_io *w) EV_NOEXCEPT;
/*
* Fd is about to close. Make sure that libev won't do anything funny
@@ -772,75 +775,75 @@ EV_API_DECL void ev_io_stop (EV_P_ ev_io *w) EV_THROW;
* prior to close().
* Note: if fd was reused and added again it just works.
*/
-EV_API_DECL void ev_io_closing (EV_P_ int fd, int revents) EV_THROW;
+EV_API_DECL void ev_io_closing (EV_P_ int fd, int revents) EV_NOEXCEPT;
-EV_API_DECL void ev_timer_start (EV_P_ ev_timer *w) EV_THROW;
-EV_API_DECL void ev_timer_stop (EV_P_ ev_timer *w) EV_THROW;
+EV_API_DECL void ev_timer_start (EV_P_ ev_timer *w) EV_NOEXCEPT;
+EV_API_DECL void ev_timer_stop (EV_P_ ev_timer *w) EV_NOEXCEPT;
/* stops if active and no repeat, restarts if active and repeating, starts if inactive and repeating */
-EV_API_DECL void ev_timer_again (EV_P_ ev_timer *w) EV_THROW;
+EV_API_DECL void ev_timer_again (EV_P_ ev_timer *w) EV_NOEXCEPT;
/* return remaining time */
-EV_API_DECL ev_tstamp ev_timer_remaining (EV_P_ ev_timer *w) EV_THROW;
+EV_API_DECL ev_tstamp ev_timer_remaining (EV_P_ ev_timer *w) EV_NOEXCEPT;
#if EV_PERIODIC_ENABLE
-EV_API_DECL void ev_periodic_start (EV_P_ ev_periodic *w) EV_THROW;
-EV_API_DECL void ev_periodic_stop (EV_P_ ev_periodic *w) EV_THROW;
-EV_API_DECL void ev_periodic_again (EV_P_ ev_periodic *w) EV_THROW;
+EV_API_DECL void ev_periodic_start (EV_P_ ev_periodic *w) EV_NOEXCEPT;
+EV_API_DECL void ev_periodic_stop (EV_P_ ev_periodic *w) EV_NOEXCEPT;
+EV_API_DECL void ev_periodic_again (EV_P_ ev_periodic *w) EV_NOEXCEPT;
#endif
/* only supported in the default loop */
#if EV_SIGNAL_ENABLE
-EV_API_DECL void ev_signal_start (EV_P_ ev_signal *w) EV_THROW;
-EV_API_DECL void ev_signal_stop (EV_P_ ev_signal *w) EV_THROW;
+EV_API_DECL void ev_signal_start (EV_P_ ev_signal *w) EV_NOEXCEPT;
+EV_API_DECL void ev_signal_stop (EV_P_ ev_signal *w) EV_NOEXCEPT;
#endif
/* only supported in the default loop */
# if EV_CHILD_ENABLE
-EV_API_DECL void ev_child_start (EV_P_ ev_child *w) EV_THROW;
-EV_API_DECL void ev_child_stop (EV_P_ ev_child *w) EV_THROW;
+EV_API_DECL void ev_child_start (EV_P_ ev_child *w) EV_NOEXCEPT;
+EV_API_DECL void ev_child_stop (EV_P_ ev_child *w) EV_NOEXCEPT;
# endif
# if EV_STAT_ENABLE
-EV_API_DECL void ev_stat_start (EV_P_ ev_stat *w) EV_THROW;
-EV_API_DECL void ev_stat_stop (EV_P_ ev_stat *w) EV_THROW;
-EV_API_DECL void ev_stat_stat (EV_P_ ev_stat *w) EV_THROW;
+EV_API_DECL void ev_stat_start (EV_P_ ev_stat *w) EV_NOEXCEPT;
+EV_API_DECL void ev_stat_stop (EV_P_ ev_stat *w) EV_NOEXCEPT;
+EV_API_DECL void ev_stat_stat (EV_P_ ev_stat *w) EV_NOEXCEPT;
# endif
# if EV_IDLE_ENABLE
-EV_API_DECL void ev_idle_start (EV_P_ ev_idle *w) EV_THROW;
-EV_API_DECL void ev_idle_stop (EV_P_ ev_idle *w) EV_THROW;
+EV_API_DECL void ev_idle_start (EV_P_ ev_idle *w) EV_NOEXCEPT;
+EV_API_DECL void ev_idle_stop (EV_P_ ev_idle *w) EV_NOEXCEPT;
# endif
#if EV_PREPARE_ENABLE
-EV_API_DECL void ev_prepare_start (EV_P_ ev_prepare *w) EV_THROW;
-EV_API_DECL void ev_prepare_stop (EV_P_ ev_prepare *w) EV_THROW;
+EV_API_DECL void ev_prepare_start (EV_P_ ev_prepare *w) EV_NOEXCEPT;
+EV_API_DECL void ev_prepare_stop (EV_P_ ev_prepare *w) EV_NOEXCEPT;
#endif
#if EV_CHECK_ENABLE
-EV_API_DECL void ev_check_start (EV_P_ ev_check *w) EV_THROW;
-EV_API_DECL void ev_check_stop (EV_P_ ev_check *w) EV_THROW;
+EV_API_DECL void ev_check_start (EV_P_ ev_check *w) EV_NOEXCEPT;
+EV_API_DECL void ev_check_stop (EV_P_ ev_check *w) EV_NOEXCEPT;
#endif
# if EV_FORK_ENABLE
-EV_API_DECL void ev_fork_start (EV_P_ ev_fork *w) EV_THROW;
-EV_API_DECL void ev_fork_stop (EV_P_ ev_fork *w) EV_THROW;
+EV_API_DECL void ev_fork_start (EV_P_ ev_fork *w) EV_NOEXCEPT;
+EV_API_DECL void ev_fork_stop (EV_P_ ev_fork *w) EV_NOEXCEPT;
# endif
# if EV_CLEANUP_ENABLE
-EV_API_DECL void ev_cleanup_start (EV_P_ ev_cleanup *w) EV_THROW;
-EV_API_DECL void ev_cleanup_stop (EV_P_ ev_cleanup *w) EV_THROW;
+EV_API_DECL void ev_cleanup_start (EV_P_ ev_cleanup *w) EV_NOEXCEPT;
+EV_API_DECL void ev_cleanup_stop (EV_P_ ev_cleanup *w) EV_NOEXCEPT;
# endif
# if EV_EMBED_ENABLE
/* only supported when loop to be embedded is in fact embeddable */
-EV_API_DECL void ev_embed_start (EV_P_ ev_embed *w) EV_THROW;
-EV_API_DECL void ev_embed_stop (EV_P_ ev_embed *w) EV_THROW;
-EV_API_DECL void ev_embed_sweep (EV_P_ ev_embed *w) EV_THROW;
+EV_API_DECL void ev_embed_start (EV_P_ ev_embed *w) EV_NOEXCEPT;
+EV_API_DECL void ev_embed_stop (EV_P_ ev_embed *w) EV_NOEXCEPT;
+EV_API_DECL void ev_embed_sweep (EV_P_ ev_embed *w) EV_NOEXCEPT;
# endif
# if EV_ASYNC_ENABLE
-EV_API_DECL void ev_async_start (EV_P_ ev_async *w) EV_THROW;
-EV_API_DECL void ev_async_stop (EV_P_ ev_async *w) EV_THROW;
-EV_API_DECL void ev_async_send (EV_P_ ev_async *w) EV_THROW;
+EV_API_DECL void ev_async_start (EV_P_ ev_async *w) EV_NOEXCEPT;
+EV_API_DECL void ev_async_stop (EV_P_ ev_async *w) EV_NOEXCEPT;
+EV_API_DECL void ev_async_send (EV_P_ ev_async *w) EV_NOEXCEPT;
# endif
#if EV_COMPAT3
diff --git a/third_party/libev/ev.pod b/third_party/libev/ev.pod
index 633b87ea5..e4eeb5073 100644
--- a/third_party/libev/ev.pod
+++ b/third_party/libev/ev.pod
@@ -107,10 +107,10 @@ watcher.
=head2 FEATURES
-Libev supports C<select>, C<poll>, the Linux-specific C<epoll>, the
-BSD-specific C<kqueue> and the Solaris-specific event port mechanisms
-for file descriptor events (C<ev_io>), the Linux C<inotify> interface
-(for C<ev_stat>), Linux eventfd/signalfd (for faster and cleaner
+Libev supports C<select>, C<poll>, the Linux-specific aio and C<epoll>
+interfaces, the BSD-specific C<kqueue> and the Solaris-specific event port
+mechanisms for file descriptor events (C<ev_io>), the Linux C<inotify>
+interface (for C<ev_stat>), Linux eventfd/signalfd (for faster and cleaner
inter-thread wakeup (C<ev_async>)/signal handling (C<ev_signal>)) relative
timers (C<ev_timer>), absolute timers with customised rescheduling
(C<ev_periodic>), synchronous signals (C<ev_signal>), process status
@@ -161,9 +161,13 @@ it will print a diagnostic message and abort (via the C<assert> mechanism,
so C<NDEBUG> will disable this checking): these are programming errors in
the libev caller and need to be fixed there.
-Libev also has a few internal error-checking C<assert>ions, and also has
-extensive consistency checking code. These do not trigger under normal
-circumstances, as they indicate either a bug in libev or worse.
+Via the C<EV_FREQUENT> macro you can compile in and/or enable extensive
+consistency checking code inside libev that can be used to check for
+internal inconsistencies, usually caused by application bugs.
+
+Libev also has a few internal error-checking C<assert>ions. These do not
+trigger under normal circumstances, as they indicate either a bug in libev
+or worse.
=head1 GLOBAL FUNCTIONS
@@ -267,12 +271,32 @@ You could override this function in high-availability programs to, say,
free some memory if it cannot allocate memory, to use a special allocator,
or even to sleep a while and retry until some memory is available.
+Example: The following is the C<realloc> function that libev itself uses
+which should work with C<realloc> and C<free> functions of all kinds and
+is probably a good basis for your own implementation.
+
+ static void *
+ ev_realloc_emul (void *ptr, long size) EV_NOEXCEPT
+ {
+ if (size)
+ return realloc (ptr, size);
+
+ free (ptr);
+ return 0;
+ }
+
Example: Replace the libev allocator with one that waits a bit and then
-retries (example requires a standards-compliant C<realloc>).
+retries.
static void *
persistent_realloc (void *ptr, size_t size)
{
+ if (!size)
+ {
+ free (ptr);
+ return 0;
+ }
+
for (;;)
{
void *newptr = realloc (ptr, size);
@@ -413,9 +437,10 @@ make libev check for a fork in each iteration by enabling this flag.
This works by calling C<getpid ()> on every iteration of the loop,
and thus this might slow down your event loop if you do a lot of loop
iterations and little real work, but is usually not noticeable (on my
-GNU/Linux system for example, C<getpid> is actually a simple 5-insn sequence
-without a system call and thus I<very> fast, but my GNU/Linux system also has
-C<pthread_atfork> which is even faster).
+GNU/Linux system for example, C<getpid> is actually a simple 5-insn
+sequence without a system call and thus I<very> fast, but my GNU/Linux
+system also has C<pthread_atfork> which is even faster). (Update: glibc
+versions 2.25 and later apparently removed the C<getpid> optimisation again.)
The big advantage of this flag is that you can forget about fork (and
forget about forgetting to tell libev about forking, although you still
@@ -457,7 +482,16 @@ unblocking the signals.
It's also required by POSIX in a threaded program, as libev calls
C<sigprocmask>, whose behaviour is officially unspecified.
-This flag's behaviour will become the default in future versions of libev.
+=item C<EVFLAG_NOTIMERFD>
+
+When this flag is specified, libev will avoid using a C<timerfd> to
+detect time jumps. It will still be able to detect time jumps, but takes
+longer and does so with lower accuracy; in exchange it saves a file
+descriptor per loop.
+
+The current implementation only tries to use a C<timerfd> when the first
+C<ev_periodic> watcher is started and falls back on other methods if it
+cannot be created, but this behaviour might change in the future.
=item C<EVBACKEND_SELECT> (value 1, portable select backend)
@@ -492,7 +526,7 @@ C<EV_WRITE> to C<POLLOUT | POLLERR | POLLHUP>.
=item C<EVBACKEND_EPOLL> (value 4, Linux)
-Use the linux-specific epoll(7) interface (for both pre- and post-2.6.9
+Use the Linux-specific epoll(7) interface (for both pre- and post-2.6.9
kernels).
For few fds, this backend is a little slower than poll and select, but
@@ -548,22 +582,66 @@ faster than epoll for maybe up to a hundred file descriptors, depending on
the usage. So sad.
While nominally embeddable in other event loops, this feature is broken in
-all kernel versions tested so far.
+a lot of kernel revisions, but probably(!) works in current versions.
+
+This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
+C<EVBACKEND_POLL>.
+
+=item C<EVBACKEND_LINUXAIO> (value 64, Linux)
+
+Use the Linux-specific Linux AIO (I<not> C<< aio(7) >> but C<<
+io_submit(2) >>) event interface available in post-4.18 kernels (but libev
+only tries to use it in 4.19+).
+
+This is another Linux train wreck of an event interface.
+
+If this backend works for you (as of this writing, it was very
+experimental), it is the best event interface available on Linux and might
+be well worth enabling it - if it isn't available in your kernel this will
+be detected and this backend will be skipped.
+
+This backend can batch oneshot requests and supports a user-space ring
+buffer to receive events. It also doesn't suffer from most of the design
+problems of epoll (such as not being able to remove event sources from
+the epoll set), and generally sounds too good to be true. Because, this
+being the Linux kernel, of course it suffers from a whole new set of
+limitations, forcing you to fall back to epoll, inheriting all its design
+issues.
+
+For one, it is not easily embeddable (but probably could be done using
+an event fd at some extra overhead). It also is subject to a system wide
+limit that can be configured in F</proc/sys/fs/aio-max-nr>. If no AIO
+requests are left, this backend will be skipped during initialisation, and
+will switch to epoll when the loop is active.
+
+Most problematic in practice, however, is that not all file descriptors
+work with it. For example, in Linux 5.1, TCP sockets, pipes, event fds,
+files, F</dev/null> and many others are supported, but ttys do not work
+properly (a known bug that the kernel developers don't care about, see
+L<https://lore.kernel.org/patchwork/patch/1047453/>), so this is not
+(yet?) a generic event polling interface.
+
+Overall, it seems the Linux developers just don't want it to have a
+generic event handling mechanism other than C<select> or C<poll>.
+
+To work around all these problems, the current version of libev uses its
+epoll backend as a fallback for file descriptor types that do not work. Or
+falls back completely to epoll if the kernel acts up.
This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
C<EVBACKEND_POLL>.
=item C<EVBACKEND_KQUEUE> (value 8, most BSD clones)
-Kqueue deserves special mention, as at the time of this writing, it
-was broken on all BSDs except NetBSD (usually it doesn't work reliably
-with anything but sockets and pipes, except on Darwin, where of course
-it's completely useless). Unlike epoll, however, whose brokenness
-is by design, these kqueue bugs can (and eventually will) be fixed
-without API changes to existing programs. For this reason it's not being
-"auto-detected" unless you explicitly specify it in the flags (i.e. using
-C<EVBACKEND_KQUEUE>) or libev was compiled on a known-to-be-good (-enough)
-system like NetBSD.
+Kqueue deserves special mention, as at the time this backend was
+implemented, it was broken on all BSDs except NetBSD (usually it doesn't
+work reliably with anything but sockets and pipes, except on Darwin,
+where of course it's completely useless). Unlike epoll, however, whose
+brokenness is by design, these kqueue bugs can be (and mostly have been)
+fixed without API changes to existing programs. For this reason it's not
+being "auto-detected" on all platforms unless you explicitly specify it
+in the flags (i.e. using C<EVBACKEND_KQUEUE>) or libev was compiled on a
+known-to-be-good (-enough) system like NetBSD.
You still can embed kqueue into a normal poll or select backend and use it
only for sockets (after having made sure that sockets work with kqueue on
@@ -574,7 +652,7 @@ kernel is more efficient (which says nothing about its actual speed, of
course). While stopping, setting and starting an I/O watcher does never
cause an extra system call as with C<EVBACKEND_EPOLL>, it still adds up to
two event changes per incident. Support for C<fork ()> is very bad (you
-might have to leak fd's on fork, but it's more sane than epoll) and it
+might have to leak fds on fork, but it's more sane than epoll) and it
drops fds silently in similarly hard-to-detect cases.
This backend usually performs well under most conditions.
@@ -659,6 +737,12 @@ used if available.
struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_KQUEUE);
+Example: Similarly, on Linux, you might want to take advantage of the
+Linux aio backend if possible, but fall back to something else if that
+isn't available.
+
+ struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_LINUXAIO);
+
=item ev_loop_destroy (loop)
Destroys an event loop object (frees all memory and kernel state
@@ -1136,8 +1220,9 @@ with a watcher-specific start function (C<< ev_TYPE_start (loop, watcher
corresponding stop function (C<< ev_TYPE_stop (loop, watcher *) >>.
As long as your watcher is active (has been started but not stopped) you
-must not touch the values stored in it. Most specifically you must never
-reinitialise it or call its C<ev_TYPE_set> macro.
+must not touch the values stored in it except when explicitly documented
+otherwise. Most specifically you must never reinitialise it or call its
+C<ev_TYPE_set> macro.
Each and every callback receives the event loop pointer as first, the
registered watcher structure as second, and a bitset of received events as
@@ -1462,7 +1547,7 @@ Many event loops support I<watcher priorities>, which are usually small
integers that influence the ordering of event callback invocation
between watchers in some way, all else being equal.
-In libev, Watcher priorities can be set using C<ev_set_priority>. See its
+In libev, watcher priorities can be set using C<ev_set_priority>. See its
description for the more technical details such as the actual priority
range.
@@ -1566,15 +1651,18 @@ This section describes each watcher in detail, but will not repeat
information given in the last section. Any initialisation/set macros,
functions and members specific to the watcher type are explained.
-Members are additionally marked with either I<[read-only]>, meaning that,
-while the watcher is active, you can look at the member and expect some
-sensible content, but you must not modify it (you can modify it while the
-watcher is stopped to your hearts content), or I<[read-write]>, which
-means you can expect it to have some sensible content while the watcher
-is active, but you can also modify it. Modifying it may not do something
+Most members are additionally marked with either I<[read-only]>, meaning
+that, while the watcher is active, you can look at the member and expect
+some sensible content, but you must not modify it (you can modify it while
+the watcher is stopped to your heart's content), or I<[read-write]>, which
+means you can expect it to have some sensible content while the watcher is
+active, but you can also modify it (within the same thread as the event
+loop, i.e. without creating data races). Modifying it may not do something
sensible or take immediate effect (or do anything at all), but libev will
not crash or malfunction in any way.
+In any case, the documentation for each member will explain what the
+effects are, and if there are any additional access restrictions.
=head2 C<ev_io> - is this file descriptor readable or writable?
@@ -1611,13 +1699,13 @@ But really, best use non-blocking mode.
=head3 The special problem of disappearing file descriptors
-Some backends (e.g. kqueue, epoll) need to be told about closing a file
-descriptor (either due to calling C<close> explicitly or any other means,
-such as C<dup2>). The reason is that you register interest in some file
-descriptor, but when it goes away, the operating system will silently drop
-this interest. If another file descriptor with the same number then is
-registered with libev, there is no efficient way to see that this is, in
-fact, a different file descriptor.
+Some backends (e.g. kqueue, epoll, linuxaio) need to be told about closing
+a file descriptor (either due to calling C<close> explicitly or any other
+means, such as C<dup2>). The reason is that you register interest in some
+file descriptor, but when it goes away, the operating system will silently
+drop this interest. If another file descriptor with the same number then
+is registered with libev, there is no efficient way to see that this is,
+in fact, a different file descriptor.
To avoid having to explicitly tell libev about such cases, libev follows
the following policy: Each time C<ev_io_set> is being called, libev
@@ -1676,9 +1764,10 @@ reuse the same code path.
=head3 The special problem of fork
-Some backends (epoll, kqueue) do not support C<fork ()> at all or exhibit
-useless behaviour. Libev fully supports fork, but needs to be told about
-it in the child if you want to continue to use it in the child.
+Some backends (epoll, kqueue, linuxaio, iouring) do not support C<fork ()>
+at all or exhibit useless behaviour. Libev fully supports fork, but needs
+to be told about it in the child if you want to continue to use it in the
+child.
To support fork in your child processes, you have to call C<ev_loop_fork
()> after a fork in the child, enable C<EVFLAG_FORKCHECK>, or resort to
@@ -1743,16 +1832,36 @@ opportunity for a DoS attack.
=item ev_io_set (ev_io *, int fd, int events)
Configures an C<ev_io> watcher. The C<fd> is the file descriptor to
-receive events for and C<events> is either C<EV_READ>, C<EV_WRITE> or
-C<EV_READ | EV_WRITE>, to express the desire to receive the given events.
+receive events for and C<events> is either C<EV_READ>, C<EV_WRITE>, both
+C<EV_READ | EV_WRITE> or C<0>, to express the desire to receive the given
+events.
+
+Note that setting the C<events> to C<0> and starting the watcher is
+supported, but not specially optimized - if your program sometimes happens
+to generate this combination this is fine, but if it is easy to avoid
+starting an io watcher watching for no events you should do so.
+
+=item ev_io_modify (ev_io *, int events)
+
+Similar to C<ev_io_set>, but only changes the requested events. Using this
+might be faster with some backends, as libev can assume that the C<fd>
+still refers to the same underlying file description, something it cannot
+do when using C<ev_io_set>.
-=item int fd [read-only]
+=item int fd [no-modify]
-The file descriptor being watched.
+The file descriptor being watched. While it can be read at any time, you
+must not modify this member even when the watcher is stopped - always use
+C<ev_io_set> for that.
-=item int events [read-only]
+=item int events [no-modify]
-The events being watched.
+The set of events the fd is being watched for, among other flags. Remember
+that this is a bit set - to test for C<EV_READ>, use C<< w->events &
+EV_READ >>, and similarly for C<EV_WRITE>.
+
+As with C<fd>, you must not modify this member even when the watcher is
+stopped, always use C<ev_io_set> or C<ev_io_modify> for that.
=back
@@ -2115,11 +2224,11 @@ C<SIGSTOP>).
=item ev_timer_set (ev_timer *, ev_tstamp after, ev_tstamp repeat)
-Configure the timer to trigger after C<after> seconds. If C<repeat>
-is C<0.>, then it will automatically be stopped once the timeout is
-reached. If it is positive, then the timer will automatically be
-configured to trigger again C<repeat> seconds later, again, and again,
-until stopped manually.
+Configure the timer to trigger after C<after> seconds (fractional and
+negative values are supported). If C<repeat> is C<0.>, then it will
+automatically be stopped once the timeout is reached. If it is positive,
+then the timer will automatically be configured to trigger again C<repeat>
+seconds later, again, and again, until stopped manually.
The timer itself will do a best-effort at avoiding drift, that is, if
you configure a timer to trigger every 10 seconds, then it will normally
@@ -2226,8 +2335,8 @@ it, as it uses a relative timeout).
C<ev_periodic> watchers can also be used to implement vastly more complex
timers, such as triggering an event on each "midnight, local time", or
-other complicated rules. This cannot be done with C<ev_timer> watchers, as
-those cannot react to time jumps.
+other complicated rules. This cannot easily be done with C<ev_timer>
+watchers, as those cannot react to time jumps.
As with timers, the callback is guaranteed to be invoked only when the
point in time where it is supposed to trigger has passed. If multiple
@@ -2323,10 +2432,28 @@ NOTE: I<< This callback must always return a time that is higher than or
equal to the passed C<now> value >>.
This can be used to create very complex timers, such as a timer that
-triggers on "next midnight, local time". To do this, you would calculate the
-next midnight after C<now> and return the timestamp value for this. How
-you do this is, again, up to you (but it is not trivial, which is the main
-reason I omitted it as an example).
+triggers on "next midnight, local time". To do this, you would calculate
+the next midnight after C<now> and return the timestamp value for
+this. Here is a (completely untested, no error checking) example on how to
+do this:
+
+ #include <time.h>
+
+ static ev_tstamp
+ my_rescheduler (ev_periodic *w, ev_tstamp now)
+ {
+ time_t tnow = (time_t)now;
+ struct tm tm;
+ localtime_r (&tnow, &tm);
+
+ tm.tm_sec = tm.tm_min = tm.tm_hour = 0; // midnight current day
+ ++tm.tm_mday; // midnight next day
+
+ return mktime (&tm);
+ }
+
+Note: this code might run into trouble on days that have more than two
+midnights (beginning and end).
=back
@@ -3519,7 +3646,7 @@ There are some other functions of possible interest. Described. Here. Now.
=over 4
-=item ev_once (loop, int fd, int events, ev_tstamp timeout, callback)
+=item ev_once (loop, int fd, int events, ev_tstamp timeout, callback, arg)
This function combines a simple timer and an I/O watcher, calls your
callback on whichever event happens first and automatically stops both
@@ -3961,14 +4088,14 @@ libev sources can be compiled as C++. Therefore, code that uses the C API
will work fine.
Proper exception specifications might have to be added to callbacks passed
-to libev: exceptions may be thrown only from watcher callbacks, all
-other callbacks (allocator, syserr, loop acquire/release and periodic
-reschedule callbacks) must not throw exceptions, and might need a C<throw
-()> specification. If you have code that needs to be compiled as both C
-and C++ you can use the C<EV_THROW> macro for this:
+to libev: exceptions may be thrown only from watcher callbacks, all other
+callbacks (allocator, syserr, loop acquire/release and periodic reschedule
+callbacks) must not throw exceptions, and might need a C<noexcept>
+specification. If you have code that needs to be compiled as both C and
+C++ you can use the C<EV_NOEXCEPT> macro for this:
static void
- fatal_error (const char *msg) EV_THROW
+ fatal_error (const char *msg) EV_NOEXCEPT
{
perror (msg);
abort ();
@@ -4142,6 +4269,9 @@ method.
For C<ev::embed> watchers this method is called C<set_embed>, to avoid
clashing with the C<set (loop)> method.
+For C<ev::io> watchers there is an additional C<set> method that accepts a
+new event mask only, and internally calls C<ev_io_modify>.
+
=item w->start ()
Starts the watcher. Note that there is no C<loop> argument, as the
@@ -4388,11 +4518,13 @@ in your include path (e.g. in libev/ when using -Ilibev):
ev_win32.c required on win32 platforms only
- ev_select.c only when select backend is enabled (which is enabled by default)
- ev_poll.c only when poll backend is enabled (disabled by default)
- ev_epoll.c only when the epoll backend is enabled (disabled by default)
- ev_kqueue.c only when the kqueue backend is enabled (disabled by default)
- ev_port.c only when the solaris port backend is enabled (disabled by default)
+ ev_select.c only when select backend is enabled
+ ev_poll.c only when poll backend is enabled
+ ev_epoll.c only when the epoll backend is enabled
+ ev_linuxaio.c only when the linux aio backend is enabled
+ ev_iouring.c only when the linux io_uring backend is enabled
+ ev_kqueue.c only when the kqueue backend is enabled
+ ev_port.c only when the solaris port backend is enabled
F<ev.c> includes the backend files directly when enabled, so you only need
to compile this single file.
@@ -4521,6 +4653,30 @@ C<ev_signal> and C<ev_async> performance and reduce resource consumption.
If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc
2.7 or newer, otherwise disabled.
+=item EV_USE_SIGNALFD
+
+If defined to be C<1>, then libev will assume that C<signalfd ()> is
+available and will probe for kernel support at runtime. This enables
+the use of EVFLAG_SIGNALFD for faster and simpler signal handling. If
+undefined, it will be enabled if the headers indicate GNU/Linux + Glibc
+2.7 or newer, otherwise disabled.
+
+=item EV_USE_TIMERFD
+
+If defined to be C<1>, then libev will assume that C<timerfd ()> is
+available and will probe for kernel support at runtime. This allows
+libev to detect time jumps accurately. If undefined, it will be enabled
+if the headers indicate GNU/Linux + Glibc 2.8 or newer and define
+C<TFD_TIMER_CANCEL_ON_SET>, otherwise disabled.
+
+=item EV_USE_EVENTFD
+
+If defined to be C<1>, then libev will assume that C<eventfd ()> is
+available and will probe for kernel support at runtime. This will improve
+C<ev_signal> and C<ev_async> performance and reduce resource consumption.
+If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc
+2.7 or newer, otherwise disabled.
+
=item EV_USE_SELECT
If undefined or defined to be C<1>, libev will compile in support for the
@@ -4591,6 +4747,19 @@ otherwise another method will be used as fallback. This is the preferred
backend for GNU/Linux systems. If undefined, it will be enabled if the
headers indicate GNU/Linux + Glibc 2.4 or newer, otherwise disabled.
+=item EV_USE_LINUXAIO
+
+If defined to be C<1>, libev will compile in support for the Linux aio
+backend (C<EV_USE_EPOLL> must also be enabled). If undefined, it will be
+enabled on Linux, otherwise disabled.
+
+=item EV_USE_IOURING
+
+If defined to be C<1>, libev will compile in support for the Linux
+io_uring backend (C<EV_USE_EPOLL> must also be enabled). Due to its
+current limitations it has to be requested explicitly. If undefined, it
+will be enabled on Linux, otherwise disabled.
+
=item EV_USE_KQUEUE
If defined to be C<1>, libev will compile in support for the BSD style
@@ -4877,6 +5046,9 @@ called once per loop, which can slow down libev. If set to C<3>, then the
verification code will be called very frequently, which will slow down
libev considerably.
+Verification errors are reported via C's C<assert> mechanism, so if you
+disable that (e.g. by defining C<NDEBUG>) then no errors will be reported.
+
The default is C<1>, unless C<EV_FEATURES> overrides it, in which case it
will be C<0>.
diff --git a/third_party/libev/ev_epoll.c b/third_party/libev/ev_epoll.c
index df118a6fe..346b4196b 100644
--- a/third_party/libev/ev_epoll.c
+++ b/third_party/libev/ev_epoll.c
@@ -1,7 +1,7 @@
/*
* libev epoll fd activity backend
*
- * Copyright (c) 2007,2008,2009,2010,2011 Marc Alexander Lehmann <libev at schmorp.de>
+ * Copyright (c) 2007,2008,2009,2010,2011,2016,2017,2019 Marc Alexander Lehmann <libev at schmorp.de>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modifica-
@@ -93,10 +93,10 @@ epoll_modify (EV_P_ int fd, int oev, int nev)
ev.events = (nev & EV_READ ? EPOLLIN : 0)
| (nev & EV_WRITE ? EPOLLOUT : 0);
- if (expect_true (!epoll_ctl (backend_fd, oev && oldmask != nev ? EPOLL_CTL_MOD : EPOLL_CTL_ADD, fd, &ev)))
+ if (ecb_expect_true (!epoll_ctl (backend_fd, oev && oldmask != nev ? EPOLL_CTL_MOD : EPOLL_CTL_ADD, fd, &ev)))
return;
- if (expect_true (errno == ENOENT))
+ if (ecb_expect_true (errno == ENOENT))
{
/* if ENOENT then the fd went away, so try to do the right thing */
if (!nev)
@@ -105,7 +105,7 @@ epoll_modify (EV_P_ int fd, int oev, int nev)
if (!epoll_ctl (backend_fd, EPOLL_CTL_ADD, fd, &ev))
return;
}
- else if (expect_true (errno == EEXIST))
+ else if (ecb_expect_true (errno == EEXIST))
{
/* EEXIST means we ignored a previous DEL, but the fd is still active */
/* if the kernel mask is the same as the new mask, we assume it hasn't changed */
@@ -115,7 +115,7 @@ epoll_modify (EV_P_ int fd, int oev, int nev)
if (!epoll_ctl (backend_fd, EPOLL_CTL_MOD, fd, &ev))
return;
}
- else if (expect_true (errno == EPERM))
+ else if (ecb_expect_true (errno == EPERM))
{
/* EPERM means the fd is always ready, but epoll is too snobbish */
/* to handle it, unlike select or poll. */
@@ -124,7 +124,7 @@ epoll_modify (EV_P_ int fd, int oev, int nev)
/* add fd to epoll_eperms, if not already inside */
if (!(oldmask & EV_EMASK_EPERM))
{
- array_needsize (int, epoll_eperms, epoll_epermmax, epoll_epermcnt + 1, EMPTY2);
+ array_needsize (int, epoll_eperms, epoll_epermmax, epoll_epermcnt + 1, array_needsize_noinit);
epoll_eperms [epoll_epermcnt++] = fd;
}
@@ -144,16 +144,16 @@ epoll_poll (EV_P_ ev_tstamp timeout)
int i;
int eventcnt;
- if (expect_false (epoll_epermcnt))
- timeout = 0.;
+ if (ecb_expect_false (epoll_epermcnt))
+ timeout = EV_TS_CONST (0.);
/* epoll wait times cannot be larger than (LONG_MAX - 999UL) / HZ msecs, which is below */
/* the default libev max wait time, however. */
EV_RELEASE_CB;
- eventcnt = epoll_wait (backend_fd, epoll_events, epoll_eventmax, timeout * 1e3);
+ eventcnt = epoll_wait (backend_fd, epoll_events, epoll_eventmax, EV_TS_TO_MSEC (timeout));
EV_ACQUIRE_CB;
- if (expect_false (eventcnt < 0))
+ if (ecb_expect_false (eventcnt < 0))
{
if (errno != EINTR)
ev_syserr ("(libev) epoll_wait");
@@ -176,14 +176,14 @@ epoll_poll (EV_P_ ev_tstamp timeout)
* other spurious notifications will be found by epoll_ctl, below
* we assume that fd is always in range, as we never shrink the anfds array
*/
- if (expect_false ((uint32_t)anfds [fd].egen != (uint32_t)(ev->data.u64 >> 32)))
+ if (ecb_expect_false ((uint32_t)anfds [fd].egen != (uint32_t)(ev->data.u64 >> 32)))
{
/* recreate kernel state */
postfork |= 2;
continue;
}
- if (expect_false (got & ~want))
+ if (ecb_expect_false (got & ~want))
{
anfds [fd].emask = want;
@@ -195,6 +195,8 @@ epoll_poll (EV_P_ ev_tstamp timeout)
* above with the gencounter check (== our fd is not the event fd), and
* partially here, when epoll_ctl returns an error (== a child has the fd
* but we closed it).
+ * note: for events such as POLLHUP, where we can't know whether it refers
+ * to EV_READ or EV_WRITE, we might issue redundant EPOLL_CTL_MOD calls.
*/
ev->events = (want & EV_READ ? EPOLLIN : 0)
| (want & EV_WRITE ? EPOLLOUT : 0);
@@ -212,7 +214,7 @@ epoll_poll (EV_P_ ev_tstamp timeout)
}
/* if the receive array was full, increase its size */
- if (expect_false (eventcnt == epoll_eventmax))
+ if (ecb_expect_false (eventcnt == epoll_eventmax))
{
ev_free (epoll_events);
epoll_eventmax = array_nextsize (sizeof (struct epoll_event), epoll_eventmax, epoll_eventmax + 1);
@@ -235,23 +237,34 @@ epoll_poll (EV_P_ ev_tstamp timeout)
}
}
-inline_size
-int
-epoll_init (EV_P_ int flags)
+static int
+epoll_epoll_create (void)
{
-#ifdef EPOLL_CLOEXEC
- backend_fd = epoll_create1 (EPOLL_CLOEXEC);
+ int fd;
- if (backend_fd < 0 && (errno == EINVAL || errno == ENOSYS))
+#if defined EPOLL_CLOEXEC && !defined __ANDROID__
+ fd = epoll_create1 (EPOLL_CLOEXEC);
+
+ if (fd < 0 && (errno == EINVAL || errno == ENOSYS))
#endif
- backend_fd = epoll_create (256);
+ {
+ fd = epoll_create (256);
- if (backend_fd < 0)
- return 0;
+ if (fd >= 0)
+ fcntl (fd, F_SETFD, FD_CLOEXEC);
+ }
+
+ return fd;
+}
- fcntl (backend_fd, F_SETFD, FD_CLOEXEC);
+inline_size
+int
+epoll_init (EV_P_ int flags)
+{
+ if ((backend_fd = epoll_epoll_create ()) < 0)
+ return 0;
- backend_mintime = 1e-3; /* epoll does sometimes return early, this is just to avoid the worst */
+ backend_mintime = EV_TS_CONST (1e-3); /* epoll does sometimes return early, this is just to avoid the worst */
backend_modify = epoll_modify;
backend_poll = epoll_poll;
@@ -269,17 +282,15 @@ epoll_destroy (EV_P)
array_free (epoll_eperm, EMPTY);
}
-inline_size
-void
+ecb_cold
+static void
epoll_fork (EV_P)
{
close (backend_fd);
- while ((backend_fd = epoll_create (256)) < 0)
+ while ((backend_fd = epoll_epoll_create ()) < 0)
ev_syserr ("(libev) epoll_create");
- fcntl (backend_fd, F_SETFD, FD_CLOEXEC);
-
fd_rearm_all (EV_A);
}
diff --git a/third_party/libev/ev_iouring.c b/third_party/libev/ev_iouring.c
new file mode 100644
index 000000000..bfd3de65f
--- /dev/null
+++ b/third_party/libev/ev_iouring.c
@@ -0,0 +1,694 @@
+/*
+ * libev linux io_uring fd activity backend
+ *
+ * Copyright (c) 2019-2020 Marc Alexander Lehmann <libev at schmorp.de>
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without modifica-
+ * tion, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER-
+ * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE-
+ * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH-
+ * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Alternatively, the contents of this file may be used under the terms of
+ * the GNU General Public License ("GPL") version 2 or any later version,
+ * in which case the provisions of the GPL are applicable instead of
+ * the above. If you wish to allow the use of your version of this file
+ * only under the terms of the GPL and not to allow others to use your
+ * version of this file under the BSD license, indicate your decision
+ * by deleting the provisions above and replace them with the notice
+ * and other provisions required by the GPL. If you do not delete the
+ * provisions above, a recipient may use your version of this file under
+ * either the BSD or the GPL.
+ */
+
+/*
+ * general notes about linux io_uring:
+ *
+ * a) it's the best interface I have seen so far. on linux.
+ * b) best is not necessarily very good.
+ * c) it's better than the aio mess, doesn't suffer from the fork problems
+ * of linux aio or epoll and so on and so on. and you could do event stuff
+ * without any syscalls. what's not to like?
+ * d) ok, it's vastly more complex, but that's ok, really.
+ * e) why two mmaps instead of one? one would be more space-efficient,
+ * and I can't see what benefit two would have (other than being
+ * somehow resizable/relocatable, but that's apparently not possible).
+ * f) hmm, it's practically undebuggable (gdb can't access the memory, and
+ * the bizarre way structure offsets are communicated makes it hard to
+ * just print the ring buffer heads, even *if* the memory were visible
+ * in gdb). but then, that's also ok, really.
+ * g) well, you cannot specify a timeout when waiting for events. no,
+ * seriously, the interface doesn't support a timeout. never seen _that_
+ * before. sure, you can use a timerfd, but that's another syscall
+ * you could have avoided. overall, this bizarre omission smells
+ * like a µ-optimisation by the io_uring author for his personal
+ * applications, to the detriment of everybody else who just wants
+ * an event loop. but, umm, ok, if that's all, it could be worse.
+ * (from what I gather from the author Jens Axboe, it simply didn't
+ * occur to him, and he made good on it by adding an unlimited number
+ * of timeouts later :).
+ * h) initially there was a hardcoded limit of 4096 outstanding events.
+ * later versions not only bump this to 32k, but also can handle
+ * an unlimited amount of events, so this only affects the batch size.
+ * i) unlike linux aio, you *can* register more than the limit
+ * of fd events. while early versions of io_uring signalled an overflow
+ * and you ended up getting wet, 5.5+ does not do this anymore.
+ * j) but, oh my! it had exactly the same bugs as the linux aio backend,
+ * where some undocumented poll combinations just fail. fortunately,
+ * after finally reaching the author, he was more than willing to fix
+ * this probably in 5.6+.
+ * k) overall, the *API* itself is, I dare to say, not a total trainwreck.
+ * once the bugs are fixed (probably in 5.6+), it will be without
+ * competition.
+ */
+
+/* TODO: use internal TIMEOUT */
+/* TODO: take advantage of single mmap, NODROP etc. */
+/* TODO: resize cq/sq size independently */
+
+#include <sys/timerfd.h>
+#include <sys/mman.h>
+#include <poll.h>
+#include <stdint.h>
+
+#define IOURING_INIT_ENTRIES 32
+
+/*****************************************************************************/
+/* syscall wrapdadoop - this section has the raw api/abi definitions */
+
+#include <linux/fs.h>
+#include <linux/types.h>
+
+/* mostly directly taken from the kernel or documentation */
+
+struct io_uring_sqe
+{
+ __u8 opcode;
+ __u8 flags;
+ __u16 ioprio;
+ __s32 fd;
+ union {
+ __u64 off;
+ __u64 addr2;
+ };
+ __u64 addr;
+ __u32 len;
+ union {
+ __kernel_rwf_t rw_flags;
+ __u32 fsync_flags;
+ __u16 poll_events;
+ __u32 sync_range_flags;
+ __u32 msg_flags;
+ __u32 timeout_flags;
+ __u32 accept_flags;
+ __u32 cancel_flags;
+ __u32 open_flags;
+ __u32 statx_flags;
+ };
+ __u64 user_data;
+ union {
+ __u16 buf_index;
+ __u64 __pad2[3];
+ };
+};
+
+struct io_uring_cqe
+{
+ __u64 user_data;
+ __s32 res;
+ __u32 flags;
+};
+
+struct io_sqring_offsets
+{
+ __u32 head;
+ __u32 tail;
+ __u32 ring_mask;
+ __u32 ring_entries;
+ __u32 flags;
+ __u32 dropped;
+ __u32 array;
+ __u32 resv1;
+ __u64 resv2;
+};
+
+struct io_cqring_offsets
+{
+ __u32 head;
+ __u32 tail;
+ __u32 ring_mask;
+ __u32 ring_entries;
+ __u32 overflow;
+ __u32 cqes;
+ __u64 resv[2];
+};
+
+struct io_uring_params
+{
+ __u32 sq_entries;
+ __u32 cq_entries;
+ __u32 flags;
+ __u32 sq_thread_cpu;
+ __u32 sq_thread_idle;
+ __u32 features;
+ __u32 resv[4];
+ struct io_sqring_offsets sq_off;
+ struct io_cqring_offsets cq_off;
+};
+
+#define IORING_SETUP_CQSIZE 0x00000008
+
+#define IORING_OP_POLL_ADD 6
+#define IORING_OP_POLL_REMOVE 7
+#define IORING_OP_TIMEOUT 11
+#define IORING_OP_TIMEOUT_REMOVE 12
+
+/* relative or absolute, reference clock is CLOCK_MONOTONIC */
+struct iouring_kernel_timespec
+{
+ int64_t tv_sec;
+ long long tv_nsec;
+};
+
+#define IORING_TIMEOUT_ABS 0x00000001
+
+#define IORING_ENTER_GETEVENTS 0x01
+
+#define IORING_OFF_SQ_RING 0x00000000ULL
+#define IORING_OFF_CQ_RING 0x08000000ULL
+#define IORING_OFF_SQES 0x10000000ULL
+
+#define IORING_FEAT_SINGLE_MMAP 0x00000001
+#define IORING_FEAT_NODROP 0x00000002
+#define IORING_FEAT_SUBMIT_STABLE 0x00000004
+
+inline_size
+int
+evsys_io_uring_setup (unsigned entries, struct io_uring_params *params)
+{
+ return ev_syscall2 (SYS_io_uring_setup, entries, params);
+}
+
+inline_size
+int
+evsys_io_uring_enter (int fd, unsigned to_submit, unsigned min_complete, unsigned flags, const sigset_t *sig, size_t sigsz)
+{
+ return ev_syscall6 (SYS_io_uring_enter, fd, to_submit, min_complete, flags, sig, sigsz);
+}
+
+/*****************************************************************************/
+/* actual backend implementation */
+
+/* we hope that volatile will make the compiler access these variables only once */
+#define EV_SQ_VAR(name) *(volatile unsigned *)((char *)iouring_sq_ring + iouring_sq_ ## name)
+#define EV_CQ_VAR(name) *(volatile unsigned *)((char *)iouring_cq_ring + iouring_cq_ ## name)
+
+/* the index array */
+#define EV_SQ_ARRAY ((unsigned *)((char *)iouring_sq_ring + iouring_sq_array))
+
+/* the submit/completion queue entries */
+#define EV_SQES ((struct io_uring_sqe *) iouring_sqes)
+#define EV_CQES ((struct io_uring_cqe *)((char *)iouring_cq_ring + iouring_cq_cqes))
+
+inline_speed
+int
+iouring_enter (EV_P_ ev_tstamp timeout)
+{
+ int res;
+
+ EV_RELEASE_CB;
+
+ res = evsys_io_uring_enter (iouring_fd, iouring_to_submit, 1,
+ timeout > EV_TS_CONST (0.) ? IORING_ENTER_GETEVENTS : 0, 0, 0);
+
+ assert (("libev: io_uring_enter did not consume all sqes", (res < 0 || res == iouring_to_submit)));
+
+ iouring_to_submit = 0;
+
+ EV_ACQUIRE_CB;
+
+ return res;
+}
+
+/* TODO: can we move things around so we don't need this forward-reference? */
+static void
+iouring_poll (EV_P_ ev_tstamp timeout);
+
+static
+struct io_uring_sqe *
+iouring_sqe_get (EV_P)
+{
+ unsigned tail;
+
+ for (;;)
+ {
+ tail = EV_SQ_VAR (tail);
+
+ if (ecb_expect_true (tail + 1 - EV_SQ_VAR (head) <= EV_SQ_VAR (ring_entries)))
+ break; /* what's the problem, we have free sqes */
+
+ /* queue full, need to flush and possibly handle some events */
+
+#if EV_FEATURE_CODE
+ /* first we ask the kernel nicely, most often this frees up some sqes */
+ int res = iouring_enter (EV_A_ EV_TS_CONST (0.));
+
+ ECB_MEMORY_FENCE_ACQUIRE; /* better safe than sorry */
+
+ if (res >= 0)
+ continue; /* yes, it worked, try again */
+#endif
+
+ /* some problem, possibly EBUSY - do the full poll and let it handle any issues */
+
+ iouring_poll (EV_A_ EV_TS_CONST (0.));
+ /* iouring_poll should have done ECB_MEMORY_FENCE_ACQUIRE for us */
+ }
+
+ /*assert (("libev: io_uring queue full after flush", tail + 1 - EV_SQ_VAR (head) <= EV_SQ_VAR (ring_entries)));*/
+
+ return EV_SQES + (tail & EV_SQ_VAR (ring_mask));
+}
+
+inline_size
+void
+iouring_sqe_submit (EV_P_ struct io_uring_sqe *sqe)
+{
+ unsigned idx = sqe - EV_SQES;
+
+ EV_SQ_ARRAY [idx] = idx;
+ ECB_MEMORY_FENCE_RELEASE;
+ ++EV_SQ_VAR (tail);
+ /*ECB_MEMORY_FENCE_RELEASE;*/ /* for the time being we assume this is not needed */
+ ++iouring_to_submit;
+}
+
+/*****************************************************************************/
+
+/* when the timerfd expires we simply note the fact,
+ * as the purpose of the timerfd is to wake us up, nothing else.
+ * the next iteration should re-set it.
+ */
+static void
+iouring_tfd_cb (EV_P_ struct ev_io *w, int revents)
+{
+ iouring_tfd_to = EV_TSTAMP_HUGE;
+}
+
+/* called for full and partial cleanup */
+ecb_cold
+static void
+iouring_internal_destroy (EV_P)
+{
+ close (iouring_tfd);
+ close (iouring_fd);
+
+ if (iouring_sq_ring != MAP_FAILED) munmap (iouring_sq_ring, iouring_sq_ring_size);
+ if (iouring_cq_ring != MAP_FAILED) munmap (iouring_cq_ring, iouring_cq_ring_size);
+ if (iouring_sqes != MAP_FAILED) munmap (iouring_sqes , iouring_sqes_size );
+
+ if (ev_is_active (&iouring_tfd_w))
+ {
+ ev_ref (EV_A);
+ ev_io_stop (EV_A_ &iouring_tfd_w);
+ }
+}
+
+ecb_cold
+static int
+iouring_internal_init (EV_P)
+{
+ struct io_uring_params params = { 0 };
+
+ iouring_to_submit = 0;
+
+ iouring_tfd = -1;
+ iouring_sq_ring = MAP_FAILED;
+ iouring_cq_ring = MAP_FAILED;
+ iouring_sqes = MAP_FAILED;
+
+ if (!have_monotonic) /* cannot really happen, but what if!! */
+ return -1;
+
+ for (;;)
+ {
+ iouring_fd = evsys_io_uring_setup (iouring_entries, &params);
+
+ if (iouring_fd >= 0)
+ break; /* yippie */
+
+ if (errno != EINVAL)
+ return -1; /* we failed */
+
+#if TODO
+ if ((~params.features) & (IORING_FEAT_NODROP | IORING_FEAT_SINGLE_MMAP | IORING_FEAT_SUBMIT_STABLE))
+ return -1; /* we require the above features */
+#endif
+
+ /* EINVAL: lots of possible reasons, but maybe
+ * it is because we hit the unqueryable hardcoded size limit
+ */
+
+ /* we hit the limit already, give up */
+ if (iouring_max_entries)
+ return -1;
+
+ /* first time we hit EINVAL? assume we hit the limit, so go back and retry */
+ iouring_entries >>= 1;
+ iouring_max_entries = iouring_entries;
+ }
+
+ iouring_sq_ring_size = params.sq_off.array + params.sq_entries * sizeof (unsigned);
+ iouring_cq_ring_size = params.cq_off.cqes + params.cq_entries * sizeof (struct io_uring_cqe);
+ iouring_sqes_size = params.sq_entries * sizeof (struct io_uring_sqe);
+
+ iouring_sq_ring = mmap (0, iouring_sq_ring_size, PROT_READ | PROT_WRITE,
+ MAP_SHARED | MAP_POPULATE, iouring_fd, IORING_OFF_SQ_RING);
+ iouring_cq_ring = mmap (0, iouring_cq_ring_size, PROT_READ | PROT_WRITE,
+ MAP_SHARED | MAP_POPULATE, iouring_fd, IORING_OFF_CQ_RING);
+ iouring_sqes = mmap (0, iouring_sqes_size, PROT_READ | PROT_WRITE,
+ MAP_SHARED | MAP_POPULATE, iouring_fd, IORING_OFF_SQES);
+
+ if (iouring_sq_ring == MAP_FAILED || iouring_cq_ring == MAP_FAILED || iouring_sqes == MAP_FAILED)
+ return -1;
+
+ iouring_sq_head = params.sq_off.head;
+ iouring_sq_tail = params.sq_off.tail;
+ iouring_sq_ring_mask = params.sq_off.ring_mask;
+ iouring_sq_ring_entries = params.sq_off.ring_entries;
+ iouring_sq_flags = params.sq_off.flags;
+ iouring_sq_dropped = params.sq_off.dropped;
+ iouring_sq_array = params.sq_off.array;
+
+ iouring_cq_head = params.cq_off.head;
+ iouring_cq_tail = params.cq_off.tail;
+ iouring_cq_ring_mask = params.cq_off.ring_mask;
+ iouring_cq_ring_entries = params.cq_off.ring_entries;
+ iouring_cq_overflow = params.cq_off.overflow;
+ iouring_cq_cqes = params.cq_off.cqes;
+
+ iouring_tfd = timerfd_create (CLOCK_MONOTONIC, TFD_CLOEXEC);
+
+ if (iouring_tfd < 0)
+ return iouring_tfd;
+
+ iouring_tfd_to = EV_TSTAMP_HUGE;
+
+ return 0;
+}
+
+ecb_cold
+static void
+iouring_fork (EV_P)
+{
+ iouring_internal_destroy (EV_A);
+
+ while (iouring_internal_init (EV_A) < 0)
+ ev_syserr ("(libev) io_uring_setup");
+
+ fd_rearm_all (EV_A);
+
+ ev_io_stop (EV_A_ &iouring_tfd_w);
+ ev_io_set (EV_A_ &iouring_tfd_w, iouring_tfd, EV_READ);
+ ev_io_start (EV_A_ &iouring_tfd_w);
+}
+
+/*****************************************************************************/
+
+static void
+iouring_modify (EV_P_ int fd, int oev, int nev)
+{
+ if (oev)
+ {
+ /* we assume the sqe's are all "properly" initialised */
+ struct io_uring_sqe *sqe = iouring_sqe_get (EV_A);
+ sqe->opcode = IORING_OP_POLL_REMOVE;
+ sqe->fd = fd;
+ /* Jens Axboe notified me that user_data is not what is documented, but is
+ * some kind of unique ID that has to match, otherwise the request cannot
+ * be removed. Since we don't *really* have that, we pass in the old
+ * generation counter - if that fails, too bad, it will hopefully be removed
+ * at close time and then be ignored. */
+ sqe->addr = (uint32_t)fd | ((__u64)(uint32_t)anfds [fd].egen << 32);
+ sqe->user_data = (uint64_t)-1;
+ iouring_sqe_submit (EV_A_ sqe);
+
+ /* increment generation counter to avoid handling old events */
+ ++anfds [fd].egen;
+ }
+
+ if (nev)
+ {
+ struct io_uring_sqe *sqe = iouring_sqe_get (EV_A);
+ sqe->opcode = IORING_OP_POLL_ADD;
+ sqe->fd = fd;
+ sqe->addr = 0;
+ sqe->user_data = (uint32_t)fd | ((__u64)(uint32_t)anfds [fd].egen << 32);
+ sqe->poll_events =
+ (nev & EV_READ ? POLLIN : 0)
+ | (nev & EV_WRITE ? POLLOUT : 0);
+ iouring_sqe_submit (EV_A_ sqe);
+ }
+}
+
+inline_size
+void
+iouring_tfd_update (EV_P_ ev_tstamp timeout)
+{
+ ev_tstamp tfd_to = mn_now + timeout;
+
+ /* we assume there will be many iterations per timer change, so
+ * we only re-set the timerfd when we have to because its expiry
+ * is too late.
+ */
+ if (ecb_expect_false (tfd_to < iouring_tfd_to))
+ {
+ struct itimerspec its;
+
+ iouring_tfd_to = tfd_to;
+ EV_TS_SET (its.it_interval, 0.);
+ EV_TS_SET (its.it_value, tfd_to);
+
+ if (timerfd_settime (iouring_tfd, TFD_TIMER_ABSTIME, &its, 0) < 0)
+ assert (("libev: iouring timerfd_settime failed", 0));
+ }
+}
+
+inline_size
+void
+iouring_process_cqe (EV_P_ struct io_uring_cqe *cqe)
+{
+ int fd = cqe->user_data & 0xffffffffU;
+ uint32_t gen = cqe->user_data >> 32;
+ int res = cqe->res;
+
+ /* user_data -1 is a remove that we are not atm. interested in */
+ if (cqe->user_data == (uint64_t)-1)
+ return;
+
+ assert (("libev: io_uring fd must be in-bounds", fd >= 0 && fd < anfdmax));
+
+ /* documentation lies, of course. the result value is NOT like
+ * normal syscalls, but like linux raw syscalls, i.e. negative
+ * error numbers. fortunate, as otherwise there would be no way
+ * to get error codes at all. still, why not document this?
+ */
+
+ /* ignore event if generation doesn't match */
+ /* other than skipping removal events, */
+ /* this should actually be very rare */
+ if (ecb_expect_false (gen != (uint32_t)anfds [fd].egen))
+ return;
+
+ if (ecb_expect_false (res < 0))
+ {
+ /*TODO: EINVAL handling (was something failed with this fd)*/
+
+ if (res == -EBADF)
+ {
+ assert (("libev: event loop rejected bad fd", res != -EBADF));
+ fd_kill (EV_A_ fd);
+ }
+ else
+ {
+ errno = -res;
+ ev_syserr ("(libev) IORING_OP_POLL_ADD");
+ }
+
+ return;
+ }
+
+ /* feed events, we do not expect or handle POLLNVAL */
+ fd_event (
+ EV_A_
+ fd,
+ (res & (POLLOUT | POLLERR | POLLHUP) ? EV_WRITE : 0)
+ | (res & (POLLIN | POLLERR | POLLHUP) ? EV_READ : 0)
+ );
+
+ /* io_uring is oneshot, so we need to re-arm the fd next iteration */
+ /* this also means we usually have to do at least one syscall per iteration */
+ anfds [fd].events = 0;
+ fd_change (EV_A_ fd, EV_ANFD_REIFY);
+}
+
+/* called when the event queue overflows */
+ecb_cold
+static void
+iouring_overflow (EV_P)
+{
+ /* we have two options: resize the queue (by tearing down
+ * everything and recreating it), or live with it
+ * and poll.
+ * we implement this by resizing the queue, and, if that fails,
+ * we just recreate the state on every failure, which
+ * kind of is a very inefficient poll.
+ * one danger is, due to the bias toward lower fds,
+ * we will only really get events for those, so
+ * maybe we need a poll() fallback, after all.
+ */
+ /*EV_CQ_VAR (overflow) = 0;*/ /* need to do this if we keep the state and poll manually */
+
+ fd_rearm_all (EV_A);
+
+ /* we double the size until we hit the hard-to-probe maximum */
+ if (!iouring_max_entries)
+ {
+ iouring_entries <<= 1;
+ iouring_fork (EV_A);
+ }
+ else
+ {
+ /* we hit the kernel limit, we should fall back to something else.
+ * we can either poll() a few times and hope for the best,
+ * poll always, or switch to epoll.
+ * TODO: is this necessary with newer kernels?
+ */
+
+ iouring_internal_destroy (EV_A);
+
+ /* this should make it so that on return, we don't call any uring functions */
+ iouring_to_submit = 0;
+
+ for (;;)
+ {
+ backend = epoll_init (EV_A_ 0);
+
+ if (backend)
+ break;
+
+ ev_syserr ("(libev) iouring switch to epoll");
+ }
+ }
+}
+
+/* handle any events in the completion queue, return true if there were any */
+static int
+iouring_handle_cq (EV_P)
+{
+ unsigned head, tail, mask;
+
+ head = EV_CQ_VAR (head);
+ ECB_MEMORY_FENCE_ACQUIRE;
+ tail = EV_CQ_VAR (tail);
+
+ if (head == tail)
+ return 0;
+
+ /* it can only overflow if we have events, yes, yes? */
+ if (ecb_expect_false (EV_CQ_VAR (overflow)))
+ {
+ iouring_overflow (EV_A);
+ return 1;
+ }
+
+ mask = EV_CQ_VAR (ring_mask);
+
+ do
+ iouring_process_cqe (EV_A_ &EV_CQES [head++ & mask]);
+ while (head != tail);
+
+ EV_CQ_VAR (head) = head;
+ ECB_MEMORY_FENCE_RELEASE;
+
+ return 1;
+}
+
+static void
+iouring_poll (EV_P_ ev_tstamp timeout)
+{
+ /* if we have events, no need for extra syscalls, but we might have to queue events */
+ /* we also clear the timeout if there are outstanding fdchanges */
+ /* the latter should only happen if both the sq and cq are full, most likely */
+ /* because we have a lot of event sources that immediately complete */
+ /* TODO: fdchangecnt is always 0 because fd_reify does not have two buffers yet */
+ if (iouring_handle_cq (EV_A) || fdchangecnt)
+ timeout = EV_TS_CONST (0.);
+ else
+ /* no events, so maybe wait for some */
+ iouring_tfd_update (EV_A_ timeout);
+
+ /* only enter the kernel if we have something to submit, or we need to wait */
+ if (timeout || iouring_to_submit)
+ {
+ int res = iouring_enter (EV_A_ timeout);
+
+ if (ecb_expect_false (res < 0))
+ if (errno == EINTR)
+ /* ignore */;
+ else if (errno == EBUSY)
+ /* cq full, cannot submit - should be rare because we flush the cq first, so simply ignore */;
+ else
+ ev_syserr ("(libev) iouring setup");
+ else
+ iouring_handle_cq (EV_A);
+ }
+}
+
+inline_size
+int
+iouring_init (EV_P_ int flags)
+{
+ iouring_entries = IOURING_INIT_ENTRIES;
+ iouring_max_entries = 0;
+
+ if (iouring_internal_init (EV_A) < 0)
+ {
+ iouring_internal_destroy (EV_A);
+ return 0;
+ }
+
+ ev_io_init (&iouring_tfd_w, iouring_tfd_cb, iouring_tfd, EV_READ);
+ ev_set_priority (&iouring_tfd_w, EV_MINPRI);
+ ev_io_start (EV_A_ &iouring_tfd_w);
+ ev_unref (EV_A); /* watcher should not keep loop alive */
+
+ backend_modify = iouring_modify;
+ backend_poll = iouring_poll;
+
+ return EVBACKEND_IOURING;
+}
+
+inline_size
+void
+iouring_destroy (EV_P)
+{
+ iouring_internal_destroy (EV_A);
+}
+
diff --git a/third_party/libev/ev_kqueue.c b/third_party/libev/ev_kqueue.c
index 0c05ab9e7..69c5147f1 100644
--- a/third_party/libev/ev_kqueue.c
+++ b/third_party/libev/ev_kqueue.c
@@ -1,7 +1,7 @@
/*
* libev kqueue backend
*
- * Copyright (c) 2007,2008,2009,2010,2011,2012,2013 Marc Alexander Lehmann <libev at schmorp.de>
+ * Copyright (c) 2007,2008,2009,2010,2011,2012,2013,2016,2019 Marc Alexander Lehmann <libev at schmorp.de>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modifica-
@@ -48,7 +48,7 @@ void
kqueue_change (EV_P_ int fd, int filter, int flags, int fflags)
{
++kqueue_changecnt;
- array_needsize (struct kevent, kqueue_changes, kqueue_changemax, kqueue_changecnt, EMPTY2);
+ array_needsize (struct kevent, kqueue_changes, kqueue_changemax, kqueue_changecnt, array_needsize_noinit);
EV_SET (&kqueue_changes [kqueue_changecnt - 1], fd, filter, flags, fflags, 0, 0);
}
@@ -103,10 +103,10 @@ kqueue_poll (EV_P_ ev_tstamp timeout)
EV_ACQUIRE_CB;
kqueue_changecnt = 0;
- if (expect_false (res < 0))
+ if (ecb_expect_false (res < 0))
{
if (errno != EINTR)
- ev_syserr ("(libev) kevent");
+ ev_syserr ("(libev) kqueue kevent");
return;
}
@@ -115,7 +115,7 @@ kqueue_poll (EV_P_ ev_tstamp timeout)
{
int fd = kqueue_events [i].ident;
- if (expect_false (kqueue_events [i].flags & EV_ERROR))
+ if (ecb_expect_false (kqueue_events [i].flags & EV_ERROR))
{
int err = kqueue_events [i].data;
@@ -129,10 +129,16 @@ kqueue_poll (EV_P_ ev_tstamp timeout)
if (fd_valid (fd))
kqueue_modify (EV_A_ fd, 0, anfds [fd].events);
else
- fd_kill (EV_A_ fd);
+ {
+ assert (("libev: kqueue found invalid fd", 0));
+ fd_kill (EV_A_ fd);
+ }
}
else /* on all other errors, we error out on the fd */
- fd_kill (EV_A_ fd);
+ {
+ assert (("libev: kqueue found invalid fd", 0));
+ fd_kill (EV_A_ fd);
+ }
}
}
else
@@ -145,7 +151,7 @@ kqueue_poll (EV_P_ ev_tstamp timeout)
);
}
- if (expect_false (res == kqueue_eventmax))
+ if (ecb_expect_false (res == kqueue_eventmax))
{
ev_free (kqueue_events);
kqueue_eventmax = array_nextsize (sizeof (struct kevent), kqueue_eventmax, kqueue_eventmax + 1);
@@ -164,7 +170,7 @@ kqueue_init (EV_P_ int flags)
fcntl (backend_fd, F_SETFD, FD_CLOEXEC); /* not sure if necessary, hopefully doesn't hurt */
- backend_mintime = 1e-9; /* apparently, they did the right thing in freebsd */
+ backend_mintime = EV_TS_CONST (1e-9); /* apparently, they did the right thing in freebsd */
backend_modify = kqueue_modify;
backend_poll = kqueue_poll;
diff --git a/third_party/libev/ev_linuxaio.c b/third_party/libev/ev_linuxaio.c
new file mode 100644
index 000000000..4687a703e
--- /dev/null
+++ b/third_party/libev/ev_linuxaio.c
@@ -0,0 +1,620 @@
+/*
+ * libev linux aio fd activity backend
+ *
+ * Copyright (c) 2019 Marc Alexander Lehmann <libev at schmorp.de>
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without modifica-
+ * tion, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER-
+ * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE-
+ * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH-
+ * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * Alternatively, the contents of this file may be used under the terms of
+ * the GNU General Public License ("GPL") version 2 or any later version,
+ * in which case the provisions of the GPL are applicable instead of
+ * the above. If you wish to allow the use of your version of this file
+ * only under the terms of the GPL and not to allow others to use your
+ * version of this file under the BSD license, indicate your decision
+ * by deleting the provisions above and replace them with the notice
+ * and other provisions required by the GPL. If you do not delete the
+ * provisions above, a recipient may use your version of this file under
+ * either the BSD or the GPL.
+ */
+
+/*
+ * general notes about linux aio:
+ *
+ * a) at first, the linux aio IOCB_CMD_POLL functionality introduced in
+ * 4.18 looks too good to be true: both watchers and events can be
+ * batched, and events can even be handled in userspace using
+ * a ring buffer shared with the kernel. watchers can be canceled
+ * regardless of whether the fd has been closed. no problems with fork.
+ * ok, the ring buffer is 200% undocumented (there isn't even a
+ * header file), but otherwise, it's pure bliss!
+ * b) ok, watchers are one-shot, so you have to re-arm active ones
+ * on every iteration. so much for syscall-less event handling,
+ * but at least these re-arms can be batched, no big deal, right?
+ * c) well, linux as usual: the documentation lies to you: io_submit
+ * sometimes returns EINVAL because the kernel doesn't feel like
+ * handling your poll mask - ttys can be polled for POLLOUT,
+ * POLLOUT|POLLIN, but polling for POLLIN fails. just great,
+ * so we have to fall back to something else (hello, epoll),
+ * but at least the fallback can be slow, because these are
+ * exceptional cases, right?
+ * d) hmm, you have to tell the kernel the maximum number of watchers
+ * you want to queue when initialising the aio context. but of
+ * course the real limit is magically calculated in the kernel, and
+ * is often higher than we asked for. so we just have to destroy
+ * the aio context and re-create it a bit larger if we hit the limit.
+ * (starts to remind you of epoll? well, it's a bit more deterministic
+ * and less gambling, but still ugly as hell).
+ * e) that's when you find out you can also hit an arbitrary system-wide
+ * limit. or the kernel simply doesn't want to handle your watchers.
+ * what the fuck do we do then? you guessed it, in the middle
+ * of event handling we have to switch to 100% epoll polling. and
+ * that had better be as fast as normal epoll polling, so you practically
+ * have to use the normal epoll backend with all its quirks.
+ * f) end result of this train wreck: it inherits all the disadvantages
+ * from epoll, while adding a number on its own. why even bother to use
+ * it? because if conditions are right and your fds are supported and you
+ * don't hit a limit, this backend is actually faster, doesn't gamble with
+ * your fds, batches watchers and events and doesn't require costly state
+ * recreates. well, until it does.
+ * g) all of this makes this backend use almost twice as much code as epoll.
+ * which in turn uses twice as much code as poll. and that's not counting
+ * the fact that this backend also depends on the epoll backend, making
+ * it three times as much code as poll, or kqueue.
+ * h) bleah. why can't linux just do kqueue. sure kqueue is ugly, but by now
+ * it's clear that whatever linux comes up with is far, far, far worse.
+ */
+
+#include <sys/time.h> /* actually linux/time.h, but we must assume they are compatible */
+#include <poll.h>
+#include <linux/aio_abi.h>
+
+/*****************************************************************************/
+/* syscall wrapdadoop - this section has the raw api/abi definitions */
+
+#include <sys/syscall.h> /* no glibc wrappers */
+
+/* aio_abi.h is not versioned in any way, so we cannot test for its existence */
+#define IOCB_CMD_POLL 5
+
+/* taken from linux/fs/aio.c. yup, that's a .c file.
+ * not only is this totally undocumented, not even the source code
+ * can tell you what the future semantics of compat_features and
+ * incompat_features are, or what header_length actually is for.
+ */
+#define AIO_RING_MAGIC 0xa10a10a1
+#define EV_AIO_RING_INCOMPAT_FEATURES 0
+struct aio_ring
+{
+ unsigned id; /* kernel internal index number */
+ unsigned nr; /* number of io_events */
+ unsigned head; /* Written to by userland or by kernel. */
+ unsigned tail;
+
+ unsigned magic;
+ unsigned compat_features;
+ unsigned incompat_features;
+ unsigned header_length; /* size of aio_ring */
+
+ struct io_event io_events[0];
+};
+
+inline_size
+int
+evsys_io_setup (unsigned nr_events, aio_context_t *ctx_idp)
+{
+ return ev_syscall2 (SYS_io_setup, nr_events, ctx_idp);
+}
+
+inline_size
+int
+evsys_io_destroy (aio_context_t ctx_id)
+{
+ return ev_syscall1 (SYS_io_destroy, ctx_id);
+}
+
+inline_size
+int
+evsys_io_submit (aio_context_t ctx_id, long nr, struct iocb *cbp[])
+{
+ return ev_syscall3 (SYS_io_submit, ctx_id, nr, cbp);
+}
+
+inline_size
+int
+evsys_io_cancel (aio_context_t ctx_id, struct iocb *cbp, struct io_event *result)
+{
+ return ev_syscall3 (SYS_io_cancel, ctx_id, cbp, result);
+}
+
+inline_size
+int
+evsys_io_getevents (aio_context_t ctx_id, long min_nr, long nr, struct io_event *events, struct timespec *timeout)
+{
+ return ev_syscall5 (SYS_io_getevents, ctx_id, min_nr, nr, events, timeout);
+}
+
+/*****************************************************************************/
+/* actual backend implementation */
+
+ecb_cold
+static int
+linuxaio_nr_events (EV_P)
+{
+ /* we start with 15 iocbs and increase from there
+ * that's tiny, but the kernel has a rather low system-wide
+ * limit that can be reached quickly, so let's be parsimonious
+ * with this resource.
+ * Rest assured, the kernel generously rounds up small and big numbers
+ * in different ways (but doesn't seem to charge you for it).
+ * The 15 here is because the kernel usually has a power of two as aio-max-nr,
+ * and this helps to take advantage of that limit.
+ */
+
+ /* we try to fill 4kB pages exactly.
+ * the ring buffer header is 32 bytes, every io event is 32 bytes.
+ * the kernel takes the io requests number, doubles it, adds 2
+ * and adds the ring buffer.
+ * the way we use this is by starting low, and then roughly doubling the
+ * size each time we hit a limit.
+ */
+
+ int requests = 15 << linuxaio_iteration;
+ int one_page = (4096
+ / sizeof (struct io_event) ) / 2; /* how many fit into one page */
+ int first_page = ((4096 - sizeof (struct aio_ring))
+ / sizeof (struct io_event) - 2) / 2; /* how many fit into the first page */
+
+ /* if everything fits into one page, use count exactly */
+ if (requests > first_page)
+ /* otherwise, round down to full pages and add the first page */
+ requests = requests / one_page * one_page + first_page;
+
+ return requests;
+}
+
+/* we use our own wrapper structure in case we ever want to do something "clever" */
+typedef struct aniocb
+{
+ struct iocb io;
+ /*int inuse;*/
+} *ANIOCBP;
+
+inline_size
+void
+linuxaio_array_needsize_iocbp (ANIOCBP *base, int offset, int count)
+{
+ while (count--)
+ {
+ /* TODO: quite the overhead to allocate every iocb separately, maybe use our own allocator? */
+ ANIOCBP iocb = (ANIOCBP)ev_malloc (sizeof (*iocb));
+
+ /* full zero initialise is probably not required at the moment, but
+ * this is not well documented, so we better do it.
+ */
+ memset (iocb, 0, sizeof (*iocb));
+
+ iocb->io.aio_lio_opcode = IOCB_CMD_POLL;
+ iocb->io.aio_fildes = offset;
+
+ base [offset++] = iocb;
+ }
+}
+
+ecb_cold
+static void
+linuxaio_free_iocbp (EV_P)
+{
+ while (linuxaio_iocbpmax--)
+ ev_free (linuxaio_iocbps [linuxaio_iocbpmax]);
+
+ linuxaio_iocbpmax = 0; /* next resize will completely reallocate the array, at some overhead */
+}
+
+static void
+linuxaio_modify (EV_P_ int fd, int oev, int nev)
+{
+ array_needsize (ANIOCBP, linuxaio_iocbps, linuxaio_iocbpmax, fd + 1, linuxaio_array_needsize_iocbp);
+ ANIOCBP iocb = linuxaio_iocbps [fd];
+ ANFD *anfd = &anfds [fd];
+
+ if (ecb_expect_false (iocb->io.aio_reqprio < 0))
+ {
+ /* we handed this fd over to epoll, so undo this first */
+ /* we do it manually because the optimisations on epoll_modify won't do us any good */
+ epoll_ctl (backend_fd, EPOLL_CTL_DEL, fd, 0);
+ anfd->emask = 0;
+ iocb->io.aio_reqprio = 0;
+ }
+ else if (ecb_expect_false (iocb->io.aio_buf))
+ {
+ /* iocb active, so cancel it first before resubmit */
+ /* this assumes we only ever get one call per fd per loop iteration */
+ for (;;)
+ {
+ /* on all relevant kernels, io_cancel fails with EINPROGRESS on "success" */
+ if (ecb_expect_false (evsys_io_cancel (linuxaio_ctx, &iocb->io, (struct io_event *)0) == 0))
+ break;
+
+ if (ecb_expect_true (errno == EINPROGRESS))
+ break;
+
+ /* the EINPROGRESS test is for a nicer error message. clumsy. */
+ if (errno != EINTR)
+ {
+ assert (("libev: linuxaio unexpected io_cancel failed", errno != EINTR && errno != EINPROGRESS));
+ break;
+ }
+ }
+
+ /* increment generation counter to avoid handling old events */
+ ++anfd->egen;
+ }
+
+ iocb->io.aio_buf = (nev & EV_READ ? POLLIN : 0)
+ | (nev & EV_WRITE ? POLLOUT : 0);
+
+ if (nev)
+ {
+ iocb->io.aio_data = (uint32_t)fd | ((__u64)(uint32_t)anfd->egen << 32);
+
+ /* queue iocb up for io_submit */
+ /* this assumes we only ever get one call per fd per loop iteration */
+ ++linuxaio_submitcnt;
+ array_needsize (struct iocb *, linuxaio_submits, linuxaio_submitmax, linuxaio_submitcnt, array_needsize_noinit);
+ linuxaio_submits [linuxaio_submitcnt - 1] = &iocb->io;
+ }
+}
+
+static void
+linuxaio_epoll_cb (EV_P_ struct ev_io *w, int revents)
+{
+ epoll_poll (EV_A_ 0);
+}
+
+inline_speed
+void
+linuxaio_fd_rearm (EV_P_ int fd)
+{
+ anfds [fd].events = 0;
+ linuxaio_iocbps [fd]->io.aio_buf = 0;
+ fd_change (EV_A_ fd, EV_ANFD_REIFY);
+}
+
+static void
+linuxaio_parse_events (EV_P_ struct io_event *ev, int nr)
+{
+ while (nr)
+ {
+ int fd = ev->data & 0xffffffff;
+ uint32_t gen = ev->data >> 32;
+ int res = ev->res;
+
+ assert (("libev: iocb fd must be in-bounds", fd >= 0 && fd < anfdmax));
+
+ /* only accept events if generation counter matches */
+ if (ecb_expect_true (gen == (uint32_t)anfds [fd].egen))
+ {
+ /* feed events, we do not expect or handle POLLNVAL */
+ fd_event (
+ EV_A_
+ fd,
+ (res & (POLLOUT | POLLERR | POLLHUP) ? EV_WRITE : 0)
+ | (res & (POLLIN | POLLERR | POLLHUP) ? EV_READ : 0)
+ );
+
+ /* linux aio is oneshot: rearm fd. TODO: this does more work than strictly needed */
+ linuxaio_fd_rearm (EV_A_ fd);
+ }
+
+ --nr;
+ ++ev;
+ }
+}
+
+/* get any events from ring buffer, return true if any were handled */
+static int
+linuxaio_get_events_from_ring (EV_P)
+{
+ struct aio_ring *ring = (struct aio_ring *)linuxaio_ctx;
+ unsigned head, tail;
+
+ /* the kernel reads and writes both of these variables, */
+ /* as a C extension, we assume that volatile use here */
+ /* both makes reads atomic and once-only */
+ head = *(volatile unsigned *)&ring->head;
+ ECB_MEMORY_FENCE_ACQUIRE;
+ tail = *(volatile unsigned *)&ring->tail;
+
+ if (head == tail)
+ return 0;
+
+ /* parse all available events, but only once, to avoid starvation */
+ if (ecb_expect_true (tail > head)) /* normal case around */
+ linuxaio_parse_events (EV_A_ ring->io_events + head, tail - head);
+ else /* wrapped around */
+ {
+ linuxaio_parse_events (EV_A_ ring->io_events + head, ring->nr - head);
+ linuxaio_parse_events (EV_A_ ring->io_events, tail);
+ }
+
+ ECB_MEMORY_FENCE_RELEASE;
+ /* as an extension to C, we hope that the volatile will make this atomic and once-only */
+ *(volatile unsigned *)&ring->head = tail;
+
+ return 1;
+}
+
+inline_size
+int
+linuxaio_ringbuf_valid (EV_P)
+{
+ struct aio_ring *ring = (struct aio_ring *)linuxaio_ctx;
+
+ return ecb_expect_true (ring->magic == AIO_RING_MAGIC)
+ && ring->incompat_features == EV_AIO_RING_INCOMPAT_FEATURES
+ && ring->header_length == sizeof (struct aio_ring); /* TODO: or use it to find io_event[0]? */
+}
+
+/* read at least one event from kernel, or timeout */
+inline_size
+void
+linuxaio_get_events (EV_P_ ev_tstamp timeout)
+{
+ struct timespec ts;
+ struct io_event ioev[8]; /* 256 octet stack space */
+ int want = 1; /* how many events to request */
+ int ringbuf_valid = linuxaio_ringbuf_valid (EV_A);
+
+ if (ecb_expect_true (ringbuf_valid))
+ {
+ /* if the ring buffer has any events, we don't wait or call the kernel at all */
+ if (linuxaio_get_events_from_ring (EV_A))
+ return;
+
+ /* if the ring buffer is empty, and we don't have a timeout, then don't call the kernel */
+ if (!timeout)
+ return;
+ }
+ else
+ /* no ringbuffer, request slightly larger batch */
+ want = sizeof (ioev) / sizeof (ioev [0]);
+
+ /* no events, so wait for some
+ * for fairness reasons, we do this in a loop, to fetch all events
+ */
+ for (;;)
+ {
+ int res;
+
+ EV_RELEASE_CB;
+
+ EV_TS_SET (ts, timeout);
+ res = evsys_io_getevents (linuxaio_ctx, 1, want, ioev, &ts);
+
+ EV_ACQUIRE_CB;
+
+ if (res < 0)
+ if (errno == EINTR)
+ /* ignored, retry */;
+ else
+ ev_syserr ("(libev) linuxaio io_getevents");
+ else if (res)
+ {
+ /* at least one event available, handle them */
+ linuxaio_parse_events (EV_A_ ioev, res);
+
+ if (ecb_expect_true (ringbuf_valid))
+ {
+ /* if we have a ring buffer, handle any remaining events in it */
+ linuxaio_get_events_from_ring (EV_A);
+
+ /* at this point, we should have handled all outstanding events */
+ break;
+ }
+ else if (res < want)
+ /* otherwise, if there were fewer events than we wanted, we assume there are no more */
+ break;
+ }
+ else
+ break; /* no events from the kernel, we are done */
+
+ timeout = EV_TS_CONST (0.); /* only wait in the first iteration */
+ }
+}
+
+inline_size
+int
+linuxaio_io_setup (EV_P)
+{
+ linuxaio_ctx = 0;
+ return evsys_io_setup (linuxaio_nr_events (EV_A), &linuxaio_ctx);
+}
+
+static void
+linuxaio_poll (EV_P_ ev_tstamp timeout)
+{
+ int submitted;
+
+ /* first phase: submit new iocbs */
+
+ /* io_submit might return less than the requested number of iocbs */
+ /* this is, afaics, only because of errors, but we go by the book and use a loop, */
+ /* which allows us to pinpoint the erroneous iocb */
+ for (submitted = 0; submitted < linuxaio_submitcnt; )
+ {
+ int res = evsys_io_submit (linuxaio_ctx, linuxaio_submitcnt - submitted, linuxaio_submits + submitted);
+
+ if (ecb_expect_false (res < 0))
+ if (errno == EINVAL)
+ {
+ /* This happens for unsupported fds, officially, but in my testing,
+ * also randomly happens for supported fds. We fall back to good old
+ * poll() here, under the assumption that this is a very rare case.
+ * See https://lore.kernel.org/patchwork/patch/1047453/ to see
+ * discussion about such a case (ttys) where polling for POLLIN
+ * fails but POLLIN|POLLOUT works.
+ */
+ struct iocb *iocb = linuxaio_submits [submitted];
+ epoll_modify (EV_A_ iocb->aio_fildes, 0, anfds [iocb->aio_fildes].events);
+ iocb->aio_reqprio = -1; /* mark iocb as epoll */
+
+ res = 1; /* skip this iocb - another iocb, another chance */
+ }
+ else if (errno == EAGAIN)
+ {
+ /* This happens when the ring buffer is full, or some other shit we
+ * don't know and isn't documented. Most likely because we have too
+ * many requests and linux aio can't be assed to handle them.
+ * In this case, we try to allocate a larger ring buffer, freeing
+ * ours first. This might fail, in which case we have to fall back to 100%
+ * epoll.
+ * God, how I hate linux not getting its act together. Ever.
+ */
+ evsys_io_destroy (linuxaio_ctx);
+ linuxaio_submitcnt = 0;
+
+ /* rearm all fds with active iocbs */
+ {
+ int fd;
+ for (fd = 0; fd < linuxaio_iocbpmax; ++fd)
+ if (linuxaio_iocbps [fd]->io.aio_buf)
+ linuxaio_fd_rearm (EV_A_ fd);
+ }
+
+ ++linuxaio_iteration;
+ if (linuxaio_io_setup (EV_A) < 0)
+ {
+ /* TODO: rearm all and recreate epoll backend from scratch */
+ /* TODO: might be more prudent? */
+
+ /* too bad, we can't get a new aio context, go 100% epoll */
+ linuxaio_free_iocbp (EV_A);
+ ev_io_stop (EV_A_ &linuxaio_epoll_w);
+ ev_ref (EV_A);
+ linuxaio_ctx = 0;
+
+ backend = EVBACKEND_EPOLL;
+ backend_modify = epoll_modify;
+ backend_poll = epoll_poll;
+ }
+
+ timeout = EV_TS_CONST (0.);
+ /* it's easiest to handle this mess in another iteration */
+ return;
+ }
+ else if (errno == EBADF)
+ {
+ assert (("libev: event loop rejected bad fd", errno != EBADF));
+ fd_kill (EV_A_ linuxaio_submits [submitted]->aio_fildes);
+
+ res = 1; /* skip this iocb */
+ }
+ else if (errno == EINTR) /* not seen in reality, not documented */
+ res = 0; /* silently ignore and retry */
+ else
+ {
+ ev_syserr ("(libev) linuxaio io_submit");
+ res = 0;
+ }
+
+ submitted += res;
+ }
+
+ linuxaio_submitcnt = 0;
+
+ /* second phase: fetch and parse events */
+
+ linuxaio_get_events (EV_A_ timeout);
+}
+
+inline_size
+int
+linuxaio_init (EV_P_ int flags)
+{
+ /* would be great to have a nice test for IOCB_CMD_POLL instead */
+ /* also: test some semi-common fd types, such as files and ttys in recommended_backends */
+ /* 4.18 introduced IOCB_CMD_POLL, 4.19 made epoll work, and we need that */
+ if (ev_linux_version () < 0x041300)
+ return 0;
+
+ if (!epoll_init (EV_A_ 0))
+ return 0;
+
+ linuxaio_iteration = 0;
+
+ if (linuxaio_io_setup (EV_A) < 0)
+ {
+ epoll_destroy (EV_A);
+ return 0;
+ }
+
+ ev_io_init (&linuxaio_epoll_w, linuxaio_epoll_cb, backend_fd, EV_READ);
+ ev_set_priority (&linuxaio_epoll_w, EV_MAXPRI);
+ ev_io_start (EV_A_ &linuxaio_epoll_w);
+ ev_unref (EV_A); /* watcher should not keep loop alive */
+
+ backend_modify = linuxaio_modify;
+ backend_poll = linuxaio_poll;
+
+ linuxaio_iocbpmax = 0;
+ linuxaio_iocbps = 0;
+
+ linuxaio_submits = 0;
+ linuxaio_submitmax = 0;
+ linuxaio_submitcnt = 0;
+
+ return EVBACKEND_LINUXAIO;
+}
+
+inline_size
+void
+linuxaio_destroy (EV_P)
+{
+ epoll_destroy (EV_A);
+ linuxaio_free_iocbp (EV_A);
+ evsys_io_destroy (linuxaio_ctx); /* fails in child, aio context is destroyed */
+}
+
+ecb_cold
+static void
+linuxaio_fork (EV_P)
+{
+ linuxaio_submitcnt = 0; /* all pointers were invalidated */
+ linuxaio_free_iocbp (EV_A); /* this frees all iocbs, which is very heavy-handed */
+ evsys_io_destroy (linuxaio_ctx); /* fails in child, aio context is destroyed */
+
+ linuxaio_iteration = 0; /* we start over in the child */
+
+ while (linuxaio_io_setup (EV_A) < 0)
+ ev_syserr ("(libev) linuxaio io_setup");
+
+ /* forking epoll should also effectively unregister all fds from the backend */
+ epoll_fork (EV_A);
+ /* epoll_fork already did this. hopefully */
+ /*fd_rearm_all (EV_A);*/
+
+ ev_io_stop (EV_A_ &linuxaio_epoll_w);
+ ev_io_set (EV_A_ &linuxaio_epoll_w, backend_fd, EV_READ);
+ ev_io_start (EV_A_ &linuxaio_epoll_w);
+}
+
diff --git a/third_party/libev/ev_poll.c b/third_party/libev/ev_poll.c
index bd742b07f..e5508ddb0 100644
--- a/third_party/libev/ev_poll.c
+++ b/third_party/libev/ev_poll.c
@@ -1,7 +1,7 @@
/*
* libev poll fd activity backend
*
- * Copyright (c) 2007,2008,2009,2010,2011 Marc Alexander Lehmann <libev at schmorp.de>
+ * Copyright (c) 2007,2008,2009,2010,2011,2016,2019 Marc Alexander Lehmann <libev at schmorp.de>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modifica-
@@ -41,10 +41,12 @@
inline_size
void
-pollidx_init (int *base, int count)
+array_needsize_pollidx (int *base, int offset, int count)
{
- /* consider using memset (.., -1, ...), which is practically guaranteed
- * to work on all systems implementing poll */
+ /* using memset (.., -1, ...) is tempting, but we try
+ * to be ultraportable
+ */
+ base += offset;
while (count--)
*base++ = -1;
}
@@ -57,14 +59,14 @@ poll_modify (EV_P_ int fd, int oev, int nev)
if (oev == nev)
return;
- array_needsize (int, pollidxs, pollidxmax, fd + 1, pollidx_init);
+ array_needsize (int, pollidxs, pollidxmax, fd + 1, array_needsize_pollidx);
idx = pollidxs [fd];
if (idx < 0) /* need to allocate a new pollfd */
{
pollidxs [fd] = idx = pollcnt++;
- array_needsize (struct pollfd, polls, pollmax, pollcnt, EMPTY2);
+ array_needsize (struct pollfd, polls, pollmax, pollcnt, array_needsize_noinit);
polls [idx].fd = fd;
}
@@ -78,7 +80,7 @@ poll_modify (EV_P_ int fd, int oev, int nev)
{
pollidxs [fd] = -1;
- if (expect_true (idx < --pollcnt))
+ if (ecb_expect_true (idx < --pollcnt))
{
polls [idx] = polls [pollcnt];
pollidxs [polls [idx].fd] = idx;
@@ -93,10 +95,10 @@ poll_poll (EV_P_ ev_tstamp timeout)
int res;
EV_RELEASE_CB;
- res = poll (polls, pollcnt, timeout * 1e3);
+ res = poll (polls, pollcnt, EV_TS_TO_MSEC (timeout));
EV_ACQUIRE_CB;
- if (expect_false (res < 0))
+ if (ecb_expect_false (res < 0))
{
if (errno == EBADF)
fd_ebadf (EV_A);
@@ -108,14 +110,17 @@ poll_poll (EV_P_ ev_tstamp timeout)
else
for (p = polls; res; ++p)
{
- assert (("libev: poll() returned illegal result, broken BSD kernel?", p < polls + pollcnt));
+ assert (("libev: poll returned illegal result, broken BSD kernel?", p < polls + pollcnt));
- if (expect_false (p->revents)) /* this expect is debatable */
+ if (ecb_expect_false (p->revents)) /* this expect is debatable */
{
--res;
- if (expect_false (p->revents & POLLNVAL))
- fd_kill (EV_A_ p->fd);
+ if (ecb_expect_false (p->revents & POLLNVAL))
+ {
+ assert (("libev: poll found invalid fd in poll set", 0));
+ fd_kill (EV_A_ p->fd);
+ }
else
fd_event (
EV_A_
@@ -131,7 +136,7 @@ inline_size
int
poll_init (EV_P_ int flags)
{
- backend_mintime = 1e-3;
+ backend_mintime = EV_TS_CONST (1e-3);
backend_modify = poll_modify;
backend_poll = poll_poll;
diff --git a/third_party/libev/ev_port.c b/third_party/libev/ev_port.c
index c7b0b70c1..f4cd9d99c 100644
--- a/third_party/libev/ev_port.c
+++ b/third_party/libev/ev_port.c
@@ -1,7 +1,7 @@
/*
* libev solaris event port backend
*
- * Copyright (c) 2007,2008,2009,2010,2011 Marc Alexander Lehmann <libev at schmorp.de>
+ * Copyright (c) 2007,2008,2009,2010,2011,2019 Marc Alexander Lehmann <libev at schmorp.de>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modifica-
@@ -69,7 +69,10 @@ port_associate_and_check (EV_P_ int fd, int ev)
)
{
if (errno == EBADFD)
- fd_kill (EV_A_ fd);
+ {
+ assert (("libev: port_associate found invalid fd", errno != EBADFD));
+ fd_kill (EV_A_ fd);
+ }
else
ev_syserr ("(libev) port_associate");
}
@@ -129,7 +132,7 @@ port_poll (EV_P_ ev_tstamp timeout)
}
}
- if (expect_false (nget == port_eventmax))
+ if (ecb_expect_false (nget == port_eventmax))
{
ev_free (port_events);
port_eventmax = array_nextsize (sizeof (port_event_t), port_eventmax, port_eventmax + 1);
@@ -151,11 +154,11 @@ port_init (EV_P_ int flags)
/* if my reading of the opensolaris kernel sources are correct, then
* opensolaris does something very stupid: it checks if the time has already
- * elapsed and doesn't round up if that is the case,m otherwise it DOES round
+ * elapsed and doesn't round up if that is the case, otherwise it DOES round
* up. Since we can't know what the case is, we need to guess by using a
* "large enough" timeout. Normally, 1e-9 would be correct.
*/
- backend_mintime = 1e-3; /* needed to compensate for port_getn returning early */
+ backend_mintime = EV_TS_CONST (1e-3); /* needed to compensate for port_getn returning early */
backend_modify = port_modify;
backend_poll = port_poll;
diff --git a/third_party/libev/ev_select.c b/third_party/libev/ev_select.c
index ed1fc7ad9..b862c8113 100644
--- a/third_party/libev/ev_select.c
+++ b/third_party/libev/ev_select.c
@@ -108,7 +108,7 @@ select_modify (EV_P_ int fd, int oev, int nev)
int word = fd / NFDBITS;
fd_mask mask = 1UL << (fd % NFDBITS);
- if (expect_false (vec_max <= word))
+ if (ecb_expect_false (vec_max <= word))
{
int new_max = word + 1;
@@ -171,7 +171,7 @@ select_poll (EV_P_ ev_tstamp timeout)
#endif
EV_ACQUIRE_CB;
- if (expect_false (res < 0))
+ if (ecb_expect_false (res < 0))
{
#if EV_SELECT_IS_WINSOCKET
errno = WSAGetLastError ();
@@ -197,7 +197,7 @@ select_poll (EV_P_ ev_tstamp timeout)
{
if (timeout)
{
- unsigned long ms = timeout * 1e3;
+ unsigned long ms = EV_TS_TO_MSEC (timeout);
Sleep (ms ? ms : 1);
}
@@ -236,7 +236,7 @@ select_poll (EV_P_ ev_tstamp timeout)
if (FD_ISSET (handle, (fd_set *)vec_eo)) events |= EV_WRITE;
#endif
- if (expect_true (events))
+ if (ecb_expect_true (events))
fd_event (EV_A_ fd, events);
}
}
@@ -262,7 +262,7 @@ select_poll (EV_P_ ev_tstamp timeout)
events |= word_r & mask ? EV_READ : 0;
events |= word_w & mask ? EV_WRITE : 0;
- if (expect_true (events))
+ if (ecb_expect_true (events))
fd_event (EV_A_ word * NFDBITS + bit, events);
}
}
@@ -275,7 +275,7 @@ inline_size
int
select_init (EV_P_ int flags)
{
- backend_mintime = 1e-6;
+ backend_mintime = EV_TS_CONST (1e-6);
backend_modify = select_modify;
backend_poll = select_poll;
diff --git a/third_party/libev/ev_vars.h b/third_party/libev/ev_vars.h
index 04d4db16f..fb0c58316 100644
--- a/third_party/libev/ev_vars.h
+++ b/third_party/libev/ev_vars.h
@@ -1,7 +1,7 @@
/*
* loop member variable declarations
*
- * Copyright (c) 2007,2008,2009,2010,2011,2012,2013 Marc Alexander Lehmann <libev at schmorp.de>
+ * Copyright (c) 2007,2008,2009,2010,2011,2012,2013,2019 Marc Alexander Lehmann <libev at schmorp.de>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modifica-
@@ -107,6 +107,46 @@ VARx(int, epoll_epermcnt)
VARx(int, epoll_epermmax)
#endif
+#if EV_USE_LINUXAIO || EV_GENWRAP
+VARx(aio_context_t, linuxaio_ctx)
+VARx(int, linuxaio_iteration)
+VARx(struct aniocb **, linuxaio_iocbps)
+VARx(int, linuxaio_iocbpmax)
+VARx(struct iocb **, linuxaio_submits)
+VARx(int, linuxaio_submitcnt)
+VARx(int, linuxaio_submitmax)
+VARx(ev_io, linuxaio_epoll_w)
+#endif
+
+#if EV_USE_IOURING || EV_GENWRAP
+VARx(int, iouring_fd)
+VARx(unsigned, iouring_to_submit);
+VARx(int, iouring_entries)
+VARx(int, iouring_max_entries)
+VARx(void *, iouring_sq_ring)
+VARx(void *, iouring_cq_ring)
+VARx(void *, iouring_sqes)
+VARx(uint32_t, iouring_sq_ring_size)
+VARx(uint32_t, iouring_cq_ring_size)
+VARx(uint32_t, iouring_sqes_size)
+VARx(uint32_t, iouring_sq_head)
+VARx(uint32_t, iouring_sq_tail)
+VARx(uint32_t, iouring_sq_ring_mask)
+VARx(uint32_t, iouring_sq_ring_entries)
+VARx(uint32_t, iouring_sq_flags)
+VARx(uint32_t, iouring_sq_dropped)
+VARx(uint32_t, iouring_sq_array)
+VARx(uint32_t, iouring_cq_head)
+VARx(uint32_t, iouring_cq_tail)
+VARx(uint32_t, iouring_cq_ring_mask)
+VARx(uint32_t, iouring_cq_ring_entries)
+VARx(uint32_t, iouring_cq_overflow)
+VARx(uint32_t, iouring_cq_cqes)
+VARx(ev_tstamp, iouring_tfd_to)
+VARx(int, iouring_tfd)
+VARx(ev_io, iouring_tfd_w)
+#endif
+
#if EV_USE_KQUEUE || EV_GENWRAP
VARx(pid_t, kqueue_fd_pid)
VARx(struct kevent *, kqueue_changes)
@@ -187,6 +227,11 @@ VARx(ev_io, sigfd_w)
VARx(sigset_t, sigfd_set)
#endif
+#if EV_USE_TIMERFD || EV_GENWRAP
+VARx(int, timerfd) /* timerfd for time jump detection */
+VARx(ev_io, timerfd_w)
+#endif
+
VARx(unsigned int, origflags) /* original loop flags */
#if EV_FEATURE_API || EV_GENWRAP
@@ -195,8 +240,8 @@ VARx(unsigned int, loop_depth) /* #ev_run enters - #ev_run leaves */
VARx(void *, userdata)
/* C++ doesn't support the ev_loop_callback typedef here. stinks. */
-VAR (release_cb, void (*release_cb)(EV_P) EV_THROW)
-VAR (acquire_cb, void (*acquire_cb)(EV_P) EV_THROW)
+VAR (release_cb, void (*release_cb)(EV_P) EV_NOEXCEPT)
+VAR (acquire_cb, void (*acquire_cb)(EV_P) EV_NOEXCEPT)
VAR (invoke_cb , ev_loop_callback invoke_cb)
#endif
diff --git a/third_party/libev/ev_win32.c b/third_party/libev/ev_win32.c
index fd671356a..97344c3e1 100644
--- a/third_party/libev/ev_win32.c
+++ b/third_party/libev/ev_win32.c
@@ -154,8 +154,8 @@ ev_time (void)
ui.u.LowPart = ft.dwLowDateTime;
ui.u.HighPart = ft.dwHighDateTime;
- /* msvc cannot convert ulonglong to double... yes, it is that sucky */
- return (LONGLONG)(ui.QuadPart - 116444736000000000) * 1e-7;
+ /* also, msvc cannot convert ulonglong to double... yes, it is that sucky */
+ return EV_TS_FROM_USEC (((LONGLONG)(ui.QuadPart - 116444736000000000) * 1e-1));
}
#endif
diff --git a/third_party/libev/ev_wrap.h b/third_party/libev/ev_wrap.h
index ad989ea7d..45d793ced 100644
--- a/third_party/libev/ev_wrap.h
+++ b/third_party/libev/ev_wrap.h
@@ -44,12 +44,46 @@
#define invoke_cb ((loop)->invoke_cb)
#define io_blocktime ((loop)->io_blocktime)
#define iocp ((loop)->iocp)
+#define iouring_cq_cqes ((loop)->iouring_cq_cqes)
+#define iouring_cq_head ((loop)->iouring_cq_head)
+#define iouring_cq_overflow ((loop)->iouring_cq_overflow)
+#define iouring_cq_ring ((loop)->iouring_cq_ring)
+#define iouring_cq_ring_entries ((loop)->iouring_cq_ring_entries)
+#define iouring_cq_ring_mask ((loop)->iouring_cq_ring_mask)
+#define iouring_cq_ring_size ((loop)->iouring_cq_ring_size)
+#define iouring_cq_tail ((loop)->iouring_cq_tail)
+#define iouring_entries ((loop)->iouring_entries)
+#define iouring_fd ((loop)->iouring_fd)
+#define iouring_max_entries ((loop)->iouring_max_entries)
+#define iouring_sq_array ((loop)->iouring_sq_array)
+#define iouring_sq_dropped ((loop)->iouring_sq_dropped)
+#define iouring_sq_flags ((loop)->iouring_sq_flags)
+#define iouring_sq_head ((loop)->iouring_sq_head)
+#define iouring_sq_ring ((loop)->iouring_sq_ring)
+#define iouring_sq_ring_entries ((loop)->iouring_sq_ring_entries)
+#define iouring_sq_ring_mask ((loop)->iouring_sq_ring_mask)
+#define iouring_sq_ring_size ((loop)->iouring_sq_ring_size)
+#define iouring_sq_tail ((loop)->iouring_sq_tail)
+#define iouring_sqes ((loop)->iouring_sqes)
+#define iouring_sqes_size ((loop)->iouring_sqes_size)
+#define iouring_tfd ((loop)->iouring_tfd)
+#define iouring_tfd_to ((loop)->iouring_tfd_to)
+#define iouring_tfd_w ((loop)->iouring_tfd_w)
+#define iouring_to_submit ((loop)->iouring_to_submit)
#define kqueue_changecnt ((loop)->kqueue_changecnt)
#define kqueue_changemax ((loop)->kqueue_changemax)
#define kqueue_changes ((loop)->kqueue_changes)
#define kqueue_eventmax ((loop)->kqueue_eventmax)
#define kqueue_events ((loop)->kqueue_events)
#define kqueue_fd_pid ((loop)->kqueue_fd_pid)
+#define linuxaio_ctx ((loop)->linuxaio_ctx)
+#define linuxaio_epoll_w ((loop)->linuxaio_epoll_w)
+#define linuxaio_iocbpmax ((loop)->linuxaio_iocbpmax)
+#define linuxaio_iocbps ((loop)->linuxaio_iocbps)
+#define linuxaio_iteration ((loop)->linuxaio_iteration)
+#define linuxaio_submitcnt ((loop)->linuxaio_submitcnt)
+#define linuxaio_submitmax ((loop)->linuxaio_submitmax)
+#define linuxaio_submits ((loop)->linuxaio_submits)
#define loop_count ((loop)->loop_count)
#define loop_depth ((loop)->loop_depth)
#define loop_done ((loop)->loop_done)
@@ -89,6 +123,8 @@
#define sigfd_w ((loop)->sigfd_w)
#define timeout_blocktime ((loop)->timeout_blocktime)
#define timercnt ((loop)->timercnt)
+#define timerfd ((loop)->timerfd)
+#define timerfd_w ((loop)->timerfd_w)
#define timermax ((loop)->timermax)
#define timers ((loop)->timers)
#define userdata ((loop)->userdata)
@@ -143,12 +179,46 @@
#undef invoke_cb
#undef io_blocktime
#undef iocp
+#undef iouring_cq_cqes
+#undef iouring_cq_head
+#undef iouring_cq_overflow
+#undef iouring_cq_ring
+#undef iouring_cq_ring_entries
+#undef iouring_cq_ring_mask
+#undef iouring_cq_ring_size
+#undef iouring_cq_tail
+#undef iouring_entries
+#undef iouring_fd
+#undef iouring_max_entries
+#undef iouring_sq_array
+#undef iouring_sq_dropped
+#undef iouring_sq_flags
+#undef iouring_sq_head
+#undef iouring_sq_ring
+#undef iouring_sq_ring_entries
+#undef iouring_sq_ring_mask
+#undef iouring_sq_ring_size
+#undef iouring_sq_tail
+#undef iouring_sqes
+#undef iouring_sqes_size
+#undef iouring_tfd
+#undef iouring_tfd_to
+#undef iouring_tfd_w
+#undef iouring_to_submit
#undef kqueue_changecnt
#undef kqueue_changemax
#undef kqueue_changes
#undef kqueue_eventmax
#undef kqueue_events
#undef kqueue_fd_pid
+#undef linuxaio_ctx
+#undef linuxaio_epoll_w
+#undef linuxaio_iocbpmax
+#undef linuxaio_iocbps
+#undef linuxaio_iteration
+#undef linuxaio_submitcnt
+#undef linuxaio_submitmax
+#undef linuxaio_submits
#undef loop_count
#undef loop_depth
#undef loop_done
@@ -188,6 +258,8 @@
#undef sigfd_w
#undef timeout_blocktime
#undef timercnt
+#undef timerfd
+#undef timerfd_w
#undef timermax
#undef timers
#undef userdata
diff --git a/third_party/libev/libev.m4 b/third_party/libev/libev.m4
index 439fbde2c..f859eff27 100644
--- a/third_party/libev/libev.m4
+++ b/third_party/libev/libev.m4
@@ -2,7 +2,8 @@ dnl this file is part of libev, do not make local modifications
dnl http://software.schmorp.de/pkg/libev
dnl libev support
-AC_CHECK_HEADERS(sys/inotify.h sys/epoll.h sys/event.h port.h poll.h sys/select.h sys/eventfd.h sys/signalfd.h)
+AC_CHECK_HEADERS(sys/inotify.h sys/epoll.h sys/event.h port.h poll.h sys/timerfd.h)
+AC_CHECK_HEADERS(sys/select.h sys/eventfd.h sys/signalfd.h linux/aio_abi.h linux/fs.h)
AC_CHECK_FUNCS(inotify_init epoll_ctl kqueue port_create poll select eventfd signalfd)
@@ -35,6 +36,10 @@ AC_CHECK_FUNCS(nanosleep, [], [
fi
])
+AC_CHECK_TYPE(__kernel_rwf_t, [
+ AC_DEFINE(HAVE_KERNEL_RWF_T, 1, Define to 1 if linux/fs.h defined kernel_rwf_t)
+], [], [#include <linux/fs.h>])
+
if test -z "$LIBEV_M4_AVOID_LIBM"; then
LIBM=m
fi
diff --git a/third_party/libev/update_ev_c b/third_party/libev/update_ev_c
index b55fd7fb7..a80bfae23 100755
--- a/third_party/libev/update_ev_c
+++ b/third_party/libev/update_ev_c
@@ -2,6 +2,7 @@
(
sed -ne '1,\%/\* ECB.H BEGIN \*/%p' ev.c
+ #perl -ne 'print unless /^#if ECB_CPP/ .. /^#endif/' <~/src/libecb/ecb.h
cat ~/src/libecb/ecb.h
sed -ne '\%/\* ECB.H END \*/%,$p' ev.c
) >ev.c~ && mv ev.c~ ev.c
--
2.24.0
--
Maria Khaydich
>Monday, February 17, 2020, 13:00 +03:00 from Alexander Turenko <alexander.turenko at tarantool.org>:
>
>On Mon, Feb 17, 2020 at 10:40:52AM +0300, Konstantin Osipov wrote:
>> * Alexander Turenko < alexander.turenko at tarantool.org > [20/02/15 23:22]:
<...>
>>
>> How to update libev
>> ===================
>>
>> Remove Tarantool patches (see cvs diff -U8).
>> cvs up
>> Add patches back.
>>
>> Did the patch follow the procedure? If it did, it should clearly
>> state that it updated libev, and to which version.
>
>Agreed, it makes sense.
>
>>
>>
>> >
>> > | 4.25 Fri Dec 21 07:49:20 CET 2018
>> > | <...>
>> > | - move the darwin select workaround higher in ev.c, as newer versions of
>> > | darwin managed to break their broken select even more.
>> >
>> > http://cvs.schmorp.de/libev/Changes?view=markup
>>
>> :/
>>
>> --
>> Konstantin Osipov, Moscow, Russia