From patchwork Tue Feb 5 18:28:44 2019
X-Patchwork-Submitter: Samuel Thibault
X-Patchwork-Id: 10798347
From: Samuel Thibault
To: qemu-devel@nongnu.org, peter.maydell@linaro.org
Cc: Marc-André Lureau, Samuel Thibault, stefanha@redhat.com, jan.kiszka@siemens.com
Date: Tue, 5 Feb 2019 20:28:44 +0200
Message-Id: <20190205182848.29887-29-samuel.thibault@ens-lyon.org>
In-Reply-To: <20190205182848.29887-1-samuel.thibault@ens-lyon.org>
References: <20190205182848.29887-1-samuel.thibault@ens-lyon.org>
X-Mailer: git-send-email 2.20.1
Subject: [Qemu-devel] [PULLv3 28/32] slirp: replace global polling with per-instance & notifier
From: Marc-André Lureau

Remove hard-coded dependency on slirp in main-loop, and use a "poll"
notifier instead. The notifier is registered per slirp instance.

Signed-off-by: Marc-André Lureau
Signed-off-by: Samuel Thibault
---
 include/qemu/main-loop.h |  15 ++
 net/slirp.c              |  24 ++
 slirp/libslirp.h         |   4 +-
 slirp/slirp.c            | 555 +++++++++++++++++++--------------------
 stubs/Makefile.objs      |   1 -
 stubs/slirp.c            |  13 -
 util/main-loop.c         |  30 ++-
 7 files changed, 333 insertions(+), 309 deletions(-)
 delete mode 100644 stubs/slirp.c

diff --git a/include/qemu/main-loop.h b/include/qemu/main-loop.h
index e59f9ae1e9..f6ba78ea73 100644
--- a/include/qemu/main-loop.h
+++ b/include/qemu/main-loop.h
@@ -302,4 +302,19 @@ void qemu_fd_register(int fd);
 QEMUBH *qemu_bh_new(QEMUBHFunc *cb, void *opaque);
 void qemu_bh_schedule_idle(QEMUBH *bh);
 
+enum {
+    MAIN_LOOP_POLL_FILL,
+    MAIN_LOOP_POLL_ERR,
+    MAIN_LOOP_POLL_OK,
+};
+
+typedef struct MainLoopPoll {
+    int state;
+    uint32_t timeout;
+    GArray *pollfds;
+} MainLoopPoll;
+
+void main_loop_poll_add_notifier(Notifier *notify);
+void main_loop_poll_remove_notifier(Notifier *notify);
+
 #endif
diff --git a/net/slirp.c b/net/slirp.c
index 664ff1c002..4d55f64168 100644
--- a/net/slirp.c
+++ b/net/slirp.c
@@ -86,6 +86,7 @@ typedef struct SlirpState {
     NetClientState nc;
     QTAILQ_ENTRY(SlirpState) entry;
     Slirp *slirp;
+    Notifier poll_notifier;
     Notifier exit_notifier;
 #ifndef _WIN32
     gchar *smb_dir;
@@ -144,6 +145,7 @@ static void net_slirp_cleanup(NetClientState *nc)
     SlirpState *s = DO_UPCAST(SlirpState, nc, nc);
 
     g_slist_free_full(s->fwd, slirp_free_fwd);
+    main_loop_poll_remove_notifier(&s->poll_notifier);
     slirp_cleanup(s->slirp);
     if (s->exit_notifier.notify) {
         qemu_remove_exit_notifier(&s->exit_notifier);
@@ -209,6 +211,25 @@ static const SlirpCb slirp_cb = {
     .notify = qemu_notify_event,
 };
 
+static void net_slirp_poll_notify(Notifier *notifier, void *data)
+{
+    MainLoopPoll *poll = data;
+    SlirpState *s = container_of(notifier, SlirpState, poll_notifier);
+
+    switch (poll->state) {
+    case MAIN_LOOP_POLL_FILL:
+        slirp_pollfds_fill(s->slirp, poll->pollfds, &poll->timeout);
+        break;
+    case MAIN_LOOP_POLL_OK:
+    case MAIN_LOOP_POLL_ERR:
+        slirp_pollfds_poll(s->slirp, poll->pollfds,
+                           poll->state == MAIN_LOOP_POLL_ERR);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static int net_slirp_init(NetClientState *peer, const char *model,
                           const char *name, int restricted,
                           bool ipv4, const char *vnetwork, const char *vhost,
@@ -429,6 +450,9 @@ static int net_slirp_init(NetClientState *peer, const char *model,
                           &slirp_cb, s);
     QTAILQ_INSERT_TAIL(&slirp_stacks, s, entry);
 
+    s->poll_notifier.notify = net_slirp_poll_notify;
+    main_loop_poll_add_notifier(&s->poll_notifier);
+
     for (config = slirp_configs; config; config = config->next) {
         if (config->flags & SLIRP_CFG_HOSTFWD) {
             if (slirp_hostfwd(s, config->str, errp) < 0) {
diff --git a/slirp/libslirp.h b/slirp/libslirp.h
index 8e5d4ed11b..18d5fb0133 100644
--- a/slirp/libslirp.h
+++ b/slirp/libslirp.h
@@ -63,9 +63,9 @@ Slirp *slirp_init(int restricted, bool in_enabled, struct in_addr vnetwork,
                   void *opaque);
 void slirp_cleanup(Slirp *slirp);
 
-void slirp_pollfds_fill(GArray *pollfds, uint32_t *timeout);
+void slirp_pollfds_fill(Slirp *slirp, GArray *pollfds, uint32_t *timeout);
 
-void slirp_pollfds_poll(GArray *pollfds, int select_error);
+void slirp_pollfds_poll(Slirp *slirp, GArray *pollfds, int select_error);
 
 void slirp_input(Slirp *slirp, const uint8_t *pkt, int pkt_len);
 
diff
--git a/slirp/slirp.c b/slirp/slirp.c index 60cd8249bf..a0de8b711c 100644 --- a/slirp/slirp.c +++ b/slirp/slirp.c @@ -368,9 +368,8 @@ void slirp_cleanup(Slirp *slirp) #define CONN_CANFSEND(so) (((so)->so_state & (SS_FCANTSENDMORE|SS_ISFCONNECTED)) == SS_ISFCONNECTED) #define CONN_CANFRCV(so) (((so)->so_state & (SS_FCANTRCVMORE|SS_ISFCONNECTED)) == SS_ISFCONNECTED) -static void slirp_update_timeout(uint32_t *timeout) +static void slirp_update_timeout(Slirp *slirp, uint32_t *timeout) { - Slirp *slirp; uint32_t t; if (*timeout <= TIMEOUT_FAST) { @@ -382,370 +381,352 @@ static void slirp_update_timeout(uint32_t *timeout) /* If we have tcp timeout with slirp, then we will fill @timeout with * more precise value. */ - QTAILQ_FOREACH(slirp, &slirp_instances, entry) { - if (slirp->time_fasttimo) { - *timeout = TIMEOUT_FAST; - return; - } - if (slirp->do_slowtimo) { - t = MIN(TIMEOUT_SLOW, t); - } + if (slirp->time_fasttimo) { + *timeout = TIMEOUT_FAST; + return; + } + if (slirp->do_slowtimo) { + t = MIN(TIMEOUT_SLOW, t); } *timeout = t; } -void slirp_pollfds_fill(GArray *pollfds, uint32_t *timeout) +void slirp_pollfds_fill(Slirp *slirp, GArray *pollfds, uint32_t *timeout) { - Slirp *slirp; struct socket *so, *so_next; - if (QTAILQ_EMPTY(&slirp_instances)) { - return; - } - /* * First, TCP sockets */ - QTAILQ_FOREACH(slirp, &slirp_instances, entry) { - /* - * *_slowtimo needs calling if there are IP fragments - * in the fragment queue, or there are TCP connections active - */ - slirp->do_slowtimo = ((slirp->tcb.so_next != &slirp->tcb) || - (&slirp->ipq.ip_link != slirp->ipq.ip_link.next)); - - for (so = slirp->tcb.so_next; so != &slirp->tcb; - so = so_next) { - int events = 0; - - so_next = so->so_next; + /* + * *_slowtimo needs calling if there are IP fragments + * in the fragment queue, or there are TCP connections active + */ + slirp->do_slowtimo = ((slirp->tcb.so_next != &slirp->tcb) || + (&slirp->ipq.ip_link != slirp->ipq.ip_link.next)); - so->pollfds_idx = -1; + for (so = slirp->tcb.so_next; so != &slirp->tcb; so = so_next) { + int events = 0; - /* - * See if we need a tcp_fasttimo - */ - if (slirp->time_fasttimo == 0 && - so->so_tcpcb->t_flags & TF_DELACK) { - slirp->time_fasttimo = curtime; /* Flag when want a fasttimo */ - } + so_next = so->so_next; - /* - * NOFDREF can include still connecting to local-host, - * newly socreated() sockets etc. Don't want to select these. - */ - if (so->so_state & SS_NOFDREF || so->s == -1) { - continue; - } + so->pollfds_idx = -1; - /* - * Set for reading sockets which are accepting - */ - if (so->so_state & SS_FACCEPTCONN) { - GPollFD pfd = { - .fd = so->s, - .events = G_IO_IN | G_IO_HUP | G_IO_ERR, - }; - so->pollfds_idx = pollfds->len; - g_array_append_val(pollfds, pfd); - continue; - } + /* + * See if we need a tcp_fasttimo + */ + if (slirp->time_fasttimo == 0 && + so->so_tcpcb->t_flags & TF_DELACK) { + slirp->time_fasttimo = curtime; /* Flag when want a fasttimo */ + } - /* - * Set for writing sockets which are connecting - */ - if (so->so_state & SS_ISFCONNECTING) { - GPollFD pfd = { - .fd = so->s, - .events = G_IO_OUT | G_IO_ERR, - }; - so->pollfds_idx = pollfds->len; - g_array_append_val(pollfds, pfd); - continue; - } + /* + * NOFDREF can include still connecting to local-host, + * newly socreated() sockets etc. Don't want to select these. 
+ */ + if (so->so_state & SS_NOFDREF || so->s == -1) { + continue; + } - /* - * Set for writing if we are connected, can send more, and - * we have something to send - */ - if (CONN_CANFSEND(so) && so->so_rcv.sb_cc) { - events |= G_IO_OUT | G_IO_ERR; - } + /* + * Set for reading sockets which are accepting + */ + if (so->so_state & SS_FACCEPTCONN) { + GPollFD pfd = { + .fd = so->s, + .events = G_IO_IN | G_IO_HUP | G_IO_ERR, + }; + so->pollfds_idx = pollfds->len; + g_array_append_val(pollfds, pfd); + continue; + } - /* - * Set for reading (and urgent data) if we are connected, can - * receive more, and we have room for it XXX /2 ? - */ - if (CONN_CANFRCV(so) && - (so->so_snd.sb_cc < (so->so_snd.sb_datalen/2))) { - events |= G_IO_IN | G_IO_HUP | G_IO_ERR | G_IO_PRI; - } + /* + * Set for writing sockets which are connecting + */ + if (so->so_state & SS_ISFCONNECTING) { + GPollFD pfd = { + .fd = so->s, + .events = G_IO_OUT | G_IO_ERR, + }; + so->pollfds_idx = pollfds->len; + g_array_append_val(pollfds, pfd); + continue; + } - if (events) { - GPollFD pfd = { - .fd = so->s, - .events = events, - }; - so->pollfds_idx = pollfds->len; - g_array_append_val(pollfds, pfd); - } + /* + * Set for writing if we are connected, can send more, and + * we have something to send + */ + if (CONN_CANFSEND(so) && so->so_rcv.sb_cc) { + events |= G_IO_OUT | G_IO_ERR; } /* - * UDP sockets + * Set for reading (and urgent data) if we are connected, can + * receive more, and we have room for it XXX /2 ? */ - for (so = slirp->udb.so_next; so != &slirp->udb; - so = so_next) { - so_next = so->so_next; + if (CONN_CANFRCV(so) && + (so->so_snd.sb_cc < (so->so_snd.sb_datalen/2))) { + events |= G_IO_IN | G_IO_HUP | G_IO_ERR | G_IO_PRI; + } - so->pollfds_idx = -1; + if (events) { + GPollFD pfd = { + .fd = so->s, + .events = events, + }; + so->pollfds_idx = pollfds->len; + g_array_append_val(pollfds, pfd); + } + } - /* - * See if it's timed out - */ - if (so->so_expire) { - if (so->so_expire <= curtime) { - udp_detach(so); - continue; - } else { - slirp->do_slowtimo = true; /* Let socket expire */ - } - } + /* + * UDP sockets + */ + for (so = slirp->udb.so_next; so != &slirp->udb; so = so_next) { + so_next = so->so_next; - /* - * When UDP packets are received from over the - * link, they're sendto()'d straight away, so - * no need for setting for writing - * Limit the number of packets queued by this session - * to 4. Note that even though we try and limit this - * to 4 packets, the session could have more queued - * if the packets needed to be fragmented - * (XXX <= 4 ?) - */ - if ((so->so_state & SS_ISFCONNECTED) && so->so_queued <= 4) { - GPollFD pfd = { - .fd = so->s, - .events = G_IO_IN | G_IO_HUP | G_IO_ERR, - }; - so->pollfds_idx = pollfds->len; - g_array_append_val(pollfds, pfd); + so->pollfds_idx = -1; + + /* + * See if it's timed out + */ + if (so->so_expire) { + if (so->so_expire <= curtime) { + udp_detach(so); + continue; + } else { + slirp->do_slowtimo = true; /* Let socket expire */ } } /* - * ICMP sockets + * When UDP packets are received from over the + * link, they're sendto()'d straight away, so + * no need for setting for writing + * Limit the number of packets queued by this session + * to 4. Note that even though we try and limit this + * to 4 packets, the session could have more queued + * if the packets needed to be fragmented + * (XXX <= 4 ?) 
*/ - for (so = slirp->icmp.so_next; so != &slirp->icmp; - so = so_next) { - so_next = so->so_next; + if ((so->so_state & SS_ISFCONNECTED) && so->so_queued <= 4) { + GPollFD pfd = { + .fd = so->s, + .events = G_IO_IN | G_IO_HUP | G_IO_ERR, + }; + so->pollfds_idx = pollfds->len; + g_array_append_val(pollfds, pfd); + } + } - so->pollfds_idx = -1; + /* + * ICMP sockets + */ + for (so = slirp->icmp.so_next; so != &slirp->icmp; so = so_next) { + so_next = so->so_next; - /* - * See if it's timed out - */ - if (so->so_expire) { - if (so->so_expire <= curtime) { - icmp_detach(so); - continue; - } else { - slirp->do_slowtimo = true; /* Let socket expire */ - } - } + so->pollfds_idx = -1; - if (so->so_state & SS_ISFCONNECTED) { - GPollFD pfd = { - .fd = so->s, - .events = G_IO_IN | G_IO_HUP | G_IO_ERR, - }; - so->pollfds_idx = pollfds->len; - g_array_append_val(pollfds, pfd); + /* + * See if it's timed out + */ + if (so->so_expire) { + if (so->so_expire <= curtime) { + icmp_detach(so); + continue; + } else { + slirp->do_slowtimo = true; /* Let socket expire */ } } + + if (so->so_state & SS_ISFCONNECTED) { + GPollFD pfd = { + .fd = so->s, + .events = G_IO_IN | G_IO_HUP | G_IO_ERR, + }; + so->pollfds_idx = pollfds->len; + g_array_append_val(pollfds, pfd); + } } - slirp_update_timeout(timeout); + + slirp_update_timeout(slirp, timeout); } -void slirp_pollfds_poll(GArray *pollfds, int select_error) +void slirp_pollfds_poll(Slirp *slirp, GArray *pollfds, int select_error) { - Slirp *slirp = QTAILQ_FIRST(&slirp_instances); struct socket *so, *so_next; int ret; - if (!slirp) { - return; - } - curtime = slirp->cb->clock_get_ns() / SCALE_MS; - QTAILQ_FOREACH(slirp, &slirp_instances, entry) { - /* - * See if anything has timed out - */ - if (slirp->time_fasttimo && - ((curtime - slirp->time_fasttimo) >= TIMEOUT_FAST)) { - tcp_fasttimo(slirp); - slirp->time_fasttimo = 0; - } - if (slirp->do_slowtimo && - ((curtime - slirp->last_slowtimo) >= TIMEOUT_SLOW)) { - ip_slowtimo(slirp); - tcp_slowtimo(slirp); - slirp->last_slowtimo = curtime; - } + /* + * See if anything has timed out + */ + if (slirp->time_fasttimo && + ((curtime - slirp->time_fasttimo) >= TIMEOUT_FAST)) { + tcp_fasttimo(slirp); + slirp->time_fasttimo = 0; + } + if (slirp->do_slowtimo && + ((curtime - slirp->last_slowtimo) >= TIMEOUT_SLOW)) { + ip_slowtimo(slirp); + tcp_slowtimo(slirp); + slirp->last_slowtimo = curtime; + } + /* + * Check sockets + */ + if (!select_error) { /* - * Check sockets + * Check TCP sockets */ - if (!select_error) { - /* - * Check TCP sockets - */ - for (so = slirp->tcb.so_next; so != &slirp->tcb; - so = so_next) { - int revents; + for (so = slirp->tcb.so_next; so != &slirp->tcb; + so = so_next) { + int revents; - so_next = so->so_next; + so_next = so->so_next; - revents = 0; - if (so->pollfds_idx != -1) { - revents = g_array_index(pollfds, GPollFD, - so->pollfds_idx).revents; - } + revents = 0; + if (so->pollfds_idx != -1) { + revents = g_array_index(pollfds, GPollFD, + so->pollfds_idx).revents; + } - if (so->so_state & SS_NOFDREF || so->s == -1) { + if (so->so_state & SS_NOFDREF || so->s == -1) { + continue; + } + + /* + * Check for URG data + * This will soread as well, so no need to + * test for G_IO_IN below if this succeeds + */ + if (revents & G_IO_PRI) { + ret = sorecvoob(so); + if (ret < 0) { + /* Socket error might have resulted in the socket being + * removed, do not try to do anything more with it. 
*/ continue; } - + } + /* + * Check sockets for reading + */ + else if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) { /* - * Check for URG data - * This will soread as well, so no need to - * test for G_IO_IN below if this succeeds + * Check for incoming connections */ - if (revents & G_IO_PRI) { - ret = sorecvoob(so); - if (ret < 0) { - /* Socket error might have resulted in the socket being - * removed, do not try to do anything more with it. */ - continue; - } + if (so->so_state & SS_FACCEPTCONN) { + tcp_connect(so); + continue; + } /* else */ + ret = soread(so); + + /* Output it if we read something */ + if (ret > 0) { + tcp_output(sototcpcb(so)); + } + if (ret < 0) { + /* Socket error might have resulted in the socket being + * removed, do not try to do anything more with it. */ + continue; } + } + + /* + * Check sockets for writing + */ + if (!(so->so_state & SS_NOFDREF) && + (revents & (G_IO_OUT | G_IO_ERR))) { /* - * Check sockets for reading + * Check for non-blocking, still-connecting sockets */ - else if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) { - /* - * Check for incoming connections - */ - if (so->so_state & SS_FACCEPTCONN) { - tcp_connect(so); - continue; - } /* else */ - ret = soread(so); + if (so->so_state & SS_ISFCONNECTING) { + /* Connected */ + so->so_state &= ~SS_ISFCONNECTING; - /* Output it if we read something */ - if (ret > 0) { - tcp_output(sototcpcb(so)); - } + ret = send(so->s, (const void *) &ret, 0, 0); if (ret < 0) { - /* Socket error might have resulted in the socket being - * removed, do not try to do anything more with it. */ - continue; + /* XXXXX Must fix, zero bytes is a NOP */ + if (errno == EAGAIN || errno == EWOULDBLOCK || + errno == EINPROGRESS || errno == ENOTCONN) { + continue; + } + + /* else failed */ + so->so_state &= SS_PERSISTENT_MASK; + so->so_state |= SS_NOFDREF; } - } + /* else so->so_state &= ~SS_ISFCONNECTING; */ - /* - * Check sockets for writing - */ - if (!(so->so_state & SS_NOFDREF) && - (revents & (G_IO_OUT | G_IO_ERR))) { /* - * Check for non-blocking, still-connecting sockets + * Continue tcp_input */ - if (so->so_state & SS_ISFCONNECTING) { - /* Connected */ - so->so_state &= ~SS_ISFCONNECTING; - - ret = send(so->s, (const void *) &ret, 0, 0); - if (ret < 0) { - /* XXXXX Must fix, zero bytes is a NOP */ - if (errno == EAGAIN || errno == EWOULDBLOCK || - errno == EINPROGRESS || errno == ENOTCONN) { - continue; - } - - /* else failed */ - so->so_state &= SS_PERSISTENT_MASK; - so->so_state |= SS_NOFDREF; - } - /* else so->so_state &= ~SS_ISFCONNECTING; */ - - /* - * Continue tcp_input - */ - tcp_input((struct mbuf *)NULL, sizeof(struct ip), so, - so->so_ffamily); - /* continue; */ - } else { - ret = sowrite(so); - if (ret > 0) { - /* Call tcp_output in case we need to send a window - * update to the guest, otherwise it will be stuck - * until it sends a window probe. */ - tcp_output(sototcpcb(so)); - } + tcp_input((struct mbuf *)NULL, sizeof(struct ip), so, + so->so_ffamily); + /* continue; */ + } else { + ret = sowrite(so); + if (ret > 0) { + /* Call tcp_output in case we need to send a window + * update to the guest, otherwise it will be stuck + * until it sends a window probe. */ + tcp_output(sototcpcb(so)); } } } + } - /* - * Now UDP sockets. - * Incoming packets are sent straight away, they're not buffered. - * Incoming UDP data isn't buffered either. - */ - for (so = slirp->udb.so_next; so != &slirp->udb; - so = so_next) { - int revents; + /* + * Now UDP sockets. 
+ * Incoming packets are sent straight away, they're not buffered. + * Incoming UDP data isn't buffered either. + */ + for (so = slirp->udb.so_next; so != &slirp->udb; + so = so_next) { + int revents; - so_next = so->so_next; + so_next = so->so_next; - revents = 0; - if (so->pollfds_idx != -1) { - revents = g_array_index(pollfds, GPollFD, - so->pollfds_idx).revents; - } + revents = 0; + if (so->pollfds_idx != -1) { + revents = g_array_index(pollfds, GPollFD, + so->pollfds_idx).revents; + } - if (so->s != -1 && - (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR))) { - sorecvfrom(so); - } + if (so->s != -1 && + (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR))) { + sorecvfrom(so); } + } - /* - * Check incoming ICMP relies. - */ - for (so = slirp->icmp.so_next; so != &slirp->icmp; - so = so_next) { - int revents; + /* + * Check incoming ICMP relies. + */ + for (so = slirp->icmp.so_next; so != &slirp->icmp; + so = so_next) { + int revents; - so_next = so->so_next; + so_next = so->so_next; - revents = 0; - if (so->pollfds_idx != -1) { - revents = g_array_index(pollfds, GPollFD, - so->pollfds_idx).revents; - } + revents = 0; + if (so->pollfds_idx != -1) { + revents = g_array_index(pollfds, GPollFD, + so->pollfds_idx).revents; + } - if (so->s != -1 && - (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR))) { - icmp_receive(so); - } + if (so->s != -1 && + (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR))) { + icmp_receive(so); } } - - if_start(slirp); } + + if_start(slirp); } static void arp_input(Slirp *slirp, const uint8_t *pkt, int pkt_len) diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs index cda0efa4e8..1558ff1fe7 100644 --- a/stubs/Makefile.objs +++ b/stubs/Makefile.objs @@ -26,7 +26,6 @@ stub-obj-y += qtest.o stub-obj-y += replay.o stub-obj-y += runstate-check.o stub-obj-y += set-fd-handler.o -stub-obj-y += slirp.o stub-obj-y += sysbus.o stub-obj-y += tpm.o stub-obj-y += trace-control.o diff --git a/stubs/slirp.c b/stubs/slirp.c deleted file mode 100644 index 70704346fd..0000000000 --- a/stubs/slirp.c +++ /dev/null @@ -1,13 +0,0 @@ -#include "qemu/osdep.h" -#include "qemu-common.h" -#include "qemu/host-utils.h" -#include "slirp/libslirp.h" - -void slirp_pollfds_fill(GArray *pollfds, uint32_t *timeout) -{ -} - -void slirp_pollfds_poll(GArray *pollfds, int select_error) -{ -} - diff --git a/util/main-loop.c b/util/main-loop.c index 443cb4cfe8..d4a521caeb 100644 --- a/util/main-loop.c +++ b/util/main-loop.c @@ -469,25 +469,42 @@ static int os_host_main_loop_wait(int64_t timeout) } #endif +static NotifierList main_loop_poll_notifiers = + NOTIFIER_LIST_INITIALIZER(main_loop_poll_notifiers); + +void main_loop_poll_add_notifier(Notifier *notify) +{ + notifier_list_add(&main_loop_poll_notifiers, notify); +} + +void main_loop_poll_remove_notifier(Notifier *notify) +{ + notifier_remove(notify); +} + void main_loop_wait(int nonblocking) { + MainLoopPoll mlpoll = { + .state = MAIN_LOOP_POLL_FILL, + .timeout = UINT32_MAX, + .pollfds = gpollfds, + }; int ret; - uint32_t timeout = UINT32_MAX; int64_t timeout_ns; if (nonblocking) { - timeout = 0; + mlpoll.timeout = 0; } /* poll any events */ g_array_set_size(gpollfds, 0); /* reset for new iteration */ /* XXX: separate device handlers from system ones */ - slirp_pollfds_fill(gpollfds, &timeout); + notifier_list_notify(&main_loop_poll_notifiers, &mlpoll); - if (timeout == UINT32_MAX) { + if (mlpoll.timeout == UINT32_MAX) { timeout_ns = -1; } else { - timeout_ns = (uint64_t)timeout * (int64_t)(SCALE_MS); + timeout_ns = (uint64_t)mlpoll.timeout * (int64_t)(SCALE_MS); } timeout_ns 
                            = qemu_soonest_timeout(timeout_ns,
@@ -495,7 +512,8 @@ void main_loop_wait(int nonblocking)
                                           &main_loop_tlg));
 
     ret = os_host_main_loop_wait(timeout_ns);
-    slirp_pollfds_poll(gpollfds, (ret < 0));
+    mlpoll.state = ret < 0 ? MAIN_LOOP_POLL_ERR : MAIN_LOOP_POLL_OK;
+    notifier_list_notify(&main_loop_poll_notifiers, &mlpoll);
 
     /* CPU thread can infinitely wait for event after missing the warp */
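
For readers following along, here is a minimal, hypothetical sketch of how another
main-loop client could hook into the poll-notifier interface this patch introduces
(MainLoopPoll, MAIN_LOOP_POLL_FILL/OK/ERR, main_loop_poll_add_notifier() and
main_loop_poll_remove_notifier()). Everything named my_client_* below is invented
for illustration; only the main-loop API and the glib calls come from the patch
above, mirroring what net_slirp_poll_notify() does for slirp.

#include "qemu/osdep.h"
#include "qemu/notify.h"
#include "qemu/main-loop.h"

/* Hypothetical client state embedding the poll notifier. */
typedef struct MyClient {
    int fd;                  /* descriptor we want the main loop to watch */
    int pollfds_idx;         /* index into the shared pollfds array       */
    Notifier poll_notifier;  /* registered with the main loop             */
} MyClient;

static void my_client_poll_notify(Notifier *notifier, void *data)
{
    MainLoopPoll *poll = data;
    MyClient *c = container_of(notifier, MyClient, poll_notifier);

    switch (poll->state) {
    case MAIN_LOOP_POLL_FILL:
        /* Before polling: add our fd to the array and remember its index. */
        c->pollfds_idx = -1;
        if (c->fd >= 0) {
            GPollFD pfd = {
                .fd = c->fd,
                .events = G_IO_IN | G_IO_HUP | G_IO_ERR,
            };
            c->pollfds_idx = poll->pollfds->len;
            g_array_append_val(poll->pollfds, pfd);
        }
        break;
    case MAIN_LOOP_POLL_OK:
    case MAIN_LOOP_POLL_ERR:
        /* After polling: check the revents recorded for our fd. */
        if (c->pollfds_idx != -1) {
            int revents = g_array_index(poll->pollfds, GPollFD,
                                        c->pollfds_idx).revents;
            if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) {
                /* data (or an error) is ready on c->fd; service it here */
            }
        }
        break;
    default:
        g_assert_not_reached();
    }
}

static void my_client_start(MyClient *c)
{
    c->poll_notifier.notify = my_client_poll_notify;
    main_loop_poll_add_notifier(&c->poll_notifier);
}

static void my_client_stop(MyClient *c)
{
    main_loop_poll_remove_notifier(&c->poll_notifier);
}

A client can also shorten poll->timeout during the MAIN_LOOP_POLL_FILL phase,
which is how slirp_update_timeout() requests its fast/slow timer deadlines in
the slirp_pollfds_fill() path above.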