From patchwork Sat Feb 22 08:50:08 2020
X-Patchwork-Submitter: Stefan Hajnoczi <stefanha@redhat.com>
X-Patchwork-Id: 11397971
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PULL 09/31] aio-posix: make AioHandler dispatch O(1) with epoll
Date: Sat, 22 Feb 2020 08:50:08 +0000
Message-Id: <20200222085030.1760640-10-stefanha@redhat.com>
In-Reply-To: <20200222085030.1760640-1-stefanha@redhat.com>
References: <20200222085030.1760640-1-stefanha@redhat.com>
Cc: Kevin Wolf, Peter Maydell, Thomas Huth, Sergio Lopez,
    Eduardo Habkost, qemu-block@nongnu.org, "Michael S. Tsirkin",
    Laurent Vivier, Max Reitz, Alexander Bulekov, Bandan Das,
    Stefan Hajnoczi, Marc-André Lureau, Paolo Bonzini, Fam Zheng,
    Richard Henderson

File descriptor monitoring is O(1) with epoll(7), but
aio_dispatch_handlers() still scans all AioHandlers instead of
dispatching just those that are ready.  This makes aio_poll() O(n) with
respect to the total number of registered handlers.

Add a local ready_list to aio_poll() so that each nested aio_poll()
builds a list of handlers ready to be dispatched.  Since file descriptor
polling is level-triggered, nested aio_poll() calls also see fds that
were ready in the parent but not yet dispatched.  This guarantees that
nested aio_poll() invocations will dispatch all fds, even those that
became ready before the nested invocation.

Since only handlers ready to be dispatched are placed onto the
ready_list, the new aio_dispatch_ready_handlers() function provides
O(1) dispatch.

Note that AioContext polling is still O(n) and currently cannot be
fully disabled.  This still needs to be fixed before aio_poll() is
fully O(1).

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Sergio Lopez
Message-id: 20200214171712.541358-6-stefanha@redhat.com
[Fix compilation error on macOS where there is no epoll(7).  The
aio_epoll() prototype was out of date and add_ready_handler() needed to
be moved outside the #ifdef.
--Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
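The ready-list pattern itself is not QEMU-specific.  The following is a
rough standalone sketch of the same idea using plain epoll(7) and the
<sys/queue.h> LIST macros; Handler, build_ready_list(), and
dispatch_ready() are illustrative names, not QEMU or libc APIs:

    /*
     * Standalone sketch of the ready-list idea (illustrative only).
     * Only handlers that epoll_wait(2) reported are ever linked into
     * the list, so dispatch cost is proportional to the number of
     * ready fds, not the number of registered handlers.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/epoll.h>
    #include <sys/queue.h>

    typedef struct Handler Handler;
    struct Handler {
        int fd;
        void (*io_read)(Handler *h);
        LIST_ENTRY(Handler) node_ready; /* linkage used only while ready */
    };

    LIST_HEAD(ReadyList, Handler);

    /* Poll once; collect only the handlers that are actually ready */
    static void build_ready_list(int epollfd, struct ReadyList *ready_list)
    {
        struct epoll_event events[16];
        int n = epoll_wait(epollfd, events, 16, -1);

        for (int i = 0; i < n; i++) {
            Handler *h = events[i].data.ptr;
            LIST_INSERT_HEAD(ready_list, h, node_ready);
        }
    }

    /* O(1) per handler: no scan over all registered handlers */
    static void dispatch_ready(struct ReadyList *ready_list)
    {
        Handler *h;

        while ((h = LIST_FIRST(ready_list)) != NULL) {
            LIST_REMOVE(h, node_ready);
            h->io_read(h);
        }
    }

    static void on_read(Handler *h)
    {
        char buf[64];
        printf("fd %d: read %zd byte(s)\n", h->fd,
               read(h->fd, buf, sizeof(buf)));
    }

    int main(void)
    {
        int fds[2];
        if (pipe(fds) < 0) {
            return 1;
        }
        write(fds[1], "x", 1); /* make the read end ready */

        Handler h = { .fd = fds[0], .io_read = on_read };
        int epollfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.ptr = &h };
        epoll_ctl(epollfd, EPOLL_CTL_ADD, h.fd, &ev);

        struct ReadyList ready_list = LIST_HEAD_INITIALIZER(ready_list);
        build_ready_list(epollfd, &ready_list);
        dispatch_ready(&ready_list);
        return 0;
    }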
---
 util/aio-posix.c | 110 +++++++++++++++++++++++++++++++++--------------
 1 file changed, 78 insertions(+), 32 deletions(-)

diff --git a/util/aio-posix.c b/util/aio-posix.c
index b5cfdbd2f6..9e1befc0c0 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -35,9 +35,20 @@ struct AioHandler
     void *opaque;
     bool is_external;
     QLIST_ENTRY(AioHandler) node;
+    QLIST_ENTRY(AioHandler) node_ready; /* only used during aio_poll() */
     QLIST_ENTRY(AioHandler) node_deleted;
 };
 
+/* Add a handler to a ready list */
+static void add_ready_handler(AioHandlerList *ready_list,
+                              AioHandler *node,
+                              int revents)
+{
+    QLIST_SAFE_REMOVE(node, node_ready); /* remove from nested parent's list */
+    node->pfd.revents = revents;
+    QLIST_INSERT_HEAD(ready_list, node, node_ready);
+}
+
 #ifdef CONFIG_EPOLL_CREATE1
 
 /* The fd number threshold to switch to epoll */
@@ -105,7 +116,8 @@ static void aio_epoll_update(AioContext *ctx, AioHandler *node, bool is_new)
     }
 }
 
-static int aio_epoll(AioContext *ctx, int64_t timeout)
+static int aio_epoll(AioContext *ctx, AioHandlerList *ready_list,
+                     int64_t timeout)
 {
     GPollFD pfd = {
         .fd = ctx->epollfd,
@@ -130,11 +142,13 @@ static int aio_epoll(AioContext *ctx, int64_t timeout)
         }
         for (i = 0; i < ret; i++) {
             int ev = events[i].events;
+            int revents = (ev & EPOLLIN ? G_IO_IN : 0) |
+                          (ev & EPOLLOUT ? G_IO_OUT : 0) |
+                          (ev & EPOLLHUP ? G_IO_HUP : 0) |
+                          (ev & EPOLLERR ? G_IO_ERR : 0);
+
             node = events[i].data.ptr;
-            node->pfd.revents = (ev & EPOLLIN ? G_IO_IN : 0) |
-                (ev & EPOLLOUT ? G_IO_OUT : 0) |
-                (ev & EPOLLHUP ? G_IO_HUP : 0) |
-                (ev & EPOLLERR ? G_IO_ERR : 0);
+            add_ready_handler(ready_list, node, revents);
         }
     }
 out:
@@ -172,8 +186,8 @@ static void aio_epoll_update(AioContext *ctx, AioHandler *node, bool is_new)
 {
 }
 
-static int aio_epoll(AioContext *ctx, GPollFD *pfds,
-                     unsigned npfd, int64_t timeout)
+static int aio_epoll(AioContext *ctx, AioHandlerList *ready_list,
+                     int64_t timeout)
 {
     assert(false);
 }
@@ -438,36 +452,63 @@ static void aio_free_deleted_handlers(AioContext *ctx)
     qemu_lockcnt_inc_and_unlock(&ctx->list_lock);
 }
 
-static bool aio_dispatch_handlers(AioContext *ctx)
+static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node)
 {
-    AioHandler *node, *tmp;
     bool progress = false;
+    int revents;
 
-    QLIST_FOREACH_SAFE_RCU(node, &ctx->aio_handlers, node, tmp) {
-        int revents;
+    revents = node->pfd.revents & node->pfd.events;
+    node->pfd.revents = 0;
 
-        revents = node->pfd.revents & node->pfd.events;
-        node->pfd.revents = 0;
+    if (!QLIST_IS_INSERTED(node, node_deleted) &&
+        (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
+        aio_node_check(ctx, node->is_external) &&
+        node->io_read) {
+        node->io_read(node->opaque);
 
-        if (!QLIST_IS_INSERTED(node, node_deleted) &&
-            (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
-            aio_node_check(ctx, node->is_external) &&
-            node->io_read) {
-            node->io_read(node->opaque);
-
-            /* aio_notify() does not count as progress */
-            if (node->opaque != &ctx->notifier) {
-                progress = true;
-            }
-        }
-        if (!QLIST_IS_INSERTED(node, node_deleted) &&
-            (revents & (G_IO_OUT | G_IO_ERR)) &&
-            aio_node_check(ctx, node->is_external) &&
-            node->io_write) {
-            node->io_write(node->opaque);
+        /* aio_notify() does not count as progress */
+        if (node->opaque != &ctx->notifier) {
             progress = true;
         }
     }
+    if (!QLIST_IS_INSERTED(node, node_deleted) &&
+        (revents & (G_IO_OUT | G_IO_ERR)) &&
+        aio_node_check(ctx, node->is_external) &&
+        node->io_write) {
+        node->io_write(node->opaque);
+        progress = true;
+    }
+
+    return progress;
+}
+
+/*
+ * If we have a list of ready handlers then this is more efficient than
+ * scanning all handlers with aio_dispatch_handlers().
+ */
+static bool aio_dispatch_ready_handlers(AioContext *ctx,
+                                        AioHandlerList *ready_list)
+{
+    bool progress = false;
+    AioHandler *node;
+
+    while ((node = QLIST_FIRST(ready_list))) {
+        QLIST_SAFE_REMOVE(node, node_ready);
+        progress = aio_dispatch_handler(ctx, node) || progress;
+    }
+
+    return progress;
+}
+
+/* Slower than aio_dispatch_ready_handlers() but only used via glib */
+static bool aio_dispatch_handlers(AioContext *ctx)
+{
+    AioHandler *node, *tmp;
+    bool progress = false;
+
+    QLIST_FOREACH_SAFE_RCU(node, &ctx->aio_handlers, node, tmp) {
+        progress = aio_dispatch_handler(ctx, node) || progress;
+    }
 
     return progress;
 }
@@ -639,6 +680,7 @@ static bool try_poll_mode(AioContext *ctx, int64_t *timeout)
 
 bool aio_poll(AioContext *ctx, bool blocking)
 {
+    AioHandlerList ready_list = QLIST_HEAD_INITIALIZER(ready_list);
     AioHandler *node;
     int i;
     int ret = 0;
@@ -689,7 +731,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     /* wait until next event */
     if (aio_epoll_check_poll(ctx, pollfds, npfd, timeout)) {
         npfd = 0; /* pollfds[] is not being used */
-        ret = aio_epoll(ctx, timeout);
+        ret = aio_epoll(ctx, &ready_list, timeout);
     } else {
         ret = qemu_poll_ns(pollfds, npfd, timeout);
     }
@@ -744,7 +786,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
     /* if we have any readable fds, dispatch event */
     if (ret > 0) {
         for (i = 0; i < npfd; i++) {
-            nodes[i]->pfd.revents = pollfds[i].revents;
+            int revents = pollfds[i].revents;
+
+            if (revents) {
+                add_ready_handler(&ready_list, nodes[i], revents);
+            }
         }
     }
 
@@ -753,7 +799,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     progress |= aio_bh_poll(ctx);
 
     if (ret > 0) {
-        progress |= aio_dispatch_handlers(ctx);
+        progress |= aio_dispatch_ready_handlers(ctx, &ready_list);
     }
 
     aio_free_deleted_handlers(ctx);
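
As an aside, the level-triggered property that the commit message relies
on for nested aio_poll() correctness is easy to demonstrate in
isolation: an fd with undrained data is reported again by every
subsequent epoll_wait(2) call, so a nested poll still observes events
the parent has not yet dispatched.  A minimal standalone demonstration
(not QEMU code):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/epoll.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) < 0) {
            return 1;
        }
        write(fds[1], "x", 1); /* the byte is deliberately never read back */

        int epollfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN }; /* level-triggered default */
        epoll_ctl(epollfd, EPOLL_CTL_ADD, fds[0], &ev);

        struct epoll_event out;
        for (int i = 0; i < 2; i++) {
            /* Both iterations report 1 ready fd because the data is
             * still pending; with EPOLLET the second wait would report 0. */
            printf("wait %d: %d fd(s) ready\n", i,
                   epoll_wait(epollfd, &out, 1, 0));
        }
        return 0;
    }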