From patchwork Tue Feb 9 11:46:01 2016
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 8261781
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Tue, 9 Feb 2016 12:46:01 +0100
Message-Id: <1455018374-4706-4-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1455018374-4706-1-git-send-email-pbonzini@redhat.com>
References: <1455018374-4706-1-git-send-email-pbonzini@redhat.com>
Cc: stefanha@redhat.com
Subject: [Qemu-devel] [PATCH 03/16] aio: introduce aio_poll_internal

Move the implementation of aio_poll to aio_poll_internal, and introduce
a wrapper for public use.  For now, the wrapper only asserts that
aio_poll is being used correctly: either from the thread that manages
the context, or with the QEMU global mutex held.  The next patch,
however, will completely separate the two cases.
Signed-off-by: Paolo Bonzini
---
 aio-posix.c         | 2 +-
 aio-win32.c         | 2 +-
 async.c             | 8 ++++++++
 include/block/aio.h | 6 ++++++
 4 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index fa7f8ab..4dc075c 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -401,7 +401,7 @@ static void add_pollfd(AioHandler *node)
     npfd++;
 }

-bool aio_poll(AioContext *ctx, bool blocking)
+bool aio_poll_internal(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     int i, ret;
diff --git a/aio-win32.c b/aio-win32.c
index 6aaa32a..86ad822 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -281,7 +281,7 @@ bool aio_dispatch(AioContext *ctx)
     return progress;
 }

-bool aio_poll(AioContext *ctx, bool blocking)
+bool aio_poll_internal(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
diff --git a/async.c b/async.c
index d083564..01c4891 100644
--- a/async.c
+++ b/async.c
@@ -300,6 +300,14 @@ void aio_notify_accept(AioContext *ctx)
     }
 }

+bool aio_poll(AioContext *ctx, bool blocking)
+{
+    assert(qemu_mutex_iothread_locked() ||
+           aio_context_in_iothread(ctx));
+
+    return aio_poll_internal(ctx, blocking);
+}
+
 static void aio_timerlist_notify(void *opaque)
 {
     aio_notify(opaque);
diff --git a/include/block/aio.h b/include/block/aio.h
index 9434665..986be97 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -287,6 +287,12 @@ bool aio_pending(AioContext *ctx);
  */
 bool aio_dispatch(AioContext *ctx);

+/* Same as aio_poll, but only meant for use in the I/O thread.
+ *
+ * This is used internally in the implementation of aio_poll.
+ */
+bool aio_poll_internal(AioContext *ctx, bool blocking);
+
 /* Progress in completing AIO work to occur.  This can issue new pending
  * aio as a result of executing I/O completion or bh callbacks.
  *