From patchwork Fri Jan 15 15:12:06 2016
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 8042531
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: stefanha@redhat.com
Date: Fri, 15 Jan 2016 16:12:06 +0100
Message-Id: <1452870739-28484-4-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1452870739-28484-1-git-send-email-pbonzini@redhat.com>
References: <1452870739-28484-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 03/16] aio: introduce aio_poll_internal

Move the implementation of aio_poll to aio_poll_internal, and introduce
a wrapper for public use.  For now the wrapper just asserts that
aio_poll is being used correctly: either from the thread that manages
the context, or with the QEMU global mutex held.  The next patch,
however, will completely separate the two cases.
Signed-off-by: Paolo Bonzini
---
 aio-posix.c         | 2 +-
 aio-win32.c         | 2 +-
 async.c             | 8 ++++++++
 include/block/aio.h | 6 ++++++
 4 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index 482b316..980bd41 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -400,7 +400,7 @@ static void add_pollfd(AioHandler *node)
     npfd++;
 }
 
-bool aio_poll(AioContext *ctx, bool blocking)
+bool aio_poll_internal(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     int i, ret;
diff --git a/aio-win32.c b/aio-win32.c
index cdc4456..6622cbf 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -280,7 +280,7 @@ bool aio_dispatch(AioContext *ctx)
     return progress;
 }
 
-bool aio_poll(AioContext *ctx, bool blocking)
+bool aio_poll_internal(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
diff --git a/async.c b/async.c
index b3efd3c..856aa75 100644
--- a/async.c
+++ b/async.c
@@ -299,6 +299,14 @@ void aio_notify_accept(AioContext *ctx)
     }
 }
 
+bool aio_poll(AioContext *ctx, bool blocking)
+{
+    assert(qemu_mutex_iothread_locked() ||
+           aio_context_in_iothread(ctx));
+
+    return aio_poll_internal(ctx, blocking);
+}
+
 static void aio_timerlist_notify(void *opaque)
 {
     aio_notify(opaque);
diff --git a/include/block/aio.h b/include/block/aio.h
index 9434665..986be97 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -287,6 +287,12 @@ bool aio_pending(AioContext *ctx);
  */
 bool aio_dispatch(AioContext *ctx);
 
+/* Same as aio_poll, but only meant for use in the I/O thread.
+ *
+ * This is used internally in the implementation of aio_poll.
+ */
+bool aio_poll_internal(AioContext *ctx, bool blocking);
+
 /* Progress in completing AIO work to occur.  This can issue new pending
  * aio as a result of executing I/O completion or bh callbacks.
  *
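
[Not part of the patch: a minimal sketch, for illustration only, of the two
call patterns that the new assertion in aio_poll() permits.  The helper names
iothread_run_example() and drain_example() are hypothetical; aio_poll() and
AioContext are the real QEMU interfaces from "block/aio.h", and case 2 assumes
the caller already holds the QEMU global mutex.]

#include "qemu/osdep.h"
#include "block/aio.h"

/* Case 1: called from the thread that runs the AioContext (for example an
 * IOThread's event loop), so aio_context_in_iothread(ctx) is true and the
 * global mutex is not required.  Hypothetical helper.
 */
static void *iothread_run_example(void *opaque)
{
    AioContext *ctx = opaque;

    /* run the context's event loop until the thread is torn down */
    for (;;) {
        aio_poll(ctx, true);    /* blocking poll is fine in the home thread */
    }
    return NULL;                /* not reached */
}

/* Case 2: called from the main thread, which does not run the context but
 * holds the QEMU global mutex, so qemu_mutex_iothread_locked() is true,
 * for example while waiting for outstanding requests to drain.
 * Hypothetical helper.
 */
static void drain_example(AioContext *ctx)
{
    while (aio_poll(ctx, false)) {
        /* keep dispatching until no more progress is made */
    }
}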