From patchwork Mon Feb 8 16:15:01 2016
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: stefanha@redhat.com
Date: Mon, 8 Feb 2016 17:15:01 +0100
Message-Id: <1454948107-11844-11-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1454948107-11844-1-git-send-email-pbonzini@redhat.com>
References: <1454948107-11844-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 10/16] aio: make ctx->list_lock a QemuLockCnt,
 subsuming ctx->walking_bh

This will make it possible to walk the list of bottom halves without
holding the AioContext lock---and in turn to call bottom half handlers
without holding the lock.
Signed-off-by: Paolo Bonzini
---
 async.c             | 31 ++++++++++++++-----------------
 include/block/aio.h | 12 +++++-------
 2 files changed, 19 insertions(+), 24 deletions(-)

diff --git a/async.c b/async.c
index fc4c173..bc7e142 100644
--- a/async.c
+++ b/async.c
@@ -51,12 +51,12 @@ QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
         .cb = cb,
         .opaque = opaque,
     };
-    qemu_mutex_lock(&ctx->list_lock);
+    qemu_lockcnt_lock(&ctx->list_lock);
     bh->next = ctx->first_bh;
     /* Make sure that the members are ready before putting bh into list */
     smp_wmb();
     ctx->first_bh = bh;
-    qemu_mutex_unlock(&ctx->list_lock);
+    qemu_lockcnt_unlock(&ctx->list_lock);
     return bh;
 }
 
@@ -71,13 +71,11 @@ int aio_bh_poll(AioContext *ctx)
     QEMUBH *bh, **bhp, *next;
     int ret;
 
-    ctx->walking_bh++;
+    qemu_lockcnt_inc(&ctx->list_lock);
 
     ret = 0;
-    for (bh = ctx->first_bh; bh; bh = next) {
-        /* Make sure that fetching bh happens before accessing its members */
-        smp_read_barrier_depends();
-        next = bh->next;
+    for (bh = atomic_rcu_read(&ctx->first_bh); bh; bh = next) {
+        next = atomic_rcu_read(&bh->next);
         /* The atomic_xchg is paired with the one in qemu_bh_schedule.  The
          * implicit memory barrier ensures that the callback sees all writes
          * done by the scheduling thread.  It also ensures that the scheduling
@@ -94,11 +92,8 @@ int aio_bh_poll(AioContext *ctx)
         }
     }
 
-    ctx->walking_bh--;
-
     /* remove deleted bhs */
-    if (!ctx->walking_bh) {
-        qemu_mutex_lock(&ctx->list_lock);
+    if (qemu_lockcnt_dec_and_lock(&ctx->list_lock)) {
         bhp = &ctx->first_bh;
         while (*bhp) {
             bh = *bhp;
@@ -109,7 +104,7 @@ int aio_bh_poll(AioContext *ctx)
                 bhp = &bh->next;
             }
         }
-        qemu_mutex_unlock(&ctx->list_lock);
+        qemu_lockcnt_unlock(&ctx->list_lock);
     }
 
     return ret;
@@ -165,7 +160,8 @@ aio_compute_timeout(AioContext *ctx)
     int timeout = -1;
     QEMUBH *bh;
 
-    for (bh = ctx->first_bh; bh; bh = bh->next) {
+    for (bh = atomic_rcu_read(&ctx->first_bh); bh;
+         bh = atomic_rcu_read(&bh->next)) {
         if (!bh->deleted && bh->scheduled) {
             if (bh->idle) {
                 /* idle bottom halves will be polled at least
@@ -240,7 +236,8 @@ aio_ctx_finalize(GSource *source)
 
     thread_pool_free(ctx->thread_pool);
 
-    qemu_mutex_lock(&ctx->list_lock);
+    qemu_lockcnt_lock(&ctx->list_lock);
+    assert(!qemu_lockcnt_count(&ctx->list_lock));
     while (ctx->first_bh) {
         QEMUBH *next = ctx->first_bh->next;
 
@@ -250,12 +247,12 @@ aio_ctx_finalize(GSource *source)
         g_free(ctx->first_bh);
         ctx->first_bh = next;
     }
-    qemu_mutex_unlock(&ctx->list_lock);
+    qemu_lockcnt_unlock(&ctx->list_lock);
 
     aio_set_event_notifier(ctx, &ctx->notifier, false, NULL);
     event_notifier_cleanup(&ctx->notifier);
     qemu_rec_mutex_destroy(&ctx->lock);
-    qemu_mutex_destroy(&ctx->list_lock);
+    qemu_lockcnt_destroy(&ctx->list_lock);
     timerlistgroup_deinit(&ctx->tlg);
 }
 
@@ -551,7 +548,7 @@ AioContext *aio_context_new(Error **errp)
                            (EventNotifierHandler *)
                            event_notifier_dummy_cb);
     ctx->thread_pool = NULL;
-    qemu_mutex_init(&ctx->list_lock);
+    qemu_lockcnt_init(&ctx->list_lock);
     qemu_rec_mutex_init(&ctx->lock);
     timerlistgroup_init(&ctx->tlg, aio_timerlist_notify, ctx);
 
diff --git a/include/block/aio.h b/include/block/aio.h
index 322a10e..fb0ff21 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -87,17 +87,15 @@ struct AioContext {
      */
     uint32_t notify_me;
 
-    /* lock to protect between bh's adders and deleter */
-    QemuMutex list_lock;
+    /* A lock to protect between bh's adders and deleter, and to ensure
+     * that no callbacks are removed while we're walking and dispatching
+     * them.
+     */
+    QemuLockCnt list_lock;
 
     /* Anchor of the list of Bottom Halves belonging to the context */
     struct QEMUBH *first_bh;
 
-    /* A simple lock used to protect the first_bh list, and ensure that
-     * no callbacks are removed while we're walking and dispatching callbacks.
-     */
-    int walking_bh;
-
     /* Used by aio_notify.
      *
      * "notified" is used to avoid expensive event_notifier_test_and_clear