From patchwork Fri Sep 8 14:52:19 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 9944437
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Christoffer Dall, kvm@vger.kernel.org, Marc Zyngier, stable@vger.kernel.org,
	hemk976@gmail.com, Jintack Lim
Subject: [PATCH 2/2] KVM: arm/arm64: Fix vgic_mmio_change_active with PREEMPT_RT
Date: Fri, 8 Sep 2017 07:52:19 -0700
Message-Id: <1504882339-42520-3-git-send-email-christoffer.dall@linaro.org>
In-Reply-To: <1504882339-42520-1-git-send-email-christoffer.dall@linaro.org>
References: <1504882339-42520-1-git-send-email-christoffer.dall@linaro.org>

Getting a per-CPU variable requires a non-preemptible context, and we
were relying on a normal spinlock to disable preemption as well. This
assumption breaks with PREEMPT_RT, and the breakage was observed on
v4.9 with PREEMPT_RT applied.

This change moves the spinlock so it is taken only around the critical
section that accesses the IRQ structure protected by the lock, and uses
a separate preemption-disabled section to determine the requesting
VCPU. There should be no change in functionality or performance
degradation on non-RT.

Fixes: 370a0ec18199 ("KVM: arm/arm64: Let vcpu thread modify its own active state")
Cc: stable@vger.kernel.org
Cc: Jintack Lim
Reported-by: Hemanth Kumar
Signed-off-by: Christoffer Dall
---
 virt/kvm/arm/vgic/vgic-mmio.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index c1e4bdd..7377f97 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -181,7 +181,6 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 				    bool new_active_state)
 {
 	struct kvm_vcpu *requester_vcpu;
-	spin_lock(&irq->irq_lock);
 
 	/*
 	 * The vcpu parameter here can mean multiple things depending on how
@@ -195,8 +194,19 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 	 * NULL, which is fine, because we guarantee that no VCPUs are running
 	 * when accessing VGIC state from user space so irq->vcpu->cpu is
 	 * always -1.
+	 *
+	 * We have to temporarily disable preemption to read the per-CPU
+	 * variable. It doesn't matter if we actually get preempted
+	 * after enabling preemption because we only need to figure out if
+	 * this thread is a running VCPU thread, and in that case for which
+	 * VCPU. If we're migrated the preempt notifiers will migrate the
+	 * running VCPU pointer with us.
 	 */
+	preempt_disable();
 	requester_vcpu = kvm_arm_get_running_vcpu();
+	preempt_enable();
+
+	spin_lock(&irq->irq_lock);
 
 	/*
 	 * If this virtual IRQ was written into a list register, we
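
For background on why the explicit preemption-disabled section is needed:
with PREEMPT_RT, spin_lock() is backed by a sleeping rt_mutex and no longer
disables preemption, so a per-CPU read done under the lock can race with
migration to another CPU. A minimal kernel-style sketch of the general
pattern follows; it is illustrative only, and the per-CPU variable and
function names in it are made up, not part of this patch:

	#include <linux/percpu.h>
	#include <linux/preempt.h>

	/* Hypothetical per-CPU variable, for illustration only. */
	static DEFINE_PER_CPU(int, example_state);

	static int example_read_percpu(void)
	{
		int val;

		/*
		 * Pin the task to the current CPU for the duration of the
		 * per-CPU read; a PREEMPT_RT spinlock would not do this.
		 */
		preempt_disable();
		val = __this_cpu_read(example_state);
		preempt_enable();

		/* After preempt_enable() the task may migrate again. */
		return val;
	}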