From patchwork Mon Dec 4 20:05:00 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10091427
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu
Cc: linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
 Marc Zyngier, Andre Przywara, Eric Auger, Christoffer Dall
Subject: [PATCH v6 2/8] KVM: arm/arm64: Factor out functionality to get vgic
 mmio requester_vcpu
Date: Mon, 4 Dec 2017 21:05:00 +0100
Message-Id: <20171204200506.3224-3-cdall@kernel.org>
In-Reply-To: <20171204200506.3224-1-cdall@kernel.org>
References: <20171204200506.3224-1-cdall@kernel.org>
List-ID: X-Mailing-List: kvm@vger.kernel.org

From: Christoffer Dall

We are about to distinguish between userspace accesses and mmio traps
for a number of the mmio handlers. When the requester vcpu is NULL, it
means we are handling a userspace access.

Factor out the functionality to get the requester vcpu into its own
function, mostly so we have a common place to document the semantics of
the return value.

Also take the chance to move the functionality outside of holding a
spinlock and instead explicitly disable and enable preemption. This
supports PREEMPT_RT kernels as well.
Acked-by: Marc Zyngier
Reviewed-by: Andre Przywara
Signed-off-by: Christoffer Dall
---
 virt/kvm/arm/vgic/vgic-mmio.c | 44 +++++++++++++++++++++++++++----------------
 1 file changed, 28 insertions(+), 16 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index deb51ee16a3d..747b0a3b4784 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -122,6 +122,27 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
 	return value;
 }
 
+/*
+ * This function will return the VCPU that performed the MMIO access and
+ * trapped from within the VM, and will return NULL if this is a userspace
+ * access.
+ *
+ * We can disable preemption locally around accessing the per-CPU variable,
+ * and use the resolved vcpu pointer after enabling preemption again, because
+ * even if the current thread is migrated to another CPU, reading the per-CPU
+ * value later will give us the same value as we update the per-CPU variable
+ * in the preempt notifier handlers.
+ */
+static struct kvm_vcpu *vgic_get_mmio_requester_vcpu(void)
+{
+	struct kvm_vcpu *vcpu;
+
+	preempt_disable();
+	vcpu = kvm_arm_get_running_vcpu();
+	preempt_enable();
+	return vcpu;
+}
+
 void vgic_mmio_write_spending(struct kvm_vcpu *vcpu,
 			      gpa_t addr, unsigned int len,
 			      unsigned long val)
@@ -184,24 +205,10 @@ unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
 static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 				    bool new_active_state)
 {
-	struct kvm_vcpu *requester_vcpu;
 	unsigned long flags;
-	spin_lock_irqsave(&irq->irq_lock, flags);
+	struct kvm_vcpu *requester_vcpu = vgic_get_mmio_requester_vcpu();
 
-	/*
-	 * The vcpu parameter here can mean multiple things depending on how
-	 * this function is called; when handling a trap from the kernel it
-	 * depends on the GIC version, and these functions are also called as
-	 * part of save/restore from userspace.
-	 *
-	 * Therefore, we have to figure out the requester in a reliable way.
-	 *
-	 * When accessing VGIC state from user space, the requester_vcpu is
-	 * NULL, which is fine, because we guarantee that no VCPUs are running
-	 * when accessing VGIC state from user space so irq->vcpu->cpu is
-	 * always -1.
-	 */
-	requester_vcpu = kvm_arm_get_running_vcpu();
+	spin_lock_irqsave(&irq->irq_lock, flags);
 
 	/*
 	 * If this virtual IRQ was written into a list register, we
@@ -213,6 +220,11 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 	 * vgic_change_active_prepare) and still has to sync back this IRQ,
 	 * so we release and re-acquire the spin_lock to let the other thread
 	 * sync back the IRQ.
+	 *
+	 * When accessing VGIC state from user space, requester_vcpu is
+	 * NULL, which is fine, because we guarantee that no VCPUs are running
+	 * when accessing VGIC state from user space so irq->vcpu->cpu is
+	 * always -1.
 	 */
 	while (irq->vcpu && /* IRQ may have state in an LR somewhere */
 	       irq->vcpu != requester_vcpu && /* Current thread is not the VCPU thread */