From patchwork Fri Nov 17 16:38:54 2017
X-Patchwork-Submitter: Dave Martin
X-Patchwork-Id: 10063005
From: Dave Martin
To: linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH 3/4] arm64/sve: KVM: Ensure user SVE use traps after vcpu execution
Date: Fri, 17 Nov 2017 16:38:54 +0000
Message-Id: <1510936735-6762-4-git-send-email-Dave.Martin@arm.com>
In-Reply-To: <1510936735-6762-1-git-send-email-Dave.Martin@arm.com>
References: <1510936735-6762-1-git-send-email-Dave.Martin@arm.com>
Cc: Marc Zyngier, Peter Maydell, Alex Bennée, kvmarm@lists.cs.columbia.edu,
    Christoffer Dall

Currently, SVE use can remain untrapped if a KVM vcpu thread is
preempted inside the kernel and we then switch back to some user
thread.

This patch ensures that SVE traps for userspace are enabled before
switching away from the vcpu thread.

In an attempt to preserve some clarity about why and when this is
needed, kvm_fpsimd_flush_cpu_state() is used as a hook for doing this.
This means that this function needs to be called after exiting the
vcpu instead of before entry: this patch moves the call as
appropriate. As a side-effect, this will avoid the call if vcpu entry
is short-circuited by a signal etc.
Signed-off-by: Dave Martin
---
 arch/arm64/kernel/fpsimd.c | 2 ++
 virt/kvm/arm/arm.c         | 6 +++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 3dc8058..3b135eb 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1083,6 +1083,8 @@ void sve_flush_cpu_state(void)

 	if (last->st && last->sve_in_use)
 		fpsimd_flush_cpu_state();
+
+	sve_user_disable();
 }
 #endif /* CONFIG_ARM64_SVE */

diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 772bf74..554b157 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -651,9 +651,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 */
 		preempt_disable();

-		/* Flush FP/SIMD state that can't survive guest entry/exit */
-		kvm_fpsimd_flush_cpu_state();
-
 		kvm_pmu_flush_hwstate(vcpu);

 		local_irq_disable();
@@ -754,6 +751,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		guest_exit();
 		trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));

+		/* Flush FP/SIMD state that can't survive guest entry/exit */
+		kvm_fpsimd_flush_cpu_state();
+
 		preempt_enable();

 		ret = handle_exit(vcpu, run, ret);