From patchwork Fri May 4 16:05:31 2018
From: Dave Martin
To: kvmarm@lists.cs.columbia.edu
Cc: Marc Zyngier, Catalin Marinas, Christoffer Dall,
	linux-arm-kernel@lists.infradead.org, Ard Biesheuvel
Subject: [PATCH v5 10/14] KVM: arm64: Save host SVE context as appropriate
Date: Fri, 4 May 2018 17:05:31 +0100
Message-Id: <1525449935-31424-11-git-send-email-Dave.Martin@arm.com>
In-Reply-To: <1525449935-31424-1-git-send-email-Dave.Martin@arm.com>
References: <1525449935-31424-1-git-send-email-Dave.Martin@arm.com>

This patch adds SVE context saving to the hyp FPSIMD context switch
path.
This means that the host SVE state no longer needs to be saved in
advance of entering the guest when SVE is in use.

In order to avoid adding pointless complexity to the code, VHE is
assumed if SVE is in use.  VHE is an architectural prerequisite for
SVE, so there is no good reason to turn CONFIG_ARM64_VHE off in
kernels that support both SVE and KVM.

Historically, software models exist that can expose the
architecturally invalid configuration of SVE without VHE, so if this
situation is detected this patch warns and refuses to create a VM.
Doing this check at VM creation time avoids race issues between KVM
and SVE initialisation.

Signed-off-by: Dave Martin
Reviewed-by: Christoffer Dall
---
 arch/arm64/Kconfig          |  7 +++++++
 arch/arm64/kvm/fpsimd.c     |  1 -
 arch/arm64/kvm/hyp/switch.c | 21 +++++++++++++++++++--
 virt/kvm/arm/arm.c          | 18 ++++++++++++++++++
 4 files changed, 44 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index eb2cf49..b0d3820 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1130,6 +1130,7 @@ endmenu
 config ARM64_SVE
 	bool "ARM Scalable Vector Extension support"
 	default y
+	depends on !KVM || ARM64_VHE
 	help
 	  The Scalable Vector Extension (SVE) is an extension to the AArch64
 	  execution state which complements and extends the SIMD functionality
@@ -1155,6 +1156,12 @@ config ARM64_SVE
 	  booting the kernel.  If unsure and you are not observing these
 	  symptoms, you should assume that it is safe to say Y.

+	  CPUs that support SVE are architecturally required to support the
+	  Virtualization Host Extensions (VHE), so the kernel makes no
+	  provision for supporting SVE alongside KVM without VHE enabled.
+	  Thus, you will need to enable CONFIG_ARM64_VHE if you want to
+	  support KVM in the same kernel image.
+
 config ARM64_MODULE_PLTS
 	bool
 	select HAVE_MOD_ARCH_SPECIFIC

diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index bbc6889..91ad01f 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -59,7 +59,6 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
  */
 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 {
-	BUG_ON(system_supports_sve());
 	BUG_ON(!current->mm);

 	vcpu->arch.fp_enabled = false;

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 10f55d3..8009126 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -21,12 +21,14 @@
 #include
+#include
 #include
 #include
 #include
 #include
 #include
 #include
+#include
 #include

 static bool __hyp_text update_fp_enabled(struct kvm_vcpu *vcpu)
@@ -328,6 +330,8 @@ static bool __hyp_text __skip_instr(struct kvm_vcpu *vcpu)
 void __hyp_text __hyp_switch_fpsimd(u64 esr __always_unused,
 				    struct kvm_vcpu *vcpu)
 {
+	struct user_fpsimd_state *host_fpsimd = vcpu->arch.host_fpsimd_state;
+
 	if (has_vhe())
 		write_sysreg(read_sysreg(cpacr_el1) | CPACR_EL1_FPEN,
 			     cpacr_el1);
@@ -337,8 +341,21 @@ void __hyp_text __hyp_switch_fpsimd(u64 esr __always_unused,

 	isb();

-	if (vcpu->arch.host_fpsimd_state) {
-		__fpsimd_save_state(vcpu->arch.host_fpsimd_state);
+	if (host_fpsimd) {
+		/*
+		 * In the SVE case, VHE is assumed: it is enforced by
+		 * Kconfig and kvm_arch_init_vm().
+		 */
+		if (system_supports_sve() && vcpu->arch.host_sve_in_use) {
+			struct thread_struct *thread = container_of(
+				host_fpsimd,
+				struct thread_struct, uw.fpsimd_state);
+
+			sve_save_state(sve_pffr(thread), &host_fpsimd->fpsr);
+		} else {
+			__fpsimd_save_state(vcpu->arch.host_fpsimd_state);
+		}
+
 		vcpu->arch.host_fpsimd_state = NULL;
 	}

diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 6cf499b..a7be7bf 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -16,6 +16,7 @@
  * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
 */

+#include
 #include
 #include
 #include
@@ -41,6 +42,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -120,6 +122,22 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	if (type)
 		return -EINVAL;

+	/*
+	 * VHE is a prerequisite for SVE in the Arm architecture, and
+	 * Kconfig ensures that if system_supports_sve() here then
+	 * CONFIG_ARM64_VHE is enabled, so if VHE support wasn't already
+	 * detected and enabled, the CPU is architecturally
+	 * noncompliant.
+	 *
+	 * Just in case this mismatch is seen, detect it, warn and give
+	 * up.  Supporting this forbidden configuration in Hyp would be
+	 * pointless.
+	 */
+	if (system_supports_sve() && !has_vhe()) {
+		kvm_pr_unimpl("Cannot create VMs on SVE system without VHE. Broken cpu?");
+		return -ENXIO;
+	}
+
 	kvm->arch.last_vcpu_ran = alloc_percpu(typeof(*kvm->arch.last_vcpu_ran));
 	if (!kvm->arch.last_vcpu_ran)
 		return -ENOMEM;