From patchwork Wed Mar 21 17:47:31 2018
X-Patchwork-Submitter: Dave Martin
X-Patchwork-Id: 10299955
From: Dave Martin
To: kvmarm@lists.cs.columbia.edu
Subject: [RFC PATCH 2/4] KVM: arm64: Convert lazy FPSIMD context switch trap to C
Date: Wed, 21 Mar 2018 17:47:31 +0000
Message-Id: <1521654453-3837-3-git-send-email-Dave.Martin@arm.com>
In-Reply-To: <1521654453-3837-1-git-send-email-Dave.Martin@arm.com>
References: <1521654453-3837-1-git-send-email-Dave.Martin@arm.com>
Cc: Marc Zyngier, Christoffer Dall, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel

To make the lazy FPSIMD context switch trap code easier to hack on,
this patch converts it to C.

This is not amazingly efficient, but the trap should typically only be
taken once per host context switch.

Signed-off-by: Dave Martin
---
This patch adds the missing ! to ensure that the hyp stack pointer is
actually moved down when pushing regs in preparation for calling
__hyp_switch_fpsimd.  Without it, sp is never adjusted, so the
subsequent register saves and the C call clobber the live hyp stack.
---
 arch/arm64/kvm/hyp/entry.S  | 57 +++++++++++++++++----------------------------
 arch/arm64/kvm/hyp/switch.c | 24 +++++++++++++++++++
 2 files changed, 46 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index fdd1068..47c6a78 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -176,41 +176,28 @@ ENTRY(__fpsimd_guest_restore)
 	// x1: vcpu
 	// x2-x29,lr: vcpu regs
 	// vcpu x0-x1 on the stack
-	stp	x2, x3, [sp, #-16]!
-	stp	x4, lr, [sp, #-16]!
-
-alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
-	mrs	x2, cptr_el2
-	bic	x2, x2, #CPTR_EL2_TFP
-	msr	cptr_el2, x2
-alternative_else
-	mrs	x2, cpacr_el1
-	orr	x2, x2, #CPACR_EL1_FPEN
-	msr	cpacr_el1, x2
-alternative_endif
-	isb
-
-	mov	x3, x1
-
-	ldr	x0, [x3, #VCPU_HOST_CONTEXT]
-	kern_hyp_va x0
-	add	x0, x0, #CPU_GP_REG_OFFSET(CPU_FP_REGS)
-	bl	__fpsimd_save_state
-
-	add	x2, x3, #VCPU_CONTEXT
-	add	x0, x2, #CPU_GP_REG_OFFSET(CPU_FP_REGS)
-	bl	__fpsimd_restore_state
-
-	// Skip restoring fpexc32 for AArch64 guests
-	mrs	x1, hcr_el2
-	tbnz	x1, #HCR_RW_SHIFT, 1f
-	ldr	x4, [x3, #VCPU_FPEXC32_EL2]
-	msr	fpexc32_el2, x4
-1:
-	ldp	x4, lr, [sp], #16
-	ldp	x2, x3, [sp], #16
-	ldp	x0, x1, [sp], #16
-
+	stp	x2, x3, [sp, #-144]!
+	stp	x4, x5, [sp, #16]
+	stp	x6, x7, [sp, #32]
+	stp	x8, x9, [sp, #48]
+	stp	x10, x11, [sp, #64]
+	stp	x12, x13, [sp, #80]
+	stp	x14, x15, [sp, #96]
+	stp	x16, x17, [sp, #112]
+	stp	x18, lr, [sp, #128]
+
+	bl	__hyp_switch_fpsimd
+
+	ldp	x4, x5, [sp, #16]
+	ldp	x6, x7, [sp, #32]
+	ldp	x8, x9, [sp, #48]
+	ldp	x10, x11, [sp, #64]
+	ldp	x12, x13, [sp, #80]
+	ldp	x14, x15, [sp, #96]
+	ldp	x16, x17, [sp, #112]
+	ldp	x18, lr, [sp, #128]
+	ldp	x0, x1, [sp, #144]
+	ldp	x2, x3, [sp], #160
 	eret
 ENDPROC(__fpsimd_guest_restore)
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 24f52fe..01e4254 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -436,6 +436,30 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	return exit_code;
 }
 
+void __hyp_text __hyp_switch_fpsimd(u64 esr __always_unused,
+				    struct kvm_vcpu *vcpu)
+{
+	kvm_cpu_context_t *host_ctxt;
+
+	if (has_vhe())
+		write_sysreg(read_sysreg(cpacr_el1) | CPACR_EL1_FPEN,
+			     cpacr_el1);
+	else
+		write_sysreg(read_sysreg(cptr_el2) & ~(u64)CPTR_EL2_TFP,
+			     cptr_el2);
+
+	isb();
+
+	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+	__fpsimd_save_state(&host_ctxt->gp_regs.fp_regs);
+	__fpsimd_restore_state(&vcpu->arch.ctxt.gp_regs.fp_regs);
+
+	/* Skip restoring fpexc32 for AArch64 guests */
+	if (!(read_sysreg(hcr_el2) & HCR_RW))
+		write_sysreg(vcpu->arch.ctxt.sys_regs[FPEXC32_EL2],
+			     fpexc32_el2);
+}
+
 static const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n";
 
 static void __hyp_text __hyp_call_panic_nvhe(u64 spsr, u64 elr, u64 par,
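
[Editorial note on the missing-'!' fix described above; the sketch below is
illustrative only and not part of the patch.  It spells out the standard
AArch64 pre-indexed addressing semantics that the fix relies on:

	// With write-back ('!'): sp is decremented before the stores, so
	// the 144-byte frame is allocated and x2/x3 land at its base.
	stp	x2, x3, [sp, #-144]!	// sp -= 144; [sp] = x2; [sp, #8] = x3
	stp	x4, x5, [sp, #16]	// lands inside the new frame

	// Without write-back (the bug): the stores go below sp but sp is
	// left unchanged, so the later [sp, #16]... stores and the
	// bl __hyp_switch_fpsimd overwrite the live stack above sp,
	// including the x0/x1 pair saved by the hyp vector entry.
	stp	x2, x3, [sp, #-144]

The matching epilogue then reads x0/x1 back from [sp, #144] (the slot the
vector entry pushed just above the new frame) and the final
ldp x2, x3, [sp], #160 pops the whole 144 + 16 bytes in one go.]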