From patchwork Fri Feb 16 18:29:31 2018
From: Dave Martin <Dave.Martin@arm.com>
To: kvmarm@lists.cs.columbia.edu
Subject: [RFC PATCH 2/2] KVM: arm64: Eliminate most redundant FPSIMD saves and restores
Date: Fri, 16 Feb 2018 18:29:31 +0000
Message-Id: <1518805771-15346-3-git-send-email-Dave.Martin@arm.com>
In-Reply-To: <1518805771-15346-1-git-send-email-Dave.Martin@arm.com>
References: <1518805771-15346-1-git-send-email-Dave.Martin@arm.com>
X-Patchwork-Id: 10225335
Cc: Marc Zyngier, Christoffer Dall, linux-arm-kernel@lists.infradead.org,
    Ard Biesheuvel

Currently, KVM doesn't know how host tasks interact with the cpu FPSIMD
regs, and the host doesn't know how vcpus interact with the regs.  As a
result, KVM must currently switch the FPSIMD state rather defensively
in order to avoid anybody's state getting corrupted: in particular, the
host and guest FPSIMD state must be fully swapped on each iteration of
the run loop.

This patch integrates KVM more closely with the host FPSIMD context
switch machinery, to enable better tracking of whose state is in the
FPSIMD regs.  This brings some advantages: KVM can tell whether the
host has any live state in the regs and can avoid saving them if not;
also, KVM can tell when and if the host clobbers the vcpu state in the
regs, to avoid reloading them before reentering the guest.

As well as avoiding the host state being unnecessarily saved, this
should also mean that the vcpu state can survive context switch when
there is no kernel-mode NEON use and no entry to userspace, such as
when ancillary kernel threads preempt a vcpu.

This patch cannot eliminate the need to save the guest context before
enabling interrupts, because softirqs may use kernel-mode NEON and
trash the vcpu regs.  However, provided that doesn't happen, the reload
cost is at least saved on the next run loop iteration.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
---
Caveat: this does *not* currently deal properly with host SVE state,
though supporting that shouldn't be drastically different.
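As an aside for reviewers, the ownership-tracking policy described
above can be summarised with a minimal standalone sketch.  This is
illustrative only: struct vcpu_model, need_fpsimd_trap() and
handle_fpsimd_trap() are invented for the example, and only the
guest_fpsimd_loaded / host_fpsimd_state fields correspond (loosely)
to what the patch actually adds.

    /* Illustrative model of lazy FPSIMD switching; not kernel code. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct fpsimd_regs { int dummy; };

    struct vcpu_model {
            bool guest_fpsimd_loaded;       /* guest state live in the regs? */
            struct fpsimd_regs *host_state; /* non-NULL: host state still needs saving */
    };

    /* Guest entry: only trap FPSIMD if the guest's state is not already live. */
    static bool need_fpsimd_trap(const struct vcpu_model *v)
    {
            return !v->guest_fpsimd_loaded;
    }

    /* First guest FPSIMD access: save host state lazily, then load guest state. */
    static void handle_fpsimd_trap(struct vcpu_model *v)
    {
            if (v->host_state) {
                    printf("saving host FPSIMD state\n");
                    v->host_state = NULL;   /* saved at most once */
            }
            printf("loading guest FPSIMD state\n");
            v->guest_fpsimd_loaded = true;
    }

    int main(void)
    {
            struct fpsimd_regs host;
            struct vcpu_model v = { .guest_fpsimd_loaded = false,
                                    .host_state = &host };

            /* Run loop iteration 1: guest touches FPSIMD, trap fires once. */
            if (need_fpsimd_trap(&v))
                    handle_fpsimd_trap(&v);

            /* Iteration 2: nothing clobbered the regs, so no trap and no reload. */
            if (need_fpsimd_trap(&v))
                    handle_fpsimd_trap(&v);
            else
                    printf("guest state still live: skipping save/restore\n");

            return 0;
    }

The point of the policy is that the host save happens at most once per
trap, and the guest reload is skipped entirely while nothing on the
host side has clobbered the regs.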
---
 arch/arm64/include/asm/fpsimd.h      |  1 +
 arch/arm64/include/asm/kvm_host.h    | 10 +++++++-
 arch/arm64/include/asm/thread_info.h |  1 +
 arch/arm64/include/uapi/asm/kvm.h    | 14 +++++-----
 arch/arm64/kernel/fpsimd.c           |  7 ++++-
 arch/arm64/kvm/hyp/switch.c          | 21 +++++++++------
 virt/kvm/arm/arm.c                   | 50 ++++++++++++++++++++++++++++++++++++
 7 files changed, 88 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index f4ce4d6..1f78631 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -76,6 +76,7 @@ extern void fpsimd_preserve_current_state(void);
 extern void fpsimd_restore_current_state(void);
 extern void fpsimd_update_current_state(struct user_fpsimd_state const *state);
 
+extern void fpsimd_flush_state(struct fpsimd_state *state);
 extern void fpsimd_flush_task_state(struct task_struct *target);
 extern void sve_flush_cpu_state(void);
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index b463b5e..95ffb54 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -192,7 +192,13 @@ enum vcpu_sysreg {
 #define NR_COPRO_REGS	(NR_SYS_REGS * 2)
 
 struct kvm_cpu_context {
-	struct kvm_regs	gp_regs;
+	union {
+		struct kvm_regs	gp_regs;
+		struct {
+			__KVM_REGS_COMMON
+			struct fpsimd_state fpsimd_state;
+		};
+	};
 	union {
 		u64 sys_regs[NR_SYS_REGS];
 		u32 copro[NR_COPRO_REGS];
@@ -235,6 +241,8 @@ struct kvm_vcpu_arch {
 
 	/* Pointer to host CPU context */
 	kvm_cpu_context_t *host_cpu_context;
+	struct user_fpsimd_state *host_fpsimd_state;	/* hyp va */
+	bool guest_fpsimd_loaded;
 
 	struct {
 		/* {Break,watch}point registers */
 		struct kvm_guest_debug_arch regs;
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 740aa03c..9f1fa1a 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -94,6 +94,7 @@ void arch_release_task_struct(struct task_struct *tsk);
 #define TIF_32BIT		22	/* 32bit process */
 #define TIF_SVE			23	/* Scalable Vector Extension in use */
 #define TIF_SVE_VL_INHERIT	24	/* Inherit sve_vl_onexec across exec */
+#define TIF_MAPPED_TO_HYP	25	/* task_struct mapped to Hyp (KVM) */
 
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 9abbf30..c3392d2 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -45,14 +45,16 @@
 #define KVM_REG_SIZE(id)						\
 	(1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
 
-struct kvm_regs {
-	struct user_pt_regs regs;	/* sp = sp_el0 */
-
-	__u64	sp_el1;
-	__u64	elr_el1;
-
+#define __KVM_REGS_COMMON					\
+	struct user_pt_regs regs;	/* sp = sp_el0 */	\
+								\
+	__u64	sp_el1;						\
+	__u64	elr_el1;					\
+								\
 	__u64	spsr[KVM_NR_SPSR];
 
+struct kvm_regs {
+	__KVM_REGS_COMMON
 	struct user_fpsimd_state fp_regs;
 };
 
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 138efaf..c46e11f 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1073,12 +1073,17 @@ void fpsimd_update_current_state(struct user_fpsimd_state const *state)
 	local_bh_enable();
 }
 
+void fpsimd_flush_state(struct fpsimd_state *st)
+{
+	st->cpu = NR_CPUS;
+}
+
 /*
  * Invalidate live CPU copies of task t's FPSIMD state
  */
 void fpsimd_flush_task_state(struct task_struct *t)
 {
-	t->thread.fpsimd_state.cpu = NR_CPUS;
+	fpsimd_flush_state(&t->thread.fpsimd_state);
 }
 
 static inline void fpsimd_flush_cpu_state(void)
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index a0a63bc..b88e83f 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -91,7 +91,11 @@ static inline void activate_traps_vhe(struct kvm_vcpu *vcpu)
 
 	val = read_sysreg(cpacr_el1);
 	val |= CPACR_EL1_TTA;
-	val &= ~(CPACR_EL1_FPEN | CPACR_EL1_ZEN);
+
+	val &= ~CPACR_EL1_ZEN;
+	if (!vcpu->arch.guest_fpsimd_loaded)
+		val &= ~CPACR_EL1_FPEN;
+
 	write_sysreg(val, cpacr_el1);
 
 	write_sysreg(kvm_get_hyp_vector(), vbar_el1);
@@ -104,7 +108,10 @@ static inline void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu)
 	__activate_traps_common(vcpu);
 
 	val = CPTR_EL2_DEFAULT;
-	val |= CPTR_EL2_TTA | CPTR_EL2_TFP | CPTR_EL2_TZ;
+	val |= CPTR_EL2_TTA | CPTR_EL2_TZ;
+	if (!vcpu->arch.guest_fpsimd_loaded)
+		val |= CPTR_EL2_TFP;
+
 	write_sysreg(val, cptr_el2);
 }
 
@@ -423,7 +430,6 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 
 	if (fp_enabled) {
 		__fpsimd_save_state(&guest_ctxt->gp_regs.fp_regs);
-		__fpsimd_restore_state(&host_ctxt->gp_regs.fp_regs);
 		__fpsimd_save_fpexc32(vcpu);
 	}
 
@@ -491,7 +497,6 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 
 	if (fp_enabled) {
 		__fpsimd_save_state(&guest_ctxt->gp_regs.fp_regs);
-		__fpsimd_restore_state(&host_ctxt->gp_regs.fp_regs);
 		__fpsimd_save_fpexc32(vcpu);
 	}
 
@@ -507,8 +512,6 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 void __hyp_text __hyp_switch_fpsimd(u64 esr __always_unused,
 				    struct kvm_vcpu *vcpu)
 {
-	kvm_cpu_context_t *host_ctxt;
-
 	if (has_vhe())
 		write_sysreg(read_sysreg(cpacr_el1) | CPACR_EL1_FPEN,
 			     cpacr_el1);
@@ -518,9 +521,11 @@ void __hyp_text __hyp_switch_fpsimd(u64 esr __always_unused,
 
 	isb();
 
-	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
-	__fpsimd_save_state(&host_ctxt->gp_regs.fp_regs);
+	if (vcpu->arch.host_fpsimd_state)
+		__fpsimd_save_state(vcpu->arch.host_fpsimd_state);
+
 	__fpsimd_restore_state(&vcpu->arch.ctxt.gp_regs.fp_regs);
+	vcpu->arch.guest_fpsimd_loaded = true;
 
 	/* Skip restoring fpexc32 for AArch64 guests */
 	if (!(read_sysreg(hcr_el2) & HCR_RW))
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 6de7641..0330e1f 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -329,6 +329,10 @@ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
 
 int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 {
+	/* Mark this vcpu's FPSIMD state as non-live initially: */
+	fpsimd_flush_state(&vcpu->arch.ctxt.fpsimd_state);
+	vcpu->arch.guest_fpsimd_loaded = false;
+
 	/* Force users to call KVM_ARM_VCPU_INIT */
 	vcpu->arch.target = -1;
 	bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
@@ -631,6 +635,9 @@ static void check_vcpu_requests(struct kvm_vcpu *vcpu)
 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
 	int ret;
+	struct fpsimd_state *guest_fpsimd = &vcpu->arch.ctxt.fpsimd_state;
+	struct user_fpsimd_state *host_fpsimd =
+		&current->thread.fpsimd_state.user_fpsimd;
 
 	if (unlikely(!kvm_vcpu_initialized(vcpu)))
 		return -ENOEXEC;
@@ -650,6 +657,17 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	if (run->immediate_exit)
 		return -EINTR;
 
+	WARN_ON(!current->mm);
+
+	if (!test_thread_flag(TIF_MAPPED_TO_HYP)) {
+		ret = create_hyp_mappings(host_fpsimd, host_fpsimd + 1,
+					  PAGE_HYP);
+		if (ret)
+			return ret;
+
+		set_thread_flag(TIF_MAPPED_TO_HYP);
+	}
+
 	vcpu_load(vcpu);
 
 	kvm_sigset_activate(vcpu);
@@ -680,6 +698,23 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 		local_irq_disable();
 
+		/*
+		 * host_fpsimd_state indicates to hyp that there is host state
+		 * to save, and where to save it:
+		 */
+		if (test_thread_flag(TIF_FOREIGN_FPSTATE))
+			vcpu->arch.host_fpsimd_state = NULL;
+		else
+			vcpu->arch.host_fpsimd_state = kern_hyp_va(host_fpsimd);
+
+		vcpu->arch.guest_fpsimd_loaded =
+			!fpsimd_foreign_fpstate(guest_fpsimd);
+
+		BUG_ON(system_supports_sve());
+
+		BUG_ON(vcpu->arch.guest_fpsimd_loaded &&
+		       vcpu->arch.host_fpsimd_state);
+
 		kvm_vgic_flush_hwstate(vcpu);
 
 		/*
@@ -774,6 +809,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		if (static_branch_unlikely(&userspace_irqchip_in_use))
 			kvm_timer_sync_hwstate(vcpu);
 
+		/* defend against kernel-mode NEON in softirq */
+		local_bh_disable();
+
 		/*
 		 * We may have taken a host interrupt in HYP mode (ie
 		 * while executing the guest). This interrupt is still
@@ -786,6 +824,18 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 */
 		local_irq_enable();
 
+		if (vcpu->arch.guest_fpsimd_loaded) {
+			set_thread_flag(TIF_FOREIGN_FPSTATE);
+			fpsimd_bind_state_to_cpu(&vcpu->arch.ctxt.fpsimd_state);
+
+			/*
+			 * Protect ourselves against a softirq splatting the
+			 * FPSIMD state once irqs are enabled:
+			 */
+			fpsimd_save_state(guest_fpsimd);
+		}
+		local_bh_enable();
+
 		/*
 		 * We do local_irq_enable() before calling guest_exit() so
 		 * that if a timer interrupt hits while running the guest we