From patchwork Thu Feb 15 21:03:05 2018
From: Christoffer Dall <christoffer.dall@linaro.org>
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: kvm@vger.kernel.org, Christoffer Dall, Marc Zyngier, Andrew Jones,
    Shih-Wei Li, Dave Martin, Julien Grall, Tomasz Nowicki, Yury Norov
Subject: [PATCH v4 13/40] KVM: arm64: Introduce VHE-specific kvm_vcpu_run
Date: Thu, 15 Feb 2018 22:03:05 +0100
Message-Id: <20180215210332.8648-14-christoffer.dall@linaro.org>
In-Reply-To: <20180215210332.8648-1-christoffer.dall@linaro.org>
References: <20180215210332.8648-1-christoffer.dall@linaro.org>

So far this is mostly (see below) a copy of the legacy non-VHE switch
function, but later patches will rework the two functions in separate
directions so that each handles VHE and non-VHE optimally.

After this patch, the only difference between the VHE and non-VHE run
functions is that the VHE version omits the branch-predictor variant-2
hardening for Qualcomm Falkor CPUs, because that workaround is specific
to a series of non-VHE ARMv8.0 CPUs.

Reviewed-by: Marc Zyngier
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---

Notes:
    Changes since v3:
     - Added BUG() to 32-bit ARM VHE run function
     - Omitted QC Falkor BP Hardening functionality from VHE-specific
       function

    Changes since v2:
     - Reworded commit message

    Changes since v1:
     - Rename kvm_vcpu_run to kvm_vcpu_run_vhe and rename __kvm_vcpu_run
       to __kvm_vcpu_run_nvhe
     - Removed stray whitespace line

 arch/arm/include/asm/kvm_asm.h   |  5 ++-
 arch/arm/kvm/hyp/switch.c        |  2 +-
 arch/arm64/include/asm/kvm_asm.h |  4 ++-
 arch/arm64/kvm/hyp/switch.c      | 66 +++++++++++++++++++++++++++++++++++++++-
 virt/kvm/arm/arm.c               |  5 ++-
 5 files changed, 77 insertions(+), 5 deletions(-)

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 36dd2962a42d..5a953ecb0d78 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -70,7 +70,10 @@ extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
 
 extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
 
-extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
+/* no VHE on 32-bit :( */
+static inline int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) { BUG(); return 0; }
+
+extern int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu);
 
 extern void __init_stage2_translation(void);
 
diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
index e86679daddff..aac025783ee8 100644
--- a/arch/arm/kvm/hyp/switch.c
+++ b/arch/arm/kvm/hyp/switch.c
@@ -154,7 +154,7 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
 	return true;
 }
 
-int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
 
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 6b626750b0a1..0be2747a6c5f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -58,7 +58,9 @@ extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
 
 extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
 
-extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
+extern int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu);
+
+extern int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu);
 
 extern u64 __vgic_v3_get_ich_vtr_el2(void);
 extern u64 __vgic_v3_read_vmcr(void);
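[Editor's note: the 32-bit header hunk above gives kvm_vcpu_run_vhe() a
static inline body that BUG()s rather than leaving it undeclared, so common
code can reference both entry points without #ifdefs; only the non-VHE path
can ever be taken on 32-bit ARM. Below is a minimal, self-contained
user-space sketch of that pattern, under the assumption that an assert(0)
stands in for BUG(); the names platform_has_vhe, run_vhe and run_nvhe are
hypothetical stand-ins, not kernel APIs.]

#include <assert.h>
#include <stdbool.h>

/* Stand-in for has_vhe(): this platform never has VHE. */
static bool platform_has_vhe(void) { return false; }

/*
 * Unreachable stub, analogous to the BUG() stub above: it exists only so
 * callers compile without #ifdefs, and it fails loudly if ever reached.
 */
static inline int run_vhe(void) { assert(0); return 0; }

/* The only path actually taken on this platform. */
static int run_nvhe(void) { return 0; /* pretend exit code */ }

static int vcpu_run(void)
{
	/* Decided at run time, but run_vhe() can never fire here. */
	return platform_has_vhe() ? run_vhe() : run_nvhe();
}

int main(void)
{
	return vcpu_run();
}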
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index d2c0b1ae3216..b6126af539b6 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -362,7 +362,71 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 	return false;
 }
 
-int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+/* Switch to the guest for VHE systems running in EL2 */
+int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpu_context *host_ctxt;
+	struct kvm_cpu_context *guest_ctxt;
+	bool fp_enabled;
+	u64 exit_code;
+
+	vcpu = kern_hyp_va(vcpu);
+
+	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+	host_ctxt->__hyp_running_vcpu = vcpu;
+	guest_ctxt = &vcpu->arch.ctxt;
+
+	__sysreg_save_host_state(host_ctxt);
+
+	__activate_traps(vcpu);
+	__activate_vm(vcpu);
+
+	__vgic_restore_state(vcpu);
+	__timer_enable_traps(vcpu);
+
+	/*
+	 * We must restore the 32-bit state before the sysregs, thanks
+	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
+	 */
+	__sysreg32_restore_state(vcpu);
+	__sysreg_restore_guest_state(guest_ctxt);
+	__debug_switch_to_guest(vcpu);
+
+	do {
+		/* Jump in the fire! */
+		exit_code = __guest_enter(vcpu, host_ctxt);
+
+		/* And we're baaack! */
+	} while (fixup_guest_exit(vcpu, &exit_code));
+
+	fp_enabled = __fpsimd_enabled();
+
+	__sysreg_save_guest_state(guest_ctxt);
+	__sysreg32_save_state(vcpu);
+	__timer_disable_traps(vcpu);
+	__vgic_save_state(vcpu);
+
+	__deactivate_traps(vcpu);
+	__deactivate_vm(vcpu);
+
+	__sysreg_restore_host_state(host_ctxt);
+
+	if (fp_enabled) {
+		__fpsimd_save_state(&guest_ctxt->gp_regs.fp_regs);
+		__fpsimd_restore_state(&host_ctxt->gp_regs.fp_regs);
+	}
+
+	/*
+	 * This must come after restoring the host sysregs, since a non-VHE
+	 * system may enable SPE here and make use of the TTBRs.
+	 */
+	__debug_switch_to_host(vcpu);
+
+	return exit_code;
+}
+
+/* Switch to the guest for legacy non-VHE systems */
+int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 2062d9357971..5bd879c78951 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -736,7 +736,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		if (has_vhe())
 			kvm_arm_vhe_guest_enter();
 
-		ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);
+		if (has_vhe())
+			ret = kvm_vcpu_run_vhe(vcpu);
+		else
+			ret = kvm_call_hyp(__kvm_vcpu_run_nvhe, vcpu);
 
 		if (has_vhe())
 			kvm_arm_vhe_guest_exit();
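[Editor's note: to make the structure that both run functions share easier
to follow, here is a minimal, self-contained user-space sketch of the
world-switch control flow: save host state, enter the guest in a loop until
fixup_guest_exit() declines to handle the exit in place, then restore host
state. Every name in this sketch (guest_enter, handle_exit_inline, struct
vcpu, EXIT_*) is a hypothetical stand-in for illustration, not the kernel's
API.]

#include <stdbool.h>
#include <stdio.h>

enum exit_reason { EXIT_IRQ = 1, EXIT_MMIO = 2 };

struct vcpu {
	int traps_left;	/* how many times the fake guest will trap */
};

/* Stand-in for __guest_enter(): run the guest until it traps back to us. */
static unsigned long guest_enter(struct vcpu *vcpu)
{
	return vcpu->traps_left-- > 1 ? EXIT_IRQ : EXIT_MMIO;
}

/*
 * Stand-in for fixup_guest_exit(): return true if the exit was handled
 * here and the guest can be re-entered immediately, false if the host
 * run loop must see it (e.g. MMIO to be emulated elsewhere).
 */
static bool handle_exit_inline(struct vcpu *vcpu, unsigned long *exit_code)
{
	return *exit_code == EXIT_IRQ;
}

static unsigned long vcpu_run(struct vcpu *vcpu)
{
	unsigned long exit_code;

	/* ... save host state, activate traps, restore guest state ... */

	do {
		exit_code = guest_enter(vcpu);
	} while (handle_exit_inline(vcpu, &exit_code));

	/* ... save guest state, deactivate traps, restore host state ... */

	return exit_code;
}

int main(void)
{
	struct vcpu v = { .traps_left = 3 };
	printf("guest exited with code %lu\n", vcpu_run(&v));
	return 0;
}

[The do/while shape keeps cheap re-entries (exits the hypervisor can handle
itself) inside the switch code instead of bouncing through the full host
run loop on every trap.]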