From patchwork Thu Aug 20 10:34:32 2020
X-Patchwork-Submitter: Andrew Scull
X-Patchwork-Id: 11726021
Date: Thu, 20 Aug 2020 11:34:32 +0100
In-Reply-To: <20200820103446.959000-1-ascull@google.com>
Message-Id: <20200820103446.959000-7-ascull@google.com>
References: <20200820103446.959000-1-ascull@google.com>
X-Mailer: git-send-email 2.28.0.220.ged08abb693-goog
Subject: [PATCH v2 06/20] KVM: arm64: nVHE: Use separate vector for the host
From: Andrew Scull
To: kvmarm@lists.cs.columbia.edu
Cc: linux-arm-kernel@lists.infradead.org, kernel-team@android.com,
	suzuki.poulose@arm.com, maz@kernel.org, Sudeep Holla,
	james.morse@arm.com, Andrew Scull, catalin.marinas@arm.com,
	will@kernel.org, julien.thierry.kdev@gmail.com

The host is treated differently from the guests when an exception is
taken, so introduce a separate vector that is specialized for the host.
This also allows the nVHE-specific code to move out of hyp-entry.S and
into nvhe/host.S.

The host is only expected to make HVC calls; anything else is considered
invalid and results in a panic.

Hyp initialization is now passed the vector that is used for the host,
and it is swapped for the guest vector during the context switch.
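To summarise the intended flow (an illustrative sketch only, using the
symbols introduced by this patch; it is not itself part of the code):

  cpu_init_hyp_mode():   VBAR_EL2 <- __kvm_hyp_host_vector  (host vector)
  host issues HVC        -> valid_host_el1_sync_vect; any other exception
                            class taken from the host ends in hyp_panic
  __activate_traps():    VBAR_EL2 <- kvm_hyp_vector         (guest vector)
  ... guest runs ...
  __deactivate_traps():  VBAR_EL2 <- __kvm_hyp_host_vector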
Signed-off-by: Andrew Scull
---
 arch/arm64/include/asm/kvm_asm.h |   2 +
 arch/arm64/kernel/image-vars.h   |   1 +
 arch/arm64/kvm/arm.c             |  11 +++-
 arch/arm64/kvm/hyp/hyp-entry.S   |  65 -------------------
 arch/arm64/kvm/hyp/nvhe/Makefile |   2 +-
 arch/arm64/kvm/hyp/nvhe/host.S   | 104 +++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/switch.c |   3 +
 7 files changed, 121 insertions(+), 67 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/host.S

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 413911d6446a..81f29a2c361a 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -98,10 +98,12 @@ struct kvm_vcpu;
 struct kvm_s2_mmu;
 
 DECLARE_KVM_NVHE_SYM(__kvm_hyp_init);
+DECLARE_KVM_NVHE_SYM(__kvm_hyp_host_vector);
 DECLARE_KVM_HYP_SYM(__kvm_hyp_vector);
 
 #ifndef __KVM_NVHE_HYPERVISOR__
 #define __kvm_hyp_init		CHOOSE_NVHE_SYM(__kvm_hyp_init)
+#define __kvm_hyp_host_vector	CHOOSE_NVHE_SYM(__kvm_hyp_host_vector)
 #define __kvm_hyp_vector	CHOOSE_HYP_SYM(__kvm_hyp_vector)
 #endif
 
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 9e897c500237..7e93b0c426d4 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -71,6 +71,7 @@ KVM_NVHE_ALIAS(kvm_update_va_mask);
 /* Global kernel state accessed by nVHE hyp code. */
 KVM_NVHE_ALIAS(arm64_ssbd_callback_required);
 KVM_NVHE_ALIAS(kvm_host_data);
+KVM_NVHE_ALIAS(kvm_hyp_vector);
 KVM_NVHE_ALIAS(kvm_vgic_global_state);
 
 /* Kernel constant needed to compute idmap addresses. */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index d779814f0da6..c6f76fdabecb 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1277,7 +1277,7 @@ static void cpu_init_hyp_mode(void)
 
 	pgd_ptr = kvm_mmu_get_httbr();
 	hyp_stack_ptr = __this_cpu_read(kvm_arm_hyp_stack_page) + PAGE_SIZE;
-	vector_ptr = __this_cpu_read(kvm_hyp_vector);
+	vector_ptr = (unsigned long)kern_hyp_va(kvm_ksym_ref(__kvm_hyp_host_vector));
 
 	/*
 	 * Call initialization code, and switch to the full blown HYP code.
@@ -1542,6 +1542,7 @@ static int init_hyp_mode(void)
 
 	for_each_possible_cpu(cpu) {
 		struct kvm_host_data *cpu_data;
+		unsigned long *vector;
 
 		cpu_data = per_cpu_ptr(&kvm_host_data, cpu);
 		err = create_hyp_mappings(cpu_data, cpu_data + 1, PAGE_HYP);
@@ -1550,6 +1551,14 @@ static int init_hyp_mode(void)
 			kvm_err("Cannot map host CPU state: %d\n", err);
 			goto out_err;
 		}
+
+		vector = per_cpu_ptr(&kvm_hyp_vector, cpu);
+		err = create_hyp_mappings(vector, vector + 1, PAGE_HYP);
+
+		if (err) {
+			kvm_err("Cannot map hyp guest vector address\n");
+			goto out_err;
+		}
 	}
 
 	err = hyp_map_aux_data();
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index b0923e9f9291..6e14873680a8 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -12,25 +12,10 @@
 #include
 #include
 #include
-#include
 #include
 
 	.text
 
-.macro do_el2_call
-	/*
-	 * Shuffle the parameters before calling the function
-	 * pointed to in x0. Assumes parameters in x[1,2,3].
-	 */
-	str	lr, [sp, #-16]!
-	mov	lr, x0
-	mov	x0, x1
-	mov	x1, x2
-	mov	x2, x3
-	blr	lr
-	ldr	lr, [sp], #16
-.endm
-
 el1_sync:				// Guest trapped into EL2
 	mrs	x0, esr_el2
@@ -39,43 +24,6 @@ el1_sync:				// Guest trapped into EL2
 	ccmp	x0, #ESR_ELx_EC_HVC32, #4, ne
 	b.ne	el1_trap
 
-#ifdef __KVM_NVHE_HYPERVISOR__
-	mrs	x1, vttbr_el2		// If vttbr is valid, the guest
-	cbnz	x1, el1_hvc_guest	// called HVC
-
-	/* Here, we're pretty sure the host called HVC. */
-	ldp	x0, x1, [sp], #16
-
-	/* Check for a stub HVC call */
-	cmp	x0, #HVC_STUB_HCALL_NR
-	b.hs	1f
-
-	/*
-	 * Compute the idmap address of __kvm_handle_stub_hvc and
-	 * jump there. Since we use kimage_voffset, do not use the
-	 * HYP VA for __kvm_handle_stub_hvc, but the kernel VA instead
-	 * (by loading it from the constant pool).
-	 *
-	 * Preserve x0-x4, which may contain stub parameters.
-	 */
-	ldr	x5, =__kvm_handle_stub_hvc
-	ldr_l	x6, kimage_voffset
-
-	/* x5 = __pa(x5) */
-	sub	x5, x5, x6
-	br	x5
-
-1:
-	/*
-	 * Perform the EL2 call
-	 */
-	kern_hyp_va	x0
-	do_el2_call
-
-	eret
-	sb
-#endif /* __KVM_NVHE_HYPERVISOR__ */
-
 el1_hvc_guest:
 	/*
 	 * Fastest possible path for ARM_SMCCC_ARCH_WORKAROUND_1.
@@ -181,18 +129,6 @@ el2_error:
 	eret
 	sb
 
-#ifdef __KVM_NVHE_HYPERVISOR__
-SYM_FUNC_START(__hyp_do_panic)
-	mov	lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\
-		      PSR_MODE_EL1h)
-	msr	spsr_el2, lr
-	ldr	lr, =panic
-	msr	elr_el2, lr
-	eret
-	sb
-SYM_FUNC_END(__hyp_do_panic)
-#endif
-
 .macro invalid_vector	label, target = hyp_panic
 	.align	2
 SYM_CODE_START(\label)
@@ -205,7 +141,6 @@ SYM_CODE_END(\label)
 	invalid_vector	el2t_irq_invalid
 	invalid_vector	el2t_fiq_invalid
 	invalid_vector	el2t_error_invalid
-	invalid_vector	el2h_sync_invalid
 	invalid_vector	el2h_irq_invalid
 	invalid_vector	el2h_fiq_invalid
 	invalid_vector	el1_fiq_invalid
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index aef76487edc2..ddf98eb07b9d 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -6,7 +6,7 @@
 asflags-y := -D__KVM_NVHE_HYPERVISOR__
 ccflags-y := -D__KVM_NVHE_HYPERVISOR__
 
-obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o
+obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
	 ../fpsimd.o ../hyp-entry.o
 
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
new file mode 100644
index 000000000000..9c96b9a3b71d
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -0,0 +1,104 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2020 - Google Inc
+ * Author: Andrew Scull
+ */
+
+#include
+
+#include
+#include
+#include
+
+	.text
+
+SYM_FUNC_START(__hyp_do_panic)
+	mov	lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\
+		      PSR_MODE_EL1h)
+	msr	spsr_el2, lr
+	ldr	lr, =panic
+	msr	elr_el2, lr
+	eret
+	sb
+SYM_FUNC_END(__hyp_do_panic)
+
+.macro valid_host_el1_sync_vect
+	.align 7
+	esb
+	stp	x0, x1, [sp, #-16]!
+
+	mrs	x0, esr_el2
+	lsr	x0, x0, #ESR_ELx_EC_SHIFT
+	cmp	x0, #ESR_ELx_EC_HVC64
+	b.ne	hyp_panic
+
+	ldp	x0, x1, [sp], #16
+
+	/* Check for a stub HVC call */
+	cmp	x0, #HVC_STUB_HCALL_NR
+	b.hs	1f
+
+	/*
+	 * Compute the idmap address of __kvm_handle_stub_hvc and
+	 * jump there. Since we use kimage_voffset, do not use the
+	 * HYP VA for __kvm_handle_stub_hvc, but the kernel VA instead
+	 * (by loading it from the constant pool).
+	 *
+	 * Preserve x0-x4, which may contain stub parameters.
+	 */
+	ldr	x5, =__kvm_handle_stub_hvc
+	ldr_l	x6, kimage_voffset
+
+	/* x5 = __pa(x5) */
+	sub	x5, x5, x6
+	br	x5
+
+1:
+	/*
+	 * Shuffle the parameters before calling the function
+	 * pointed to in x0. Assumes parameters in x[1,2,3].
+	 */
+	kern_hyp_va	x0
+	str	lr, [sp, #-16]!
+	mov	lr, x0
+	mov	x0, x1
+	mov	x1, x2
+	mov	x2, x3
+	blr	lr
+	ldr	lr, [sp], #16
+
+	eret
+	sb
+.endm
+
+.macro invalid_host_vect
+	.align 7
+	b	hyp_panic
+.endm
+
+/*
+ * CONFIG_KVM_INDIRECT_VECTORS is not applied to the host vector because the
+ * host already knows the address of hyp by virtue of loading it there.
+ */
+	.align 11
+SYM_CODE_START(__kvm_hyp_host_vector)
+	invalid_host_vect			// Synchronous EL2t
+	invalid_host_vect			// IRQ EL2t
+	invalid_host_vect			// FIQ EL2t
+	invalid_host_vect			// Error EL2t
+
+	invalid_host_vect			// Synchronous EL2h
+	invalid_host_vect			// IRQ EL2h
+	invalid_host_vect			// FIQ EL2h
+	invalid_host_vect			// Error EL2h
+
+	valid_host_el1_sync_vect		// Synchronous 64-bit EL1
+	invalid_host_vect			// IRQ 64-bit EL1
+	invalid_host_vect			// FIQ 64-bit EL1
+	invalid_host_vect			// Error 64-bit EL1
+
+	invalid_host_vect			// Synchronous 32-bit EL1
+	invalid_host_vect			// IRQ 32-bit EL1
+	invalid_host_vect			// FIQ 32-bit EL1
+	invalid_host_vect			// Error 32-bit EL1
+SYM_CODE_END(__kvm_hyp_host_vector)
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index f969a6dc4741..05bb0b472091 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -42,6 +42,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 	}
 
 	write_sysreg(val, cptr_el2);
+	write_sysreg(__hyp_this_cpu_read(kvm_hyp_vector), vbar_el2);
 
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt;
@@ -60,6 +61,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 static void __deactivate_traps(struct kvm_vcpu *vcpu)
 {
+	extern char __kvm_hyp_host_vector[];
 	u64 mdcr_el2;
 
 	___deactivate_traps(vcpu);
@@ -91,6 +93,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 	write_sysreg(mdcr_el2, mdcr_el2);
 	write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
 	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
+	write_sysreg(__kvm_hyp_host_vector, vbar_el2);
 }
 
 static void __load_host_stage2(void)
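
As a side illustration of the gating logic in valid_host_el1_sync_vect:
only HVC64 exceptions from the host are serviced, and everything else is
sent to hyp_panic. A small, self-contained C sketch of that check (the
ESR_ELx_* values are restated by hand here and assumed to match the
kernel's definitions):

	#include <stdio.h>

	/* Assumed to mirror the kernel's ESR_ELx_* definitions. */
	#define ESR_ELx_EC_SHIFT	26
	#define ESR_ELx_EC_HVC64	0x16UL	/* HVC from AArch64 */

	/* Mirrors the mrs/lsr/cmp/b.ne sequence at the top of the handler. */
	static int host_exception_handled(unsigned long esr_el2)
	{
		return (esr_el2 >> ESR_ELx_EC_SHIFT) == ESR_ELx_EC_HVC64;
	}

	int main(void)
	{
		/* HVC64 is handled; an SVC64 (EC 0x15) would hit hyp_panic. */
		printf("%d\n", host_exception_handled(0x16UL << ESR_ELx_EC_SHIFT));
		printf("%d\n", host_exception_handled(0x15UL << ESR_ELx_EC_SHIFT));
		return 0;
	}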