From patchwork Fri Jan 12 12:07:09 2018
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10160457
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Andrew Jones, kvm@vger.kernel.org, Ard Biesheuvel, Marc Zyngier, Shih-Wei Li, Christoffer Dall
Subject: [PATCH v3 03/41] KVM: arm64: Avoid storing the vcpu pointer on the stack
Date: Fri, 12 Jan 2018 13:07:09 +0100
Message-Id: <20180112120747.27999-4-christoffer.dall@linaro.org>
In-Reply-To: <20180112120747.27999-1-christoffer.dall@linaro.org>
References: <20180112120747.27999-1-christoffer.dall@linaro.org>

We already have the percpu area for the host cpu state, which points to
the VCPU, so there's no need to store the VCPU pointer on the stack on
every context switch. We can be a little more clever and just use
tpidr_el2 for the percpu offset and load the VCPU pointer from the host
context.

This does require us to calculate the percpu offset without including
the offset from the kernel mapping of the percpu array to the linear
mapping of the array (which is what we store in tpidr_el1), because a
PC-relative address generated in EL2 already gives us the hyp alias of
the linear mapping of a kernel address. We do this in
__cpu_init_hyp_mode() by using kvm_ksym_ref().

This change also requires us to have a scratch register, so we take the
chance to rearrange some of the el1_sync code to only look at vttbr_el2
to determine whether this is a trap from the guest or an HVC from the
host. We also add an extra check that calls the panic code if the
kernel is configured with debugging enabled and we see a trap from the
host that wasn't an HVC, indicating that we left some EL2 trap
configured by mistake.

The code that accesses ESR_EL2 was previously using an alternative to
use the _EL1 accessor on VHE systems, but this was actually unnecessary
as the _EL1 accessor aliases the ESR_EL2 register on VHE, and the _EL2
accessor does the same thing on both systems.
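
To make the offset arithmetic concrete, here is a minimal standalone
sketch of the value installed in tpidr_el2 below (the helper name
compute_hyp_percpu_offset() is illustrative only, not part of the
patch):

	/*
	 * Sketch only: compute the raw percpu offset to be used at EL2.
	 * compute_hyp_percpu_offset() is a hypothetical name.
	 */
	static u64 compute_hyp_percpu_offset(void)
	{
		/* Linear-map address of this CPU's copy of the variable. */
		u64 this_cpu = (u64)this_cpu_ptr(&kvm_host_cpu_state);

		/*
		 * kvm_ksym_ref() turns the kernel-image address of the
		 * symbol into its linear-map alias, i.e. what a
		 * PC-relative adr_l resolves to when running at EL2.
		 */
		u64 base = (u64)kvm_ksym_ref(kvm_host_cpu_state);

		/*
		 * Unlike tpidr_el1, the result contains no kernel-image
		 * to linear-map displacement, so adr_l + tpidr_el2 yields
		 * the correct percpu address at EL2.
		 */
		return this_cpu - base;
	}

The patch then hands this value to EL2 via
kvm_call_hyp(__kvm_set_tpidr_el2, tpidr_el2), since on non-VHE systems
tpidr_el2 cannot be written from the host at EL1.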
Cc: Ard Biesheuvel
Reviewed-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
 arch/arm64/include/asm/kvm_asm.h  | 14 +++++++++++++
 arch/arm64/include/asm/kvm_host.h | 15 ++++++++++++++
 arch/arm64/kernel/asm-offsets.c   |  1 +
 arch/arm64/kvm/hyp/entry.S        |  6 +-----
 arch/arm64/kvm/hyp/hyp-entry.S    | 41 ++++++++++++++++++---------------------
 arch/arm64/kvm/hyp/switch.c       |  5 +----
 arch/arm64/kvm/hyp/sysreg-sr.c    |  5 +++++
 7 files changed, 56 insertions(+), 31 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index ab4d0a926043..6c7599b5cb40 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -33,6 +33,7 @@
 #define KVM_ARM64_DEBUG_DIRTY_SHIFT	0
 #define KVM_ARM64_DEBUG_DIRTY		(1 << KVM_ARM64_DEBUG_DIRTY_SHIFT)
 
+/* Translate a kernel address of @sym into its equivalent linear mapping */
 #define kvm_ksym_ref(sym)						\
 	({								\
 		void *val = &sym;					\
@@ -68,6 +69,19 @@ extern u32 __kvm_get_mdcr_el2(void);
 
 extern u32 __init_stage2_translation(void);
 
+#else /* __ASSEMBLY__ */
+
+.macro get_host_ctxt reg, tmp
+	adr_l	\reg, kvm_host_cpu_state
+	mrs	\tmp, tpidr_el2
+	add	\reg, \reg, \tmp
+.endm
+
+.macro get_vcpu vcpu, ctxt
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+	kern_hyp_va	\vcpu
+.endm
+
 #endif
 
 #endif /* __ARM_KVM_ASM_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 048f5db120f3..6ce0b428a4db 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -350,10 +350,15 @@ int kvm_perf_teardown(void);
 
 struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
 
+extern void __kvm_set_tpidr_el2(u64 tpidr_el2);
+DECLARE_PER_CPU(kvm_cpu_context_t, kvm_host_cpu_state);
+
 static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
 				       unsigned long hyp_stack_ptr,
 				       unsigned long vector_ptr)
 {
+	u64 tpidr_el2;
+
 	/*
 	 * Call initialization code, and switch to the full blown HYP code.
 	 * If the cpucaps haven't been finalized yet, something has gone very
@@ -362,6 +367,16 @@ static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
 	 */
 	BUG_ON(!static_branch_likely(&arm64_const_caps_ready));
 	__kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr);
+
+	/*
+	 * Calculate the raw per-cpu offset without a translation from the
+	 * kernel's mapping to the linear mapping, and store it in tpidr_el2
+	 * so that we can use adr_l to access per-cpu variables in EL2.
+	 */
+	tpidr_el2 = (u64)this_cpu_ptr(&kvm_host_cpu_state)
+		- (u64)kvm_ksym_ref(kvm_host_cpu_state);
+
+	kvm_call_hyp(__kvm_set_tpidr_el2, tpidr_el2);
 }
 
 static inline void kvm_arch_hardware_unsetup(void) {}
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 71bf088f1e4b..612021dce84f 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -135,6 +135,7 @@ int main(void)
   DEFINE(CPU_FP_REGS,		offsetof(struct kvm_regs, fp_regs));
   DEFINE(VCPU_FPEXC32_EL2,	offsetof(struct kvm_vcpu, arch.ctxt.sys_regs[FPEXC32_EL2]));
   DEFINE(VCPU_HOST_CONTEXT,	offsetof(struct kvm_vcpu, arch.host_cpu_context));
+  DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
 #endif
 #ifdef CONFIG_CPU_PM
   DEFINE(CPU_SUSPEND_SZ,	sizeof(struct cpu_suspend_ctx));
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 9a8ab5dddd9e..a360ac6e89e9 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -62,9 +62,6 @@ ENTRY(__guest_enter)
 	// Store the host regs
 	save_callee_saved_regs x1
 
-	// Store host_ctxt and vcpu for use at exit time
-	stp	x1, x0, [sp, #-16]!
-
 	add	x18, x0, #VCPU_CONTEXT
 
 	// Restore guest regs x0-x17
@@ -118,8 +115,7 @@ ENTRY(__guest_exit)
 	// Store the guest regs x19-x29, lr
 	save_callee_saved_regs x1
 
-	// Restore the host_ctxt from the stack
-	ldr	x2, [sp], #16
+	get_host_ctxt	x2, x3
 
 	// Now restore the host regs
 	restore_callee_saved_regs x2
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index e4f37b9dd47c..71b4cc92895e 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -56,18 +56,15 @@ ENDPROC(__vhe_hyp_call)
 el1_sync:				// Guest trapped into EL2
 	stp	x0, x1, [sp, #-16]!
 
-alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
-	mrs	x1, esr_el2
-alternative_else
-	mrs	x1, esr_el1
-alternative_endif
-	lsr	x0, x1, #ESR_ELx_EC_SHIFT
+	mrs	x1, vttbr_el2		// If vttbr is valid, this is a trap
+	cbnz	x1, el1_trap		// from the guest
 
-	cmp	x0, #ESR_ELx_EC_HVC64
-	b.ne	el1_trap
-
-	mrs	x1, vttbr_el2		// If vttbr is valid, the 64bit guest
-	cbnz	x1, el1_trap		// called HVC
+#ifdef CONFIG_DEBUG
+	mrs	x0, esr_el2
+	lsr	x0, x0, #ESR_ELx_EC_SHIFT
+	cmp	x0, #ESR_ELx_EC_HVC64
+	b.ne	__hyp_panic
+#endif
 
 	/* Here, we're pretty sure the host called HVC. */
 	ldp	x0, x1, [sp], #16
@@ -101,10 +98,15 @@ alternative_endif
 	eret
 
 el1_trap:
+	get_host_ctxt	x0, x1
+	get_vcpu	x1, x0
+
+	mrs	x0, esr_el2
+	lsr	x0, x0, #ESR_ELx_EC_SHIFT
 	/*
 	 * x0: ESR_EC
+	 * x1: vcpu pointer
 	 */
-	ldr	x1, [sp, #16 + 8]	// vcpu stored by __guest_enter
 
 	/*
 	 * We trap the first access to the FP/SIMD to save the host context
@@ -122,13 +124,15 @@ alternative_else_nop_endif
 
 el1_irq:
 	stp	x0, x1, [sp, #-16]!
-	ldr	x1, [sp, #16 + 8]
+	get_host_ctxt	x0, x1
+	get_vcpu	x1, x0
 	mov	x0, #ARM_EXCEPTION_IRQ
 	b	__guest_exit
 
 el1_error:
 	stp	x0, x1, [sp, #-16]!
-	ldr	x1, [sp, #16 + 8]
+	get_host_ctxt	x0, x1
+	get_vcpu	x1, x0
 	mov	x0, #ARM_EXCEPTION_EL1_SERROR
 	b	__guest_exit
 
@@ -164,14 +168,7 @@ ENTRY(__hyp_do_panic)
 ENDPROC(__hyp_do_panic)
 
 ENTRY(__hyp_panic)
-	/*
-	 * '=kvm_host_cpu_state' is a host VA from the constant pool, it may
-	 * not be accessible by this address from EL2, hyp_panic() converts
-	 * it with kern_hyp_va() before use.
-	 */
-	ldr	x0, =kvm_host_cpu_state
-	mrs	x1, tpidr_el2
-	add	x0, x0, x1
+	get_host_ctxt	x0, x1
 	b	hyp_panic
 ENDPROC(__hyp_panic)
 
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index f7307b6b42f0..6fcb37e220b5 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -449,7 +449,7 @@ static hyp_alternate_select(__hyp_call_panic,
 			    __hyp_call_panic_nvhe, __hyp_call_panic_vhe,
 			    ARM64_HAS_VIRT_HOST_EXTN);
 
-void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *__host_ctxt)
+void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt)
 {
 	struct kvm_vcpu *vcpu = NULL;
 
@@ -458,9 +458,6 @@ void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *__host_ctxt)
 	u64 par = read_sysreg(par_el1);
 
 	if (read_sysreg(vttbr_el2)) {
-		struct kvm_cpu_context *host_ctxt;
-
-		host_ctxt = kern_hyp_va(__host_ctxt);
 		vcpu = host_ctxt->__hyp_running_vcpu;
 		__timer_disable_traps(vcpu);
 		__deactivate_traps(vcpu);
diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index c54cc2afb92b..e19d89cabf2a 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -183,3 +183,8 @@ void __hyp_text __sysreg32_restore_state(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.debug_flags & KVM_ARM64_DEBUG_DIRTY)
 		write_sysreg(sysreg[DBGVCR32_EL2], dbgvcr32_el2);
 }
+
+void __hyp_text __kvm_set_tpidr_el2(u64 tpidr_el2)
+{
+	asm("msr tpidr_el2, %0": : "r" (tpidr_el2));
+}
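
For reference, on exception paths such as el1_irq above, the two new
macros expand to roughly the following sequence (shown for
get_host_ctxt x0, x1 followed by get_vcpu x1, x0; a sketch of the
expansion, not additional patch content):

	adr_l	x0, kvm_host_cpu_state		// hyp alias of the percpu base
	mrs	x1, tpidr_el2			// raw offset installed at init time
	add	x0, x0, x1			// this CPU's kvm_cpu_context
	ldr	x1, [x0, #HOST_CONTEXT_VCPU]	// ctxt->__hyp_running_vcpu
	kern_hyp_va	x1			// convert to a hyp VA

This has no dependence on the stack layout set up by __guest_enter,
which is what allows the stp/ldr pair to be removed from entry.S.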