From patchwork Thu Oct 12 10:41:05 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10001511
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: kvm@vger.kernel.org, Marc Zyngier, Shih-Wei Li, Christoffer Dall
Subject: [PATCH 01/37] KVM: arm64: Avoid storing the vcpu pointer on the stack
Date: Thu, 12 Oct 2017 12:41:05 +0200
Message-Id: <20171012104141.26902-2-christoffer.dall@linaro.org>
In-Reply-To: <20171012104141.26902-1-christoffer.dall@linaro.org>
References: <20171012104141.26902-1-christoffer.dall@linaro.org>

We already have the percpu area for the host cpu state, which points to
the VCPU, so there's no need to store the VCPU pointer on the stack on
every context switch.
We can be a little more clever and just use tpidr_el2 for the percpu
offset and load the VCPU pointer from the host context.

This requires us to have a scratch register though, so we take the
chance to rearrange some of the el1_sync code to only look at the
vttbr_el2 to determine if this is a trap from the guest or an HVC from
the host.  We do add an extra check to call the panic code if the kernel
is configured with debugging enabled and we saw a trap from the host
which wasn't an HVC, indicating that we left some EL2 trap configured by
mistake.

Signed-off-by: Christoffer Dall
---
 arch/arm64/include/asm/kvm_asm.h | 20 ++++++++++++++++++++
 arch/arm64/kernel/asm-offsets.c  |  1 +
 arch/arm64/kvm/hyp/entry.S       |  5 +----
 arch/arm64/kvm/hyp/hyp-entry.S   | 39 ++++++++++++++++++---------------------
 arch/arm64/kvm/hyp/switch.c      |  2 +-
 5 files changed, 41 insertions(+), 26 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index ab4d0a9..7e48a39 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -70,4 +70,24 @@ extern u32 __init_stage2_translation(void);
 
 #endif
 
+#ifdef __ASSEMBLY__
+.macro get_host_ctxt reg, tmp
+	/*
+	 * '=kvm_host_cpu_state' is a host VA from the constant pool, it may
+	 * not be accessible by this address from EL2, hyp_panic() converts
+	 * it with kern_hyp_va() before use.
+	 */
+	ldr	\reg, =kvm_host_cpu_state
+	mrs	\tmp, tpidr_el2
+	add	\reg, \reg, \tmp
+	kern_hyp_va \reg
+.endm
+
+.macro get_vcpu vcpu, ctxt
+	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
+	kern_hyp_va	\vcpu
+.endm
+
+#endif
+
 #endif /* __ARM_KVM_ASM_H__ */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 71bf088..612021d 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -135,6 +135,7 @@ int main(void)
   DEFINE(CPU_FP_REGS,		offsetof(struct kvm_regs, fp_regs));
   DEFINE(VCPU_FPEXC32_EL2,	offsetof(struct kvm_vcpu, arch.ctxt.sys_regs[FPEXC32_EL2]));
   DEFINE(VCPU_HOST_CONTEXT,	offsetof(struct kvm_vcpu, arch.host_cpu_context));
+  DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
 #endif
 #ifdef CONFIG_CPU_PM
   DEFINE(CPU_SUSPEND_SZ,	sizeof(struct cpu_suspend_ctx));
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 9a8ab5d..76cd48f 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -62,9 +62,6 @@ ENTRY(__guest_enter)
 	// Store the host regs
 	save_callee_saved_regs x1
 
-	// Store host_ctxt and vcpu for use at exit time
-	stp	x1, x0, [sp, #-16]!
-
 	add	x18, x0, #VCPU_CONTEXT
 
 	// Restore guest regs x0-x17
@@ -119,7 +116,7 @@ ENTRY(__guest_exit)
 	save_callee_saved_regs x1
 
 	// Restore the host_ctxt from the stack
-	ldr	x2, [sp], #16
+	get_host_ctxt	x2, x3
 
 	// Now restore the host regs
 	restore_callee_saved_regs x2
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index e4f37b9..2950f26 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -56,19 +56,16 @@ ENDPROC(__vhe_hyp_call)
 
 el1_sync:				// Guest trapped into EL2
 	stp	x0, x1, [sp, #-16]!
 
-alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
-	mrs	x1, esr_el2
-alternative_else
-	mrs	x1, esr_el1
-alternative_endif
-	lsr	x0, x1, #ESR_ELx_EC_SHIFT
-
-	cmp	x0, #ESR_ELx_EC_HVC64
-	b.ne	el1_trap
-
 	mrs	x1, vttbr_el2		// If vttbr is valid, the 64bit guest
 	cbnz	x1, el1_trap		// called HVC
 
+#ifdef CONFIG_DEBUG
+	mrs	x0, esr_el2
+	lsr	x0, x0, #ESR_ELx_EC_SHIFT
+	cmp	x0, #ESR_ELx_EC_HVC64
+	b.ne	__hyp_panic
+#endif
+
+	/* Here, we're pretty sure the host called HVC. */
 	ldp	x0, x1, [sp], #16
 
@@ -101,10 +98,15 @@ alternative_endif
 	eret
 
 el1_trap:
+	get_host_ctxt	x0, x1
+	get_vcpu	x1, x0
+
+	mrs	x0, esr_el2
+	lsr	x0, x0, #ESR_ELx_EC_SHIFT
 	/*
 	 * x0: ESR_EC
+	 * x1: vcpu pointer
 	 */
-	ldr	x1, [sp, #16 + 8]	// vcpu stored by __guest_enter
 
 	/*
 	 * We trap the first access to the FP/SIMD to save the host context
@@ -122,13 +124,15 @@ alternative_else_nop_endif
 
 el1_irq:
 	stp	x0, x1, [sp, #-16]!
-	ldr	x1, [sp, #16 + 8]
+	get_host_ctxt	x0, x1
+	get_vcpu	x1, x0
 	mov	x0, #ARM_EXCEPTION_IRQ
 	b	__guest_exit
 
 el1_error:
 	stp	x0, x1, [sp, #-16]!
-	ldr	x1, [sp, #16 + 8]
+	get_host_ctxt	x0, x1
+	get_vcpu	x1, x0
 	mov	x0, #ARM_EXCEPTION_EL1_SERROR
 	b	__guest_exit
 
@@ -164,14 +168,7 @@ ENTRY(__hyp_do_panic)
 ENDPROC(__hyp_do_panic)
 
 ENTRY(__hyp_panic)
-	/*
-	 * '=kvm_host_cpu_state' is a host VA from the constant pool, it may
-	 * not be accessible by this address from EL2, hyp_panic() converts
-	 * it with kern_hyp_va() before use.
-	 */
-	ldr	x0, =kvm_host_cpu_state
-	mrs	x1, tpidr_el2
-	add	x0, x0, x1
+	get_host_ctxt x0, x1
 	b	hyp_panic
 ENDPROC(__hyp_panic)
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 69ef24a..a0123ad 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -435,7 +435,7 @@ void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *__host_ctxt)
 
 	if (read_sysreg(vttbr_el2)) {
 		struct kvm_cpu_context *host_ctxt;
 
-		host_ctxt = kern_hyp_va(__host_ctxt);
+		host_ctxt = __host_ctxt;
 		vcpu = host_ctxt->__hyp_running_vcpu;
 		__timer_disable_traps(vcpu);
 		__deactivate_traps(vcpu);