From patchwork Tue Feb 27 11:34:01 2018
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10244979
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v5 12/40] KVM: arm64: Factor out fault info population and gic workarounds
Date: Tue, 27 Feb 2018 12:34:01 +0100
Message-Id: <20180227113429.637-13-cdall@kernel.org>
In-Reply-To: <20180227113429.637-1-cdall@kernel.org>
References: <20180227113429.637-1-cdall@kernel.org>
Cc: Andrew Jones, kvm@vger.kernel.org, Marc Zyngier, Tomasz Nowicki,
 Julien Grall, Yury Norov, Dave Martin, Shih-Wei Li

The current world-switch function detects a number of cases where we
need to fix up some part of the exit condition, and possibly run the
guest again, before having restored the host state. These cases include
populating missing fault info, emulating GICv2 CPU interface accesses
when the interface is mapped at an unaligned address, and emulating the
GICv3 CPU interface on systems that need it.

As we are about to add an alternative switch function for VHE systems,
and VHE systems still need the same early fixup logic, factor this
logic out into a separate function that can be shared by both switch
functions.

No functional change.

Reviewed-by: Marc Zyngier
Reviewed-by: Andrew Jones
Signed-off-by: Christoffer Dall
---
 arch/arm64/kvm/hyp/switch.c | 104 ++++++++++++++++++++++++--------------------
 1 file changed, 57 insertions(+), 47 deletions(-)
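Illustration only, not part of the patch: once fixup_guest_exit() is
factored out as in the diff below, a VHE-specific switch function (such
as the one this series goes on to add) can reuse it unchanged. The
sketch here is an assumption about its rough shape; the function name
and the elided setup/teardown steps are placeholders, not code from
this series:

int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
{
	struct kvm_cpu_context *host_ctxt;
	u64 exit_code;

	/* Under VHE, host pointers need no kern_hyp_va() translation. */
	host_ctxt = vcpu->arch.host_cpu_context;
	host_ctxt->__hyp_running_vcpu = vcpu;

	/* ... VHE-specific context save and trap activation ... */

	do {
		/* Enter the guest, then fix up and re-enter as needed. */
		exit_code = __guest_enter(vcpu, host_ctxt);
	} while (fixup_guest_exit(vcpu, &exit_code));

	/* ... VHE-specific host state restore ... */

	return exit_code;
}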
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 35f3bbe17084..b055111df1a1 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -291,53 +291,27 @@ static bool __hyp_text __skip_instr(struct kvm_vcpu *vcpu)
 	}
 }
 
-int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+/*
+ * Return true when we were able to fixup the guest exit and should return to
+ * the guest, false when we should restore the host state and return to the
+ * main run loop.
+ */
+static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
-	struct kvm_cpu_context *host_ctxt;
-	struct kvm_cpu_context *guest_ctxt;
-	bool fp_enabled;
-	u64 exit_code;
-
-	vcpu = kern_hyp_va(vcpu);
-
-	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
-	host_ctxt->__hyp_running_vcpu = vcpu;
-	guest_ctxt = &vcpu->arch.ctxt;
-
-	__sysreg_save_host_state(host_ctxt);
-
-	__activate_traps(vcpu);
-	__activate_vm(vcpu);
-
-	__vgic_restore_state(vcpu);
-	__timer_enable_traps(vcpu);
-
-	/*
-	 * We must restore the 32-bit state before the sysregs, thanks
-	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
-	 */
-	__sysreg32_restore_state(vcpu);
-	__sysreg_restore_guest_state(guest_ctxt);
-	__debug_switch_to_guest(vcpu);
-
-	/* Jump in the fire! */
-again:
-	exit_code = __guest_enter(vcpu, host_ctxt);
-	/* And we're baaack! */
-
-	if (ARM_EXCEPTION_CODE(exit_code) != ARM_EXCEPTION_IRQ)
+	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
 		vcpu->arch.fault.esr_el2 = read_sysreg_el2(esr);
+
 	/*
 	 * We're using the raw exception code in order to only process
 	 * the trap if no SError is pending. We will come back to the
 	 * same PC once the SError has been injected, and replay the
 	 * trapping instruction.
 	 */
-	if (exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
-		goto again;
+	if (*exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
+		return true;
 
 	if (static_branch_unlikely(&vgic_v2_cpuif_trap) &&
-	    exit_code == ARM_EXCEPTION_TRAP) {
+	    *exit_code == ARM_EXCEPTION_TRAP) {
 		bool valid;
 
 		valid = kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_DABT_LOW &&
@@ -351,9 +325,9 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 			if (ret == 1) {
 				if (__skip_instr(vcpu))
-					goto again;
+					return true;
 				else
-					exit_code = ARM_EXCEPTION_TRAP;
+					*exit_code = ARM_EXCEPTION_TRAP;
 			}
 
 			if (ret == -1) {
@@ -365,29 +339,65 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 				 */
 				if (!__skip_instr(vcpu))
 					*vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;
-				exit_code = ARM_EXCEPTION_EL1_SERROR;
+				*exit_code = ARM_EXCEPTION_EL1_SERROR;
 			}
-
-			/* 0 falls through to be handler out of EL2 */
 		}
 	}
 
 	if (static_branch_unlikely(&vgic_v3_cpuif_trap) &&
-	    exit_code == ARM_EXCEPTION_TRAP &&
+	    *exit_code == ARM_EXCEPTION_TRAP &&
 	    (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 ||
 	     kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_CP15_32)) {
 		int ret = __vgic_v3_perform_cpuif_access(vcpu);
 
 		if (ret == 1) {
 			if (__skip_instr(vcpu))
-				goto again;
+				return true;
 			else
-				exit_code = ARM_EXCEPTION_TRAP;
+				*exit_code = ARM_EXCEPTION_TRAP;
 		}
-
-		/* 0 falls through to be handled out of EL2 */
 	}
 
+	/* Return to the host kernel and handle the exit */
+	return false;
+}
+
+int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpu_context *host_ctxt;
+	struct kvm_cpu_context *guest_ctxt;
+	bool fp_enabled;
+	u64 exit_code;
+
+	vcpu = kern_hyp_va(vcpu);
+
+	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+	host_ctxt->__hyp_running_vcpu = vcpu;
+	guest_ctxt = &vcpu->arch.ctxt;
+
+	__sysreg_save_host_state(host_ctxt);
+
+	__activate_traps(vcpu);
+	__activate_vm(vcpu);
+
+	__vgic_restore_state(vcpu);
+	__timer_enable_traps(vcpu);
+
+	/*
+	 * We must restore the 32-bit state before the sysregs, thanks
+	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
+	 */
+	__sysreg32_restore_state(vcpu);
+	__sysreg_restore_guest_state(guest_ctxt);
+	__debug_switch_to_guest(vcpu);
+
+	do {
+		/* Jump in the fire! */
+		exit_code = __guest_enter(vcpu, host_ctxt);
+
+		/* And we're baaack! */
+	} while (fixup_guest_exit(vcpu, &exit_code));
+
 	if (cpus_have_const_cap(ARM64_HARDEN_BP_POST_GUEST_EXIT)) {
 		u32 midr = read_cpuid_id();