From patchwork Fri Jan 12 12:07:19 2018
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Marc Zyngier, Andrew Jones, Christoffer Dall, Shih-Wei Li, kvm@vger.kernel.org
Subject: [PATCH v3 13/41] KVM: arm64: Factor out fault info population and gic workarounds
Date: Fri, 12 Jan 2018 13:07:19 +0100
Message-Id: <20180112120747.27999-14-christoffer.dall@linaro.org>
In-Reply-To: <20180112120747.27999-1-christoffer.dall@linaro.org>
References: <20180112120747.27999-1-christoffer.dall@linaro.org>

The current world-switch function contains logic to detect a number of
cases where we need to fix up some part of the exit condition and
possibly run the guest again, before having restored the host state.
This includes populating missing fault info, emulating GICv2 CPU
interface accesses when mapped at unaligned addresses, and emulating
the GICv3 CPU interface on systems that need it.

As we are about to have an alternative switch function for VHE systems,
but VHE systems still need the same early fixup logic, factor out this
logic into a separate function that can be shared by both switch
functions.

No functional change.
Reviewed-by: Marc Zyngier
Signed-off-by: Christoffer Dall
Reviewed-by: Julien Thierry
---
 arch/arm64/kvm/hyp/switch.c | 99 ++++++++++++++++++++++++---------------------
 1 file changed, 54 insertions(+), 45 deletions(-)

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 63284647ed11..55ca2e3d42eb 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -270,50 +270,24 @@ static bool __hyp_text __skip_instr(struct kvm_vcpu *vcpu)
 	}
 }
 
-int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+/*
+ * Return true when we were able to fixup the guest exit and should return to
+ * the guest, false when we should restore the host state and return to the
+ * main run loop.
+ */
+static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
-	struct kvm_cpu_context *host_ctxt;
-	struct kvm_cpu_context *guest_ctxt;
-	u64 exit_code;
-
-	vcpu = kern_hyp_va(vcpu);
-
-	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
-	host_ctxt->__hyp_running_vcpu = vcpu;
-	guest_ctxt = &vcpu->arch.ctxt;
-
-	__sysreg_save_host_state(host_ctxt);
-
-	__activate_traps(vcpu);
-	__activate_vm(vcpu);
-
-	__vgic_restore_state(vcpu);
-	__timer_enable_traps(vcpu);
-
-	/*
-	 * We must restore the 32-bit state before the sysregs, thanks
-	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
-	 */
-	__sysreg32_restore_state(vcpu);
-	__sysreg_restore_guest_state(guest_ctxt);
-	__debug_switch_to_guest(vcpu);
-
-	/* Jump in the fire! */
-again:
-	exit_code = __guest_enter(vcpu, host_ctxt);
-	/* And we're baaack! */
-
 	/*
 	 * We're using the raw exception code in order to only process
 	 * the trap if no SError is pending. We will come back to the
 	 * same PC once the SError has been injected, and replay the
 	 * trapping instruction.
 	 */
-	if (exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
-		goto again;
+	if (*exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
+		return true;
 
 	if (static_branch_unlikely(&vgic_v2_cpuif_trap) &&
-	    exit_code == ARM_EXCEPTION_TRAP) {
+	    *exit_code == ARM_EXCEPTION_TRAP) {
 		bool valid;
 
 		valid = kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_DABT_LOW &&
@@ -327,9 +301,9 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 		if (ret == 1) {
 			if (__skip_instr(vcpu))
-				goto again;
+				return true;
 			else
-				exit_code = ARM_EXCEPTION_TRAP;
+				*exit_code = ARM_EXCEPTION_TRAP;
 		}
 
 		if (ret == -1) {
@@ -341,29 +315,64 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 			 */
 			if (!__skip_instr(vcpu))
 				*vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;
-			exit_code = ARM_EXCEPTION_EL1_SERROR;
+			*exit_code = ARM_EXCEPTION_EL1_SERROR;
 		}
-
-		/* 0 falls through to be handler out of EL2 */
 	}
 
 	if (static_branch_unlikely(&vgic_v3_cpuif_trap) &&
-	    exit_code == ARM_EXCEPTION_TRAP &&
+	    *exit_code == ARM_EXCEPTION_TRAP &&
 	    (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 ||
 	     kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_CP15_32)) {
 		int ret = __vgic_v3_perform_cpuif_access(vcpu);
 
 		if (ret == 1) {
 			if (__skip_instr(vcpu))
-				goto again;
+				return true;
 			else
-				exit_code = ARM_EXCEPTION_TRAP;
+				*exit_code = ARM_EXCEPTION_TRAP;
 		}
-
-		/* 0 falls through to be handled out of EL2 */
 	}
 
+	/* Return to the host kernel and handle the exit */
+	return false;
+}
+
+int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpu_context *host_ctxt;
+	struct kvm_cpu_context *guest_ctxt;
+	u64 exit_code;
+
+	vcpu = kern_hyp_va(vcpu);
+
+	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+	host_ctxt->__hyp_running_vcpu = vcpu;
+	guest_ctxt = &vcpu->arch.ctxt;
+
+	__sysreg_save_host_state(host_ctxt);
+
+	__activate_traps(vcpu);
+	__activate_vm(vcpu);
+
+	__vgic_restore_state(vcpu);
+	__timer_enable_traps(vcpu);
+
+	/*
+	 * We must restore the 32-bit state before the sysregs, thanks
+	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
+	 */
+	__sysreg32_restore_state(vcpu);
+	__sysreg_restore_guest_state(guest_ctxt);
+	__debug_switch_to_guest(vcpu);
+
+	do {
+		/* Jump in the fire! */
+		exit_code = __guest_enter(vcpu, host_ctxt);
+
+		/* And we're baaack! */
+	} while (fixup_guest_exit(vcpu, &exit_code));
+
 	__sysreg_save_guest_state(guest_ctxt);
 	__sysreg32_save_state(vcpu);
 	__timer_disable_traps(vcpu);