From patchwork Thu Feb 15 21:02:56 2018
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10223615
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: kvm@vger.kernel.org, Christoffer Dall, Marc Zyngier, Andrew Jones, Shih-Wei Li, Dave Martin, Julien Grall, Tomasz Nowicki, Yury Norov
Subject: [PATCH v4 04/40] KVM: arm64: Rework hyp_panic for VHE and non-VHE
Date: Thu, 15 Feb 2018 22:02:56 +0100
Message-Id: <20180215210332.8648-5-christoffer.dall@linaro.org>
X-Mailer: git-send-email 2.14.2
In-Reply-To: <20180215210332.8648-1-christoffer.dall@linaro.org>
References: <20180215210332.8648-1-christoffer.dall@linaro.org>

VHE doesn't actually rely on clearing the VTTBR when returning to the
host kernel, yet checking the VTTBR is currently the key mechanism
hyp_panic uses to figure out how to attempt to return to a state good
enough to print a panic statement.

Therefore, we split the hyp_panic function into two functions, a VHE
and a non-VHE version, keeping the non-VHE version intact but changing
the VHE behavior.

The vttbr_el2 check on VHE doesn't make much sense, because the only
situation where we can get here on VHE is when the hypervisor assembly
code actually called into hyp_panic, which only happens when VBAR_EL2
has been set to the KVM exception vectors.  On VHE, we can always
safely disable the traps and restore the host registers at this point,
so we simply do that unconditionally and call into the panic function
directly.

Acked-by: Marc Zyngier
Signed-off-by: Christoffer Dall
Reviewed-by: Andrew Jones
---

Notes:
    Changes since v1:
     - Fixed typos in the commit message
     - Still use the generic __deactivate_traps() function in the hyp
       panic code until we rework that logic later.
 arch/arm64/kvm/hyp/switch.c | 42 +++++++++++++++++++++++-------------------
 1 file changed, 23 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index d1749fa0bfc3..079bb5243f0c 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -437,10 +437,20 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 static const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n";
 
 static void __hyp_text __hyp_call_panic_nvhe(u64 spsr, u64 elr, u64 par,
-					     struct kvm_vcpu *vcpu)
+					     struct kvm_cpu_context *__host_ctxt)
 {
+	struct kvm_vcpu *vcpu;
 	unsigned long str_va;
 
+	vcpu = __host_ctxt->__hyp_running_vcpu;
+
+	if (read_sysreg(vttbr_el2)) {
+		__timer_disable_traps(vcpu);
+		__deactivate_traps(vcpu);
+		__deactivate_vm(vcpu);
+		__sysreg_restore_host_state(__host_ctxt);
+	}
+
 	/*
 	 * Force the panic string to be loaded from the literal pool,
 	 * making sure it is a kernel address and not a PC-relative
@@ -454,37 +464,31 @@ static void __hyp_text __hyp_call_panic_nvhe(u64 spsr, u64 elr, u64 par,
 		       read_sysreg(hpfar_el2), par, vcpu);
 }
 
-static void __hyp_text __hyp_call_panic_vhe(u64 spsr, u64 elr, u64 par,
-					    struct kvm_vcpu *vcpu)
+static void __hyp_call_panic_vhe(u64 spsr, u64 elr, u64 par,
+				 struct kvm_cpu_context *host_ctxt)
 {
+	struct kvm_vcpu *vcpu;
+	vcpu = host_ctxt->__hyp_running_vcpu;
+
+	__deactivate_traps(vcpu);
+	__sysreg_restore_host_state(host_ctxt);
+
 	panic(__hyp_panic_string,
 	      spsr, elr,
 	      read_sysreg_el2(esr), read_sysreg_el2(far),
 	      read_sysreg(hpfar_el2), par, vcpu);
 }
 
-static hyp_alternate_select(__hyp_call_panic,
-			    __hyp_call_panic_nvhe, __hyp_call_panic_vhe,
-			    ARM64_HAS_VIRT_HOST_EXTN);
-
 void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt)
 {
-	struct kvm_vcpu *vcpu = NULL;
-
 	u64 spsr = read_sysreg_el2(spsr);
 	u64 elr = read_sysreg_el2(elr);
 	u64 par = read_sysreg(par_el1);
 
-	if (read_sysreg(vttbr_el2)) {
-		vcpu = host_ctxt->__hyp_running_vcpu;
-		__timer_disable_traps(vcpu);
-		__deactivate_traps(vcpu);
-		__deactivate_vm(vcpu);
-		__sysreg_restore_host_state(host_ctxt);
-	}
-
-	/* Call panic for real */
-	__hyp_call_panic()(spsr, elr, par, vcpu);
+	if (!has_vhe())
+		__hyp_call_panic_nvhe(spsr, elr, par, host_ctxt);
+	else
+		__hyp_call_panic_vhe(spsr, elr, par, host_ctxt);
 
 	unreachable();
 }
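
For readability, here is how the VHE panic path and hyp_panic() read once the
hunks above are applied.  This is just the result of stitching the diff back
together (whitespace reconstructed); the comments are added for explanation
and are not part of the patch:

static void __hyp_call_panic_vhe(u64 spsr, u64 elr, u64 par,
				 struct kvm_cpu_context *host_ctxt)
{
	struct kvm_vcpu *vcpu;
	vcpu = host_ctxt->__hyp_running_vcpu;

	/* On VHE it is always safe to disable traps and restore host state. */
	__deactivate_traps(vcpu);
	__sysreg_restore_host_state(host_ctxt);

	panic(__hyp_panic_string,
	      spsr, elr,
	      read_sysreg_el2(esr), read_sysreg_el2(far),
	      read_sysreg(hpfar_el2), par, vcpu);
}

void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt)
{
	u64 spsr = read_sysreg_el2(spsr);
	u64 elr = read_sysreg_el2(elr);
	u64 par = read_sysreg(par_el1);

	/* A static has_vhe() check replaces the old hyp_alternate_select()
	 * and the vttbr_el2 heuristic. */
	if (!has_vhe())
		__hyp_call_panic_nvhe(spsr, elr, par, host_ctxt);
	else
		__hyp_call_panic_vhe(spsr, elr, par, host_ctxt);

	unreachable();
}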