From patchwork Fri Jul 15 06:10:22 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12918775
Date: Thu, 14 Jul 2022 23:10:22 -0700
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
Message-Id: <20220715061027.1612149-14-kaleshsingh@google.com>
References: <20220715061027.1612149-1-kaleshsingh@google.com>
Subject: [PATCH v4 13/18] KVM: arm64: Prepare non-protected nVHE hypervisor
 stacktrace
From: Kalesh Singh <kaleshsingh@google.com>
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org,
 madvenka@linux.microsoft.com
Cc: will@kernel.org, qperret@google.com, tabba@google.com,
 kaleshsingh@google.com, james.morse@arm.com, alexandru.elisei@arm.com,
 suzuki.poulose@arm.com, catalin.marinas@arm.com, andreyknvl@gmail.com,
 russell.king@oracle.com, vincenzo.frascino@arm.com, mhiramat@kernel.org,
 ast@kernel.org, drjones@redhat.com, wangkefeng.wang@huawei.com,
 elver@google.com, keirf@google.com, yuzenghui@huawei.com, ardb@kernel.org,
 oupton@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
 android-mm@google.com, kernel-team@android.com

In non-protected nVHE mode (non-pKVM) the host can directly access
hypervisor memory, and unwinding of the hypervisor stacktrace is done
from EL1 to save on memory for shared buffers.

To unwind the hypervisor stack from EL1, the host needs to know the
starting point for the unwind and information that allows it to
translate hypervisor stack addresses to the corresponding kernel
addresses. This patch sets up this bookkeeping; it is made use of
later in the series.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/arm64/include/asm/kvm_asm.h         | 16 ++++++++++++++++
 arch/arm64/include/asm/stacktrace/nvhe.h |  4 ++++
 arch/arm64/kvm/hyp/nvhe/stacktrace.c     | 24 ++++++++++++++++++++++++
 3 files changed, 44 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 2e277f2ed671..0ae9d12c2b5a 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -176,6 +176,22 @@ struct kvm_nvhe_init_params {
 	unsigned long vtcr;
 };
 
+/**
+ * Used by the host in EL1 to dump the nVHE hypervisor backtrace on
+ * hyp_panic() in non-protected mode.
+ *
+ * @stack_base:		hyp VA of the hyp_stack base.
+ * @overflow_stack_base:	hyp VA of the hyp_overflow_stack base.
+ * @fp:			hyp FP where the backtrace begins.
+ * @pc:			hyp PC where the backtrace begins.
+ */
+struct kvm_nvhe_stacktrace_info {
+	unsigned long stack_base;
+	unsigned long overflow_stack_base;
+	unsigned long fp;
+	unsigned long pc;
+};
+
 /* Translate a kernel address @ptr into its equivalent linear mapping */
 #define kvm_ksym_ref(ptr)						\
 	({								\
diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index 456a6ae08433..1aadfd8d7ac9 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -19,6 +19,7 @@
 #ifndef __ASM_STACKTRACE_NVHE_H
 #define __ASM_STACKTRACE_NVHE_H
 
+#include <asm/kvm_asm.h>
 #include <asm/stacktrace/common.h>
 
 /**
@@ -49,6 +50,9 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
  */
 #ifdef __KVM_NVHE_HYPERVISOR__
 
+DECLARE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack);
+DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
+
 extern void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc);
 
 #ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
index 832a536e440f..315eb41c37a2 100644
--- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
+++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
@@ -9,6 +9,28 @@
 DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
 	__aligned(16);
 
+DEFINE_PER_CPU(struct kvm_nvhe_stacktrace_info, kvm_stacktrace_info);
+
+/**
+ * hyp_prepare_backtrace - Prepare non-protected nVHE backtrace.
+ *
+ * @fp : frame pointer at which to start the unwinding.
+ * @pc : program counter at which to start the unwinding.
+ *
+ * Save the information needed by the host to unwind the non-protected
+ * nVHE hypervisor stack in EL1.
+ */
+static void hyp_prepare_backtrace(unsigned long fp, unsigned long pc)
+{
+	struct kvm_nvhe_stacktrace_info *stacktrace_info = this_cpu_ptr(&kvm_stacktrace_info);
+	struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+
+	stacktrace_info->stack_base = (unsigned long)(params->stack_hyp_va - PAGE_SIZE);
+	stacktrace_info->overflow_stack_base = (unsigned long)this_cpu_ptr(overflow_stack);
+	stacktrace_info->fp = fp;
+	stacktrace_info->pc = pc;
+}
+
 #ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
 
 DEFINE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace);
@@ -81,4 +103,6 @@ void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc)
 {
 	if (is_protected_kvm_enabled())
 		pkvm_save_backtrace(fp, pc);
+	else
+		hyp_prepare_backtrace(fp, pc);
 }
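--
For reviewers, a rough host-side sketch of how this bookkeeping can be
consumed (illustration only, not part of this patch: the unwind loop is
simplified, and hyp_stack_kern_va() / hyp_to_kern_offset are hypothetical
stand-ins for the hyp-to-kernel address translation that a later patch in
this series implements properly):

#include <linux/kvm_host.h>
#include <asm/kvm_asm.h>
#include <asm/page.h>

/* Hypothetical: offset from a hyp stack VA to its kernel linear-map alias. */
static unsigned long hyp_to_kern_offset;

static unsigned long hyp_stack_kern_va(unsigned long hyp_va)
{
	return hyp_va + hyp_to_kern_offset;
}

/* Host (EL1) side: walk the frame records left on the hyp stack. */
static void host_dump_nvhe_backtrace_sketch(void)
{
	struct kvm_nvhe_stacktrace_info *info;
	unsigned long fp, pc;

	/* Read the per-cpu info saved by hyp_prepare_backtrace(). */
	info = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
	fp = info->fp;
	pc = info->pc;

	while (fp) {
		unsigned long *frame;

		kvm_err("  [<%016lx>]\n", pc);

		/* Only follow fp while it stays within the hyp stack page. */
		if (fp < info->stack_base ||
		    fp >= info->stack_base + PAGE_SIZE)
			break;

		/* Translate the hyp VA before dereferencing it from EL1. */
		frame = (unsigned long *)hyp_stack_kern_va(fp);
		fp = frame[0];	/* next frame record */
		pc = frame[1];	/* saved link register */
	}
}

The overflow-stack case is omitted above for brevity; the real unwinder
added later in the series also accepts addresses on hyp_overflow_stack
(hence overflow_stack_base in the struct) and validates frame alignment.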