From patchwork Tue Jul 26 07:37:43 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12928985
Date: Tue, 26 Jul 2022 00:37:43 -0700
In-Reply-To: <20220726073750.3219117-1-kaleshsingh@google.com>
Message-Id: <20220726073750.3219117-11-kaleshsingh@google.com>
References: <20220726073750.3219117-1-kaleshsingh@google.com>
Subject: [PATCH v6 10/17] KVM: arm64: Implement non-protected nVHE hyp stack unwinder
From: Kalesh Singh <kaleshsingh@google.com>
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org,
 madvenka@linux.microsoft.com, tabba@google.com, oliver.upton@linux.dev
Cc: will@kernel.org, qperret@google.com, kaleshsingh@google.com,
 james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com,
 catalin.marinas@arm.com, andreyknvl@gmail.com, vincenzo.frascino@arm.com,
 mhiramat@kernel.org, ast@kernel.org, wangkefeng.wang@huawei.com,
 elver@google.com, keirf@google.com, yuzenghui@huawei.com, ardb@kernel.org,
 oupton@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
 android-mm@google.com, kernel-team@android.com

Implements the common framework necessary for unwind() to work
in non-protected nVHE mode:
    - on_accessible_stack()
    - on_overflow_stack()
    - unwind_next()

Non-protected nVHE unwind() is used to unwind and dump the hypervisor
stacktrace by the host in EL1.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
---
Changes in v6:
  - Add Fuad's Reviewed-by and Tested-by tags

Changes in v5:
  - Use regular comments instead of doc comments, per Fuad

 arch/arm64/include/asm/stacktrace/common.h |  2 +
 arch/arm64/include/asm/stacktrace/nvhe.h   | 76 +++++++++++++++++++++-
 arch/arm64/kvm/arm.c                       |  2 +-
 3 files changed, 77 insertions(+), 3 deletions(-)
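As context for the diff below, a rough sketch (not part of this patch) of
how a host-side caller could drive the unwinder once this framework is in
place. kvm_nvhe_unwind_init(), unwind() and the fp/pc fields of
kvm_nvhe_stacktrace_info are assumed from earlier patches in this series;
the entry consumer and the dump function are illustrative only, since the
actual hypervisor stacktrace dumping arrives later in the series.

/* Illustrative sketch, assuming the helpers named above. */
static bool hyp_stack_entry_sketch(void *cookie, unsigned long addr)
{
	/* Print one hypervisor return address per frame. */
	kvm_err(" [<%016lx>]\n", addr);
	return true;	/* keep unwinding */
}

static void hyp_dump_backtrace_sketch(void)
{
	struct kvm_nvhe_stacktrace_info *stacktrace_info;
	struct unwind_state state;

	/* Per-CPU fp/pc snapshot shared by the hypervisor with the host. */
	stacktrace_info = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);

	/* Seed the unwinder with the hypervisor's saved frame record. */
	kvm_nvhe_unwind_init(&state, stacktrace_info->fp, stacktrace_info->pc);

	/* unwind() walks frames by repeatedly calling unwind_next() below. */
	unwind(&state, hyp_stack_entry_sketch, NULL);
}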
diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index 45474b383630..3ebb69ea374a 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -34,6 +34,7 @@ enum stack_type {
 	STACK_TYPE_OVERFLOW,
 	STACK_TYPE_SDEI_NORMAL,
 	STACK_TYPE_SDEI_CRITICAL,
+	STACK_TYPE_HYP,
 	__NR_STACK_TYPES
 };
 
@@ -186,6 +187,7 @@ static inline int unwind_next_common(struct unwind_state *state,
 	 *
 	 * TASK -> IRQ -> OVERFLOW -> SDEI_NORMAL
 	 * TASK -> SDEI_NORMAL -> SDEI_CRITICAL -> OVERFLOW
+	 * HYP -> OVERFLOW
 	 *
 	 * ... but the nesting itself is strict. Once we transition from one
 	 * stack to another, it's never valid to unwind back to that first
diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index 1192ae0f80c1..21082fd4a0b7 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -16,10 +16,19 @@
 
 #include <asm/stacktrace/common.h>
 
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info);
+
 static inline bool on_accessible_stack(const struct task_struct *tsk,
 				       unsigned long sp, unsigned long size,
 				       struct stack_info *info)
 {
+	if (on_accessible_stack_common(tsk, sp, size, info))
+		return true;
+
+	if (on_hyp_stack(sp, size, info))
+		return true;
+
 	return false;
 }
 
@@ -31,15 +40,78 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
  * (by the host in EL1).
  */
 
+DECLARE_KVM_NVHE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack);
+DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_stacktrace_info, kvm_stacktrace_info);
+DECLARE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
+
+/*
+ * kvm_nvhe_stack_kern_va - Convert KVM nVHE HYP stack addresses to kernel VAs
+ *
+ * The nVHE hypervisor stack is mapped in the flexible 'private' VA range, to
+ * allow for guard pages below the stack. Consequently, the fixed offset address
+ * translation macros won't work here.
+ *
+ * The kernel VA is calculated as an offset from the kernel VA of the hypervisor
+ * stack base.
+ *
+ * Returns true on success and updates @addr to its corresponding kernel VA;
+ * otherwise returns false.
+ */
+static inline bool kvm_nvhe_stack_kern_va(unsigned long *addr,
+					  enum stack_type type)
+{
+	struct kvm_nvhe_stacktrace_info *stacktrace_info;
+	unsigned long hyp_base, kern_base, hyp_offset;
+
+	stacktrace_info = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
+
+	switch (type) {
+	case STACK_TYPE_HYP:
+		kern_base = (unsigned long)*this_cpu_ptr(&kvm_arm_hyp_stack_page);
+		hyp_base = (unsigned long)stacktrace_info->stack_base;
+		break;
+	case STACK_TYPE_OVERFLOW:
+		kern_base = (unsigned long)this_cpu_ptr_nvhe_sym(overflow_stack);
+		hyp_base = (unsigned long)stacktrace_info->overflow_stack_base;
+		break;
+	default:
+		return false;
+	}
+
+	hyp_offset = *addr - hyp_base;
+
+	*addr = kern_base + hyp_offset;
+
+	return true;
+}
+
 static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
 				     struct stack_info *info)
 {
-	return false;
+	struct kvm_nvhe_stacktrace_info *stacktrace_info
+		= this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
+	unsigned long low = (unsigned long)stacktrace_info->overflow_stack_base;
+	unsigned long high = low + OVERFLOW_STACK_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info);
+}
+
+static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
+				struct stack_info *info)
+{
+	struct kvm_nvhe_stacktrace_info *stacktrace_info
+		= this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
+	unsigned long low = (unsigned long)stacktrace_info->stack_base;
+	unsigned long high = low + PAGE_SIZE;
+
+	return on_stack(sp, size, low, high, STACK_TYPE_HYP, info);
 }
 
 static inline int notrace unwind_next(struct unwind_state *state)
 {
-	return 0;
+	struct stack_info info;
+
+	return unwind_next_common(state, &info, kvm_nvhe_stack_kern_va);
 }
 NOKPROBE_SYMBOL(unwind_next);
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a0188144a122..6a64293108c5 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -49,7 +49,7 @@ DEFINE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);
 
 DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
 
-static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
+DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
 unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
 DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
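
To make the address translation above concrete, here is a minimal
stand-alone sketch of the arithmetic kvm_nvhe_stack_kern_va() performs
(user-space C with made-up base addresses, not kernel code): a hyp VA on
the stack is rebased onto the kernel mapping of the same memory by
preserving its offset from the stack base.

#include <stdio.h>

int main(void)
{
	/*
	 * Hypothetical per-CPU bases; the real values come from
	 * stacktrace_info->stack_base and kvm_arm_hyp_stack_page.
	 */
	unsigned long hyp_base  = 0xffff800008df0000UL; /* hyp VA of the stack */
	unsigned long kern_base = 0xffff000012340000UL; /* kernel VA of the same page */
	unsigned long hyp_fp    = hyp_base + 0xe90;     /* frame pointer seen on the hyp stack */

	unsigned long hyp_offset = hyp_fp - hyp_base;   /* offset within the stack */
	unsigned long kern_fp    = kern_base + hyp_offset;

	/* Prints: hyp fp 0xffff800008df0e90 -> kernel va 0xffff000012340e90 */
	printf("hyp fp %#lx -> kernel va %#lx (offset %#lx)\n",
	       hyp_fp, kern_fp, hyp_offset);
	return 0;
}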