From patchwork Fri Jul 15 06:10:26 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12918781
Date: Thu, 14 Jul 2022 23:10:26 -0700
In-Reply-To: <20220715061027.1612149-1-kaleshsingh@google.com>
Message-Id: <20220715061027.1612149-18-kaleshsingh@google.com>
Mime-Version: 1.0
References: <20220715061027.1612149-1-kaleshsingh@google.com>
Subject: [PATCH v4 17/18] KVM: arm64: Introduce
 hyp_dump_backtrace()
From: Kalesh Singh
To: maz@kernel.org, mark.rutland@arm.com, broonie@kernel.org,
 madvenka@linux.microsoft.com
Cc: will@kernel.org, qperret@google.com, tabba@google.com,
 kaleshsingh@google.com, james.morse@arm.com, alexandru.elisei@arm.com,
 suzuki.poulose@arm.com, catalin.marinas@arm.com, andreyknvl@gmail.com,
 russell.king@oracle.com, vincenzo.frascino@arm.com, mhiramat@kernel.org,
 ast@kernel.org, drjones@redhat.com, wangkefeng.wang@huawei.com,
 elver@google.com, keirf@google.com, yuzenghui@huawei.com, ardb@kernel.org,
 oupton@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
 android-mm@google.com, kernel-team@android.com

In non-protected nVHE mode, unwind and dump the hypervisor backtrace
from EL1. This is possible because the host can directly access the
hypervisor stack pages in non-protected mode.
Signed-off-by: Kalesh Singh
---
 arch/arm64/include/asm/stacktrace/nvhe.h | 64 +++++++++++++++++++++---
 1 file changed, 56 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index ec1a4ee21c21..c322ac95b256 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -190,6 +190,56 @@ static int notrace unwind_next(struct unwind_state *state)
 }
 NOKPROBE_SYMBOL(unwind_next);
 
+/**
+ * kvm_nvhe_print_backtrace_entry - Symbolize and print a HYP stack address
+ */
+static inline void kvm_nvhe_print_backtrace_entry(unsigned long addr,
+						  unsigned long hyp_offset)
+{
+	unsigned long va_mask = GENMASK_ULL(vabits_actual - 1, 0);
+
+	/* Mask tags and convert to kern addr */
+	addr = (addr & va_mask) + hyp_offset;
+	kvm_err(" [<%016lx>] %pB\n", addr, (void *)addr);
+}
+
+/**
+ * hyp_dump_backtrace_entry - Dump an entry of the non-protected nVHE HYP stacktrace
+ *
+ * @arg   : the hypervisor offset, used for address translation
+ * @where : the program counter corresponding to the stack frame
+ */
+static inline bool hyp_dump_backtrace_entry(void *arg, unsigned long where)
+{
+	kvm_nvhe_print_backtrace_entry(where, (unsigned long)arg);
+
+	return true;
+}
+
+/**
+ * hyp_dump_backtrace - Dump the non-protected nVHE HYP backtrace.
+ *
+ * @hyp_offset: hypervisor offset, used for address translation.
+ *
+ * The host can directly access HYP stack pages in non-protected
+ * mode, so the unwinding is done directly from EL1. This removes
+ * the need for shared buffers between host and hypervisor for
+ * the stacktrace.
+ */
+static inline void hyp_dump_backtrace(unsigned long hyp_offset)
+{
+	struct kvm_nvhe_stacktrace_info *stacktrace_info;
+	struct unwind_state state;
+
+	stacktrace_info = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
+
+	kvm_nvhe_unwind_init(&state, stacktrace_info->fp, stacktrace_info->pc);
+
+	kvm_err("Non-protected nVHE HYP call trace:\n");
+	unwind(&state, hyp_dump_backtrace_entry, (void *)hyp_offset);
+	kvm_err("---- End of Non-protected nVHE HYP call trace ----\n");
+}
+
 #ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
 DECLARE_KVM_NVHE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)],
 			 pkvm_stacktrace);
@@ -206,22 +256,18 @@ DECLARE_KVM_NVHE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm
 static inline void pkvm_dump_backtrace(unsigned long hyp_offset)
 {
 	unsigned long *stacktrace_pos;
-	unsigned long va_mask, pc;
 
 	stacktrace_pos = (unsigned long *)this_cpu_ptr_nvhe_sym(pkvm_stacktrace);
-	va_mask = GENMASK_ULL(vabits_actual - 1, 0);
 
 	kvm_err("Protected nVHE HYP call trace:\n");
 
-	/* The stack trace is terminated by a null entry */
-	for (; *stacktrace_pos; stacktrace_pos++) {
-		/* Mask tags and convert to kern addr */
-		pc = (*stacktrace_pos & va_mask) + hyp_offset;
-		kvm_err(" [<%016lx>] %pB\n", pc, (void *)pc);
-	}
+	/* The saved stacktrace is terminated by a null entry */
+	for (; *stacktrace_pos; stacktrace_pos++)
+		kvm_nvhe_print_backtrace_entry(*stacktrace_pos, hyp_offset);
 
 	kvm_err("---- End of Protected nVHE HYP call trace ----\n");
 }
+
 #else	/* !CONFIG_PROTECTED_NVHE_STACKTRACE */
 static inline void pkvm_dump_backtrace(unsigned long hyp_offset)
 {
@@ -238,6 +284,8 @@ static inline void kvm_nvhe_dump_backtrace(unsigned long hyp_offset)
 {
 	if (is_protected_kvm_enabled())
 		pkvm_dump_backtrace(hyp_offset);
+	else
+		hyp_dump_backtrace(hyp_offset);
 }
 #endif	/* __KVM_NVHE_HYPERVISOR__ */
 #endif	/* __ASM_STACKTRACE_NVHE_H */