From: Jason Yan <yanaijie@huawei.com>
Subject: [PATCH v3 10/10] powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
Date: Wed, 31 Jul 2019 17:43:18 +0800
Message-ID: <20190731094318.26538-11-yanaijie@huawei.com>
In-Reply-To: <20190731094318.26538-1-yanaijie@huawei.com>

When KASLR is enabled, the kernel offset is different for every boot. This
makes it more difficult to debug the kernel. Dump out the kernel offset on
panic so that the kernel can be debugged easily.
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun
Cc: Michael Ellerman
Cc: Christophe Leroy
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Nicholas Piggin
Cc: Kees Cook
Reviewed-by: Christophe Leroy
---
 arch/powerpc/include/asm/page.h     |  5 +++++
 arch/powerpc/kernel/machine_kexec.c |  1 +
 arch/powerpc/kernel/setup-common.c  | 19 +++++++++++++++++++
 3 files changed, 25 insertions(+)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 60a68d3a54b1..cd3ac530e58d 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -317,6 +317,11 @@ struct vm_area_struct;
 
 extern unsigned long kimage_vaddr;
 
+static inline unsigned long kaslr_offset(void)
+{
+	return kimage_vaddr - KERNELBASE;
+}
+
 #include <asm-generic/memory_model.h>
 #endif /* __ASSEMBLY__ */
 #include <asm/slice.h>
diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
index c4ed328a7b96..078fe3d76feb 100644
--- a/arch/powerpc/kernel/machine_kexec.c
+++ b/arch/powerpc/kernel/machine_kexec.c
@@ -86,6 +86,7 @@ void arch_crash_save_vmcoreinfo(void)
 	VMCOREINFO_STRUCT_SIZE(mmu_psize_def);
 	VMCOREINFO_OFFSET(mmu_psize_def, shift);
 #endif
+	vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
 }
 
 /*
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 1f8db666468d..064075f02837 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -715,12 +715,31 @@ static struct notifier_block ppc_panic_block = {
 	.priority = INT_MIN /* may not return; must be done last */
 };
 
+/*
+ * Dump out kernel offset information on panic.
+ */
+static int dump_kernel_offset(struct notifier_block *self, unsigned long v,
+			      void *p)
+{
+	pr_emerg("Kernel Offset: 0x%lx from 0x%lx\n",
+		 kaslr_offset(), KERNELBASE);
+
+	return 0;
+}
+
+static struct notifier_block kernel_offset_notifier = {
+	.notifier_call = dump_kernel_offset
+};
+
 void __init setup_panic(void)
 {
 	/* PPC64 always does a hard irq disable in its panic handler */
 	if (!IS_ENABLED(CONFIG_PPC64) && !ppc_md.panic)
 		return;
 	atomic_notifier_chain_register(&panic_notifier_list, &ppc_panic_block);
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0)
+		atomic_notifier_chain_register(&panic_notifier_list,
+					       &kernel_offset_notifier);
 }
 
 #ifdef CONFIG_CHECK_CACHE_COHERENCY
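
For reference, below is a minimal, self-contained sketch of the same panic-notifier
pattern the patch relies on, written as an out-of-tree module purely for illustration.
It is not part of this patch; the module name, file name and message text are invented
for the example:

/* panic_note.c - illustrative only; mirrors the notifier registration above */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/notifier.h>

/* Called from the atomic panic notifier chain when the kernel panics. */
static int panic_note_notify(struct notifier_block *self, unsigned long event,
			     void *data)
{
	pr_emerg("panic_note: example message printed at panic time\n");
	return NOTIFY_DONE;
}

static struct notifier_block panic_note_nb = {
	.notifier_call = panic_note_notify,
};

static int __init panic_note_init(void)
{
	/* Hook into the same chain used by kernel_offset_notifier above. */
	atomic_notifier_chain_register(&panic_notifier_list, &panic_note_nb);
	return 0;
}

static void __exit panic_note_exit(void)
{
	atomic_notifier_chain_unregister(&panic_notifier_list, &panic_note_nb);
}

module_init(panic_note_init);
module_exit(panic_note_exit);
MODULE_LICENSE("GPL");

At crash time the KERNELOFFSET= string appended to vmcoreinfo is what dump tools
such as crash and makedumpfile can use to account for the randomized base, while
the pr_emerg line covers the console-only case: subtracting the printed offset
from an address in the panic trace recovers the link-time address that can be
looked up in System.map or vmlinux.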