From patchwork Fri Feb 26 01:19:03 2021
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12105457
Date: Thu, 25 Feb 2021 17:19:03 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com,
 bp@alien8.de, catalin.marinas@arm.com, cl@linux.com, corbet@lwn.net,
 dave.hansen@linux.intel.com, dvyukov@google.com, edumazet@google.com,
 elver@google.com, glider@google.com, gregkh@linuxfoundation.org,
 hdanton@sina.com, hpa@zytor.com, iamjoonsoo.kim@lge.com, jannh@google.com,
 joern@purestorage.com, keescook@chromium.org, linux-mm@kvack.org,
 luto@kernel.org, mark.rutland@arm.com, mingo@redhat.com,
 mm-commits@vger.kernel.org, paulmck@kernel.org, penberg@kernel.org,
 peterz@infradead.org, rientjes@google.com, sjpark@amazon.de,
 tglx@linutronix.de, torvalds@linux-foundation.org, vbabka@suse.cz,
 will@kernel.org
Subject: [patch 058/118] arm64, kfence: enable KFENCE for ARM64
Message-ID: <20210226011903.0X34ONJ39%akpm@linux-foundation.org>
In-Reply-To: <20210225171452.713967e96554bb6a53e44a19@linux-foundation.org>

From: Marco Elver
Subject: arm64, kfence: enable KFENCE for ARM64

Add architecture specific implementation details for KFENCE and enable
KFENCE for the arm64 architecture. In particular, this implements the
required interface in <asm/kfence.h>.

KFENCE requires that attributes for pages from its memory pool can
individually be set. Therefore, force the entire linear map to be mapped
at page granularity. Doing so may result in extra memory allocated for
page tables in case rodata=full is not set; however, currently
CONFIG_RODATA_FULL_DEFAULT_ENABLED=y is the default, and the common case
is therefore not affected by this change.
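[Editorial note, not part of the patch: the sketch below illustrates why page
granularity matters. kfence_protect_page() and set_memory_valid() are the real
interfaces from this patch; the two wrapper helpers are hypothetical names used
only for illustration. Flipping the valid bit of a single page in the linear
map only works if that page has its own page-table entry, i.e. if the linear
map is not built from block (section) mappings.]

/*
 * Illustrative sketch only (hypothetical helpers, not from this patch):
 * how the per-page protect hook added for arm64 is meant to be used.
 */
#include <linux/types.h>
#include <asm/kfence.h>		/* kfence_protect_page() -> set_memory_valid() */

/* Make one KFENCE pool page inaccessible: any access to it faults. */
static bool kfence_guard(unsigned long page_addr)
{
	/* Clears the valid bit for exactly one page of the linear map. */
	return kfence_protect_page(page_addr, true);
}

/* Make the page accessible again, e.g. before handing out an object from it. */
static bool kfence_unguard(unsigned long page_addr)
{
	return kfence_protect_page(page_addr, false);
}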
[elver@google.com: add missing copyright and description header]
  Link: https://lkml.kernel.org/r/20210118092159.145934-3-elver@google.com
Link: https://lkml.kernel.org/r/20201103175841.3495947-4-elver@google.com
Signed-off-by: Alexander Potapenko
Signed-off-by: Marco Elver
Reviewed-by: Dmitry Vyukov
Co-developed-by: Alexander Potapenko
Reviewed-by: Jann Horn
Reviewed-by: Mark Rutland
Cc: Andrey Konovalov
Cc: Andrey Ryabinin
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Catalin Marinas
Cc: Christopher Lameter
Cc: Dave Hansen
Cc: David Rientjes
Cc: Eric Dumazet
Cc: Greg Kroah-Hartman
Cc: Hillf Danton
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: Joern Engel
Cc: Jonathan Corbet
Cc: Joonsoo Kim
Cc: Kees Cook
Cc: Paul E. McKenney
Cc: Pekka Enberg
Cc: Peter Zijlstra
Cc: SeongJae Park
Cc: Thomas Gleixner
Cc: Vlastimil Babka
Cc: Will Deacon
Signed-off-by: Andrew Morton
---

 arch/arm64/Kconfig              |    1 +
 arch/arm64/include/asm/kfence.h |   24 ++++++++++++++++++++++++
 arch/arm64/mm/fault.c           |    4 ++++
 arch/arm64/mm/mmu.c             |    8 +++++++-
 4 files changed, 36 insertions(+), 1 deletion(-)

--- /dev/null
+++ a/arch/arm64/include/asm/kfence.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * arm64 KFENCE support.
+ *
+ * Copyright (C) 2020, Google LLC.
+ */
+
+#ifndef __ASM_KFENCE_H
+#define __ASM_KFENCE_H
+
+#include <asm/cacheflush.h>
+
+#define KFENCE_SKIP_ARCH_FAULT_HANDLER "el1_sync"
+
+static inline bool arch_kfence_init_pool(void) { return true; }
+
+static inline bool kfence_protect_page(unsigned long addr, bool protect)
+{
+	set_memory_valid(addr, 1, !protect);
+
+	return true;
+}
+
+#endif /* __ASM_KFENCE_H */
--- a/arch/arm64/Kconfig~arm64-kfence-enable-kfence-for-arm64
+++ a/arch/arm64/Kconfig
@@ -140,6 +140,7 @@ config ARM64
 	select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
 	select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN
 	select HAVE_ARCH_KASAN_HW_TAGS if (HAVE_ARCH_KASAN && ARM64_MTE)
+	select HAVE_ARCH_KFENCE
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
--- a/arch/arm64/mm/fault.c~arm64-kfence-enable-kfence-for-arm64
+++ a/arch/arm64/mm/fault.c
@@ -10,6 +10,7 @@
 #include <linux/acpi.h>
 #include <linux/bitfield.h>
 #include <linux/extable.h>
+#include <linux/kfence.h>
 #include <linux/signal.h>
 #include <linux/mm.h>
 #include <linux/hardirq.h>
@@ -389,6 +390,9 @@ static void __do_kernel_fault(unsigned l
 	} else if (addr < PAGE_SIZE) {
 		msg = "NULL pointer dereference";
 	} else {
+		if (kfence_handle_page_fault(addr))
+			return;
+
 		msg = "paging request";
 	}
 
--- a/arch/arm64/mm/mmu.c~arm64-kfence-enable-kfence-for-arm64
+++ a/arch/arm64/mm/mmu.c
@@ -1465,7 +1465,13 @@ int arch_add_memory(int nid, u64 start,
 	int ret, flags = 0;
 
 	VM_BUG_ON(!mhp_range_allowed(start, size, true));
-	if (rodata_full || debug_pagealloc_enabled())
+
+	/*
+	 * KFENCE requires linear map to be mapped at page granularity, so that
+	 * it is possible to protect/unprotect single pages in the KFENCE pool.
+	 */
+	if (rodata_full || debug_pagealloc_enabled() ||
+	    IS_ENABLED(CONFIG_KFENCE))
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
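
[Editorial note, not part of the patch: the hypothetical module below sketches
the kind of access the new hook in __do_kernel_fault() is there to catch. With
CONFIG_KFENCE=y, an allocation is occasionally served from the KFENCE pool
(subject to the sampling interval), and an out-of-bounds access may then hit a
protected guard page; kfence_handle_page_fault() reports it instead of letting
the fault fall through to die_kernel_fault(). All names in the sketch are made
up for illustration.]

// SPDX-License-Identifier: GPL-2.0
/*
 * Hypothetical demo module (not part of this patch): perform an
 * out-of-bounds read that KFENCE can turn into a report when the
 * allocation happens to come from the KFENCE pool.
 */
#include <linux/module.h>
#include <linux/slab.h>

static int __init kfence_oob_demo_init(void)
{
	char *buf = kmalloc(32, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;

	/*
	 * One byte past the allocation.  If buf sits at the end of a
	 * KFENCE pool page, this access faults on the adjacent guard
	 * page, and the fault path added by this patch routes it to
	 * kfence_handle_page_fault() rather than die_kernel_fault().
	 */
	pr_info("kfence_oob_demo: OOB byte = %d\n", READ_ONCE(buf[32]));

	kfree(buf);
	return 0;
}

static void __exit kfence_oob_demo_exit(void)
{
}

module_init(kfence_oob_demo_init);
module_exit(kfence_oob_demo_exit);
MODULE_LICENSE("GPL");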