From patchwork Tue Feb 4 17:33:51 2025
X-Patchwork-Submitter: Maciej Wieczor-Retman
X-Patchwork-Id: 13959493
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
To: luto@kernel.org, xin@zytor.com, kirill.shutemov@linux.intel.com,
	palmer@dabbelt.com, tj@kernel.org, andreyknvl@gmail.com,
	brgerst@gmail.com, ardb@kernel.org, dave.hansen@linux.intel.com,
	jgross@suse.com, will@kernel.org, akpm@linux-foundation.org,
	arnd@arndb.de, corbet@lwn.net, maciej.wieczor-retman@intel.com,
	dvyukov@google.com, richard.weiyang@gmail.com, ytcoode@gmail.com,
	tglx@linutronix.de, hpa@zytor.com, seanjc@google.com,
	paul.walmsley@sifive.com, aou@eecs.berkeley.edu,
	justinstitt@google.com, jason.andryuk@amd.com, glider@google.com,
	ubizjak@gmail.com, jannh@google.com, bhe@redhat.com,
	vincenzo.frascino@arm.com, rafael.j.wysocki@intel.com,
	ndesaulniers@google.com, mingo@redhat.com, catalin.marinas@arm.com,
	junichi.nomura@nec.com, nathan@kernel.org, ryabinin.a.a@gmail.com,
	dennis@kernel.org, bp@alien8.de, kevinloughlin@google.com,
	morbo@google.com, dan.j.williams@intel.com,
	julian.stecklina@cyberus-technology.de, peterz@infradead.org,
	cl@linux.com, kees@kernel.org
Cc: kasan-dev@googlegroups.com, x86@kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	llvm@lists.linux.dev, linux-doc@vger.kernel.org
Subject: [PATCH 10/15] x86: KASAN raw shadow memory PTE init
Date: Tue, 4 Feb 2025 18:33:51 +0100
Message-ID: <28ddfb1694b19278405b4934f37d398794409749.1738686764.git.maciej.wieczor-retman@intel.com>
X-Mailer: git-send-email 2.47.1
MIME-Version: 1.0
In KASAN's generic mode the default value in shadow memory is zero: during
initialization, shadow memory pages are allocated and zeroed.

In KASAN's tag-based mode the default tag on the arm64 architecture is 0xFE,
which marks memory that should not be accessed. On x86, where tags are 4-bit
wide instead of 8-bit wide, that tag is 0xE, so during initialization every
byte of the shadow memory pages should be filled with 0xE, or with 0xEE when
two tags are packed into one shadow byte.
Use memblock_alloc_try_nid_raw() instead of memblock_alloc_try_nid() to
avoid zeroing out the memory, so it can be set with the KASAN invalid tag.

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
 arch/x86/mm/kasan_init_64.c | 19 ++++++++++++++++---
 include/linux/kasan.h       | 25 +++++++++++++++++++++++++
 mm/kasan/kasan.h            | 19 -------------------
 3 files changed, 41 insertions(+), 22 deletions(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 9dddf19a5571..55d468d83682 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -35,6 +35,18 @@ static __init void *early_alloc(size_t size, int nid, bool should_panic)
 	return ptr;
 }
 
+static __init void *early_raw_alloc(size_t size, int nid, bool should_panic)
+{
+	void *ptr = memblock_alloc_try_nid_raw(size, size,
+			__pa(MAX_DMA_ADDRESS), MEMBLOCK_ALLOC_ACCESSIBLE, nid);
+
+	if (!ptr && should_panic)
+		panic("%pS: Failed to allocate page, nid=%d from=%lx\n",
+		      (void *)_RET_IP_, nid, __pa(MAX_DMA_ADDRESS));
+
+	return ptr;
+}
+
 static void __init kasan_populate_pmd(pmd_t *pmd, unsigned long addr,
 				      unsigned long end, int nid)
 {
@@ -64,8 +76,9 @@ static void __init kasan_populate_pmd(pmd_t *pmd, unsigned long addr,
 		if (!pte_none(*pte))
 			continue;
 
-		p = early_alloc(PAGE_SIZE, nid, true);
-		entry = pfn_pte(PFN_DOWN(__pa(p)), PAGE_KERNEL);
+		p = early_raw_alloc(PAGE_SIZE, nid, true);
+		memset(p, kasan_dense_tag(KASAN_SHADOW_INIT), PAGE_SIZE);
+		entry = pfn_pte(PFN_DOWN(__pa_nodebug(p)), PAGE_KERNEL);
 		set_pte_at(&init_mm, addr, pte, entry);
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 }
@@ -437,7 +450,7 @@ void __init kasan_init(void)
 	 * it may contain some garbage. Now we can clear and write protect it,
 	 * since after the TLB flush no one should write to it.
 	 */
-	memset(kasan_early_shadow_page, 0, PAGE_SIZE);
+	kasan_poison(kasan_early_shadow_page, PAGE_SIZE, KASAN_SHADOW_INIT, false);
 	for (i = 0; i < PTRS_PER_PTE; i++) {
 		pte_t pte;
 		pgprot_t prot;
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 83146367170a..af8272c74409 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -151,6 +151,31 @@ static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
 		__kasan_unpoison_range(addr, size);
 }
 
+#ifdef CONFIG_KASAN_HW_TAGS
+
+static inline void kasan_poison(const void *addr, size_t size, u8 value, bool init)
+{
+	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
+		return;
+	if (WARN_ON(size & KASAN_GRANULE_MASK))
+		return;
+
+	hw_set_mem_tag_range(kasan_reset_tag(addr), size, value, init);
+}
+
+#else /* CONFIG_KASAN_HW_TAGS */
+
+/**
+ * kasan_poison - mark the memory range as inaccessible
+ * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
+ * @size - range size, must be aligned to KASAN_GRANULE_SIZE
+ * @value - value that's written to metadata for the range
+ * @init - whether to initialize the memory range (only for hardware tag-based)
+ */
+void kasan_poison(const void *addr, size_t size, u8 value, bool init);
+
+#endif /* CONFIG_KASAN_HW_TAGS */
+
 void __kasan_poison_pages(struct page *page, unsigned int order, bool init);
 static __always_inline void kasan_poison_pages(struct page *page,
 						unsigned int order, bool init)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index a56aadd51485..2405477c5899 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -466,16 +466,6 @@ static inline u8 kasan_random_tag(void) { return 0; }
 
 #ifdef CONFIG_KASAN_HW_TAGS
 
-static inline void kasan_poison(const void *addr, size_t size, u8 value, bool init)
-{
-	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
-		return;
-	if (WARN_ON(size & KASAN_GRANULE_MASK))
-		return;
-
-	hw_set_mem_tag_range(kasan_reset_tag(addr), size, value, init);
-}
-
 static inline void
 kasan_unpoison(const void *addr, size_t size, bool init)
 {
 	u8 tag = get_tag(addr);
@@ -497,15 +487,6 @@ static inline bool kasan_byte_accessible(const void *addr)
 
 #else /* CONFIG_KASAN_HW_TAGS */
 
-/**
- * kasan_poison - mark the memory range as inaccessible
- * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
- * @size - range size, must be aligned to KASAN_GRANULE_SIZE
- * @value - value that's written to metadata for the range
- * @init - whether to initialize the memory range (only for hardware tag-based)
- */
-void kasan_poison(const void *addr, size_t size, u8 value, bool init);
-
 /**
  * kasan_unpoison - mark the memory range as accessible
  * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE