From patchwork Tue Feb 4 17:33:43 2025
X-Patchwork-Submitter: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
X-Patchwork-Id: 13959485
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
To: luto@kernel.org, xin@zytor.com, kirill.shutemov@linux.intel.com,
	palmer@dabbelt.com, tj@kernel.org, andreyknvl@gmail.com,
	brgerst@gmail.com, ardb@kernel.org, dave.hansen@linux.intel.com,
	jgross@suse.com, will@kernel.org, akpm@linux-foundation.org,
	arnd@arndb.de, corbet@lwn.net, maciej.wieczor-retman@intel.com,
	dvyukov@google.com, richard.weiyang@gmail.com, ytcoode@gmail.com,
	tglx@linutronix.de, hpa@zytor.com, seanjc@google.com,
	paul.walmsley@sifive.com, aou@eecs.berkeley.edu,
	justinstitt@google.com, jason.andryuk@amd.com, glider@google.com,
	ubizjak@gmail.com, jannh@google.com, bhe@redhat.com,
	vincenzo.frascino@arm.com, rafael.j.wysocki@intel.com,
	ndesaulniers@google.com, mingo@redhat.com, catalin.marinas@arm.com,
	junichi.nomura@nec.com, nathan@kernel.org, ryabinin.a.a@gmail.com,
	dennis@kernel.org, bp@alien8.de, kevinloughlin@google.com,
	morbo@google.com, dan.j.williams@intel.com,
	julian.stecklina@cyberus-technology.de, peterz@infradead.org,
	cl@linux.com, kees@kernel.org
Cc: kasan-dev@googlegroups.com, x86@kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	llvm@lists.linux.dev, linux-doc@vger.kernel.org
Subject: [PATCH 02/15] kasan: Tag checking with dense tag-based mode
Date: Tue, 4 Feb 2025 18:33:43 +0100
Message-ID: <8f790bb7e166c1ea2e5003318149eb1d7aba3596.1738686764.git.maciej.wieczor-retman@intel.com>
X-Mailer: git-send-email 2.47.1
In KASAN's tag-based mode (arm64), when a memory access occurs, the tag
stored in the top 8 bits of the pointer is compared with the tags saved
in the region of shadow memory that maps to the memory the pointer
points to. If any of the tags in the shadow memory region do not match
the one stored in the pointer, an error report is generated.
With the introduction of the dense mode, tags won't necessarily occupy
whole bytes of shadow memory if the previously allocated memory wasn't
aligned to 32 bytes - which is the coverage of one shadow byte.

Add an alternative implementation of kasan_check_range() that performs
special checks on the first and last bytes of shadow memory ranges if
the originally allocated memory wasn't aligned to 32 bytes.

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
 include/linux/kasan.h     | 47 +++++++++++++++-------
 mm/kasan/Makefile         |  3 ++
 mm/kasan/dense.c          | 83 +++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h          |  2 +-
 mm/kasan/report.c         |  2 +-
 mm/kasan/report_sw_tags.c | 12 ++----
 mm/kasan/sw_tags.c        |  8 ++++
 7 files changed, 133 insertions(+), 24 deletions(-)
 create mode 100644 mm/kasan/dense.c

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index ea0f5acd875b..5a3e9bec21c2 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -33,6 +33,20 @@ typedef unsigned int __bitwise kasan_vmalloc_flags_t;

 #include

+#ifndef kasan_mem_to_shadow
+static inline void *kasan_mem_to_shadow(const void *addr)
+{
+	void *scaled;
+
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+		scaled = (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT);
+	else
+		scaled = (void *)((long)addr >> KASAN_SHADOW_SCALE_SHIFT);
+
+	return KASAN_SHADOW_OFFSET + scaled;
+}
+#endif
+
 /* Software KASAN implementations use shadow memory. */
 #ifdef CONFIG_KASAN_SW_TAGS_DENSE
@@ -53,6 +67,25 @@ static inline u8 kasan_dense_tag(u8 tag)
 #define KASAN_GRANULE_SIZE	(1UL << KASAN_GRANULE_SHIFT)

+#ifdef CONFIG_KASAN_SW_TAGS_DENSE
+static inline u8 kasan_get_shadow_tag(const void *ptr)
+{
+	u8 shadow_byte = *(u8 *)kasan_mem_to_shadow(ptr);
+	unsigned long addr = (unsigned long)ptr;
+	int shift;
+
+	shift = !!(addr & KASAN_GRANULE_SIZE) * KASAN_TAG_WIDTH;
+	shadow_byte >>= shift;
+
+	return shadow_byte & KASAN_TAG_KERNEL;
+}
+#else
+static inline u8 kasan_get_shadow_tag(const void *addr)
+{
+	return (*(u8 *)kasan_mem_to_shadow(addr));
+}
+#endif
+
 #ifdef CONFIG_KASAN_SW_TAGS
 /* This matches KASAN_TAG_INVALID. */
 #define KASAN_SHADOW_INIT 0xFE
@@ -73,20 +106,6 @@ extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
 int kasan_populate_early_shadow(const void *shadow_start,
				const void *shadow_end);

-#ifndef kasan_mem_to_shadow
-static inline void *kasan_mem_to_shadow(const void *addr)
-{
-	void *scaled;
-
-	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
-		scaled = (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT);
-	else
-		scaled = (void *)((long)addr >> KASAN_SHADOW_SCALE_SHIFT);
-
-	return KASAN_SHADOW_OFFSET + scaled;
-}
-#endif
-
 int kasan_add_zero_shadow(void *start, unsigned long size);
 void kasan_remove_zero_shadow(void *start, unsigned long size);

diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
index b88543e5c0cc..3a460abd4c18 100644
--- a/mm/kasan/Makefile
+++ b/mm/kasan/Makefile
@@ -5,6 +5,7 @@ KCOV_INSTRUMENT := n

 # Disable ftrace to avoid recursion.
 CFLAGS_REMOVE_common.o = $(CC_FLAGS_FTRACE)
+CFLAGS_REMOVE_dense.o = $(CC_FLAGS_FTRACE)
 CFLAGS_REMOVE_generic.o = $(CC_FLAGS_FTRACE)
 CFLAGS_REMOVE_init.o = $(CC_FLAGS_FTRACE)
 CFLAGS_REMOVE_quarantine.o = $(CC_FLAGS_FTRACE)
@@ -24,6 +25,7 @@ CC_FLAGS_KASAN_RUNTIME += -fno-stack-protector
 CC_FLAGS_KASAN_RUNTIME += -DDISABLE_BRANCH_PROFILING

 CFLAGS_common.o := $(CC_FLAGS_KASAN_RUNTIME)
+CFLAGS_dense.o := $(CC_FLAGS_KASAN_RUNTIME)
 CFLAGS_generic.o := $(CC_FLAGS_KASAN_RUNTIME)
 CFLAGS_init.o := $(CC_FLAGS_KASAN_RUNTIME)
 CFLAGS_quarantine.o := $(CC_FLAGS_KASAN_RUNTIME)
@@ -49,6 +51,7 @@ RUSTFLAGS_kasan_test_rust.o := $(RUSTFLAGS_KASAN)
 CFLAGS_kasan_test_module.o := $(CFLAGS_KASAN_TEST)

 obj-y := common.o report.o
+obj-$(CONFIG_KASAN_SW_TAGS_DENSE) += dense.o
 obj-$(CONFIG_KASAN_GENERIC) += init.o generic.o report_generic.o shadow.o quarantine.o
 obj-$(CONFIG_KASAN_HW_TAGS) += hw_tags.o report_hw_tags.o tags.o report_tags.o
 obj-$(CONFIG_KASAN_SW_TAGS) += init.o report_sw_tags.o shadow.o sw_tags.o tags.o report_tags.o

diff --git a/mm/kasan/dense.c b/mm/kasan/dense.c
new file mode 100644
index 000000000000..306bbbfdce29
--- /dev/null
+++ b/mm/kasan/dense.c
@@ -0,0 +1,83 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "kasan.h"
+
+static __always_inline bool kasan_check_range_inline(const void *addr,
+						     size_t size, bool write,
+						     unsigned long ret_ip)
+{
+	u8 *shadow_first, *shadow_last, *shadow, *shadow_first_aligned, *shadow_last_aligned;
+	u64 addr_start_aligned, addr_end_aligned;
+	u8 tag, kasan_granule_offset;
+	size_t aligned_size;
+	void *untagged_addr;
+
+	if (unlikely(size == 0))
+		return true;
+
+	if (unlikely(addr + size < addr))
+		return !kasan_report(addr, size, write, ret_ip);
+
+	tag = get_tag((const void *)addr);
+
+	/*
+	 * Ignore accesses for pointers tagged with the native kernel
+	 * pointer tag to suppress false positives caused by kmap.
+	 *
+	 * Some kernel code was written to account for archs that don't keep
+	 * high memory mapped all the time, but rather map and unmap particular
+	 * pages when needed. Instead of storing a pointer to the kernel memory,
+	 * this code saves the address of the page structure and offset within
+	 * that page for later use. Those pages are then mapped and unmapped
+	 * with kmap/kunmap when necessary and virt_to_page is used to get the
+	 * virtual address of the page. For arm64 (that keeps the high memory
+	 * mapped all the time), kmap is turned into a page_address call.
+	 *
+	 * The issue is that with use of the page_address + virt_to_page
+	 * sequence the top byte value of the original pointer gets lost (gets
+	 * set to KASAN_TAG_KERNEL).
+	 */
+	if (tag == KASAN_TAG_KERNEL)
+		return true;
+
+	untagged_addr = kasan_reset_tag((void *)round_down((u64)addr, KASAN_GRANULE_SIZE));
+	if (unlikely(!addr_has_metadata(untagged_addr)))
+		return !kasan_report(addr, size, write, ret_ip);
+
+	kasan_granule_offset = ((u64)addr & KASAN_GRANULE_MASK);
+	aligned_size = round_up(size + kasan_granule_offset, KASAN_GRANULE_SIZE);
+	shadow_first = kasan_mem_to_shadow(untagged_addr);
+	shadow_last = kasan_mem_to_shadow(untagged_addr + aligned_size);
+	addr_start_aligned = round_up((u64)untagged_addr, KASAN_SHADOW_SCALE_SIZE);
+	addr_end_aligned = round_down((u64)untagged_addr + aligned_size, KASAN_SHADOW_SCALE_SIZE);
+	shadow_first_aligned = kasan_mem_to_shadow((void *)addr_start_aligned);
+	shadow_last_aligned = kasan_mem_to_shadow((void *)addr_end_aligned);
+
+	/* Check the first unaligned tag in shadow memory. */
+	if ((u64)untagged_addr % KASAN_SHADOW_SCALE_SIZE) {
+		if (unlikely((*shadow_first >> KASAN_TAG_WIDTH) != tag))
+			return !kasan_report(addr, size, write, ret_ip);
+	}
+
+	/* Check the middle aligned part in shadow memory. */
+	for (shadow = shadow_first_aligned; shadow < shadow_last_aligned; shadow++) {
+		if (unlikely(*shadow != ((tag << KASAN_TAG_WIDTH) | tag)))
+			return !kasan_report(addr, size, write, ret_ip);
+	}
+
+	/* Check the last unaligned tag in shadow memory. */
+	if (((u64)untagged_addr + aligned_size) % KASAN_SHADOW_SCALE_SIZE) {
+		if (unlikely((*shadow_last & KASAN_TAG_MASK) != tag))
+			return !kasan_report(addr, size, write, ret_ip);
+	}
+
+	return true;
+}
+
+#if IS_ENABLED(CONFIG_KASAN_SW_TAGS_DENSE)
+bool kasan_check_range(const void *addr, size_t size, bool write,
+		       unsigned long ret_ip)
+{
+	return kasan_check_range_inline(addr, size, write, ret_ip);
+}
+#endif

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 0e04c5e2c405..d29bd0e65020 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -183,7 +183,7 @@ static inline bool kasan_requires_meta(void)
 #define META_BYTES_PER_BLOCK 1
 #define META_BLOCKS_PER_ROW 16
 #define META_BYTES_PER_ROW (META_BLOCKS_PER_ROW * META_BYTES_PER_BLOCK)
-#define META_MEM_BYTES_PER_ROW (META_BYTES_PER_ROW * KASAN_GRANULE_SIZE)
+#define META_MEM_BYTES_PER_ROW (META_BYTES_PER_ROW * KASAN_SHADOW_SCALE_SIZE)
 #define META_ROWS_AROUND_ADDR 2

 #define KASAN_STACK_DEPTH 64

diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index c08097715686..ee9e406b0cdb 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -436,7 +436,7 @@ static int meta_pointer_offset(const void *row, const void *addr)
	 * plus 1 byte for space.
	 */
	return 3 + (BITS_PER_LONG / 8) * 2 +
-		(addr - row) / KASAN_GRANULE_SIZE * 3 + 1;
+		(addr - row) / KASAN_SHADOW_SCALE_SIZE * 3 + 1;
 }

 static void print_memory_metadata(const void *addr)

diff --git a/mm/kasan/report_sw_tags.c b/mm/kasan/report_sw_tags.c
index 689e94f9fe3c..1ac5c7a9011d 100644
--- a/mm/kasan/report_sw_tags.c
+++ b/mm/kasan/report_sw_tags.c
@@ -39,7 +39,7 @@ const void *kasan_find_first_bad_addr(const void *addr, size_t size)
	if (!addr_has_metadata(p))
		return p;

-	while (p < end && tag == *(u8 *)kasan_mem_to_shadow(p))
+	while (p < end && tag == kasan_get_shadow_tag(p))
		p += KASAN_GRANULE_SIZE;

	return p;
@@ -48,7 +48,6 @@ const void *kasan_find_first_bad_addr(const void *addr, size_t size)
 size_t kasan_get_alloc_size(void *object, struct kmem_cache *cache)
 {
	size_t size = 0;
-	u8 *shadow;

	/*
	 * Skip the addr_has_metadata check, as this function only operates on
@@ -59,13 +58,11 @@ size_t kasan_get_alloc_size(void *object, struct kmem_cache *cache)
	 * The loop below returns 0 for freed objects, for which KASAN cannot
	 * calculate the allocation size based on the metadata.
	 */
-	shadow = (u8 *)kasan_mem_to_shadow(object);
	while (size < cache->object_size) {
-		if (*shadow != KASAN_TAG_INVALID)
+		if (kasan_get_shadow_tag(object + size) != KASAN_TAG_INVALID)
			size += KASAN_GRANULE_SIZE;
		else
			return size;
-		shadow++;
	}

	return cache->object_size;
@@ -78,9 +75,8 @@ void kasan_metadata_fetch_row(char *buffer, void *row)

 void kasan_print_tags(u8 addr_tag, const void *addr)
 {
-	u8 *shadow = (u8 *)kasan_mem_to_shadow(addr);
-
-	pr_err("Pointer tag: [%02x], memory tag: [%02x]\n", addr_tag, *shadow);
+	pr_err("Pointer tag: [%02x], memory tag: [%02x]\n", addr_tag,
+	       kasan_get_shadow_tag(addr));
 }

 #ifdef CONFIG_KASAN_STACK

diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index 32435d33583a..7a6b8ea9bf78 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -79,6 +79,7 @@ u8 __hwasan_generate_tag(void)
 }
 EXPORT_SYMBOL(__hwasan_generate_tag);

+#if !IS_ENABLED(CONFIG_KASAN_SW_TAGS_DENSE)
 bool kasan_check_range(const void *addr, size_t size, bool write,
		       unsigned long ret_ip)
 {
@@ -127,17 +128,24 @@ bool kasan_check_range(const void *addr, size_t size, bool write,

	return true;
 }
+#endif

 bool kasan_byte_accessible(const void *addr)
 {
	u8 tag = get_tag(addr);
	void *untagged_addr = kasan_reset_tag(addr);
	u8 shadow_byte;
+	int shift;

	if (!addr_has_metadata(untagged_addr))
		return false;

	shadow_byte = READ_ONCE(*(u8 *)kasan_mem_to_shadow(untagged_addr));
+	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS_DENSE)) {
+		shift = !!((u64)addr & BIT(KASAN_TAG_WIDTH)) * KASAN_TAG_WIDTH;
+		shadow_byte = (shadow_byte >> shift) & KASAN_TAG_KERNEL;
+	}
+
	return tag == KASAN_TAG_KERNEL || tag == shadow_byte;
 }