From patchwork Tue Feb 4 17:33:44 2025
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
To: luto@kernel.org, xin@zytor.com, kirill.shutemov@linux.intel.com,
    palmer@dabbelt.com, tj@kernel.org, andreyknvl@gmail.com,
    brgerst@gmail.com, ardb@kernel.org, dave.hansen@linux.intel.com,
    jgross@suse.com, will@kernel.org, akpm@linux-foundation.org,
    arnd@arndb.de, corbet@lwn.net, maciej.wieczor-retman@intel.com,
    dvyukov@google.com,
    richard.weiyang@gmail.com, ytcoode@gmail.com, tglx@linutronix.de,
    hpa@zytor.com, seanjc@google.com, paul.walmsley@sifive.com,
    aou@eecs.berkeley.edu, justinstitt@google.com, jason.andryuk@amd.com,
    glider@google.com, ubizjak@gmail.com, jannh@google.com, bhe@redhat.com,
    vincenzo.frascino@arm.com, rafael.j.wysocki@intel.com,
    ndesaulniers@google.com, mingo@redhat.com, catalin.marinas@arm.com,
    junichi.nomura@nec.com, nathan@kernel.org, ryabinin.a.a@gmail.com,
    dennis@kernel.org, bp@alien8.de, kevinloughlin@google.com,
    morbo@google.com, dan.j.williams@intel.com,
    julian.stecklina@cyberus-technology.de, peterz@infradead.org,
    cl@linux.com, kees@kernel.org
Cc: kasan-dev@googlegroups.com, x86@kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, llvm@lists.linux.dev,
    linux-doc@vger.kernel.org
Subject: [PATCH 03/15] kasan: Vmalloc dense tag-based mode support
Date: Tue, 4 Feb 2025 18:33:44 +0100
X-Mailer: git-send-email 2.47.1

To use KASAN with the vmalloc allocator, multiple functions are
implemented that deal with full pages of memory. Many of these functions
are hardcoded to handle byte-aligned shadow memory regions by using
__memset().

With the introduction of the dense mode, tags won't necessarily occupy
whole bytes of shadow memory if the allocated memory isn't aligned to
32 bytes - the coverage of one shadow byte. In that case the region's
first or last tag shares its shadow byte with a neighbouring allocation.

Change the __memset() calls to kasan_poison(). With the dense tag-based
mode enabled, kasan_poison() takes care of any unaligned tags in shadow
memory.
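As an illustration (not part of this patch), here is a minimal userspace
sketch of the dense layout described above. poison_dense() is a
hypothetical stand-in for kasan_poison(), and the 16-byte granule /
32-byte shadow-byte coverage are assumptions taken from that description:

  /*
   * Userspace model of the dense shadow layout: one 4-bit tag per
   * 16-byte granule, two tags packed into each shadow byte, so one
   * shadow byte covers 32 bytes of memory.
   */
  #include <stdint.h>
  #include <stdio.h>

  #define GRANULE_SIZE	16	/* bytes covered by one tag */
  #define SCALE_SIZE	32	/* bytes covered by one shadow byte */

  /* Poison [addr, addr + size) by writing 4-bit tags into shadow[]. */
  static void poison_dense(uint8_t *shadow, uintptr_t addr, size_t size,
			   uint8_t tag)
  {
	for (uintptr_t a = addr; a < addr + size; a += GRANULE_SIZE) {
		size_t byte = a / SCALE_SIZE;

		if (a % SCALE_SIZE)	/* second granule: high nibble */
			shadow[byte] = (shadow[byte] & 0x0f) | (tag << 4);
		else			/* first granule: low nibble */
			shadow[byte] = (shadow[byte] & 0xf0) | (tag & 0x0f);
	}
  }

  int main(void)
  {
	uint8_t shadow[4] = { 0 };

	/*
	 * A 48-byte region at offset 16 is not 32-byte aligned: its
	 * first tag lands in the high nibble of shadow[0], whose low
	 * nibble still belongs to the previous region. A whole-byte
	 * memset() of the shadow would clobber that neighbour's tag.
	 */
	poison_dense(shadow, 16, 48, 0xe);

	for (int i = 0; i < 4; i++)
		printf("shadow[%d] = 0x%02x\n", i, shadow[i]);
	return 0;
  }

Running this prints shadow[0] = 0xe0 and shadow[1] = 0xee: the low
nibble of shadow[0] is preserved, which is exactly what a byte-granular
__memset() cannot do.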
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
 mm/kasan/kasan.h  |  2 +-
 mm/kasan/shadow.c | 14 ++++++--------
 2 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index d29bd0e65020..a56aadd51485 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -135,7 +135,7 @@ static inline bool kasan_requires_meta(void)
 
 #define KASAN_GRANULE_MASK	(KASAN_GRANULE_SIZE - 1)
 
-#define KASAN_MEMORY_PER_SHADOW_PAGE	(KASAN_GRANULE_SIZE << PAGE_SHIFT)
+#define KASAN_MEMORY_PER_SHADOW_PAGE	(KASAN_SHADOW_SCALE_SIZE << PAGE_SHIFT)
 
 #ifdef CONFIG_KASAN_GENERIC
 #define KASAN_PAGE_FREE		0xFF /* freed page */
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 368503f54b87..94f51046e6ae 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -332,7 +332,7 @@ static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
 	if (!page)
 		return -ENOMEM;
 
-	__memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
+	kasan_poison((void *)page, PAGE_SIZE, KASAN_VMALLOC_INVALID, false);
 	pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);
 
 	spin_lock(&init_mm.page_table_lock);
@@ -357,9 +357,6 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
 	if (!is_vmalloc_or_module_addr((void *)addr))
 		return 0;
 
-	shadow_start = (unsigned long)kasan_mem_to_shadow((void *)addr);
-	shadow_end = (unsigned long)kasan_mem_to_shadow((void *)addr + size);
-
 	/*
 	 * User Mode Linux maps enough shadow memory for all of virtual memory
 	 * at boot, so doesn't need to allocate more on vmalloc, just clear it.
@@ -368,12 +365,12 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
 	 * reason.
 	 */
 	if (IS_ENABLED(CONFIG_UML)) {
-		__memset((void *)shadow_start, KASAN_VMALLOC_INVALID, shadow_end - shadow_start);
+		kasan_poison((void *)addr, size, KASAN_VMALLOC_INVALID, false);
 		return 0;
 	}
 
-	shadow_start = PAGE_ALIGN_DOWN(shadow_start);
-	shadow_end = PAGE_ALIGN(shadow_end);
+	shadow_start = PAGE_ALIGN_DOWN((unsigned long)kasan_mem_to_shadow((void *)addr));
+	shadow_end = PAGE_ALIGN((unsigned long)kasan_mem_to_shadow((void *)addr + size));
 
 	ret = apply_to_page_range(&init_mm, shadow_start,
 				  shadow_end - shadow_start,
@@ -546,7 +543,8 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
 	if (shadow_end > shadow_start) {
 		size = shadow_end - shadow_start;
 		if (IS_ENABLED(CONFIG_UML)) {
-			__memset(shadow_start, KASAN_SHADOW_INIT, shadow_end - shadow_start);
+			kasan_poison((void *)region_start, region_end - region_start,
+				     KASAN_VMALLOC_INVALID, false);
 			return;
 		}
 		apply_to_existing_page_range(&init_mm,
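A note on the kasan.h hunk: KASAN_MEMORY_PER_SHADOW_PAGE is now derived
from KASAN_SHADOW_SCALE_SIZE rather than KASAN_GRANULE_SIZE because,
with two tags packed per shadow byte, one shadow page covers twice as
much memory as the granule-based formula suggests. A minimal sketch of
the arithmetic, assuming 4 KiB pages and the 16/32-byte values used
above:

  #include <stdio.h>

  #define PAGE_SHIFT		12	/* assumed 4 KiB pages */
  #define KASAN_GRANULE_SIZE	16UL	/* bytes covered by one tag */
  #define KASAN_SHADOW_SCALE_SIZE	32UL	/* bytes per shadow byte */

  int main(void)
  {
	/* Old formula: assumes one shadow byte per granule. */
	unsigned long per_granule = KASAN_GRANULE_SIZE << PAGE_SHIFT;
	/* New formula: one shadow byte holds two packed tags. */
	unsigned long per_scale = KASAN_SHADOW_SCALE_SIZE << PAGE_SHIFT;

	printf("granule-based: %lu KiB per shadow page\n", per_granule >> 10);
	printf("scale-based:   %lu KiB per shadow page\n", per_scale >> 10);
	return 0;
  }

This prints 64 KiB and 128 KiB respectively: in dense mode a single
4 KiB shadow page maps 128 KiB of address space, so the granule-based
value would undersize every range computed from it.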