From patchwork Tue Feb 4 17:33:44 2025
X-Patchwork-Submitter: Maciej Wieczor-Retman
X-Patchwork-Id: 13959486
From: Maciej Wieczor-Retman
To: luto@kernel.org, xin@zytor.com, kirill.shutemov@linux.intel.com,
    palmer@dabbelt.com, tj@kernel.org, andreyknvl@gmail.com, brgerst@gmail.com,
    ardb@kernel.org, dave.hansen@linux.intel.com, jgross@suse.com,
    will@kernel.org, akpm@linux-foundation.org, arnd@arndb.de, corbet@lwn.net,
    maciej.wieczor-retman@intel.com, dvyukov@google.com,
    richard.weiyang@gmail.com, ytcoode@gmail.com, tglx@linutronix.de,
    hpa@zytor.com, seanjc@google.com, paul.walmsley@sifive.com,
    aou@eecs.berkeley.edu, justinstitt@google.com, jason.andryuk@amd.com,
    glider@google.com, ubizjak@gmail.com, jannh@google.com, bhe@redhat.com,
    vincenzo.frascino@arm.com, rafael.j.wysocki@intel.com,
    ndesaulniers@google.com, mingo@redhat.com, catalin.marinas@arm.com,
    junichi.nomura@nec.com, nathan@kernel.org, ryabinin.a.a@gmail.com,
    dennis@kernel.org, bp@alien8.de, kevinloughlin@google.com,
    morbo@google.com, dan.j.williams@intel.com,
    julian.stecklina@cyberus-technology.de, peterz@infradead.org,
    cl@linux.com, kees@kernel.org
Cc: kasan-dev@googlegroups.com, x86@kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, llvm@lists.linux.dev,
    linux-doc@vger.kernel.org
Subject: [PATCH 03/15] kasan: Vmalloc dense tag-based mode support
Date: Tue, 4 Feb 2025 18:33:44 +0100
Message-ID: 
X-Mailer: git-send-email 2.47.1
In-Reply-To: 
References: 
MIME-Version: 1.0

To use KASAN with the vmalloc allocator, several functions are implemented
that deal with full pages of shadow memory. Many of them are hardcoded to
operate on byte-aligned shadow memory regions by using __memset(). With the
introduction of the dense mode, tags won't necessarily occupy whole bytes of
shadow memory: if the allocated memory isn't aligned to 32 bytes - the amount
of memory covered by one shadow byte - its tags can end up in only half of a
shadow byte.

Change the __memset() calls to kasan_poison(), which, with the dense
tag-based mode enabled, takes care of any such unaligned tags in shadow
memory.

Signed-off-by: Maciej Wieczor-Retman
---
 mm/kasan/kasan.h  |  2 +-
 mm/kasan/shadow.c | 14 ++++++--------
 2 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index d29bd0e65020..a56aadd51485 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -135,7 +135,7 @@ static inline bool kasan_requires_meta(void)
 
 #define KASAN_GRANULE_MASK	(KASAN_GRANULE_SIZE - 1)
 
-#define KASAN_MEMORY_PER_SHADOW_PAGE	(KASAN_GRANULE_SIZE << PAGE_SHIFT)
+#define KASAN_MEMORY_PER_SHADOW_PAGE	(KASAN_SHADOW_SCALE_SIZE << PAGE_SHIFT)
 
 #ifdef CONFIG_KASAN_GENERIC
 #define KASAN_PAGE_FREE		0xFF  /* freed page */
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 368503f54b87..94f51046e6ae 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -332,7 +332,7 @@ static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
 	if (!page)
 		return -ENOMEM;
 
-	__memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
+	kasan_poison((void *)page, PAGE_SIZE, KASAN_VMALLOC_INVALID, false);
 	pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);
 
 	spin_lock(&init_mm.page_table_lock);
@@ -357,9 +357,6 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
 	if (!is_vmalloc_or_module_addr((void *)addr))
 		return 0;
 
-	shadow_start = (unsigned long)kasan_mem_to_shadow((void *)addr);
-	shadow_end = (unsigned long)kasan_mem_to_shadow((void *)addr + size);
-
 	/*
 	 * User Mode Linux maps enough shadow memory for all of virtual memory
 	 * at boot, so doesn't need to allocate more on vmalloc, just clear it.
@@ -368,12 +365,12 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
 	 * reason.
 	 */
 	if (IS_ENABLED(CONFIG_UML)) {
-		__memset((void *)shadow_start, KASAN_VMALLOC_INVALID, shadow_end - shadow_start);
+		kasan_poison((void *)addr, size, KASAN_VMALLOC_INVALID, false);
 		return 0;
 	}
 
-	shadow_start = PAGE_ALIGN_DOWN(shadow_start);
-	shadow_end = PAGE_ALIGN(shadow_end);
+	shadow_start = PAGE_ALIGN_DOWN((unsigned long)kasan_mem_to_shadow((void *)addr));
+	shadow_end = PAGE_ALIGN((unsigned long)kasan_mem_to_shadow((void *)addr + size));
 
 	ret = apply_to_page_range(&init_mm, shadow_start,
 				  shadow_end - shadow_start,
@@ -546,7 +543,8 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
 	if (shadow_end > shadow_start) {
 		size = shadow_end - shadow_start;
 		if (IS_ENABLED(CONFIG_UML)) {
-			__memset(shadow_start, KASAN_SHADOW_INIT, shadow_end - shadow_start);
+			kasan_poison((void *)region_start, region_end - region_start,
+				     KASAN_VMALLOC_INVALID, false);
 			return;
 		}
 		apply_to_existing_page_range(&init_mm,
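
As a side note, here is a minimal userspace sketch of why the byte-wide
__memset() stops being safe once tags are packed densely. The 16-byte
granule, the two-4-bit-tags-per-shadow-byte packing and the dense_poison()
helper below are assumptions made for illustration only, not the series'
exact definitions or the kernel's kasan_poison().

/*
 * Sketch of a dense shadow: one shadow byte covers 32 bytes of memory as
 * two 4-bit tags, one per 16-byte granule.
 */
#include <stdint.h>
#include <stdio.h>

#define GRANULE		16			/* bytes covered by one 4-bit tag */
#define MEM_PER_BYTE	(2 * GRANULE)		/* bytes covered by one shadow byte */

static uint8_t shadow[8];			/* shadow for 256 bytes of "memory" */

/* Set the 4-bit tag of every granule in [addr, addr + size). */
static void dense_poison(unsigned long addr, unsigned long size, uint8_t tag)
{
	for (unsigned long a = addr; a < addr + size; a += GRANULE) {
		uint8_t *byte = &shadow[a / MEM_PER_BYTE];

		if ((a / GRANULE) & 1)		/* odd granule: high nibble */
			*byte = (*byte & 0x0f) | (uint8_t)(tag << 4);
		else				/* even granule: low nibble */
			*byte = (*byte & 0xf0) | (tag & 0x0f);
	}
}

int main(void)
{
	dense_poison(0, 32, 0xA);	/* allocation A: bytes 0..31        */
	dense_poison(32, 48, 0x7);	/* poison bytes 32..79 with tag 0x7 */

	/*
	 * shadow[2] covers bytes 64..95: its low nibble now holds 0x7 for
	 * granule 4, while its high nibble still belongs to the untouched
	 * granule 5.  A byte-wide memset() over the shadow range would have
	 * clobbered that neighbouring nibble, so an unaligned region needs
	 * a read-modify-write of its boundary shadow byte instead.
	 */
	for (int i = 0; i < 3; i++)
		printf("shadow[%d] = 0x%02x\n", i, (unsigned)shadow[i]);
	return 0;
}

With the existing byte-per-granule shadow layout the memset() was fine,
since a whole shadow byte always belonged to a single 16-byte granule.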