From patchwork Fri Jul 26 16:52:46 2024
X-Patchwork-Submitter: Adrian Huang
X-Patchwork-Id: 13743000
From: Adrian Huang
To: Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Uladzislau Rezki
Cc: Andrew Morton, Christoph Hellwig, Baoquan He, kasan-dev@googlegroups.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, Adrian Huang, Jiwei Sun
Subject: [PATCH 1/1] mm/vmalloc: Combine all TLB flush operations of KASAN
	shadow virtual address into one operation
Date: Sat, 27 Jul 2024 00:52:46 +0800
Message-Id: <20240726165246.31326-1-ahuang12@lenovo.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
From: Adrian Huang

When compiling kernel source 'make -j $(nproc)' with the up-and-running
KASAN-enabled kernel on a 256-core machine, the following soft lockup
is shown:

watchdog: BUG: soft lockup - CPU#28 stuck for 22s! [kworker/28:1:1760]
CPU: 28 PID: 1760 Comm: kworker/28:1 Kdump: loaded Not tainted 6.10.0-rc5 #95
Workqueue: events drain_vmap_area_work
RIP: 0010:smp_call_function_many_cond+0x1d8/0xbb0
Code: 38 c8 7c 08 84 c9 0f 85 49 08 00 00 8b 45 08 a8 01 74 2e 48 89 f1 49 89 f7 48 c1 e9 03 41 83 e7 07 4c 01 e9 41 83 c7 03 f3 90 <0f> b6 01 41 38 c7 7c 08 84 c0 0f 85 d4 06 00 00 8b 45 08 a8 01 75
RSP: 0018:ffffc9000cb3fb60 EFLAGS: 00000202
RAX: 0000000000000011 RBX: ffff8883bc4469c0 RCX: ffffed10776e9949
RDX: 0000000000000002 RSI: ffff8883bb74ca48 RDI: ffffffff8434dc50
RBP: ffff8883bb74ca40 R08: ffff888103585dc0 R09: ffff8884533a1800
R10: 0000000000000004 R11: ffffffffffffffff R12: ffffed1077888d39
R13: dffffc0000000000 R14: ffffed1077888d38 R15: 0000000000000003
FS:  0000000000000000(0000) GS:ffff8883bc400000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00005577b5c8d158 CR3: 0000000004850000 CR4: 0000000000350ef0
Call Trace:
 ? watchdog_timer_fn+0x2cd/0x390
 ? __pfx_watchdog_timer_fn+0x10/0x10
 ? __hrtimer_run_queues+0x300/0x6d0
 ? sched_clock_cpu+0x69/0x4e0
 ? __pfx___hrtimer_run_queues+0x10/0x10
 ? srso_return_thunk+0x5/0x5f
 ? ktime_get_update_offsets_now+0x7f/0x2a0
 ? srso_return_thunk+0x5/0x5f
 ? srso_return_thunk+0x5/0x5f
 ? hrtimer_interrupt+0x2ca/0x760
 ? __sysvec_apic_timer_interrupt+0x8c/0x2b0
 ? sysvec_apic_timer_interrupt+0x6a/0x90
 ? asm_sysvec_apic_timer_interrupt+0x16/0x20
 ? smp_call_function_many_cond+0x1d8/0xbb0
 ? __pfx_do_kernel_range_flush+0x10/0x10
 on_each_cpu_cond_mask+0x20/0x40
 flush_tlb_kernel_range+0x19b/0x250
 ? srso_return_thunk+0x5/0x5f
 ? kasan_release_vmalloc+0xa7/0xc0
 purge_vmap_node+0x357/0x820
 ? __pfx_purge_vmap_node+0x10/0x10
 __purge_vmap_area_lazy+0x5b8/0xa10
 drain_vmap_area_work+0x21/0x30
 process_one_work+0x661/0x10b0
 worker_thread+0x844/0x10e0
 ? srso_return_thunk+0x5/0x5f
 ? __kthread_parkme+0x82/0x140
 ? __pfx_worker_thread+0x10/0x10
 kthread+0x2a5/0x370
 ? __pfx_kthread+0x10/0x10
 ret_from_fork+0x30/0x70
 ? __pfx_kthread+0x10/0x10
 ret_from_fork_asm+0x1a/0x30

Debugging Analysis:

1. The following ftrace log shows that the lockup CPU spends too much
   time iterating vmap_nodes and flushing TLB when purging vm_area
   structures. (Some info is trimmed).

     kworker: funcgraph_entry:              |  drain_vmap_area_work() {
     kworker: funcgraph_entry:              |    mutex_lock() {
     kworker: funcgraph_entry:  1.092 us    |      __cond_resched();
     kworker: funcgraph_exit:   3.306 us    |    }
     ... ...
     kworker: funcgraph_entry:              |    flush_tlb_kernel_range() {
     ... ...
     kworker: funcgraph_exit: # 7533.649 us |    }
     ... ...
     kworker: funcgraph_entry:  2.344 us    |    mutex_unlock();
     kworker: funcgraph_exit: $ 23871554 us | }

   The drain_vmap_area_work() spends over 23 seconds.

   There are 2805 flush_tlb_kernel_range() calls in the ftrace log.
     * One is called in __purge_vmap_area_lazy().
     * Others are called by purge_vmap_node->kasan_release_vmalloc.
       purge_vmap_node() iteratively releases kasan vmalloc
       allocations and flushes TLB for each vmap_area.
         - [Rough calculation] Each flush_tlb_kernel_range() runs
           about 7.5ms.
             -- 2804 * 7.5ms = 21.03 seconds.
             -- That's why a soft lockup is triggered.

2. Extending the soft lockup time can work around the issue (For
   example, # echo 60 > /proc/sys/kernel/watchdog_thresh). This
   confirms the above-mentioned speculation: drain_vmap_area_work()
   spends too much time.

If we combine all TLB flush operations of the KASAN shadow virtual
address into one operation in the call path
'purge_vmap_node()->kasan_release_vmalloc()', the running time of
drain_vmap_area_work() can be saved greatly. The idea is from the
flush_tlb_kernel_range() call in __purge_vmap_area_lazy(). And, the
soft lockup won't be triggered.

Here is the test result based on 6.10:

[6.10 wo/ the patch]

1. ftrace latency profiling (record a trace if the latency > 20s).

     echo 20000000 > /sys/kernel/debug/tracing/tracing_thresh
     echo drain_vmap_area_work > /sys/kernel/debug/tracing/set_graph_function
     echo function_graph > /sys/kernel/debug/tracing/current_tracer
     echo 1 > /sys/kernel/debug/tracing/tracing_on

2. Run `make -j $(nproc)` to compile the kernel source.

3. Once the soft lockup is reproduced, check the ftrace log:

     cat /sys/kernel/debug/tracing/trace
       # tracer: function_graph
       #
       # CPU  DURATION                  FUNCTION CALLS
       # |     |   |                     |   |   |   |
         76) $ 50412985 us |    } /* __purge_vmap_area_lazy */
         76) $ 50412997 us |  } /* drain_vmap_area_work */
         76) $ 29165911 us |    } /* __purge_vmap_area_lazy */
         76) $ 29165926 us |  } /* drain_vmap_area_work */
         91) $ 53629423 us |    } /* __purge_vmap_area_lazy */
         91) $ 53629434 us |  } /* drain_vmap_area_work */
         91) $ 28121014 us |    } /* __purge_vmap_area_lazy */
         91) $ 28121026 us |  } /* drain_vmap_area_work */

[6.10 w/ the patch]

1. Repeat steps 1-2 in "[6.10 wo/ the patch]".

2. The soft lockup is not triggered and the ftrace log is empty.

     cat /sys/kernel/debug/tracing/trace
     # tracer: function_graph
     #
     # CPU  DURATION                  FUNCTION CALLS
     # |     |   |                     |   |   |   |

3. Setting 'tracing_thresh' to 10/5 seconds does not get any ftrace log.

4. Setting 'tracing_thresh' to 1 second gets ftrace log.
     cat /sys/kernel/debug/tracing/trace
       # tracer: function_graph
       #
       # CPU  DURATION                  FUNCTION CALLS
       # |     |   |                     |   |   |   |
         23) $ 1074942 us  |    } /* __purge_vmap_area_lazy */
         23) $ 1074950 us  |  } /* drain_vmap_area_work */

   The worst execution time of drain_vmap_area_work() is about 1 second.

Link: https://lore.kernel.org/lkml/ZqFlawuVnOMY2k3E@pc638.lan/
Fixes: 282631cb2447 ("mm: vmalloc: remove global purge_vmap_area_root rb-tree")
Signed-off-by: Adrian Huang
Co-developed-by: Uladzislau Rezki (Sony)
Signed-off-by: Uladzislau Rezki (Sony)
Tested-by: Jiwei Sun
Reviewed-by: Baoquan He
---
 include/linux/kasan.h | 12 +++++++++---
 mm/kasan/shadow.c     | 14 ++++++++++----
 mm/vmalloc.c          | 34 ++++++++++++++++++++++++++--------
 3 files changed, 45 insertions(+), 15 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 70d6a8f6e25d..2adea4fef153 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -29,6 +29,9 @@ typedef unsigned int __bitwise kasan_vmalloc_flags_t;
 #define KASAN_VMALLOC_VM_ALLOC		((__force kasan_vmalloc_flags_t)0x02u)
 #define KASAN_VMALLOC_PROT_NORMAL	((__force kasan_vmalloc_flags_t)0x04u)
 
+#define KASAN_VMALLOC_PAGE_RANGE 0x1 /* Apply existing page range */
+#define KASAN_VMALLOC_TLB_FLUSH  0x2 /* TLB flush */
+
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 
 #include
 
@@ -511,7 +514,8 @@ void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
 int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
 void kasan_release_vmalloc(unsigned long start, unsigned long end,
 			   unsigned long free_region_start,
-			   unsigned long free_region_end);
+			   unsigned long free_region_end,
+			   unsigned long flags);
 
 #else /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
 
@@ -526,7 +530,8 @@ static inline int kasan_populate_vmalloc(unsigned long start,
 static inline void kasan_release_vmalloc(unsigned long start,
 					 unsigned long end,
 					 unsigned long free_region_start,
-					 unsigned long free_region_end) { }
+					 unsigned long free_region_end,
+					 unsigned long flags) { }
 
 #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
 
@@ -561,7 +566,8 @@ static inline int kasan_populate_vmalloc(unsigned long start,
 static inline void kasan_release_vmalloc(unsigned long start,
 					 unsigned long end,
 					 unsigned long free_region_start,
-					 unsigned long free_region_end) { }
+					 unsigned long free_region_end,
+					 unsigned long flags) { }
 
 static inline void *kasan_unpoison_vmalloc(const void *start,
 					   unsigned long size,
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index d6210ca48dda..88d1c9dcb507 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -489,7 +489,8 @@ static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
  */
 void kasan_release_vmalloc(unsigned long start, unsigned long end,
 			   unsigned long free_region_start,
-			   unsigned long free_region_end)
+			   unsigned long free_region_end,
+			   unsigned long flags)
 {
 	void *shadow_start, *shadow_end;
 	unsigned long region_start, region_end;
@@ -522,12 +523,17 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
 			__memset(shadow_start, KASAN_SHADOW_INIT,
 				 shadow_end - shadow_start);
 			return;
 		}
-		apply_to_existing_page_range(&init_mm,
+
+
+		if (flags & KASAN_VMALLOC_PAGE_RANGE)
+			apply_to_existing_page_range(&init_mm,
 					(unsigned long)shadow_start,
 					size, kasan_depopulate_vmalloc_pte,
 					NULL);
-		flush_tlb_kernel_range((unsigned long)shadow_start,
-				       (unsigned long)shadow_end);
+
+		if (flags & KASAN_VMALLOC_TLB_FLUSH)
+			flush_tlb_kernel_range((unsigned long)shadow_start,
+					       (unsigned long)shadow_end);
 	}
 }
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e34ea860153f..bc21d821d506 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2186,6 +2186,25 @@ decay_va_pool_node(struct vmap_node *vn, bool full_decay)
 	reclaim_list_global(&decay_list);
 }
 
+static void
+kasan_release_vmalloc_node(struct vmap_node *vn)
+{
+	struct vmap_area *va;
+	unsigned long start, end;
+
+	start = list_first_entry(&vn->purge_list, struct vmap_area, list)->va_start;
+	end = list_last_entry(&vn->purge_list, struct vmap_area, list)->va_end;
+
+	list_for_each_entry(va, &vn->purge_list, list) {
+		if (is_vmalloc_or_module_addr((void *) va->va_start))
+			kasan_release_vmalloc(va->va_start, va->va_end,
+				va->va_start, va->va_end,
+				KASAN_VMALLOC_PAGE_RANGE);
+	}
+
+	kasan_release_vmalloc(start, end, start, end, KASAN_VMALLOC_TLB_FLUSH);
+}
+
 static void purge_vmap_node(struct work_struct *work)
 {
 	struct vmap_node *vn = container_of(work,
@@ -2193,20 +2212,17 @@ static void purge_vmap_node(struct work_struct *work)
 	struct vmap_area *va, *n_va;
 	LIST_HEAD(local_list);
 
+	if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
+		kasan_release_vmalloc_node(vn);
+
 	vn->nr_purged = 0;
 
 	list_for_each_entry_safe(va, n_va, &vn->purge_list, list) {
 		unsigned long nr = (va->va_end - va->va_start) >> PAGE_SHIFT;
-		unsigned long orig_start = va->va_start;
-		unsigned long orig_end = va->va_end;
 		unsigned int vn_id = decode_vn_id(va->flags);
 
 		list_del_init(&va->list);
-		if (is_vmalloc_or_module_addr((void *)orig_start))
-			kasan_release_vmalloc(orig_start, orig_end,
-					      va->va_start, va->va_end);
-
 		atomic_long_sub(nr, &vmap_lazy_nr);
 		vn->nr_purged++;
 
@@ -4726,7 +4742,8 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 					    &free_vmap_area_list);
 		if (va)
 			kasan_release_vmalloc(orig_start, orig_end,
-				va->va_start, va->va_end);
+				va->va_start, va->va_end,
+				KASAN_VMALLOC_PAGE_RANGE | KASAN_VMALLOC_TLB_FLUSH);
 		vas[area] = NULL;
 	}
 
@@ -4776,7 +4793,8 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 					    &free_vmap_area_list);
 		if (va)
 			kasan_release_vmalloc(orig_start, orig_end,
-				va->va_start, va->va_end);
+				va->va_start, va->va_end,
+				KASAN_VMALLOC_PAGE_RANGE | KASAN_VMALLOC_TLB_FLUSH);
 		vas[area] = NULL;
 		kfree(vms[area]);
 	}