From patchwork Mon Dec 20 21:58:18 2021
X-Patchwork-Submitter: andrey.konovalov@linux.dev
X-Patchwork-Id: 12696908
From: andrey.konovalov@linux.dev
To: Marco Elver, Alexander Potapenko, Andrew Morton
Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev@googlegroups.com,
 linux-mm@kvack.org, Vincenzo Frascino, Catalin Marinas, Will Deacon,
 Mark Rutland, linux-arm-kernel@lists.infradead.org, Peter Collingbourne,
 Evgenii Stepanov, linux-kernel@vger.kernel.org, Andrey Konovalov
Subject: [PATCH mm v4 03/39] kasan, page_alloc: merge kasan_free_pages into free_pages_prepare
Date: Mon, 20 Dec 2021 22:58:18 +0100
Message-Id: <6d95dbfdc95fb5f0c60e96e3cc7bc9499ffbc337.1640036051.git.andreyknvl@google.com>

From: Andrey Konovalov

Currently, the code responsible for initializing and poisoning memory in
free_pages_prepare() is scattered across two locations: kasan_free_pages()
for HW_TAGS KASAN and free_pages_prepare() itself. This is confusing.

This and a few following patches combine the code from these two locations.
Along the way, these patches also simplify the checks being performed to
make them easier to follow.

Replace the only caller of kasan_free_pages() with its implementation. As
kasan_has_integrated_init() is only true when CONFIG_KASAN_HW_TAGS is
enabled, moving the code introduces no functional changes.

This patch is not useful by itself but makes the simplifications in the
following patches easier to follow.

Signed-off-by: Andrey Konovalov
Reviewed-by: Alexander Potapenko
---

Changes v2->v3:
- Update patch description.

---
 include/linux/kasan.h |  8 --------
 mm/kasan/common.c     |  2 +-
 mm/kasan/hw_tags.c    | 11 -----------
 mm/page_alloc.c       |  6 ++++--
 4 files changed, 5 insertions(+), 22 deletions(-)
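For orientation only (an editor's sketch, not part of the patch): the effect
on the call site in free_pages_prepare() is that the want_init_on_free()
check, previously hidden inside kasan_free_pages(), is now computed openly at
the call site, next to the existing non-integrated-init branch. Roughly, with
the helpers named in the hunks below and the surrounding function elided:

	/* Before this patch (inside free_pages_prepare()): */
	if (kasan_has_integrated_init()) {
		if (!skip_kasan_poison)
			kasan_free_pages(page, order); /* computed init internally */
	}

	/* After this patch: */
	if (kasan_has_integrated_init()) {
		bool init = want_init_on_free();

		if (!skip_kasan_poison)
			kasan_poison_pages(page, order, init);
	}

Both versions poison the same pages with the same init value, which is why
this is a pure code move with no functional change.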
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 4a45562d8893..a8bfe9f157c9 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -96,7 +96,6 @@ static inline bool kasan_hw_tags_enabled(void)
 }
 
 void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags);
-void kasan_free_pages(struct page *page, unsigned int order);
 
 #else /* CONFIG_KASAN_HW_TAGS */
 
@@ -117,13 +116,6 @@ static __always_inline void kasan_alloc_pages(struct page *page,
 	BUILD_BUG();
 }
 
-static __always_inline void kasan_free_pages(struct page *page,
-					     unsigned int order)
-{
-	/* Only available for integrated init. */
-	BUILD_BUG();
-}
-
 #endif /* CONFIG_KASAN_HW_TAGS */
 
 static inline bool kasan_has_integrated_init(void)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 92196562687b..a0082fad48b1 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -387,7 +387,7 @@ static inline bool ____kasan_kfree_large(void *ptr, unsigned long ip)
 	}
 
 	/*
-	 * The object will be poisoned by kasan_free_pages() or
+	 * The object will be poisoned by kasan_poison_pages() or
 	 * kasan_slab_free_mempool().
 	 */
 
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 7355cb534e4f..0b8225add2e4 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -213,17 +213,6 @@ void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
 	}
 }
 
-void kasan_free_pages(struct page *page, unsigned int order)
-{
-	/*
-	 * This condition should match the one in free_pages_prepare() in
-	 * page_alloc.c.
-	 */
-	bool init = want_init_on_free();
-
-	kasan_poison_pages(page, order, init);
-}
-
 #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
 
 void kasan_enable_tagging_sync(void)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7c2b29483b53..740fb01a27ed 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1367,15 +1367,17 @@ static __always_inline bool free_pages_prepare(struct page *page,
 
 	/*
 	 * As memory initialization might be integrated into KASAN,
-	 * kasan_free_pages and kernel_init_free_pages must be
+	 * KASAN poisoning and memory initialization code must be
 	 * kept together to avoid discrepancies in behavior.
 	 *
 	 * With hardware tag-based KASAN, memory tags must be set before the
 	 * page becomes unavailable via debug_pagealloc or arch_free_page.
 	 */
 	if (kasan_has_integrated_init()) {
+		bool init = want_init_on_free();
+
 		if (!skip_kasan_poison)
-			kasan_free_pages(page, order);
+			kasan_poison_pages(page, order, init);
 	} else {
 		bool init = want_init_on_free();