From patchwork Mon Dec 13 21:51:21 2021
X-Patchwork-Submitter: andrey.konovalov@linux.dev
X-Patchwork-Id: 12695963
From: andrey.konovalov@linux.dev
To: Marco Elver, Alexander Potapenko, Andrew Morton
Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev@googlegroups.com,
 linux-mm@kvack.org, Vincenzo Frascino, Catalin Marinas, Will Deacon,
 Mark Rutland, linux-arm-kernel@lists.infradead.org, Peter Collingbourne,
 Evgenii Stepanov, linux-kernel@vger.kernel.org, Andrey Konovalov
Subject: [PATCH mm v3 02/38] kasan, page_alloc: move tag_clear_highpage out of kernel_init_free_pages
Date: Mon, 13 Dec 2021 22:51:21 +0100
Message-Id: <1c0a5b8e9488ea2d34844ed2ab04383873612d1b.1639432170.git.andreyknvl@google.com>

From: Andrey Konovalov

Currently, kernel_init_free_pages() serves two purposes: it either only
zeroes memory or zeroes both memory and memory tags via a different code
path. As this function has only two callers, each using only one code
path, this behaviour is confusing.

Pull the code that zeroes both memory and tags out of
kernel_init_free_pages().

As a result of this change, the code in free_pages_prepare() starts to
look complicated, but this is improved in the following patches. Those
improvements are not integrated into this patch to make the diffs easier
to read.

This patch introduces no functional changes.

Signed-off-by: Andrey Konovalov
Reviewed-by: Alexander Potapenko
---

Changes v2->v3:
- Update patch description.

---
 mm/page_alloc.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f0bcecac19cd..7c2b29483b53 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1281,16 +1281,10 @@ static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
 		PageSkipKASanPoison(page);
 }
 
-static void kernel_init_free_pages(struct page *page, int numpages, bool zero_tags)
+static void kernel_init_free_pages(struct page *page, int numpages)
 {
 	int i;
 
-	if (zero_tags) {
-		for (i = 0; i < numpages; i++)
-			tag_clear_highpage(page + i);
-		return;
-	}
-
 	/* s390's use of memset() could override KASAN redzones. */
 	kasan_disable_current();
 	for (i = 0; i < numpages; i++) {
@@ -1386,7 +1380,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 		bool init = want_init_on_free();
 
 		if (init)
-			kernel_init_free_pages(page, 1 << order, false);
+			kernel_init_free_pages(page, 1 << order);
 		if (!skip_kasan_poison)
 			kasan_poison_pages(page, order, init);
 	}
@@ -2429,9 +2423,17 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 		bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
 
 		kasan_unpoison_pages(page, order, init);
-		if (init)
-			kernel_init_free_pages(page, 1 << order,
-					       gfp_flags & __GFP_ZEROTAGS);
+
+		if (init) {
+			if (gfp_flags & __GFP_ZEROTAGS) {
+				int i;
+
+				for (i = 0; i < 1 << order; i++)
+					tag_clear_highpage(page + i);
+			} else {
+				kernel_init_free_pages(page, 1 << order);
+			}
+		}
 	}
 
 	set_page_owner(page, order, gfp_flags);
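
For readers who want to see the resulting split without tracing through the
hunks above, here is a minimal, self-contained C sketch of the control flow
this patch leaves behind. Only the branching structure mirrors the patch; the
*_stub and *_sketch names are hypothetical stand-ins for the real kernel
helpers (clear_highpage(), tag_clear_highpage(), kernel_init_free_pages(),
post_alloc_hook()), not the actual implementations.

#include <stdbool.h>

struct page { int dummy; };	/* stand-in for the kernel's struct page */

static void clear_highpage_stub(struct page *page) { (void)page; }
static void tag_clear_highpage_stub(struct page *page) { (void)page; }

/* After this patch: the helper only zeroes memory, never tags. */
static void kernel_init_free_pages_sketch(struct page *page, int numpages)
{
	for (int i = 0; i < numpages; i++)
		clear_highpage_stub(page + i);
}

/* Allocation-side init, mirroring the new branch in post_alloc_hook(). */
static void post_alloc_init_sketch(struct page *page, unsigned int order,
				   bool init, bool zero_tags)
{
	if (!init)
		return;

	if (zero_tags) {
		/* __GFP_ZEROTAGS path: zero memory and tags in one pass. */
		for (int i = 0; i < 1 << order; i++)
			tag_clear_highpage_stub(page + i);
	} else {
		/* Plain path: memory zeroing only. */
		kernel_init_free_pages_sketch(page, 1 << order);
	}
}

int main(void)
{
	struct page pages[4];

	/* Example: order-2 allocation (4 pages) with tag clearing requested. */
	post_alloc_init_sketch(pages, 2, true, true);
	return 0;
}

The point of the split is that kernel_init_free_pages() is left with a single
job (zeroing memory), while the one caller that cares about __GFP_ZEROTAGS
now open-codes the tag-clearing loop itself.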