From patchwork Mon May 8 07:11:49 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13234120
From: Kefeng Wang
To: Andrew Morton, Mike Rapoport
CC: David Hildenbrand, Oscar Salvador, "Rafael J. Wysocki", Pavel Machek,
 Len Brown, Luis Chamberlain, Kees Cook, Iurii Zaikin, Kefeng Wang
Subject: [PATCH 01/12] mm: page_alloc: move mirrored_kernelcore into mm_init.c
Date: Mon, 8 May 2023 15:11:49 +0800
Message-ID: <20230508071200.123962-2-wangkefeng.wang@huawei.com>
In-Reply-To: <20230508071200.123962-1-wangkefeng.wang@huawei.com>
References: <20230508071200.123962-1-wangkefeng.wang@huawei.com>

Since commit 9420f89db2dd ("mm: move most of core MM initialization to
mm/mm_init.c"), mirrored_kernelcore should be moved into mm_init.c as
well, as most of the related code is already there.

Signed-off-by: Kefeng Wang
Reviewed-by: Mike Rapoport (IBM)
---
 mm/mm_init.c    | 2 ++
 mm/page_alloc.c | 3 ---
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 7f7f9c677854..da162b7a044c 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -259,6 +259,8 @@ static int __init cmdline_parse_core(char *p, unsigned long *core,
         return 0;
 }
 
+bool mirrored_kernelcore __initdata_memblock;
+
 /*
  * kernelcore=size sets the amount of memory for use for allocations that
  * cannot be reclaimed or migrated.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index af9c995d3c1e..d1086aeca8f2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -23,7 +23,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -374,8 +373,6 @@ int user_min_free_kbytes = -1;
 int watermark_boost_factor __read_mostly = 15000;
 int watermark_scale_factor = 10;
 
-bool mirrored_kernelcore __initdata_memblock;
-
 /* movable_zone is the "real" zone pages in ZONE_MOVABLE are taken from */
 int movable_zone;
 EXPORT_SYMBOL(movable_zone);

From patchwork Mon May 8 07:11:50 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13234118
From: Kefeng Wang
To: Andrew Morton, Mike Rapoport
CC: David Hildenbrand, Oscar Salvador, "Rafael J. Wysocki", Pavel Machek,
 Len Brown, Luis Chamberlain, Kees Cook, Iurii Zaikin, Kefeng Wang
Subject: [PATCH 02/12] mm: page_alloc: move init_on_alloc/free() into mm_init.c
Date: Mon, 8 May 2023 15:11:50 +0800
Message-ID: <20230508071200.123962-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20230508071200.123962-1-wangkefeng.wang@huawei.com>
References: <20230508071200.123962-1-wangkefeng.wang@huawei.com>

Since commit f2fc4b44ec2b ("mm: move init_mem_debugging_and_hardening()
to mm/mm_init.c"), the init_on_alloc and init_on_free definitions are
better moved there too.

Signed-off-by: Kefeng Wang
Reviewed-by: Mike Rapoport (IBM)
---
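Context for reviewers (a sketch, not part of this diff): the two static
keys are consulted on the allocation and free fast paths through helpers
in include/linux/mm.h; roughly, and assuming those helpers are unchanged:

static inline bool want_init_on_alloc(gfp_t flags)
{
        /* The key only sets the default; __GFP_ZERO still forces it. */
        if (static_branch_maybe(CONFIG_INIT_ON_ALLOC_DEFAULT_ON,
                                &init_on_alloc))
                return true;
        return flags & __GFP_ZERO;
}

static inline bool want_init_on_free(void)
{
        return static_branch_maybe(CONFIG_INIT_ON_FREE_DEFAULT_ON,
                                   &init_on_free);
}
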
 mm/mm_init.c    | 6 ++++++
 mm/page_alloc.c | 5 -----
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index da162b7a044c..15201887f8e0 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2543,6 +2543,12 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
         __free_pages_core(page, order);
 }
 
+DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_ALLOC_DEFAULT_ON, init_on_alloc);
+EXPORT_SYMBOL(init_on_alloc);
+
+DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_FREE_DEFAULT_ON, init_on_free);
+EXPORT_SYMBOL(init_on_free);
+
 static bool _init_on_alloc_enabled_early __read_mostly
                                 = IS_ENABLED(CONFIG_INIT_ON_ALLOC_DEFAULT_ON);
 static int __init early_init_on_alloc(char *buf)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d1086aeca8f2..4f094ba7c8fb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -233,11 +233,6 @@ unsigned long totalcma_pages __read_mostly;
 int percpu_pagelist_high_fraction;
 gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK;
 
-DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_ALLOC_DEFAULT_ON, init_on_alloc);
-EXPORT_SYMBOL(init_on_alloc);
-
-DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_FREE_DEFAULT_ON, init_on_free);
-EXPORT_SYMBOL(init_on_free);
 
 /*
  * A cached value of the page's pageblock's migratetype, used when the page is

From patchwork Mon May 8 07:11:51 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13234124
From: Kefeng Wang
To: Andrew Morton, Mike Rapoport
CC: David Hildenbrand, Oscar Salvador, "Rafael J. Wysocki", Pavel Machek,
 Len Brown, Luis Chamberlain, Kees Cook, Iurii Zaikin, Kefeng Wang
Subject: [PATCH 03/12] mm: page_alloc: move set_zone_contiguous() into mm_init.c
Date: Mon, 8 May 2023 15:11:51 +0800
Message-ID: <20230508071200.123962-4-wangkefeng.wang@huawei.com>
In-Reply-To: <20230508071200.123962-1-wangkefeng.wang@huawei.com>
References: <20230508071200.123962-1-wangkefeng.wang@huawei.com>

set_zone_contiguous() is only used in mm init/hotplug, and
clear_zone_contiguous() is only used in hotplug; move them from
page_alloc.c to more appropriate files.

Signed-off-by: Kefeng Wang
---
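Context for reviewers (a sketch, not part of this diff): compaction reaches
__pageblock_pfn_to_page() through the fast-path wrapper whose tail is
visible in the mm/internal.h hunk below; reconstructed from memory:

static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
                                unsigned long end_pfn, struct zone *zone)
{
        /*
         * Once set_zone_contiguous() has proven the zone has no holes,
         * the first/last-page checks can be skipped entirely.
         */
        if (zone->contiguous)
                return pfn_to_page(start_pfn);

        return __pageblock_pfn_to_page(start_pfn, end_pfn, zone);
}
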
 include/linux/memory_hotplug.h |  3 --
 mm/internal.h                  |  7 +++
 mm/mm_init.c                   | 74 +++++++++++++++++++++++++++++++
 mm/page_alloc.c                | 79 ----------------------------------
 4 files changed, 81 insertions(+), 82 deletions(-)

diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 9fcbf5706595..04bc286eed42 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -326,9 +326,6 @@ static inline int remove_memory(u64 start, u64 size)
 static inline void __remove_memory(u64 start, u64 size) {}
 #endif /* CONFIG_MEMORY_HOTREMOVE */
 
-extern void set_zone_contiguous(struct zone *zone);
-extern void clear_zone_contiguous(struct zone *zone);
-
 #ifdef CONFIG_MEMORY_HOTPLUG
 extern void __ref free_area_init_core_hotplug(struct pglist_data *pgdat);
 extern int __add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags);
diff --git a/mm/internal.h b/mm/internal.h
index e28442c0858a..9482862b28cc 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -371,6 +371,13 @@ static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
         return __pageblock_pfn_to_page(start_pfn, end_pfn, zone);
 }
 
+void set_zone_contiguous(struct zone *zone);
+
+static inline void clear_zone_contiguous(struct zone *zone)
+{
+        zone->contiguous = false;
+}
+
 extern int __isolate_free_page(struct page *page, unsigned int order);
 extern void __putback_isolated_page(struct page *page, unsigned int order,
                                     int mt);
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 15201887f8e0..1f30b9e16577 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2330,6 +2330,80 @@ void __init init_cma_reserved_pageblock(struct page *page)
 }
 #endif
 
+/*
+ * Check that the whole (or subset of) a pageblock given by the interval of
+ * [start_pfn, end_pfn) is valid and within the same zone, before scanning it
+ * with the migration of free compaction scanner.
+ *
+ * Return struct page pointer of start_pfn, or NULL if checks were not passed.
+ *
+ * It's possible on some configurations to have a setup like node0 node1 node0
+ * i.e. it's possible that all pages within a zones range of pages do not
+ * belong to a single zone. We assume that a border between node0 and node1
+ * can occur within a single pageblock, but not a node0 node1 node0
+ * interleaving within a single pageblock. It is therefore sufficient to check
+ * the first and last page of a pageblock and avoid checking each individual
+ * page in a pageblock.
+ *
+ * Note: the function may return non-NULL struct page even for a page block
+ * which contains a memory hole (i.e. there is no physical memory for a subset
+ * of the pfn range). For example, if the pageblock order is MAX_ORDER, which
+ * will fall into 2 sub-sections, and the end pfn of the pageblock may be hole
+ * even though the start pfn is online and valid. This should be safe most of
+ * the time because struct pages are still initialized via init_unavailable_range()
+ * and pfn walkers shouldn't touch any physical memory range for which they do
+ * not recognize any specific metadata in struct pages.
+ */
+struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
+                                     unsigned long end_pfn, struct zone *zone)
+{
+        struct page *start_page;
+        struct page *end_page;
+
+        /* end_pfn is one past the range we are checking */
+        end_pfn--;
+
+        if (!pfn_valid(end_pfn))
+                return NULL;
+
+        start_page = pfn_to_online_page(start_pfn);
+        if (!start_page)
+                return NULL;
+
+        if (page_zone(start_page) != zone)
+                return NULL;
+
+        end_page = pfn_to_page(end_pfn);
+
+        /* This gives a shorter code than deriving page_zone(end_page) */
+        if (page_zone_id(start_page) != page_zone_id(end_page))
+                return NULL;
+
+        return start_page;
+}
+
+void set_zone_contiguous(struct zone *zone)
+{
+        unsigned long block_start_pfn = zone->zone_start_pfn;
+        unsigned long block_end_pfn;
+
+        block_end_pfn = pageblock_end_pfn(block_start_pfn);
+        for (; block_start_pfn < zone_end_pfn(zone);
+                        block_start_pfn = block_end_pfn,
+                        block_end_pfn += pageblock_nr_pages) {
+
+                block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
+
+                if (!__pageblock_pfn_to_page(block_start_pfn,
+                                             block_end_pfn, zone))
+                        return;
+                cond_resched();
+        }
+
+        /* We confirm that there is no hole */
+        zone->contiguous = true;
+}
+
 void __init page_alloc_init_late(void)
 {
         struct zone *zone;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4f094ba7c8fb..fe7c1ee5becd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1480,85 +1480,6 @@ void __free_pages_core(struct page *page, unsigned int order)
         __free_pages_ok(page, order, FPI_TO_TAIL);
 }
 
-/*
- * Check that the whole (or subset of) a pageblock given by the interval of
- * [start_pfn, end_pfn) is valid and within the same zone, before scanning it
- * with the migration of free compaction scanner.
- *
- * Return struct page pointer of start_pfn, or NULL if checks were not passed.
- *
- * It's possible on some configurations to have a setup like node0 node1 node0
- * i.e. it's possible that all pages within a zones range of pages do not
- * belong to a single zone. We assume that a border between node0 and node1
- * can occur within a single pageblock, but not a node0 node1 node0
- * interleaving within a single pageblock. It is therefore sufficient to check
- * the first and last page of a pageblock and avoid checking each individual
- * page in a pageblock.
- *
- * Note: the function may return non-NULL struct page even for a page block
- * which contains a memory hole (i.e. there is no physical memory for a subset
- * of the pfn range). For example, if the pageblock order is MAX_ORDER, which
- * will fall into 2 sub-sections, and the end pfn of the pageblock may be hole
- * even though the start pfn is online and valid. This should be safe most of
- * the time because struct pages are still initialized via init_unavailable_range()
- * and pfn walkers shouldn't touch any physical memory range for which they do
- * not recognize any specific metadata in struct pages.
- */
-struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
-                                     unsigned long end_pfn, struct zone *zone)
-{
-        struct page *start_page;
-        struct page *end_page;
-
-        /* end_pfn is one past the range we are checking */
-        end_pfn--;
-
-        if (!pfn_valid(end_pfn))
-                return NULL;
-
-        start_page = pfn_to_online_page(start_pfn);
-        if (!start_page)
-                return NULL;
-
-        if (page_zone(start_page) != zone)
-                return NULL;
-
-        end_page = pfn_to_page(end_pfn);
-
-        /* This gives a shorter code than deriving page_zone(end_page) */
-        if (page_zone_id(start_page) != page_zone_id(end_page))
-                return NULL;
-
-        return start_page;
-}
-
-void set_zone_contiguous(struct zone *zone)
-{
-        unsigned long block_start_pfn = zone->zone_start_pfn;
-        unsigned long block_end_pfn;
-
-        block_end_pfn = pageblock_end_pfn(block_start_pfn);
-        for (; block_start_pfn < zone_end_pfn(zone);
-                        block_start_pfn = block_end_pfn,
-                        block_end_pfn += pageblock_nr_pages) {
-
-                block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
-
-                if (!__pageblock_pfn_to_page(block_start_pfn,
-                                             block_end_pfn, zone))
-                        return;
-                cond_resched();
-        }
-
-        /* We confirm that there is no hole */
-        zone->contiguous = true;
-}
-
-void clear_zone_contiguous(struct zone *zone)
-{
-        zone->contiguous = false;
-}
-
 /*
  * The order of subdivision here is critical for the IO subsystem.
  * Please do not alter this order without good reasons and regression

From patchwork Mon May 8 07:11:52 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13234129
From: Kefeng Wang
To: Andrew Morton, Mike Rapoport
CC: David Hildenbrand, Oscar Salvador, "Rafael J. Wysocki", Pavel Machek,
 Len Brown, Luis Chamberlain, Kees Cook, Iurii Zaikin, Kefeng Wang
Subject: [PATCH 04/12] mm: page_alloc: collect mem statistic into show_mem.c
Date: Mon, 8 May 2023 15:11:52 +0800
Message-ID: <20230508071200.123962-5-wangkefeng.wang@huawei.com>
In-Reply-To: <20230508071200.123962-1-wangkefeng.wang@huawei.com>
References: <20230508071200.123962-1-wangkefeng.wang@huawei.com>

Let's move show_mem.c from lib to mm, as it belongs to the memory
subsystem. Also split the memory-statistics related functions out of
page_alloc.c into show_mem.c, and clean up some unneeded includes.
There is no functional change.

Signed-off-by: Kefeng Wang
---
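Context for reviewers (illustration only, not part of this patch): the moved
si_meminfo() is what fills in the memory fields of the sysinfo(2) syscall,
so nothing should change from the userspace point of view. A minimal check,
assuming a Linux host:

#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
        struct sysinfo si;

        if (sysinfo(&si) != 0) {
                perror("sysinfo");
                return 1;
        }

        /* These fields are filled by the kernel's si_meminfo(). */
        printf("total RAM: %lu MiB\n",
               si.totalram * si.mem_unit / (1024 * 1024));
        printf("free RAM:  %lu MiB\n",
               si.freeram * si.mem_unit / (1024 * 1024));
        return 0;
}
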
 lib/Makefile    |   2 +-
 lib/show_mem.c  |  37 -----
 mm/Makefile     |   2 +-
 mm/page_alloc.c | 402 ---------------------------------------------
 mm/show_mem.c   | 429 ++++++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 431 insertions(+), 441 deletions(-)
 delete mode 100644 lib/show_mem.c
 create mode 100644 mm/show_mem.c

diff --git a/lib/Makefile b/lib/Makefile
index 876fcdeae34e..38f23f352736 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -30,7 +30,7 @@ endif
 lib-y := ctype.o string.o vsprintf.o cmdline.o \
          rbtree.o radix-tree.o timerqueue.o xarray.o \
          maple_tree.o idr.o extable.o irq_regs.o argv_split.o \
-         flex_proportions.o ratelimit.o show_mem.o \
+         flex_proportions.o ratelimit.o \
          is_single_threaded.o plist.o decompress.o kobject_uevent.o \
          earlycpio.o seq_buf.o siphash.o dec_and_lock.o \
          nmi_backtrace.o win_minmax.o memcat_p.o \
diff --git a/lib/show_mem.c b/lib/show_mem.c
deleted file mode 100644
index 1485c87be935..000000000000
--- a/lib/show_mem.c
+++ /dev/null
@@ -1,37 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Generic show_mem() implementation
- *
- * Copyright (C) 2008 Johannes Weiner
- */
-
-#include
-#include
-
-void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
-{
-        unsigned long total = 0, reserved = 0, highmem = 0;
-        struct zone *zone;
-
-        printk("Mem-Info:\n");
-        __show_free_areas(filter, nodemask, max_zone_idx);
-
-        for_each_populated_zone(zone) {
-
-                total += zone->present_pages;
-                reserved += zone->present_pages - zone_managed_pages(zone);
-
-                if (is_highmem(zone))
-                        highmem += zone->present_pages;
-        }
-
-        printk("%lu pages RAM\n", total);
-        printk("%lu pages HighMem/MovableOnly\n", highmem);
-        printk("%lu pages reserved\n", reserved);
-#ifdef CONFIG_CMA
-        printk("%lu pages cma reserved\n", totalcma_pages);
-#endif
-#ifdef CONFIG_MEMORY_FAILURE
-        printk("%lu pages hwpoisoned\n", atomic_long_read(&num_poisoned_pages));
-#endif
-}
diff --git a/mm/Makefile b/mm/Makefile
index e29afc890cde..5262ce5baa28 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -51,7 +51,7 @@ obj-y := filemap.o mempool.o oom_kill.o fadvise.o \
          readahead.o swap.o truncate.o vmscan.o shmem.o \
          util.o mmzone.o vmstat.o backing-dev.o \
          mm_init.o percpu.o slab_common.o \
-         compaction.o \
+         compaction.o show_mem.o\
          interval_tree.o list_lru.o workingset.o \
          debug.o gup.o mmap_lock.o $(mmu-y)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fe7c1ee5becd..9a85238f1140 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -18,10 +18,7 @@
 #include
 #include
 #include
-#include
-#include
 #include
-#include
 #include
 #include
 #include
@@ -30,8 +27,6 @@
 #include
 #include
 #include
-#include
-#include
 #include
 #include
 #include
@@ -40,19 +35,10 @@
 #include
 #include
 #include
-#include
 #include
-#include
-#include
-#include
-#include
 #include
 #include
-#include
 #include
-#include
-#include
-#include
 #include
 #include
 #include
@@ -60,12 +46,9 @@
 #include
 #include
 #include
-#include
-#include
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -73,13 +56,10 @@
 #include
 #include
 #include
-#include
-#include
 #include
 #include "internal.h"
 #include "shuffle.h"
 #include "page_reporting.h"
-#include "swap.h"
 
 /*
  * Free Page Internal flags: for internal, non-pcp variants of free_pages().
  */
 typedef int __bitwise fpi_t;
@@ -226,11 +206,6 @@ nodemask_t node_states[NR_NODE_STATES] __read_mostly = {
 };
 EXPORT_SYMBOL(node_states);
 
-atomic_long_t _totalram_pages __read_mostly;
-EXPORT_SYMBOL(_totalram_pages);
-unsigned long totalreserve_pages __read_mostly;
-unsigned long totalcma_pages __read_mostly;
-
 int percpu_pagelist_high_fraction;
 gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK;
@@ -5050,383 +5025,6 @@ unsigned long nr_free_buffer_pages(void)
 }
 EXPORT_SYMBOL_GPL(nr_free_buffer_pages);
 
-static inline void show_node(struct zone *zone)
-{
-        if (IS_ENABLED(CONFIG_NUMA))
-                printk("Node %d ", zone_to_nid(zone));
-}
-
-long si_mem_available(void)
-{
-        long available;
-        unsigned long pagecache;
-        unsigned long wmark_low = 0;
-        unsigned long pages[NR_LRU_LISTS];
-        unsigned long reclaimable;
-        struct zone *zone;
-        int lru;
-
-        for (lru = LRU_BASE; lru < NR_LRU_LISTS; lru++)
-                pages[lru] = global_node_page_state(NR_LRU_BASE + lru);
-
-        for_each_zone(zone)
-                wmark_low += low_wmark_pages(zone);
-
-        /*
-         * Estimate the amount of memory available for userspace allocations,
-         * without causing swapping or OOM.
-         */
-        available = global_zone_page_state(NR_FREE_PAGES) - totalreserve_pages;
-
-        /*
-         * Not all the page cache can be freed, otherwise the system will
-         * start swapping or thrashing. Assume at least half of the page
-         * cache, or the low watermark worth of cache, needs to stay.
-         */
-        pagecache = pages[LRU_ACTIVE_FILE] + pages[LRU_INACTIVE_FILE];
-        pagecache -= min(pagecache / 2, wmark_low);
-        available += pagecache;
-
-        /*
-         * Part of the reclaimable slab and other kernel memory consists of
-         * items that are in use, and cannot be freed. Cap this estimate at the
-         * low watermark.
-         */
-        reclaimable = global_node_page_state_pages(NR_SLAB_RECLAIMABLE_B) +
-                global_node_page_state(NR_KERNEL_MISC_RECLAIMABLE);
-        available += reclaimable - min(reclaimable / 2, wmark_low);
-
-        if (available < 0)
-                available = 0;
-        return available;
-}
-EXPORT_SYMBOL_GPL(si_mem_available);
-
-void si_meminfo(struct sysinfo *val)
-{
-        val->totalram = totalram_pages();
-        val->sharedram = global_node_page_state(NR_SHMEM);
-        val->freeram = global_zone_page_state(NR_FREE_PAGES);
-        val->bufferram = nr_blockdev_pages();
-        val->totalhigh = totalhigh_pages();
-        val->freehigh = nr_free_highpages();
-        val->mem_unit = PAGE_SIZE;
-}
-
-EXPORT_SYMBOL(si_meminfo);
-
-#ifdef CONFIG_NUMA
-void si_meminfo_node(struct sysinfo *val, int nid)
-{
-        int zone_type;          /* needs to be signed */
-        unsigned long managed_pages = 0;
-        unsigned long managed_highpages = 0;
-        unsigned long free_highpages = 0;
-        pg_data_t *pgdat = NODE_DATA(nid);
-
-        for (zone_type = 0; zone_type < MAX_NR_ZONES; zone_type++)
-                managed_pages += zone_managed_pages(&pgdat->node_zones[zone_type]);
-        val->totalram = managed_pages;
-        val->sharedram = node_page_state(pgdat, NR_SHMEM);
-        val->freeram = sum_zone_node_page_state(nid, NR_FREE_PAGES);
-#ifdef CONFIG_HIGHMEM
-        for (zone_type = 0; zone_type < MAX_NR_ZONES; zone_type++) {
-                struct zone *zone = &pgdat->node_zones[zone_type];
-
-                if (is_highmem(zone)) {
-                        managed_highpages += zone_managed_pages(zone);
-                        free_highpages += zone_page_state(zone, NR_FREE_PAGES);
-                }
-        }
-        val->totalhigh = managed_highpages;
-        val->freehigh = free_highpages;
-#else
-        val->totalhigh = managed_highpages;
-        val->freehigh = free_highpages;
-#endif
-        val->mem_unit = PAGE_SIZE;
-}
-#endif
-
-/*
- * Determine whether the node should be displayed or not, depending on whether
- * SHOW_MEM_FILTER_NODES was passed to show_free_areas().
- */
-static bool show_mem_node_skip(unsigned int flags, int nid, nodemask_t *nodemask)
-{
-        if (!(flags & SHOW_MEM_FILTER_NODES))
-                return false;
-
-        /*
-         * no node mask - aka implicit memory numa policy. Do not bother with
-         * the synchronization - read_mems_allowed_begin - because we do not
-         * have to be precise here.
-         */
-        if (!nodemask)
-                nodemask = &cpuset_current_mems_allowed;
-
-        return !node_isset(nid, *nodemask);
-}
-
-static void show_migration_types(unsigned char type)
-{
-        static const char types[MIGRATE_TYPES] = {
-                [MIGRATE_UNMOVABLE]     = 'U',
-                [MIGRATE_MOVABLE]       = 'M',
-                [MIGRATE_RECLAIMABLE]   = 'E',
-                [MIGRATE_HIGHATOMIC]    = 'H',
-#ifdef CONFIG_CMA
-                [MIGRATE_CMA]           = 'C',
-#endif
-#ifdef CONFIG_MEMORY_ISOLATION
-                [MIGRATE_ISOLATE]       = 'I',
-#endif
-        };
-        char tmp[MIGRATE_TYPES + 1];
-        char *p = tmp;
-        int i;
-
-        for (i = 0; i < MIGRATE_TYPES; i++) {
-                if (type & (1 << i))
-                        *p++ = types[i];
-        }
-
-        *p = '\0';
-        printk(KERN_CONT "(%s) ", tmp);
-}
-
-static bool node_has_managed_zones(pg_data_t *pgdat, int max_zone_idx)
-{
-        int zone_idx;
-        for (zone_idx = 0; zone_idx <= max_zone_idx; zone_idx++)
-                if (zone_managed_pages(pgdat->node_zones + zone_idx))
-                        return true;
-        return false;
-}
-
-/*
- * Show free area list (used inside shift_scroll-lock stuff)
- * We also calculate the percentage fragmentation. We do this by counting the
- * memory on each free list with the exception of the first item on the list.
- *
- * Bits in @filter:
- * SHOW_MEM_FILTER_NODES: suppress nodes that are not allowed by current's
- *   cpuset.
- */
-void __show_free_areas(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
-{
-        unsigned long free_pcp = 0;
-        int cpu, nid;
-        struct zone *zone;
-        pg_data_t *pgdat;
-
-        for_each_populated_zone(zone) {
-                if (zone_idx(zone) > max_zone_idx)
-                        continue;
-                if (show_mem_node_skip(filter, zone_to_nid(zone), nodemask))
-                        continue;
-
-                for_each_online_cpu(cpu)
-                        free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
-        }
-
-        printk("active_anon:%lu inactive_anon:%lu isolated_anon:%lu\n"
-                " active_file:%lu inactive_file:%lu isolated_file:%lu\n"
-                " unevictable:%lu dirty:%lu writeback:%lu\n"
-                " slab_reclaimable:%lu slab_unreclaimable:%lu\n"
-                " mapped:%lu shmem:%lu pagetables:%lu\n"
-                " sec_pagetables:%lu bounce:%lu\n"
-                " kernel_misc_reclaimable:%lu\n"
-                " free:%lu free_pcp:%lu free_cma:%lu\n",
-                global_node_page_state(NR_ACTIVE_ANON),
-                global_node_page_state(NR_INACTIVE_ANON),
-                global_node_page_state(NR_ISOLATED_ANON),
-                global_node_page_state(NR_ACTIVE_FILE),
-                global_node_page_state(NR_INACTIVE_FILE),
-                global_node_page_state(NR_ISOLATED_FILE),
-                global_node_page_state(NR_UNEVICTABLE),
-                global_node_page_state(NR_FILE_DIRTY),
-                global_node_page_state(NR_WRITEBACK),
-                global_node_page_state_pages(NR_SLAB_RECLAIMABLE_B),
-                global_node_page_state_pages(NR_SLAB_UNRECLAIMABLE_B),
-                global_node_page_state(NR_FILE_MAPPED),
-                global_node_page_state(NR_SHMEM),
-                global_node_page_state(NR_PAGETABLE),
-                global_node_page_state(NR_SECONDARY_PAGETABLE),
-                global_zone_page_state(NR_BOUNCE),
-                global_node_page_state(NR_KERNEL_MISC_RECLAIMABLE),
-                global_zone_page_state(NR_FREE_PAGES),
-                free_pcp,
-                global_zone_page_state(NR_FREE_CMA_PAGES));
-
-        for_each_online_pgdat(pgdat) {
-                if (show_mem_node_skip(filter, pgdat->node_id, nodemask))
-                        continue;
-                if (!node_has_managed_zones(pgdat, max_zone_idx))
-                        continue;
-
-                printk("Node %d"
-                        " active_anon:%lukB"
-                        " inactive_anon:%lukB"
-                        " active_file:%lukB"
-                        " inactive_file:%lukB"
-                        " unevictable:%lukB"
-                        " isolated(anon):%lukB"
-                        " isolated(file):%lukB"
-                        " mapped:%lukB"
-                        " dirty:%lukB"
-                        " writeback:%lukB"
-                        " shmem:%lukB"
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-                        " shmem_thp: %lukB"
-                        " shmem_pmdmapped: %lukB"
-                        " anon_thp: %lukB"
-#endif
-                        " writeback_tmp:%lukB"
-                        " kernel_stack:%lukB"
-#ifdef CONFIG_SHADOW_CALL_STACK
-                        " shadow_call_stack:%lukB"
-#endif
-                        " pagetables:%lukB"
-                        " sec_pagetables:%lukB"
-                        " all_unreclaimable? %s"
-                        "\n",
-                        pgdat->node_id,
-                        K(node_page_state(pgdat, NR_ACTIVE_ANON)),
-                        K(node_page_state(pgdat, NR_INACTIVE_ANON)),
-                        K(node_page_state(pgdat, NR_ACTIVE_FILE)),
-                        K(node_page_state(pgdat, NR_INACTIVE_FILE)),
-                        K(node_page_state(pgdat, NR_UNEVICTABLE)),
-                        K(node_page_state(pgdat, NR_ISOLATED_ANON)),
-                        K(node_page_state(pgdat, NR_ISOLATED_FILE)),
-                        K(node_page_state(pgdat, NR_FILE_MAPPED)),
-                        K(node_page_state(pgdat, NR_FILE_DIRTY)),
-                        K(node_page_state(pgdat, NR_WRITEBACK)),
-                        K(node_page_state(pgdat, NR_SHMEM)),
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-                        K(node_page_state(pgdat, NR_SHMEM_THPS)),
-                        K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED)),
-                        K(node_page_state(pgdat, NR_ANON_THPS)),
-#endif
-                        K(node_page_state(pgdat, NR_WRITEBACK_TEMP)),
-                        node_page_state(pgdat, NR_KERNEL_STACK_KB),
-#ifdef CONFIG_SHADOW_CALL_STACK
-                        node_page_state(pgdat, NR_KERNEL_SCS_KB),
-#endif
-                        K(node_page_state(pgdat, NR_PAGETABLE)),
-                        K(node_page_state(pgdat, NR_SECONDARY_PAGETABLE)),
-                        pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES ?
-                                "yes" : "no");
-        }
-
-        for_each_populated_zone(zone) {
-                int i;
-
-                if (zone_idx(zone) > max_zone_idx)
-                        continue;
-                if (show_mem_node_skip(filter, zone_to_nid(zone), nodemask))
-                        continue;
-
-                free_pcp = 0;
-                for_each_online_cpu(cpu)
-                        free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
-
-                show_node(zone);
-                printk(KERN_CONT
-                        "%s"
-                        " free:%lukB"
-                        " boost:%lukB"
-                        " min:%lukB"
-                        " low:%lukB"
-                        " high:%lukB"
-                        " reserved_highatomic:%luKB"
-                        " active_anon:%lukB"
-                        " inactive_anon:%lukB"
-                        " active_file:%lukB"
-                        " inactive_file:%lukB"
-                        " unevictable:%lukB"
-                        " writepending:%lukB"
-                        " present:%lukB"
-                        " managed:%lukB"
-                        " mlocked:%lukB"
-                        " bounce:%lukB"
-                        " free_pcp:%lukB"
-                        " local_pcp:%ukB"
-                        " free_cma:%lukB"
-                        "\n",
-                        zone->name,
-                        K(zone_page_state(zone, NR_FREE_PAGES)),
-                        K(zone->watermark_boost),
-                        K(min_wmark_pages(zone)),
-                        K(low_wmark_pages(zone)),
-                        K(high_wmark_pages(zone)),
-                        K(zone->nr_reserved_highatomic),
-                        K(zone_page_state(zone, NR_ZONE_ACTIVE_ANON)),
-                        K(zone_page_state(zone, NR_ZONE_INACTIVE_ANON)),
-                        K(zone_page_state(zone, NR_ZONE_ACTIVE_FILE)),
-                        K(zone_page_state(zone, NR_ZONE_INACTIVE_FILE)),
-                        K(zone_page_state(zone, NR_ZONE_UNEVICTABLE)),
-                        K(zone_page_state(zone, NR_ZONE_WRITE_PENDING)),
-                        K(zone->present_pages),
-                        K(zone_managed_pages(zone)),
-                        K(zone_page_state(zone, NR_MLOCK)),
-                        K(zone_page_state(zone, NR_BOUNCE)),
-                        K(free_pcp),
-                        K(this_cpu_read(zone->per_cpu_pageset->count)),
-                        K(zone_page_state(zone, NR_FREE_CMA_PAGES)));
-                printk("lowmem_reserve[]:");
-                for (i = 0; i < MAX_NR_ZONES; i++)
-                        printk(KERN_CONT " %ld", zone->lowmem_reserve[i]);
-                printk(KERN_CONT "\n");
-        }
-
-        for_each_populated_zone(zone) {
-                unsigned int order;
-                unsigned long nr[MAX_ORDER + 1], flags, total = 0;
-                unsigned char types[MAX_ORDER + 1];
-
-                if (zone_idx(zone) > max_zone_idx)
-                        continue;
-                if (show_mem_node_skip(filter, zone_to_nid(zone), nodemask))
-                        continue;
-                show_node(zone);
-                printk(KERN_CONT "%s: ", zone->name);
-
-                spin_lock_irqsave(&zone->lock, flags);
-                for (order = 0; order <= MAX_ORDER; order++) {
-                        struct free_area *area = &zone->free_area[order];
-                        int type;
-
-                        nr[order] = area->nr_free;
-                        total += nr[order] << order;
-
-                        types[order] = 0;
-                        for (type = 0; type < MIGRATE_TYPES; type++) {
-                                if (!free_area_empty(area, type))
-                                        types[order] |= 1 << type;
-                        }
-                }
-                spin_unlock_irqrestore(&zone->lock, flags);
-                for (order = 0; order <= MAX_ORDER; order++) {
-                        printk(KERN_CONT "%lu*%lukB ",
-                               nr[order], K(1UL) << order);
-                        if (nr[order])
-                                show_migration_types(types[order]);
-                }
-                printk(KERN_CONT "= %lukB\n", K(total));
-        }
-
-        for_each_online_node(nid) {
-                if (show_mem_node_skip(filter, nid, nodemask))
-                        continue;
-                hugetlb_show_meminfo_node(nid);
-        }
-
-        printk("%ld total pagecache pages\n", global_node_page_state(NR_FILE_PAGES));
-
-        show_swap_cache_info();
-}
-
 static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
 {
         zoneref->zone = zone;
diff --git a/mm/show_mem.c b/mm/show_mem.c
new file mode 100644
index 000000000000..9f1a5d8b03d1
--- /dev/null
+++ b/mm/show_mem.c
@@ -0,0 +1,429 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Generic show_mem() implementation
+ *
+ * Copyright (C) 2008 Johannes Weiner
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "internal.h"
+#include "swap.h"
+
+atomic_long_t _totalram_pages __read_mostly;
+EXPORT_SYMBOL(_totalram_pages);
+unsigned long totalreserve_pages __read_mostly;
+unsigned long totalcma_pages __read_mostly;
+
+void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
+{
+        unsigned long total = 0, reserved = 0, highmem = 0;
+        struct zone *zone;
+
+        printk("Mem-Info:\n");
+        __show_free_areas(filter, nodemask, max_zone_idx);
+
+        for_each_populated_zone(zone) {
+
+                total += zone->present_pages;
+                reserved += zone->present_pages - zone_managed_pages(zone);
+
+                if (is_highmem(zone))
+                        highmem += zone->present_pages;
+        }
+
+        printk("%lu pages RAM\n", total);
+        printk("%lu pages HighMem/MovableOnly\n", highmem);
+        printk("%lu pages reserved\n", reserved);
+#ifdef CONFIG_CMA
+        printk("%lu pages cma reserved\n", totalcma_pages);
+#endif
+#ifdef CONFIG_MEMORY_FAILURE
+        printk("%lu pages hwpoisoned\n", atomic_long_read(&num_poisoned_pages));
+#endif
+}
+
+static inline void show_node(struct zone *zone)
+{
+        if (IS_ENABLED(CONFIG_NUMA))
+                printk("Node %d ", zone_to_nid(zone));
+}
+
+long si_mem_available(void)
+{
+        long available;
+        unsigned long pagecache;
+        unsigned long wmark_low = 0;
+        unsigned long pages[NR_LRU_LISTS];
+        unsigned long reclaimable;
+        struct zone *zone;
+        int lru;
+
+        for (lru = LRU_BASE; lru < NR_LRU_LISTS; lru++)
+                pages[lru] = global_node_page_state(NR_LRU_BASE + lru);
+
+        for_each_zone(zone)
+                wmark_low += low_wmark_pages(zone);
+
+        /*
+         * Estimate the amount of memory available for userspace allocations,
+         * without causing swapping or OOM.
+         */
+        available = global_zone_page_state(NR_FREE_PAGES) - totalreserve_pages;
+
+        /*
+         * Not all the page cache can be freed, otherwise the system will
+         * start swapping or thrashing. Assume at least half of the page
+         * cache, or the low watermark worth of cache, needs to stay.
+         */
+        pagecache = pages[LRU_ACTIVE_FILE] + pages[LRU_INACTIVE_FILE];
+        pagecache -= min(pagecache / 2, wmark_low);
+        available += pagecache;
+
+        /*
+         * Part of the reclaimable slab and other kernel memory consists of
+         * items that are in use, and cannot be freed. Cap this estimate at the
+         * low watermark.
+         */
+        reclaimable = global_node_page_state_pages(NR_SLAB_RECLAIMABLE_B) +
+                global_node_page_state(NR_KERNEL_MISC_RECLAIMABLE);
+        available += reclaimable - min(reclaimable / 2, wmark_low);
+
+        if (available < 0)
+                available = 0;
+        return available;
+}
+EXPORT_SYMBOL_GPL(si_mem_available);
+
+void si_meminfo(struct sysinfo *val)
+{
+        val->totalram = totalram_pages();
+        val->sharedram = global_node_page_state(NR_SHMEM);
+        val->freeram = global_zone_page_state(NR_FREE_PAGES);
+        val->bufferram = nr_blockdev_pages();
+        val->totalhigh = totalhigh_pages();
+        val->freehigh = nr_free_highpages();
+        val->mem_unit = PAGE_SIZE;
+}
+
+EXPORT_SYMBOL(si_meminfo);
+
+#ifdef CONFIG_NUMA
+void si_meminfo_node(struct sysinfo *val, int nid)
+{
+        int zone_type;          /* needs to be signed */
+        unsigned long managed_pages = 0;
+        unsigned long managed_highpages = 0;
+        unsigned long free_highpages = 0;
+        pg_data_t *pgdat = NODE_DATA(nid);
+
+        for (zone_type = 0; zone_type < MAX_NR_ZONES; zone_type++)
+                managed_pages += zone_managed_pages(&pgdat->node_zones[zone_type]);
+        val->totalram = managed_pages;
+        val->sharedram = node_page_state(pgdat, NR_SHMEM);
+        val->freeram = sum_zone_node_page_state(nid, NR_FREE_PAGES);
+#ifdef CONFIG_HIGHMEM
+        for (zone_type = 0; zone_type < MAX_NR_ZONES; zone_type++) {
+                struct zone *zone = &pgdat->node_zones[zone_type];
+
+                if (is_highmem(zone)) {
+                        managed_highpages += zone_managed_pages(zone);
+                        free_highpages += zone_page_state(zone, NR_FREE_PAGES);
+                }
+        }
+        val->totalhigh = managed_highpages;
+        val->freehigh = free_highpages;
+#else
+        val->totalhigh = managed_highpages;
+        val->freehigh = free_highpages;
+#endif
+        val->mem_unit = PAGE_SIZE;
+}
+#endif
+
+/*
+ * Determine whether the node should be displayed or not, depending on whether
+ * SHOW_MEM_FILTER_NODES was passed to show_free_areas().
+ */
+static bool show_mem_node_skip(unsigned int flags, int nid, nodemask_t *nodemask)
+{
+        if (!(flags & SHOW_MEM_FILTER_NODES))
+                return false;
+
+        /*
+         * no node mask - aka implicit memory numa policy. Do not bother with
+         * the synchronization - read_mems_allowed_begin - because we do not
+         * have to be precise here.
+         */
+        if (!nodemask)
+                nodemask = &cpuset_current_mems_allowed;
+
+        return !node_isset(nid, *nodemask);
+}
+
+static void show_migration_types(unsigned char type)
+{
+        static const char types[MIGRATE_TYPES] = {
+                [MIGRATE_UNMOVABLE]     = 'U',
+                [MIGRATE_MOVABLE]       = 'M',
+                [MIGRATE_RECLAIMABLE]   = 'E',
+                [MIGRATE_HIGHATOMIC]    = 'H',
+#ifdef CONFIG_CMA
+                [MIGRATE_CMA]           = 'C',
+#endif
+#ifdef CONFIG_MEMORY_ISOLATION
+                [MIGRATE_ISOLATE]       = 'I',
+#endif
+        };
+        char tmp[MIGRATE_TYPES + 1];
+        char *p = tmp;
+        int i;
+
+        for (i = 0; i < MIGRATE_TYPES; i++) {
+                if (type & (1 << i))
+                        *p++ = types[i];
+        }
+
+        *p = '\0';
+        printk(KERN_CONT "(%s) ", tmp);
+}
+
+static bool node_has_managed_zones(pg_data_t *pgdat, int max_zone_idx)
+{
+        int zone_idx;
+        for (zone_idx = 0; zone_idx <= max_zone_idx; zone_idx++)
+                if (zone_managed_pages(pgdat->node_zones + zone_idx))
+                        return true;
+        return false;
+}
+
+/*
+ * Show free area list (used inside shift_scroll-lock stuff)
+ * We also calculate the percentage fragmentation. We do this by counting the
+ * memory on each free list with the exception of the first item on the list.
+ *
+ * Bits in @filter:
+ * SHOW_MEM_FILTER_NODES: suppress nodes that are not allowed by current's
+ *   cpuset.
+ */
+void __show_free_areas(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
+{
+        unsigned long free_pcp = 0;
+        int cpu, nid;
+        struct zone *zone;
+        pg_data_t *pgdat;
+
+        for_each_populated_zone(zone) {
+                if (zone_idx(zone) > max_zone_idx)
+                        continue;
+                if (show_mem_node_skip(filter, zone_to_nid(zone), nodemask))
+                        continue;
+
+                for_each_online_cpu(cpu)
+                        free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
+        }
+
+        printk("active_anon:%lu inactive_anon:%lu isolated_anon:%lu\n"
+                " active_file:%lu inactive_file:%lu isolated_file:%lu\n"
+                " unevictable:%lu dirty:%lu writeback:%lu\n"
+                " slab_reclaimable:%lu slab_unreclaimable:%lu\n"
+                " mapped:%lu shmem:%lu pagetables:%lu\n"
+                " sec_pagetables:%lu bounce:%lu\n"
+                " kernel_misc_reclaimable:%lu\n"
+                " free:%lu free_pcp:%lu free_cma:%lu\n",
+                global_node_page_state(NR_ACTIVE_ANON),
+                global_node_page_state(NR_INACTIVE_ANON),
+                global_node_page_state(NR_ISOLATED_ANON),
+                global_node_page_state(NR_ACTIVE_FILE),
+                global_node_page_state(NR_INACTIVE_FILE),
+                global_node_page_state(NR_ISOLATED_FILE),
+                global_node_page_state(NR_UNEVICTABLE),
+                global_node_page_state(NR_FILE_DIRTY),
+                global_node_page_state(NR_WRITEBACK),
+                global_node_page_state_pages(NR_SLAB_RECLAIMABLE_B),
+                global_node_page_state_pages(NR_SLAB_UNRECLAIMABLE_B),
+                global_node_page_state(NR_FILE_MAPPED),
+                global_node_page_state(NR_SHMEM),
+                global_node_page_state(NR_PAGETABLE),
+                global_node_page_state(NR_SECONDARY_PAGETABLE),
+                global_zone_page_state(NR_BOUNCE),
+                global_node_page_state(NR_KERNEL_MISC_RECLAIMABLE),
+                global_zone_page_state(NR_FREE_PAGES),
+                free_pcp,
+                global_zone_page_state(NR_FREE_CMA_PAGES));
+
+        for_each_online_pgdat(pgdat) {
+                if (show_mem_node_skip(filter, pgdat->node_id, nodemask))
+                        continue;
+                if (!node_has_managed_zones(pgdat, max_zone_idx))
+                        continue;
+
+                printk("Node %d"
+                        " active_anon:%lukB"
+                        " inactive_anon:%lukB"
+                        " active_file:%lukB"
+                        " inactive_file:%lukB"
+                        " unevictable:%lukB"
+                        " isolated(anon):%lukB"
+                        " isolated(file):%lukB"
+                        " mapped:%lukB"
+                        " dirty:%lukB"
+                        " writeback:%lukB"
+                        " shmem:%lukB"
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+                        " shmem_thp: %lukB"
+                        " shmem_pmdmapped: %lukB"
+                        " anon_thp: %lukB"
+#endif
+                        " writeback_tmp:%lukB"
+                        " kernel_stack:%lukB"
+#ifdef CONFIG_SHADOW_CALL_STACK
+                        " shadow_call_stack:%lukB"
+#endif
+                        " pagetables:%lukB"
+                        " sec_pagetables:%lukB"
+                        " all_unreclaimable? %s"
+                        "\n",
+                        pgdat->node_id,
+                        K(node_page_state(pgdat, NR_ACTIVE_ANON)),
+                        K(node_page_state(pgdat, NR_INACTIVE_ANON)),
+                        K(node_page_state(pgdat, NR_ACTIVE_FILE)),
+                        K(node_page_state(pgdat, NR_INACTIVE_FILE)),
+                        K(node_page_state(pgdat, NR_UNEVICTABLE)),
+                        K(node_page_state(pgdat, NR_ISOLATED_ANON)),
+                        K(node_page_state(pgdat, NR_ISOLATED_FILE)),
+                        K(node_page_state(pgdat, NR_FILE_MAPPED)),
+                        K(node_page_state(pgdat, NR_FILE_DIRTY)),
+                        K(node_page_state(pgdat, NR_WRITEBACK)),
+                        K(node_page_state(pgdat, NR_SHMEM)),
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+                        K(node_page_state(pgdat, NR_SHMEM_THPS)),
+                        K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED)),
+                        K(node_page_state(pgdat, NR_ANON_THPS)),
+#endif
+                        K(node_page_state(pgdat, NR_WRITEBACK_TEMP)),
+                        node_page_state(pgdat, NR_KERNEL_STACK_KB),
+#ifdef CONFIG_SHADOW_CALL_STACK
+                        node_page_state(pgdat, NR_KERNEL_SCS_KB),
+#endif
+                        K(node_page_state(pgdat, NR_PAGETABLE)),
+                        K(node_page_state(pgdat, NR_SECONDARY_PAGETABLE)),
+                        pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES ?
+                                "yes" : "no");
+        }
+
+        for_each_populated_zone(zone) {
+                int i;
+
+                if (zone_idx(zone) > max_zone_idx)
+                        continue;
+                if (show_mem_node_skip(filter, zone_to_nid(zone), nodemask))
+                        continue;
+
+                free_pcp = 0;
+                for_each_online_cpu(cpu)
+                        free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
+
+                show_node(zone);
+                printk(KERN_CONT
+                        "%s"
+                        " free:%lukB"
+                        " boost:%lukB"
+                        " min:%lukB"
+                        " low:%lukB"
+                        " high:%lukB"
+                        " reserved_highatomic:%luKB"
+                        " active_anon:%lukB"
+                        " inactive_anon:%lukB"
+                        " active_file:%lukB"
+                        " inactive_file:%lukB"
+                        " unevictable:%lukB"
+                        " writepending:%lukB"
+                        " present:%lukB"
+                        " managed:%lukB"
+                        " mlocked:%lukB"
+                        " bounce:%lukB"
+                        " free_pcp:%lukB"
+                        " local_pcp:%ukB"
+                        " free_cma:%lukB"
+                        "\n",
+                        zone->name,
+                        K(zone_page_state(zone, NR_FREE_PAGES)),
+                        K(zone->watermark_boost),
+                        K(min_wmark_pages(zone)),
+                        K(low_wmark_pages(zone)),
+                        K(high_wmark_pages(zone)),
+                        K(zone->nr_reserved_highatomic),
+                        K(zone_page_state(zone, NR_ZONE_ACTIVE_ANON)),
+                        K(zone_page_state(zone, NR_ZONE_INACTIVE_ANON)),
+                        K(zone_page_state(zone, NR_ZONE_ACTIVE_FILE)),
+                        K(zone_page_state(zone, NR_ZONE_INACTIVE_FILE)),
+                        K(zone_page_state(zone, NR_ZONE_UNEVICTABLE)),
+                        K(zone_page_state(zone, NR_ZONE_WRITE_PENDING)),
+                        K(zone->present_pages),
+                        K(zone_managed_pages(zone)),
+                        K(zone_page_state(zone, NR_MLOCK)),
+                        K(zone_page_state(zone, NR_BOUNCE)),
+                        K(free_pcp),
+                        K(this_cpu_read(zone->per_cpu_pageset->count)),
+                        K(zone_page_state(zone, NR_FREE_CMA_PAGES)));
+                printk("lowmem_reserve[]:");
+                for (i = 0; i < MAX_NR_ZONES; i++)
+                        printk(KERN_CONT " %ld", zone->lowmem_reserve[i]);
+                printk(KERN_CONT "\n");
+        }
+
+        for_each_populated_zone(zone) {
+                unsigned int order;
+                unsigned long nr[MAX_ORDER + 1], flags, total = 0;
+                unsigned char types[MAX_ORDER + 1];
+
+                if (zone_idx(zone) > max_zone_idx)
+                        continue;
+                if (show_mem_node_skip(filter, zone_to_nid(zone), nodemask))
+                        continue;
+                show_node(zone);
+                printk(KERN_CONT "%s: ", zone->name);
+
+                spin_lock_irqsave(&zone->lock, flags);
+                for (order = 0; order <= MAX_ORDER; order++) {
+                        struct free_area *area = &zone->free_area[order];
+                        int type;
+
+                        nr[order] = area->nr_free;
+                        total += nr[order] << order;
+
+                        types[order] = 0;
+                        for (type = 0; type < MIGRATE_TYPES; type++) {
+                                if (!free_area_empty(area, type))
+                                        types[order] |= 1 << type;
+                        }
+                }
+                spin_unlock_irqrestore(&zone->lock, flags);
+                for (order = 0; order <= MAX_ORDER; order++) {
+                        printk(KERN_CONT "%lu*%lukB ",
+                               nr[order], K(1UL) << order);
+                        if (nr[order])
+                                show_migration_types(types[order]);
+                }
+                printk(KERN_CONT "= %lukB\n", K(total));
+        }
+
+        for_each_online_node(nid) {
+                if (show_mem_node_skip(filter, nid, nodemask))
+                        continue;
+                hugetlb_show_meminfo_node(nid);
+        }
+
+        printk("%ld total pagecache pages\n", global_node_page_state(NR_FILE_PAGES));
+
+        show_swap_cache_info();
+}
From patchwork Mon May 8 07:11:53 2023
From: Kefeng Wang
Subject: [PATCH 05/12] mm: page_alloc: squash page_is_consistent()
Date: Mon, 8 May 2023 15:11:53 +0800
Message-ID: <20230508071200.123962-6-wangkefeng.wang@huawei.com>

Squash page_is_consistent() into bad_range(), as there is only one caller.

Signed-off-by: Kefeng Wang
Reviewed-by: Mike Rapoport (IBM)
---
 mm/page_alloc.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9a85238f1140..348dcbaca757 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -517,13 +517,6 @@ static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
 	return ret;
 }
 
-static int page_is_consistent(struct zone *zone, struct page *page)
-{
-	if (zone != page_zone(page))
-		return 0;
-
-	return 1;
-}
 /*
  * Temporary debugging check for pages not lying within a given zone.
 */
@@ -531,7 +524,7 @@ static int __maybe_unused bad_range(struct zone *zone, struct page *page)
 {
 	if (page_outside_zone_boundaries(zone, page))
 		return 1;
-	if (!page_is_consistent(zone, page))
+	if (zone != page_zone(page))
 		return 1;
 
 	return 0;
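The __maybe_unused annotation above exists because bad_range() is only exercised from CONFIG_DEBUG_VM sanity checks; a typical call site in page_alloc.c looks roughly like:

	VM_BUG_ON_PAGE(bad_range(zone, page), page);

With CONFIG_DEBUG_VM disabled these checks compile away, and without the annotation the function would be flagged as defined but unused.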
From patchwork Mon May 8 07:11:54 2023
From: Kefeng Wang
Subject: [PATCH 06/12] mm: page_alloc: remove alloc_contig_dump_pages() stub
Date: Mon, 8 May 2023 15:11:54 +0800
Message-ID: <20230508071200.123962-7-wangkefeng.wang@huawei.com>

DEFINE_DYNAMIC_DEBUG_METADATA and DYNAMIC_DEBUG_BRANCH already have stub
definitions when the dynamic debug feature is disabled, so the separate
alloc_contig_dump_pages() stub is unnecessary; remove it.

Signed-off-by: Kefeng Wang
Reviewed-by: Mike Rapoport (IBM)
---
 mm/page_alloc.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 348dcbaca757..bc453edbad21 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6161,8 +6161,6 @@ int percpu_pagelist_high_fraction_sysctl_handler(struct ctl_table *table,
 }
 
 #ifdef CONFIG_CONTIG_ALLOC
-#if defined(CONFIG_DYNAMIC_DEBUG) || \
-	(defined(CONFIG_DYNAMIC_DEBUG_CORE) && defined(DYNAMIC_DEBUG_MODULE))
 /* Usage: See admin-guide/dynamic-debug-howto.rst */
 static void alloc_contig_dump_pages(struct list_head *page_list)
 {
@@ -6176,11 +6174,6 @@ static void alloc_contig_dump_pages(struct list_head *page_list)
 		dump_page(page, "migration failure");
 	}
 }
-#else
-static inline void alloc_contig_dump_pages(struct list_head *page_list)
-{
-}
-#endif
 
 /*
  * [start, end) must belong to a single zone.
 */
int __alloc_contig_migrate_range(struct compact_control *cc,
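For background on why the stub could go (a sketch, not part of the series): with dynamic debug disabled, include/linux/dynamic_debug.h is expected to define DEFINE_DYNAMIC_DEBUG_METADATA() as a no-op and DYNAMIC_DEBUG_BRANCH() as a constant false, so a function written in the guarded style of alloc_contig_dump_pages() already compiles down to an empty body on its own:

	static void alloc_contig_dump_pages(struct list_head *page_list)
	{
		DEFINE_DYNAMIC_DEBUG_METADATA(descriptor, "migrate failure");

		if (DYNAMIC_DEBUG_BRANCH(descriptor)) { /* constant false when disabled */
			struct page *page;

			list_for_each_entry(page, page_list, lru)
				dump_page(page, "migration failure");
		}
	}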
Wysocki" , Pavel Machek , Len Brown , Luis Chamberlain , Kees Cook , Iurii Zaikin , , , , Kefeng Wang Subject: [PATCH 07/12] mm: page_alloc: split out FAIL_PAGE_ALLOC Date: Mon, 8 May 2023 15:11:55 +0800 Message-ID: <20230508071200.123962-8-wangkefeng.wang@huawei.com> X-Mailer: git-send-email 2.35.3 In-Reply-To: <20230508071200.123962-1-wangkefeng.wang@huawei.com> References: <20230508071200.123962-1-wangkefeng.wang@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.113.25] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To dggpemm500001.china.huawei.com (7.185.36.107) X-CFilter-Loop: Reflected X-Stat-Signature: dmq9fs43wtqwg747p7pkned6agdmid1x X-Rspam-User: X-Rspamd-Queue-Id: B81DA1C0007 X-Rspamd-Server: rspam07 X-HE-Tag: 1683528893-980875 X-HE-Meta: U2FsdGVkX18Ok6A7HYRGId9MAnxX1si/ma7fg5e93GGm3vrpxK2/F+grflK45a3f+t1vSUNP5ern2fyPEB60vE28NPwa9p0fsVtzK7zi2lbYnhmV6ZZ+7ntgZdGdJTyK7XbM3N0PgA17sbZn4P4SrvxguFnnB1yg5meoShocjsD7Rsc96dzQelE0YN2ngoAGGHLnbmbwVSxhMQl4IrDvyQDn7+yF0f83qj3eiF+5Po/5mqMa+DhOvFhuXNmAN/EAKk01t7muA+PnFLWpjiHOzSG35TOZTTV3lrlr+y49MqHrQauunTpTF9XcoaIu9sN4QNeZ3JstMZdcbgDEBXbglXy87udeBdGtMjX0HISrKN2AZOwC4xvLHjh0SLpyWEkmdFif0YYqGBXyiDbuDGeL7p3VhH2oq8m6gHgvRutXSXnBZxJ0KLJgbESGr2vLGWHaYAhN3CuorMk7Rp0hMR1AFgDQiolwKihoquXd/NklRajIQW8n9fXe3Mfp+qaY2z+I6UVt10k1MUDT0fvtJaYBbGzusyHf7a0rH3MHzMRKp6C2x5u5OEs2zpXpoZDO06kpkWyteNoSfzj70zGV9K60ef5nu3PPxCFUDssk0D5+00aiy/gZMvx00f1bNteKwD9IUcrdE/cdzywtnsk8gYyfqjBxKOVUcyemdb3lFAu+RqgrppxuN1FS1ADd7NI134Z5yvUrA6Ioda0IE7Pyeh+JfMjnP4U/KQ3Zz2GSIGsYJC6mU2TYdCR79aScThL7W6uH3FfkKhAVhj+i4XDkGw/4Dp5n930YA1P+nUfyNDD+G0G+oAbvVU9sfPLbW7zFmAM6rcvMi/i16foC+AgDozLb4sgI5rvir9skL/IRrgwgwimuUs7jJKVOMnoPuKJF8975zIhWA2vM9IpE/mykHDNQNiPE/bwEKX3KzlvVGrFhnH5996qAWerzKlxheW3g+6HqXdP/ge7eRWLzHNlr4t5 UeyglEtJ l/d0NDniUnioHzGUmw8o1NX9rKJbT7hkAUzmFnqg2BiNAgyq5LsYXaWagStXOzu9r8R5FEEMikcjpEQpsQxO3WmFJ5GwBd/UVsDXkBN2vzQABjfG6cDnwwqPwPDsHe0u/oruZKOn/e7aoiq6ksGMoKE8OKI2ehdMQiEIRfdxykOCLugQjviCBjJ+jyQ/TViNRL2r7QV4hTIBBvpQ= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: ... to a single file to reduce a bit of page_alloc.c. 
Signed-off-by: Kefeng Wang --- include/linux/fault-inject.h | 9 +++++ mm/Makefile | 1 + mm/fail_page_alloc.c | 66 ++++++++++++++++++++++++++++++++ mm/page_alloc.c | 74 ------------------------------------ 4 files changed, 76 insertions(+), 74 deletions(-) create mode 100644 mm/fail_page_alloc.c diff --git a/include/linux/fault-inject.h b/include/linux/fault-inject.h index 481abf530b3c..6d5edef09d45 100644 --- a/include/linux/fault-inject.h +++ b/include/linux/fault-inject.h @@ -93,6 +93,15 @@ struct kmem_cache; bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order); +#ifdef CONFIG_FAIL_PAGE_ALLOC +bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order); +#else +static inline bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order) +{ + return false; +} +#endif /* CONFIG_FAIL_PAGE_ALLOC */ + int should_failslab(struct kmem_cache *s, gfp_t gfpflags); #ifdef CONFIG_FAILSLAB extern bool __should_failslab(struct kmem_cache *s, gfp_t gfpflags); diff --git a/mm/Makefile b/mm/Makefile index 5262ce5baa28..0eec4bc72d3f 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -89,6 +89,7 @@ obj-$(CONFIG_KASAN) += kasan/ obj-$(CONFIG_KFENCE) += kfence/ obj-$(CONFIG_KMSAN) += kmsan/ obj-$(CONFIG_FAILSLAB) += failslab.o +obj-$(CONFIG_FAIL_PAGE_ALLOC) += fail_page_alloc.o obj-$(CONFIG_MEMTEST) += memtest.o obj-$(CONFIG_MIGRATION) += migrate.o obj-$(CONFIG_NUMA) += memory-tiers.o diff --git a/mm/fail_page_alloc.c b/mm/fail_page_alloc.c new file mode 100644 index 000000000000..b1b09cce9394 --- /dev/null +++ b/mm/fail_page_alloc.c @@ -0,0 +1,66 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include + +static struct { + struct fault_attr attr; + + bool ignore_gfp_highmem; + bool ignore_gfp_reclaim; + u32 min_order; +} fail_page_alloc = { + .attr = FAULT_ATTR_INITIALIZER, + .ignore_gfp_reclaim = true, + .ignore_gfp_highmem = true, + .min_order = 1, +}; + +static int __init setup_fail_page_alloc(char *str) +{ + return setup_fault_attr(&fail_page_alloc.attr, str); +} +__setup("fail_page_alloc=", setup_fail_page_alloc); + +bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order) +{ + int flags = 0; + + if (order < fail_page_alloc.min_order) + return false; + if (gfp_mask & __GFP_NOFAIL) + return false; + if (fail_page_alloc.ignore_gfp_highmem && (gfp_mask & __GFP_HIGHMEM)) + return false; + if (fail_page_alloc.ignore_gfp_reclaim && + (gfp_mask & __GFP_DIRECT_RECLAIM)) + return false; + + /* See comment in __should_failslab() */ + if (gfp_mask & __GFP_NOWARN) + flags |= FAULT_NOWARN; + + return should_fail_ex(&fail_page_alloc.attr, 1 << order, flags); +} + +#ifdef CONFIG_FAULT_INJECTION_DEBUG_FS + +static int __init fail_page_alloc_debugfs(void) +{ + umode_t mode = S_IFREG | 0600; + struct dentry *dir; + + dir = fault_create_debugfs_attr("fail_page_alloc", NULL, + &fail_page_alloc.attr); + + debugfs_create_bool("ignore-gfp-wait", mode, dir, + &fail_page_alloc.ignore_gfp_reclaim); + debugfs_create_bool("ignore-gfp-highmem", mode, dir, + &fail_page_alloc.ignore_gfp_highmem); + debugfs_create_u32("min-order", mode, dir, &fail_page_alloc.min_order); + + return 0; +} + +late_initcall(fail_page_alloc_debugfs); + +#endif /* CONFIG_FAULT_INJECTION_DEBUG_FS */ diff --git a/mm/page_alloc.c b/mm/page_alloc.c index bc453edbad21..fce47ccbcb3a 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -2942,80 +2942,6 @@ struct page *rmqueue(struct zone *preferred_zone, return page; } -#ifdef CONFIG_FAIL_PAGE_ALLOC - -static struct { - struct fault_attr attr; - - bool ignore_gfp_highmem; - bool 
ignore_gfp_reclaim;
-	u32 min_order;
-} fail_page_alloc = {
-	.attr = FAULT_ATTR_INITIALIZER,
-	.ignore_gfp_reclaim = true,
-	.ignore_gfp_highmem = true,
-	.min_order = 1,
-};
-
-static int __init setup_fail_page_alloc(char *str)
-{
-	return setup_fault_attr(&fail_page_alloc.attr, str);
-}
-__setup("fail_page_alloc=", setup_fail_page_alloc);
-
-static bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
-{
-	int flags = 0;
-
-	if (order < fail_page_alloc.min_order)
-		return false;
-	if (gfp_mask & __GFP_NOFAIL)
-		return false;
-	if (fail_page_alloc.ignore_gfp_highmem && (gfp_mask & __GFP_HIGHMEM))
-		return false;
-	if (fail_page_alloc.ignore_gfp_reclaim &&
-	    (gfp_mask & __GFP_DIRECT_RECLAIM))
-		return false;
-
-	/* See comment in __should_failslab() */
-	if (gfp_mask & __GFP_NOWARN)
-		flags |= FAULT_NOWARN;
-
-	return should_fail_ex(&fail_page_alloc.attr, 1 << order, flags);
-}
-
-#ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
-
-static int __init fail_page_alloc_debugfs(void)
-{
-	umode_t mode = S_IFREG | 0600;
-	struct dentry *dir;
-
-	dir = fault_create_debugfs_attr("fail_page_alloc", NULL,
-					&fail_page_alloc.attr);
-
-	debugfs_create_bool("ignore-gfp-wait", mode, dir,
-			    &fail_page_alloc.ignore_gfp_reclaim);
-	debugfs_create_bool("ignore-gfp-highmem", mode, dir,
-			    &fail_page_alloc.ignore_gfp_highmem);
-	debugfs_create_u32("min-order", mode, dir, &fail_page_alloc.min_order);
-
-	return 0;
-}
-
-late_initcall(fail_page_alloc_debugfs);
-
-#endif /* CONFIG_FAULT_INJECTION_DEBUG_FS */
-
-#else /* CONFIG_FAIL_PAGE_ALLOC */
-
-static inline bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
-{
-	return false;
-}
-
-#endif /* CONFIG_FAIL_PAGE_ALLOC */
-
 noinline bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
 {
 	return __should_fail_alloc_page(gfp_mask, order);
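Behaviour is unchanged by the code movement; the injector is still driven the same way at runtime, via the fail_page_alloc=<interval>,<probability>,<space>,<times> boot parameter or the debugfs attributes created above. A hypothetical session, assuming debugfs is mounted at the usual place and CONFIG_FAULT_INJECTION_DEBUG_FS is set (probability and times are generic fault_attr files; min-order, ignore-gfp-wait and ignore-gfp-highmem are specific to this injector):

	echo 10 > /sys/kernel/debug/fail_page_alloc/probability
	echo -1 > /sys/kernel/debug/fail_page_alloc/times
	echo 1 > /sys/kernel/debug/fail_page_alloc/min-order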
From patchwork Mon May 8 07:11:56 2023
From: Kefeng Wang
Subject: [PATCH 08/12] mm: page_alloc: split out DEBUG_PAGEALLOC
Date: Mon, 8 May 2023 15:11:56 +0800
Message-ID: <20230508071200.123962-9-wangkefeng.wang@huawei.com>

Move the DEBUG_PAGEALLOC-related functions into a single file to reduce
page_alloc.c a bit.

Signed-off-by: Kefeng Wang
---
 include/linux/mm.h | 76 ++++++++++++++++++++++++++++---------------
 mm/Makefile | 1 +
 mm/debug_page_alloc.c | 59 +++++++++++++++++++++++++++++++++
 mm/page_alloc.c | 69 ---------------------------------------
 4 files changed, 109 insertions(+), 96 deletions(-)
 create mode 100644 mm/debug_page_alloc.c

diff --git a/include/linux/mm.h b/include/linux/mm.h
index e5d7b65075a0..fc8732a119cf 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3534,9 +3534,58 @@ static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages)
 	if (debug_pagealloc_enabled_static())
 		__kernel_map_pages(page, numpages, 0);
 }
+
+extern unsigned int _debug_guardpage_minorder;
+DECLARE_STATIC_KEY_FALSE(_debug_guardpage_enabled);
+
+static inline unsigned int debug_guardpage_minorder(void)
+{
+	return _debug_guardpage_minorder;
+}
+
+static inline bool debug_guardpage_enabled(void)
+{
+	return static_branch_unlikely(&_debug_guardpage_enabled);
+}
+
+static inline bool page_is_guard(struct page *page)
+{
+	if (!debug_guardpage_enabled())
+		return false;
+
+	return PageGuard(page);
+}
+
+bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order,
+		      int migratetype);
+static inline bool set_page_guard(struct zone *zone, struct page *page,
+				  unsigned int order, int migratetype)
+{
+	if (!debug_guardpage_enabled())
+		return false;
+	return __set_page_guard(zone, page, order, migratetype);
+}
+
+void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order,
+			int migratetype);
+static inline void clear_page_guard(struct zone *zone, struct page *page,
+				    unsigned int order, int migratetype)
+{
+	if (!debug_guardpage_enabled())
+		return;
+	__clear_page_guard(zone, page, order, migratetype);
+}
+
 #else /* CONFIG_DEBUG_PAGEALLOC */
 static inline void debug_pagealloc_map_pages(struct page *page, int numpages) {}
 static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages) {}
+static inline unsigned int debug_guardpage_minorder(void) { return 0; }
+static inline bool debug_guardpage_enabled(void) { return false; }
+static inline bool page_is_guard(struct page *page) { return false; }
+static inline bool set_page_guard(struct zone *zone, struct page *page,
+			unsigned int order, int migratetype) { return false; }
+static inline void clear_page_guard(struct zone *zone, struct page *page,
+				unsigned int order, int migratetype) {}
 #endif /* CONFIG_DEBUG_PAGEALLOC */
 
 #ifdef __HAVE_ARCH_GATE_AREA
@@ -3775,33 +3824,6 @@ static inline bool vma_is_special_huge(const struct vm_area_struct *vma)
 
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
-extern unsigned int _debug_guardpage_minorder;
-DECLARE_STATIC_KEY_FALSE(_debug_guardpage_enabled);
-
-static inline unsigned int debug_guardpage_minorder(void)
-{
-	return _debug_guardpage_minorder;
-}
-
-static inline bool debug_guardpage_enabled(void)
-{
-	return static_branch_unlikely(&_debug_guardpage_enabled);
-}
-
-static inline bool page_is_guard(struct page *page)
-{
-	if (!debug_guardpage_enabled())
-
return false; - - return PageGuard(page); -} -#else -static inline unsigned int debug_guardpage_minorder(void) { return 0; } -static inline bool debug_guardpage_enabled(void) { return false; } -static inline bool page_is_guard(struct page *page) { return false; } -#endif /* CONFIG_DEBUG_PAGEALLOC */ - #if MAX_NUMNODES > 1 void __init setup_nr_node_ids(void); #else diff --git a/mm/Makefile b/mm/Makefile index 0eec4bc72d3f..678530a07326 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -124,6 +124,7 @@ obj-$(CONFIG_SECRETMEM) += secretmem.o obj-$(CONFIG_CMA_SYSFS) += cma_sysfs.o obj-$(CONFIG_USERFAULTFD) += userfaultfd.o obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o +obj-$(CONFIG_DEBUG_PAGEALLOC) += debug_page_alloc.o obj-$(CONFIG_DEBUG_PAGE_REF) += debug_page_ref.o obj-$(CONFIG_DAMON) += damon/ obj-$(CONFIG_HARDENED_USERCOPY) += usercopy.o diff --git a/mm/debug_page_alloc.c b/mm/debug_page_alloc.c new file mode 100644 index 000000000000..f9d145730fd1 --- /dev/null +++ b/mm/debug_page_alloc.c @@ -0,0 +1,59 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include + +unsigned int _debug_guardpage_minorder; + +bool _debug_pagealloc_enabled_early __read_mostly + = IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT); +EXPORT_SYMBOL(_debug_pagealloc_enabled_early); +DEFINE_STATIC_KEY_FALSE(_debug_pagealloc_enabled); +EXPORT_SYMBOL(_debug_pagealloc_enabled); + +DEFINE_STATIC_KEY_FALSE(_debug_guardpage_enabled); + +static int __init early_debug_pagealloc(char *buf) +{ + return kstrtobool(buf, &_debug_pagealloc_enabled_early); +} +early_param("debug_pagealloc", early_debug_pagealloc); + +static int __init debug_guardpage_minorder_setup(char *buf) +{ + unsigned long res; + + if (kstrtoul(buf, 10, &res) < 0 || res > MAX_ORDER / 2) { + pr_err("Bad debug_guardpage_minorder value\n"); + return 0; + } + _debug_guardpage_minorder = res; + pr_info("Setting debug_guardpage_minorder to %lu\n", res); + return 0; +} +early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup); + +bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order, + int migratetype) +{ + if (order >= debug_guardpage_minorder()) + return false; + + __SetPageGuard(page); + INIT_LIST_HEAD(&page->buddy_list); + set_page_private(page, order); + /* Guard pages are not available for any usage */ + if (!is_migrate_isolate(migratetype)) + __mod_zone_freepage_state(zone, -(1 << order), migratetype); + + return true; +} + +void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order, + int migratetype) +{ + __ClearPageGuard(page); + + set_page_private(page, 0); + if (!is_migrate_isolate(migratetype)) + __mod_zone_freepage_state(zone, (1 << order), migratetype); +} diff --git a/mm/page_alloc.c b/mm/page_alloc.c index fce47ccbcb3a..78d8a59f2afa 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -664,75 +664,6 @@ void destroy_large_folio(struct folio *folio) compound_page_dtors[dtor](&folio->page); } -#ifdef CONFIG_DEBUG_PAGEALLOC -unsigned int _debug_guardpage_minorder; - -bool _debug_pagealloc_enabled_early __read_mostly - = IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT); -EXPORT_SYMBOL(_debug_pagealloc_enabled_early); -DEFINE_STATIC_KEY_FALSE(_debug_pagealloc_enabled); -EXPORT_SYMBOL(_debug_pagealloc_enabled); - -DEFINE_STATIC_KEY_FALSE(_debug_guardpage_enabled); - -static int __init early_debug_pagealloc(char *buf) -{ - return kstrtobool(buf, &_debug_pagealloc_enabled_early); -} -early_param("debug_pagealloc", early_debug_pagealloc); - -static int __init debug_guardpage_minorder_setup(char 
*buf)
-{
-	unsigned long res;
-
-	if (kstrtoul(buf, 10, &res) < 0 || res > MAX_ORDER / 2) {
-		pr_err("Bad debug_guardpage_minorder value\n");
-		return 0;
-	}
-	_debug_guardpage_minorder = res;
-	pr_info("Setting debug_guardpage_minorder to %lu\n", res);
-	return 0;
-}
-early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup);
-
-static inline bool set_page_guard(struct zone *zone, struct page *page,
-				unsigned int order, int migratetype)
-{
-	if (!debug_guardpage_enabled())
-		return false;
-
-	if (order >= debug_guardpage_minorder())
-		return false;
-
-	__SetPageGuard(page);
-	INIT_LIST_HEAD(&page->buddy_list);
-	set_page_private(page, order);
-	/* Guard pages are not available for any usage */
-	if (!is_migrate_isolate(migratetype))
-		__mod_zone_freepage_state(zone, -(1 << order), migratetype);
-
-	return true;
-}
-
-static inline void clear_page_guard(struct zone *zone, struct page *page,
-				unsigned int order, int migratetype)
-{
-	if (!debug_guardpage_enabled())
-		return;
-
-	__ClearPageGuard(page);
-
-	set_page_private(page, 0);
-	if (!is_migrate_isolate(migratetype))
-		__mod_zone_freepage_state(zone, (1 << order), migratetype);
-}
-#else
-static inline bool set_page_guard(struct zone *zone, struct page *page,
-			unsigned int order, int migratetype) { return false; }
-static inline void clear_page_guard(struct zone *zone, struct page *page,
-			unsigned int order, int migratetype) {}
-#endif
-
 static inline void set_buddy_order(struct page *page, unsigned int order)
 {
 	set_page_private(page, order);
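The split keeps the fast path cheap: the inline wrappers test a static key first, so only enabled configurations call out of line into debug_page_alloc.c. For illustration, a sketch of the buddy-splitting loop that consumes set_page_guard(), assuming expand() in page_alloc.c keeps its usual shape:

	while (high > low) {
		high--;
		size >>= 1;
		VM_BUG_ON_PAGE(bad_range(zone, &page[size]), &page[size]);

		/* inline static-key check; __set_page_guard() only runs when enabled */
		if (set_page_guard(zone, &page[size], high, migratetype))
			continue;

		add_to_free_list(&page[size], zone, high, migratetype);
		set_buddy_order(&page[size], high);
	}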
From patchwork Mon May 8 07:11:57 2023
From: Kefeng Wang
Subject: [PATCH 09/12] mm: page_alloc: move mark_free_page() into snapshot.c
Date: Mon, 8 May 2023 15:11:57 +0800
Message-ID: <20230508071200.123962-10-wangkefeng.wang@huawei.com>
mark_free_pages() is only used in kernel/power/snapshot.c; move it there to
reduce page_alloc.c a bit.

Signed-off-by: Kefeng Wang
---
 include/linux/suspend.h | 3 ---
 kernel/power/snapshot.c | 52 ++++++++++++++++++++++++++++++++
 mm/page_alloc.c | 55 -----------------------------------
 3 files changed, 52 insertions(+), 58 deletions(-)

diff --git a/include/linux/suspend.h b/include/linux/suspend.h
index d0d4598a7b3f..3950a7bf33ae 100644
--- a/include/linux/suspend.h
+++ b/include/linux/suspend.h
@@ -364,9 +364,6 @@ struct pbe {
 	struct pbe *next;
 };
 
-/* mm/page_alloc.c */
-extern void mark_free_pages(struct zone *zone);
-
 /**
  * struct platform_hibernation_ops - hibernation platform support
  *

diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index cd8b7b35f1e8..45ef0bf81c85 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1228,6 +1228,58 @@ unsigned int snapshot_additional_pages(struct zone *zone)
 	return 2 * rtree;
 }
 
+/*
+ * Touch the watchdog for every WD_PAGE_COUNT pages.
+ */
+#define WD_PAGE_COUNT	(128*1024)
+
+static void mark_free_pages(struct zone *zone)
+{
+	unsigned long pfn, max_zone_pfn, page_count = WD_PAGE_COUNT;
+	unsigned long flags;
+	unsigned int order, t;
+	struct page *page;
+
+	if (zone_is_empty(zone))
+		return;
+
+	spin_lock_irqsave(&zone->lock, flags);
+
+	max_zone_pfn = zone_end_pfn(zone);
+	for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++)
+		if (pfn_valid(pfn)) {
+			page = pfn_to_page(pfn);
+
+			if (!--page_count) {
+				touch_nmi_watchdog();
+				page_count = WD_PAGE_COUNT;
+			}
+
+			if (page_zone(page) != zone)
+				continue;
+
+			if (!swsusp_page_is_forbidden(page))
+				swsusp_unset_page_free(page);
+		}
+
+	for_each_migratetype_order(order, t) {
+		list_for_each_entry(page,
+				&zone->free_area[order].free_list[t], buddy_list) {
+			unsigned long i;
+
+			pfn = page_to_pfn(page);
+			for (i = 0; i < (1UL << order); i++) {
+				if (!--page_count) {
+					touch_nmi_watchdog();
+					page_count = WD_PAGE_COUNT;
+				}
+				swsusp_set_page_free(pfn_to_page(pfn + i));
+			}
+		}
+	}
+	spin_unlock_irqrestore(&zone->lock, flags);
+}
+
 #ifdef CONFIG_HIGHMEM
 /**
  * count_free_highmem_pages - Compute the total number of free highmem pages.

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 78d8a59f2afa..9284edf0259b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2313,61 +2313,6 @@ void drain_all_pages(struct zone *zone)
 	__drain_all_pages(zone, false);
 }
 
-#ifdef CONFIG_HIBERNATION
-
-/*
- * Touch the watchdog for every WD_PAGE_COUNT pages.
- */
-#define WD_PAGE_COUNT	(128*1024)
-
-void mark_free_pages(struct zone *zone)
-{
-	unsigned long pfn, max_zone_pfn, page_count = WD_PAGE_COUNT;
-	unsigned long flags;
-	unsigned int order, t;
-	struct page *page;
-
-	if (zone_is_empty(zone))
-		return;
-
-	spin_lock_irqsave(&zone->lock, flags);
-
-	max_zone_pfn = zone_end_pfn(zone);
-	for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++)
-		if (pfn_valid(pfn)) {
-			page = pfn_to_page(pfn);
-
-			if (!--page_count) {
-				touch_nmi_watchdog();
-				page_count = WD_PAGE_COUNT;
-			}
-
-			if (page_zone(page) != zone)
-				continue;
-
-			if (!swsusp_page_is_forbidden(page))
-				swsusp_unset_page_free(page);
-		}
-
-	for_each_migratetype_order(order, t) {
-		list_for_each_entry(page,
-				&zone->free_area[order].free_list[t], buddy_list) {
-			unsigned long i;
-
-			pfn = page_to_pfn(page);
-			for (i = 0; i < (1UL << order); i++) {
-				if (!--page_count) {
-					touch_nmi_watchdog();
-					page_count = WD_PAGE_COUNT;
-				}
-				swsusp_set_page_free(pfn_to_page(pfn + i));
-			}
-		}
-	}
-	spin_unlock_irqrestore(&zone->lock, flags);
-}
-#endif /* CONFIG_PM */
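With the move, mark_free_pages() becomes static to snapshot.c, whose image-building code is its only user; the call pattern is simply a walk over every populated zone before the snapshot is taken, roughly (a sketch of the existing usage):

	struct zone *zone;

	for_each_populated_zone(zone)
		mark_free_pages(zone);

so pages known to be free can be skipped when the hibernation image is assembled.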
From patchwork Mon May 8 07:11:58 2023
From: Kefeng Wang
Subject: [PATCH 10/12] mm: page_alloc: move pm_* function into power
Date: Mon, 8 May 2023 15:11:58 +0800
Message-ID: <20230508071200.123962-11-wangkefeng.wang@huawei.com>

pm_restrict_gfp_mask()/pm_restore_gfp_mask() are only used by the power
code, so move them out of page_alloc.c. Add a general gfp_has_io_fs()
helper, which returns true if a gfp mask has both __GFP_IO and __GFP_FS
set, and use it inside pm_suspended_storage(); pm_suspended_storage()
itself moves into suspend.h.
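To make the new helper's semantics concrete (illustrative values following from the standard GFP flag definitions):

	gfp_has_io_fs(GFP_KERNEL);	/* true: __GFP_IO and __GFP_FS are both set */
	gfp_has_io_fs(GFP_NOFS);	/* false: __GFP_FS is cleared */
	gfp_has_io_fs(GFP_NOIO);	/* false: both __GFP_IO and __GFP_FS are cleared */

pm_suspended_storage() then reads naturally as "the currently allowed mask forbids I/O or filesystem re-entry".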
Signed-off-by: Kefeng Wang --- include/linux/gfp.h | 15 ++++----------- include/linux/suspend.h | 6 ++++++ kernel/power/main.c | 27 +++++++++++++++++++++++++++ kernel/power/power.h | 5 +++++ mm/page_alloc.c | 38 -------------------------------------- mm/swapfile.c | 1 + 6 files changed, 43 insertions(+), 49 deletions(-) diff --git a/include/linux/gfp.h b/include/linux/gfp.h index ed8cb537c6a7..665f06675c83 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -338,19 +338,12 @@ extern gfp_t gfp_allowed_mask; /* Returns true if the gfp_mask allows use of ALLOC_NO_WATERMARK */ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask); -extern void pm_restrict_gfp_mask(void); -extern void pm_restore_gfp_mask(void); - -extern gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma); - -#ifdef CONFIG_PM_SLEEP -extern bool pm_suspended_storage(void); -#else -static inline bool pm_suspended_storage(void) +static inline bool gfp_has_io_fs(gfp_t gfp) { - return false; + return (gfp & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS); } -#endif /* CONFIG_PM_SLEEP */ + +extern gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma); #ifdef CONFIG_CONTIG_ALLOC /* The below functions must be run on a range from a single zone. */ diff --git a/include/linux/suspend.h b/include/linux/suspend.h index 3950a7bf33ae..76923051c03d 100644 --- a/include/linux/suspend.h +++ b/include/linux/suspend.h @@ -502,6 +502,11 @@ extern void pm_report_max_hw_sleep(u64 t); extern bool events_check_enabled; extern suspend_state_t pm_suspend_target_state; +static inline bool pm_suspended_storage(void) +{ + return !gfp_has_io_fs(gfp_allowed_mask); +} + extern bool pm_wakeup_pending(void); extern void pm_system_wakeup(void); extern void pm_system_cancel_wakeup(void); @@ -535,6 +540,7 @@ static inline void ksys_sync_helper(void) {} #define pm_notifier(fn, pri) do { (void)(fn); } while (0) +static inline bool pm_suspended_storage(void) { return false; } static inline bool pm_wakeup_pending(void) { return false; } static inline void pm_system_wakeup(void) {} static inline void pm_wakeup_clear(bool reset) {} diff --git a/kernel/power/main.c b/kernel/power/main.c index 3113ec2f1db4..34fc8359145b 100644 --- a/kernel/power/main.c +++ b/kernel/power/main.c @@ -21,6 +21,33 @@ #include "power.h" #ifdef CONFIG_PM_SLEEP +/* + * The following functions are used by the suspend/hibernate code to temporarily + * change gfp_allowed_mask in order to avoid using I/O during memory allocations + * while devices are suspended. To avoid races with the suspend/hibernate code, + * they should always be called with system_transition_mutex held + * (gfp_allowed_mask also should only be modified with system_transition_mutex + * held, unless the suspend/hibernate code is guaranteed not to run in parallel + * with that modification). 
+ */ +static gfp_t saved_gfp_mask; + +void pm_restore_gfp_mask(void) +{ + WARN_ON(!mutex_is_locked(&system_transition_mutex)); + if (saved_gfp_mask) { + gfp_allowed_mask = saved_gfp_mask; + saved_gfp_mask = 0; + } +} + +void pm_restrict_gfp_mask(void) +{ + WARN_ON(!mutex_is_locked(&system_transition_mutex)); + WARN_ON(saved_gfp_mask); + saved_gfp_mask = gfp_allowed_mask; + gfp_allowed_mask &= ~(__GFP_IO | __GFP_FS); +} unsigned int lock_system_sleep(void) { diff --git a/kernel/power/power.h b/kernel/power/power.h index b83c8d5e188d..ac14d1b463d1 100644 --- a/kernel/power/power.h +++ b/kernel/power/power.h @@ -216,6 +216,11 @@ static inline void suspend_test_finish(const char *label) {} /* kernel/power/main.c */ extern int pm_notifier_call_chain_robust(unsigned long val_up, unsigned long val_down); extern int pm_notifier_call_chain(unsigned long val); +void pm_restrict_gfp_mask(void); +void pm_restore_gfp_mask(void); +#else +static inline void pm_restrict_gfp_mask(void) {} +static inline void pm_restore_gfp_mask(void) {} #endif #ifdef CONFIG_HIGHMEM diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 9284edf0259b..aa4e4af9fc88 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -227,44 +227,6 @@ static inline void set_pcppage_migratetype(struct page *page, int migratetype) page->index = migratetype; } -#ifdef CONFIG_PM_SLEEP -/* - * The following functions are used by the suspend/hibernate code to temporarily - * change gfp_allowed_mask in order to avoid using I/O during memory allocations - * while devices are suspended. To avoid races with the suspend/hibernate code, - * they should always be called with system_transition_mutex held - * (gfp_allowed_mask also should only be modified with system_transition_mutex - * held, unless the suspend/hibernate code is guaranteed not to run in parallel - * with that modification). 
- */ - -static gfp_t saved_gfp_mask; - -void pm_restore_gfp_mask(void) -{ - WARN_ON(!mutex_is_locked(&system_transition_mutex)); - if (saved_gfp_mask) { - gfp_allowed_mask = saved_gfp_mask; - saved_gfp_mask = 0; - } -} - -void pm_restrict_gfp_mask(void) -{ - WARN_ON(!mutex_is_locked(&system_transition_mutex)); - WARN_ON(saved_gfp_mask); - saved_gfp_mask = gfp_allowed_mask; - gfp_allowed_mask &= ~(__GFP_IO | __GFP_FS); -} - -bool pm_suspended_storage(void) -{ - if ((gfp_allowed_mask & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS)) - return false; - return true; -} -#endif /* CONFIG_PM_SLEEP */ - #ifdef CONFIG_HUGETLB_PAGE_SIZE_VARIABLE unsigned int pageblock_order __read_mostly; #endif diff --git a/mm/swapfile.c b/mm/swapfile.c index 274bbf797480..c74259001d5e 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -41,6 +41,7 @@ #include #include #include +#include #include #include From patchwork Mon May 8 07:11:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kefeng Wang X-Patchwork-Id: 13234128 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id DD0E6C7EE2F for ; Mon, 8 May 2023 06:55:13 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 8DA2A280004; Mon, 8 May 2023 02:54:59 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 8632E900006; Mon, 8 May 2023 02:54:59 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 70ACE280004; Mon, 8 May 2023 02:54:59 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 24239900006 for ; Mon, 8 May 2023 02:54:59 -0400 (EDT) Received: from smtpin03.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id F20F0120E81 for ; Mon, 8 May 2023 06:54:58 +0000 (UTC) X-FDA: 80766175476.03.382C4A7 Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [45.249.212.189]) by imf23.hostedemail.com (Postfix) with ESMTP id C6A5814000B for ; Mon, 8 May 2023 06:54:56 +0000 (UTC) Authentication-Results: imf23.hostedemail.com; dkim=none; spf=pass (imf23.hostedemail.com: domain of wangkefeng.wang@huawei.com designates 45.249.212.189 as permitted sender) smtp.mailfrom=wangkefeng.wang@huawei.com; dmarc=pass (policy=quarantine) header.from=huawei.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1683528897; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=pEQmbXru6q4li0fN/aC7FJHKouP8VLqgIOaYkU8mCOk=; b=mHji8aA2k5A9IPOJUAOuwPkLmUye7HoIib1aXF/ZXxL217nCg80aJOV5euaYBgecRnEqzG HfwGTM2OeHaMSzQTXFJ+isb/OmZN9Ghzmmugt5TXMg/xRLdWVTjJKxmRNUYcc63LKMLsk9 I9XqQq0oiCLlBzZtklnR4RDDS2GU3No= ARC-Authentication-Results: i=1; imf23.hostedemail.com; dkim=none; spf=pass (imf23.hostedemail.com: domain of wangkefeng.wang@huawei.com designates 45.249.212.189 as permitted sender) smtp.mailfrom=wangkefeng.wang@huawei.com; dmarc=pass (policy=quarantine) header.from=huawei.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1683528897; a=rsa-sha256; cv=none; 
From patchwork Mon May 8 07:11:59 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13234128
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton, Mike Rapoport
Cc: David Hildenbrand, Oscar Salvador, "Rafael J. Wysocki", Pavel Machek,
 Len Brown, Luis Chamberlain, Kees Cook, Iurii Zaikin, Kefeng Wang
Subject: [PATCH 11/12] mm: vmscan: use gfp_has_io_fs()
Date: Mon, 8 May 2023 15:11:59 +0800
Message-ID: <20230508071200.123962-12-wangkefeng.wang@huawei.com>
In-Reply-To: <20230508071200.123962-1-wangkefeng.wang@huawei.com>
References: <20230508071200.123962-1-wangkefeng.wang@huawei.com>

Use gfp_has_io_fs() instead of open-coding the __GFP_IO/__GFP_FS check.

Signed-off-by: Kefeng Wang
---
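For reference, gfp_has_io_fs() is introduced earlier in this series;
judging from the open-coded test it replaces in the hunk below, its
definition presumably amounts to the following (a sketch, not quoted
from the series):

static inline bool gfp_has_io_fs(gfp_t gfp)
{
	/* True only when both __GFP_IO and __GFP_FS are allowed. */
	return (gfp & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS);
}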
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6d0cd2840cf0..15efbfbb1963 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2458,7 +2458,7 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
 	 * won't get blocked by normal direct-reclaimers, forming a circular
 	 * deadlock.
 	 */
-	if ((sc->gfp_mask & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS))
+	if (gfp_has_io_fs(sc->gfp_mask))
 		inactive >>= 3;
 
 	too_many = isolated > inactive;

From patchwork Mon May 8 07:12:00 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13234130
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton, Mike Rapoport
Cc: David Hildenbrand, Oscar Salvador, "Rafael J. Wysocki", Pavel Machek,
 Len Brown, Luis Chamberlain, Kees Cook, Iurii Zaikin, Kefeng Wang
Subject: [PATCH 12/12] mm: page_alloc: move sysctls into its own file
Date: Mon, 8 May 2023 15:12:00 +0800
Message-ID: <20230508071200.123962-13-wangkefeng.wang@huawei.com>
In-Reply-To: <20230508071200.123962-1-wangkefeng.wang@huawei.com>
References: <20230508071200.123962-1-wangkefeng.wang@huawei.com>

Move all page-allocator sysctls into mm/page_alloc.c itself, as part of
the kernel/sysctl.c spring cleaning, and move some function declarations
from include/linux/mm.h into mm/internal.h.

Signed-off-by: Kefeng Wang
---
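For reference, the mechanism this patch adopts: instead of keeping
entries in the central vm_table in kernel/sysctl.c, a subsystem declares
its own ctl_table and registers it once at init time. A minimal sketch
of the pattern with a hypothetical knob ("example_factor" and the
function names are illustrative only, not part of this patch):

#include <linux/init.h>
#include <linux/sysctl.h>

static int example_factor = 10;

static struct ctl_table example_sysctl_table[] = {
	{
		.procname	= "example_factor",
		.data		= &example_factor,
		.maxlen		= sizeof(example_factor),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= SYSCTL_ZERO,
		.extra2		= SYSCTL_ONE_HUNDRED,
	},
	{}	/* empty entry terminates the table */
};

static void __init example_sysctl_init(void)
{
	/* Appears as /proc/sys/vm/example_factor. */
	register_sysctl_init("vm", example_sysctl_table);
}

register_sysctl_init() is meant for tables that live for the lifetime of
the kernel, which is why the diff below can call the registration helper
once from page_alloc_init_late() and never unregister the table.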
 include/linux/mm.h     |  11 -----
 include/linux/mmzone.h |  21 ---------
 kernel/sysctl.c        |  67 ---------------------------
 mm/internal.h          |   9 ++++
 mm/mm_init.c           |   2 +
 mm/page_alloc.c        | 103 +++++++++++++++++++++++++++++++++++------
 6 files changed, 100 insertions(+), 113 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index fc8732a119cf..d533ef955dd0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3045,12 +3045,6 @@ extern int __meminit early_pfn_to_nid(unsigned long pfn);
 #endif
 
 extern void set_dma_reserve(unsigned long new_dma_reserve);
-extern void memmap_init_range(unsigned long, int, unsigned long,
-		unsigned long, unsigned long, enum meminit_context,
-		struct vmem_altmap *, int migratetype);
-extern void setup_per_zone_wmarks(void);
-extern void calculate_min_free_kbytes(void);
-extern int __meminit init_per_zone_wmark_min(void);
 extern void mem_init(void);
 extern void __init mmap_init(void);
@@ -3071,11 +3065,6 @@
 void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...);
 
 extern void setup_per_cpu_pageset(void);
 
-/* page_alloc.c */
-extern int min_free_kbytes;
-extern int watermark_boost_factor;
-extern int watermark_scale_factor;
-
 /* nommu.c */
 extern atomic_long_t mmap_pages_allocated;
 extern int nommu_shrink_inode_mappings(struct inode *, size_t, size_t);

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index a4889c9d4055..3a68326c9989 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1512,27 +1512,6 @@ static inline bool has_managed_dma(void)
 }
 #endif
 
-/* These two functions are used to setup the per zone pages min values */
-struct ctl_table;
-
-int min_free_kbytes_sysctl_handler(struct ctl_table *, int, void *, size_t *,
-		loff_t *);
-int watermark_scale_factor_sysctl_handler(struct ctl_table *, int, void *,
-		size_t *, loff_t *);
-extern int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES];
-int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *, int, void *,
-		size_t *, loff_t *);
-int percpu_pagelist_high_fraction_sysctl_handler(struct ctl_table *, int,
-		void *, size_t *, loff_t *);
-int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *, int,
-		void *, size_t *, loff_t *);
-int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *, int,
-		void *, size_t *, loff_t *);
-int numa_zonelist_order_handler(struct ctl_table *, int,
-		void *, size_t *, loff_t *);
-extern int percpu_pagelist_high_fraction;
-extern char numa_zonelist_order[];
-#define NUMA_ZONELIST_ORDER_LEN	16
 
 #ifndef CONFIG_NUMA

diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index bfe53e835524..a57de67f032f 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -2119,13 +2119,6 @@ static struct ctl_table vm_table[] = {
 		.extra2		= SYSCTL_ONE,
 	},
 #endif
-	{
-		.procname	= "lowmem_reserve_ratio",
-		.data		= &sysctl_lowmem_reserve_ratio,
-		.maxlen		= sizeof(sysctl_lowmem_reserve_ratio),
-		.mode		= 0644,
-		.proc_handler	= lowmem_reserve_ratio_sysctl_handler,
-	},
 	{
 		.procname	= "drop_caches",
 		.data		= &sysctl_drop_caches,
@@ -2135,39 +2128,6 @@ static struct ctl_table vm_table[] = {
 		.extra1		= SYSCTL_ONE,
 		.extra2		= SYSCTL_FOUR,
 	},
-	{
-		.procname	= "min_free_kbytes",
-		.data		= &min_free_kbytes,
-		.maxlen		= sizeof(min_free_kbytes),
-		.mode		= 0644,
-		.proc_handler	= min_free_kbytes_sysctl_handler,
-		.extra1		= SYSCTL_ZERO,
-	},
-	{
-		.procname	= "watermark_boost_factor",
-		.data		= &watermark_boost_factor,
-		.maxlen		= sizeof(watermark_boost_factor),
-		.mode		= 0644,
-		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= SYSCTL_ZERO,
-	},
SYSCTL_ZERO, - }, - { - .procname = "watermark_scale_factor", - .data = &watermark_scale_factor, - .maxlen = sizeof(watermark_scale_factor), - .mode = 0644, - .proc_handler = watermark_scale_factor_sysctl_handler, - .extra1 = SYSCTL_ONE, - .extra2 = SYSCTL_THREE_THOUSAND, - }, - { - .procname = "percpu_pagelist_high_fraction", - .data = &percpu_pagelist_high_fraction, - .maxlen = sizeof(percpu_pagelist_high_fraction), - .mode = 0644, - .proc_handler = percpu_pagelist_high_fraction_sysctl_handler, - .extra1 = SYSCTL_ZERO, - }, { .procname = "page_lock_unfairness", .data = &sysctl_page_lock_unfairness, @@ -2223,24 +2183,6 @@ static struct ctl_table vm_table[] = { .proc_handler = proc_dointvec_minmax, .extra1 = SYSCTL_ZERO, }, - { - .procname = "min_unmapped_ratio", - .data = &sysctl_min_unmapped_ratio, - .maxlen = sizeof(sysctl_min_unmapped_ratio), - .mode = 0644, - .proc_handler = sysctl_min_unmapped_ratio_sysctl_handler, - .extra1 = SYSCTL_ZERO, - .extra2 = SYSCTL_ONE_HUNDRED, - }, - { - .procname = "min_slab_ratio", - .data = &sysctl_min_slab_ratio, - .maxlen = sizeof(sysctl_min_slab_ratio), - .mode = 0644, - .proc_handler = sysctl_min_slab_ratio_sysctl_handler, - .extra1 = SYSCTL_ZERO, - .extra2 = SYSCTL_ONE_HUNDRED, - }, #endif #ifdef CONFIG_SMP { @@ -2267,15 +2209,6 @@ static struct ctl_table vm_table[] = { .proc_handler = mmap_min_addr_handler, }, #endif -#ifdef CONFIG_NUMA - { - .procname = "numa_zonelist_order", - .data = &numa_zonelist_order, - .maxlen = NUMA_ZONELIST_ORDER_LEN, - .mode = 0644, - .proc_handler = numa_zonelist_order_handler, - }, -#endif #if (defined(CONFIG_X86_32) && !defined(CONFIG_UML))|| \ (defined(CONFIG_SUPERH) && defined(CONFIG_VSYSCALL)) { diff --git a/mm/internal.h b/mm/internal.h index 9482862b28cc..8d8b2faebc89 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -213,6 +213,15 @@ static inline bool is_check_pages_enabled(void) return static_branch_unlikely(&check_pages_enabled); } +extern int min_free_kbytes; + +void page_alloc_sysctl_init(void); +void setup_per_zone_wmarks(void); +void calculate_min_free_kbytes(void); +int __meminit init_per_zone_wmark_min(void); +void memmap_init_range(unsigned long, int, unsigned long, unsigned long, + unsigned long, enum meminit_context, struct vmem_altmap *, int); + /* * Structure for holding the mostly immutable allocation parameters passed * between functions involved in allocations, including the alloc_pages* diff --git a/mm/mm_init.c b/mm/mm_init.c index 1f30b9e16577..afa56cd50ca4 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -2444,6 +2444,8 @@ void __init page_alloc_init_late(void) /* Initialize page ext after all struct pages are initialized. 
 	if (deferred_struct_pages)
 		page_ext_init();
+
+	page_alloc_sysctl_init();
 }
 
 #ifndef __HAVE_ARCH_RESERVED_KERNEL_PAGES

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index aa4e4af9fc88..880f08575d59 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -206,7 +206,6 @@ nodemask_t node_states[NR_NODE_STATES] __read_mostly = {
 };
 EXPORT_SYMBOL(node_states);
 
-int percpu_pagelist_high_fraction;
 gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK;
 
 /*
@@ -302,8 +301,8 @@ compound_page_dtor * const compound_page_dtors[NR_COMPOUND_DTORS] = {
 int min_free_kbytes = 1024;
 int user_min_free_kbytes = -1;
-int watermark_boost_factor __read_mostly = 15000;
-int watermark_scale_factor = 10;
+static int watermark_boost_factor __read_mostly = 15000;
+static int watermark_scale_factor = 10;
 
 /* movable_zone is the "real" zone pages in ZONE_MOVABLE are taken from */
 int movable_zone;
@@ -4828,12 +4827,12 @@ static int __parse_numa_zonelist_order(char *s)
 	return 0;
 }
 
-char numa_zonelist_order[] = "Node";
-
+static char numa_zonelist_order[] = "Node";
+#define NUMA_ZONELIST_ORDER_LEN	16
 /*
  * sysctl handler for numa_zonelist_order
  */
-int numa_zonelist_order_handler(struct ctl_table *table, int write,
+static int numa_zonelist_order_handler(struct ctl_table *table, int write,
 		void *buffer, size_t *length, loff_t *ppos)
 {
 	if (write)
@@ -4841,7 +4840,6 @@ int numa_zonelist_order_handler(struct ctl_table *table, int write,
 	return proc_dostring(table, write, buffer, length, ppos);
 }
 
-
 static int node_load[MAX_NUMNODES];
 
 /**
@@ -5244,6 +5242,7 @@ static int zone_batchsize(struct zone *zone)
 #endif
 }
 
+static int percpu_pagelist_high_fraction;
 static int zone_highsize(struct zone *zone, int batch, int cpu_online)
 {
 #ifdef CONFIG_MMU
@@ -5773,7 +5772,7 @@ postcore_initcall(init_per_zone_wmark_min)
 * that we can call two helper functions whenever min_free_kbytes
 * changes.
 */
-int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
+static int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
 		void *buffer, size_t *length, loff_t *ppos)
 {
 	int rc;
@@ -5789,7 +5788,7 @@ int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
 	return 0;
 }
 
-int watermark_scale_factor_sysctl_handler(struct ctl_table *table, int write,
+static int watermark_scale_factor_sysctl_handler(struct ctl_table *table, int write,
 		void *buffer, size_t *length, loff_t *ppos)
 {
 	int rc;
@@ -5819,7 +5818,7 @@ static void setup_min_unmapped_ratio(void)
 }
 
-int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write,
+static int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write,
 		void *buffer, size_t *length, loff_t *ppos)
 {
 	int rc;
@@ -5846,7 +5845,7 @@ static void setup_min_slab_ratio(void)
 		sysctl_min_slab_ratio) / 100;
 }
 
-int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
+static int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
 		void *buffer, size_t *length, loff_t *ppos)
 {
 	int rc;
@@ -5870,8 +5869,8 @@ int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
 * minimum watermarks. The lowmem reserve ratio can only make sense
 * if in function of the boot time zone sizes.
 */
-int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
-		void *buffer, size_t *length, loff_t *ppos)
+static int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table,
+		int write, void *buffer, size_t *length, loff_t *ppos)
 {
 	int i;
 
@@ -5891,7 +5890,7 @@ int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
 * cpu. It is the fraction of total pages in each zone that a hot per cpu
 * pagelist can have before it gets flushed back to buddy allocator.
 */
-int percpu_pagelist_high_fraction_sysctl_handler(struct ctl_table *table,
+static int percpu_pagelist_high_fraction_sysctl_handler(struct ctl_table *table,
 		int write, void *buffer, size_t *length, loff_t *ppos)
 {
 	struct zone *zone;
@@ -5924,6 +5923,82 @@ int percpu_pagelist_high_fraction_sysctl_handler(struct ctl_table *table,
 	return ret;
 }
 
+static struct ctl_table page_alloc_sysctl_table[] = {
+	{
+		.procname	= "min_free_kbytes",
+		.data		= &min_free_kbytes,
+		.maxlen		= sizeof(min_free_kbytes),
+		.mode		= 0644,
+		.proc_handler	= min_free_kbytes_sysctl_handler,
+		.extra1		= SYSCTL_ZERO,
+	},
+	{
+		.procname	= "watermark_boost_factor",
+		.data		= &watermark_boost_factor,
+		.maxlen		= sizeof(watermark_boost_factor),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec_minmax,
+		.extra1		= SYSCTL_ZERO,
+	},
+	{
+		.procname	= "watermark_scale_factor",
+		.data		= &watermark_scale_factor,
+		.maxlen		= sizeof(watermark_scale_factor),
+		.mode		= 0644,
+		.proc_handler	= watermark_scale_factor_sysctl_handler,
+		.extra1		= SYSCTL_ONE,
+		.extra2		= SYSCTL_THREE_THOUSAND,
+	},
+	{
+		.procname	= "percpu_pagelist_high_fraction",
+		.data		= &percpu_pagelist_high_fraction,
+		.maxlen		= sizeof(percpu_pagelist_high_fraction),
+		.mode		= 0644,
+		.proc_handler	= percpu_pagelist_high_fraction_sysctl_handler,
+		.extra1		= SYSCTL_ZERO,
+	},
+	{
+		.procname	= "lowmem_reserve_ratio",
+		.data		= &sysctl_lowmem_reserve_ratio,
+		.maxlen		= sizeof(sysctl_lowmem_reserve_ratio),
+		.mode		= 0644,
+		.proc_handler	= lowmem_reserve_ratio_sysctl_handler,
+	},
+#ifdef CONFIG_NUMA
+	{
+		.procname	= "numa_zonelist_order",
+		.data		= &numa_zonelist_order,
+		.maxlen		= NUMA_ZONELIST_ORDER_LEN,
+		.mode		= 0644,
+		.proc_handler	= numa_zonelist_order_handler,
+	},
+	{
+		.procname	= "min_unmapped_ratio",
+		.data		= &sysctl_min_unmapped_ratio,
+		.maxlen		= sizeof(sysctl_min_unmapped_ratio),
+		.mode		= 0644,
+		.proc_handler	= sysctl_min_unmapped_ratio_sysctl_handler,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE_HUNDRED,
+	},
+	{
+		.procname	= "min_slab_ratio",
+		.data		= &sysctl_min_slab_ratio,
+		.maxlen		= sizeof(sysctl_min_slab_ratio),
+		.mode		= 0644,
+		.proc_handler	= sysctl_min_slab_ratio_sysctl_handler,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE_HUNDRED,
+	},
+#endif
+	{}
+};
+
+void __init page_alloc_sysctl_init(void)
+{
+	register_sysctl_init("vm", page_alloc_sysctl_table);
+}
+
 #ifdef CONFIG_CONTIG_ALLOC
 /* Usage: See admin-guide/dynamic-debug-howto.rst */
 static void alloc_contig_dump_pages(struct list_head *page_list)