From patchwork Tue May 16 06:38:09 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13242575
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton, Mike Rapoport
Cc: David Hildenbrand, Oscar Salvador, "Rafael J. Wysocki", Pavel Machek,
 Len Brown, Luis Chamberlain, Kees Cook, Iurii Zaikin, Kefeng Wang
Wysocki" , Pavel Machek , Len Brown , Luis Chamberlain , Kees Cook , Iurii Zaikin , , , , , Kefeng Wang Subject: [PATCH v2 01/13] mm: page_alloc: move mirrored_kernelcore into mm_init.c Date: Tue, 16 May 2023 14:38:09 +0800 Message-ID: <20230516063821.121844-2-wangkefeng.wang@huawei.com> X-Mailer: git-send-email 2.35.3 In-Reply-To: <20230516063821.121844-1-wangkefeng.wang@huawei.com> References: <20230516063821.121844-1-wangkefeng.wang@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.113.25] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemm500001.china.huawei.com (7.185.36.107) X-CFilter-Loop: Reflected X-Stat-Signature: cufko7x3tn9tbunrp9y83djjtzdrp18i X-Rspamd-Server: rspam03 X-Rspam-User: X-Rspamd-Queue-Id: A60F140008 X-HE-Tag: 1684218094-899857 X-HE-Meta: U2FsdGVkX1+jbR20VRKcops3KIpBkEqpcMZh8B2DeWZMYcVC/BVkIahhbTtFuMOBZCtKwcZv2y6FQE2XzxjBl9KCVzJUgVB/fAQNwAi43mxrsxppL+oGQ/VOOLjUa5r+gJJZyZa1euP3rz9COHqKI2H/BkuUh/FLcF92Jk3bcVzYJQ9koHyyWmFECE9gwdcLpDePYv/RgbWQL43UJbkJhp0dervZxQamvTtSrmlLtgo3mkC/nbALRpdJGyX5zqHdzTXmOfJFYfBq1K0CM5q+YmXOHBJtclm8wB94pPsT+PZIHRTx6lK34Wy+x4V5T9zuXmnRps4ft69inTBzaTAs1Vp5n/B2UUyuKsdrwCkoVVfKqfwLKazVDFAR/EyxMy0BMQqGW+mZAkZMwnfIoCvFH/wtFyuP2vJlH0l6oBrCt5Mt++/+ySvrGMygbb22YHJeYEtAWLtAP5ABf5fao0NQjY2iEZv1qLyC+fstkJyx6WbLObg4MkPcengF3z0tnsrLvIbqEcaxDaLiWV08c+K1BE3UTAD3uE4kGoCX0hmVz1GtWY6zSLAdwR1oAUKIIOo1R37GKYiZPJUlOD3/gS+gTDaGFNam2fqeHwhWeGCKZ5SSRO5JeIGG6o70ld+s36OdKZHBufR5kZ9cv6TatZR5zCU/MrckVaadgeswZ8ObbDQ8b77mGbVoHnPjU8DILSyXhWJB8yugQ6T1pGm5z6mGv2FbxOb6f8nyWzPOl7ymNR/q85gv6WWetu6Us5NlbINUGqGzEdmGy7DjjYOvwdSypILTyJiOvb+qk+hURxNnDc7WPMabfdIqobF9jk09X0SZAsKwaqg7BFF7RoD6DwEkzbg2Ag+s9JXU8w/F9BMmPf5vL2/GIEWXkxKrNpP9fKEoBALc4G5EHYPJnAIQxcSR3YtFltN8toYS9FXFC3WQ/ICMuH38Dtc4KmRRVmA66w6EDE6ddxHBzjvMbDHsdUe ZBI9ooeb JgOBi299uvAfuntV8i9HqCRGWiGuTbNIV2V8kNBnO/2LmuRpC3OpjtMwJIVUEncWqNVVlGOTRrQ0LJZWAeK9QSGIdXFPfnx3kUD/68t6K1zo3q/OOgZi+h69Vfnn66ER0VGLRS3XKksGPyZ49eeV+MjHgVRU7gBKG1srfSDzl3cn8L6K6BX24urdW9mbgE6Unc4R1atnPL0jEiN2Ql1d8CXynWz2Xn5GfV9G0VxUTfkhRk0WboZhDthvZnA== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Since commit 9420f89db2dd ("mm: move most of core MM initialization to mm/mm_init.c"), mirrored_kernelcore should be moved into mm_init.c, as most related codes are already there. Reviewed-by: Mike Rapoport (IBM) Signed-off-by: Kefeng Wang --- mm/mm_init.c | 2 ++ mm/page_alloc.c | 3 --- 2 files changed, 2 insertions(+), 3 deletions(-) diff --git a/mm/mm_init.c b/mm/mm_init.c index 7f7f9c677854..da162b7a044c 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -259,6 +259,8 @@ static int __init cmdline_parse_core(char *p, unsigned long *core, return 0; } +bool mirrored_kernelcore __initdata_memblock; + /* * kernelcore=size sets the amount of memory for use for allocations that * cannot be reclaimed or migrated. 
Reviewed-by: Mike Rapoport (IBM)
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/mm_init.c    | 2 ++
 mm/page_alloc.c | 3 ---
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 7f7f9c677854..da162b7a044c 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -259,6 +259,8 @@ static int __init cmdline_parse_core(char *p, unsigned long *core,
 	return 0;
 }
 
+bool mirrored_kernelcore __initdata_memblock;
+
 /*
  * kernelcore=size sets the amount of memory for use for allocations that
  * cannot be reclaimed or migrated.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1d6419cd3f37..4b4188cff820 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -23,7 +23,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -374,8 +373,6 @@ int user_min_free_kbytes = -1;
 int watermark_boost_factor __read_mostly = 15000;
 int watermark_scale_factor = 10;
 
-bool mirrored_kernelcore __initdata_memblock;
-
 /* movable_zone is the "real" zone pages in ZONE_MOVABLE are taken from */
 int movable_zone;
 EXPORT_SYMBOL(movable_zone);

From patchwork Tue May 16 06:38:10 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13242573
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton, Mike Rapoport
Cc: David Hildenbrand, Oscar Salvador, "Rafael J. Wysocki", Pavel Machek,
 Len Brown, Luis Chamberlain, Kees Cook, Iurii Zaikin, Kefeng Wang
Subject: [PATCH v2 02/13] mm: page_alloc: move init_on_alloc/free() into mm_init.c
Date: Tue, 16 May 2023 14:38:10 +0800
Message-ID: <20230516063821.121844-3-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230516063821.121844-1-wangkefeng.wang@huawei.com>
References: <20230516063821.121844-1-wangkefeng.wang@huawei.com>

Since commit f2fc4b44ec2b ("mm: move init_mem_debugging_and_hardening()
to mm/mm_init.c"), the definitions of init_on_alloc and init_on_free are
better moved there too.
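For context, the consumer side of these static keys stays in
include/linux/mm.h; a sketch of want_init_on_alloc() as I recall it
(quoted for context, untouched by this patch):

static inline bool want_init_on_alloc(gfp_t flags)
{
	if (static_branch_maybe(CONFIG_INIT_ON_ALLOC_DEFAULT_ON,
				&init_on_alloc))
		return true;
	return flags & __GFP_ZERO;
}

So only the key definitions move; every user keeps working unchanged.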
Reviewed-by: Mike Rapoport (IBM)
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/mm_init.c    | 6 ++++++
 mm/page_alloc.c | 5 -----
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index da162b7a044c..15201887f8e0 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2543,6 +2543,12 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
 	__free_pages_core(page, order);
 }
 
+DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_ALLOC_DEFAULT_ON, init_on_alloc);
+EXPORT_SYMBOL(init_on_alloc);
+
+DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_FREE_DEFAULT_ON, init_on_free);
+EXPORT_SYMBOL(init_on_free);
+
 static bool _init_on_alloc_enabled_early __read_mostly
 				= IS_ENABLED(CONFIG_INIT_ON_ALLOC_DEFAULT_ON);
 static int __init early_init_on_alloc(char *buf)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4b4188cff820..bc69a0474069 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -233,11 +233,6 @@ unsigned long totalcma_pages __read_mostly;
 int percpu_pagelist_high_fraction;
 gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK;
 
-DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_ALLOC_DEFAULT_ON, init_on_alloc);
-EXPORT_SYMBOL(init_on_alloc);
-
-DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_FREE_DEFAULT_ON, init_on_free);
-EXPORT_SYMBOL(init_on_free);
 
 /*
  * A cached value of the page's pageblock's migratetype, used when the page is

From patchwork Tue May 16 06:38:11 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13242574
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton, Mike Rapoport
Cc: David Hildenbrand, Oscar Salvador, "Rafael J. Wysocki", Pavel Machek,
 Len Brown, Luis Chamberlain, Kees Cook, Iurii Zaikin, Kefeng Wang
Subject: [PATCH v2 03/13] mm: page_alloc: move set_zone_contiguous() into mm_init.c
Date: Tue, 16 May 2023 14:38:11 +0800
Message-ID: <20230516063821.121844-4-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230516063821.121844-1-wangkefeng.wang@huawei.com>
References: <20230516063821.121844-1-wangkefeng.wang@huawei.com>

set_zone_contiguous() is only used in mm init/hotplug, and
clear_zone_contiguous() is only used in hotplug; move them out of
page_alloc.c and into the more appropriate files (mm/internal.h and
mm/mm_init.c).
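For context, hotplug-style callers bracket a zone-span change with this
pair; a simplified sketch (resize_zone_span() is a hypothetical stand-in
for the real resize logic, not an actual helper):

static void example_resize(struct zone *zone, unsigned long start_pfn,
			   unsigned long nr_pages)
{
	clear_zone_contiguous(zone);	/* span is about to change, invalidate */
	resize_zone_span(zone, start_pfn, nr_pages);	/* hypothetical */
	set_zone_contiguous(zone);	/* re-walk; stays false if a hole is found */
}

clear_zone_contiguous() is trivial, which is why it can become a static
inline in mm/internal.h, while set_zone_contiguous() has to walk the
zone's pageblocks.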
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/memory_hotplug.h |  3 ---
 mm/internal.h                  |  7 +++++++
 mm/mm_init.c                   | 22 ++++++++++++++++++++++
 mm/page_alloc.c                | 27 ---------------------------
 4 files changed, 29 insertions(+), 30 deletions(-)

diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 9fcbf5706595..04bc286eed42 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -326,9 +326,6 @@ static inline int remove_memory(u64 start, u64 size)
 static inline void __remove_memory(u64 start, u64 size) {}
 #endif /* CONFIG_MEMORY_HOTREMOVE */
 
-extern void set_zone_contiguous(struct zone *zone);
-extern void clear_zone_contiguous(struct zone *zone);
-
 #ifdef CONFIG_MEMORY_HOTPLUG
 extern void __ref free_area_init_core_hotplug(struct pglist_data *pgdat);
 extern int __add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags);
diff --git a/mm/internal.h b/mm/internal.h
index 644fa8b761f5..79324b7f2bc8 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -371,6 +371,13 @@ static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
 	return __pageblock_pfn_to_page(start_pfn, end_pfn, zone);
 }
 
+void set_zone_contiguous(struct zone *zone);
+
+static inline void clear_zone_contiguous(struct zone *zone)
+{
+	zone->contiguous = false;
+}
+
 extern int __isolate_free_page(struct page *page, unsigned int order);
 extern void __putback_isolated_page(struct page *page, unsigned int order,
 				    int mt);
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 15201887f8e0..0fd4ddfdfb2e 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2330,6 +2330,28 @@ void __init init_cma_reserved_pageblock(struct page *page)
 }
 #endif
 
+void set_zone_contiguous(struct zone *zone)
+{
+	unsigned long block_start_pfn = zone->zone_start_pfn;
+	unsigned long block_end_pfn;
+
+	block_end_pfn = pageblock_end_pfn(block_start_pfn);
+	for (; block_start_pfn < zone_end_pfn(zone);
+			block_start_pfn = block_end_pfn,
+			block_end_pfn += pageblock_nr_pages) {
+
+		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
+
+		if (!__pageblock_pfn_to_page(block_start_pfn,
+					     block_end_pfn, zone))
+			return;
+		cond_resched();
+	}
+
+	/* We confirm that there is no hole */
+	zone->contiguous = true;
+}
+
 void __init page_alloc_init_late(void)
 {
 	struct zone *zone;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index bc69a0474069..1b84b86fd33d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1532,33 +1532,6 @@ struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
 	return start_page;
 }
 
-void set_zone_contiguous(struct zone *zone)
-{
-	unsigned long block_start_pfn = zone->zone_start_pfn;
-	unsigned long block_end_pfn;
-
-	block_end_pfn = pageblock_end_pfn(block_start_pfn);
-	for (; block_start_pfn < zone_end_pfn(zone);
-			block_start_pfn = block_end_pfn,
-			block_end_pfn += pageblock_nr_pages) {
-
-		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
-
-		if (!__pageblock_pfn_to_page(block_start_pfn,
-					     block_end_pfn, zone))
-			return;
-		cond_resched();
-	}
-
-	/* We confirm that there is no hole */
-	zone->contiguous = true;
-}
-
-void clear_zone_contiguous(struct zone *zone)
-{
-	zone->contiguous = false;
-}
-
 /*
  * The order of subdivision here is critical for the IO subsystem.
  * Please do not alter this order without good reasons and regression

From patchwork Tue May 16 06:38:12 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13242577
Wysocki" , Pavel Machek , Len Brown , Luis Chamberlain , Kees Cook , Iurii Zaikin , , , , , Kefeng Wang Subject: [PATCH v2 04/13] mm: page_alloc: collect mem statistic into show_mem.c Date: Tue, 16 May 2023 14:38:12 +0800 Message-ID: <20230516063821.121844-5-wangkefeng.wang@huawei.com> X-Mailer: git-send-email 2.35.3 In-Reply-To: <20230516063821.121844-1-wangkefeng.wang@huawei.com> References: <20230516063821.121844-1-wangkefeng.wang@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.113.25] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemm500001.china.huawei.com (7.185.36.107) X-CFilter-Loop: Reflected X-Rspam-User: X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: 4D4CF40002 X-Stat-Signature: nm766mm7ordjn6qs1sic7kxk3nj7cxnm X-HE-Tag: 1684218095-620405 X-HE-Meta: U2FsdGVkX19TFOUPx60JuVvUKkYwXheLsoa5VRH5Ins/U4v+i3C39se8iymhb5Q9KUWjNwwXwh8Oc60O8SF0ZDgu8OsMr//3/x9BhEQqsOm1XlfqNC7exAeSI1ZHKWz8F8HhxkPTcDqRDSxGqJWL2htPjWs58nfrY+0ot+nf/KasXKUbqXGEVHQrbWm0wl/55kO6Ef1CUyM2kz2UyDnA5u/kRKoqAzdBmE5pQited2WGxuLSnLXLd1RzcutSlisLUp/dJeHWCty4yAkJvK32n96+P+aO8Q8NHGL37wSXpffmSCp3ImWIdQxhAV1iZcq7oMLj8dmM2AQnuXf6o1dxNKjLlRPort5qDnRi2aSilxV/xXcPfGO9ia2vtlwYmzhV1pf23ki2xqasAI1a3bNyyxrKhxoawaALgnER2fxsZmFxWudTO29SP8hxEeF4b9DIdjHBL5rGR++5G6D5zWypZlXgqMOnvheHUDo8QJcaIhTUxpJPZK+1LDksrv+c0hX+iAAsrPGwzbPGiTaoLrBVQJAIrdLrTc2c8d48e/QhbqH/6UmS7OC9PN18wnUfbGlH4U742CKa0QgpMopl4MEDEO40AVJT/Wcw4hwqVQ6vuo+U0D1ldjKMBL9uHf8sJ5VYZlHqLmOfJRXAvnwKE+l7/0YHSTLIc1CD7nc6FdEF0AUQZDe7RR/7R/xs6+gqtaZfo7yAtw0aZSaEBTrxfxGuug4Eq7erYXnlcJ+s+9BTjAR4jVfnljI43sVYdLeF7CRAdlLxWA/aJZpTudwQkWrrfc0XU9crF3tp0tWDxzuPthbSI6r5xxYxWqQLhnE0/9DDEl5oafbYDZg163XIVvgMOaI/lagYDrW9cHfRsT3ZRcVpXiIs+Vq5gT4lHLvdtfI2Z6jjjdKS8q4/Ez4WLU3MsViw4TwMmO4cuHOkKXTpvmAPWie6fyneUZ9nFTUK/raQtvVnbB2bXnOeY9FQGIh Z2Dk1IJx Jli1xJzkfoLMF+dMJGE3pI1TGAUP1qFan3EpcnH2HQdxK5ASos/IVcdeT91So8UnfKiOUJ7VxXoGRsuOC0iNueCp4mE9i6PkJHEXMx95Lwa3EVz8qTVI8phBAJRs77PKCZGJjnJYpYfX899cQY+9l2QfPHyTOSQQcVVmW2UpmcNkZNL1BWCyWEZVK+KQYQkxDAzHqT8Pv5qyeEEA= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Let's move show_mem.c from lib to mm, as it belongs memory subsystem, also split some memory statistic related functions from page_alloc.c to show_mem.c, and we cleanup some unneeded include. There is no functional change. 
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 lib/Makefile    |   2 +-
 lib/show_mem.c  |  37 -----
 mm/Makefile     |   2 +-
 mm/page_alloc.c | 402 ---------------------------------------
 mm/show_mem.c   | 429 ++++++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 431 insertions(+), 441 deletions(-)
 delete mode 100644 lib/show_mem.c
 create mode 100644 mm/show_mem.c

diff --git a/lib/Makefile b/lib/Makefile
index 876fcdeae34e..38f23f352736 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -30,7 +30,7 @@ endif
 lib-y := ctype.o string.o vsprintf.o cmdline.o \
	 rbtree.o radix-tree.o timerqueue.o xarray.o \
	 maple_tree.o idr.o extable.o irq_regs.o argv_split.o \
-	 flex_proportions.o ratelimit.o show_mem.o \
+	 flex_proportions.o ratelimit.o \
	 is_single_threaded.o plist.o decompress.o kobject_uevent.o \
	 earlycpio.o seq_buf.o siphash.o dec_and_lock.o \
	 nmi_backtrace.o win_minmax.o memcat_p.o \
diff --git a/lib/show_mem.c b/lib/show_mem.c
deleted file mode 100644
index 1485c87be935..000000000000
--- a/lib/show_mem.c
+++ /dev/null
@@ -1,37 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Generic show_mem() implementation
- *
- * Copyright (C) 2008 Johannes Weiner
- */
-
-#include
-#include
-
-void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
-{
-	unsigned long total = 0, reserved = 0, highmem = 0;
-	struct zone *zone;
-
-	printk("Mem-Info:\n");
-	__show_free_areas(filter, nodemask, max_zone_idx);
-
-	for_each_populated_zone(zone) {
-
-		total += zone->present_pages;
-		reserved += zone->present_pages - zone_managed_pages(zone);
-
-		if (is_highmem(zone))
-			highmem += zone->present_pages;
-	}
-
-	printk("%lu pages RAM\n", total);
-	printk("%lu pages HighMem/MovableOnly\n", highmem);
-	printk("%lu pages reserved\n", reserved);
-#ifdef CONFIG_CMA
-	printk("%lu pages cma reserved\n", totalcma_pages);
-#endif
-#ifdef CONFIG_MEMORY_FAILURE
-	printk("%lu pages hwpoisoned\n", atomic_long_read(&num_poisoned_pages));
-#endif
-}
diff --git a/mm/Makefile b/mm/Makefile
index e29afc890cde..5262ce5baa28 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -51,7 +51,7 @@ obj-y := filemap.o mempool.o oom_kill.o fadvise.o \
			   readahead.o swap.o truncate.o vmscan.o shmem.o \
			   util.o mmzone.o vmstat.o backing-dev.o \
			   mm_init.o percpu.o slab_common.o \
-			   compaction.o \
+			   compaction.o show_mem.o\
			   interval_tree.o list_lru.o workingset.o \
			   debug.o gup.o mmap_lock.o $(mmu-y)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1b84b86fd33d..84ba6cca3b3a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -18,10 +18,7 @@
 #include
 #include
 #include
-#include
-#include
 #include
-#include
 #include
 #include
 #include
@@ -30,8 +27,6 @@
 #include
 #include
 #include
-#include
-#include
 #include
 #include
 #include
@@ -40,19 +35,10 @@
 #include
 #include
 #include
-#include
 #include
-#include
-#include
-#include
-#include
 #include
 #include
-#include
 #include
-#include
-#include
-#include
 #include
 #include
 #include
@@ -60,12 +46,9 @@
 #include
 #include
 #include
-#include
-#include
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -73,13 +56,10 @@
 #include
 #include
 #include
-#include
-#include
 #include
 #include "internal.h"
 #include "shuffle.h"
 #include "page_reporting.h"
-#include "swap.h"
 
 /* Free Page Internal flags: for internal, non-pcp variants of free_pages(). */
 typedef int __bitwise fpi_t;
@@ -226,11 +206,6 @@ nodemask_t node_states[NR_NODE_STATES] __read_mostly = {
 };
 EXPORT_SYMBOL(node_states);
 
-atomic_long_t _totalram_pages __read_mostly;
-EXPORT_SYMBOL(_totalram_pages);
-unsigned long totalreserve_pages __read_mostly;
-unsigned long totalcma_pages __read_mostly;
-
 int percpu_pagelist_high_fraction;
 gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK;
@@ -5139,383 +5114,6 @@ unsigned long nr_free_buffer_pages(void)
 }
 EXPORT_SYMBOL_GPL(nr_free_buffer_pages);
 
-static inline void show_node(struct zone *zone)
-{
-	if (IS_ENABLED(CONFIG_NUMA))
-		printk("Node %d ", zone_to_nid(zone));
-}
-
-long si_mem_available(void)
-{
-	long available;
-	unsigned long pagecache;
-	unsigned long wmark_low = 0;
-	unsigned long pages[NR_LRU_LISTS];
-	unsigned long reclaimable;
-	struct zone *zone;
-	int lru;
-
-	for (lru = LRU_BASE; lru < NR_LRU_LISTS; lru++)
-		pages[lru] = global_node_page_state(NR_LRU_BASE + lru);
-
-	for_each_zone(zone)
-		wmark_low += low_wmark_pages(zone);
-
-	/*
-	 * Estimate the amount of memory available for userspace allocations,
-	 * without causing swapping or OOM.
-	 */
-	available = global_zone_page_state(NR_FREE_PAGES) - totalreserve_pages;
-
-	/*
-	 * Not all the page cache can be freed, otherwise the system will
-	 * start swapping or thrashing. Assume at least half of the page
-	 * cache, or the low watermark worth of cache, needs to stay.
-	 */
-	pagecache = pages[LRU_ACTIVE_FILE] + pages[LRU_INACTIVE_FILE];
-	pagecache -= min(pagecache / 2, wmark_low);
-	available += pagecache;
-
-	/*
-	 * Part of the reclaimable slab and other kernel memory consists of
-	 * items that are in use, and cannot be freed. Cap this estimate at the
-	 * low watermark.
-	 */
-	reclaimable = global_node_page_state_pages(NR_SLAB_RECLAIMABLE_B) +
-		global_node_page_state(NR_KERNEL_MISC_RECLAIMABLE);
-	available += reclaimable - min(reclaimable / 2, wmark_low);
-
-	if (available < 0)
-		available = 0;
-	return available;
-}
-EXPORT_SYMBOL_GPL(si_mem_available);
-
-void si_meminfo(struct sysinfo *val)
-{
-	val->totalram = totalram_pages();
-	val->sharedram = global_node_page_state(NR_SHMEM);
-	val->freeram = global_zone_page_state(NR_FREE_PAGES);
-	val->bufferram = nr_blockdev_pages();
-	val->totalhigh = totalhigh_pages();
-	val->freehigh = nr_free_highpages();
-	val->mem_unit = PAGE_SIZE;
-}
-
-EXPORT_SYMBOL(si_meminfo);
-
-#ifdef CONFIG_NUMA
-void si_meminfo_node(struct sysinfo *val, int nid)
-{
-	int zone_type;		/* needs to be signed */
-	unsigned long managed_pages = 0;
-	unsigned long managed_highpages = 0;
-	unsigned long free_highpages = 0;
-	pg_data_t *pgdat = NODE_DATA(nid);
-
-	for (zone_type = 0; zone_type < MAX_NR_ZONES; zone_type++)
-		managed_pages += zone_managed_pages(&pgdat->node_zones[zone_type]);
-	val->totalram = managed_pages;
-	val->sharedram = node_page_state(pgdat, NR_SHMEM);
-	val->freeram = sum_zone_node_page_state(nid, NR_FREE_PAGES);
-#ifdef CONFIG_HIGHMEM
-	for (zone_type = 0; zone_type < MAX_NR_ZONES; zone_type++) {
-		struct zone *zone = &pgdat->node_zones[zone_type];
-
-		if (is_highmem(zone)) {
-			managed_highpages += zone_managed_pages(zone);
-			free_highpages += zone_page_state(zone, NR_FREE_PAGES);
-		}
-	}
-	val->totalhigh = managed_highpages;
-	val->freehigh = free_highpages;
-#else
-	val->totalhigh = managed_highpages;
-	val->freehigh = free_highpages;
-#endif
-	val->mem_unit = PAGE_SIZE;
-}
-#endif
-
-/*
- * Determine whether the node should be displayed or not, depending on whether
- * SHOW_MEM_FILTER_NODES was passed to show_free_areas().
- */
-static bool show_mem_node_skip(unsigned int flags, int nid, nodemask_t *nodemask)
-{
-	if (!(flags & SHOW_MEM_FILTER_NODES))
-		return false;
-
-	/*
-	 * no node mask - aka implicit memory numa policy. Do not bother with
-	 * the synchronization - read_mems_allowed_begin - because we do not
-	 * have to be precise here.
-	 */
-	if (!nodemask)
-		nodemask = &cpuset_current_mems_allowed;
-
-	return !node_isset(nid, *nodemask);
-}
-
-static void show_migration_types(unsigned char type)
-{
-	static const char types[MIGRATE_TYPES] = {
-		[MIGRATE_UNMOVABLE]	= 'U',
-		[MIGRATE_MOVABLE]	= 'M',
-		[MIGRATE_RECLAIMABLE]	= 'E',
-		[MIGRATE_HIGHATOMIC]	= 'H',
-#ifdef CONFIG_CMA
-		[MIGRATE_CMA]		= 'C',
-#endif
-#ifdef CONFIG_MEMORY_ISOLATION
-		[MIGRATE_ISOLATE]	= 'I',
-#endif
-	};
-	char tmp[MIGRATE_TYPES + 1];
-	char *p = tmp;
-	int i;
-
-	for (i = 0; i < MIGRATE_TYPES; i++) {
-		if (type & (1 << i))
-			*p++ = types[i];
-	}
-
-	*p = '\0';
-	printk(KERN_CONT "(%s) ", tmp);
-}
-
-static bool node_has_managed_zones(pg_data_t *pgdat, int max_zone_idx)
-{
-	int zone_idx;
-	for (zone_idx = 0; zone_idx <= max_zone_idx; zone_idx++)
-		if (zone_managed_pages(pgdat->node_zones + zone_idx))
-			return true;
-	return false;
-}
-
-/*
- * Show free area list (used inside shift_scroll-lock stuff)
- * We also calculate the percentage fragmentation. We do this by counting the
- * memory on each free list with the exception of the first item on the list.
- *
- * Bits in @filter:
- * SHOW_MEM_FILTER_NODES: suppress nodes that are not allowed by current's
- *   cpuset.
- */
-void __show_free_areas(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
-{
-	unsigned long free_pcp = 0;
-	int cpu, nid;
-	struct zone *zone;
-	pg_data_t *pgdat;
-
-	for_each_populated_zone(zone) {
-		if (zone_idx(zone) > max_zone_idx)
-			continue;
-		if (show_mem_node_skip(filter, zone_to_nid(zone), nodemask))
-			continue;
-
-		for_each_online_cpu(cpu)
-			free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
-	}
-
-	printk("active_anon:%lu inactive_anon:%lu isolated_anon:%lu\n"
-		" active_file:%lu inactive_file:%lu isolated_file:%lu\n"
-		" unevictable:%lu dirty:%lu writeback:%lu\n"
-		" slab_reclaimable:%lu slab_unreclaimable:%lu\n"
-		" mapped:%lu shmem:%lu pagetables:%lu\n"
-		" sec_pagetables:%lu bounce:%lu\n"
-		" kernel_misc_reclaimable:%lu\n"
-		" free:%lu free_pcp:%lu free_cma:%lu\n",
-		global_node_page_state(NR_ACTIVE_ANON),
-		global_node_page_state(NR_INACTIVE_ANON),
-		global_node_page_state(NR_ISOLATED_ANON),
-		global_node_page_state(NR_ACTIVE_FILE),
-		global_node_page_state(NR_INACTIVE_FILE),
-		global_node_page_state(NR_ISOLATED_FILE),
-		global_node_page_state(NR_UNEVICTABLE),
-		global_node_page_state(NR_FILE_DIRTY),
-		global_node_page_state(NR_WRITEBACK),
-		global_node_page_state_pages(NR_SLAB_RECLAIMABLE_B),
-		global_node_page_state_pages(NR_SLAB_UNRECLAIMABLE_B),
-		global_node_page_state(NR_FILE_MAPPED),
-		global_node_page_state(NR_SHMEM),
-		global_node_page_state(NR_PAGETABLE),
-		global_node_page_state(NR_SECONDARY_PAGETABLE),
-		global_zone_page_state(NR_BOUNCE),
-		global_node_page_state(NR_KERNEL_MISC_RECLAIMABLE),
-		global_zone_page_state(NR_FREE_PAGES),
-		free_pcp,
-		global_zone_page_state(NR_FREE_CMA_PAGES));
-
-	for_each_online_pgdat(pgdat) {
-		if (show_mem_node_skip(filter, pgdat->node_id, nodemask))
-			continue;
-		if (!node_has_managed_zones(pgdat, max_zone_idx))
-			continue;
-
-		printk("Node %d"
-			" active_anon:%lukB"
-			" inactive_anon:%lukB"
-			" active_file:%lukB"
-			" inactive_file:%lukB"
-			" unevictable:%lukB"
" isolated(anon):%lukB" - " isolated(file):%lukB" - " mapped:%lukB" - " dirty:%lukB" - " writeback:%lukB" - " shmem:%lukB" -#ifdef CONFIG_TRANSPARENT_HUGEPAGE - " shmem_thp: %lukB" - " shmem_pmdmapped: %lukB" - " anon_thp: %lukB" -#endif - " writeback_tmp:%lukB" - " kernel_stack:%lukB" -#ifdef CONFIG_SHADOW_CALL_STACK - " shadow_call_stack:%lukB" -#endif - " pagetables:%lukB" - " sec_pagetables:%lukB" - " all_unreclaimable? %s" - "\n", - pgdat->node_id, - K(node_page_state(pgdat, NR_ACTIVE_ANON)), - K(node_page_state(pgdat, NR_INACTIVE_ANON)), - K(node_page_state(pgdat, NR_ACTIVE_FILE)), - K(node_page_state(pgdat, NR_INACTIVE_FILE)), - K(node_page_state(pgdat, NR_UNEVICTABLE)), - K(node_page_state(pgdat, NR_ISOLATED_ANON)), - K(node_page_state(pgdat, NR_ISOLATED_FILE)), - K(node_page_state(pgdat, NR_FILE_MAPPED)), - K(node_page_state(pgdat, NR_FILE_DIRTY)), - K(node_page_state(pgdat, NR_WRITEBACK)), - K(node_page_state(pgdat, NR_SHMEM)), -#ifdef CONFIG_TRANSPARENT_HUGEPAGE - K(node_page_state(pgdat, NR_SHMEM_THPS)), - K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED)), - K(node_page_state(pgdat, NR_ANON_THPS)), -#endif - K(node_page_state(pgdat, NR_WRITEBACK_TEMP)), - node_page_state(pgdat, NR_KERNEL_STACK_KB), -#ifdef CONFIG_SHADOW_CALL_STACK - node_page_state(pgdat, NR_KERNEL_SCS_KB), -#endif - K(node_page_state(pgdat, NR_PAGETABLE)), - K(node_page_state(pgdat, NR_SECONDARY_PAGETABLE)), - pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES ? - "yes" : "no"); - } - - for_each_populated_zone(zone) { - int i; - - if (zone_idx(zone) > max_zone_idx) - continue; - if (show_mem_node_skip(filter, zone_to_nid(zone), nodemask)) - continue; - - free_pcp = 0; - for_each_online_cpu(cpu) - free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count; - - show_node(zone); - printk(KERN_CONT - "%s" - " free:%lukB" - " boost:%lukB" - " min:%lukB" - " low:%lukB" - " high:%lukB" - " reserved_highatomic:%luKB" - " active_anon:%lukB" - " inactive_anon:%lukB" - " active_file:%lukB" - " inactive_file:%lukB" - " unevictable:%lukB" - " writepending:%lukB" - " present:%lukB" - " managed:%lukB" - " mlocked:%lukB" - " bounce:%lukB" - " free_pcp:%lukB" - " local_pcp:%ukB" - " free_cma:%lukB" - "\n", - zone->name, - K(zone_page_state(zone, NR_FREE_PAGES)), - K(zone->watermark_boost), - K(min_wmark_pages(zone)), - K(low_wmark_pages(zone)), - K(high_wmark_pages(zone)), - K(zone->nr_reserved_highatomic), - K(zone_page_state(zone, NR_ZONE_ACTIVE_ANON)), - K(zone_page_state(zone, NR_ZONE_INACTIVE_ANON)), - K(zone_page_state(zone, NR_ZONE_ACTIVE_FILE)), - K(zone_page_state(zone, NR_ZONE_INACTIVE_FILE)), - K(zone_page_state(zone, NR_ZONE_UNEVICTABLE)), - K(zone_page_state(zone, NR_ZONE_WRITE_PENDING)), - K(zone->present_pages), - K(zone_managed_pages(zone)), - K(zone_page_state(zone, NR_MLOCK)), - K(zone_page_state(zone, NR_BOUNCE)), - K(free_pcp), - K(this_cpu_read(zone->per_cpu_pageset->count)), - K(zone_page_state(zone, NR_FREE_CMA_PAGES))); - printk("lowmem_reserve[]:"); - for (i = 0; i < MAX_NR_ZONES; i++) - printk(KERN_CONT " %ld", zone->lowmem_reserve[i]); - printk(KERN_CONT "\n"); - } - - for_each_populated_zone(zone) { - unsigned int order; - unsigned long nr[MAX_ORDER + 1], flags, total = 0; - unsigned char types[MAX_ORDER + 1]; - - if (zone_idx(zone) > max_zone_idx) - continue; - if (show_mem_node_skip(filter, zone_to_nid(zone), nodemask)) - continue; - show_node(zone); - printk(KERN_CONT "%s: ", zone->name); - - spin_lock_irqsave(&zone->lock, flags); - for (order = 0; order <= MAX_ORDER; order++) { - struct free_area *area = 
-			int type;
-
-			nr[order] = area->nr_free;
-			total += nr[order] << order;
-
-			types[order] = 0;
-			for (type = 0; type < MIGRATE_TYPES; type++) {
-				if (!free_area_empty(area, type))
-					types[order] |= 1 << type;
-			}
-		}
-		spin_unlock_irqrestore(&zone->lock, flags);
-		for (order = 0; order <= MAX_ORDER; order++) {
-			printk(KERN_CONT "%lu*%lukB ",
-			       nr[order], K(1UL) << order);
-			if (nr[order])
-				show_migration_types(types[order]);
-		}
-		printk(KERN_CONT "= %lukB\n", K(total));
-	}
-
-	for_each_online_node(nid) {
-		if (show_mem_node_skip(filter, nid, nodemask))
-			continue;
-		hugetlb_show_meminfo_node(nid);
-	}
-
-	printk("%ld total pagecache pages\n", global_node_page_state(NR_FILE_PAGES));
-
-	show_swap_cache_info();
-}
-
 static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
 {
 	zoneref->zone = zone;
diff --git a/mm/show_mem.c b/mm/show_mem.c
new file mode 100644
index 000000000000..01f8e9905817
--- /dev/null
+++ b/mm/show_mem.c
@@ -0,0 +1,429 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Generic show_mem() implementation
+ *
+ * Copyright (C) 2008 Johannes Weiner
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "internal.h"
+#include "swap.h"
+
+atomic_long_t _totalram_pages __read_mostly;
+EXPORT_SYMBOL(_totalram_pages);
+unsigned long totalreserve_pages __read_mostly;
+unsigned long totalcma_pages __read_mostly;
+
+static inline void show_node(struct zone *zone)
+{
+	if (IS_ENABLED(CONFIG_NUMA))
+		printk("Node %d ", zone_to_nid(zone));
+}
+
+long si_mem_available(void)
+{
+	long available;
+	unsigned long pagecache;
+	unsigned long wmark_low = 0;
+	unsigned long pages[NR_LRU_LISTS];
+	unsigned long reclaimable;
+	struct zone *zone;
+	int lru;
+
+	for (lru = LRU_BASE; lru < NR_LRU_LISTS; lru++)
+		pages[lru] = global_node_page_state(NR_LRU_BASE + lru);
+
+	for_each_zone(zone)
+		wmark_low += low_wmark_pages(zone);
+
+	/*
+	 * Estimate the amount of memory available for userspace allocations,
+	 * without causing swapping or OOM.
+	 */
+	available = global_zone_page_state(NR_FREE_PAGES) - totalreserve_pages;
+
+	/*
+	 * Not all the page cache can be freed, otherwise the system will
+	 * start swapping or thrashing. Assume at least half of the page
+	 * cache, or the low watermark worth of cache, needs to stay.
+	 */
+	pagecache = pages[LRU_ACTIVE_FILE] + pages[LRU_INACTIVE_FILE];
+	pagecache -= min(pagecache / 2, wmark_low);
+	available += pagecache;
+
+	/*
+	 * Part of the reclaimable slab and other kernel memory consists of
+	 * items that are in use, and cannot be freed. Cap this estimate at the
+	 * low watermark.
+	 */
+	reclaimable = global_node_page_state_pages(NR_SLAB_RECLAIMABLE_B) +
+		global_node_page_state(NR_KERNEL_MISC_RECLAIMABLE);
+	available += reclaimable - min(reclaimable / 2, wmark_low);
+
+	if (available < 0)
+		available = 0;
+	return available;
+}
+EXPORT_SYMBOL_GPL(si_mem_available);
+
+void si_meminfo(struct sysinfo *val)
+{
+	val->totalram = totalram_pages();
+	val->sharedram = global_node_page_state(NR_SHMEM);
+	val->freeram = global_zone_page_state(NR_FREE_PAGES);
+	val->bufferram = nr_blockdev_pages();
+	val->totalhigh = totalhigh_pages();
+	val->freehigh = nr_free_highpages();
+	val->mem_unit = PAGE_SIZE;
+}
+
+EXPORT_SYMBOL(si_meminfo);
+
+#ifdef CONFIG_NUMA
+void si_meminfo_node(struct sysinfo *val, int nid)
+{
+	int zone_type;		/* needs to be signed */
+	unsigned long managed_pages = 0;
+	unsigned long managed_highpages = 0;
+	unsigned long free_highpages = 0;
+	pg_data_t *pgdat = NODE_DATA(nid);
+
+	for (zone_type = 0; zone_type < MAX_NR_ZONES; zone_type++)
+		managed_pages += zone_managed_pages(&pgdat->node_zones[zone_type]);
+	val->totalram = managed_pages;
+	val->sharedram = node_page_state(pgdat, NR_SHMEM);
+	val->freeram = sum_zone_node_page_state(nid, NR_FREE_PAGES);
+#ifdef CONFIG_HIGHMEM
+	for (zone_type = 0; zone_type < MAX_NR_ZONES; zone_type++) {
+		struct zone *zone = &pgdat->node_zones[zone_type];
+
+		if (is_highmem(zone)) {
+			managed_highpages += zone_managed_pages(zone);
+			free_highpages += zone_page_state(zone, NR_FREE_PAGES);
+		}
+	}
+	val->totalhigh = managed_highpages;
+	val->freehigh = free_highpages;
+#else
+	val->totalhigh = managed_highpages;
+	val->freehigh = free_highpages;
+#endif
+	val->mem_unit = PAGE_SIZE;
+}
+#endif
+
+/*
+ * Determine whether the node should be displayed or not, depending on whether
+ * SHOW_MEM_FILTER_NODES was passed to show_free_areas().
+ */
+static bool show_mem_node_skip(unsigned int flags, int nid, nodemask_t *nodemask)
+{
+	if (!(flags & SHOW_MEM_FILTER_NODES))
+		return false;
+
+	/*
+	 * no node mask - aka implicit memory numa policy. Do not bother with
+	 * the synchronization - read_mems_allowed_begin - because we do not
+	 * have to be precise here.
+	 */
+	if (!nodemask)
+		nodemask = &cpuset_current_mems_allowed;
+
+	return !node_isset(nid, *nodemask);
+}
+
+static void show_migration_types(unsigned char type)
+{
+	static const char types[MIGRATE_TYPES] = {
+		[MIGRATE_UNMOVABLE]	= 'U',
+		[MIGRATE_MOVABLE]	= 'M',
+		[MIGRATE_RECLAIMABLE]	= 'E',
+		[MIGRATE_HIGHATOMIC]	= 'H',
+#ifdef CONFIG_CMA
+		[MIGRATE_CMA]		= 'C',
+#endif
+#ifdef CONFIG_MEMORY_ISOLATION
+		[MIGRATE_ISOLATE]	= 'I',
+#endif
+	};
+	char tmp[MIGRATE_TYPES + 1];
+	char *p = tmp;
+	int i;
+
+	for (i = 0; i < MIGRATE_TYPES; i++) {
+		if (type & (1 << i))
+			*p++ = types[i];
+	}
+
+	*p = '\0';
+	printk(KERN_CONT "(%s) ", tmp);
+}
+
+static bool node_has_managed_zones(pg_data_t *pgdat, int max_zone_idx)
+{
+	int zone_idx;
+	for (zone_idx = 0; zone_idx <= max_zone_idx; zone_idx++)
+		if (zone_managed_pages(pgdat->node_zones + zone_idx))
+			return true;
+	return false;
+}
+
+/*
+ * Show free area list (used inside shift_scroll-lock stuff)
+ * We also calculate the percentage fragmentation. We do this by counting the
+ * memory on each free list with the exception of the first item on the list.
+ *
+ * Bits in @filter:
+ * SHOW_MEM_FILTER_NODES: suppress nodes that are not allowed by current's
+ *   cpuset.
+ */
+void __show_free_areas(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
+{
+	unsigned long free_pcp = 0;
+	int cpu, nid;
+	struct zone *zone;
+	pg_data_t *pgdat;
+
+	for_each_populated_zone(zone) {
+		if (zone_idx(zone) > max_zone_idx)
+			continue;
+		if (show_mem_node_skip(filter, zone_to_nid(zone), nodemask))
+			continue;
+
+		for_each_online_cpu(cpu)
+			free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
+	}
+
+	printk("active_anon:%lu inactive_anon:%lu isolated_anon:%lu\n"
+		" active_file:%lu inactive_file:%lu isolated_file:%lu\n"
+		" unevictable:%lu dirty:%lu writeback:%lu\n"
+		" slab_reclaimable:%lu slab_unreclaimable:%lu\n"
+		" mapped:%lu shmem:%lu pagetables:%lu\n"
+		" sec_pagetables:%lu bounce:%lu\n"
+		" kernel_misc_reclaimable:%lu\n"
+		" free:%lu free_pcp:%lu free_cma:%lu\n",
+		global_node_page_state(NR_ACTIVE_ANON),
+		global_node_page_state(NR_INACTIVE_ANON),
+		global_node_page_state(NR_ISOLATED_ANON),
+		global_node_page_state(NR_ACTIVE_FILE),
+		global_node_page_state(NR_INACTIVE_FILE),
+		global_node_page_state(NR_ISOLATED_FILE),
+		global_node_page_state(NR_UNEVICTABLE),
+		global_node_page_state(NR_FILE_DIRTY),
+		global_node_page_state(NR_WRITEBACK),
+		global_node_page_state_pages(NR_SLAB_RECLAIMABLE_B),
+		global_node_page_state_pages(NR_SLAB_UNRECLAIMABLE_B),
+		global_node_page_state(NR_FILE_MAPPED),
+		global_node_page_state(NR_SHMEM),
+		global_node_page_state(NR_PAGETABLE),
+		global_node_page_state(NR_SECONDARY_PAGETABLE),
+		global_zone_page_state(NR_BOUNCE),
+		global_node_page_state(NR_KERNEL_MISC_RECLAIMABLE),
+		global_zone_page_state(NR_FREE_PAGES),
+		free_pcp,
+		global_zone_page_state(NR_FREE_CMA_PAGES));
+
+	for_each_online_pgdat(pgdat) {
+		if (show_mem_node_skip(filter, pgdat->node_id, nodemask))
+			continue;
+		if (!node_has_managed_zones(pgdat, max_zone_idx))
+			continue;
+
+		printk("Node %d"
+			" active_anon:%lukB"
+			" inactive_anon:%lukB"
+			" active_file:%lukB"
+			" inactive_file:%lukB"
+			" unevictable:%lukB"
+			" isolated(anon):%lukB"
+			" isolated(file):%lukB"
+			" mapped:%lukB"
+			" dirty:%lukB"
+			" writeback:%lukB"
+			" shmem:%lukB"
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+			" shmem_thp: %lukB"
+			" shmem_pmdmapped: %lukB"
+			" anon_thp: %lukB"
+#endif
+			" writeback_tmp:%lukB"
+			" kernel_stack:%lukB"
+#ifdef CONFIG_SHADOW_CALL_STACK
+			" shadow_call_stack:%lukB"
+#endif
+			" pagetables:%lukB"
+			" sec_pagetables:%lukB"
+			" all_unreclaimable? %s"
+			"\n",
+			pgdat->node_id,
+			K(node_page_state(pgdat, NR_ACTIVE_ANON)),
+			K(node_page_state(pgdat, NR_INACTIVE_ANON)),
+			K(node_page_state(pgdat, NR_ACTIVE_FILE)),
+			K(node_page_state(pgdat, NR_INACTIVE_FILE)),
+			K(node_page_state(pgdat, NR_UNEVICTABLE)),
+			K(node_page_state(pgdat, NR_ISOLATED_ANON)),
+			K(node_page_state(pgdat, NR_ISOLATED_FILE)),
+			K(node_page_state(pgdat, NR_FILE_MAPPED)),
+			K(node_page_state(pgdat, NR_FILE_DIRTY)),
+			K(node_page_state(pgdat, NR_WRITEBACK)),
+			K(node_page_state(pgdat, NR_SHMEM)),
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+			K(node_page_state(pgdat, NR_SHMEM_THPS)),
+			K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED)),
+			K(node_page_state(pgdat, NR_ANON_THPS)),
+#endif
+			K(node_page_state(pgdat, NR_WRITEBACK_TEMP)),
+			node_page_state(pgdat, NR_KERNEL_STACK_KB),
+#ifdef CONFIG_SHADOW_CALL_STACK
+			node_page_state(pgdat, NR_KERNEL_SCS_KB),
+#endif
+			K(node_page_state(pgdat, NR_PAGETABLE)),
+			K(node_page_state(pgdat, NR_SECONDARY_PAGETABLE)),
+			pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES ?
+ "yes" : "no"); + } + + for_each_populated_zone(zone) { + int i; + + if (zone_idx(zone) > max_zone_idx) + continue; + if (show_mem_node_skip(filter, zone_to_nid(zone), nodemask)) + continue; + + free_pcp = 0; + for_each_online_cpu(cpu) + free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count; + + show_node(zone); + printk(KERN_CONT + "%s" + " free:%lukB" + " boost:%lukB" + " min:%lukB" + " low:%lukB" + " high:%lukB" + " reserved_highatomic:%luKB" + " active_anon:%lukB" + " inactive_anon:%lukB" + " active_file:%lukB" + " inactive_file:%lukB" + " unevictable:%lukB" + " writepending:%lukB" + " present:%lukB" + " managed:%lukB" + " mlocked:%lukB" + " bounce:%lukB" + " free_pcp:%lukB" + " local_pcp:%ukB" + " free_cma:%lukB" + "\n", + zone->name, + K(zone_page_state(zone, NR_FREE_PAGES)), + K(zone->watermark_boost), + K(min_wmark_pages(zone)), + K(low_wmark_pages(zone)), + K(high_wmark_pages(zone)), + K(zone->nr_reserved_highatomic), + K(zone_page_state(zone, NR_ZONE_ACTIVE_ANON)), + K(zone_page_state(zone, NR_ZONE_INACTIVE_ANON)), + K(zone_page_state(zone, NR_ZONE_ACTIVE_FILE)), + K(zone_page_state(zone, NR_ZONE_INACTIVE_FILE)), + K(zone_page_state(zone, NR_ZONE_UNEVICTABLE)), + K(zone_page_state(zone, NR_ZONE_WRITE_PENDING)), + K(zone->present_pages), + K(zone_managed_pages(zone)), + K(zone_page_state(zone, NR_MLOCK)), + K(zone_page_state(zone, NR_BOUNCE)), + K(free_pcp), + K(this_cpu_read(zone->per_cpu_pageset->count)), + K(zone_page_state(zone, NR_FREE_CMA_PAGES))); + printk("lowmem_reserve[]:"); + for (i = 0; i < MAX_NR_ZONES; i++) + printk(KERN_CONT " %ld", zone->lowmem_reserve[i]); + printk(KERN_CONT "\n"); + } + + for_each_populated_zone(zone) { + unsigned int order; + unsigned long nr[MAX_ORDER + 1], flags, total = 0; + unsigned char types[MAX_ORDER + 1]; + + if (zone_idx(zone) > max_zone_idx) + continue; + if (show_mem_node_skip(filter, zone_to_nid(zone), nodemask)) + continue; + show_node(zone); + printk(KERN_CONT "%s: ", zone->name); + + spin_lock_irqsave(&zone->lock, flags); + for (order = 0; order <= MAX_ORDER; order++) { + struct free_area *area = &zone->free_area[order]; + int type; + + nr[order] = area->nr_free; + total += nr[order] << order; + + types[order] = 0; + for (type = 0; type < MIGRATE_TYPES; type++) { + if (!free_area_empty(area, type)) + types[order] |= 1 << type; + } + } + spin_unlock_irqrestore(&zone->lock, flags); + for (order = 0; order <= MAX_ORDER; order++) { + printk(KERN_CONT "%lu*%lukB ", + nr[order], K(1UL) << order); + if (nr[order]) + show_migration_types(types[order]); + } + printk(KERN_CONT "= %lukB\n", K(total)); + } + + for_each_online_node(nid) { + if (show_mem_node_skip(filter, nid, nodemask)) + continue; + hugetlb_show_meminfo_node(nid); + } + + printk("%ld total pagecache pages\n", global_node_page_state(NR_FILE_PAGES)); + + show_swap_cache_info(); +} + +void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx) +{ + unsigned long total = 0, reserved = 0, highmem = 0; + struct zone *zone; + + printk("Mem-Info:\n"); + __show_free_areas(filter, nodemask, max_zone_idx); + + for_each_populated_zone(zone) { + + total += zone->present_pages; + reserved += zone->present_pages - zone_managed_pages(zone); + + if (is_highmem(zone)) + highmem += zone->present_pages; + } + + printk("%lu pages RAM\n", total); + printk("%lu pages HighMem/MovableOnly\n", highmem); + printk("%lu pages reserved\n", reserved); +#ifdef CONFIG_CMA + printk("%lu pages cma reserved\n", totalcma_pages); +#endif +#ifdef CONFIG_MEMORY_FAILURE + printk("%lu pages 
hwpoisoned\n", atomic_long_read(&num_poisoned_pages)); +#endif +} From patchwork Tue May 16 06:38:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kefeng Wang X-Patchwork-Id: 13242578 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 93C69C7EE2E for ; Tue, 16 May 2023 06:21:46 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 32942280006; Tue, 16 May 2023 02:21:41 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 2D9E0280005; Tue, 16 May 2023 02:21:41 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id F4207280006; Tue, 16 May 2023 02:21:40 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id D31DC280004 for ; Tue, 16 May 2023 02:21:40 -0400 (EDT) Received: from smtpin30.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id 96A71A1479 for ; Tue, 16 May 2023 06:21:40 +0000 (UTC) X-FDA: 80795121960.30.CD9FF0E Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [45.249.212.189]) by imf21.hostedemail.com (Postfix) with ESMTP id 44FC71C000C for ; Tue, 16 May 2023 06:21:37 +0000 (UTC) Authentication-Results: imf21.hostedemail.com; dkim=none; spf=pass (imf21.hostedemail.com: domain of wangkefeng.wang@huawei.com designates 45.249.212.189 as permitted sender) smtp.mailfrom=wangkefeng.wang@huawei.com; dmarc=pass (policy=quarantine) header.from=huawei.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1684218098; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=583j3fsviF54SiM/q72kdov29+3Gn5DDJT2PVj0/et8=; b=DzrDjz3wduDlqhkRtvHFyJm0INBk4MhYLNhUVoxVcYgpmp1Zmj5SYLgNVhtDypiNQXM12s K9ukjhJJJd+6cyIKVyuCNXg8ZMX1Pi/Mun8Olo/AHYSUCy/g6OGM3GvrOwpmxLka7DHcPe /cH+nD0SyiVwyBnVwltSK74orutcOpE= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1684218098; a=rsa-sha256; cv=none; b=dHBf59EP642NwFblPVvcL20n4uX+oWr0tcWWgPbD3mQfFnD6LdCDHzVienJ/uEKJnqdVWF PWdCQO3sFhsRm+V77zgtFKyCSJx3U1fXeiPD0aHWIm+ZjXh9Ip7S2OyKT+BPNbP2EID7Uk t9PPFRXzLgYL6TfqTAFX9VvJ40ZpaPs= ARC-Authentication-Results: i=1; imf21.hostedemail.com; dkim=none; spf=pass (imf21.hostedemail.com: domain of wangkefeng.wang@huawei.com designates 45.249.212.189 as permitted sender) smtp.mailfrom=wangkefeng.wang@huawei.com; dmarc=pass (policy=quarantine) header.from=huawei.com Received: from dggpemm500001.china.huawei.com (unknown [172.30.72.56]) by szxga03-in.huawei.com (SkyGuard) with ESMTP id 4QL5gx2DsrzLmDK; Tue, 16 May 2023 14:20:13 +0800 (CST) Received: from localhost.localdomain.localdomain (10.175.113.25) by dggpemm500001.china.huawei.com (7.185.36.107) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Tue, 16 May 2023 14:21:32 +0800 From: Kefeng Wang To: Andrew Morton , Mike Rapoport , CC: David Hildenbrand , Oscar Salvador , "Rafael J. 
Wysocki" , Pavel Machek , Len Brown , Luis Chamberlain , Kees Cook , Iurii Zaikin , , , , , Kefeng Wang Subject: [PATCH v2 05/13] mm: page_alloc: squash page_is_consistent() Date: Tue, 16 May 2023 14:38:13 +0800 Message-ID: <20230516063821.121844-6-wangkefeng.wang@huawei.com> X-Mailer: git-send-email 2.35.3 In-Reply-To: <20230516063821.121844-1-wangkefeng.wang@huawei.com> References: <20230516063821.121844-1-wangkefeng.wang@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.113.25] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemm500001.china.huawei.com (7.185.36.107) X-CFilter-Loop: Reflected X-Rspamd-Queue-Id: 44FC71C000C X-Stat-Signature: ofdh18xem3najzse3zbqzqyd9zicpx1g X-Rspam-User: X-Rspamd-Server: rspam09 X-HE-Tag: 1684218097-305019 X-HE-Meta: U2FsdGVkX1/OZzndoyxtL6OksCBYz/H0FFATySngI/XZ9nLxTZBc+mdAkFVlLUO8Qt68q2xeG9M8XwkXWzS7EDBAZ7Awwiwj+VhWq87sUmY/+yCW2aQ4/BphscITdrDtx8mngD0ZXjIV+cffZGaM6vE3IaKFAzyQqW+bYbPMhdTskY0/q6xmmWin04IPgJkVdnqadDhjV60Z4t+kEC41WE7+oopv1PJ5nl40KWNWUu57L3ZAVeBWDPqpYtyfJFH8UKdE9recU+H6qlvuXg6GloAE2WbOOrVw3G9SK+Wi5fYlTOwTkIcgS8L766cNuNs51RQQ5QA7L7nbfpJ89inCMesfFnaOulQl3HwANu2ITUfjKl85kqn/rn7qayrg8gK4Pct4A6fqUaFxIycTTIsJRjpV8EIfzRhQuBD8ihEbNrAD4+LolMmhL9T9J0CjGM+/x+dlj8w2oQ2YNfzn7GRX2Ynfx/Y9uz6zAwx93L70rJf7HD9nRN6WvjiiUkaA7W0h1cyyLd/B9xcCp18WEYdIjUJ62lXQFwOFR8fVjmoI0CivVXtHGcRLVlIyO0bsiliDp072gndvuWHdPHkMWfEOCAv7AOdgeQemWAjgfGigDUJbyH+ITaIIltL61YxzlF4wTj3u0SXrhsmo9RiK7E+Z9VjV53PFBqJ5V6LHiqibTVOBhPtZLDFcQvsyamkjnoReVH09YK2xvypt/MOynODcayARMa6iuh5NohUNlZtwWIoGQSYgelpScKY9N/SrB2KJWLMLfjv62Yg55ISJskm/14ledu8a1oUpXwXJMtgfvfhn9BFCFKO7qyo6nn4CwWoRPdHHAoVSJfS29PUI6RdSVtS6KzU5tR8QIntcjVPaOqYVWlssKylgIcNpKa1gKUGfJwpdATEGg5PaRonAk1v5GuQ3W/QOlZ4Q44I3/nsU5UAwqYWOveNikKzUsYEJmW/6zViDKSFEkkCwadNaEl8 XfRYWde/ bqzkWZLgZvN1yQhl+Tm3nPlLLug/lYFtuZ4WsnC3N0PN7riFYFW6DN21dZruI8SylCr5Tt+5h+7it0XRQVhTUxAYuRLz/qam6BplGm3HGJwW3tYDS+FmqdahnnZURntLc11yN1kdIUsU7iLPDMUcpGN7xS0z/b5bgy6rcMx9g0YKRSimwoeHoBwPD2baQLZLSbtdw496s6FYYIbAPDDKxIQjAjUly/HCN453174e3MjrHGn+tEjL5cHp4HMCbK1PJNx8A X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Squash the page_is_consistent() into bad_range() as there is only one caller. Reviewed-by: Mike Rapoport (IBM) Signed-off-by: Kefeng Wang --- mm/page_alloc.c | 9 +-------- 1 file changed, 1 insertion(+), 8 deletions(-) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 84ba6cca3b3a..1bd8b7832d40 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -517,13 +517,6 @@ static int page_outside_zone_boundaries(struct zone *zone, struct page *page) return ret; } -static int page_is_consistent(struct zone *zone, struct page *page) -{ - if (zone != page_zone(page)) - return 0; - - return 1; -} /* * Temporary debugging check for pages not lying within a given zone. 
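For readability, the post-patch function, reconstructed from the hunks above:

	static int __maybe_unused bad_range(struct zone *zone, struct page *page)
	{
		if (page_outside_zone_boundaries(zone, page))
			return 1;
		if (zone != page_zone(page))
			return 1;

		return 0;
	}

One helper less to chase when reading the debug check.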

From patchwork Tue May 16 06:38:14 2023
From: Kefeng Wang
Subject: [PATCH v2 06/13] mm: page_alloc: remove alloc_contig_dump_pages() stub
Date: Tue, 16 May 2023 14:38:14 +0800
Message-ID: <20230516063821.121844-7-wangkefeng.wang@huawei.com>

DEFINE_DYNAMIC_DEBUG_METADATA() and DYNAMIC_DEBUG_BRANCH() already have stub
definitions when the dynamic debug feature is disabled, so remove the
unnecessary alloc_contig_dump_pages() stub.

Reviewed-by: Mike Rapoport (IBM)
Signed-off-by: Kefeng Wang
---
 mm/page_alloc.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1bd8b7832d40..aa3cdfd88393 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6250,8 +6250,6 @@ int percpu_pagelist_high_fraction_sysctl_handler(struct ctl_table *table,
 }
 
 #ifdef CONFIG_CONTIG_ALLOC
-#if defined(CONFIG_DYNAMIC_DEBUG) || \
-	(defined(CONFIG_DYNAMIC_DEBUG_CORE) && defined(DYNAMIC_DEBUG_MODULE))
 /* Usage: See admin-guide/dynamic-debug-howto.rst */
 static void alloc_contig_dump_pages(struct list_head *page_list)
 {
@@ -6265,11 +6263,6 @@ static void alloc_contig_dump_pages(struct list_head *page_list)
 		dump_page(page, "migration failure");
 	}
 }
-#else
-static inline void alloc_contig_dump_pages(struct list_head *page_list)
-{
-}
-#endif
 
 /* [start, end) must belong to a single zone. */
 int __alloc_contig_migrate_range(struct compact_control *cc,
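Note (not part of the patch): the surviving function body is only partly
visible in the context lines above; a sketch of its expected shape, assuming
the usual dynamic-debug pattern:

	static void alloc_contig_dump_pages(struct list_head *page_list)
	{
		DEFINE_DYNAMIC_DEBUG_METADATA(descriptor, "migration failure");

		if (DYNAMIC_DEBUG_BRANCH(descriptor)) {
			struct page *page;

			list_for_each_entry(page, page_list, lru)
				dump_page(page, "migration failure");
		}
	}

When dynamic debug is compiled out, DYNAMIC_DEBUG_BRANCH() falls back to a
compile-time false, so the compiler eliminates the body and no hand-written
stub is needed.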

From patchwork Tue May 16 06:38:15 2023
From: Kefeng Wang
Subject: [PATCH v2 07/13] mm: page_alloc: split out FAIL_PAGE_ALLOC
Date: Tue, 16 May 2023 14:38:15 +0800
Message-ID: <20230516063821.121844-8-wangkefeng.wang@huawei.com>

... to a single file to reduce a bit of page_alloc.c.
Signed-off-by: Kefeng Wang
---
 include/linux/fault-inject.h |  9 +++++
 mm/Makefile                  |  1 +
 mm/fail_page_alloc.c         | 66 ++++++++++++++++++++++++++++++++
 mm/page_alloc.c              | 74 ------------------------------------
 4 files changed, 76 insertions(+), 74 deletions(-)
 create mode 100644 mm/fail_page_alloc.c

diff --git a/include/linux/fault-inject.h b/include/linux/fault-inject.h
index 481abf530b3c..6d5edef09d45 100644
--- a/include/linux/fault-inject.h
+++ b/include/linux/fault-inject.h
@@ -93,6 +93,15 @@ struct kmem_cache;
 
 bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order);
 
+#ifdef CONFIG_FAIL_PAGE_ALLOC
+bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order);
+#else
+static inline bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
+{
+	return false;
+}
+#endif /* CONFIG_FAIL_PAGE_ALLOC */
+
 int should_failslab(struct kmem_cache *s, gfp_t gfpflags);
 #ifdef CONFIG_FAILSLAB
 extern bool __should_failslab(struct kmem_cache *s, gfp_t gfpflags);
diff --git a/mm/Makefile b/mm/Makefile
index 5262ce5baa28..0eec4bc72d3f 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -89,6 +89,7 @@ obj-$(CONFIG_KASAN) += kasan/
 obj-$(CONFIG_KFENCE) += kfence/
 obj-$(CONFIG_KMSAN) += kmsan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
+obj-$(CONFIG_FAIL_PAGE_ALLOC) += fail_page_alloc.o
 obj-$(CONFIG_MEMTEST) += memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o
 obj-$(CONFIG_NUMA) += memory-tiers.o
diff --git a/mm/fail_page_alloc.c b/mm/fail_page_alloc.c
new file mode 100644
index 000000000000..b1b09cce9394
--- /dev/null
+++ b/mm/fail_page_alloc.c
@@ -0,0 +1,66 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/fault-inject.h>
+#include <linux/mm.h>
+
+static struct {
+	struct fault_attr attr;
+
+	bool ignore_gfp_highmem;
+	bool ignore_gfp_reclaim;
+	u32 min_order;
+} fail_page_alloc = {
+	.attr = FAULT_ATTR_INITIALIZER,
+	.ignore_gfp_reclaim = true,
+	.ignore_gfp_highmem = true,
+	.min_order = 1,
+};
+
+static int __init setup_fail_page_alloc(char *str)
+{
+	return setup_fault_attr(&fail_page_alloc.attr, str);
+}
+__setup("fail_page_alloc=", setup_fail_page_alloc);
+
+bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
+{
+	int flags = 0;
+
+	if (order < fail_page_alloc.min_order)
+		return false;
+	if (gfp_mask & __GFP_NOFAIL)
+		return false;
+	if (fail_page_alloc.ignore_gfp_highmem && (gfp_mask & __GFP_HIGHMEM))
+		return false;
+	if (fail_page_alloc.ignore_gfp_reclaim &&
+	    (gfp_mask & __GFP_DIRECT_RECLAIM))
+		return false;
+
+	/* See comment in __should_failslab() */
+	if (gfp_mask & __GFP_NOWARN)
+		flags |= FAULT_NOWARN;
+
+	return should_fail_ex(&fail_page_alloc.attr, 1 << order, flags);
+}
+
+#ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
+
+static int __init fail_page_alloc_debugfs(void)
+{
+	umode_t mode = S_IFREG | 0600;
+	struct dentry *dir;
+
+	dir = fault_create_debugfs_attr("fail_page_alloc", NULL,
+					&fail_page_alloc.attr);
+
+	debugfs_create_bool("ignore-gfp-wait", mode, dir,
+			    &fail_page_alloc.ignore_gfp_reclaim);
+	debugfs_create_bool("ignore-gfp-highmem", mode, dir,
+			    &fail_page_alloc.ignore_gfp_highmem);
+	debugfs_create_u32("min-order", mode, dir, &fail_page_alloc.min_order);
+
+	return 0;
+}
+
+late_initcall(fail_page_alloc_debugfs);
+
+#endif /* CONFIG_FAULT_INJECTION_DEBUG_FS */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index aa3cdfd88393..8d4e803cec44 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3031,80 +3031,6 @@ struct page *rmqueue(struct zone *preferred_zone,
 	return page;
 }
 
-#ifdef CONFIG_FAIL_PAGE_ALLOC
-
-static struct {
-	struct fault_attr attr;
-
-	bool ignore_gfp_highmem;
-	bool ignore_gfp_reclaim;
-	u32 min_order;
-} fail_page_alloc = {
-	.attr = FAULT_ATTR_INITIALIZER,
-	.ignore_gfp_reclaim = true,
-	.ignore_gfp_highmem = true,
-	.min_order = 1,
-};
-
-static int __init setup_fail_page_alloc(char *str)
-{
-	return setup_fault_attr(&fail_page_alloc.attr, str);
-}
-__setup("fail_page_alloc=", setup_fail_page_alloc);
-
-static bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
-{
-	int flags = 0;
-
-	if (order < fail_page_alloc.min_order)
-		return false;
-	if (gfp_mask & __GFP_NOFAIL)
-		return false;
-	if (fail_page_alloc.ignore_gfp_highmem && (gfp_mask & __GFP_HIGHMEM))
-		return false;
-	if (fail_page_alloc.ignore_gfp_reclaim &&
-	    (gfp_mask & __GFP_DIRECT_RECLAIM))
-		return false;
-
-	/* See comment in __should_failslab() */
-	if (gfp_mask & __GFP_NOWARN)
-		flags |= FAULT_NOWARN;
-
-	return should_fail_ex(&fail_page_alloc.attr, 1 << order, flags);
-}
-
-#ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
-
-static int __init fail_page_alloc_debugfs(void)
-{
-	umode_t mode = S_IFREG | 0600;
-	struct dentry *dir;
-
-	dir = fault_create_debugfs_attr("fail_page_alloc", NULL,
-					&fail_page_alloc.attr);
-
-	debugfs_create_bool("ignore-gfp-wait", mode, dir,
-			    &fail_page_alloc.ignore_gfp_reclaim);
-	debugfs_create_bool("ignore-gfp-highmem", mode, dir,
-			    &fail_page_alloc.ignore_gfp_highmem);
-	debugfs_create_u32("min-order", mode, dir, &fail_page_alloc.min_order);
-
-	return 0;
-}
-
-late_initcall(fail_page_alloc_debugfs);
-
-#endif /* CONFIG_FAULT_INJECTION_DEBUG_FS */
-
-#else /* CONFIG_FAIL_PAGE_ALLOC */
-
-static inline bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
-{
-	return false;
-}
-
-#endif /* CONFIG_FAIL_PAGE_ALLOC */
-
 noinline bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
 {
 	return __should_fail_alloc_page(gfp_mask, order);
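Usage note, assuming the common fault-injection conventions that
setup_fault_attr() implements (the two #include lines of the new file were
stripped by the archive and are reconstructed above as an assumption): the
boot parameter takes the usual "<interval>,<probability>,<space>,<times>"
tuple, for example

	fail_page_alloc=1,10,0,-1

which would check every eligible allocation, fail about 10% of them, and never
stop; the debugfs knobs created above (ignore-gfp-wait, ignore-gfp-highmem,
min-order) can then retune the filter at run time.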

From patchwork Tue May 16 06:38:16 2023
From: Kefeng Wang
Subject: [PATCH v2 08/13] mm: page_alloc: split out DEBUG_PAGEALLOC
Date: Tue, 16 May 2023 14:38:16 +0800
Message-ID: <20230516063821.121844-9-wangkefeng.wang@huawei.com>

Move the DEBUG_PAGEALLOC-related functions into a single file to reduce
page_alloc.c a bit.

Signed-off-by: Kefeng Wang
---
 include/linux/mm.h    | 76 ++++++++++++++++++++++++++++---------------
 mm/Makefile           |  1 +
 mm/debug_page_alloc.c | 59 +++++++++++++++++++++++++++++++++
 mm/page_alloc.c       | 69 ---------------------------------------
 4 files changed, 109 insertions(+), 96 deletions(-)
 create mode 100644 mm/debug_page_alloc.c

diff --git a/include/linux/mm.h b/include/linux/mm.h
index db3f66ed2f32..d3241f4ac903 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3485,9 +3485,58 @@ static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages)
 	if (debug_pagealloc_enabled_static())
 		__kernel_map_pages(page, numpages, 0);
 }
+
+extern unsigned int _debug_guardpage_minorder;
+DECLARE_STATIC_KEY_FALSE(_debug_guardpage_enabled);
+
+static inline unsigned int debug_guardpage_minorder(void)
+{
+	return _debug_guardpage_minorder;
+}
+
+static inline bool debug_guardpage_enabled(void)
+{
+	return static_branch_unlikely(&_debug_guardpage_enabled);
+}
+
+static inline bool page_is_guard(struct page *page)
+{
+	if (!debug_guardpage_enabled())
+		return false;
+
+	return PageGuard(page);
+}
+
+bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order,
+		      int migratetype);
+static inline bool set_page_guard(struct zone *zone, struct page *page,
+				  unsigned int order, int migratetype)
+{
+	if (!debug_guardpage_enabled())
+		return false;
+	return __set_page_guard(zone, page, order, migratetype);
+}
+
+void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order,
+			int migratetype);
+static inline void clear_page_guard(struct zone *zone, struct page *page,
+				    unsigned int order, int migratetype)
+{
+	if (!debug_guardpage_enabled())
+		return;
+	__clear_page_guard(zone, page, order, migratetype);
+}
+
 #else	/* CONFIG_DEBUG_PAGEALLOC */
 static inline void debug_pagealloc_map_pages(struct page *page, int numpages) {}
 static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages) {}
+static inline unsigned int debug_guardpage_minorder(void) { return 0; }
+static inline bool debug_guardpage_enabled(void) { return false; }
+static inline bool page_is_guard(struct page *page) { return false; }
+static inline bool set_page_guard(struct zone *zone, struct page *page,
+			unsigned int order, int migratetype) { return false; }
+static inline void clear_page_guard(struct zone *zone, struct page *page,
+				unsigned int order, int migratetype) {}
 #endif	/* CONFIG_DEBUG_PAGEALLOC */
 
 #ifdef __HAVE_ARCH_GATE_AREA
@@ -3725,33 +3774,6 @@ static inline bool vma_is_special_huge(const struct vm_area_struct *vma)
 
 #endif	/* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
-extern unsigned int _debug_guardpage_minorder;
-DECLARE_STATIC_KEY_FALSE(_debug_guardpage_enabled);
-
-static inline unsigned int debug_guardpage_minorder(void)
-{
-	return _debug_guardpage_minorder;
-}
-
-static inline bool debug_guardpage_enabled(void)
-{
-	return static_branch_unlikely(&_debug_guardpage_enabled);
-}
-
-static inline bool page_is_guard(struct page *page)
-{
-	if (!debug_guardpage_enabled())
-		return false;
-
-	return PageGuard(page);
-}
-#else
-static inline unsigned int debug_guardpage_minorder(void) { return 0; }
-static inline bool debug_guardpage_enabled(void) { return false; }
-static inline bool page_is_guard(struct page *page) { return false; }
-#endif	/* CONFIG_DEBUG_PAGEALLOC */
-
 #if MAX_NUMNODES > 1
 void __init setup_nr_node_ids(void);
 #else
diff --git a/mm/Makefile b/mm/Makefile
index 0eec4bc72d3f..678530a07326 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -124,6 +124,7 @@ obj-$(CONFIG_SECRETMEM) += secretmem.o
 obj-$(CONFIG_CMA_SYSFS) += cma_sysfs.o
 obj-$(CONFIG_USERFAULTFD) += userfaultfd.o
 obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o
+obj-$(CONFIG_DEBUG_PAGEALLOC) += debug_page_alloc.o
 obj-$(CONFIG_DEBUG_PAGE_REF) += debug_page_ref.o
 obj-$(CONFIG_DAMON) += damon/
 obj-$(CONFIG_HARDENED_USERCOPY) += usercopy.o
diff --git a/mm/debug_page_alloc.c b/mm/debug_page_alloc.c
new file mode 100644
index 000000000000..f9d145730fd1
--- /dev/null
+++ b/mm/debug_page_alloc.c
@@ -0,0 +1,59 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/mm.h>
+#include <linux/page-isolation.h>
+
+unsigned int _debug_guardpage_minorder;
+
+bool _debug_pagealloc_enabled_early __read_mostly
+			= IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);
+EXPORT_SYMBOL(_debug_pagealloc_enabled_early);
+DEFINE_STATIC_KEY_FALSE(_debug_pagealloc_enabled);
+EXPORT_SYMBOL(_debug_pagealloc_enabled);
+
+DEFINE_STATIC_KEY_FALSE(_debug_guardpage_enabled);
+
+static int __init early_debug_pagealloc(char *buf)
+{
+	return kstrtobool(buf, &_debug_pagealloc_enabled_early);
+}
+early_param("debug_pagealloc", early_debug_pagealloc);
+
+static int __init debug_guardpage_minorder_setup(char *buf)
+{
+	unsigned long res;
+
+	if (kstrtoul(buf, 10, &res) < 0 || res > MAX_ORDER / 2) {
+		pr_err("Bad debug_guardpage_minorder value\n");
+		return 0;
+	}
+	_debug_guardpage_minorder = res;
+	pr_info("Setting debug_guardpage_minorder to %lu\n", res);
+	return 0;
+}
+early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup);
+
+bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order,
+		      int migratetype)
+{
+	if (order >= debug_guardpage_minorder())
+		return false;
+
+	__SetPageGuard(page);
+	INIT_LIST_HEAD(&page->buddy_list);
+	set_page_private(page, order);
+	/* Guard pages are not available for any usage */
+	if (!is_migrate_isolate(migratetype))
+		__mod_zone_freepage_state(zone, -(1 << order), migratetype);
+
+	return true;
+}
+
+void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order,
+			int migratetype)
+{
+	__ClearPageGuard(page);
+
+	set_page_private(page, 0);
+	if (!is_migrate_isolate(migratetype))
+		__mod_zone_freepage_state(zone, (1 << order), migratetype);
+}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8d4e803cec44..dc9820466377 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -664,75 +664,6 @@ void destroy_large_folio(struct folio *folio)
 	compound_page_dtors[dtor](&folio->page);
 }
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
-unsigned int _debug_guardpage_minorder;
-
-bool _debug_pagealloc_enabled_early __read_mostly
-			= IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);
-EXPORT_SYMBOL(_debug_pagealloc_enabled_early);
-DEFINE_STATIC_KEY_FALSE(_debug_pagealloc_enabled);
-EXPORT_SYMBOL(_debug_pagealloc_enabled);
-
-DEFINE_STATIC_KEY_FALSE(_debug_guardpage_enabled);
-
-static int __init early_debug_pagealloc(char *buf)
-{
-	return kstrtobool(buf, &_debug_pagealloc_enabled_early);
-}
-early_param("debug_pagealloc", early_debug_pagealloc);
-
-static int __init debug_guardpage_minorder_setup(char *buf)
-{
-	unsigned long res;
-
-	if (kstrtoul(buf, 10, &res) < 0 || res > MAX_ORDER / 2) {
-		pr_err("Bad debug_guardpage_minorder value\n");
-		return 0;
-	}
-	_debug_guardpage_minorder = res;
-	pr_info("Setting debug_guardpage_minorder to %lu\n", res);
-	return 0;
-}
-early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup);
-
-static inline bool set_page_guard(struct zone *zone, struct page *page,
-				unsigned int order, int migratetype)
-{
-	if (!debug_guardpage_enabled())
-		return false;
-
-	if (order >= debug_guardpage_minorder())
-		return false;
-
-	__SetPageGuard(page);
-	INIT_LIST_HEAD(&page->buddy_list);
-	set_page_private(page, order);
-	/* Guard pages are not available for any usage */
-	if (!is_migrate_isolate(migratetype))
-		__mod_zone_freepage_state(zone, -(1 << order), migratetype);
-
-	return true;
-}
-
-static inline void clear_page_guard(struct zone *zone, struct page *page,
-				unsigned int order, int migratetype)
-{
-	if (!debug_guardpage_enabled())
-		return;
-
-	__ClearPageGuard(page);
-
-	set_page_private(page, 0);
-	if (!is_migrate_isolate(migratetype))
-		__mod_zone_freepage_state(zone, (1 << order), migratetype);
-}
-#else
-static inline bool set_page_guard(struct zone *zone, struct page *page,
-			unsigned int order, int migratetype) { return false; }
-static inline void clear_page_guard(struct zone *zone, struct page *page,
-			unsigned int order, int migratetype) {}
-#endif
-
 static inline void set_buddy_order(struct page *page, unsigned int order)
 {
 	set_page_private(page, order);
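The split follows the same pattern as the FAIL_PAGE_ALLOC change: the
static-key test stays inline in the header so the disabled case stays cheap,
and only the slow path moves out of line. Restating the mm.h hunk above to
make the shape explicit:

	static inline bool set_page_guard(struct zone *zone, struct page *page,
					  unsigned int order, int migratetype)
	{
		if (!debug_guardpage_enabled())	/* static branch, ~free when off */
			return false;
		return __set_page_guard(zone, page, order, migratetype);
	}

As the new file shows, the feature is enabled at boot through the
debug_pagealloc= and debug_guardpage_minorder= early parameters (the two
#include lines of the new file, stripped by the archive, are reconstructed
above as an assumption).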

From patchwork Tue May 16 06:38:17 2023
From: Kefeng Wang
Subject: [PATCH v2 09/13] mm: page_alloc: move mark_free_pages() into snapshot.c
Date: Tue, 16 May 2023 14:38:17 +0800
Message-ID: <20230516063821.121844-10-wangkefeng.wang@huawei.com>

mark_free_pages() is only used in kernel/power/snapshot.c; move it there to
reduce page_alloc.c a bit.

Signed-off-by: Kefeng Wang
---
 include/linux/suspend.h |  3 ---
 kernel/power/snapshot.c | 52 ++++++++++++++++++++++++++++++++
 mm/page_alloc.c         | 55 -----------------------------------
 3 files changed, 52 insertions(+), 58 deletions(-)

diff --git a/include/linux/suspend.h b/include/linux/suspend.h
index d0d4598a7b3f..3950a7bf33ae 100644
--- a/include/linux/suspend.h
+++ b/include/linux/suspend.h
@@ -364,9 +364,6 @@ struct pbe {
 	struct pbe *next;
 };
 
-/* mm/page_alloc.c */
-extern void mark_free_pages(struct zone *zone);
-
 /**
  * struct platform_hibernation_ops - hibernation platform support
  *
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index cd8b7b35f1e8..45ef0bf81c85 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1228,6 +1228,58 @@ unsigned int snapshot_additional_pages(struct zone *zone)
 	return 2 * rtree;
 }
 
+/*
+ * Touch the watchdog for every WD_PAGE_COUNT pages.
+ */
+#define WD_PAGE_COUNT	(128*1024)
+
+static void mark_free_pages(struct zone *zone)
+{
+	unsigned long pfn, max_zone_pfn, page_count = WD_PAGE_COUNT;
+	unsigned long flags;
+	unsigned int order, t;
+	struct page *page;
+
+	if (zone_is_empty(zone))
+		return;
+
+	spin_lock_irqsave(&zone->lock, flags);
+
+	max_zone_pfn = zone_end_pfn(zone);
+	for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++)
+		if (pfn_valid(pfn)) {
+			page = pfn_to_page(pfn);
+
+			if (!--page_count) {
+				touch_nmi_watchdog();
+				page_count = WD_PAGE_COUNT;
+			}
+
+			if (page_zone(page) != zone)
+				continue;
+
+			if (!swsusp_page_is_forbidden(page))
+				swsusp_unset_page_free(page);
+		}
+
+	for_each_migratetype_order(order, t) {
+		list_for_each_entry(page,
+				&zone->free_area[order].free_list[t], buddy_list) {
+			unsigned long i;
+
+			pfn = page_to_pfn(page);
+			for (i = 0; i < (1UL << order); i++) {
+				if (!--page_count) {
+					touch_nmi_watchdog();
+					page_count = WD_PAGE_COUNT;
+				}
+				swsusp_set_page_free(pfn_to_page(pfn + i));
+			}
+		}
+	}
+	spin_unlock_irqrestore(&zone->lock, flags);
+}
+
 #ifdef CONFIG_HIGHMEM
 /**
  * count_free_highmem_pages - Compute the total number of free highmem pages.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index dc9820466377..71bfe72be045 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2401,61 +2401,6 @@ void drain_all_pages(struct zone *zone)
 	__drain_all_pages(zone, false);
 }
 
-#ifdef CONFIG_HIBERNATION
-
-/*
- * Touch the watchdog for every WD_PAGE_COUNT pages.
- */
-#define WD_PAGE_COUNT	(128*1024)
-
-void mark_free_pages(struct zone *zone)
-{
-	unsigned long pfn, max_zone_pfn, page_count = WD_PAGE_COUNT;
-	unsigned long flags;
-	unsigned int order, t;
-	struct page *page;
-
-	if (zone_is_empty(zone))
-		return;
-
-	spin_lock_irqsave(&zone->lock, flags);
-
-	max_zone_pfn = zone_end_pfn(zone);
-	for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++)
-		if (pfn_valid(pfn)) {
-			page = pfn_to_page(pfn);
-
-			if (!--page_count) {
-				touch_nmi_watchdog();
-				page_count = WD_PAGE_COUNT;
-			}
-
-			if (page_zone(page) != zone)
-				continue;
-
-			if (!swsusp_page_is_forbidden(page))
-				swsusp_unset_page_free(page);
-		}
-
-	for_each_migratetype_order(order, t) {
-		list_for_each_entry(page,
-				&zone->free_area[order].free_list[t], buddy_list) {
-			unsigned long i;
-
-			pfn = page_to_pfn(page);
-			for (i = 0; i < (1UL << order); i++) {
-				if (!--page_count) {
-					touch_nmi_watchdog();
-					page_count = WD_PAGE_COUNT;
-				}
-				swsusp_set_page_free(pfn_to_page(pfn + i));
-			}
-		}
-	}
-	spin_unlock_irqrestore(&zone->lock, flags);
-}
-#endif /* CONFIG_PM */
-
 static bool free_unref_page_prepare(struct page *page, unsigned long pfn,
 					unsigned int order)
 {
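Note (not part of the patch): with the prototype gone from suspend.h and the
function made static, mark_free_pages() is now private to snapshot.c. The call
pattern the file needs is expected to look like the following sketch; the
helper name here is illustrative, not taken from the patch:

	/* illustrative only: the snapshot path marks free pages zone by zone */
	static void mark_all_free_pages(void)
	{
		struct zone *zone;

		for_each_populated_zone(zone)
			mark_free_pages(zone);
	}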

From patchwork Tue May 16 06:38:18 2023
From: Kefeng Wang
Subject: [PATCH v2 10/13] mm: page_alloc: move pm_* function into power
Date: Tue, 16 May 2023 14:38:18 +0800
Message-ID: <20230516063821.121844-11-wangkefeng.wang@huawei.com>

pm_restrict_gfp_mask() and pm_restore_gfp_mask() are only used in power, so
move them out of page_alloc.c. Add a general gfp_has_io_fs() helper that
returns true if a gfp mask has both __GFP_IO and __GFP_FS set, use it inside
pm_suspended_storage(), and move pm_suspended_storage() into suspend.h.
Signed-off-by: Kefeng Wang
---
 include/linux/gfp.h     | 15 ++++-----------
 include/linux/suspend.h |  6 ++++++
 kernel/power/main.c     | 27 +++++++++++++++++++++++++++
 kernel/power/power.h    |  5 +++++
 mm/page_alloc.c         | 38 --------------------------------------
 mm/swapfile.c           |  1 +
 6 files changed, 43 insertions(+), 49 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index ed8cb537c6a7..665f06675c83 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -338,19 +338,12 @@ extern gfp_t gfp_allowed_mask;
 /* Returns true if the gfp_mask allows use of ALLOC_NO_WATERMARK */
 bool gfp_pfmemalloc_allowed(gfp_t gfp_mask);
 
-extern void pm_restrict_gfp_mask(void);
-extern void pm_restore_gfp_mask(void);
-
-extern gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma);
-
-#ifdef CONFIG_PM_SLEEP
-extern bool pm_suspended_storage(void);
-#else
-static inline bool pm_suspended_storage(void)
+static inline bool gfp_has_io_fs(gfp_t gfp)
 {
-	return false;
+	return (gfp & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS);
 }
-#endif /* CONFIG_PM_SLEEP */
+
+extern gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma);
 
 #ifdef CONFIG_CONTIG_ALLOC
 /* The below functions must be run on a range from a single zone. */
diff --git a/include/linux/suspend.h b/include/linux/suspend.h
index 3950a7bf33ae..76923051c03d 100644
--- a/include/linux/suspend.h
+++ b/include/linux/suspend.h
@@ -502,6 +502,11 @@ extern void pm_report_max_hw_sleep(u64 t);
 extern bool events_check_enabled;
 extern suspend_state_t pm_suspend_target_state;
 
+static inline bool pm_suspended_storage(void)
+{
+	return !gfp_has_io_fs(gfp_allowed_mask);
+}
+
 extern bool pm_wakeup_pending(void);
 extern void pm_system_wakeup(void);
 extern void pm_system_cancel_wakeup(void);
@@ -535,6 +540,7 @@ static inline void ksys_sync_helper(void) {}
 
 #define pm_notifier(fn, pri)	do { (void)(fn); } while (0)
 
+static inline bool pm_suspended_storage(void) { return false; }
 static inline bool pm_wakeup_pending(void) { return false; }
 static inline void pm_system_wakeup(void) {}
 static inline void pm_wakeup_clear(bool reset) {}
diff --git a/kernel/power/main.c b/kernel/power/main.c
index 3113ec2f1db4..34fc8359145b 100644
--- a/kernel/power/main.c
+++ b/kernel/power/main.c
@@ -21,6 +21,33 @@
 #include "power.h"
 
 #ifdef CONFIG_PM_SLEEP
+/*
+ * The following functions are used by the suspend/hibernate code to
+ * temporarily change gfp_allowed_mask in order to avoid using I/O during
+ * memory allocations while devices are suspended. To avoid races with the
+ * suspend/hibernate code, they should always be called with
+ * system_transition_mutex held (gfp_allowed_mask also should only be
+ * modified with system_transition_mutex held, unless the suspend/hibernate
+ * code is guaranteed not to run in parallel with that modification).
+ */
+static gfp_t saved_gfp_mask;
+
+void pm_restore_gfp_mask(void)
+{
+	WARN_ON(!mutex_is_locked(&system_transition_mutex));
+	if (saved_gfp_mask) {
+		gfp_allowed_mask = saved_gfp_mask;
+		saved_gfp_mask = 0;
+	}
+}
+
+void pm_restrict_gfp_mask(void)
+{
+	WARN_ON(!mutex_is_locked(&system_transition_mutex));
+	WARN_ON(saved_gfp_mask);
+	saved_gfp_mask = gfp_allowed_mask;
+	gfp_allowed_mask &= ~(__GFP_IO | __GFP_FS);
+}
 
 unsigned int lock_system_sleep(void)
 {
diff --git a/kernel/power/power.h b/kernel/power/power.h
index b83c8d5e188d..ac14d1b463d1 100644
--- a/kernel/power/power.h
+++ b/kernel/power/power.h
@@ -216,6 +216,11 @@ static inline void suspend_test_finish(const char *label) {}
 /* kernel/power/main.c */
 extern int pm_notifier_call_chain_robust(unsigned long val_up, unsigned long val_down);
 extern int pm_notifier_call_chain(unsigned long val);
+void pm_restrict_gfp_mask(void);
+void pm_restore_gfp_mask(void);
+#else
+static inline void pm_restrict_gfp_mask(void) {}
+static inline void pm_restore_gfp_mask(void) {}
 #endif
 
 #ifdef CONFIG_HIGHMEM
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 71bfe72be045..2a95e095bb2a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -227,44 +227,6 @@ static inline void set_pcppage_migratetype(struct page *page, int migratetype)
 	page->index = migratetype;
 }
 
-#ifdef CONFIG_PM_SLEEP
-/*
- * The following functions are used by the suspend/hibernate code to
- * temporarily change gfp_allowed_mask in order to avoid using I/O during
- * memory allocations while devices are suspended. To avoid races with the
- * suspend/hibernate code, they should always be called with
- * system_transition_mutex held (gfp_allowed_mask also should only be
- * modified with system_transition_mutex held, unless the suspend/hibernate
- * code is guaranteed not to run in parallel with that modification).
- */
-
-static gfp_t saved_gfp_mask;
-
-void pm_restore_gfp_mask(void)
-{
-	WARN_ON(!mutex_is_locked(&system_transition_mutex));
-	if (saved_gfp_mask) {
-		gfp_allowed_mask = saved_gfp_mask;
-		saved_gfp_mask = 0;
-	}
-}
-
-void pm_restrict_gfp_mask(void)
-{
-	WARN_ON(!mutex_is_locked(&system_transition_mutex));
-	WARN_ON(saved_gfp_mask);
-	saved_gfp_mask = gfp_allowed_mask;
-	gfp_allowed_mask &= ~(__GFP_IO | __GFP_FS);
-}
-
-bool pm_suspended_storage(void)
-{
-	if ((gfp_allowed_mask & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS))
-		return false;
-	return true;
-}
-#endif /* CONFIG_PM_SLEEP */
-
 #ifdef CONFIG_HUGETLB_PAGE_SIZE_VARIABLE
 unsigned int pageblock_order __read_mostly;
 #endif
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 274bbf797480..c74259001d5e 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -41,6 +41,7 @@
 #include
 #include
 #include
+#include <linux/suspend.h>
 #include
 #include
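gfp_has_io_fs() is a plain mask test that is true only when both flags are
set, which lets pm_suspended_storage() above be written as a single negation.
A few concrete cases, using the standard gfp definitions (the bare #include
context lines in the swapfile.c hunk were stripped by the archive; only the
added <linux/suspend.h> line is reconstructed, as an assumption):

	gfp_has_io_fs(GFP_KERNEL);	/* true: __GFP_IO and __GFP_FS both set */
	gfp_has_io_fs(GFP_NOFS);	/* false: __GFP_FS cleared */
	gfp_has_io_fs(GFP_NOIO);	/* false: both cleared */

Storage counts as suspended exactly while pm_restrict_gfp_mask() has stripped
__GFP_IO | __GFP_FS out of gfp_allowed_mask.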

From patchwork Tue May 16 06:38:19 2023
From: Kefeng Wang
From patchwork Tue May 16 06:38:19 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13242584
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton, Mike Rapoport
CC: David Hildenbrand, Oscar Salvador, "Rafael J. Wysocki", Pavel Machek,
    Len Brown, Luis Chamberlain, Kees Cook, Iurii Zaikin, Kefeng Wang
Subject: [PATCH v2 11/13] mm: vmscan: use gfp_has_io_fs()
Date: Tue, 16 May 2023 14:38:19 +0800
Message-ID: <20230516063821.121844-12-wangkefeng.wang@huawei.com>
In-Reply-To: <20230516063821.121844-1-wangkefeng.wang@huawei.com>
References: <20230516063821.121844-1-wangkefeng.wang@huawei.com>

Use gfp_has_io_fs() instead of the open-coded mask check.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6d0cd2840cf0..15efbfbb1963 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2458,7 +2458,7 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
 	 * won't get blocked by normal direct-reclaimers, forming a circular
 	 * deadlock.
 	 */
-	if ((sc->gfp_mask & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS))
+	if (gfp_has_io_fs(sc->gfp_mask))
 		inactive >>= 3;
 
 	too_many = isolated > inactive;
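gfp_has_io_fs() itself is introduced by an earlier patch in this series (not
shown here); judging from the open-coded test it replaces, and from the body of
the removed pm_suspended_storage() above, the helper is presumably along these
lines:

/* Presumed definition, mirroring the open-coded check it replaces. */
static inline bool gfp_has_io_fs(gfp_t gfp)
{
	return (gfp & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS);
}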
From patchwork Tue May 16 06:38:20 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13242586
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton, Mike Rapoport
CC: David Hildenbrand, Oscar Salvador, "Rafael J. Wysocki", Pavel Machek,
    Len Brown, Luis Chamberlain, Kees Cook, Iurii Zaikin, Kefeng Wang
Subject: [PATCH v2 12/13] mm: page_alloc: move sysctls into its own file
Date: Tue, 16 May 2023 14:38:20 +0800
Message-ID: <20230516063821.121844-13-wangkefeng.wang@huawei.com>
In-Reply-To: <20230516063821.121844-1-wangkefeng.wang@huawei.com>
References: <20230516063821.121844-1-wangkefeng.wang@huawei.com>

Move all page-alloc-related sysctls into their own file as part of the
kernel/sysctl.c spring cleaning, and move some function declarations
from mm.h into internal.h.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h     |  11 -----
 include/linux/mmzone.h |  21 ---------
 kernel/sysctl.c        |  67 ---------------------------
 mm/internal.h          |  11 +++++
 mm/mm_init.c           |   2 +
 mm/page_alloc.c        | 103 +++++++++++++++++++++++++++++++++++------
 6 files changed, 102 insertions(+), 113 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d3241f4ac903..eabe520139ef 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3008,12 +3008,6 @@ extern int __meminit early_pfn_to_nid(unsigned long pfn);
 #endif
 
 extern void set_dma_reserve(unsigned long new_dma_reserve);
-extern void memmap_init_range(unsigned long, int, unsigned long,
-		unsigned long, unsigned long, enum meminit_context,
-		struct vmem_altmap *, int migratetype);
-extern void setup_per_zone_wmarks(void);
-extern void calculate_min_free_kbytes(void);
-extern int __meminit init_per_zone_wmark_min(void);
 extern void mem_init(void);
 extern void __init mmap_init(void);
@@ -3034,11 +3028,6 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...);
 
 extern void setup_per_cpu_pageset(void);
 
-/* page_alloc.c */
-extern int min_free_kbytes;
-extern int watermark_boost_factor;
-extern int watermark_scale_factor;
-
 /* nommu.c */
 extern atomic_long_t mmap_pages_allocated;
 extern int nommu_shrink_inode_mappings(struct inode *, size_t, size_t);
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index a4889c9d4055..3a68326c9989 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1512,27 +1512,6 @@ static inline bool has_managed_dma(void)
 }
 #endif
 
-/* These two functions are used to setup the per zone pages min values */
-struct ctl_table;
-
-int min_free_kbytes_sysctl_handler(struct ctl_table *, int, void *, size_t *,
-		loff_t *);
-int watermark_scale_factor_sysctl_handler(struct ctl_table *, int, void *,
-		size_t *, loff_t *);
-extern int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES];
-int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *, int, void *,
-		size_t *, loff_t *);
-int percpu_pagelist_high_fraction_sysctl_handler(struct ctl_table *, int,
-		void *, size_t *, loff_t *);
-int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *, int,
-		void *, size_t *, loff_t *);
-int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *, int,
-		void *, size_t *, loff_t *);
-int numa_zonelist_order_handler(struct ctl_table *, int,
-		void *, size_t *, loff_t *);
-extern int percpu_pagelist_high_fraction;
-extern char numa_zonelist_order[];
-#define NUMA_ZONELIST_ORDER_LEN	16
 
 #ifndef CONFIG_NUMA
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index bfe53e835524..a57de67f032f 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -2119,13 +2119,6 @@ static struct ctl_table vm_table[] = {
 		.extra2		= SYSCTL_ONE,
 	},
 #endif
-	{
-		.procname	= "lowmem_reserve_ratio",
-		.data		= &sysctl_lowmem_reserve_ratio,
-		.maxlen		= sizeof(sysctl_lowmem_reserve_ratio),
-		.mode		= 0644,
-		.proc_handler	= lowmem_reserve_ratio_sysctl_handler,
-	},
 	{
 		.procname	= "drop_caches",
 		.data		= &sysctl_drop_caches,
@@ -2135,39 +2128,6 @@ static struct ctl_table vm_table[] = {
 		.extra1		= SYSCTL_ONE,
 		.extra2		= SYSCTL_FOUR,
 	},
-	{
-		.procname	= "min_free_kbytes",
-		.data		= &min_free_kbytes,
-		.maxlen		= sizeof(min_free_kbytes),
-		.mode		= 0644,
-		.proc_handler	= min_free_kbytes_sysctl_handler,
-		.extra1		= SYSCTL_ZERO,
-	},
-	{
-		.procname	= "watermark_boost_factor",
-		.data		= &watermark_boost_factor,
-		.maxlen		= sizeof(watermark_boost_factor),
-		.mode		= 0644,
-		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= SYSCTL_ZERO,
-	},
-	{
-		.procname	= "watermark_scale_factor",
-		.data		= &watermark_scale_factor,
-		.maxlen		= sizeof(watermark_scale_factor),
-		.mode		= 0644,
-		.proc_handler	= watermark_scale_factor_sysctl_handler,
-		.extra1		= SYSCTL_ONE,
-		.extra2		= SYSCTL_THREE_THOUSAND,
-	},
-	{
-		.procname	= "percpu_pagelist_high_fraction",
-		.data		= &percpu_pagelist_high_fraction,
-		.maxlen		= sizeof(percpu_pagelist_high_fraction),
-		.mode		= 0644,
-		.proc_handler	= percpu_pagelist_high_fraction_sysctl_handler,
-		.extra1		= SYSCTL_ZERO,
-	},
 	{
 		.procname	= "page_lock_unfairness",
 		.data		= &sysctl_page_lock_unfairness,
@@ -2223,24 +2183,6 @@ static struct ctl_table vm_table[] = {
 		.proc_handler	= proc_dointvec_minmax,
 		.extra1		= SYSCTL_ZERO,
 	},
-	{
-		.procname	= "min_unmapped_ratio",
-		.data		= &sysctl_min_unmapped_ratio,
-		.maxlen		= sizeof(sysctl_min_unmapped_ratio),
-		.mode		= 0644,
-		.proc_handler	= sysctl_min_unmapped_ratio_sysctl_handler,
-		.extra1		= SYSCTL_ZERO,
-		.extra2		= SYSCTL_ONE_HUNDRED,
-	},
-	{
-		.procname	= "min_slab_ratio",
-		.data		= &sysctl_min_slab_ratio,
-		.maxlen		= sizeof(sysctl_min_slab_ratio),
-		.mode		= 0644,
-		.proc_handler	= sysctl_min_slab_ratio_sysctl_handler,
-		.extra1		= SYSCTL_ZERO,
-		.extra2		= SYSCTL_ONE_HUNDRED,
-	},
 #endif
 #ifdef CONFIG_SMP
 	{
@@ -2267,15 +2209,6 @@ static struct ctl_table vm_table[] = {
 		.proc_handler	= mmap_min_addr_handler,
 	},
 #endif
-#ifdef CONFIG_NUMA
-	{
-		.procname	= "numa_zonelist_order",
-		.data		= &numa_zonelist_order,
-		.maxlen		= NUMA_ZONELIST_ORDER_LEN,
-		.mode		= 0644,
-		.proc_handler	= numa_zonelist_order_handler,
-	},
-#endif
 #if (defined(CONFIG_X86_32) && !defined(CONFIG_UML)) || \
     (defined(CONFIG_SUPERH) && defined(CONFIG_VSYSCALL))
 	{
diff --git a/mm/internal.h b/mm/internal.h
index 79324b7f2bc8..5fdf930a87b5 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -213,6 +213,13 @@ static inline bool is_check_pages_enabled(void)
 	return static_branch_unlikely(&check_pages_enabled);
 }
 
+extern int min_free_kbytes;
+
+void setup_per_zone_wmarks(void);
+void calculate_min_free_kbytes(void);
+int __meminit init_per_zone_wmark_min(void);
+void page_alloc_sysctl_init(void);
+
 /*
  * Structure for holding the mostly immutable allocation parameters passed
  * between functions involved in allocations, including the alloc_pages*
@@ -423,6 +430,10 @@ extern void *memmap_alloc(phys_addr_t size, phys_addr_t align,
 			phys_addr_t min_addr,
 			int nid, bool exact_nid);
 
+void memmap_init_range(unsigned long, int, unsigned long, unsigned long,
+		unsigned long, enum meminit_context, struct vmem_altmap *, int);
+
+
 int split_free_page(struct page *free_page,
 			unsigned int order, unsigned long split_pfn_offset);
 
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 0fd4ddfdfb2e..10bf560302c4 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2392,6 +2392,8 @@ void __init page_alloc_init_late(void)
 
 	/* Initialize page ext after all struct pages are initialized. */
 	if (deferred_struct_pages)
 		page_ext_init();
+
+	page_alloc_sysctl_init();
 }
 
 #ifndef __HAVE_ARCH_RESERVED_KERNEL_PAGES
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2a95e095bb2a..5e8680669388 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -206,7 +206,6 @@ nodemask_t node_states[NR_NODE_STATES] __read_mostly = {
 };
 EXPORT_SYMBOL(node_states);
 
-int percpu_pagelist_high_fraction;
 gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK;
 
 /*
@@ -302,8 +301,8 @@ compound_page_dtor * const compound_page_dtors[NR_COMPOUND_DTORS] = {
 int min_free_kbytes = 1024;
 int user_min_free_kbytes = -1;
-int watermark_boost_factor __read_mostly = 15000;
-int watermark_scale_factor = 10;
+static int watermark_boost_factor __read_mostly = 15000;
+static int watermark_scale_factor = 10;
 
 /* movable_zone is the "real" zone pages in ZONE_MOVABLE are taken from */
 int movable_zone;
@@ -4917,12 +4916,12 @@ static int __parse_numa_zonelist_order(char *s)
 	return 0;
 }
 
-char numa_zonelist_order[] = "Node";
-
+static char numa_zonelist_order[] = "Node";
+#define NUMA_ZONELIST_ORDER_LEN	16
 /*
  * sysctl handler for numa_zonelist_order
  */
-int numa_zonelist_order_handler(struct ctl_table *table, int write,
+static int numa_zonelist_order_handler(struct ctl_table *table, int write,
 		void *buffer, size_t *length, loff_t *ppos)
 {
 	if (write)
@@ -4930,7 +4929,6 @@ int numa_zonelist_order_handler(struct ctl_table *table, int write,
 	return proc_dostring(table, write, buffer, length, ppos);
 }
 
-
 static int node_load[MAX_NUMNODES];
 
 /**
@@ -5333,6 +5331,7 @@ static int zone_batchsize(struct zone *zone)
 #endif
 }
 
+static int percpu_pagelist_high_fraction;
 static int zone_highsize(struct zone *zone, int batch, int cpu_online)
 {
 #ifdef CONFIG_MMU
@@ -5862,7 +5861,7 @@ postcore_initcall(init_per_zone_wmark_min)
  * that we can call two helper functions whenever min_free_kbytes
  * changes.
  */
-int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
+static int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
 		void *buffer, size_t *length, loff_t *ppos)
 {
 	int rc;
@@ -5878,7 +5877,7 @@ int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
 	return 0;
 }
 
-int watermark_scale_factor_sysctl_handler(struct ctl_table *table, int write,
+static int watermark_scale_factor_sysctl_handler(struct ctl_table *table, int write,
 		void *buffer, size_t *length, loff_t *ppos)
 {
 	int rc;
@@ -5908,7 +5907,7 @@ static void setup_min_unmapped_ratio(void)
 }
 
 
-int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write,
+static int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write,
 		void *buffer, size_t *length, loff_t *ppos)
 {
 	int rc;
@@ -5935,7 +5934,7 @@ static void setup_min_slab_ratio(void)
 		sysctl_min_slab_ratio) / 100;
 }
 
-int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
+static int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
 		void *buffer, size_t *length, loff_t *ppos)
 {
 	int rc;
@@ -5959,8 +5958,8 @@ int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
 * minimum watermarks. The lowmem reserve ratio can only make sense
 * if in function of the boot time zone sizes.
 */
-int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
-		void *buffer, size_t *length, loff_t *ppos)
+static int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table,
+		int write, void *buffer, size_t *length, loff_t *ppos)
 {
 	int i;
 
@@ -5980,7 +5979,7 @@ int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
 * cpu.  It is the fraction of total pages in each zone that a hot per cpu
 * pagelist can have before it gets flushed back to buddy allocator.
 */
-int percpu_pagelist_high_fraction_sysctl_handler(struct ctl_table *table,
+static int percpu_pagelist_high_fraction_sysctl_handler(struct ctl_table *table,
 		int write, void *buffer, size_t *length, loff_t *ppos)
 {
 	struct zone *zone;
@@ -6013,6 +6012,82 @@ int percpu_pagelist_high_fraction_sysctl_handler(struct ctl_table *table,
 	return ret;
 }
 
+static struct ctl_table page_alloc_sysctl_table[] = {
+	{
+		.procname	= "min_free_kbytes",
+		.data		= &min_free_kbytes,
+		.maxlen		= sizeof(min_free_kbytes),
+		.mode		= 0644,
+		.proc_handler	= min_free_kbytes_sysctl_handler,
+		.extra1		= SYSCTL_ZERO,
+	},
+	{
+		.procname	= "watermark_boost_factor",
+		.data		= &watermark_boost_factor,
+		.maxlen		= sizeof(watermark_boost_factor),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec_minmax,
+		.extra1		= SYSCTL_ZERO,
+	},
+	{
+		.procname	= "watermark_scale_factor",
+		.data		= &watermark_scale_factor,
+		.maxlen		= sizeof(watermark_scale_factor),
+		.mode		= 0644,
+		.proc_handler	= watermark_scale_factor_sysctl_handler,
+		.extra1		= SYSCTL_ONE,
+		.extra2		= SYSCTL_THREE_THOUSAND,
+	},
+	{
+		.procname	= "percpu_pagelist_high_fraction",
+		.data		= &percpu_pagelist_high_fraction,
+		.maxlen		= sizeof(percpu_pagelist_high_fraction),
+		.mode		= 0644,
+		.proc_handler	= percpu_pagelist_high_fraction_sysctl_handler,
+		.extra1		= SYSCTL_ZERO,
+	},
+	{
+		.procname	= "lowmem_reserve_ratio",
+		.data		= &sysctl_lowmem_reserve_ratio,
+		.maxlen		= sizeof(sysctl_lowmem_reserve_ratio),
+		.mode		= 0644,
+		.proc_handler	= lowmem_reserve_ratio_sysctl_handler,
+	},
+#ifdef CONFIG_NUMA
+	{
+		.procname	= "numa_zonelist_order",
+		.data		= &numa_zonelist_order,
+		.maxlen		= NUMA_ZONELIST_ORDER_LEN,
+		.mode		= 0644,
+		.proc_handler	= numa_zonelist_order_handler,
+	},
+	{
+		.procname	= "min_unmapped_ratio",
+		.data		= &sysctl_min_unmapped_ratio,
+		.maxlen		= sizeof(sysctl_min_unmapped_ratio),
+		.mode		= 0644,
+		.proc_handler	= sysctl_min_unmapped_ratio_sysctl_handler,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE_HUNDRED,
+	},
+	{
+		.procname	= "min_slab_ratio",
+		.data		= &sysctl_min_slab_ratio,
+		.maxlen		= sizeof(sysctl_min_slab_ratio),
+		.mode		= 0644,
+		.proc_handler	= sysctl_min_slab_ratio_sysctl_handler,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE_HUNDRED,
+	},
+#endif
+	{}
+};
+
+void __init page_alloc_sysctl_init(void)
+{
+	register_sysctl_init("vm", page_alloc_sysctl_table);
+}
+
 #ifdef CONFIG_CONTIG_ALLOC
 /* Usage: See admin-guide/dynamic-debug-howto.rst */
 static void alloc_contig_dump_pages(struct list_head *page_list)
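The move is invisible to userspace: the knobs registered above still appear
under /proc/sys/vm. A self-contained sketch of the register_sysctl_init()
pattern used here, with a made-up knob (example_knob, example_table, and
example_sysctl_init() are hypothetical, not part of this series):

#include <linux/sysctl.h>
#include <linux/init.h>

static int example_knob;

static struct ctl_table example_table[] = {
	{
		.procname	= "example_knob",
		.data		= &example_knob,
		.maxlen		= sizeof(example_knob),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= SYSCTL_ZERO,
	},
	{}	/* empty entry terminates the table */
};

static int __init example_sysctl_init(void)
{
	/* would surface as /proc/sys/vm/example_knob */
	register_sysctl_init("vm", example_table);
	return 0;
}
late_initcall(example_sysctl_init);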
From patchwork Tue May 16 06:38:21 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13242585
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton, Mike Rapoport
CC: David Hildenbrand, Oscar Salvador, "Rafael J. Wysocki", Pavel Machek,
    Len Brown, Luis Chamberlain, Kees Cook, Iurii Zaikin, Kefeng Wang
Subject: [PATCH v2 13/13] mm: page_alloc: move is_check_pages_enabled() into page_alloc.c
Date: Tue, 16 May 2023 14:38:21 +0800
Message-ID: <20230516063821.121844-14-wangkefeng.wang@huawei.com>
In-Reply-To: <20230516063821.121844-1-wangkefeng.wang@huawei.com>
References: <20230516063821.121844-1-wangkefeng.wang@huawei.com>

is_check_pages_enabled() is only used in page_alloc.c, so move it into
page_alloc.c; also use it in free_tail_page_prepare().
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/internal.h   | 5 -----
 mm/page_alloc.c | 7 ++++++-
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 5fdf930a87b5..bb6542279599 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -208,11 +208,6 @@ extern char * const zone_names[MAX_NR_ZONES];
 /* perform sanity checks on struct pages being allocated or freed */
 DECLARE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
 
-static inline bool is_check_pages_enabled(void)
-{
-	return static_branch_unlikely(&check_pages_enabled);
-}
-
 extern int min_free_kbytes;
 
 void setup_per_zone_wmarks(void);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5e8680669388..1023f41de2fb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -983,6 +983,11 @@ static inline bool free_page_is_bad(struct page *page)
 	return true;
 }
 
+static inline bool is_check_pages_enabled(void)
+{
+	return static_branch_unlikely(&check_pages_enabled);
+}
+
 static int free_tail_page_prepare(struct page *head_page, struct page *page)
 {
 	struct folio *folio = (struct folio *)head_page;
@@ -994,7 +999,7 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page)
 	 */
 	BUILD_BUG_ON((unsigned long)LIST_POISON1 & 1);
 
-	if (!static_branch_unlikely(&check_pages_enabled)) {
+	if (!is_check_pages_enabled()) {
 		ret = 0;
 		goto out;
 	}
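As the mm/internal.h context above shows, check_pages_enabled is a static key
(DECLARE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled)), so the wrapper
costs a patched branch rather than a memory load. A self-contained sketch of
that jump-label pattern with a made-up key (example_checks is hypothetical):

#include <linux/jump_label.h>

static DEFINE_STATIC_KEY_FALSE(example_checks);

static inline bool example_checks_enabled(void)
{
	/* compiles to a runtime-patched no-op/jump; near-free when disabled */
	return static_branch_unlikely(&example_checks);
}

static void example_enable_checks(void)
{
	/* flips the branch for every call site at once */
	static_branch_enable(&example_checks);
}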