From patchwork Wed May 15 06:45:06 2024
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13664650
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Matthew Wilcox
Cc: David Hildenbrand, Johannes Weiner, Michal Hocko, Roman Gushchin,
    Shakeel Butt, Kefeng Wang
Subject: [PATCH] mm: refactor folio_undo_large_rmappable()
Date: Wed, 15 May 2024 14:45:06 +0800
Message-ID: <20240515064506.72253-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.41.0

All folio_undo_large_rmappable() callers check folio_test_large() before the
call, and that check is already covered by the folio_order() check inside the
function (folio_order() is 0 for non-large folios), so only the
folio_test_large_rmappable() check needs to be added to the function to avoid
repeating it at every call site. In addition, move all of the checks into a
static inline helper in the header file to save a function call for folios
that are not large-rmappable or whose deferred_list is empty.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/huge_memory.c | 13 +------------
 mm/internal.h    | 17 ++++++++++++++++-
 mm/memcontrol.c  |  3 +--
 mm/page_alloc.c  |  3 +--
 mm/swap.c        |  8 ++------
 mm/vmscan.c      |  8 ++------
 6 files changed, 23 insertions(+), 29 deletions(-)
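An illustrative note, not part of the diff: the patch follows the common
"cheap checks in a static inline header wrapper, slow path out of line"
pattern. A stand-alone user-space model of that pattern (stand-in types and
names, not kernel code) looks roughly like this:

	#include <stdbool.h>
	#include <stdio.h>

	/* stand-in for struct folio; order 0 means a small folio */
	struct fake_folio {
		unsigned int order;
		bool large_rmappable;
		bool on_deferred_list;
	};

	/* out-of-line slow path, the analogue of __folio_undo_large_rmappable() */
	static void __undo_large_rmappable_slow(struct fake_folio *f)
	{
		/* the real code takes split_queue_lock and dequeues the folio here */
		printf("slow path taken for order-%u folio\n", f->order);
		f->on_deferred_list = false;
	}

	/* inline fast path: all cheap checks happen at the call site */
	static inline void undo_large_rmappable(struct fake_folio *f)
	{
		if (f->order <= 1 || !f->large_rmappable)
			return;
		if (!f->on_deferred_list)
			return;
		__undo_large_rmappable_slow(f);
	}

	int main(void)
	{
		struct fake_folio small = { .order = 0 };
		struct fake_folio queued = {
			.order = 9, .large_rmappable = true, .on_deferred_list = true,
		};

		undo_large_rmappable(&small);	/* returns before the out-of-line call */
		undo_large_rmappable(&queued);	/* reaches the slow path once */
		return 0;
	}

With the checks inlined this way, call sites such as free_unref_folios() no
longer pay an out-of-line call for order-0/order-1 folios or for large folios
that were never queued for deferred split.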
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9efb6fefc391..2e5c5690449a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3257,22 +3257,11 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	return ret;
 }
 
-void folio_undo_large_rmappable(struct folio *folio)
+void __folio_undo_large_rmappable(struct folio *folio)
 {
 	struct deferred_split *ds_queue;
 	unsigned long flags;
 
-	if (folio_order(folio) <= 1)
-		return;
-
-	/*
-	 * At this point, there is no one trying to add the folio to
-	 * deferred_list. If folio is not in deferred_list, it's safe
-	 * to check without acquiring the split_queue_lock.
-	 */
-	if (data_race(list_empty(&folio->_deferred_list)))
-		return;
-
 	ds_queue = get_deferred_split_queue(folio);
 	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
 	if (!list_empty(&folio->_deferred_list)) {
diff --git a/mm/internal.h b/mm/internal.h
index b2c75b12014e..447171d171ce 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -605,7 +605,22 @@ static inline void folio_set_order(struct folio *folio, unsigned int order)
 #endif
 }
 
-void folio_undo_large_rmappable(struct folio *folio);
+void __folio_undo_large_rmappable(struct folio *folio);
+static inline void folio_undo_large_rmappable(struct folio *folio)
+{
+	if (folio_order(folio) <= 1 || !folio_test_large_rmappable(folio))
+		return;
+
+	/*
+	 * At this point, there is no one trying to add the folio to
+	 * deferred_list. If folio is not in deferred_list, it's safe
+	 * to check without acquiring the split_queue_lock.
+	 */
+	if (data_race(list_empty(&folio->_deferred_list)))
+		return;
+
+	__folio_undo_large_rmappable(folio);
+}
 
 static inline struct folio *page_rmappable_folio(struct page *page)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index feb6651ee1e8..cdf6b595e40e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7875,8 +7875,7 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
 	 * In addition, the old folio is about to be freed after migration, so
 	 * removing from the split queue a bit earlier seems reasonable.
 	 */
-	if (folio_test_large(old) && folio_test_large_rmappable(old))
-		folio_undo_large_rmappable(old);
+	folio_undo_large_rmappable(old);
 
 	old->memcg_data = 0;
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cd584aace6bf..b1e3eb5787de 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2645,8 +2645,7 @@ void free_unref_folios(struct folio_batch *folios)
 		unsigned long pfn = folio_pfn(folio);
 		unsigned int order = folio_order(folio);
 
-		if (order > 0 && folio_test_large_rmappable(folio))
-			folio_undo_large_rmappable(folio);
+		folio_undo_large_rmappable(folio);
 		if (!free_pages_prepare(&folio->page, order))
 			continue;
 		/*
diff --git a/mm/swap.c b/mm/swap.c
index 67786cb77130..dc205bdfbbd4 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -123,8 +123,7 @@ void __folio_put(struct folio *folio)
 	}
 
 	page_cache_release(folio);
-	if (folio_test_large(folio) && folio_test_large_rmappable(folio))
-		folio_undo_large_rmappable(folio);
+	folio_undo_large_rmappable(folio);
 	mem_cgroup_uncharge(folio);
 	free_unref_page(&folio->page, folio_order(folio));
 }
@@ -1002,10 +1001,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
 			free_huge_folio(folio);
 			continue;
 		}
-		if (folio_test_large(folio) &&
-		    folio_test_large_rmappable(folio))
-			folio_undo_large_rmappable(folio);
-
+		folio_undo_large_rmappable(folio);
 		__page_cache_release(folio, &lruvec, &flags);
 
 		if (j != i)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6981a71c8ef0..615d2422d0e4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1454,9 +1454,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		 */
 		nr_reclaimed += nr_pages;
 
-		if (folio_test_large(folio) &&
-		    folio_test_large_rmappable(folio))
-			folio_undo_large_rmappable(folio);
+		folio_undo_large_rmappable(folio);
 		if (folio_batch_add(&free_folios, folio) == 0) {
 			mem_cgroup_uncharge_folios(&free_folios);
 			try_to_unmap_flush();
@@ -1863,9 +1861,7 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
 		if (unlikely(folio_put_testzero(folio))) {
 			__folio_clear_lru_flags(folio);
 
-			if (folio_test_large(folio) &&
-			    folio_test_large_rmappable(folio))
-				folio_undo_large_rmappable(folio);
+			folio_undo_large_rmappable(folio);
 			if (folio_batch_add(&free_folios, folio) == 0) {
 				spin_unlock_irq(&lruvec->lru_lock);
 				mem_cgroup_uncharge_folios(&free_folios);