From patchwork Sun Aug  6 07:48:51 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xueshi Hu <xueshi.hu@smartx.com>
X-Patchwork-Id: 13342742
From: Xueshi Hu <xueshi.hu@smartx.com>
To: mike.kravetz@oracle.com, muchun.song@linux.dev, corbet@lwn.net,
	akpm@linux-foundation.org, n-horiguchi@ah.jp.nec.com, osalvador@suse.de
Cc: linux-mm@kvack.org, Xueshi Hu <xueshi.hu@smartx.com>
Subject: [PATCH v2 2/4] mm/hugetlb: clean up hstate::max_huge_pages
Date: Sun,  6 Aug 2023 15:48:51 +0800
Message-Id: <20230806074853.317203-3-xueshi.hu@smartx.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230806074853.317203-1-xueshi.hu@smartx.com>
References: <20230806074853.317203-1-xueshi.hu@smartx.com>
Presently, the only consumers of hstate::max_huge_pages are
hugetlb_sysctl_handler_common() and hugetlbfs_size_to_hpages(). The
former has already been switched to hstate::nr_huge_pages, and the
latter is trivially converted. After the hugetlb subsystem has been
initialized, hstate::max_huge_pages always equals
persistent_huge_pages(), and keeping that equation true has proven to
be a burden [1][2].

After this patch, hstate::max_huge_pages is only used when parsing
kernel command line parameters. Also rename set_max_huge_pages() to
set_nr_huge_pages(), which better describes what the function now does.

[1]: Commit a43a83c79b4f ("mm/hugetlb: fix incorrect update of max_huge_pages")
[2]: Commit c1470b33bb6e ("mm/hugetlb: fix incorrect hugepages count during mem hotplug")

Signed-off-by: Xueshi Hu <xueshi.hu@smartx.com>
---
 fs/hugetlbfs/inode.c |  2 +-
 mm/hugetlb.c         | 24 +++++-------------------
 2 files changed, 6 insertions(+), 20 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 316c4cebd3f3..cd1a3e4bf8fb 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -1375,7 +1375,7 @@ hugetlbfs_size_to_hpages(struct hstate *h, unsigned long long size_opt,
 
 	if (val_type == SIZE_PERCENT) {
 		size_opt <<= huge_page_shift(h);
-		size_opt *= h->max_huge_pages;
+		size_opt *= (h->nr_huge_pages - h->surplus_huge_pages);
 		do_div(size_opt, 100);
 	}
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 76af189053f0..56647235ab21 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2343,14 +2343,13 @@ int dissolve_free_huge_page(struct page *page)
 		}
 
 		remove_hugetlb_folio(h, folio, false);
-		h->max_huge_pages--;
 		spin_unlock_irq(&hugetlb_lock);
 
 		/*
 		 * Normally update_and_free_hugtlb_folio will allocate required vmemmmap
 		 * before freeing the page. update_and_free_hugtlb_folio will fail to
 		 * free the page if it can not allocate required vmemmap. We
-		 * need to adjust max_huge_pages if the page is not freed.
+		 * need to adjust nr_huge_pages if the page is not freed.
 		 * Attempt to allocate vmemmmap here so that we can take
 		 * appropriate action on failure.
 		 */
@@ -2360,7 +2359,6 @@ int dissolve_free_huge_page(struct page *page)
 		} else {
 			spin_lock_irq(&hugetlb_lock);
 			add_hugetlb_folio(h, folio, false);
-			h->max_huge_pages++;
 			spin_unlock_irq(&hugetlb_lock);
 		}
 
@@ -3274,8 +3272,6 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
 	string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
 	pr_warn("HugeTLB: allocating %u of page size %s failed node%d. Only allocated %lu hugepages.\n",
 		h->max_huge_pages_node[nid], buf, nid, i);
-	h->max_huge_pages -= (h->max_huge_pages_node[nid] - i);
-	h->max_huge_pages_node[nid] = i;
 }
 
 static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
@@ -3336,7 +3332,6 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 		string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
 		pr_warn("HugeTLB: allocating %lu of page size %s failed. Only allocated %lu hugepages.\n",
 			h->max_huge_pages, buf, i);
-		h->max_huge_pages = i;
 	}
 	kfree(node_alloc_noretry);
 }
 
@@ -3460,7 +3455,7 @@ static int adjust_pool_surplus(struct hstate *h, nodemask_t *nodes_allowed,
 }
 
 #define persistent_huge_pages(h) (h->nr_huge_pages - h->surplus_huge_pages)
-static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
+static int set_nr_huge_pages(struct hstate *h, unsigned long count, int nid,
 			      nodemask_t *nodes_allowed)
 {
 	unsigned long min_count, ret;
@@ -3601,7 +3596,6 @@ static int set_nr_huge_pages(struct hstate *h, unsigned long count, int nid,
 		break;
 	}
out:
-	h->max_huge_pages = persistent_huge_pages(h);
 	spin_unlock_irq(&hugetlb_lock);
 	mutex_unlock(&h->resize_lock);
 
@@ -3639,7 +3633,7 @@ static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
 	destroy_compound_hugetlb_folio_for_demote(folio, huge_page_order(h));
 
 	/*
-	 * Taking target hstate mutex synchronizes with set_max_huge_pages.
+	 * Taking target hstate mutex synchronizes with set_nr_huge_pages.
 	 * Without the mutex, pages added to target hstate could be marked
 	 * as surplus.
 	 *
@@ -3664,14 +3658,6 @@ static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
 
 	spin_lock_irq(&hugetlb_lock);
 
-	/*
-	 * Not absolutely necessary, but for consistency update max_huge_pages
-	 * based on pool changes for the demoted page.
-	 */
-	h->max_huge_pages--;
-	target_hstate->max_huge_pages +=
-		pages_per_huge_page(h) / pages_per_huge_page(target_hstate);
-
 	return rc;
 }
 
@@ -3770,13 +3756,13 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
 	} else {
 		/*
 		 * Node specific request. count adjustment happens in
-		 * set_max_huge_pages() after acquiring hugetlb_lock.
+		 * set_nr_huge_pages() after acquiring hugetlb_lock.
 		 */
 		init_nodemask_of_node(&nodes_allowed, nid);
 		n_mask = &nodes_allowed;
 	}
 
-	err = set_max_huge_pages(h, count, nid, n_mask);
+	err = set_nr_huge_pages(h, count, nid, n_mask);
 
 	return err ? err : len;
 }
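
---

A note for reviewers less familiar with the size=<percent> mount option:
the hunk in hugetlbfs_size_to_hpages() turns a percentage of the
persistent pool into a byte budget, which the caller then converts back
into a page count. Below is a small userspace sketch of that arithmetic;
the kernel's do_div() is replaced by plain 64-bit division, and the pool
numbers (2 MiB pages, 1024 in the pool, 24 surplus) are invented purely
for illustration.

#include <stdio.h>

/*
 * Userspace sketch of the size=<percent> math in
 * hugetlbfs_size_to_hpages() as patched above.  do_div() becomes
 * ordinary 64-bit division; the pool numbers are hypothetical.
 */
int main(void)
{
	unsigned long long size_opt = 50;	/* "size=50%" mount option */
	unsigned int shift = 21;		/* huge_page_shift(h): 2 MiB pages */
	unsigned long nr_huge_pages = 1024;	/* hypothetical pool size */
	unsigned long surplus_huge_pages = 24;	/* hypothetical surplus pages */

	size_opt <<= shift;			/* size_opt <<= huge_page_shift(h); */
	/* size_opt *= (h->nr_huge_pages - h->surplus_huge_pages); */
	size_opt *= nr_huge_pages - surplus_huge_pages;
	size_opt /= 100;			/* do_div(size_opt, 100); */

	/* the caller converts the byte budget back into huge pages */
	printf("%llu bytes => %llu huge pages\n", size_opt, size_opt >> shift);
	return 0;
}

With a persistent pool of 1000 pages (1024 minus 24 surplus), size=50%
yields 500 pages. Before this patch the same result depended on
max_huge_pages being kept equal to persistent_huge_pages(h) at every
resize, dissolve and demote, which is exactly the bookkeeping the series
removes.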