From patchwork Sun Aug 11 21:21:29 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13759890
Date: Sun, 11 Aug 2024 15:21:29 -0600
In-Reply-To: <20240811212129.3074314-1-yuzhao@google.com>
References: <20240811212129.3074314-1-yuzhao@google.com>
X-Mailer: git-send-email 2.46.0.76.ge559c4bf1a-goog
Message-ID: <20240811212129.3074314-4-yuzhao@google.com>
Subject: [PATCH mm-unstable v1 3/3] mm/hugetlb: use __GFP_COMP for gigantic folios
From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton, Muchun Song
Cc: "Matthew Wilcox (Oracle)", Zi Yan, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao

Use __GFP_COMP for gigantic folios to greatly reduce not only the code
but also the allocation and free time.

LOC (approximately): -200, +50

Allocate and free 500 1GB hugeTLB memory without HVO by:
  time echo 500 >/sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
  time echo 0 >/sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

          Before  After
  Alloc   ~13s    ~10s
  Free    ~15s    <1s

The above magnitude generally holds for multiple x86 and arm64 CPU
models.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/hugetlb.h |   9 +-
 mm/hugetlb.c            | 244 ++++++++--------------------------
 2 files changed, 50 insertions(+), 203 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 3100a52ceb73..98c47c394b89 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -896,10 +896,11 @@ static inline bool hugepage_movable_supported(struct hstate *h)
 /* Movability of hugepages depends on migration support. */
 static inline gfp_t htlb_alloc_mask(struct hstate *h)
 {
-	if (hugepage_movable_supported(h))
-		return GFP_HIGHUSER_MOVABLE;
-	else
-		return GFP_HIGHUSER;
+	gfp_t gfp = __GFP_COMP | __GFP_NOWARN;
+
+	gfp |= hugepage_movable_supported(h) ? GFP_HIGHUSER_MOVABLE : GFP_HIGHUSER;
+
+	return gfp;
 }
 
 static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1c13e65ab119..691f63408d50 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1512,43 +1512,7 @@ static int hstate_next_node_to_free(struct hstate *h, nodemask_t *nodes_allowed)
 		((node = hstate_next_node_to_free(hs, mask)) || 1);	\
 		nr_nodes--)
 
-/* used to demote non-gigantic_huge pages as well */
-static void __destroy_compound_gigantic_folio(struct folio *folio,
-					unsigned int order, bool demote)
-{
-	int i;
-	int nr_pages = 1 << order;
-	struct page *p;
-
-	atomic_set(&folio->_entire_mapcount, 0);
-	atomic_set(&folio->_large_mapcount, 0);
-	atomic_set(&folio->_pincount, 0);
-
-	for (i = 1; i < nr_pages; i++) {
-		p = folio_page(folio, i);
-		p->flags &= ~PAGE_FLAGS_CHECK_AT_FREE;
-		p->mapping = NULL;
-		clear_compound_head(p);
-		if (!demote)
-			set_page_refcounted(p);
-	}
-
-	__folio_clear_head(folio);
-}
-
-static void destroy_compound_hugetlb_folio_for_demote(struct folio *folio,
-					unsigned int order)
-{
-	__destroy_compound_gigantic_folio(folio, order, true);
-}
-
 #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
-static void destroy_compound_gigantic_folio(struct folio *folio,
-					unsigned int order)
-{
-	__destroy_compound_gigantic_folio(folio, order, false);
-}
-
 static void free_gigantic_folio(struct folio *folio, unsigned int order)
 {
 	/*
@@ -1569,38 +1533,52 @@ static void free_gigantic_folio(struct folio *folio, unsigned int order)
 static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask,
 		int nid, nodemask_t *nodemask)
 {
-	struct page *page;
-	unsigned long nr_pages = pages_per_huge_page(h);
+	struct folio *folio;
+	int order = huge_page_order(h);
+	bool retry = false;
+
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
-
+retry:
+	folio = NULL;
 #ifdef CONFIG_CMA
 	{
 		int node;
 
-		if (hugetlb_cma[nid]) {
-			page = cma_alloc(hugetlb_cma[nid], nr_pages,
-					huge_page_order(h), true);
-			if (page)
-				return page_folio(page);
-		}
+		if (hugetlb_cma[nid])
+			folio = cma_alloc_folio(hugetlb_cma[nid], order, gfp_mask);
 
-		if (!(gfp_mask & __GFP_THISNODE)) {
+		if (!folio && !(gfp_mask & __GFP_THISNODE)) {
 			for_each_node_mask(node, *nodemask) {
 				if (node == nid || !hugetlb_cma[node])
 					continue;
 
-				page = cma_alloc(hugetlb_cma[node], nr_pages,
-						huge_page_order(h), true);
-				if (page)
-					return page_folio(page);
+				folio = cma_alloc_folio(hugetlb_cma[node], order, gfp_mask);
+				if (folio)
+					break;
 			}
 		}
 	}
 #endif
+	if (!folio) {
+		struct page *page = alloc_contig_pages(1 << order, gfp_mask, nid, nodemask);
 
-	page = alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
-	return page ? page_folio(page) : NULL;
+		if (!page)
+			return NULL;
+
+		folio = page_folio(page);
+	}
+
+	if (folio_ref_freeze(folio, 1))
+		return folio;
+
+	pr_warn("HugeTLB: unexpected refcount on PFN %lu\n", folio_pfn(folio));
+	free_gigantic_folio(folio, order);
+	if (!retry) {
+		retry = true;
+		goto retry;
+	}
+	return NULL;
 }
 
 #else /* !CONFIG_CONTIG_ALLOC */
@@ -1619,8 +1597,6 @@ static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask,
 }
 static inline void free_gigantic_folio(struct folio *folio,
 						unsigned int order) { }
-static inline void destroy_compound_gigantic_folio(struct folio *folio,
-						unsigned int order) { }
 #endif
 
 /*
@@ -1747,19 +1723,17 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
 	folio_clear_hugetlb_hwpoison(folio);
 
 	folio_ref_unfreeze(folio, 1);
+	INIT_LIST_HEAD(&folio->_deferred_list);
 
 	/*
 	 * Non-gigantic pages demoted from CMA allocated gigantic pages
 	 * need to be given back to CMA in free_gigantic_folio.
 	 */
 	if (hstate_is_gigantic(h) ||
-	    hugetlb_cma_folio(folio, huge_page_order(h))) {
-		destroy_compound_gigantic_folio(folio, huge_page_order(h));
+	    hugetlb_cma_folio(folio, huge_page_order(h)))
 		free_gigantic_folio(folio, huge_page_order(h));
-	} else {
-		INIT_LIST_HEAD(&folio->_deferred_list);
+	else
 		folio_put(folio);
-	}
 }
 
 /*
@@ -2032,95 +2006,6 @@ static void prep_new_hugetlb_folio(struct hstate *h, struct folio *folio, int nid)
 	spin_unlock_irq(&hugetlb_lock);
 }
 
-static bool __prep_compound_gigantic_folio(struct folio *folio,
-					unsigned int order, bool demote)
-{
-	int i, j;
-	int nr_pages = 1 << order;
-	struct page *p;
-
-	__folio_clear_reserved(folio);
-	for (i = 0; i < nr_pages; i++) {
-		p = folio_page(folio, i);
-
-		/*
-		 * For gigantic hugepages allocated through bootmem at
-		 * boot, it's safer to be consistent with the not-gigantic
-		 * hugepages and clear the PG_reserved bit from all tail pages
-		 * too. Otherwise drivers using get_user_pages() to access tail
-		 * pages may get the reference counting wrong if they see
-		 * PG_reserved set on a tail page (despite the head page not
-		 * having PG_reserved set). Enforcing this consistency between
-		 * head and tail pages allows drivers to optimize away a check
-		 * on the head page when they need know if put_page() is needed
-		 * after get_user_pages().
-		 */
-		if (i != 0)	/* head page cleared above */
-			__ClearPageReserved(p);
-		/*
-		 * Subtle and very unlikely
-		 *
-		 * Gigantic 'page allocators' such as memblock or cma will
-		 * return a set of pages with each page ref counted. We need
-		 * to turn this set of pages into a compound page with tail
-		 * page ref counts set to zero. Code such as speculative page
-		 * cache adding could take a ref on a 'to be' tail page.
-		 * We need to respect any increased ref count, and only set
-		 * the ref count to zero if count is currently 1. If count
-		 * is not 1, we return an error. An error return indicates
-		 * the set of pages can not be converted to a gigantic page.
-		 * The caller who allocated the pages should then discard the
-		 * pages using the appropriate free interface.
-		 *
-		 * In the case of demote, the ref count will be zero.
-		 */
-		if (!demote) {
-			if (!page_ref_freeze(p, 1)) {
-				pr_warn("HugeTLB page can not be used due to unexpected inflated ref count\n");
-				goto out_error;
-			}
-		} else {
-			VM_BUG_ON_PAGE(page_count(p), p);
-		}
-		if (i != 0)
-			set_compound_head(p, &folio->page);
-	}
-	__folio_set_head(folio);
-	/* we rely on prep_new_hugetlb_folio to set the hugetlb flag */
-	folio_set_order(folio, order);
-	atomic_set(&folio->_entire_mapcount, -1);
-	atomic_set(&folio->_large_mapcount, -1);
-	atomic_set(&folio->_pincount, 0);
-	return true;
-
-out_error:
-	/* undo page modifications made above */
-	for (j = 0; j < i; j++) {
-		p = folio_page(folio, j);
-		if (j != 0)
-			clear_compound_head(p);
-		set_page_refcounted(p);
-	}
-	/* need to clear PG_reserved on remaining tail pages */
-	for (; j < nr_pages; j++) {
-		p = folio_page(folio, j);
-		__ClearPageReserved(p);
-	}
-	return false;
-}
-
-static bool prep_compound_gigantic_folio(struct folio *folio,
-							unsigned int order)
-{
-	return __prep_compound_gigantic_folio(folio, order, false);
-}
-
-static bool prep_compound_gigantic_folio_for_demote(struct folio *folio,
-							unsigned int order)
-{
-	return __prep_compound_gigantic_folio(folio, order, true);
-}
-
 /*
  * Find and lock address space (mapping) in write mode.
  *
@@ -2159,7 +2044,6 @@ static struct folio *alloc_buddy_hugetlb_folio(struct hstate *h,
 	 */
 	if (node_alloc_noretry && node_isset(nid, *node_alloc_noretry))
 		alloc_try_hard = false;
-	gfp_mask |= __GFP_COMP|__GFP_NOWARN;
 	if (alloc_try_hard)
 		gfp_mask |= __GFP_RETRY_MAYFAIL;
 	if (nid == NUMA_NO_NODE)
@@ -2206,48 +2090,14 @@ static struct folio *alloc_buddy_hugetlb_folio(struct hstate *h,
 	return folio;
 }
 
-static struct folio *__alloc_fresh_hugetlb_folio(struct hstate *h,
-		gfp_t gfp_mask, int nid, nodemask_t *nmask,
-		nodemask_t *node_alloc_noretry)
-{
-	struct folio *folio;
-	bool retry = false;
-
-retry:
-	if (hstate_is_gigantic(h))
-		folio = alloc_gigantic_folio(h, gfp_mask, nid, nmask);
-	else
-		folio = alloc_buddy_hugetlb_folio(h, gfp_mask,
-				nid, nmask, node_alloc_noretry);
-	if (!folio)
-		return NULL;
-
-	if (hstate_is_gigantic(h)) {
-		if (!prep_compound_gigantic_folio(folio, huge_page_order(h))) {
-			/*
-			 * Rare failure to convert pages to compound page.
-			 * Free pages and try again - ONCE!
-			 */
-			free_gigantic_folio(folio, huge_page_order(h));
-			if (!retry) {
-				retry = true;
-				goto retry;
-			}
-			return NULL;
-		}
-	}
-
-	return folio;
-}
-
 static struct folio *only_alloc_fresh_hugetlb_folio(struct hstate *h,
 		gfp_t gfp_mask, int nid, nodemask_t *nmask,
 		nodemask_t *node_alloc_noretry)
 {
 	struct folio *folio;
 
-	folio = __alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask,
-			node_alloc_noretry);
+	folio = hstate_is_gigantic(h) ? alloc_gigantic_folio(h, gfp_mask, nid, nmask) :
+		alloc_buddy_hugetlb_folio(h, gfp_mask, nid, nmask, node_alloc_noretry);
 	if (folio)
 		init_new_hugetlb_folio(h, folio);
 	return folio;
@@ -2265,7 +2115,8 @@ static struct folio *alloc_fresh_hugetlb_folio(struct hstate *h,
 {
 	struct folio *folio;
 
-	folio = __alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL);
+	folio = hstate_is_gigantic(h) ? alloc_gigantic_folio(h, gfp_mask, nid, nmask) :
+		alloc_buddy_hugetlb_folio(h, gfp_mask, nid, nmask, NULL);
 	if (!folio)
 		return NULL;
 
@@ -2549,9 +2400,8 @@ struct folio *alloc_buddy_hugetlb_folio_with_mpol(struct hstate *h,
 
 	nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask);
 	if (mpol_is_preferred_many(mpol)) {
-		gfp_t gfp = gfp_mask | __GFP_NOWARN;
+		gfp_t gfp = gfp_mask & ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
 
-		gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
 		folio = alloc_surplus_hugetlb_folio(h, gfp, nid, nodemask);
 
 		/* Fallback to all nodes if page==NULL */
@@ -3333,6 +3183,7 @@ static void __init hugetlb_folio_init_tail_vmemmap(struct folio *folio,
 	for (pfn = head_pfn + start_page_number; pfn < end_pfn; pfn++) {
 		struct page *page = pfn_to_page(pfn);
 
+		__ClearPageReserved(folio_page(folio, pfn - head_pfn));
 		__init_single_page(page, pfn, zone, nid);
 		prep_compound_tail((struct page *)folio, pfn - head_pfn);
 		ret = page_ref_freeze(page, 1);
@@ -3949,21 +3800,16 @@ static long demote_free_hugetlb_folios(struct hstate *src, struct hstate *dst,
 			continue;
 
 		list_del(&folio->lru);
-		/*
-		 * Use destroy_compound_hugetlb_folio_for_demote for all huge page
-		 * sizes as it will not ref count folios.
-		 */
-		destroy_compound_hugetlb_folio_for_demote(folio, huge_page_order(src));
+
+		split_page_owner(&folio->page, huge_page_order(src), huge_page_order(dst));
+		pgalloc_tag_split(&folio->page, 1 << huge_page_order(src));
 
 		for (i = 0; i < pages_per_huge_page(src); i += pages_per_huge_page(dst)) {
 			struct page *page = folio_page(folio, i);
 
-			if (hstate_is_gigantic(dst))
-				prep_compound_gigantic_folio_for_demote(page_folio(page),
-									dst->order);
-			else
-				prep_compound_page(page, dst->order);
-			set_page_private(page, 0);
+			page->mapping = NULL;
+			clear_compound_head(page);
+			prep_compound_page(page, dst->order);
 			init_new_hugetlb_folio(dst, page_folio(page));
 			list_add(&page->lru, &dst_list);
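
[Not part of the patch] For anyone reproducing the timings quoted in the
changelog, below is a minimal userspace harness equivalent to the
"time echo ..." commands above. Only the sysfs path and the 500-folio count
come from the changelog; the program itself is an illustrative sketch (file
name arbitrary), needs root, and assumes 1GB hugeTLB support is configured.

  /*
   * Time writes to nr_hugepages, mirroring the shell commands in the
   * changelog: allocate 500 1GB huge pages, then free them all.
   */
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  static const char *path =
  	"/sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages";

  /* Write "count" to nr_hugepages and return elapsed wall-clock seconds. */
  static double set_nr_hugepages(long count)
  {
  	struct timespec t0, t1;
  	FILE *f = fopen(path, "w");

  	if (!f) {
  		perror(path);
  		exit(1);
  	}
  	clock_gettime(CLOCK_MONOTONIC, &t0);
  	fprintf(f, "%ld\n", count);
  	fclose(f);	/* the sysfs write completes on close */
  	clock_gettime(CLOCK_MONOTONIC, &t1);
  	return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
  }

  int main(void)
  {
  	printf("alloc 500x1GB: %.2fs\n", set_nr_hugepages(500));
  	printf("free  500x1GB: %.2fs\n", set_nr_hugepages(0));
  	return 0;
  }

Build with "gcc -O2 nr_hugepages_timer.c -o nr_hugepages_timer" and run as
root; the Before/After rows in the table above correspond to the two prints.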