From patchwork Sun Jul 30 15:16:01 2023
X-Patchwork-Submitter: Usama Arif <usama.arif@bytedance.com>
X-Patchwork-Id: 13333452
From: Usama Arif <usama.arif@bytedance.com>
To: linux-mm@kvack.org, muchun.song@linux.dev, mike.kravetz@oracle.com,
	rppt@kernel.org
Cc: linux-kernel@vger.kernel.org, fam.zheng@bytedance.com,
	liangma@liangbit.com, simon.evans@bytedance.com,
	punit.agrawal@bytedance.com, Usama Arif <usama.arif@bytedance.com>
Subject: [v2 1/6] mm: hugetlb: Skip prep of tail pages when HVO is enabled
Date: Sun, 30 Jul 2023 16:16:01 +0100
Message-Id: <20230730151606.2871391-2-usama.arif@bytedance.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230730151606.2871391-1-usama.arif@bytedance.com>
References: <20230730151606.2871391-1-usama.arif@bytedance.com>
MIME-Version: 1.0
When the vmemmap is optimizable, hugetlb_vmemmap_optimize will free all of the
duplicated tail struct pages while the new hugepage is being prepared, so there
is no need to prep them in the first place. For 1G hugepages on x86, this avoids
preparing 262144 - 64 = 262080 struct pages per hugepage.

The __prep_compound_gigantic_folio indirection is also removed: it only existed
to provide wrapper functions that indicate demotion, which can instead be
signalled with a bool argument.

Signed-off-by: Usama Arif <usama.arif@bytedance.com>
---
(A standalone worked example of the struct page arithmetic above is included
after the diff.)

 mm/hugetlb.c         | 32 ++++++++++++++------------------
 mm/hugetlb_vmemmap.c |  2 +-
 mm/hugetlb_vmemmap.h | 15 +++++++++++----
 3 files changed, 26 insertions(+), 23 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 64a3239b6407..541c07b6d60f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1942,14 +1942,23 @@ static void prep_new_hugetlb_folio(struct hstate *h, struct folio *folio, int ni
 	spin_unlock_irq(&hugetlb_lock);
 }
 
-static bool __prep_compound_gigantic_folio(struct folio *folio,
-						unsigned int order, bool demote)
+static bool prep_compound_gigantic_folio(struct folio *folio, struct hstate *h, bool demote)
 {
 	int i, j;
+	int order = huge_page_order(h);
 	int nr_pages = 1 << order;
 	struct page *p;
 
 	__folio_clear_reserved(folio);
+
+	/*
+	 * No need to prep pages that will be freed later by hugetlb_vmemmap_optimize.
+	 * Hence, reduce nr_pages to the pages that will be kept.
+	 */
+	if (IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP) &&
+	    vmemmap_should_optimize(h, &folio->page))
+		nr_pages = HUGETLB_VMEMMAP_RESERVE_SIZE / sizeof(struct page);
+
 	for (i = 0; i < nr_pages; i++) {
 		p = folio_page(folio, i);
 
@@ -2019,18 +2028,6 @@ static bool __prep_compound_gigantic_folio(struct folio *folio,
 	return false;
 }
 
-static bool prep_compound_gigantic_folio(struct folio *folio,
-						unsigned int order)
-{
-	return __prep_compound_gigantic_folio(folio, order, false);
-}
-
-static bool prep_compound_gigantic_folio_for_demote(struct folio *folio,
-						unsigned int order)
-{
-	return __prep_compound_gigantic_folio(folio, order, true);
-}
-
 /*
  * PageHuge() only returns true for hugetlbfs pages, but not for normal or
  * transparent huge pages. See the PageTransHuge() documentation for more
@@ -2185,7 +2182,7 @@ static struct folio *alloc_fresh_hugetlb_folio(struct hstate *h,
 	if (!folio)
 		return NULL;
 	if (hstate_is_gigantic(h)) {
-		if (!prep_compound_gigantic_folio(folio, huge_page_order(h))) {
+		if (!prep_compound_gigantic_folio(folio, h, false)) {
 			/*
 			 * Rare failure to convert pages to compound page.
 			 * Free pages and try again - ONCE!
@@ -3201,7 +3198,7 @@ static void __init gather_bootmem_prealloc(void)
 
 		VM_BUG_ON(!hstate_is_gigantic(h));
 		WARN_ON(folio_ref_count(folio) != 1);
-		if (prep_compound_gigantic_folio(folio, huge_page_order(h))) {
+		if (prep_compound_gigantic_folio(folio, h, false)) {
 			WARN_ON(folio_test_reserved(folio));
 			prep_new_hugetlb_folio(h, folio, folio_nid(folio));
 			free_huge_page(page); /* add to the hugepage allocator */
@@ -3624,8 +3621,7 @@ static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
 		subpage = folio_page(folio, i);
 		inner_folio = page_folio(subpage);
 		if (hstate_is_gigantic(target_hstate))
-			prep_compound_gigantic_folio_for_demote(inner_folio,
-							target_hstate->order);
+			prep_compound_gigantic_folio(inner_folio, target_hstate, true);
 		else
 			prep_compound_page(subpage, target_hstate->order);
 		folio_change_private(inner_folio, NULL);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index c2007ef5e9b0..b721e87de2b3 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -486,7 +486,7 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
 }
 
 /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
-static bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
+bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
 {
 	if (!READ_ONCE(vmemmap_optimize_enabled))
 		return false;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 25bd0e002431..3e7978a9af73 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -10,16 +10,17 @@
 #define _LINUX_HUGETLB_VMEMMAP_H
 #include <linux/hugetlb.h>
 
-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
-int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head);
-void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head);
-
 /*
  * Reserve one vmemmap page, all vmemmap addresses are mapped to it. See
  * Documentation/vm/vmemmap_dedup.rst.
  */
 #define HUGETLB_VMEMMAP_RESERVE_SIZE	PAGE_SIZE
 
+#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head);
+void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head);
+bool vmemmap_should_optimize(const struct hstate *h, const struct page *head);
+
 static inline unsigned int hugetlb_vmemmap_size(const struct hstate *h)
 {
 	return pages_per_huge_page(h) * sizeof(struct page);
@@ -51,6 +52,12 @@ static inline unsigned int hugetlb_vmemmap_optimizable_size(const struct hstate
 {
 	return 0;
 }
+
+static inline bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
+{
+	return false;
+}
+
 #endif /* CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP */
 
 static inline bool hugetlb_vmemmap_optimizable(const struct hstate *h)
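
As a quick sanity check of the savings quoted in the commit message, below is a
minimal standalone userspace sketch of the arithmetic (not kernel code). It
assumes the usual x86-64 values of a 4 KiB base page and a 64-byte struct page,
and mirrors the HUGETLB_VMEMMAP_RESERVE_SIZE / sizeof(struct page) calculation
used in the patch; the variable names are local stand-ins, not the kernel
macros.

/*
 * Standalone sketch of the struct page arithmetic from the commit message.
 * Assumes a 4 KiB base page and a 64-byte struct page (typical x86-64);
 * these are stand-in values, not the kernel's PAGE_SIZE / sizeof(struct page).
 */
#include <stdio.h>

int main(void)
{
	const unsigned long page_size = 4096;           /* base page size */
	const unsigned long struct_page_size = 64;      /* sizeof(struct page) */
	const unsigned long hugepage_size = 1UL << 30;  /* 1G hugepage */

	/* struct pages backing one 1G hugepage before HVO */
	unsigned long nr_pages = hugepage_size / page_size;       /* 262144 */

	/* with HVO, only one vmemmap page worth of struct pages is kept,
	 * i.e. HUGETLB_VMEMMAP_RESERVE_SIZE / sizeof(struct page) of them */
	unsigned long kept = page_size / struct_page_size;        /* 64 */

	printf("tail struct pages skipped per 1G hugepage: %lu\n",
	       nr_pages - kept);                                   /* 262080 */
	return 0;
}

Compiled and run with those values, this prints 262080, matching the figure in
the commit message; the saving per hugepage is pages_per_huge_page(h) minus the
struct pages covered by the single reserved vmemmap page.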