From patchwork Mon Jul 24 13:46:44 2023
X-Patchwork-Submitter: Usama Arif <usama.arif@bytedance.com>
X-Patchwork-Id: 13324805
From: Usama Arif <usama.arif@bytedance.com>
To: linux-mm@kvack.org, muchun.song@linux.dev, mike.kravetz@oracle.com, rppt@kernel.org
Cc: linux-kernel@vger.kernel.org, fam.zheng@bytedance.com, liangma@liangbit.com, simon.evans@bytedance.com, punit.agrawal@bytedance.com, Usama Arif <usama.arif@bytedance.com>
Subject: [RFC 4/4] mm/memblock: Skip initialization of struct pages freed later by HVO
Date: Mon, 24 Jul 2023 14:46:44 +0100
Message-Id: <20230724134644.1299963-5-usama.arif@bytedance.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230724134644.1299963-1-usama.arif@bytedance.com>
References: <20230724134644.1299963-1-usama.arif@bytedance.com>
If the region is for hugepages and if HVO is enabled, then those struct
pages which will be freed later don't need to be initialized. This can
save significant time when a large number of hugepages are allocated at
boot time. As memmap_init_reserved_pages is only called at boot time, we
don't need to worry about memory hotplug.
Hugepage regions are kept separate from non hugepage regions in
memblock_merge_regions so that initialization for unused struct pages
can be skipped for the entire region.

Signed-off-by: Usama Arif <usama.arif@bytedance.com>
---
 mm/hugetlb_vmemmap.c |  2 +-
 mm/hugetlb_vmemmap.h |  3 +++
 mm/memblock.c        | 27 ++++++++++++++++++++++-----
 3 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index bdf750a4786b..b5b7834e0f42 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -443,7 +443,7 @@ static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
 
 DEFINE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
 EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
-static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
+bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
 core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
 
 /**
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 3525c514c061..8b9a1563f7b9 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -58,4 +58,7 @@ static inline bool hugetlb_vmemmap_optimizable(const struct hstate *h)
 	return hugetlb_vmemmap_optimizable_size(h) != 0;
 }
 bool vmemmap_should_optimize(const struct hstate *h, const struct page *head);
+
+extern bool vmemmap_optimize_enabled;
+
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
diff --git a/mm/memblock.c b/mm/memblock.c
index e92d437bcb51..62072a0226de 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -21,6 +21,7 @@
 #include 
 
 #include "internal.h"
+#include "hugetlb_vmemmap.h"
 
 #define INIT_MEMBLOCK_REGIONS		128
 #define INIT_PHYSMEM_REGIONS		4
@@ -519,7 +520,8 @@ static void __init_memblock memblock_merge_regions(struct memblock_type *type,
 		if (this->base + this->size != next->base ||
 		    memblock_get_region_node(this) !=
 		    memblock_get_region_node(next) ||
-		    this->flags != next->flags) {
+		    this->flags != next->flags ||
+		    this->hugepage_size != next->hugepage_size) {
 			BUG_ON(this->base + this->size > next->base);
 			i++;
 			continue;
@@ -2125,10 +2127,25 @@ static void __init memmap_init_reserved_pages(void)
 	/* initialize struct pages for the reserved regions */
 	for_each_reserved_mem_region(region) {
 		nid = memblock_get_region_node(region);
-		start = region->base;
-		end = start + region->size;
-
-		reserve_bootmem_region(start, end, nid);
+		/*
+		 * If the region is for hugepages and if HVO is enabled, then those
+		 * struct pages which will be freed later don't need to be initialized.
+		 * This can save significant time when a large number of hugepages are
+		 * allocated at boot time. As this is at boot time, we don't need to
+		 * worry about memory hotplug.
+		 */
+		if (region->hugepage_size && vmemmap_optimize_enabled) {
+			for (start = region->base;
+			     start < region->base + region->size;
+			     start += region->hugepage_size) {
+				end = start + HUGETLB_VMEMMAP_RESERVE_SIZE * sizeof(struct page);
+				reserve_bootmem_region(start, end, nid);
+			}
+		} else {
+			start = region->base;
+			end = start + region->size;
+			reserve_bootmem_region(start, end, nid);
+		}
 	}
 }