From patchwork Wed Jan 29 22:41:29 2025
X-Patchwork-Submitter: Frank van der Linden
X-Patchwork-Id: 13954197
Date: Wed, 29 Jan 2025 22:41:29 +0000
Message-ID: <20250129224157.2046079-1-fvdl@google.com>
Subject: [PATCH v2 00/28] hugetlb/CMA improvements for large systems
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
    roman.gushchin@linux.dev, Frank van der Linden

On large systems, we observed some issues with hugetlb and CMA:

1) When specifying a large number of hugetlb boot pages (hugepages=
   on the command line), the kernel may run out of memory before it
   even gets to HVO.
   For example, if you have a 3072G system and want to use 3024 1G
   hugetlb pages for VMs, that should leave plenty of space for the
   hypervisor, provided the hugetlb vmemmap optimization (HVO) is
   enabled. However, since the vmemmap pages are always allocated
   first and only freed later in boot, you will actually run out of
   memory before HVO can happen. This means not getting all the
   hugetlb pages you want, and worse, a failure to boot if there is
   an allocation failure from which the system can't recover.

2) There are setups where you might want to use hugetlb_cma with a
   large value (say, again, 3024G out of the 3072G above), and then
   lower it if system usage allows, to make room for non-hugetlb
   processes. Here a variation of the problem above applies: the
   kernel runs out of unmovable space to allocate from before boot
   finishes, since the CMA area takes up all the space.

3) CMA wants to use one big contiguous area for allocations, which
   fails if the aforementioned 3T system has a gap in the middle of
   physical memory (like the < 40-bit BIOS DMA area seen on some AMD
   systems). You then won't be able to set up a CMA area for one of
   the NUMA nodes, losing half of your hugetlb CMA area.

4) Under the scenario mentioned in 2), when trying to grow the number
   of hugetlb pages again after lowering it for a while, new CMA
   allocations may occasionally fail. This is not unexpected: transient
   references on pages may prevent cma_alloc from succeeding under
   memory pressure. However, the hugetlb code then falls back to a
   normal contiguous allocation, which may end up succeeding. This is
   not always desired behavior. With a large CMA area, the kernel has
   a restricted amount of memory it can do unmovable allocations from
   (a well-known issue), and a normal contiguous allocation eats
   further into that space.
To resolve these issues, do the following:

* Add hooks to the section init code to do custom initialization of
  memmap pages. Hugetlb bootmem (memblock)-allocated pages can then be
  pre-HVOed. This avoids allocating a large number of vmemmap pages
  early in boot only to free them again later, and also avoids running
  out of memory as described under 1). Using these hooks for hugetlb
  is optional; it requires the architecture to move hugetlb bootmem
  allocation to an earlier spot. This has been enabled on x86.

* hugetlb_cma doesn't care about the CMA area it uses being one large
  contiguous range. Multiple smaller ranges are fine; the only
  requirements are that the ranges be on one NUMA node, and that
  individual gigantic pages be allocatable from them. So, implement
  multi-range support for CMA, avoiding issue 3).

* Introduce a hugetlb_cma_only option on the command line. If
  hugetlb_cma= is also specified, this restricts gigantic page
  allocations to CMA only.

* With hugetlb_cma_only active, it also makes sense to be able to
  pre-allocate gigantic hugetlb pages at boot time from the CMA
  area(s). Add a rudimentary early CMA allocation interface that just
  grabs a piece of memblock-allocated space from the CMA area, which
  then gets marked as allocated in the CMA bitmap when the CMA area is
  initialized. With this, hugepages= can be supported together with
  hugetlb_cma=, making scenario 2) work.

Additionally, fix some minor bugs, one of which is worth mentioning:
since hugetlb gigantic bootmem pages are allocated by memblock, they
may span multiple zones, as memblock doesn't (and mostly can't) know
about zones. This can cause problems. A hugetlb page spanning multiple
zones is bad, and it's worse with HVO, when the de-HVO step
effectively sneakily re-assigns pages to a different zone than
originally configured, since the tail pages all inherit the zone from
the first 60 tail pages. This condition is not common, but can easily
be reproduced using ZONE_MOVABLE.
To fix this, add checks to see if gigantic bootmem pages intersect
with multiple zones, and do not use them if they do, giving them back
to the page allocator instead.

The first patch is kind of along for the ride, except that maintaining
an available_count for a CMA area is convenient for the multiple range
support.

v2:
* Add missing CMA debugfs code.
* Minor cleanups in hugetlb_cma changes.
* Move hugetlb_cma code to its own file to further clean things up.

Frank van der Linden (28):
  mm/cma: export total and free number of pages for CMA areas
  mm, cma: support multiple contiguous ranges, if requested
  mm/cma: introduce cma_intersects function
  mm, hugetlb: use cma_declare_contiguous_multi
  mm/hugetlb: fix round-robin bootmem allocation
  mm/hugetlb: remove redundant __ClearPageReserved
  mm/hugetlb: use online nodes for bootmem allocation
  mm/hugetlb: convert cmdline parameters from setup to early
  x86/mm: make register_page_bootmem_memmap handle PTE mappings
  mm/bootmem_info: export register_page_bootmem_memmap
  mm/sparse: allow for alternate vmemmap section init at boot
  mm/hugetlb: set migratetype for bootmem folios
  mm: define __init_reserved_page_zone function
  mm/hugetlb: check bootmem pages for zone intersections
  mm/sparse: add vmemmap_*_hvo functions
  mm/hugetlb: deal with multiple calls to hugetlb_bootmem_alloc
  mm/hugetlb: move huge_boot_pages list init to hugetlb_bootmem_alloc
  mm/hugetlb: add pre-HVO framework
  mm/hugetlb_vmemmap: fix hugetlb_vmemmap_restore_folios definition
  mm/hugetlb: do pre-HVO for bootmem allocated pages
  x86/setup: call hugetlb_bootmem_alloc early
  x86/mm: set ARCH_WANT_SPARSEMEM_VMEMMAP_PREINIT
  mm/cma: simplify zone intersection check
  mm/cma: introduce a cma validate function
  mm/cma: introduce interface for early reservations
  mm/hugetlb: add hugetlb_cma_only cmdline option
  mm/hugetlb: enable bootmem allocation from CMA areas
  mm/hugetlb: move hugetlb CMA code in to its own file

 Documentation/ABI/testing/sysfs-kernel-mm-cma |  13 +
 .../admin-guide/kernel-parameters.txt         |   7 +
 arch/powerpc/include/asm/book3s/64/hugetlb.h  |   6 +
 arch/powerpc/mm/hugetlbpage.c                 |   1 +
 arch/powerpc/mm/init_64.c                     |   1 +
 arch/s390/mm/init.c                           |  13 +-
 arch/x86/Kconfig                              |   1 +
 arch/x86/kernel/setup.c                       |   4 +-
 arch/x86/mm/init_64.c                         |  16 +-
 include/linux/bootmem_info.h                  |   7 +
 include/linux/cma.h                           |   9 +
 include/linux/hugetlb.h                       |  35 +
 include/linux/mm.h                            |  13 +-
 include/linux/mmzone.h                        |  35 +
 mm/Kconfig                                    |   8 +
 mm/Makefile                                   |   3 +
 mm/bootmem_info.c                             |   4 +-
 mm/cma.c                                      | 749 +++++++++++++++---
 mm/cma.h                                      |  46 +-
 mm/cma_debug.c                                |  61 +-
 mm/cma_sysfs.c                                |  20 +
 mm/hugetlb.c                                  | 566 +++++++------
 mm/hugetlb_cma.c                              | 258 ++++++
 mm/hugetlb_cma.h                              |  55 ++
 mm/hugetlb_vmemmap.c                          | 199 ++++-
 mm/hugetlb_vmemmap.h                          |  23 +-
 mm/internal.h                                 |  19 +
 mm/mm_init.c                                  |  78 +-
 mm/sparse-vmemmap.c                           | 168 +++-
 mm/sparse.c                                   |  87 +-
 30 files changed, 2016 insertions(+), 489 deletions(-)
 create mode 100644 mm/hugetlb_cma.c
 create mode 100644 mm/hugetlb_cma.h