From patchwork Mon Jan 27 23:22:05 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Frank van der Linden
X-Patchwork-Id: 13951866
Date: Mon, 27 Jan 2025 23:22:05 +0000
In-Reply-To: <20250127232207.3888640-1-fvdl@google.com>
Mime-Version: 1.0
References: <20250127232207.3888640-1-fvdl@google.com>
X-Mailer: git-send-email 2.48.1.262.g85cc9f2d1e-goog
Message-ID: <20250127232207.3888640-26-fvdl@google.com>
Subject: [PATCH 25/27] mm/cma: introduce interface for early reservations
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usama.arif@bytedance.com, joao.m.martins@oracle.com,
 roman.gushchin@linux.dev, Frank van der Linden
It can be desirable to reserve memory in a CMA area before it is
activated, early in boot. Such reservations would effectively be
memblock allocations, but they can be returned to the CMA area later.
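As a rough user-space model of that idea (illustrative names and types only, not the patch's code), each range can carry a cursor that only moves up, so an early reservation is a plain bottom-up carve-out that later bookkeeping knows to skip:

```c
#include <stddef.h>

/* Illustrative model of a CMA range with an early-reservation cursor.
 * Pages in [base_pfn, early_pfn) have been handed out before activation. */
struct memrange {
	unsigned long base_pfn;  /* first page frame of the range */
	unsigned long early_pfn; /* first pfn not yet reserved early */
	unsigned long count;     /* pages in the range */
};

/* Reserve `pages` from the first range with enough room, bottom up.
 * Returns the starting pfn, or 0 on failure (pfn 0 is assumed unused
 * in this sketch). There is no unreserve: the cursor only advances. */
static unsigned long reserve_early(struct memrange *r, size_t n,
				   unsigned long pages)
{
	for (size_t i = 0; i < n; i++) {
		unsigned long used = r[i].early_pfn - r[i].base_pfn;

		if (pages <= r[i].count - used) {
			unsigned long pfn = r[i].early_pfn;

			r[i].early_pfn += pages;
			return pfn;
		}
	}
	return 0;
}
```

Because the cursor never moves back, everything below `early_pfn` behaves exactly like a memblock allocation that sticks around, which is the property the patch relies on.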
This functionality can be used to allow hugetlb bootmem allocations
from a hugetlb CMA area.

A new interface, cma_reserve_early(), is introduced. It allows for
pageblock-aligned reservations. These reservations are skipped during
the initial handoff of pages in a CMA area to the buddy allocator. The
caller is responsible for making sure that the page structures are set
up, and that the migrate type is set correctly, as with other memblock
allocations that stick around.

If the CMA area fails to activate (because it intersects with multiple
zones), the reserved memory is not given to the buddy allocator; the
caller needs to take care of that.

Signed-off-by: Frank van der Linden
---
 mm/cma.c      | 83 ++++++++++++++++++++++++++++++++++++++++++++++-----
 mm/cma.h      |  8 +++++
 mm/internal.h | 16 ++++++++++
 mm/mm_init.c  |  9 ++++++
 4 files changed, 109 insertions(+), 7 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 41248dee7197..1c0a01d02a28 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -144,9 +144,10 @@ bool cma_validate_zones(struct cma *cma)
 
 static void __init cma_activate_area(struct cma *cma)
 {
-	unsigned long pfn, base_pfn;
+	unsigned long pfn, end_pfn;
 	int allocrange, r;
 	struct cma_memrange *cmr;
+	unsigned long bitmap_count, count;
 
 	for (allocrange = 0; allocrange < cma->nranges; allocrange++) {
 		cmr = &cma->ranges[allocrange];
@@ -161,8 +162,13 @@ static void __init cma_activate_area(struct cma *cma)
 
 	for (r = 0; r < cma->nranges; r++) {
 		cmr = &cma->ranges[r];
-		base_pfn = cmr->base_pfn;
-		for (pfn = base_pfn; pfn < base_pfn + cmr->count;
+		if (cmr->early_pfn != cmr->base_pfn) {
+			count = cmr->early_pfn - cmr->base_pfn;
+			bitmap_count = cma_bitmap_pages_to_bits(cma, count);
+			bitmap_set(cmr->bitmap, 0, bitmap_count);
+		}
+
+		for (pfn = cmr->early_pfn; pfn < cmr->base_pfn + cmr->count;
 		     pfn += pageblock_nr_pages)
 			init_cma_reserved_pageblock(pfn_to_page(pfn));
 	}
@@ -173,6 +179,7 @@
 	INIT_HLIST_HEAD(&cma->mem_head);
 	spin_lock_init(&cma->mem_head_lock);
 #endif
+	set_bit(CMA_ACTIVATED, &cma->flags);
 
 	return;
 
@@ -184,9 +191,8 @@ static void __init cma_activate_area(struct cma *cma)
 	if (!test_bit(CMA_RESERVE_PAGES_ON_ERROR, &cma->flags)) {
 		for (r = 0; r < allocrange; r++) {
 			cmr = &cma->ranges[r];
-			for (pfn = cmr->base_pfn;
-			     pfn < cmr->base_pfn + cmr->count;
-			     pfn++)
+			end_pfn = cmr->base_pfn + cmr->count;
+			for (pfn = cmr->early_pfn; pfn < end_pfn; pfn++)
 				free_reserved_page(pfn_to_page(pfn));
 		}
 	}
@@ -290,6 +296,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 		return ret;
 
 	cma->ranges[0].base_pfn = PFN_DOWN(base);
+	cma->ranges[0].early_pfn = PFN_DOWN(base);
 	cma->ranges[0].count = cma->count;
 	cma->nranges = 1;
 	cma->nid = NUMA_NO_NODE;
@@ -509,6 +516,7 @@ int __init cma_declare_contiguous_multi(phys_addr_t total_size,
 			 nr, (u64)mlp->base, (u64)mlp->base + size);
 		cmrp = &cma->ranges[nr++];
 		cmrp->base_pfn = PHYS_PFN(mlp->base);
+		cmrp->early_pfn = cmrp->base_pfn;
 		cmrp->count = size >> PAGE_SHIFT;
 
 		sizeleft -= size;
@@ -540,7 +548,6 @@ int __init cma_declare_contiguous_multi(phys_addr_t total_size,
 
 	pr_info("Reserved %lu MiB in %d range%s\n",
 		(unsigned long)total_size / SZ_1M, nr,
 		nr > 1 ? "s" : "");
-
 	return ret;
 }
@@ -1044,3 +1051,65 @@ bool cma_intersects(struct cma *cma, unsigned long start, unsigned long end)
 
 	return false;
 }
+
+/*
+ * Very basic function to reserve memory from a CMA area that has not
+ * yet been activated. This is expected to be called early, when the
+ * system is single-threaded, so there is no locking. The alignment
+ * checking is restrictive - only pageblock-aligned areas
+ * (CMA_MIN_ALIGNMENT_BYTES) may be reserved through this function.
+ * This keeps things simple, and is enough for the current use case.
+ *
+ * The CMA bitmaps have not yet been allocated, so just start
+ * reserving from the bottom up, using a PFN to keep track
+ * of what has been reserved. Unreserving is not possible.
+ *
+ * The caller is responsible for initializing the page structures
+ * in the area properly, since this just points to memblock-allocated
+ * memory. The caller should subsequently use init_cma_pageblock to
+ * set the migrate type and CMA stats for the pageblocks that were
+ * reserved.
+ *
+ * If the CMA area fails to activate later, memory obtained through
+ * this interface is not handed to the page allocator; this is
+ * the responsibility of the caller (e.g. like normal memblock-allocated
+ * memory).
+ */
+void __init *cma_reserve_early(struct cma *cma, unsigned long size)
+{
+	int r;
+	struct cma_memrange *cmr;
+	unsigned long available;
+	void *ret = NULL;
+
+	if (!cma->count)
+		return NULL;
+	/*
+	 * Can only be called early in init.
+	 */
+	if (test_bit(CMA_ACTIVATED, &cma->flags))
+		return NULL;
+
+	if (!IS_ALIGNED(size, CMA_MIN_ALIGNMENT_BYTES))
+		return NULL;
+
+	if (!IS_ALIGNED(size, (PAGE_SIZE << cma->order_per_bit)))
+		return NULL;
+
+	size >>= PAGE_SHIFT;
+
+	if (size > cma->available_count)
+		return NULL;
+
+	for (r = 0; r < cma->nranges; r++) {
+		cmr = &cma->ranges[r];
+		available = cmr->count - (cmr->early_pfn - cmr->base_pfn);
+		if (size <= available) {
+			ret = phys_to_virt(PFN_PHYS(cmr->early_pfn));
+			cmr->early_pfn += size;
+			cma->available_count -= size;
+			return ret;
+		}
+	}
+
+	return ret;
+}
diff --git a/mm/cma.h b/mm/cma.h
index 0a1f8f8abe08..93fc76cc6068 100644
--- a/mm/cma.h
+++ b/mm/cma.h
@@ -16,9 +16,16 @@ struct cma_kobject {
  * and the total amount of memory requested, while smaller than the total
  * amount of memory available, is large enough that it doesn't fit in a
  * single physical memory range because of memory holes.
+ *
+ * Fields:
+ * @base_pfn:  physical address of range
+ * @early_pfn: first PFN not reserved through cma_reserve_early
+ * @count:     size of range
+ * @bitmap:    bitmap of allocated (1 << order_per_bit)-sized chunks.
  */
 struct cma_memrange {
 	unsigned long base_pfn;
+	unsigned long early_pfn;
 	unsigned long count;
 	unsigned long *bitmap;
 };
@@ -56,6 +63,7 @@ enum cma_flags {
 	CMA_RESERVE_PAGES_ON_ERROR,
 	CMA_ZONES_VALID,
 	CMA_ZONES_INVALID,
+	CMA_ACTIVATED,
 };
 
 extern struct cma cma_areas[MAX_CMA_AREAS];
diff --git a/mm/internal.h b/mm/internal.h
index 63fda9bb9426..8318c8e6e589 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -848,6 +848,22 @@
 void init_cma_reserved_pageblock(struct page *page);
 
 #endif /* CONFIG_COMPACTION || CONFIG_CMA */
 
+struct cma;
+
+#ifdef CONFIG_CMA
+void *cma_reserve_early(struct cma *cma, unsigned long size);
+void init_cma_pageblock(struct page *page);
+#else
+static inline void *cma_reserve_early(struct cma *cma, unsigned long size)
+{
+	return NULL;
+}
+static inline void init_cma_pageblock(struct page *page)
+{
+}
+#endif
+
+
 int find_suitable_fallback(struct free_area *area, unsigned int order,
 			   int migratetype, bool only_stealable, bool *can_steal);
diff --git a/mm/mm_init.c b/mm/mm_init.c
index f7d5b4fe1ae9..f31260fd393e 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2263,6 +2263,15 @@ void __init init_cma_reserved_pageblock(struct page *page)
 	adjust_managed_page_count(page, pageblock_nr_pages);
 	page_zone(page)->cma_pages += pageblock_nr_pages;
 }
+/*
+ * Similar to above, but only set the migrate type and stats.
+ */
+void __init init_cma_pageblock(struct page *page)
+{
+	set_pageblock_migratetype(page, MIGRATE_CMA);
+	adjust_managed_page_count(page, pageblock_nr_pages);
+	page_zone(page)->cma_pages += pageblock_nr_pages;
+}
 #endif
 
 void set_zone_contiguous(struct zone *zone)
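For readers skimming the diff, the activation-time step (pre-setting the range bitmap for pages reserved early, so the handoff loop can start at early_pfn) can be modeled in plain user-space C. All names here are illustrative, not the kernel's; one bitmap bit covers (1 << order_per_bit) pages, mirroring cma_bitmap_pages_to_bits():

```c
#include <assert.h>

/* One bitmap bit covers (1 << order_per_bit) pages. */
static unsigned long pages_to_bits(unsigned long pages,
				   unsigned int order_per_bit)
{
	return pages >> order_per_bit;
}

/* Mark the chunks covering [base_pfn, early_pfn) as allocated in the
 * range's bitmap, then return the pfn where the handoff to the page
 * allocator should start, skipping the early reservations. */
static unsigned long activate_range(unsigned char *bitmap,
				    unsigned long base_pfn,
				    unsigned long early_pfn,
				    unsigned int order_per_bit)
{
	unsigned long bits = pages_to_bits(early_pfn - base_pfn,
					   order_per_bit);

	for (unsigned long i = 0; i < bits; i++)
		bitmap[i / 8] |= 1u << (i % 8);
	return early_pfn;
}
```

With 32 pages reserved early and order_per_bit = 2, this sets the first 8 bits, exactly the region a later allocator pass must treat as already in use.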