From patchwork Tue Feb 18 18:16:52 2025
X-Patchwork-Submitter: Frank van der Linden <fvdl@google.com>
X-Patchwork-Id: 13980423
Date: Tue, 18 Feb 2025 18:16:52 +0000
In-Reply-To: <20250218181656.207178-1-fvdl@google.com>
Mime-Version: 1.0
References: <20250218181656.207178-1-fvdl@google.com>
X-Mailer: git-send-email 2.48.1.601.g30ceb7b040-goog
Message-ID: <20250218181656.207178-25-fvdl@google.com>
Subject: [PATCH v4 24/27] mm/cma: introduce interface for early reservations
From: Frank van der Linden <fvdl@google.com>
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com, roman.gushchin@linux.dev, Frank van der Linden <fvdl@google.com>
It can be desirable to reserve memory in a CMA area before it is
activated, early in boot. Such reservations would effectively be
memblock allocations, but they can be returned to the CMA area later.
This functionality can be used to allow hugetlb bootmem allocations
from a hugetlb CMA area.

A new interface, cma_reserve_early(), is introduced. It allows for
pageblock-aligned reservations. These reservations are skipped during
the initial handoff of pages in a CMA area to the buddy allocator. The
caller is responsible for making sure that the page structures are set
up, and that the migrate type is set correctly, as with other memblock
allocations that stick around. If the CMA area fails to activate
(because it intersects with multiple zones), the reserved memory is not
given to the buddy allocator; the caller needs to take care of that.

Signed-off-by: Frank van der Linden <fvdl@google.com>
---
 mm/cma.c      | 83 ++++++++++++++++++++++++++++++++++++++++++++++-----
 mm/cma.h      |  8 +++++
 mm/internal.h | 16 ++++++++++
 mm/mm_init.c  |  9 ++++++
 4 files changed, 109 insertions(+), 7 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 4388d941d381..34a4df29af72 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -144,9 +144,10 @@ bool cma_validate_zones(struct cma *cma)
 
 static void __init cma_activate_area(struct cma *cma)
 {
-	unsigned long pfn, base_pfn;
+	unsigned long pfn, end_pfn;
 	int allocrange, r;
 	struct cma_memrange *cmr;
+	unsigned long bitmap_count, count;
 
 	for (allocrange = 0; allocrange < cma->nranges; allocrange++) {
 		cmr = &cma->ranges[allocrange];
@@ -161,8 +162,13 @@ static void __init cma_activate_area(struct cma *cma)
 
 	for (r = 0; r < cma->nranges; r++) {
 		cmr = &cma->ranges[r];
-		base_pfn = cmr->base_pfn;
-		for (pfn = base_pfn; pfn < base_pfn + cmr->count;
+		if (cmr->early_pfn != cmr->base_pfn) {
+			count = cmr->early_pfn - cmr->base_pfn;
+			bitmap_count = cma_bitmap_pages_to_bits(cma, count);
+			bitmap_set(cmr->bitmap, 0, bitmap_count);
+		}
+
+		for (pfn = cmr->early_pfn; pfn < cmr->base_pfn + cmr->count;
 		     pfn += pageblock_nr_pages)
 			init_cma_reserved_pageblock(pfn_to_page(pfn));
 	}
@@ -173,6 +179,7 @@ static void __init cma_activate_area(struct cma *cma)
 	INIT_HLIST_HEAD(&cma->mem_head);
 	spin_lock_init(&cma->mem_head_lock);
 #endif
+	set_bit(CMA_ACTIVATED, &cma->flags);
 
 	return;
 
@@ -184,9 +191,8 @@ static void __init cma_activate_area(struct cma *cma)
 	if (!test_bit(CMA_RESERVE_PAGES_ON_ERROR, &cma->flags)) {
 		for (r = 0; r < allocrange; r++) {
 			cmr = &cma->ranges[r];
-			for (pfn = cmr->base_pfn;
-			     pfn < cmr->base_pfn + cmr->count;
-			     pfn++)
+			end_pfn = cmr->base_pfn + cmr->count;
+			for (pfn = cmr->early_pfn; pfn < end_pfn; pfn++)
 				free_reserved_page(pfn_to_page(pfn));
 		}
 	}
@@ -290,6 +296,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 		return ret;
 
 	cma->ranges[0].base_pfn = PFN_DOWN(base);
+	cma->ranges[0].early_pfn = PFN_DOWN(base);
 	cma->ranges[0].count = cma->count;
 	cma->nranges = 1;
 	cma->nid = NUMA_NO_NODE;
@@ -509,6 +516,7 @@ int __init cma_declare_contiguous_multi(phys_addr_t total_size,
 			nr, (u64)mlp->base, (u64)mlp->base + size);
 		cmrp = &cma->ranges[nr++];
 		cmrp->base_pfn = PHYS_PFN(mlp->base);
+		cmrp->early_pfn = cmrp->base_pfn;
 		cmrp->count = size >> PAGE_SHIFT;
 
 		sizeleft -= size;
@@ -540,7 +548,6 @@ int __init cma_declare_contiguous_multi(phys_addr_t total_size,
 
 	pr_info("Reserved %lu MiB in %d range%s\n",
 		(unsigned long)total_size / SZ_1M, nr, nr > 1 ? "s" : "");
-
 	return ret;
 }
 
@@ -1034,3 +1041,65 @@ bool cma_intersects(struct cma *cma, unsigned long start, unsigned long end)
 
 	return false;
 }
+
+/*
+ * Very basic function to reserve memory from a CMA area that has not
+ * yet been activated. This is expected to be called early, when the
+ * system is single-threaded, so there is no locking. The alignment
+ * checking is restrictive - only pageblock-aligned areas
+ * (CMA_MIN_ALIGNMENT_BYTES) may be reserved through this function.
+ * This keeps things simple, and is enough for the current use case.
+ *
+ * The CMA bitmaps have not yet been allocated, so just start
+ * reserving from the bottom up, using a PFN to keep track
+ * of what has been reserved. Unreserving is not possible.
+ *
+ * The caller is responsible for initializing the page structures
+ * in the area properly, since this just points to memblock-allocated
+ * memory. The caller should subsequently use init_cma_pageblock to
+ * set the migrate type and CMA stats for the pageblocks that were reserved.
+ *
+ * If the CMA area fails to activate later, memory obtained through
+ * this interface is not handed to the page allocator; this is
+ * the responsibility of the caller (e.g. like normal memblock-allocated
+ * memory).
+ */
+void __init *cma_reserve_early(struct cma *cma, unsigned long size)
+{
+	int r;
+	struct cma_memrange *cmr;
+	unsigned long available;
+	void *ret = NULL;
+
+	if (!cma || !cma->count)
+		return NULL;
+	/*
+	 * Can only be called early in init.
+	 */
+	if (test_bit(CMA_ACTIVATED, &cma->flags))
+		return NULL;
+
+	if (!IS_ALIGNED(size, CMA_MIN_ALIGNMENT_BYTES))
+		return NULL;
+
+	if (!IS_ALIGNED(size, (PAGE_SIZE << cma->order_per_bit)))
+		return NULL;
+
+	size >>= PAGE_SHIFT;
+
+	if (size > cma->available_count)
+		return NULL;
+
+	for (r = 0; r < cma->nranges; r++) {
+		cmr = &cma->ranges[r];
+		available = cmr->count - (cmr->early_pfn - cmr->base_pfn);
+		if (size <= available) {
+			ret = phys_to_virt(PFN_PHYS(cmr->early_pfn));
+			cmr->early_pfn += size;
+			cma->available_count -= size;
+			return ret;
+		}
+	}
+
+	return ret;
+}
diff --git a/mm/cma.h b/mm/cma.h
index bddc84b3cd96..df7fc623b7a6 100644
--- a/mm/cma.h
+++ b/mm/cma.h
@@ -16,9 +16,16 @@ struct cma_kobject {
  * and the total amount of memory requested, while smaller than the total
  * amount of memory available, is large enough that it doesn't fit in a
  * single physical memory range because of memory holes.
+ *
+ * Fields:
+ *   @base_pfn: physical address of range
+ *   @early_pfn: first PFN not reserved through cma_reserve_early
+ *   @count: size of range
+ *   @bitmap: bitmap of allocated (1 << order_per_bit)-sized chunks.
  */
 struct cma_memrange {
 	unsigned long base_pfn;
+	unsigned long early_pfn;
 	unsigned long count;
 	unsigned long *bitmap;
 #ifdef CONFIG_CMA_DEBUGFS
@@ -58,6 +65,7 @@ enum cma_flags {
 	CMA_RESERVE_PAGES_ON_ERROR,
 	CMA_ZONES_VALID,
 	CMA_ZONES_INVALID,
+	CMA_ACTIVATED,
 };
 
 extern struct cma cma_areas[MAX_CMA_AREAS];
diff --git a/mm/internal.h b/mm/internal.h
index 63fda9bb9426..8318c8e6e589 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -848,6 +848,22 @@ void init_cma_reserved_pageblock(struct page *page);
 
 #endif /* CONFIG_COMPACTION || CONFIG_CMA */
 
+struct cma;
+
+#ifdef CONFIG_CMA
+void *cma_reserve_early(struct cma *cma, unsigned long size);
+void init_cma_pageblock(struct page *page);
+#else
+static inline void *cma_reserve_early(struct cma *cma, unsigned long size)
+{
+	return NULL;
+}
+static inline void init_cma_pageblock(struct page *page)
+{
+}
+#endif
+
+
 int find_suitable_fallback(struct free_area *area, unsigned int order,
 			   int migratetype, bool only_stealable, bool *can_steal);
 
diff --git a/mm/mm_init.c b/mm/mm_init.c
index f7d5b4fe1ae9..f31260fd393e 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2263,6 +2263,15 @@ void __init init_cma_reserved_pageblock(struct page *page)
 	adjust_managed_page_count(page, pageblock_nr_pages);
 	page_zone(page)->cma_pages += pageblock_nr_pages;
 }
+
+/*
+ * Similar to above, but only set the migrate type and stats.
+ */
+void __init init_cma_pageblock(struct page *page)
+{
+	set_pageblock_migratetype(page, MIGRATE_CMA);
+	adjust_managed_page_count(page, pageblock_nr_pages);
+	page_zone(page)->cma_pages += pageblock_nr_pages;
+}
 #endif
 
 void set_zone_contiguous(struct zone *zone)
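
For readers wanting the intended call pattern, here is a minimal,
hypothetical sketch (not part of the patch) that combines the two new
calls, cma_reserve_early() and init_cma_pageblock(). The function name
example_early_reserve(), the single-function shape, and the assumption
of a virtually contiguous memmap over the range are illustrative only;
in the real consumer of this interface (the hugetlb bootmem path later
in this series), the reservation happens very early in boot and the
init_cma_pageblock() pass runs later, once the page structures for the
range have been initialized.

/*
 * Hypothetical example, not part of this patch: reserve "size" bytes
 * from a not-yet-activated CMA area and, once the memmap for the
 * range has been set up, give the pageblocks their CMA migrate type
 * and zone stats. Assumes early single-threaded boot and, for
 * brevity, a virtually contiguous memmap over the reserved range.
 */
static void __init *example_early_reserve(struct cma *cma, unsigned long size)
{
	struct page *page, *end;
	void *vaddr;

	/* size must be pageblock-aligned (CMA_MIN_ALIGNMENT_BYTES). */
	vaddr = cma_reserve_early(cma, size);
	if (!vaddr)
		return NULL;	/* no room, misaligned, or already activated */

	/*
	 * Initializing the page structures for the range is the
	 * caller's responsibility; after that, mark each reserved
	 * pageblock MIGRATE_CMA and fix up the zone CMA counters.
	 */
	page = virt_to_page(vaddr);
	end = page + (size >> PAGE_SHIFT);
	for (; page < end; page += pageblock_nr_pages)
		init_cma_pageblock(page);

	return vaddr;
}

Note how this keeps activation simple: since cma_reserve_early() only
ever advances early_pfn from the bottom of a range, cma_activate_area()
can account for all early reservations with a single bitmap_set() per
range and skip those pageblocks when handing the rest to the buddy
allocator.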