[v1,1/2] mm/cma: expose all pages to the buddy if activation of an area fails

Message ID 20210127101813.6370-2-david@redhat.com
State New, archived
Series mm/cma: better error handling and count pages per zone

Commit Message

David Hildenbrand Jan. 27, 2021, 10:18 a.m. UTC
Right now, if activation fails, we might already have exposed some pages to
the buddy for CMA use (although they will never actually get used by CMA),
and some pages won't be exposed to the buddy at all.

Let's check for "single zone" early, and on error don't expose any pages
for CMA use - instead, expose them to the buddy, available for any use.
Simply call free_reserved_page() on every single page - that's easier than
going via free_reserved_area() and converting back and forth between pfns
and virt addresses.

In addition, make sure to fix up totalcma_pages properly.
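
A minimal sketch of the resulting error path (this matches the hunk in the
patch below):

  /* Expose all pages to the buddy, they are useless for CMA. */
  for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
          free_reserved_page(pfn_to_page(pfn));
  totalcma_pages -= cma->count;
  cma->count = 0;

free_reserved_page() clears PG_reserved, adjusts the managed/total page
counters and hands the page to the buddy, which is why MemTotal grows in
the example below.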

Example: 6 GiB QEMU VM with "... hugetlb_cma=2G movablecore=20% ...":
  [    0.006891] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
  [    0.006893] cma: Reserved 2048 MiB at 0x0000000100000000
  [    0.006893] hugetlb_cma: reserved 2048 MiB on node 0
  ...
  [    0.175433] cma: CMA area hugetlb0 could not be activated

Before this patch:
  # cat /proc/meminfo
  MemTotal:        5867348 kB
  MemFree:         5692808 kB
  MemAvailable:    5542516 kB
  ...
  CmaTotal:        2097152 kB
  CmaFree:         1884160 kB

After this patch:
  # cat /proc/meminfo
  MemTotal:        6077308 kB
  MemFree:         5904208 kB
  MemAvailable:    5747968 kB
  ...
  CmaTotal:              0 kB
  CmaFree:               0 kB

Note: cma_init_reserved_mem() makes sure that we always cover full
pageblocks / MAX_ORDER - 1 pages.
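
For context, cma_init_reserved_mem() enforces that alignment roughly like
this (quoted from memory, not part of this patch):

  /* ensure minimal alignment required by mm core */
  alignment = PAGE_SIZE <<
              max_t(unsigned long, MAX_ORDER - 1, pageblock_order);

  if (ALIGN(base, alignment) != base || ALIGN(size, alignment) != size)
          return -EINVAL;

so the pageblock-sized steps in cma_activate_area() always cover the whole
area, and the error path can safely free every single page of it.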

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/cma.c | 43 +++++++++++++++++++++----------------------
 1 file changed, 21 insertions(+), 22 deletions(-)

Comments

Zi Yan Jan. 27, 2021, 3:58 p.m. UTC | #1
On 27 Jan 2021, at 5:18, David Hildenbrand wrote:

> Right now, if activation fails, we might already have exposed some pages to
> the buddy for CMA use (although they will never get actually used by CMA),
> and some pages won't be exposed to the buddy at all.
>
> Let's check for "single zone" early and on error, don't expose any pages
> for CMA use - instead, expose them to the buddy available for any use.
> Simply call free_reserved_page() on every single page - easier than
> going via free_reserved_area(), converting back and forth between pfns
> and virt addresses.
>
> In addition, make sure to fixup totalcma_pages properly.
>
> Example: 6 GiB QEMU VM with "... hugetlb_cma=2G movablecore=20% ...":
>   [    0.006891] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
>   [    0.006893] cma: Reserved 2048 MiB at 0x0000000100000000
>   [    0.006893] hugetlb_cma: reserved 2048 MiB on node 0
>   ...
>   [    0.175433] cma: CMA area hugetlb0 could not be activated
>
> Before this patch:
>   # cat /proc/meminfo
>   MemTotal:        5867348 kB
>   MemFree:         5692808 kB
>   MemAvailable:    5542516 kB
>   ...
>   CmaTotal:        2097152 kB
>   CmaFree:         1884160 kB
>
> After this patch:
>   # cat /proc/meminfo
>   MemTotal:        6077308 kB
>   MemFree:         5904208 kB
>   MemAvailable:    5747968 kB
>   ...
>   CmaTotal:              0 kB
>   CmaFree:               0 kB
>
> Note: cma_init_reserved_mem() makes sure that we always cover full
> pageblocks / MAX_ORDER - 1 pages.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/cma.c | 43 +++++++++++++++++++++----------------------
>  1 file changed, 21 insertions(+), 22 deletions(-)

LGTM. Reviewed-by: Zi Yan <ziy@nvidia.com>

--
Best Regards,
Yan Zi
Oscar Salvador Jan. 28, 2021, 9:59 a.m. UTC | #2
On Wed, Jan 27, 2021 at 11:18:12AM +0100, David Hildenbrand wrote:
> Right now, if activation fails, we might already have exposed some pages to
> the buddy for CMA use (although they will never actually get used by CMA),
> and some pages won't be exposed to the buddy at all.
>
> [...]

Besides the benefit of the error handling, I find this code much
clearer:

Reviewed-by: Oscar Salvador <osalvador@suse.de>

Patch

diff --git a/mm/cma.c b/mm/cma.c
index 0ba69cd16aeb..23d4a97c834a 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -94,34 +94,29 @@  static void cma_clear_bitmap(struct cma *cma, unsigned long pfn,
 
 static void __init cma_activate_area(struct cma *cma)
 {
-	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
-	unsigned i = cma->count >> pageblock_order;
+	unsigned long base_pfn = cma->base_pfn, pfn;
 	struct zone *zone;
 
 	cma->bitmap = bitmap_zalloc(cma_bitmap_maxno(cma), GFP_KERNEL);
 	if (!cma->bitmap)
 		goto out_error;
 
-	WARN_ON_ONCE(!pfn_valid(pfn));
-	zone = page_zone(pfn_to_page(pfn));
-
-	do {
-		unsigned j;
-
-		base_pfn = pfn;
-		for (j = pageblock_nr_pages; j; --j, pfn++) {
-			WARN_ON_ONCE(!pfn_valid(pfn));
-			/*
-			 * alloc_contig_range requires the pfn range
-			 * specified to be in the same zone. Make this
-			 * simple by forcing the entire CMA resv range
-			 * to be in the same zone.
-			 */
-			if (page_zone(pfn_to_page(pfn)) != zone)
-				goto not_in_zone;
-		}
-		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
-	} while (--i);
+	/*
+	 * alloc_contig_range() requires the pfn range specified to be in the
+	 * same zone. Simplify by forcing the entire CMA resv range to be in the
+	 * same zone.
+	 */
+	WARN_ON_ONCE(!pfn_valid(base_pfn));
+	zone = page_zone(pfn_to_page(base_pfn));
+	for (pfn = base_pfn + 1; pfn < base_pfn + cma->count; pfn++) {
+		WARN_ON_ONCE(!pfn_valid(pfn));
+		if (page_zone(pfn_to_page(pfn)) != zone)
+			goto not_in_zone;
+	}
+
+	for (pfn = base_pfn; pfn < base_pfn + cma->count;
+	     pfn += pageblock_nr_pages)
+		init_cma_reserved_pageblock(pfn_to_page(pfn));
 
 	mutex_init(&cma->lock);
 
@@ -135,6 +130,10 @@  static void __init cma_activate_area(struct cma *cma)
 not_in_zone:
 	bitmap_free(cma->bitmap);
 out_error:
+	/* Expose all pages to the buddy, they are useless for CMA. */
+	for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
+		free_reserved_page(pfn_to_page(pfn));
+	totalcma_pages -= cma->count;
 	cma->count = 0;
 	pr_err("CMA area %s could not be activated\n", cma->name);
 	return;