[v1,2/2] mm/page_alloc: count CMA pages per zone and print them in /proc/zoneinfo

Message ID 20210127101813.6370-3-david@redhat.com (mailing list archive)
State New, archived
Series: mm/cma: better error handling and count pages per zone

Commit Message

David Hildenbrand Jan. 27, 2021, 10:18 a.m. UTC
Let's count the number of CMA pages per zone and print them in
/proc/zoneinfo.

Having access to the total number of CMA pages per zone is helpful for
debugging purposes to know where exactly the CMA pages ended up, and to
figure out how many pages of a zone might behave differently (e.g., like
ZONE_MOVABLE) - even after some of these pages might already have been
allocated.

For now, we are only able to get the global counts of total and free CMA
pages from /proc/meminfo and the free CMA pages per zone from /proc/zoneinfo.

Note: Track/print that information even without CONFIG_CMA, similar to
"nr_free_cma" in /proc/zoneinfo. This is different to /proc/meminfo -
maybe we want to make that consistent in the future (however, changing
/proc/zoneinfo output might uglify the code a bit).
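
The per-zone output then looks like this (a hypothetical excerpt; all
values are made up):

Node 0, zone   Normal
  pages free     136432
        min      5441
        low      6801
        high     8161
        spanned  1310720
        present  1310720
        managed  1281664
        cma      16384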

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mmzone.h | 4 ++++
 mm/page_alloc.c        | 1 +
 mm/vmstat.c            | 6 ++++--
 3 files changed, 9 insertions(+), 2 deletions(-)

Comments

Oscar Salvador Jan. 28, 2021, 10:22 a.m. UTC | #1
On Wed, Jan 27, 2021 at 11:18:13AM +0100, David Hildenbrand wrote:
> Let's count the number of CMA pages per zone and print them in
> /proc/zoneinfo.
> 
> Having access to the total number of CMA pages per zone is helpful for
> debugging purposes to know where exactly the CMA pages ended up, and to
> figure out how many pages of a zone might behave differently (e.g., like
> ZONE_MOVABLE) - even after some of these pages might already have been
> allocated.

My knowledge of CMA tends to be quite low; actually, I thought that CMA
was somehow tied to ZONE_MOVABLE.

I see how tracking CMA pages per zone might give you a clue, but what do
you mean by "might behave differently - even after some of these pages might
already have been allocated"?

> For now, we are only able to get the global counts of total and free CMA
> pages from /proc/meminfo and the free CMA pages per zone from /proc/zoneinfo.
> 
> Note: Track/print that information even without CONFIG_CMA, similar to
> "nr_free_cma" in /proc/zoneinfo. This is different to /proc/meminfo -
> maybe we want to make that consistent in the future (however, changing
> /proc/zoneinfo output might uglify the code a bit).
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  include/linux/mmzone.h | 4 ++++
>  mm/page_alloc.c        | 1 +
>  mm/vmstat.c            | 6 ++++--
>  3 files changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index ae588b2f87ef..3bc18c9976fd 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -503,6 +503,9 @@ struct zone {
>  	 * bootmem allocator):
>  	 *	managed_pages = present_pages - reserved_pages;
>  	 *
> +	 * cma pages is present pages that are assigned for CMA use
> +	 * (MIGRATE_CMA).
> +	 *
>  	 * So present_pages may be used by memory hotplug or memory power
>  	 * management logic to figure out unmanaged pages by checking
>  	 * (present_pages - managed_pages). And managed_pages should be used
> @@ -527,6 +530,7 @@ struct zone {
>  	atomic_long_t		managed_pages;
>  	unsigned long		spanned_pages;
>  	unsigned long		present_pages;
> +	unsigned long		cma_pages;

I see that NR_FREE_CMA_PAGES is there even without CONFIG_CMA, as you
said, but I am not sure about adding size to a zone unconditionally.
I mean, it is not terrible since, IIRC, the maximum MAX_NUMNODES can reach
is 1024, and on x86_64 that would be (1024 nodes * 4 zones) * 8 bytes = 32K.
So not a big deal, but still.

Besides following NR_FREE_CMA_PAGES, is there any reason for not doing:

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 1e22d96734e0..2d8a830d168d 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -436,6 +436,9 @@ struct zone {
        unsigned long           managed_pages;
        unsigned long           spanned_pages;
        unsigned long           present_pages;
+#ifdef CONFIG_CMA
+       unsigned long           cma_pages;
+#endif
 
        const char              *name;
 
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 8ba0870ecddd..5757df4bfd45 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1559,13 +1559,15 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
                   "\n        spanned  %lu"
                   "\n        present  %lu"
                   "\n        managed  %lu",
+                  "\n        cma      %lu",
                   zone_page_state(zone, NR_FREE_PAGES),
                   min_wmark_pages(zone),
                   low_wmark_pages(zone),
                   high_wmark_pages(zone),
                   zone->spanned_pages,
                   zone->present_pages,
-                  zone->managed_pages);
+                  zone->managed_pages,
+                  IS_ENABLED(CONFIG_CMA) ? zone->cma_pages : 0);
 
        seq_printf(m,
                   "\n        protection: (%ld",


I do not see it as that ugly, but that is just my taste.
David Hildenbrand Jan. 28, 2021, 10:43 a.m. UTC | #2
On 28.01.21 11:22, Oscar Salvador wrote:
> On Wed, Jan 27, 2021 at 11:18:13AM +0100, David Hildenbrand wrote:
>> Let's count the number of CMA pages per zone and print them in
>> /proc/zoneinfo.
>>
>> Having access to the total number of CMA pages per zone is helpful for
>> debugging purposes to know where exactly the CMA pages ended up, and to
>> figure out how many pages of a zone might behave differently (e.g., like
>> ZONE_MOVABLE) - even after some of these pages might already have been
>> allocated.
> 
> My knowledge of CMA tends to be quite low; actually, I thought that CMA
> was somehow tied to ZONE_MOVABLE.

CMA is often placed into one of the kernel zones, but can also end up in the movable zone.

> 
> I see how tracking CMA pages per zone might give you a clue, but what do
> you mean by "might behave differently - even after some of these pages might
> already have been allocated"?

Assume you have 4GB in ZONE_NORMAL but 1GB is assigned for CMA. You actually only have 3GB available for random kernel allocations, not 4GB.

Currently, you can only observe the free CMA pages, excluding any pages that are already allocated. Having the information of how many CMA pages we have in total can be helpful - similar to what we already have in /proc/meminfo.
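
(With both values exposed, the already-allocated CMA pages of a zone can then simply be derived from /proc/zoneinfo - as an illustration, using the field names from this patch: allocated = cma - nr_free_cma.)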

> 
>> For now, we are only able to get the global counts of total and free CMA
>> pages from /proc/meminfo and the free CMA pages per zone from /proc/zoneinfo.
>>
>> Note: Track/print that information even without CONFIG_CMA, similar to
>> "nr_free_cma" in /proc/zoneinfo. This is different to /proc/meminfo -
>> maybe we want to make that consistent in the future (however, changing
>> /proc/zoneinfo output might uglify the code a bit).
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
>> Cc: Mike Rapoport <rppt@kernel.org>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: Michal Hocko <mhocko@kernel.org>
>> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>   include/linux/mmzone.h | 4 ++++
>>   mm/page_alloc.c        | 1 +
>>   mm/vmstat.c            | 6 ++++--
>>   3 files changed, 9 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>> index ae588b2f87ef..3bc18c9976fd 100644
>> --- a/include/linux/mmzone.h
>> +++ b/include/linux/mmzone.h
>> @@ -503,6 +503,9 @@ struct zone {
>>   	 * bootmem allocator):
>>   	 *	managed_pages = present_pages - reserved_pages;
>>   	 *
>> +	 * cma pages is present pages that are assigned for CMA use
>> +	 * (MIGRATE_CMA).
>> +	 *
>>   	 * So present_pages may be used by memory hotplug or memory power
>>   	 * management logic to figure out unmanaged pages by checking
>>   	 * (present_pages - managed_pages). And managed_pages should be used
>> @@ -527,6 +530,7 @@ struct zone {
>>   	atomic_long_t		managed_pages;
>>   	unsigned long		spanned_pages;
>>   	unsigned long		present_pages;
>> +	unsigned long		cma_pages;
> 
> I see that NR_FREE_CMA_PAGES is there even without CONFIG_CMA, as you
> said, but I am not sure about adding size to a zone unconditionally.
> I mean, it is not terrible since, IIRC, the maximum MAX_NUMNODES can reach
> is 1024, and on x86_64 that would be (1024 nodes * 4 zones) * 8 bytes = 32K.
> So not a big deal, but still.

I'm asking myself how many such systems will run without
CONFIG_CMA in the future.

> 
> Besides following NR_FREE_CMA_PAGES, is there any reason for not doing:
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 1e22d96734e0..2d8a830d168d 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -436,6 +436,9 @@ struct zone {
>          unsigned long           managed_pages;
>          unsigned long           spanned_pages;
>          unsigned long           present_pages;
> +#ifdef CONFIG_CMA
> +       unsigned long           cma_pages;
> +#endif
>   
>          const char              *name;
>   
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 8ba0870ecddd..5757df4bfd45 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1559,13 +1559,15 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
>                     "\n        spanned  %lu"
>                     "\n        present  %lu"
>                     "\n        managed  %lu",
> +                  "\n        cma      %lu",
>                     zone_page_state(zone, NR_FREE_PAGES),
>                     min_wmark_pages(zone),
>                     low_wmark_pages(zone),
>                     high_wmark_pages(zone),
>                     zone->spanned_pages,
>                     zone->present_pages,
> -                  zone->managed_pages);
> +                  zone->managed_pages,
> +                  IS_ENABLED(CONFIG_CMA) ? zone->cma_pages : 0);
>   
>          seq_printf(m,
>                     "\n        protection: (%ld",
> 
> 
> I do not see it as that ugly, but that is just my taste.

IIRC, that does not work. The compiler will still complain
about a missing struct member. We would have to provide a
zone_cma_pages() helper with some ifdefery.
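
Roughly like this (just a sketch of such a helper, not part of this patch):

#ifdef CONFIG_CMA
/* Total pages reserved for CMA in this zone; see zone->cma_pages. */
static inline unsigned long zone_cma_pages(struct zone *zone)
{
	return zone->cma_pages;
}
#else
static inline unsigned long zone_cma_pages(struct zone *zone)
{
	return 0;
}
#endif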



We could do something like this on top

--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -530,7 +530,9 @@ struct zone {
         atomic_long_t           managed_pages;
         unsigned long           spanned_pages;
         unsigned long           present_pages;
+#ifdef CONFIG_CMA
         unsigned long           cma_pages;
+#endif
  
         const char              *name;
  
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 97fc32a53320..b753a64f099f 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1643,7 +1643,10 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
                    "\n        spanned  %lu"
                    "\n        present  %lu"
                    "\n        managed  %lu"
-                  "\n        cma      %lu",
+#ifdef CONFIG_CMA
+                  "\n        cma      %lu"
+#endif
+                  "%s",
                    zone_page_state(zone, NR_FREE_PAGES),
                    min_wmark_pages(zone),
                    low_wmark_pages(zone),
@@ -1651,7 +1654,10 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
                    zone->spanned_pages,
                    zone->present_pages,
                    zone_managed_pages(zone),
-                  zone->cma_pages);
+#ifdef CONFIG_CMA
+                  zone->cma_pages,
+#endif
+                  "");
  
         seq_printf(m,
                    "\n        protection: (%ld",


Getting rid of NR_FREE_CMA_PAGES would be even uglier.
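
For context, NR_FREE_CMA_PAGES is an unconditional entry of enum
zone_stat_item, which indexes the per-zone vmstat counters, so hiding it
behind CONFIG_CMA would ripple through generic vmstat code. A simplified
sketch of the relevant part of include/linux/mmzone.h:

enum zone_stat_item {
	NR_FREE_PAGES,
	/* ... many other counters ... */
	NR_FREE_CMA_PAGES,
	NR_VM_ZONE_STAT_ITEMS
};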
Oscar Salvador Jan. 28, 2021, 1:44 p.m. UTC | #3
On Thu, Jan 28, 2021 at 11:43:41AM +0100, David Hildenbrand wrote:
> > My knowledge of CMA tends to be quite low; actually, I thought that CMA
> > was somehow tied to ZONE_MOVABLE.
> 
> CMA is often placed into one of the kernel zones, but can also end up in the movable zone.

Ok good to know.

> > I see how tracking CMA pages per zone might give you a clue, but what do
> > you mean by "might behave differently - even after some of these pages might
> > already have been allocated"?
> 
> Assume you have 4GB in ZONE_NORMAL but 1GB is assigned for CMA. You actually only have 3GB available for random kernel allocations, not 4GB.
> 
> Currently, you can only observe the free CMA pages, excluding any pages that are already allocated. Having the information of how many CMA pages we have in total can be helpful - similar to what we already have in /proc/meminfo.

I see, I agree that it can provide some guidance. 

> > I see that NR_FREE_CMA_PAGES is there even without CONFIG_CMA, as you
> > said, but I am not sure about adding size to a zone unconditionally.
> > I mean, it is not terrible since, IIRC, the maximum MAX_NUMNODES can reach
> > is 1024, and on x86_64 that would be (1024 nodes * 4 zones) * 8 bytes = 32K.
> > So not a big deal, but still.
> 
> I'm asking myself how many such systems will run without
> CONFIG_CMA in the future.

I am not sure; my comment was just to point out that even though the added
size might not be that large, hiding it under CONFIG_CMA seemed like the
right thing to do.

> > diff --git a/mm/vmstat.c b/mm/vmstat.c
> > index 8ba0870ecddd..5757df4bfd45 100644
> > --- a/mm/vmstat.c
> > +++ b/mm/vmstat.c
> > @@ -1559,13 +1559,15 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
> >                     "\n        spanned  %lu"
> >                     "\n        present  %lu"
> >                     "\n        managed  %lu",
> > +                  "\n        cma      %lu",
> >                     zone_page_state(zone, NR_FREE_PAGES),
> >                     min_wmark_pages(zone),
> >                     low_wmark_pages(zone),
> >                     high_wmark_pages(zone),
> >                     zone->spanned_pages,
> >                     zone->present_pages,
> > -                  zone->managed_pages);
> > +                  zone->managed_pages,
> > +                  IS_ENABLED(CONFIG_CMA) ? zone->cma_pages : 0);
> >          seq_printf(m,
> >                     "\n        protection: (%ld",
> > 
> > 
> > I do not see it as that ugly, but that is just my taste.
> 
> IIRC, that does not work. The compiler will still complain
> about a missing struct member. We would have to provide a
> zone_cma_pages() helper with some ifdefery.

Of course, it seems I switched off my brain.

> We could do something like this on top
> 
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -530,7 +530,9 @@ struct zone {
>         atomic_long_t           managed_pages;
>         unsigned long           spanned_pages;
>         unsigned long           present_pages;
> +#ifdef CONFIG_CMA
>         unsigned long           cma_pages;
> +#endif
>         const char              *name;
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 97fc32a53320..b753a64f099f 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1643,7 +1643,10 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
>                    "\n        spanned  %lu"
>                    "\n        present  %lu"
>                    "\n        managed  %lu"
> -                  "\n        cma      %lu",
> +#ifdef CONFIG_CMA
> +                  "\n        cma      %lu"
> +#endif
> +                  "%s",
>                    zone_page_state(zone, NR_FREE_PAGES),
>                    min_wmark_pages(zone),
>                    low_wmark_pages(zone),
> @@ -1651,7 +1654,10 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
>                    zone->spanned_pages,
>                    zone->present_pages,
>                    zone_managed_pages(zone),
> -                  zone->cma_pages);
> +#ifdef CONFIG_CMA
> +                  zone->cma_pages,
> +#endif
> +                  "");
>         seq_printf(m,
>                    "\n        protection: (%ld",

Looks good to me, but I can see how those #ifdefs can raise some
eyebrows.
Let us see what others think as well.

Btw, should linux-uapi be CCed, as /proc/vmstat layout will change?
Oscar Salvador Jan. 28, 2021, 1:46 p.m. UTC | #4
On Thu, Jan 28, 2021 at 02:44:58PM +0100, Oscar Salvador wrote:
> Btw, should linux-uapi be CCed, as /proc/vmstat layout will change?

I meant /proc/zoneinfo

David Hildenbrand Jan. 28, 2021, 2:01 p.m. UTC | #5
On 28.01.21 14:44, Oscar Salvador wrote:
> On Thu, Jan 28, 2021 at 11:43:41AM +0100, David Hildenbrand wrote:
>>> My knowledge of CMA tends to be quite low; actually, I thought that CMA
>>> was somehow tied to ZONE_MOVABLE.
>>
>> CMA is often placed into one of the kernel zones, but can also end up in the movable zone.
> 
> Ok good to know.
> 
>>> I see how tracking CMA pages per zone might give you a clue, but what do
>>> you mean by "might behave differently - even after some of these pages might
>>> already have been allocated"?
>>
>> Assume you have 4GB in ZONE_NORMAL but 1GB is assigned for CMA. You actually only have 3GB available for random kernel allocations, not 4GB.
>>
>> Currently, you can only observe the free CMA pages, excluding any pages that are already allocated. Having the information of how many CMA pages we have in total can be helpful - similar to what we already have in /proc/meminfo.
> 
> I see, I agree that it can provide some guidance.
> 
>>> I see that NR_FREE_CMA_PAGES is there even without CONFIG_CMA, as you
>>> said, but I am not sure about adding size to a zone unconditionally.
>>> I mean, it is not terrible since, IIRC, the maximum MAX_NUMNODES can reach
>>> is 1024, and on x86_64 that would be (1024 nodes * 4 zones) * 8 bytes = 32K.
>>> So not a big deal, but still.
>>
>> I'm asking myself how many such systems will run without
>> CONFIG_CMA in the future.
> 
> I am not sure; my comment was just to point out that even though the added
> size might not be that large, hiding it under CONFIG_CMA seemed like the
> right thing to do.
> 
>>> diff --git a/mm/vmstat.c b/mm/vmstat.c
>>> index 8ba0870ecddd..5757df4bfd45 100644
>>> --- a/mm/vmstat.c
>>> +++ b/mm/vmstat.c
>>> @@ -1559,13 +1559,15 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
>>>                      "\n        spanned  %lu"
>>>                      "\n        present  %lu"
>>>                      "\n        managed  %lu",
>>> +                  "\n        cma      %lu",
>>>                      zone_page_state(zone, NR_FREE_PAGES),
>>>                      min_wmark_pages(zone),
>>>                      low_wmark_pages(zone),
>>>                      high_wmark_pages(zone),
>>>                      zone->spanned_pages,
>>>                      zone->present_pages,
>>> -                  zone->managed_pages);
>>> +                  zone->managed_pages,
>>> +                  IS_ENABLED(CONFIG_CMA) ? zone->cma_pages : 0);
>>>           seq_printf(m,
>>>                      "\n        protection: (%ld",
>>>
>>>
>>> I do not see it as that ugly, but that is just my taste.
>>
>> IIRC, that does not work. The compiler will still complain
>> about a missing struct member. We would have to provide a
>> zone_cma_pages() helper with some ifdefery.
> 
> Of course, it seems I switched off my brain.
> 
>> We could do something like this on top
>>
>> --- a/include/linux/mmzone.h
>> +++ b/include/linux/mmzone.h
>> @@ -530,7 +530,9 @@ struct zone {
>>          atomic_long_t           managed_pages;
>>          unsigned long           spanned_pages;
>>          unsigned long           present_pages;
>> +#ifdef CONFIG_CMA
>>          unsigned long           cma_pages;
>> +#endif
>>          const char              *name;
>> diff --git a/mm/vmstat.c b/mm/vmstat.c
>> index 97fc32a53320..b753a64f099f 100644
>> --- a/mm/vmstat.c
>> +++ b/mm/vmstat.c
>> @@ -1643,7 +1643,10 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
>>                     "\n        spanned  %lu"
>>                     "\n        present  %lu"
>>                     "\n        managed  %lu"
>> -                  "\n        cma      %lu",
>> +#ifdef CONFIG_CMA
>> +                  "\n        cma      %lu"
>> +#endif
>> +                  "%s",
>>                     zone_page_state(zone, NR_FREE_PAGES),
>>                     min_wmark_pages(zone),
>>                     low_wmark_pages(zone),
>> @@ -1651,7 +1654,10 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
>>                     zone->spanned_pages,
>>                     zone->present_pages,
>>                     zone_managed_pages(zone),
>> -                  zone->cma_pages);
>> +#ifdef CONFIG_CMA
>> +                  zone->cma_pages,
>> +#endif
>> +                  "");
>>          seq_printf(m,
>>                     "\n        protection: (%ld",
> 
> Looks good to me, but I can see how those #ifdefs can raise some
> eyebrows.

We could print it further above to avoid the "%s" ... "", or print it
separately below. Then we'd only need a single ifdef. That might make sense.
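
E.g., printing it separately below would boil down to a single conditional
call (a rough sketch, not an actual patch):

#ifdef CONFIG_CMA
	/* only emit the cma line when CMA is compiled in */
	seq_printf(m, "\n        cma      %lu", zone->cma_pages);
#endif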

> Let us see what others think as well.
> 
> Btw, should linux-uapi be CCed, as /proc/vmstat layout will change?

Is there a linux-uapi@ list? I know of linux-api@ (the "forum to discuss
changes that affect the Linux programming interface (API or ABI)").

Good question; I can certainly cc linux-api@, although I doubt it's
strictly necessary when adding something here.

Thanks!

Patch

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index ae588b2f87ef..3bc18c9976fd 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -503,6 +503,9 @@  struct zone {
 	 * bootmem allocator):
 	 *	managed_pages = present_pages - reserved_pages;
 	 *
+	 * cma pages is present pages that are assigned for CMA use
+	 * (MIGRATE_CMA).
+	 *
 	 * So present_pages may be used by memory hotplug or memory power
 	 * management logic to figure out unmanaged pages by checking
 	 * (present_pages - managed_pages). And managed_pages should be used
@@ -527,6 +530,7 @@  struct zone {
 	atomic_long_t		managed_pages;
 	unsigned long		spanned_pages;
 	unsigned long		present_pages;
+	unsigned long		cma_pages;
 
 	const char		*name;
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b031a5ae0bd5..9a82375bbcb2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2168,6 +2168,7 @@  void __init init_cma_reserved_pageblock(struct page *page)
 	}
 
 	adjust_managed_page_count(page, pageblock_nr_pages);
+	page_zone(page)->cma_pages += pageblock_nr_pages;
 }
 #endif
 
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 7758486097f9..97fc32a53320 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1642,14 +1642,16 @@  static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
 		   "\n        high     %lu"
 		   "\n        spanned  %lu"
 		   "\n        present  %lu"
-		   "\n        managed  %lu",
+		   "\n        managed  %lu"
+		   "\n        cma      %lu",
 		   zone_page_state(zone, NR_FREE_PAGES),
 		   min_wmark_pages(zone),
 		   low_wmark_pages(zone),
 		   high_wmark_pages(zone),
 		   zone->spanned_pages,
 		   zone->present_pages,
-		   zone_managed_pages(zone));
+		   zone_managed_pages(zone),
+		   zone->cma_pages);
 
 	seq_printf(m,
 		   "\n        protection: (%ld",