
[v13] mm: report per-page metadata information

Message ID 20240605222751.1406125-1-souravpanda@google.com (mailing list archive)
State New
Series [v13] mm: report per-page metadata information

Commit Message

Sourav Panda June 5, 2024, 10:27 p.m. UTC
Today, we do not have any observability of per-page metadata
and how much it takes away from the machine capacity. Thus,
we want to describe the amount of memory that is going towards
per-page metadata, which can vary depending on build
configuration, machine architecture, and system use.

This patch adds 2 fields to /proc/vmstat that can be used as shown
below:

Accounting per-page metadata allocated by boot-allocator:
	/proc/vmstat:nr_memmap_boot * PAGE_SIZE

Accounting per-page metadata allocated by buddy-allocator:
	/proc/vmstat:nr_memmap * PAGE_SIZE

Accounting total per-page metadata allocated on the machine:
	(/proc/vmstat:nr_memmap_boot +
	 /proc/vmstat:nr_memmap) * PAGE_SIZE
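
As an illustrative example (not part of this patch), a minimal
userspace reader could compute the total in bytes as follows, using
only the two fields added here and the runtime page size:

/* Illustrative sketch only: sums the two fields added by this patch. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char name[64];
	unsigned long long val, pages = 0;

	if (!f)
		return 1;
	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, "nr_memmap") ||
		    !strcmp(name, "nr_memmap_boot"))
			pages += val;
	}
	fclose(f);
	printf("per-page metadata: %llu bytes\n",
	       pages * (unsigned long long)sysconf(_SC_PAGESIZE));
	return 0;
}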

Utility for userspace:

Observability: Describe the amount of memory overhead that is
going toward per-page metadata on the system at any given time,
since this overhead is not currently observable.

Debugging: Tracking the changes or absolute value of struct page
usage can help detect anomalies, as these can be correlated with
other metrics on the machine (e.g., MemTotal, number of huge
pages).
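
As a rough sanity check (illustrative numbers only, assuming 4 KiB
pages, a 64-byte struct page, and no page_ext or hugetlb vmemmap
optimization):

	expected bytes ~= (MemTotal / 4 KiB) * 64 B
	e.g., 256 GiB of RAM -> ~4 GiB of struct pages, i.e.
	(nr_memmap_boot + nr_memmap) ~= 1 Mi pages (~1.6% of memory)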

page_ext overheads: Some kernel features, such as page_owner and
page_table_check, that use page_ext can be optionally enabled via
kernel parameters. Having the total per-page metadata information
helps users precisely measure impact. Furthermore, page-metadata
metrics will reflect the amount of struct page memory relinquished
(or overhead reduced) when hugetlbfs pages are reserved, which
will vary depending on whether hugetlb vmemmap optimization is
enabled or not.
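
For example, the overhead of a page_ext user such as page_owner could
be estimated by comparing two boots that differ only in that kernel
parameter (an illustrative method, not part of this patch):

	page_ext overhead ~= (nr_memmap_boot + nr_memmap) with page_owner=on
	                   - (nr_memmap_boot + nr_memmap) with page_owner=off
	(in pages; multiply by PAGE_SIZE for bytes)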

For background and results see:
lore.kernel.org/all/20240220214558.3377482-1-souravpanda@google.com

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Sourav Panda <souravpanda@google.com>
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
Changelog:
Synchronized with 6.10-rc2.
Added David Rientjes's Ack.

v12:
https://lore.kernel.org/all/20240512010611.290464-1-souravpanda@google.com
---
 include/linux/mmzone.h |  2 ++
 include/linux/vmstat.h |  4 ++++
 mm/hugetlb_vmemmap.c   | 17 +++++++++++++----
 mm/mm_init.c           |  3 +++
 mm/page_alloc.c        |  1 +
 mm/page_ext.c          | 32 +++++++++++++++++++++++---------
 mm/sparse-vmemmap.c    |  8 ++++++++
 mm/sparse.c            |  7 ++++++-
 mm/vmstat.c            | 26 +++++++++++++++++++++++++-
 9 files changed, 85 insertions(+), 15 deletions(-)

Comments

Andrew Morton June 11, 2024, 10:30 p.m. UTC | #1
On Wed,  5 Jun 2024 22:27:51 +0000 Sourav Panda <souravpanda@google.com> wrote:

> Today, we do not have any observability of per-page metadata
> and how much it takes away from the machine capacity. Thus,
> we want to describe the amount of memory that is going towards
> per-page metadata, which can vary depending on build
> configuration, machine architecture, and system use.
> 
> This patch adds 2 fields to /proc/vmstat that can be used as shown
> below:
> 
> Accounting per-page metadata allocated by boot-allocator:
> 	/proc/vmstat:nr_memmap_boot * PAGE_SIZE
> 
> Accounting per-page metadata allocated by buddy-allocator:
> 	/proc/vmstat:nr_memmap * PAGE_SIZE
> 
> Accounting total per-page metadata allocated on the machine:
> 	(/proc/vmstat:nr_memmap_boot +
> 	 /proc/vmstat:nr_memmap) * PAGE_SIZE

Under what circumstances do these change?  Only hotplug?

It's nasty, but would it be sufficient to simply emit these numbers
into dmesg when they change?
Pasha Tatashin June 12, 2024, 5:53 p.m. UTC | #2
On Tue, Jun 11, 2024 at 6:30 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Wed,  5 Jun 2024 22:27:51 +0000 Sourav Panda <souravpanda@google.com> wrote:
>
> > Today, we do not have any observability of per-page metadata
> > and how much it takes away from the machine capacity. Thus,
> > we want to describe the amount of memory that is going towards
> > per-page metadata, which can vary depending on build
> > configuration, machine architecture, and system use.
> >
> > This patch adds 2 fields to /proc/vmstat that can be used as shown
> > below:
> >
> > Accounting per-page metadata allocated by boot-allocator:
> >       /proc/vmstat:nr_memmap_boot * PAGE_SIZE
> >
> > Accounting per-page metadata allocated by buddy-allocator:
> >       /proc/vmstat:nr_memmap * PAGE_SIZE
> >
> > Accounting total per-page metadata allocated on the machine:
> >       (/proc/vmstat:nr_memmap_boot +
> >        /proc/vmstat:nr_memmap) * PAGE_SIZE
>
> Under what circumstances do these change?  Only hotplug?

Currently, there are several reasons these numbers can change during runtime:

1. Memory hotplug/hotremove
2. Adding/Removing hugetlb pages with vmemmap optimization
3. Adding/Removing Device DAX with vmemmap optimization.

>
> It's nasty, but would it be sufficient to simply emit these numbers
> into dmesg when they change?

These numbers should really be part of /proc/vmstat in order to
provide an interface for determining the system memory overhead.

Pasha
Alison Schofield Aug. 2, 2024, 7:02 p.m. UTC | #3
++ nvdimm, linux-cxl, Yu Zhang

On Wed, Jun 05, 2024 at 10:27:51PM +0000, Sourav Panda wrote:
> Today, we do not have any observability of per-page metadata
> and how much it takes away from the machine capacity. Thus,
> we want to describe the amount of memory that is going towards
> per-page metadata, which can vary depending on build
> configuration, machine architecture, and system use.
> 
> [..]
> 
> Acked-by: David Rientjes <rientjes@google.com>
> Signed-off-by: Sourav Panda <souravpanda@google.com>
> Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>

This patch is leading to an Oops in 6.11-rc1 when CONFIG_MEMORY_HOTPLUG
is enabled. Folks hitting it have had success with reverting this patch.
Disabling CONFIG_MEMORY_HOTPLUG is not a long-term solution.

Reported here:
https://lore.kernel.org/linux-cxl/CAHj4cs9Ax1=CoJkgBGP_+sNu6-6=6v=_L-ZBZY0bVLD3wUWZQg@mail.gmail.com/

A bit of detail below; follow the link above for more:
dmesg:
[ 1408.632268] Oops: general protection fault, probably for
non-canonical address 0xdffffc0000005650: 0000 [#1] PREEMPT SMP KASAN
PTI
[ 1408.644006] KASAN: probably user-memory-access in range
[0x000000000002b280-0x000000000002b287]
[ 1408.652699] CPU: 26 UID: 0 PID: 1868 Comm: ndctl Not tainted 6.11.0-rc1 #1
[ 1408.659571] Hardware name: Dell Inc. PowerEdge R640/08HT8T, BIOS
2.20.1 09/13/2023
[ 1408.667136] RIP: 0010:mod_node_page_state+0x2a/0x110
[ 1408.672112] Code: 0f 1f 44 00 00 48 b8 00 00 00 00 00 fc ff df 41
54 55 48 89 fd 48 81 c7 80 b2 02 00 53 48 89 f9 89 d3 48 c1 e9 03 48
83 ec 10 <80> 3c 01 00 0f 85 b8 00 00 00 48 8b bd 80 b2 02 00 41 89 f0
83 ee
[ 1408.690856] RSP: 0018:ffffc900246d7388 EFLAGS: 00010286
[ 1408.696088] RAX: dffffc0000000000 RBX: 00000000fffffe00 RCX: 0000000000005650
[ 1408.703222] RDX: fffffffffffffe00 RSI: 000000000000002f RDI: 000000000002b280
[ 1408.710353] RBP: 0000000000000000 R08: ffff88a06ffcb1c8 R09: 1ffffffff218c681
[ 1408.717486] R10: ffffffff93d922bf R11: ffff88855e790f10 R12: 00000000000003ff
[ 1408.724619] R13: 1ffff920048dae7b R14: ffffea0081e00000 R15: ffffffff90c63408
[ 1408.731750] FS:  00007f753c219200(0000) GS:ffff889bf2a00000(0000)
knlGS:0000000000000000
[ 1408.739834] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1408.745581] CR2: 0000559f5902a5a8 CR3: 00000001292f0006 CR4: 00000000007706f0
[ 1408.752713] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1408.759843] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 1408.766976] PKRU: 55555554
[ 1408.769690] Call Trace:
[ 1408.772143]  <TASK>
[ 1408.774248]  ? die_addr+0x3d/0xa0
[ 1408.777577]  ? exc_general_protection+0x150/0x230
[ 1408.782297]  ? asm_exc_general_protection+0x22/0x30
[ 1408.787182]  ? mod_node_page_state+0x2a/0x110
[ 1408.791548]  section_deactivate+0x519/0x780
[ 1408.795740]  ? __pfx_section_deactivate+0x10/0x10
[ 1408.800449]  __remove_pages+0x6c/0xa0
[ 1408.804119]  arch_remove_memory+0x1a/0x70
[ 1408.808141]  pageunmap_range+0x2ad/0x5e0
[ 1408.812067]  memunmap_pages+0x320/0x5a0
[ 1408.815909]  release_nodes+0xd6/0x170
[ 1408.819581]  ? lockdep_hardirqs_on+0x78/0x100
[ 1408.823941]  devres_release_all+0x106/0x170
[ 1408.828126]  ? __pfx_devres_release_all+0x10/0x10
[ 1408.832834]  device_unbind_cleanup+0x16/0x1a0
[ 1408.837198]  device_release_driver_internal+0x3d5/0x530
[ 1408.842423]  ? klist_put+0xf7/0x170
[ 1408.845916]  bus_remove_device+0x1ed/0x3f0
[ 1408.850017]  device_del+0x33b/0x8c0
[ 1408.853518]  ? __pfx_device_del+0x10/0x10
[ 1408.857532]  unregister_dev_dax+0x112/0x210
[ 1408.861722]  release_nodes+0xd6/0x170
[ 1408.865387]  ? lockdep_hardirqs_on+0x78/0x100
[ 1408.869749]  devres_release_all+0x106/0x170
[ 1408.873933]  ? __pfx_devres_release_all+0x10/0x10
[ 1408.878643]  device_unbind_cleanup+0x16/0x1a0
[ 1408.883007]  device_release_driver_internal+0x3d5/0x530
[ 1408.888235]  ? __pfx_sysfs_kf_write+0x10/0x10
[ 1408.892598]  unbind_store+0xdc/0xf0
[ 1408.896093]  kernfs_fop_write_iter+0x358/0x530
[ 1408.900539]  vfs_write+0x9b2/0xf60
[ 1408.903954]  ? __pfx_vfs_write+0x10/0x10
[ 1408.907891]  ? __fget_light+0x53/0x1e0
[ 1408.911646]  ? __x64_sys_openat+0x11f/0x1e0
[ 1408.915835]  ksys_write+0xf1/0x1d0
[ 1408.919249]  ? __pfx_ksys_write+0x10/0x10
[ 1408.923264]  do_syscall_64+0x8c/0x180
[ 1408.926934]  ? __debug_check_no_obj_freed+0x253/0x520
[ 1408.931997]  ? __pfx___debug_check_no_obj_freed+0x10/0x10
[ 1408.937405]  ? kasan_quarantine_put+0x109/0x220
[ 1408.941944]  ? lockdep_hardirqs_on+0x78/0x100
[ 1408.946304]  ? kmem_cache_free+0x1a6/0x4c0
[ 1408.950408]  ? do_sys_openat2+0x10a/0x160
[ 1408.954424]  ? do_sys_openat2+0x10a/0x160
[ 1408.958434]  ? __pfx_do_sys_openat2+0x10/0x10
[ 1408.962794]  ? lockdep_hardirqs_on+0x78/0x100
[ 1408.967153]  ? __pfx___debug_check_no_obj_freed+0x10/0x10
[ 1408.972554]  ? __x64_sys_openat+0x11f/0x1e0
[ 1408.976737]  ? __pfx___x64_sys_openat+0x10/0x10
[ 1408.981269]  ? rcu_is_watching+0x11/0xb0
[ 1408.985204]  ? lockdep_hardirqs_on_prepare+0x179/0x400
[ 1408.990351]  ? do_syscall_64+0x98/0x180
[ 1408.994191]  ? lockdep_hardirqs_on+0x78/0x100
[ 1408.998549]  ? do_syscall_64+0x98/0x180
[ 1409.002386]  ? do_syscall_64+0x98/0x180
[ 1409.006227]  ? lockdep_hardirqs_on+0x78/0x100
[ 1409.010585]  ? do_syscall_64+0x98/0x180
[ 1409.014425]  ? lockdep_hardirqs_on_prepare+0x179/0x400
[ 1409.019565]  ? do_syscall_64+0x98/0x180
[ 1409.023401]  ? lockdep_hardirqs_on+0x78/0x100
[ 1409.027763]  ? do_syscall_64+0x98/0x180
[ 1409.031600]  ? do_syscall_64+0x98/0x180
[ 1409.035439]  ? do_syscall_64+0x98/0x180
[ 1409.039281]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 1409.044331] RIP: 0033:0x7f753c0fda57
[ 1409.047911] Code: 0f 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7
0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 01 00 00
00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 48 89 54 24 18 48 89
74 24
[ 1409.066655] RSP: 002b:00007ffc19323e28 EFLAGS: 00000246 ORIG_RAX:
0000000000000001
[ 1409.074220] RAX: ffffffffffffffda RBX: 0000000000000007 RCX: 00007f753c0fda57
[ 1409.081352] RDX: 0000000000000007 RSI: 0000559f5901f740 RDI: 0000000000000003
[ 1409.088483] RBP: 0000000000000003 R08: 0000000000000000 R09: 00007ffc19323d20
[ 1409.095616] R10: 0000000000000000 R11: 0000000000000246 R12: 0000559f5901f740
[ 1409.102748] R13: 00007ffc19323e90 R14: 00007f753c219120 R15: 0000559f5901fc30
[ 1409.109887]  </TASK>
[ 1409.112082] Modules linked in: kmem device_dax rpcsec_gss_krb5
auth_rpcgss nfsv4 dns_resolver nfs lockd grace netfs rfkill sunrpc
dm_multipath intel_rapl_msr intel_rapl_common intel_uncore_frequency
intel_uncore_frequency_common skx_edac skx_edac_common
x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm mgag200
rapl cdc_ether iTCO_wdt dell_pc i2c_algo_bit iTCO_vendor_support
ipmi_ssif usbnet acpi_power_meter drm_shmem_helper mei_me dell_smbios
platform_profile intel_cstate dcdbas wmi_bmof dell_wmi_descriptor
intel_uncore pcspkr mii drm_kms_helper i2c_i801 mei i2c_smbus
intel_pch_thermal lpc_ich ipmi_si acpi_ipmi dax_pmem ipmi_devintf
ipmi_msghandler drm fuse xfs libcrc32c sd_mod sg nd_pmem nd_btt
crct10dif_pclmul crc32_pclmul crc32c_intel ahci ghash_clmulni_intel
libahci bnxt_en megaraid_sas tg3 libata wmi nfit libnvdimm dm_mirror
dm_region_hash dm_log dm_mod
[ 1409.189120] ---[ end trace 0000000000000000 ]---

-- snip
>
Pasha Tatashin Aug. 5, 2024, 6:40 p.m. UTC | #4
On Fri, Aug 2, 2024 at 3:02 PM Alison Schofield
<alison.schofield@intel.com> wrote:
>
> ++ nvdimm, linux-cxl, Yu Zhang
>
> On Wed, Jun 05, 2024 at 10:27:51PM +0000, Sourav Panda wrote:
> > [..]
>
> This patch is leading to an Oops in 6.11-rc1 when CONFIG_MEMORY_HOTPLUG
> is enabled. Folks hitting it have had success with reverting this patch.
> Disabling CONFIG_MEMORY_HOTPLUG is not a long-term solution.
>
> Reported here:
> https://lore.kernel.org/linux-cxl/CAHj4cs9Ax1=CoJkgBGP_+sNu6-6=6v=_L-ZBZY0bVLD3wUWZQg@mail.gmail.com/

Thank you for the heads up. Can you please attach a full config file,
also was anyone able to reproduce this problem in qemu with emulated
nvdimm?

Pasha
Dan Williams Aug. 5, 2024, 11:06 p.m. UTC | #5
Pasha Tatashin wrote:
[..]
> Thank you for the heads up. Can you please attach a full config file,
> also was anyone able to reproduce this problem in qemu with emulated
> nvdimm?

Yes, I can reproduce the crash just by trying to reconfigure the mode of
a pmem namespace:

# ndctl create-namespace -m raw -f -e namespace0.0

...where namespace0.0 results from:

    memmap=4G!4G

...passed on the kernel command line.

Kernel config here:

https://gist.github.com/djbw/143705077103d43a735c179395d4f69a
Alison Schofield Aug. 5, 2024, 11:18 p.m. UTC | #6
On Mon, Aug 05, 2024 at 02:40:48PM -0400, Pasha Tatashin wrote:
> On Fri, Aug 2, 2024 at 3:02 PM Alison Schofield
> <alison.schofield@intel.com> wrote:
> >
> > ++ nvdimm, linux-cxl, Yu Zhang
> >
> > On Wed, Jun 05, 2024 at 10:27:51PM +0000, Sourav Panda wrote:
> > > [..]
> >
> > This patch is leading to an Oops in 6.11-rc1 when CONFIG_MEMORY_HOTPLUG
> > is enabled. Folks hitting it have had success with reverting this patch.
> > Disabling CONFIG_MEMORY_HOTPLUG is not a long-term solution.
> >
> > Reported here:
> > https://lore.kernel.org/linux-cxl/CAHj4cs9Ax1=CoJkgBGP_+sNu6-6=6v=_L-ZBZY0bVLD3wUWZQg@mail.gmail.com/
> 
> Thank you for the heads up. Can you please attach a full config file,
> also was anyone able to reproduce this problem in qemu with emulated
> nvdimm?
> 
> Pasha

Hi Pasha,

This hits every time when booting with a CXL-enabled kernel and the
cxl-test module loaded.  After boot, modprobe -r cxl-test emits the
TRACE appended below. It seems to be the same failing signature as in
the ndctl case above.

Applying the diff below works for the cxl-test unload failure. It moves the
state update to before freeing the page. I saw a note in the patch review
history about this:
	"v8:  Declined changing  placement of metrics after attempting"

Hope it's as simple as this :)


diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 829112b0a914..39c9050f8780 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -188,8 +188,8 @@ static inline void free_vmemmap_page(struct page *page)
                free_bootmem_page(page);
                mod_node_page_state(page_pgdat(page), NR_MEMMAP_BOOT, -1);
        } else {
-               __free_page(page);
                mod_node_page_state(page_pgdat(page), NR_MEMMAP, -1);
+               __free_page(page);
        }
 }
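
(Presumably the ordering matters because once __free_page() returns the
page can be reallocated and its node linkage rewritten, so calling
page_pgdat() on it afterwards reads freed state.)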

Failure trace
[   94.158105] BUG: unable to handle page fault for address: 0000000000004200
[   94.159953] #PF: supervisor read access in kernel mode
[   94.161132] #PF: error_code(0x0000) - not-present page
[   94.162300] PGD 0 P4D 0 
[   94.162915] Oops: Oops: 0000 [#1] PREEMPT SMP PTI
[   94.164006] CPU: 0 UID: 0 PID: 1076 Comm: modprobe Tainted: G           O     N 6.11.0-rc1 #197
[   94.165966] Tainted: [O]=OOT_MODULE, [N]=TEST
[   94.166973] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
[   94.168768] RIP: 0010:mod_node_page_state+0x6/0x90
[   94.169877] Code: 82 e9 ec fd ff ff 31 c9 e9 de fd ff ff 0f 1f 80 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 55 41 89 f2 89 d0 <4c> 8b 8f 00 42 00 00 83 ee 05 c1 f8 0c 83 fe 02 4f 8d 44 11 01 0f
[   94.172849] RSP: 0018:ffffc90002a4b760 EFLAGS: 00010287
[   94.173645] RAX: 00000000fffffe00 RBX: ffffea03c0800000 RCX: 0000000000000000
[   94.174762] RDX: fffffffffffffe00 RSI: 000000000000002d RDI: 0000000000000000
[   94.175862] RBP: ffffc90002a4b7b0 R08: 0000000000000000 R09: 0000000000000000
[   94.176973] R10: 000000000000002d R11: 0000000000080000 R12: ffff888012688040
[   94.178078] R13: 0000000000200000 R14: ffffea03c0a00000 R15: ffff88812df6ce40
[   94.179200] FS:  00007f9d9ea91740(0000) GS:ffff888077200000(0000) knlGS:0000000000000000
[   94.180257] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   94.180888] CR2: 0000000000004200 CR3: 0000000129886000 CR4: 00000000000006f0
[   94.181687] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   94.182487] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   94.183278] Call Trace:
[   94.183572]  <TASK>
[   94.183825]  ? show_regs+0x5f/0x70
[   94.184217]  ? __die+0x1f/0x70
[   94.184556]  ? page_fault_oops+0x14b/0x450
[   94.185017]  ? mod_node_page_state+0x6/0x90
[   94.185479]  ? search_exception_tables+0x5b/0x60
[   94.185997]  ? fixup_exception+0x22/0x300
[   94.186449]  ? kernelmode_fixup_or_oops.constprop.0+0x5a/0x70
[   94.187088]  ? __bad_area_nosemaphore+0x166/0x230
[   94.187601]  ? up_read+0x1d/0x30
[   94.187973]  ? bad_area_nosemaphore+0x11/0x20
[   94.188452]  ? do_user_addr_fault+0x2cb/0x6b0
[   94.188940]  ? __pfx_do_flush_tlb_all+0x10/0x10
[   94.189445]  ? exc_page_fault+0x6e/0x220
[   94.189857]  ? asm_exc_page_fault+0x27/0x30
[   94.190212]  ? mod_node_page_state+0x6/0x90
[   94.190566]  ? section_deactivate+0x242/0x290
[   94.190946]  sparse_remove_section+0x4d/0x70
[   94.191312]  __remove_pages+0x59/0x90
[   94.191623]  arch_remove_memory+0x1a/0x50
[   94.192224]  try_remove_memory+0xe9/0x150
[   94.192858]  remove_memory+0x1d/0x30
[   94.193457]  dev_dax_kmem_remove+0x9d/0x140 [kmem]
[   94.194140]  dax_bus_remove+0x1d/0x30
[   94.194723]  device_remove+0x3e/0x70
[   94.195283]  device_release_driver_internal+0x1ae/0x220
[   94.195980]  device_release_driver+0xd/0x20
[   94.196590]  bus_remove_device+0xd7/0x140
[   94.197189]  device_del+0x15b/0x3a0
[   94.197746]  unregister_dev_dax+0x6c/0xd0
[   94.198335]  devm_action_release+0x10/0x20
[   94.198945]  devres_release_all+0xa8/0xe0
[   94.199536]  device_unbind_cleanup+0xd/0x70
[   94.200138]  device_release_driver_internal+0x1d3/0x220
[   94.200809]  device_release_driver+0xd/0x20
[   94.201396]  bus_remove_device+0xd7/0x140
[   94.201977]  device_del+0x15b/0x3a0
[   94.202484]  device_unregister+0x12/0x60
[   94.203044]  cxlr_dax_unregister+0x9/0x10 [cxl_core]
[   94.203712]  devm_action_release+0x10/0x20
[   94.204249]  devres_release_all+0xa8/0xe0
[   94.204780]  device_unbind_cleanup+0xd/0x70
[   94.205327]  device_release_driver_internal+0x1d3/0x220
[   94.205960]  device_release_driver+0xd/0x20
[   94.206485]  bus_remove_device+0xd7/0x140
[   94.207011]  device_del+0x15b/0x3a0
[   94.207486]  unregister_region+0x2b/0x80 [cxl_core]
[   94.208092]  devm_action_release+0x10/0x20
[   94.208624]  devres_release_all+0xa8/0xe0
[   94.209130]  device_unbind_cleanup+0xd/0x70
[   94.209666]  device_release_driver_internal+0x1d3/0x220
[   94.210274]  device_release_driver+0xd/0x20
[   94.210812]  bus_remove_device+0xd7/0x140
[   94.211326]  device_del+0x15b/0x3a0
[   94.211807]  ? __this_cpu_preempt_check+0x13/0x20
[   94.212384]  ? lock_release+0x133/0x290
[   94.212896]  ? __x64_sys_delete_module+0x171/0x260
[   94.213474]  platform_device_del.part.0+0x13/0x80
[   94.214046]  platform_device_unregister+0x1b/0x40
[   94.214602]  cxl_test_exit+0x1a/0xcb0 [cxl_test]
[   94.215164]  __x64_sys_delete_module+0x182/0x260
[   94.215720]  ? __fput+0x1b5/0x2e0
[   94.216176]  ? debug_smp_processor_id+0x17/0x20
[   94.216735]  x64_sys_call+0xcc/0x1f30
[   94.217206]  do_syscall_64+0x47/0x110
[   94.217691]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[   94.218265] RIP: 0033:0x7f9d9e3128cb
[   94.218745] Code: 73 01 c3 48 8b 0d 55 55 0e 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 25 55 0e 00 f7 d8 64 89 01 48
[   94.220590] RSP: 002b:00007ffd8ca73a88 EFLAGS: 00000206 ORIG_RAX: 00000000000000b0
[   94.221403] RAX: ffffffffffffffda RBX: 0000564e9737f9f0 RCX: 00007f9d9e3128cb
[   94.222176] RDX: 0000000000000000 RSI: 0000000000000800 RDI: 0000564e9737fa58
[   94.222960] RBP: 0000564e9737fa58 R08: 1999999999999999 R09: 0000000000000000
[   94.223772] R10: 00007f9d9e39dac0 R11: 0000000000000206 R12: 0000000000000001
[   94.224538] R13: 0000000000000000 R14: 00007ffd8ca75dc8 R15: 0000564e9737f480
[   94.225314]  </TASK>
[   94.225721] Modules linked in: kmem device_dax dax_cxl cxl_pci cxl_mock_mem(ON) cxl_test(ON-) cxl_mem(ON) cxl_pmem(ON) cxl_port(ON) cxl_acpi(ON) cxl_mock(ON) cxl_core(ON) libnvdimm
[   94.227422] CR2: 0000000000004200
[   94.227977] ---[ end trace 0000000000000000 ]---
[   94.228585] RIP: 0010:mod_node_page_state+0x6/0x90
[   94.229523] Code: 82 e9 ec fd ff ff 31 c9 e9 de fd ff ff 0f 1f 80 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 55 41 89 f2 89 d0 <4c> 8b 8f 00 42 00 00 83 ee 05 c1 f8 0c 83 fe 02 4f 8d 44 11 01 0f
[   94.232041] RSP: 0018:ffffc90002a4b760 EFLAGS: 00010287
[   94.232891] RAX: 00000000fffffe00 RBX: ffffea03c0800000 RCX: 0000000000000000
[   94.233921] RDX: fffffffffffffe00 RSI: 000000000000002d RDI: 0000000000000000
[   94.234898] RBP: ffffc90002a4b7b0 R08: 0000000000000000 R09: 0000000000000000
[   94.235908] R10: 000000000000002d R11: 0000000000080000 R12: ffff888012688040
[   94.237077] R13: 0000000000200000 R14: ffffea03c0a00000 R15: ffff88812df6ce40
[   94.237922] FS:  00007f9d9ea91740(0000) GS:ffff888077200000(0000) knlGS:0000000000000000
[   94.238844] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   94.239584] CR2: 0000000000004200 CR3: 0000000129886000 CR4: 00000000000006f0
[   94.240566] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   94.241787] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Pasha Tatashin Aug. 6, 2024, 5:59 p.m. UTC | #7
On Mon, Aug 5, 2024 at 7:06 PM Dan Williams <dan.j.williams@intel.com> wrote:
>
> Pasha Tatashin wrote:
> [..]
> > Thank you for the heads up. Can you please attach a full config file,
> > also was anyone able to reproduce this problem in qemu with emulated
> > nvdimm?
>
> Yes, I can reproduce the crash just by trying to reconfigure the mode of
> a pmem namespace:
>
> # ndctl create-namespace -m raw -f -e namespace0.0
>
> ...where namespace0.0 results from:
>
>     memmap=4G!4G
>
> ...passed on the kernel command line.
>
> Kernel config here:
>
> https://gist.github.com/djbw/143705077103d43a735c179395d4f69a

Excellent, I was able to reproduce this problem.

The problem appears to be caused by this code:

Calling page_pgdat() in depopulate_section_memmap():

static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
                struct vmem_altmap *altmap)
{
        unsigned long start = (unsigned long) pfn_to_page(pfn);
        unsigned long end = start + nr_pages * sizeof(struct page);

        mod_node_page_state(page_pgdat(pfn_to_page(pfn)), NR_MEMMAP,
<<<< We cannot do it.
                            -1L * (DIV_ROUND_UP(end - start, PAGE_SIZE)));
        vmemmap_free(start, end, altmap);
}

page_pgdat() returns NULL starting from:
pageunmap_range()
    remove_pfn_range_from_zone() <- page is removed from the zone.
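
One possible shape of a fix, sketched here for illustration only (the
extra nid parameter is hypothetical, and the actual fix posted later in
this thread may take a different approach), is to take the node id from
a caller that still knows it, rather than deriving it from the struct
page:

static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
		struct vmem_altmap *altmap, int nid)
{
	unsigned long start = (unsigned long) pfn_to_page(pfn);
	unsigned long end = start + nr_pages * sizeof(struct page);

	/* NODE_DATA(nid) stays valid after the pages leave the zone. */
	mod_node_page_state(NODE_DATA(nid), NR_MEMMAP,
			    -1L * (DIV_ROUND_UP(end - start, PAGE_SIZE)));
	vmemmap_free(start, end, altmap);
}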

Pasha
Pasha Tatashin Aug. 6, 2024, 9:37 p.m. UTC | #8
On Tue, Aug 6, 2024 at 4:53 PM Ira Weiny <iweiny@iweiny-mobl> wrote:
>
> On Tue, Aug 06, 2024 at 01:59:54PM -0400, Pasha Tatashin wrote:
> > [..]
>
> Is there any idea on a fix?  I'm seeing the same error.
>
> [  561.867431]  ? mod_node_page_state+0x11/0xa0
> [  561.867963]  section_deactivate+0x2a0/0x2c0
> [  561.868496]  __remove_pages+0x59/0x90
> [  561.868975]  arch_remove_memory+0x1a/0x40
> [  561.869491]  memunmap_pages+0x206/0x3d0
> [  561.869972]  devres_release_all+0xa8/0xe0
> [  561.870466]  device_unbind_cleanup+0xe/0x70
> [  561.870960]  device_release_driver_internal+0x1ca/0x210
> [  561.871529]  driver_detach+0x47/0x90
> [  561.871981]  bus_remove_driver+0x6c/0xf0
>
> Shall we revert this patch until we figure out a fix?

I am working on a fix, and will send it out in a couple hours.

Pasha
Pasha Tatashin Aug. 6, 2024, 10:32 p.m. UTC | #9
On Tue, Aug 6, 2024 at 5:37 PM Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
>
> On Tue, Aug 6, 2024 at 4:53 PM Ira Weiny <iweiny@iweiny-mobl> wrote:
> >
> > [..]
> >
> > Is there any idea on a fix?  I'm seeing the same error.
> >
> > [  561.867431]  ? mod_node_page_state+0x11/0xa0
> > [  561.867963]  section_deactivate+0x2a0/0x2c0
> > [  561.868496]  __remove_pages+0x59/0x90
> > [  561.868975]  arch_remove_memory+0x1a/0x40
> > [  561.869491]  memunmap_pages+0x206/0x3d0
> > [  561.869972]  devres_release_all+0xa8/0xe0
> > [  561.870466]  device_unbind_cleanup+0xe/0x70
> > [  561.870960]  device_release_driver_internal+0x1ca/0x210
> > [  561.871529]  driver_detach+0x47/0x90
> > [  561.871981]  bus_remove_driver+0x6c/0xf0
> >
> > Shall we revert this patch until we figure out a fix?
>
> I am working on a fix, and will send it out in a couple hours.

Patch is posted:
https://lore.kernel.org/all/20240806221454.1971755-2-pasha.tatashin@soleen.com/#r

>
> Pasha

Patch

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 8f9c9590a42c..b7546dd8c298 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -220,6 +220,8 @@  enum node_stat_item {
 	PGDEMOTE_KSWAPD,
 	PGDEMOTE_DIRECT,
 	PGDEMOTE_KHUGEPAGED,
+	NR_MEMMAP, /* page metadata allocated through buddy allocator */
+	NR_MEMMAP_BOOT, /* page metadata allocated through boot allocator */
 	NR_VM_NODE_STAT_ITEMS
 };
 
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 735eae6e272c..16b0cfa80502 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -624,4 +624,8 @@  static inline void lruvec_stat_sub_folio(struct folio *folio,
 {
 	lruvec_stat_mod_folio(folio, idx, -folio_nr_pages(folio));
 }
+
+void __meminit mod_node_early_perpage_metadata(int nid, long delta);
+void __meminit store_early_perpage_metadata(void);
+
 #endif /* _LINUX_VMSTAT_H */
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index b9a55322e52c..fa00d61b6c5a 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -184,10 +184,13 @@  static int vmemmap_remap_range(unsigned long start, unsigned long end,
  */
 static inline void free_vmemmap_page(struct page *page)
 {
-	if (PageReserved(page))
+	if (PageReserved(page)) {
 		free_bootmem_page(page);
-	else
+		mod_node_page_state(page_pgdat(page), NR_MEMMAP_BOOT, -1);
+	} else {
 		__free_page(page);
+		mod_node_page_state(page_pgdat(page), NR_MEMMAP, -1);
+	}
 }
 
 /* Free a list of the vmemmap pages */
@@ -338,6 +341,7 @@  static int vmemmap_remap_free(unsigned long start, unsigned long end,
 		copy_page(page_to_virt(walk.reuse_page),
 			  (void *)walk.reuse_addr);
 		list_add(&walk.reuse_page->lru, vmemmap_pages);
+		mod_node_page_state(NODE_DATA(nid), NR_MEMMAP, 1);
 	}
 
 	/*
@@ -384,14 +388,19 @@  static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
 	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
 	int nid = page_to_nid((struct page *)start);
 	struct page *page, *next;
+	int i;
 
-	while (nr_pages--) {
+	for (i = 0; i < nr_pages; i++) {
 		page = alloc_pages_node(nid, gfp_mask, 0);
-		if (!page)
+		if (!page) {
+			mod_node_page_state(NODE_DATA(nid), NR_MEMMAP, i);
 			goto out;
+		}
 		list_add(&page->lru, list);
 	}
 
+	mod_node_page_state(NODE_DATA(nid), NR_MEMMAP, nr_pages);
+
 	return 0;
 out:
 	list_for_each_entry_safe(page, next, list, lru)
diff --git a/mm/mm_init.c b/mm/mm_init.c
index f72b852bd5b8..ca2daf0a6993 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -29,6 +29,7 @@ 
 #include <linux/cma.h>
 #include <linux/crash_dump.h>
 #include <linux/execmem.h>
+#include <linux/vmstat.h>
 #include "internal.h"
 #include "slab.h"
 #include "shuffle.h"
@@ -1618,6 +1619,8 @@  static void __init alloc_node_mem_map(struct pglist_data *pgdat)
 		panic("Failed to allocate %ld bytes for node %d memory map\n",
 		      size, pgdat->node_id);
 	pgdat->node_mem_map = map + offset;
+	mod_node_early_perpage_metadata(pgdat->node_id,
+					DIV_ROUND_UP(size, PAGE_SIZE));
 	pr_debug("%s: node %d, pgdat %08lx, node_mem_map %08lx\n",
 		 __func__, pgdat->node_id, (unsigned long)pgdat,
 		 (unsigned long)pgdat->node_mem_map);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2e22ce5675ca..1f747319f76d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5715,6 +5715,7 @@  void __init setup_per_cpu_pageset(void)
 	for_each_online_pgdat(pgdat)
 		pgdat->per_cpu_nodestats =
 			alloc_percpu(struct per_cpu_nodestat);
+	store_early_perpage_metadata();
 }
 
 __meminit void zone_pcp_init(struct zone *zone)
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 95dd8ffeaf81..c191e490c401 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -214,6 +214,8 @@  static int __init alloc_node_page_ext(int nid)
 		return -ENOMEM;
 	NODE_DATA(nid)->node_page_ext = base;
 	total_usage += table_size;
+	mod_node_page_state(NODE_DATA(nid), NR_MEMMAP_BOOT,
+			    DIV_ROUND_UP(table_size, PAGE_SIZE));
 	return 0;
 }
 
@@ -268,12 +270,15 @@  static void *__meminit alloc_page_ext(size_t size, int nid)
 	void *addr = NULL;
 
 	addr = alloc_pages_exact_nid(nid, size, flags);
-	if (addr) {
+	if (addr)
 		kmemleak_alloc(addr, size, 1, flags);
-		return addr;
-	}
+	else
+		addr = vzalloc_node(size, nid);
 
-	addr = vzalloc_node(size, nid);
+	if (addr) {
+		mod_node_page_state(NODE_DATA(nid), NR_MEMMAP,
+				    DIV_ROUND_UP(size, PAGE_SIZE));
+	}
 
 	return addr;
 }
@@ -316,18 +321,27 @@  static int __meminit init_section_page_ext(unsigned long pfn, int nid)
 
 static void free_page_ext(void *addr)
 {
+	size_t table_size;
+	struct page *page;
+	struct pglist_data *pgdat;
+
+	table_size = page_ext_size * PAGES_PER_SECTION;
+
 	if (is_vmalloc_addr(addr)) {
+		page = vmalloc_to_page(addr);
+		pgdat = page_pgdat(page);
 		vfree(addr);
 	} else {
-		struct page *page = virt_to_page(addr);
-		size_t table_size;
-
-		table_size = page_ext_size * PAGES_PER_SECTION;
-
+		page = virt_to_page(addr);
+		pgdat = page_pgdat(page);
 		BUG_ON(PageReserved(page));
 		kmemleak_free(addr);
 		free_pages_exact(addr, table_size);
 	}
+
+	mod_node_page_state(pgdat, NR_MEMMAP,
+			    -1L * (DIV_ROUND_UP(table_size, PAGE_SIZE)));
+
 }
 
 static void __free_page_ext(unsigned long pfn)
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index a2cbe44c48e1..1dda6c53370b 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -469,5 +469,13 @@  struct page * __meminit __populate_section_memmap(unsigned long pfn,
 	if (r < 0)
 		return NULL;
 
+	if (system_state == SYSTEM_BOOTING) {
+		mod_node_early_perpage_metadata(nid, DIV_ROUND_UP(end - start,
+								  PAGE_SIZE));
+	} else {
+		mod_node_page_state(NODE_DATA(nid), NR_MEMMAP,
+				    DIV_ROUND_UP(end - start, PAGE_SIZE));
+	}
+
 	return pfn_to_page(pfn);
 }
diff --git a/mm/sparse.c b/mm/sparse.c
index de40b2c73406..973d66a15062 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -14,7 +14,7 @@ 
 #include <linux/swap.h>
 #include <linux/swapops.h>
 #include <linux/bootmem_info.h>
-
+#include <linux/vmstat.h>
 #include "internal.h"
 #include <asm/dma.h>
 
@@ -465,6 +465,9 @@  static void __init sparse_buffer_init(unsigned long size, int nid)
 	 */
 	sparsemap_buf = memmap_alloc(size, section_map_size(), addr, nid, true);
 	sparsemap_buf_end = sparsemap_buf + size;
+#ifndef CONFIG_SPARSEMEM_VMEMMAP
+	mod_node_early_perpage_metadata(nid, DIV_ROUND_UP(size, PAGE_SIZE));
+#endif
 }
 
 static void __init sparse_buffer_fini(void)
@@ -643,6 +646,8 @@  static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
 	unsigned long start = (unsigned long) pfn_to_page(pfn);
 	unsigned long end = start + nr_pages * sizeof(struct page);
 
+	mod_node_page_state(page_pgdat(pfn_to_page(pfn)), NR_MEMMAP,
+			    -1L * (DIV_ROUND_UP(end - start, PAGE_SIZE)));
 	vmemmap_free(start, end, altmap);
 }
 static void free_map_bootmem(struct page *memmap)
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 8507c497218b..73d791d1caad 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1255,7 +1255,8 @@  const char * const vmstat_text[] = {
 	"pgdemote_kswapd",
 	"pgdemote_direct",
 	"pgdemote_khugepaged",
-
+	"nr_memmap",
+	"nr_memmap_boot",
 	/* enum writeback_stat_item counters */
 	"nr_dirty_threshold",
 	"nr_dirty_background_threshold",
@@ -2282,4 +2283,27 @@  static int __init extfrag_debug_init(void)
 }
 
 module_init(extfrag_debug_init);
+
 #endif
+
+/*
+ * Page metadata size (struct page and page_ext) in pages
+ */
+static unsigned long early_perpage_metadata[MAX_NUMNODES] __meminitdata;
+
+void __meminit mod_node_early_perpage_metadata(int nid, long delta)
+{
+	early_perpage_metadata[nid] += delta;
+}
+
+void __meminit store_early_perpage_metadata(void)
+{
+	int nid;
+	struct pglist_data *pgdat;
+
+	for_each_online_pgdat(pgdat) {
+		nid = pgdat->node_id;
+		mod_node_page_state(NODE_DATA(nid), NR_MEMMAP_BOOT,
+				    early_perpage_metadata[nid]);
+	}
+}