
[v2] virtio-balloon: fix managed page counts when migrating pages between zones

Message ID 20191205092420.6934-1-david@redhat.com (mailing list archive)
State New, archived
Series [v2] virtio-balloon: fix managed page counts when migrating pages between zones

Commit Message

David Hildenbrand Dec. 5, 2019, 9:24 a.m. UTC
In case we have to migrate a balloon page to a newpage in another zone, the
managed page count of both zones ends up wrong. Paired with memory offlining
(which will adjust the managed page count), we can trigger kernel crashes
and all kinds of different symptoms.

One way to reproduce (a command-level sketch follows the log below):
1. Start a QEMU guest with 4GB, no NUMA
2. Hotplug a 1GB DIMM and only the memory to ZONE_NORMAL
3. Inflate the balloon to 1GB
4. Unplug the DIMM (be quick, otherwise unmovable data ends up on it)
5. Observe /proc/zoneinfo
  Node 0, zone   Normal
    pages free     16810
          min      24848885473806
          low      18471592959183339
          high     36918337032892872
          spanned  262144
          present  262144
          managed  18446744073709533486
6. Do anything that requires some memory (e.g., inflate the balloon some
   more). The OOM killer goes crazy and the system crashes:
  [  238.324946] Out of memory: Killed process 537 (login) total-vm:27584kB, anon-rss:860kB, file-rss:0kB, shmem-rss:00
  [  238.338585] systemd invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
  [  238.339420] CPU: 0 PID: 1 Comm: systemd Tainted: G      D W         5.4.0-next-20191204+ #75
  [  238.340139] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu4
  [  238.341121] Call Trace:
  [  238.341337]  dump_stack+0x8f/0xd0
  [  238.341630]  dump_header+0x61/0x5ea
  [  238.341942]  oom_kill_process.cold+0xb/0x10
  [  238.342299]  out_of_memory+0x24d/0x5a0
  [  238.342625]  __alloc_pages_slowpath+0xd12/0x1020
  [  238.343024]  __alloc_pages_nodemask+0x391/0x410
  [  238.343407]  pagecache_get_page+0xc3/0x3a0
  [  238.343757]  filemap_fault+0x804/0xc30
  [  238.344083]  ? ext4_filemap_fault+0x28/0x42
  [  238.344444]  ext4_filemap_fault+0x30/0x42
  [  238.344789]  __do_fault+0x37/0x1a0
  [  238.345087]  __handle_mm_fault+0x104d/0x1ab0
  [  238.345450]  handle_mm_fault+0x169/0x360
  [  238.345790]  do_user_addr_fault+0x20d/0x490
  [  238.346154]  do_page_fault+0x31/0x210
  [  238.346468]  async_page_fault+0x43/0x50
  [  238.346797] RIP: 0033:0x7f47eba4197e
  [  238.347110] Code: Bad RIP value.
  [  238.347387] RSP: 002b:00007ffd7c0c1890 EFLAGS: 00010293
  [  238.347834] RAX: 0000000000000002 RBX: 000055d196a20a20 RCX: 00007f47eba4197e
  [  238.348437] RDX: 0000000000000033 RSI: 00007ffd7c0c18c0 RDI: 0000000000000004
  [  238.349047] RBP: 00007ffd7c0c1c20 R08: 0000000000000000 R09: 0000000000000033
  [  238.349660] R10: 00000000ffffffff R11: 0000000000000293 R12: 0000000000000001
  [  238.350261] R13: ffffffffffffffff R14: 0000000000000000 R15: 00007ffd7c0c18c0
  [  238.350878] Mem-Info:
  [  238.351085] active_anon:3121 inactive_anon:51 isolated_anon:0
  [  238.351085]  active_file:12 inactive_file:7 isolated_file:0
  [  238.351085]  unevictable:0 dirty:0 writeback:0 unstable:0
  [  238.351085]  slab_reclaimable:5565 slab_unreclaimable:10170
  [  238.351085]  mapped:3 shmem:111 pagetables:155 bounce:0
  [  238.351085]  free:720717 free_pcp:2 free_cma:0
  [  238.353757] Node 0 active_anon:12484kB inactive_anon:204kB active_file:48kB inactive_file:28kB unevictable:0kB iss
  [  238.355979] Node 0 DMA free:11556kB min:36kB low:48kB high:60kB reserved_highatomic:0KB active_anon:152kB inactivB
  [  238.358345] lowmem_reserve[]: 0 2955 2884 2884 2884
  [  238.358761] Node 0 DMA32 free:2677864kB min:7004kB low:10028kB high:13052kB reserved_highatomic:0KB active_anon:0B
  [  238.361202] lowmem_reserve[]: 0 0 72057594037927865 72057594037927865 72057594037927865
  [  238.361888] Node 0 Normal free:193448kB min:99395541895224kB low:73886371836733356kB high:147673348131571488kB reB
  [  238.364765] lowmem_reserve[]: 0 0 0 0 0
  [  238.365101] Node 0 DMA: 7*4kB (U) 5*8kB (UE) 6*16kB (UME) 2*32kB (UM) 1*64kB (U) 2*128kB (UE) 3*256kB (UME) 2*512B
  [  238.366379] Node 0 DMA32: 0*4kB 1*8kB (U) 2*16kB (UM) 2*32kB (UM) 2*64kB (UM) 1*128kB (U) 1*256kB (U) 1*512kB (U)B
  [  238.367654] Node 0 Normal: 1985*4kB (UME) 1321*8kB (UME) 844*16kB (UME) 524*32kB (UME) 300*64kB (UME) 138*128kB (B
  [  238.369184] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
  [  238.369915] 130 total pagecache pages
  [  238.370241] 0 pages in swap cache
  [  238.370533] Swap cache stats: add 0, delete 0, find 0/0
  [  238.370981] Free swap  = 0kB
  [  238.371239] Total swap = 0kB
  [  238.371488] 1048445 pages RAM
  [  238.371756] 0 pages HighMem/MovableOnly
  [  238.372090] 306992 pages reserved
  [  238.372376] 0 pages cma reserved
  [  238.372661] 0 pages hwpoisoned
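
The steps above might map to commands like the following (a sketch: the exact
QEMU invocation, the memory block number "XX" and the balloon target value are
assumptions, not taken from the report):
  # guest started along the lines of:
  #   qemu-system-x86_64 -m 4G,slots=2,maxmem=8G -device virtio-balloon-pci ...
  (qemu) object_add memory-backend-ram,id=mem1,size=1G
  (qemu) device_add pc-dimm,id=dimm1,memdev=mem1
  # in the guest: online the new memory block(s) to ZONE_NORMAL
  echo online_kernel > /sys/devices/system/memory/memoryXX/state
  # lower the balloon target so the balloon inflates to 1GB
  (qemu) balloon 3072
  # unplug the DIMM again
  (qemu) device_del dimm1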

In another instance (older kernel), I was able to observe this
(negative page count :/):
  [  180.896971] Offlined Pages 32768
  [  182.667462] Offlined Pages 32768
  [  184.408117] Offlined Pages 32768
  [  186.026321] Offlined Pages 32768
  [  187.684861] Offlined Pages 32768
  [  189.227013] Offlined Pages 32768
  [  190.830303] Offlined Pages 32768
  [  190.833071] Built 1 zonelists, mobility grouping on.  Total pages: -36920272750453009

In another instance (older kernel), I was no longer able to start any
process:
  [root@vm ~]# [  214.348068] Offlined Pages 32768
  [  215.973009] Offlined Pages 32768
  cat /proc/meminfo
  -bash: fork: Cannot allocate memory
  [root@vm ~]# cat /proc/meminfo
  -bash: fork: Cannot allocate memory

Fix it by properly adjusting the managed page count when migrating, in case
the zone changed. After unplugging the DIMM (and deflating the balloon), the
managed page count of the zones now looks just like it did before inflating
the balloon (and plugging+onlining the DIMM).

We'll temporarily modify the totalram page count. If this ever becomes a
problem, we can fine-tune by providing helpers that don't touch
the totalram pages (e.g., adjust_zone_managed_page_count()).
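
For reference, adjust_managed_page_count() around this time looks roughly like
the following (paraphrased from mm/page_alloc.c), which is why totalram is
affected; the second helper is only a sketch of the hypothetical
adjust_zone_managed_page_count() suggested above, not existing kernel API:

  void adjust_managed_page_count(struct page *page, long count)
  {
  	atomic_long_add(count, &page_zone(page)->managed_pages);
  	totalram_pages_add(count);
  #ifdef CONFIG_HIGHMEM
  	if (PageHighMem(page))
  		totalhigh_pages_add(count);
  #endif
  }

  /* hypothetical sketch: adjust only the zone, leave totalram pages alone */
  static void adjust_zone_managed_page_count(struct page *page, long count)
  {
  	atomic_long_add(count, &page_zone(page)->managed_pages);
  }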

Reported-by: Yumei Huang <yuhuang@redhat.com>
Fixes: 3dcc0571cd64 ("mm: correctly update zone->managed_pages")
Cc: <stable@vger.kernel.org> # v3.11+
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Jiang Liu <liuj97@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: virtualization@lists.linux-foundation.org
Signed-off-by: David Hildenbrand <david@redhat.com>
---

v1 -> v2:
- Adjust count before enqueuing newpage (as it can possibly get freed from
  the balloon right away)
- Check if the zone changed

---
 drivers/virtio/virtio_balloon.c | 7 +++++++
 1 file changed, 7 insertions(+)

Comments

David Hildenbrand Dec. 10, 2019, 1:44 p.m. UTC | #1
On 05.12.19 10:24, David Hildenbrand wrote:
> In case we have to migrate a ballon page to a newpage of another zone, the
> managed page count of both zones is wrong. Paired with memory offlining
> (which will adjust the managed page count), we can trigger kernel crashes
> and all kinds of different symptoms.
> 
> One way to reproduce:
> 1. Start a QEMU guest with 4GB, no NUMA
> 2. Hotplug a 1GB DIMM and only the memory to ZONE_NORMAL

s/only/online/

as requested by Igor.

[...]

> @@ -722,6 +722,13 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
>  
>  	get_page(newpage); /* balloon reference */
>  
> +	/* fixup the managed page count (esp. of the zone) */

/*
 * When we migrate to a different zone, we have to adjust the managed
 * page count of both involved zones.
 */

as requested by Michael.


@Michael, if there are no further comments, shall I resend?
Michael S. Tsirkin Dec. 10, 2019, 3:24 p.m. UTC | #2
On Tue, Dec 10, 2019 at 02:44:38PM +0100, David Hildenbrand wrote:
[...]

> @Michael, if there are no further comments, shall I resend?

Also, what does it have to do with deflate on oom?
Why is it true we don't need to do it then?

David Hildenbrand Dec. 10, 2019, 3:36 p.m. UTC | #3
On 10.12.19 16:24, Michael S. Tsirkin wrote:
> On Tue, Dec 10, 2019 at 02:44:38PM +0100, David Hildenbrand wrote:
[...]

> Also, what does it have to do with deflate on oom?
> Why is it true we don't need to do it then?

Let's have a look at the other users in this file:

t460s: ~/git/linux memory_holes $ git grep -C3 adjust_man drivers/virtio/virtio_balloon.c
drivers/virtio/virtio_balloon.c-                vb->num_pages += VIRTIO_BALLOON_PAGES_PER_PAGE;
drivers/virtio/virtio_balloon.c-                if (!virtio_has_feature(vb->vdev,
drivers/virtio/virtio_balloon.c-                                        VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
drivers/virtio/virtio_balloon.c:                        adjust_managed_page_count(page, -1);
drivers/virtio/virtio_balloon.c-                vb->num_pfns += VIRTIO_BALLOON_PAGES_PER_PAGE;
drivers/virtio/virtio_balloon.c-        }
drivers/virtio/virtio_balloon.c-
--
drivers/virtio/virtio_balloon.c-        list_for_each_entry_safe(page, next, pages, lru) {
drivers/virtio/virtio_balloon.c-                if (!virtio_has_feature(vb->vdev,
drivers/virtio/virtio_balloon.c-                                        VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
drivers/virtio/virtio_balloon.c:                        adjust_managed_page_count(page, 1);
drivers/virtio/virtio_balloon.c-                list_del(&page->lru);
drivers/virtio/virtio_balloon.c-                put_page(page); /* balloon reference */
drivers/virtio/virtio_balloon.c-        }

In case we do have VIRTIO_BALLOON_F_DEFLATE_ON_OOM, we don't touch the managed
page count when inflating/deflating; therefore, there is nothing to do when
migrating a page between zones.

Why? Because the designers of VIRTIO_BALLOON_F_DEFLATE_ON_OOM decided to make
it look like the VM has all RAM available (and make it look like the memory is
merely allocated, not "unplugged"). With VIRTIO_BALLOON_F_DEFLATE_ON_OOM, the
system will reclaim memory from the balloon when running out of memory and,
therefore, "shrink" the balloon automatically.

E.g., with VIRTIO_BALLOON_F_DEFLATE_ON_OOM, neither "MemTotal" under
/proc/meminfo nor the managed page count under /proc/zoneinfo will change.
Without VIRTIO_BALLOON_F_DEFLATE_ON_OOM, both will change; therefore, the
fixup is only needed without VIRTIO_BALLOON_F_DEFLATE_ON_OOM.
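
Concretely (numbers illustrative): inflating a 1GB balloon without
VIRTIO_BALLOON_F_DEFLATE_ON_OOM lowers MemTotal by 1GB and the source zone's
managed count by 262144 pages, so a balloon page that later migrates to a
different zone has to hand one managed page back to the old zone and take one
from the new zone. With the feature, neither counter moved at inflation time,
so there is nothing to rebalance on migration.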

Patch

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 15b7f1d8c334..3078e1ac2a8f 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -722,6 +722,13 @@  static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
 
 	get_page(newpage); /* balloon reference */
 
+	/* fixup the managed page count (esp. of the zone) */
+	if (!virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM) &&
+	    page_zone(page) != page_zone(newpage)) {
+		adjust_managed_page_count(page, 1);
+		adjust_managed_page_count(newpage, -1);
+	}
+
 	/* balloon's page migration 1st step  -- inflate "newpage" */
 	spin_lock_irqsave(&vb_dev_info->pages_lock, flags);
 	balloon_page_insert(vb_dev_info, newpage);