
[v2,00/11] Fix PM hibernation in Xen guests

Message ID: cover.1593665947.git.anchalag@amazon.com

Message

Anchal Agarwal July 2, 2020, 6:21 p.m. UTC
Hello,
This series fixes PM hibernation for HVM guests running on the Xen
hypervisor. A running guest can now be hibernated and resumed successfully
at a later time. The fixes for PM hibernation are added to the block and
network device drivers, i.e. xen-blkfront and xen-netfront. Any other
driver that needs S4 support, if not already present, can follow the same
method of introducing freeze/thaw/restore callbacks (a sketch follows
below).
The patches have been tested against the upstream kernel and Xen 4.11.
Large-scale testing was also done on Xen-based Amazon EC2 instances. All
of this testing involved running a memory-exhausting workload in the
background.
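
For driver authors, here is a minimal, hypothetical sketch of what wiring
up the callbacks looks like. The myfront_* names are illustrative only;
the callback members mirror the freeze/thaw/restore hooks this series adds
to struct xenbus_driver, and a real driver must quiesce and reconnect its
rings the way xen-blkfront/xen-netfront do in the patches:

#include <xen/xenbus.h>

static const struct xenbus_device_id myfront_ids[] = {
	{ "mydev" },
	{ "" }
};

static int myfront_probe(struct xenbus_device *dev,
			 const struct xenbus_device_id *id)
{
	return 0; /* device setup omitted in this sketch */
}

static int myfront_freeze(struct xenbus_device *dev)
{
	/* Quiesce the device before the hibernation image is written:
	 * stop queues, wait out in-flight requests, and disconnect the
	 * shared ring from the backend. */
	return 0;
}

static int myfront_restore(struct xenbus_device *dev)
{
	/* Backend state is not preserved across hibernation, so
	 * renegotiate with the backend and rebuild the shared ring
	 * before restarting I/O. */
	return 0;
}

static struct xenbus_driver myfront_driver = {
	.ids = myfront_ids,
	.probe = myfront_probe,
	/* PM hibernation hooks introduced by this series: */
	.freeze = myfront_freeze,
	.thaw = myfront_restore,	/* reconnect, as after restore */
	.restore = myfront_restore,
};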

Guest hibernation does not require any support from the hypervisor, and
this way the guest has complete control over its state. Infrastructure
restrictions on saving guest state can be overcome by guest-initiated
hibernation.

These patches were sent out as an RFC before, and all the feedback from
that posting has been incorporated. The last version, v1, can be found
here:

[v1]: https://lkml.org/lkml/2020/5/19/1312
All comments and feedback from v1 have been incorporated in the v2 series.
Any comments/suggestions are welcome.

Known issues:
1. KASLR causes intermittent hibernation failures. The VM fails to resume
and has to be restarted. I will investigate this issue separately; it
shouldn't be a blocker for this patch series.
2. During hibernation, I sometimes observed that freezing of tasks fails
due to busy XFS workqueues [xfs-cil/xfs-sync]. This is also intermittent,
maybe 1 out of 200 runs, and hibernation is aborted in that case.
Re-trying hibernation may work. This is a known issue between hibernation
and some filesystems like XFS; it has been discussed by the community for
years without an effective resolution at this point.

Testing How to:
---------------
1. Set up the Xen hypervisor on a physical machine [I used Ubuntu 16.04 +
upstream Xen 4.11].
2. Bring up an HVM guest with a kernel compiled with the hibernation
patches [I used Ubuntu 18.04 netboot bionic images and also Amazon Linux
on-prem images].
3. Create a swap file with size = RAM size.
4. Update the grub parameters and reboot.
5. Trigger PM hibernation from within the VM.

Example:
Set up a file-backed swap space (swap file size >= total memory on the
system):
sudo dd if=/dev/zero of=/swap bs=$(( 1024 * 1024 )) count=4096 # 4096MiB
sudo chmod 600 /swap
sudo mkswap /swap
sudo swapon /swap

Update the resume device and resume offset on the kernel command line in
grub when using a swap file:
resume=/dev/xvda1 resume_offset=200704 no_console_suspend=1

Execute:
--------
sudo pm-hibernate
OR
echo disk > /sys/power/state && echo reboot > /sys/power/disk

Compute the resume offset with the script below. It prints the physical
block number of the swap file's first block via the FIBMAP ioctl; this
matches resume_offset's page-sized units when the filesystem block size
equals the page size (typically 4 KiB). FIBMAP requires root.
"
#!/usr/bin/env python3
import sys
import array
import fcntl

# Swap file is named on the command line
f = open(sys.argv[1], 'r')
# Logical block to map (block 0); the ioctl overwrites it in place
# with the physical block number (FIBMAP takes an int argument)
buf = array.array('i', [0])

# FIBMAP ioctl (request number 0x01)
fcntl.ioctl(f.fileno(), 0x01, buf)
print(buf[0])
"


Aleksei Besogonov (1):
  PM / hibernate: update the resume offset on SNAPSHOT_SET_SWAP_AREA

Anchal Agarwal (4):
  x86/xen: Introduce new function to map HYPERVISOR_shared_info on
    Resume
  x86/xen: save and restore steal clock during PM hibernation
  xen: Introduce wrapper for save/restore sched clock offset
  xen: Update sched clock offset to avoid system instability in
    hibernation

Munehisa Kamata (5):
  xen/manage: keep track of the on-going suspend mode
  xenbus: add freeze/thaw/restore callbacks support
  x86/xen: add system core suspend and resume callbacks
  xen-blkfront: add callbacks for PM suspend and hibernation
  xen-netfront: add callbacks for PM suspend and hibernation

Thomas Gleixner (1):
  genirq: Shutdown irq chips in suspend/resume during hibernation

 arch/x86/xen/enlighten_hvm.c      |   7 ++
 arch/x86/xen/suspend.c            |  53 +++++++++++++
 arch/x86/xen/time.c               |  15 +++-
 arch/x86/xen/xen-ops.h            |   3 +
 drivers/block/xen-blkfront.c      | 122 +++++++++++++++++++++++++++++-
 drivers/net/xen-netfront.c        |  98 +++++++++++++++++++++++-
 drivers/xen/events/events_base.c  |   1 +
 drivers/xen/manage.c              |  60 +++++++++++++++
 drivers/xen/xenbus/xenbus_probe.c |  96 +++++++++++++++++++----
 include/linux/irq.h               |   2 +
 include/xen/xen-ops.h             |   3 +
 include/xen/xenbus.h              |   3 +
 kernel/irq/chip.c                 |   2 +-
 kernel/irq/internals.h            |   1 +
 kernel/irq/pm.c                   |  31 +++++---
 kernel/power/user.c               |   6 +-
 16 files changed, 470 insertions(+), 33 deletions(-)

Comments

Anchal Agarwal July 10, 2020, 6:17 p.m. UTC | #1
Gentle ping on this series. 

--
Anchal

Boris Ostrovsky July 13, 2020, 7:43 p.m. UTC | #2
On 7/10/20 2:17 PM, Agarwal, Anchal wrote:
> Gentle ping on this series. 


Have you tested save/restore?


-boris
Anchal Agarwal July 15, 2020, 7:49 p.m. UTC | #3
On Mon, Jul 13, 2020 at 03:43:33PM -0400, Boris Ostrovsky wrote:
> On 7/10/20 2:17 PM, Agarwal, Anchal wrote:
> > Gentle ping on this series.
> 
> 
> Have you tested save/restore?
>
No, not with the last few revisions of the series. That's a good point,
though; I will test it and get back to you. Do you see anything specific
in the series that suggests otherwise?

Thanks,
Anchal
Boris Ostrovsky July 15, 2020, 8:49 p.m. UTC | #4
On 7/15/20 3:49 PM, Anchal Agarwal wrote:
> On Mon, Jul 13, 2020 at 03:43:33PM -0400, Boris Ostrovsky wrote:
>> On 7/10/20 2:17 PM, Agarwal, Anchal wrote:
>>> Gentle ping on this series.
>>
>> Have you tested save/restore?
>>
> No, not with the last few series. But a good point, I will test that and get
> back to you. Do you see anything specific in the series that suggests otherwise?


root@ovs104> xl save pvh saved
Saving to saved new xl format (info 0x3/0x0/1699)
xc: info: Saving domain 3, type x86 HVM
xc: Frames: 1044480/1044480  100%
xc: End of stream: 0/0    0%
root@ovs104> xl restore saved
Loading new save file saved (new xl fmt info 0x3/0x0/1699)
 Savefile contains xl domain config in JSON format
Parsing config from <saved>
xc: info: Found x86 HVM domain from Xen 4.13
xc: info: Restoring domain
xc: info: Restore successful
xc: info: XenStore: mfn 0xfeffc, dom 0, evt 1
xc: info: Console: mfn 0xfefff, dom 0, evt 2
root@ovs104> xl console pvh
[  139.943872] ------------[ cut here ]------------
[  139.943872] kernel BUG at arch/x86/xen/enlighten.c:205!
[  139.943872] invalid opcode: 0000 [#1] SMP PTI
[  139.943872] CPU: 0 PID: 11 Comm: migration/0 Not tainted 5.8.0-rc5 #26
[  139.943872] RIP: 0010:xen_vcpu_setup+0x16d/0x180
[  139.943872] Code: 4a 8b 14 f5 40 c9 1b 82 48 89 d8 48 89 2c 02 8b 05
a4 d4 40 01 85 c0 0f 85 15 ff ff ff 4a 8b 04 f5 40 c9 1b 82 e9 f4 fe ff
ff <0f> 0b b8 ed ff ff ff e9 14 ff ff ff e8 12 4f 86 00 66 90 66 66 66
[  139.943872] RSP: 0018:ffffc9000006bdb0 EFLAGS: 00010046
[  139.943872] RAX: 0000000000000000 RBX: ffffc9000014fe00 RCX:
0000000000000000
[  139.943872] RDX: ffff88803fc00000 RSI: 0000000000016128 RDI:
0000000000000000
[  139.943872] RBP: 0000000000000000 R08: 0000000000000000 R09:
0000000000000000
[  139.943872] R10: ffffffff826174a0 R11: ffffc9000006bcb4 R12:
0000000000016120
[  139.943872] R13: 0000000000016120 R14: 0000000000016128 R15:
0000000000000000
[  139.943872] FS:  0000000000000000(0000) GS:ffff88803fc00000(0000)
knlGS:0000000000000000
[  139.943872] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  139.943872] CR2: 00007f704be8b000 CR3: 000000003901a004 CR4:
00000000000606f0
[  139.943872] Call Trace:
[  139.943872]  ? __kmalloc+0x167/0x260
[  139.943872]  ? xen_manage_runstate_time+0x14a/0x170
[  139.943872]  xen_vcpu_restore+0x134/0x170
[  139.943872]  xen_hvm_post_suspend+0x1d/0x30
[  139.943872]  xen_arch_post_suspend+0x13/0x30
[  139.943872]  xen_suspend+0x87/0x190
[  139.943872]  multi_cpu_stop+0x6d/0x110
[  139.943872]  ? stop_machine_yield+0x10/0x10
[  139.943872]  cpu_stopper_thread+0x47/0x100
[  139.943872]  smpboot_thread_fn+0xc5/0x160
[  139.943872]  ? sort_range+0x20/0x20
[  139.943872]  kthread+0xfe/0x140
[  139.943872]  ? kthread_park+0x90/0x90
[  139.943872]  ret_from_fork+0x22/0x30
[  139.943872] Modules linked in:
[  139.943872] ---[ end trace 74716859a6b4f0a8 ]---
[  139.943872] RIP: 0010:xen_vcpu_setup+0x16d/0x180
[  139.943872] Code: 4a 8b 14 f5 40 c9 1b 82 48 89 d8 48 89 2c 02 8b 05
a4 d4 40 01 85 c0 0f 85 15 ff ff ff 4a 8b 04 f5 40 c9 1b 82 e9 f4 fe ff
ff <0f> 0b b8 ed ff ff ff e9 14 ff ff ff e8 12 4f 86 00 66 90 66 66 66
[  139.943872] RSP: 0018:ffffc9000006bdb0 EFLAGS: 00010046
[  139.943872] RAX: 0000000000000000 RBX: ffffc9000014fe00 RCX:
0000000000000000
[  139.943872] RDX: ffff88803fc00000 RSI: 0000000000016128 RDI:
0000000000000000
[  139.943872] RBP: 0000000000000000 R08: 0000000000000000 R09:
0000000000000000
[  139.943872] R10: ffffffff826174a0 R11: ffffc9000006bcb4 R12:
0000000000016120
[  139.943872] R13: 0000000000016120 R14: 0000000000016128 R15:
0000000000000000
[  139.943872] FS:  0000000000000000(0000) GS:ffff88803fc00000(0000)
knlGS:0000000000000000
[  139.943872] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  139.943872] CR2: 00007f704be8b000 CR3: 000000003901a004 CR4:
00000000000606f0
[  139.943872] Kernel panic - not syncing: Fatal exception
[  139.943872] Shutting down cpus with NMI
[  143.927559] Kernel Offset: disabled
root@ovs104>
Anchal Agarwal July 16, 2020, 11:28 p.m. UTC | #5
On Wed, Jul 15, 2020 at 04:49:57PM -0400, Boris Ostrovsky wrote:
> On 7/15/20 3:49 PM, Anchal Agarwal wrote:
> > On Mon, Jul 13, 2020 at 03:43:33PM -0400, Boris Ostrovsky wrote:
> >> On 7/10/20 2:17 PM, Agarwal, Anchal wrote:
> >>> Gentle ping on this series.
> >>
> >> Have you tested save/restore?
> >>
> > No, not with the last few series. But a good point, I will test that and get
> > back to you. Do you see anything specific in the series that suggests otherwise?
> 
> [snipped: quoted xl save/restore log and crash trace, shown in full above]
> 
I think I may have found a bug. There were no issues with the V1 series
that I sent; however, there were issues with V2. I tested both series and
found xl save/restore to be working in V1 but not in V2. I should have
tested it. Anyway, it looks like the issue comes from executing the
syscore ops registered for the hibernation use case during the call to
xen_suspend().
I remember your earlier comment asking why we need to check the suspend
mode in xen_syscore_suspend [patch 004]. I removed that check based on my
theoretical understanding of your suggestion that, since the
lock_system_sleep() lock is taken, hibernation cannot be initiated. I
missed the part of the code where, during xen_suspend(), all registered
syscore_ops suspend callbacks are called; hence the ones registered for
PM hibernation are also invoked. With no check on the suspend mode there,
the callback fails to return early, even though it should never run
during a Xen suspend.
I will bring back that part of the check in patch 004 from V1 and send an
updated patch with the fix.
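
Roughly, the reinstated early return would look like the sketch below.
All identifiers here are placeholders for illustration, not necessarily
the exact names from the V1 patch:

/* Placeholder names throughout -- not the exact identifiers from V1. */
enum suspend_modes { NO_SUSPEND, XEN_SUSPEND, PM_HIBERNATION };

static enum suspend_modes suspend_mode = NO_SUSPEND;

static int xen_syscore_suspend(void)
{
	/* xen_suspend() (the xl save / migration path) invokes every
	 * registered syscore_ops->suspend callback. Return early unless
	 * this is a guest-initiated PM hibernation, so that the
	 * hibernation hooks never run during a classic Xen suspend. */
	if (suspend_mode != PM_HIBERNATION)
		return 0;

	/* ... PM-hibernation-specific save work goes here ... */
	return 0;
}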

Thanks,
Anchal