
[2/2] libxl_pci: Fix guest shutdown with PCI PT attached

Message ID 20190930172327.784520-3-anthony.perard@citrix.com (mailing list archive)
State New, archived
Series libxl fixes with pci passthrough

Commit Message

Anthony PERARD Sept. 30, 2019, 5:23 p.m. UTC
Before the problematic commit, libxl used to ignore errors when
destroying (force == true) a passthrough device. If the DM failed to
detach the PCI device within the allowed time, the timeout error
raised not only skipped part of pci_remove_*, but was also propagated
up to the caller of libxl__device_pci_destroy_all,
libxl__destroy_domid, and thus the destruction of the domain failed.

When a *pci_destroy* function is called (so we have force=true),
errors should mostly be ignored. If the DM didn't confirm that the
device is removed, we print a warning and keep going if force=true.
The patch reorders the functions so that pci_remove_timeout() calls
pci_remove_detatched(), as is done when DM calls are successful.

We also clean up the QMP state and associated timeouts earlier, as
soon as they are no longer needed.

Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Fixes: fae4880c45fe015e567afa223f78bf17a6d98e1b
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libxl/libxl_pci.c | 41 ++++++++++++++++++++++++++---------------
 1 file changed, 26 insertions(+), 15 deletions(-)
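
In code terms, the change makes the timeout path converge with the normal
completion path. A condensed sketch of the diff at the bottom of this page
(not a literal excerpt; the elided parts are marked):

    static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
                                   const struct timeval *requested_abs, int rc)
    {
        pci_remove_state *prs = CONTAINER_OF(ev, *prs, timeout);

        /* warn, then let pci_remove_detatched() decide what to do with rc */
        pci_remove_detatched(egc, prs, rc);
    }

    static void pci_remove_detatched(libxl__egc *egc, pci_remove_state *prs,
                                     int rc)
    {
        /* QMP state and timeouts are disposed of here, as soon as they
         * are no longer needed */

        if (rc && !prs->force)
            goto out;   /* only a non-forced removal propagates the error */

        /* ... carry on tearing the device down ... */
    out:
        ;               /* placeholder for the elided completion path */
    }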

Comments

Sander Eikelenboom Sept. 30, 2019, 6:10 p.m. UTC | #1
On 30/09/2019 19:23, Anthony PERARD wrote:
> Before the problematic commit, libxl used to ignore errors when
> destroying (force == true) a passthrough device. If the DM failed to
> detach the PCI device within the allowed time, the timeout error
> raised not only skipped part of pci_remove_*, but was also propagated
> up to the caller of libxl__device_pci_destroy_all,
> libxl__destroy_domid, and thus the destruction of the domain failed.
> 
> When a *pci_destroy* function is called (so we have force=true),
> errors should mostly be ignored. If the DM didn't confirm that the
> device is removed, we print a warning and keep going if force=true.
> The patch reorders the functions so that pci_remove_timeout() calls
> pci_remove_detatched(), as is done when DM calls are successful.
> 
> We also clean up the QMP state and associated timeouts earlier, as
> soon as they are no longer needed.
> 
> Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
> Fixes: fae4880c45fe015e567afa223f78bf17a6d98e1b
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Hi Anthony,

Just tested and it works for me, thanks!

--
Sander
Anthony PERARD Oct. 1, 2019, 10:35 a.m. UTC | #2
Rewrite of the commit message:

Before the problematic commit, libxl used to ignore errors when
destroying (force == true) a passthrough device, especially errors
that happen when dealing with the DM.

Since fae4880c45fe, if the DM fails to detach the PCI device within
the allowed time, the timeout error raised not only skips part of
pci_remove_*, but is also propagated up to the caller of
libxl__device_pci_destroy_all, libxl__destroy_domid, and thus the
destruction of the domain fails.

In this patch, if the DM hasn't confirmed that the device is removed,
we print a warning and keep going if force=true.  The patch reorders
the functions so that pci_remove_timeout() calls pci_remove_detatched(),
as is done when DM calls are successful.

We also clean up the QMP state and associated timeouts earlier, as
soon as they are no longer needed.

Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Fixes: fae4880c45fe015e567afa223f78bf17a6d98e1b
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Sander Eikelenboom Oct. 10, 2019, 4:13 p.m. UTC | #3
On 01/10/2019 12:35, Anthony PERARD wrote:
> Rewrite of the commit message:
> 
> Before the problematic commit, libxl used to ignore errors when
> destroying (force == true) a passthrough device, especially errors
> that happen when dealing with the DM.
> 
> Since fae4880c45fe, if the DM fails to detach the PCI device within
> the allowed time, the timeout error raised not only skips part of
> pci_remove_*, but is also propagated up to the caller of
> libxl__device_pci_destroy_all, libxl__destroy_domid, and thus the
> destruction of the domain fails.
> 
> In this patch, if the DM hasn't confirmed that the device is removed,
> we print a warning and keep going if force=true.  The patch reorders
> the functions so that pci_remove_timeout() calls pci_remove_detatched(),
> as is done when DM calls are successful.
> 
> We also clean up the QMP state and associated timeouts earlier, as
> soon as they are no longer needed.
> 
> Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
> Fixes: fae4880c45fe015e567afa223f78bf17a6d98e1b
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> 

Hi Anthony / Chao,

I have to come back to this, partly because perhaps there is an underlying issue.
It had occurred to me earlier that the VM to which I pass through the most PCI devices
(8 to be exact) had become very slow to shut down, but I didn't investigate it further.

But after the commit messages from this patch it kept nagging at me, so today I did some
testing and bisecting.

The difference in tear-down time, at least going by what the IOMMU code logs, is quite large:

xen-4.12.0
	Setup: 	    7.452 s
	Tear-down:  7.626 s

xen-unstable-ee7170822f1fc209f33feb47b268bab35541351d
	Setup:      7.468 s
	Tear-down: 50.239 s

Bisection turned up:
	commit c4b1ef0f89aa6a74faa4618ce3efed1de246ec40
	Author: Chao Gao <chao.gao@intel.com>
	Date:   Fri Jul 19 10:24:08 2019 +0100
	libxl_qmp: wait for completion of device removal

Which makes me wonder if there is something going wrong in QEMU?

--
Sander



xen-4.12.0 setup:
	(XEN) [2019-10-10 09:54:14.846] AMD-Vi: Disable: device id = 0x900, domain = 0, paging mode = 3
	(XEN) [2019-10-10 09:54:14.846] AMD-Vi: Setup I/O page table: device id = 0x900, type = 0x1, root table = 0x4aa847000, domain = 1, paging mode = 3
	(XEN) [2019-10-10 09:54:14.846] AMD-Vi: Re-assign 0000:09:00.0 from dom0 to dom1
	...
	(XEN) [2019-10-10 09:54:22.298] AMD-Vi: Disable: device id = 0x907, domain = 0, paging mode = 3
	(XEN) [2019-10-10 09:54:22.298] AMD-Vi: Setup I/O page table: device id = 0x907, type = 0x1, root table = 0x4aa847000, domain = 1, paging mode = 3
	(XEN) [2019-10-10 09:54:22.298] AMD-Vi: Re-assign 0000:09:00.7 from dom0 to dom1


xen-4.12.0 tear-down:
	(XEN) [2019-10-10 10:01:11.971] AMD-Vi: Disable: device id = 0x900, domain = 1, paging mode = 3
	(XEN) [2019-10-10 10:01:11.971] AMD-Vi: Setup I/O page table: device id = 0x900, type = 0x1, root table = 0x53572c000, domain = 0, paging mode = 3
	(XEN) [2019-10-10 10:01:11.971] AMD-Vi: Re-assign 0000:09:00.0 from dom1 to dom0
	...
	(XEN) [2019-10-10 10:01:19.597] AMD-Vi: Disable: device id = 0x907, domain = 1, paging mode = 3
	(XEN) [2019-10-10 10:01:19.597] AMD-Vi: Setup I/O page table: device id = 0x907, type = 0x1, root table = 0x53572c000, domain = 0, paging mode = 3
	(XEN) [2019-10-10 10:01:19.597] AMD-Vi: Re-assign 0000:09:00.7 from dom1 to dom0

xen-unstable-ee7170822f1fc209f33feb47b268bab35541351d setup:
	(XEN) [2019-10-10 10:21:38.549] d1: bind: m_gsi=47 g_gsi=36 dev=00.00.5 intx=0
	(XEN) [2019-10-10 10:21:38.621] AMD-Vi: Disable: device id = 0x900, domain = 0, paging mode = 3
	(XEN) [2019-10-10 10:21:38.621] AMD-Vi: Setup I/O page table: device id = 0x900, type = 0x1, root table = 0x4aa83b000, domain = 1, paging mode = 3
	(XEN) [2019-10-10 10:21:38.621] AMD-Vi: Re-assign 0000:09:00.0 from dom0 to dom1
	...
	(XEN) [2019-10-10 10:21:46.069] d1: bind: m_gsi=46 g_gsi=36 dev=00.01.4 intx=3
	(XEN) [2019-10-10 10:21:46.089] AMD-Vi: Disable: device id = 0x907, domain = 0, paging mode = 3
	(XEN) [2019-10-10 10:21:46.089] AMD-Vi: Setup I/O page table: device id = 0x907, type = 0x1, root table = 0x4aa83b000, domain = 1, paging mode = 3
	(XEN) [2019-10-10 10:21:46.089] AMD-Vi: Re-assign 0000:09:00.7 from dom0 to dom1


xen-unstable-ee7170822f1fc209f33feb47b268bab35541351d tear-down:
	(XEN) [2019-10-10 10:23:53.167] AMD-Vi: Disable: device id = 0x900, domain = 1, paging mode = 3
	(XEN) [2019-10-10 10:23:53.167] AMD-Vi: Setup I/O page table: device id = 0x900, type = 0x1, root table = 0x5240f8000, domain = 0, paging mode = 3
	(XEN) [2019-10-10 10:23:53.167] AMD-Vi: Re-assign 0000:09:00.0 from dom1 to dom0
	...
	(XEN) [2019-10-10 10:24:43.406] AMD-Vi: Disable: device id = 0x907, domain = 1, paging mode = 3
	(XEN) [2019-10-10 10:24:43.406] AMD-Vi: Setup I/O page table: device id = 0x907, type = 0x1, root table = 0x5240f8000, domain = 0, paging mode = 3
	(XEN) [2019-10-10 10:24:43.406] AMD-Vi: Re-assign 0000:09:00.7 from dom1 to dom0
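
(For reference: the setup and tear-down times given earlier are the deltas
between the first and last log lines of each phase, e.g. 10:24:43.406 -
10:23:53.167 = 50.239 s for the xen-unstable tear-down.)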
Chao Gao Oct. 14, 2019, 3:03 p.m. UTC | #4
On Thu, Oct 10, 2019 at 06:13:43PM +0200, Sander Eikelenboom wrote:
>On 01/10/2019 12:35, Anthony PERARD wrote:
>> [commit message rewrite snipped]
>
>Hi Anthony / Chao,
>
>I have to come back to this, partly because perhaps there is an underlying issue.
>It had occurred to me earlier that the VM to which I pass through the most PCI devices
>(8 to be exact) had become very slow to shut down, but I didn't investigate it further.
>
>But after the commit messages from this patch it kept nagging at me, so today I did some
>testing and bisecting.
>
>The difference in tear-down time, at least going by what the IOMMU code logs, is quite large:
>
>xen-4.12.0
>	Setup: 	    7.452 s
>	Tear-down:  7.626 s
>
>xen-unstable-ee7170822f1fc209f33feb47b268bab35541351d
>	Setup:      7.468 s
>	Tear-down: 50.239 s
>
>Bisection turned up:
>	commit c4b1ef0f89aa6a74faa4618ce3efed1de246ec40
>	Author: Chao Gao <chao.gao@intel.com>
>	Date:   Fri Jul 19 10:24:08 2019 +0100
>	libxl_qmp: wait for completion of device removal
>
>Which makes me wonder if there is something going wrong in QEMU?

Hi Sander,

Thanks for your testing and the bisection.

I tried on my machine, the destruction time of a guest with 8 pass-thru
devices increased from 4s to 12s after applying the commit above. In my
understanding, I guess you might be getting the error message "timed out
waiting for DM to remove...". There might be some issues with your assigned
devices' drivers. You can first unbind the devices from their drivers in the
VM and then tear down the VM, and check whether the VM teardown gets
much faster.

Anthony & Wei,

The commit above basically serializes and synchronizes detaching
assigned devices and thus increases VM teardown time significantly if
there are multiple assigned devices. The commit aimed to avoid qemu's
access to PCI configuration space coinciding with the device reset
initiated by xl (which is not desired and is exactly the case which
triggers the assertion in Xen [1]). I personally insist that xl should
wait for DM's completion of device detaching. Otherwise, besides Xen
panic (which can be fixed in another way), in theory, such sudden
unawared device reset might cause a disaster (e.g. data loss for a
storage device).

[1]: https://lists.xenproject.org/archives/html/xen-devel/2019-09/msg03287.html

But considering that fast creation and teardown are an important benefit of
virtualization, I am not sure how to deal with the situation. Anyway,
you can make the decision. To fix the regression in VM teardown, we could
revert the commit by removing the timeout logic.

What's your opinion?

Thanks
Chao

Sander Eikelenboom Oct. 15, 2019, 4:59 p.m. UTC | #5
On 14/10/2019 17:03, Chao Gao wrote:
> On Thu, Oct 10, 2019 at 06:13:43PM +0200, Sander Eikelenboom wrote:
>> On 01/10/2019 12:35, Anthony PERARD wrote:
>>> [commit message rewrite snipped]
>>
>> [test results and bisection snipped]
> Hi Sander,
Hi Chao,

> 
> Thanks for your testing and the bisection.
> 
> I tried on my machine, the destruction time of a guest with 8 pass-thru
> devices increased from 4s to 12s after applying the commit above.

To which patch are you referring, Anthony's or c4b1ef0f89aa6a74faa4618ce3efed1de246ec40?

> In my understanding, I guess you might be getting the error message "timed out
> waiting for DM to remove...". There might be some issues with your assigned
> devices' drivers. You can first unbind the devices from their drivers in the
> VM and then tear down the VM, and check whether the VM teardown gets
> much faster.

I get that error message when I test with Anthony's patch applied; the destruction time with that patch is low.

However, my point was whether that patch is correct, in the sense that there seems to be an underlying issue
which causes it to take so long. That issue was uncovered by c4b1ef0f89aa6a74faa4618ce3efed1de246ec40, so I'm not
saying that commit is wrong in any sense; it just uncovered another issue that was already present,
but hard to detect, as we just didn't wait at destruction time (and thus had the same effect as a timeout).

One way or the other, that was just a minor issue until fae4880c45fe015e567afa223f78bf17a6d98e1b, where the long
destruction time caused the domain destruction to stall. That was then fixed by Anthony's patch, but that uses
a timeout, which kind of circumvents the issue instead of finding out where it comes from and solving it there
(if that is possible, of course).

And I wonder whether Anthony's patch doesn't interfere with the case you made c4b1ef0f89aa6a74faa4618ce3efed1de246ec40 for:
if you get the timeout error message as well, then that is kind of not waiting for the destruction to finish, isn't it?

Chao, 
could you perhaps test Xen for me with ee7170822f1fc209f33feb47b268bab35541351d as the latest commit?
That is before Anthony's patch series, but after your c4b1ef0f89aa6a74faa4618ce3efed1de246ec40.

I would expect to see longer destruction times in the case of 8 pass-through devices as well.

Unfortunately QEMU doesn't seem to do much verbose logging even when I enable the debug defines in hw/xen,
especially on the destruction side of things (it mostly logs setup).

--
Sander
Sander Eikelenboom Oct. 15, 2019, 6:46 p.m. UTC | #6
On 15/10/2019 18:59, Sander Eikelenboom wrote:
> On 14/10/2019 17:03, Chao Gao wrote:
>> On Thu, Oct 10, 2019 at 06:13:43PM +0200, Sander Eikelenboom wrote:
>>> [commit message and test results snipped]
>
>> In my understanding, I guess you might be getting the error message "timed out
>> waiting for DM to remove...". There might be some issues with your assigned
>> devices' drivers. You can first unbind the devices from their drivers in the
>> VM and then tear down the VM, and check whether the VM teardown gets
>> much faster.

Sorry, I forgot to answer your question: I tried unbinding the drivers in
the guest prior to shutting it down, but it didn't make any difference.

--
Sander


Chao Gao Oct. 16, 2019, 4:55 a.m. UTC | #7
On Tue, Oct 15, 2019 at 06:59:37PM +0200, Sander Eikelenboom wrote:
>On 14/10/2019 17:03, Chao Gao wrote:
>> On Thu, Oct 10, 2019 at 06:13:43PM +0200, Sander Eikelenboom wrote:
>>> [commit message and test results snipped]
> 
>> Hi Sander,
>Hi Chao,
>
>> 
>> Thanks for your testing and the bisection.
>> 
>> I tried on my machine, the destruction time of a guest with 8 pass-thru
>> devices increased from 4s to 12s after applying the commit above.
>
>To which patch are you referring, Anthony's or c4b1ef0f89aa6a74faa4618ce3efed1de246ec40?

The latter.

>
>> In my understanding, I guess you might be getting the error message "timed out
>> waiting for DM to remove...". There might be some issues with your assigned
>> devices' drivers. You can first unbind the devices from their drivers in the
>> VM and then tear down the VM, and check whether the VM teardown gets
>> much faster.
>
>I get that error message when I test with Anthony's patch applied; the destruction time with that patch is low.
>
>However, my point was whether that patch is correct, in the sense that there seems to be an underlying issue
>which causes it to take so long. That issue was uncovered by c4b1ef0f89aa6a74faa4618ce3efed1de246ec40, so I'm not
>saying that commit is wrong in any sense; it just uncovered another issue that was already present,
>but hard to detect, as we just didn't wait at destruction time (and thus had the same effect as a timeout).

Actually, it was introduced by c4b1ef0f89, though that commit did fix
another issue.

>
>One way or the other, that was just a minor issue until fae4880c45fe015e567afa223f78bf17a6d98e1b, where the long
>destruction time caused the domain destruction to stall. That was then fixed by Anthony's patch, but that uses
>a timeout, which kind of circumvents the issue instead of finding out where it comes from and solving it there
>(if that is possible, of course).
>
>And I wonder whether Anthony's patch doesn't interfere with the case you made c4b1ef0f89aa6a74faa4618ce3efed1de246ec40 for:
>if you get the timeout error message as well, then that is kind of not waiting for the destruction to finish, isn't it?
>
>Chao,
>could you perhaps test Xen for me with ee7170822f1fc209f33feb47b268bab35541351d as the latest commit?
>That is before Anthony's patch series, but after your c4b1ef0f89aa6a74faa4618ce3efed1de246ec40.

That's actually what I did. VM teardown with 8 pass-thru devices on my
side takes 12s, where it took only 4s without my patch.

Thanks
Chao
Anthony PERARD Oct. 18, 2019, 4:11 p.m. UTC | #8
On Mon, Oct 14, 2019 at 11:03:43PM +0800, Chao Gao wrote:
> On Thu, Oct 10, 2019 at 06:13:43PM +0200, Sander Eikelenboom wrote:
> >[test results and bisection snipped]
> 
> Hi Sander,
> 
> Thanks for your testing and the bisection.
> 
> I tried on my machine, the destruction time of a guest with 8 pass-thru
> devices increased from 4s to 12s after applying the commit above. In my
> understanding, I guess you might be getting the error message "timed out
> waiting for DM to remove...". There might be some issues with your assigned
> devices' drivers. You can first unbind the devices from their drivers in the
> VM and then tear down the VM, and check whether the VM teardown gets
> much faster.

Hi,

Chao, I think you've tested `xl destroy`, and Sander, I think you are
speaking about `xl shutdown` or simply powering off a guest. Well, these
two operations are a bit different: on destroy the guest kernel is
still running when the PCI devices are being removed, but on shutdown the
guest kernel is gone.

I don't think there's anything wrong with QEMU or with the devices; it's
just that when the toolstack asks QEMU to unplug the PCI device, QEMU
will ask the guest kernel first. So the guest may never acknowledge the
removal, and QEMU will not let go of the PCI device. There is actually an
old Xen commit about that,
77fea72b068d25afb7e63947aba32b487d7124a2, and a comment in the code:
    /* This depends on guest operating system acknowledging the
     * SCI, if it doesn't respond in time then we may wish to
     * force the removal. */
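
To illustrate the sequence (a minimal standalone sketch, not libxl code;
the QMP socket path, the qdev id, the 10s deadline and the naive one-read
buffering are simplifying assumptions), the detach protocol that libxl
waits on since c4b1ef0f89 looks roughly like:

    /* Ask QEMU over QMP to unplug a passed-through device, then poll
     * until it is gone or we give up. */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static int qmp_cmd(int fd, const char *json, char *reply, size_t len)
    {
        ssize_t r;
        if (write(fd, json, strlen(json)) < 0) return -1;
        r = read(fd, reply, len - 1);   /* naive: assumes one full reply */
        if (r < 0) return -1;
        reply[r] = '\0';
        return 0;
    }

    int main(void)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        char buf[65536], cmd[256];
        const char *qdev_id = "pci-pt-09_00.0";   /* cf. PCI_PT_QDEV_ID */
        time_t deadline;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        strncpy(addr.sun_path, "/var/run/xen/qmp-libxl-1",  /* domid 1 */
                sizeof(addr.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            return 1;

        read(fd, buf, sizeof(buf) - 1);                 /* QMP greeting */
        qmp_cmd(fd, "{\"execute\":\"qmp_capabilities\"}", buf, sizeof(buf));

        /* device_del returns immediately; QEMU then asks the guest, via
         * an ACPI SCI, to release the device */
        snprintf(cmd, sizeof(cmd),
                 "{\"execute\":\"device_del\",\"arguments\":{\"id\":\"%s\"}}",
                 qdev_id);
        qmp_cmd(fd, cmd, buf, sizeof(buf));

        /* The device is really gone once query-pci stops mentioning it.
         * If the guest never acknowledges the SCI, this loop times out,
         * which is the case pci_remove_timeout() has to deal with. */
        for (deadline = time(NULL) + 10; time(NULL) < deadline; sleep(1)) {
            if (qmp_cmd(fd, "{\"execute\":\"query-pci\"}", buf, sizeof(buf)))
                break;
            if (!strstr(buf, qdev_id)) {
                puts("device detached");
                return 0;
            }
        }
        fprintf(stderr, "timed out waiting for DM to remove %s\n", qdev_id);
        return 1;
    }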


> Anthony & Wei,
> 
> The commit above basically serializes and synchronizes detaching
> assigned devices and thus increases VM teardown time significantly if
> there are multiple assigned devices. The commit aimed to avoid qemu's
> access to PCI configuration space coinciding with the device reset
> initiated by xl (which is not desired and is exactly the case which
> triggers the assertion in Xen [1]). I personally insist that xl should
> wait for DM's completion of device detaching. Otherwise, besides Xen
> panic (which can be fixed in another way), in theory, such sudden
> unawared device reset might cause a disaster (e.g. data loss for a
> storage device).
> 
> [1]: https://lists.xenproject.org/archives/html/xen-devel/2019-09/msg03287.html
> 
> But considering that fast creation and teardown are an important benefit of
> virtualization, I am not sure how to deal with the situation. Anyway,
> you can make the decision. To fix the regression in VM teardown, we could
> revert the commit by removing the timeout logic.
> 
> What's your opinion?

It's probably a good idea to wait a bit until QEMU has detached the device.
For cases where QEMU will never detach the device (the guest kernel is
shut down), we could reduce the timeout. Following my changes to PCI
passthrough handling in libxl, the timeout is 10s for one device (and
probably 10s for many; I don't think libxl will even ask QEMU to remove
the other devices if the first one times out).

So, maybe we could wait 5s for QEMU to detach the PCI device? Past
that time, it will probably never happen. This still means about +5s
of tear-down compared to previous releases. (Or maybe +5s per device if
we have to do one device at a time.)
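
Concretely, that could be as small as changing the deadline passed when the
removal timeout is registered. A hypothetical sketch (PCI_REMOVE_TIMEOUT_MS
is a made-up name; libxl__ev_time_register_rel() is the existing libxl
event helper assumed here):

    /* use a shorter, removal-specific deadline instead of the generic
     * 10s QMP command timeout */
    #define PCI_REMOVE_TIMEOUT_MS (5 * 1000)    /* 5s, as suggested above */

        rc = libxl__ev_time_register_rel(ao, &prs->timeout,
                                         pci_remove_timeout,
                                         PCI_REMOVE_TIMEOUT_MS);
        if (rc) goto out;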

There are other issues with handling multiple pci passthrough devices,
so I don't have patches yet.

Cheers,
Sander Eikelenboom Oct. 18, 2019, 4:43 p.m. UTC | #9
On 18/10/2019 18:11, Anthony PERARD wrote:
> On Mon, Oct 14, 2019 at 11:03:43PM +0800, Chao Gao wrote:
>> On Thu, Oct 10, 2019 at 06:13:43PM +0200, Sander Eikelenboom wrote:
>>> [test results and bisection snipped]
>>
>> Hi Sander,
>>
>> Thanks for your testing and the bisection.
>>
>> I tried on my machine, the destruction time of a guest with 8 pass-thru
>> devices increased from 4s to 12s after applying the commit above. In my
>> understanding, I guess you might be getting the error message "timed out
>> waiting for DM to remove...". There might be some issues with your assigned
>> devices' drivers. You can first unbind the devices from their drivers in the
>> VM and then tear down the VM, and check whether the VM teardown gets
>> much faster.
> 
> Hi,
> 
> Chao, I think you've tested `xl destroy`, and Sander, I think you are
> speaking about `xl shutdown` or simply powering off a guest. Well, these
> two operations are a bit different: on destroy the guest kernel is
> still running when the PCI devices are being removed, but on shutdown the
> guest kernel is gone.
> 
> I don't think there's anything wrong with QEMU or with the devices; it's
> just that when the toolstack asks QEMU to unplug the PCI device, QEMU
> will ask the guest kernel first. So the guest may never acknowledge the
> removal, and QEMU will not let go of the PCI device. There is actually an
> old Xen commit about that,
> 77fea72b068d25afb7e63947aba32b487d7124a2, and a comment in the code:
>     /* This depends on guest operating system acknowledging the
>      * SCI, if it doesn't respond in time then we may wish to
>      * force the removal. */

Hi Anthony,

Correct, I was referring to the behavior with "xl shutdown".

The above is interesting; my follow-up question would be:
does QEMU know / keep state on whether the guest kernel is gone?
Because if it does, we could make the need to acknowledge removal
conditional on that?

--
Sander



Patch

diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
index 3b31cfd417be..cb849b241581 100644
--- a/tools/libxl/libxl_pci.c
+++ b/tools/libxl/libxl_pci.c
@@ -1778,12 +1778,12 @@  static void pci_remove_qmp_retry_timer_cb(libxl__egc *egc,
     libxl__ev_time *ev, const struct timeval *requested_abs, int rc);
 static void pci_remove_qmp_query_cb(libxl__egc *egc,
     libxl__ev_qmp *qmp, const libxl__json_object *response, int rc);
+static void pci_remove_timeout(libxl__egc *egc,
+    libxl__ev_time *ev, const struct timeval *requested_abs, int rc);
 static void pci_remove_detatched(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 static void pci_remove_stubdom_done(libxl__egc *egc,
     libxl__ao_device *aodev);
-static void pci_remove_timeout(libxl__egc *egc,
-    libxl__ev_time *ev, const struct timeval *requested_abs, int rc);
 static void pci_remove_done(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 
@@ -2044,6 +2044,25 @@  static void pci_remove_qmp_query_cb(libxl__egc *egc,
     pci_remove_detatched(egc, prs, rc); /* must be last */
 }
 
+static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
+                               const struct timeval *requested_abs,
+                               int rc)
+{
+    EGC_GC;
+    pci_remove_state *prs = CONTAINER_OF(ev, *prs, timeout);
+
+    /* Convenience aliases */
+    libxl_device_pci *const pcidev = prs->pcidev;
+
+    LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
+         PCI_PT_QDEV_ID, pcidev->bus, pcidev->dev, pcidev->func);
+
+    /* If we timed out, we might still want to keep destroying the device
+     * (when force==true), so let the next function decide what to do on
+     * error */
+    pci_remove_detatched(egc, prs, rc);
+}
+
 static void pci_remove_detatched(libxl__egc *egc,
                                  pci_remove_state *prs,
                                  int rc)
@@ -2057,6 +2076,11 @@  static void pci_remove_detatched(libxl__egc *egc,
     libxl_device_pci *const pcidev = prs->pcidev;
     libxl_domid domid = prs->domid;
 
+    /* Cleaning QMP states ASAP */
+    libxl__ev_qmp_dispose(gc, &prs->qmp);
+    libxl__ev_time_deregister(gc, &prs->timeout);
+    libxl__ev_time_deregister(gc, &prs->retry_timer);
+
     if (rc && !prs->force)
         goto out;
 
@@ -2104,15 +2128,6 @@  static void pci_remove_stubdom_done(libxl__egc *egc,
     pci_remove_done(egc, prs, 0);
 }
 
-static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
-                               const struct timeval *requested_abs,
-                               int rc)
-{
-    pci_remove_state *prs = CONTAINER_OF(ev, *prs, timeout);
-
-    pci_remove_done(egc, prs, rc);
-}
-
 static void pci_remove_done(libxl__egc *egc,
                             pci_remove_state *prs,
                             int rc)
@@ -2121,10 +2136,6 @@  static void pci_remove_done(libxl__egc *egc,
 
     if (rc) goto out;
 
-    libxl__ev_qmp_dispose(gc, &prs->qmp);
-    libxl__ev_time_deregister(gc, &prs->timeout);
-    libxl__ev_time_deregister(gc, &prs->retry_timer);
-
     libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pcidev);
 out:
     device_pci_remove_common_next(egc, prs, rc);