[v2,2/2] PCI: hv: Propagate coherence from VMbus device to PCI device

Message ID 1648067472-13000-3-git-send-email-mikelley@microsoft.com
State Superseded, archived
Series Fix coherence for VMbus and PCI pass-thru devices in Hyper-V VM

Commit Message

Michael Kelley (LINUX) March 23, 2022, 8:31 p.m. UTC
PCI pass-thru devices in a Hyper-V VM are represented as a VMbus
device and as a PCI device.  The coherence of the VMbus device is
set based on the VMbus node in ACPI, but the PCI device has no
ACPI node and defaults to not hardware coherent.  This results
in extra software coherence management overhead on ARM64 when
devices are hardware coherent.

Fix this by setting up the PCI host bus so that normal
PCI mechanisms will propagate the coherence of the VMbus
device to the PCI device. There's no effect on x86/x64 where
devices are always hardware coherent.

Signed-off-by: Michael Kelley <mikelley@microsoft.com>
---
 drivers/pci/controller/pci-hyperv.c | 9 +++++++++
 1 file changed, 9 insertions(+)
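
For context, the propagation the commit message relies on is the generic
PCI/ACPI DMA configuration done when devices are enumerated on the bus: the
PCI core looks up the host bridge, takes its ACPI companion, and applies that
companion's _CCA-derived coherence attribute to the new device. The sketch
below is a simplified paraphrase of that flow, not verbatim kernel source;
the example_pci_dma_configure() name is only for illustration (the real work
happens in the PCI core's dma_configure path, e.g. pci_dma_configure()).

#include <linux/acpi.h>
#include <linux/pci.h>

/*
 * Simplified sketch: every device enumerated under a host bridge that has
 * an ACPI companion inherits that companion's coherence attribute.
 */
static int example_pci_dma_configure(struct device *dev)
{
	struct device *bridge = pci_get_host_bridge_device(to_pci_dev(dev));
	int ret = 0;

	if (has_acpi_companion(bridge)) {
		struct acpi_device *adev = to_acpi_device_node(bridge->fwnode);

		/*
		 * acpi_get_dma_attr() reports whether the companion (and
		 * hence its children) is hardware coherent, based on _CCA;
		 * acpi_dma_configure() records that on the new device.
		 */
		ret = acpi_dma_configure(dev, acpi_get_dma_attr(adev));
	}

	pci_put_host_bridge_device(bridge);
	return ret;
}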

Comments

Boqun Feng March 24, 2022, 1:09 a.m. UTC | #1
On Wed, Mar 23, 2022 at 01:31:12PM -0700, Michael Kelley wrote:
> PCI pass-thru devices in a Hyper-V VM are represented as a VMBus
> device and as a PCI device.  The coherence of the VMbus device is
> set based on the VMbus node in ACPI, but the PCI device has no
> ACPI node and defaults to not hardware coherent.  This results
> in extra software coherence management overhead on ARM64 when
> devices are hardware coherent.
> 
> Fix this by setting up the PCI host bus so that normal
> PCI mechanisms will propagate the coherence of the VMbus
> device to the PCI device. There's no effect on x86/x64 where
> devices are always hardware coherent.
> 
> Signed-off-by: Michael Kelley <mikelley@microsoft.com>

Acked-by: Boqun Feng <boqun.feng@gmail.com>

Regards,
Boqun

> ---
>  drivers/pci/controller/pci-hyperv.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> index ae0bc2f..88b3b56 100644
> --- a/drivers/pci/controller/pci-hyperv.c
> +++ b/drivers/pci/controller/pci-hyperv.c
> @@ -3404,6 +3404,15 @@ static int hv_pci_probe(struct hv_device *hdev,
>  	hbus->bridge->domain_nr = dom;
>  #ifdef CONFIG_X86
>  	hbus->sysdata.domain = dom;
> +#elif defined(CONFIG_ARM64)
> +	/*
> +	 * Set the PCI bus parent to be the corresponding VMbus
> +	 * device. Then the VMbus device will be assigned as the
> +	 * ACPI companion in pcibios_root_bridge_prepare() and
> +	 * pci_dma_configure() will propagate device coherence
> +	 * information to devices created on the bus.
> +	 */
> +	hbus->sysdata.parent = hdev->device.parent;
>  #endif
>  
>  	hbus->hdev = hdev;
> -- 
> 1.8.3.1
> 
>
Robin Murphy March 24, 2022, 12:23 p.m. UTC | #2
On 2022-03-23 20:31, Michael Kelley wrote:
> PCI pass-thru devices in a Hyper-V VM are represented as a VMBus
> device and as a PCI device.  The coherence of the VMbus device is
> set based on the VMbus node in ACPI, but the PCI device has no
> ACPI node and defaults to not hardware coherent.  This results
> in extra software coherence management overhead on ARM64 when
> devices are hardware coherent.
> 
> Fix this by setting up the PCI host bus so that normal
> PCI mechanisms will propagate the coherence of the VMbus
> device to the PCI device. There's no effect on x86/x64 where
> devices are always hardware coherent.

Honestly, I don't hate this :)

It seems conceptually accurate, as far as I understand, and in 
functional terms I'm starting to think it might even be the most correct 
approach anyway. In the physical world we might be surprised to find the 
PCI side of a host bridge behind anything other than some platform/ACPI 
device representing the other side of a physical host bridge or root 
complex, but who's to say that a paravirtual world can't present a more 
abstract topology? Either way, a one-line way of tying in to the 
standard flow is hard to turn down.

Acked-by: Robin Murphy <robin.murphy@arm.com>

> Signed-off-by: Michael Kelley <mikelley@microsoft.com>
> ---
>   drivers/pci/controller/pci-hyperv.c | 9 +++++++++
>   1 file changed, 9 insertions(+)
> 
> diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> index ae0bc2f..88b3b56 100644
> --- a/drivers/pci/controller/pci-hyperv.c
> +++ b/drivers/pci/controller/pci-hyperv.c
> @@ -3404,6 +3404,15 @@ static int hv_pci_probe(struct hv_device *hdev,
>   	hbus->bridge->domain_nr = dom;
>   #ifdef CONFIG_X86
>   	hbus->sysdata.domain = dom;
> +#elif defined(CONFIG_ARM64)
> +	/*
> +	 * Set the PCI bus parent to be the corresponding VMbus
> +	 * device. Then the VMbus device will be assigned as the
> +	 * ACPI companion in pcibios_root_bridge_prepare() and
> +	 * pci_dma_configure() will propagate device coherence
> +	 * information to devices created on the bus.
> +	 */
> +	hbus->sysdata.parent = hdev->device.parent;
>   #endif
>   
>   	hbus->hdev = hdev;
Robin Murphy March 24, 2022, 12:35 p.m. UTC | #3
On 2022-03-24 12:23, Robin Murphy wrote:
> On 2022-03-23 20:31, Michael Kelley wrote:
>> PCI pass-thru devices in a Hyper-V VM are represented as a VMBus
>> device and as a PCI device.  The coherence of the VMbus device is
>> set based on the VMbus node in ACPI, but the PCI device has no
>> ACPI node and defaults to not hardware coherent.  This results
>> in extra software coherence management overhead on ARM64 when
>> devices are hardware coherent.
>>
>> Fix this by setting up the PCI host bus so that normal
>> PCI mechanisms will propagate the coherence of the VMbus
>> device to the PCI device. There's no effect on x86/x64 where
>> devices are always hardware coherent.
> 
> Honestly, I don't hate this :)
> 
> It seems conceptually accurate, as far as I understand, and in 
> functional terms I'm starting to think it might even be the most correct 
> approach anyway. In the physical world we might be surprised to find the 
> PCI side of a host bridge

And of course by "the PCI side of a host bridge" I think I actually mean 
"a PCI root bus", because in my sloppy terminology I'm thinking about 
hardware bridging from PCI(e) to some SoC-internal protocol, which does 
not have to imply an actual PCI-visible Host Bridge device...

Robin.

> behind anything other than some platform/ACPI 
> device representing the other side of a physical host bridge or root 
> complex, but who's to say that a paravirtual world can't present a more 
> abstract topology? Either way, a one-line way of tying in to the 
> standard flow is hard to turn down.
> 
> Acked-by: Robin Murphy <robin.murphy@arm.com>
> 
>> Signed-off-by: Michael Kelley <mikelley@microsoft.com>
>> ---
>>   drivers/pci/controller/pci-hyperv.c | 9 +++++++++
>>   1 file changed, 9 insertions(+)
>>
>> diff --git a/drivers/pci/controller/pci-hyperv.c 
>> b/drivers/pci/controller/pci-hyperv.c
>> index ae0bc2f..88b3b56 100644
>> --- a/drivers/pci/controller/pci-hyperv.c
>> +++ b/drivers/pci/controller/pci-hyperv.c
>> @@ -3404,6 +3404,15 @@ static int hv_pci_probe(struct hv_device *hdev,
>>       hbus->bridge->domain_nr = dom;
>>   #ifdef CONFIG_X86
>>       hbus->sysdata.domain = dom;
>> +#elif defined(CONFIG_ARM64)
>> +    /*
>> +     * Set the PCI bus parent to be the corresponding VMbus
>> +     * device. Then the VMbus device will be assigned as the
>> +     * ACPI companion in pcibios_root_bridge_prepare() and
>> +     * pci_dma_configure() will propagate device coherence
>> +     * information to devices created on the bus.
>> +     */
>> +    hbus->sysdata.parent = hdev->device.parent;
>>   #endif
>>       hbus->hdev = hdev;

Patch

diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index ae0bc2f..88b3b56 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -3404,6 +3404,15 @@ static int hv_pci_probe(struct hv_device *hdev,
 	hbus->bridge->domain_nr = dom;
 #ifdef CONFIG_X86
 	hbus->sysdata.domain = dom;
+#elif defined(CONFIG_ARM64)
+	/*
+	 * Set the PCI bus parent to be the corresponding VMbus
+	 * device. Then the VMbus device will be assigned as the
+	 * ACPI companion in pcibios_root_bridge_prepare() and
+	 * pci_dma_configure() will propagate device coherence
+	 * information to devices created on the bus.
+	 */
+	hbus->sysdata.parent = hdev->device.parent;
 #endif
 
 	hbus->hdev = hdev;
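
For reference, the one-line change works because on ARM64 this driver's
hbus->sysdata is a struct pci_config_window, and the ARM64 ACPI PCI code
derives the root bridge's ACPI companion from that window's parent device in
pcibios_root_bridge_prepare(). The sketch below is a simplified paraphrase of
that arrangement, not verbatim arch code; the example_root_bridge_prepare()
name is only for illustration, and error handling and NUMA-node setup are
omitted.

#include <linux/acpi.h>
#include <linux/pci.h>
#include <linux/pci-ecam.h>	/* struct pci_config_window */

/*
 * Simplified sketch: the root bridge picks up its ACPI companion from the
 * config window's parent device, which hv_pci_probe() has pointed at
 * hdev->device.parent.
 */
static int example_root_bridge_prepare(struct pci_host_bridge *bridge)
{
	struct pci_config_window *cfg;
	struct acpi_device *adev;

	if (acpi_disabled)
		return 0;

	cfg = bridge->bus->sysdata;		/* &hbus->sysdata on Hyper-V/ARM64 */
	adev = to_acpi_device(cfg->parent);	/* becomes the ACPI companion */

	/*
	 * With the companion set, pci_dma_configure() can propagate the
	 * companion's coherence to every device created on the bus.
	 */
	ACPI_COMPANION_SET(&bridge->dev, adev);
	return 0;
}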