Message ID | 20221018122852.1185395-1-eric.auger@redhat.com (mailing list archive) |
---|---|
State | New, archived |
Series | vhost: Warn if DEVIOTLB_UNMAP is not supported and ats is set |
Hi, Eric, On Tue, Oct 18, 2022 at 02:28:52PM +0200, Eric Auger wrote: > Since b68ba1ca5767 ("memory: Add IOMMU_NOTIFIER_DEVIOTLB_UNMAP > IOMMUTLBNotificationType"), vhost attempts to register DEVIOTLB_UNMAP > notifier. This latter is supported by the intel-iommu which supports > device-iotlb if the corresponding option is set. Then 958ec334bca3 > ("vhost: Unbreak SMMU and virtio-iommu on dev-iotlb support") allowed > silent fallback to the legacy UNMAP notifier if the viommu does not > support device iotlb. > > Initially vhost/viommu integration was introduced with intel iommu > assuming ats=on was set on virtio-pci device and device-iotlb was set > on the intel iommu. vhost acts as an ATS capable device since it > implements an IOTLB on kernel side. However translated transactions > that hit the device IOTLB do not transit through the vIOMMU. So this > requires a limited ATS support on viommu side. > > However, in theory, if ats=on is set on a pci device, the > viommu should support ATS for that device to work. Pure question: what will happen if one ATS supported PCI device got plugged into a system whose physical IOMMU does not support ATS? Will ATS just be ignored and the device keep working simply without ATS? [1] [...] > @@ -760,8 +771,16 @@ static void vhost_iommu_region_add(MemoryListener *listener, > iommu->iommu_offset = section->offset_within_address_space - > section->offset_within_region; > iommu->hdev = dev; > - ret = memory_region_register_iommu_notifier(section->mr, &iommu->n, NULL); > + ret = memory_region_register_iommu_notifier(section->mr, &iommu->n, &err); > if (ret) { > + if (vhost_dev_ats_enabled(dev)) { > + error_reportf_err(err, > + "vhost cannot register DEVIOTLB_UNMAP " > + "although ATS is enabled, " > + "fall back to legacy UNMAP notifier: "); We want to use the warning message to either remind the user to (1) add the dev-iotlb=on parameter for vIOMMU, or (2) drop the ats=on on device. Am I right? 
As we've discussed - I remember Jason used to test with/without dev-iotlb on vhost on Intel and dev-iotlb is faster on vt-d guest driver than without it. So that can make sense to me for (1). I don't know whether it helps for (2) because fundamentally it's the same question as [1] above, and whether that's a legal configuration. Thanks,
Hi Peter, On 10/18/22 16:25, Peter Xu wrote: > Hi, Eric, > > On Tue, Oct 18, 2022 at 02:28:52PM +0200, Eric Auger wrote: >> Since b68ba1ca5767 ("memory: Add IOMMU_NOTIFIER_DEVIOTLB_UNMAP >> IOMMUTLBNotificationType"), vhost attempts to register DEVIOTLB_UNMAP >> notifier. This latter is supported by the intel-iommu which supports >> device-iotlb if the corresponding option is set. Then 958ec334bca3 >> ("vhost: Unbreak SMMU and virtio-iommu on dev-iotlb support") allowed >> silent fallback to the legacy UNMAP notifier if the viommu does not >> support device iotlb. >> >> Initially vhost/viommu integration was introduced with intel iommu >> assuming ats=on was set on virtio-pci device and device-iotlb was set >> on the intel iommu. vhost acts as an ATS capable device since it >> implements an IOTLB on kernel side. However translated transactions >> that hit the device IOTLB do not transit through the vIOMMU. So this >> requires a limited ATS support on viommu side. >> >> However, in theory, if ats=on is set on a pci device, the >> viommu should support ATS for that device to work. > Pure question: what will happen if one ATS supported PCI device got plugged > into a system whose physical IOMMU does not support ATS? Will ATS just be > ignored and the device keep working simply without ATS? Yes that's my understanding: in that case the ATS capable device would work with ats disabled (baremetal case). In the iommu driver you can have a look at the pci_enable_ats() call which is guarded by info->ats_supported for instance on intel iommu. Following that reasoning vhost modality should not be enabled without ATS support on vIOMMU side. But it is. In that sense I may rename the ats_enabled helpers with ats_capable? If I understand correctly setting ats=on exposes the ATS capability ( 615c4ed205 virtio-pci: address space translation service (ATS) support) which is then enabled by the guest driver. > [1] > > [...] 
> >> @@ -760,8 +771,16 @@ static void vhost_iommu_region_add(MemoryListener *listener, >> iommu->iommu_offset = section->offset_within_address_space - >> section->offset_within_region; >> iommu->hdev = dev; >> - ret = memory_region_register_iommu_notifier(section->mr, &iommu->n, NULL); >> + ret = memory_region_register_iommu_notifier(section->mr, &iommu->n, &err); >> if (ret) { >> + if (vhost_dev_ats_enabled(dev)) { >> + error_reportf_err(err, >> + "vhost cannot register DEVIOTLB_UNMAP " >> + "although ATS is enabled, " >> + "fall back to legacy UNMAP notifier: "); > We want to use the warning message to either remind the user to (1) add the > dev-iotlb=on parameter for vIOMMU, or (2) drop the ats=on on device. Am I > right? My focus is to warn the end user there is no device-iotlb support in virtio-iommu or vsmmuv3 but vhost does not really require it. Indeed current users of virtio-iommu/vsmmuv3 seem confused now wrt vhost integration and the lack of device-iotlb option on those viommus. On intel I understand we would like to enforce that ats and dev-iotlb are both set or unset. But this is not really addressed in that series. Indeed vtd_iommu_notify_flag_changed does not reject any registration of IOMMU_NOTIFIER_DEVIOTLB_UNMAP notifier in case it does not support device-iotlb. I think it should. The trouble is vhost_iommu_region_add is not meant to nicely fail. > > As we've discussed - I remember Jason used to test with/without dev-iotlb > on vhost on Intel and dev-iotlb is faster on vt-d guest driver than without It would be nice to have a clarification about this. Indeed [PATCH v3 0/5] memory: Skip assertion in memory_region_unregister_iommu_notifier https://lore.kernel.org/all/20201116165506.31315-1-eperezma@redhat.com/ mostly focused on removing an assertion although one patch mentioned perf improvements. What makes the perf better (fewer device iotlb flushes than general iotlb flushes)? > it. So that can make sense to me for (1). 
> I don't know whether it helps > for (2) because fundamentally it's the same question as [1] above, and > whether that's a legal configuration. > > Thanks, > Adding Jean in the loop too. Thanks, Eric
On Tue, Oct 18, 2022 at 05:08:19PM +0200, Eric Auger wrote: > Hi Peter, > > On 10/18/22 16:25, Peter Xu wrote: > > Hi, Eric, > > > > On Tue, Oct 18, 2022 at 02:28:52PM +0200, Eric Auger wrote: > >> Since b68ba1ca5767 ("memory: Add IOMMU_NOTIFIER_DEVIOTLB_UNMAP > >> IOMMUTLBNotificationType"), vhost attempts to register DEVIOTLB_UNMAP > >> notifier. This latter is supported by the intel-iommu which supports > >> device-iotlb if the corresponding option is set. Then 958ec334bca3 > >> ("vhost: Unbreak SMMU and virtio-iommu on dev-iotlb support") allowed > >> silent fallback to the legacy UNMAP notifier if the viommu does not > >> support device iotlb. > >> > >> Initially vhost/viommu integration was introduced with intel iommu > >> assuming ats=on was set on virtio-pci device and device-iotlb was set > >> on the intel iommu. vhost acts as an ATS capable device since it > >> implements an IOTLB on kernel side. However translated transactions > >> that hit the device IOTLB do not transit through the vIOMMU. So this > >> requires a limited ATS support on viommu side. > >> > >> However, in theory, if ats=on is set on a pci device, the > >> viommu should support ATS for that device to work. > > Pure question: what will happen if one ATS supported PCI device got plugged > > into a system whose physical IOMMU does not support ATS? Will ATS just be > > ignored and the device keep working simply without ATS? > Yes that's my understanding: in that case the ATS capable device would > work with ats disabled (baremetal case). In the iommu driver you can > have a look at the pci_enable_ats() call which is guarded by > info->ats_supported for instance on intel iommu. > > Following that reasoning vhost modality should not be enabled without > ATS support on vIOMMU side. But it is. > > In that sense I may rename the ats_enabled helpers with ats_capable? Sounds good to me. 
> If I understand correctly setting ats=on exposes the ATS capability ( > 615c4ed205 virtio-pci: address space translation service (ATS) support) > which is then enabled by the guest driver. I think it won't, as long as vIOMMU doesn't have DT support declared? > > > [1] > > > > [...] > > > >> @@ -760,8 +771,16 @@ static void vhost_iommu_region_add(MemoryListener *listener, > >> iommu->iommu_offset = section->offset_within_address_space - > >> section->offset_within_region; > >> iommu->hdev = dev; > >> - ret = memory_region_register_iommu_notifier(section->mr, &iommu->n, NULL); > >> + ret = memory_region_register_iommu_notifier(section->mr, &iommu->n, &err); > >> if (ret) { > >> + if (vhost_dev_ats_enabled(dev)) { > >> + error_reportf_err(err, > >> + "vhost cannot register DEVIOTLB_UNMAP " > >> + "although ATS is enabled, " > >> + "fall back to legacy UNMAP notifier: "); > > We want to use the warning message to either remind the user to (1) add the > > dev-iotlb=on parameter for vIOMMU, or (2) drop the ats=on on device. Am I > > right? > My focus is to warn the end user there is no support for device-iotlb > support in virtio-iommu or vsmmuv3 but vhost does not really require > it.Indeed current users of virtio-iommu/vsmmuv3 seem confused now wrt > vhost integration and the lack of device-iotlb option on those viommus. > > On intel I understand we would like to enforce that ats and dev-iotlb > are both set or unset. But this is not really addressed in that series. > Indeed vtd_iommu_notify_flag_changed does not reject any registration of > IOMMU_NOTIFIER_DEVIOTLB_UNMAP notifier in case it does not support > device-iotlb. I think it should. Yes I agree, thanks for finding it. Just posted a patch: https://lore.kernel.org/r/20221018215407.363986-1-peterx@redhat.com > The trouble is vhost_iommu_region_add > is not meant to nicely fail. 
> > > > As we've discussed - I remember Jason used to test with/without dev-iotlb > > on vhost on Intel and dev-iotlb is faster on vt-d guest driver than without > It would be nice to have a clarification about this. Indeed > > [PATCH v3 0/5] memory: Skip assertion in memory_region_unregister_iommu_notifier > https://lore.kernel.org/all/20201116165506.31315-1-eperezma@redhat.com/ > mostly focussed on removing an assertion although one patch mentionned perf improvements. What does make the perf better (less device iotlb flushes than general iotlb flushes?) I'll leave that to Jason. Thanks. > > > it. So that can make sense to me for (1). I don't know whether it helps > > for (2) because fundamentally it's the same question as [1] above, and > > whether that's a legal configuration. > > > > Thanks, > > > Adding jean in the loop too > > Thanks > > Eric >
On 2022/10/19 05:56, Peter Xu wrote: > On Tue, Oct 18, 2022 at 05:08:19PM +0200, Eric Auger wrote: >> Hi Peter, >> >> On 10/18/22 16:25, Peter Xu wrote: >>> Hi, Eric, >>> >>> On Tue, Oct 18, 2022 at 02:28:52PM +0200, Eric Auger wrote: >>>> Since b68ba1ca5767 ("memory: Add IOMMU_NOTIFIER_DEVIOTLB_UNMAP >>>> IOMMUTLBNotificationType"), vhost attempts to register DEVIOTLB_UNMAP >>>> notifier. This latter is supported by the intel-iommu which supports >>>> device-iotlb if the corresponding option is set. Then 958ec334bca3 >>>> ("vhost: Unbreak SMMU and virtio-iommu on dev-iotlb support") allowed >>>> silent fallback to the legacy UNMAP notifier if the viommu does not >>>> support device iotlb. >>>> >>>> Initially vhost/viommu integration was introduced with intel iommu >>>> assuming ats=on was set on virtio-pci device and device-iotlb was set >>>> on the intel iommu. vhost acts as an ATS capable device since it >>>> implements an IOTLB on kernel side. However translated transactions >>>> that hit the device IOTLB do not transit through the vIOMMU. So this >>>> requires a limited ATS support on viommu side. >>>> >>>> However, in theory, if ats=on is set on a pci device, the >>>> viommu should support ATS for that device to work. >>> Pure question: what will happen if one ATS supported PCI device got plugged >>> into a system whose physical IOMMU does not support ATS? Will ATS just be >>> ignored and the device keep working simply without ATS? >> Yes that's my understanding: in that case the ATS capable device would >> work with ats disabled (baremetal case). In the iommu driver you can >> have a look at the pci_enable_ats() call which is guarded by >> info->ats_supported for instance on intel iommu. >> >> Following that reasoning vhost modality should not be enabled without >> ATS support on vIOMMU side. But it is. >> >> In that sense I may rename the ats_enabled helpers with ats_capable? > Sounds good to me. 
> >> If I understand correctly setting ats=on exposes the ATS capability ( >> 615c4ed205 virtio-pci: address space translation service (ATS) support) >> which is then enabled by the guest driver. > I think it won't, as long as vIOMMU doesn't have DT support declared? Yes, it's exposed but not enabled. > >>> [1] >>> >>> [...] >>> >>>> @@ -760,8 +771,16 @@ static void vhost_iommu_region_add(MemoryListener *listener, >>>> iommu->iommu_offset = section->offset_within_address_space - >>>> section->offset_within_region; >>>> iommu->hdev = dev; >>>> - ret = memory_region_register_iommu_notifier(section->mr, &iommu->n, NULL); >>>> + ret = memory_region_register_iommu_notifier(section->mr, &iommu->n, &err); >>>> if (ret) { >>>> + if (vhost_dev_ats_enabled(dev)) { >>>> + error_reportf_err(err, >>>> + "vhost cannot register DEVIOTLB_UNMAP " >>>> + "although ATS is enabled, " >>>> + "fall back to legacy UNMAP notifier: "); >>> We want to use the warning message to either remind the user to (1) add the >>> dev-iotlb=on parameter for vIOMMU, or (2) drop the ats=on on device. Am I >>> right? >> My focus is to warn the end user there is no support for device-iotlb >> support in virtio-iommu or vsmmuv3 but vhost does not really require >> it.Indeed current users of virtio-iommu/vsmmuv3 seem confused now wrt >> vhost integration and the lack of device-iotlb option on those viommus. >> >> On intel I understand we would like to enforce that ats and dev-iotlb >> are both set or unset. But this is not really addressed in that series. >> Indeed vtd_iommu_notify_flag_changed does not reject any registration of >> IOMMU_NOTIFIER_DEVIOTLB_UNMAP notifier in case it does not support >> device-iotlb. I think it should. > Yes I agree, thanks for finding it. Just posted a patch: > > https://lore.kernel.org/r/20221018215407.363986-1-peterx@redhat.com > >> The trouble is vhost_iommu_region_add >> is not meant to nicely fail. 
>>> As we've discussed - I remember Jason used to test with/without dev-iotlb >>> on vhost on Intel and dev-iotlb is faster on vt-d guest driver than without >> It would be nice to have a clarification about this. Indeed >> >> [PATCH v3 0/5] memory: Skip assertion in memory_region_unregister_iommu_notifier >> https://lore.kernel.org/all/20201116165506.31315-1-eperezma@redhat.com/ >> mostly focussed on removing an assertion although one patch mentionned perf improvements. What does make the perf better (less device iotlb flushes than general iotlb flushes?) > I'll leave that to Jason. Thanks. If you mean f7701e2c7983b680790af47117577b285b6a1aed ("intel_iommu: Skip page walking on device iotlb invalidations"), it should help in the case of domain or global invalidation. But it really depends on whether or not the guest is using those. Regarding the perf numbers, it might be tricky since: 1) the invalidation was batched or delayed, which should be rare or at least much less frequent 2) measuring with TCP might be impacted by a lot of other factors (you can see UDP_STREAM doesn't give obvious improvement). Thanks > >>> it. So that can make sense to me for (1). I don't know whether it helps >>> for (2) because fundamentally it's the same question as [1] above, and >>> whether that's a legal configuration. >>> >>> Thanks, >>> >> Adding jean in the loop too >> >> Thanks >> >> Eric >>
Hi Peter, On 10/18/22 23:56, Peter Xu wrote: > On Tue, Oct 18, 2022 at 05:08:19PM +0200, Eric Auger wrote: >> Hi Peter, >> >> On 10/18/22 16:25, Peter Xu wrote: >>> Hi, Eric, >>> >>> On Tue, Oct 18, 2022 at 02:28:52PM +0200, Eric Auger wrote: >>>> Since b68ba1ca5767 ("memory: Add IOMMU_NOTIFIER_DEVIOTLB_UNMAP >>>> IOMMUTLBNotificationType"), vhost attempts to register DEVIOTLB_UNMAP >>>> notifier. This latter is supported by the intel-iommu which supports >>>> device-iotlb if the corresponding option is set. Then 958ec334bca3 >>>> ("vhost: Unbreak SMMU and virtio-iommu on dev-iotlb support") allowed >>>> silent fallback to the legacy UNMAP notifier if the viommu does not >>>> support device iotlb. >>>> >>>> Initially vhost/viommu integration was introduced with intel iommu >>>> assuming ats=on was set on virtio-pci device and device-iotlb was set >>>> on the intel iommu. vhost acts as an ATS capable device since it >>>> implements an IOTLB on kernel side. However translated transactions >>>> that hit the device IOTLB do not transit through the vIOMMU. So this >>>> requires a limited ATS support on viommu side. >>>> >>>> However, in theory, if ats=on is set on a pci device, the >>>> viommu should support ATS for that device to work. >>> Pure question: what will happen if one ATS supported PCI device got plugged >>> into a system whose physical IOMMU does not support ATS? Will ATS just be >>> ignored and the device keep working simply without ATS? >> Yes that's my understanding: in that case the ATS capable device would >> work with ats disabled (baremetal case). In the iommu driver you can >> have a look at the pci_enable_ats() call which is guarded by >> info->ats_supported for instance on intel iommu. >> >> Following that reasoning vhost modality should not be enabled without >> ATS support on vIOMMU side. But it is. >> >> In that sense I may rename the ats_enabled helpers with ats_capable? > Sounds good to me. 
OK > >> If I understand correctly setting ats=on exposes the ATS capability ( >> 615c4ed205 virtio-pci: address space translation service (ATS) support) >> which is then enabled by the guest driver. > I think it won't, as long as vIOMMU doesn't have DT support declared? That's my assumption too > >>> [1] >>> >>> [...] >>> >>>> @@ -760,8 +771,16 @@ static void vhost_iommu_region_add(MemoryListener *listener, >>>> iommu->iommu_offset = section->offset_within_address_space - >>>> section->offset_within_region; >>>> iommu->hdev = dev; >>>> - ret = memory_region_register_iommu_notifier(section->mr, &iommu->n, NULL); >>>> + ret = memory_region_register_iommu_notifier(section->mr, &iommu->n, &err); >>>> if (ret) { >>>> + if (vhost_dev_ats_enabled(dev)) { >>>> + error_reportf_err(err, >>>> + "vhost cannot register DEVIOTLB_UNMAP " >>>> + "although ATS is enabled, " >>>> + "fall back to legacy UNMAP notifier: "); >>> We want to use the warning message to either remind the user to (1) add the >>> dev-iotlb=on parameter for vIOMMU, or (2) drop the ats=on on device. Am I >>> right? >> My focus is to warn the end user there is no support for device-iotlb >> support in virtio-iommu or vsmmuv3 but vhost does not really require >> it.Indeed current users of virtio-iommu/vsmmuv3 seem confused now wrt >> vhost integration and the lack of device-iotlb option on those viommus. >> >> On intel I understand we would like to enforce that ats and dev-iotlb >> are both set or unset. But this is not really addressed in that series. >> Indeed vtd_iommu_notify_flag_changed does not reject any registration of >> IOMMU_NOTIFIER_DEVIOTLB_UNMAP notifier in case it does not support >> device-iotlb. I think it should. > Yes I agree, thanks for finding it. Just posted a patch: > > https://lore.kernel.org/r/20221018215407.363986-1-peterx@redhat.com OK thanks > >> The trouble is vhost_iommu_region_add >> is not meant to nicely fail. 
>>> As we've discussed - I remember Jason used to test with/without dev-iotlb >>> on vhost on Intel and dev-iotlb is faster on vt-d guest driver than without >> It would be nice to have a clarification about this. Indeed >> >> [PATCH v3 0/5] memory: Skip assertion in memory_region_unregister_iommu_notifier >> https://lore.kernel.org/all/20201116165506.31315-1-eperezma@redhat.com/ >> mostly focussed on removing an assertion although one patch mentionned perf improvements. What does make the perf better (less device iotlb flushes than general iotlb flushes?) > I'll leave that to Jason. Thanks. OK thanks Eric > >>> it. So that can make sense to me for (1). I don't know whether it helps >>> for (2) because fundamentally it's the same question as [1] above, and >>> whether that's a legal configuration. >>> >>> Thanks, >>> >> Adding jean in the loop too >> >> Thanks >> >> Eric >>
diff --git a/include/hw/virtio/virtio-bus.h b/include/hw/virtio/virtio-bus.h
index 7ab8c9dab0..df96cf5b4d 100644
--- a/include/hw/virtio/virtio-bus.h
+++ b/include/hw/virtio/virtio-bus.h
@@ -94,6 +94,7 @@ struct VirtioBusClass {
     bool has_variable_vring_alignment;
     AddressSpace *(*get_dma_as)(DeviceState *d);
     bool (*iommu_enabled)(DeviceState *d);
+    bool (*ats_enabled)(DeviceState *d);
 };
 
 struct VirtioBusState {
@@ -157,4 +158,6 @@ int virtio_bus_set_host_notifier(VirtioBusState *bus, int n, bool assign);
 void virtio_bus_cleanup_host_notifier(VirtioBusState *bus, int n);
 /* Whether the IOMMU is enabled for this device */
 bool virtio_bus_device_iommu_enabled(VirtIODevice *vdev);
+/* Whether ATS is enabled for this device */
+bool virtio_bus_device_ats_enabled(VirtIODevice *vdev);
 #endif /* VIRTIO_BUS_H */
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 5185c15295..b5ad4262c1 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -324,6 +324,16 @@ static bool vhost_dev_has_iommu(struct vhost_dev *dev)
     }
 }
 
+static bool vhost_dev_ats_enabled(struct vhost_dev *dev)
+{
+    VirtIODevice *vdev = dev->vdev;
+
+    if (vdev && virtio_bus_device_ats_enabled(vdev)) {
+        return true;
+    }
+    return false;
+}
+
 static void *vhost_memory_map(struct vhost_dev *dev, hwaddr addr,
                               hwaddr *plen, bool is_write)
 {
@@ -737,6 +747,7 @@ static void vhost_iommu_region_add(MemoryListener *listener,
     Int128 end;
     int iommu_idx;
     IOMMUMemoryRegion *iommu_mr;
+    Error *err = NULL;
     int ret;
 
     if (!memory_region_is_iommu(section->mr)) {
@@ -760,8 +771,16 @@ static void vhost_iommu_region_add(MemoryListener *listener,
     iommu->iommu_offset = section->offset_within_address_space -
                           section->offset_within_region;
     iommu->hdev = dev;
-    ret = memory_region_register_iommu_notifier(section->mr, &iommu->n, NULL);
+    ret = memory_region_register_iommu_notifier(section->mr, &iommu->n, &err);
     if (ret) {
+        if (vhost_dev_ats_enabled(dev)) {
+            error_reportf_err(err,
+                              "vhost cannot register DEVIOTLB_UNMAP "
+                              "although ATS is enabled, "
+                              "fall back to legacy UNMAP notifier: ");
+        } else {
+            error_free(err);
+        }
         /*
          * Some vIOMMUs do not support dev-iotlb yet. If so, try to use the
          * UNMAP legacy message
diff --git a/hw/virtio/virtio-bus.c b/hw/virtio/virtio-bus.c
index 896feb37a1..6276779b07 100644
--- a/hw/virtio/virtio-bus.c
+++ b/hw/virtio/virtio-bus.c
@@ -348,6 +348,20 @@ bool virtio_bus_device_iommu_enabled(VirtIODevice *vdev)
     return klass->iommu_enabled(qbus->parent);
 }
 
+bool virtio_bus_device_ats_enabled(VirtIODevice *vdev)
+{
+    DeviceState *qdev = DEVICE(vdev);
+    BusState *qbus = BUS(qdev_get_parent_bus(qdev));
+    VirtioBusState *bus = VIRTIO_BUS(qbus);
+    VirtioBusClass *klass = VIRTIO_BUS_GET_CLASS(bus);
+
+    if (!klass->ats_enabled) {
+        return false;
+    }
+
+    return klass->ats_enabled(qbus->parent);
+}
+
 static void virtio_bus_class_init(ObjectClass *klass, void *data)
 {
     BusClass *bus_class = BUS_CLASS(klass);
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index e7d80242b7..d33a47abbf 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -1141,6 +1141,16 @@ static bool virtio_pci_iommu_enabled(DeviceState *d)
     return true;
 }
 
+static bool virtio_pci_ats_enabled(DeviceState *d)
+{
+    VirtIOPCIProxy *proxy = VIRTIO_PCI(d);
+
+    if (proxy->flags & VIRTIO_PCI_FLAG_ATS) {
+        return true;
+    }
+    return false;
+}
+
 static bool virtio_pci_queue_enabled(DeviceState *d, int n)
 {
     VirtIOPCIProxy *proxy = VIRTIO_PCI(d);
@@ -2229,6 +2239,7 @@ static void virtio_pci_bus_class_init(ObjectClass *klass, void *data)
     k->ioeventfd_assign = virtio_pci_ioeventfd_assign;
     k->get_dma_as = virtio_pci_get_dma_as;
     k->iommu_enabled = virtio_pci_iommu_enabled;
+    k->ats_enabled = virtio_pci_ats_enabled;
     k->queue_enabled = virtio_pci_queue_enabled;
 }
Since b68ba1ca5767 ("memory: Add IOMMU_NOTIFIER_DEVIOTLB_UNMAP IOMMUTLBNotificationType"), vhost attempts to register a DEVIOTLB_UNMAP notifier. The latter is supported by intel-iommu, which supports device-iotlb if the corresponding option is set. Then 958ec334bca3 ("vhost: Unbreak SMMU and virtio-iommu on dev-iotlb support") allowed silent fallback to the legacy UNMAP notifier if the viommu does not support device iotlb.

Initially the vhost/viommu integration was introduced with the intel iommu, assuming ats=on was set on the virtio-pci device and device-iotlb was set on the intel iommu. vhost acts as an ATS-capable device since it implements an IOTLB on the kernel side. However, translated transactions that hit the device IOTLB do not transit through the vIOMMU, so this only requires limited ATS support on the viommu side.

However, in theory, if ats=on is set on a pci device, the viommu should support ATS for that device to work. But neither SMMUv3 nor virtio-iommu supports ATS, and the integration with vhost just relies on the fact that those vIOMMUs send UNMAP notifications whenever the guest triggers them. So the situation with respect to the ats setting on the virtio-pci side and ATS support on the viommu side has become quite confusing, especially on ARM: is ats needed when the vIOMMU does not support device IOTLB?

This patch makes sure we get a warning if ats is set on a device protected by virtio-iommu or vsmmuv3, reminding users that ATS is not fully supported on those vIOMMUs and that setting ats on the endpoint is not a requirement.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
---
 include/hw/virtio/virtio-bus.h |  3 +++
 hw/virtio/vhost.c              | 21 ++++++++++++++++++++-
 hw/virtio/virtio-bus.c         | 14 ++++++++++++++
 hw/virtio/virtio-pci.c         | 11 +++++++++++
 4 files changed, 48 insertions(+), 1 deletion(-)