Message ID | 1510622154-17224-2-git-send-email-zhuyijun@huawei.com (mailing list archive)
---|---
State | New, archived |
On Tue, 14 Nov 2017 09:15:50 +0800
<zhuyijun@huawei.com> wrote:

> From: Zhu Yijun <zhuyijun@huawei.com>
>
> With kernel 4.11, iommu/smmu will populate the MSI IOVA reserved window
> and the PCI reserved window, which have to be excluded from guest IOVA
> allocations.
>
> However, if a reserved region falls within QEMU's default virtual memory
> address space, it may get allocated for a guest VF DMA IOVA, and the
> mapping will fail.
>
> So this patch fetches those reserved regions, and the next patch creates
> corresponding holes in the QEMU RAM address space.
>
> Signed-off-by: Zhu Yijun <zhuyijun@huawei.com>
> ---
>  hw/vfio/common.c              | 67 +++++++++++++++++++++++++++++++++++++++++++
>  hw/vfio/pci.c                 |  2 ++
>  hw/vfio/platform.c            |  2 ++
>  include/exec/memory.h         |  7 +++++
>  include/hw/vfio/vfio-common.h |  3 ++
>  5 files changed, 81 insertions(+)

I generally prefer the vfio interface to be more self-sufficient; if
there are regions the IOMMU cannot map, we should be describing those via
capabilities on the container through the vfio interface. If we're just
scraping together things from sysfs, the user can just as easily do that
and provide an explicit memory map for the VM taking the devices into
account. Of course, having a device associated with a restricted memory
map that conflicts with the default memory map for the VM implies that
you can never support hot-add of such devices.

Please cc me on vfio related patches. Thanks,

Alex
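For context, the sysfs interface the patch scrapes is per IOMMU group.
Since kernel 4.11, iommu_group_show_resv_regions() prints one region per
line as "<start> <end> <type>" (using the format "0x%016llx 0x%016llx %s"),
where the type is one of "direct", "msi", or "reserved". On an ARM SMMU
host the file might look roughly like this; the group number and addresses
below are illustrative:

    # cat /sys/kernel/iommu_groups/<group>/reserved_regions
    0x0000000008000000 0x00000000080fffff msi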
Hi Alex,

> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Tuesday, November 14, 2017 3:48 PM
> To: Zhuyijun <zhuyijun@huawei.com>
> Cc: qemu-arm@nongnu.org; qemu-devel@nongnu.org; eric.auger@redhat.com;
> peter.maydell@linaro.org; Shameerali Kolothum Thodi
> <shameerali.kolothum.thodi@huawei.com>; Zhaoshenglong
> <zhaoshenglong@huawei.com>
> Subject: Re: [Qemu-devel] [RFC 1/5] hw/vfio: Add function for getting
> reserved_region of device iommu group
>
[...]
> I generally prefer the vfio interface to be more self-sufficient; if
> there are regions the IOMMU cannot map, we should be describing those
> via capabilities on the container through the vfio interface. If we're
> just scraping together things from sysfs, the user can just as easily
> do that and provide an explicit memory map for the VM taking the
> devices into account.

Ok. I was under the impression that the purpose of introducing
/sys/kernel/iommu_groups/<n>/reserved_regions was to get the IOVA regions
that are reserved (MSI or non-mappable) for QEMU or other apps to make
use of. I think this was introduced as part of the "KVM/MSI passthrough
support on ARM" patch series. And if I remember correctly, Eric had an
approach where user space could retrieve all the reserved regions through
the VFIO_IOMMU_GET_INFO ioctl; later this idea was replaced with the
sysfs interface.

Maybe I am missing something here.

> Of course, having a device associated with a restricted memory map that
> conflicts with the default memory map for the VM implies that you can
> never support hot-add of such devices.

True. Hot-add and migration will have issues on these platforms.

Thanks,
Shameer
On Wed, 15 Nov 2017 09:49:41 +0000
Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com> wrote:

> Hi Alex,
>
[...]
> Ok. I was under the impression that the purpose of introducing
> /sys/kernel/iommu_groups/<n>/reserved_regions was to get the IOVA
> regions that are reserved (MSI or non-mappable) for QEMU or other apps
> to make use of. I think this was introduced as part of the "KVM/MSI
> passthrough support on ARM" patch series. And if I remember correctly,
> Eric had an approach where user space could retrieve all the reserved
> regions through the VFIO_IOMMU_GET_INFO ioctl; later this idea was
> replaced with the sysfs interface.
>
> Maybe I am missing something here.

And sysfs is a good interface if the user wants to use it to configure
the VM in a way that's compatible with a device. For instance, in your
case, a user could evaluate these reserved regions across all devices in
a system, or even across an entire cluster, and instantiate the VM with a
memory map compatible with hotplugging any of those evaluated devices
(the QEMU implementation allowing the user to do this is TBD). Having the
vfio device evaluate these reserved regions only helps in the cold-plug
case. So the proposed solution is limited in scope and doesn't address
similar needs on other platforms. There is value in verifying that a
device's IOVA space is compatible with a VM memory map, and in modifying
the memory map on cold-plug or rejecting the device on hot-plug, but
isn't that why we have an ioctl within vfio to expose information about
the IOMMU? Why take the path of allowing QEMU to rummage through sysfs
files outside of vfio, implying additional security and access concerns,
rather than filling the gap within the vfio API? Thanks,

Alex
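The ioctl Alex points to already existed at the time, though it only
reported the supported page-size bitmap. A minimal user-space sketch of
querying it, assuming `container` is an fd for an already-initialized
VFIO container (error handling abbreviated):

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Query the type1 IOMMU info for an initialized container fd. */
    static int query_iommu_info(int container)
    {
        struct vfio_iommu_type1_info info = { .argsz = sizeof(info) };

        if (ioctl(container, VFIO_IOMMU_GET_INFO, &info))
            return -1;

        if (info.flags & VFIO_IOMMU_INFO_PGSIZES)
            printf("IOMMU page size bitmap: 0x%llx\n",
                   (unsigned long long)info.iova_pgsizes);
        return 0;
    }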
> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Wednesday, November 15, 2017 6:25 PM
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> Cc: Zhuyijun <zhuyijun@huawei.com>; qemu-arm@nongnu.org;
> qemu-devel@nongnu.org; eric.auger@redhat.com; peter.maydell@linaro.org;
> Zhaoshenglong <zhaoshenglong@huawei.com>
> Subject: Re: [Qemu-devel] [RFC 1/5] hw/vfio: Add function for getting
> reserved_region of device iommu group
>
[...]
> And sysfs is a good interface if the user wants to use it to configure
> the VM in a way that's compatible with a device. For instance, in your
> case, a user could evaluate these reserved regions across all devices
> in a system, or even across an entire cluster, and instantiate the VM
> with a memory map compatible with hotplugging any of those evaluated
> devices (the QEMU implementation allowing the user to do this is TBD).
> Having the vfio device evaluate these reserved regions only helps in
> the cold-plug case. So the proposed solution is limited in scope and
> doesn't address similar needs on other platforms. There is value in
> verifying that a device's IOVA space is compatible with a VM memory
> map, and in modifying the memory map on cold-plug or rejecting the
> device on hot-plug, but isn't that why we have an ioctl within vfio to
> expose information about the IOMMU? Why take the path of allowing QEMU
> to rummage through sysfs files outside of vfio, implying additional
> security and access concerns, rather than filling the gap within the
> vfio API?

Thanks Alex for the explanation.

I came across this patch[1] from Eric where he introduced the ioctl
interface to retrieve the reserved regions. It looks like this can be
reworked to accommodate the above requirement.

Hi Eric, please let us know if you have any plans to respin this patch;
otherwise we can take a look at it and do the rework, if that is OK.

Thanks,
Shameer

1. https://lists.linuxfoundation.org/pipermail/iommu/2016-November/019002.html
On Mon, 20 Nov 2017 11:58:43 +0000
Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com> wrote:

[...]
> Thanks Alex for the explanation.
>
> I came across this patch[1] from Eric where he introduced the ioctl
> interface to retrieve the reserved regions. It looks like this can be
> reworked to accommodate the above requirement.

I don't think we need a new ioctl for this, nor do I think that
describing the holes is the correct approach. The existing
VFIO_IOMMU_GET_INFO ioctl can be extended to support capability chains,
as we've done for VFIO_DEVICE_GET_REGION_INFO. IMO, we should try to
describe the fixed IOVA regions which are available for mapping rather
than describing the various holes within the address space which are
unavailable. The latter method always fails to describe the end of the
mappable IOVA space and gets bogged down in trying to classify the types
of holes that might exist. Thanks,

Alex
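For illustration, here is a sketch of how user space would consume such a
capability chain. The vfio_info_cap_header layout is the real one already
used by VFIO_DEVICE_GET_REGION_INFO; the CAPS flag, the cap_offset field,
the capability ID, and the payload layout below are assumptions modeled
on the region-info chain, since no such extension of VFIO_IOMMU_GET_INFO
had been merged at the time of this thread:

    #include <stdio.h>
    #include <linux/vfio.h>   /* struct vfio_info_cap_header */

    /* Assumed values, modeled on the region-info capability chain: */
    #define VFIO_IOMMU_INFO_CAPS                 (1 << 1)
    #define VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE 1

    struct iommu_info_caps {        /* assumed extended info layout */
        __u32 argsz;
        __u32 flags;
        __u64 iova_pgsizes;
        __u32 cap_offset;           /* offset of first cap header */
    };

    struct cap_iova_range {         /* assumed capability payload */
        struct vfio_info_cap_header header;
        __u64 start;
        __u64 end;
    };

    /* Walk the chain and print each usable fixed IOVA region. */
    static void walk_iova_caps(struct iommu_info_caps *info)
    {
        __u32 off = (info->flags & VFIO_IOMMU_INFO_CAPS) ?
                    info->cap_offset : 0;

        while (off) {
            struct vfio_info_cap_header *hdr =
                (struct vfio_info_cap_header *)((char *)info + off);

            if (hdr->id == VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE) {
                struct cap_iova_range *r = (struct cap_iova_range *)hdr;
                printf("usable IOVA: 0x%llx-0x%llx\n",
                       (unsigned long long)r->start,
                       (unsigned long long)r->end);
            }
            off = hdr->next;        /* offset 0 terminates the chain */
        }
    }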
> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Monday, November 20, 2017 3:58 PM
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> Cc: eric.auger@redhat.com; Zhuyijun <zhuyijun@huawei.com>;
> qemu-arm@nongnu.org; qemu-devel@nongnu.org; peter.maydell@linaro.org;
> Zhaoshenglong <zhaoshenglong@huawei.com>; Linuxarm <linuxarm@huawei.com>
> Subject: Re: [Qemu-devel] [RFC 1/5] hw/vfio: Add function for getting
> reserved_region of device iommu group
>
[...]
> I don't think we need a new ioctl for this, nor do I think that
> describing the holes is the correct approach. The existing
> VFIO_IOMMU_GET_INFO ioctl can be extended to support capability chains,
> as we've done for VFIO_DEVICE_GET_REGION_INFO.

Right, as far as I can see, the above-mentioned patch is doing exactly
that: extending the VFIO_IOMMU_GET_INFO ioctl with a capability chain.

> IMO, we should try to describe the fixed IOVA regions which are
> available for mapping rather than describing the various holes within
> the address space which are unavailable. The latter method always fails
> to describe the end of the mappable IOVA space and gets bogged down in
> trying to classify the types of holes that might exist. Thanks,

Ok. I guess that is to take care of the IOMMU's maximum address width.

Thanks,
Shameer
Hi Alex,

> -----Original Message-----
> From: Shameerali Kolothum Thodi
> Sent: Monday, November 20, 2017 4:31 PM
> To: 'Alex Williamson' <alex.williamson@redhat.com>
> Cc: eric.auger@redhat.com; Zhuyijun <zhuyijun@huawei.com>;
> qemu-arm@nongnu.org; qemu-devel@nongnu.org; peter.maydell@linaro.org;
> Zhaoshenglong <zhaoshenglong@huawei.com>; Linuxarm <linuxarm@huawei.com>
> Subject: RE: [Qemu-devel] [RFC 1/5] hw/vfio: Add function for getting
> reserved_region of device iommu group
[...]
> > IMO, we should try to describe the fixed IOVA regions which are
> > available for mapping rather than describing the various holes within
> > the address space which are unavailable. The latter method always
> > fails to describe the end of the mappable IOVA space and gets bogged
> > down in trying to classify the types of holes that might exist.

I was going through this and noticed that it is possible to have multiple
iommu domains associated with a container. If that's true, is it safe to
assume that all the domains will have the same IOVA geometry?

Thanks,
Shameer
Hi Shameer,

On 06/12/17 11:30, Shameerali Kolothum Thodi wrote:
[...]
> I was going through this and noticed that it is possible to have
> multiple iommu domains associated with a container. If that's true, is
> it safe to assume that all the domains will have the same IOVA
> geometry?

To me the answer is no.

There are several iommu domains "underneath" a single container. You
attach vfio groups to a container; each of them is associated with an
iommu group and an iommu domain. See vfio_iommu_type1_attach_group().

Besides, the reserved regions are per iommu group.

Thanks

Eric
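A minimal kernel-side sketch of the per-group collection Eric describes,
using the existing iommu_get_group_resv_regions() helper and assuming a
vfio_domain with a group_list as in drivers/vfio/vfio_iommu_type1.c
(locking, error handling, and freeing the merged list omitted):

    struct iommu_resv_region *resv;
    struct vfio_group *g;
    LIST_HEAD(resv_regions);

    /* Reserved regions are tracked per iommu group, so merge the
     * lists of every group attached to this vfio domain. */
    list_for_each_entry(g, &domain->group_list, next)
        iommu_get_group_resv_regions(g->iommu_group, &resv_regions);

    list_for_each_entry(resv, &resv_regions, list)
        pr_info("reserved: %pa + 0x%zx, type %d\n",
                &resv->start, resv->length, resv->type);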
Hi Eric,

> -----Original Message-----
> From: Auger Eric [mailto:eric.auger@redhat.com]
> Sent: Wednesday, December 06, 2017 2:01 PM
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>;
> Alex Williamson <alex.williamson@redhat.com>
> Cc: peter.maydell@linaro.org; qemu-devel@nongnu.org; Linuxarm
> <linuxarm@huawei.com>; qemu-arm@nongnu.org; Zhaoshenglong
> <zhaoshenglong@huawei.com>; Zhuyijun <zhuyijun@huawei.com>
> Subject: Re: [Qemu-devel] [RFC 1/5] hw/vfio: Add function for getting
> reserved_region of device iommu group
>
[...]
> To me the answer is no.
>
> There are several iommu domains "underneath" a single container. You
> attach vfio groups to a container; each of them is associated with an
> iommu group and an iommu domain. See vfio_iommu_type1_attach_group().
>
> Besides, the reserved regions are per iommu group.

Thanks for your reply. Yes, a container can have multiple groups (hence
multiple iommu domains), and reserved regions are per group. Hence, while
deciding the default supported IOVA geometry, we have to go through all
the domains in the container and select the smallest aperture as the
supported default IOVA range.

Please find below a snippet from a patch I am working on. Please let me
know your thoughts on this.

Thanks,
Shameer

-- >8 --
+static int vfio_build_iommu_iova_caps(struct vfio_iommu *iommu,
+                                      struct vfio_info_cap *caps)
+{
+        struct iommu_resv_region *resv, *resv_next;
+        struct vfio_iommu_iova *iova, *iova_next;
+        struct list_head group_resv_regions, vfio_iova_regions;
+        struct vfio_domain *domain;
+        struct vfio_group *g;
+        phys_addr_t start, end;
+        int ret = 0;
+
+        domain = list_first_entry(&iommu->domain_list,
+                                  struct vfio_domain, next);
+        /* Get the default iova range supported */
+        start = domain->domain->geometry.aperture_start;
+        end = domain->domain->geometry.aperture_end;

This is where I am confused. I think instead I should go over the
domain_list and select the smallest aperture as the default iova range.

+        INIT_LIST_HEAD(&vfio_iova_regions);
+        vfio_insert_iova(start, end, &vfio_iova_regions);
+
+        /* Get reserved regions if any */
+        INIT_LIST_HEAD(&group_resv_regions);
+        list_for_each_entry(g, &domain->group_list, next)
+                iommu_get_group_resv_regions(g->iommu_group,
+                                             &group_resv_regions);
+        list_sort(NULL, &group_resv_regions, vfio_resv_cmp);
+
+        /* Update iova range excluding reserved regions */
...
-- >8 --
Hi Shameer,

On 06/12/17 15:38, Shameerali Kolothum Thodi wrote:
[...]
> Thanks for your reply. Yes, a container can have multiple groups (hence
> multiple iommu domains), and reserved regions are per group. Hence,
> while deciding the default supported IOVA geometry, we have to go
> through all the domains in the container and select the smallest
> aperture as the supported default IOVA range.
>
> Please find below a snippet from a patch I am working on. Please let me
> know your thoughts on this.
>
[...]
> +        domain = list_first_entry(&iommu->domain_list,
> +                                  struct vfio_domain, next);
> +        /* Get the default iova range supported */
> +        start = domain->domain->geometry.aperture_start;
> +        end = domain->domain->geometry.aperture_end;
>
> This is where I am confused. I think instead I should go over the
> domain_list and select the smallest aperture as the default iova range.

Yes, that's correct. I just want to warn you that Pierre is working on
the same topic; it may be worth syncing together:

[PATCH] vfio/iommu_type1: report the IOMMU aperture info
(https://patchwork.kernel.org/patch/10084655/)

I think he plans to rework his series with a capability chain too.

Thanks

Eric
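A sketch of the aperture intersection being agreed on here, assuming the
vfio_iommu/vfio_domain internals of drivers/vfio/vfio_iommu_type1.c from
that era (variable names are illustrative):

    struct vfio_domain *d;
    dma_addr_t start = 0;
    dma_addr_t end = ~(dma_addr_t)0;

    /* The container's usable IOVA window is the intersection of the
     * apertures of every iommu domain attached to it. */
    list_for_each_entry(d, &iommu->domain_list, next) {
        start = max(start, d->domain->geometry.aperture_start);
        end = min(end, d->domain->geometry.aperture_end);
    }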
> -----Original Message-----
> From: Auger Eric [mailto:eric.auger@redhat.com]
> Sent: Wednesday, December 06, 2017 3:00 PM
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>;
> Alex Williamson <alex.williamson@redhat.com>
> Cc: peter.maydell@linaro.org; qemu-devel@nongnu.org; Linuxarm
> <linuxarm@huawei.com>; qemu-arm@nongnu.org; Zhaoshenglong
> <zhaoshenglong@huawei.com>; Zhuyijun <zhuyijun@huawei.com>
> Subject: Re: [Qemu-devel] [RFC 1/5] hw/vfio: Add function for getting
> reserved_region of device iommu group
>
[...]
> > This is where I am confused. I think instead I should go over the
> > domain_list and select the smallest aperture as the default iova
> > range.
>
> Yes, that's correct. I just want to warn you that Pierre is working on
> the same topic; it may be worth syncing together:
>
> [PATCH] vfio/iommu_type1: report the IOMMU aperture info
> (https://patchwork.kernel.org/patch/10084655/)
>
> I think he plans to rework his series with a capability chain too.

Thanks Eric for the pointer. I will send out my patch as an RFC then and
take it from there (without the default IOVA aperture changes, as I can
see there are some discussions in Pierre's patch to solve that).

Thanks,
Shameer
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 7b2924c..01bdbbd 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -40,6 +40,8 @@
 struct vfio_group_head vfio_group_list =
     QLIST_HEAD_INITIALIZER(vfio_group_list);
 struct vfio_as_head vfio_address_spaces =
     QLIST_HEAD_INITIALIZER(vfio_address_spaces);
+struct reserved_ram_head reserved_ram_regions =
+    QLIST_HEAD_INITIALIZER(reserved_ram_regions);
 
 #ifdef CONFIG_KVM
 /*
@@ -52,6 +54,71 @@ struct vfio_as_head vfio_address_spaces =
 static int vfio_kvm_device_fd = -1;
 #endif
 
+void update_reserved_regions(hwaddr addr, hwaddr size)
+{
+    RAMRegion *reg, *new;
+
+    new = g_new(RAMRegion, 1);
+    new->base = addr;
+    new->size = size;
+
+    if (QLIST_EMPTY(&reserved_ram_regions)) {
+        QLIST_INSERT_HEAD(&reserved_ram_regions, new, next);
+        return;
+    }
+
+    /* make the base of reserved_ram_regions increase */
+    QLIST_FOREACH(reg, &reserved_ram_regions, next) {
+        if (addr > (reg->base + reg->size - 1)) {
+            if (!QLIST_NEXT(reg, next)) {
+                QLIST_INSERT_AFTER(reg, new, next);
+                break;
+            }
+            continue;
+        } else if (addr >= reg->base && addr <= (reg->base + reg->size - 1)) {
+            /* discard the duplicate entry */
+            if (addr == reg->base && size == reg->size) {
+                g_free(new);
+                break;
+            } else {
+                QLIST_INSERT_AFTER(reg, new, next);
+                break;
+            }
+        } else {
+            QLIST_INSERT_BEFORE(reg, new, next);
+            break;
+        }
+    }
+}
+
+void vfio_get_iommu_group_reserved_region(char *group_path)
+{
+    char *filename;
+    FILE *fp;
+    hwaddr start, end;
+    char str[10];
+    struct stat st;
+
+    filename = g_strdup_printf("%s/iommu_group/reserved_regions",
+                               group_path);
+    if (stat(filename, &st) < 0) {
+        g_free(filename);
+        return;
+    }
+
+    fp = fopen(filename, "r");
+    if (!fp) {
+        g_free(filename);
+        return;
+    }
+
+    while (fscanf(fp, "%"PRIx64" %"PRIx64" %s", &start, &end, str) != EOF) {
+        update_reserved_regions(start, (end - start + 1));
+    }
+
+    g_free(filename);
+    fclose(fp);
+}
+
 /*
  * Common VFIO interrupt disable
  */
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index c977ee3..9bffb38 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -2674,6 +2674,8 @@ static void vfio_realize(PCIDevice *pdev, Error **errp)
     vdev->vbasedev.type = VFIO_DEVICE_TYPE_PCI;
     vdev->vbasedev.dev = &vdev->pdev.qdev;
 
+    vfio_get_iommu_group_reserved_region(vdev->vbasedev.sysfsdev);
+
     tmp = g_strdup_printf("%s/iommu_group", vdev->vbasedev.sysfsdev);
     len = readlink(tmp, group_path, sizeof(group_path));
     g_free(tmp);
diff --git a/hw/vfio/platform.c b/hw/vfio/platform.c
index da84abf..d5bbc3a 100644
--- a/hw/vfio/platform.c
+++ b/hw/vfio/platform.c
@@ -578,6 +578,8 @@ static int vfio_base_device_init(VFIODevice *vbasedev, Error **errp)
         return -errno;
     }
 
+    vfio_get_iommu_group_reserved_region(vbasedev->sysfsdev);
+
     tmp = g_strdup_printf("%s/iommu_group", vbasedev->sysfsdev);
     len = readlink(tmp, group_path, sizeof(group_path));
     g_free(tmp);
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 5ed4042..2bcc83b 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -46,6 +46,13 @@
     OBJECT_GET_CLASS(IOMMUMemoryRegionClass, (obj), \
                      TYPE_IOMMU_MEMORY_REGION)
 
+/* Scattered RAM memory region struct */
+typedef struct RAMRegion {
+    hwaddr base;
+    hwaddr size;
+    QLIST_ENTRY(RAMRegion) next;
+} RAMRegion;
+
 typedef struct MemoryRegionOps MemoryRegionOps;
 typedef struct MemoryRegionMmio MemoryRegionMmio;
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index f3a2ac9..3483ca6 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -161,10 +161,13 @@
 VFIOGroup *vfio_get_group(int groupid, AddressSpace *as, Error **errp);
 void vfio_put_group(VFIOGroup *group);
 int vfio_get_device(VFIOGroup *group, const char *name,
                     VFIODevice *vbasedev, Error **errp);
+void update_reserved_regions(hwaddr addr, hwaddr size);
+void vfio_get_iommu_group_reserved_region(char *group_path);
 
 extern const MemoryRegionOps vfio_region_ops;
 extern QLIST_HEAD(vfio_group_head, VFIOGroup) vfio_group_list;
 extern QLIST_HEAD(vfio_as_head, VFIOAddressSpace) vfio_address_spaces;
+extern QLIST_HEAD(reserved_ram_head, RAMRegion) reserved_ram_regions;
 
 #ifdef CONFIG_LINUX
 int vfio_get_region_info(VFIODevice *vbasedev, int index,