diff mbox series

[RFC,11/42] iommu: Add new domain op cache_invalidate_kvm

Message ID 20231202092041.14084-1-yan.y.zhao@intel.com (mailing list archive)
State New, archived
Headers show
Series Sharing KVM TDP to IOMMU | expand

Commit Message

Yan Zhao Dec. 2, 2023, 9:20 a.m. UTC
When KVM invalidates mappings that are shared with IOMMU stage 2 paging
structures, the IOMMU driver needs to invalidate hardware TLBs accordingly.

The new op cache_invalidate_kvm is called from IOMMUFD to invalidate
hardware TLBs upon receiving invalidation notifications from KVM.

Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
---
 include/linux/iommu.h | 5 +++++
 1 file changed, 5 insertions(+)

Comments

Jason Gunthorpe Dec. 4, 2023, 3:09 p.m. UTC | #1
On Sat, Dec 02, 2023 at 05:20:41PM +0800, Yan Zhao wrote:
> On KVM invalidates mappings that are shared to IOMMU stage 2 paging
> structures, IOMMU driver needs to invalidate hardware TLBs accordingly.
> 
> The new op cache_invalidate_kvm is called from IOMMUFD to invalidate
> hardware TLBs upon receiving invalidation notifications from KVM.

Why?

SVA hooks the invalidation directly to the mm, shouldn't KVM also hook
the invalidation directly from the kvm? Why do we need to call a chain
of function pointers? iommufd isn't adding any value in the chain
here.

Jason
Yan Zhao Dec. 5, 2023, 6:40 a.m. UTC | #2
On Mon, Dec 04, 2023 at 11:09:45AM -0400, Jason Gunthorpe wrote:
> On Sat, Dec 02, 2023 at 05:20:41PM +0800, Yan Zhao wrote:
> > On KVM invalidates mappings that are shared to IOMMU stage 2 paging
> > structures, IOMMU driver needs to invalidate hardware TLBs accordingly.
> > 
> > The new op cache_invalidate_kvm is called from IOMMUFD to invalidate
> > hardware TLBs upon receiving invalidation notifications from KVM.
> 
> Why?
> 
> SVA hooks the invalidation directly to the mm, shouldn't KVM also hook
> the invalidation directly from the kvm? Why do we need to call a chain
> of function pointers? iommufd isn't adding any value in the chain
> here.
Do you prefer the IOMMU vendor driver to register as an importer with KVM
directly? Then IOMMUFD would just pass "struct kvm_tdp_fd" to the IOMMU vendor
driver for domain creation.
Actually both ways are ok for us.
The current chaining way is just to keep the IOMMU domain managed only by
IOMMUFD and decoupled from KVM.
Jason Gunthorpe Dec. 5, 2023, 2:52 p.m. UTC | #3
On Tue, Dec 05, 2023 at 02:40:28PM +0800, Yan Zhao wrote:
> On Mon, Dec 04, 2023 at 11:09:45AM -0400, Jason Gunthorpe wrote:
> > On Sat, Dec 02, 2023 at 05:20:41PM +0800, Yan Zhao wrote:
> > > On KVM invalidates mappings that are shared to IOMMU stage 2 paging
> > > structures, IOMMU driver needs to invalidate hardware TLBs accordingly.
> > > 
> > > The new op cache_invalidate_kvm is called from IOMMUFD to invalidate
> > > hardware TLBs upon receiving invalidation notifications from KVM.
> > 
> > Why?
> > 
> > SVA hooks the invalidation directly to the mm, shouldn't KVM also hook
> > the invalidation directly from the kvm? Why do we need to call a chain
> > of function pointers? iommufd isn't adding any value in the chain
> > here.
> Do you prefer IOMMU vendor driver to register as importer to KVM directly?
> Then IOMMUFD just passes "struct kvm_tdp_fd" to IOMMU vendor driver for domain
> creation.

Yes, this is what we did for SVA

Function pointers are slow these days, so it is preferred to go
directly.

Jason
Yan Zhao Dec. 6, 2023, 1 a.m. UTC | #4
On Tue, Dec 05, 2023 at 10:52:27AM -0400, Jason Gunthorpe wrote:
> On Tue, Dec 05, 2023 at 02:40:28PM +0800, Yan Zhao wrote:
> > On Mon, Dec 04, 2023 at 11:09:45AM -0400, Jason Gunthorpe wrote:
> > > On Sat, Dec 02, 2023 at 05:20:41PM +0800, Yan Zhao wrote:
> > > > On KVM invalidates mappings that are shared to IOMMU stage 2 paging
> > > > structures, IOMMU driver needs to invalidate hardware TLBs accordingly.
> > > > 
> > > > The new op cache_invalidate_kvm is called from IOMMUFD to invalidate
> > > > hardware TLBs upon receiving invalidation notifications from KVM.
> > > 
> > > Why?
> > > 
> > > SVA hooks the invalidation directly to the mm, shouldn't KVM also hook
> > > the invalidation directly from the kvm? Why do we need to call a chain
> > > of function pointers? iommufd isn't adding any value in the chain
> > > here.
> > Do you prefer IOMMU vendor driver to register as importer to KVM directly?
> > Then IOMMUFD just passes "struct kvm_tdp_fd" to IOMMU vendor driver for domain
> > creation.
> 
> Yes, this is what we did for SVA
> 
> Function pointers are slow these days, so it is preferred to go
> directly.

Ok. Will do it this way. Thanks!

Patch

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 0ce23ee399d35..0b056d5a6b3a3 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -636,6 +636,9 @@  struct iommu_ops {
  *                         forward a driver specific error code to user space.
  *                         Both the driver data structure and the error code
  *                         must be defined in include/uapi/linux/iommufd.h
+ * @cache_invalidate_kvm: Synchronously flush hardware TLBs for KVM managed
+ *                        stage 2 IO page tables.
+ *                        The @domain must be IOMMU_DOMAIN_KVM.
  * @iova_to_phys: translate iova to physical address
  * @enforce_cache_coherency: Prevent any kind of DMA from bypassing IOMMU_CACHE,
  *                           including no-snoop TLPs on PCIe or other platform
@@ -665,6 +668,8 @@  struct iommu_domain_ops {
 	int (*cache_invalidate_user)(struct iommu_domain *domain,
 				     struct iommu_user_data_array *array,
 				     u32 *error_code);
+	void (*cache_invalidate_kvm)(struct iommu_domain *domain,
+				     unsigned long iova, unsigned long size);
 
 	phys_addr_t (*iova_to_phys)(struct iommu_domain *domain,
 				    dma_addr_t iova);