Message ID: 20221123134832.429589-2-yi.l.liu@intel.com (mailing list archive)
State: New, archived
Series: Make mdev driver dma_unmap callback tolerant to unmaps come before device open
> From: Liu, Yi L <yi.l.liu@intel.com>
> Sent: Wednesday, November 23, 2022 9:49 PM
>
> vfio_iommufd_bind() creates an access which has an unmap callback, which
> can be called immediately. So dma_unmap() callback should tolerate the
> unmaps that come before the emulated device is opened.

this should first talk about how it works today and then why iommufd
changes it.

>
> To achieve above, move the protect_table_init and gvt_cache_init into the
> init op which is supposed to be triggered prior to the open_device() op.

what about below?
--
vfio container registers .dma_unmap() callback after the device is opened.
So it's fine for mdev drivers to initialize internal mapping cache in
.open_device(). See vfio_device_container_register().

Now with iommufd an access ops with an unmap callback is registered
when the device is bound to iommufd which is before .open_device()
is called. This implies gvt's .dma_unmap() could be called before its
internal mapping cache is initialized.

The fix is moving gvt mapping cache initialization to vGPU creation.
While at it also move ptable initialization together.
> [...]
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index 579b230a0f58..cb21b1ba4162 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -322,7 +322,7 @@ static void gvt_cache_destroy(struct intel_vgpu *vgpu)
> [...]
> -static void gvt_cache_init(struct intel_vgpu *vgpu)
> +void gvt_cache_init(struct intel_vgpu *vgpu)

those are local functions. Just move to vgpu.c.

or you can remove the function wrap and directly put the internal lines
in intel_gvt_create_vgpu()

> [...]
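The ordering problem discussed above can be modelled in plain userspace C. All names below are illustrative stand-ins, not the real kvmgt structures or vfio APIs: the point is only that once the cache is initialized at vGPU creation time, an unmap callback firing between iommufd bind and open_device() hits initialized (empty) state instead of garbage.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the vGPU lifecycle; names are hypothetical. */
struct toy_vgpu {
	bool cache_ready;    /* stands in for gfn_cache/dma_addr_cache/ptable */
	int  cached_entries;
	bool attached;       /* set by open_device() */
};

/* The fix under discussion: initialize the cache at vGPU creation,
 * which happens before iommufd bind registers the unmap callback. */
void toy_create_vgpu(struct toy_vgpu *vgpu)
{
	vgpu->cache_ready = true;
	vgpu->cached_entries = 0;
	vgpu->attached = false;
}

/* With iommufd, this may be called any time after bind, including
 * before open_device(). */
int toy_dma_unmap(struct toy_vgpu *vgpu)
{
	if (!vgpu->cache_ready)
		return -1;           /* would be a crash/UAF in a real driver */
	vgpu->cached_entries = 0;    /* nothing cached yet: harmless no-op */
	return 0;
}

void toy_open_device(struct toy_vgpu *vgpu)
{
	/* no cache init here any more */
	vgpu->attached = true;
}
```

In the old ordering (cache init inside open_device()), the early `toy_dma_unmap()` call would take the error path; moving the init to creation closes the window.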
On 2022/11/24 15:07, Tian, Kevin wrote:
>> From: Liu, Yi L <yi.l.liu@intel.com>
>> Sent: Wednesday, November 23, 2022 9:49 PM
>> [...]
>
> what about below?
> --
> vfio container registers .dma_unmap() callback after the device is opened.
> So it's fine for mdev drivers to initialize internal mapping cache in
> .open_device(). See vfio_device_container_register().
>
> Now with iommufd an access ops with an unmap callback is registered
> when the device is bound to iommufd which is before .open_device()
> is called. This implies gvt's .dma_unmap() could be called before its
> internal mapping cache is initialized.
>
> The fix is moving gvt mapping cache initialization to vGPU creation.
> While at it also move ptable initialization together.

much clearer :-)

> those are local functions. Just move to vgpu.c.
>
> or you can remove the function wrap and directly put the internal lines
> in intel_gvt_create_vgpu()

yes. maybe see Zhenyu and Zhi's input. which way is preferred by them.
On 2022.11.24 17:15:12 +0800, Yi Liu wrote:
> On 2022/11/24 15:07, Tian, Kevin wrote:
> > [...]
> > The fix is moving gvt mapping cache initialization to vGPU creation.
> > While at it also move ptable initialization together.
>
> much clearer :-)

Current gvt internal cache is handled with .open_device() and .close_device()
pair, so those internal cache is now re-initialized for each device session,
how is that handled for iommufd? Looks that's missed in this patch..
On 2022/11/25 14:04, Zhenyu Wang wrote:
> [...]
> Current gvt internal cache is handled with .open_device() and .close_device()
> pair, so those internal cache is now re-initialized for each device session,
> how is that handled for iommufd? Looks that's missed in this patch..

you are right. I noticed below two helpers are used to destroy the internal
cache. So seems no need to re-initialize. I'm no expert here. :)

gvt_cache_destroy()
kvmgt_protect_table_destroy()
> From: Liu, Yi L <yi.l.liu@intel.com>
> Sent: Friday, November 25, 2022 5:07 PM
>
>> Current gvt internal cache is handled with .open_device() and .close_device()
>> pair, so those internal cache is now re-initialized for each device session,
>> how is that handled for iommufd? Looks that's missed in this patch..
>
> you are right. I noticed below two helpers are used to destroy the internal
> cache. So seems no need to re-initialize. I'm no expert here. :)
>
> gvt_cache_destroy()
> kvmgt_protect_table_destroy()

It's common practice to have init in the probe path and then do the
housekeeping in the close path. Re-init in every open is unnecessary.
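The practice Kevin describes — initialize once at creation, only tear down on close — works because a destroy routine that empties the structures leaves them in exactly the freshly-initialized state, so the next device session needs no re-init. A minimal userspace sketch (hypothetical names, modelling the gfn/dma_addr cache as a linked list rather than the real rbtrees):

```c
#include <stdlib.h>

/* Tiny stand-in for gvt's mapping cache. */
struct cache_entry { struct cache_entry *next; };
struct cache { struct cache_entry *head; };

/* Called once, at vGPU creation (analogue of gvt_cache_init()). */
void cache_init(struct cache *c)
{
	c->head = NULL;
}

/* Entries accumulate while the device session is open. */
void cache_add(struct cache *c)
{
	struct cache_entry *e = malloc(sizeof(*e));

	e->next = c->head;
	c->head = e;
}

/* Called at close_device() (analogue of gvt_cache_destroy()).
 * After this, *c is indistinguishable from a freshly cache_init()ed
 * cache, so the next open_device() needs no re-initialization. */
void cache_destroy(struct cache *c)
{
	while (c->head) {
		struct cache_entry *e = c->head;

		c->head = e->next;
		free(e);
	}
}
```

This is why moving the init out of open_device() does not leak state between sessions: close already restores the empty state.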
On 11/23/22 13:48, Liu, Yi L wrote:
> vfio_iommufd_bind() creates an access which has an unmap callback, which
> can be called immediately. So dma_unmap() callback should tolerate the
> unmaps that come before the emulated device is opened.

After reading code on Jason's tree, I think it would be nice to document
the requirements for a vfio driver somewhere, like what they must do and
must not do in the callbacks of vfio_device_ops. maybe in
include/linux/vfio.h?

> [...]
> @@ -382,6 +382,8 @@ int intel_gvt_create_vgpu(struct intel_vgpu *vgpu,
>
>  	intel_gvt_update_reg_whitelist(vgpu);
>  	mutex_unlock(&gvt->lock);
> +	kvmgt_protect_table_init(vgpu);
> +	gvt_cache_init(vgpu);

It would be nice that you can still keep the initialization in the kvmgt.c
as they are kvmgt related.

With the changes above:

Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>
On 2022/11/28 18:22, Wang, Zhi A wrote:
> On 11/23/22 13:48, Liu, Yi L wrote:
>> vfio_iommufd_bind() creates an access which has an unmap callback, which
>> can be called immediately. So dma_unmap() callback should tolerate the
>> unmaps that come before the emulated device is opened.
>
> After reading code on Jason's tree, I think it would be nice to document
> the requirements for a vfio driver somewhere, like what they must do and
> must not do in the callbacks of vfio_device_ops. maybe in
> include/linux/vfio.h?

yes. maybe vfio.rst. Although I found it is out of date. I've got one fix
and may send out later.

> [...]
>> @@ -382,6 +382,8 @@ int intel_gvt_create_vgpu(struct intel_vgpu *vgpu,
>>
>>  	intel_gvt_update_reg_whitelist(vgpu);
>>  	mutex_unlock(&gvt->lock);
>> +	kvmgt_protect_table_init(vgpu);
>> +	gvt_cache_init(vgpu);
>
> It would be nice that you can still keep the initialization in the kvmgt.c
> as they are kvmgt related.

ok, so I'll leave the function implementation in kvmgt.c.

> With the changes above:
>
> Reviewed-by: Zhi Wang <zhi.a.wang@intel.com>

thanks. :-)
diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index dbf8d7470b2c..a3a7e16078ba 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -754,6 +754,8 @@ void intel_gvt_debugfs_remove_vgpu(struct intel_vgpu *vgpu);
 void intel_gvt_debugfs_init(struct intel_gvt *gvt);
 void intel_gvt_debugfs_clean(struct intel_gvt *gvt);
 
+void gvt_cache_init(struct intel_vgpu *vgpu);
+void kvmgt_protect_table_init(struct intel_vgpu *info);
 int intel_gvt_page_track_add(struct intel_vgpu *info, u64 gfn);
 int intel_gvt_page_track_remove(struct intel_vgpu *info, u64 gfn);
 int intel_gvt_dma_pin_guest_page(struct intel_vgpu *vgpu, dma_addr_t dma_addr);
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index 579b230a0f58..cb21b1ba4162 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -322,7 +322,7 @@ static void gvt_cache_destroy(struct intel_vgpu *vgpu)
 	}
 }
 
-static void gvt_cache_init(struct intel_vgpu *vgpu)
+void gvt_cache_init(struct intel_vgpu *vgpu)
 {
 	vgpu->gfn_cache = RB_ROOT;
 	vgpu->dma_addr_cache = RB_ROOT;
@@ -330,7 +330,7 @@ static void gvt_cache_init(struct intel_vgpu *vgpu)
 	mutex_init(&vgpu->cache_lock);
 }
 
-static void kvmgt_protect_table_init(struct intel_vgpu *info)
+void kvmgt_protect_table_init(struct intel_vgpu *info)
 {
 	hash_init(info->ptable);
 }
@@ -671,9 +671,6 @@ static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
 
 	vgpu->attached = true;
 
-	kvmgt_protect_table_init(vgpu);
-	gvt_cache_init(vgpu);
-
 	vgpu->track_node.track_write = kvmgt_page_track_write;
 	vgpu->track_node.track_flush_slot = kvmgt_page_track_flush_slot;
 	kvm_page_track_register_notifier(vgpu->vfio_device.kvm,
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index 56c71474008a..036e1a72a26b 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -382,6 +382,8 @@ int intel_gvt_create_vgpu(struct intel_vgpu *vgpu,
 
 	intel_gvt_update_reg_whitelist(vgpu);
 	mutex_unlock(&gvt->lock);
+	kvmgt_protect_table_init(vgpu);
+	gvt_cache_init(vgpu);
 	return 0;
 
 out_clean_sched_policy:
vfio_iommufd_bind() creates an access which has an unmap callback, which
can be called immediately. So dma_unmap() callback should tolerate the
unmaps that come before the emulated device is opened.

To achieve above, move the protect_table_init and gvt_cache_init into the
init op which is supposed to be triggered prior to the open_device() op.

Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Zhi Wang <zhi.a.wang@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
 drivers/gpu/drm/i915/gvt/gvt.h   | 2 ++
 drivers/gpu/drm/i915/gvt/kvmgt.c | 7 ++-----
 drivers/gpu/drm/i915/gvt/vgpu.c  | 2 ++
 3 files changed, 6 insertions(+), 5 deletions(-)