| Message ID | 20241217233348.3519726-28-matthew.brost@intel.com (mailing list archive) |
|---|---|
| State | New |
| Series | Introduce GPU SVM and Xe SVM implementation |
On Tue, Dec 17, 2024 at 03:33:45PM -0800, Matthew Brost wrote:
> Wire xe_bo_move to GPU SVM migration via new helper xe_svm_bo_evict.
>

Somehow lost the xe_bo.c changes in this rev which call xe_svm_bo_evict.
Ignore this patch.

Matt

> v2:
>  - Use xe_svm_bo_evict
>  - Drop bo->range
> v3:
>  - Kernel doc (Thomas)
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_svm.c | 14 ++++++++++++++
>  drivers/gpu/drm/xe/xe_svm.h |  2 ++
>  2 files changed, 16 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index a417d8942da4..8780a0b2c81e 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -768,6 +768,20 @@ bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
>  	return drm_gpusvm_has_mapping(&vm->svm.gpusvm, start, end);
>  }
>
> +/**
> + * xe_svm_bo_evict() - SVM evict BO to system memory
> + * @bo: BO to evict
> + *
> + * SVM evict BO to system memory. The GPU SVM layer ensures all device pages
> + * are evicted before returning.
> + *
> + * Return: 0 on success, standard error code otherwise
> + */
> +int xe_svm_bo_evict(struct xe_bo *bo)
> +{
> +	return drm_gpusvm_evict_to_ram(&bo->devmem_allocation);
> +}
> +
>  #if IS_ENABLED(CONFIG_XE_DEVMEM_MIRROR)
>  static struct drm_pagemap_dma_addr
>  xe_drm_pagemap_map_dma(struct drm_pagemap *dpagemap,
> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
> index d549dd9e8641..9e9d333bb9d3 100644
> --- a/drivers/gpu/drm/xe/xe_svm.h
> +++ b/drivers/gpu/drm/xe/xe_svm.h
> @@ -56,6 +56,8 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>
>  bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end);
>
> +int xe_svm_bo_evict(struct xe_bo *bo);
> +
>  static inline bool xe_svm_range_pages_valid(struct xe_svm_range *range)
>  {
>  	return drm_gpusvm_range_pages_valid(range->base.gpusvm, &range->base);
> --
> 2.34.1
>
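[Editor's note: the reply above states that the xe_bo.c hunk calling xe_svm_bo_evict was dropped from this revision. Going only by the commit message ("Wire xe_bo_move to GPU SVM migration via new helper xe_svm_bo_evict") and the v2 changelog, the missing caller presumably resembles the sketch below. The condition, placement, and surrounding signature are assumptions for illustration, not part of this posting.]

	/*
	 * Hypothetical sketch of the lost xe_bo.c hunk: when a BO that
	 * backs SVM device-memory pages is moved out of VRAM, route the
	 * move through xe_svm_bo_evict() so the GPU SVM layer migrates
	 * all device pages back to system memory first.
	 *
	 * The guard condition shown here is illustrative only; the real
	 * patch would hook this into xe_bo_move()'s existing migration
	 * decision logic.
	 */
	if (/* bo backs SVM device memory && destination is system memory */) {
		ret = xe_svm_bo_evict(bo);
		if (ret)
			return ret;
	}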