| Message ID | 20190730055203.28467-4-hch@lst.de (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [01/13] amdgpu: remove -EAGAIN handling for hmm_range_fault |
On Tue, Jul 30, 2019 at 08:51:53AM +0300, Christoph Hellwig wrote:
> This avoid having to abuse the vma field in struct hmm_range to unlock
> the mmap_sem.

I think the change inside hmm_range_fault got lost on rebase, it is
now using:

up_read(&range->hmm->mm->mmap_sem);

But, yes, lets change it to use svmm->mm and try to keep struct hmm
opaque to drivers

Jason
On Tue, Jul 30, 2019 at 12:35:59PM +0000, Jason Gunthorpe wrote:
> On Tue, Jul 30, 2019 at 08:51:53AM +0300, Christoph Hellwig wrote:
> > This avoid having to abuse the vma field in struct hmm_range to unlock
> > the mmap_sem.
>
> I think the change inside hmm_range_fault got lost on rebase, it is
> now using:
>
> up_read(&range->hmm->mm->mmap_sem);
>
> But, yes, lets change it to use svmm->mm and try to keep struct hmm
> opaque to drivers

It got lost somewhat intentionally as I didn't want the churn, but I
forgot to update the changelog.  But if you are fine with changing it
over I can bring it back.
On Tue, Jul 30, 2019 at 03:10:38PM +0200, Christoph Hellwig wrote:
> On Tue, Jul 30, 2019 at 12:35:59PM +0000, Jason Gunthorpe wrote:
> > On Tue, Jul 30, 2019 at 08:51:53AM +0300, Christoph Hellwig wrote:
> > > This avoid having to abuse the vma field in struct hmm_range to unlock
> > > the mmap_sem.
> >
> > I think the change inside hmm_range_fault got lost on rebase, it is
> > now using:
> >
> > up_read(&range->hmm->mm->mmap_sem);
> >
> > But, yes, lets change it to use svmm->mm and try to keep struct hmm
> > opaque to drivers
>
> It got lost somewhat intentionally as I didn't want the churn, but I
> forgot to update the changelog.  But if you are fine with changing it
> over I can bring it back.

I have a patch deleting hmm->mm, so using svmm seems cleaner churn
here, we could defer and I can fold this into that patch?

Jason
On Tue, Jul 30, 2019 at 01:14:58PM +0000, Jason Gunthorpe wrote:
> I have a patch deleting hmm->mm, so using svmm seems cleaner churn
> here, we could defer and I can fold this into that patch?

Sounds good.  If I don't need to resend feel free to fold it, otherwise
I'll fix it up.
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index a74530b5a523..b889d5ec4c7e 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -485,14 +485,14 @@ nouveau_range_done(struct hmm_range *range)
 }
 
 static int
-nouveau_range_fault(struct hmm_mirror *mirror, struct hmm_range *range)
+nouveau_range_fault(struct nouveau_svmm *svmm, struct hmm_range *range)
 {
 	long ret;
 
 	range->default_flags = 0;
 	range->pfn_flags_mask = -1UL;
 
-	ret = hmm_range_register(range, mirror,
+	ret = hmm_range_register(range, &svmm->mirror,
 				 range->start, range->end,
 				 PAGE_SHIFT);
 	if (ret) {
@@ -689,7 +689,7 @@ nouveau_svm_fault(struct nvif_notify *notify)
 	range.values = nouveau_svm_pfn_values;
 	range.pfn_shift = NVIF_VMM_PFNMAP_V0_ADDR_SHIFT;
 again:
-	ret = nouveau_range_fault(&svmm->mirror, &range);
+	ret = nouveau_range_fault(svmm, &range);
 	if (ret == 0) {
 		mutex_lock(&svmm->mutex);
 		if (!nouveau_range_done(&range)) {
This avoids having to abuse the vma field in struct hmm_range to unlock
the mmap_sem.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/gpu/drm/nouveau/nouveau_svm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)