Message ID | 1435314689-1934-1-git-send-email-deathsimple@vodafone.de (mailing list archive)
---|---
State | New, archived
On Fri, Jun 26, 2015 at 6:31 AM, Christian König <deathsimple@vodafone.de> wrote:
> From: Christian König <christian.koenig@amd.com>
>
> We only should do so when the BO_VA was actually mapped.
> Otherwise we get a nice error message on the next CS.
>
> v2: It actually doesn't matter if it was invalidated or not,
> if it was mapped we need to clear the area where it was mapped.
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> Tested-by: Michel Dänzer <michel.daenzer@amd.com> (v1)

Applied. Thanks!

Alex

> ---
>  drivers/gpu/drm/radeon/radeon_vm.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c
> index 3662157..ec10533 100644
> --- a/drivers/gpu/drm/radeon/radeon_vm.c
> +++ b/drivers/gpu/drm/radeon/radeon_vm.c
> @@ -1129,12 +1129,12 @@ void radeon_vm_bo_rmv(struct radeon_device *rdev,
>  	interval_tree_remove(&bo_va->it, &vm->va);
>
>  	spin_lock(&vm->status_lock);
> -	if (list_empty(&bo_va->vm_status)) {
> +	list_del(&bo_va->vm_status);
> +	if (bo_va->it.start || bo_va->it.last) {
>  		bo_va->bo = radeon_bo_ref(bo_va->bo);
>  		list_add(&bo_va->vm_status, &vm->freed);
>  	} else {
>  		radeon_fence_unref(&bo_va->last_pt_update);
> -		list_del(&bo_va->vm_status);
>  		kfree(bo_va);
>  	}
>  	spin_unlock(&vm->status_lock);
> --
> 1.9.1
>
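For readers skimming the thread, this is how the tail of radeon_vm_bo_rmv() reads with the patch applied. It is a sketch reconstructed only from the hunk and its context lines above; the comments paraphrase the v2 commit message and are not present in the actual source:

	interval_tree_remove(&bo_va->it, &vm->va);

	spin_lock(&vm->status_lock);
	/* Unconditionally unlink the BO_VA from whatever status list
	 * (e.g. invalidated) it may currently be on.
	 */
	list_del(&bo_va->vm_status);
	if (bo_va->it.start || bo_va->it.last) {
		/* Non-empty interval: the BO_VA was actually mapped, so
		 * take a BO reference and queue it on vm->freed; the
		 * page-table range it covered still has to be cleared.
		 */
		bo_va->bo = radeon_bo_ref(bo_va->bo);
		list_add(&bo_va->vm_status, &vm->freed);
	} else {
		/* Never mapped: nothing to clear, free it right away. */
		radeon_fence_unref(&bo_va->last_pt_update);
		kfree(bo_va);
	}
	spin_unlock(&vm->status_lock);

The key change is that list membership no longer decides the BO_VA's fate: before the patch, a never-mapped BO_VA with an empty vm_status list was queued on vm->freed, while a mapped BO_VA sitting on another status list took the kfree path without its page-table range ever being cleared.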
On 26.06.2015 19:31, Christian König wrote:
> From: Christian König <christian.koenig@amd.com>
>
> We only should do so when the BO_VA was actually mapped.
> Otherwise we get a nice error message on the next CS.
>
> v2: It actually doesn't matter if it was invalidated or not,
> if it was mapped we need to clear the area where it was mapped.
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> Tested-by: Michel Dänzer <michel.daenzer@amd.com> (v1)
> ---
>  drivers/gpu/drm/radeon/radeon_vm.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c
> index 3662157..ec10533 100644
> --- a/drivers/gpu/drm/radeon/radeon_vm.c
> +++ b/drivers/gpu/drm/radeon/radeon_vm.c
> @@ -1129,12 +1129,12 @@ void radeon_vm_bo_rmv(struct radeon_device *rdev,
>  	interval_tree_remove(&bo_va->it, &vm->va);
>
>  	spin_lock(&vm->status_lock);
> -	if (list_empty(&bo_va->vm_status)) {
> +	list_del(&bo_va->vm_status);
> +	if (bo_va->it.start || bo_va->it.last) {
>  		bo_va->bo = radeon_bo_ref(bo_va->bo);
>  		list_add(&bo_va->vm_status, &vm->freed);
>  	} else {
>  		radeon_fence_unref(&bo_va->last_pt_update);
> -		list_del(&bo_va->vm_status);
>  		kfree(bo_va);
>  	}
>  	spin_unlock(&vm->status_lock);
>

Even with this v2 patch, I was still running into these messages sometimes, accompanied by Mesa complaining about a CS being rejected and visual corruption:

 radeon 0000:00:01.0: bo ffff88021b433000 don't has a mapping in vm ffff8802355c1800

I tested the v1 patch again, same problem. Reverting commit 161ab658a611df14fb0365b7b70a8c5fed3e4870 ("drm/radeon: stop using addr to check for BO move") instead of applying this patch fixes it.

Unfortunately, I don't have a very reliable way to reproduce it. On this Kaveri laptop, it seemed to happen every time doing

 xinit =xterm -- :1 -retro

which resulted in a completely corrupted display instead of the expected root weave and xterm window. However, I haven't seen it on my desktop Kaveri machine with either the v1 or v2 patch.
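For context on the message Michel quotes: entries queued on vm->freed are later drained through radeon_vm_bo_update(), which rejects any BO_VA whose interval is empty. A hedged sketch of that guard, written from memory of radeon_vm.c of this era, so treat the exact shape as an assumption; only the message string itself is confirmed by the log line above:

	/* In radeon_vm_bo_update() (sketch, from memory): a BO_VA with an
	 * empty interval has no mapping to update, so the CS is rejected.
	 */
	if (!bo_va->it.start) {
		dev_err(rdev->dev, "bo %p don't has a mapping in vm %p\n",
			bo_va->bo, vm);
		return -EINVAL;
	}

This guard is the "nice error message on the next CS" from the commit log. That it still fires even with the v2 patch applied, and goes away when commit 161ab658a611 is reverted instead, points at an interaction with the BO-move check rather than at this hunk.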
diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c
index 3662157..ec10533 100644
--- a/drivers/gpu/drm/radeon/radeon_vm.c
+++ b/drivers/gpu/drm/radeon/radeon_vm.c
@@ -1129,12 +1129,12 @@ void radeon_vm_bo_rmv(struct radeon_device *rdev,
 	interval_tree_remove(&bo_va->it, &vm->va);

 	spin_lock(&vm->status_lock);
-	if (list_empty(&bo_va->vm_status)) {
+	list_del(&bo_va->vm_status);
+	if (bo_va->it.start || bo_va->it.last) {
 		bo_va->bo = radeon_bo_ref(bo_va->bo);
 		list_add(&bo_va->vm_status, &vm->freed);
 	} else {
 		radeon_fence_unref(&bo_va->last_pt_update);
-		list_del(&bo_va->vm_status);
 		kfree(bo_va);
 	}
 	spin_unlock(&vm->status_lock);