| Message ID | 20200805122231.23313-17-chris@chris-wilson.co.uk (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Replace obj->mm.lock with reservation_ww_class |
On 05/08/2020 13:22, Chris Wilson wrote:
> Currently, if an error is raised we always call the cleanup locally
> [and skip the main work callback]. However, some future users may need
> to take a mutex to cleanup and so we cannot immediately execute the
> cleanup as we may still be in interrupt context. For example, if we have
> committed sensitive changes [like evicting from the ppGTT layout] that
> are visible but gated behind the fence, we need to ensure those changes
> are completed even after an error. [This does suggest the split between
> the work/release callback is artificial and we may be able to simplify
> the worker api by only requiring a single callback.]
>
> With the execute-immediate flag, for most cases this should result in
> immediate cleanup of an error.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/i915_sw_fence_work.c | 26 +++++++++++------------
>  1 file changed, 13 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_sw_fence_work.c b/drivers/gpu/drm/i915/i915_sw_fence_work.c
> index a3a81bb8f2c3..e094fd0a4202 100644
> --- a/drivers/gpu/drm/i915/i915_sw_fence_work.c
> +++ b/drivers/gpu/drm/i915/i915_sw_fence_work.c
> @@ -16,11 +16,14 @@ static void fence_complete(struct dma_fence_work *f)
>  static void fence_work(struct work_struct *work)
>  {
>  	struct dma_fence_work *f = container_of(work, typeof(*f), work);
> -	int err;
>
> -	err = f->ops->work(f);
> -	if (err)
> -		dma_fence_set_error(&f->dma, err);
> +	if (!f->dma.error) {
> +		int err;
> +
> +		err = f->ops->work(f);
> +		if (err)
> +			dma_fence_set_error(&f->dma, err);
> +	}
>
>  	fence_complete(f);
>  	dma_fence_put(&f->dma);
> @@ -36,15 +39,10 @@ fence_notify(struct i915_sw_fence *fence, enum i915_sw_fence_notify state)
>  		if (fence->error)
>  			dma_fence_set_error(&f->dma, fence->error);
>
> -		if (!f->dma.error) {
> -			dma_fence_get(&f->dma);
> -			if (test_bit(DMA_FENCE_WORK_IMM, &f->dma.flags))
> -				fence_work(&f->work);
> -			else
> -				queue_work(system_unbound_wq, &f->work);
> -		} else {
> -			fence_complete(f);
> -		}
> +		if (test_bit(DMA_FENCE_WORK_IMM, &f->dma.flags))
> +			fence_work(&f->work);
> +		else
> +			queue_work(system_unbound_wq, &f->work);
>  		break;
>
>  	case FENCE_FREE:
> @@ -91,6 +89,8 @@ void dma_fence_work_init(struct dma_fence_work *f,
>  	dma_fence_init(&f->dma, &fence_ops, &f->lock, 0, 0);
>  	i915_sw_fence_init(&f->chain, fence_notify);
>  	INIT_WORK(&f->work, fence_work);
> +
> +	dma_fence_get(&f->dma); /* once for the chain; once for the work */
>  }
>
>  int dma_fence_work_chain(struct dma_fence_work *f, struct dma_fence *signal)

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,
Tvrtko
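The control flow after this patch can be summarised: the worker always runs (immediately or via the workqueue), the work callback is skipped once an error is sticky on the fence, completion/cleanup always happens in process context, and the worker's reference is taken up front in init rather than conditionally at signal time. Below is a minimal userspace sketch of that flow; it is not the kernel API. The `fake_*` names, the plain `int refcount`, and the boolean flags are all hypothetical stand-ins for `dma_fence_work`, `kref`, and the real callback table.

```c
/* Userspace sketch (assumed names, not the i915 API) of the revised
 * dma_fence_work flow: skip the work callback on error, but always
 * run completion so mutex-taking cleanup happens outside irq context.
 */
#include <assert.h>
#include <stdbool.h>

struct fake_fence_work;
typedef int (*work_fn)(struct fake_fence_work *f);

struct fake_fence_work {
	int error;      /* sticky error, like dma_fence_set_error() */
	int refcount;   /* stand-in for dma_fence_get()/dma_fence_put() */
	bool completed; /* set by the fence_complete() analogue */
	bool work_ran;  /* did the ops->work() analogue execute? */
	work_fn work;
};

static void fake_init(struct fake_fence_work *f, work_fn fn)
{
	f->error = 0;
	f->completed = false;
	f->work_ran = false;
	f->work = fn;
	f->refcount = 2; /* once for the chain; once for the work */
}

static void fake_set_error(struct fake_fence_work *f, int err)
{
	if (!f->error) /* first error wins, as with dma_fence errors */
		f->error = err;
}

/* Mirrors fence_work() after the patch: the work callback is skipped
 * if an error is already set, but completion and the final reference
 * drop happen unconditionally. */
static void fake_run(struct fake_fence_work *f)
{
	if (!f->error) {
		int err = f->work(f);

		f->work_ran = true;
		if (err)
			fake_set_error(f, err);
	}

	f->completed = true; /* fence_complete() */
	f->refcount--;       /* dma_fence_put() */
}

static int ok_work(struct fake_fence_work *f)
{
	(void)f;
	return 0;
}

static int bad_work(struct fake_fence_work *f)
{
	(void)f;
	return -22; /* -EINVAL */
}
```

A fence that signals with an upstream error now still reaches `fake_run()`, so cleanup is guaranteed to execute in worker context rather than being short-circuited at notify time.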