diff mbox

drm/i915: Ignore -EIO from __i915_wait_request() during mmio flip

Message ID 1434039268-17870-1-git-send-email-ville.syrjala@linux.intel.com (mailing list archive)
State New, archived
Headers show

Commit Message

Ville Syrjälä June 11, 2015, 4:14 p.m. UTC
From: Ville Syrjälä <ville.syrjala@linux.intel.com>

When the GPU gets reset __i915_wait_request() returns -EIO to the
mmio flip worker. Currently we WARN whenever we get anything other
than 0. Ignore the -EIO too since it's a perfectly normal thing
to get during a GPU reset.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_display.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

Comments

Chris Wilson June 11, 2015, 8:01 p.m. UTC | #1
On Thu, Jun 11, 2015 at 07:14:28PM +0300, ville.syrjala@linux.intel.com wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> When the GPU gets reset __i915_wait_request() returns -EIO to the
> mmio flip worker. Currently we WARN whenever we get anything other
> than 0. Ignore the -EIO too since it's a perfectly normal thing
> to get during a GPU reset.

Nak. I consider it is a bug in __i915_wait_request(). I am discussing
with Thomas Elf how to fix this wrt the next generation of individual
ring resets.

In the meantime I prefer a fix along the lines of
http://patchwork.freedesktop.org/patch/46607/ which addresses this and
more such as the false SIGBUSes.
-Chris
Shuang He June 15, 2015, 1:40 a.m. UTC | #2
Tested-By: Intel Graphics QA PRTS (Patch Regression Test System Contact: shuang.he@intel.com)
Task id: 6571
-------------------------------------Summary-------------------------------------
Platform          Delta          drm-intel-nightly          Series Applied
PNV                                  276/276              276/276
ILK                                  303/303              303/303
SNB                                  312/312              312/312
IVB                                  343/343              343/343
BYT                                  287/287              287/287
BDW                                  321/321              321/321
-------------------------------------Detailed-------------------------------------
Platform  Test                                drm-intel-nightly          Series Applied
Note: You need to pay more attention to line start with '*'
Daniel Vetter June 15, 2015, 4:34 p.m. UTC | #3
On Thu, Jun 11, 2015 at 09:01:08PM +0100, Chris Wilson wrote:
> On Thu, Jun 11, 2015 at 07:14:28PM +0300, ville.syrjala@linux.intel.com wrote:
> > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > 
> > When the GPU gets reset __i915_wait_request() returns -EIO to the
> > mmio flip worker. Currently we WARN whenever we get anything other
> > than 0. Ignore the -EIO too since it's a perfectly normal thing
> > to get during a GPU reset.
> 
> Nak. I consider it is a bug in __i915_wait_request(). I am discussing
> with Thomas Elf how to fix this wrt the next generation of individual
> ring resets.

We should only get an -EIO if the gpu is truly gone, but an -EAGAIN when
the reset is ongoing. Neither is currently handled. For lockless users we
probably want a version of wait_request which just dtrt (of waiting for
the reset handler to complete without trying to grab the mutex and then
returning). Or some other means of retrying.

Returning -EIO from the low-level wait function still seems appropriate,
but callers need to eat/handle it appropriately. WARN_ON isn't it here
ofc.

Also we have piles of flip vs. gpu hang testcases ... do they fail to
provoke this or is this another case of bug lost in bugzilla? In any case
needs a Testcase: line.
-Daniel
Chris Wilson June 16, 2015, 12:10 p.m. UTC | #4
On Mon, Jun 15, 2015 at 06:34:51PM +0200, Daniel Vetter wrote:
> On Thu, Jun 11, 2015 at 09:01:08PM +0100, Chris Wilson wrote:
> > On Thu, Jun 11, 2015 at 07:14:28PM +0300, ville.syrjala@linux.intel.com wrote:
> > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > 
> > > When the GPU gets reset __i915_wait_request() returns -EIO to the
> > > mmio flip worker. Currently we WARN whenever we get anything other
> > > than 0. Ignore the -EIO too since it's a perfectly normal thing
> > > to get during a GPU reset.
> > 
> > Nak. I consider it is a bug in __i915_wait_request(). I am discussing
> > with Thomas Elf how to fix this wrt the next generation of individual
> > ring resets.
> 
> We should only get an -EIO if the gpu is truly gone, but an -EAGAIN when
> the reset is ongoing. Neither is currently handled. For lockless users we
> probably want a version of wait_request which just dtrt (of waiting for
> the reset handler to complete without trying to grab the mutex and then
> returning). Or some other means of retrying.
> 
> Returning -EIO from the low-level wait function still seems appropriate,
> but callers need to eat/handle it appropriately. WARN_ON isn't it here
> ofc.

Bleh, a few years ago you decided not to take the EIO handling along the
call paths that don't care.

I disagree. There are two classes of callers, those that care about
EIO/EAGAIN and those that simply want to know when the GPU is no longer
processing that request. That latter class is still popping up in
bugzilla with frozen displays. For the former, we actually only care
about backoff if we are holding the mutex - and that is only required
for EAGAIN. The only user that cares about EIO is throttle().
 
> Also we have piles of flip vs. gpu hang testcases ... do they fail to
> provoke this or is this another case of bug lost in bugzilla?

We have a few bugs every year for incorrect EIOs returned by
wait_request, but none for this case.
-Chris
Daniel Vetter June 16, 2015, 4:21 p.m. UTC | #5
On Tue, Jun 16, 2015 at 01:10:33PM +0100, Chris Wilson wrote:
> On Mon, Jun 15, 2015 at 06:34:51PM +0200, Daniel Vetter wrote:
> > On Thu, Jun 11, 2015 at 09:01:08PM +0100, Chris Wilson wrote:
> > > On Thu, Jun 11, 2015 at 07:14:28PM +0300, ville.syrjala@linux.intel.com wrote:
> > > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > 
> > > > When the GPU gets reset __i915_wait_request() returns -EIO to the
> > > > mmio flip worker. Currently we WARN whenever we get anything other
> > > > than 0. Ignore the -EIO too since it's a perfectly normal thing
> > > > to get during a GPU reset.
> > > 
> > > Nak. I consider it is a bug in __i915_wait_request(). I am discussing
> > > with Thomas Elf how to fix this wrt the next generation of individual
> > > ring resets.
> > 
> > We should only get an -EIO if the gpu is truly gone, but an -EAGAIN when
> > the reset is ongoing. Neither is currently handled. For lockless users we
> > probably want a version of wait_request which just dtrt (of waiting for
> > the reset handler to complete without trying to grab the mutex and then
> > returning). Or some other means of retrying.
> > 
> > Returning -EIO from the low-level wait function still seems appropriate,
> > but callers need to eat/handle it appropriately. WARN_ON isn't it here
> > ofc.
> 
> Bleh, a few years ago you decided not to take the EIO handling along the
> call paths that don't care.
> 
> I disagree. There are two classes of callers, those that care about
> EIO/EAGAIN and those that simply want to know when the GPU is no longer
> processing that request. That latter class is still popping up in
> bugzilla with frozen displays. For the former, we actually only care
> about backoff if we are holding the mutex - and that is only required
> for EAGAIN. The only user that cares about EIO is throttle().

Hm, right now the design is that for non-interruptible designs we indeed
return -EIO or -EAGAIN, but the reset handler will fix up outstanding
flips. So I guess removing the WARN_ON here is indeed the right thing to
do. We should probably change this once we have atomic (where the wait
doesn't need a lock really, at least for async commits which is what
matters here) and loop until completion.

I'm still vary of eating -EIO in general since it's so hard to test all
this for correctness. Maybe we need a __check_wedge which can return -EIO
and a check_wedge which eats it. And then decide once for where to put
special checks, probably just execbuf and throttle.
-Daniel
Chris Wilson June 16, 2015, 4:30 p.m. UTC | #6
On Tue, Jun 16, 2015 at 06:21:53PM +0200, Daniel Vetter wrote:
> On Tue, Jun 16, 2015 at 01:10:33PM +0100, Chris Wilson wrote:
> > On Mon, Jun 15, 2015 at 06:34:51PM +0200, Daniel Vetter wrote:
> > > On Thu, Jun 11, 2015 at 09:01:08PM +0100, Chris Wilson wrote:
> > > > On Thu, Jun 11, 2015 at 07:14:28PM +0300, ville.syrjala@linux.intel.com wrote:
> > > > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > > 
> > > > > When the GPU gets reset __i915_wait_request() returns -EIO to the
> > > > > mmio flip worker. Currently we WARN whenever we get anything other
> > > > > than 0. Ignore the -EIO too since it's a perfectly normal thing
> > > > > to get during a GPU reset.
> > > > 
> > > > Nak. I consider it is a bug in __i915_wait_request(). I am discussing
> > > > with Thomas Elf how to fix this wrt the next generation of individual
> > > > ring resets.
> > > 
> > > We should only get an -EIO if the gpu is truly gone, but an -EAGAIN when
> > > the reset is ongoing. Neither is currently handled. For lockless users we
> > > probably want a version of wait_request which just dtrt (of waiting for
> > > the reset handler to complete without trying to grab the mutex and then
> > > returning). Or some other means of retrying.
> > > 
> > > Returning -EIO from the low-level wait function still seems appropriate,
> > > but callers need to eat/handle it appropriately. WARN_ON isn't it here
> > > ofc.
> > 
> > Bleh, a few years ago you decided not to take the EIO handling along the
> > call paths that don't care.
> > 
> > I disagree. There are two classes of callers, those that care about
> > EIO/EAGAIN and those that simply want to know when the GPU is no longer
> > processing that request. That latter class is still popping up in
> > bugzilla with frozen displays. For the former, we actually only care
> > about backoff if we are holding the mutex - and that is only required
> > for EAGAIN. The only user that cares about EIO is throttle().
> 
> Hm, right now the design is that for non-interruptible designs we indeed
> return -EIO or -EAGAIN, but the reset handler will fix up outstanding
> flips. So I guess removing the WARN_ON here is indeed the right thing to
> do. We should probably change this once we have atomic (where the wait
> doesn't need a lock really, at least for async commits which is what
> matters here) and loop until completion.
> 
> I'm still vary of eating -EIO in general since it's so hard to test all
> this for correctness. Maybe we need a __check_wedge which can return -EIO
> and a check_wedge which eats it. And then decide once for where to put
> special checks, probably just execbuf and throttle.

Even execbuf really doesn't care. If the GPU didn't complete the earlier
request (principally for semaphore sw sync), it makes no difference for
us now. The content is either corrupt, or we bail when we spot the
wedged GPU upon writing to the ring. Reporting EIO because of an earlier
failure is a poor substitute for the async reset notification. But here
we still need EAGAIN backoff ofc.

I really think eating EIO is the right thing to do in most circumstances
and is correct with the semantics of the callers.
-Chris
Daniel Vetter June 17, 2015, 11:53 a.m. UTC | #7
On Tue, Jun 16, 2015 at 05:30:19PM +0100, Chris Wilson wrote:
> On Tue, Jun 16, 2015 at 06:21:53PM +0200, Daniel Vetter wrote:
> > On Tue, Jun 16, 2015 at 01:10:33PM +0100, Chris Wilson wrote:
> > > On Mon, Jun 15, 2015 at 06:34:51PM +0200, Daniel Vetter wrote:
> > > > On Thu, Jun 11, 2015 at 09:01:08PM +0100, Chris Wilson wrote:
> > > > > On Thu, Jun 11, 2015 at 07:14:28PM +0300, ville.syrjala@linux.intel.com wrote:
> > > > > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > > > 
> > > > > > When the GPU gets reset __i915_wait_request() returns -EIO to the
> > > > > > mmio flip worker. Currently we WARN whenever we get anything other
> > > > > > than 0. Ignore the -EIO too since it's a perfectly normal thing
> > > > > > to get during a GPU reset.
> > > > > 
> > > > > Nak. I consider it is a bug in __i915_wait_request(). I am discussing
> > > > > with Thomas Elf how to fix this wrt the next generation of individual
> > > > > ring resets.
> > > > 
> > > > We should only get an -EIO if the gpu is truly gone, but an -EAGAIN when
> > > > the reset is ongoing. Neither is currently handled. For lockless users we
> > > > probably want a version of wait_request which just dtrt (of waiting for
> > > > the reset handler to complete without trying to grab the mutex and then
> > > > returning). Or some other means of retrying.
> > > > 
> > > > Returning -EIO from the low-level wait function still seems appropriate,
> > > > but callers need to eat/handle it appropriately. WARN_ON isn't it here
> > > > ofc.
> > > 
> > > Bleh, a few years ago you decided not to take the EIO handling along the
> > > call paths that don't care.
> > > 
> > > I disagree. There are two classes of callers, those that care about
> > > EIO/EAGAIN and those that simply want to know when the GPU is no longer
> > > processing that request. That latter class is still popping up in
> > > bugzilla with frozen displays. For the former, we actually only care
> > > about backoff if we are holding the mutex - and that is only required
> > > for EAGAIN. The only user that cares about EIO is throttle().
> > 
> > Hm, right now the design is that for non-interruptible designs we indeed
> > return -EIO or -EAGAIN, but the reset handler will fix up outstanding
> > flips. So I guess removing the WARN_ON here is indeed the right thing to
> > do. We should probably change this once we have atomic (where the wait
> > doesn't need a lock really, at least for async commits which is what
> > matters here) and loop until completion.
> > 
> > I'm still vary of eating -EIO in general since it's so hard to test all
> > this for correctness. Maybe we need a __check_wedge which can return -EIO
> > and a check_wedge which eats it. And then decide once for where to put
> > special checks, probably just execbuf and throttle.
> 
> Even execbuf really doesn't care. If the GPU didn't complete the earlier
> request (principally for semaphore sw sync), it makes no difference for
> us now. The content is either corrupt, or we bail when we spot the
> wedged GPU upon writing to the ring. Reporting EIO because of an earlier
> failure is a poor substitute for the async reset notification. But here
> we still need EAGAIN backoff ofc.
> 
> I really think eating EIO is the right thing to do in most circumstances
> and is correct with the semantics of the callers.

Well we once had the transparent sw fallback at least in the ddx for -EIO.
Mesa never coped for obvious reasons, and given that a modern desktop
can't survive with GL there's not all that much point any more. But still
I think if the gpu is terminally dead we need to tell this to userspace
somehow I think.

What I'm unclear about is which ioctl that should be, and my assumption
thus has been that it's execbuf.
-Daniel
Chris Wilson June 17, 2015, 1:05 p.m. UTC | #8
On Wed, Jun 17, 2015 at 01:53:55PM +0200, Daniel Vetter wrote:
> On Tue, Jun 16, 2015 at 05:30:19PM +0100, Chris Wilson wrote:
> > On Tue, Jun 16, 2015 at 06:21:53PM +0200, Daniel Vetter wrote:
> > > On Tue, Jun 16, 2015 at 01:10:33PM +0100, Chris Wilson wrote:
> > > > On Mon, Jun 15, 2015 at 06:34:51PM +0200, Daniel Vetter wrote:
> > > > > On Thu, Jun 11, 2015 at 09:01:08PM +0100, Chris Wilson wrote:
> > > > > > On Thu, Jun 11, 2015 at 07:14:28PM +0300, ville.syrjala@linux.intel.com wrote:
> > > > > > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > > > > 
> > > > > > > When the GPU gets reset __i915_wait_request() returns -EIO to the
> > > > > > > mmio flip worker. Currently we WARN whenever we get anything other
> > > > > > > than 0. Ignore the -EIO too since it's a perfectly normal thing
> > > > > > > to get during a GPU reset.
> > > > > > 
> > > > > > Nak. I consider it is a bug in __i915_wait_request(). I am discussing
> > > > > > with Thomas Elf how to fix this wrt the next generation of individual
> > > > > > ring resets.
> > > > > 
> > > > > We should only get an -EIO if the gpu is truly gone, but an -EAGAIN when
> > > > > the reset is ongoing. Neither is currently handled. For lockless users we
> > > > > probably want a version of wait_request which just dtrt (of waiting for
> > > > > the reset handler to complete without trying to grab the mutex and then
> > > > > returning). Or some other means of retrying.
> > > > > 
> > > > > Returning -EIO from the low-level wait function still seems appropriate,
> > > > > but callers need to eat/handle it appropriately. WARN_ON isn't it here
> > > > > ofc.
> > > > 
> > > > Bleh, a few years ago you decided not to take the EIO handling along the
> > > > call paths that don't care.
> > > > 
> > > > I disagree. There are two classes of callers, those that care about
> > > > EIO/EAGAIN and those that simply want to know when the GPU is no longer
> > > > processing that request. That latter class is still popping up in
> > > > bugzilla with frozen displays. For the former, we actually only care
> > > > about backoff if we are holding the mutex - and that is only required
> > > > for EAGAIN. The only user that cares about EIO is throttle().
> > > 
> > > Hm, right now the design is that for non-interruptible designs we indeed
> > > return -EIO or -EAGAIN, but the reset handler will fix up outstanding
> > > flips. So I guess removing the WARN_ON here is indeed the right thing to
> > > do. We should probably change this once we have atomic (where the wait
> > > doesn't need a lock really, at least for async commits which is what
> > > matters here) and loop until completion.
> > > 
> > > I'm still vary of eating -EIO in general since it's so hard to test all
> > > this for correctness. Maybe we need a __check_wedge which can return -EIO
> > > and a check_wedge which eats it. And then decide once for where to put
> > > special checks, probably just execbuf and throttle.
> > 
> > Even execbuf really doesn't care. If the GPU didn't complete the earlier
> > request (principally for semaphore sw sync), it makes no difference for
> > us now. The content is either corrupt, or we bail when we spot the
> > wedged GPU upon writing to the ring. Reporting EIO because of an earlier
> > failure is a poor substitute for the async reset notification. But here
> > we still need EAGAIN backoff ofc.
> > 
> > I really think eating EIO is the right thing to do in most circumstances
> > and is correct with the semantics of the callers.
> 
> Well we once had the transparent sw fallback at least in the ddx for -EIO.
> Mesa never coped for obvious reasons, and given that a modern desktop
> can't survive with GL there's not all that much point any more. But still
> I think if the gpu is terminally dead we need to tell this to userspace
> somehow I think.

The DDX checks throttle() for that purposes. Error returns from
execbuffer usually indicate that the kernel is broken and we promptly
ignore it. Having execbuf report EIO is superflous since it is an async
error from before.
 
> What I'm unclear about is which ioctl that should be, and my assumption
> thus has been that it's execbuf.

Nope. It's throttle.
-Chris
Chris Wilson June 17, 2015, 2:16 p.m. UTC | #9
We have gone far off topic.

The question is how we want __i915_wait_request() to handle a wedged
GPU.

It currently reports EIO, and my argument is that is wrong wrt to the
semantics of the wait completion and that no caller actually cares about
EIO from __i915_wait_request().

* Correction: one caller cares!

If we regard a wedged GPU (and in the short term a reset is equally
terminal to an outstanding request) then the GPU can no longer be
accesing thta request and the wait can be safely completed. Imo it is
correct to return 0 in all circumstances. (Reset pending needs to return
-EAGAIN if we need to backoff, but for the lockless consumers we can
just ignore the reset notification.

That is set-domain, mmioflip, modesetting do not care if the request
succeeded, just that it completed.

Throttle() has an -EIO in its ABI for reporting a wedged GPU - this is
used by X to detect when the GPU is unusable prior to use, e.g. when
waking up, and also during its periodic flushes.

Overlay reports -EIO when turning on and hanging the GPU. To be fair, it
can equally report that failure the very next time it touches the ring.

Execbuf itself doesn't rely on wait request reporting EIO, just that we
report EIO prior to submitting work o a dead GPU/context. Execbuf uses
wait_request via two paths, syncing to an old request on another ring
and for flushing requests from the ringbuffer to make room for new
commands. This is the tricky part, the only instance where we rely on
aborting after waiting but before further operation - we won't even
notice a dead GPU prior to starting a request and running out of space
otherwise). Since it is the only instance, we can move the terminal
detection of a dead GPU from the wait request into the
ring_wait_for_space(). This is in keeping with the ethos that we do not
report -EIO until we attempt to access the GPU.
-Chris
Daniel Vetter June 17, 2015, 3:07 p.m. UTC | #10
On Wed, Jun 17, 2015 at 03:16:10PM +0100, Chris Wilson wrote:
> We have gone far off topic.
> 
> The question is how we want __i915_wait_request() to handle a wedged
> GPU.
> 
> It currently reports EIO, and my argument is that is wrong wrt to the
> semantics of the wait completion and that no caller actually cares about
> EIO from __i915_wait_request().
> 
> * Correction: one caller cares!
> 
> If we regard a wedged GPU (and in the short term a reset is equally
> terminal to an outstanding request) then the GPU can no longer be
> accesing thta request and the wait can be safely completed. Imo it is
> correct to return 0 in all circumstances. (Reset pending needs to return
> -EAGAIN if we need to backoff, but for the lockless consumers we can
> just ignore the reset notification.
> 
> That is set-domain, mmioflip, modesetting do not care if the request
> succeeded, just that it completed.
> 
> Throttle() has an -EIO in its ABI for reporting a wedged GPU - this is
> used by X to detect when the GPU is unusable prior to use, e.g. when
> waking up, and also during its periodic flushes.
> 
> Overlay reports -EIO when turning on and hanging the GPU. To be fair, it
> can equally report that failure the very next time it touches the ring.
> 
> Execbuf itself doesn't rely on wait request reporting EIO, just that we
> report EIO prior to submitting work o a dead GPU/context. Execbuf uses
> wait_request via two paths, syncing to an old request on another ring
> and for flushing requests from the ringbuffer to make room for new
> commands. This is the tricky part, the only instance where we rely on
> aborting after waiting but before further operation - we won't even
> notice a dead GPU prior to starting a request and running out of space
> otherwise). Since it is the only instance, we can move the terminal
> detection of a dead GPU from the wait request into the
> ring_wait_for_space(). This is in keeping with the ethos that we do not
> report -EIO until we attempt to access the GPU.

Ok, following up with my side of the irc discussion we've had. I agree
that there's only 2 places where we must report an EIO if the gpu is
terminally wedge:
- throttle
- execbuf

How that's done doesn't matter, and when it's racy wrt concurrent gpu
deaths that also doens't matter, i.e. we don't need wait_request to EIO
immediately as long as we check terminally_wedged somewhere in these
ioctls.

My main concern is that if we remove the EIO from wait_request we'll
accidentally also remove the EIO from execbuf. And we've had kernels where
the only EIO left was the wait_request from ring_begin ...

But if we add a small igt to manually wedge the gpu through debugfs and
then check that throttle/execbuf do EIO that risk is averted and I'd be ok
with eating EIO from wait_request with extreme prejudice. Since indeed we
still have trouble with EIO at least temporarily totally wreaking modeset
ioclts and other things that really always should work.
-Daniel
Chris Wilson June 17, 2015, 3:46 p.m. UTC | #11
On Wed, Jun 17, 2015 at 05:07:31PM +0200, Daniel Vetter wrote:
> On Wed, Jun 17, 2015 at 03:16:10PM +0100, Chris Wilson wrote:
> > We have gone far off topic.
> > 
> > The question is how we want __i915_wait_request() to handle a wedged
> > GPU.
> > 
> > It currently reports EIO, and my argument is that is wrong wrt to the
> > semantics of the wait completion and that no caller actually cares about
> > EIO from __i915_wait_request().
> > 
> > * Correction: one caller cares!
> > 
> > If we regard a wedged GPU (and in the short term a reset is equally
> > terminal to an outstanding request) then the GPU can no longer be
> > accesing thta request and the wait can be safely completed. Imo it is
> > correct to return 0 in all circumstances. (Reset pending needs to return
> > -EAGAIN if we need to backoff, but for the lockless consumers we can
> > just ignore the reset notification.
> > 
> > That is set-domain, mmioflip, modesetting do not care if the request
> > succeeded, just that it completed.
> > 
> > Throttle() has an -EIO in its ABI for reporting a wedged GPU - this is
> > used by X to detect when the GPU is unusable prior to use, e.g. when
> > waking up, and also during its periodic flushes.
> > 
> > Overlay reports -EIO when turning on and hanging the GPU. To be fair, it
> > can equally report that failure the very next time it touches the ring.
> > 
> > Execbuf itself doesn't rely on wait request reporting EIO, just that we
> > report EIO prior to submitting work o a dead GPU/context. Execbuf uses
> > wait_request via two paths, syncing to an old request on another ring
> > and for flushing requests from the ringbuffer to make room for new
> > commands. This is the tricky part, the only instance where we rely on
> > aborting after waiting but before further operation - we won't even
> > notice a dead GPU prior to starting a request and running out of space
> > otherwise). Since it is the only instance, we can move the terminal
> > detection of a dead GPU from the wait request into the
> > ring_wait_for_space(). This is in keeping with the ethos that we do not
> > report -EIO until we attempt to access the GPU.
> 
> Ok, following up with my side of the irc discussion we've had. I agree
> that there's only 2 places where we must report an EIO if the gpu is
> terminally wedge:
> - throttle
> - execbuf
> 
> How that's done doesn't matter, and when it's racy wrt concurrent gpu
> deaths that also doens't matter, i.e. we don't need wait_request to EIO
> immediately as long as we check terminally_wedged somewhere in these
> ioctls.
> 
> My main concern is that if we remove the EIO from wait_request we'll
> accidentally also remove the EIO from execbuf. And we've had kernels where
> the only EIO left was the wait_request from ring_begin ...

My plan has been to do an is-wedged check on allocating a request
(that's guarranteed to be the first action before writing into a ring),
and then double-check that no events have taken place before submitting
that request (i.e. on finishing the block). To supplement that, we do
the explicit is-wedged check after waiting for ring space.

> But if we add a small igt to manually wedge the gpu through debugfs and
> then check that throttle/execbuf do EIO that risk is averted and I'd be ok
> with eating EIO from wait_request with extreme prejudice. Since indeed we
> still have trouble with EIO at least temporarily totally wreaking modeset
> ioclts and other things that really always should work.

Well, I've successfully wedged the GPU, broke the driver but verified
that throttle reports -EIO. Just needs a little TLC.
-Chris
Shuang He June 29, 2015, 9:11 a.m. UTC | #12
Tested-By: Intel Graphics QA PRTS (Patch Regression Test System Contact: shuang.he@intel.com)
Task id: 6643
-------------------------------------Summary-------------------------------------
Platform          Delta          drm-intel-nightly          Series Applied
ILK                                  302/302              302/302
SNB                                  312/316              312/316
IVB                                  343/343              343/343
BYT                 -2              287/287              285/287
HSW                                  380/380              380/380
-------------------------------------Detailed-------------------------------------
Platform  Test                                drm-intel-nightly          Series Applied
*BYT  igt@gem_partial_pwrite_pread@reads      PASS(1)      FAIL(1)
*BYT  igt@gem_tiled_partial_pwrite_pread@reads      PASS(1)      FAIL(1)
Note: You need to pay more attention to line start with '*'
diff mbox

Patch

diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 9bf759c..3cd0935 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11327,11 +11327,13 @@  static void intel_mmio_flip_work_func(struct work_struct *work)
 	struct intel_mmio_flip *mmio_flip =
 		container_of(work, struct intel_mmio_flip, work);
 
-	if (mmio_flip->req)
-		WARN_ON(__i915_wait_request(mmio_flip->req,
-					    mmio_flip->crtc->reset_counter,
-					    false, NULL,
-					    &mmio_flip->i915->rps.mmioflips));
+	if (mmio_flip->req) {
+		int ret = __i915_wait_request(mmio_flip->req,
+					      mmio_flip->crtc->reset_counter,
+					      false, NULL,
+					      &mmio_flip->i915->rps.mmioflips);
+		WARN_ON(ret != 0 && ret != -EIO);
+	}
 
 	intel_do_mmio_flip(mmio_flip->crtc);