
drm/i915: Mitigate retirement starvation a bit

Message ID 1454588724-34816-1-git-send-email-tvrtko.ursulin@linux.intel.com (mailing list archive)
State New, archived

Commit Message

Tvrtko Ursulin Feb. 4, 2016, 12:25 p.m. UTC
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

In execlists mode the internal housekeeping of discarded
requests (and therefore contexts and VMAs) relies solely on the
retire worker, which can be prevented from running simply by
being unlucky while busy clients are hammering on the big lock.

A prime example is the gem_close_race IGT, where this effect
causes internal lists to grow to epic proportions, with the
consequence that object VMA traversal grows exponentially,
resulting in a test runtime of tens of minutes. Memory use is
also very high and a limiting factor on some platforms.

Since we do not want to run this internal housekeeping more
frequently, due to concerns that it may affect performance, and
since the scenario is statistically unlikely in real workloads,
one possible workaround is to run it when new client handles
are opened.

This will solve the issues with this particular test case,
making it complete in tens of seconds instead of tens of
minutes, and will not add any run-time penalty to running
clients.

It can only slightly slow down new client startup, but on a
realistically loaded system we expect this to be insignificant.
Even with heavy rendering in progress we can have perhaps up to
several thousand requests pending retirement, which, at a
typical retirement cost of 80ns to 1us per request, adds up to
no more than a few milliseconds and so is not significant.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Testcase: igt/gem_close_race/gem-close-race
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem.c | 4 ++++
 1 file changed, 4 insertions(+)
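
For context, "retirement" here is the housekeeping that frees completed
requests and drops the context and VMA references they hold; its
per-request cost is what the 80ns to 1us figure above refers to. Below is
a very rough conceptual sketch (simplified pseudo-kernel C, not the actual
i915 implementation; all names are illustrative):

/*
 * Illustrative sketch only: the real i915_gem_retire_requests()
 * walks every ring's request list under dev->struct_mutex.  The
 * types and helpers below are simplified and do not match the driver.
 */
static void retire_completed_requests(struct engine *engine)
{
	struct request *rq, *next;

	list_for_each_entry_safe(rq, next, &engine->request_list, link) {
		if (!request_completed(rq))
			break;			/* requests complete in order */

		list_del(&rq->link);		/* unlink from the engine list */
		drop_context_and_vma_refs(rq);	/* release what the request pinned */
		free_request(rq);		/* the per-request cost quoted above */
	}
}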

Comments

Chris Wilson Feb. 4, 2016, 12:40 p.m. UTC | #1
On Thu, Feb 04, 2016 at 12:25:24PM +0000, Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> In execlists mode the internal housekeeping of discarded
> requests (and therefore contexts and VMAs) relies solely on the
> retire worker, which can be prevented from running simply by
> being unlucky while busy clients are hammering on the big lock.
> 
> A prime example is the gem_close_race IGT, where this effect
> causes internal lists to grow to epic proportions, with the
> consequence that object VMA traversal grows exponentially,
> resulting in a test runtime of tens of minutes. Memory use is
> also very high and a limiting factor on some platforms.
> 
> Since we do not want to run this internal housekeeping more
> frequently, due to concerns that it may affect performance, and
> since the scenario is statistically unlikely in real workloads,
> one possible workaround is to run it when new client handles
> are opened.
> 
> This will solve the issues with this particular test case,
> making it complete in tens of seconds instead of tens of
> minutes, and will not add any run-time penalty to running
> clients.
> 
> It can only slightly slow down new client startup, but on a
> realistically loaded system we expect this to be insignificant.
> Even with heavy rendering in progress we can have perhaps up to
> several thousand requests pending retirement, which, at a
> typical retirement cost of 80ns to 1us per request, adds up to
> no more than a few milliseconds and so is not significant.
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Testcase: igt/gem_close_race/gem-close-race
> Cc: Chris Wilson <chris@chris-wilson.co.uk>

Still doesn't fix actual workloads where this is demonstrably bad, which
can be demonstrated with a single fd.

The most effective treatment I found is moving the retire-requests from
execbuf (which exists for similar reasons) to get-pages.

http://cgit.freedesktop.org/~ickle/linux-2.6/commit/?h=breadcrumbs&id=75f4e53f1c9141ba2dd8847396a1bcc8dbeecd55

I would also suggest

http://cgit.freedesktop.org/~ickle/linux-2.6/commit/?h=breadcrumbs&id=7c1b679c76524780f8e15cc8b0c6652539182d51

for the reasons above.
-Chris
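
[For readers not following the links: the shape of the suggestion is
roughly as follows. This is a speculative sketch of the idea only, not
the linked commits; the exact placement inside
i915_gem_object_get_pages() is an assumption.]

/*
 * Speculative sketch of "retire on get-pages": flush completed
 * requests before allocating new backing pages, so that memory they
 * still hold can be released first.  Surrounding code is elided and
 * the placement is an assumption, not the linked commit.
 */
int i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
{
	struct drm_device *dev = obj->base.dev;

	/* opportunistic housekeeping, moved here from execbuffer */
	i915_gem_retire_requests(dev);

	/* ... existing page allocation path would continue here ... */
	return 0;
}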
Tvrtko Ursulin Feb. 4, 2016, 1:30 p.m. UTC | #2
On 04/02/16 12:40, Chris Wilson wrote:
> On Thu, Feb 04, 2016 at 12:25:24PM +0000, Tvrtko Ursulin wrote:
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> In execlists mode the internal housekeeping of discarded
>> requests (and therefore contexts and VMAs) relies solely on the
>> retire worker, which can be prevented from running simply by
>> being unlucky while busy clients are hammering on the big lock.
>>
>> A prime example is the gem_close_race IGT, where this effect
>> causes internal lists to grow to epic proportions, with the
>> consequence that object VMA traversal grows exponentially,
>> resulting in a test runtime of tens of minutes. Memory use is
>> also very high and a limiting factor on some platforms.
>>
>> Since we do not want to run this internal housekeeping more
>> frequently, due to concerns that it may affect performance, and
>> since the scenario is statistically unlikely in real workloads,
>> one possible workaround is to run it when new client handles
>> are opened.
>>
>> This will solve the issues with this particular test case,
>> making it complete in tens of seconds instead of tens of
>> minutes, and will not add any run-time penalty to running
>> clients.
>>
>> It can only slightly slow down new client startup, but on a
>> realistically loaded system we expect this to be insignificant.
>> Even with heavy rendering in progress we can have perhaps up to
>> several thousand requests pending retirement, which, at a
>> typical retirement cost of 80ns to 1us per request, adds up to
>> no more than a few milliseconds and so is not significant.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>> Testcase: igt/gem_close_race/gem-close-race
>> Cc: Chris Wilson <chris@chris-wilson.co.uk>
>
> Still doesn't fix actual workloads where this is demonstrably bad, which
> can be demonstrated with a single fd.

Which are those?

> The most effective treatment I found is moving the retire-requests from
> execbuf (which exists for similar reasons) to get-pages.
>
> http://cgit.freedesktop.org/~ickle/linux-2.6/commit/?h=breadcrumbs&id=75f4e53f1c9141ba2dd8847396a1bcc8dbeecd55

I struggle to understand how it is OK to stall get pages or even the 
object close when you objected to those in the past?

Regards,

Tvrtko
Chris Wilson Feb. 4, 2016, 1:37 p.m. UTC | #3
On Thu, Feb 04, 2016 at 01:30:30PM +0000, Tvrtko Ursulin wrote:
> 
> 
> On 04/02/16 12:40, Chris Wilson wrote:
> >On Thu, Feb 04, 2016 at 12:25:24PM +0000, Tvrtko Ursulin wrote:
> >>From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>
> >>In execlists mode the internal housekeeping of discarded
> >>requests (and therefore contexts and VMAs) relies solely on the
> >>retire worker, which can be prevented from running simply by
> >>being unlucky while busy clients are hammering on the big lock.
> >>
> >>A prime example is the gem_close_race IGT, where this effect
> >>causes internal lists to grow to epic proportions, with the
> >>consequence that object VMA traversal grows exponentially,
> >>resulting in a test runtime of tens of minutes. Memory use is
> >>also very high and a limiting factor on some platforms.
> >>
> >>Since we do not want to run this internal housekeeping more
> >>frequently, due to concerns that it may affect performance, and
> >>since the scenario is statistically unlikely in real workloads,
> >>one possible workaround is to run it when new client handles
> >>are opened.
> >>
> >>This will solve the issues with this particular test case,
> >>making it complete in tens of seconds instead of tens of
> >>minutes, and will not add any run-time penalty to running
> >>clients.
> >>
> >>It can only slightly slow down new client startup, but on a
> >>realistically loaded system we expect this to be insignificant.
> >>Even with heavy rendering in progress we can have perhaps up to
> >>several thousand requests pending retirement, which, at a
> >>typical retirement cost of 80ns to 1us per request, adds up to
> >>no more than a few milliseconds and so is not significant.
> >>
> >>Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>Testcase: igt/gem_close_race/gem-close-race
> >>Cc: Chris Wilson <chris@chris-wilson.co.uk>
> >
> >Still doesn't fix actual workloads where this is demonstrably bad, which
> >can be demonstrated with a single fd.
> 
> Which are those?

OglDrvCtx and clones.

> >The most effective treatment I found is moving the retire-requests from
> >execbuf (which exists for similar reasons) to get-pages.
> >
> >http://cgit.freedesktop.org/~ickle/linux-2.6/commit/?h=breadcrumbs&id=75f4e53f1c9141ba2dd8847396a1bcc8dbeecd55
> 
> I struggle to understand how it is OK to stall get pages or even the
> object close when you objected to those in the past?

Benchmarks. Taking a hit here avoids situations that end up invoking the
shrinker.
-Chris
Chris Wilson Feb. 4, 2016, 1:46 p.m. UTC | #4
On Thu, Feb 04, 2016 at 01:30:30PM +0000, Tvrtko Ursulin wrote:
> On 04/02/16 12:40, Chris Wilson wrote:
> >The most effective treatment I found is moving the retire-requests from
> >execbuf (which exists for similar reasons) to get-pages.
> >
> >http://cgit.freedesktop.org/~ickle/linux-2.6/commit/?h=breadcrumbs&id=75f4e53f1c9141ba2dd8847396a1bcc8dbeecd55
> 
> I struggle to understand how it is OK to stall get pages or even the
> object close when you objected to those in the past?

Note also this reduces the number of those stalls.
-Chris

Patch

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index d46a0462c765..f02991d28048 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -5162,6 +5162,10 @@  int i915_gem_open(struct drm_device *dev, struct drm_file *file)
 	if (ret)
 		kfree(file_priv);
 
+	mutex_lock(&dev->struct_mutex);
+	i915_gem_retire_requests(dev);
+	mutex_unlock(&dev->struct_mutex);
+
 	return ret;
 }
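
[Usage note: with this patch the housekeeping runs on every new DRM file
handle, so even a client that never submits work triggers one retirement
pass. A minimal, hypothetical userspace illustration follows; the device
node path is the usual i915 one and is an assumption.]

/* Hypothetical illustration: opening the DRM node reaches the
 * driver's open hook, which with this patch also runs
 * i915_gem_retire_requests() once under struct_mutex. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/dev/dri/card0", O_RDWR);	/* ends up in i915_gem_open() */

	if (fd < 0)
		return 1;

	/* ... GEM object creation and execbuf would normally go here ... */

	close(fd);
	return 0;
}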