
[2/2] drm/i915: Mark pinned shmemfs pages as unevictable

Message ID 20181016174300.197906-3-vovoy@chromium.org (mailing list archive)
State New, archived
Series shmem, drm/i915: Mark pinned shmemfs pages as unevictable

Commit Message

Kuo-Hsin Yang Oct. 16, 2018, 5:43 p.m. UTC
The i915 driver uses shmemfs to allocate backing storage for gem objects.
These shmemfs pages can be pinned (their refcount increased) by
shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
wastes a lot of time scanning these pinned pages. Mark these pinned
pages as unevictable to speed up vmscan.

Signed-off-by: Kuo-Hsin Yang <vovoy@chromium.org>
---
 drivers/gpu/drm/i915/i915_gem.c | 8 ++++++++
 1 file changed, 8 insertions(+)

Comments

Michal Hocko Oct. 16, 2018, 6:21 p.m. UTC | #1
On Wed 17-10-18 01:43:00, Kuo-Hsin Yang wrote:
> The i915 driver uses shmemfs to allocate backing storage for gem objects.
> These shmemfs pages can be pinned (their refcount increased) by
> shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
> wastes a lot of time scanning these pinned pages. Mark these pinned
> pages as unevictable to speed up vmscan.

I would squash the two patches into a single one. One more thing to be
careful about here: unless I miss something, such a page is not
migratable, so it shouldn't be allocated from a movable zone. Does
mapping_gfp_constraint contain __GFP_MOVABLE? If yes, we want to drop it
as well. Other than that the patch makes sense, with my very limited
knowledge of the i915 code of course.
Chris Wilson Oct. 16, 2018, 6:31 p.m. UTC | #2
Quoting Michal Hocko (2018-10-16 19:21:55)
> On Wed 17-10-18 01:43:00, Kuo-Hsin Yang wrote:
> > The i915 driver uses shmemfs to allocate backing storage for gem objects.
> > These shmemfs pages can be pinned (their refcount increased) by
> > shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
> > wastes a lot of time scanning these pinned pages. Mark these pinned
> > pages as unevictable to speed up vmscan.
> 
> I would squash the two patches into a single one. One more thing to be
> careful about here: unless I miss something, such a page is not
> migratable, so it shouldn't be allocated from a movable zone. Does
> mapping_gfp_constraint contain __GFP_MOVABLE? If yes, we want to drop it
> as well. Other than that the patch makes sense, with my very limited
> knowledge of the i915 code of course.

They are not migratable today. But we have proposed hooking up
.migratepage and setting __GFP_MOVABLE, which would then include unlocking
the mapping at migrate time.

Fwiw, the shmem_unlock_mapping() call feels quite expensive, almost
nullifying the advantage gained from not walking the lists in reclaim.
I'll have better numbers in a couple of days.
-Chris
Michal Hocko Oct. 16, 2018, 7:13 p.m. UTC | #3
On Tue 16-10-18 19:31:06, Chris Wilson wrote:
> Quoting Michal Hocko (2018-10-16 19:21:55)
> > On Wed 17-10-18 01:43:00, Kuo-Hsin Yang wrote:
> > > The i915 driver uses shmemfs to allocate backing storage for gem objects.
> > > These shmemfs pages can be pinned (their refcount increased) by
> > > shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
> > > wastes a lot of time scanning these pinned pages. Mark these pinned
> > > pages as unevictable to speed up vmscan.
> > 
> > I would squash the two patches into a single one. One more thing to be
> > careful about here: unless I miss something, such a page is not
> > migratable, so it shouldn't be allocated from a movable zone. Does
> > mapping_gfp_constraint contain __GFP_MOVABLE? If yes, we want to drop it
> > as well. Other than that the patch makes sense, with my very limited
> > knowledge of the i915 code of course.
> 
> They are not migratable today. But we have proposed hooking up
> .migratepage and setting __GFP_MOVABLE, which would then include unlocking
> the mapping at migrate time.

If the mapping_gfp doesn't include __GFP_MOVABLE today, then there is no
issue of the kind I had in mind.
Chris Wilson Oct. 18, 2018, 6:56 a.m. UTC | #4
Quoting Chris Wilson (2018-10-16 19:31:06)
> Fwiw, the shmem_unlock_mapping() call feels quite expensive, almost
> nullifying the advantage gained from not walking the lists in reclaim.
> I'll have better numbers in a couple of days.

Using a test ("igt/benchmarks/gem_syslatency -t 120 -b -m" on kbl)
consisting of cyclictest with a background load of trying to allocate +
populate 2MiB (to hit thp) while catting all files to /dev/null, the
result of using mapping_set_unevictable is mixed.

Each test run consists of running cyclictest for 120s, measuring the mean
and maximum wakeup latency, and then repeating that 120 times.

x baseline-mean.txt # no i915 activity
+ tip-mean.txt # current stock i915 with a continuous load
+------------------------------------------------------------------------+
| x      +                                                               |
| x      +                                                               |
|xx      +                                                               |
|xx      +                                                               |
|xx      +                                                               |
|xx     ++                                                               |
|xx    +++                                                               |
|xx    +++                                                               |
|xx    +++                                                               |
|xx    +++                                                               |
|xx    +++                                                               |
|xx    ++++                                                              |
|xx   +++++                                                              |
|xx  ++++++                                                              |
|xx  ++++++                                                              |
|xx  ++++++                                                              |
|xx  ++++++                                                              |
|xx  ++++++                                                              |
|xx  +++++++ +                                                           |
|xx ++++++++ +                                                           |
|xx ++++++++++                                                           |
|xx+++++++++++ +     +                                                   |
|xx+++++++++++ +     +  +          +      +       ++                    +|
| A                                                                      |
||______M_A_________|                                                    |
+------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x 120       359.153       876.915       863.548     778.80319     186.15875
+ 120      2475.318     73172.303      7666.812     9579.4671      9552.865

Our target then is 863us, but currently i915 adds 7ms of uninterruptible
delay on hitting the shrinker.

x baseline-mean.txt
+ mapping-mean.txt # applying the mapping_set_unevictable patch
* tip-mean.txt
+------------------------------------------------------------------------+
| x      *         +                                                     |
| x      *         +                                                     |
|xx      *         +                                                     |
|xx      *         +                                                     |
|xx      *         +                                                     |
|xx     **         +                                                     |
|xx    ***         ++                                                    |
|xx    ***         ++                                                    |
|xx    ***         ++                                                    |
|xx    ***         ++                                                    |
|xx    ***         ++                                                    |
|xx    ****  +     ++                                                    |
|xx   *****+ ++    ++                                                    |
|xx  ******+ ++    ++                                                    |
|xx  ******+ ++  + ++                                                    |
|xx  ******+ ++  + ++                                                    |
|xx  ******+ ++  ++++                                                    |
|xx  ******+ ++  ++++                                                    |
|xx  ******* *+  ++++                                                    |
|xx ******** *+ +++++                                                    |
|xx **********+ +++++                                                    |
|xx***********+*+++++*                                                   |
|xx***********+*+++++*  *  +       *      *       **                    *|
| A                                                                      |
|          |___AM___|                                                    |
||______M_A_________|                                                    |
+------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x 120       359.153       876.915       863.548     778.80319     186.15875
+ 120      3291.633     26644.894     15829.186     14654.781     4466.6997
* 120      2475.318     73172.303      7666.812     9579.4671      9552.865

This shows that if we use mapping_set_unevictable() +
shmem_unlock_mapping(), we add a further 8ms of uninterruptible delay to
the system... That's the opposite of our goal! ;)

x baseline-mean.txt
+ lock_vma-mean.txt # the old approach of pinning each page
* tip-mean.txt
+------------------------------------------------------------------------+
| *+     *                                                               |
| *+   * *                                                               |
| *+   * *                                                               |
| *+   * *                                                               |
| *+   ***                                                               |
| *+   ***                                                               |
| *+   ***                                                               |
| *+   ***                                                               |
| *+   ***                                                               |
| *+   ***                                                               |
| *+   ***                                                               |
| *+   ****                                                              |
| *+  *****                                                              |
| *+  ******                                                             |
| *+  ****** *                                                           |
| *+  ****** *                                                           |
| *+ ******* *                                                           |
| *+******** *                                                           |
| *+******** *                                                           |
| *+******** *                                                           |
| *+******** * *     *                                                   |
| *+******** * *   + *  *          *      *       * *                   *|
| A                                                                      |
||MA|                                                                    |
||_______M_A________|                                                    |
+------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x 120       359.153       876.915       863.548     778.80319     186.15875
+ 120       511.415     18757.367      1276.302     1416.0016     1679.3965
* 120      2475.318     73172.303      7666.812     9579.4671      9552.865

By contrast, the previous approach of using mlock_page_vma() does
dramatically reduce the uninterruptible delay -- which suggests that
mapping_set_unevictable() isn't keeping our unshrinkable pages off the
shrinker LRU.

However, if instead of looking at the average uninterruptible delay
during the 120s of cyclictest we look at the worst case, things get a
little more interesting. Currently i915 is terrible.

x baseline-max.txt
+ tip-max.txt
+------------------------------------------------------------------------+
|      *                                                                 |
[snip 100 lines]
|      *                                                                 |
|      *                                                                 |
|      *                                                                 |
|      *                                                                 |
|      *                                                                 |
|      *                                                                 |
|      *                                                                 |
|      *                                                                 |
|      * +++      ++ +           +  +      +                            +|
|      A                                                                 |
||_____M_A_______|                                                       |
+------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x 120          7391         58543         51953     51564.033     5044.6375
+ 120       2284928  6.752085e+08       3385097      20825362      80352645

Worst case with no i915 is 52ms, but as soon as we load up i915 with
some work, the worst-case uninterruptible delay is on average 20s!!! As
suggested by the median, the data is severely skewed by a few outliers.
(The worst worst case is so bad that khungtaskd often makes an appearance.)

x baseline-max.txt
+ mapping-max.txt
* tip-max.txt
+------------------------------------------------------------------------+
|      *                                                                 |
[snip 100 lines]
|      *                                                                 |
|      *                                                                 |
|      *                                                                 |
|      *                                                                 |
|      *                                                                 |
|      *                                                                 |
|      *                                                                 |
|      *+                                                                |
|      *+***      ** *           * +*      *                            *|
|      A                                                                 |
|    |_A__|                                                              |
||_____M_A_______|                                                       |
+------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x 120          7391         58543         51953     51564.033     5044.6375
+ 120       3088140 2.9181602e+08       4022581     6528993.3      26278426
* 120       2284928  6.752085e+08       3385097      20825362      80352645

So while the mapping_set_unevictable patch did reduce the maximum
observed delay within the 4 hour sample, on average (median, to exclude
those worst worst-case outliers) it still fares worse than stock i915.
The mlock_page_vma() approach has no impact on the worst case wrt stock.

My conclusion is that the mapping_set_unevictable patch makes both the
average and worst-case uninterruptible latency (as observed by other
users of the system) significantly worse. (Although the maximum latency
is not stable enough to draw a real conclusion, other than that i915 is
shockingly terrible.)
-Chris
Michal Hocko Oct. 18, 2018, 8:15 a.m. UTC | #5
On Thu 18-10-18 07:56:45, Chris Wilson wrote:
> Quoting Chris Wilson (2018-10-16 19:31:06)
> > Fwiw, the shmem_unlock_mapping() call feels quite expensive, almost
> > nullifying the advantage gained from not walking the lists in reclaim.
> > I'll have better numbers in a couple of days.
> 
> Using a test ("igt/benchmarks/gem_syslatency -t 120 -b -m" on kbl)
> consisting of cyclictest with a background load of trying to allocate +
> populate 2MiB (to hit thp) while catting all files to /dev/null, the
> result of using mapping_set_unevictable is mixed.

I haven't really read through your report completely yet, but I wanted to
point out that the above test scenario is unlikely to show the real effect
of the LRU scanning overhead, because shmem pages live on the anonymous
LRU list. With plenty of file page cache available we do not even scan
the anonymous LRU lists. You would have to generate a swapout workload to
test this properly.

On the other hand, if mapping_set_unevictable really has a measurably bad
performance impact, then this is probably not worth much, because most
workloads are swap-modest.

Patch

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index fcc73a6ab503..e0ff5b736128 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2390,6 +2390,7 @@  i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
 {
 	struct sgt_iter sgt_iter;
 	struct page *page;
+	struct address_space *mapping;
 
 	__i915_gem_object_release_shmem(obj, pages, true);
 
@@ -2409,6 +2410,10 @@  i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
 	}
 	obj->mm.dirty = false;
 
+	mapping = file_inode(obj->base.filp)->i_mapping;
+	mapping_clear_unevictable(mapping);
+	shmem_unlock_mapping(mapping);
+
 	sg_free_table(pages);
 	kfree(pages);
 }
@@ -2551,6 +2556,7 @@  static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 	 * Fail silently without starting the shrinker
 	 */
 	mapping = obj->base.filp->f_mapping;
+	mapping_set_unevictable(mapping);
 	noreclaim = mapping_gfp_constraint(mapping, ~__GFP_RECLAIM);
 	noreclaim |= __GFP_NORETRY | __GFP_NOWARN;
 
@@ -2664,6 +2670,8 @@  static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 err_pages:
 	for_each_sgt_page(page, sgt_iter, st)
 		put_page(page);
+	mapping_clear_unevictable(mapping);
+	shmem_unlock_mapping(mapping);
 	sg_free_table(st);
 	kfree(st);