[3/3] drm/ttm: roundup the shrink request to prevent skip huge pool

Message ID 1511351069-4757-3-git-send-email-Hongbo.He@amd.com (mailing list archive)
State New, archived

Commit Message

He, Hongbo Nov. 22, 2017, 11:44 a.m. UTC
e.g. if the shrink request is less than 512, the logic will skip the huge pool

Change-Id: Id8bd4d1ecff9f3ab14355e2dbd1c59b9fe824e01
Signed-off-by: Roger He <Hongbo.He@amd.com>
---
 drivers/gpu/drm/ttm/ttm_page_alloc.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)
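
For context, a minimal sketch of the failure mode (the order-9 huge pool, i.e.
512 pages per huge page, is an assumption for illustration and not stated in
the patch): the old code computed nr_free_pool = nr_free >> pool->order, so
any request smaller than one huge page shifts down to zero and the huge pool
is skipped.

#include <stdio.h>

int main(void)
{
	unsigned shrink_pages = 128;	/* hypothetical request, < 512 */
	unsigned order = 9;		/* assumed huge-pool order */

	/* old logic: a small request right-shifts to zero, pool skipped */
	printf("nr_free_pool = %u\n", shrink_pages >> order);	/* prints 0 */
	return 0;
}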

Comments

Chris Wilson Nov. 22, 2017, 11:52 a.m. UTC | #1
Quoting Roger He (2017-11-22 11:44:29)
> e.g. if the shrink request is less than 512, the logic will skip the huge pool

You should also tell the shrinker that you skipped objects so that it
knows to accumulate the request for the next pass. See
shrinkctl->nr_scanned.
-Chris
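
As a reference for Chris's point, a hypothetical sketch (not the actual TTM
code; the pool layout and helpers are invented for illustration) of a scan
callback that reports the work it actually did through sc->nr_scanned, so the
shrinker core can accumulate skipped work for the next pass:

#include <linux/kernel.h>
#include <linux/shrinker.h>

/* Hypothetical pools, for illustration only. */
#define DEMO_NUM_POOLS 6

struct demo_pool {
	unsigned order;			/* pages per object: 1 << order */
	unsigned long free_count;	/* objects sitting in the pool */
};

static struct demo_pool demo_pools[DEMO_NUM_POOLS];

static unsigned long demo_shrink_scan(struct shrinker *shrink,
				      struct shrink_control *sc)
{
	unsigned long freed = 0, scanned = 0;
	unsigned i;

	for (i = 0; i < DEMO_NUM_POOLS && scanned < sc->nr_to_scan; ++i) {
		struct demo_pool *pool = &demo_pools[i];
		unsigned long want = (sc->nr_to_scan - scanned) >> pool->order;
		unsigned long done;

		if (!want)
			continue;	/* skipped: adds nothing to scanned */

		done = min(want, pool->free_count);
		pool->free_count -= done;
		scanned += done << pool->order;
		freed += done << pool->order;
	}

	/*
	 * Report how much was actually examined; any shortfall versus
	 * sc->nr_to_scan is carried over to the next pass by the
	 * shrinker core instead of being lost.
	 */
	sc->nr_scanned = scanned;

	return freed;
}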

Patch

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index 25b0fa5..1543532 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -442,17 +442,19 @@  ttm_pool_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 	/* select start pool in round robin fashion */
 	for (i = 0; i < NUM_POOLS; ++i) {
 		unsigned nr_free = shrink_pages;
+		unsigned page_nr;
+
 		if (shrink_pages == 0)
 			break;
 
 		pool = &_manager->pools[(i + pool_offset)%NUM_POOLS];
+		page_nr = (1 << pool->order);
 		/* OK to use static buffer since global mutex is held. */
-		nr_free_pool = (nr_free >> pool->order);
-		if (nr_free_pool == 0)
-			continue;
-
+		nr_free_pool = roundup(nr_free, page_nr) >> pool->order;
 		shrink_pages = ttm_page_pool_free(pool, nr_free_pool, true);
-		freed += ((nr_free_pool - shrink_pages) << pool->order);
+		freed += (nr_free_pool - shrink_pages) << pool->order;
+		if (freed >= sc->nr_to_scan)
+			break;
 	}
 	mutex_unlock(&lock);
 	return freed;
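
Worked through with hypothetical numbers (a 100-page request against an
order-9 pool; both values are assumptions for illustration), the rounded-up
request now reaches the huge pool, and the new early break stops the loop
once the request has been met:

#include <stdio.h>

/* Same semantics as the kernel's roundup() macro. */
#define roundup(x, y) ((((x) + (y) - 1) / (y)) * (y))

int main(void)
{
	unsigned nr_to_scan = 100;	/* hypothetical shrink request */
	unsigned order = 9;		/* assumed huge-pool order */
	unsigned page_nr = 1u << order;	/* 512 pages per huge page */

	unsigned nr_free_pool = roundup(nr_to_scan, page_nr) >> order;
	unsigned freed = nr_free_pool << order;

	/* prints "nr_free_pool = 1, freed = 512" */
	printf("nr_free_pool = %u, freed = %u\n", nr_free_pool, freed);

	/* freed (512) >= nr_to_scan (100), so the added break fires and
	 * the remaining pools are left untouched. */
	return 0;
}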