From patchwork Wed Dec 15 11:07:45 2021
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 12678057
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Tvrtko Ursulin, dri-devel@lists.freedesktop.org
Subject: [PATCH 1/2] drm/i915: remove writeback hook
Date: Wed, 15 Dec 2021 11:07:45 +0000
Message-Id: <20211215110746.865-1-matthew.auld@intel.com>

Ditch the writeback hook and drop i915_gem_object_writeback(). We already
support the shrinker_release_pages hook which can just call shmem_writeback
directly.
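For orientation, a minimal sketch (not part of the patch; the helper name is
invented for illustration) of the shape this moves towards: the shmem backend
decides about truncation and writeback itself inside its
shrinker_release_pages() implementation, so the generic
i915_gem_object_writeback() indirection can go away. The real hunk is in
i915_gem_shmem.c below.

/*
 * Illustration only -- see the i915_gem_shmem.c hunk below for the real
 * implementation. The backend hook truncates or writes back directly, so
 * the generic i915_gem_object_writeback() wrapper is no longer needed.
 * no_gpu_wait is only of interest to backends such as TTM.
 */
static int example_shmem_release_pages(struct drm_i915_gem_object *obj,
				       bool no_gpu_wait, bool should_writeback)
{
	if (obj->mm.madv == I915_MADV_DONTNEED)
		return i915_gem_object_truncate(obj);	/* drop the backing store */

	if (should_writeback)
		shmem_writeback(obj);			/* push clean pages out to swap */

	return 0;
}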
Suggested-by: Tvrtko Ursulin
Signed-off-by: Matthew Auld
Reviewed-by: Tvrtko Ursulin
---
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  1 -
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |  1 -
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     | 10 ----------
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c     | 19 ++++++++++++++++++-
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c  | 12 ------------
 5 files changed, 18 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 66f20b803b01..aaf9183e601b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -455,7 +455,6 @@ i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj)
 
 int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj);
 int i915_gem_object_truncate(struct drm_i915_gem_object *obj);
-void i915_gem_object_writeback(struct drm_i915_gem_object *obj);
 
 /**
  * i915_gem_object_pin_map - return a contiguous mapping of the entire object
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index f9f7e44099fe..00c844caeabd 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -57,7 +57,6 @@ struct drm_i915_gem_object_ops {
 	void (*put_pages)(struct drm_i915_gem_object *obj,
 			  struct sg_table *pages);
 	int (*truncate)(struct drm_i915_gem_object *obj);
-	void (*writeback)(struct drm_i915_gem_object *obj);
 	int (*shrinker_release_pages)(struct drm_i915_gem_object *obj,
 				      bool no_gpu_wait,
 				      bool should_writeback);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index 89b70f5cde7a..820eee5e954e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -168,16 +168,6 @@ int i915_gem_object_truncate(struct drm_i915_gem_object *obj)
 	return 0;
 }
 
-/* Try to discard unwanted pages */
-void i915_gem_object_writeback(struct drm_i915_gem_object *obj)
-{
-	assert_object_held_shared(obj);
-	GEM_BUG_ON(i915_gem_object_has_pages(obj));
-
-	if (obj->ops->writeback)
-		obj->ops->writeback(obj);
-}
-
 static void __i915_gem_object_reset_page_iter(struct drm_i915_gem_object *obj)
 {
 	struct radix_tree_iter iter;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index cc9fe258fba7..7fdf4fa10b0e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -331,6 +331,23 @@ shmem_writeback(struct drm_i915_gem_object *obj)
 	__shmem_writeback(obj->base.size, obj->base.filp->f_mapping);
 }
 
+static int shmem_shrinker_release_pages(struct drm_i915_gem_object *obj,
+					bool no_gpu_wait,
+					bool writeback)
+{
+	switch (obj->mm.madv) {
+	case I915_MADV_DONTNEED:
+		return i915_gem_object_truncate(obj);
+	case __I915_MADV_PURGED:
+		return 0;
+	}
+
+	if (writeback)
+		shmem_writeback(obj);
+
+	return 0;
+}
+
 void
 __i915_gem_object_release_shmem(struct drm_i915_gem_object *obj,
 				struct sg_table *pages,
@@ -503,7 +520,7 @@ const struct drm_i915_gem_object_ops i915_gem_shmem_ops = {
 	.get_pages = shmem_get_pages,
 	.put_pages = shmem_put_pages,
 	.truncate = shmem_truncate,
-	.writeback = shmem_writeback,
+	.shrinker_release_pages = shmem_shrinker_release_pages,
 	.pwrite = shmem_pwrite,
 	.pread = shmem_pread,
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
index 157a9765f483..fd54e05521f6 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
@@ -61,18 +61,6 @@ static int try_to_writeback(struct drm_i915_gem_object *obj, unsigned int flags
 		return obj->ops->shrinker_release_pages(obj,
 							!(flags & I915_SHRINK_ACTIVE),
 							flags & I915_SHRINK_WRITEBACK);
-
-	switch (obj->mm.madv) {
-	case I915_MADV_DONTNEED:
-		i915_gem_object_truncate(obj);
-		return 0;
-	case __I915_MADV_PURGED:
-		return 0;
-	}
-
-	if (flags & I915_SHRINK_WRITEBACK)
-		i915_gem_object_writeback(obj);
-
 	return 0;
 }
 

From patchwork Wed Dec 15 11:07:46 2021
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 12678055
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Tvrtko Ursulin, dri-devel@lists.freedesktop.org
Subject: [PATCH 2/2] drm/i915: clean up shrinker_release_pages
Date: Wed, 15 Dec 2021 11:07:46 +0000
Message-Id: <20211215110746.865-2-matthew.auld@intel.com>
In-Reply-To: <20211215110746.865-1-matthew.auld@intel.com>
References: <20211215110746.865-1-matthew.auld@intel.com>

Add some proper flags for the different modes, and shorten the name to
something more snappy.
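As a quick illustration of the new interface (a sketch only, not part of the
patch; the helper name below is invented), a shrinker-side caller translates
the existing I915_SHRINK_* flags into the new per-object
I915_GEM_OBJECT_SHRINK_* flags and hands everything to the single
obj->ops->shrink() hook, much like the try_to_writeback() hunk further down
does for real:

/* Hypothetical helper, for illustration only: map shrinker-level flags to
 * the new per-object shrink flags and invoke the backend hook. */
static int example_shrink_object(struct drm_i915_gem_object *obj,
				 unsigned int shrinker_flags)
{
	unsigned int flags = 0;

	if (!obj->ops->shrink)
		return 0;

	if (!(shrinker_flags & I915_SHRINK_ACTIVE))
		flags |= I915_GEM_OBJECT_SHRINK_NO_GPU_WAIT;	/* don't stall on the GPU */

	if (shrinker_flags & I915_SHRINK_WRITEBACK)
		flags |= I915_GEM_OBJECT_SHRINK_WRITEBACK;	/* try to swap out the pages */

	return obj->ops->shrink(obj, flags);
}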
Suggested-by: Tvrtko Ursulin
Signed-off-by: Matthew Auld
Reviewed-by: Thomas Hellström
---
 .../gpu/drm/i915/gem/i915_gem_object_types.h | 23 ++++++++++++++++---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c    |  8 +++----
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c | 16 +++++++++----
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c      | 10 ++++----
 4 files changed, 39 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 00c844caeabd..6f446cca4322 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -57,9 +57,26 @@ struct drm_i915_gem_object_ops {
 	void (*put_pages)(struct drm_i915_gem_object *obj,
 			  struct sg_table *pages);
 	int (*truncate)(struct drm_i915_gem_object *obj);
-	int (*shrinker_release_pages)(struct drm_i915_gem_object *obj,
-				      bool no_gpu_wait,
-				      bool should_writeback);
+	/**
+	 * shrink - Perform further backend specific actions to facilitate
+	 * shrinking.
+	 * @obj: The gem object
+	 * @flags: Extra flags to control shrinking behaviour in the backend
+	 *
+	 * Possible values for @flags:
+	 *
+	 * I915_GEM_OBJECT_SHRINK_WRITEBACK - Try to perform writeback of the
+	 * backing pages, if supported.
+	 *
+	 * I915_GEM_OBJECT_SHRINK_NO_GPU_WAIT - Don't wait for the object to
+	 * idle. Active objects can be considered later. The TTM backend for
+	 * example might have async migrations going on, which don't use any
+	 * i915_vma to track the active GTT binding, and hence having an unbound
+	 * object might not be enough.
+	 */
+#define I915_GEM_OBJECT_SHRINK_WRITEBACK BIT(0)
+#define I915_GEM_OBJECT_SHRINK_NO_GPU_WAIT BIT(1)
+	int (*shrink)(struct drm_i915_gem_object *obj, unsigned int flags);
 
 	int (*pread)(struct drm_i915_gem_object *obj,
 		     const struct drm_i915_gem_pread *arg);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 7fdf4fa10b0e..6c57b0a79c8a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -331,9 +331,7 @@ shmem_writeback(struct drm_i915_gem_object *obj)
 	__shmem_writeback(obj->base.size, obj->base.filp->f_mapping);
 }
 
-static int shmem_shrinker_release_pages(struct drm_i915_gem_object *obj,
-					bool no_gpu_wait,
-					bool writeback)
+static int shmem_shrink(struct drm_i915_gem_object *obj, unsigned int flags)
 {
 	switch (obj->mm.madv) {
 	case I915_MADV_DONTNEED:
@@ -342,7 +340,7 @@ static int shmem_shrinker_release_pages(struct drm_i915_gem_object *obj,
 		return 0;
 	}
 
-	if (writeback)
+	if (flags & I915_GEM_OBJECT_SHRINK_WRITEBACK)
 		shmem_writeback(obj);
 
 	return 0;
@@ -520,7 +518,7 @@ const struct drm_i915_gem_object_ops i915_gem_shmem_ops = {
 	.get_pages = shmem_get_pages,
 	.put_pages = shmem_put_pages,
 	.truncate = shmem_truncate,
-	.shrinker_release_pages = shmem_shrinker_release_pages,
+	.shrink = shmem_shrink,
 	.pwrite = shmem_pwrite,
 	.pread = shmem_pread,
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
index fd54e05521f6..968ca0fdd57b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
@@ -57,10 +57,18 @@ static bool unsafe_drop_pages(struct drm_i915_gem_object *obj,
 
 static int try_to_writeback(struct drm_i915_gem_object *obj, unsigned int flags)
 {
-	if (obj->ops->shrinker_release_pages)
-		return obj->ops->shrinker_release_pages(obj,
-							!(flags & I915_SHRINK_ACTIVE),
-							flags & I915_SHRINK_WRITEBACK);
+	if (obj->ops->shrink) {
+		unsigned int shrink_flags = 0;
+
+		if (!(flags & I915_SHRINK_ACTIVE))
+			shrink_flags |= I915_GEM_OBJECT_SHRINK_NO_GPU_WAIT;
+
+		if (flags & I915_SHRINK_WRITEBACK)
+			shrink_flags |= I915_GEM_OBJECT_SHRINK_WRITEBACK;
+
+		return obj->ops->shrink(obj, shrink_flags);
+	}
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 923cc7ad8d70..21277f3c64e7 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -424,16 +424,14 @@ int i915_ttm_purge(struct drm_i915_gem_object *obj)
 	return 0;
 }
 
-static int i915_ttm_shrinker_release_pages(struct drm_i915_gem_object *obj,
-					   bool no_wait_gpu,
-					   bool should_writeback)
+static int i915_ttm_shrink(struct drm_i915_gem_object *obj, unsigned int flags)
 {
 	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
 	struct i915_ttm_tt *i915_tt =
 		container_of(bo->ttm, typeof(*i915_tt), ttm);
 	struct ttm_operation_ctx ctx = {
 		.interruptible = true,
-		.no_wait_gpu = no_wait_gpu,
+		.no_wait_gpu = flags & I915_GEM_OBJECT_SHRINK_NO_GPU_WAIT,
 	};
 	struct ttm_placement place = {};
 	int ret;
@@ -467,7 +465,7 @@ static int i915_ttm_shrinker_release_pages(struct drm_i915_gem_object *obj,
 		return ret;
 	}
 
-	if (should_writeback)
+	if (flags & I915_GEM_OBJECT_SHRINK_WRITEBACK)
 		__shmem_writeback(obj->base.size, i915_tt->filp->f_mapping);
 
 	return 0;
@@ -953,7 +951,7 @@ static const struct drm_i915_gem_object_ops i915_gem_ttm_obj_ops = {
 	.get_pages = i915_ttm_get_pages,
 	.put_pages = i915_ttm_put_pages,
 	.truncate = i915_ttm_purge,
-	.shrinker_release_pages = i915_ttm_shrinker_release_pages,
+	.shrink = i915_ttm_shrink,
 	.adjust_lru = i915_ttm_adjust_lru,
 	.delayed_free = i915_ttm_delayed_free,