From patchwork Fri Nov 27 12:06:44 2020
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
Date: Fri, 27 Nov 2020 12:06:44 +0000
Message-Id: <20201127120718.454037-129-matthew.auld@intel.com>
In-Reply-To: <20201127120718.454037-1-matthew.auld@intel.com>
References: <20201127120718.454037-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [RFC PATCH 128/162] drm/i915/dg1:
 intel_memory_region_evict() changes for eviction

From: CQ Tang

Rename i915_gem_shrink_memory_region() to intel_memory_region_evict() and
move it from i915_gem_shrinker.c to intel_memory_region.c. In addition to
evicting purgeable objects, the function now also handles local memory
swapping.

When an object is selected from the list, i915_gem_object_unbind() might
fail if the object's vma is pinned, in which case -EBUSY is returned from
this function. The new code uses logic similar to i915_gem_shrink().

Signed-off-by: CQ Tang
---
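As a quick illustration of the flow described above, here is a rough
standalone sketch of the two-phase walk: the purgeable list is tried
first, then the general object list, and an object whose unbind would
fail with -EBUSY (modelled here as "pinned") is simply skipped. The
struct object type and the evict_one()/region_evict() helpers are
invented for illustration only; the real implementation is in the diff
below.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct object {
	size_t size;
	bool pinned;		/* stands in for a pinned vma: unbind fails */
	struct object *next;
};

/* Try to drop one object's backing pages; fails if it is pinned. */
static bool evict_one(struct object *obj)
{
	return !obj->pinned;
}

/* Walk the phases in order, accumulating freed bytes until the target
 * is met; -1 models -ENOSPC when both passes fall short. */
static int region_evict(struct object *purgeable, struct object *all,
			size_t target)
{
	struct object *phases[] = { purgeable, all, NULL };
	size_t found = 0;
	int pass;

	for (pass = 0; phases[pass] && found < target; pass++) {
		struct object *obj;

		for (obj = phases[pass]; obj && found < target; obj = obj->next) {
			if (evict_one(obj))
				found += obj->size;
			/* a pinned object stays on its list for now */
		}
	}

	return found < target ? -1 : 0;
}

int main(void)
{
	struct object b = { .size = 4096, .pinned = false, .next = NULL };
	struct object a = { .size = 4096, .pinned = true, .next = NULL };

	/* pass 1 cannot free the pinned 'a', so pass 2 evicts 'b' */
	printf("evict: %d\n", region_evict(&a, &b, 4096));
	return 0;
}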
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |  1 -
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c  | 58 -----------
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.h  |  2 -
 drivers/gpu/drm/i915/i915_gem.c               |  8 +-
 drivers/gpu/drm/i915/intel_memory_region.c    | 95 +++++++++++++++++--
 .../drm/i915/selftests/intel_memory_region.c  |  3 +-
 6 files changed, 94 insertions(+), 73 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 8d639509b78b..517a606ade8d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -237,7 +237,6 @@ struct drm_i915_gem_object {
		 * region->obj_lock.
		 */
		struct list_head region_link;
-		struct list_head tmp_link;

		struct sg_table *pages;
		void *mapping;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
index 4d346df8fd5b..27674048f17d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
@@ -272,64 +272,6 @@ unsigned long i915_gem_shrink_all(struct drm_i915_private *i915)
	return freed;
 }

-int i915_gem_shrink_memory_region(struct intel_memory_region *mem,
-				  resource_size_t target)
-{
-	struct drm_i915_private *i915 = mem->i915;
-	struct drm_i915_gem_object *obj;
-	resource_size_t purged;
-	LIST_HEAD(purgeable);
-	int err = -ENOSPC;
-
-	intel_gt_retire_requests(&i915->gt);
-
-	purged = 0;
-
-	mutex_lock(&mem->objects.lock);
-
-	while ((obj = list_first_entry_or_null(&mem->objects.purgeable,
-					       typeof(*obj),
-					       mm.region_link))) {
-		list_move_tail(&obj->mm.region_link, &purgeable);
-
-		if (!i915_gem_object_has_pages(obj))
-			continue;
-
-		if (i915_gem_object_is_framebuffer(obj))
-			continue;
-
-		if (!kref_get_unless_zero(&obj->base.refcount))
-			continue;
-
-		mutex_unlock(&mem->objects.lock);
-
-		if (!i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE)) {
-			if (i915_gem_object_trylock(obj)) {
-				__i915_gem_object_put_pages(obj);
-				if (!i915_gem_object_has_pages(obj)) {
-					purged += obj->base.size;
-					if (!i915_gem_object_is_volatile(obj))
-						obj->mm.madv = __I915_MADV_PURGED;
-				}
-				i915_gem_object_unlock(obj);
-			}
-		}
-
-		i915_gem_object_put(obj);
-
-		mutex_lock(&mem->objects.lock);
-
-		if (purged >= target) {
-			err = 0;
-			break;
-		}
-	}
-
-	list_splice_tail(&purgeable, &mem->objects.purgeable);
-	mutex_unlock(&mem->objects.lock);
-	return err;
-}
-
 static unsigned long
 i915_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 {
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.h b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.h
index c945f3b587d6..7c1e648a8b44 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.h
@@ -31,7 +31,5 @@ void i915_gem_driver_register__shrinker(struct drm_i915_private *i915);
 void i915_gem_driver_unregister__shrinker(struct drm_i915_private *i915);
 void i915_gem_shrinker_taints_mutex(struct drm_i915_private *i915,
				    struct mutex *mutex);
-int i915_gem_shrink_memory_region(struct intel_memory_region *mem,
-				  resource_size_t target);

 #endif /* __I915_GEM_SHRINKER_H__ */
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index bf67f323a1ae..85cbdb8e2bb8 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1008,12 +1008,12 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,

		switch (obj->mm.madv) {
		case I915_MADV_WILLNEED:
-			list_move(&obj->mm.region_link,
-				  &obj->mm.region->objects.list);
+			list_move_tail(&obj->mm.region_link,
+				       &obj->mm.region->objects.list);
			break;
		default:
-			list_move(&obj->mm.region_link,
-				  &obj->mm.region->objects.purgeable);
+			list_move_tail(&obj->mm.region_link,
+				       &obj->mm.region->objects.purgeable);
			break;
		}

diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c
index 371cd88ff6d8..185eab497803 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/intel_memory_region.c
@@ -3,6 +3,7 @@
  * Copyright © 2019 Intel Corporation
  */

+#include "gt/intel_gt_requests.h"
 #include "intel_memory_region.h"
 #include "i915_drv.h"

@@ -94,6 +95,90 @@ __intel_memory_region_put_block_buddy(struct i915_buddy_block *block)
	__intel_memory_region_put_pages_buddy(block->private, &blocks);
 }

+static int intel_memory_region_evict(struct intel_memory_region *mem,
+				     resource_size_t target)
+{
+	struct drm_i915_private *i915 = mem->i915;
+	struct list_head still_in_list;
+	struct drm_i915_gem_object *obj;
+	struct list_head *phases[] = {
+		&mem->objects.purgeable,
+		&mem->objects.list,
+		NULL,
+	};
+	struct list_head **phase;
+	resource_size_t found;
+	int pass;
+
+	intel_gt_retire_requests(&i915->gt);
+
+	found = 0;
+	pass = 0;
+	phase = phases;
+
+next:
+	INIT_LIST_HEAD(&still_in_list);
+	mutex_lock(&mem->objects.lock);
+
+	while (found < target &&
+	       (obj = list_first_entry_or_null(*phase,
+					       typeof(*obj),
+					       mm.region_link))) {
+		list_move_tail(&obj->mm.region_link, &still_in_list);
+
+		if (!i915_gem_object_has_pages(obj))
+			continue;
+
+		if (i915_gem_object_is_framebuffer(obj))
+			continue;
+
+		/*
+		 * For IOMEM region, only swap user space objects.
+		 * kernel objects are bound and causes a lot of unbind
+		 * warning message in driver.
+		 * FIXME: swap kernel object as well.
+		 */
+		if (i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM)
+		    && !obj->base.handle_count)
+			continue;
+
+		if (!kref_get_unless_zero(&obj->base.refcount))
+			continue;
+
+		mutex_unlock(&mem->objects.lock);
+
+		if (!i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE)) {
+			if (i915_gem_object_trylock(obj)) {
+				__i915_gem_object_put_pages(obj);
+				/* May arrive from get_pages on another bo */
+				if (!i915_gem_object_has_pages(obj)) {
+					found += obj->base.size;
+					if (obj->mm.madv == I915_MADV_DONTNEED)
+						obj->mm.madv = __I915_MADV_PURGED;
+				}
+				i915_gem_object_unlock(obj);
+			}
+		}
+
+		i915_gem_object_put(obj);
+		mutex_lock(&mem->objects.lock);
+
+		if (found >= target)
+			break;
+	}
+	list_splice_tail(&still_in_list, *phase);
+	mutex_unlock(&mem->objects.lock);
+
+	if (found < target) {
+		pass++;
+		phase++;
+		if (*phase)
+			goto next;
+	}
+
+	return (found < target) ? -ENOSPC : 0;
+}
+
 int
 __intel_memory_region_get_pages_buddy(struct intel_memory_region *mem,
				      resource_size_t size,
@@ -137,7 +222,7 @@ __intel_memory_region_get_pages_buddy(struct intel_memory_region *mem,
	do {
		struct i915_buddy_block *block;
		unsigned int order;
-		bool retry = true;
+
 retry:
		order = min_t(u32, (fls(n_pages) - 1), max_order);
		GEM_BUG_ON(order > mem->mm.max_order);
@@ -152,19 +237,15 @@ __intel_memory_region_get_pages_buddy(struct intel_memory_region *mem,
			resource_size_t target;
			int err;

-			if (!retry)
-				goto err_free_blocks;
-
			target = n_pages * mem->mm.chunk_size;

			mutex_unlock(&mem->mm_lock);
-			err = i915_gem_shrink_memory_region(mem,
-							    target);
+			err = intel_memory_region_evict(mem,
+							target);
			mutex_lock(&mem->mm_lock);
			if (err)
				goto err_free_blocks;

-			retry = false;
			goto retry;
		}
	} while (1);
diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
index 9df0a4f657c1..4b007ed48d2f 100644
--- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
@@ -1093,7 +1093,8 @@ static void igt_mark_evictable(struct drm_i915_gem_object *obj)
 {
	i915_gem_object_unpin_pages(obj);
	obj->mm.madv = I915_MADV_DONTNEED;
-	list_move(&obj->mm.region_link, &obj->mm.region->objects.purgeable);
+	list_move_tail(&obj->mm.region_link,
+		       &obj->mm.region->objects.purgeable);
 }

 static int igt_mock_shrink(void *arg)
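For reference, the allocation path above now keeps retrying until
eviction itself fails, instead of the old one-shot "bool retry" guard.
A rough standalone model of that loop follows; alloc_pages(),
region_evict() and the page counts are invented for illustration, not
the driver's API:

#include <stddef.h>
#include <stdio.h>

static size_t free_pages = 0;	/* region starts full */
static size_t evictable = 8;	/* pages eviction can still reclaim */

static int alloc_pages(size_t n)
{
	if (free_pages < n)
		return -1;	/* models allocation failure (-ENXIO) */
	free_pages -= n;
	return 0;
}

static int region_evict(size_t target)
{
	if (evictable < target)
		return -1;	/* models -ENOSPC: nothing left to evict */
	evictable -= target;
	free_pages += target;
	return 0;
}

static int get_pages(size_t n)
{
	for (;;) {
		if (alloc_pages(n) == 0)
			return 0;
		if (region_evict(n))	/* eviction failed: give up */
			return -1;
		/* otherwise loop and retry the allocation */
	}
}

int main(void)
{
	printf("first: %d\n", get_pages(4));	/* evicts, then succeeds */
	printf("second: %d\n", get_pages(8));	/* nothing to evict -> -1 */
	return 0;
}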