From patchwork Thu Feb 10 12:12:59 2022
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 12741834
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, dri-devel@lists.freedesktop.org
Date: Thu, 10 Feb 2022 12:12:59 +0000
Message-Id: <20220210121313.701004-2-matthew.auld@intel.com>
In-Reply-To: <20220210121313.701004-1-matthew.auld@intel.com>
References: <20220210121313.701004-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 01/15] drm/i915: add io_size plumbing

With small LMEM-BAR we need to be able to differentiate between the
total size of LMEM and how much of it is CPU mappable. The end goal is
to be able to utilize the entire range, even if part of it is not CPU
accessible.
v2: also update intelfb_create

Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/i915/display/intel_fbdev.c           | 2 +-
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c            | 2 +-
 drivers/gpu/drm/i915/gem/i915_gem_stolen.c           | 8 +++++---
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c              | 2 +-
 drivers/gpu/drm/i915/gem/selftests/huge_pages.c      | 2 +-
 drivers/gpu/drm/i915/gt/intel_region_lmem.c          | 6 +++++-
 drivers/gpu/drm/i915/intel_memory_region.c           | 6 +++++-
 drivers/gpu/drm/i915/intel_memory_region.h           | 2 ++
 drivers/gpu/drm/i915/selftests/intel_memory_region.c | 8 ++++----
 drivers/gpu/drm/i915/selftests/mock_region.c         | 6 ++++--
 drivers/gpu/drm/i915/selftests/mock_region.h         | 3 ++-
 11 files changed, 31 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_fbdev.c b/drivers/gpu/drm/i915/display/intel_fbdev.c
index 41d279db2be6..244acf462065 100644
--- a/drivers/gpu/drm/i915/display/intel_fbdev.c
+++ b/drivers/gpu/drm/i915/display/intel_fbdev.c
@@ -248,7 +248,7 @@ static int intelfb_create(struct drm_fb_helper *helper,
 		struct intel_memory_region *mem = obj->mm.region;

 		info->apertures->ranges[0].base = mem->io_start;
-		info->apertures->ranges[0].size = mem->total;
+		info->apertures->ranges[0].size = mem->io_size;

 		/* Use fbdev's framebuffer from lmem for discrete */
 		info->fix.smem_start =
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 6c57b0a79c8a..a9aca11cedbb 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -696,7 +696,7 @@ struct intel_memory_region *i915_gem_shmem_setup(struct drm_i915_private *i915,
 {
 	return intel_memory_region_create(i915, 0,
 					  totalram_pages() << PAGE_SHIFT,
-					  PAGE_SIZE, 0,
+					  PAGE_SIZE, 0, 0,
 					  type, instance,
 					  &shmem_region_ops);
 }
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
index 1de73a644965..b79f5f5e7df0 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
@@ -491,6 +491,7 @@ static int i915_gem_init_stolen(struct intel_memory_region *mem)

 	/* Exclude the reserved region from driver use */
 	mem->region.end = reserved_base - 1;
+	mem->io_size = resource_size(&mem->region);

 	/* It is possible for the reserved area to end before the end of stolen
 	 * memory, so just consider the start. */
@@ -747,7 +748,7 @@ static int init_stolen_lmem(struct intel_memory_region *mem)

 	if (!io_mapping_init_wc(&mem->iomap,
 				mem->io_start,
-				resource_size(&mem->region)))
+				mem->io_size))
 		return -EIO;

 	/*
@@ -802,7 +803,8 @@ i915_gem_stolen_lmem_setup(struct drm_i915_private *i915, u16 type,
 		I915_GTT_PAGE_SIZE_4K;

 	mem = intel_memory_region_create(i915, lmem_base, lmem_size,
-					 min_page_size, io_start,
+					 min_page_size,
+					 io_start, lmem_size,
 					 type, instance,
 					 &i915_region_stolen_lmem_ops);
 	if (IS_ERR(mem))
@@ -833,7 +835,7 @@ i915_gem_stolen_smem_setup(struct drm_i915_private *i915, u16 type,
 	mem = intel_memory_region_create(i915,
 					 intel_graphics_stolen_res.start,
 					 resource_size(&intel_graphics_stolen_res),
-					 PAGE_SIZE, 0, type, instance,
+					 PAGE_SIZE, 0, 0, type, instance,
 					 &i915_region_stolen_smem_ops);
 	if (IS_ERR(mem))
 		return mem;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 1eb2fd81c5b6..e9399f7b3e67 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -1101,7 +1101,7 @@ i915_gem_ttm_system_setup(struct drm_i915_private *i915,
 	mr = intel_memory_region_create(i915, 0,
 					totalram_pages() << PAGE_SHIFT,
-					PAGE_SIZE, 0,
+					PAGE_SIZE, 0, 0,
 					type, instance,
 					&ttm_system_region_ops);
 	if (IS_ERR(mr))
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index f36191ebf964..42db9cd30978 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -499,7 +499,7 @@ static int igt_mock_memory_region_huge_pages(void *arg)
 	int bit;
 	int err = 0;

-	mem = mock_region_create(i915, 0, SZ_2G, I915_GTT_PAGE_SIZE_4K, 0);
+	mem = mock_region_create(i915, 0, SZ_2G, I915_GTT_PAGE_SIZE_4K, 0, 0);
 	if (IS_ERR(mem)) {
 		pr_err("%s failed to create memory region\n", __func__);
 		return PTR_ERR(mem);
diff --git a/drivers/gpu/drm/i915/gt/intel_region_lmem.c b/drivers/gpu/drm/i915/gt/intel_region_lmem.c
index cb3f66707b21..01838b8ce4c7 100644
--- a/drivers/gpu/drm/i915/gt/intel_region_lmem.c
+++ b/drivers/gpu/drm/i915/gt/intel_region_lmem.c
@@ -91,7 +91,7 @@ region_lmem_init(struct intel_memory_region *mem)
 	if (!io_mapping_init_wc(&mem->iomap,
 				mem->io_start,
-				resource_size(&mem->region))) {
+				mem->io_size)) {
 		ret = -EIO;
 		goto out_no_io;
 	}
@@ -144,6 +144,7 @@ intel_gt_setup_fake_lmem(struct intel_gt *gt)
 					 mappable_end,
 					 PAGE_SIZE,
 					 io_start,
+					 mappable_end,
 					 INTEL_MEMORY_LOCAL,
 					 0,
 					 &intel_region_lmem_ops);
@@ -220,6 +221,7 @@ static struct intel_memory_region *setup_lmem(struct intel_gt *gt)
 					 lmem_size,
 					 min_page_size,
 					 io_start,
+					 lmem_size,
 					 INTEL_MEMORY_LOCAL,
 					 0,
 					 &intel_region_lmem_ops);
@@ -233,6 +235,8 @@ static struct intel_memory_region *setup_lmem(struct intel_gt *gt)
 	drm_dbg(&i915->drm, "Local memory: %pR\n", &mem->region);
 	drm_dbg(&i915->drm, "Local memory IO start: %pa\n",
 		&mem->io_start);
+	drm_info(&i915->drm, "Local memory IO size: %pa\n",
+		 &mem->io_size);
 	drm_info(&i915->drm, "Local memory available: %pa\n",
 		 &lmem_size);
diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c
index c70d7e286a51..595e2489c23e 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/intel_memory_region.c
@@ -97,7 +97,7 @@ static int iomemtest(struct intel_memory_region *mem,
 		     bool test_all,
 		     const void *caller)
 {
-	resource_size_t last = resource_size(&mem->region) - PAGE_SIZE;
+	resource_size_t last = mem->io_size - PAGE_SIZE;
 	resource_size_t page;
 	int err;
@@ -205,6 +205,8 @@ static int intel_memory_region_memtest(struct intel_memory_region *mem,
 	if (!mem->io_start)
 		return 0;

+	WARN_ON_ONCE(!mem->io_size);
+
 	if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM) || i915->params.memtest)
 		err = iomemtest(mem, i915->params.memtest, caller);

@@ -217,6 +219,7 @@ intel_memory_region_create(struct drm_i915_private *i915,
 			   resource_size_t size,
 			   resource_size_t min_page_size,
 			   resource_size_t io_start,
+			   resource_size_t io_size,
 			   u16 type,
 			   u16 instance,
 			   const struct intel_memory_region_ops *ops)
@@ -231,6 +234,7 @@ intel_memory_region_create(struct drm_i915_private *i915,
 	mem->i915 = i915;
 	mem->region = (struct resource)DEFINE_RES_MEM(start, size);
 	mem->io_start = io_start;
+	mem->io_size = io_size;
 	mem->min_page_size = min_page_size;
 	mem->ops = ops;
 	mem->total = size;
diff --git a/drivers/gpu/drm/i915/intel_memory_region.h b/drivers/gpu/drm/i915/intel_memory_region.h
index 5625c9c38993..459051ce0c91 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.h
+++ b/drivers/gpu/drm/i915/intel_memory_region.h
@@ -71,6 +71,7 @@ struct intel_memory_region {
 	struct drm_mm_node fake_mappable;

 	resource_size_t io_start;
+	resource_size_t io_size;
 	resource_size_t min_page_size;
 	resource_size_t total;
 	resource_size_t avail;
@@ -103,6 +104,7 @@ intel_memory_region_create(struct drm_i915_private *i915,
 			   resource_size_t size,
 			   resource_size_t min_page_size,
 			   resource_size_t io_start,
+			   resource_size_t io_size,
 			   u16 type,
 			   u16 instance,
 			   const struct intel_memory_region_ops *ops);
diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
index 7acba1d2135e..247f65f02bbf 100644
--- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
@@ -170,7 +170,7 @@ static int igt_mock_reserve(void *arg)
 	if (!order)
 		return 0;

-	mem = mock_region_create(i915, 0, SZ_2G, I915_GTT_PAGE_SIZE_4K, 0);
+	mem = mock_region_create(i915, 0, SZ_2G, I915_GTT_PAGE_SIZE_4K, 0, 0);
 	if (IS_ERR(mem)) {
 		pr_err("failed to create memory region\n");
 		err = PTR_ERR(mem);
@@ -383,7 +383,7 @@ static int igt_mock_splintered_region(void *arg)
 	 */
 	size = (SZ_4G - 1) & PAGE_MASK;

-	mem = mock_region_create(i915, 0, size, PAGE_SIZE, 0);
+	mem = mock_region_create(i915, 0, size, PAGE_SIZE, 0, 0);
 	if (IS_ERR(mem))
 		return PTR_ERR(mem);

@@ -471,7 +471,7 @@ static int igt_mock_max_segment(void *arg)
 	 */
 	size = SZ_8G;

-	mem = mock_region_create(i915, 0, size, PAGE_SIZE, 0);
+	mem = mock_region_create(i915, 0, size, PAGE_SIZE, 0, 0);
 	if (IS_ERR(mem))
 		return PTR_ERR(mem);

@@ -1188,7 +1188,7 @@ int intel_memory_region_mock_selftests(void)
 	if (!i915)
 		return -ENOMEM;

-	mem = mock_region_create(i915, 0, SZ_2G, I915_GTT_PAGE_SIZE_4K, 0);
+	mem = mock_region_create(i915, 0, SZ_2G, I915_GTT_PAGE_SIZE_4K, 0, 0);
 	if (IS_ERR(mem)) {
 		pr_err("failed to create memory region\n");
 		err = PTR_ERR(mem);
diff --git a/drivers/gpu/drm/i915/selftests/mock_region.c b/drivers/gpu/drm/i915/selftests/mock_region.c
index 19bff8afcaaa..467eeae6d5f0 100644
--- a/drivers/gpu/drm/i915/selftests/mock_region.c
+++ b/drivers/gpu/drm/i915/selftests/mock_region.c
@@ -107,7 +107,8 @@ mock_region_create(struct drm_i915_private *i915,
 		   resource_size_t start,
 		   resource_size_t size,
 		   resource_size_t min_page_size,
-		   resource_size_t io_start)
+		   resource_size_t io_start,
+		   resource_size_t io_size)
 {
 	int instance = ida_alloc_max(&i915->selftest.mock_region_instances,
 				     TTM_NUM_MEM_TYPES - TTM_PL_PRIV - 1,
@@ -117,6 +118,7 @@ mock_region_create(struct drm_i915_private *i915,
 		return ERR_PTR(instance);

 	return intel_memory_region_create(i915, start, size, min_page_size,
-					  io_start, INTEL_MEMORY_MOCK, instance,
+					  io_start, io_size,
+					  INTEL_MEMORY_MOCK, instance,
 					  &mock_region_ops);
 }
diff --git a/drivers/gpu/drm/i915/selftests/mock_region.h b/drivers/gpu/drm/i915/selftests/mock_region.h
index 329bf74dfaca..e36c3a433551 100644
--- a/drivers/gpu/drm/i915/selftests/mock_region.h
+++ b/drivers/gpu/drm/i915/selftests/mock_region.h
@@ -16,6 +16,7 @@ mock_region_create(struct drm_i915_private *i915,
 		   resource_size_t start,
 		   resource_size_t size,
 		   resource_size_t min_page_size,
-		   resource_size_t io_start);
+		   resource_size_t io_start,
+		   resource_size_t io_size);

 #endif /* !__MOCK_REGION_H */
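[As a quick orientation for the new parameter, a minimal sketch of a
caller after this patch; the 16G/8G split is an illustrative assumption,
not a value from the series:

	/* Hypothetical region: 16G of LMEM, of which only the first 8G
	 * sits behind the CPU-visible BAR. The signature matches the
	 * intel_memory_region_create() prototype added above.
	 */
	mem = intel_memory_region_create(i915, 0, SZ_16G,
					 I915_GTT_PAGE_SIZE_4K,
					 io_start, SZ_8G /* io_size */,
					 INTEL_MEMORY_LOCAL, 0,
					 &intel_region_lmem_ops);

On full-BAR parts io_size simply equals the region size, as the lmem and
stolen setup hunks above show.]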
From patchwork Thu Feb 10 12:13:00 2022
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 12741836
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, dri-devel@lists.freedesktop.org
Date: Thu, 10 Feb 2022 12:13:00 +0000
Message-Id: <20220210121313.701004-3-matthew.auld@intel.com>
In-Reply-To: <20220210121313.701004-1-matthew.auld@intel.com>
References: <20220210121313.701004-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 02/15] drm/i915/ttm: require mappable by default

On devices with non-mappable LMEM ensure we always allocate the pages
within the mappable portion. For now we assume that all LMEM buffers
will require CPU access, which is also in line with pretty much all
current kernel internal users. In the next patch we will introduce a new
flag to override this behaviour.

Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 4 ++++
 drivers/gpu/drm/i915/intel_region_ttm.c | 5 +++++
 2 files changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index e9399f7b3e67..41e94d09e742 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -128,6 +128,10 @@ i915_ttm_place_from_region(const struct intel_memory_region *mr,
 	if (flags & I915_BO_ALLOC_CONTIGUOUS)
 		place->flags = TTM_PL_FLAG_CONTIGUOUS;
+	if (mr->io_size && mr->io_size < mr->total) {
+		place->fpfn = 0;
+		place->lpfn = mr->io_size >> PAGE_SHIFT;
+	}
 }

 static void
diff --git a/drivers/gpu/drm/i915/intel_region_ttm.c b/drivers/gpu/drm/i915/intel_region_ttm.c
index f2b888c16958..4689192d5e8d 100644
--- a/drivers/gpu/drm/i915/intel_region_ttm.c
+++ b/drivers/gpu/drm/i915/intel_region_ttm.c
@@ -199,6 +199,11 @@ intel_region_ttm_resource_alloc(struct intel_memory_region *mem,
 	struct ttm_resource *res;
 	int ret;

+	if (mem->io_size && mem->io_size < mem->total) {
+		place.fpfn = 0;
+		place.lpfn = mem->io_size >> PAGE_SHIFT;
+	}
+
 	mock_bo.base.size = size;
 	place.flags = flags;
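[To make the clamping concrete, a short sketch with illustrative numbers
that are not taken from the patch:

	/* Suppose io_size = SZ_256M and total = SZ_1G. Since
	 * io_size < total, the placement is clamped so TTM may only
	 * allocate from the CPU visible window, i.e. the first
	 * SZ_256M >> PAGE_SHIFT = 65536 4KiB pages of the region.
	 */
	place->fpfn = 0;
	place->lpfn = SZ_256M >> PAGE_SHIFT;

A zero lpfn would mean "no upper bound", which is why the clamp is only
applied when the region is actually smaller than its total size.]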
From patchwork Thu Feb 10 12:13:01 2022
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 12741835
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, dri-devel@lists.freedesktop.org
Date: Thu, 10 Feb 2022 12:13:01 +0000
Message-Id: <20220210121313.701004-4-matthew.auld@intel.com>
In-Reply-To: <20220210121313.701004-1-matthew.auld@intel.com>
References: <20220210121313.701004-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 03/15] drm/i915: add I915_BO_ALLOC_GPU_ONLY

If the user doesn't require CPU access for the buffer, then
ALLOC_GPU_ONLY should be used, in order to prioritise allocating in the
non-mappable portion of LMEM, on devices with small BAR.

v2(Thomas):
  - The BO_ALLOC_TOPDOWN naming here is poor, since this is pure lies on
    systems that don't even have small BAR. A better name is GPU_ONLY,
    which is accurate regardless of the configuration.

Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Reviewed-by: Thomas Hellström
---
 .../gpu/drm/i915/gem/i915_gem_object_types.h  | 17 ++++++++++++-----
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     |  3 +++
 drivers/gpu/drm/i915/gem/i915_gem_region.c    |  5 +++++
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c       | 13 ++++++++++---
 drivers/gpu/drm/i915/gt/intel_gt.c            |  4 +++-
 drivers/gpu/drm/i915/i915_vma.c               |  3 +++
 drivers/gpu/drm/i915/intel_region_ttm.c       | 11 ++++++++---
 drivers/gpu/drm/i915/selftests/mock_region.c  |  7 +------
 8 files changed, 45 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 0098a32490f0..fd54eb8f4826 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -319,16 +319,23 @@ struct drm_i915_gem_object {
 #define I915_BO_ALLOC_PM_VOLATILE BIT(4)
 /* Object needs to be restored early using memcpy during resume */
 #define I915_BO_ALLOC_PM_EARLY    BIT(5)
+/*
+ * Object is likely never accessed by the CPU. This will prioritise the BO to be
+ * allocated in the non-mappable portion of lmem. This is merely a hint, and if
+ * dealing with userspace objects the CPU fault handler is free to ignore this.
+ */
+#define I915_BO_ALLOC_GPU_ONLY    BIT(6)
 #define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | \
 			     I915_BO_ALLOC_VOLATILE | \
 			     I915_BO_ALLOC_CPU_CLEAR | \
 			     I915_BO_ALLOC_USER | \
 			     I915_BO_ALLOC_PM_VOLATILE | \
-			     I915_BO_ALLOC_PM_EARLY)
-#define I915_BO_READONLY         BIT(6)
-#define I915_TILING_QUIRK_BIT    7 /* unknown swizzling; do not release! */
+			     I915_BO_ALLOC_PM_EARLY | \
+			     I915_BO_ALLOC_GPU_ONLY)
+#define I915_BO_READONLY          BIT(7)
+#define I915_TILING_QUIRK_BIT     8 /* unknown swizzling; do not release! */
-#define I915_BO_PROTECTED        BIT(8)
-#define I915_BO_WAS_BOUND_BIT    9
+#define I915_BO_PROTECTED         BIT(9)
+#define I915_BO_WAS_BOUND_BIT     10

 	/**
 	 * @mem_flags - Mutable placement-related flags
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index 060fe29f5929..a4d8dc163691 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -356,6 +356,9 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 	    !i915_gem_object_has_iomem(obj))
 		return ERR_PTR(-ENXIO);

+	if (WARN_ON_ONCE(obj->flags & I915_BO_ALLOC_GPU_ONLY))
+		return ERR_PTR(-EINVAL);
+
 	assert_object_held(obj);

 	pinned = !(type & I915_MAP_OVERRIDE);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c
index a4350227e9ae..873d52f872c5 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_region.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c
@@ -45,6 +45,11 @@ i915_gem_object_create_region(struct intel_memory_region *mem,
 	GEM_BUG_ON(flags & ~I915_BO_ALLOC_FLAGS);

+	if (WARN_ON_ONCE(flags & I915_BO_ALLOC_GPU_ONLY &&
+			 (flags & I915_BO_ALLOC_CPU_CLEAR ||
+			  flags & I915_BO_ALLOC_PM_EARLY)))
+		return ERR_PTR(-EINVAL);
+
 	if (!mem)
 		return ERR_PTR(-ENODEV);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 41e94d09e742..36e77fcf2ef9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -127,10 +127,14 @@ i915_ttm_place_from_region(const struct intel_memory_region *mr,
 	place->mem_type = intel_region_to_ttm_type(mr);

 	if (flags & I915_BO_ALLOC_CONTIGUOUS)
-		place->flags = TTM_PL_FLAG_CONTIGUOUS;
+		place->flags |= TTM_PL_FLAG_CONTIGUOUS;
 	if (mr->io_size && mr->io_size < mr->total) {
-		place->fpfn = 0;
-		place->lpfn = mr->io_size >> PAGE_SHIFT;
+		if (flags & I915_BO_ALLOC_GPU_ONLY) {
+			place->flags |= TTM_PL_FLAG_TOPDOWN;
+		} else {
+			place->fpfn = 0;
+			place->lpfn = mr->io_size >> PAGE_SHIFT;
+		}
 	}
 }

@@ -888,6 +892,9 @@ static vm_fault_t vm_fault_ttm(struct vm_fault *vmf)
 	if (!obj)
 		return VM_FAULT_SIGBUS;

+	if (obj->flags & I915_BO_ALLOC_GPU_ONLY)
+		return -EINVAL;
+
 	/* Sanity check that we allow writing into this object */
 	if (unlikely(i915_gem_object_is_readonly(obj) &&
 		     area->vm_flags & VM_WRITE))
diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
index e6f6bf7c3926..4f35c51fdb52 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt.c
@@ -457,7 +457,9 @@ static int intel_gt_init_scratch(struct intel_gt *gt, unsigned int size)
 	struct i915_vma *vma;
 	int ret;

-	obj = i915_gem_object_create_lmem(i915, size, I915_BO_ALLOC_VOLATILE);
+	obj = i915_gem_object_create_lmem(i915, size,
+					  I915_BO_ALLOC_VOLATILE |
+					  I915_BO_ALLOC_GPU_ONLY);
 	if (IS_ERR(obj))
 		obj = i915_gem_object_create_stolen(i915, size);
 	if (IS_ERR(obj))
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 845cd88f8313..3322a0651a17 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -540,6 +540,9 @@ void __iomem *i915_vma_pin_iomap(struct i915_vma *vma)
 	void __iomem *ptr;
 	int err;

+	if (WARN_ON_ONCE(vma->obj->flags & I915_BO_ALLOC_GPU_ONLY))
+		return IO_ERR_PTR(-EINVAL);
+
 	if (!i915_gem_object_is_lmem(vma->obj)) {
 		if (GEM_WARN_ON(!i915_vma_is_map_and_fenceable(vma))) {
 			err = -ENODEV;
diff --git a/drivers/gpu/drm/i915/intel_region_ttm.c b/drivers/gpu/drm/i915/intel_region_ttm.c
index 4689192d5e8d..c055029c7a70 100644
--- a/drivers/gpu/drm/i915/intel_region_ttm.c
+++
 b/drivers/gpu/drm/i915/intel_region_ttm.c
@@ -199,13 +199,18 @@ intel_region_ttm_resource_alloc(struct intel_memory_region *mem,
 	struct ttm_resource *res;
 	int ret;

+	if (flags & I915_BO_ALLOC_CONTIGUOUS)
+		place.flags |= TTM_PL_FLAG_CONTIGUOUS;
 	if (mem->io_size && mem->io_size < mem->total) {
-		place.fpfn = 0;
-		place.lpfn = mem->io_size >> PAGE_SHIFT;
+		if (flags & I915_BO_ALLOC_GPU_ONLY) {
+			place.flags |= TTM_PL_FLAG_TOPDOWN;
+		} else {
+			place.fpfn = 0;
+			place.lpfn = mem->io_size >> PAGE_SHIFT;
+		}
 	}

 	mock_bo.base.size = size;
-	place.flags = flags;

 	ret = man->func->alloc(man, &mock_bo, &place, &res);
 	if (ret == -ENOSPC)
diff --git a/drivers/gpu/drm/i915/selftests/mock_region.c b/drivers/gpu/drm/i915/selftests/mock_region.c
index 467eeae6d5f0..f64325491f35 100644
--- a/drivers/gpu/drm/i915/selftests/mock_region.c
+++ b/drivers/gpu/drm/i915/selftests/mock_region.c
@@ -22,17 +22,12 @@ static void mock_region_put_pages(struct drm_i915_gem_object *obj,
 static int mock_region_get_pages(struct drm_i915_gem_object *obj)
 {
-	unsigned int flags;
 	struct sg_table *pages;
 	int err;

-	flags = 0;
-	if (obj->flags & I915_BO_ALLOC_CONTIGUOUS)
-		flags |= TTM_PL_FLAG_CONTIGUOUS;
-
 	obj->mm.res = intel_region_ttm_resource_alloc(obj->mm.region,
 						      obj->base.size,
-						      flags);
+						      obj->flags);
 	if (IS_ERR(obj->mm.res))
 		return PTR_ERR(obj->mm.res);
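[For reference, a sketch of how a kernel-internal allocation opts in to
the new hint; this mirrors the intel_gt.c hunk above rather than
introducing anything new:

	/* Scratch pages are written by the GPU but never touched by the
	 * CPU, so they can safely live in the non-mappable portion of
	 * LMEM; GPU_ONLY turns the mappable clamp into a TOPDOWN hint.
	 */
	obj = i915_gem_object_create_lmem(i915, size,
					  I915_BO_ALLOC_VOLATILE |
					  I915_BO_ALLOC_GPU_ONLY);

Combining GPU_ONLY with flags that imply CPU access (CPU_CLEAR,
PM_EARLY) is rejected up front in i915_gem_object_create_region().]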
From patchwork Thu Feb 10 12:13:02 2022
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 12741837
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, dri-devel@lists.freedesktop.org
Date: Thu, 10 Feb 2022 12:13:02 +0000
Message-Id: <20220210121313.701004-5-matthew.auld@intel.com>
In-Reply-To: <20220210121313.701004-1-matthew.auld@intel.com>
References: <20220210121313.701004-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 04/15] drm/i915/buddy: track available visible size

Track the total amount of available visible memory, and also track
per-resource the amount of used visible memory. For now this is useful
for our debug output, and for deciding if it is even worth calling into
the buddy allocator. In the future, tracking the per-resource visible
usage will be useful when deciding if we should attempt to evict certain
buffers.

v2:
  - s/place->lpfn/lpfn/, that way we can avoid scanning the list if the
    entire range is already mappable.
  - Move the end declaration inside the if block (Thomas).
  - Make sure to also account for reserved memory.

Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/i915/i915_ttm_buddy_manager.c | 68 ++++++++++++++++++-
 drivers/gpu/drm/i915/i915_ttm_buddy_manager.h |  8 ++-
 drivers/gpu/drm/i915/intel_region_ttm.c       |  1 +
 3 files changed, 75 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
index 76d5211c25eb..e47a3d46c6ff 100644
--- a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
+++ b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
@@ -19,6 +19,9 @@ struct i915_ttm_buddy_manager {
 	struct drm_buddy mm;
 	struct list_head reserved;
 	struct mutex lock;
+	unsigned long visible_size;
+	unsigned long visible_avail;
+	unsigned long visible_reserved;
 	u64 default_page_size;
 };

@@ -87,6 +90,12 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man,
 	n_pages = size >> ilog2(mm->chunk_size);

 	mutex_lock(&bman->lock);
+	if (lpfn <= bman->visible_size && n_pages > bman->visible_avail) {
+		mutex_unlock(&bman->lock);
+		err = -ENOSPC;
+		goto err_free_res;
+	}
+
 	err = drm_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
 				     (u64)lpfn << PAGE_SHIFT,
 				     (u64)n_pages << PAGE_SHIFT,
@@ -107,6 +116,31 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man,
 		mutex_unlock(&bman->lock);
 	}

+	if (lpfn <= bman->visible_size) {
+		bman_res->used_visible_size = bman_res->base.num_pages;
+	} else {
+		struct drm_buddy_block *block;
+
+		list_for_each_entry(block, &bman_res->blocks, link) {
+			unsigned long start =
+				drm_buddy_block_offset(block) >> PAGE_SHIFT;
+
+			if (start < bman->visible_size) {
+				unsigned long end = start +
+					(drm_buddy_block_size(mm, block) >> PAGE_SHIFT);
+
+				bman_res->used_visible_size +=
+					min(end, bman->visible_size) - start;
+			}
+		}
+	}
+
+	if (bman_res->used_visible_size) {
+		mutex_lock(&bman->lock);
+		bman->visible_avail -= bman_res->used_visible_size;
+		mutex_unlock(&bman->lock);
+	}
+
 	*res = &bman_res->base;
 	return 0;

@@ -128,6 +162,7 @@ static void i915_ttm_buddy_man_free(struct ttm_resource_manager *man,
 	mutex_lock(&bman->lock);
 	drm_buddy_free_list(&bman->mm, &bman_res->blocks);
+	bman->visible_avail += bman_res->used_visible_size;
 	mutex_unlock(&bman->lock);

 	ttm_resource_fini(man, res);
@@ -143,6 +178,12 @@ static void i915_ttm_buddy_man_debug(struct ttm_resource_manager *man,
 	mutex_lock(&bman->lock);
 	drm_printf(printer, "default_page_size: %lluKiB\n",
 		   bman->default_page_size >> 10);
+	drm_printf(printer, "visible_avail: %lluMiB\n",
+		   (u64)bman->visible_avail << PAGE_SHIFT >> 20);
+	drm_printf(printer, "visible_size: %lluMiB\n",
+		   (u64)bman->visible_size << PAGE_SHIFT >> 20);
+	drm_printf(printer, "visible_reserved: %lluMiB\n",
+		   (u64)bman->visible_reserved << PAGE_SHIFT >> 20);

 	drm_buddy_print(&bman->mm, printer);

@@ -164,6 +205,7 @@ static const struct ttm_resource_manager_func i915_ttm_buddy_manager_func = {
  * @type: Memory type we want to manage
  * @use_tt: Set use_tt for the manager
  * @size: The size in bytes to manage
+ * @visible_size: The CPU visible size in bytes to manage
  * @default_page_size: The default minimum page size in bytes for allocations,
  * this must be at least as large as @chunk_size, and can be overridden by
  * setting the BO page_alignment, to be larger or smaller as needed.
@@ -187,7 +229,7 @@ static const struct ttm_resource_manager_func i915_ttm_buddy_manager_func = {
  */
 int i915_ttm_buddy_man_init(struct ttm_device *bdev,
 			    unsigned int type, bool use_tt,
-			    u64 size, u64 default_page_size,
+			    u64 size, u64 visible_size, u64 default_page_size,
 			    u64 chunk_size)
 {
 	struct ttm_resource_manager *man;
@@ -206,6 +248,8 @@ int i915_ttm_buddy_man_init(struct ttm_device *bdev,
 	INIT_LIST_HEAD(&bman->reserved);
 	GEM_BUG_ON(default_page_size < chunk_size);
 	bman->default_page_size = default_page_size;
+	bman->visible_size = visible_size >> PAGE_SHIFT;
+	bman->visible_avail = bman->visible_size;

 	man = &bman->manager;
 	man->use_tt = use_tt;
@@ -250,6 +294,8 @@ int i915_ttm_buddy_man_fini(struct ttm_device *bdev, unsigned int type)
 	mutex_lock(&bman->lock);
 	drm_buddy_free_list(mm, &bman->reserved);
 	drm_buddy_fini(mm);
+	bman->visible_avail += bman->visible_reserved;
+	WARN_ON_ONCE(bman->visible_avail != bman->visible_size);
 	mutex_unlock(&bman->lock);

 	ttm_resource_manager_cleanup(man);
@@ -273,6 +319,7 @@ int i915_ttm_buddy_man_reserve(struct ttm_resource_manager *man,
 {
 	struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
 	struct drm_buddy *mm = &bman->mm;
+	unsigned long fpfn = start >> PAGE_SHIFT;
 	unsigned long flags = 0;
 	int ret;

@@ -284,8 +331,27 @@ int i915_ttm_buddy_man_reserve(struct ttm_resource_manager *man,
 				      size, mm->chunk_size,
 				      &bman->reserved,
 				      flags);
+
+	if (fpfn < bman->visible_size) {
+		unsigned long lpfn = fpfn + (size >> PAGE_SHIFT);
+		unsigned long visible = min(lpfn, bman->visible_size) - fpfn;
+
+		bman->visible_reserved += visible;
+		bman->visible_avail -= visible;
+	}
 	mutex_unlock(&bman->lock);

 	return ret;
 }
+
+/**
+ * i915_ttm_buddy_man_visible_size - Return the size of the CPU visible portion
+ * in pages.
+ * @man: The buddy allocator ttm manager
+ */
+u64 i915_ttm_buddy_man_visible_size(struct ttm_resource_manager *man)
+{
+	struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
+
+	return bman->visible_size;
+}
diff --git a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
index 72c90b432e87..35fe03a6a78c 100644
--- a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
+++ b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
@@ -21,6 +21,8 @@ struct drm_buddy;
  * @base: struct ttm_resource base class we extend
  * @blocks: the list of struct i915_buddy_block for this resource/allocation
  * @flags: DRM_BUDDY_*_ALLOCATION flags
+ * @used_visible_size: How much of this resource, if any, uses the CPU visible
+ * portion, in pages.
 * @mm: the struct i915_buddy_mm for this resource
 *
 * Extends the struct ttm_resource to manage an address space allocation with
@@ -30,6 +32,7 @@ struct i915_ttm_buddy_resource {
 	struct ttm_resource base;
 	struct list_head blocks;
 	unsigned long flags;
+	unsigned long used_visible_size;
 	struct drm_buddy *mm;
 };

@@ -48,11 +51,14 @@ to_ttm_buddy_resource(struct ttm_resource *res)
 int i915_ttm_buddy_man_init(struct ttm_device *bdev,
 			    unsigned type, bool use_tt,
-			    u64 size, u64 default_page_size, u64 chunk_size);
+			    u64 size, u64 visible_size,
+			    u64 default_page_size, u64 chunk_size);
 int i915_ttm_buddy_man_fini(struct ttm_device *bdev,
 			    unsigned int type);

 int i915_ttm_buddy_man_reserve(struct ttm_resource_manager *man,
 			       u64 start, u64 size);

+u64 i915_ttm_buddy_man_visible_size(struct ttm_resource_manager *man);
+
 #endif
diff --git a/drivers/gpu/drm/i915/intel_region_ttm.c b/drivers/gpu/drm/i915/intel_region_ttm.c
index c055029c7a70..96a0bd82c1f3 100644
--- a/drivers/gpu/drm/i915/intel_region_ttm.c
+++ b/drivers/gpu/drm/i915/intel_region_ttm.c
@@ -87,6 +87,7 @@ int intel_region_ttm_init(struct intel_memory_region *mem)
 	ret = i915_ttm_buddy_man_init(bdev, mem_type, false,
 				      resource_size(&mem->region),
+				      mem->io_size,
 				      mem->min_page_size, PAGE_SIZE);
 	if (ret)
 		return ret;
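[The visible accounting is plain interval clamping against the mappable
boundary; a worked sketch with made-up numbers, restating the alloc hunk
above:

	/* visible_size = 0x100 pages. A block covering pages
	 * [0xe0, 0x140) contributes min(0x140, 0x100) - 0xe0 = 0x20
	 * pages to used_visible_size, while a block starting at or
	 * beyond page 0x100 contributes nothing.
	 */
	if (start < bman->visible_size)
		bman_res->used_visible_size +=
			min(end, bman->visible_size) - start;

The fast path skips the list walk entirely whenever lpfn lands inside
the visible window, since then every page of the resource is mappable.]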
From patchwork Thu Feb 10 12:13:03 2022
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 12741844
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, dri-devel@lists.freedesktop.org
Date: Thu, 10 Feb 2022 12:13:03 +0000
Message-Id: <20220210121313.701004-6-matthew.auld@intel.com>
In-Reply-To: <20220210121313.701004-1-matthew.auld@intel.com>
References: <20220210121313.701004-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 05/15] drm/i915/buddy: adjust res->start

Differentiate between mappable vs non-mappable resources, and if this is
an actual range allocation ensure we set res->start as the starting pfn.
Later when we need to do non-mappable -> mappable moves then we want TTM
to see that the current placement is not compatible, which should result
in an actual move, instead of being turned into a noop.

Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/i915/i915_ttm_buddy_manager.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
index e47a3d46c6ff..0ac6b2463fd5 100644
--- a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
+++ b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
@@ -141,6 +141,13 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man,
 		mutex_unlock(&bman->lock);
 	}

+	if (place->lpfn - place->fpfn == n_pages)
+		bman_res->base.start = place->fpfn;
+	else if (lpfn <= bman->visible_size)
+		bman_res->base.start = 0;
+	else
+		bman_res->base.start = bman->visible_size;
+
 	*res = &bman_res->base;
 	return 0;
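[The reason res->start matters shows up in TTM's placement compatibility
test; a simplified sketch of that check, paraphrasing TTM rather than
quoting code from this patch:

	/* Because non-mappable resources now report
	 * start >= visible_size, a mappable request with
	 * lpfn == visible_size fails this test and forces a real move
	 * instead of a noop.
	 */
	if (res->start < place->fpfn ||
	    (place->lpfn && res->start + res->num_pages > place->lpfn))
		return false; /* not compatible: migrate */
]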
From patchwork Thu Feb 10 12:13:04 2022
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 12741842
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, dri-devel@lists.freedesktop.org
Date: Thu, 10 Feb 2022 12:13:04 +0000
Message-Id: <20220210121313.701004-7-matthew.auld@intel.com>
In-Reply-To: <20220210121313.701004-1-matthew.auld@intel.com>
References: <20220210121313.701004-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 06/15] drm/i915/buddy: tweak 2big check

Otherwise we get -EINVAL, instead of the more useful -E2BIG, if the
allocation doesn't fit within the pfn range, like with mappable lmem.
The hugepages selftest, for example, needs this to know if a smaller
size is needed.

Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/i915/i915_ttm_buddy_manager.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
index 0ac6b2463fd5..92d49a3c378c 100644
--- a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
+++ b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
@@ -82,7 +82,7 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man,
 		lpfn = pages;
 	}

-	if (size > mm->size) {
+	if (size > lpfn << PAGE_SHIFT) {
 		err = -E2BIG;
 		goto err_free_res;
 	}
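[A sketch of the caller-side behaviour this enables, using the selftest
helpers as an assumed example rather than code from this patch:

	/* Distinguish "never going to fit in the mappable pfn range"
	 * from a plain invalid request, and retry with a smaller size.
	 */
	obj = igt_object_create(mr, &objects, size, 0);
	if (IS_ERR(obj) && PTR_ERR(obj) == -E2BIG)
		size >>= 1; /* too big for the pfn range: retry smaller */
]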
From patchwork Thu Feb 10 12:13:05 2022
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 12741838
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, dri-devel@lists.freedesktop.org
Date: Thu, 10 Feb 2022 12:13:05 +0000
Message-Id: <20220210121313.701004-8-matthew.auld@intel.com>
In-Reply-To: <20220210121313.701004-1-matthew.auld@intel.com>
References: <20220210121313.701004-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 07/15] drm/i915/selftests: mock test io_size

Check that mappable vs non-mappable matches our expectations.

Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Reviewed-by: Thomas Hellström
---
 .../drm/i915/selftests/intel_memory_region.c  | 143 ++++++++++++++++++
 1 file changed, 143 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
index 247f65f02bbf..56dec9723601 100644
--- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
@@ -17,6 +17,7 @@
 #include "gem/i915_gem_context.h"
 #include "gem/i915_gem_lmem.h"
 #include "gem/i915_gem_region.h"
+#include "gem/i915_gem_ttm.h"
 #include "gem/selftests/igt_gem_utils.h"
 #include "gem/selftests/mock_context.h"
 #include "gt/intel_engine_pm.h"
@@ -512,6 +513,147 @@ static int igt_mock_max_segment(void *arg)
 	return err;
 }

+static u64 igt_object_mappable_total(struct drm_i915_gem_object *obj)
+{
+	struct intel_memory_region *mr = obj->mm.region;
+	struct i915_ttm_buddy_resource *bman_res =
+		to_ttm_buddy_resource(obj->mm.res);
+	struct drm_buddy *mm = bman_res->mm;
+	struct drm_buddy_block *block;
+	u64 total;
+
+	total = 0;
+	list_for_each_entry(block, &bman_res->blocks, link) {
+		u64 start = drm_buddy_block_offset(block);
+		u64 end = start + drm_buddy_block_size(mm, block);
+
+		if (start < mr->io_size)
+			total += min_t(u64, end, mr->io_size) - start;
+	}
+
+	return total;
+}
+
+static int igt_mock_io_size(void *arg)
+{
+	struct intel_memory_region *mr = arg;
+	struct drm_i915_private *i915 = mr->i915;
+	struct drm_i915_gem_object *obj;
+	u64 mappable_theft_total;
+	u64 io_size;
+	u64 total;
+	u64 ps;
+	u64 rem;
+	u64 size;
+	I915_RND_STATE(prng);
+	LIST_HEAD(objects);
+	int err = 0;
+
+	ps = SZ_4K;
+	if (i915_prandom_u64_state(&prng) & 1)
+		ps = SZ_64K; /* For something like DG2 */
+
+	div64_u64_rem(i915_prandom_u64_state(&prng), SZ_8G, &total);
+	total = round_down(total, ps);
+	total = max_t(u64, total, SZ_1G);
+
+	div64_u64_rem(i915_prandom_u64_state(&prng), total - ps, &io_size);
+	io_size = round_down(io_size, ps);
+	io_size = max_t(u64, io_size, SZ_256M); /* 256M seems to be the common lower limit */
+
+	pr_info("%s with ps=%llx, io_size=%llx, total=%llx\n",
+		__func__, ps, io_size, total);
+
+	mr = mock_region_create(i915, 0, total, ps, 0, io_size);
+	if (IS_ERR(mr)) {
+		err = PTR_ERR(mr);
+		goto out_err;
+	}
+
+	mappable_theft_total = 0;
+	rem = total - io_size;
+	do {
+		div64_u64_rem(i915_prandom_u64_state(&prng), rem, &size);
+		size = round_down(size, ps);
+		size = max(size, ps);
+
+		obj = igt_object_create(mr, &objects, size,
+					I915_BO_ALLOC_GPU_ONLY);
+		if (IS_ERR(obj)) {
+			pr_err("%s TOPDOWN failed with rem=%llx, size=%llx\n",
+			       __func__, rem, size);
+			err = PTR_ERR(obj);
+			goto out_close;
+		}
+
+		mappable_theft_total += igt_object_mappable_total(obj);
+		rem -= size;
+	} while (rem);
+
+	pr_info("%s mappable theft=(%lluMiB/%lluMiB), total=%lluMiB\n",
+		__func__,
+		(u64)mappable_theft_total >> 20,
+		(u64)io_size >> 20,
+		(u64)total >> 20);
+
+	/*
+	 * Even if we allocate all of the non-mappable portion, we should still
+	 * be able to dip into the mappable portion.
+	 */
+	obj = igt_object_create(mr, &objects, io_size,
+				I915_BO_ALLOC_GPU_ONLY);
+	if (IS_ERR(obj)) {
+		pr_err("%s allocation unexpectedly failed\n", __func__);
+		err = PTR_ERR(obj);
+		goto out_close;
+	}
+
+	close_objects(mr, &objects);
+
+	rem = io_size;
+	do {
+		div64_u64_rem(i915_prandom_u64_state(&prng), rem, &size);
+		size = round_down(size, ps);
+		size = max(size, ps);
+
+		obj = igt_object_create(mr, &objects, size, 0);
+		if (IS_ERR(obj)) {
+			pr_err("%s MAPPABLE failed with rem=%llx, size=%llx\n",
+			       __func__, rem, size);
+			err = PTR_ERR(obj);
+			goto out_close;
+		}
+
+		if (igt_object_mappable_total(obj) != size) {
+			pr_err("%s allocation is not mappable(size=%llx)\n",
+			       __func__, size);
+			err = -EINVAL;
+			goto out_close;
+		}
+		rem -= size;
+	} while (rem);
+
+	/*
+	 * We assume CPU access is required by default, which should result in a
+	 * failure here, even though the non-mappable portion is free.
+	 */
+	obj = igt_object_create(mr, &objects, ps, 0);
+	if (!IS_ERR(obj)) {
+		pr_err("%s allocation unexpectedly succeeded\n", __func__);
+		err = -EINVAL;
+		goto out_close;
+	}
+
+out_close:
+	close_objects(mr, &objects);
+	intel_memory_region_destroy(mr);
+out_err:
+	if (err == -ENOMEM)
+		err = 0;
+
+	return err;
+}
+
 static int igt_gpu_write_dw(struct intel_context *ce,
 			    struct i915_vma *vma,
 			    u32 dword,
@@ -1179,6 +1321,7 @@ int intel_memory_region_mock_selftests(void)
 		SUBTEST(igt_mock_contiguous),
 		SUBTEST(igt_mock_splintered_region),
 		SUBTEST(igt_mock_max_segment),
+		SUBTEST(igt_mock_io_size),
 	};
 	struct intel_memory_region *mem;
 	struct drm_i915_private *i915;
"Intel-gfx" If we need to make room for some mappable object, then we should only victimize objects that have one or pages that occupy the visible portion of LMEM. Let's also create a new priority hint for objects that are placed in mappable memory, where we know that CPU access was requested, that way we hopefully victimize these last. v2(Thomas): s/TTM_PL_PRIV/I915_PL_LMEM0/ Signed-off-by: Matthew Auld Cc: Thomas Hellström Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 65 ++++++++++++++++++++++++- 1 file changed, 63 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index 36e77fcf2ef9..3790e569d6a9 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -5,8 +5,10 @@ #include #include +#include #include "i915_drv.h" +#include "i915_ttm_buddy_manager.h" #include "intel_memory_region.h" #include "intel_region_ttm.h" @@ -20,6 +22,7 @@ #define I915_TTM_PRIO_PURGE 0 #define I915_TTM_PRIO_NO_PAGES 1 #define I915_TTM_PRIO_HAS_PAGES 2 +#define I915_TTM_PRIO_NEEDS_CPU_ACCESS 3 /* * Size of struct ttm_place vector in on-stack struct ttm_placement allocs @@ -337,6 +340,7 @@ static bool i915_ttm_eviction_valuable(struct ttm_buffer_object *bo, const struct ttm_place *place) { struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo); + struct ttm_resource *res = bo->resource; if (!obj) return false; @@ -350,7 +354,48 @@ static bool i915_ttm_eviction_valuable(struct ttm_buffer_object *bo, return false; /* Will do for now. Our pinned objects are still on TTM's LRU lists */ - return i915_gem_object_evictable(obj); + if (!i915_gem_object_evictable(obj)) + return false; + + switch (res->mem_type) { + case I915_PL_LMEM0: { + struct ttm_resource_manager *man = + ttm_manager_type(bo->bdev, res->mem_type); + struct i915_ttm_buddy_resource *bman_res = + to_ttm_buddy_resource(res); + struct drm_buddy *mm = bman_res->mm; + struct drm_buddy_block *block; + + if (!place->fpfn && !place->lpfn) + return true; + + GEM_BUG_ON(!place->lpfn); + + /* + * If we just want something mappable then we can quickly check + * if the current victim resource is using any of the CPU + * visible portion. + */ + if (!place->fpfn && + place->lpfn == i915_ttm_buddy_man_visible_size(man)) + return bman_res->used_visible_size > 0; + + /* Real range allocation */ + list_for_each_entry(block, &bman_res->blocks, link) { + unsigned long fpfn = + drm_buddy_block_offset(block) >> PAGE_SHIFT; + unsigned long lpfn = fpfn + + (drm_buddy_block_size(mm, block) >> PAGE_SHIFT); + + if (place->fpfn < lpfn && place->lpfn > fpfn) + return true; + } + return false; + } default: + break; + } + + return true; } static void i915_ttm_evict_flags(struct ttm_buffer_object *bo, @@ -850,7 +895,23 @@ void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj) } else if (!i915_gem_object_has_pages(obj)) { bo->priority = I915_TTM_PRIO_NO_PAGES; } else { - bo->priority = I915_TTM_PRIO_HAS_PAGES; + struct ttm_resource_manager *man = + ttm_manager_type(bo->bdev, bo->resource->mem_type); + + /* + * If we need to place an LMEM resource which doesn't need CPU + * access then we should try not to victimize mappable objects + * first, since we likely end up stealing more of the mappable + * portion. And likewise when we try to find space for a mappble + * object, we know not to ever victimize objects that don't + * occupy any mappable pages. 
+ */ + if (i915_ttm_cpu_maps_iomem(bo->resource) && + i915_ttm_buddy_man_visible_size(man) < man->size && + !(obj->flags & I915_BO_ALLOC_GPU_ONLY)) + bo->priority = I915_TTM_PRIO_NEEDS_CPU_ACCESS; + else + bo->priority = I915_TTM_PRIO_HAS_PAGES; } ttm_bo_move_to_lru_tail(bo, bo->resource, NULL); From patchwork Thu Feb 10 12:13:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 12741843 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 6E8E0C433F5 for ; Thu, 10 Feb 2022 12:17:16 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 1C2B010E858; Thu, 10 Feb 2022 12:16:57 +0000 (UTC) Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 2C55A10E844; Thu, 10 Feb 2022 12:16:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1644495409; x=1676031409; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=dMRdSfYHydNcnHm12lb+kY9HjrEEF6pjjZKo7g13XIM=; b=VPF9mf89nBxHfXHNktkhlCau268dIqtE6wj0ywCXC0HthrcXFBvviTgf E91LXVtX1AMbkr5ADQoV6YrHtY38S+Xv2yfDq+jxcdYlvQrEq6TTtOOoL nfHq2u1GrcWtH/TSD5ktX9qFWcdFTmISly0Tip3eM3jh75ezCLHoMjJoI hWbLwpXCYFw2Mxh6aKPTKjuJMyEYdrgRY7h+VpGzWl23hH8nY5mwoQ4fz v6HiLAfszi1KOURcqqMkOz13616WCWRpyviiMzbQMKKCn7hsI93CRodow cGSnHE4jt0DNCGQ2KckMZkreMpltmAbJVAsFIu4ZZ6r8iOHv4+yvkr20/ A==; X-IronPort-AV: E=McAfee;i="6200,9189,10253"; a="247060978" X-IronPort-AV: E=Sophos;i="5.88,358,1635231600"; d="scan'208";a="247060978" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Feb 2022 04:16:50 -0800 X-IronPort-AV: E=Sophos;i="5.88,358,1635231600"; d="scan'208";a="485664985" Received: from paulbarr-mobl2.ger.corp.intel.com (HELO mwauld-desk1.intel.com) ([10.252.11.226]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Feb 2022 04:16:48 -0800 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Date: Thu, 10 Feb 2022 12:13:07 +0000 Message-Id: <20220210121313.701004-10-matthew.auld@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220210121313.701004-1-matthew.auld@intel.com> References: <20220210121313.701004-1-matthew.auld@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v2 09/15] drm/i915/ttm: mappable migration on fault X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , dri-devel@lists.freedesktop.org Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" The end goal is to have userspace tell the kernel what buffers will require CPU access. However, if we ever reach the CPU fault handler and the current resource is not mappable, then we should attempt to migrate the buffer to the mappable portion of LMEM, or even system memory, if the allowable placements permit it.
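In rough terms, the fault-path fallback amounts to the following walk over the object's allowable placements (a condensed, illustrative sketch rather than the literal hunk; fault_make_mappable() is an invented name, and the real logic lives inline in vm_fault_ttm() below):

static int fault_make_mappable(struct drm_i915_gem_object *obj)
{
	int err = -ENODEV;
	int i;

	/* Already CPU visible? Then there is nothing to fix up. */
	if (i915_ttm_resource_mappable(i915_gem_to_ttm(obj)->resource))
		return 0;

	for (i = 0; i < obj->mm.n_placements; i++) {
		struct intel_memory_region *mr = obj->mm.placements[i];

		/* Skip placements that have no CPU visible pages at all */
		if (!mr->io_size && mr->type != INTEL_MEMORY_SYSTEM)
			continue;

		/*
		 * The real hunk also clears the TOPDOWN hint from the flags
		 * here, so that the migration is allowed to land in the
		 * mappable range.
		 */
		err = __i915_ttm_migrate(obj, mr, obj->flags);
		if (!err)
			break;
	}

	return err;
}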
Signed-off-by: Matthew Auld Cc: Thomas Hellström Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 54 ++++++++++++++++++++++--- 1 file changed, 48 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index 3790e569d6a9..780513e98fdc 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -636,11 +636,24 @@ static void i915_ttm_swap_notify(struct ttm_buffer_object *bo) i915_ttm_purge(obj); } +static bool i915_ttm_resource_mappable(struct ttm_resource *res) +{ + struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(res); + + if (!i915_ttm_cpu_maps_iomem(res)) + return true; + + return bman_res->used_visible_size == bman_res->base.num_pages; +} + static int i915_ttm_io_mem_reserve(struct ttm_device *bdev, struct ttm_resource *mem) { if (!i915_ttm_cpu_maps_iomem(mem)) return 0; + if (!i915_ttm_resource_mappable(mem)) + return -EINVAL; + mem->bus.caching = ttm_write_combined; mem->bus.is_iomem = true; @@ -779,14 +792,15 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj) * Gem forced migration using the i915_ttm_migrate() op, is allowed even * to regions that are not in the object's list of allowable placements. */ -static int i915_ttm_migrate(struct drm_i915_gem_object *obj, - struct intel_memory_region *mr) +static int __i915_ttm_migrate(struct drm_i915_gem_object *obj, + struct intel_memory_region *mr, + unsigned int flags) { struct ttm_place requested; struct ttm_placement placement; int ret; - i915_ttm_place_from_region(mr, &requested, obj->flags); + i915_ttm_place_from_region(mr, &requested, flags); placement.num_placement = 1; placement.num_busy_placement = 1; placement.placement = &requested; @@ -809,6 +823,12 @@ static int i915_ttm_migrate(struct drm_i915_gem_object *obj, return 0; } +static int i915_ttm_migrate(struct drm_i915_gem_object *obj, + struct intel_memory_region *mr) +{ + return __i915_ttm_migrate(obj, mr, obj->flags); +} + static void i915_ttm_put_pages(struct drm_i915_gem_object *obj, struct sg_table *st) { @@ -953,9 +973,6 @@ static vm_fault_t vm_fault_ttm(struct vm_fault *vmf) if (!obj) return VM_FAULT_SIGBUS; - if (obj->flags & I915_BO_ALLOC_GPU_ONLY) - return -EINVAL; - /* Sanity check that we allow writing into this object */ if (unlikely(i915_gem_object_is_readonly(obj) && area->vm_flags & VM_WRITE)) @@ -970,6 +987,31 @@ static vm_fault_t vm_fault_ttm(struct vm_fault *vmf) return VM_FAULT_SIGBUS; } + if (!i915_ttm_resource_mappable(bo->resource)) { + int err = -ENODEV; + int i; + + for (i = 0; i < obj->mm.n_placements; i++) { + struct intel_memory_region *mr = obj->mm.placements[i]; + unsigned int flags; + + if (!mr->io_size && mr->type != INTEL_MEMORY_SYSTEM) + continue; + + flags = obj->flags; + flags &= ~I915_BO_ALLOC_TOPDOWN; + err = __i915_ttm_migrate(obj, mr, flags); + if (!err) + break; + } + + if (err) { + drm_dbg(dev, "Unable to make resource CPU accessible\n"); + dma_resv_unlock(bo->base.resv); + return VM_FAULT_SIGBUS; + } + } + if (drm_dev_enter(dev, &idx)) { ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot, TTM_BO_VM_NUM_PREFAULT); From patchwork Thu Feb 10 12:13:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 12741841 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org 
(gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E9F36C433FE for ; Thu, 10 Feb 2022 12:17:14 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id DCD5B10E846; Thu, 10 Feb 2022 12:16:56 +0000 (UTC) Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 6752910E84E; Thu, 10 Feb 2022 12:16:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1644495410; x=1676031410; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=wycYYxY7PbN+gGKA4oTWjO+11Fw5YZXkuvhKHFG19G0=; b=lKruS7wLf26ipFJAMG5sdATv65tW5z/aKtTWG9iP0zvcLB+cm2toFN/5 0qZO6pO0xtyy/gKKtss3UEQ4MBxaLIsnOfwhaErr7Yll29fDKQHkwabQv RZR1CS6wSWTgCwezLzdW9PJ6Ptgg1kSYSzigrN0ifL0WsXgD5PNnXSudf Blh5aT7KgWstdQ7IIRnMI8QjtPVW+5FrHekFecufZn+vpogMKDa9WR+Fd aM65vMhs93mzJrMp26MP+h7iEiCL65qFv2BlpPg0Sa3NuWBsmUsnvWbnb D54Usorsgeu1KxTQ3XwM42djJkhxFsYsucz04hijk40GvLOpqR7ATWO/U w==; X-IronPort-AV: E=McAfee;i="6200,9189,10253"; a="247060978" X-IronPort-AV: E=Sophos;i="5.88,358,1635231600"; d="scan'208";a="247060978" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Feb 2022 04:16:50 -0800 X-IronPort-AV: E=Sophos;i="5.88,358,1635231600"; d="scan'208";a="485664985" Received: from paulbarr-mobl2.ger.corp.intel.com (HELO mwauld-desk1.intel.com) ([10.252.11.226]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Feb 2022 04:16:48 -0800 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Date: Thu, 10 Feb 2022 12:13:08 +0000 Message-Id: <20220210121313.701004-11-matthew.auld@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220210121313.701004-1-matthew.auld@intel.com> References: <20220210121313.701004-1-matthew.auld@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v2 10/15] drm/i915/selftests: exercise mmap migration X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , dri-devel@lists.freedesktop.org Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Exercise each of the migration scenarios, verifying that the final placement and buffer contents match our expectations. 
v2(Thomas): Replace for_i915_gem_ww() block with simpler object_lock() Signed-off-by: Matthew Auld Cc: Thomas Hellström Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 2 +- .../drm/i915/gem/selftests/i915_gem_mman.c | 304 ++++++++++++++++++ 2 files changed, 305 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index 780513e98fdc..7940dfec1d56 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -999,7 +999,7 @@ static vm_fault_t vm_fault_ttm(struct vm_fault *vmf) continue; flags = obj->flags; - flags &= ~I915_BO_ALLOC_TOPDOWN; + flags &= ~I915_BO_ALLOC_GPU_ONLY; err = __i915_ttm_migrate(obj, mr, flags); if (!err) break; diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c index ba29767348be..7a73a0b015b9 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c @@ -10,6 +10,7 @@ #include "gt/intel_gpu_commands.h" #include "gt/intel_gt.h" #include "gt/intel_gt_pm.h" +#include "gt/intel_migrate.h" #include "gem/i915_gem_region.h" #include "huge_gem_object.h" #include "i915_selftest.h" @@ -999,6 +1000,308 @@ static int igt_mmap(void *arg) return 0; } +static void igt_close_objects(struct drm_i915_private *i915, + struct list_head *objects) +{ + struct drm_i915_gem_object *obj, *on; + + list_for_each_entry_safe(obj, on, objects, st_link) { + i915_gem_object_lock(obj, NULL); + if (i915_gem_object_has_pinned_pages(obj)) + i915_gem_object_unpin_pages(obj); + /* No polluting the memory region between tests */ + __i915_gem_object_put_pages(obj); + i915_gem_object_unlock(obj); + list_del(&obj->st_link); + i915_gem_object_put(obj); + } + + cond_resched(); + + i915_gem_drain_freed_objects(i915); +} + +static void igt_make_evictable(struct list_head *objects) +{ + struct drm_i915_gem_object *obj; + + list_for_each_entry(obj, objects, st_link) { + i915_gem_object_lock(obj, NULL); + if (i915_gem_object_has_pinned_pages(obj)) + i915_gem_object_unpin_pages(obj); + i915_gem_object_unlock(obj); + } + + cond_resched(); +} + +static int igt_fill_mappable(struct intel_memory_region *mr, + struct list_head *objects) +{ + u64 size, total; + int err; + + total = 0; + size = mr->io_size; + do { + struct drm_i915_gem_object *obj; + + obj = i915_gem_object_create_region(mr, size, 0, 0); + if (IS_ERR(obj)) { + err = PTR_ERR(obj); + goto err_close; + } + + list_add(&obj->st_link, objects); + + err = i915_gem_object_pin_pages_unlocked(obj); + if (err) { + if (err != -ENXIO && err != -ENOMEM) + goto err_close; + + if (size == mr->min_page_size) { + err = 0; + break; + } + + size >>= 1; + continue; + } + + total += obj->base.size; + } while (1); + + pr_info("%s filled=%lluMiB\n", __func__, total >> 20); + return 0; + +err_close: + igt_close_objects(mr->i915, objects); + return err; +} + +static int ___igt_mmap_migrate(struct drm_i915_private *i915, + struct drm_i915_gem_object *obj, + unsigned long addr, + bool unfaultable) +{ + struct vm_area_struct *area; + int err = 0, i; + + pr_info("igt_mmap(%s, %d) @ %lx\n", + obj->mm.region->name, I915_MMAP_TYPE_FIXED, addr); + + mmap_read_lock(current->mm); + area = vma_lookup(current->mm, addr); + mmap_read_unlock(current->mm); + if (!area) { + pr_err("%s: Did not create a vm_area_struct for the mmap\n", + obj->mm.region->name); + err = -EINVAL; + goto out_unmap; + } + + for (i = 0; i < obj->base.size / 
sizeof(u32); i++) { + u32 __user *ux = u64_to_user_ptr((u64)(addr + i * sizeof(*ux))); + u32 x; + + if (get_user(x, ux)) { + err = -EFAULT; + if (!unfaultable) { + pr_err("%s: Unable to read from mmap, offset:%zd\n", + obj->mm.region->name, i * sizeof(x)); + goto out_unmap; + } + + continue; + } + + if (unfaultable) { + pr_err("%s: Faulted unmappable memory\n", + obj->mm.region->name); + err = -EINVAL; + goto out_unmap; + } + + if (x != expand32(POISON_INUSE)) { + pr_err("%s: Read incorrect value from mmap, offset:%zd, found:%x, expected:%x\n", + obj->mm.region->name, + i * sizeof(x), x, expand32(POISON_INUSE)); + err = -EINVAL; + goto out_unmap; + } + + x = expand32(POISON_FREE); + if (put_user(x, ux)) { + pr_err("%s: Unable to write to mmap, offset:%zd\n", + obj->mm.region->name, i * sizeof(x)); + err = -EFAULT; + goto out_unmap; + } + } + + if (unfaultable) { + if (err == -EFAULT) + err = 0; + } else { + obj->flags &= ~I915_BO_ALLOC_GPU_ONLY; + err = wc_check(obj); + } +out_unmap: + vm_munmap(addr, obj->base.size); + return err; +} + +#define IGT_MMAP_MIGRATE_TOPDOWN (1 << 0) +#define IGT_MMAP_MIGRATE_FILL (1 << 1) +#define IGT_MMAP_MIGRATE_EVICTABLE (1 << 2) +#define IGT_MMAP_MIGRATE_UNFAULTABLE (1 << 3) +static int __igt_mmap_migrate(struct intel_memory_region **placements, + int n_placements, + struct intel_memory_region *expected_mr, + unsigned int flags) +{ + struct drm_i915_private *i915 = placements[0]->i915; + struct drm_i915_gem_object *obj; + struct i915_request *rq = NULL; + unsigned long addr; + LIST_HEAD(objects); + u64 offset; + int err; + + obj = __i915_gem_object_create_user(i915, PAGE_SIZE, + placements, + n_placements); + if (IS_ERR(obj)) + return PTR_ERR(obj); + + if (flags & IGT_MMAP_MIGRATE_TOPDOWN) + obj->flags |= I915_BO_ALLOC_GPU_ONLY; + + err = __assign_mmap_offset(obj, I915_MMAP_TYPE_FIXED, &offset, NULL); + if (err) + goto out_put; + + /* + * This will eventually create a GEM context, due to opening a dummy drm + * file, which needs a tiny amount of mappable device memory for the top + * level paging structures (and perhaps scratch), so make sure we + * allocate early, to avoid tears.
+ */ + addr = igt_mmap_offset(i915, offset, obj->base.size, + PROT_WRITE, MAP_SHARED); + if (IS_ERR_VALUE(addr)) { + err = addr; + goto out_put; + } + + if (flags & IGT_MMAP_MIGRATE_FILL) { + err = igt_fill_mappable(placements[0], &objects); + if (err) + goto out_put; + } + + err = i915_gem_object_lock(obj, NULL); + if (err) + goto out_put; + + err = i915_gem_object_pin_pages(obj); + if (err) { + i915_gem_object_unlock(obj); + goto out_put; + } + + err = intel_context_migrate_clear(to_gt(i915)->migrate.context, NULL, + obj->mm.pages->sgl, obj->cache_level, + i915_gem_object_is_lmem(obj), + expand32(POISON_INUSE), &rq); + i915_gem_object_unpin_pages(obj); + if (rq) { + dma_resv_add_excl_fence(obj->base.resv, &rq->fence); + i915_gem_object_set_moving_fence(obj, &rq->fence); + i915_request_put(rq); + } + i915_gem_object_unlock(obj); + if (err) + goto out_put; + + if (flags & IGT_MMAP_MIGRATE_EVICTABLE) + igt_make_evictable(&objects); + + err = ___igt_mmap_migrate(i915, obj, addr, + flags & IGT_MMAP_MIGRATE_UNFAULTABLE); + if (!err && obj->mm.region != expected_mr) { + pr_err("%s region mismatch %s\n", __func__, expected_mr->name); + err = -EINVAL; + } + +out_put: + i915_gem_object_put(obj); + igt_close_objects(i915, &objects); + return err; +} + +static int igt_mmap_migrate(void *arg) +{ + struct drm_i915_private *i915 = arg; + struct intel_memory_region *system = i915->mm.regions[INTEL_REGION_SMEM]; + struct intel_memory_region *mr; + enum intel_region_id id; + + for_each_memory_region(mr, i915, id) { + struct intel_memory_region *mixed[] = { mr, system }; + struct intel_memory_region *single[] = { mr }; + int err; + + if (mr->private) + continue; + + if (!mr->io_size || mr->io_size == mr->total) + continue; + + /* + * Allocate in the mappable portion, should be no surprises here. + */ + err = __igt_mmap_migrate(mixed, ARRAY_SIZE(mixed), mr, 0); + if (err) + return err; + + /* + * Allocate in the non-mappable portion, but force migrating to + * the mappable portion on fault (LMEM -> LMEM) + */ + err = __igt_mmap_migrate(single, ARRAY_SIZE(single), mr, + IGT_MMAP_MIGRATE_TOPDOWN | + IGT_MMAP_MIGRATE_FILL | + IGT_MMAP_MIGRATE_EVICTABLE); + if (err) + return err; + + /* + * Allocate in the non-mappable portion, but force spilling into + * system memory on fault (LMEM -> SMEM) + */ + err = __igt_mmap_migrate(mixed, ARRAY_SIZE(mixed), system, + IGT_MMAP_MIGRATE_TOPDOWN | + IGT_MMAP_MIGRATE_FILL); + if (err) + return err; + + /* + * Allocate in the non-mappable portion, but since the mappable + * portion is already full, and we can't spill to system memory, + * we should expect the fault to fail.
+ */ + err = __igt_mmap_migrate(single, ARRAY_SIZE(single), mr, + IGT_MMAP_MIGRATE_TOPDOWN | + IGT_MMAP_MIGRATE_FILL | + IGT_MMAP_MIGRATE_UNFAULTABLE); + if (err) + return err; + } + + return 0; +} + static const char *repr_mmap_type(enum i915_mmap_type type) { switch (type) { @@ -1424,6 +1727,7 @@ int i915_gem_mman_live_selftests(struct drm_i915_private *i915) SUBTEST(igt_smoke_tiling), SUBTEST(igt_mmap_offset_exhaustion), SUBTEST(igt_mmap), + SUBTEST(igt_mmap_migrate), SUBTEST(igt_mmap_access), SUBTEST(igt_mmap_revoke), SUBTEST(igt_mmap_gpu), From patchwork Thu Feb 10 12:13:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 12741839 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E3E95C4332F for ; Thu, 10 Feb 2022 12:17:10 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 2CC1D10E857; Thu, 10 Feb 2022 12:16:56 +0000 (UTC) Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id E5E7810E852; Thu, 10 Feb 2022 12:16:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1644495411; x=1676031411; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=iC6qgl7ytn02+2kCYRuzhkojGgWcNJlVqnRbss8TRFc=; b=h5TBUBdBwxLdu9jAa31aGEwt269vrN5QKEMSo0by7l8IpGec97/PLGlx ku3eOl6EJ+bSa3kBN9NQXiJv1VFKyAVGcDuQZq7XHZiyHoXaLz+i8Al9D Jg5KnYWtMj6HGD8H8Gvh0Vzwwp9SoN57Kki3v86WqyHJJickowetDhQ5d FbkoRQXFE7J9hxHSBnbRSTbQoeNrj57Nmm8PWyMzKNB0BG/BCbYcPerEL GE2bt/8ZgbtVcZ+S5uthSOOtyZoQgKtwcfw//7TcCCIO6J4zqWvodjLaY f/X93phm1PJV9IwBF1sVjepo8l7kWqJ4achPckun/NPvmEhKZ0vCOOoO4 Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10253"; a="247060982" X-IronPort-AV: E=Sophos;i="5.88,358,1635231600"; d="scan'208";a="247060982" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Feb 2022 04:16:51 -0800 X-IronPort-AV: E=Sophos;i="5.88,358,1635231600"; d="scan'208";a="485664988" Received: from paulbarr-mobl2.ger.corp.intel.com (HELO mwauld-desk1.intel.com) ([10.252.11.226]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Feb 2022 04:16:50 -0800 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Date: Thu, 10 Feb 2022 12:13:09 +0000 Message-Id: <20220210121313.701004-12-matthew.auld@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220210121313.701004-1-matthew.auld@intel.com> References: <20220210121313.701004-1-matthew.auld@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v2 11/15] drm/i915/selftests: handle allocation failures X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , dri-devel@lists.freedesktop.org Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" If we have to contend with non-mappable LMEM, then we need to ensure the object fits within the mappable 
portion, like in the selftests, where we later try to CPU access the pages. However, if it can't, then we need to handle this gracefully, without throwing an error. Also, it looks like TTM will return -ENOMEM in ttm_bo_mem_space() after exhausting all possible placements. Signed-off-by: Matthew Auld Cc: Thomas Hellström Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/selftests/huge_pages.c | 2 +- drivers/gpu/drm/i915/selftests/intel_memory_region.c | 8 +++++++- 2 files changed, 8 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c index 42db9cd30978..3caa178bbd07 100644 --- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c +++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c @@ -1344,7 +1344,7 @@ static int igt_ppgtt_smoke_huge(void *arg) err = i915_gem_object_pin_pages_unlocked(obj); if (err) { - if (err == -ENXIO || err == -E2BIG) { + if (err == -ENXIO || err == -E2BIG || err == -ENOMEM) { i915_gem_object_put(obj); size >>= 1; goto try_again; diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c index 56dec9723601..ba32893e0873 100644 --- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c +++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c @@ -822,8 +822,14 @@ static int igt_lmem_create_with_ps(void *arg) i915_gem_object_lock(obj, NULL); err = i915_gem_object_pin_pages(obj); - if (err) + if (err) { + if (err == -ENXIO || err == -E2BIG || err == -ENOMEM) { + pr_info("%s not enough lmem for ps(%u) err=%d\n", + __func__, ps, err); + err = 0; + } goto out_put; + } daddr = i915_gem_object_get_dma_address(obj, 0); if (!IS_ALIGNED(daddr, ps)) { From patchwork Thu Feb 10 12:13:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 12741845 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 18122C433EF for ; Thu, 10 Feb 2022 12:17:18 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 43F6B10E83F; Thu, 10 Feb 2022 12:16:58 +0000 (UTC) Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 20E4210E855; Thu, 10 Feb 2022 12:16:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1644495413; x=1676031413; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=zjxrYNJABbwO8rguV3U/HPho+jTAf9Uh9F18QktYDiM=; b=D5cKvpoXRUpTQ9gtk7YymEx0h7fc1htmId7OvEn+VvtaNYh2OL2z//AJ KmlKndGs5YjO07INbJL8Q78m23JZFObZcO1NGhfPpi++fNZtPWh5W0zRH nyMc84mPbJe7BZ9teV5tjZ79nClUaUyDrZmJJhnf7+46PbIdQ1ybRkWDE EFMJKqtR0t/8xwC3nMELn/5seqOm6r61htFBh+llGEeRl5eFxt101m/3Z mzpA23ZYY5qA6IuS++ZUjVdwIT/Bn1/HGfq/ENRPObaZFXfCUozVaBNP8 phi/tndWT6fZjYiCXed68Zd70f7SLdh1nwOUXjlO2cz4sEz4G45dAvTaH Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10253"; a="247060984" X-IronPort-AV: E=Sophos;i="5.88,358,1635231600"; d="scan'208";a="247060984" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
10 Feb 2022 04:16:52 -0800 X-IronPort-AV: E=Sophos;i="5.88,358,1635231600"; d="scan'208";a="485664991" Received: from paulbarr-mobl2.ger.corp.intel.com (HELO mwauld-desk1.intel.com) ([10.252.11.226]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Feb 2022 04:16:51 -0800 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Date: Thu, 10 Feb 2022 12:13:10 +0000 Message-Id: <20220210121313.701004-13-matthew.auld@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220210121313.701004-1-matthew.auld@intel.com> References: <20220210121313.701004-1-matthew.auld@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v2 12/15] drm/i915/create: apply ALLOC_GPU_ONLY by default X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , dri-devel@lists.freedesktop.org Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Starting from DG2+, when dealing with LMEM, we assume that by default all userspace allocations should be placed in the non-mappable portion of LMEM. Note that dumb buffers are not included here, since these are not "GPU accelerated" and likely need CPU access. In a later patch userspace will be able to provide a hint if CPU access to the buffer is needed. Signed-off-by: Matthew Auld Cc: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_create.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c index 9402d4bf4ffc..cc9ddb943f96 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_create.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c @@ -424,6 +424,15 @@ i915_gem_create_ext_ioctl(struct drm_device *dev, void *data, ext_data.n_placements = 1; } + /* + * TODO: add a userspace hint to force CPU_ACCESS for the object, which + * can override this. 
+ */ + if (!IS_DG1(i915) && (ext_data.n_placements > 1 || + ext_data.placements[0]->type != + INTEL_MEMORY_SYSTEM)) + ext_data.flags |= I915_BO_ALLOC_GPU_ONLY; + obj = __i915_gem_object_create_user_ext(i915, args->size, ext_data.placements, ext_data.n_placements, From patchwork Thu Feb 10 12:13:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 12741846 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 047C2C433FE for ; Thu, 10 Feb 2022 12:17:19 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 8C23410E84E; Thu, 10 Feb 2022 12:16:58 +0000 (UTC) Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 784F610E857; Thu, 10 Feb 2022 12:16:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1644495414; x=1676031414; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=YiF4zueZ8KsbNIcxq+w8XyH6uq5rC/6tVu5ZsYD8CN4=; b=VkLprGlrAE1MqKBIzS76I4CVHEJ1YuZt4xIDw4RaI/Xh6bPMFwtYJ9Uy 2jicyfYrP7+YRRYEPEXGmtf8cVnBEAfvYrlghIrICpnKazD0eq+xTfLbA lyf11vF4cxFHvp1kPZFFlU4dzgGilgOG38IAEhtZty1Vcw1lM53gkHTAA PXCuO/ioxGP/e+R49SvdP7YOgP6YM0jJ3kSr596s6n/M4xAM0fWX6PfjI OQiP8dDdVbf0tYk9Y5b4yktypvFYAZQlWn0VGKJ9EUaYyzyPw0EO/cpH/ zWpgewiLtAoQU2obE5HIutAnZNo4ibCxWE40lrDT07dzHry4XbHrDDdvW Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10253"; a="247060985" X-IronPort-AV: E=Sophos;i="5.88,358,1635231600"; d="scan'208";a="247060985" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Feb 2022 04:16:54 -0800 X-IronPort-AV: E=Sophos;i="5.88,358,1635231600"; d="scan'208";a="485664994" Received: from paulbarr-mobl2.ger.corp.intel.com (HELO mwauld-desk1.intel.com) ([10.252.11.226]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Feb 2022 04:16:53 -0800 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Date: Thu, 10 Feb 2022 12:13:11 +0000 Message-Id: <20220210121313.701004-14-matthew.auld@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220210121313.701004-1-matthew.auld@intel.com> References: <20220210121313.701004-1-matthew.auld@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v2 13/15] drm/i915/uapi: add NEEDS_CPU_ACCESS hint X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , dri-devel@lists.freedesktop.org Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" If set, force the allocation to be placed in the mappable portion of LMEM. One big restriction here is that system memory must be given as a potential placement for the object, that way we can always spill the object into system memory if we can't make space. XXX: Still very much WIP and needs IGTs. Including it now just for the sake of having a more complete picture.
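For illustration, the intended userspace usage would look roughly like the hypothetical snippet below (create_cpu_visible_lmem() is a made-up helper, not part of the patch; error handling trimmed). Note how I915_MEMORY_CLASS_SYSTEM is listed alongside I915_MEMORY_CLASS_DEVICE, per the restriction above:

#include <errno.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static int create_cpu_visible_lmem(int fd, __u64 size, __u32 *handle)
{
	/* Prefer device-local memory, but allow spilling to system memory */
	struct drm_i915_gem_memory_class_instance regions[] = {
		{ .memory_class = I915_MEMORY_CLASS_DEVICE, .memory_instance = 0 },
		{ .memory_class = I915_MEMORY_CLASS_SYSTEM, .memory_instance = 0 },
	};
	struct drm_i915_gem_create_ext_memory_regions ext = {
		.base = { .name = I915_GEM_CREATE_EXT_MEMORY_REGIONS },
		.num_regions = 2,
		.regions = (uintptr_t)regions,
	};
	struct drm_i915_gem_create_ext create = {
		.size = size,
		.flags = I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS,
		.extensions = (uintptr_t)&ext,
	};

	if (ioctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create))
		return -errno;

	*handle = create.handle;
	return 0;
}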
Signed-off-by: Matthew Auld Cc: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_create.c | 28 ++++++++++++------- include/uapi/drm/i915_drm.h | 31 +++++++++++++++++++++- 2 files changed, 49 insertions(+), 10 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c index cc9ddb943f96..2737977f81a4 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_create.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c @@ -238,6 +238,7 @@ struct create_ext { struct drm_i915_private *i915; struct intel_memory_region *placements[INTEL_REGION_UNKNOWN]; unsigned int n_placements; + unsigned int placement_mask; unsigned long flags; }; @@ -334,6 +335,7 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args, for (i = 0; i < args->num_regions; i++) ext_data->placements[i] = placements[i]; + ext_data->placement_mask = mask; return 0; out_dump: @@ -408,7 +410,7 @@ i915_gem_create_ext_ioctl(struct drm_device *dev, void *data, struct drm_i915_gem_object *obj; int ret; - if (args->flags) + if (args->flags & ~I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS) return -EINVAL; ret = i915_user_extensions(u64_to_user_ptr(args->extensions), @@ -424,14 +426,22 @@ i915_gem_create_ext_ioctl(struct drm_device *dev, void *data, ext_data.n_placements = 1; } - /* - * TODO: add a userspace hint to force CPU_ACCESS for the object, which - * can override this. - */ - if (!IS_DG1(i915) && (ext_data.n_placements > 1 || - ext_data.placements[0]->type != - INTEL_MEMORY_SYSTEM)) - ext_data.flags |= I915_BO_ALLOC_GPU_ONLY; + if (args->flags & I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS) { + if (ext_data.n_placements == 1) + return -EINVAL; + + /* + * We always need to be able to spill to system memory, if we + * can't place in the mappable part of LMEM. + */ + if (!(ext_data.placement_mask & BIT(INTEL_REGION_SMEM))) + return -EINVAL; + } else { + if (!IS_DG1(i915) && (ext_data.n_placements > 1 || + ext_data.placements[0]->type != + INTEL_MEMORY_SYSTEM)) + ext_data.flags |= I915_BO_ALLOC_GPU_ONLY; + } obj = __i915_gem_object_create_user_ext(i915, args->size, ext_data.placements, diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h index 914ebd9290e5..925685bd261e 100644 --- a/include/uapi/drm/i915_drm.h +++ b/include/uapi/drm/i915_drm.h @@ -3157,7 +3157,36 @@ struct drm_i915_gem_create_ext { * Object handles are nonzero. */ __u32 handle; - /** @flags: MBZ */ + /** + * @flags: Optional flags. + * + * Supported values: + * + * I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS - Signal to the kernel that + * the object will need to be accessed via the CPU. + * + * Only valid when placing objects in I915_MEMORY_CLASS_DEVICE, and + * only strictly required on platforms where only some of the device + * memory is directly visible or mappable through the CPU, like on DG2+. + * + * One of the placements MUST also be I915_MEMORY_CLASS_SYSTEM, to + * ensure we can always spill the allocation to system memory, if we + * can't place the object in the mappable part of + * I915_MEMORY_CLASS_DEVICE. + * + * Note that buffers that need to be captured with EXEC_OBJECT_CAPTURE + * will need to enable this hint, if the object can also be placed in + * I915_MEMORY_CLASS_DEVICE, starting from DG2+. The execbuf call will + * throw an error otherwise. This also means that such objects will need + * I915_MEMORY_CLASS_SYSTEM set as a possible placement. + * + * Without this hint, the kernel will assume that non-mappable + * I915_MEMORY_CLASS_DEVICE is preferred for this object.
Note that the + * kernel can still migrate the object to the mappable part, as a last + * resort, if userspace ever CPU faults this object, but this might be + * expensive, and so ideally should be avoided. + */ +#define I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS (1 << 0) __u32 flags; /** * @extensions: The chain of extensions to apply to this object. From patchwork Thu Feb 10 12:13:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 12741847 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id DB9D6C433EF for ; Thu, 10 Feb 2022 12:17:19 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id A011310E851; Thu, 10 Feb 2022 12:16:59 +0000 (UTC) Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id D522A10E858; Thu, 10 Feb 2022 12:16:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1644495415; x=1676031415; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=cjCJVuuCkoU3zNRpy+3l4jsvcwd2Oxu68MhMnc3Ph9s=; b=HHII/6s+1h0WAos267zAaI4KUYnZSvSqlIRDQYhDA6aFHAXkF6UYM/wV l1EN0ak6TC/0Fz0Fmr8N33c6HuUm+vq08uw8JZ4AsRnLmMytxC+JyCLWJ JIQVvpPGN7sgid+1drkqDiOINvOzkMuHb7VA/m88BB5V2rUM6mWphLZMt LnwmBELJxtpLzB/7wu7dbH78ulIA9XwqwfH8sZpFPV0LPBajQ9sDNmmRB 38/ESFndn5q8xSaaTQzSer3l5QRyDZJfEoaHgKL/PgHzE75N9Z8dD22/x PC+QPs+rnR5FccEBYOkmrpwHQHoTu+wopZFAh3FgGo96SfVWIc9vOImg+ A==; X-IronPort-AV: E=McAfee;i="6200,9189,10253"; a="247060987" X-IronPort-AV: E=Sophos;i="5.88,358,1635231600"; d="scan'208";a="247060987" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Feb 2022 04:16:55 -0800 X-IronPort-AV: E=Sophos;i="5.88,358,1635231600"; d="scan'208";a="485664996" Received: from paulbarr-mobl2.ger.corp.intel.com (HELO mwauld-desk1.intel.com) ([10.252.11.226]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Feb 2022 04:16:54 -0800 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Date: Thu, 10 Feb 2022 12:13:12 +0000 Message-Id: <20220210121313.701004-15-matthew.auld@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220210121313.701004-1-matthew.auld@intel.com> References: <20220210121313.701004-1-matthew.auld@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v2 14/15] drm/i915/uapi: forbid ALLOC_GPU_ONLY for error capture X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , dri-devel@lists.freedesktop.org Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" On platforms where there might be non-mappable LMEM, force userspace to mark the buffers with the correct hint. When dumping the BO contents during capture we need CPU access. Note this only applies to buffers that can be placed in LMEM, and also doesn't impact DG1. 
v2(Reported-by: kernel test robot ): - Also update the function signature on !CONFIG_DRM_I915_CAPTURE_ERROR builds. Signed-off-by: Matthew Auld Cc: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 498b458fd784..017f928cbcaf 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -1965,7 +1965,7 @@ eb_find_first_request_added(struct i915_execbuffer *eb) #if IS_ENABLED(CONFIG_DRM_I915_CAPTURE_ERROR) /* Stage with GFP_KERNEL allocations before we enter the signaling critical path */ -static void eb_capture_stage(struct i915_execbuffer *eb) +static int eb_capture_stage(struct i915_execbuffer *eb) { const unsigned int count = eb->buffer_count; unsigned int i = count, j; @@ -1978,6 +1978,9 @@ static void eb_capture_stage(struct i915_execbuffer *eb) if (!(flags & EXEC_OBJECT_CAPTURE)) continue; + if (vma->obj->flags & I915_BO_ALLOC_GPU_ONLY) + return -EINVAL; + for_each_batch_create_order(eb, j) { struct i915_capture_list *capture; @@ -1990,6 +1993,8 @@ static void eb_capture_stage(struct i915_execbuffer *eb) eb->capture_lists[j] = capture; } } + + return 0; } /* Commit once we're in the critical path */ @@ -2031,7 +2036,7 @@ static void eb_capture_list_clear(struct i915_execbuffer *eb) #else -static void eb_capture_stage(struct i915_execbuffer *eb) +static int eb_capture_stage(struct i915_execbuffer *eb) { } @@ -3418,7 +3423,9 @@ i915_gem_do_execbuffer(struct drm_device *dev, } ww_acquire_done(&eb.ww.ctx); - eb_capture_stage(&eb); + err = eb_capture_stage(&eb); + if (err) + goto err_vma; out_fence = eb_requests_create(&eb, in_fence, out_fence_fd); if (IS_ERR(out_fence)) { From patchwork Thu Feb 10 12:13:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 12741848 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 16752C433F5 for ; Thu, 10 Feb 2022 12:17:30 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 3DBBA10E865; Thu, 10 Feb 2022 12:17:22 +0000 (UTC) Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 36B6F10E85E; Thu, 10 Feb 2022 12:16:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1644495417; x=1676031417; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=tRn6WeSGt5dbrfhbjKgoqgtJP1Z7aqYafLvLNaJAnOI=; b=L5OS5zJ8PN97Mj/YGo4sAoamuIHHnatevNWkjkakOhiEc5iFAljSX99y X7LvJDNu2RiJ05ouqhomneU0v/LHPE+YEuDdixM4dUGpp7f2TvV5Z26Di WDSCFMh6MbYGr5bp0ufiEXzF/k16zT3xVeJFrHRCTqnJp6aLtNTiXLBm6 0IMFtL+di22DyAbGwZphLPPvGZ7br2A51VP7s07D2slswZQIPyFRC6rEC 0R9PdVEIi4WCZTJulElHTvvptqxStP7XIDyPbrw+iceQj8J96DBNocjox tmSvuDnwe5qM5g7Ub3+xXprY1yEqrLBK6EApsVdN4Y6cvbW8Ce4WwnvhZ w==; X-IronPort-AV: E=McAfee;i="6200,9189,10253"; a="247060989" X-IronPort-AV: E=Sophos;i="5.88,358,1635231600"; d="scan'208";a="247060989" Received: 
from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Feb 2022 04:16:56 -0800 X-IronPort-AV: E=Sophos;i="5.88,358,1635231600"; d="scan'208";a="485664998" Received: from paulbarr-mobl2.ger.corp.intel.com (HELO mwauld-desk1.intel.com) ([10.252.11.226]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Feb 2022 04:16:55 -0800 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Date: Thu, 10 Feb 2022 12:13:13 +0000 Message-Id: <20220210121313.701004-16-matthew.auld@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220210121313.701004-1-matthew.auld@intel.com> References: <20220210121313.701004-1-matthew.auld@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v2 15/15] drm/i915/lmem: don't treat small BAR as an error X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , dri-devel@lists.freedesktop.org Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Just pass along the probed io_size. The backend should be able to utilize the entire range here, even if some of it is non-mappable. It does leave open the question of what to do with stolen local-memory. Signed-off-by: Matthew Auld Cc: Thomas Hellström Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gt/intel_region_lmem.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_region_lmem.c b/drivers/gpu/drm/i915/gt/intel_region_lmem.c index 01838b8ce4c7..ad3cf348b4a8 100644 --- a/drivers/gpu/drm/i915/gt/intel_region_lmem.c +++ b/drivers/gpu/drm/i915/gt/intel_region_lmem.c @@ -201,6 +201,7 @@ static struct intel_memory_region *setup_lmem(struct intel_gt *gt) struct intel_memory_region *mem; resource_size_t min_page_size; resource_size_t io_start; + resource_size_t io_size; resource_size_t lmem_size; int err; @@ -211,7 +212,8 @@ static struct intel_memory_region *setup_lmem(struct intel_gt *gt) lmem_size = intel_uncore_read64(uncore, GEN12_GSMBASE); io_start = pci_resource_start(pdev, 2); - if (GEM_WARN_ON(lmem_size > pci_resource_len(pdev, 2))) + io_size = min(pci_resource_len(pdev, 2), lmem_size); + if (!io_size) return ERR_PTR(-ENODEV); min_page_size = HAS_64K_PAGES(i915) ? I915_GTT_PAGE_SIZE_64K : @@ -221,7 +223,7 @@ static struct intel_memory_region *setup_lmem(struct intel_gt *gt) lmem_size, min_page_size, io_start, - lmem_size, + io_size, INTEL_MEMORY_LOCAL, 0, &intel_region_lmem_ops);
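To make the clamping concrete: on a hypothetical part with 16G of LMEM behind a 4G BAR, setup_lmem() now registers the region with the full lmem_size but a clamped io_size, instead of tripping the old GEM_WARN_ON (illustrative numbers only, not taken from the patch):

	lmem_size = intel_uncore_read64(uncore, GEN12_GSMBASE); /* e.g. 16G of LMEM */
	io_size = min(pci_resource_len(pdev, 2), lmem_size);    /* 4G BAR -> io_size = 4G */
	/* region created with total = 16G, CPU visible = first 4G; io_size == 0 stays fatal */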