From patchwork Mon Feb 21 14:16:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 12753674 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 23F7DC433F5 for ; Mon, 21 Feb 2022 14:16:42 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 5FB2510E32A; Mon, 21 Feb 2022 14:16:41 +0000 (UTC) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by gabe.freedesktop.org (Postfix) with ESMTPS id 7EF1210E3E1; Mon, 21 Feb 2022 14:16:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1645452997; x=1676988997; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=lFGIImgHd6/ff58qxg2a/Fs06PSapnqmi056xJ7TBe0=; b=W+XlbStdcrsXHzI7+7e2sBhQPQzQC/tm8JDU4RlkgZdjRImRkxr7ZqcR gTYOIo9LOFddz71za0VmltznddNUfwjGZ71jGuUJ4lIcGka+mIflzw/sq 16WYtrSQaiz6B+KHJYp+Uv9VD4KCMy/LQEUSenbQNCe/alDg8rJZnHuUQ ozwY5lue/dEVuShTTIcLYnVks6F3pZf9lJzMO+/ZXHWXM4yhHnVjYwUuY OhzbY6DLsIeJpVqM40FD7UEpFrU8EVlH/nQBuLyA2UKPBBnrADwxgSwW/ H22ZyquthuUaV2Hg9oUKlsEXlPpR58nmkxoCRbDPqt3OuWZOq+y27viNw A==; X-IronPort-AV: E=McAfee;i="6200,9189,10264"; a="251467130" X-IronPort-AV: E=Sophos;i="5.88,385,1635231600"; d="scan'208";a="251467130" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Feb 2022 06:16:37 -0800 X-IronPort-AV: E=Sophos;i="5.88,385,1635231600"; d="scan'208";a="606400530" Received: from joeyegax-mobl.ger.corp.intel.com (HELO mwauld-desk1.intel.com) ([10.252.23.97]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Feb 2022 06:16:36 -0800 From: Matthew Auld To: igt-dev@lists.freedesktop.org Date: Mon, 21 Feb 2022 14:16:15 +0000 Message-Id: <20220221141620.2490914-2-matthew.auld@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220221141620.2490914-1-matthew.auld@intel.com> References: <20220221141620.2490914-1-matthew.auld@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH i-g-t v2 1/6] lib/i915_drm_local: Add I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , intel-gfx@lists.freedesktop.org Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" For now dump into i915_drm_local.h. Once the uapi on the kernel side is merged, and is part of drm-next, we can sync the kernel headers and remove this. Signed-off-by: Matthew Auld Cc: Thomas Hellström --- lib/i915/i915_drm_local.h | 27 +++++++++++++++++++++++++++ 1 file changed, 27 insertions(+) diff --git a/lib/i915/i915_drm_local.h b/lib/i915/i915_drm_local.h index 9e82c968..7b5285f3 100644 --- a/lib/i915/i915_drm_local.h +++ b/lib/i915/i915_drm_local.h @@ -21,6 +21,33 @@ extern "C" { */ #define I915_ENGINE_CLASS_COMPUTE 4 +/* + * Signal to the kernel that the object will need to be accessed via + * the CPU. 
+ * + * Only valid when placing objects in I915_MEMORY_CLASS_DEVICE, and only + * strictly required on platforms where only some of the device memory + * is directly visible or mappable through the CPU, like on DG2+. + * + * One of the placements MUST also be I915_MEMORY_CLASS_SYSTEM, to + * ensure we can always spill the allocation to system memory, if we + * can't place the object in the mappable part of + * I915_MEMORY_CLASS_DEVICE. + * + * Note that buffers that need to be captured with EXEC_OBJECT_CAPTURE, + * will need to enable this hint, if the object can also be placed in + * I915_MEMORY_CLASS_DEVICE, starting from DG2+. The execbuf call will + * throw an error otherwise. This also means that such objects will need + * I915_MEMORY_CLASS_SYSTEM set as a possible placement. + * + * Without this hint, the kernel will assume that non-mappable + * I915_MEMORY_CLASS_DEVICE is preferred for this object. Note that the + * kernel can still migrate the object to the mappable part, as a last + * resort, if userspace ever CPU faults this object, but this might be + * expensive, and so ideally should be avoided. + */ +#define I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS (1 << 0) + #if defined(__cplusplus) } #endif From patchwork Mon Feb 21 14:16:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 12753675 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A328AC433EF for ; Mon, 21 Feb 2022 14:16:44 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 1C36710E3E1; Mon, 21 Feb 2022 14:16:44 +0000 (UTC) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by gabe.freedesktop.org (Postfix) with ESMTPS id 742F710E344; Mon, 21 Feb 2022 14:16:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1645453002; x=1676989002; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=/kUxtqSV6eXcLq4eBa+uvvhhj1iTVCQfP0MVCJHXUsw=; b=X+iIZ/9Fe6Glu7it4dGDc+3+mJdts7djfxIW/URTc0UWM9b8Mz12D5I+ qKw3brSMfppCIxWPpAtaJFMYRTOFKJm+EqThgSdLVY9OuLywzWWLrdJ0B UXSkcGxTCkNzRAyCpLplp6QRBuSNPrwtSchjJgP93zxMyYweLNBf7QU2d +iHAoeBSSGJEo+oJK2nY1JRgvbmxa/SeU24/pK/zkmqvb/vT3jT/7d7ve 3VvIVGQW1V5QE0I6hVN9lRgmW/VR1+7dhZehxKLzbqlV++oyFn3bgJJMD LhWBvnreaVMldZReSGCljXU2jDBesfA9yPe0ajSiiTKDMWuOEXM9AeXm2 Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10264"; a="251467136" X-IronPort-AV: E=Sophos;i="5.88,385,1635231600"; d="scan'208";a="251467136" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Feb 2022 06:16:38 -0800 X-IronPort-AV: E=Sophos;i="5.88,385,1635231600"; d="scan'208";a="606400531" Received: from joeyegax-mobl.ger.corp.intel.com (HELO mwauld-desk1.intel.com) ([10.252.23.97]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Feb 2022 06:16:37 -0800 From: Matthew Auld To: igt-dev@lists.freedesktop.org Date: Mon, 21 Feb 2022 14:16:16 +0000 Message-Id: <20220221141620.2490914-3-matthew.auld@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: 
<20220221141620.2490914-1-matthew.auld@intel.com> References: <20220221141620.2490914-1-matthew.auld@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH i-g-t v2 2/6] lib/i915: wire up optional flags for gem_create_ext X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , intel-gfx@lists.freedesktop.org Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" For now limit to direct callers. Signed-off-by: Matthew Auld Cc: Thomas Hellström --- lib/i915/gem_create.c | 9 ++++++--- lib/i915/gem_create.h | 5 +++-- lib/i915/intel_memory_region.c | 2 +- tests/i915/gem_create.c | 24 ++++++++++++------------ tests/i915/gem_pxp.c | 2 +- 5 files changed, 23 insertions(+), 19 deletions(-) diff --git a/lib/i915/gem_create.c b/lib/i915/gem_create.c index b2e8d559..d245a32e 100644 --- a/lib/i915/gem_create.c +++ b/lib/i915/gem_create.c @@ -48,11 +48,12 @@ uint32_t gem_create(int fd, uint64_t size) return handle; } -int __gem_create_ext(int fd, uint64_t *size, uint32_t *handle, +int __gem_create_ext(int fd, uint64_t *size, uint32_t flags, uint32_t *handle, struct i915_user_extension *ext) { struct drm_i915_gem_create_ext create = { .size = *size, + .flags = flags, .extensions = to_user_pointer(ext), }; int err = 0; @@ -73,6 +74,7 @@ int __gem_create_ext(int fd, uint64_t *size, uint32_t *handle, * gem_create_ext: * @fd: open i915 drm file descriptor * @size: desired size of the buffer + * @flags: optional flags * @ext: optional extensions chain * * This wraps the GEM_CREATE_EXT ioctl, which allocates a new gem buffer object @@ -80,11 +82,12 @@ int __gem_create_ext(int fd, uint64_t *size, uint32_t *handle, * * Returns: The file-private handle of the created buffer object */ -uint32_t gem_create_ext(int fd, uint64_t size, struct i915_user_extension *ext) +uint32_t gem_create_ext(int fd, uint64_t size, uint32_t flags, + struct i915_user_extension *ext) { uint32_t handle; - igt_assert_eq(__gem_create_ext(fd, &size, &handle, ext), 0); + igt_assert_eq(__gem_create_ext(fd, &size, flags, &handle, ext), 0); return handle; } diff --git a/lib/i915/gem_create.h b/lib/i915/gem_create.h index c2b531b4..02232693 100644 --- a/lib/i915/gem_create.h +++ b/lib/i915/gem_create.h @@ -12,8 +12,9 @@ int __gem_create(int fd, uint64_t *size, uint32_t *handle); uint32_t gem_create(int fd, uint64_t size); -int __gem_create_ext(int fd, uint64_t *size, uint32_t *handle, +int __gem_create_ext(int fd, uint64_t *size, uint32_t flags, uint32_t *handle, struct i915_user_extension *ext); -uint32_t gem_create_ext(int fd, uint64_t size, struct i915_user_extension *ext); +uint32_t gem_create_ext(int fd, uint64_t size, uint32_t flags, + struct i915_user_extension *ext); #endif /* GEM_CREATE_H */ diff --git a/lib/i915/intel_memory_region.c b/lib/i915/intel_memory_region.c index a8759e06..f0c8bc7c 100644 --- a/lib/i915/intel_memory_region.c +++ b/lib/i915/intel_memory_region.c @@ -208,7 +208,7 @@ int __gem_create_in_memory_region_list(int fd, uint32_t *handle, uint64_t *size, }; int ret; - ret = __gem_create_ext(fd, size, handle, &ext_regions.base); + ret = __gem_create_ext(fd, size, 0, handle, &ext_regions.base); /* * Provide fallback for stable kernels if region passed in the array diff --git a/tests/i915/gem_create.c b/tests/i915/gem_create.c index 45804cde..a6c3c9d9 100644 --- a/tests/i915/gem_create.c +++ 
b/tests/i915/gem_create.c @@ -331,38 +331,38 @@ static void create_ext_placement_sanity_check(int fd) * behaviour. */ size = PAGE_SIZE; - igt_assert_eq(__gem_create_ext(fd, &size, &handle, 0), 0); + igt_assert_eq(__gem_create_ext(fd, &size, 0, &handle, 0), 0); gem_close(fd, handle); /* Try some uncreative invalid combinations */ setparam_region.regions = to_user_pointer(®ion_smem); setparam_region.num_regions = 0; size = PAGE_SIZE; - igt_assert_neq(__gem_create_ext(fd, &size, &handle, + igt_assert_neq(__gem_create_ext(fd, &size, 0, &handle, &setparam_region.base), 0); setparam_region.regions = to_user_pointer(®ion_smem); setparam_region.num_regions = regions->num_regions + 1; size = PAGE_SIZE; - igt_assert_neq(__gem_create_ext(fd, &size, &handle, + igt_assert_neq(__gem_create_ext(fd, &size, 0, &handle, &setparam_region.base), 0); setparam_region.regions = to_user_pointer(®ion_smem); setparam_region.num_regions = -1; size = PAGE_SIZE; - igt_assert_neq(__gem_create_ext(fd, &size, &handle, + igt_assert_neq(__gem_create_ext(fd, &size, 0, &handle, &setparam_region.base), 0); setparam_region.regions = to_user_pointer(®ion_invalid); setparam_region.num_regions = 1; size = PAGE_SIZE; - igt_assert_neq(__gem_create_ext(fd, &size, &handle, + igt_assert_neq(__gem_create_ext(fd, &size, 0, &handle, &setparam_region.base), 0); setparam_region.regions = to_user_pointer(®ion_invalid); setparam_region.num_regions = 0; size = PAGE_SIZE; - igt_assert_neq(__gem_create_ext(fd, &size, &handle, + igt_assert_neq(__gem_create_ext(fd, &size, 0, &handle, &setparam_region.base), 0); uregions = calloc(regions->num_regions + 1, sizeof(uint32_t)); @@ -373,7 +373,7 @@ static void create_ext_placement_sanity_check(int fd) setparam_region.regions = to_user_pointer(uregions); setparam_region.num_regions = regions->num_regions + 1; size = PAGE_SIZE; - igt_assert_neq(__gem_create_ext(fd, &size, &handle, + igt_assert_neq(__gem_create_ext(fd, &size, 0, &handle, &setparam_region.base), 0); if (regions->num_regions > 1) { @@ -386,7 +386,7 @@ static void create_ext_placement_sanity_check(int fd) setparam_region.regions = to_user_pointer(dups); setparam_region.num_regions = 2; size = PAGE_SIZE; - igt_assert_neq(__gem_create_ext(fd, &size, &handle, + igt_assert_neq(__gem_create_ext(fd, &size, 0, &handle, &setparam_region.base), 0); } } @@ -396,7 +396,7 @@ static void create_ext_placement_sanity_check(int fd) setparam_region.regions = to_user_pointer(uregions); setparam_region.num_regions = regions->num_regions; size = PAGE_SIZE; - igt_assert_neq(__gem_create_ext(fd, &size, &handle, + igt_assert_neq(__gem_create_ext(fd, &size, 0, &handle, &setparam_region.base), 0); free(uregions); @@ -412,7 +412,7 @@ static void create_ext_placement_sanity_check(int fd) to_user_pointer(&setparam_region_next); size = PAGE_SIZE; - igt_assert_neq(__gem_create_ext(fd, &size, &handle, + igt_assert_neq(__gem_create_ext(fd, &size, 0, &handle, &setparam_region.base), 0); setparam_region.base.next_extension = 0; } @@ -444,7 +444,7 @@ static void create_ext_placement_all(int fd) setparam_region.num_regions = regions->num_regions; size = PAGE_SIZE; - igt_assert_eq(__gem_create_ext(fd, &size, &handle, + igt_assert_eq(__gem_create_ext(fd, &size, 0, &handle, &setparam_region.base), 0); gem_close(fd, handle); free(uregions); @@ -473,7 +473,7 @@ static void create_ext_placement_each(int fd) setparam_region.num_regions = 1; size = PAGE_SIZE; - igt_assert_eq(__gem_create_ext(fd, &size, &handle, + igt_assert_eq(__gem_create_ext(fd, &size, 0, &handle, 
&setparam_region.base), 0); gem_close(fd, handle); } diff --git a/tests/i915/gem_pxp.c b/tests/i915/gem_pxp.c index 5f269bab..65618556 100644 --- a/tests/i915/gem_pxp.c +++ b/tests/i915/gem_pxp.c @@ -40,7 +40,7 @@ static int create_bo_ext(int i915, uint32_t size, bool protected_is_true, uint32 ext = &protected_ext.base; *bo_out = 0; - ret = __gem_create_ext(i915, &size64, bo_out, ext); + ret = __gem_create_ext(i915, &size64, 0, bo_out, ext); return ret; } From patchwork Mon Feb 21 14:16:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 12753679 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C24B3C433EF for ; Mon, 21 Feb 2022 14:16:48 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 0B83810E703; Mon, 21 Feb 2022 14:16:47 +0000 (UTC) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by gabe.freedesktop.org (Postfix) with ESMTPS id A0F1D10E344; Mon, 21 Feb 2022 14:16:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1645453003; x=1676989003; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=KbR5JMXW5vxjTD72SYa4Cs5GP+9QPMyZcaE6neFPaTI=; b=ageIkbUFmcrNT1mX18J2vFTW+71z5ezsDLYSRmpLvPGuOnyCP6IaAIBl YhUqWnI3EKdhju6KGalgBnXd8TAq87jqctvVsRTjQgHUcoGQPmcs4FqHB UbLouuN9QCcpCCwpyQSzzRfNu2n0dF5Iu1Xm+J2VVffi88caazcggQO2s 2GsOxHnBceagt3HqyeahIWKYf855HPk3EIXn9TchZ6YtRiFrJywSvqskL p8XhoIojwU3pup5hVojWtruhBkMR/zs1p7Mb2NPk+WctXuNhKmByIJYs5 GzkstGR3JKH5JVvQp1k+gB4HBlyYbvzy+TkdcaYD7rZ2rUjsNdzBHoOEB w==; X-IronPort-AV: E=McAfee;i="6200,9189,10264"; a="251467144" X-IronPort-AV: E=Sophos;i="5.88,385,1635231600"; d="scan'208";a="251467144" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Feb 2022 06:16:40 -0800 X-IronPort-AV: E=Sophos;i="5.88,385,1635231600"; d="scan'208";a="606400534" Received: from joeyegax-mobl.ger.corp.intel.com (HELO mwauld-desk1.intel.com) ([10.252.23.97]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Feb 2022 06:16:38 -0800 From: Matthew Auld To: igt-dev@lists.freedesktop.org Date: Mon, 21 Feb 2022 14:16:17 +0000 Message-Id: <20220221141620.2490914-4-matthew.auld@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220221141620.2490914-1-matthew.auld@intel.com> References: <20220221141620.2490914-1-matthew.auld@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH i-g-t v2 3/6] tests/i915/gem_create: test NEEDS_CPU_ACCESS X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , intel-gfx@lists.freedesktop.org Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Add some basic tests for this new flag. 
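As context for the tests that follow, the core allocation pattern being exercised looks roughly like this. This is only a minimal sketch using the helpers as extended earlier in this series; the function name and the 4KiB size are illustrative, and the usual IGT headers are assumed:

static uint32_t create_cpu_visible_bo(int i915, uint64_t size)
{
	/* LMEM first, with SMEM as the mandatory spill placement. */
	struct drm_i915_gem_memory_class_instance regions[] = {
		{ I915_MEMORY_CLASS_DEVICE, 0 },
		{ I915_MEMORY_CLASS_SYSTEM, 0 },
	};
	struct drm_i915_gem_create_ext_memory_regions ext_regions = {
		.base = { .name = I915_GEM_CREATE_EXT_MEMORY_REGIONS },
		.regions = to_user_pointer(regions),
		.num_regions = ARRAY_SIZE(regions),
	};

	/* Asserts on failure; use __gem_create_ext() to inspect the errno. */
	return gem_create_ext(i915, size,
			      I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS,
			      &ext_regions.base);
}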
Signed-off-by: Matthew Auld Cc: Thomas Hellström --- tests/i915/gem_create.c | 334 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 334 insertions(+) diff --git a/tests/i915/gem_create.c b/tests/i915/gem_create.c index a6c3c9d9..318e6491 100644 --- a/tests/i915/gem_create.c +++ b/tests/i915/gem_create.c @@ -43,6 +43,8 @@ #include #include #include +#include +#include #include "drm.h" #include "drmtest.h" @@ -481,6 +483,327 @@ static void create_ext_placement_each(int fd) free(regions); } +static bool supports_needs_cpu_access(int fd) +{ + struct drm_i915_gem_memory_class_instance regions[] = { + { I915_MEMORY_CLASS_DEVICE, }, + { I915_MEMORY_CLASS_SYSTEM, }, + }; + struct drm_i915_gem_create_ext_memory_regions setparam_region = { + .base = { .name = I915_GEM_CREATE_EXT_MEMORY_REGIONS }, + .regions = to_user_pointer(®ions), + .num_regions = ARRAY_SIZE(regions), + }; + uint64_t size = PAGE_SIZE; + uint32_t handle; + int ret; + + ret = __gem_create_ext(fd, &size, + I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS, + &handle, &setparam_region.base); + if (!ret) { + gem_close(fd, handle); + igt_assert(gem_has_lmem(fd)); /* Should be dgpu only */ + } + + return ret == 0; +} + +static uint32_t batch_create_size(int fd, uint64_t size) +{ + const uint32_t bbe = MI_BATCH_BUFFER_END; + uint32_t handle; + + handle = gem_create(fd, size); + gem_write(fd, handle, 0, &bbe, sizeof(bbe)); + + return handle; +} + +static int upload(int fd, uint32_t handle) +{ + struct drm_i915_gem_exec_object2 exec[2] = {}; + struct drm_i915_gem_execbuffer2 execbuf = { + .buffers_ptr = to_user_pointer(&exec), + .buffer_count = 2, + }; + + /* + * To be reasonably sure that we are not being swindled, let's make sure + * to 'touch' the pages from the GPU first to ensure the object is for + * sure placed in one of requested regions. + */ + exec[0].handle = handle; + exec[1].handle = batch_create_size(fd, PAGE_SIZE); + + return __gem_execbuf(fd, &execbuf); +} + +static int alloc_lmem(int fd, uint32_t *handle, + struct drm_i915_gem_memory_class_instance ci, + uint64_t size, bool cpu_access, bool do_upload) +{ + struct drm_i915_gem_memory_class_instance regions[] = { + ci, { I915_MEMORY_CLASS_SYSTEM, }, + }; + struct drm_i915_gem_create_ext_memory_regions setparam_region = { + .base = { .name = I915_GEM_CREATE_EXT_MEMORY_REGIONS }, + .regions = to_user_pointer(®ions), + }; + uint32_t flags; + + igt_assert_eq(ci.memory_class, I915_MEMORY_CLASS_DEVICE); + + flags = 0; + setparam_region.num_regions = 1; + if (cpu_access) { + flags = I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS, + setparam_region.num_regions = 2; + } + + *handle = gem_create_ext(fd, size, flags, &setparam_region.base); + + if (do_upload) + return upload(fd, *handle); + + return 0; +} + +static void create_ext_cpu_access_sanity_check(int fd) +{ + struct drm_i915_gem_create_ext_memory_regions setparam_region = { + .base = { .name = I915_GEM_CREATE_EXT_MEMORY_REGIONS }, + }; + struct drm_i915_query_memory_regions *regions; + uint64_t size = PAGE_SIZE; + uint32_t handle; + int i; + + /* + * The ABI is that FLAG_NEEDS_CPU_ACCESS can only be applied to LMEM + + * SMEM objects. Make sure the kernel follows that. Let's check if we + * can indeed fault the object. 
+ */ + + /* Implicit placement; should fail */ + igt_assert_eq(__gem_create_ext(fd, &size, + I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS, + &handle, NULL), -EINVAL); + + regions = gem_get_query_memory_regions(fd); + igt_assert(regions); + igt_assert(regions->num_regions); + + for (i = 0; i < regions->num_regions; i++) { + struct drm_i915_gem_memory_class_instance ci_regions[2] = { + regions->regions[i].region, + { I915_MEMORY_CLASS_SYSTEM, }, + }; + uint32_t *ptr; + + setparam_region.regions = to_user_pointer(ci_regions); + setparam_region.num_regions = 1; + + /* Single explicit placement; should fail */ + igt_assert_eq(__gem_create_ext(fd, &size, + I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS, + &handle, &setparam_region.base), + -EINVAL); + + if (ci_regions[0].memory_class == I915_MEMORY_CLASS_SYSTEM) + continue; + + /* + * Now combine with system memory; should pass. We should also + * be able to fault it. + */ + setparam_region.num_regions = 2; + igt_assert_eq(__gem_create_ext(fd, &size, + I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS, + &handle, &setparam_region.base), + 0); + upload(fd, handle); + ptr = gem_mmap_offset__fixed(fd, handle, 0, size, + PROT_READ | PROT_WRITE); + ptr[0] = 0xdeadbeaf; + gem_close(fd, handle); + + /* + * It should also work just fine without the flag, where in the + * worst case we need to migrate it when faulting it. + */ + igt_assert_eq(__gem_create_ext(fd, &size, + 0, + &handle, &setparam_region.base), + 0); + upload(fd, handle); + ptr = gem_mmap_offset__fixed(fd, handle, 0, size, + PROT_READ | PROT_WRITE); + ptr[0] = 0xdeadbeaf; + gem_close(fd, handle); + } + + free(regions); +} + +static uint64_t get_visible_size(int device, uint16_t instance) +{ + FILE *file; + char *line = NULL; + size_t line_size; + int fd, i; + + fd = igt_debugfs_open(device, "i915_gem_objects", O_RDONLY); + file = fdopen(fd, "r"); + igt_require(getline(&line, &line_size, file) > 0); + + i = 0; + while (getline(&line, &line_size, file) > 0) { + const char needle[] = "visible_size: "; + const char *ptr = strstr(line, needle); + long int size_mb; + char *end; + + if (!ptr) + continue; + + if (i++ != instance) + continue; + + size_mb = strtol(ptr + ARRAY_SIZE(needle)-1, &end, 10); + + free(line); + fclose(file); + close(fd); + + return size_mb << 20ULL; + } + + igt_assert(!"reached"); +} + +static jmp_buf jmp; + +__noreturn static void sigtrap(int sig) +{ + siglongjmp(jmp, sig); +} + +static void trap_sigbus(uint32_t *ptr) +{ + sighandler_t old_sigbus; + + old_sigbus = signal(SIGBUS, sigtrap); + switch (sigsetjmp(jmp, SIGBUS)) { + case SIGBUS: + break; + case 0: + *ptr = 0xdeadbeaf; + default: + igt_assert(!"reached"); + break; + } + signal(SIGBUS, old_sigbus); +} + +static void create_ext_cpu_access_big(int fd) +{ + struct drm_i915_query_memory_regions *regions; + int i; + + /* + * Sanity check that we can still CPU map an overly large object, even + * if it happens to be larger the CPU visible portion of LMEM. Also + * check that an overly large allocation, which can't be spilled into + * system memory does indeed fail. 
+ */ + + regions = gem_get_query_memory_regions(fd); + igt_assert(regions); + igt_assert(regions->num_regions); + + for (i = 0; i < regions->num_regions; i++) { + struct drm_i915_memory_region_info qmr = regions->regions[i]; + struct drm_i915_gem_memory_class_instance ci = qmr.region; + uint64_t size, visible_size, lmem_size; + uint32_t handle; + uint32_t *ptr; + + if (ci.memory_class == I915_MEMORY_CLASS_SYSTEM) + continue; + + lmem_size = qmr.probed_size; + visible_size = get_visible_size(fd, ci.memory_instance); + if (!visible_size) + continue; + + if (visible_size <= (0.70 * lmem_size)) { + /* + * Too big. We should still be able to allocate it just + * fine, but faulting should result in tears. + */ + size = visible_size; + igt_assert_eq(alloc_lmem(fd, &handle, ci, size, false, true), 0); + ptr = gem_mmap_offset__fixed(fd, handle, 0, size, + PROT_READ | PROT_WRITE); + trap_sigbus(ptr); + gem_close(fd, handle); + + /* + * Too big again, but this time we can spill to system + * memory when faulting the object. + */ + size = visible_size; + igt_assert_eq(alloc_lmem(fd, &handle, ci, size, true, true), 0); + ptr = gem_mmap_offset__fixed(fd, handle, 0, size, + PROT_READ | PROT_WRITE); + ptr[0] = 0xdeadbeaf; + gem_close(fd, handle); + + /* + * Let's also move the upload to after faulting the + * pages. The current behaviour is that the pages are + * only allocated in device memory when initially + * touched by the GPU. With this in mind we should also + * make sure that the pages are indeed migrated, as + * expected. + */ + size = visible_size; + igt_assert_eq(alloc_lmem(fd, &handle, ci, size, false, false), 0); + ptr = gem_mmap_offset__fixed(fd, handle, 0, size, + PROT_READ | PROT_WRITE); + ptr[0] = 0xdeadbeaf; /* temp system memory */ + igt_assert_eq(upload(fd, handle), 0); + trap_sigbus(ptr); /* non-mappable device memory */ + gem_close(fd, handle); + } + + /* + * Should fit. We likely need to migrate to the mappable portion + * on fault though, if this device has a small BAR, given how + * large the initial allocation is. + */ + size = visible_size >> 1; + igt_assert_eq(alloc_lmem(fd, &handle, ci, size, false, true), 0); + ptr = gem_mmap_offset__fixed(fd, handle, 0, size, + PROT_READ | PROT_WRITE); + ptr[0] = 0xdeadbeaf; + gem_close(fd, handle); + + /* + * And then with the CPU_ACCESS flag enabled; should also be no + * surprises here. 
+ */ + igt_assert_eq(alloc_lmem(fd, &handle, ci, size, true, true), 0); + ptr = gem_mmap_offset__fixed(fd, handle, 0, size, + PROT_READ | PROT_WRITE); + ptr[0] = 0xdeadbeaf; + gem_close(fd, handle); + } + + free(regions); +} + igt_main { int fd = -1; @@ -516,4 +839,15 @@ igt_main igt_subtest("create-ext-placement-all") create_ext_placement_all(fd); + igt_describe("Verify the basic functionally and expected ABI contract around I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS"); + igt_subtest("create-ext-cpu-access-sanity-check") { + igt_require(supports_needs_cpu_access(fd)); + create_ext_cpu_access_sanity_check(fd); + } + + igt_describe("Verify the extreme cases with very large objects and I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS"); + igt_subtest("create-ext-cpu-access-big") { + igt_require(supports_needs_cpu_access(fd)); + create_ext_cpu_access_big(fd); + } } From patchwork Mon Feb 21 14:16:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 12753677 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 89D04C433F5 for ; Mon, 21 Feb 2022 14:16:48 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id D082610E673; Mon, 21 Feb 2022 14:16:46 +0000 (UTC) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by gabe.freedesktop.org (Postfix) with ESMTPS id 3881810E344; Mon, 21 Feb 2022 14:16:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1645453004; x=1676989004; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Su0uNhfUgunf+Un+BfUA8TY3aGm8dX1AnYFbMmLyLv4=; b=ngBExQP0lXWUveZXykzx0o/KEzmmk+cfQf9kUHPIxYxMPOfTzuzI0LdL OYTqlIZb41zX3qUk2q/s6FVOeNJp5j3W3tz3dfOW/biGOweEBjGZ+hhcS 3rjqojwKXTsiu6BMncwOYzWpWLh1tuy7RKd3vytvogm1v0+1rIxiH6fBQ q9mcOdWeUWrGAesrEYeWpLijsBIFX/yTBwRbo/0E7GpH4RyWW7RJdKNVC juUaaXJN6jQj8BWH4XWXjbloNKNPXODzFgprx7/92KJnkFbBv0sHScqM7 INDmWtAVDlUyghyGI78pnDJjsDE6vWitBPkmE0d7Yt0iMDezNJDBwF4ER A==; X-IronPort-AV: E=McAfee;i="6200,9189,10264"; a="251467154" X-IronPort-AV: E=Sophos;i="5.88,385,1635231600"; d="scan'208";a="251467154" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Feb 2022 06:16:41 -0800 X-IronPort-AV: E=Sophos;i="5.88,385,1635231600"; d="scan'208";a="606400536" Received: from joeyegax-mobl.ger.corp.intel.com (HELO mwauld-desk1.intel.com) ([10.252.23.97]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Feb 2022 06:16:40 -0800 From: Matthew Auld To: igt-dev@lists.freedesktop.org Date: Mon, 21 Feb 2022 14:16:18 +0000 Message-Id: <20220221141620.2490914-5-matthew.auld@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220221141620.2490914-1-matthew.auld@intel.com> References: <20220221141620.2490914-1-matthew.auld@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH i-g-t v2 4/6] lib/i915: add gem_create_with_cpu_access_in_memory_regions X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & 
development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , intel-gfx@lists.freedesktop.org Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Most users shouldn't care about such an interface, but where required, this should be useful to aid in setting NEEDS_CPU_ACCESS for a given BO. Underneath we try to smooth over needing to provide an explicit SMEM region, or if this is SMEM-only, we don't want the kernel to throw an error. Put it to use in gem_exec_capture, where a proper hint is now required by the kernel on DG2+, for objects marked with EXEC_OBJECT_CAPTURE, that can also be placed in LMEM. Signed-off-by: Matthew Auld Cc: Thomas Hellström --- lib/i915/intel_memory_region.c | 10 +++++--- lib/i915/intel_memory_region.h | 46 +++++++++++++++++++++++++++++++--- tests/i915/gem_exec_capture.c | 6 ++--- tests/i915/gem_lmem_swapping.c | 2 +- 4 files changed, 52 insertions(+), 12 deletions(-) diff --git a/lib/i915/intel_memory_region.c b/lib/i915/intel_memory_region.c index f0c8bc7c..4893c5ba 100644 --- a/lib/i915/intel_memory_region.c +++ b/lib/i915/intel_memory_region.c @@ -197,7 +197,7 @@ bool gem_has_lmem(int fd) /* A version of gem_create_in_memory_region_list which can be allowed to fail so that the object creation can be retried */ -int __gem_create_in_memory_region_list(int fd, uint32_t *handle, uint64_t *size, +int __gem_create_in_memory_region_list(int fd, uint32_t *handle, uint64_t *size, uint32_t flags, struct drm_i915_gem_memory_class_instance *mem_regions, int num_regions) { @@ -208,7 +208,9 @@ int __gem_create_in_memory_region_list(int fd, uint32_t *handle, uint64_t *size, }; int ret; - ret = __gem_create_ext(fd, size, 0, handle, &ext_regions.base); + ret = __gem_create_ext(fd, size, flags, handle, &ext_regions.base); + if (flags && ret == -EINVAL) + ret = __gem_create_ext(fd, size, 0, handle, &ext_regions.base); /* * Provide fallback for stable kernels if region passed in the array @@ -231,12 +233,12 @@ int __gem_create_in_memory_region_list(int fd, uint32_t *handle, uint64_t *size, * @mem_regions: memory regions array (priority list) * @num_regions: @mem_regions length */ -uint32_t gem_create_in_memory_region_list(int fd, uint64_t size, +uint32_t gem_create_in_memory_region_list(int fd, uint64_t size, uint32_t flags, struct drm_i915_gem_memory_class_instance *mem_regions, int num_regions) { uint32_t handle; - int ret = __gem_create_in_memory_region_list(fd, &handle, &size, + int ret = __gem_create_in_memory_region_list(fd, &handle, &size, flags, mem_regions, num_regions); igt_assert_eq(ret, 0); return handle; diff --git a/lib/i915/intel_memory_region.h b/lib/i915/intel_memory_region.h index 936e7d1c..7cc119ec 100644 --- a/lib/i915/intel_memory_region.h +++ b/lib/i915/intel_memory_region.h @@ -21,6 +21,7 @@ * IN THE SOFTWARE. 
*/ #include "igt_collection.h" +#include "i915_drm_local.h" #ifndef INTEL_MEMORY_REGION_H #define INTEL_MEMORY_REGION_H @@ -64,11 +65,11 @@ bool gem_has_lmem(int fd); struct drm_i915_gem_memory_class_instance; -int __gem_create_in_memory_region_list(int fd, uint32_t *handle, uint64_t *size, +int __gem_create_in_memory_region_list(int fd, uint32_t *handle, uint64_t *size, uint32_t flags, struct drm_i915_gem_memory_class_instance *mem_regions, int num_regions); -uint32_t gem_create_in_memory_region_list(int fd, uint64_t size, +uint32_t gem_create_in_memory_region_list(int fd, uint64_t size, uint32_t flags, struct drm_i915_gem_memory_class_instance *mem_regions, int num_regions); @@ -84,7 +85,7 @@ uint32_t gem_create_in_memory_region_list(int fd, uint64_t size, arr_query__[i__].memory_class = MEMORY_TYPE_FROM_REGION(arr__[i__]); \ arr_query__[i__].memory_instance = MEMORY_INSTANCE_FROM_REGION(arr__[i__]); \ } \ - __gem_create_in_memory_region_list(fd, handle, size, arr_query__, ARRAY_SIZE(arr_query__)); \ + __gem_create_in_memory_region_list(fd, handle, size, 0, arr_query__, ARRAY_SIZE(arr_query__)); \ }) #define gem_create_in_memory_regions(fd, size, regions...) ({ \ unsigned int arr__[] = { regions }; \ @@ -93,7 +94,44 @@ uint32_t gem_create_in_memory_region_list(int fd, uint64_t size, arr_query__[i__].memory_class = MEMORY_TYPE_FROM_REGION(arr__[i__]); \ arr_query__[i__].memory_instance = MEMORY_INSTANCE_FROM_REGION(arr__[i__]); \ } \ - gem_create_in_memory_region_list(fd, size, arr_query__, ARRAY_SIZE(arr_query__)); \ + gem_create_in_memory_region_list(fd, size, 0, arr_query__, ARRAY_SIZE(arr_query__)); \ +}) + +/* + * Create an object that requires CPU access. This only becomes interesting on + * platforms that have a small BAR for LMEM CPU access. Without this the object + * might need to be migrated when CPU faulting the object, or if that is not + * possible we hit SIGBUS. Most users should be fine with this. If enabled the + * kernel will never allocate this object in the non-CPU visible portion of + * LMEM. + * + * Underneath this just enables the I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS + * flag, if we also have an LMEM placement. Also since the kernel requires SMEM + * as a potential placement, we automatically attach that as a possible + * placement, if not already provided. If this happens to be an SMEM-only + * placement then we don't supply the flag, and instead just treat as normal + * allocation. + */ +#define gem_create_with_cpu_access_in_memory_regions(fd, size, regions...) 
({ \ + unsigned int arr__[] = { regions }; \ + struct drm_i915_gem_memory_class_instance arr_query__[ARRAY_SIZE(arr__) + 1]; \ + int i__, arr_query_size__ = ARRAY_SIZE(arr__); \ + uint32_t ext_flags__ = 0; \ + bool ext_found_smem__ = false; \ + for (i__ = 0; i__ < arr_query_size__; ++i__) { \ + arr_query__[i__].memory_class = MEMORY_TYPE_FROM_REGION(arr__[i__]); \ + if (arr_query__[i__].memory_class == I915_MEMORY_CLASS_DEVICE) \ + ext_flags__ = I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS; \ + else \ + ext_found_smem__ = true; \ + arr_query__[i__].memory_instance = MEMORY_INSTANCE_FROM_REGION(arr__[i__]); \ + } \ + if (ext_flags__ && !ext_found_smem__) { \ + arr_query__[i__].memory_class = I915_MEMORY_CLASS_SYSTEM; \ + arr_query__[i__].memory_instance = 0; \ + arr_query_size__++; \ + } \ + gem_create_in_memory_region_list(fd, size, ext_flags__, arr_query__, arr_query_size__); \ }) struct igt_collection * diff --git a/tests/i915/gem_exec_capture.c b/tests/i915/gem_exec_capture.c index 60f8df04..24ba6036 100644 --- a/tests/i915/gem_exec_capture.c +++ b/tests/i915/gem_exec_capture.c @@ -268,7 +268,7 @@ static void __capture1(int fd, int dir, uint64_t ahnd, const intel_ctx_t *ctx, saved_engine = configure_hangs(fd, e, ctx->id); memset(obj, 0, sizeof(obj)); - obj[SCRATCH].handle = gem_create_in_memory_regions(fd, 4096, region); + obj[SCRATCH].handle = gem_create_with_cpu_access_in_memory_regions(fd, 4096, region); obj[SCRATCH].flags = EXEC_OBJECT_WRITE; obj[CAPTURE].handle = target; obj[CAPTURE].flags = EXEC_OBJECT_CAPTURE; @@ -387,9 +387,9 @@ static void capture(int fd, int dir, const intel_ctx_t *ctx, const struct intel_execution_engine2 *e, uint32_t region) { uint32_t handle; - uint64_t ahnd, obj_size = 4096; + uint64_t ahnd, obj_size = 16 * 4096; - igt_assert_eq(__gem_create_in_memory_regions(fd, &handle, &obj_size, region), 0); + handle = gem_create_with_cpu_access_in_memory_regions(fd, obj_size, region); ahnd = get_reloc_ahnd(fd, ctx->id); __capture1(fd, dir, ahnd, ctx, e, handle, obj_size, region); diff --git a/tests/i915/gem_lmem_swapping.c b/tests/i915/gem_lmem_swapping.c index 39f9e1f5..18b66f09 100644 --- a/tests/i915/gem_lmem_swapping.c +++ b/tests/i915/gem_lmem_swapping.c @@ -80,7 +80,7 @@ static uint32_t create_bo(int i915, int ret; retry: - ret = __gem_create_in_memory_region_list(i915, &handle, &size, region, 1); + ret = __gem_create_in_memory_region_list(i915, &handle, &size, 0, region, 1); if (do_oom_test && ret == -ENOMEM) goto retry; igt_assert_eq(ret, 0); From patchwork Mon Feb 21 14:16:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 12753678 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B9786C433FE for ; Mon, 21 Feb 2022 14:16:49 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 49ECA10E343; Mon, 21 Feb 2022 14:16:47 +0000 (UTC) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by gabe.freedesktop.org (Postfix) with ESMTPS id BEFF610E344; Mon, 21 Feb 2022 14:16:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1645453004; 
x=1676989004; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=gmKm1K1NNqB2xjQ8t193Wm7FiYCzzg4UVyl8de03ql0=; b=B505Tywt3FQnFazDR5RgHFgImHI3QWLoaNw9r7wlbx5ioTmSDOPL+bst cSvTv8ORdpnRuWfJCVGVeed6la4gdXRt0sJWZUw57vfYKoLuPxLFJlr93 ZIg+UjMLlFrHQIN4ITBl++NaAZu2F9qXcOiPQ+fnTDdNkaltCDM0RJaA+ jpsRU/lB9P3Zl82hrUIQHFqmXy7b/56R4CGeFBtlYClgSWriVGn0yapkg 1vSCDFpsMXY4bMiWWfdic5wk2z2MIxDcW77U38gHpKy4yTSvP4qimQhI/ 0tKW9Y7anexitkAA3Tkue5aEzb0dO1u7qu72Z2//WRjAEo6PSoqS2UnpG Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10264"; a="251467160" X-IronPort-AV: E=Sophos;i="5.88,385,1635231600"; d="scan'208";a="251467160" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Feb 2022 06:16:42 -0800 X-IronPort-AV: E=Sophos;i="5.88,385,1635231600"; d="scan'208";a="606400543" Received: from joeyegax-mobl.ger.corp.intel.com (HELO mwauld-desk1.intel.com) ([10.252.23.97]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Feb 2022 06:16:41 -0800 From: Matthew Auld To: igt-dev@lists.freedesktop.org Date: Mon, 21 Feb 2022 14:16:19 +0000 Message-Id: <20220221141620.2490914-6-matthew.auld@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220221141620.2490914-1-matthew.auld@intel.com> References: <20220221141620.2490914-1-matthew.auld@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH i-g-t v2 5/6] i915/tests/capture: add a negative test for NEEDS_CPU_ACCESS X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , intel-gfx@lists.freedesktop.org Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Sanity check that the kernel does indeed reject LMEM buffers marked with EXEC_OBJECT_CAPTURE, that are not also marked with NEEDS_CPU_ACCESS. 
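The contract being checked, condensed into a sketch that mirrors the capture_no_cpu_access() helper added below (the function name is illustrative and the usual IGT headers are assumed):

static void capture_abi_sketch(int fd)
{
	struct drm_i915_gem_exec_object2 exec = {
		.flags = EXEC_OBJECT_CAPTURE,
	};
	struct drm_i915_gem_execbuffer2 execbuf = {
		.buffers_ptr = to_user_pointer(&exec),
		.buffer_count = 1,
	};
	uint64_t size = 4096;
	uint32_t handle;

	/* LMEM-capable BO created without the NEEDS_CPU_ACCESS hint... */
	igt_assert_eq(__gem_create_in_memory_regions(fd, &handle, &size,
						     REGION_LMEM(0)), 0);
	exec.handle = handle;

	/* ...must be rejected by execbuf on DG2+; DG1 keeps the old ABI. */
	igt_assert_eq(__gem_execbuf(fd, &execbuf),
		      IS_DG1(fd) ? 0 : -EINVAL);
}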
Signed-off-by: Matthew Auld Cc: Thomas Hellström --- tests/i915/gem_exec_capture.c | 69 +++++++++++++++++++++++++++++++++++ 1 file changed, 69 insertions(+) diff --git a/tests/i915/gem_exec_capture.c b/tests/i915/gem_exec_capture.c index 24ba6036..09187f62 100644 --- a/tests/i915/gem_exec_capture.c +++ b/tests/i915/gem_exec_capture.c @@ -735,6 +735,71 @@ static void userptr(int fd, int dir) gem_engine_properties_restore(fd, &saved_engine); } +static bool supports_needs_cpu_access(int fd) +{ + struct drm_i915_gem_memory_class_instance regions[] = { + { I915_MEMORY_CLASS_DEVICE, }, + { I915_MEMORY_CLASS_SYSTEM, }, + }; + struct drm_i915_gem_create_ext_memory_regions setparam_region = { + .base = { .name = I915_GEM_CREATE_EXT_MEMORY_REGIONS }, + .regions = to_user_pointer(®ions), + .num_regions = ARRAY_SIZE(regions), + }; + uint64_t size = 4096; + uint32_t handle; + int ret; + + ret = __gem_create_ext(fd, &size, + I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS, + &handle, &setparam_region.base); + if (!ret) { + gem_close(fd, handle); + igt_assert(gem_has_lmem(fd)); /* Should be dgpu only */ + } + + return ret == 0; +} + +static void capture_no_cpu_access(int fd) +{ + struct drm_i915_gem_exec_object2 exec = { + .flags = EXEC_OBJECT_CAPTURE, + }; + struct drm_i915_gem_execbuffer2 execbuf = { + .buffers_ptr = to_user_pointer(&exec), + .buffer_count = 1, + }; + uint64_t size = 4096; + uint32_t handle; + int ret; + + igt_require(gem_has_lmem(fd)); + igt_require(supports_needs_cpu_access(fd)); + + /* + * Sanity check that execbuf rejects EXEC_OBJECT_CAPTURE marked BO, that + * is not also tagged with I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS, if + * it can be placed in LMEM. This is only relevant for Dg2+. + */ + + igt_require(__gem_create_in_memory_regions(fd, &handle, &size, + REGION_LMEM(0)) == 0); + + exec.handle = handle; + ret = __gem_execbuf(fd, &execbuf); + if (IS_DG1(fd)) /* Should be no impact on existing ABI */ + igt_assert(ret == 0); + else + igt_assert(ret == -EINVAL); + + /* SMEM only buffers should work as normal */ + igt_assert(__gem_create_in_memory_regions(fd, &handle, &size, + REGION_SMEM) == 0); + exec.handle = handle; + igt_assert(__gem_execbuf(fd, &execbuf) == 0); +} + static bool has_capture(int fd) { drm_i915_getparam_t gp; @@ -839,6 +904,10 @@ igt_main igt_dynamic_f("%s", (e)->name) prioinv(fd, dir, ctx, e); + igt_describe("Verify the ABI contract when using EXEC_OBJECT_CAPTURE without I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS"); + igt_subtest_f("capture-non-cpu-access") + capture_no_cpu_access(fd); + igt_fixture { close(dir); igt_disallow_hang(fd, hang); From patchwork Mon Feb 21 14:16:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 12753676 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8B147C433FE for ; Mon, 21 Feb 2022 14:16:47 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C951510E570; Mon, 21 Feb 2022 14:16:46 +0000 (UTC) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by gabe.freedesktop.org (Postfix) with ESMTPS id 8577D10E546; Mon, 21 Feb 2022 14:16:44 +0000 (UTC) DKIM-Signature: 
v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1645453004; x=1676989004; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=mZE+Hx7k2G9ftJ2JefZfgAJIvpxUhLM1wmf66cMq63Y=; b=c37iP4+DocLAR6NhfT0sYKdKOj4ZTTiypkF6G76A5yDMk5UbgAq3JKO/ cwmUky2Fh+Y0RdXS7jaqr4/RtKDIIJsEdet8yOHpgkFI8Uf8ClHuavhf/ GztY2vG4GY9whLv0lzOT3SGyMB+qq0fG7qHAmkPhoQoRSUPLgfjhH5Oqo eb5u9g9ck0+vXnopuwXq8aNr+ybMrn6fAk78rEyR8qcBK928AchxwZYUP UbF+3ohuFjsUv1yFq3F9wCZv9Gg2xieP602YWndtj5dqBagQoiW47xhn1 fSt9foTEHSegkN6wg+pjed5uyAikSriYiP1kl5yT/qmsR7zzBIei9kmNq Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10264"; a="251467161" X-IronPort-AV: E=Sophos;i="5.88,385,1635231600"; d="scan'208";a="251467161" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Feb 2022 06:16:43 -0800 X-IronPort-AV: E=Sophos;i="5.88,385,1635231600"; d="scan'208";a="606400547" Received: from joeyegax-mobl.ger.corp.intel.com (HELO mwauld-desk1.intel.com) ([10.252.23.97]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Feb 2022 06:16:42 -0800 From: Matthew Auld To: igt-dev@lists.freedesktop.org Date: Mon, 21 Feb 2022 14:16:20 +0000 Message-Id: <20220221141620.2490914-7-matthew.auld@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220221141620.2490914-1-matthew.auld@intel.com> References: <20220221141620.2490914-1-matthew.auld@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH i-g-t v2 6/6] lib/i915: request CPU_ACCESS for fb objects X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , intel-gfx@lists.freedesktop.org Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" kms_frontbuffer_tracking@basic falls over if the fb needs to be migrated from non-mappable device memory, to the mappable part, due to being temporarily pinned for scanout, when hitting the CPU fault handler, which just gives us SIGBUS. If the device has a small BAR let's attempt to use the mappable portion, if possible. XXX: perhaps the kernel needs to somehow handle this better? Signed-off-by: Matthew Auld Cc: Thomas Hellström --- lib/ioctl_wrappers.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/lib/ioctl_wrappers.c b/lib/ioctl_wrappers.c index 09eb3ce7..7713e78b 100644 --- a/lib/ioctl_wrappers.c +++ b/lib/ioctl_wrappers.c @@ -635,7 +635,8 @@ uint32_t gem_buffer_create_fb_obj(int fd, uint64_t size) uint32_t handle; if (gem_has_lmem(fd)) - handle = gem_create_in_memory_regions(fd, size, REGION_LMEM(0)); + handle = gem_create_with_cpu_access_in_memory_regions(fd, size, + REGION_LMEM(0)); else handle = gem_create(fd, size);
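For reference, a usage sketch of gem_create_with_cpu_access_in_memory_regions() as used above (illustrative function name and sizes; the usual IGT headers are assumed): with an LMEM placement the macro appends an SMEM placement and sets NEEDS_CPU_ACCESS, an SMEM-only list degenerates to a plain region-list creation with no flag, and on kernels that reject the flag it retries without it.

static void fb_alloc_sketch(int i915)
{
	uint32_t fb_bo, sys_bo;

	/* LMEM preferred, but guaranteed to be CPU accessible. */
	fb_bo = gem_create_with_cpu_access_in_memory_regions(i915, 4096,
							     REGION_LMEM(0));

	/* SMEM only: behaves like gem_create_in_memory_regions(). */
	sys_bo = gem_create_with_cpu_access_in_memory_regions(i915, 4096,
							      REGION_SMEM);

	gem_close(i915, fb_bo);
	gem_close(i915, sys_bo);
}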