From patchwork Thu Oct 27 15:27:22 2022
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 13022316
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Date: Thu, 27 Oct 2022 16:27:22 +0100
Message-Id: <20221027152723.381060-1-matthew.auld@intel.com>
X-Mailer: git-send-email 2.37.3
Subject: [Intel-gfx] [PATCH 1/2] drm/i915/dmabuf: fix sg_table handling in map_dma_buf

We need to iterate over the original entries of the sg_table here, pulling out
the struct page for each one, to be remapped. However, the code currently
iterates over the final DMA-mapped entries instead, which is likely just one
gigantic sg entry when the IOMMU is enabled. As a result we only map the first
struct page (and any physically contiguous pages following it), even though
there is potentially a lot more data to follow.
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/7306
Signed-off-by: Matthew Auld
Cc: Lionel Landwerlin
Cc: Tvrtko Ursulin
Cc: Ville Syrjälä
---
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 07eee1c09aaf..05ebbdfd3b3b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -40,13 +40,13 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
 		goto err;
 	}
 
-	ret = sg_alloc_table(st, obj->mm.pages->nents, GFP_KERNEL);
+	ret = sg_alloc_table(st, obj->mm.pages->orig_nents, GFP_KERNEL);
 	if (ret)
 		goto err_free;
 
 	src = obj->mm.pages->sgl;
 	dst = st->sgl;
-	for (i = 0; i < obj->mm.pages->nents; i++) {
+	for (i = 0; i < obj->mm.pages->orig_nents; i++) {
 		sg_set_page(dst, sg_page(src), src->length, 0);
 		dst = sg_next(dst);
 		src = sg_next(src);

From patchwork Thu Oct 27 15:27:23 2022
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 13022317
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Date: Thu, 27 Oct 2022 16:27:23 +0100
Message-Id: <20221027152723.381060-2-matthew.auld@intel.com>
X-Mailer: git-send-email 2.37.3
In-Reply-To: <20221027152723.381060-1-matthew.auld@intel.com>
References: <20221027152723.381060-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH 2/2] drm/i915/selftests: exercise GPU access from the importer

Using PAGE_SIZE here potentially hides issues, so bump that to something larger.
This should also make it possible for the IOMMU to coalesce entries for us.
With that in place, verify that we can write from the GPU using the importer's
sg_table, and then check that the writes match when read back from the CPU
side.

References: https://gitlab.freedesktop.org/drm/intel/-/issues/7306
Signed-off-by: Matthew Auld
Cc: Lionel Landwerlin
Cc: Tvrtko Ursulin
Cc: Ville Syrjälä
---
 .../drm/i915/gem/selftests/i915_gem_dmabuf.c  | 37 ++++++++++++++++++-
 1 file changed, 35 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
index f2f3cfad807b..e55b255f69f8 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
@@ -6,6 +6,7 @@
 
 #include "i915_drv.h"
 #include "i915_selftest.h"
+#include "gt/intel_migrate.h"
 
 #include "mock_dmabuf.h"
 #include "selftests/mock_gem_device.h"
@@ -148,13 +149,15 @@ static int igt_dmabuf_import_same_driver(struct drm_i915_private *i915,
 	struct drm_gem_object *import;
 	struct dma_buf *dmabuf;
 	struct dma_buf_attachment *import_attach;
+	struct i915_request *rq;
 	struct sg_table *st;
+	u32 *vaddr;
 	long timeout;
-	int err;
+	int err, i;
 
 	force_different_devices = true;
 
-	obj = __i915_gem_object_create_user(i915, PAGE_SIZE,
+	obj = __i915_gem_object_create_user(i915, SZ_8M,
 					    regions, num_regions);
 	if (IS_ERR(obj)) {
 		pr_err("__i915_gem_object_create_user failed with err=%ld\n",
@@ -194,6 +197,19 @@ static int igt_dmabuf_import_same_driver(struct drm_i915_private *i915,
 		goto out_import;
 	}
 
+	err = intel_context_migrate_clear(to_gt(i915)->migrate.context, NULL,
+					  import_obj->mm.pages->sgl,
+					  import_obj->cache_level,
+					  false,
+					  0xdeadbeaf, &rq);
+	if (rq) {
+		err = dma_resv_reserve_fences(obj->base.resv, 1);
+		if (!err)
+			dma_resv_add_fence(obj->base.resv, &rq->fence,
+					   DMA_RESV_USAGE_KERNEL);
+		i915_request_put(rq);
+	}
+
 	/*
 	 * If the exported object is not in system memory, something
 	 * weird is going on. TODO: When p2p is supported, this is no
@@ -206,6 +222,23 @@ static int igt_dmabuf_import_same_driver(struct drm_i915_private *i915,
 
 	i915_gem_object_unlock(import_obj);
 
+	if (err)
+		goto out_import;
+
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
+	if (IS_ERR(vaddr)) {
+		err = PTR_ERR(vaddr);
+		goto out_import;
+	}
+
+	for (i = 0; i < obj->base.size / sizeof(u32); i++) {
+		if (vaddr[i] != 0xdeadbeaf) {
+			pr_err("Data mismatch [%d]=%u\n", i, vaddr[i]);
+			err = -EINVAL;
+			goto out_import;
+		}
+	}
+
 	/* Now try a fake an importer */
 	import_attach = dma_buf_attach(dmabuf, obj->base.dev->dev);
 	if (IS_ERR(import_attach)) {