From patchwork Fri Oct 18 21:16:21 2024
X-Patchwork-Submitter: Matthew Brost
X-Patchwork-Id: 13842431
From: Matthew Brost
To: intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: matthew.auld@intel.com, thomas.hellstrom@linux.intel.com
Subject: [PATCH v2 1/3] drm/ttm: Add ttm_bo_access
Date: Fri, 18 Oct 2024 14:16:21 -0700
Message-Id: <20241018211623.1367891-2-matthew.brost@intel.com>
In-Reply-To: <20241018211623.1367891-1-matthew.brost@intel.com>
References: <20241018211623.1367891-1-matthew.brost@intel.com>

Non-contiguous VRAM cannot easily be mapped in TTM, nor can non-visible
VRAM easily be accessed. Add ttm_bo_access, which is similar to
ttm_bo_vm_access, to access such memory.
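[Editorial illustration, not part of the patch: a self-contained userspace sketch of the page-at-a-time copy scheme the new helper uses. `bo_access` and `bo_pages` are made-up stand-ins — a static array plays the role of pages mapped one at a time with ttm_bo_kmap(), and PAGE_SIZE is fixed at 4 KiB — but the chunking math, copying min(bytes_left, PAGE_SIZE - offset) per page so no contiguous mapping of the whole object is ever needed, mirrors ttm_bo_access_kmap().]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Stand-in backing store: a "buffer object" of four discontiguous pages. */
static uint8_t bo_pages[4][PAGE_SIZE];

static int bo_access(unsigned long offset, uint8_t *buf, int len, int write)
{
	unsigned long page = offset >> PAGE_SHIFT;
	unsigned long bytes_left = len;

	/* Copy a page at a time so no extra contiguous mapping is needed. */
	offset -= page << PAGE_SHIFT;	/* offset within the first page */
	do {
		unsigned long chunk = PAGE_SIZE - offset;
		unsigned long bytes = bytes_left < chunk ? bytes_left : chunk;
		uint8_t *ptr = &bo_pages[page][offset];	/* "kmap" one page */

		if (write)
			memcpy(ptr, buf, bytes);
		else
			memcpy(buf, ptr, bytes);

		page++;
		buf += bytes;
		bytes_left -= bytes;
		offset = 0;	/* later pages are copied from their start */
	} while (bytes_left);

	return len;	/* like ttm_bo_access: bytes accessed on success */
}
```

[A write of 8 bytes at PAGE_SIZE - 4 lands 4 bytes in page 0 and 4 bytes in page 1 — exactly the case a single vmap of a non-contiguous object cannot serve.]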
Reported-by: Christoph Manszewski
Suggested-by: Thomas Hellström
Signed-off-by: Matthew Brost
---
 drivers/gpu/drm/ttm/ttm_bo_util.c | 85 +++++++++++++++++++++++++++++++
 drivers/gpu/drm/ttm/ttm_bo_vm.c   | 65 +----------------------
 include/drm/ttm/ttm_bo.h          |  2 +
 3 files changed, 88 insertions(+), 64 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index d939925efa81..9e427c8342ab 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -919,3 +919,88 @@ s64 ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev,
 
 	return progress;
 }
+
+static int ttm_bo_access_kmap(struct ttm_buffer_object *bo,
+			      unsigned long offset,
+			      uint8_t *buf, int len, int write)
+{
+	unsigned long page = offset >> PAGE_SHIFT;
+	unsigned long bytes_left = len;
+	int ret;
+
+	/* Copy a page at a time, that way no extra virtual address
+	 * mapping is needed
+	 */
+	offset -= page << PAGE_SHIFT;
+	do {
+		unsigned long bytes = min(bytes_left, PAGE_SIZE - offset);
+		struct ttm_bo_kmap_obj map;
+		void *ptr;
+		bool is_iomem;
+
+		ret = ttm_bo_kmap(bo, page, 1, &map);
+		if (ret)
+			return ret;
+
+		ptr = (uint8_t *)ttm_kmap_obj_virtual(&map, &is_iomem) + offset;
+		WARN_ON_ONCE(is_iomem);
+		if (write)
+			memcpy(ptr, buf, bytes);
+		else
+			memcpy(buf, ptr, bytes);
+		ttm_bo_kunmap(&map);
+
+		page++;
+		buf += bytes;
+		bytes_left -= bytes;
+		offset = 0;
+	} while (bytes_left);
+
+	return len;
+}
+
+/**
+ * ttm_bo_access - Helper to access a buffer object
+ *
+ * @bo: ttm buffer object
+ * @offset: access offset into buffer object
+ * @buf: pointer to caller memory to read into or write from
+ * @len: length of access
+ * @write: write access
+ *
+ * Utility function to access a buffer object. Useful when buffer object cannot
+ * be easily mapped (non-contiguous, non-visible, etc...).
+ *
+ * Returns:
+ * Number of bytes accessed or errno
+ */
+int ttm_bo_access(struct ttm_buffer_object *bo, unsigned long offset,
+		  void *buf, int len, int write)
+{
+	int ret;
+
+	if (len < 1 || (offset + len) > bo->base.size)
+		return -EIO;
+
+	ret = ttm_bo_reserve(bo, true, false, NULL);
+	if (ret)
+		return ret;
+
+	switch (bo->resource->mem_type) {
+	case TTM_PL_SYSTEM:
+		fallthrough;
+	case TTM_PL_TT:
+		ret = ttm_bo_access_kmap(bo, offset, buf, len, write);
+		break;
+	default:
+		if (bo->bdev->funcs->access_memory)
+			ret = bo->bdev->funcs->access_memory(
+				bo, offset, buf, len, write);
+		else
+			ret = -EIO;
+	}
+
+	ttm_bo_unreserve(bo);
+
+	return ret;
+}
+EXPORT_SYMBOL(ttm_bo_access);

diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 2c699ed1963a..20b1e5f78684 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -366,45 +366,6 @@ void ttm_bo_vm_close(struct vm_area_struct *vma)
 }
 EXPORT_SYMBOL(ttm_bo_vm_close);
 
-static int ttm_bo_vm_access_kmap(struct ttm_buffer_object *bo,
-				 unsigned long offset,
-				 uint8_t *buf, int len, int write)
-{
-	unsigned long page = offset >> PAGE_SHIFT;
-	unsigned long bytes_left = len;
-	int ret;
-
-	/* Copy a page at a time, that way no extra virtual address
-	 * mapping is needed
-	 */
-	offset -= page << PAGE_SHIFT;
-	do {
-		unsigned long bytes = min(bytes_left, PAGE_SIZE - offset);
-		struct ttm_bo_kmap_obj map;
-		void *ptr;
-		bool is_iomem;
-
-		ret = ttm_bo_kmap(bo, page, 1, &map);
-		if (ret)
-			return ret;
-
-		ptr = (uint8_t *)ttm_kmap_obj_virtual(&map, &is_iomem) + offset;
-		WARN_ON_ONCE(is_iomem);
-		if (write)
-			memcpy(ptr, buf, bytes);
-		else
-			memcpy(buf, ptr, bytes);
-		ttm_bo_kunmap(&map);
-
-		page++;
-		buf += bytes;
-		bytes_left -= bytes;
-		offset = 0;
-	} while (bytes_left);
-
-	return len;
-}
-
 int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
 		     void *buf, int len, int write)
 {
@@ -412,32 +373,8 @@ int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
 	unsigned long offset = (addr) - vma->vm_start +
 		((vma->vm_pgoff - drm_vma_node_start(&bo->base.vma_node))
 		 << PAGE_SHIFT);
-	int ret;
-
-	if (len < 1 || (offset + len) > bo->base.size)
-		return -EIO;
-
-	ret = ttm_bo_reserve(bo, true, false, NULL);
-	if (ret)
-		return ret;
-
-	switch (bo->resource->mem_type) {
-	case TTM_PL_SYSTEM:
-		fallthrough;
-	case TTM_PL_TT:
-		ret = ttm_bo_vm_access_kmap(bo, offset, buf, len, write);
-		break;
-	default:
-		if (bo->bdev->funcs->access_memory)
-			ret = bo->bdev->funcs->access_memory(
-				bo, offset, buf, len, write);
-		else
-			ret = -EIO;
-	}
-
-	ttm_bo_unreserve(bo);
-
-	return ret;
+	return ttm_bo_access(bo, offset, buf, len, write);
 }
 EXPORT_SYMBOL(ttm_bo_vm_access);

diff --git a/include/drm/ttm/ttm_bo.h b/include/drm/ttm/ttm_bo.h
index 5804408815be..8ea11cd8df39 100644
--- a/include/drm/ttm/ttm_bo.h
+++ b/include/drm/ttm/ttm_bo.h
@@ -421,6 +421,8 @@ void ttm_bo_unpin(struct ttm_buffer_object *bo);
 int ttm_bo_evict_first(struct ttm_device *bdev,
 		       struct ttm_resource_manager *man,
 		       struct ttm_operation_ctx *ctx);
+int ttm_bo_access(struct ttm_buffer_object *bo, unsigned long offset,
+		  void *buf, int len, int write);
 vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo,
 			     struct vm_fault *vmf);
 vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,

From patchwork Fri Oct 18 21:16:22 2024
X-Patchwork-Submitter: Matthew Brost
X-Patchwork-Id: 13842430
From: Matthew Brost
To: intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: matthew.auld@intel.com, thomas.hellstrom@linux.intel.com
Subject: [PATCH v2 2/3] drm/xe: Add xe_ttm_access_memory
Date: Fri, 18 Oct 2024 14:16:22 -0700
Message-Id: <20241018211623.1367891-3-matthew.brost@intel.com>
In-Reply-To: <20241018211623.1367891-1-matthew.brost@intel.com>
References: <20241018211623.1367891-1-matthew.brost@intel.com>

Non-contiguous VRAM cannot easily be mapped in TTM, nor can non-visible
VRAM easily be accessed. Add xe_ttm_access_memory, which hooks into
ttm_bo_access, to access such memory.

Reported-by: Christoph Manszewski
Suggested-by: Thomas Hellström
Signed-off-by: Matthew Brost
---
 drivers/gpu/drm/xe/xe_bo.c | 57 ++++++++++++++++++++++++++++++++++++--
 1 file changed, 54 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 5b232f2951b1..9a5c1ed7ae97 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -442,6 +442,14 @@ static void xe_ttm_tt_destroy(struct ttm_device *ttm_dev, struct ttm_tt *tt)
 	kfree(tt);
 }
 
+static bool xe_ttm_resource_visible(struct ttm_resource *mem)
+{
+	struct xe_ttm_vram_mgr_resource *vres =
+		to_xe_ttm_vram_mgr_resource(mem);
+
+	return vres->used_visible_size == mem->size;
+}
+
 static int xe_ttm_io_mem_reserve(struct ttm_device *bdev,
 				 struct ttm_resource *mem)
 {
@@ -453,11 +461,9 @@ static int xe_ttm_io_mem_reserve(struct ttm_device *bdev,
 		return 0;
 	case XE_PL_VRAM0:
 	case XE_PL_VRAM1: {
-		struct xe_ttm_vram_mgr_resource *vres =
-			to_xe_ttm_vram_mgr_resource(mem);
 		struct xe_mem_region *vram = res_to_mem_region(mem);
 
-		if (vres->used_visible_size < mem->size)
+		if (!xe_ttm_resource_visible(mem))
 			return -EINVAL;
 
 		mem->bus.offset = mem->start << PAGE_SHIFT;
@@ -1111,6 +1117,50 @@ static void xe_ttm_bo_swap_notify(struct ttm_buffer_object *ttm_bo)
 	}
 }
 
+static int xe_ttm_access_memory(struct ttm_buffer_object *ttm_bo,
+				unsigned long offset, void *buf, int len,
+				int write)
+{
+	struct xe_bo *bo = ttm_to_xe_bo(ttm_bo);
+	struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
+	struct iosys_map vmap;
+	struct xe_res_cursor cursor;
+	struct xe_mem_region *vram;
+	int bytes_left = len;
+
+	xe_bo_assert_held(bo);
+
+	if (!mem_type_is_vram(ttm_bo->resource->mem_type))
+		return -EIO;
+
+	/* FIXME: Use GPU for non-visible VRAM */
+	if (!xe_ttm_resource_visible(ttm_bo->resource))
+		return -EIO;
+
+	vram = res_to_mem_region(ttm_bo->resource);
+	xe_res_first(ttm_bo->resource, offset & PAGE_MASK, bo->size, &cursor);
+
+	do {
+		unsigned long page_offset = (offset & ~PAGE_MASK);
+		int byte_count = min((int)(PAGE_SIZE - page_offset), bytes_left);
+
+		iosys_map_set_vaddr_iomem(&vmap, (u8 __iomem *)vram->mapping +
+					  cursor.start);
+		if (write)
+			xe_map_memcpy_to(xe, &vmap, page_offset, buf, byte_count);
+		else
+			xe_map_memcpy_from(xe, buf, &vmap, page_offset, byte_count);
+
+		offset += byte_count;
+		buf += byte_count;
+		bytes_left -= byte_count;
+		if (bytes_left)
+			xe_res_next(&cursor, PAGE_SIZE);
+	} while (bytes_left);
+
+	return len;
+}
+
 const struct ttm_device_funcs xe_ttm_funcs = {
 	.ttm_tt_create = xe_ttm_tt_create,
 	.ttm_tt_populate = xe_ttm_tt_populate,
@@ -1120,6 +1170,7 @@ const struct ttm_device_funcs xe_ttm_funcs = {
 	.move = xe_bo_move,
 	.io_mem_reserve = xe_ttm_io_mem_reserve,
 	.io_mem_pfn = xe_ttm_io_mem_pfn,
+	.access_memory = xe_ttm_access_memory,
 	.release_notify = xe_ttm_bo_release_notify,
 	.eviction_valuable = ttm_bo_eviction_valuable,
 	.delete_mem_notify = xe_ttm_bo_delete_mem_notify,

From patchwork Fri Oct 18 21:16:23 2024
X-Patchwork-Submitter: Matthew Brost
X-Patchwork-Id: 13842433
From: Matthew Brost
To: intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: matthew.auld@intel.com, thomas.hellstrom@linux.intel.com
Subject: [PATCH v2 3/3] drm/xe: Use ttm_bo_access in xe_vm_snapshot_capture_delayed
Date: Fri, 18 Oct 2024 14:16:23 -0700
Message-Id: <20241018211623.1367891-4-matthew.brost@intel.com>
In-Reply-To: <20241018211623.1367891-1-matthew.brost@intel.com>
References: <20241018211623.1367891-1-matthew.brost@intel.com>

Non-contiguous mapping of a BO in VRAM doesn't work; use ttm_bo_access
instead.
v2:
 - Fix error handling

Fixes: 0eb2a18a8fad ("drm/xe: Implement VM snapshot support for BO's and userptr")
Suggested-by: Matthew Auld
Signed-off-by: Matthew Brost
---
 drivers/gpu/drm/xe/xe_vm.c | 18 +++++++-----------
 1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index c99380271de6..0f760fd69d44 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -3303,7 +3303,6 @@ void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap)
 
 	for (int i = 0; i < snap->num_snaps; i++) {
 		struct xe_bo *bo = snap->snap[i].bo;
-		struct iosys_map src;
 		int err;
 
 		if (IS_ERR(snap->snap[i].data))
@@ -3316,16 +3315,12 @@ void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap)
 		}
 
 		if (bo) {
-			xe_bo_lock(bo, false);
-			err = ttm_bo_vmap(&bo->ttm, &src);
-			if (!err) {
-				xe_map_memcpy_from(xe_bo_device(bo),
-						   snap->snap[i].data,
-						   &src, snap->snap[i].bo_ofs,
-						   snap->snap[i].len);
-				ttm_bo_vunmap(&bo->ttm, &src);
-			}
-			xe_bo_unlock(bo);
+			err = ttm_bo_access(&bo->ttm, snap->snap[i].bo_ofs,
+					    snap->snap[i].data,
+					    snap->snap[i].len, 0);
+			if (!(err < 0) && err != snap->snap[i].len)
+				err = -EIO;
+			else if (!(err < 0))
+				err = 0;
 		} else {
 			void __user *userptr = (void __user *)(size_t)
 				snap->snap[i].bo_ofs;
@@ -3375,6 +3370,7 @@ void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p)
 				u32 *val = snap->snap[i].data + j;
 				char dumped[ASCII85_BUFSZ];
 
+				printk("%s:%d: j=%d", __func__, __LINE__, (int)j);
 				drm_puts(p, ascii85_encode(*val, dumped));
 			}
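[Editorial illustration, not part of the series: xe_ttm_access_memory in patch 2 seeds the resource cursor with `offset & PAGE_MASK` and starts the copy at `offset & ~PAGE_MASK`. Because the kernel's PAGE_MASK keeps the high (page-number) bits, those two expressions split an offset into an aligned base and an intra-page remainder — the opposite of what the name might suggest. A userspace sketch with the kernel's definitions reproduced locally (`access_base` and `access_page_offset` are made-up names):]

```c
#include <assert.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))	/* high bits set, as in the kernel */

/* Page-aligned base: what xe_res_first() is seeded with. */
static unsigned long access_base(unsigned long offset)
{
	return offset & PAGE_MASK;
}

/* Intra-page remainder: where the first chunk's copy starts. Each
 * iteration then copies min(PAGE_SIZE - page_offset, bytes_left) and
 * advances the cursor by a whole PAGE_SIZE. */
static unsigned long access_page_offset(unsigned long offset)
{
	return offset & ~PAGE_MASK;
}
```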