From patchwork Mon Jul 29 16:54:55 2019
X-Patchwork-Submitter: "Welty, Brian"
X-Patchwork-Id: 11064225
From: Brian Welty
To: dri-devel@lists.freedesktop.org, Daniel Vetter, intel-gfx@lists.freedesktop.org
Subject: [RFC PATCH 1/3] drm: introduce new struct drm_mem_region
Date: Mon, 29 Jul 2019 12:54:55 -0400
Message-Id: <20190729165457.18500-2-brian.welty@intel.com>
In-Reply-To: <20190729165457.18500-1-brian.welty@intel.com>
References: <20190729165457.18500-1-brian.welty@intel.com>

Move basic members of ttm_mem_type_manager into a new DRM memory region
structure. The idea is for this base structure to be nested inside the TTM
structure and later inside Intel's proposed intel_memory_region.

As comments in the code suggest, the following future work can extend the
usefulness of this:
- Create common memory region types (next patch)
- Create a common set of memory_region function callbacks (based on
  ttm_mem_type_manager_funcs and intel_memory_regions_ops)
- Create common helpers that operate on drm_mem_region, to be leveraged
  by both TTM drivers and i915, reducing code duplication
- The above might start with refactoring ttm_bo_manager.c, as these are
  helpers for using drm_mm's range allocator and could be made to operate
  on DRM structures instead of TTM ones
- A larger goal might be to make LRU management of GEM objects common,
  and migrate those fields into drm_mem_region and drm_gem_object
  structures

The vmwgfx changes are included here just as an example of what driver
updates will look like; they can be moved to a separate patch later.
Other TTM drivers need to be updated similarly.
Signed-off-by: Brian Welty
---
 drivers/gpu/drm/ttm/ttm_bo.c                  | 34 +++++++++++--------
 drivers/gpu/drm/ttm/ttm_bo_manager.c          | 14 ++++----
 drivers/gpu/drm/ttm/ttm_bo_util.c             | 11 +++---
 drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c |  8 ++---
 drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c    |  4 +--
 include/drm/drm_mm.h                          | 27 +++++++++++++++
 include/drm/ttm/ttm_bo_api.h                  |  2 +-
 include/drm/ttm/ttm_bo_driver.h               | 16 ++++-----
 8 files changed, 73 insertions(+), 43 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 58c403eda04e..45434ea513dd 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -84,8 +84,8 @@ static void ttm_mem_type_debug(struct ttm_bo_device *bdev, struct drm_printer *p
 	drm_printf(p, " has_type: %d\n", man->has_type);
 	drm_printf(p, " use_type: %d\n", man->use_type);
 	drm_printf(p, " flags: 0x%08X\n", man->flags);
-	drm_printf(p, " gpu_offset: 0x%08llX\n", man->gpu_offset);
-	drm_printf(p, " size: %llu\n", man->size);
+	drm_printf(p, " gpu_offset: 0x%08llX\n", man->region.start);
+	drm_printf(p, " size: %llu\n", man->region.size);
 	drm_printf(p, " available_caching: 0x%08X\n", man->available_caching);
 	drm_printf(p, " default_caching: 0x%08X\n", man->default_caching);
 	if (mem_type != TTM_PL_SYSTEM)
@@ -399,7 +399,7 @@ static int ttm_bo_handle_move_mem(struct ttm_buffer_object *bo,
 	if (bo->mem.mm_node)
 		bo->offset = (bo->mem.start << PAGE_SHIFT) +
-			bdev->man[bo->mem.mem_type].gpu_offset;
+			bdev->man[bo->mem.mem_type].region.start;
 	else
 		bo->offset = 0;
 
@@ -926,9 +926,9 @@ static int ttm_bo_add_move_fence(struct ttm_buffer_object *bo,
 	struct dma_fence *fence;
 	int ret;
 
-	spin_lock(&man->move_lock);
-	fence = dma_fence_get(man->move);
-	spin_unlock(&man->move_lock);
+	spin_lock(&man->region.move_lock);
+	fence = dma_fence_get(man->region.move);
+	spin_unlock(&man->region.move_lock);
 
 	if (fence) {
 		reservation_object_add_shared_fence(bo->resv, fence);
@@ -1490,9 +1490,9 @@ static int ttm_bo_force_list_clean(struct ttm_bo_device *bdev,
 	}
 	spin_unlock(&glob->lru_lock);
 
-	spin_lock(&man->move_lock);
-	fence = dma_fence_get(man->move);
-	spin_unlock(&man->move_lock);
+	spin_lock(&man->region.move_lock);
+	fence = dma_fence_get(man->region.move);
+	spin_unlock(&man->region.move_lock);
 
 	if (fence) {
 		ret = dma_fence_wait(fence, false);
@@ -1535,8 +1535,8 @@ int ttm_bo_clean_mm(struct ttm_bo_device *bdev, unsigned mem_type)
 		ret = (*man->func->takedown)(man);
 	}
 
-	dma_fence_put(man->move);
-	man->move = NULL;
+	dma_fence_put(man->region.move);
+	man->region.move = NULL;
 
 	return ret;
 }
@@ -1561,7 +1561,7 @@ int ttm_bo_evict_mm(struct ttm_bo_device *bdev, unsigned mem_type)
 EXPORT_SYMBOL(ttm_bo_evict_mm);
 
 int ttm_bo_init_mm(struct ttm_bo_device *bdev, unsigned type,
-		unsigned long p_size)
+		resource_size_t p_size)
 {
 	int ret;
 	struct ttm_mem_type_manager *man;
@@ -1570,10 +1570,16 @@ int ttm_bo_init_mm(struct ttm_bo_device *bdev, unsigned type,
 	BUG_ON(type >= TTM_NUM_MEM_TYPES);
 	man = &bdev->man[type];
 	BUG_ON(man->has_type);
+
+	/* FIXME: add call to (new) drm_mem_region_init ? */
+	man->region.size = p_size;
+	man->region.type = type;
+	spin_lock_init(&man->region.move_lock);
+	man->region.move = NULL;
+
 	man->io_reserve_fastpath = true;
 	man->use_io_reserve_lru = false;
 	mutex_init(&man->io_reserve_mutex);
-	spin_lock_init(&man->move_lock);
 	INIT_LIST_HEAD(&man->io_reserve_lru);
 
 	ret = bdev->driver->init_mem_type(bdev, type, man);
@@ -1588,11 +1594,9 @@ int ttm_bo_init_mm(struct ttm_bo_device *bdev, unsigned type,
 	}
 	man->has_type = true;
 	man->use_type = true;
-	man->size = p_size;
 
 	for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i)
 		INIT_LIST_HEAD(&man->lru[i]);
-	man->move = NULL;
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/ttm/ttm_bo_manager.c b/drivers/gpu/drm/ttm/ttm_bo_manager.c
index 18d3debcc949..0a99b3d5b482 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_manager.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_manager.c
@@ -53,7 +53,7 @@ static int ttm_bo_man_get_node(struct ttm_mem_type_manager *man,
 			       const struct ttm_place *place,
 			       struct ttm_mem_reg *mem)
 {
-	struct ttm_range_manager *rman = (struct ttm_range_manager *) man->priv;
+	struct ttm_range_manager *rman = (struct ttm_range_manager *) man->region.priv;
 	struct drm_mm *mm = &rman->mm;
 	struct drm_mm_node *node;
 	enum drm_mm_insert_mode mode;
@@ -62,7 +62,7 @@ static int ttm_bo_man_get_node(struct ttm_mem_type_manager *man,
 
 	lpfn = place->lpfn;
 	if (!lpfn)
-		lpfn = man->size;
+		lpfn = man->region.size;
 
 	node = kzalloc(sizeof(*node), GFP_KERNEL);
 	if (!node)
@@ -92,7 +92,7 @@ static int ttm_bo_man_get_node(struct ttm_mem_type_manager *man,
 static void ttm_bo_man_put_node(struct ttm_mem_type_manager *man,
 				struct ttm_mem_reg *mem)
 {
-	struct ttm_range_manager *rman = (struct ttm_range_manager *) man->priv;
+	struct ttm_range_manager *rman = (struct ttm_range_manager *) man->region.priv;
 
 	if (mem->mm_node) {
 		spin_lock(&rman->lock);
@@ -115,13 +115,13 @@ static int ttm_bo_man_init(struct ttm_mem_type_manager *man,
 
 	drm_mm_init(&rman->mm, 0, p_size);
 	spin_lock_init(&rman->lock);
-	man->priv = rman;
+	man->region.priv = rman;
 	return 0;
 }
 
 static int ttm_bo_man_takedown(struct ttm_mem_type_manager *man)
 {
-	struct ttm_range_manager *rman = (struct ttm_range_manager *) man->priv;
+	struct ttm_range_manager *rman = (struct ttm_range_manager *) man->region.priv;
 	struct drm_mm *mm = &rman->mm;
 
 	spin_lock(&rman->lock);
@@ -129,7 +129,7 @@ static int ttm_bo_man_takedown(struct ttm_mem_type_manager *man)
 		drm_mm_takedown(mm);
 		spin_unlock(&rman->lock);
 		kfree(rman);
-		man->priv = NULL;
+		man->region.priv = NULL;
 		return 0;
 	}
 	spin_unlock(&rman->lock);
@@ -139,7 +139,7 @@ static int ttm_bo_man_takedown(struct ttm_mem_type_manager *man)
 static void ttm_bo_man_debug(struct ttm_mem_type_manager *man,
 			     struct drm_printer *printer)
 {
-	struct ttm_range_manager *rman = (struct ttm_range_manager *) man->priv;
+	struct ttm_range_manager *rman = (struct ttm_range_manager *) man->region.priv;
 
 	spin_lock(&rman->lock);
 	drm_mm_print(&rman->mm, printer);
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 9f918b992f7e..e44d0b7d60b4 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -795,12 +795,13 @@ int ttm_bo_pipeline_move(struct ttm_buffer_object *bo,
 		 * this eviction and free up the allocation
 		 */
 
-		spin_lock(&from->move_lock);
-		if (!from->move || dma_fence_is_later(fence, from->move)) {
-			dma_fence_put(from->move);
-			from->move = dma_fence_get(fence);
+		spin_lock(&from->region.move_lock);
+		if (!from->region.move ||
+		    dma_fence_is_later(fence, from->region.move)) {
+			dma_fence_put(from->region.move);
+			from->region.move = dma_fence_get(fence);
 		}
-		spin_unlock(&from->move_lock);
+		spin_unlock(&from->region.move_lock);
 
 		ttm_bo_free_old_node(bo);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
index 7da752ca1c34..dd4f85accc4e 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
@@ -50,7 +50,7 @@ static int vmw_gmrid_man_get_node(struct ttm_mem_type_manager *man,
 				  struct ttm_mem_reg *mem)
 {
 	struct vmwgfx_gmrid_man *gman =
-		(struct vmwgfx_gmrid_man *)man->priv;
+		(struct vmwgfx_gmrid_man *)man->region.priv;
 	int id;
 
 	mem->mm_node = NULL;
@@ -85,7 +85,7 @@ static void vmw_gmrid_man_put_node(struct ttm_mem_type_manager *man,
 			     struct ttm_mem_reg *mem)
 {
 	struct vmwgfx_gmrid_man *gman =
-		(struct vmwgfx_gmrid_man *)man->priv;
+		(struct vmwgfx_gmrid_man *)man->region.priv;
 
 	if (mem->mm_node) {
 		ida_free(&gman->gmr_ida, mem->start);
@@ -123,14 +123,14 @@ static int vmw_gmrid_man_init(struct ttm_mem_type_manager *man,
 	default:
 		BUG();
 	}
-	man->priv = (void *) gman;
+	man->region.priv = (void *) gman;
 	return 0;
 }
 
 static int vmw_gmrid_man_takedown(struct ttm_mem_type_manager *man)
 {
 	struct vmwgfx_gmrid_man *gman =
-		(struct vmwgfx_gmrid_man *)man->priv;
+		(struct vmwgfx_gmrid_man *)man->region.priv;
 
 	if (gman) {
 		ida_destroy(&gman->gmr_ida);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
index d8ea3dd10af0..c6e99893e993 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
@@ -755,7 +755,7 @@ static int vmw_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
 	case TTM_PL_VRAM:
 		/* "On-card" video ram */
 		man->func = &ttm_bo_manager_func;
-		man->gpu_offset = 0;
+		man->region.start = 0;
 		man->flags = TTM_MEMTYPE_FLAG_FIXED | TTM_MEMTYPE_FLAG_MAPPABLE;
 		man->available_caching = TTM_PL_FLAG_CACHED;
 		man->default_caching = TTM_PL_FLAG_CACHED;
@@ -768,7 +768,7 @@ static int vmw_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
 		 *  slots as well as the bo size.
 		 */
 		man->func = &vmw_gmrid_manager_func;
-		man->gpu_offset = 0;
+		man->region.start = 0;
 		man->flags = TTM_MEMTYPE_FLAG_CMA | TTM_MEMTYPE_FLAG_MAPPABLE;
 		man->available_caching = TTM_PL_FLAG_CACHED;
 		man->default_caching = TTM_PL_FLAG_CACHED;
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index 2c3bbb43c7d1..3d123eb10d62 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -43,6 +43,8 @@
 #include
 #include
 #include
+#include
+#include
 #ifdef CONFIG_DRM_DEBUG_MM
 #include
 #endif
@@ -54,6 +56,31 @@
 #define DRM_MM_BUG_ON(expr) BUILD_BUG_ON_INVALID(expr)
 #endif
 
+struct drm_device;
+struct drm_mm;
+
+/**
+ * struct drm_mem_region
+ *
+ * Base memory region structure to be nested inside TTM memory regions
+ * (ttm_mem_type_manager) and i915 memory regions (intel_memory_region).
+ */
+struct drm_mem_region {
+	resource_size_t start; /* within GPU physical address space */
+	resource_size_t io_start; /* BAR address (CPU accessible) */
+	resource_size_t size;
+	struct io_mapping iomap;
+	u8 type;
+
+	union {
+		struct drm_mm *mm;
+		/* FIXME (for i915): struct drm_buddy_mm *buddy_mm; */
+		void *priv;
+	};
+	spinlock_t move_lock;
+	struct dma_fence *move;
+};
+
 /**
  * enum drm_mm_insert_mode - control search and allocation behaviour
  *
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 49d9cdfc58f2..f8cb332f0eeb 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -615,7 +615,7 @@ int ttm_bo_create(struct ttm_bo_device *bdev, unsigned long size,
  * May also return driver-specified errors.
  */
 int ttm_bo_init_mm(struct ttm_bo_device *bdev, unsigned type,
-		unsigned long p_size);
+		resource_size_t p_size);
 
 /**
  * ttm_bo_clean_mm
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index c9b8ba492f24..4066ee315469 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -51,6 +51,12 @@
 
 struct ttm_mem_type_manager;
 
+/* FIXME:
+ * Potentially can rework this as common callbacks for drm_mem_region
+ * instead of ttm_mem_type_manager.
+ * Then the intel_memory_region_ops proposed by LMEM patch series could
+ * be folded into here.
+ */
 struct ttm_mem_type_manager_func {
 	/**
 	 * struct ttm_mem_type_manager member init
@@ -168,6 +174,7 @@ struct ttm_mem_type_manager_func {
 
 struct ttm_mem_type_manager {
+	struct drm_mem_region region;
 	struct ttm_bo_device *bdev;
 
 	/*
@@ -177,16 +184,12 @@ struct ttm_mem_type_manager {
 	bool has_type;
 	bool use_type;
 	uint32_t flags;
-	uint64_t gpu_offset; /* GPU address space is independent of CPU word size */
-	uint64_t size;
 	uint32_t available_caching;
 	uint32_t default_caching;
 	const struct ttm_mem_type_manager_func *func;
-	void *priv;
 	struct mutex io_reserve_mutex;
 	bool use_io_reserve_lru;
 	bool io_reserve_fastpath;
-	spinlock_t move_lock;
 
 	/*
 	 * Protected by @io_reserve_mutex:
@@ -199,11 +202,6 @@ struct ttm_mem_type_manager {
 	 */
 	struct list_head lru[TTM_MAX_BO_PRIORITY];
-
-	/*
-	 * Protected by @move_lock.
-	 */
-	struct dma_fence *move;
 };
 
 /**

From patchwork Mon Jul 29 16:54:56 2019
X-Patchwork-Submitter: "Welty, Brian"
X-Patchwork-Id: 11064231
From: Brian Welty
To: dri-devel@lists.freedesktop.org, Daniel Vetter, intel-gfx@lists.freedesktop.org
Subject: [RFC PATCH 2/3] drm: Introduce DRM_MEM defines for specifying type of drm_mem_region
Date: Mon, 29 Jul 2019 12:54:56 -0400
Message-Id: <20190729165457.18500-3-brian.welty@intel.com>
In-Reply-To: <20190729165457.18500-1-brian.welty@intel.com>
References: <20190729165457.18500-1-brian.welty@intel.com>

Introduce DRM memory region types to be common for both drivers using
TTM and for i915. For now, TTM continues to define its own set but uses
the DRM base definitions.

Signed-off-by: Brian Welty
---
 include/drm/drm_mm.h            | 8 ++++++++
 include/drm/ttm/ttm_placement.h | 8 ++++----
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index 3d123eb10d62..8178d13384bc 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -59,6 +59,14 @@
 struct drm_device;
 struct drm_mm;
 
+/*
+ * Memory types for drm_mem_region
+ */
+#define DRM_MEM_SYSTEM	0
+#define DRM_MEM_STOLEN	1
+#define DRM_MEM_VRAM	2
+#define DRM_MEM_PRIV	3
+
 /**
  * struct drm_mem_region
  *
diff --git a/include/drm/ttm/ttm_placement.h b/include/drm/ttm/ttm_placement.h
index e88a8e39767b..976cf8d2f899 100644
--- a/include/drm/ttm/ttm_placement.h
+++ b/include/drm/ttm/ttm_placement.h
@@ -37,10 +37,10 @@
  * Memory regions for data placement.
  */
 
-#define TTM_PL_SYSTEM	0
-#define TTM_PL_TT	1
-#define TTM_PL_VRAM	2
-#define TTM_PL_PRIV	3
+#define TTM_PL_SYSTEM	DRM_MEM_SYSTEM
+#define TTM_PL_TT	DRM_MEM_STOLEN
+#define TTM_PL_VRAM	DRM_MEM_VRAM
+#define TTM_PL_PRIV	DRM_MEM_PRIV
 
 #define TTM_PL_FLAG_SYSTEM	(1 << TTM_PL_SYSTEM)
 #define TTM_PL_FLAG_TT		(1 << TTM_PL_TT)

From patchwork Mon Jul 29 16:54:57 2019
X-Patchwork-Submitter: "Welty, Brian"
X-Patchwork-Id: 11064227
From: Brian Welty
To: dri-devel@lists.freedesktop.org, Daniel Vetter, intel-gfx@lists.freedesktop.org
Subject: [RFC PATCH 3/3] drm/i915: Update intel_memory_region to use nested drm_mem_region
Date: Mon, 29 Jul 2019 12:54:57 -0400
Message-Id: <20190729165457.18500-4-brian.welty@intel.com>
In-Reply-To: <20190729165457.18500-1-brian.welty@intel.com>
References: <20190729165457.18500-1-brian.welty@intel.com>

Some fields are deleted from intel_memory_region in favor of instead
using the new nested drm_mem_region structure.

Note, this is based upon the unmerged i915 series [1] in order to show
how i915 might begin to integrate the proposed drm_mem_region.
[1] https://lists.freedesktop.org/archives/intel-gfx/2019-June/203649.html

Signed-off-by: Brian Welty
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c     |  2 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c           | 10 +++----
 drivers/gpu/drm/i915/i915_gpu_error.c         |  2 +-
 drivers/gpu/drm/i915/i915_query.c             |  2 +-
 drivers/gpu/drm/i915/intel_memory_region.c    | 10 ++++---
 drivers/gpu/drm/i915/intel_memory_region.h    | 19 ++++----------
 drivers/gpu/drm/i915/intel_region_lmem.c      | 26 +++++++++----------
 .../drm/i915/selftests/intel_memory_region.c  |  8 +++---
 9 files changed, 37 insertions(+), 44 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 73d2d72adc19..7e56fd89a972 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -606,7 +606,7 @@ static int i915_gem_object_region_select(struct drm_i915_private *dev_priv,
 		ret = i915_gem_object_migrate(obj, ce, id);
 		if (!ret) {
 			if (MEMORY_TYPE_FROM_REGION(region) ==
-			    INTEL_LMEM) {
+			    DRM_MEM_VRAM) {
 				/*
 				 * TODO: this should be part of get_pages(),
 				 * when async get_pages arrives
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index d24f34443c4c..ac18e73665d4 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -53,7 +53,7 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
 	 * If there's no chance of allocating enough pages for the whole
 	 * object, bail early.
 	 */
-	if (obj->base.size > resource_size(&mem->region))
+	if (obj->base.size > mem->region.size)
 		return -ENOMEM;
 
 	st = kmalloc(sizeof(*st), GFP_KERNEL);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 2288a55f27f1..f4adc7e397ff 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2737,20 +2737,20 @@ int i915_gem_init_memory_regions(struct drm_i915_private *i915)
 
 	for (i = 0; i < ARRAY_SIZE(intel_region_map); i++) {
 		struct intel_memory_region *mem = NULL;
-		u32 type;
+		u8 type;
 
 		if (!HAS_REGION(i915, BIT(i)))
 			continue;
 
 		type = MEMORY_TYPE_FROM_REGION(intel_region_map[i]);
 		switch (type) {
-		case INTEL_SMEM:
+		case DRM_MEM_SYSTEM:
 			mem = i915_gem_shmem_setup(i915);
 			break;
-		case INTEL_STOLEN:
+		case DRM_MEM_STOLEN:
 			mem = i915_gem_stolen_setup(i915);
 			break;
-		case INTEL_LMEM:
+		case DRM_MEM_VRAM:
 			mem = i915_gem_setup_fake_lmem(i915);
 			break;
 		}
@@ -2762,7 +2762,7 @@ int i915_gem_init_memory_regions(struct drm_i915_private *i915)
 		}
 
 		mem->id = intel_region_map[i];
-		mem->type = type;
+		mem->region.type = type;
 		mem->instance = MEMORY_INSTANCE_FROM_REGION(intel_region_map[i]);
 
 		i915->regions[i] = mem;
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 9feb597f2b01..908691c3aadb 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1048,7 +1048,7 @@ i915_error_object_create(struct drm_i915_private *i915,
 		struct intel_memory_region *mem = vma->obj->memory_region;
 
 		for_each_sgt_dma(dma, iter, vma->pages) {
-			s = io_mapping_map_atomic_wc(&mem->iomap, dma);
+			s = io_mapping_map_atomic_wc(&mem->region.iomap, dma);
 			ret = compress_page(compress, s, dst);
 			io_mapping_unmap_atomic(s);
diff --git a/drivers/gpu/drm/i915/i915_query.c b/drivers/gpu/drm/i915/i915_query.c
index 21c4c2592d6c..d16b4a6688e8 100644
--- a/drivers/gpu/drm/i915/i915_query.c
+++ b/drivers/gpu/drm/i915/i915_query.c
@@ -184,7 +184,7 @@ static int query_memregion_info(struct drm_i915_private *dev_priv,
 			continue;
 
 		info.id = region->id;
-		info.size = resource_size(&region->region);
+		info.size = region->region.size;
 
 		if (__copy_to_user(info_ptr, &info, sizeof(info)))
 			return -EFAULT;
diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c
index ab57b94b27a9..dcf077c23a72 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/intel_memory_region.c
@@ -200,7 +200,7 @@ i915_memory_region_get_pages_buddy(struct drm_i915_gem_object *obj)
 
 int i915_memory_region_init_buddy(struct intel_memory_region *mem)
 {
-	return i915_buddy_init(&mem->mm, resource_size(&mem->region),
+	return i915_buddy_init(&mem->mm, mem->region.size,
 			       mem->min_page_size);
 }
 
@@ -285,10 +285,12 @@ intel_memory_region_create(struct drm_i915_private *i915,
 		return ERR_PTR(-ENOMEM);
 
 	mem->i915 = i915;
-	mem->region = (struct resource)DEFINE_RES_MEM(start, size);
-	mem->io_start = io_start;
-	mem->min_page_size = min_page_size;
 	mem->ops = ops;
+	/* FIXME drm_mem_region_init? */
+	mem->region.start = start;
+	mem->region.size = size;
+	mem->region.io_start = io_start;
+	mem->min_page_size = min_page_size;
 
 	mutex_init(&mem->obj_lock);
 	INIT_LIST_HEAD(&mem->objects);
diff --git a/drivers/gpu/drm/i915/intel_memory_region.h b/drivers/gpu/drm/i915/intel_memory_region.h
index 4960096ec30f..fc00d43f07a7 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.h
+++ b/drivers/gpu/drm/i915/intel_memory_region.h
@@ -19,14 +19,8 @@ struct intel_memory_region;
 struct sg_table;
 
 /**
- * Base memory type
+ * Define supported memory regions
  */
-enum intel_memory_type {
-	INTEL_SMEM = 0,
-	INTEL_LMEM,
-	INTEL_STOLEN,
-};
-
 enum intel_region_id {
 	INTEL_MEMORY_SMEM = 0,
 	INTEL_MEMORY_LMEM,
@@ -47,9 +41,9 @@ enum intel_region_id {
  * Memory regions encoded as type | instance
  */
 static const u32 intel_region_map[] = {
-	[INTEL_MEMORY_SMEM] = BIT(INTEL_SMEM + INTEL_MEMORY_TYPE_SHIFT) | BIT(0),
-	[INTEL_MEMORY_LMEM] = BIT(INTEL_LMEM + INTEL_MEMORY_TYPE_SHIFT) | BIT(0),
-	[INTEL_MEMORY_STOLEN] = BIT(INTEL_STOLEN + INTEL_MEMORY_TYPE_SHIFT) | BIT(0),
+	[INTEL_MEMORY_SMEM] = BIT(DRM_MEM_SYSTEM + INTEL_MEMORY_TYPE_SHIFT) | BIT(0),
+	[INTEL_MEMORY_LMEM] = BIT(DRM_MEM_VRAM + INTEL_MEMORY_TYPE_SHIFT) | BIT(0),
+	[INTEL_MEMORY_STOLEN] = BIT(DRM_MEM_STOLEN + INTEL_MEMORY_TYPE_SHIFT) | BIT(0),
 };
 
 struct intel_memory_region_ops {
@@ -69,8 +63,7 @@ struct intel_memory_region {
 
 	const struct intel_memory_region_ops *ops;
 
-	struct io_mapping iomap;
-	struct resource region;
+	struct drm_mem_region region;
 
 	/* For faking for lmem */
 	struct drm_mm_node fake_mappable;
@@ -78,10 +71,8 @@ struct intel_memory_region {
 	struct i915_buddy_mm mm;
 	struct mutex mm_lock;
 
-	resource_size_t io_start;
 	resource_size_t min_page_size;
 
-	unsigned int type;
 	unsigned int instance;
 	unsigned int id;
diff --git a/drivers/gpu/drm/i915/intel_region_lmem.c b/drivers/gpu/drm/i915/intel_region_lmem.c
index afde9be72a12..6f0ce0314b98 100644
--- a/drivers/gpu/drm/i915/intel_region_lmem.c
+++ b/drivers/gpu/drm/i915/intel_region_lmem.c
@@ -250,7 +250,7 @@ static int i915_gem_init_fake_lmem_bar(struct intel_memory_region *mem)
 	int ret;
 
 	mem->fake_mappable.start = 0;
-	mem->fake_mappable.size = resource_size(&mem->region);
+	mem->fake_mappable.size = mem->region.size;
 	mem->fake_mappable.color = I915_COLOR_UNEVICTABLE;
 
 	ret = drm_mm_reserve_node(&ggtt->vm.mm, &mem->fake_mappable);
@@ -277,7 +277,7 @@ static void region_lmem_release(struct intel_memory_region *mem)
 {
 	i915_gem_relase_fake_lmem_bar(mem);
-	io_mapping_fini(&mem->iomap);
+	io_mapping_fini(&mem->region.iomap);
 	i915_memory_region_release_buddy(mem);
 }
 
@@ -294,14 +294,14 @@ region_lmem_init(struct intel_memory_region *mem)
 		}
 	}
 
-	if (!io_mapping_init_wc(&mem->iomap,
-				mem->io_start,
-				resource_size(&mem->region)))
+	if (!io_mapping_init_wc(&mem->region.iomap,
+				mem->region.io_start,
+				mem->region.size))
 		return -EIO;
 
 	ret = i915_memory_region_init_buddy(mem);
 	if (ret)
-		io_mapping_fini(&mem->iomap);
+		io_mapping_fini(&mem->region.iomap);
 
 	return ret;
 }
@@ -321,7 +321,7 @@ void __iomem *i915_gem_object_lmem_io_map_page(struct drm_i915_gem_object *obj,
 	offset = i915_gem_object_get_dma_address(obj, n);
 	offset -= intel_graphics_fake_lmem_res.start;
 
-	return io_mapping_map_atomic_wc(&obj->memory_region->iomap, offset);
+	return io_mapping_map_atomic_wc(&obj->memory_region->region.iomap, offset);
 }
 
 void __iomem *i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj,
@@ -335,7 +335,7 @@ void __iomem *i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj,
 	offset = i915_gem_object_get_dma_address(obj, n);
 	offset -= intel_graphics_fake_lmem_res.start;
 
-	return io_mapping_map_wc(&obj->memory_region->iomap, offset, size);
+	return io_mapping_map_wc(&obj->memory_region->region.iomap, offset, size);
 }
 
 resource_size_t i915_gem_object_lmem_io_offset(struct drm_i915_gem_object *obj,
@@ -352,14 +352,14 @@ resource_size_t i915_gem_object_lmem_io_offset(struct drm_i915_gem_object *obj,
 	daddr = i915_gem_object_get_dma_address(obj, n);
 	daddr -= intel_graphics_fake_lmem_res.start;
 
-	return mem->io_start + daddr;
+	return mem->region.io_start + daddr;
 }
 
 bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj)
 {
-	struct intel_memory_region *region = obj->memory_region;
+	struct intel_memory_region *mem = obj->memory_region;
 
-	return region && region->type == INTEL_LMEM;
+	return mem && mem->region.type == DRM_MEM_VRAM;
 }
 
 struct drm_i915_gem_object *
@@ -395,9 +395,9 @@ i915_gem_setup_fake_lmem(struct drm_i915_private *i915)
 					     io_start,
 					     &region_lmem_ops);
 	if (!IS_ERR(mem)) {
-		DRM_INFO("Intel graphics fake LMEM: %pR\n", &mem->region);
+		DRM_INFO("Intel graphics fake LMEM: %pR\n", mem);
 		DRM_INFO("Intel graphics fake LMEM IO start: %llx\n",
-			 (u64)mem->io_start);
+			 (u64)mem->region.io_start);
 	}
 
 	return mem;
diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
index 9793f548a71a..1496f47a794a 100644
--- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
@@ -32,7 +32,7 @@ static void close_objects(struct list_head *objects)
 static int igt_mock_fill(void *arg)
 {
 	struct intel_memory_region *mem = arg;
-	resource_size_t total = resource_size(&mem->region);
+	resource_size_t total = mem->region.size;
 	resource_size_t page_size;
 	resource_size_t rem;
 	unsigned long max_pages;
@@ -98,7 +98,7 @@ static int igt_frag_region(struct intel_memory_region *mem,
 	int err = 0;
 
 	target = mem->mm.min_size;
-	total = resource_size(&mem->region);
+	total = mem->region.size;
 	n_objects = total / target;
 
 	while (n_objects--) {
@@ -152,7 +152,7 @@ static int igt_mock_evict(void *arg)
 	if (err)
 		return err;
 
-	total = resource_size(&mem->region);
+	total = mem->region.size;
 	target = mem->mm.min_size;
 
 	while (target <= total / 2) {
@@ -198,7 +198,7 @@ static int igt_mock_continuous(void *arg)
 	if (err)
 		return err;
 
-	total = resource_size(&mem->region);
+	total = mem->region.size;
 	target = total / 2;
 
 	/*