From patchwork Fri Aug 9 22:26:07 2019
From: Matthew Auld <matthew.auld@intel.com>
To: intel-gfx@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
Subject: [PATCH v3 01/37] drm/i915: buddy allocator
Date: Fri, 9 Aug 2019 23:26:07 +0100
Message-Id: <20190809222643.23142-2-matthew.auld@intel.com>
In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com>
References: <20190809222643.23142-1-matthew.auld@intel.com>

Simple buddy allocator. We want to allocate properly aligned,
power-of-two blocks to promote usage of huge pages for the GTT, so 64K,
2M and possibly even 1G. While we do support allocating blocks at a
specific offset, that is intended more for preallocating portions of
the address space, say for an initial framebuffer; for other uses
drm_mm is probably a much better fit. Anyway, hopefully this can all be
thrown away if we eventually move to having the core MM manage device
memory.
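As a quick usage sketch of the interface added below (illustrative
sizes; locking and most error handling elided):

    struct i915_buddy_mm mm;
    struct i915_buddy_block *block;
    LIST_HEAD(blocks);
    int err;

    /* Manage a 1G space in minimum 64K chunks. */
    err = i915_buddy_init(&mm, SZ_1G, SZ_64K);
    if (err)
        return err;

    /* Order n is a 2^n * chunk_size block, so order 5 here is 2M. */
    block = i915_buddy_alloc(&mm, 5);
    if (!IS_ERR(block))
        i915_buddy_free(&mm, block);

    /* Or reserve an explicit range, say for an initial framebuffer. */
    err = i915_buddy_alloc_range(&mm, &blocks, 0, SZ_8M);
    if (!err)
        i915_buddy_free_list(&mm, &blocks);

    i915_buddy_fini(&mm);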
Signed-off-by: Matthew Auld --- drivers/gpu/drm/i915/Makefile | 1 + drivers/gpu/drm/i915/i915_buddy.c | 433 +++++++++++ drivers/gpu/drm/i915/i915_buddy.h | 126 +++ drivers/gpu/drm/i915/i915_globals.c | 1 + drivers/gpu/drm/i915/i915_globals.h | 1 + drivers/gpu/drm/i915/selftests/i915_buddy.c | 719 ++++++++++++++++++ .../drm/i915/selftests/i915_mock_selftests.h | 1 + 7 files changed, 1282 insertions(+) create mode 100644 drivers/gpu/drm/i915/i915_buddy.c create mode 100644 drivers/gpu/drm/i915/i915_buddy.h create mode 100644 drivers/gpu/drm/i915/selftests/i915_buddy.c diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile index d3ca46dc54ae..3962d9728dd7 100644 --- a/drivers/gpu/drm/i915/Makefile +++ b/drivers/gpu/drm/i915/Makefile @@ -126,6 +126,7 @@ gem-y += \ i915-y += \ $(gem-y) \ i915_active.o \ + i915_buddy.o \ i915_cmd_parser.o \ i915_gem_evict.o \ i915_gem_fence_reg.o \ diff --git a/drivers/gpu/drm/i915/i915_buddy.c b/drivers/gpu/drm/i915/i915_buddy.c new file mode 100644 index 000000000000..e3039e1273ef --- /dev/null +++ b/drivers/gpu/drm/i915/i915_buddy.c @@ -0,0 +1,433 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2019 Intel Corporation + */ + +#include + +#include "i915_buddy.h" + +#include "i915_gem.h" +#include "i915_globals.h" +#include "i915_utils.h" + +static struct i915_global_block { + struct i915_global base; + struct kmem_cache *slab_blocks; +} global; + +static void i915_global_buddy_shrink(void) +{ + kmem_cache_shrink(global.slab_blocks); +} + +static void i915_global_buddy_exit(void) +{ + kmem_cache_destroy(global.slab_blocks); +} + +static struct i915_global_block global = { { + .shrink = i915_global_buddy_shrink, + .exit = i915_global_buddy_exit, +} }; + +int __init i915_global_buddy_init(void) +{ + global.slab_blocks = KMEM_CACHE(i915_buddy_block, SLAB_HWCACHE_ALIGN); + if (!global.slab_blocks) + return -ENOMEM; + + return 0; +} + +static struct i915_buddy_block *i915_block_alloc(struct i915_buddy_block *parent, + unsigned int order, + u64 offset) +{ + struct i915_buddy_block *block; + + block = kmem_cache_zalloc(global.slab_blocks, GFP_KERNEL); + if (!block) + return NULL; + + block->header = offset; + block->header |= order; + block->parent = parent; + + return block; +} + +static void i915_block_free(struct i915_buddy_block *block) +{ + kmem_cache_free(global.slab_blocks, block); +} + +static void mark_allocated(struct i915_buddy_block *block) +{ + block->header &= ~I915_BUDDY_HEADER_STATE; + block->header |= I915_BUDDY_ALLOCATED; + + list_del(&block->link); +} + +static void mark_free(struct i915_buddy_mm *mm, + struct i915_buddy_block *block) +{ + block->header &= ~I915_BUDDY_HEADER_STATE; + block->header |= I915_BUDDY_FREE; + + list_add(&block->link, + &mm->free_list[i915_buddy_block_order(block)]); +} + +static void mark_split(struct i915_buddy_block *block) +{ + block->header &= ~I915_BUDDY_HEADER_STATE; + block->header |= I915_BUDDY_SPLIT; + + list_del(&block->link); +} + +int i915_buddy_init(struct i915_buddy_mm *mm, u64 size, u64 chunk_size) +{ + unsigned int i; + u64 offset; + + if (size < chunk_size) + return -EINVAL; + + if (chunk_size < PAGE_SIZE) + return -EINVAL; + + if (!is_power_of_2(chunk_size)) + return -EINVAL; + + size = round_down(size, chunk_size); + + mm->size = size; + mm->chunk_size = chunk_size; + mm->max_order = ilog2(size) - ilog2(chunk_size); + + GEM_BUG_ON(mm->max_order > I915_BUDDY_MAX_ORDER); + + mm->free_list = kmalloc_array(mm->max_order + 1, + sizeof(struct list_head), + GFP_KERNEL); + if 
(!mm->free_list) + return -ENOMEM; + + for (i = 0; i <= mm->max_order; ++i) + INIT_LIST_HEAD(&mm->free_list[i]); + + mm->n_roots = hweight64(size); + + mm->roots = kmalloc_array(mm->n_roots, + sizeof(struct i915_buddy_block *), + GFP_KERNEL); + if (!mm->roots) + goto out_free_list; + + offset = 0; + i = 0; + + /* + * Split into power-of-two blocks, in case we are given a size that is + * not itself a power-of-two. + */ + do { + struct i915_buddy_block *root; + unsigned int order; + u64 root_size; + + root_size = rounddown_pow_of_two(size); + order = ilog2(root_size) - ilog2(chunk_size); + + root = i915_block_alloc(NULL, order, offset); + if (!root) + goto out_free_roots; + + mark_free(mm, root); + + GEM_BUG_ON(i > mm->max_order); + GEM_BUG_ON(i915_buddy_block_size(mm, root) < chunk_size); + + mm->roots[i] = root; + + offset += root_size; + size -= root_size; + i++; + } while (size); + + return 0; + +out_free_roots: + while (i--) + i915_block_free(mm->roots[i]); + kfree(mm->roots); +out_free_list: + kfree(mm->free_list); + return -ENOMEM; +} + +void i915_buddy_fini(struct i915_buddy_mm *mm) +{ + int err = 0; + int i; + + for (i = 0; i < mm->n_roots; ++i) { + if (!i915_buddy_block_is_free(mm->roots[i])) { + err = -EBUSY; + continue; + } + + i915_block_free(mm->roots[i]); + } + + kfree(mm->roots); + kfree(mm->free_list); +} + +static int split_block(struct i915_buddy_mm *mm, + struct i915_buddy_block *block) +{ + unsigned int block_order = i915_buddy_block_order(block) - 1; + u64 offset = i915_buddy_block_offset(block); + + GEM_BUG_ON(!i915_buddy_block_is_free(block)); + GEM_BUG_ON(!i915_buddy_block_order(block)); + + block->left = i915_block_alloc(block, block_order, offset); + if (!block->left) + return -ENOMEM; + + block->right = i915_block_alloc(block, block_order, + offset + (mm->chunk_size << block_order)); + if (!block->right) { + i915_block_free(block->left); + return -ENOMEM; + } + + mark_free(mm, block->left); + mark_free(mm, block->right); + + mark_split(block); + + return 0; +} + +static struct i915_buddy_block * +get_buddy(struct i915_buddy_block *block) +{ + struct i915_buddy_block *parent; + + parent = block->parent; + if (!parent) + return NULL; + + if (parent->left == block) + return parent->right; + + return parent->left; +} + +static void __i915_buddy_free(struct i915_buddy_mm *mm, + struct i915_buddy_block *block) +{ + struct i915_buddy_block *parent; + + while ((parent = block->parent)) { + struct i915_buddy_block *buddy; + + buddy = get_buddy(block); + + if (!i915_buddy_block_is_free(buddy)) + break; + + list_del(&buddy->link); + + i915_block_free(block); + i915_block_free(buddy); + + block = parent; + } + + mark_free(mm, block); +} + +void i915_buddy_free(struct i915_buddy_mm *mm, + struct i915_buddy_block *block) +{ + GEM_BUG_ON(!i915_buddy_block_is_allocated(block)); + __i915_buddy_free(mm, block); +} + +void i915_buddy_free_list(struct i915_buddy_mm *mm, struct list_head *objects) +{ + struct i915_buddy_block *block, *on; + + list_for_each_entry_safe(block, on, objects, link) + i915_buddy_free(mm, block); + INIT_LIST_HEAD(objects); +} + +/* + * Allocate power-of-two block. The order value here translates to: + * + * 0 = 2^0 * mm->chunk_size + * 1 = 2^1 * mm->chunk_size + * 2 = 2^2 * mm->chunk_size + * ... 
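+ * For example (illustrative numbers): with a 64K chunk_size, order 5
+ * yields a 2M block, which is also aligned to 2M by construction.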
+ */ +struct i915_buddy_block * +i915_buddy_alloc(struct i915_buddy_mm *mm, unsigned int order) +{ + struct i915_buddy_block *block = NULL; + unsigned int i; + int err; + + for (i = order; i <= mm->max_order; ++i) { + block = list_first_entry_or_null(&mm->free_list[i], + struct i915_buddy_block, + link); + if (block) + break; + } + + if (!block) + return ERR_PTR(-ENOSPC); + + GEM_BUG_ON(!i915_buddy_block_is_free(block)); + + while (i != order) { + err = split_block(mm, block); + if (unlikely(err)) + goto out_free; + + /* Go low */ + block = block->left; + i--; + } + + mark_allocated(block); + return block; + +out_free: + __i915_buddy_free(mm, block); + return ERR_PTR(err); +} + +static inline bool overlaps(u64 s1, u64 e1, u64 s2, u64 e2) +{ + return s1 <= e2 && e1 >= s2; +} + +static inline bool contains(u64 s1, u64 e1, u64 s2, u64 e2) +{ + return s1 <= s2 && e1 >= e2; +} + +/* + * Allocate range. Note that it's safe to chain together multiple alloc_ranges + * with the same blocks list. + * + * Intended for pre-allocating portions of the address space, for example to + * reserve a block for the initial framebuffer or similar, hence the expectation + * here is that i915_buddy_alloc() is still the main vehicle for + * allocations, so if that's not the case then the drm_mm range allocator is + * probably a much better fit, and so you should probably go use that instead. + */ +int i915_buddy_alloc_range(struct i915_buddy_mm *mm, + struct list_head *blocks, + u64 start, u64 size) +{ + struct i915_buddy_block *block; + struct i915_buddy_block *buddy; + LIST_HEAD(allocated); + LIST_HEAD(dfs); + u64 end; + int err; + int i; + + if (size < mm->chunk_size) + return -EINVAL; + + if (!IS_ALIGNED(start, mm->chunk_size)) + return -EINVAL; + + if (!size || !IS_ALIGNED(size, mm->chunk_size)) + return -EINVAL; + + if (range_overflows(start, size, mm->size)) + return -EINVAL; + + for (i = 0; i < mm->n_roots; ++i) + list_add_tail(&mm->roots[i]->tmp_link, &dfs); + + end = start + size - 1; + + do { + u64 block_start; + u64 block_end; + + block = list_first_entry_or_null(&dfs, + struct i915_buddy_block, + tmp_link); + if (!block) + break; + + list_del(&block->tmp_link); + + block_start = i915_buddy_block_offset(block); + block_end = block_start + i915_buddy_block_size(mm, block) - 1; + + if (!overlaps(start, end, block_start, block_end)) + continue; + + if (i915_buddy_block_is_allocated(block)) { + err = -ENOSPC; + goto err_free; + } + + if (contains(start, end, block_start, block_end)) { + if (!i915_buddy_block_is_free(block)) { + err = -ENOSPC; + goto err_free; + } + + mark_allocated(block); + list_add_tail(&block->link, &allocated); + continue; + } + + if (!i915_buddy_block_is_split(block)) { + err = split_block(mm, block); + if (unlikely(err)) + goto err_undo; + } + + list_add(&block->right->tmp_link, &dfs); + list_add(&block->left->tmp_link, &dfs); + } while (1); + + list_splice_tail(&allocated, blocks); + return 0; + +err_undo: + /* + * We really don't want to leave around a bunch of split blocks, since + * bigger is better, so make sure we merge everything back before we + * free the allocated blocks. 
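+ * (If split_block() failed, the current block is still marked free; when
+ * its buddy is also free, the __i915_buddy_free() below merges the pair
+ * back up the tree.)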
+ */ + buddy = get_buddy(block); + if (buddy && (i915_buddy_block_is_free(block) && + i915_buddy_block_is_free(buddy))) + __i915_buddy_free(mm, block); + +err_free: + i915_buddy_free_list(mm, &allocated); + return err; +} + +#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) +#include "selftests/i915_buddy.c" +#endif diff --git a/drivers/gpu/drm/i915/i915_buddy.h b/drivers/gpu/drm/i915/i915_buddy.h new file mode 100644 index 000000000000..76c20941aed5 --- /dev/null +++ b/drivers/gpu/drm/i915/i915_buddy.h @@ -0,0 +1,126 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2019 Intel Corporation + */ + +#ifndef __I915_BUDDY_H__ +#define __I915_BUDDY_H__ + +#include +#include + +struct i915_buddy_block { +#define I915_BUDDY_HEADER_OFFSET GENMASK_ULL(63, 12) +#define I915_BUDDY_HEADER_STATE GENMASK_ULL(11, 10) +#define I915_BUDDY_ALLOCATED (1 << 10) +#define I915_BUDDY_FREE (2 << 10) +#define I915_BUDDY_SPLIT (3 << 10) +#define I915_BUDDY_HEADER_ORDER GENMASK_ULL(9, 0) + u64 header; + + struct i915_buddy_block *left; + struct i915_buddy_block *right; + struct i915_buddy_block *parent; + + /* + * While the block is allocated by the user through i915_buddy_alloc*, + * the user has ownership of the link, for example to maintain within + * a list, if so desired. As soon as the block is freed with + * i915_buddy_free* ownership is given back to the mm. + */ + struct list_head link; + struct list_head tmp_link; +}; + +#define I915_BUDDY_MAX_ORDER I915_BUDDY_HEADER_ORDER + +/* + * Binary Buddy System. + * + * Locking should be handled by the user, a simple mutex around + * i915_buddy_alloc* and i915_buddy_free* should suffice. + */ +struct i915_buddy_mm { + /* Maintain a free list for each order. */ + struct list_head *free_list; + + /* + * Maintain explicit binary tree(s) to track the allocation of the + * address space. This gives us a simple way of finding a buddy block + * and performing the potentially recursive merge step when freeing a + * block. Nodes are either allocated or free, in which case they will + * also exist on the respective free list. + */ + struct i915_buddy_block **roots; + + /* + * Anything from here is public, and remains static for the lifetime of + * the mm. Everything above is considered do-not-touch. 
+ */ + unsigned int n_roots; + unsigned int max_order; + + /* Must be at least PAGE_SIZE */ + u64 chunk_size; + u64 size; +}; + +static inline u64 +i915_buddy_block_offset(struct i915_buddy_block *block) +{ + return block->header & I915_BUDDY_HEADER_OFFSET; +} + +static inline unsigned int +i915_buddy_block_order(struct i915_buddy_block *block) +{ + return block->header & I915_BUDDY_HEADER_ORDER; +} + +static inline unsigned int +i915_buddy_block_state(struct i915_buddy_block *block) +{ + return block->header & I915_BUDDY_HEADER_STATE; +} + +static inline bool +i915_buddy_block_is_allocated(struct i915_buddy_block *block) +{ + return i915_buddy_block_state(block) == I915_BUDDY_ALLOCATED; +} + +static inline bool +i915_buddy_block_is_free(struct i915_buddy_block *block) +{ + return i915_buddy_block_state(block) == I915_BUDDY_FREE; +} + +static inline bool +i915_buddy_block_is_split(struct i915_buddy_block *block) +{ + return i915_buddy_block_state(block) == I915_BUDDY_SPLIT; +} + +static inline u64 +i915_buddy_block_size(struct i915_buddy_mm *mm, + struct i915_buddy_block *block) +{ + return mm->chunk_size << i915_buddy_block_order(block); +} + +int i915_buddy_init(struct i915_buddy_mm *mm, u64 size, u64 chunk_size); + +void i915_buddy_fini(struct i915_buddy_mm *mm); + +struct i915_buddy_block * +i915_buddy_alloc(struct i915_buddy_mm *mm, unsigned int order); + +int i915_buddy_alloc_range(struct i915_buddy_mm *mm, + struct list_head *blocks, + u64 start, u64 size); + +void i915_buddy_free(struct i915_buddy_mm *mm, struct i915_buddy_block *block); + +void i915_buddy_free_list(struct i915_buddy_mm *mm, struct list_head *objects); + +#endif diff --git a/drivers/gpu/drm/i915/i915_globals.c b/drivers/gpu/drm/i915/i915_globals.c index 2d5fcba98841..be127cd28931 100644 --- a/drivers/gpu/drm/i915/i915_globals.c +++ b/drivers/gpu/drm/i915/i915_globals.c @@ -62,6 +62,7 @@ static void __i915_globals_cleanup(void) static __initconst int (* const initfn[])(void) = { i915_global_active_init, + i915_global_buddy_init, i915_global_context_init, i915_global_gem_context_init, i915_global_objects_init, diff --git a/drivers/gpu/drm/i915/i915_globals.h b/drivers/gpu/drm/i915/i915_globals.h index 2d199f411a4a..b2f5cd9b9b1a 100644 --- a/drivers/gpu/drm/i915/i915_globals.h +++ b/drivers/gpu/drm/i915/i915_globals.h @@ -27,6 +27,7 @@ void i915_globals_exit(void); /* constructors */ int i915_global_active_init(void); +int i915_global_buddy_init(void); int i915_global_context_init(void); int i915_global_gem_context_init(void); int i915_global_objects_init(void); diff --git a/drivers/gpu/drm/i915/selftests/i915_buddy.c b/drivers/gpu/drm/i915/selftests/i915_buddy.c new file mode 100644 index 000000000000..b839dd99dd1f --- /dev/null +++ b/drivers/gpu/drm/i915/selftests/i915_buddy.c @@ -0,0 +1,719 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2019 Intel Corporation + */ + +#include + +#include "../i915_selftest.h" +#include "i915_random.h" + +#define SZ_8G (1ULL << 33) + +static void __igt_dump_block(struct i915_buddy_mm *mm, + struct i915_buddy_block *block, + bool buddy) +{ + pr_err("block info: header=%llx, state=%u, order=%d, offset=%llx size=%llx root=%s buddy=%s\n", + block->header, + i915_buddy_block_state(block), + i915_buddy_block_order(block), + i915_buddy_block_offset(block), + i915_buddy_block_size(mm, block), + yesno(!block->parent), + yesno(buddy)); +} + +static void igt_dump_block(struct i915_buddy_mm *mm, + struct i915_buddy_block *block) +{ + struct i915_buddy_block *buddy; + + 
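+	/* Dump the block itself, then its buddy (if any) for context. */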
__igt_dump_block(mm, block, false);
+
+	buddy = get_buddy(block);
+	if (buddy)
+		__igt_dump_block(mm, buddy, true);
+}
+
+static int igt_check_block(struct i915_buddy_mm *mm,
+			   struct i915_buddy_block *block)
+{
+	struct i915_buddy_block *buddy;
+	unsigned int block_state;
+	u64 block_size;
+	u64 offset;
+	int err = 0;
+
+	block_state = i915_buddy_block_state(block);
+
+	if (block_state != I915_BUDDY_ALLOCATED &&
+	    block_state != I915_BUDDY_FREE &&
+	    block_state != I915_BUDDY_SPLIT) {
+		pr_err("block state mismatch\n");
+		err = -EINVAL;
+	}
+
+	block_size = i915_buddy_block_size(mm, block);
+	offset = i915_buddy_block_offset(block);
+
+	if (block_size < mm->chunk_size) {
+		pr_err("block size smaller than min size\n");
+		err = -EINVAL;
+	}
+
+	if (!is_power_of_2(block_size)) {
+		pr_err("block size not power of two\n");
+		err = -EINVAL;
+	}
+
+	if (!IS_ALIGNED(block_size, mm->chunk_size)) {
+		pr_err("block size not aligned to min size\n");
+		err = -EINVAL;
+	}
+
+	if (!IS_ALIGNED(offset, mm->chunk_size)) {
+		pr_err("block offset not aligned to min size\n");
+		err = -EINVAL;
+	}
+
+	if (!IS_ALIGNED(offset, block_size)) {
+		pr_err("block offset not aligned to block size\n");
+		err = -EINVAL;
+	}
+
+	buddy = get_buddy(block);
+
+	if (!buddy && block->parent) {
+		pr_err("buddy has gone fishing\n");
+		err = -EINVAL;
+	}
+
+	if (buddy) {
+		if (i915_buddy_block_offset(buddy) != (offset ^ block_size)) {
+			pr_err("buddy has wrong offset\n");
+			err = -EINVAL;
+		}
+
+		if (i915_buddy_block_size(mm, buddy) != block_size) {
+			pr_err("buddy size mismatch\n");
+			err = -EINVAL;
+		}
+
+		if (i915_buddy_block_state(buddy) == block_state &&
+		    block_state == I915_BUDDY_FREE) {
+			pr_err("block and its buddy are free\n");
+			err = -EINVAL;
+		}
+	}
+
+	return err;
+}
+
+static int igt_check_blocks(struct i915_buddy_mm *mm,
+			    struct list_head *blocks,
+			    u64 expected_size,
+			    bool is_contiguous)
+{
+	struct i915_buddy_block *block;
+	struct i915_buddy_block *prev;
+	u64 total;
+	int err = 0;
+
+	block = NULL;
+	prev = NULL;
+	total = 0;
+
+	list_for_each_entry(block, blocks, link) {
+		err = igt_check_block(mm, block);
+
+		if (!i915_buddy_block_is_allocated(block)) {
+			pr_err("block not allocated\n");
+			err = -EINVAL;
+		}
+
+		if (is_contiguous && prev) {
+			u64 prev_block_size;
+			u64 prev_offset;
+			u64 offset;
+
+			prev_offset = i915_buddy_block_offset(prev);
+			prev_block_size = i915_buddy_block_size(mm, prev);
+			offset = i915_buddy_block_offset(block);
+
+			if (offset != (prev_offset + prev_block_size)) {
+				pr_err("block offset mismatch\n");
+				err = -EINVAL;
+			}
+		}
+
+		if (err)
+			break;
+
+		total += i915_buddy_block_size(mm, block);
+		prev = block;
+	}
+
+	if (!err) {
+		if (total != expected_size) {
+			pr_err("size mismatch, expected=%llx, found=%llx\n",
+			       expected_size, total);
+			err = -EINVAL;
+		}
+		return err;
+	}
+
+	if (prev) {
+		pr_err("prev block, dump:\n");
+		igt_dump_block(mm, prev);
+	}
+
+	if (block) {
+		pr_err("bad block, dump:\n");
+		igt_dump_block(mm, block);
+	}
+
+	return err;
+}
+
+static int igt_check_mm(struct i915_buddy_mm *mm)
+{
+	struct i915_buddy_block *root;
+	struct i915_buddy_block *prev;
+	unsigned int i;
+	u64 total;
+	int err = 0;
+
+	if (!mm->n_roots) {
+		pr_err("n_roots is zero\n");
+		return -EINVAL;
+	}
+
+	if (mm->n_roots != hweight64(mm->size)) {
+		pr_err("n_roots mismatch, n_roots=%u, expected=%lu\n",
+		       mm->n_roots, hweight64(mm->size));
+		return -EINVAL;
+	}
+
+	root = NULL;
+	prev = NULL;
+	total = 0;
+
+	for (i = 0; i < mm->n_roots; ++i) {
+		struct i915_buddy_block *block;
+		unsigned
int order; + + root = mm->roots[i]; + if (!root) { + pr_err("root(%u) is NULL\n", i); + err = -EINVAL; + break; + } + + err = igt_check_block(mm, root); + + if (!i915_buddy_block_is_free(root)) { + pr_err("root not free\n"); + err = -EINVAL; + } + + order = i915_buddy_block_order(root); + + if (!i) { + if (order != mm->max_order) { + pr_err("max order root missing\n"); + err = -EINVAL; + } + } + + if (prev) { + u64 prev_block_size; + u64 prev_offset; + u64 offset; + + prev_offset = i915_buddy_block_offset(prev); + prev_block_size = i915_buddy_block_size(mm, prev); + offset = i915_buddy_block_offset(root); + + if (offset != (prev_offset + prev_block_size)) { + pr_err("root offset mismatch\n"); + err = -EINVAL; + } + } + + block = list_first_entry_or_null(&mm->free_list[order], + struct i915_buddy_block, + link); + if (block != root) { + pr_err("root mismatch at order=%u\n", order); + err = -EINVAL; + } + + if (err) + break; + + prev = root; + total += i915_buddy_block_size(mm, root); + } + + if (!err) { + if (total != mm->size) { + pr_err("expected mm size=%llx, found=%llx\n", mm->size, + total); + err = -EINVAL; + } + return err; + } + + if (prev) { + pr_err("prev root(%u), dump:\n", i - 1); + igt_dump_block(mm, prev); + } + + if (root) { + pr_err("bad root(%u), dump:\n", i); + igt_dump_block(mm, root); + } + + return err; +} + +static void igt_mm_config(u64 *size, u64 *chunk_size) +{ + I915_RND_STATE(prng); + u64 s, ms; + + /* Nothing fancy, just try to get an interesting bit pattern */ + + prandom_seed_state(&prng, i915_selftest.random_seed); + + s = i915_prandom_u64_state(&prng) & (SZ_8G - 1); + ms = BIT_ULL(12 + (prandom_u32_state(&prng) % ilog2(s >> 12))); + s = max(s & -ms, ms); + + *chunk_size = ms; + *size = s; +} + +static int igt_buddy_alloc_smoke(void *arg) +{ + struct i915_buddy_mm mm; + int max_order; + u64 chunk_size; + u64 mm_size; + int err; + + igt_mm_config(&mm_size, &chunk_size); + + pr_info("buddy_init with size=%llx, chunk_size=%llx\n", mm_size, chunk_size); + + err = i915_buddy_init(&mm, mm_size, chunk_size); + if (err) { + pr_err("buddy_init failed(%d)\n", err); + return err; + } + + for (max_order = mm.max_order; max_order >= 0; max_order--) { + struct i915_buddy_block *block; + int order; + LIST_HEAD(blocks); + u64 total; + + err = igt_check_mm(&mm); + if (err) { + pr_err("pre-mm check failed, abort\n"); + break; + } + + pr_info("filling from max_order=%u\n", max_order); + + order = max_order; + total = 0; + + do { +retry: + block = i915_buddy_alloc(&mm, order); + if (IS_ERR(block)) { + err = PTR_ERR(block); + if (err == -ENOMEM) { + pr_info("buddy_alloc hit -ENOMEM with order=%d\n", + order); + } else { + if (order--) { + err = 0; + goto retry; + } + + pr_err("buddy_alloc with order=%d failed(%d)\n", + order, err); + } + + break; + } + + list_add_tail(&block->link, &blocks); + + if (i915_buddy_block_order(block) != order) { + pr_err("buddy_alloc order mismatch\n"); + err = -EINVAL; + break; + } + + total += i915_buddy_block_size(&mm, block); + } while (total < mm.size); + + if (!err) + err = igt_check_blocks(&mm, &blocks, total, false); + + i915_buddy_free_list(&mm, &blocks); + + if (!err) { + err = igt_check_mm(&mm); + if (err) + pr_err("post-mm check failed\n"); + } + + if (err) + break; + } + + if (err == -ENOMEM) + err = 0; + + i915_buddy_fini(&mm); + + return err; +} + +static int igt_buddy_alloc_pessimistic(void *arg) +{ + const unsigned int max_order = 16; + struct i915_buddy_block *block, *bn; + struct i915_buddy_mm mm; + unsigned int order; + 
LIST_HEAD(blocks); + int err; + + /* + * Create a pot-sized mm, then allocate one of each possible + * order within. This should leave the mm with exactly one + * page left. + */ + + err = i915_buddy_init(&mm, PAGE_SIZE << max_order, PAGE_SIZE); + if (err) { + pr_err("buddy_init failed(%d)\n", err); + return err; + } + GEM_BUG_ON(mm.max_order != max_order); + + for (order = 0; order < max_order; order++) { + block = i915_buddy_alloc(&mm, order); + if (IS_ERR(block)) { + pr_info("buddy_alloc hit -ENOMEM with order=%d\n", + order); + err = PTR_ERR(block); + goto err; + } + + list_add_tail(&block->link, &blocks); + } + + /* And now the last remaining block available */ + block = i915_buddy_alloc(&mm, 0); + if (IS_ERR(block)) { + pr_info("buddy_alloc hit -ENOMEM on final alloc\n"); + err = PTR_ERR(block); + goto err; + } + list_add_tail(&block->link, &blocks); + + /* Should be completely full! */ + for (order = max_order; order--; ) { + block = i915_buddy_alloc(&mm, order); + if (!IS_ERR(block)) { + pr_info("buddy_alloc unexpectedly succeeded at order %d, it should be full!", + order); + list_add_tail(&block->link, &blocks); + err = -EINVAL; + goto err; + } + } + + block = list_last_entry(&blocks, typeof(*block), link); + list_del(&block->link); + i915_buddy_free(&mm, block); + + /* As we free in increasing size, we make available larger blocks */ + order = 1; + list_for_each_entry_safe(block, bn, &blocks, link) { + list_del(&block->link); + i915_buddy_free(&mm, block); + + block = i915_buddy_alloc(&mm, order); + if (IS_ERR(block)) { + pr_info("buddy_alloc (realloc) hit -ENOMEM with order=%d\n", + order); + err = PTR_ERR(block); + goto err; + } + i915_buddy_free(&mm, block); + order++; + } + + /* To confirm, now the whole mm should be available */ + block = i915_buddy_alloc(&mm, max_order); + if (IS_ERR(block)) { + pr_info("buddy_alloc (realloc) hit -ENOMEM with order=%d\n", + max_order); + err = PTR_ERR(block); + goto err; + } + i915_buddy_free(&mm, block); + +err: + i915_buddy_free_list(&mm, &blocks); + i915_buddy_fini(&mm); + return err; +} + +static int igt_buddy_alloc_optimistic(void *arg) +{ + const int max_order = 16; + struct i915_buddy_block *block; + struct i915_buddy_mm mm; + LIST_HEAD(blocks); + int order; + int err; + + /* + * Create a mm with one block of each order available, and + * try to allocate them all. + */ + + err = i915_buddy_init(&mm, + PAGE_SIZE * ((1 << (max_order + 1)) - 1), + PAGE_SIZE); + if (err) { + pr_err("buddy_init failed(%d)\n", err); + return err; + } + GEM_BUG_ON(mm.max_order != max_order); + + for (order = 0; order <= max_order; order++) { + block = i915_buddy_alloc(&mm, order); + if (IS_ERR(block)) { + pr_info("buddy_alloc hit -ENOMEM with order=%d\n", + order); + err = PTR_ERR(block); + goto err; + } + + list_add_tail(&block->link, &blocks); + } + + /* Should be completely full! */ + block = i915_buddy_alloc(&mm, 0); + if (!IS_ERR(block)) { + pr_info("buddy_alloc unexpectedly succeeded, it should be full!"); + list_add_tail(&block->link, &blocks); + err = -EINVAL; + goto err; + } + +err: + i915_buddy_free_list(&mm, &blocks); + i915_buddy_fini(&mm); + return err; +} + +static int igt_buddy_alloc_pathological(void *arg) +{ + const int max_order = 16; + struct i915_buddy_block *block; + struct i915_buddy_mm mm; + LIST_HEAD(blocks); + LIST_HEAD(holes); + int order, top; + int err; + + /* + * Create a pot-sized mm, then allocate one of each possible + * order within. This should leave the mm with exactly one + * page left. 
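+	 * (One block of each order 0..max_order-1 sums to 2^max_order - 1
+	 * chunks, one chunk short of the pot-sized mm.)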
Free the largest block, then whittle down again. + * Eventually we will have a fully 50% fragmented mm. + */ + + err = i915_buddy_init(&mm, PAGE_SIZE << max_order, PAGE_SIZE); + if (err) { + pr_err("buddy_init failed(%d)\n", err); + return err; + } + GEM_BUG_ON(mm.max_order != max_order); + + for (top = max_order; top; top--) { + /* Make room by freeing the largest allocated block */ + block = list_first_entry_or_null(&blocks, typeof(*block), link); + if (block) { + list_del(&block->link); + i915_buddy_free(&mm, block); + } + + for (order = top; order--; ) { + block = i915_buddy_alloc(&mm, order); + if (IS_ERR(block)) { + pr_info("buddy_alloc hit -ENOMEM with order=%d, top=%d\n", + order, top); + err = PTR_ERR(block); + goto err; + } + list_add_tail(&block->link, &blocks); + } + + /* There should be one final page for this sub-allocation */ + block = i915_buddy_alloc(&mm, 0); + if (IS_ERR(block)) { + pr_info("buddy_alloc hit -ENOMEM for hole\n"); + err = PTR_ERR(block); + goto err; + } + list_add_tail(&block->link, &holes); + + block = i915_buddy_alloc(&mm, top); + if (!IS_ERR(block)) { + pr_info("buddy_alloc unexpectedly succeeded at top-order %d/%d, it should be full!", + top, max_order); + list_add_tail(&block->link, &blocks); + err = -EINVAL; + goto err; + } + } + + i915_buddy_free_list(&mm, &holes); + + /* Nothing larger than blocks of chunk_size now available */ + for (order = 1; order <= max_order; order++) { + block = i915_buddy_alloc(&mm, order); + if (!IS_ERR(block)) { + pr_info("buddy_alloc unexpectedly succeeded at order %d, it should be full!", + order); + list_add_tail(&block->link, &blocks); + err = -EINVAL; + goto err; + } + } + +err: + list_splice_tail(&holes, &blocks); + i915_buddy_free_list(&mm, &blocks); + i915_buddy_fini(&mm); + return err; +} + +static int igt_buddy_alloc_range(void *arg) +{ + struct i915_buddy_mm mm; + unsigned long page_num; + LIST_HEAD(blocks); + u64 chunk_size; + u64 offset; + u64 size; + u64 rem; + int err; + + igt_mm_config(&size, &chunk_size); + + pr_info("buddy_init with size=%llx, chunk_size=%llx\n", size, chunk_size); + + err = i915_buddy_init(&mm, size, chunk_size); + if (err) { + pr_err("buddy_init failed(%d)\n", err); + return err; + } + + err = igt_check_mm(&mm); + if (err) { + pr_err("pre-mm check failed, abort, abort, abort!\n"); + goto err_fini; + } + + rem = mm.size; + offset = 0; + + for_each_prime_number_from(page_num, 1, ULONG_MAX - 1) { + struct i915_buddy_block *block; + LIST_HEAD(tmp); + + size = min(page_num * mm.chunk_size, rem); + + err = i915_buddy_alloc_range(&mm, &tmp, offset, size); + if (err) { + if (err == -ENOMEM) { + pr_info("alloc_range hit -ENOMEM with size=%llx\n", + size); + } else { + pr_err("alloc_range with offset=%llx, size=%llx failed(%d)\n", + offset, size, err); + } + + break; + } + + block = list_first_entry_or_null(&tmp, + struct i915_buddy_block, + link); + if (!block) { + pr_err("alloc_range has no blocks\n"); + err = -EINVAL; + } + + if (i915_buddy_block_offset(block) != offset) { + pr_err("alloc_range start offset mismatch, found=%llx, expected=%llx\n", + i915_buddy_block_offset(block), offset); + err = -EINVAL; + } + + if (!err) + err = igt_check_blocks(&mm, &tmp, size, true); + + list_splice_tail(&tmp, &blocks); + + if (err) + break; + + offset += size; + + rem -= size; + if (!rem) + break; + } + + if (err == -ENOMEM) + err = 0; + + i915_buddy_free_list(&mm, &blocks); + + if (!err) { + err = igt_check_mm(&mm); + if (err) + pr_err("post-mm check failed\n"); + } + +err_fini: + i915_buddy_fini(&mm); + 
+	return err;
+}
+
+int i915_buddy_mock_selftests(void)
+{
+	static const struct i915_subtest tests[] = {
+		SUBTEST(igt_buddy_alloc_pessimistic),
+		SUBTEST(igt_buddy_alloc_optimistic),
+		SUBTEST(igt_buddy_alloc_pathological),
+		SUBTEST(igt_buddy_alloc_smoke),
+		SUBTEST(igt_buddy_alloc_range),
+	};
+
+	return i915_subtests(tests, NULL);
+}
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index b55da4d9ccba..b88084fe3269 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -25,3 +25,4 @@ selftest(evict, i915_gem_evict_mock_selftests)
 selftest(gtt, i915_gem_gtt_mock_selftests)
 selftest(hugepages, i915_gem_huge_page_mock_selftests)
 selftest(contexts, i915_gem_context_mock_selftests)
+selftest(buddy, i915_buddy_mock_selftests)

From patchwork Fri Aug 9 22:26:08 2019
From: Matthew Auld <matthew.auld@intel.com>
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v3 02/37] drm/i915: introduce intel_memory_region
Date: Fri, 9 Aug 2019 23:26:08 +0100
Message-Id: <20190809222643.23142-3-matthew.auld@intel.com>
In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com>
References: <20190809222643.23142-1-matthew.auld@intel.com>
Cc: Abdiel Janulgue, Niranjana Vishwanathapura, dri-devel@lists.freedesktop.org

Support memory regions, as defined by a given (start, end), and allow
creating GEM objects which are backed by said region. The immediate
goal here is to have something to represent our device memory, but
later on we also want to represent every memory domain with a region,
so stolen, shmem, and of course device. At some point we are probably
going to want to use a common struct here, such that we are better
aligned with say TTM.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Abdiel Janulgue
Signed-off-by: Niranjana Vishwanathapura
Cc: Joonas Lahtinen
---
 drivers/gpu/drm/i915/Makefile                 |   2 +
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |   9 +
 drivers/gpu/drm/i915/gem/i915_gem_region.c    | 129 +++++++++++++
 drivers/gpu/drm/i915/gem/i915_gem_region.h    |  27 +++
 .../gpu/drm/i915/gem/selftests/huge_pages.c   |  78 ++++++++
 drivers/gpu/drm/i915/i915_buddy.h             |   2 +
 drivers/gpu/drm/i915/i915_drv.h               |   1 +
 drivers/gpu/drm/i915/intel_memory_region.c    | 175 ++++++++++++++++++
 drivers/gpu/drm/i915/intel_memory_region.h    | 107 +++++++++++
 .../drm/i915/selftests/i915_mock_selftests.h  |   1 +
 .../drm/i915/selftests/intel_memory_region.c  | 114 ++++++++++++
 .../gpu/drm/i915/selftests/mock_gem_device.c  |   1 +
 drivers/gpu/drm/i915/selftests/mock_region.c  |  60 ++++++
 drivers/gpu/drm/i915/selftests/mock_region.h  |  16 ++
 14 files changed, 722 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/gem/i915_gem_region.c
 create mode 100644 drivers/gpu/drm/i915/gem/i915_gem_region.h
 create mode 100644 drivers/gpu/drm/i915/intel_memory_region.c
 create mode 100644 drivers/gpu/drm/i915/intel_memory_region.h
 create mode 100644 drivers/gpu/drm/i915/selftests/intel_memory_region.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_region.c
 create mode 100644 drivers/gpu/drm/i915/selftests/mock_region.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 3962d9728dd7..e9cf87696bde 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -50,6 +50,7 @@ i915-y += i915_drv.o \
 	  i915_utils.o \
 	  intel_csr.o \
 	  intel_device_info.o \
+	  intel_memory_region.o \
 	  intel_pch.o \
 	  intel_pm.o \
 	  intel_runtime_pm.o \
@@ -115,6 +116,7 @@ gem-y += \
 	gem/i915_gem_pages.o \
 	gem/i915_gem_phys.o \
 	gem/i915_gem_pm.o \
+	gem/i915_gem_region.o \
 	gem/i915_gem_shmem.o \
 	gem/i915_gem_shrinker.o \
 	gem/i915_gem_stolen.o \
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index d474c6ac4100..a32066e66271 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -160,6 +160,15 @@ struct drm_i915_gem_object {
 		struct mutex lock; /* protects the pages and their use */
 		atomic_t pages_pin_count;
 
+		/**
+		 * Memory region for this object.
+		 */
+		struct intel_memory_region *region;
+		/**
+		 * List of memory region blocks allocated for this object.
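+		 * (Each entry is an i915_buddy_block; while allocated the
+		 * object owns each block's link field, per i915_buddy.h.)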
+		 */
+		struct list_head blocks;
+
 		struct sg_table *pages;
 		void *mapping;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c
new file mode 100644
index 000000000000..3cd1bf15e25b
--- /dev/null
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c
@@ -0,0 +1,129 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#include "intel_memory_region.h"
+#include "i915_gem_region.h"
+#include "i915_drv.h"
+
+void
+i915_gem_object_put_pages_buddy(struct drm_i915_gem_object *obj,
+				struct sg_table *pages)
+{
+	__intel_memory_region_put_pages_buddy(obj->mm.region, &obj->mm.blocks);
+
+	obj->mm.dirty = false;
+	sg_free_table(pages);
+	kfree(pages);
+}
+
+int
+i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj)
+{
+	struct intel_memory_region *mem = obj->mm.region;
+	struct list_head *blocks = &obj->mm.blocks;
+	resource_size_t size = obj->base.size;
+	resource_size_t prev_end;
+	struct i915_buddy_block *block;
+	unsigned int flags = 0;
+	struct sg_table *st;
+	struct scatterlist *sg;
+	unsigned int sg_page_sizes;
+	unsigned long i;
+	int ret;
+
+	st = kmalloc(sizeof(*st), GFP_KERNEL);
+	if (!st)
+		return -ENOMEM;
+
+	if (sg_alloc_table(st, size >> ilog2(mem->mm.chunk_size), GFP_KERNEL)) {
+		kfree(st);
+		return -ENOMEM;
+	}
+
+	ret = __intel_memory_region_get_pages_buddy(mem, size, flags, blocks);
+	if (ret)
+		goto err_free_sg;
+
+	GEM_BUG_ON(list_empty(blocks));
+
+	sg = st->sgl;
+	st->nents = 0;
+	sg_page_sizes = 0;
+	i = 0;
+
+	list_for_each_entry(block, blocks, link) {
+		u64 block_size, offset;
+
+		block_size = i915_buddy_block_size(&mem->mm, block);
+		offset = i915_buddy_block_offset(block);
+
+		GEM_BUG_ON(overflows_type(block_size, sg->length));
+
+		if (!i || offset != prev_end ||
+		    add_overflows_t(typeof(sg->length), sg->length, block_size)) {
+			if (i) {
+				sg_page_sizes |= sg->length;
+				sg = __sg_next(sg);
+			}
+
+			sg_dma_address(sg) = mem->region.start + offset;
+			sg_dma_len(sg) = block_size;
+
+			sg->length = block_size;
+
+			st->nents++;
+		} else {
+			sg->length += block_size;
+			sg_dma_len(sg) += block_size;
+		}
+
+		prev_end = offset + block_size;
+		i++;
+	}
+
+	sg_page_sizes |= sg->length;
+	sg_mark_end(sg);
+	i915_sg_trim(st);
+
+	__i915_gem_object_set_pages(obj, st, sg_page_sizes);
+
+	return 0;
+
+err_free_sg:
+	sg_free_table(st);
+	kfree(st);
+	return ret;
+}
+
+void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj,
+					struct intel_memory_region *mem)
+{
+	INIT_LIST_HEAD(&obj->mm.blocks);
+	obj->mm.region = mem;
+}
+
+struct drm_i915_gem_object *
+i915_gem_object_create_region(struct intel_memory_region *mem,
+			      resource_size_t size,
+			      unsigned int flags)
+{
+	struct drm_i915_gem_object *obj;
+
+	if (!mem)
+		return ERR_PTR(-ENODEV);
+
+	size = round_up(size, mem->min_page_size);
+
+	GEM_BUG_ON(!size);
+	GEM_BUG_ON(!IS_ALIGNED(size, I915_GTT_MIN_ALIGNMENT));
+
+	if (size >> PAGE_SHIFT > INT_MAX)
+		return ERR_PTR(-E2BIG);
+
+	if (overflows_type(size, obj->base.size))
+		return ERR_PTR(-E2BIG);
+
+	return mem->ops->create_object(mem, size, flags);
+}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.h b/drivers/gpu/drm/i915/gem/i915_gem_region.h
new file mode 100644
index 000000000000..da5a2ca1a0fb
--- /dev/null
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#ifndef __I915_GEM_REGION_H__
+#define __I915_GEM_REGION_H__
+
+#include <linux/types.h>
+
+struct intel_memory_region;
+struct
drm_i915_gem_object; +struct sg_table; + +int i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj); +void i915_gem_object_put_pages_buddy(struct drm_i915_gem_object *obj, + struct sg_table *pages); + +void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj, + struct intel_memory_region *mem); + +struct drm_i915_gem_object * +i915_gem_object_create_region(struct intel_memory_region *mem, + resource_size_t size, + unsigned int flags); + +#endif diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c index 6cbd4a668c9a..4c594f90b776 100644 --- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c +++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c @@ -8,6 +8,7 @@ #include "i915_selftest.h" +#include "gem/i915_gem_region.h" #include "gem/i915_gem_pm.h" #include "gt/intel_gt.h" @@ -17,6 +18,7 @@ #include "selftests/mock_drm.h" #include "selftests/mock_gem_device.h" +#include "selftests/mock_region.h" #include "selftests/i915_random.h" static const unsigned int page_sizes[] = { @@ -447,6 +449,81 @@ static int igt_mock_exhaust_device_supported_pages(void *arg) return err; } +static int igt_mock_memory_region_huge_pages(void *arg) +{ + struct i915_ppgtt *ppgtt = arg; + struct drm_i915_private *i915 = ppgtt->vm.i915; + unsigned long supported = INTEL_INFO(i915)->page_sizes; + struct intel_memory_region *mem; + struct drm_i915_gem_object *obj; + struct i915_vma *vma; + int bit; + int err = 0; + + mem = mock_region_create(i915, 0, SZ_2G, + I915_GTT_PAGE_SIZE_4K, 0); + if (IS_ERR(mem)) { + pr_err("failed to create memory region\n"); + return PTR_ERR(mem); + } + + for_each_set_bit(bit, &supported, ilog2(I915_GTT_MAX_PAGE_SIZE) + 1) { + unsigned int page_size = BIT(bit); + resource_size_t phys; + + obj = i915_gem_object_create_region(mem, page_size, 0); + if (IS_ERR(obj)) { + err = PTR_ERR(obj); + goto out_destroy_device; + } + + vma = i915_vma_instance(obj, &ppgtt->vm, NULL); + if (IS_ERR(vma)) { + err = PTR_ERR(vma); + goto out_put; + } + + err = i915_vma_pin(vma, 0, 0, PIN_USER); + if (err) + goto out_close; + + phys = i915_gem_object_get_dma_address(obj, 0); + if (!IS_ALIGNED(phys, page_size)) { + pr_err("memory region misaligned(%pa)\n", &phys); + err = -EINVAL; + goto out_close; + } + + if (vma->page_sizes.gtt != page_size) { + pr_err("page_sizes.gtt=%u, expected=%u\n", + vma->page_sizes.gtt, page_size); + err = -EINVAL; + goto out_unpin; + } + + i915_vma_unpin(vma); + i915_vma_close(vma); + + i915_gem_object_put(obj); + } + + goto out_destroy_device; + +out_unpin: + i915_vma_unpin(vma); +out_close: + i915_vma_close(vma); +out_put: + i915_gem_object_put(obj); +out_destroy_device: + mutex_unlock(&i915->drm.struct_mutex); + i915_gem_drain_freed_objects(i915); + mutex_lock(&i915->drm.struct_mutex); + intel_memory_region_destroy(mem); + + return err; +} + static int igt_mock_ppgtt_misaligned_dma(void *arg) { struct i915_ppgtt *ppgtt = arg; @@ -1705,6 +1782,7 @@ int i915_gem_huge_page_mock_selftests(void) { static const struct i915_subtest tests[] = { SUBTEST(igt_mock_exhaust_device_supported_pages), + SUBTEST(igt_mock_memory_region_huge_pages), SUBTEST(igt_mock_ppgtt_misaligned_dma), SUBTEST(igt_mock_ppgtt_huge_fill), SUBTEST(igt_mock_ppgtt_64K), diff --git a/drivers/gpu/drm/i915/i915_buddy.h b/drivers/gpu/drm/i915/i915_buddy.h index 76c20941aed5..ed41f3507cdc 100644 --- a/drivers/gpu/drm/i915/i915_buddy.h +++ b/drivers/gpu/drm/i915/i915_buddy.h @@ -22,6 +22,8 @@ struct i915_buddy_block { struct i915_buddy_block *right; 
struct i915_buddy_block *parent; + void *private; /* owned by creator */ + /* * While the block is allocated by the user through i915_buddy_alloc*, * the user has ownership of the link, for example to maintain within diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 3cdb5bf489f2..39cdf4eac2a6 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -83,6 +83,7 @@ #include "intel_device_info.h" #include "intel_pch.h" #include "intel_runtime_pm.h" +#include "intel_memory_region.h" #include "intel_uncore.h" #include "intel_wakeref.h" #include "intel_wopcm.h" diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c new file mode 100644 index 000000000000..ef12e462acb8 --- /dev/null +++ b/drivers/gpu/drm/i915/intel_memory_region.c @@ -0,0 +1,175 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2019 Intel Corporation + */ + +#include "intel_memory_region.h" +#include "i915_drv.h" + +const u32 intel_region_map[] = { + [INTEL_MEMORY_SMEM] = BIT(INTEL_SMEM + INTEL_MEMORY_TYPE_SHIFT) | BIT(0), + [INTEL_MEMORY_LMEM] = BIT(INTEL_LMEM + INTEL_MEMORY_TYPE_SHIFT) | BIT(0), + [INTEL_MEMORY_STOLEN] = BIT(INTEL_STOLEN + INTEL_MEMORY_TYPE_SHIFT) | BIT(0), +}; + +static u64 +intel_memory_region_free_pages(struct intel_memory_region *mem, + struct list_head *blocks) +{ + struct i915_buddy_block *block, *on; + u64 size = 0; + + list_for_each_entry_safe(block, on, blocks, link) { + size += i915_buddy_block_size(&mem->mm, block); + i915_buddy_free(&mem->mm, block); + } + INIT_LIST_HEAD(blocks); + + return size; +} + +void +__intel_memory_region_put_pages_buddy(struct intel_memory_region *mem, + struct list_head *blocks) +{ + mutex_lock(&mem->mm_lock); + intel_memory_region_free_pages(mem, blocks); + mutex_unlock(&mem->mm_lock); +} + +void +__intel_memory_region_put_block_buddy(struct i915_buddy_block *block) +{ + struct list_head blocks; + + INIT_LIST_HEAD(&blocks); + list_add(&block->link, &blocks); + __intel_memory_region_put_pages_buddy(block->private, &blocks); +} + +int +__intel_memory_region_get_pages_buddy(struct intel_memory_region *mem, + resource_size_t size, + unsigned int flags, + struct list_head *blocks) +{ + unsigned long n_pages = size >> ilog2(mem->mm.chunk_size); + + GEM_BUG_ON(!IS_ALIGNED(size, mem->mm.chunk_size)); + GEM_BUG_ON(!list_empty(blocks)); + + mutex_lock(&mem->mm_lock); + + do { + struct i915_buddy_block *block; + unsigned int order; + + order = fls(n_pages) - 1; + GEM_BUG_ON(order > mem->mm.max_order); + + do { + block = i915_buddy_alloc(&mem->mm, order); + if (!IS_ERR(block)) + break; + + /* XXX: some kind of eviction pass, local to the device */ + if (flags & I915_ALLOC_CONTIGUOUS || !order--) + goto err_free_blocks; + } while (1); + + n_pages -= BIT(order); + + block->private = mem; + list_add(&block->link, blocks); + + if (!n_pages) + break; + } while (1); + + mutex_unlock(&mem->mm_lock); + return 0; + +err_free_blocks: + intel_memory_region_free_pages(mem, blocks); + mutex_unlock(&mem->mm_lock); + return -ENXIO; +} + +struct i915_buddy_block * +__intel_memory_region_get_block_buddy(struct intel_memory_region *mem, + resource_size_t size) +{ + struct i915_buddy_block *block; + struct list_head blocks; + int ret; + + INIT_LIST_HEAD(&blocks); + ret = __intel_memory_region_get_pages_buddy(mem, size, + I915_ALLOC_CONTIGUOUS, + &blocks); + if (ret) + return ERR_PTR(ret); + + block = list_first_entry(&blocks, typeof(*block), link); + list_del_init(&block->link); + return 
block;
+}
+
+int intel_memory_region_init_buddy(struct intel_memory_region *mem)
+{
+	return i915_buddy_init(&mem->mm, resource_size(&mem->region),
+			       mem->min_page_size);
+}
+
+void intel_memory_region_release_buddy(struct intel_memory_region *mem)
+{
+	i915_buddy_fini(&mem->mm);
+}
+
+struct intel_memory_region *
+intel_memory_region_create(struct drm_i915_private *i915,
+			   resource_size_t start,
+			   resource_size_t size,
+			   resource_size_t min_page_size,
+			   resource_size_t io_start,
+			   const struct intel_memory_region_ops *ops)
+{
+	struct intel_memory_region *mem;
+	int err;
+
+	mem = kzalloc(sizeof(*mem), GFP_KERNEL);
+	if (!mem)
+		return ERR_PTR(-ENOMEM);
+
+	mem->i915 = i915;
+	mem->region = (struct resource)DEFINE_RES_MEM(start, size);
+	mem->io_start = io_start;
+	mem->min_page_size = min_page_size;
+	mem->ops = ops;
+
+	mutex_init(&mem->mm_lock);
+
+	if (ops->init) {
+		err = ops->init(mem);
+		if (err) {
+			kfree(mem);
+			mem = ERR_PTR(err);
+		}
+	}
+
+	return mem;
+}
+
+void
+intel_memory_region_destroy(struct intel_memory_region *mem)
+{
+	if (mem->ops->release)
+		mem->ops->release(mem);
+
+	kfree(mem);
+}
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/intel_memory_region.c"
+#include "selftests/mock_region.c"
+#endif
diff --git a/drivers/gpu/drm/i915/intel_memory_region.h b/drivers/gpu/drm/i915/intel_memory_region.h
new file mode 100644
index 000000000000..d299fed169e9
--- /dev/null
+++ b/drivers/gpu/drm/i915/intel_memory_region.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#ifndef __INTEL_MEMORY_REGION_H__
+#define __INTEL_MEMORY_REGION_H__
+
+#include <linux/ioport.h>
+#include <linux/mutex.h>
+#include <linux/io-mapping.h>
+
+#include "i915_buddy.h"
+
+struct drm_i915_private;
+struct drm_i915_gem_object;
+struct intel_memory_region;
+struct sg_table;
+
+/**
+ * Base memory type
+ */
+enum intel_memory_type {
+	INTEL_SMEM = 0,
+	INTEL_LMEM,
+	INTEL_STOLEN,
+};
+
+enum intel_region_id {
+	INTEL_MEMORY_SMEM = 0,
+	INTEL_MEMORY_LMEM,
+	INTEL_MEMORY_STOLEN,
+	INTEL_MEMORY_UNKNOWN, /* Should be last */
+};
+
+#define REGION_SMEM	BIT(INTEL_MEMORY_SMEM)
+#define REGION_LMEM	BIT(INTEL_MEMORY_LMEM)
+#define REGION_STOLEN	BIT(INTEL_MEMORY_STOLEN)
+
+#define INTEL_MEMORY_TYPE_SHIFT 16
+
+#define MEMORY_TYPE_FROM_REGION(r) (ilog2(r >> INTEL_MEMORY_TYPE_SHIFT))
+#define MEMORY_INSTANCE_FROM_REGION(r) (ilog2(r & 0xffff))
+
+#define I915_ALLOC_CONTIGUOUS BIT(0)
+
+/**
+ * Memory regions encoded as type | instance
+ */
+extern const u32 intel_region_map[];
+
+struct intel_memory_region_ops {
+	unsigned int flags;
+
+	int (*init)(struct intel_memory_region *);
+	void (*release)(struct intel_memory_region *);
+
+	struct drm_i915_gem_object *
+	(*create_object)(struct intel_memory_region *,
+			 resource_size_t,
+			 unsigned int);
+};
+
+struct intel_memory_region {
+	struct drm_i915_private *i915;
+
+	const struct intel_memory_region_ops *ops;
+
+	struct io_mapping iomap;
+	struct resource region;
+
+	struct i915_buddy_mm mm;
+	struct mutex mm_lock;
+
+	resource_size_t io_start;
+	resource_size_t min_page_size;
+
+	unsigned int type;
+	unsigned int instance;
+	unsigned int id;
+};
+
+int intel_memory_region_init_buddy(struct intel_memory_region *mem);
+void intel_memory_region_release_buddy(struct intel_memory_region *mem);
+
+int __intel_memory_region_get_pages_buddy(struct intel_memory_region *mem,
+					  resource_size_t size,
+					  unsigned int flags,
+					  struct list_head *blocks);
+struct i915_buddy_block *
+__intel_memory_region_get_block_buddy(struct intel_memory_region *mem,
+
resource_size_t size); +void __intel_memory_region_put_pages_buddy(struct intel_memory_region *mem, + struct list_head *blocks); +void __intel_memory_region_put_block_buddy(struct i915_buddy_block *block); + +struct intel_memory_region * +intel_memory_region_create(struct drm_i915_private *i915, + resource_size_t start, + resource_size_t size, + resource_size_t min_page_size, + resource_size_t io_start, + const struct intel_memory_region_ops *ops); +void +intel_memory_region_destroy(struct intel_memory_region *mem); + +#endif diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h index b88084fe3269..aa5a0e7f5d9e 100644 --- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h +++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h @@ -26,3 +26,4 @@ selftest(gtt, i915_gem_gtt_mock_selftests) selftest(hugepages, i915_gem_huge_page_mock_selftests) selftest(contexts, i915_gem_context_mock_selftests) selftest(buddy, i915_buddy_mock_selftests) +selftest(memory_region, intel_memory_region_mock_selftests) diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c new file mode 100644 index 000000000000..84daa92bf92c --- /dev/null +++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c @@ -0,0 +1,114 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2019 Intel Corporation + */ + +#include + +#include "../i915_selftest.h" + +#include "mock_drm.h" +#include "mock_gem_device.h" +#include "mock_region.h" + +#include "gem/i915_gem_region.h" +#include "gem/selftests/mock_context.h" + +static void close_objects(struct list_head *objects) +{ + struct drm_i915_gem_object *obj, *on; + + list_for_each_entry_safe(obj, on, objects, st_link) { + if (i915_gem_object_has_pinned_pages(obj)) + i915_gem_object_unpin_pages(obj); + /* No polluting the memory region between tests */ + __i915_gem_object_put_pages(obj, I915_MM_NORMAL); + i915_gem_object_put(obj); + list_del(&obj->st_link); + } +} + +static int igt_mock_fill(void *arg) +{ + struct intel_memory_region *mem = arg; + resource_size_t total = resource_size(&mem->region); + resource_size_t page_size; + resource_size_t rem; + unsigned long max_pages; + unsigned long page_num; + LIST_HEAD(objects); + int err = 0; + + page_size = mem->mm.chunk_size; + max_pages = total / page_size; + rem = total; + + for_each_prime_number_from(page_num, 1, max_pages) { + resource_size_t size = page_num * page_size; + struct drm_i915_gem_object *obj; + + obj = i915_gem_object_create_region(mem, size, 0); + if (IS_ERR(obj)) { + err = PTR_ERR(obj); + break; + } + + err = i915_gem_object_pin_pages(obj); + if (err) { + i915_gem_object_put(obj); + break; + } + + list_add(&obj->st_link, &objects); + rem -= size; + } + + if (err == -ENOMEM) + err = 0; + if (err == -ENXIO) { + if (page_num * page_size <= rem) { + pr_err("igt_mock_fill failed, space still left in region\n"); + err = -EINVAL; + } else { + err = 0; + } + } + + close_objects(&objects); + + return err; +} + +int intel_memory_region_mock_selftests(void) +{ + static const struct i915_subtest tests[] = { + SUBTEST(igt_mock_fill), + }; + struct intel_memory_region *mem; + struct drm_i915_private *i915; + int err; + + i915 = mock_gem_device(); + if (!i915) + return -ENOMEM; + + mem = mock_region_create(i915, 0, SZ_2G, + I915_GTT_PAGE_SIZE_4K, 0); + if (IS_ERR(mem)) { + pr_err("failed to create memory region\n"); + err = PTR_ERR(mem); + goto out_unref; + } + + mutex_lock(&i915->drm.struct_mutex); 
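+	/* Run the subtests against the mock region under struct_mutex. */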
+ err = i915_subtests(tests, mem); + mutex_unlock(&i915->drm.struct_mutex); + + i915_gem_drain_freed_objects(i915); + intel_memory_region_destroy(mem); + +out_unref: + drm_dev_put(&i915->drm); + + return err; +} diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c index 01a89c071bf5..cc0fe0a79330 100644 --- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c +++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c @@ -32,6 +32,7 @@ #include "mock_gem_device.h" #include "mock_gtt.h" #include "mock_uncore.h" +#include "mock_region.h" #include "gem/selftests/mock_context.h" #include "gem/selftests/mock_gem_object.h" diff --git a/drivers/gpu/drm/i915/selftests/mock_region.c b/drivers/gpu/drm/i915/selftests/mock_region.c new file mode 100644 index 000000000000..fbde890385c7 --- /dev/null +++ b/drivers/gpu/drm/i915/selftests/mock_region.c @@ -0,0 +1,60 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2019 Intel Corporation + */ + +#include "gem/i915_gem_region.h" +#include "intel_memory_region.h" + +#include "mock_region.h" + +static const struct drm_i915_gem_object_ops mock_region_obj_ops = { + .get_pages = i915_gem_object_get_pages_buddy, + .put_pages = i915_gem_object_put_pages_buddy, +}; + +static struct drm_i915_gem_object * +mock_object_create(struct intel_memory_region *mem, + resource_size_t size, + unsigned int flags) +{ + struct drm_i915_private *i915 = mem->i915; + struct drm_i915_gem_object *obj; + unsigned int cache_level; + + if (size > BIT(mem->mm.max_order) * mem->mm.chunk_size) + return ERR_PTR(-E2BIG); + + obj = i915_gem_object_alloc(); + if (!obj) + return ERR_PTR(-ENOMEM); + + drm_gem_private_object_init(&i915->drm, &obj->base, size); + i915_gem_object_init(obj, &mock_region_obj_ops); + + obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT; + + cache_level = HAS_LLC(i915) ? 
I915_CACHE_LLC : I915_CACHE_NONE; + i915_gem_object_set_cache_coherency(obj, cache_level); + + i915_gem_object_init_memory_region(obj, mem); + + return obj; +} + +static const struct intel_memory_region_ops mock_region_ops = { + .init = intel_memory_region_init_buddy, + .release = intel_memory_region_release_buddy, + .create_object = mock_object_create, +}; + +struct intel_memory_region * +mock_region_create(struct drm_i915_private *i915, + resource_size_t start, + resource_size_t size, + resource_size_t min_page_size, + resource_size_t io_start) +{ + return intel_memory_region_create(i915, start, size, min_page_size, + io_start, &mock_region_ops); +}
diff --git a/drivers/gpu/drm/i915/selftests/mock_region.h b/drivers/gpu/drm/i915/selftests/mock_region.h new file mode 100644 index 000000000000..24608089d833 --- /dev/null +++ b/drivers/gpu/drm/i915/selftests/mock_region.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2019 Intel Corporation + */ + +#ifndef __MOCK_REGION_H +#define __MOCK_REGION_H + +struct intel_memory_region * +mock_region_create(struct drm_i915_private *i915, + resource_size_t start, + resource_size_t size, + resource_size_t min_page_size, + resource_size_t io_start); + +#endif /* !__MOCK_REGION_H */
From patchwork Fri Aug 9 22:26:09 2019 X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087749 From: Matthew Auld <matthew.auld@intel.com> To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 03/37] drm/i915/region: support basic eviction Date: Fri, 9 Aug 2019 23:26:09 +0100
Message-Id: <20190809222643.23142-4-matthew.auld@intel.com> In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> Cc: Abdiel Janulgue, dri-devel@lists.freedesktop.org
Support basic eviction for regions: when a buddy allocation fails, walk the region's list of purgeable objects and discard their backing pages until enough space has been freed to satisfy the request.
Signed-off-by: Matthew Auld Cc: Joonas Lahtinen Cc: Abdiel Janulgue --- .../gpu/drm/i915/gem/i915_gem_object_types.h | 7 ++ drivers/gpu/drm/i915/gem/i915_gem_region.c | 11 +++ drivers/gpu/drm/i915/gem/i915_gem_region.h | 1 + drivers/gpu/drm/i915/i915_gem.c | 17 +++++ drivers/gpu/drm/i915/intel_memory_region.c | 73 +++++++++++++++++- drivers/gpu/drm/i915/intel_memory_region.h | 5 ++ .../drm/i915/selftests/intel_memory_region.c | 76 +++++++++++++++++++ drivers/gpu/drm/i915/selftests/mock_region.c | 1 + 8 files changed, 187 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index a32066e66271..5e2fa37e9bc0 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -168,6 +168,13 @@ struct drm_i915_gem_object { * List of memory region blocks allocated for this object. */ struct list_head blocks; + /** + * Element within memory_region->objects or + * memory_region->purgeable if the object is marked as + * DONTNEED. Access is protected by memory_region->obj_lock. + */ + struct list_head region_link; + struct list_head tmp_link; struct sg_table *pages; void *mapping;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c index 3cd1bf15e25b..be126e70c90f 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_region.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c @@ -102,6 +102,17 @@ void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj, { INIT_LIST_HEAD(&obj->mm.blocks); obj->mm.region= mem; + + mutex_lock(&mem->obj_lock); + list_add(&obj->mm.region_link, &mem->objects); + mutex_unlock(&mem->obj_lock); +} + +void i915_gem_object_release_memory_region(struct drm_i915_gem_object *obj) +{ + mutex_lock(&obj->mm.region->obj_lock); + list_del(&obj->mm.region_link); + mutex_unlock(&obj->mm.region->obj_lock); } struct drm_i915_gem_object *
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.h b/drivers/gpu/drm/i915/gem/i915_gem_region.h index da5a2ca1a0fb..ebddc86d78f7 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_region.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_region.h @@ -18,6 +18,7 @@ void i915_gem_object_put_pages_buddy(struct drm_i915_gem_object *obj, void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj, struct intel_memory_region *mem); +void i915_gem_object_release_memory_region(struct drm_i915_gem_object *obj); struct drm_i915_gem_object * i915_gem_object_create_region(struct intel_memory_region *mem,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 6ff01a404346..8735dea74809 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -1105,6 +1105,23 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data, !i915_gem_object_has_pages(obj))
i915_gem_object_truncate(obj); + if (obj->mm.region) { + mutex_lock(&obj->mm.region->obj_lock); + + switch (obj->mm.madv) { + case I915_MADV_WILLNEED: + list_move(&obj->mm.region_link, + &obj->mm.region->objects); + break; + default: + list_move(&obj->mm.region_link, + &obj->mm.region->purgeable); + break; + } + + mutex_unlock(&obj->mm.region->obj_lock); + } + args->retained = obj->mm.madv != __I915_MADV_PURGED; mutex_unlock(&obj->mm.lock); diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c index ef12e462acb8..3a3caaadea1f 100644 --- a/drivers/gpu/drm/i915/intel_memory_region.c +++ b/drivers/gpu/drm/i915/intel_memory_region.c @@ -12,6 +12,51 @@ const u32 intel_region_map[] = { [INTEL_MEMORY_STOLEN] = BIT(INTEL_STOLEN + INTEL_MEMORY_TYPE_SHIFT) | BIT(0), }; +static int +intel_memory_region_evict(struct intel_memory_region *mem, + resource_size_t target, + unsigned int flags) +{ + struct drm_i915_gem_object *obj; + resource_size_t found; + int err; + + err = 0; + found = 0; + + mutex_lock(&mem->obj_lock); + list_for_each_entry(obj, &mem->purgeable, mm.region_link) { + if (!i915_gem_object_has_pages(obj)) + continue; + + if (READ_ONCE(obj->pin_global)) + continue; + + if (atomic_read(&obj->bind_count)) + continue; + + mutex_unlock(&mem->obj_lock); + + __i915_gem_object_put_pages(obj, I915_MM_SHRINKER); + + mutex_lock_nested(&obj->mm.lock, I915_MM_SHRINKER); + if (!i915_gem_object_has_pages(obj)) { + obj->mm.madv = __I915_MADV_PURGED; + found += obj->base.size; + } + mutex_unlock(&obj->mm.lock); + + if (found >= target) + return 0; + + mutex_lock(&mem->obj_lock); + } + + err = -ENOSPC; + mutex_unlock(&mem->obj_lock); + return err; +} + static u64 intel_memory_region_free_pages(struct intel_memory_region *mem, struct list_head *blocks) @@ -63,7 +108,8 @@ __intel_memory_region_get_pages_buddy(struct intel_memory_region *mem, do { struct i915_buddy_block *block; unsigned int order; - + bool retry = true; +retry: order = fls(n_pages) - 1; GEM_BUG_ON(order > mem->mm.max_order); @@ -72,9 +118,24 @@ __intel_memory_region_get_pages_buddy(struct intel_memory_region *mem, if (!IS_ERR(block)) break; - /* XXX: some kind of eviction pass, local to the device */ - if (flags & I915_ALLOC_CONTIGUOUS || !order--) - goto err_free_blocks; + if (flags & I915_ALLOC_CONTIGUOUS || !order--) { + resource_size_t target; + int err; + + if (!retry) + goto err_free_blocks; + + target = n_pages * mem->mm.chunk_size; + + mutex_unlock(&mem->mm_lock); + err = intel_memory_region_evict(mem, target, 0); + mutex_lock(&mem->mm_lock); + if (err) + goto err_free_blocks; + + retry = false; + goto retry; + } } while (1); n_pages -= BIT(order); @@ -147,6 +208,10 @@ intel_memory_region_create(struct drm_i915_private *i915, mem->min_page_size = min_page_size; mem->ops = ops; + mutex_init(&mem->obj_lock); + INIT_LIST_HEAD(&mem->objects); + INIT_LIST_HEAD(&mem->purgeable); + mutex_init(&mem->mm_lock); if (ops->init) { diff --git a/drivers/gpu/drm/i915/intel_memory_region.h b/drivers/gpu/drm/i915/intel_memory_region.h index d299fed169e9..340411dcf86b 100644 --- a/drivers/gpu/drm/i915/intel_memory_region.h +++ b/drivers/gpu/drm/i915/intel_memory_region.h @@ -78,6 +78,11 @@ struct intel_memory_region { unsigned int type; unsigned int instance; unsigned int id; + + /* Protects access to objects and purgeable */ + struct mutex obj_lock; + struct list_head objects; + struct list_head purgeable; }; int intel_memory_region_init_buddy(struct intel_memory_region *mem); diff --git 
a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c index 84daa92bf92c..2f13e4c1d999 100644 --- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c +++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c @@ -79,10 +79,86 @@ static int igt_mock_fill(void *arg) return err; } +static void igt_mark_evictable(struct drm_i915_gem_object *obj) +{ + i915_gem_object_unpin_pages(obj); + obj->mm.madv = I915_MADV_DONTNEED; + list_move(&obj->mm.region_link, &obj->mm.region->purgeable); +} + +static int igt_mock_shrink(void *arg) +{ + struct intel_memory_region *mem = arg; + struct drm_i915_gem_object *obj; + unsigned long n_objects; + LIST_HEAD(objects); + resource_size_t target; + resource_size_t total; + int err = 0; + + target = mem->mm.chunk_size; + total = resource_size(&mem->region); + n_objects = total / target; + + while (n_objects--) { + obj = i915_gem_object_create_region(mem, + target, + 0); + if (IS_ERR(obj)) { + err = PTR_ERR(obj); + goto err_close_objects; + } + + list_add(&obj->st_link, &objects); + + err = i915_gem_object_pin_pages(obj); + if (err) + goto err_close_objects; + + /* + * Make half of the region evictable, though do so in a + * horribly fragmented fashion. + */ + if (n_objects % 2) + igt_mark_evictable(obj); + } + + while (target <= total / 2) { + obj = i915_gem_object_create_region(mem, target, 0); + if (IS_ERR(obj)) { + err = PTR_ERR(obj); + goto err_close_objects; + } + + list_add(&obj->st_link, &objects); + + /* Provoke the shrinker to start violently swinging its axe! */ + err = i915_gem_object_pin_pages(obj); + if (err) { + pr_err("failed to shrink for target=%pa", &target); + goto err_close_objects; + } + + /* Again, half of the region should remain evictable */ + igt_mark_evictable(obj); + + target <<= 1; + } + +err_close_objects: + close_objects(&objects); + + if (err == -ENOMEM) + err = 0; + + return err; +} + int intel_memory_region_mock_selftests(void) { static const struct i915_subtest tests[] = { SUBTEST(igt_mock_fill), + SUBTEST(igt_mock_shrink), }; struct intel_memory_region *mem; struct drm_i915_private *i915;
diff --git a/drivers/gpu/drm/i915/selftests/mock_region.c b/drivers/gpu/drm/i915/selftests/mock_region.c index fbde890385c7..cc97250dca62 100644 --- a/drivers/gpu/drm/i915/selftests/mock_region.c +++ b/drivers/gpu/drm/i915/selftests/mock_region.c @@ -11,6 +11,7 @@ static const struct drm_i915_gem_object_ops mock_region_obj_ops = { .get_pages = i915_gem_object_get_pages_buddy, .put_pages = i915_gem_object_put_pages_buddy, + .release = i915_gem_object_release_memory_region, }; static struct drm_i915_gem_object *
From patchwork Fri Aug 9 22:26:10 2019 X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087761 From: Matthew Auld <matthew.auld@intel.com> To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 04/37] drm/i915/region: support contiguous allocations Date: Fri, 9 Aug 2019 23:26:10 +0100 Message-Id: <20190809222643.23142-5-matthew.auld@intel.com> In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> Cc: Abdiel Janulgue, dri-devel@lists.freedesktop.org
Some objects may need to be allocated as a contiguous block; thinking ahead, the various kernel io_mapping interfaces seem to expect it.
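As an illustrative sketch (not taken verbatim from this series), a caller that wants backing store suitable for the io_mapping interfaces could use the new I915_BO_ALLOC_CONTIGUOUS flag roughly as follows, assuming mem points at a valid intel_memory_region:

	struct drm_i915_gem_object *obj;
	int err;

	/* Request physically contiguous backing store from the region */
	obj = i915_gem_object_create_region(mem, SZ_2M,
					    I915_BO_ALLOC_CONTIGUOUS);
	if (IS_ERR(obj))
		return PTR_ERR(obj);

	/* The buddy allocation itself only happens once the pages are pinned */
	err = i915_gem_object_pin_pages(obj);
	if (err) {
		i915_gem_object_put(obj);
		return err;
	}

Since the allocation is contiguous, the resulting sg_table should end up with a single entry, which is what the selftest below verifies via obj->mm.pages->nents.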
Signed-off-by: Matthew Auld Cc: Joonas Lahtinen Cc: Abdiel Janulgue --- .../gpu/drm/i915/gem/i915_gem_object_types.h | 4 + drivers/gpu/drm/i915/gem/i915_gem_region.c | 10 +- drivers/gpu/drm/i915/gem/i915_gem_region.h | 3 +- .../drm/i915/selftests/intel_memory_region.c | 152 +++++++++++++++++- drivers/gpu/drm/i915/selftests/mock_region.c | 5 +- 5 files changed, 166 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index 5e2fa37e9bc0..eb92243d473b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -116,6 +116,10 @@ struct drm_i915_gem_object { I915_SELFTEST_DECLARE(struct list_head st_link); + unsigned long flags; +#define I915_BO_ALLOC_CONTIGUOUS BIT(0) +#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS) + /* * Is the object to be mapped as read-only to the GPU * Only honoured if hardware has relevant pte bit diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c index be126e70c90f..d9cd722b5dbf 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_region.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c @@ -42,6 +42,9 @@ i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj) return -ENOMEM; } + if (obj->flags & I915_BO_ALLOC_CONTIGUOUS) + flags = I915_ALLOC_CONTIGUOUS; + ret = __intel_memory_region_get_pages_buddy(mem, size, flags, blocks); if (ret) goto err_free_sg; @@ -98,10 +101,12 @@ i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj) } void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj, - struct intel_memory_region *mem) + struct intel_memory_region *mem, + unsigned long flags) { INIT_LIST_HEAD(&obj->mm.blocks); obj->mm.region= mem; + obj->flags = flags; mutex_lock(&mem->obj_lock); list_add(&obj->mm.region_link, &mem->objects); @@ -125,6 +130,9 @@ i915_gem_object_create_region(struct intel_memory_region *mem, if (!mem) return ERR_PTR(-ENODEV); + if (flags & ~I915_BO_ALLOC_FLAGS) + return ERR_PTR(-EINVAL); + size = round_up(size, mem->min_page_size); GEM_BUG_ON(!size); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.h b/drivers/gpu/drm/i915/gem/i915_gem_region.h index ebddc86d78f7..f2ff6f8bff74 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_region.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_region.h @@ -17,7 +17,8 @@ void i915_gem_object_put_pages_buddy(struct drm_i915_gem_object *obj, struct sg_table *pages); void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj, - struct intel_memory_region *mem); + struct intel_memory_region *mem, + unsigned long flags); void i915_gem_object_release_memory_region(struct drm_i915_gem_object *obj); struct drm_i915_gem_object * diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c index 2f13e4c1d999..70b467d4e811 100644 --- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c +++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c @@ -81,17 +81,17 @@ static int igt_mock_fill(void *arg) static void igt_mark_evictable(struct drm_i915_gem_object *obj) { - i915_gem_object_unpin_pages(obj); + if (i915_gem_object_has_pinned_pages(obj)) + i915_gem_object_unpin_pages(obj); obj->mm.madv = I915_MADV_DONTNEED; list_move(&obj->mm.region_link, &obj->mm.region->purgeable); } -static int igt_mock_shrink(void *arg) +static int igt_frag_region(struct intel_memory_region *mem, + struct list_head *objects) { - struct 
intel_memory_region *mem = arg; struct drm_i915_gem_object *obj; unsigned long n_objects; - LIST_HEAD(objects); resource_size_t target; resource_size_t total; int err = 0; @@ -109,7 +109,7 @@ static int igt_mock_shrink(void *arg) goto err_close_objects; } - list_add(&obj->st_link, &objects); + list_add(&obj->st_link, objects); err = i915_gem_object_pin_pages(obj); if (err) @@ -123,6 +123,39 @@ static int igt_mock_shrink(void *arg) igt_mark_evictable(obj); } + return 0; + +err_close_objects: + close_objects(objects); + return err; +} + +static void igt_defrag_region(struct list_head *objects) +{ + struct drm_i915_gem_object *obj; + + list_for_each_entry(obj, objects, st_link) { + if (obj->mm.madv == I915_MADV_WILLNEED) + igt_mark_evictable(obj); + } +} + +static int igt_mock_shrink(void *arg) +{ + struct intel_memory_region *mem = arg; + struct drm_i915_gem_object *obj; + LIST_HEAD(objects); + resource_size_t target; + resource_size_t total; + int err; + + err = igt_frag_region(mem, &objects); + if (err) + return err; + + total = resource_size(&mem->region); + target = mem->mm.chunk_size; + while (target <= total / 2) { obj = i915_gem_object_create_region(mem, target, 0); if (IS_ERR(obj)) { @@ -154,11 +187,120 @@ static int igt_mock_shrink(void *arg) return err; } +static int igt_mock_continuous(void *arg) +{ + struct intel_memory_region *mem = arg; + struct drm_i915_gem_object *obj; + LIST_HEAD(objects); + resource_size_t target; + resource_size_t total; + int err; + + err = igt_frag_region(mem, &objects); + if (err) + return err; + + total = resource_size(&mem->region); + target = total / 2; + + /* + * Sanity check that we can allocate all of the available fragmented + * space. + */ + obj = i915_gem_object_create_region(mem, target, 0); + if (IS_ERR(obj)) { + err = PTR_ERR(obj); + goto err_close_objects; + } + + list_add(&obj->st_link, &objects); + + err = i915_gem_object_pin_pages(obj); + if (err) { + pr_err("failed to allocate available space\n"); + goto err_close_objects; + } + + igt_mark_evictable(obj); + + /* Try the smallest possible size -- should succeed */ + obj = i915_gem_object_create_region(mem, mem->mm.chunk_size, + I915_BO_ALLOC_CONTIGUOUS); + if (IS_ERR(obj)) { + err = PTR_ERR(obj); + goto err_close_objects; + } + + list_add(&obj->st_link, &objects); + + err = i915_gem_object_pin_pages(obj); + if (err) { + pr_err("failed to allocate smallest possible size\n"); + goto err_close_objects; + } + + igt_mark_evictable(obj); + + if (obj->mm.pages->nents != 1) { + pr_err("[1]object spans multiple sg entries\n"); + err = -EINVAL; + goto err_close_objects; + } + + /* + * Even though there is enough free space for the allocation, we + * shouldn't be able to allocate it, given that it is fragmented, and + * non-contiguous.
+ */ + obj = i915_gem_object_create_region(mem, target, I915_BO_ALLOC_CONTIGUOUS); + if (IS_ERR(obj)) { + err = PTR_ERR(obj); + goto err_close_objects; + } + + list_add(&obj->st_link, &objects); + + err = i915_gem_object_pin_pages(obj); + if (!err) { + pr_err("expected allocation to fail\n"); + err = -EINVAL; + goto err_close_objects; + } + + igt_defrag_region(&objects); + + /* Should now succeed */ + obj = i915_gem_object_create_region(mem, target, I915_BO_ALLOC_CONTIGUOUS); + if (IS_ERR(obj)) { + err = PTR_ERR(obj); + goto err_close_objects; + } + + list_add(&obj->st_link, &objects); + + err = i915_gem_object_pin_pages(obj); + if (err) { + pr_err("failed to allocate from defragged area\n"); + goto err_close_objects; + } + + if (obj->mm.pages->nents != 1) { + pr_err("object spans multiple sg entries\n"); + err = -EINVAL; + } + +err_close_objects: + close_objects(&objects); + + return err; +} + int intel_memory_region_mock_selftests(void) { static const struct i915_subtest tests[] = { SUBTEST(igt_mock_fill), SUBTEST(igt_mock_shrink), + SUBTEST(igt_mock_continuous), }; struct intel_memory_region *mem; struct drm_i915_private *i915;
diff --git a/drivers/gpu/drm/i915/selftests/mock_region.c b/drivers/gpu/drm/i915/selftests/mock_region.c index cc97250dca62..d73f37712c44 100644 --- a/drivers/gpu/drm/i915/selftests/mock_region.c +++ b/drivers/gpu/drm/i915/selftests/mock_region.c @@ -23,6 +23,9 @@ mock_object_create(struct intel_memory_region *mem, struct drm_i915_gem_object *obj; unsigned int cache_level; + if (flags & I915_BO_ALLOC_CONTIGUOUS) + size = roundup_pow_of_two(size); + if (size > BIT(mem->mm.max_order) * mem->mm.chunk_size) return ERR_PTR(-E2BIG); @@ -38,7 +41,7 @@ mock_object_create(struct intel_memory_region *mem, cache_level = HAS_LLC(i915) ?
I915_CACHE_LLC : I915_CACHE_NONE; i915_gem_object_set_cache_coherency(obj, cache_level); - i915_gem_object_init_memory_region(obj, mem); + i915_gem_object_init_memory_region(obj, mem, flags); return obj; }
From patchwork Fri Aug 9 22:26:11 2019 X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087755 From: Matthew Auld <matthew.auld@intel.com> To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 05/37] drm/i915/region: support volatile objects Date: Fri, 9 Aug 2019 23:26:11 +0100 Message-Id: <20190809222643.23142-6-matthew.auld@intel.com> In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> Cc: Abdiel Janulgue, CQ Tang, dri-devel@lists.freedesktop.org
Volatile objects are marked as DONTNEED while pinned, therefore once unpinned the backing store can be discarded.
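As a rough sketch of the intended lifecycle (illustrative only, assuming mem points at a valid intel_memory_region):

	struct drm_i915_gem_object *obj;
	int err;

	obj = i915_gem_object_create_region(mem, SZ_64K,
					    I915_BO_ALLOC_VOLATILE);
	if (IS_ERR(obj))
		return PTR_ERR(obj);

	/* While pinned the pages are marked DONTNEED under the covers */
	err = i915_gem_object_pin_pages(obj);
	if (err) {
		i915_gem_object_put(obj);
		return err;
	}

	/* ... populate and use the object ... */

	/*
	 * Once unpinned the backing store may be discarded by the eviction
	 * pass; contents must be treated as lost and repopulated on reuse.
	 */
	i915_gem_object_unpin_pages(obj);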
Signed-off-by: Matthew Auld Signed-off-by: CQ Tang Cc: Joonas Lahtinen Cc: Abdiel Janulgue Reviewed-by: Chris Wilson --- drivers/gpu/drm/i915/gem/i915_gem_internal.c | 17 +++--- drivers/gpu/drm/i915/gem/i915_gem_object.h | 6 ++ .../gpu/drm/i915/gem/i915_gem_object_types.h | 3 +- drivers/gpu/drm/i915/gem/i915_gem_pages.c | 6 ++ drivers/gpu/drm/i915/gem/i915_gem_region.c | 7 ++- .../gpu/drm/i915/gem/selftests/huge_pages.c | 12 ++-- drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 5 +- .../drm/i915/selftests/intel_memory_region.c | 56 +++++++++++++++++++ 8 files changed, 91 insertions(+), 21 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c index 0c41e04ab8fa..5e72cb1cc2d3 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c @@ -117,13 +117,6 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj) goto err; } - /* Mark the pages as dontneed whilst they are still pinned. As soon - * as they are unpinned they are allowed to be reaped by the shrinker, - * and the caller is expected to repopulate - the contents of this - * object are only valid whilst active and pinned. - */ - obj->mm.madv = I915_MADV_DONTNEED; - __i915_gem_object_set_pages(obj, st, sg_page_sizes); return 0; @@ -143,7 +136,6 @@ static void i915_gem_object_put_pages_internal(struct drm_i915_gem_object *obj, internal_free_pages(pages); obj->mm.dirty = false; - obj->mm.madv = I915_MADV_WILLNEED; } static const struct drm_i915_gem_object_ops i915_gem_object_internal_ops = { @@ -188,6 +180,15 @@ i915_gem_object_create_internal(struct drm_i915_private *i915, drm_gem_private_object_init(&i915->drm, &obj->base, size); i915_gem_object_init(obj, &i915_gem_object_internal_ops); + /* + * Mark the object as volatile, such that the pages are marked as + * dontneed whilst they are still pinned. As soon as they are unpinned + * they are allowed to be reaped by the shrinker, and the caller is + * expected to repopulate - the contents of this object are only valid + * whilst active and pinned. 
+ */ + obj->flags = I915_BO_ALLOC_VOLATILE; + obj->read_domains = I915_GEM_DOMAIN_CPU; obj->write_domain = I915_GEM_DOMAIN_CPU; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 3714cf234d64..1af838050d6c 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -122,6 +122,12 @@ i915_gem_object_lock_fence(struct drm_i915_gem_object *obj); void i915_gem_object_unlock_fence(struct drm_i915_gem_object *obj, struct dma_fence *fence); +static inline bool +i915_gem_object_is_volatile(const struct drm_i915_gem_object *obj) +{ + return obj->flags & I915_BO_ALLOC_VOLATILE; +} + static inline void i915_gem_object_set_readonly(struct drm_i915_gem_object *obj) { diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index eb92243d473b..2142d74a57ea 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -118,7 +118,8 @@ struct drm_i915_gem_object { unsigned long flags; #define I915_BO_ALLOC_CONTIGUOUS BIT(0) -#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS) +#define I915_BO_ALLOC_VOLATILE BIT(1) +#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | I915_BO_ALLOC_VOLATILE) /* * Is the object to be mapped as read-only to the GPU diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c index 18f0ce0135c1..d3f0debdb875 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c @@ -18,6 +18,9 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj, lockdep_assert_held(&obj->mm.lock); + if (i915_gem_object_is_volatile(obj)) + obj->mm.madv = I915_MADV_DONTNEED; + /* Make the pages coherent with the GPU (flushing any swapin). 
*/ if (obj->cache_dirty) { obj->write_domain = 0; @@ -159,6 +162,9 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj) if (IS_ERR_OR_NULL(pages)) return pages; + if (i915_gem_object_is_volatile(obj)) + obj->mm.madv = I915_MADV_WILLNEED; + i915_gem_object_make_unshrinkable(obj); if (obj->mm.mapping) { diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c index d9cd722b5dbf..0d09da9f7168 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_region.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c @@ -109,7 +109,12 @@ void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj, obj->flags = flags; mutex_lock(&mem->obj_lock); - list_add(&obj->mm.region_link, &mem->objects); + + if (obj->flags & I915_BO_ALLOC_VOLATILE) + list_add(&obj->mm.region_link, &mem->purgeable); + else + list_add(&obj->mm.region_link, &mem->objects); + mutex_unlock(&mem->obj_lock); } diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c index 4c594f90b776..6ead53455c51 100644 --- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c +++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c @@ -115,8 +115,6 @@ static int get_huge_pages(struct drm_i915_gem_object *obj) if (i915_gem_gtt_prepare_pages(obj, st)) goto err; - obj->mm.madv = I915_MADV_DONTNEED; - GEM_BUG_ON(sg_page_sizes != obj->mm.page_mask); __i915_gem_object_set_pages(obj, st, sg_page_sizes); @@ -137,7 +135,6 @@ static void put_huge_pages(struct drm_i915_gem_object *obj, huge_pages_free_pages(pages); obj->mm.dirty = false; - obj->mm.madv = I915_MADV_WILLNEED; } static const struct drm_i915_gem_object_ops huge_page_ops = { @@ -170,6 +167,8 @@ huge_pages_object(struct drm_i915_private *i915, drm_gem_private_object_init(&i915->drm, &obj->base, size); i915_gem_object_init(obj, &huge_page_ops); + obj->flags = I915_BO_ALLOC_VOLATILE; + obj->write_domain = I915_GEM_DOMAIN_CPU; obj->read_domains = I915_GEM_DOMAIN_CPU; obj->cache_level = I915_CACHE_NONE; @@ -229,8 +228,6 @@ static int fake_get_huge_pages(struct drm_i915_gem_object *obj) i915_sg_trim(st); - obj->mm.madv = I915_MADV_DONTNEED; - __i915_gem_object_set_pages(obj, st, sg_page_sizes); return 0; @@ -263,8 +260,6 @@ static int fake_get_huge_pages_single(struct drm_i915_gem_object *obj) sg_dma_len(sg) = obj->base.size; sg_dma_address(sg) = page_size; - obj->mm.madv = I915_MADV_DONTNEED; - __i915_gem_object_set_pages(obj, st, sg->length); return 0; @@ -283,7 +278,6 @@ static void fake_put_huge_pages(struct drm_i915_gem_object *obj, { fake_free_huge_pages(obj, pages); obj->mm.dirty = false; - obj->mm.madv = I915_MADV_WILLNEED; } static const struct drm_i915_gem_object_ops fake_ops = { @@ -323,6 +317,8 @@ fake_huge_pages_object(struct drm_i915_private *i915, u64 size, bool single) else i915_gem_object_init(obj, &fake_ops); + obj->flags = I915_BO_ALLOC_VOLATILE; + obj->write_domain = I915_GEM_DOMAIN_CPU; obj->read_domains = I915_GEM_DOMAIN_CPU; obj->cache_level = I915_CACHE_NONE; diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c index 31a51ca1ddcb..81850e3a7d2d 100644 --- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c @@ -88,8 +88,6 @@ static int fake_get_pages(struct drm_i915_gem_object *obj) } GEM_BUG_ON(rem); - obj->mm.madv = I915_MADV_DONTNEED; - __i915_gem_object_set_pages(obj, pages, sg_page_sizes); return 0; @@ -101,7 +99,6 @@ static void fake_put_pages(struct drm_i915_gem_object *obj, 
{ fake_free_pages(obj, pages); obj->mm.dirty = false; - obj->mm.madv = I915_MADV_WILLNEED; } static const struct drm_i915_gem_object_ops fake_ops = { @@ -128,6 +125,8 @@ fake_dma_object(struct drm_i915_private *i915, u64 size) drm_gem_private_object_init(&i915->drm, &obj->base, size); i915_gem_object_init(obj, &fake_ops); + obj->flags = I915_BO_ALLOC_VOLATILE; + obj->write_domain = I915_GEM_DOMAIN_CPU; obj->read_domains = I915_GEM_DOMAIN_CPU; obj->cache_level = I915_CACHE_NONE;
diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c index 70b467d4e811..f94e7eae90e9 100644 --- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c +++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c @@ -295,12 +295,68 @@ static int igt_mock_continuous(void *arg) return err; } +static int igt_mock_volatile(void *arg) +{ + struct intel_memory_region *mem = arg; + struct drm_i915_gem_object *obj; + int err; + + obj = i915_gem_object_create_region(mem, PAGE_SIZE, 0); + if (IS_ERR(obj)) + return PTR_ERR(obj); + + err = i915_gem_object_pin_pages(obj); + if (err) + goto err_put; + + i915_gem_object_unpin_pages(obj); + + err = intel_memory_region_evict(mem, PAGE_SIZE, 0); + if (err != -ENOSPC) { + pr_err("shrink memory region\n"); + goto err_put; + } + + i915_gem_object_put(obj); + + obj = i915_gem_object_create_region(mem, PAGE_SIZE, I915_BO_ALLOC_VOLATILE); + if (IS_ERR(obj)) + return PTR_ERR(obj); + + if (!(obj->flags & I915_BO_ALLOC_VOLATILE)) { + pr_err("missing flags\n"); + goto err_put; + } + + err = i915_gem_object_pin_pages(obj); + if (err) + goto err_put; + + i915_gem_object_unpin_pages(obj); + + err = intel_memory_region_evict(mem, PAGE_SIZE, 0); + if (err) { + pr_err("failed to shrink memory\n"); + goto err_put; + } + + if (i915_gem_object_has_pages(obj)) { + pr_err("object pages not discarded\n"); + err = -EINVAL; + } + +err_put: + i915_gem_object_put(obj); + return err; +} + int intel_memory_region_mock_selftests(void) { static const struct i915_subtest tests[] = { SUBTEST(igt_mock_fill), SUBTEST(igt_mock_shrink), SUBTEST(igt_mock_continuous), + SUBTEST(igt_mock_volatile), }; struct intel_memory_region *mem; struct drm_i915_private *i915;
From patchwork Fri Aug 9 22:26:12 2019 X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087757 From: Matthew Auld <matthew.auld@intel.com> To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 06/37] drm/i915: Add memory region information to device_info Date: Fri, 9 Aug 2019 23:26:12 +0100 Message-Id: <20190809222643.23142-7-matthew.auld@intel.com> In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> Cc: Abdiel Janulgue, dri-devel@lists.freedesktop.org
From: Abdiel Janulgue Exposes available regions for the platform. Shared memory will always be available. Signed-off-by: Abdiel Janulgue Signed-off-by: Matthew Auld --- drivers/gpu/drm/i915/i915_drv.h | 2 ++ drivers/gpu/drm/i915/intel_device_info.h | 2 ++ 2 files changed, 4 insertions(+)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 39cdf4eac2a6..d947f7415861 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -2212,6 +2212,8 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915, #define HAS_IPC(dev_priv) (INTEL_INFO(dev_priv)->display.has_ipc) +#define HAS_REGION(i915, i) (INTEL_INFO(i915)->memory_regions & (i)) + #define HAS_GT_UC(dev_priv) (INTEL_INFO(dev_priv)->has_gt_uc) /* Having GuC is not the same as using GuC */
diff --git a/drivers/gpu/drm/i915/intel_device_info.h b/drivers/gpu/drm/i915/intel_device_info.h index 92e0c2e0954c..3166f38910f7 100644 --- a/drivers/gpu/drm/i915/intel_device_info.h +++ b/drivers/gpu/drm/i915/intel_device_info.h @@ -159,6 +159,8 @@ struct intel_device_info { unsigned int page_sizes; /* page sizes supported by the HW */ + u32 memory_regions; /* regions supported by the HW */ + u32 display_mmio_offset; u8 num_pipes;
From patchwork Fri Aug 9 22:26:13 2019 X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087779 From: Matthew Auld <matthew.auld@intel.com> To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 07/37] drm/i915: support creating LMEM objects Date: Fri, 9 Aug 2019 23:26:13 +0100 Message-Id: <20190809222643.23142-8-matthew.auld@intel.com> In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> Cc: Abdiel Janulgue, dri-devel@lists.freedesktop.org
We currently define LMEM, or local memory, as just another memory region, like system memory or stolen, which we can expose to userspace and can be mapped to the CPU via some BAR.
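For illustration (not part of the patch itself), creating and backing an LMEM object with the helper added below would look roughly like the live selftest does:

	struct drm_i915_gem_object *obj;
	int err;

	obj = i915_gem_object_create_lmem(i915, SZ_2M, 0);
	if (IS_ERR(obj))
		return PTR_ERR(obj);

	err = i915_gem_object_pin_pages(obj);
	if (err) {
		i915_gem_object_put(obj);
		return err;
	}

	GEM_BUG_ON(!i915_gem_object_is_lmem(obj));

	i915_gem_object_unpin_pages(obj);
	i915_gem_object_put(obj);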
Signed-off-by: Matthew Auld Cc: Joonas Lahtinen Cc: Abdiel Janulgue --- drivers/gpu/drm/i915/Makefile | 2 + drivers/gpu/drm/i915/gem/i915_gem_lmem.c | 31 ++++++++++ drivers/gpu/drm/i915/gem/i915_gem_lmem.h | 23 +++++++ drivers/gpu/drm/i915/i915_drv.h | 5 ++ drivers/gpu/drm/i915/intel_region_lmem.c | 48 +++++++++++++++ drivers/gpu/drm/i915/intel_region_lmem.h | 11 +++++ .../drm/i915/selftests/i915_live_selftests.h | 1 + .../drm/i915/selftests/intel_memory_region.c | 45 +++++++++++++++ 8 files changed, 166 insertions(+) create mode 100644 drivers/gpu/drm/i915/gem/i915_gem_lmem.c create mode 100644 drivers/gpu/drm/i915/gem/i915_gem_lmem.h create mode 100644 drivers/gpu/drm/i915/intel_region_lmem.c create mode 100644 drivers/gpu/drm/i915/intel_region_lmem.h
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile index e9cf87696bde..17394fd0c7f8 100644 --- a/drivers/gpu/drm/i915/Makefile +++ b/drivers/gpu/drm/i915/Makefile @@ -112,6 +112,7 @@ gem-y += \ gem/i915_gem_internal.o \ gem/i915_gem_object.o \ gem/i915_gem_object_blt.o \ + gem/i915_gem_lmem.o \ gem/i915_gem_mman.o \ gem/i915_gem_pages.o \ gem/i915_gem_phys.o \ @@ -140,6 +141,7 @@ i915-y += \ i915_scheduler.o \ i915_trace_points.o \ i915_vma.o \ + intel_region_lmem.o \ intel_wopcm.o # general-purpose microcontroller (GuC) support
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c new file mode 100644 index 000000000000..ac5a15db1d27 --- /dev/null +++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c @@ -0,0 +1,31 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2019 Intel Corporation + */ + +#include "intel_memory_region.h" +#include "gem/i915_gem_region.h" +#include "gem/i915_gem_lmem.h" +#include "i915_drv.h" + +const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops = { + .get_pages = i915_gem_object_get_pages_buddy, + .put_pages = i915_gem_object_put_pages_buddy, + .release = i915_gem_object_release_memory_region, +}; + +bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj) +{ + struct intel_memory_region *region = obj->mm.region; + + return region && region->type == INTEL_LMEM; +} + +struct drm_i915_gem_object * +i915_gem_object_create_lmem(struct drm_i915_private *i915, + resource_size_t size, + unsigned int flags) +{ + return i915_gem_object_create_region(i915->regions[INTEL_MEMORY_LMEM], + size, flags); +}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h new file mode 100644 index 000000000000..ebc15fe24f58 --- /dev/null +++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2019 Intel Corporation + */ + +#ifndef __I915_GEM_LMEM_H +#define __I915_GEM_LMEM_H + +#include <linux/types.h> + +struct drm_i915_private; +struct drm_i915_gem_object; + +extern const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops; + +bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj); + +struct drm_i915_gem_object * +i915_gem_object_create_lmem(struct drm_i915_private *i915, + resource_size_t size, + unsigned int flags); + +#endif /* !__I915_GEM_LMEM_H */
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index d947f7415861..f7be8cee4709 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -98,6 +98,8 @@ #include "i915_vma.h" #include "i915_irq.h" +#include "intel_region_lmem.h" + #include "intel_gvt.h" /* General customization: @@ -1369,6 +1371,8 @@ struct drm_i915_private { */ resource_size_t
stolen_usable_size; /* Total size minus reserved ranges */ + struct intel_memory_region *regions[INTEL_MEMORY_UKNOWN]; + struct intel_uncore uncore; struct i915_virtual_gpu vgpu; @@ -2213,6 +2217,7 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915, #define HAS_IPC(dev_priv) (INTEL_INFO(dev_priv)->display.has_ipc) #define HAS_REGION(i915, i) (INTEL_INFO(i915)->memory_regions & (i)) +#define HAS_LMEM(i915) HAS_REGION(i915, REGION_LMEM) #define HAS_GT_UC(dev_priv) (INTEL_INFO(dev_priv)->has_gt_uc) diff --git a/drivers/gpu/drm/i915/intel_region_lmem.c b/drivers/gpu/drm/i915/intel_region_lmem.c new file mode 100644 index 000000000000..ca906d1ff631 --- /dev/null +++ b/drivers/gpu/drm/i915/intel_region_lmem.c @@ -0,0 +1,48 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2019 Intel Corporation + */ + +#include "i915_drv.h" +#include "intel_memory_region.h" +#include "gem/i915_gem_lmem.h" +#include "gem/i915_gem_region.h" +#include "intel_region_lmem.h" + +static struct drm_i915_gem_object * +lmem_create_object(struct intel_memory_region *mem, + resource_size_t size, + unsigned int flags) +{ + struct drm_i915_private *i915 = mem->i915; + struct drm_i915_gem_object *obj; + unsigned int cache_level; + + if (flags & I915_BO_ALLOC_CONTIGUOUS) + size = roundup_pow_of_two(size); + + if (size > BIT(mem->mm.max_order) * mem->mm.chunk_size) + return ERR_PTR(-E2BIG); + + obj = i915_gem_object_alloc(); + if (!obj) + return ERR_PTR(-ENOMEM); + + drm_gem_private_object_init(&i915->drm, &obj->base, size); + i915_gem_object_init(obj, &i915_gem_lmem_obj_ops); + + obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT; + + cache_level = HAS_LLC(i915) ? I915_CACHE_LLC : I915_CACHE_NONE; + i915_gem_object_set_cache_coherency(obj, cache_level); + + i915_gem_object_init_memory_region(obj, mem, flags); + + return obj; +} + +const struct intel_memory_region_ops intel_region_lmem_ops = { + .init = intel_memory_region_init_buddy, + .release = intel_memory_region_release_buddy, + .create_object = lmem_create_object, +}; diff --git a/drivers/gpu/drm/i915/intel_region_lmem.h b/drivers/gpu/drm/i915/intel_region_lmem.h new file mode 100644 index 000000000000..ed2a3bab6443 --- /dev/null +++ b/drivers/gpu/drm/i915/intel_region_lmem.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2019 Intel Corporation + */ + +#ifndef __INTEL_REGION_LMEM_H +#define __INTEL_REGION_LMEM_H + +extern const struct intel_memory_region_ops intel_region_lmem_ops; + +#endif /* !__INTEL_REGION_LMEM_H */ diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h index 1ccf0f731ac0..3b4ab39bfe7e 100644 --- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h +++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h @@ -30,6 +30,7 @@ selftest(gem_contexts, i915_gem_context_live_selftests) selftest(blt, i915_gem_object_blt_live_selftests) selftest(client, i915_gem_client_blt_live_selftests) selftest(reset, intel_reset_live_selftests) +selftest(memory_region, intel_memory_region_live_selftests) selftest(hangcheck, intel_hangcheck_live_selftests) selftest(execlists, intel_execlists_live_selftests) selftest(guc, intel_guc_live_selftest) diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c index f94e7eae90e9..422416f71643 100644 --- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c +++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c @@ -11,8 +11,10 @@ #include 
"mock_gem_device.h" #include "mock_region.h" +#include "gem/i915_gem_lmem.h" #include "gem/i915_gem_region.h" #include "gem/selftests/mock_context.h" +#include "gt/intel_gt.h" static void close_objects(struct list_head *objects) { @@ -350,6 +352,27 @@ static int igt_mock_volatile(void *arg) return err; } +static int igt_lmem_create(void *arg) +{ + struct drm_i915_private *i915 = arg; + struct drm_i915_gem_object *obj; + int err = 0; + + obj = i915_gem_object_create_lmem(i915, PAGE_SIZE, 0); + if (IS_ERR(obj)) + return PTR_ERR(obj); + + err = i915_gem_object_pin_pages(obj); + if (err) + goto out_put; + + i915_gem_object_unpin_pages(obj); +out_put: + i915_gem_object_put(obj); + + return err; +} + int intel_memory_region_mock_selftests(void) { static const struct i915_subtest tests[] = { @@ -386,3 +409,25 @@ int intel_memory_region_mock_selftests(void) return err; } + +int intel_memory_region_live_selftests(struct drm_i915_private *i915) +{ + static const struct i915_subtest tests[] = { + SUBTEST(igt_lmem_create), + }; + int err; + + if (!HAS_LMEM(i915)) { + pr_info("device lacks LMEM support, skipping\n"); + return 0; + } + + if (intel_gt_is_wedged(&i915->gt)) + return 0; + + mutex_lock(&i915->drm.struct_mutex); + err = i915_subtests(tests, i915); + mutex_unlock(&i915->drm.struct_mutex); + + return err; +} From patchwork Fri Aug 9 22:26:14 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087763 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6442613B1 for ; Fri, 9 Aug 2019 22:27:11 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 514C2205FD for ; Fri, 9 Aug 2019 22:27:11 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 45EF321FAC; Fri, 9 Aug 2019 22:27:11 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 067D5205FD for ; Fri, 9 Aug 2019 22:27:11 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 10A396EEEC; Fri, 9 Aug 2019 22:26:59 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 7226E6EEF1; Fri, 9 Aug 2019 22:26:57 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:26:57 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927005" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:26:56 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 08/37] drm/i915: setup 
io-mapping for LMEM Date: Fri, 9 Aug 2019 23:26:14 +0100 Message-Id: <20190809222643.23142-9-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Abdiel Janulgue , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP From: Abdiel Janulgue Signed-off-by: Abdiel Janulgue Cc: Matthew Auld --- drivers/gpu/drm/i915/intel_region_lmem.c | 28 ++++++++++++++++++++++-- 1 file changed, 26 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/i915/intel_region_lmem.c b/drivers/gpu/drm/i915/intel_region_lmem.c index ca906d1ff631..7f1543e2759c 100644 --- a/drivers/gpu/drm/i915/intel_region_lmem.c +++ b/drivers/gpu/drm/i915/intel_region_lmem.c @@ -41,8 +41,32 @@ lmem_create_object(struct intel_memory_region *mem, return obj; } +static void +region_lmem_release(struct intel_memory_region *mem) +{ + io_mapping_fini(&mem->iomap); + intel_memory_region_release_buddy(mem); +} + +static int +region_lmem_init(struct intel_memory_region *mem) +{ + int ret; + + if (!io_mapping_init_wc(&mem->iomap, + mem->io_start, + resource_size(&mem->region))) + return -EIO; + + ret = intel_memory_region_init_buddy(mem); + if (ret) + io_mapping_fini(&mem->iomap); + + return ret; +} + const struct intel_memory_region_ops intel_region_lmem_ops = { - .init = intel_memory_region_init_buddy, - .release = intel_memory_region_release_buddy, + .init = region_lmem_init, + .release = region_lmem_release, .create_object = lmem_create_object, }; From patchwork Fri Aug 9 22:26:15 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087775 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id F1FDB912 for ; Fri, 9 Aug 2019 22:27:17 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DDD7E205FD for ; Fri, 9 Aug 2019 22:27:17 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id D19B921FAC; Fri, 9 Aug 2019 22:27:17 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 34C19205FD for ; Fri, 9 Aug 2019 22:27:17 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 7339D6EEFC; Fri, 9 Aug 2019 22:27:05 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 534F76EEF3; Fri, 9 Aug 2019 22:26:59 +0000 (UTC) X-Amp-Result: SKIPPED(no 
attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:26:59 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927012" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:26:57 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 09/37] drm/i915/lmem: support kernel mapping Date: Fri, 9 Aug 2019 23:26:15 +0100 Message-Id: <20190809222643.23142-10-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Abdiel Janulgue , Steve Hampson , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP From: Abdiel Janulgue We can create LMEM objects, but we also need to support mapping them into kernel space for internal use. Signed-off-by: Abdiel Janulgue Signed-off-by: Matthew Auld Signed-off-by: Steve Hampson Cc: Joonas Lahtinen --- drivers/gpu/drm/i915/gem/i915_gem_internal.c | 4 +- drivers/gpu/drm/i915/gem/i915_gem_lmem.c | 36 +++++++++ drivers/gpu/drm/i915/gem/i915_gem_lmem.h | 8 ++ drivers/gpu/drm/i915/gem/i915_gem_object.h | 6 ++ .../gpu/drm/i915/gem/i915_gem_object_types.h | 3 +- drivers/gpu/drm/i915/gem/i915_gem_pages.c | 20 ++++- drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 3 +- .../drm/i915/gem/selftests/huge_gem_object.c | 4 +- .../drm/i915/selftests/intel_memory_region.c | 76 +++++++++++++++++++ 9 files changed, 152 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c index 5e72cb1cc2d3..c2e237702e8c 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c @@ -140,7 +140,9 @@ static void i915_gem_object_put_pages_internal(struct drm_i915_gem_object *obj, static const struct drm_i915_gem_object_ops i915_gem_object_internal_ops = { .flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE | - I915_GEM_OBJECT_IS_SHRINKABLE, + I915_GEM_OBJECT_IS_SHRINKABLE | + I915_GEM_OBJECT_IS_MAPPABLE, + .get_pages = i915_gem_object_get_pages_internal, .put_pages = i915_gem_object_put_pages_internal, }; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c index ac5a15db1d27..8d957135afa4 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c @@ -9,11 +9,47 @@ #include "i915_drv.h" const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops = { + .flags = I915_GEM_OBJECT_IS_MAPPABLE, + .get_pages = i915_gem_object_get_pages_buddy, .put_pages = i915_gem_object_put_pages_buddy, .release = i915_gem_object_release_memory_region, }; +/* XXX: Time to vfunc your life up? 
*/ +void __iomem *i915_gem_object_lmem_io_map_page(struct drm_i915_gem_object *obj, + unsigned long n) +{ + resource_size_t offset; + + offset = i915_gem_object_get_dma_address(obj, n); + + return io_mapping_map_wc(&obj->mm.region->iomap, offset, PAGE_SIZE); +} + +void __iomem *i915_gem_object_lmem_io_map_page_atomic(struct drm_i915_gem_object *obj, + unsigned long n) +{ + resource_size_t offset; + + offset = i915_gem_object_get_dma_address(obj, n); + + return io_mapping_map_atomic_wc(&obj->mm.region->iomap, offset); +} + +void __iomem *i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj, + unsigned long n, + unsigned long size) +{ + resource_size_t offset; + + GEM_BUG_ON(!(obj->flags & I915_BO_ALLOC_CONTIGUOUS)); + + offset = i915_gem_object_get_dma_address(obj, n); + + return io_mapping_map_wc(&obj->mm.region->iomap, offset, size); +} + bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj) { struct intel_memory_region *region = obj->mm.region; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h index ebc15fe24f58..31a6462bdbb6 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h @@ -13,6 +13,14 @@ struct drm_i915_gem_object; extern const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops; +void __iomem *i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj, + unsigned long n, unsigned long size); +void __iomem *i915_gem_object_lmem_io_map_page(struct drm_i915_gem_object *obj, + unsigned long n); +void __iomem * +i915_gem_object_lmem_io_map_page_atomic(struct drm_i915_gem_object *obj, + unsigned long n); + bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj); struct drm_i915_gem_object * diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 1af838050d6c..1cbc63470212 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -158,6 +158,12 @@ i915_gem_object_is_proxy(const struct drm_i915_gem_object *obj) return obj->ops->flags & I915_GEM_OBJECT_IS_PROXY; } +static inline bool +i915_gem_object_is_mappable(const struct drm_i915_gem_object *obj) +{ + return obj->ops->flags & I915_GEM_OBJECT_IS_MAPPABLE; +} + static inline bool i915_gem_object_needs_async_cancel(const struct drm_i915_gem_object *obj) { diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index 2142d74a57ea..19c3f9804b68 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -31,7 +31,8 @@ struct drm_i915_gem_object_ops { #define I915_GEM_OBJECT_HAS_STRUCT_PAGE BIT(0) #define I915_GEM_OBJECT_IS_SHRINKABLE BIT(1) #define I915_GEM_OBJECT_IS_PROXY BIT(2) -#define I915_GEM_OBJECT_ASYNC_CANCEL BIT(3) +#define I915_GEM_OBJECT_IS_MAPPABLE BIT(3) +#define I915_GEM_OBJECT_ASYNC_CANCEL BIT(4) /* Interface between the GEM object and its backing storage. 
* get_pages() is called once prior to the use of the associated set diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c index d3f0debdb875..0b73860deaf8 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c @@ -7,6 +7,7 @@ #include "i915_drv.h" #include "i915_gem_object.h" #include "i915_scatterlist.h" +#include "i915_gem_lmem.h" void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj, struct sg_table *pages, @@ -171,7 +172,9 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj) void *ptr; ptr = page_mask_bits(obj->mm.mapping); - if (is_vmalloc_addr(ptr)) + if (i915_gem_object_is_lmem(obj)) + io_mapping_unmap(ptr); + else if (is_vmalloc_addr(ptr)) vunmap(ptr); else kunmap(kmap_to_page(ptr)); @@ -230,7 +233,7 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj, } /* The 'mapping' part of i915_gem_object_pin_map() below */ -static void *i915_gem_object_map(const struct drm_i915_gem_object *obj, +static void *i915_gem_object_map(struct drm_i915_gem_object *obj, enum i915_map_type type) { unsigned long n_pages = obj->base.size >> PAGE_SHIFT; @@ -243,6 +246,13 @@ static void *i915_gem_object_map(const struct drm_i915_gem_object *obj, pgprot_t pgprot; void *addr; + if (i915_gem_object_is_lmem(obj)) { + if (type != I915_MAP_WC) + return NULL; + + return i915_gem_object_lmem_io_map(obj, 0, obj->base.size); + } + /* A single page can always be kmapped */ if (n_pages == 1 && type == I915_MAP_WB) return kmap(sg_page(sgt->sgl)); @@ -288,7 +298,7 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj, void *ptr; int err; - if (unlikely(!i915_gem_object_has_struct_page(obj))) + if (unlikely(!i915_gem_object_is_mappable(obj))) return ERR_PTR(-ENXIO); err = mutex_lock_interruptible(&obj->mm.lock); @@ -320,7 +330,9 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj, goto err_unpin; } - if (is_vmalloc_addr(ptr)) + if (i915_gem_object_is_lmem(obj)) + io_mapping_unmap(ptr); + else if (is_vmalloc_addr(ptr)) vunmap(ptr); else kunmap(kmap_to_page(ptr)); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index 4c4954e8ce0a..9f5d903f7793 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -422,7 +422,8 @@ static void shmem_release(struct drm_i915_gem_object *obj) const struct drm_i915_gem_object_ops i915_gem_shmem_ops = { .flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE | - I915_GEM_OBJECT_IS_SHRINKABLE, + I915_GEM_OBJECT_IS_SHRINKABLE | + I915_GEM_OBJECT_IS_MAPPABLE, .get_pages = shmem_get_pages, .put_pages = shmem_put_pages, diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c index 3c5d17b2b670..686e0e909280 100644 --- a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c +++ b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c @@ -86,7 +86,9 @@ static void huge_put_pages(struct drm_i915_gem_object *obj, static const struct drm_i915_gem_object_ops huge_ops = { .flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE | - I915_GEM_OBJECT_IS_SHRINKABLE, + I915_GEM_OBJECT_IS_SHRINKABLE | + I915_GEM_OBJECT_IS_MAPPABLE, + .get_pages = huge_get_pages, .put_pages = huge_put_pages, }; diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c index 422416f71643..2570fa93e286 100644 --- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c +++ 
b/drivers/gpu/drm/i915/selftests/intel_memory_region.c @@ -13,8 +13,10 @@ #include "gem/i915_gem_lmem.h" #include "gem/i915_gem_region.h" +#include "gem/i915_gem_object_blt.h" #include "gem/selftests/mock_context.h" #include "gt/intel_gt.h" +#include "selftests/igt_flush_test.h" static void close_objects(struct list_head *objects) { @@ -373,6 +375,81 @@ static int igt_lmem_create(void *arg) return err; } +static int igt_lmem_write_cpu(void *arg) +{ + struct drm_i915_private *i915 = arg; + struct intel_context *ce; + struct drm_i915_gem_object *obj; + struct rnd_state prng; + u32 *vaddr; + u32 dword; + u32 val; + u32 sz; + int err; + + if (!HAS_ENGINE(i915, BCS0)) + return 0; + + ce = i915->engine[BCS0]->kernel_context; + prandom_seed_state(&prng, i915_selftest.random_seed); + + sz = round_up(prandom_u32_state(&prng) % SZ_32M, PAGE_SIZE); + + obj = i915_gem_object_create_lmem(i915, sz, I915_BO_ALLOC_CONTIGUOUS); + if (IS_ERR(obj)) + return PTR_ERR(obj); + + vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC); + if (IS_ERR(vaddr)) { + pr_err("Failed to iomap lmembar; err=%d\n", (int)PTR_ERR(vaddr)); + err = PTR_ERR(vaddr); + goto out_put; + } + + val = prandom_u32_state(&prng); + + /* Write from gpu and then read from cpu */ + err = i915_gem_object_fill_blt(obj, ce, val); + if (err) + goto out_unpin; + + i915_gem_object_lock(obj); + err = i915_gem_object_set_to_wc_domain(obj, true); + i915_gem_object_unlock(obj); + if (err) + goto out_unpin; + + for (dword = 0; dword < sz / sizeof(u32); ++dword) { + if (vaddr[dword] != val) { + pr_err("vaddr[%u]=%u, val=%u\n", dword, vaddr[dword], + val); + err = -EINVAL; + break; + } + } + + /* Write from the cpu and read again from the cpu */ + memset32(vaddr, val ^ 0xdeadbeaf, sz / sizeof(u32)); + + for (dword = 0; dword < sz / sizeof(u32); ++dword) { + if (vaddr[dword] != (val ^ 0xdeadbeaf)) { + pr_err("vaddr[%u]=%u, val=%u\n", dword, vaddr[dword], + val ^ 0xdeadbeaf); + err = -EINVAL; + break; + } + } + +out_unpin: + i915_gem_object_unpin_map(obj); +out_put: + i915_gem_object_put(obj); + + if (igt_flush_test(i915, I915_WAIT_LOCKED)) + err = -EIO; + + return err; +} + int intel_memory_region_mock_selftests(void) { static const struct i915_subtest tests[] = { @@ -414,6 +489,7 @@ int intel_memory_region_live_selftests(struct drm_i915_private *i915) { static const struct i915_subtest tests[] = { SUBTEST(igt_lmem_create), + SUBTEST(igt_lmem_write_cpu), }; int err; From patchwork Fri Aug 9 22:26:16 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087783 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3F40C13B1 for ; Fri, 9 Aug 2019 22:27:27 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2CAEE205FD for ; Fri, 9 Aug 2019 22:27:27 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2129D21FAC; Fri, 9 Aug 2019 22:27:27 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with
ESMTPS id DD2C2205FD for ; Fri, 9 Aug 2019 22:27:26 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C2C406EEF0; Fri, 9 Aug 2019 22:27:09 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 4BA7C6EEF7; Fri, 9 Aug 2019 22:27:00 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:00 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927013" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:26:59 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 10/37] drm/i915/blt: don't assume pinned intel_context Date: Fri, 9 Aug 2019 23:26:16 +0100 Message-Id: <20190809222643.23142-11-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP Currently we just pass in bcs0->engine_context so it matters not, but in the future we may want to pass in something that is not a kernel_context, so try to be a bit more generic. 
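

The distinction is that i915_request_create() expects to be handed an already pinned context (which the kernel_context always is), while intel_context_create_request() takes care of the pinning itself. As a rough sketch of the idea, the helper amounts to wrapping request creation in a pin/unpin pair (illustrative only, not the verbatim i915 implementation):

struct i915_request *intel_context_create_request(struct intel_context *ce)
{
	struct i915_request *rq;
	int err;

	/* Pin the context state so a request can be built against it. */
	err = intel_context_pin(ce);
	if (unlikely(err))
		return ERR_PTR(err);

	rq = i915_request_create(ce);

	/*
	 * Drop our temporary pin; a successfully created request keeps
	 * whatever references it needs while it is in flight.
	 */
	intel_context_unpin(ce);

	return rq;
}

With that, callers like clear_pages_worker() and i915_gem_object_fill_blt() below no longer care whether they were handed the always-pinned kernel_context or some other intel_context.
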
Suggested-by: Chris Wilson Signed-off-by: Matthew Auld --- drivers/gpu/drm/i915/gem/i915_gem_client_blt.c | 3 ++- drivers/gpu/drm/i915/gem/i915_gem_object_blt.c | 3 ++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c index de6616bdb3a6..08a84c940d8d 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c @@ -4,6 +4,7 @@ */ #include "i915_drv.h" +#include "gt/intel_context.h" #include "i915_gem_client_blt.h" #include "i915_gem_object_blt.h" @@ -175,7 +176,7 @@ static void clear_pages_worker(struct work_struct *work) if (unlikely(err)) goto out_unlock; - rq = i915_request_create(w->ce); + rq = intel_context_create_request(w->ce); if (IS_ERR(rq)) { err = PTR_ERR(rq); goto out_unpin; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c index 837dd6636dd1..fa90c38c8b07 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c @@ -4,6 +4,7 @@ */ #include "i915_drv.h" +#include "gt/intel_context.h" #include "i915_gem_clflush.h" #include "i915_gem_object_blt.h" @@ -64,7 +65,7 @@ int i915_gem_object_fill_blt(struct drm_i915_gem_object *obj, i915_gem_object_unlock(obj); } - rq = i915_request_create(ce); + rq = intel_context_create_request(ce); if (IS_ERR(rq)) { err = PTR_ERR(rq); goto out_unpin; From patchwork Fri Aug 9 22:26:17 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087777 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 81EAC13B1 for ; Fri, 9 Aug 2019 22:27:22 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6DA8E205FD for ; Fri, 9 Aug 2019 22:27:22 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 61F6221FAC; Fri, 9 Aug 2019 22:27:22 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 9B43B205FD for ; Fri, 9 Aug 2019 22:27:21 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id D96D36EEF1; Fri, 9 Aug 2019 22:27:07 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 7C6876EEE3; Fri, 9 Aug 2019 22:27:02 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:02 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927017" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by 
fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:00 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 11/37] drm/i915/blt: bump size restriction Date: Fri, 9 Aug 2019 23:26:17 +0100 Message-Id: <20190809222643.23142-12-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP Reported-by: Chris Wilson Signed-off-by: Matthew Auld --- .../gpu/drm/i915/gem/i915_gem_client_blt.c | 31 +++- .../gpu/drm/i915/gem/i915_gem_object_blt.c | 139 ++++++++++++++---- .../gpu/drm/i915/gem/i915_gem_object_blt.h | 9 +- .../i915/gem/selftests/i915_gem_client_blt.c | 16 +- .../i915/gem/selftests/i915_gem_object_blt.c | 22 ++- 5 files changed, 170 insertions(+), 47 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c index 08a84c940d8d..4b096309a97e 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c @@ -5,6 +5,8 @@ #include "i915_drv.h" #include "gt/intel_context.h" +#include "gt/intel_engine_pm.h" +#include "gt/intel_engine_pool.h" #include "i915_gem_client_blt.h" #include "i915_gem_object_blt.h" @@ -156,7 +158,9 @@ static void clear_pages_worker(struct work_struct *work) struct drm_i915_private *i915 = w->ce->engine->i915; struct drm_i915_gem_object *obj = w->sleeve->vma->obj; struct i915_vma *vma = w->sleeve->vma; + struct intel_engine_pool_node *pool; struct i915_request *rq; + struct i915_vma *batch; int err = w->dma.error; if (unlikely(err)) @@ -176,10 +180,17 @@ static void clear_pages_worker(struct work_struct *work) if (unlikely(err)) goto out_unlock; + intel_engine_pm_get(w->ce->engine); + batch = intel_emit_vma_fill_blt(&pool, w->ce, vma, w->value); + if (IS_ERR(batch)) { + err = PTR_ERR(batch); + goto out_unpin; + } + rq = intel_context_create_request(w->ce); if (IS_ERR(rq)) { err = PTR_ERR(rq); - goto out_unpin; + goto out_batch; } /* There's no way the fence has signalled */ @@ -187,6 +198,16 @@ static void clear_pages_worker(struct work_struct *work) clear_pages_dma_fence_cb)) GEM_BUG_ON(1); + i915_vma_lock(batch); + err = i915_vma_move_to_active(batch, rq, 0); + i915_vma_unlock(batch); + if (unlikely(err)) + goto out_request; + + err = intel_engine_pool_mark_active(pool, rq); + if (unlikely(err)) + goto out_request; + if (w->ce->engine->emit_init_breadcrumb) { err = w->ce->engine->emit_init_breadcrumb(rq); if (unlikely(err)) @@ -202,7 +223,9 @@ static void clear_pages_worker(struct work_struct *work) if (err) goto out_request; - err = intel_emit_vma_fill_blt(rq, vma, w->value); + err = w->ce->engine->emit_bb_start(rq, + batch->node.start, batch->node.size, + 0); out_request: if (unlikely(err)) { i915_request_skip(rq, err); @@ -210,7 +233,11 @@ static void clear_pages_worker(struct work_struct *work) } i915_request_add(rq); +out_batch: + i915_vma_unpin(batch); + intel_engine_pool_put(pool); out_unpin: + intel_engine_pm_put(w->ce->engine); i915_vma_unpin(vma); out_unlock: mutex_unlock(&i915->drm.struct_mutex); diff --git 
a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c index fa90c38c8b07..c1e5edd1e359 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c @@ -5,49 +5,103 @@ #include "i915_drv.h" #include "gt/intel_context.h" +#include "gt/intel_engine_pm.h" +#include "gt/intel_engine_pool.h" +#include "gt/intel_gt.h" #include "i915_gem_clflush.h" #include "i915_gem_object_blt.h" -int intel_emit_vma_fill_blt(struct i915_request *rq, - struct i915_vma *vma, - u32 value) +struct i915_vma *intel_emit_vma_fill_blt(struct intel_engine_pool_node **p, + struct intel_context *ce, + struct i915_vma *vma, + u32 value) { - u32 *cs; - - cs = intel_ring_begin(rq, 8); - if (IS_ERR(cs)) - return PTR_ERR(cs); - - if (INTEL_GEN(rq->i915) >= 8) { - *cs++ = XY_COLOR_BLT_CMD | BLT_WRITE_RGBA | (7 - 2); - *cs++ = BLT_DEPTH_32 | BLT_ROP_COLOR_COPY | PAGE_SIZE; - *cs++ = 0; - *cs++ = vma->size >> PAGE_SHIFT << 16 | PAGE_SIZE / 4; - *cs++ = lower_32_bits(vma->node.start); - *cs++ = upper_32_bits(vma->node.start); - *cs++ = value; - *cs++ = MI_NOOP; - } else { - *cs++ = XY_COLOR_BLT_CMD | BLT_WRITE_RGBA | (6 - 2); - *cs++ = BLT_DEPTH_32 | BLT_ROP_COLOR_COPY | PAGE_SIZE; - *cs++ = 0; - *cs++ = vma->size >> PAGE_SHIFT << 16 | PAGE_SIZE / 4; - *cs++ = vma->node.start; - *cs++ = value; - *cs++ = MI_NOOP; - *cs++ = MI_NOOP; + struct drm_i915_private *i915 = ce->vm->i915; + const u32 block_size = S16_MAX * PAGE_SIZE; + struct intel_engine_pool_node *pool; + struct i915_vma *batch; + u64 offset; + u64 count; + u64 rem; + u32 size; + u32 *cmd; + int err; + + count = div_u64(round_up(vma->size, block_size), block_size); + size = (1 + 8 * count) * sizeof(u32); + size = round_up(size, PAGE_SIZE); + pool = intel_engine_pool_get(&ce->engine->pool, size); + if (IS_ERR(pool)) + return ERR_CAST(pool); + + cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_WC); + if (IS_ERR(cmd)) { + err = PTR_ERR(cmd); + goto out_put; + } + + rem = vma->size; + offset = vma->node.start; + + do { + u32 size = min_t(u64, rem, block_size); + + GEM_BUG_ON(size >> PAGE_SHIFT > S16_MAX); + + if (INTEL_GEN(i915) >= 8) { + *cmd++ = XY_COLOR_BLT_CMD | BLT_WRITE_RGBA | (7 - 2); + *cmd++ = BLT_DEPTH_32 | BLT_ROP_COLOR_COPY | PAGE_SIZE; + *cmd++ = 0; + *cmd++ = size >> PAGE_SHIFT << 16 | PAGE_SIZE / 4; + *cmd++ = lower_32_bits(offset); + *cmd++ = upper_32_bits(offset); + *cmd++ = value; + } else { + *cmd++ = XY_COLOR_BLT_CMD | BLT_WRITE_RGBA | (6 - 2); + *cmd++ = BLT_DEPTH_32 | BLT_ROP_COLOR_COPY | PAGE_SIZE; + *cmd++ = 0; + *cmd++ = size >> PAGE_SHIFT << 16 | PAGE_SIZE / 4; + *cmd++ = offset; + *cmd++ = value; + } + + /* Allow ourselves to be preempted in between blocks.
*/ + *cmd++ = MI_ARB_CHECK; + + offset += size; + rem -= size; + } while (rem); + + *cmd = MI_BATCH_BUFFER_END; + intel_gt_chipset_flush(ce->vm->gt); + + i915_gem_object_unpin_map(pool->obj); + + batch = i915_vma_instance(pool->obj, ce->vm, NULL); + if (IS_ERR(batch)) { + err = PTR_ERR(batch); + goto out_put; } - intel_ring_advance(rq, cs); + err = i915_vma_pin(batch, 0, 0, PIN_USER); + if (unlikely(err)) + goto out_put; + + *p = pool; + return batch; - return 0; +out_put: + intel_engine_pool_put(pool); + return ERR_PTR(err); } int i915_gem_object_fill_blt(struct drm_i915_gem_object *obj, struct intel_context *ce, u32 value) { + struct intel_engine_pool_node *pool; struct i915_request *rq; + struct i915_vma *batch; struct i915_vma *vma; int err; @@ -65,12 +119,29 @@ int i915_gem_object_fill_blt(struct drm_i915_gem_object *obj, i915_gem_object_unlock(obj); } + intel_engine_pm_get(ce->engine); + batch = intel_emit_vma_fill_blt(&pool, ce, vma, value); + if (IS_ERR(batch)) { + err = PTR_ERR(batch); + goto out_unpin; + } + rq = intel_context_create_request(ce); if (IS_ERR(rq)) { err = PTR_ERR(rq); - goto out_unpin; + goto out_batch; } + i915_vma_lock(batch); + err = i915_vma_move_to_active(batch, rq, 0); + i915_vma_unlock(batch); + if (unlikely(err)) + goto out_request; + + err = intel_engine_pool_mark_active(pool, rq); + if (unlikely(err)) + goto out_request; + err = i915_request_await_object(rq, obj, true); if (unlikely(err)) goto out_request; @@ -87,13 +158,19 @@ int i915_gem_object_fill_blt(struct drm_i915_gem_object *obj, if (unlikely(err)) goto out_request; - err = intel_emit_vma_fill_blt(rq, vma, value); + err = ce->engine->emit_bb_start(rq, + batch->node.start, batch->node.size, + 0); out_request: if (unlikely(err)) i915_request_skip(rq, err); i915_request_add(rq); +out_batch: + i915_vma_unpin(batch); + intel_engine_pool_put(pool); out_unpin: + intel_engine_pm_put(ce->engine); i915_vma_unpin(vma); return err; } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.h b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.h index 7ec7de6ac0c0..a7425c234d50 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.h @@ -9,13 +9,14 @@ #include struct drm_i915_gem_object; +struct intel_engine_pool_node; struct intel_context; -struct i915_request; struct i915_vma; -int intel_emit_vma_fill_blt(struct i915_request *rq, - struct i915_vma *vma, - u32 value); +struct i915_vma *intel_emit_vma_fill_blt(struct intel_engine_pool_node **p, + struct intel_context *ce, + struct i915_vma *vma, + u32 value); int i915_gem_object_fill_blt(struct drm_i915_gem_object *obj, struct intel_context *ce, diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c index 275c28926067..d8804a847945 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c @@ -9,6 +9,7 @@ #include "selftests/igt_flush_test.h" #include "selftests/mock_drm.h" +#include "huge_gem_object.h" #include "mock_context.h" static int igt_client_fill(void *arg) @@ -24,15 +25,19 @@ static int igt_client_fill(void *arg) prandom_seed_state(&prng, i915_selftest.random_seed); do { - u32 sz = prandom_u32_state(&prng) % SZ_32M; + const u32 max_block_size = S16_MAX * PAGE_SIZE; + u32 sz = min_t(u64, ce->vm->total >> 4, prandom_u32_state(&prng)); + u32 phys_sz = sz % (max_block_size + 1); u32 val = prandom_u32_state(&prng); u32 i; sz = round_up(sz, PAGE_SIZE); + 
phys_sz = round_up(phys_sz, PAGE_SIZE); - pr_debug("%s with sz=%x, val=%x\n", __func__, sz, val); + pr_debug("%s with phys_sz= %x, sz=%x, val=%x\n", __func__, + phys_sz, sz, val); - obj = i915_gem_object_create_internal(i915, sz); + obj = huge_gem_object(i915, phys_sz, sz); if (IS_ERR(obj)) { err = PTR_ERR(obj); goto err_flush; @@ -54,7 +59,8 @@ static int igt_client_fill(void *arg) * values after we do the set_to_cpu_domain and pick it up as a * test failure. */ - memset32(vaddr, val ^ 0xdeadbeaf, obj->base.size / sizeof(u32)); + memset32(vaddr, val ^ 0xdeadbeaf, + huge_gem_object_phys_size(obj) / sizeof(u32)); if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE)) obj->cache_dirty = true; @@ -71,7 +77,7 @@ static int igt_client_fill(void *arg) if (err) goto err_unpin; - for (i = 0; i < obj->base.size / sizeof(u32); ++i) { + for (i = 0; i < huge_gem_object_phys_size(obj) / sizeof(u32); ++i) { if (vaddr[i] != val) { pr_err("vaddr[%u]=%x, expected=%x\n", i, vaddr[i], val); diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c index 19843acc84d3..c6e1eebe53f5 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c @@ -9,6 +9,7 @@ #include "selftests/igt_flush_test.h" #include "selftests/mock_drm.h" +#include "huge_gem_object.h" #include "mock_context.h" static int igt_fill_blt(void *arg) @@ -23,16 +24,26 @@ static int igt_fill_blt(void *arg) prandom_seed_state(&prng, i915_selftest.random_seed); + /* + * XXX: needs some threads to scale all these tests, also maybe throw + * in submission from higher priority context to see if we are + * preempted for very large objects... + */ + do { - u32 sz = prandom_u32_state(&prng) % SZ_32M; + const u32 max_block_size = S16_MAX * PAGE_SIZE; + u32 sz = min_t(u64, ce->vm->total >> 4, prandom_u32_state(&prng)); + u32 phys_sz = sz % (max_block_size + 1); u32 val = prandom_u32_state(&prng); u32 i; sz = round_up(sz, PAGE_SIZE); + phys_sz = round_up(phys_sz, PAGE_SIZE); - pr_debug("%s with sz=%x, val=%x\n", __func__, sz, val); + pr_debug("%s with phys_sz= %x, sz=%x, val=%x\n", __func__, + phys_sz, sz, val); - obj = i915_gem_object_create_internal(i915, sz); + obj = huge_gem_object(i915, phys_sz, sz); if (IS_ERR(obj)) { err = PTR_ERR(obj); goto err_flush; @@ -48,7 +59,8 @@ static int igt_fill_blt(void *arg) * Make sure the potentially async clflush does its job, if * required. 
*/ - memset32(vaddr, val ^ 0xdeadbeaf, obj->base.size / sizeof(u32)); + memset32(vaddr, val ^ 0xdeadbeaf, + huge_gem_object_phys_size(obj) / sizeof(u32)); if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE)) obj->cache_dirty = true; @@ -65,7 +77,7 @@ static int igt_fill_blt(void *arg) if (err) goto err_unpin; - for (i = 0; i < obj->base.size / sizeof(u32); ++i) { + for (i = 0; i < huge_gem_object_phys_size(obj) / sizeof(u32); ++i) { if (vaddr[i] != val) { pr_err("vaddr[%u]=%x, expected=%x\n", i, vaddr[i], val); From patchwork Fri Aug 9 22:26:18 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087781 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4A9C9912 for ; Fri, 9 Aug 2019 22:27:26 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 372C3205FD for ; Fri, 9 Aug 2019 22:27:26 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2B22321FAC; Fri, 9 Aug 2019 22:27:26 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 673D6205FD for ; Fri, 9 Aug 2019 22:27:25 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 404D76EEFF; Fri, 9 Aug 2019 22:27:07 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id A6BC26EEE8; Fri, 9 Aug 2019 22:27:03 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:03 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927022" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:02 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 12/37] drm/i915/blt: support copying objects Date: Fri, 9 Aug 2019 23:26:18 +0100 Message-Id: <20190809222643.23142-13-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP We can already clear an object with the blt, so try to do the same to support copying from one object backing store to another. 
Really this is just object -> object, which is not that useful yet; what we really want is two backing stores, but that will require some vma rework first, otherwise we are stuck with "tmp" objects. Signed-off-by: Matthew Auld Cc: Joonas Lahtinen Cc: Abdiel Janulgue --- diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c +struct i915_vma * +intel_emit_vma_copy_blt(struct intel_engine_pool_node **p, + struct intel_context *ce, + struct i915_vma *src, + struct i915_vma *dst) +{ + struct drm_i915_private *i915 = ce->vm->i915; + const u32 block_size = S16_MAX * PAGE_SIZE; + struct intel_engine_pool_node *pool; + struct i915_vma *batch; + u64 src_offset, dst_offset; + u64 count; + u64 rem; + u32 size; + u32 *cmd; + int err; + + GEM_BUG_ON(src->size != dst->size); + + count = div_u64(round_up(dst->size, block_size), block_size); + size = (1 + 11 * count) * sizeof(u32); + size = round_up(size, PAGE_SIZE); + pool = intel_engine_pool_get(&ce->engine->pool, size); + if (IS_ERR(pool)) + return ERR_CAST(pool); + + cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_WC); + if (IS_ERR(cmd)) { + err = PTR_ERR(cmd); + goto out_put; + } + + rem = src->size; + src_offset = src->node.start; + dst_offset = dst->node.start; + + do { + u32 size = min_t(u64, rem, block_size); + + GEM_BUG_ON(size >> PAGE_SHIFT > S16_MAX); + + if (INTEL_GEN(i915) >= 9) { + *cmd++ = GEN9_XY_FAST_COPY_BLT_CMD | (10 - 2); + *cmd++ = BLT_DEPTH_32 | PAGE_SIZE; + *cmd++ = 0; + *cmd++ = size >> PAGE_SHIFT << 16 | PAGE_SIZE / 4; + *cmd++ = lower_32_bits(dst_offset); + *cmd++ = upper_32_bits(dst_offset); + *cmd++ = 0; + *cmd++ = PAGE_SIZE; + *cmd++ = lower_32_bits(src_offset); + *cmd++ = upper_32_bits(src_offset); + } else if (INTEL_GEN(i915) >= 8) { + *cmd++ = XY_SRC_COPY_BLT_CMD | BLT_WRITE_RGBA | (10 - 2); + *cmd++ = BLT_DEPTH_32 | BLT_ROP_SRC_COPY | PAGE_SIZE; + *cmd++ = 0; + *cmd++ = size >> PAGE_SHIFT << 16 | PAGE_SIZE / 4; + *cmd++ = lower_32_bits(dst_offset); + *cmd++ = upper_32_bits(dst_offset); + *cmd++ = 0; + *cmd++ = PAGE_SIZE; + *cmd++ = lower_32_bits(src_offset); + *cmd++ = upper_32_bits(src_offset); + } else { + *cmd++ = SRC_COPY_BLT_CMD | BLT_WRITE_RGBA | (6 - 2); + *cmd++ = BLT_DEPTH_32 | BLT_ROP_SRC_COPY | PAGE_SIZE; + *cmd++ = size >> PAGE_SHIFT << 16 | PAGE_SIZE; + *cmd++ = dst_offset; + *cmd++ = PAGE_SIZE; + *cmd++ = src_offset; + } + + /* Allow ourselves to be preempted in between blocks.
*/ + *cmd++ = MI_ARB_CHECK; + + src_offset += size; + dst_offset += size; + rem -= size; + } while (rem); + + *cmd = MI_BATCH_BUFFER_END; + intel_gt_chipset_flush(ce->vm->gt); + + i915_gem_object_unpin_map(pool->obj); + + batch = i915_vma_instance(pool->obj, ce->vm, NULL); + if (IS_ERR(batch)) { + err = PTR_ERR(batch); + goto out_put; + } + + err = i915_vma_pin(batch, 0, 0, PIN_USER); + if (unlikely(err)) + goto out_put; + + *p = pool; + return batch; + +out_put: + intel_engine_pool_put(pool); + return ERR_PTR(err); +} + +int i915_gem_object_copy_blt(struct drm_i915_gem_object *src, + struct drm_i915_gem_object *dst, + struct intel_context *ce) +{ + struct drm_gem_object *objs[] = { &src->base, &dst->base }; + struct i915_address_space *vm = ce->vm; + struct intel_engine_pool_node *pool; + struct ww_acquire_ctx acquire; + struct i915_vma *vma_src, *vma_dst; + struct i915_vma *batch; + struct i915_request *rq; + int err; + + vma_src = i915_vma_instance(src, vm, NULL); + if (IS_ERR(vma_src)) + return PTR_ERR(vma_src); + + err = i915_vma_pin(vma_src, 0, 0, PIN_USER); + if (unlikely(err)) + return err; + + vma_dst = i915_vma_instance(dst, vm, NULL); + if (IS_ERR(vma_dst)) { + err = PTR_ERR(vma_dst); + goto out_unpin_src; + } + + err = i915_vma_pin(vma_dst, 0, 0, PIN_USER); + if (unlikely(err)) + goto out_unpin_src; + + intel_engine_pm_get(ce->engine); + batch = intel_emit_vma_copy_blt(&pool, ce, vma_src, vma_dst); + if (IS_ERR(batch)) { + err = PTR_ERR(batch); + goto out_unpin_dst; + } + + rq = intel_context_create_request(ce); + if (IS_ERR(rq)) { + err = PTR_ERR(rq); + goto out_batch; + } + + i915_vma_lock(batch); + err = i915_vma_move_to_active(batch, rq, 0); + i915_vma_unlock(batch); + if (unlikely(err)) + goto out_request; + + err = intel_engine_pool_mark_active(pool, rq); + if (unlikely(err)) + goto out_request; + + err = drm_gem_lock_reservations(objs, ARRAY_SIZE(objs), &acquire); + if (unlikely(err)) + goto out_request; + + if (src->cache_dirty & ~src->cache_coherent) + i915_gem_clflush_object(src, 0); + + if (dst->cache_dirty & ~dst->cache_coherent) + i915_gem_clflush_object(dst, 0); + + err = i915_request_await_object(rq, src, false); + if (unlikely(err)) + goto out_unlock; + + err = i915_vma_move_to_active(vma_src, rq, 0); + if (unlikely(err)) + goto out_unlock; + + err = i915_request_await_object(rq, dst, true); + if (unlikely(err)) + goto out_unlock; + + err = i915_vma_move_to_active(vma_dst, rq, EXEC_OBJECT_WRITE); + if (unlikely(err)) + goto out_unlock; + + if (ce->engine->emit_init_breadcrumb) { + err = ce->engine->emit_init_breadcrumb(rq); + if (unlikely(err)) + goto out_unlock; + } + + err = ce->engine->emit_bb_start(rq, + batch->node.start, batch->node.size, + 0); +out_unlock: + drm_gem_unlock_reservations(objs, ARRAY_SIZE(objs), &acquire); +out_request: + if (unlikely(err)) + i915_request_skip(rq, err); + + i915_request_add(rq); +out_batch: + i915_vma_unpin(batch); + intel_engine_pool_put(pool); +out_unpin_dst: + i915_vma_unpin(vma_dst); + intel_engine_pm_put(ce->engine); +out_unpin_src: + i915_vma_unpin(vma_src); + return err; +} + #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) #include "selftests/i915_gem_object_blt.c" #endif diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.h b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.h index a7425c234d50..5d3e4ed6e060 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.h @@ -22,4 +22,13 @@ int i915_gem_object_fill_blt(struct drm_i915_gem_object *obj, struct intel_context *ce, u32 value); +struct i915_vma
*intel_emit_vma_copy_blt(struct intel_engine_pool_node **p, + struct intel_context *ce, + struct i915_vma *src, + struct i915_vma *dst); + +int i915_gem_object_copy_blt(struct drm_i915_gem_object *src, + struct drm_i915_gem_object *dst, + struct intel_context *ce); + #endif diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c index c6e1eebe53f5..c21d747e7d05 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c @@ -103,10 +103,116 @@ static int igt_fill_blt(void *arg) return err; } +static int igt_copy_blt(void *arg) +{ + struct drm_i915_private *i915 = arg; + struct intel_context *ce = i915->engine[BCS0]->kernel_context; + struct drm_i915_gem_object *src, *dst; + struct rnd_state prng; + IGT_TIMEOUT(end); + u32 *vaddr; + int err = 0; + + prandom_seed_state(&prng, i915_selftest.random_seed); + + do { + const u32 max_block_size = S16_MAX * PAGE_SIZE; + u32 sz = min_t(u64, ce->vm->total >> 4, prandom_u32_state(&prng)); + u32 phys_sz = sz % (max_block_size + 1); + u32 val = prandom_u32_state(&prng); + u32 i; + + sz = round_up(sz, PAGE_SIZE); + phys_sz = round_up(phys_sz, PAGE_SIZE); + + pr_debug("%s with phys_sz= %x, sz=%x, val=%x\n", __func__, + phys_sz, sz, val); + + src = huge_gem_object(i915, phys_sz, sz); + if (IS_ERR(src)) { + err = PTR_ERR(src); + goto err_flush; + } + + vaddr = i915_gem_object_pin_map(src, I915_MAP_WB); + if (IS_ERR(vaddr)) { + err = PTR_ERR(vaddr); + goto err_put_src; + } + + memset32(vaddr, val, + huge_gem_object_phys_size(src) / sizeof(u32)); + + i915_gem_object_unpin_map(src); + + if (!(src->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ)) + src->cache_dirty = true; + + dst = huge_gem_object(i915, phys_sz, sz); + if (IS_ERR(dst)) { + err = PTR_ERR(dst); + goto err_put_src; + } + + vaddr = i915_gem_object_pin_map(dst, I915_MAP_WB); + if (IS_ERR(vaddr)) { + err = PTR_ERR(vaddr); + goto err_put_dst; + } + + memset32(vaddr, val ^ 0xdeadbeaf, + huge_gem_object_phys_size(dst) / sizeof(u32)); + + if (!(dst->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE)) + dst->cache_dirty = true; + + mutex_lock(&i915->drm.struct_mutex); + err = i915_gem_object_copy_blt(src, dst, ce); + mutex_unlock(&i915->drm.struct_mutex); + if (err) + goto err_unpin; + + i915_gem_object_lock(dst); + err = i915_gem_object_set_to_cpu_domain(dst, false); + i915_gem_object_unlock(dst); + if (err) + goto err_unpin; + + for (i = 0; i < huge_gem_object_phys_size(dst) / sizeof(u32); ++i) { + if (vaddr[i] != val) { + pr_err("vaddr[%u]=%x, expected=%x\n", i, + vaddr[i], val); + err = -EINVAL; + goto err_unpin; + } + } + + i915_gem_object_unpin_map(dst); + + i915_gem_object_put(src); + i915_gem_object_put(dst); + } while (!time_after(jiffies, end)); + + goto err_flush; + +err_unpin: + i915_gem_object_unpin_map(dst); +err_put_dst: + i915_gem_object_put(dst); +err_put_src: + i915_gem_object_put(src); +err_flush: + if (err == -ENOMEM) + err = 0; + + return err; +} + int i915_gem_object_blt_live_selftests(struct drm_i915_private *i915) { static const struct i915_subtest tests[] = { SUBTEST(igt_fill_blt), + SUBTEST(igt_copy_blt), }; if (intel_gt_is_wedged(&i915->gt)) diff --git a/drivers/gpu/drm/i915/gt/intel_gpu_commands.h b/drivers/gpu/drm/i915/gt/intel_gpu_commands.h index 69f34737325f..af7b9d272144 100644 --- a/drivers/gpu/drm/i915/gt/intel_gpu_commands.h +++ b/drivers/gpu/drm/i915/gt/intel_gpu_commands.h @@ -188,8 +188,9 @@ #define COLOR_BLT_CMD (2<<29 | 
0x40<<22 | (5-2)) #define XY_COLOR_BLT_CMD (2 << 29 | 0x50 << 22) -#define SRC_COPY_BLT_CMD ((2<<29)|(0x43<<22)|4) -#define XY_SRC_COPY_BLT_CMD ((2<<29)|(0x53<<22)|6) +#define SRC_COPY_BLT_CMD ((2<<29)|(0x43<<22)) +#define GEN9_XY_FAST_COPY_BLT_CMD ((2<<29)|(0x42<<22)) +#define XY_SRC_COPY_BLT_CMD ((2<<29)|(0x53<<22)) #define XY_MONO_SRC_COPY_IMM_BLT ((2<<29)|(0x71<<22)|5) #define BLT_WRITE_A (2<<20) #define BLT_WRITE_RGB (1<<20) diff --git a/drivers/gpu/drm/i915/gt/intel_ringbuffer.c b/drivers/gpu/drm/i915/gt/intel_ringbuffer.c index 78b4235f9c0f..f109a1736ed8 100644 --- a/drivers/gpu/drm/i915/gt/intel_ringbuffer.c +++ b/drivers/gpu/drm/i915/gt/intel_ringbuffer.c @@ -1136,7 +1136,7 @@ i830_emit_bb_start(struct i915_request *rq, * stable batch scratch bo area (so that the CS never * stumbles over its tlb invalidation bug) ... */ - *cs++ = SRC_COPY_BLT_CMD | BLT_WRITE_RGBA; + *cs++ = SRC_COPY_BLT_CMD | BLT_WRITE_RGBA | (6 - 2); *cs++ = BLT_DEPTH_32 | BLT_ROP_SRC_COPY | 4096; *cs++ = DIV_ROUND_UP(len, 4096) << 16 | 4096; *cs++ = cs_offset; From patchwork Fri Aug 9 22:26:19 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087835 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 051FA912 for ; Fri, 9 Aug 2019 22:28:06 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E6EA422376 for ; Fri, 9 Aug 2019 22:28:05 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id C94FB2237D; Fri, 9 Aug 2019 22:28:05 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 1DB84223A1 for ; Fri, 9 Aug 2019 22:28:05 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 86BF16EF1A; Fri, 9 Aug 2019 22:27:23 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 008A56EEF0; Fri, 9 Aug 2019 22:27:04 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:04 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927027" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:03 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 13/37] drm/i915/selftests: move gpu-write-dw into utils Date: Fri, 9 Aug 2019 23:26:19 +0100 Message-Id: <20190809222643.23142-14-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: 
<20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP Using the gpu to write to some dword over a number of pages is rather useful, and we already have two copies of such a thing, and we don't want a third so move it to utils. There is probably some other stuff also... Signed-off-by: Matthew Auld Reviewed-by: Chris Wilson --- .../gpu/drm/i915/gem/selftests/huge_pages.c | 120 ++-------------- .../drm/i915/gem/selftests/i915_gem_context.c | 134 ++--------------- .../drm/i915/gem/selftests/igt_gem_utils.c | 135 ++++++++++++++++++ .../drm/i915/gem/selftests/igt_gem_utils.h | 16 +++ 4 files changed, 169 insertions(+), 236 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c index 6ead53455c51..c36cef61ce3c 100644 --- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c +++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c @@ -952,126 +952,22 @@ static int igt_mock_ppgtt_64K(void *arg) return err; } -static struct i915_vma * -gpu_write_dw(struct i915_vma *vma, u64 offset, u32 val) -{ - struct drm_i915_private *i915 = vma->vm->i915; - const int gen = INTEL_GEN(i915); - unsigned int count = vma->size >> PAGE_SHIFT; - struct drm_i915_gem_object *obj; - struct i915_vma *batch; - unsigned int size; - u32 *cmd; - int n; - int err; - - size = (1 + 4 * count) * sizeof(u32); - size = round_up(size, PAGE_SIZE); - obj = i915_gem_object_create_internal(i915, size); - if (IS_ERR(obj)) - return ERR_CAST(obj); - - cmd = i915_gem_object_pin_map(obj, I915_MAP_WC); - if (IS_ERR(cmd)) { - err = PTR_ERR(cmd); - goto err; - } - - offset += vma->node.start; - - for (n = 0; n < count; n++) { - if (gen >= 8) { - *cmd++ = MI_STORE_DWORD_IMM_GEN4; - *cmd++ = lower_32_bits(offset); - *cmd++ = upper_32_bits(offset); - *cmd++ = val; - } else if (gen >= 4) { - *cmd++ = MI_STORE_DWORD_IMM_GEN4 | - (gen < 6 ? 
MI_USE_GGTT : 0); - *cmd++ = 0; - *cmd++ = offset; - *cmd++ = val; - } else { - *cmd++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL; - *cmd++ = offset; - *cmd++ = val; - } - - offset += PAGE_SIZE; - } - - *cmd = MI_BATCH_BUFFER_END; - intel_gt_chipset_flush(vma->vm->gt); - - i915_gem_object_unpin_map(obj); - - batch = i915_vma_instance(obj, vma->vm, NULL); - if (IS_ERR(batch)) { - err = PTR_ERR(batch); - goto err; - } - - err = i915_vma_pin(batch, 0, 0, PIN_USER); - if (err) - goto err; - - return batch; - -err: - i915_gem_object_put(obj); - - return ERR_PTR(err); -} - static int gpu_write(struct i915_vma *vma, struct i915_gem_context *ctx, struct intel_engine_cs *engine, - u32 dword, - u32 value) + u32 dw, + u32 val) { - struct i915_request *rq; - struct i915_vma *batch; int err; - GEM_BUG_ON(!intel_engine_can_store_dword(engine)); - - batch = gpu_write_dw(vma, dword * sizeof(u32), value); - if (IS_ERR(batch)) - return PTR_ERR(batch); - - rq = igt_request_alloc(ctx, engine); - if (IS_ERR(rq)) { - err = PTR_ERR(rq); - goto err_batch; - } - - i915_vma_lock(batch); - err = i915_vma_move_to_active(batch, rq, 0); - i915_vma_unlock(batch); - if (err) - goto err_request; - - i915_vma_lock(vma); - err = i915_gem_object_set_to_gtt_domain(vma->obj, false); - if (err == 0) - err = i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE); - i915_vma_unlock(vma); + i915_gem_object_lock(vma->obj); + err = i915_gem_object_set_to_gtt_domain(vma->obj, true); + i915_gem_object_unlock(vma->obj); if (err) - goto err_request; - - err = engine->emit_bb_start(rq, - batch->node.start, batch->node.size, - 0); -err_request: - if (err) - i915_request_skip(rq, err); - i915_request_add(rq); -err_batch: - i915_vma_unpin(batch); - i915_vma_close(batch); - i915_vma_put(batch); + return err; - return err; + return igt_gpu_fill_dw(vma, ctx, engine, dw * sizeof(u32), + vma->size >> PAGE_SHIFT, val); } static int cpu_check(struct drm_i915_gem_object *obj, u32 dword, u32 val) diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c index c24430352a38..dd87e6cd612e 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c @@ -156,70 +156,6 @@ static int live_nop_switch(void *arg) return err; } -static struct i915_vma * -gpu_fill_dw(struct i915_vma *vma, u64 offset, unsigned long count, u32 value) -{ - struct drm_i915_gem_object *obj; - const int gen = INTEL_GEN(vma->vm->i915); - unsigned long n, size; - u32 *cmd; - int err; - - size = (4 * count + 1) * sizeof(u32); - size = round_up(size, PAGE_SIZE); - obj = i915_gem_object_create_internal(vma->vm->i915, size); - if (IS_ERR(obj)) - return ERR_CAST(obj); - - cmd = i915_gem_object_pin_map(obj, I915_MAP_WB); - if (IS_ERR(cmd)) { - err = PTR_ERR(cmd); - goto err; - } - - GEM_BUG_ON(offset + (count - 1) * PAGE_SIZE > vma->node.size); - offset += vma->node.start; - - for (n = 0; n < count; n++) { - if (gen >= 8) { - *cmd++ = MI_STORE_DWORD_IMM_GEN4; - *cmd++ = lower_32_bits(offset); - *cmd++ = upper_32_bits(offset); - *cmd++ = value; - } else if (gen >= 4) { - *cmd++ = MI_STORE_DWORD_IMM_GEN4 | - (gen < 6 ? 
MI_USE_GGTT : 0); - *cmd++ = 0; - *cmd++ = offset; - *cmd++ = value; - } else { - *cmd++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL; - *cmd++ = offset; - *cmd++ = value; - } - offset += PAGE_SIZE; - } - *cmd = MI_BATCH_BUFFER_END; - i915_gem_object_flush_map(obj); - i915_gem_object_unpin_map(obj); - - vma = i915_vma_instance(obj, vma->vm, NULL); - if (IS_ERR(vma)) { - err = PTR_ERR(vma); - goto err; - } - - err = i915_vma_pin(vma, 0, 0, PIN_USER); - if (err) - goto err; - - return vma; - -err: - i915_gem_object_put(obj); - return ERR_PTR(err); -} - static unsigned long real_page_count(struct drm_i915_gem_object *obj) { return huge_gem_object_phys_size(obj) >> PAGE_SHIFT; @@ -236,10 +172,7 @@ static int gpu_fill(struct drm_i915_gem_object *obj, unsigned int dw) { struct i915_address_space *vm = ctx->vm ?: &engine->gt->ggtt->vm; - struct i915_request *rq; struct i915_vma *vma; - struct i915_vma *batch; - unsigned int flags; int err; GEM_BUG_ON(obj->base.size > vm->total); @@ -250,7 +183,7 @@ static int gpu_fill(struct drm_i915_gem_object *obj, return PTR_ERR(vma); i915_gem_object_lock(obj); - err = i915_gem_object_set_to_gtt_domain(obj, false); + err = i915_gem_object_set_to_gtt_domain(obj, true); i915_gem_object_unlock(obj); if (err) return err; @@ -259,70 +192,23 @@ static int gpu_fill(struct drm_i915_gem_object *obj, if (err) return err; - /* Within the GTT the huge objects maps every page onto + /* + * Within the GTT the huge objects maps every page onto * its 1024 real pages (using phys_pfn = dma_pfn % 1024). * We set the nth dword within the page using the nth * mapping via the GTT - this should exercise the GTT mapping * whilst checking that each context provides a unique view * into the object. */ - batch = gpu_fill_dw(vma, - (dw * real_page_count(obj)) << PAGE_SHIFT | - (dw * sizeof(u32)), - real_page_count(obj), - dw); - if (IS_ERR(batch)) { - err = PTR_ERR(batch); - goto err_vma; - } - - rq = igt_request_alloc(ctx, engine); - if (IS_ERR(rq)) { - err = PTR_ERR(rq); - goto err_batch; - } - - flags = 0; - if (INTEL_GEN(vm->i915) <= 5) - flags |= I915_DISPATCH_SECURE; - - err = engine->emit_bb_start(rq, - batch->node.start, batch->node.size, - flags); - if (err) - goto err_request; - - i915_vma_lock(batch); - err = i915_vma_move_to_active(batch, rq, 0); - i915_vma_unlock(batch); - if (err) - goto skip_request; - - i915_vma_lock(vma); - err = i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE); - i915_vma_unlock(vma); - if (err) - goto skip_request; - - i915_request_add(rq); - - i915_vma_unpin(batch); - i915_vma_close(batch); - i915_vma_put(batch); - + err = igt_gpu_fill_dw(vma, + ctx, + engine, + (dw * real_page_count(obj)) << PAGE_SHIFT | + (dw * sizeof(u32)), + real_page_count(obj), + dw); i915_vma_unpin(vma); - return 0; - -skip_request: - i915_request_skip(rq, err); -err_request: - i915_request_add(rq); -err_batch: - i915_vma_unpin(batch); - i915_vma_put(batch); -err_vma: - i915_vma_unpin(vma); return err; } diff --git a/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c b/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c index 93c636aeae73..42e1e9c58f63 100644 --- a/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c +++ b/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c @@ -9,6 +9,8 @@ #include "gem/i915_gem_context.h" #include "gem/i915_gem_pm.h" #include "gt/intel_context.h" +#include "i915_vma.h" +#include "i915_drv.h" #include "i915_request.h" @@ -32,3 +34,136 @@ igt_request_alloc(struct i915_gem_context *ctx, struct intel_engine_cs *engine) return rq; } + +struct i915_vma * 
+igt_emit_store_dw(struct i915_vma *vma, + u64 offset, + unsigned long count, + u32 val) +{ + struct drm_i915_gem_object *obj; + const int gen = INTEL_GEN(vma->vm->i915); + unsigned long n, size; + u32 *cmd; + int err; + + size = (4 * count + 1) * sizeof(u32); + size = round_up(size, PAGE_SIZE); + obj = i915_gem_object_create_internal(vma->vm->i915, size); + if (IS_ERR(obj)) + return ERR_CAST(obj); + + cmd = i915_gem_object_pin_map(obj, I915_MAP_WC); + if (IS_ERR(cmd)) { + err = PTR_ERR(cmd); + goto err; + } + + GEM_BUG_ON(offset + (count - 1) * PAGE_SIZE > vma->node.size); + offset += vma->node.start; + + for (n = 0; n < count; n++) { + if (gen >= 8) { + *cmd++ = MI_STORE_DWORD_IMM_GEN4; + *cmd++ = lower_32_bits(offset); + *cmd++ = upper_32_bits(offset); + *cmd++ = val; + } else if (gen >= 4) { + *cmd++ = MI_STORE_DWORD_IMM_GEN4 | + (gen < 6 ? MI_USE_GGTT : 0); + *cmd++ = 0; + *cmd++ = offset; + *cmd++ = val; + } else { + *cmd++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL; + *cmd++ = offset; + *cmd++ = val; + } + offset += PAGE_SIZE; + } + *cmd = MI_BATCH_BUFFER_END; + i915_gem_object_unpin_map(obj); + + vma = i915_vma_instance(obj, vma->vm, NULL); + if (IS_ERR(vma)) { + err = PTR_ERR(vma); + goto err; + } + + err = i915_vma_pin(vma, 0, 0, PIN_USER); + if (err) + goto err; + + return vma; + +err: + i915_gem_object_put(obj); + return ERR_PTR(err); +} + +int igt_gpu_fill_dw(struct i915_vma *vma, + struct i915_gem_context *ctx, + struct intel_engine_cs *engine, + u64 offset, + unsigned long count, + u32 val) +{ + struct i915_address_space *vm = ctx->vm ?: &engine->gt->ggtt->vm; + struct i915_request *rq; + struct i915_vma *batch; + unsigned int flags; + int err; + + GEM_BUG_ON(vma->size > vm->total); + GEM_BUG_ON(!intel_engine_can_store_dword(engine)); + GEM_BUG_ON(!i915_vma_is_pinned(vma)); + + batch = igt_emit_store_dw(vma, offset, count, val); + if (IS_ERR(batch)) + return PTR_ERR(batch); + + rq = igt_request_alloc(ctx, engine); + if (IS_ERR(rq)) { + err = PTR_ERR(rq); + goto err_batch; + } + + flags = 0; + if (INTEL_GEN(vm->i915) <= 5) + flags |= I915_DISPATCH_SECURE; + + err = engine->emit_bb_start(rq, + batch->node.start, batch->node.size, + flags); + if (err) + goto err_request; + + i915_vma_lock(batch); + err = i915_vma_move_to_active(batch, rq, 0); + i915_vma_unlock(batch); + if (err) + goto skip_request; + + i915_vma_lock(vma); + err = i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE); + i915_vma_unlock(vma); + if (err) + goto skip_request; + + i915_request_add(rq); + + i915_vma_unpin(batch); + i915_vma_close(batch); + i915_vma_put(batch); + + return 0; + +skip_request: + i915_request_skip(rq, err); +err_request: + i915_request_add(rq); +err_batch: + i915_vma_unpin(batch); + i915_vma_put(batch); + return err; +} diff --git a/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.h b/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.h index 0f17251cf75d..361a7ef866b0 100644 --- a/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.h +++ b/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.h @@ -7,11 +7,27 @@ #ifndef __IGT_GEM_UTILS_H__ #define __IGT_GEM_UTILS_H__ +#include + struct i915_request; struct i915_gem_context; struct intel_engine_cs; +struct i915_vma; struct i915_request * igt_request_alloc(struct i915_gem_context *ctx, struct intel_engine_cs *engine); +struct i915_vma * +igt_emit_store_dw(struct i915_vma *vma, + u64 offset, + unsigned long count, + u32 val); + +int igt_gpu_fill_dw(struct i915_vma *vma, + struct i915_gem_context *ctx, + struct intel_engine_cs *engine, + u64 offset, + 
unsigned long count, + u32 val); + #endif /* __IGT_GEM_UTILS_H__ */ From patchwork Fri Aug 9 22:26:20 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087795 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AEE50912 for ; Fri, 9 Aug 2019 22:27:41 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9D5ED205FD for ; Fri, 9 Aug 2019 22:27:41 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 91E862223E; Fri, 9 Aug 2019 22:27:41 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 2DF9A205FD for ; Fri, 9 Aug 2019 22:27:41 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 291DD6EEE3; Fri, 9 Aug 2019 22:27:11 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id ECC826EEF9; Fri, 9 Aug 2019 22:27:05 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:05 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927028" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:05 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 14/37] drm/i915/selftests: add write-dword test for LMEM Date: Fri, 9 Aug 2019 23:26:20 +0100 Message-Id: <20190809222643.23142-15-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP Simple test writing to dwords across an object, using various engines in a randomized order, checking that our writes land from the cpu. 
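For reference, the heart of the selftest added below reduces to this loop (a distilled sketch, not the verbatim patch; helper names match the code below, error handling trimmed):

	do {
		u32 rng = prandom_u32_state(&prng);
		u32 dword = offset_in_page(rng) / 4; /* random dword slot within a page */

		engine = engines[order[i] % n];      /* pre-shuffled engine order */
		i = (i + 1) % (n * I915_NUM_ENGINES);

		/* write the value from the GPU, then read it back from the CPU side */
		err = igt_gpu_write_dw(vma, ctx, engine, dword, rng);
		if (!err)
			err = igt_cpu_check(obj, dword, rng);
	} while (!err && !__igt_timeout(end_time, NULL));

Note that the same random draw supplies both the dword index and the expected value, so a stale or misplaced write shows up directly as a value mismatch.
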
Signed-off-by: Matthew Auld --- .../drm/i915/selftests/intel_memory_region.c | 179 ++++++++++++++++++ 1 file changed, 179 insertions(+) diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c index 2570fa93e286..4123e81a2bda 100644 --- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c +++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c @@ -7,6 +7,7 @@ #include "../i915_selftest.h" + #include "mock_drm.h" #include "mock_gem_device.h" #include "mock_region.h" @@ -14,9 +15,11 @@ #include "gem/i915_gem_lmem.h" #include "gem/i915_gem_region.h" #include "gem/i915_gem_object_blt.h" +#include "gem/selftests/igt_gem_utils.h" #include "gem/selftests/mock_context.h" #include "gt/intel_gt.h" #include "selftests/igt_flush_test.h" +#include "selftests/i915_random.h" static void close_objects(struct list_head *objects) { @@ -354,6 +357,128 @@ static int igt_mock_volatile(void *arg) return err; } +static int igt_gpu_write_dw(struct i915_vma *vma, + struct i915_gem_context *ctx, + struct intel_engine_cs *engine, + u32 dword, + u32 value) +{ + int err; + + i915_gem_object_lock(vma->obj); + err = i915_gem_object_set_to_gtt_domain(vma->obj, true); + i915_gem_object_unlock(vma->obj); + if (err) + return err; + + return igt_gpu_fill_dw(vma, ctx, engine, dword * sizeof(u32), + vma->size >> PAGE_SHIFT, value); +} + +static int igt_cpu_check(struct drm_i915_gem_object *obj, u32 dword, u32 val) +{ + unsigned long n; + int err; + + i915_gem_object_lock(obj); + err = i915_gem_object_set_to_wc_domain(obj, false); + i915_gem_object_unlock(obj); + if (err) + return err; + + err = i915_gem_object_pin_pages(obj); + if (err) + return err; + + for (n = 0; n < obj->base.size >> PAGE_SHIFT; ++n) { + u32 __iomem *base; + u32 read_val; + + base = i915_gem_object_lmem_io_map_page_atomic(obj, n); + + read_val = ioread32(base + dword); + io_mapping_unmap_atomic(base); + if (read_val != val) { + pr_err("n=%lu base[%u]=%u, val=%u\n", + n, dword, read_val, val); + err = -EINVAL; + break; + } + } + + i915_gem_object_unpin_pages(obj); + return err; +} + +static int igt_gpu_write(struct i915_gem_context *ctx, + struct drm_i915_gem_object *obj) +{ + struct drm_i915_private *i915 = ctx->i915; + struct i915_address_space *vm = ctx->vm ?: &i915->ggtt.vm; + static struct intel_engine_cs *engines[I915_NUM_ENGINES]; + struct intel_engine_cs *engine; + IGT_TIMEOUT(end_time); + I915_RND_STATE(prng); + struct i915_vma *vma; + unsigned int id; + int *order; + int i, n; + int err; + + n = 0; + for_each_engine(engine, i915, id) { + if (!intel_engine_can_store_dword(engine)) { + pr_info("store-dword-imm not supported on engine=%u\n", + id); + continue; + } + engines[n++] = engine; + } + + if (!n) + return 0; + + order = i915_random_order(n * I915_NUM_ENGINES, &prng); + if (!order) + return -ENOMEM; + + vma = i915_vma_instance(obj, vm, NULL); + if (IS_ERR(vma)) { + err = PTR_ERR(vma); + goto out_free; + } + + err = i915_vma_pin(vma, 0, 0, PIN_USER); + if (err) + goto out_free; + + i = 0; + do { + u32 rng = prandom_u32_state(&prng); + u32 dword = offset_in_page(rng) / 4; + + engine = engines[order[i] % n]; + i = (i + 1) % (n * I915_NUM_ENGINES); + + err = igt_gpu_write_dw(vma, ctx, engine, dword, rng); + if (err) + break; + + err = igt_cpu_check(obj, dword, rng); + if (err) + break; + } while (!__igt_timeout(end_time, NULL)); + + i915_vma_unpin(vma); +out_free: + kfree(order); + + if (err == -ENOMEM) + err = 0; + + return err; +} + static int igt_lmem_create(void *arg) { 
struct drm_i915_private *i915 = arg; @@ -375,6 +500,59 @@ static int igt_lmem_create(void *arg) return err; } +static int igt_lmem_write_gpu(void *arg) +{ + struct drm_i915_private *i915 = arg; + struct drm_i915_gem_object *obj; + struct i915_gem_context *ctx; + struct drm_file *file; + I915_RND_STATE(prng); + u32 sz; + int err; + + mutex_unlock(&i915->drm.struct_mutex); + file = mock_file(i915); + mutex_lock(&i915->drm.struct_mutex); + if (IS_ERR(file)) + return PTR_ERR(file); + + ctx = live_context(i915, file); + if (IS_ERR(ctx)) { + err = PTR_ERR(ctx); + goto out_file; + } + + /* + * XXX: Nothing too big for now; we don't want to upset CI. What we + * really want is the huge dma stuff for device memory, then we can go + * to town... + */ + sz = round_up(prandom_u32_state(&prng) % SZ_32M, PAGE_SIZE); + + obj = i915_gem_object_create_lmem(i915, sz, 0); + if (IS_ERR(obj)) { + err = PTR_ERR(obj); + goto out_file; + } + + err = i915_gem_object_pin_pages(obj); + if (err) + goto out_put; + + err = igt_gpu_write(ctx, obj); + if (err) + pr_err("igt_gpu_write failed(%d)\n", err); + + i915_gem_object_unpin_pages(obj); +out_put: + i915_gem_object_put(obj); +out_file: + mutex_unlock(&i915->drm.struct_mutex); + mock_file_free(i915, file); + mutex_lock(&i915->drm.struct_mutex); + return err; +} + static int igt_lmem_write_cpu(void *arg) { struct drm_i915_private *i915 = arg; @@ -490,6 +668,7 @@ int intel_memory_region_live_selftests(struct drm_i915_private *i915) static const struct i915_subtest tests[] = { SUBTEST(igt_lmem_create), SUBTEST(igt_lmem_write_cpu), + SUBTEST(igt_lmem_write_gpu), }; int err; From patchwork Fri Aug 9 22:26:21 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087787 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E1B88912 for ; Fri, 9 Aug 2019 22:27:34 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D04CA205FD for ; Fri, 9 Aug 2019 22:27:34 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id C4D9921C9A; Fri, 9 Aug 2019 22:27:34 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 694F621FAC for ; Fri, 9 Aug 2019 22:27:34 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 8F6926EEF9; Fri, 9 Aug 2019 22:27:11 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 064D86EEE8; Fri, 9 Aug 2019 22:27:07 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:06 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; 
d="scan'208";a="176927029" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:06 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 15/37] drm/i915/selftest: extend coverage to include LMEM huge-pages Date: Fri, 9 Aug 2019 23:26:21 +0100 Message-Id: <20190809222643.23142-16-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Auld --- .../gpu/drm/i915/gem/selftests/huge_pages.c | 121 +++++++++++++++++- 1 file changed, 120 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c index c36cef61ce3c..4bac15363020 100644 --- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c +++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c @@ -9,6 +9,7 @@ #include "i915_selftest.h" #include "gem/i915_gem_region.h" +#include "gem/i915_gem_lmem.h" #include "gem/i915_gem_pm.h" #include "gt/intel_gt.h" @@ -970,7 +971,7 @@ static int gpu_write(struct i915_vma *vma, vma->size >> PAGE_SHIFT, val); } -static int cpu_check(struct drm_i915_gem_object *obj, u32 dword, u32 val) +static int __cpu_check_shmem(struct drm_i915_gem_object *obj, u32 dword, u32 val) { unsigned int needs_flush; unsigned long n; @@ -1002,6 +1003,51 @@ static int cpu_check(struct drm_i915_gem_object *obj, u32 dword, u32 val) return err; } +static int __cpu_check_lmem(struct drm_i915_gem_object *obj, u32 dword, u32 val) +{ + unsigned long n; + int err; + + i915_gem_object_lock(obj); + err = i915_gem_object_set_to_wc_domain(obj, false); + i915_gem_object_unlock(obj); + if (err) + return err; + + err = i915_gem_object_pin_pages(obj); + if (err) + return err; + + for (n = 0; n < obj->base.size >> PAGE_SHIFT; ++n) { + u32 __iomem *base; + u32 read_val; + + base = i915_gem_object_lmem_io_map_page_atomic(obj, n); + + read_val = ioread32(base + dword); + io_mapping_unmap_atomic(base); + if (read_val != val) { + pr_err("n=%lu base[%u]=%u, val=%u\n", + n, dword, read_val, val); + err = -EINVAL; + break; + } + } + + i915_gem_object_unpin_pages(obj); + return err; +} + +static int cpu_check(struct drm_i915_gem_object *obj, u32 dword, u32 val) +{ + if (i915_gem_object_has_struct_page(obj)) + return __cpu_check_shmem(obj, dword, val); + else if (i915_gem_object_is_lmem(obj)) + return __cpu_check_lmem(obj, dword, val); + + return -ENODEV; +} + static int __igt_write_huge(struct i915_gem_context *ctx, struct intel_engine_cs *engine, struct drm_i915_gem_object *obj, @@ -1382,6 +1428,78 @@ static int igt_ppgtt_gemfs_huge(void *arg) return err; } +static int igt_ppgtt_lmem_huge(void *arg) +{ + struct i915_gem_context *ctx = arg; + struct drm_i915_private *i915 = ctx->i915; + struct drm_i915_gem_object *obj; + static const unsigned int sizes[] = { + SZ_64K, + SZ_512K, + SZ_1M, + SZ_2M, + }; + int i; + int err; + + if (!HAS_LMEM(i915)) { + pr_info("device lacks LMEM support, skipping\n"); + return 0; + } + + /* + * Sanity check that 
the HW uses huge pages correctly through LMEM + * -- ensure that our writes land in the right place. + */ + + for (i = 0; i < ARRAY_SIZE(sizes); ++i) { + unsigned int size = sizes[i]; + + obj = i915_gem_object_create_lmem(i915, size, I915_BO_ALLOC_CONTIGUOUS); + if (IS_ERR(obj)) { + err = PTR_ERR(obj); + if (err == -E2BIG) { + pr_info("object too big for region!\n"); + return 0; + } + + return err; + } + + err = i915_gem_object_pin_pages(obj); + if (err) + goto out_put; + + if (obj->mm.page_sizes.phys < I915_GTT_PAGE_SIZE_64K) { + pr_info("LMEM unable to allocate huge-page(s) with size=%u\n", + size); + goto out_unpin; + } + + err = igt_write_huge(ctx, obj); + if (err) { + pr_err("LMEM write-huge failed with size=%u\n", size); + goto out_unpin; + } + + i915_gem_object_unpin_pages(obj); + __i915_gem_object_put_pages(obj, I915_MM_NORMAL); + i915_gem_object_put(obj); + } + + return 0; + +out_unpin: + i915_gem_object_unpin_pages(obj); +out_put: + i915_gem_object_put(obj); + + if (err == -ENOMEM) + err = 0; + + return err; +} + static int igt_ppgtt_pin_update(void *arg) { struct i915_gem_context *ctx = arg; @@ -1732,6 +1850,7 @@ int i915_gem_huge_page_live_selftests(struct drm_i915_private *i915) SUBTEST(igt_ppgtt_exhaust_huge), SUBTEST(igt_ppgtt_gemfs_huge), SUBTEST(igt_ppgtt_internal_huge), + SUBTEST(igt_ppgtt_lmem_huge), }; struct drm_file *file; struct i915_gem_context *ctx; From patchwork Fri Aug 9 22:26:22 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087823 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D4B2613B1 for ; Fri, 9 Aug 2019 22:27:56 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C156D205FD for ; Fri, 9 Aug 2019 22:27:56 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B588821FAC; Fri, 9 Aug 2019 22:27:56 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 60A3A21C9A for ; Fri, 9 Aug 2019 22:27:56 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 129C76EF0E; Fri, 9 Aug 2019 22:27:22 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 96AEF6EF02; Fri, 9 Aug 2019 22:27:08 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:08 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927032" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:07 -0700 From: Matthew Auld To: 
intel-gfx@lists.freedesktop.org Subject: [PATCH v3 16/37] drm/i915/lmem: support CPU relocations Date: Fri, 9 Aug 2019 23:26:22 +0100 Message-Id: <20190809222643.23142-17-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Abdiel Janulgue , dri-devel@lists.freedesktop.org, Rodrigo Vivi Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP Add LMEM support for the CPU reloc path. When doing relocations we have both a GPU and CPU reloc path, as well as some debugging options to force a particular path. The GPU reloc path is preferred when the object is not currently idle, otherwise we use the CPU reloc path. Since we can't kmap the object, and the mappable aperture might not be available, add support for mapping it through LMEMBAR. Signed-off-by: Matthew Auld Cc: Joonas Lahtinen Cc: Abdiel Janulgue Cc: Rodrigo Vivi --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 55 +++++++++++++++++-- 1 file changed, 51 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 2fa08357944e..d70b3e6dc12d 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -15,6 +15,7 @@ #include "display/intel_frontbuffer.h" #include "gem/i915_gem_ioctls.h" +#include "gem/i915_gem_lmem.h" #include "gt/intel_context.h" #include "gt/intel_engine_pool.h" #include "gt/intel_gt.h" @@ -251,6 +252,7 @@ struct i915_execbuffer { bool has_llc : 1; bool has_fence : 1; bool needs_unfenced : 1; + bool is_lmem : 1; struct i915_request *rq; u32 *rq_cmd; @@ -959,6 +961,7 @@ static void reloc_cache_init(struct reloc_cache *cache, cache->use_64bit_reloc = HAS_64BIT_RELOC(i915); cache->has_fence = cache->gen < 4; cache->needs_unfenced = INTEL_INFO(i915)->unfenced_needs_alignment; + cache->is_lmem = false; cache->node.allocated = false; cache->rq = NULL; cache->rq_size = 0; @@ -1017,10 +1020,14 @@ static void reloc_cache_reset(struct reloc_cache *cache) } else { struct i915_ggtt *ggtt = cache_to_ggtt(cache); - intel_gt_flush_ggtt_writes(ggtt->vm.gt); + if (!cache->is_lmem) + intel_gt_flush_ggtt_writes(ggtt->vm.gt); io_mapping_unmap_atomic((void __iomem *)vaddr); - if (cache->node.allocated) { + if (cache->is_lmem) { + i915_gem_object_unpin_pages((struct drm_i915_gem_object *)cache->node.mm); + cache->is_lmem = false; + } else if (cache->node.allocated) { ggtt->vm.clear_range(&ggtt->vm, cache->node.start, cache->node.size); @@ -1066,6 +1073,42 @@ static void *reloc_kmap(struct drm_i915_gem_object *obj, return vaddr; } +static void *reloc_lmem(struct drm_i915_gem_object *obj, + struct reloc_cache *cache, + unsigned long page) +{ + void *vaddr; + int err; + + GEM_BUG_ON(use_cpu_reloc(cache, obj)); + + if (cache->vaddr) { + io_mapping_unmap_atomic((void __force __iomem *) unmask_page(cache->vaddr)); + } else { + err = i915_gem_object_pin_pages(obj); + if (err) + return ERR_PTR(err); + + i915_gem_object_lock(obj); + err = i915_gem_object_set_to_wc_domain(obj, true); + i915_gem_object_unlock(obj); + if (err) { + i915_gem_object_unpin_pages(obj); + return ERR_PTR(err); + } + + cache->node.mm 
= (void *)obj; + cache->is_lmem = true; + } + + vaddr = i915_gem_object_lmem_io_map_page_atomic(obj, page); + + cache->vaddr = (unsigned long)vaddr; + cache->page = page; + + return vaddr; +} + static void *reloc_iomap(struct drm_i915_gem_object *obj, struct reloc_cache *cache, unsigned long page) @@ -1142,8 +1185,12 @@ static void *reloc_vaddr(struct drm_i915_gem_object *obj, vaddr = unmask_page(cache->vaddr); } else { vaddr = NULL; - if ((cache->vaddr & KMAP) == 0) - vaddr = reloc_iomap(obj, cache, page); + if ((cache->vaddr & KMAP) == 0) { + if (i915_gem_object_is_lmem(obj)) + vaddr = reloc_lmem(obj, cache, page); + else + vaddr = reloc_iomap(obj, cache, page); + } if (!vaddr) vaddr = reloc_kmap(obj, cache, page); } From patchwork Fri Aug 9 22:26:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087821 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 52364912 for ; Fri, 9 Aug 2019 22:27:55 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 330F921C9A for ; Fri, 9 Aug 2019 22:27:55 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2718F2223E; Fri, 9 Aug 2019 22:27:55 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id C68CC21C9A for ; Fri, 9 Aug 2019 22:27:54 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 49A7C6EF10; Fri, 9 Aug 2019 22:27:22 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 325546EF05; Fri, 9 Aug 2019 22:27:10 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:10 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927037" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:08 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 17/37] drm/i915/lmem: support pread Date: Fri, 9 Aug 2019 23:26:23 +0100 Message-Id: <20190809222643.23142-18-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Steve Hampson , Abdiel Janulgue , dri-devel@lists.freedesktop.org Errors-To: 
dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP We need to add support for pread'ing an LMEM object. Signed-off-by: Matthew Auld Signed-off-by: Steve Hampson Cc: Joonas Lahtinen Cc: Abdiel Janulgue --- drivers/gpu/drm/i915/gem/i915_gem_lmem.c | 88 +++++++++++++++++++ .../gpu/drm/i915/gem/i915_gem_object_types.h | 2 + drivers/gpu/drm/i915/i915_gem.c | 6 ++ 3 files changed, 96 insertions(+) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c index 8d957135afa4..f5a13994dc2a 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c @@ -8,12 +8,100 @@ #include "gem/i915_gem_lmem.h" #include "i915_drv.h" +static int lmem_pread(struct drm_i915_gem_object *obj, + const struct drm_i915_gem_pread *arg) +{ + struct drm_i915_private *i915 = to_i915(obj->base.dev); + struct intel_runtime_pm *rpm = &i915->runtime_pm; + intel_wakeref_t wakeref; + struct dma_fence *fence; + char __user *user_data; + unsigned int offset; + unsigned long idx; + u64 remain; + int ret; + + ret = i915_gem_object_wait(obj, + I915_WAIT_INTERRUPTIBLE, + MAX_SCHEDULE_TIMEOUT); + if (ret) + return ret; + + ret = i915_gem_object_pin_pages(obj); + if (ret) + return ret; + + i915_gem_object_lock(obj); + ret = i915_gem_object_set_to_wc_domain(obj, false); + if (ret) { + i915_gem_object_unlock(obj); + goto out_unpin; + } + + fence = i915_gem_object_lock_fence(obj); + i915_gem_object_unlock(obj); + if (!fence) { + ret = -ENOMEM; + goto out_unpin; + } + + wakeref = intel_runtime_pm_get(rpm); + + remain = arg->size; + user_data = u64_to_user_ptr(arg->data_ptr); + offset = offset_in_page(arg->offset); + for (idx = arg->offset >> PAGE_SHIFT; remain; idx++) { + unsigned long unwritten; + void __iomem *vaddr; + int length; + + length = remain; + if (offset + length > PAGE_SIZE) + length = PAGE_SIZE - offset; + + vaddr = i915_gem_object_lmem_io_map_page_atomic(obj, idx); + if (!vaddr) { + ret = -ENOMEM; + goto out_put; + } + unwritten = __copy_to_user_inatomic(user_data, + (void __force *)vaddr + offset, + length); + io_mapping_unmap_atomic(vaddr); + if (unwritten) { + vaddr = i915_gem_object_lmem_io_map_page(obj, idx); + unwritten = copy_to_user(user_data, + (void __force *)vaddr + offset, + length); + io_mapping_unmap(vaddr); + } + if (unwritten) { + ret = -EFAULT; + goto out_put; + } + + remain -= length; + user_data += length; + offset = 0; + } + +out_put: + intel_runtime_pm_put(rpm, wakeref); + i915_gem_object_unlock_fence(obj, fence); +out_unpin: + i915_gem_object_unpin_pages(obj); + + return ret; +} + const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops = { .flags = I915_GEM_OBJECT_IS_MAPPABLE, .get_pages = i915_gem_object_get_pages_buddy, .put_pages = i915_gem_object_put_pages_buddy, .release = i915_gem_object_release_memory_region, + + .pread = lmem_pread, }; /* XXX: Time to vfunc your life up? 
*/ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index 19c3f9804b68..cd06051eb797 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -53,6 +53,8 @@ struct drm_i915_gem_object_ops { void (*truncate)(struct drm_i915_gem_object *obj); void (*writeback)(struct drm_i915_gem_object *obj); + int (*pread)(struct drm_i915_gem_object *, + const struct drm_i915_gem_pread *arg); int (*pwrite)(struct drm_i915_gem_object *obj, const struct drm_i915_gem_pwrite *arg); diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 8735dea74809..96e143d133d1 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -465,6 +465,12 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data, trace_i915_gem_object_pread(obj, args->offset, args->size); + ret = -ENODEV; + if (obj->ops->pread) + ret = obj->ops->pread(obj, args); + if (ret != -ENODEV) + goto out; + ret = i915_gem_object_wait(obj, I915_WAIT_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT); From patchwork Fri Aug 9 22:26:24 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087817 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9BD36912 for ; Fri, 9 Aug 2019 22:27:50 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 87FE0205FD for ; Fri, 9 Aug 2019 22:27:50 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 7C55D21FAC; Fri, 9 Aug 2019 22:27:50 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 35687205FD for ; Fri, 9 Aug 2019 22:27:50 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 5E9976EF0D; Fri, 9 Aug 2019 22:27:21 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id BA5646EEFA; Fri, 9 Aug 2019 22:27:11 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:11 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927040" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:10 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 18/37] drm/i915/lmem: support pwrite Date: Fri, 9 Aug 2019 23:26:24 +0100 Message-Id: <20190809222643.23142-19-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> 
References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Steve Hampson , Abdiel Janulgue , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP We need to add support for pwrite'ing an LMEM object. Signed-off-by: Matthew Auld Signed-off-by: Steve Hampson Cc: Joonas Lahtinen Cc: Abdiel Janulgue --- drivers/gpu/drm/i915/gem/i915_gem_lmem.c | 87 ++++++++++++++++++++++++ 1 file changed, 87 insertions(+) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c index f5a13994dc2a..f00078ac331e 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c @@ -94,6 +94,92 @@ static int lmem_pread(struct drm_i915_gem_object *obj, return ret; } +static int lmem_pwrite(struct drm_i915_gem_object *obj, + const struct drm_i915_gem_pwrite *arg) +{ + struct drm_i915_private *i915 = to_i915(obj->base.dev); + struct intel_runtime_pm *rpm = &i915->runtime_pm; + intel_wakeref_t wakeref; + struct dma_fence *fence; + char __user *user_data; + unsigned int offset; + unsigned long idx; + u64 remain; + int ret; + + ret = i915_gem_object_wait(obj, + I915_WAIT_INTERRUPTIBLE, + MAX_SCHEDULE_TIMEOUT); + if (ret) + return ret; + + ret = i915_gem_object_pin_pages(obj); + if (ret) + return ret; + + i915_gem_object_lock(obj); + ret = i915_gem_object_set_to_wc_domain(obj, true); + if (ret) { + i915_gem_object_unlock(obj); + goto out_unpin; + } + + fence = i915_gem_object_lock_fence(obj); + i915_gem_object_unlock(obj); + if (!fence) { + ret = -ENOMEM; + goto out_unpin; + } + + wakeref = intel_runtime_pm_get(rpm); + + remain = arg->size; + user_data = u64_to_user_ptr(arg->data_ptr); + offset = offset_in_page(arg->offset); + for (idx = arg->offset >> PAGE_SHIFT; remain; idx++) { + unsigned long unwritten; + void __iomem *vaddr; + int length; + + length = remain; + if (offset + length > PAGE_SIZE) + length = PAGE_SIZE - offset; + + vaddr = i915_gem_object_lmem_io_map_page_atomic(obj, idx); + if (!vaddr) { + ret = -ENOMEM; + goto out_put; + } + + unwritten = __copy_from_user_inatomic_nocache((void __force*)vaddr + offset, + user_data, length); + io_mapping_unmap_atomic(vaddr); + if (unwritten) { + vaddr = i915_gem_object_lmem_io_map_page(obj, idx); + unwritten = copy_from_user((void __force*)vaddr + offset, + user_data, length); + io_mapping_unmap(vaddr); + } + if (unwritten) { + ret = -EFAULT; + goto out_put; + } + + remain -= length; + user_data += length; + offset = 0; + } + +out_put: + intel_runtime_pm_put(rpm, wakeref); + i915_gem_object_unlock_fence(obj, fence); +out_unpin: + i915_gem_object_unpin_pages(obj); + + return ret; +} + + const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops = { .flags = I915_GEM_OBJECT_IS_MAPPABLE, @@ -102,6 +188,7 @@ const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops = { .release = i915_gem_object_release_memory_region, .pread = lmem_pread, + .pwrite = lmem_pwrite, }; /* XXX: Time to vfunc your life up? 
*/ From patchwork Fri Aug 9 22:26:25 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087829 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AEC46912 for ; Fri, 9 Aug 2019 22:28:00 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9BC3B21C9A for ; Fri, 9 Aug 2019 22:28:00 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 8FE5E2223E; Fri, 9 Aug 2019 22:28:00 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 2785C21C9A for ; Fri, 9 Aug 2019 22:28:00 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id B48FF6EF20; Fri, 9 Aug 2019 22:27:29 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 3DF246EEFA; Fri, 9 Aug 2019 22:27:13 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:13 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927041" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:11 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 19/37] drm/i915: enumerate and init each supported region Date: Fri, 9 Aug 2019 23:26:25 +0100 Message-Id: <20190809222643.23142-20-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Abdiel Janulgue , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP From: Abdiel Janulgue Nothing to enumerate yet... 
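As context for the empty switch below: the intent is that each region type gains its own case as the backends land. The next patch in this series ("treat shmem as a region") fills in the first one, roughly:

	switch (type) {
	case INTEL_SMEM:
		mem = i915_gem_shmem_setup(i915);
		break;
	}

with stolen memory converted to the same pattern in the patch after that.
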
Signed-off-by: Abdiel Janulgue Signed-off-by: Matthew Auld Cc: Joonas Lahtinen --- drivers/gpu/drm/i915/i915_drv.h | 3 + drivers/gpu/drm/i915/i915_gem_gtt.c | 70 +++++++++++++++++-- .../gpu/drm/i915/selftests/mock_gem_device.c | 6 ++ 3 files changed, 72 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index f7be8cee4709..3d7da69f0d1b 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -2436,6 +2436,9 @@ int __must_check i915_gem_evict_for_node(struct i915_address_space *vm, unsigned int flags); int i915_gem_evict_vm(struct i915_address_space *vm); +void i915_gem_cleanup_memory_regions(struct drm_i915_private *i915); +int i915_gem_init_memory_regions(struct drm_i915_private *i915); + /* i915_gem_internal.c */ struct drm_i915_gem_object * i915_gem_object_create_internal(struct drm_i915_private *dev_priv, diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c index 83a02e773c58..a1dd3e7e1ad9 100644 --- a/drivers/gpu/drm/i915/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c @@ -2713,6 +2713,66 @@ int i915_init_ggtt(struct drm_i915_private *i915) return 0; } +void i915_gem_cleanup_memory_regions(struct drm_i915_private *i915) +{ + int i; + + i915_gem_cleanup_stolen(i915); + + for (i = 0; i < ARRAY_SIZE(i915->regions); ++i) { + struct intel_memory_region *region = i915->regions[i]; + + if (region) + intel_memory_region_destroy(region); + } +} + +int i915_gem_init_memory_regions(struct drm_i915_private *i915) +{ + int err, i; + + /* + * Initialise stolen early so that we may reserve preallocated + * objects for the BIOS to KMS transition. + */ + /* XXX: stolen will become a region at some point */ + err = i915_gem_init_stolen(i915); + if (err) + return err; + + for (i = 0; i < INTEL_MEMORY_UKNOWN; i++) { + struct intel_memory_region *mem = NULL; + u32 type; + + if (!HAS_REGION(i915, BIT(i))) + continue; + + type = MEMORY_TYPE_FROM_REGION(intel_region_map[i]); + switch (type) { + default: + break; + } + + if (IS_ERR(mem)) { + err = PTR_ERR(mem); + DRM_ERROR("Failed to setup region(%d) type=%d\n", err, type); + goto out_cleanup; + } + + mem->id = intel_region_map[i]; + mem->type = type; + mem->instance = MEMORY_INSTANCE_FROM_REGION(intel_region_map[i]); + + i915->regions[i] = mem; + } + + return 0; + +out_cleanup: + i915_gem_cleanup_memory_regions(i915); + return err; +} + static void ggtt_cleanup_hw(struct i915_ggtt *ggtt) { struct drm_i915_private *i915 = ggtt->vm.i915; @@ -2754,6 +2814,8 @@ void i915_ggtt_driver_release(struct drm_i915_private *i915) { struct pagevec *pvec; + i915_gem_cleanup_memory_regions(i915); + fini_aliasing_ppgtt(&i915->ggtt); ggtt_cleanup_hw(&i915->ggtt); @@ -2763,8 +2825,6 @@ void i915_ggtt_driver_release(struct drm_i915_private *i915) set_pages_array_wb(pvec->pages, pvec->nr); __pagevec_release(pvec); } - - i915_gem_cleanup_stolen(i915); } static unsigned int gen6_get_total_gtt_size(u16 snb_gmch_ctl) @@ -3204,11 +3264,7 @@ int i915_ggtt_init_hw(struct drm_i915_private *dev_priv) if (ret) return ret; - /* - * Initialise stolen early so that we may reserve preallocated - * objects for the BIOS to KMS transition. 
- */ - ret = i915_gem_init_stolen(dev_priv); + ret = i915_gem_init_memory_regions(dev_priv); if (ret) goto out_gtt_cleanup; diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c index cc0fe0a79330..c6944e17a2c5 100644 --- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c +++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c @@ -82,6 +82,8 @@ static void mock_device_release(struct drm_device *dev) i915_gemfs_fini(i915); + i915_gem_cleanup_memory_regions(i915); + drm_mode_config_cleanup(&i915->drm); drm_dev_fini(&i915->drm); @@ -219,6 +221,10 @@ struct drm_i915_private *mock_gem_device(void) WARN_ON(i915_gemfs_init(i915)); + err = i915_gem_init_memory_regions(i915); + if (err) + goto err_context; + return i915; err_context: From patchwork Fri Aug 9 22:26:26 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087833 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1A1A513B1 for ; Fri, 9 Aug 2019 22:28:04 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0619121FAC for ; Fri, 9 Aug 2019 22:28:04 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id EE9122228E; Fri, 9 Aug 2019 22:28:03 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 3089521FAC for ; Fri, 9 Aug 2019 22:28:03 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 4D78E6EF15; Fri, 9 Aug 2019 22:27:21 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id D08EB6EEFA; Fri, 9 Aug 2019 22:27:14 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:14 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927042" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:13 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 20/37] drm/i915: treat shmem as a region Date: Fri, 9 Aug 2019 23:26:26 +0100 Message-Id: <20190809222643.23142-21-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Abdiel Janulgue , 
dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Auld Cc: Joonas Lahtinen Cc: Abdiel Janulgue --- drivers/gpu/drm/i915/gem/i915_gem_phys.c | 6 +- drivers/gpu/drm/i915/gem/i915_gem_region.c | 14 +++- drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 68 ++++++++++++++----- drivers/gpu/drm/i915/i915_drv.c | 5 +- drivers/gpu/drm/i915/i915_drv.h | 4 +- drivers/gpu/drm/i915/i915_gem.c | 13 +--- drivers/gpu/drm/i915/i915_gem_gtt.c | 3 +- drivers/gpu/drm/i915/i915_pci.c | 29 +++++--- .../gpu/drm/i915/selftests/mock_gem_device.c | 6 +- 9 files changed, 99 insertions(+), 49 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c index 768356908160..f0e5e0df00ef 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c @@ -16,6 +16,7 @@ #include "gt/intel_gt.h" #include "i915_drv.h" #include "i915_gem_object.h" +#include "i915_gem_region.h" #include "i915_scatterlist.h" static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj) @@ -191,8 +192,11 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align) /* Perma-pin (until release) the physical set of pages */ __i915_gem_object_pin_pages(obj); - if (!IS_ERR_OR_NULL(pages)) + if (!IS_ERR_OR_NULL(pages)) { i915_gem_shmem_ops.put_pages(obj, pages); + /* XXX: where is the fput now though? */ + i915_gem_object_release_memory_region(obj); + } mutex_unlock(&obj->mm.lock); return 0; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c index 0d09da9f7168..592012bb9b14 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_region.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c @@ -6,6 +6,7 @@ #include "intel_memory_region.h" #include "i915_gem_region.h" #include "i915_drv.h" +#include "i915_trace.h" void i915_gem_object_put_pages_buddy(struct drm_i915_gem_object *obj, @@ -143,11 +144,22 @@ i915_gem_object_create_region(struct intel_memory_region *mem, GEM_BUG_ON(!size); GEM_BUG_ON(!IS_ALIGNED(size, I915_GTT_MIN_ALIGNMENT)); + /* + * There is a prevalence of the assumption that we fit the object's + * page count inside a 32bit _signed_ variable. Let's document this and + * catch if we ever need to fix it. In the meantime, if you do spot + * such a local variable, please consider fixing! 
+ */ + if (size >> PAGE_SHIFT > INT_MAX) return ERR_PTR(-E2BIG); if (overflows_type(size, obj->base.size)) return ERR_PTR(-E2BIG); - return mem->ops->create_object(mem, size, flags); + obj = mem->ops->create_object(mem, size, flags); + if (!IS_ERR(obj)) + trace_i915_gem_object_create(obj); + + return obj; } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index 9f5d903f7793..ac7a552349b4 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -7,7 +7,9 @@ #include #include +#include "gem/i915_gem_region.h" #include "i915_drv.h" +#include "i915_gemfs.h" #include "i915_gem_object.h" #include "i915_scatterlist.h" #include "i915_trace.h" @@ -26,6 +28,7 @@ static void check_release_pagevec(struct pagevec *pvec) static int shmem_get_pages(struct drm_i915_gem_object *obj) { struct drm_i915_private *i915 = to_i915(obj->base.dev); + struct intel_memory_region *mem = obj->mm.region; const unsigned long page_count = obj->base.size / PAGE_SIZE; unsigned long i; struct address_space *mapping; @@ -52,7 +55,7 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj) * If there's no chance of allocating enough pages for the whole * object, bail early. */ - if (page_count > totalram_pages()) + if (obj->base.size > resource_size(&mem->region)) return -ENOMEM; st = kmalloc(sizeof(*st), GFP_KERNEL); @@ -417,6 +420,8 @@ shmem_pwrite(struct drm_i915_gem_object *obj, static void shmem_release(struct drm_i915_gem_object *obj) { + i915_gem_object_release_memory_region(obj); + fput(obj->base.filp); } @@ -435,7 +440,7 @@ const struct drm_i915_gem_object_ops i915_gem_shmem_ops = { .release = shmem_release, }; -static int create_shmem(struct drm_i915_private *i915, +static int __create_shmem(struct drm_i915_private *i915, struct drm_gem_object *obj, size_t size) { @@ -456,31 +461,23 @@ static int create_shmem(struct drm_i915_private *i915, return 0; } -struct drm_i915_gem_object * -i915_gem_object_create_shmem(struct drm_i915_private *i915, u64 size) +static struct drm_i915_gem_object * +create_shmem(struct intel_memory_region *mem, + resource_size_t size, + unsigned flags) { + struct drm_i915_private *i915 = mem->i915; struct drm_i915_gem_object *obj; struct address_space *mapping; unsigned int cache_level; gfp_t mask; int ret; - /* There is a prevalence of the assumption that we fit the object's - * page count inside a 32bit _signed_ variable. Let's document this and - * catch if we ever need to fix it. In the meantime, if you do spot - * such a local variable, please consider fixing! 
- */ - if (size >> PAGE_SHIFT > INT_MAX) - return ERR_PTR(-E2BIG); - - if (overflows_type(size, obj->base.size)) - return ERR_PTR(-E2BIG); - obj = i915_gem_object_alloc(); if (!obj) return ERR_PTR(-ENOMEM); - ret = create_shmem(i915, &obj->base, size); + ret = __create_shmem(i915, &obj->base, size); if (ret) goto fail; @@ -519,7 +516,7 @@ i915_gem_object_create_shmem(struct drm_i915_private *i915, u64 size) i915_gem_object_set_cache_coherency(obj, cache_level); - trace_i915_gem_object_create(obj); + i915_gem_object_init_memory_region(obj, mem, 0); return obj; @@ -528,6 +525,13 @@ i915_gem_object_create_shmem(struct drm_i915_private *i915, u64 size) return ERR_PTR(ret); } +struct drm_i915_gem_object * +i915_gem_object_create_shmem(struct drm_i915_private *i915, u64 size) +{ + return i915_gem_object_create_region(i915->regions[INTEL_MEMORY_SMEM], + size, 0); +} + /* Allocate a new GEM object and fill it with the supplied data */ struct drm_i915_gem_object * i915_gem_object_create_shmem_from_data(struct drm_i915_private *dev_priv, @@ -578,3 +582,33 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *dev_priv, i915_gem_object_put(obj); return ERR_PTR(err); } + +static int init_shmem(struct intel_memory_region *mem) +{ + int err; + + err = i915_gemfs_init(mem->i915); + if (err) + DRM_NOTE("Unable to create a private tmpfs mount, hugepage support will be disabled(%d).\n", err); + + return 0; /* Don't error, we can simply fallback to the kernel mnt */ +} + +static void release_shmem(struct intel_memory_region *mem) +{ + i915_gemfs_fini(mem->i915); +} + +static const struct intel_memory_region_ops shmem_region_ops = { + .init = init_shmem, + .release = release_shmem, + .create_object = create_shmem, +}; + +struct intel_memory_region *i915_gem_shmem_setup(struct drm_i915_private *i915) +{ + return intel_memory_region_create(i915, 0, + totalram_pages() << PAGE_SHIFT, + I915_GTT_PAGE_SIZE_4K, 0, + &shmem_region_ops); +} diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c index caac7fc84987..22e87ae36621 100644 --- a/drivers/gpu/drm/i915/i915_drv.c +++ b/drivers/gpu/drm/i915/i915_drv.c @@ -511,9 +511,7 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv) intel_gt_init_early(&dev_priv->gt, dev_priv); - ret = i915_gem_init_early(dev_priv); - if (ret < 0) - goto err_workqueues; + i915_gem_init_early(dev_priv); /* This must be called before any calls to HAS_PCH_* */ intel_detect_pch(dev_priv); @@ -535,7 +533,6 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv) err_gem: i915_gem_cleanup_early(dev_priv); -err_workqueues: intel_gt_driver_late_release(&dev_priv->gt); i915_workqueues_cleanup(dev_priv); return ret; diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 3d7da69f0d1b..0f94f1f3ccaa 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -2295,11 +2295,13 @@ int i915_getparam_ioctl(struct drm_device *dev, void *data, int i915_gem_init_userptr(struct drm_i915_private *dev_priv); void i915_gem_cleanup_userptr(struct drm_i915_private *dev_priv); void i915_gem_sanitize(struct drm_i915_private *i915); -int i915_gem_init_early(struct drm_i915_private *dev_priv); +void i915_gem_init_early(struct drm_i915_private *dev_priv); void i915_gem_cleanup_early(struct drm_i915_private *dev_priv); int i915_gem_freeze(struct drm_i915_private *dev_priv); int i915_gem_freeze_late(struct drm_i915_private *dev_priv); +struct intel_memory_region *i915_gem_shmem_setup(struct 
drm_i915_private *i915); + static inline void i915_gem_drain_freed_objects(struct drm_i915_private *i915) { /* diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 96e143d133d1..2aa4fbe7edc0 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -45,7 +45,6 @@ #include "gem/i915_gem_context.h" #include "gem/i915_gem_ioctls.h" #include "gem/i915_gem_pm.h" -#include "gem/i915_gemfs.h" #include "gt/intel_engine_user.h" #include "gt/intel_gt.h" #include "gt/intel_gt_pm.h" @@ -1669,20 +1668,12 @@ static void i915_gem_init__mm(struct drm_i915_private *i915) i915_gem_init__objects(i915); } -int i915_gem_init_early(struct drm_i915_private *dev_priv) +void i915_gem_init_early(struct drm_i915_private *dev_priv) { - int err; - i915_gem_init__mm(dev_priv); i915_gem_init__pm(dev_priv); spin_lock_init(&dev_priv->fb_tracking.lock); - - err = i915_gemfs_init(dev_priv); - if (err) - DRM_NOTE("Unable to create a private tmpfs mount, hugepage support will be disabled(%d).\n", err); - - return 0; } void i915_gem_cleanup_early(struct drm_i915_private *dev_priv) @@ -1691,8 +1682,6 @@ void i915_gem_cleanup_early(struct drm_i915_private *dev_priv) GEM_BUG_ON(!llist_empty(&dev_priv->mm.free_list)); GEM_BUG_ON(atomic_read(&dev_priv->mm.free_count)); WARN_ON(dev_priv->mm.shrink_count); - - i915_gemfs_fini(dev_priv); } int i915_gem_freeze(struct drm_i915_private *dev_priv) diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c index a1dd3e7e1ad9..10332b876dd1 100644 --- a/drivers/gpu/drm/i915/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c @@ -2749,7 +2749,8 @@ int i915_gem_init_memory_regions(struct drm_i915_private *i915) type = MEMORY_TYPE_FROM_REGION(intel_region_map[i]); switch (type) { - default: + case INTEL_SMEM: + mem = i915_gem_shmem_setup(i915); break; } diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c index 1febda2a90e7..52ad21440f04 100644 --- a/drivers/gpu/drm/i915/i915_pci.c +++ b/drivers/gpu/drm/i915/i915_pci.c @@ -144,6 +144,9 @@ #define GEN_DEFAULT_PAGE_SIZES \ .page_sizes = I915_GTT_PAGE_SIZE_4K +#define GEN_DEFAULT_REGIONS \ + .memory_regions = REGION_SMEM + #define I830_FEATURES \ GEN(2), \ .is_mobile = 1, \ @@ -161,7 +164,8 @@ I9XX_PIPE_OFFSETS, \ I9XX_CURSOR_OFFSETS, \ I9XX_COLORS, \ - GEN_DEFAULT_PAGE_SIZES + GEN_DEFAULT_PAGE_SIZES, \ + GEN_DEFAULT_REGIONS #define I845_FEATURES \ GEN(2), \ @@ -178,7 +182,8 @@ I845_PIPE_OFFSETS, \ I845_CURSOR_OFFSETS, \ I9XX_COLORS, \ - GEN_DEFAULT_PAGE_SIZES + GEN_DEFAULT_PAGE_SIZES, \ + GEN_DEFAULT_REGIONS static const struct intel_device_info intel_i830_info = { I830_FEATURES, @@ -212,7 +217,8 @@ static const struct intel_device_info intel_i865g_info = { I9XX_PIPE_OFFSETS, \ I9XX_CURSOR_OFFSETS, \ I9XX_COLORS, \ - GEN_DEFAULT_PAGE_SIZES + GEN_DEFAULT_PAGE_SIZES, \ + GEN_DEFAULT_REGIONS static const struct intel_device_info intel_i915g_info = { GEN3_FEATURES, @@ -297,7 +303,8 @@ static const struct intel_device_info intel_pineview_m_info = { I9XX_PIPE_OFFSETS, \ I9XX_CURSOR_OFFSETS, \ I965_COLORS, \ - GEN_DEFAULT_PAGE_SIZES + GEN_DEFAULT_PAGE_SIZES, \ + GEN_DEFAULT_REGIONS static const struct intel_device_info intel_i965g_info = { GEN4_FEATURES, @@ -347,7 +354,8 @@ static const struct intel_device_info intel_gm45_info = { I9XX_PIPE_OFFSETS, \ I9XX_CURSOR_OFFSETS, \ ILK_COLORS, \ - GEN_DEFAULT_PAGE_SIZES + GEN_DEFAULT_PAGE_SIZES, \ + GEN_DEFAULT_REGIONS static const struct intel_device_info intel_ironlake_d_info = { 
GEN5_FEATURES, @@ -377,7 +385,8 @@ static const struct intel_device_info intel_ironlake_m_info = { I9XX_PIPE_OFFSETS, \ I9XX_CURSOR_OFFSETS, \ ILK_COLORS, \ - GEN_DEFAULT_PAGE_SIZES + GEN_DEFAULT_PAGE_SIZES, \ + GEN_DEFAULT_REGIONS #define SNB_D_PLATFORM \ GEN6_FEATURES, \ @@ -425,7 +434,8 @@ static const struct intel_device_info intel_sandybridge_m_gt2_info = { IVB_PIPE_OFFSETS, \ IVB_CURSOR_OFFSETS, \ IVB_COLORS, \ - GEN_DEFAULT_PAGE_SIZES + GEN_DEFAULT_PAGE_SIZES, \ + GEN_DEFAULT_REGIONS #define IVB_D_PLATFORM \ GEN7_FEATURES, \ @@ -486,6 +496,7 @@ static const struct intel_device_info intel_valleyview_info = { I9XX_CURSOR_OFFSETS, I965_COLORS, GEN_DEFAULT_PAGE_SIZES, + GEN_DEFAULT_REGIONS, }; #define G75_FEATURES \ @@ -582,6 +593,7 @@ static const struct intel_device_info intel_cherryview_info = { CHV_CURSOR_OFFSETS, CHV_COLORS, GEN_DEFAULT_PAGE_SIZES, + GEN_DEFAULT_REGIONS, }; #define GEN9_DEFAULT_PAGE_SIZES \ @@ -657,7 +669,8 @@ static const struct intel_device_info intel_skylake_gt4_info = { HSW_PIPE_OFFSETS, \ IVB_CURSOR_OFFSETS, \ IVB_COLORS, \ - GEN9_DEFAULT_PAGE_SIZES + GEN9_DEFAULT_PAGE_SIZES, \ + GEN_DEFAULT_REGIONS static const struct intel_device_info intel_broxton_info = { GEN9_LP_FEATURES, diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c index c6944e17a2c5..5fb72c0a84fb 100644 --- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c +++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c @@ -80,8 +80,6 @@ static void mock_device_release(struct drm_device *dev) destroy_workqueue(i915->wq); - i915_gemfs_fini(i915); - i915_gem_cleanup_memory_regions(i915); drm_mode_config_cleanup(&i915->drm); @@ -181,6 +179,8 @@ struct drm_i915_private *mock_gem_device(void) I915_GTT_PAGE_SIZE_64K | I915_GTT_PAGE_SIZE_2M; + mkwrite_device_info(i915)->memory_regions = REGION_SMEM; + mock_uncore_init(&i915->uncore); i915_gem_init__mm(i915); intel_gt_init_early(&i915->gt, i915); @@ -219,8 +219,6 @@ struct drm_i915_private *mock_gem_device(void) intel_engines_driver_register(i915); mutex_unlock(&i915->drm.struct_mutex); - WARN_ON(i915_gemfs_init(i915)); - err = i915_gem_init_memory_regions(i915); if (err) goto err_context; From patchwork Fri Aug 9 22:26:27 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087819 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5642C912 for ; Fri, 9 Aug 2019 22:27:53 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 43A35205FD for ; Fri, 9 Aug 2019 22:27:53 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 37E2E21FAC; Fri, 9 Aug 2019 22:27:53 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id B1B8D205FD for ; Fri, 9 Aug 2019 22:27:52 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with 
ESMTP id E64606EEFA; Fri, 9 Aug 2019 22:27:21 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 211736EF0C; Fri, 9 Aug 2019 22:27:16 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:16 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927047" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:14 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 21/37] drm/i915: treat stolen as a region Date: Fri, 9 Aug 2019 23:26:27 +0100 Message-Id: <20190809222643.23142-22-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Abdiel Janulgue , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP Convert stolen memory over to a region object. Still leaves open the question of what to do with pre-allocated objects... Signed-off-by: Matthew Auld Cc: Joonas Lahtinen Cc: Abdiel Janulgue --- drivers/gpu/drm/i915/gem/i915_gem_region.c | 2 +- drivers/gpu/drm/i915/gem/i915_gem_stolen.c | 71 +++++++++++++++++++--- drivers/gpu/drm/i915/gem/i915_gem_stolen.h | 3 +- drivers/gpu/drm/i915/i915_gem_gtt.c | 14 +---- drivers/gpu/drm/i915/i915_pci.c | 2 +- 5 files changed, 68 insertions(+), 24 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c index 592012bb9b14..b6f18c3b9eed 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_region.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c @@ -158,7 +158,7 @@ i915_gem_object_create_region(struct intel_memory_region *mem, return ERR_PTR(-E2BIG); obj = mem->ops->create_object(mem, size, flags); - if (!IS_ERR(obj)) + if (!IS_ERR_OR_NULL(obj)) trace_i915_gem_object_create(obj); return obj; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c index 696dea5ec7c6..c93a3fac90f6 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c @@ -10,6 +10,7 @@ #include #include +#include "gem/i915_gem_region.h" #include "i915_drv.h" #include "i915_gem_stolen.h" @@ -150,7 +151,7 @@ static int i915_adjust_stolen(struct drm_i915_private *dev_priv, return 0; } -void i915_gem_cleanup_stolen(struct drm_i915_private *dev_priv) +static void i915_gem_cleanup_stolen(struct drm_i915_private *dev_priv) { if (!drm_mm_initialized(&dev_priv->mm.stolen)) return; @@ -355,7 +356,7 @@ static void icl_get_stolen_reserved(struct drm_i915_private *i915, } } -int i915_gem_init_stolen(struct drm_i915_private *dev_priv) +static int i915_gem_init_stolen(struct drm_i915_private *dev_priv) { resource_size_t reserved_base, stolen_top; resource_size_t reserved_total, reserved_size; @@ -532,6 +533,9 @@
i915_gem_object_release_stolen(struct drm_i915_gem_object *obj) i915_gem_stolen_remove_node(dev_priv, stolen); kfree(stolen); + + if (obj->mm.region) + i915_gem_object_release_memory_region(obj); } static const struct drm_i915_gem_object_ops i915_gem_object_stolen_ops = { @@ -541,8 +545,9 @@ static const struct drm_i915_gem_object_ops i915_gem_object_stolen_ops = { }; static struct drm_i915_gem_object * -_i915_gem_object_create_stolen(struct drm_i915_private *dev_priv, - struct drm_mm_node *stolen) +__i915_gem_object_create_stolen(struct drm_i915_private *dev_priv, + struct drm_mm_node *stolen, + struct intel_memory_region *mem) { struct drm_i915_gem_object *obj; unsigned int cache_level; @@ -559,6 +564,9 @@ _i915_gem_object_create_stolen(struct drm_i915_private *dev_priv, cache_level = HAS_LLC(dev_priv) ? I915_CACHE_LLC : I915_CACHE_NONE; i915_gem_object_set_cache_coherency(obj, cache_level); + if (mem) + i915_gem_object_init_memory_region(obj, mem, 0); + if (i915_gem_object_pin_pages(obj)) goto cleanup; @@ -569,10 +577,12 @@ _i915_gem_object_create_stolen(struct drm_i915_private *dev_priv, return NULL; } -struct drm_i915_gem_object * -i915_gem_object_create_stolen(struct drm_i915_private *dev_priv, - resource_size_t size) +static struct drm_i915_gem_object * +_i915_gem_object_create_stolen(struct intel_memory_region *mem, + resource_size_t size, + unsigned int flags) { + struct drm_i915_private *dev_priv = mem->i915; struct drm_i915_gem_object *obj; struct drm_mm_node *stolen; int ret; @@ -593,7 +603,7 @@ i915_gem_object_create_stolen(struct drm_i915_private *dev_priv, return NULL; } - obj = _i915_gem_object_create_stolen(dev_priv, stolen); + obj = __i915_gem_object_create_stolen(dev_priv, stolen, mem); if (obj) return obj; @@ -602,6 +612,49 @@ i915_gem_object_create_stolen(struct drm_i915_private *dev_priv, return NULL; } +struct drm_i915_gem_object * +i915_gem_object_create_stolen(struct drm_i915_private *dev_priv, + resource_size_t size) +{ + struct drm_i915_gem_object *obj; + + obj = i915_gem_object_create_region(dev_priv->regions[INTEL_MEMORY_STOLEN], + size, I915_BO_ALLOC_CONTIGUOUS); + if (IS_ERR(obj)) + return NULL; + + return obj; +} + +static int init_stolen(struct intel_memory_region *mem) +{ + /* + * Initialise stolen early so that we may reserve preallocated + * objects for the BIOS to KMS transition. 
+ */ + return i915_gem_init_stolen(mem->i915); +} + +static void release_stolen(struct intel_memory_region *mem) +{ + i915_gem_cleanup_stolen(mem->i915); +} + +static const struct intel_memory_region_ops i915_region_stolen_ops = { + .init = init_stolen, + .release = release_stolen, + .create_object = _i915_gem_object_create_stolen, +}; + +struct intel_memory_region *i915_gem_stolen_setup(struct drm_i915_private *i915) +{ + return intel_memory_region_create(i915, + intel_graphics_stolen_res.start, + resource_size(&intel_graphics_stolen_res), + I915_GTT_PAGE_SIZE_4K, 0, + &i915_region_stolen_ops); +} + struct drm_i915_gem_object * i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *dev_priv, resource_size_t stolen_offset, @@ -643,7 +696,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *dev_priv return NULL; } - obj = _i915_gem_object_create_stolen(dev_priv, stolen); + obj = __i915_gem_object_create_stolen(dev_priv, stolen, NULL); if (obj == NULL) { DRM_DEBUG_DRIVER("failed to allocate stolen object\n"); i915_gem_stolen_remove_node(dev_priv, stolen); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.h b/drivers/gpu/drm/i915/gem/i915_gem_stolen.h index 2289644d8604..c1040627fbf3 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.h @@ -21,8 +21,7 @@ int i915_gem_stolen_insert_node_in_range(struct drm_i915_private *dev_priv, u64 end); void i915_gem_stolen_remove_node(struct drm_i915_private *dev_priv, struct drm_mm_node *node); -int i915_gem_init_stolen(struct drm_i915_private *dev_priv); -void i915_gem_cleanup_stolen(struct drm_i915_private *dev_priv); +struct intel_memory_region *i915_gem_stolen_setup(struct drm_i915_private *i915); struct drm_i915_gem_object * i915_gem_object_create_stolen(struct drm_i915_private *dev_priv, resource_size_t size); diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c index 10332b876dd1..5bcf71b18e5f 100644 --- a/drivers/gpu/drm/i915/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c @@ -2717,8 +2717,6 @@ void i915_gem_cleanup_memory_regions(struct drm_i915_private *i915) { int i; - i915_gem_cleanup_stolen(i915); - for (i = 0; i < ARRAY_SIZE(i915->regions); ++i) { struct intel_memory_region *region = i915->regions[i]; @@ -2731,15 +2729,6 @@ int i915_gem_init_memory_regions(struct drm_i915_private *i915) { int err, i; - /* - * Initialise stolen early so that we may reserve preallocated - * objects for the BIOS to KMS transition. 
- */ - /* XXX: stolen will become a region at some point */ - err = i915_gem_init_stolen(i915); - if (err) - return err; - for (i = 0; i < INTEL_MEMORY_UKNOWN; i++) { struct intel_memory_region *mem = NULL; u32 type; @@ -2752,6 +2741,9 @@ int i915_gem_init_memory_regions(struct drm_i915_private *i915) case INTEL_SMEM: mem = i915_gem_shmem_setup(i915); break; + case INTEL_STOLEN: + mem = i915_gem_stolen_setup(i915); + break; } if (IS_ERR(mem)) { diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c index 52ad21440f04..05547afde974 100644 --- a/drivers/gpu/drm/i915/i915_pci.c +++ b/drivers/gpu/drm/i915/i915_pci.c @@ -145,7 +145,7 @@ .page_sizes = I915_GTT_PAGE_SIZE_4K #define GEN_DEFAULT_REGIONS \ - .memory_regions = REGION_SMEM + .memory_regions = REGION_SMEM | REGION_STOLEN #define I830_FEATURES \ GEN(2), \ From patchwork Fri Aug 9 22:26:28 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087837 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DA05713B1 for ; Fri, 9 Aug 2019 22:28:06 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C899222376 for ; Fri, 9 Aug 2019 22:28:06 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B8FE1223A1; Fri, 9 Aug 2019 22:28:06 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 838F822376 for ; Fri, 9 Aug 2019 22:28:06 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C6E466EF18; Fri, 9 Aug 2019 22:27:22 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 381A76EF0F; Fri, 9 Aug 2019 22:27:17 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:17 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927061" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:16 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 22/37] drm/i915: define HAS_MAPPABLE_APERTURE Date: Fri, 9 Aug 2019 23:26:28 +0100 Message-Id: <20190809222643.23142-23-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: 
List-Help: List-Subscribe: , Cc: Daniele Ceraolo Spurio , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP From: Daniele Ceraolo Spurio The following patches in the series will use it to avoid certain operations when aperture is not available in HW. Signed-off-by: Daniele Ceraolo Spurio Cc: Matthew Auld --- drivers/gpu/drm/i915/i915_drv.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 0f94f1f3ccaa..182ed6b46aa5 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -2168,6 +2168,8 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915, #define OVERLAY_NEEDS_PHYSICAL(dev_priv) \ (INTEL_INFO(dev_priv)->display.overlay_needs_physical) +#define HAS_MAPPABLE_APERTURE(dev_priv) (dev_priv->ggtt.mappable_end > 0) + /* Early gen2 have a totally busted CS tlb and require pinned batches. */ #define HAS_BROKEN_CS_TLB(dev_priv) (IS_I830(dev_priv) || IS_I845G(dev_priv)) From patchwork Fri Aug 9 22:26:29 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087827 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D3C10912 for ; Fri, 9 Aug 2019 22:27:58 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C02B2205FD for ; Fri, 9 Aug 2019 22:27:58 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B43B921FAC; Fri, 9 Aug 2019 22:27:58 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 73F01205FD for ; Fri, 9 Aug 2019 22:27:58 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 465D26EF1E; Fri, 9 Aug 2019 22:27:29 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 4AC5B6EF12; Fri, 9 Aug 2019 22:27:18 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:18 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927066" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:17 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 23/37] drm/i915: do not map aperture if it is not available. 
Date: Fri, 9 Aug 2019 23:26:29 +0100 Message-Id: <20190809222643.23142-24-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Daniele Ceraolo Spurio , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP From: Daniele Ceraolo Spurio Skip both setup and cleanup of the aperture mapping if the HW doesn't have an aperture bar. Signed-off-by: Daniele Ceraolo Spurio Cc: Matthew Auld --- drivers/gpu/drm/i915/i915_gem_gtt.c | 36 ++++++++++++++++++----------- 1 file changed, 22 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c index 5bcf71b18e5f..dd28c54527e3 100644 --- a/drivers/gpu/drm/i915/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c @@ -2795,8 +2795,10 @@ static void ggtt_cleanup_hw(struct i915_ggtt *ggtt) mutex_unlock(&i915->drm.struct_mutex); - arch_phys_wc_del(ggtt->mtrr); - io_mapping_fini(&ggtt->iomap); + if (HAS_MAPPABLE_APERTURE(i915)) { + arch_phys_wc_del(ggtt->mtrr); + io_mapping_fini(&ggtt->iomap); + } } /** @@ -2992,10 +2994,13 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt) int err; /* TODO: We're not aware of mappable constraints on gen8 yet */ - ggtt->gmadr = - (struct resource) DEFINE_RES_MEM(pci_resource_start(pdev, 2), - pci_resource_len(pdev, 2)); - ggtt->mappable_end = resource_size(&ggtt->gmadr); + /* FIXME: We probably need to add do device_info or runtime_info */ + if (!HAS_LMEM(dev_priv)) { + ggtt->gmadr = + (struct resource) DEFINE_RES_MEM(pci_resource_start(pdev, 2), + pci_resource_len(pdev, 2)); + ggtt->mappable_end = resource_size(&ggtt->gmadr); + } err = pci_set_dma_mask(pdev, DMA_BIT_MASK(39)); if (!err) @@ -3220,15 +3225,18 @@ static int ggtt_init_hw(struct i915_ggtt *ggtt) if (!HAS_LLC(i915) && !HAS_PPGTT(i915)) ggtt->vm.mm.color_adjust = i915_gtt_color_adjust; - if (!io_mapping_init_wc(&ggtt->iomap, - ggtt->gmadr.start, - ggtt->mappable_end)) { - ggtt->vm.cleanup(&ggtt->vm); - ret = -EIO; - goto out; - } + if (HAS_MAPPABLE_APERTURE(i915)) { + if (!io_mapping_init_wc(&ggtt->iomap, + ggtt->gmadr.start, + ggtt->mappable_end)) { + ggtt->vm.cleanup(&ggtt->vm); + ret = -EIO; + goto out; + } - ggtt->mtrr = arch_phys_wc_add(ggtt->gmadr.start, ggtt->mappable_end); + ggtt->mtrr = arch_phys_wc_add(ggtt->gmadr.start, + ggtt->mappable_end); + } i915_ggtt_init_fences(ggtt); From patchwork Fri Aug 9 22:26:30 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087813 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 05EFE14D5 for ; Fri, 9 Aug 2019 22:27:48 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E7385205FD for ; Fri, 9 Aug 2019 22:27:47 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id DBD1521FAC; Fri, 9 Aug 2019 22:27:47 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on 
pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id A19B5205FD for ; Fri, 9 Aug 2019 22:27:47 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 6B5F46EF16; Fri, 9 Aug 2019 22:27:22 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 86F0D6EEFA; Fri, 9 Aug 2019 22:27:19 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:19 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927071" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:18 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 24/37] drm/i915: set num_fence_regs to 0 if there is no aperture Date: Fri, 9 Aug 2019 23:26:30 +0100 Message-Id: <20190809222643.23142-25-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Stuart Summers , Daniele Ceraolo Spurio , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP From: Daniele Ceraolo Spurio We can't fence anything without aperture. 
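Concretely, the fence register count is forced to zero when there is no mappable aperture, so any fence-dependent path must now check the count before use. A minimal caller-side sketch (the helper name here is illustrative, not part of this patch; the real checks land in the selftests later in the series):

	/* Sketch: fence-dependent work must be skipped when no fence
	 * registers exist; ggtt.num_fences is 0 without an aperture.
	 */
	static bool can_use_fences(struct drm_i915_private *i915)
	{
		return i915->ggtt.num_fences > 0;
	}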
Signed-off-by: Daniele Ceraolo Spurio Signed-off-by: Stuart Summers Cc: Matthew Auld --- drivers/gpu/drm/i915/i915_gem_fence_reg.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/i915/i915_gem_fence_reg.c b/drivers/gpu/drm/i915/i915_gem_fence_reg.c index bcac359ec661..bb7d9321cadf 100644 --- a/drivers/gpu/drm/i915/i915_gem_fence_reg.c +++ b/drivers/gpu/drm/i915/i915_gem_fence_reg.c @@ -808,8 +808,10 @@ void i915_ggtt_init_fences(struct i915_ggtt *ggtt) detect_bit_6_swizzle(i915); - if (INTEL_GEN(i915) >= 7 && - !(IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915))) + if (!HAS_MAPPABLE_APERTURE(i915)) + num_fences = 0; + else if (INTEL_GEN(i915) >= 7 && + !(IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915))) num_fences = 32; else if (INTEL_GEN(i915) >= 4 || IS_I945G(i915) || IS_I945GM(i915) || From patchwork Fri Aug 9 22:26:31 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087839 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2016F13B1 for ; Fri, 9 Aug 2019 22:28:09 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0D37421FAC for ; Fri, 9 Aug 2019 22:28:09 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 0162E2228E; Fri, 9 Aug 2019 22:28:08 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id A736521FAC for ; Fri, 9 Aug 2019 22:28:08 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C2DCA6EF1B; Fri, 9 Aug 2019 22:27:24 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id C2BF66EF0C; Fri, 9 Aug 2019 22:27:20 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:20 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927076" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:19 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 25/37] drm/i915/selftests: check for missing aperture Date: Fri, 9 Aug 2019 23:26:31 +0100 Message-Id: <20190809222643.23142-26-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: 
List-Help: List-Subscribe: , Cc: Daniele Ceraolo Spurio , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP We may be missing support for the mappable aperture on some platforms. Signed-off-by: Matthew Auld Cc: Daniele Ceraolo Spurio --- .../drm/i915/gem/selftests/i915_gem_coherency.c | 5 ++++- drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c | 3 +++ drivers/gpu/drm/i915/gt/selftest_hangcheck.c | 14 ++++++++++---- drivers/gpu/drm/i915/selftests/i915_gem.c | 3 +++ drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 3 +++ 5 files changed, 23 insertions(+), 5 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c index a1a4b53cdc4a..42db49ff9b8e 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c @@ -244,7 +244,10 @@ static bool always_valid(struct drm_i915_private *i915) static bool needs_fence_registers(struct drm_i915_private *i915) { - return !intel_gt_is_wedged(&i915->gt); + if (intel_gt_is_wedged(&i915->gt)) + return false; + + return i915->ggtt.num_fences; } static bool needs_mi_store_dword(struct drm_i915_private *i915) diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c index 50aa7e95124d..fa83745abcc0 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c @@ -184,6 +184,9 @@ static int igt_partial_tiling(void *arg) int tiling; int err; + if (!HAS_MAPPABLE_APERTURE(i915)) + return 0; + /* We want to check the page mapping and fencing of a large object * mmapped through the GTT. The object we create is larger than can * possibly be mmaped as a whole, and so we must use partial GGTT vma. diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c index 4484b4447db1..233810da5387 100644 --- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c +++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c @@ -1179,8 +1179,12 @@ static int __igt_reset_evict_vma(struct intel_gt *gt, struct i915_request *rq; struct evict_vma arg; struct hang h; + unsigned int pin_flags; int err; + if (!gt->ggtt->num_fences && flags & EXEC_OBJECT_NEEDS_FENCE) + return 0; + if (!engine || !intel_engine_can_store_dword(engine)) return 0; @@ -1217,10 +1221,12 @@ static int __igt_reset_evict_vma(struct intel_gt *gt, goto out_obj; } - err = i915_vma_pin(arg.vma, 0, 0, - i915_vma_is_ggtt(arg.vma) ? - PIN_GLOBAL | PIN_MAPPABLE : - PIN_USER); + pin_flags = i915_vma_is_ggtt(arg.vma) ? 
PIN_GLOBAL : PIN_USER; + + if (flags & EXEC_OBJECT_NEEDS_FENCE) + pin_flags |= PIN_MAPPABLE; + + err = i915_vma_pin(arg.vma, 0, 0, pin_flags); if (err) { i915_request_add(rq); goto out_obj; diff --git a/drivers/gpu/drm/i915/selftests/i915_gem.c b/drivers/gpu/drm/i915/selftests/i915_gem.c index bb6dd54a6ff3..0e62d5e07fcc 100644 --- a/drivers/gpu/drm/i915/selftests/i915_gem.c +++ b/drivers/gpu/drm/i915/selftests/i915_gem.c @@ -42,6 +42,9 @@ static void trash_stolen(struct drm_i915_private *i915) unsigned long page; u32 prng = 0x12345678; + if (!HAS_MAPPABLE_APERTURE(i915)) + return; + for (page = 0; page < size; page += PAGE_SIZE) { const dma_addr_t dma = i915->dsm.start + page; u32 __iomem *s; diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c index 81850e3a7d2d..2b72276d4e97 100644 --- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c @@ -1147,6 +1147,9 @@ static int igt_ggtt_page(void *arg) unsigned int *order, n; int err; + if (!HAS_MAPPABLE_APERTURE(i915)) + return 0; + mutex_lock(&i915->drm.struct_mutex); obj = i915_gem_object_create_internal(i915, PAGE_SIZE); From patchwork Fri Aug 9 22:26:32 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087851 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5511213B1 for ; Fri, 9 Aug 2019 22:28:18 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 41EA321C9A for ; Fri, 9 Aug 2019 22:28:18 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 358012223E; Fri, 9 Aug 2019 22:28:18 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id BB01E21C9A for ; Fri, 9 Aug 2019 22:28:17 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id DE0636EF27; Fri, 9 Aug 2019 22:27:30 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 44A6F6EF0F; Fri, 9 Aug 2019 22:27:22 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:22 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927082" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:20 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 26/37] drm/i915: error capture with no ggtt slot Date: Fri, 9 Aug 2019 23:26:32 +0100 Message-Id: <20190809222643.23142-27-matthew.auld@intel.com> X-Mailer: 
git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Abdiel Janulgue , Daniele Ceraolo Spurio , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP From: Daniele Ceraolo Spurio If the aperture is not available in HW we can't use a ggtt slot and wc copy, so fall back to regular kmap. Signed-off-by: Daniele Ceraolo Spurio Signed-off-by: Abdiel Janulgue --- drivers/gpu/drm/i915/i915_gem_gtt.c | 19 ++++---- drivers/gpu/drm/i915/i915_gpu_error.c | 64 ++++++++++++++++++++++----- 2 files changed, 63 insertions(+), 20 deletions(-) diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c index dd28c54527e3..0819ac9837dc 100644 --- a/drivers/gpu/drm/i915/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c @@ -2630,7 +2630,8 @@ static void ggtt_release_guc_top(struct i915_ggtt *ggtt) static void cleanup_init_ggtt(struct i915_ggtt *ggtt) { ggtt_release_guc_top(ggtt); - drm_mm_remove_node(&ggtt->error_capture); + if (drm_mm_node_allocated(&ggtt->error_capture)) + drm_mm_remove_node(&ggtt->error_capture); } static int init_ggtt(struct i915_ggtt *ggtt) @@ -2661,13 +2662,15 @@ static int init_ggtt(struct i915_ggtt *ggtt) if (ret) return ret; - /* Reserve a mappable slot for our lockless error capture */ - ret = drm_mm_insert_node_in_range(&ggtt->vm.mm, &ggtt->error_capture, - PAGE_SIZE, 0, I915_COLOR_UNEVICTABLE, - 0, ggtt->mappable_end, - DRM_MM_INSERT_LOW); - if (ret) - return ret; + if (HAS_MAPPABLE_APERTURE(ggtt->vm.i915)) { + /* Reserve a mappable slot for our lockless error capture */ + ret = drm_mm_insert_node_in_range(&ggtt->vm.mm, &ggtt->error_capture, + PAGE_SIZE, 0, I915_COLOR_UNEVICTABLE, + 0, ggtt->mappable_end, + DRM_MM_INSERT_LOW); + if (ret) + return ret; + } /* * The upper portion of the GuC address space has a sizeable hole diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c index 92986d3f6995..19eb5ccba387 100644 --- a/drivers/gpu/drm/i915/i915_gpu_error.c +++ b/drivers/gpu/drm/i915/i915_gpu_error.c @@ -40,6 +40,7 @@ #include "display/intel_overlay.h" #include "gem/i915_gem_context.h" +#include "gem/i915_gem_lmem.h" #include "i915_drv.h" #include "i915_gpu_error.h" @@ -235,6 +236,7 @@ struct compress { struct pagevec pool; struct z_stream_s zstream; void *tmp; + bool wc; }; static bool compress_init(struct compress *c) @@ -292,7 +294,7 @@ static int compress_page(struct compress *c, struct z_stream_s *zstream = &c->zstream; zstream->next_in = src; - if (c->tmp && i915_memcpy_from_wc(c->tmp, src, PAGE_SIZE)) + if (c->wc && c->tmp && i915_memcpy_from_wc(c->tmp, src, PAGE_SIZE)) zstream->next_in = c->tmp; zstream->avail_in = PAGE_SIZE; @@ -367,6 +369,7 @@ static void err_compression_marker(struct drm_i915_error_state_buf *m) struct compress { struct pagevec pool; + bool wc; }; static bool compress_init(struct compress *c) @@ -389,7 +392,7 @@ static int compress_page(struct compress *c, if (!ptr) return -ENOMEM; - if (!i915_memcpy_from_wc(ptr, src, PAGE_SIZE)) + if (!(c->wc && i915_memcpy_from_wc(ptr, src, PAGE_SIZE))) memcpy(ptr, src, PAGE_SIZE); dst->pages[dst->page_count++] = ptr; @@ -963,7 +966,6 @@ 
i915_error_object_create(struct drm_i915_private *i915, struct drm_i915_error_object *dst; unsigned long num_pages; struct sgt_iter iter; - dma_addr_t dma; int ret; might_sleep(); @@ -988,17 +990,53 @@ i915_error_object_create(struct drm_i915_private *i915, dst->page_count = 0; dst->unused = 0; + compress->wc = i915_gem_object_is_lmem(vma->obj) || + drm_mm_node_allocated(&ggtt->error_capture); + ret = -EINVAL; - for_each_sgt_dma(dma, iter, vma->pages) { + if (drm_mm_node_allocated(&ggtt->error_capture)) { void __iomem *s; + dma_addr_t dma; - ggtt->vm.insert_page(&ggtt->vm, dma, slot, I915_CACHE_NONE, 0); + for_each_sgt_dma(dma, iter, vma->pages) { + ggtt->vm.insert_page(&ggtt->vm, dma, slot, + I915_CACHE_NONE, 0); - s = io_mapping_map_wc(&ggtt->iomap, slot, PAGE_SIZE); - ret = compress_page(compress, (void __force *)s, dst); - io_mapping_unmap(s); - if (ret) - break; + s = io_mapping_map_wc(&ggtt->iomap, slot, PAGE_SIZE); + ret = compress_page(compress, (void __force *)s, dst); + io_mapping_unmap(s); + if (ret) + break; + } + } else if (i915_gem_object_is_lmem(vma->obj)) { + void *s; + dma_addr_t dma; + struct intel_memory_region *mem = vma->obj->mm.region; + + for_each_sgt_dma(dma, iter, vma->pages) { + s = io_mapping_map_atomic_wc(&mem->iomap, dma); + ret = compress_page(compress, s, dst); + io_mapping_unmap_atomic(s); + + if (ret) + break; + } + } else { + void *s; + struct page *page; + + for_each_sgt_page(page, iter, vma->pages) { + drm_clflush_pages(&page, 1); + + s = kmap_atomic(page); + ret = compress_page(compress, s, dst); + kunmap_atomic(s); + + if (ret) + break; + + drm_clflush_pages(&page, 1); + } } if (ret || compress_flush(compress, dst)) { @@ -1664,9 +1702,11 @@ static unsigned long capture_find_epoch(const struct i915_gpu_state *error) static void capture_finish(struct i915_gpu_state *error) { struct i915_ggtt *ggtt = &error->i915->ggtt; - const u64 slot = ggtt->error_capture.start; - ggtt->vm.clear_range(&ggtt->vm, slot, PAGE_SIZE); + if (drm_mm_node_allocated(&ggtt->error_capture)) { + const u64 slot = ggtt->error_capture.start; + ggtt->vm.clear_range(&ggtt->vm, slot, PAGE_SIZE); + } } #define DAY_AS_SECONDS(x) (24 * 60 * 60 * (x)) From patchwork Fri Aug 9 22:26:33 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087841 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5D398912 for ; Fri, 9 Aug 2019 22:28:10 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 49C0C21FAC for ; Fri, 9 Aug 2019 22:28:10 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 3DF592228E; Fri, 9 Aug 2019 22:28:10 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 03E9521FAC for ; Fri, 9 Aug 2019 22:28:10 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id CC5196EF02; Fri, 9 Aug 
2019 22:27:27 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 83CFD6EF0C; Fri, 9 Aug 2019 22:27:23 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:23 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927087" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:22 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 27/37] drm/i915: Don't try to place HWS in non-existing mappable region Date: Fri, 9 Aug 2019 23:26:33 +0100 Message-Id: <20190809222643.23142-28-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Daniele Ceraolo Spurio , dri-devel@lists.freedesktop.org, Michal Wajdeczko Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP From: Michal Wajdeczko HWS placement restrictions can't just rely on HAS_LLC flag. Signed-off-by: Michal Wajdeczko Cc: Daniele Ceraolo Spurio --- drivers/gpu/drm/i915/gt/intel_engine_cs.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c index 634ef45b77da..46658ecd8975 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c @@ -512,7 +512,7 @@ static int pin_ggtt_status_page(struct intel_engine_cs *engine, unsigned int flags; flags = PIN_GLOBAL; - if (!HAS_LLC(engine->i915)) + if (!HAS_LLC(engine->i915) && HAS_MAPPABLE_APERTURE(engine->i915)) /* * On g33, we cannot place HWS above 256MiB, so * restrict its pinning to the low mappable arena. 
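The hunk above shows only the changed condition; the statement it guards is outside the diff context. Assuming the g33 comment's intent, the resulting flag selection reads roughly as follows (a sketch, not part of the posted hunk):

	flags = PIN_GLOBAL;
	if (!HAS_LLC(engine->i915) && HAS_MAPPABLE_APERTURE(engine->i915))
		/* On g33, HWS cannot be placed above 256MiB, so restrict
		 * its pinning to the low mappable arena.
		 */
		flags |= PIN_MAPPABLE;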
From patchwork Fri Aug 9 22:26:34 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087831 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E698213B1 for ; Fri, 9 Aug 2019 22:28:01 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D3FF021FAC for ; Fri, 9 Aug 2019 22:28:01 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id C870F2228E; Fri, 9 Aug 2019 22:28:01 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 8672921FAC for ; Fri, 9 Aug 2019 22:28:01 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 92F686EF12; Fri, 9 Aug 2019 22:27:28 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 992276EF1C; Fri, 9 Aug 2019 22:27:24 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:24 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927095" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:23 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 28/37] drm/i915: check for missing aperture in insert_mappable_node Date: Fri, 9 Aug 2019 23:26:34 +0100 Message-Id: <20190809222643.23142-29-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: CQ Tang , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP From: CQ Tang Signed-off-by: CQ Tang Signed-off-by: Matthew Auld --- drivers/gpu/drm/i915/i915_gem.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 2aa4fbe7edc0..af63d1a0af14 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -64,6 +64,9 @@ static int insert_mappable_node(struct i915_ggtt *ggtt, struct drm_mm_node *node, u32 size) { + if (!ggtt->mappable_end) + return -ENOSPC; + memset(node, 0, sizeof(*node)); return drm_mm_insert_node_in_range(&ggtt->vm.mm, node, size, 0, I915_COLOR_UNEVICTABLE, From patchwork Fri Aug 9 
22:26:35 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087843 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4AFE0912 for ; Fri, 9 Aug 2019 22:28:13 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3488721FAC for ; Fri, 9 Aug 2019 22:28:13 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2894B22376; Fri, 9 Aug 2019 22:28:13 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 21C3521FAC for ; Fri, 9 Aug 2019 22:28:12 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 6C23A6EF24; Fri, 9 Aug 2019 22:27:30 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 947736EF0C; Fri, 9 Aug 2019 22:27:26 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:26 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927104" Received: from jmath3-mobl1.ger.corp.intel.com (HELO mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86]) by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:24 -0700 From: Matthew Auld To: intel-gfx@lists.freedesktop.org Subject: [PATCH v3 29/37] drm/i915: Allow i915 to manage the vma offset nodes instead of drm core Date: Fri, 9 Aug 2019 23:26:35 +0100 Message-Id: <20190809222643.23142-30-matthew.auld@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com> References: <20190809222643.23142-1-matthew.auld@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Abdiel Janulgue , Daniele Ceraolo Spurio , dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" X-Virus-Scanned: ClamAV using ClamSMTP From: Abdiel Janulgue This enables us to store extra data within vma->vm_private_data and assign the pagefault ops for each mmap instance. We replace the core drm_gem_mmap implementation to overcome the limitation of having only a single offset node per gem object, allowing us to have multiple offsets per object. This enables a mapping instance to use unique fault-handlers per object.
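The key piece of new state is a per-mapping offset record that replaces the single per-object vma node; a condensed view, with fields abridged from the full definition in the diff below:

	struct i915_mmap_offset {
		struct drm_vma_offset_node vma_node; /* one offset node per mapping */
		struct drm_i915_gem_object *obj;     /* back-pointer to the object */
		enum i915_mmap_type mmap_type;       /* selects the fault handling */
		struct kref ref;                     /* mapping lifetime */
	};

	/* A fault handler then recovers the object through the mapping's
	 * private data rather than through the gem object itself:
	 */
	struct i915_mmap_offset *priv = area->vm_private_data;
	struct drm_i915_gem_object *obj = priv->obj;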
Signed-off-by: Abdiel Janulgue Cc: Joonas Lahtinen Signed-off-by: Daniele Ceraolo Spurio --- drivers/gpu/drm/i915/gem/i915_gem_mman.c | 183 ++++++++++++++++-- drivers/gpu/drm/i915/gem/i915_gem_object.c | 16 ++ drivers/gpu/drm/i915/gem/i915_gem_object.h | 7 +- .../gpu/drm/i915/gem/i915_gem_object_types.h | 18 ++ .../drm/i915/gem/selftests/i915_gem_mman.c | 12 +- drivers/gpu/drm/i915/gt/intel_reset.c | 13 +- drivers/gpu/drm/i915/i915_drv.c | 9 +- drivers/gpu/drm/i915/i915_drv.h | 1 + drivers/gpu/drm/i915/i915_vma.c | 21 +- 9 files changed, 244 insertions(+), 36 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c index 1e7311493530..d4a9d59803a7 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c @@ -221,7 +221,8 @@ vm_fault_t i915_gem_fault(struct vm_fault *vmf) { #define MIN_CHUNK_PAGES (SZ_1M >> PAGE_SHIFT) struct vm_area_struct *area = vmf->vma; - struct drm_i915_gem_object *obj = to_intel_bo(area->vm_private_data); + struct i915_mmap_offset *priv = area->vm_private_data; + struct drm_i915_gem_object *obj = priv->obj; struct drm_device *dev = obj->base.dev; struct drm_i915_private *i915 = to_i915(dev); struct intel_runtime_pm *rpm = &i915->runtime_pm; @@ -373,13 +374,15 @@ vm_fault_t i915_gem_fault(struct vm_fault *vmf) void __i915_gem_object_release_mmap(struct drm_i915_gem_object *obj) { struct i915_vma *vma; + struct i915_mmap_offset *mmo; GEM_BUG_ON(!obj->userfault_count); obj->userfault_count = 0; list_del(&obj->userfault_link); - drm_vma_node_unmap(&obj->base.vma_node, - obj->base.dev->anon_inode->i_mapping); + list_for_each_entry(mmo, &obj->mmap_offsets, offset) + drm_vma_node_unmap(&mmo->vma_node, + obj->base.dev->anon_inode->i_mapping); for_each_ggtt_vma(vma, obj) i915_vma_unset_userfault(vma); @@ -433,14 +436,31 @@ void i915_gem_object_release_mmap(struct drm_i915_gem_object *obj) intel_runtime_pm_put(&i915->runtime_pm, wakeref); } -static int create_mmap_offset(struct drm_i915_gem_object *obj) +static void init_mmap_offset(struct drm_i915_gem_object *obj, + struct i915_mmap_offset *mmo) +{ + mutex_lock(&obj->mmo_lock); + kref_init(&mmo->ref); + list_add(&mmo->offset, &obj->mmap_offsets); + mutex_unlock(&obj->mmo_lock); +} + +static int create_mmap_offset(struct drm_i915_gem_object *obj, + struct i915_mmap_offset *mmo) { struct drm_i915_private *i915 = to_i915(obj->base.dev); + struct drm_device *dev = obj->base.dev; int err; - err = drm_gem_create_mmap_offset(&obj->base); - if (likely(!err)) + drm_vma_node_reset(&mmo->vma_node); + if (mmo->file) + drm_vma_node_allow(&mmo->vma_node, mmo->file); + err = drm_vma_offset_add(dev->vma_offset_manager, &mmo->vma_node, + obj->base.size / PAGE_SIZE); + if (likely(!err)) { + init_mmap_offset(obj, mmo); return 0; + } /* Attempt to reap some mmap space from dead objects */ do { @@ -451,32 +471,49 @@ static int create_mmap_offset(struct drm_i915_gem_object *obj) break; i915_gem_drain_freed_objects(i915); - err = drm_gem_create_mmap_offset(&obj->base); - if (!err) + err = drm_vma_offset_add(dev->vma_offset_manager, &mmo->vma_node, + obj->base.size / PAGE_SIZE); + if (!err) { + init_mmap_offset(obj, mmo); break; + } } while (flush_delayed_work(&i915->gem.retire_work)); return err; } -int -i915_gem_mmap_gtt(struct drm_file *file, - struct drm_device *dev, - u32 handle, - u64 *offset) +static int +__assign_gem_object_mmap_data(struct drm_file *file, + u32 handle, + enum i915_mmap_type mmap_type, + u64 *offset) { struct drm_i915_gem_object 
*obj; + struct i915_mmap_offset *mmo; int ret; obj = i915_gem_object_lookup(file, handle); if (!obj) return -ENOENT; - ret = create_mmap_offset(obj); - if (ret == 0) - *offset = drm_vma_node_offset_addr(&obj->base.vma_node); + mmo = kzalloc(sizeof(*mmo), GFP_KERNEL); + if (!mmo) { + ret = -ENOMEM; + goto err; + } + mmo->file = file; + ret = create_mmap_offset(obj, mmo); + if (ret) { + kfree(mmo); + goto err; + } + + mmo->mmap_type = mmap_type; + mmo->obj = obj; + *offset = drm_vma_node_offset_addr(&mmo->vma_node); +err: i915_gem_object_put(obj); return ret; } @@ -500,9 +537,119 @@ int i915_gem_mmap_gtt_ioctl(struct drm_device *dev, void *data, struct drm_file *file) { - struct drm_i915_gem_mmap_gtt *args = data; + struct drm_i915_gem_mmap_offset *args = data; + + return __assign_gem_object_mmap_data(file, args->handle, + I915_MMAP_TYPE_GTT, + &args->offset); +} + +void i915_mmap_offset_object_release(struct kref *ref) +{ + struct i915_mmap_offset *mmo = container_of(ref, + struct i915_mmap_offset, + ref); + struct drm_i915_gem_object *obj = mmo->obj; + struct drm_device *dev = obj->base.dev; + + if (mmo->file) + drm_vma_node_revoke(&mmo->vma_node, mmo->file); + drm_vma_offset_remove(dev->vma_offset_manager, &mmo->vma_node); + list_del(&mmo->offset); + + kfree(mmo); +} + +static void i915_gem_vm_open(struct vm_area_struct *vma) +{ + struct i915_mmap_offset *priv = vma->vm_private_data; + struct drm_i915_gem_object *obj = priv->obj; + + i915_gem_object_get(obj); + kref_get(&priv->ref); +} + +static void i915_gem_vm_close(struct vm_area_struct *vma) +{ + struct i915_mmap_offset *priv = vma->vm_private_data; + struct drm_i915_gem_object *obj = priv->obj; - return i915_gem_mmap_gtt(file, dev, args->handle, &args->offset); + i915_gem_object_put(obj); + kref_put(&priv->ref, i915_mmap_offset_object_release); +} + +static const struct vm_operations_struct i915_gem_gtt_vm_ops = { + .fault = i915_gem_fault, + .open = i915_gem_vm_open, + .close = i915_gem_vm_close, +}; + +/* This overcomes the limitation in drm_gem_mmap's assignment of a + * drm_gem_object as the vma->vm_private_data. Since we need to + * be able to resolve multiple mmap offsets which could be tied + * to a single gem object. + */ +int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma) +{ + struct drm_vma_offset_node *node; + struct drm_file *priv = filp->private_data; + struct drm_device *dev = priv->minor->dev; + struct i915_mmap_offset *mmo = NULL; + struct drm_gem_object *obj = NULL; + + if (drm_dev_is_unplugged(dev)) + return -ENODEV; + + drm_vma_offset_lock_lookup(dev->vma_offset_manager); + node = drm_vma_offset_exact_lookup_locked(dev->vma_offset_manager, + vma->vm_pgoff, + vma_pages(vma)); + if (likely(node)) { + mmo = container_of(node, struct i915_mmap_offset, + vma_node); + /* + * Take a ref for our mmap_offset and gem objects. The reference is cleaned + * up when the vma is closed. + * + * Skip 0-refcnted objects as it is in the process of being destroyed + * and will be invalid when the vma manager lock is released. 
+ */ + if (kref_get_unless_zero(&mmo->ref)) { + obj = &mmo->obj->base; + if (!kref_get_unless_zero(&obj->refcount)) + obj = NULL; + } + } + drm_vma_offset_unlock_lookup(dev->vma_offset_manager); + + if (!obj) { + if (mmo) + kref_put(&mmo->ref, i915_mmap_offset_object_release); + return -EINVAL; + } + + if (!drm_vma_node_is_allowed(node, priv)) { + drm_gem_object_put_unlocked(obj); + return -EACCES; + } + + if (to_intel_bo(obj)->readonly) { + if (vma->vm_flags & VM_WRITE) { + drm_gem_object_put_unlocked(obj); + return -EINVAL; + } + + vma->vm_flags &= ~VM_MAYWRITE; + } + + vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP; + vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); + vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot); + vma->vm_private_data = mmo; + + vma->vm_ops = &i915_gem_gtt_vm_ops; + + return 0; } #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c index 3929c3a6b281..24f737b00e84 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c @@ -68,6 +68,9 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj, INIT_LIST_HEAD(&obj->lut_list); + mutex_init(&obj->mmo_lock); + INIT_LIST_HEAD(&obj->mmap_offsets); + init_rcu_head(&obj->rcu); obj->ops = ops; @@ -108,6 +111,7 @@ void i915_gem_close_object(struct drm_gem_object *gem, struct drm_file *file) struct drm_i915_gem_object *obj = to_intel_bo(gem); struct drm_i915_file_private *fpriv = file->driver_priv; struct i915_lut_handle *lut, *ln; + struct i915_mmap_offset *mmo, *on; LIST_HEAD(close); i915_gem_object_lock(obj); @@ -122,6 +126,11 @@ void i915_gem_close_object(struct drm_gem_object *gem, struct drm_file *file) } i915_gem_object_unlock(obj); + mutex_lock(&obj->mmo_lock); + list_for_each_entry_safe(mmo, on, &obj->mmap_offsets, offset) + kref_put(&mmo->ref, i915_mmap_offset_object_release); + mutex_unlock(&obj->mmo_lock); + list_for_each_entry_safe(lut, ln, &close, obj_link) { struct i915_gem_context *ctx = lut->ctx; struct i915_vma *vma; @@ -170,6 +179,7 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915, wakeref = intel_runtime_pm_get(&i915->runtime_pm); llist_for_each_entry_safe(obj, on, freed, freed) { struct i915_vma *vma, *vn; + struct i915_mmap_offset *mmo, *on; trace_i915_gem_object_destroy(obj); @@ -183,6 +193,7 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915, GEM_BUG_ON(!list_empty(&obj->vma.list)); GEM_BUG_ON(!RB_EMPTY_ROOT(&obj->vma.tree)); + i915_gem_object_release_mmap(obj); mutex_unlock(&i915->drm.struct_mutex); GEM_BUG_ON(atomic_read(&obj->bind_count)); @@ -203,6 +214,11 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915, if (obj->ops->release) obj->ops->release(obj); + mutex_lock(&obj->mmo_lock); + list_for_each_entry_safe(mmo, on, &obj->mmap_offsets, offset) + kref_put(&mmo->ref, i915_mmap_offset_object_release); + mutex_unlock(&obj->mmo_lock); + /* But keep the pointer alive for RCU-protected lookups */ call_rcu(&obj->rcu, __i915_gem_free_object_rcu); } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 1cbc63470212..2bb0c779c850 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -131,13 +131,13 @@ i915_gem_object_is_volatile(const struct drm_i915_gem_object *obj) static inline void i915_gem_object_set_readonly(struct drm_i915_gem_object *obj) { - 
obj->base.vma_node.readonly = true; + obj->readonly = true; } static inline bool i915_gem_object_is_readonly(const struct drm_i915_gem_object *obj) { - return obj->base.vma_node.readonly; + return obj->readonly; } static inline bool @@ -435,6 +435,9 @@ int i915_gem_object_wait(struct drm_i915_gem_object *obj, int i915_gem_object_wait_priority(struct drm_i915_gem_object *obj, unsigned int flags, const struct i915_sched_attr *attr); + +void i915_mmap_offset_object_release(struct kref *ref); + #define I915_PRIORITY_DISPLAY I915_USER_PRIORITY(I915_PRIORITY_MAX) #endif diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index cd06051eb797..a3745f7d57a1 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -62,6 +62,19 @@ struct drm_i915_gem_object_ops { void (*release)(struct drm_i915_gem_object *obj); }; +enum i915_mmap_type { + I915_MMAP_TYPE_GTT = 0, +}; + +struct i915_mmap_offset { + struct drm_vma_offset_node vma_node; + struct drm_i915_gem_object* obj; + struct drm_file *file; + enum i915_mmap_type mmap_type; + struct kref ref; + struct list_head offset; +}; + struct drm_i915_gem_object { struct drm_gem_object base; @@ -117,6 +130,11 @@ struct drm_i915_gem_object { unsigned int userfault_count; struct list_head userfault_link; + /* Protects access to mmap offsets */ + struct mutex mmo_lock; + struct list_head mmap_offsets; + bool readonly:1; + I915_SELFTEST_DECLARE(struct list_head st_link); unsigned long flags; diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c index fa83745abcc0..4e336d68d6eb 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c @@ -371,15 +371,20 @@ static bool assert_mmap_offset(struct drm_i915_private *i915, int expected) { struct drm_i915_gem_object *obj; + /* refcounted in create_mmap_offset */ + struct i915_mmap_offset *mmo = kzalloc(sizeof(*mmo), GFP_KERNEL); int err; obj = i915_gem_object_create_internal(i915, size); if (IS_ERR(obj)) return PTR_ERR(obj); - err = create_mmap_offset(obj); + err = create_mmap_offset(obj, mmo); i915_gem_object_put(obj); + if (err) + kfree(mmo); + return err == expected; } @@ -422,6 +427,8 @@ static int igt_mmap_offset_exhaustion(void *arg) struct drm_mm *mm = &i915->drm.vma_offset_manager->vm_addr_space_mm; struct drm_i915_gem_object *obj; struct drm_mm_node resv, *hole; + /* refcounted in create_mmap_offset */ + struct i915_mmap_offset *mmo = kzalloc(sizeof(*mmo), GFP_KERNEL); u64 hole_start, hole_end; int loop, err; @@ -465,9 +472,10 @@ static int igt_mmap_offset_exhaustion(void *arg) goto out; } - err = create_mmap_offset(obj); + err = create_mmap_offset(obj, mmo); if (err) { pr_err("Unable to insert object into reclaimed hole\n"); + kfree(mmo); goto err_obj; } diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c index ec85740de942..4968c8c7633a 100644 --- a/drivers/gpu/drm/i915/gt/intel_reset.c +++ b/drivers/gpu/drm/i915/gt/intel_reset.c @@ -628,6 +628,7 @@ static void revoke_mmaps(struct intel_gt *gt) for (i = 0; i < gt->ggtt->num_fences; i++) { struct drm_vma_offset_node *node; + struct i915_mmap_offset *mmo; struct i915_vma *vma; u64 vma_offset; @@ -641,10 +642,20 @@ static void revoke_mmaps(struct intel_gt *gt) GEM_BUG_ON(vma->fence != >->ggtt->fence_regs[i]); node = &vma->obj->base.vma_node; vma_offset = 
vma->ggtt_view.partial.offset << PAGE_SHIFT; - unmap_mapping_range(gt->i915->drm.anon_inode->i_mapping, + + mutex_lock(&vma->obj->mmo_lock); + list_for_each_entry(mmo, &vma->obj->mmap_offsets, offset) { + node = &mmo->vma_node; + if (!drm_mm_node_allocated(&node->vm_node) || + mmo->mmap_type != I915_MMAP_TYPE_GTT) + continue; + + unmap_mapping_range(gt->i915->drm.anon_inode->i_mapping, drm_vma_node_offset_addr(node) + vma_offset, vma->size, 1); + } + mutex_unlock(&vma->obj->mmo_lock); } } diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c index 22e87ae36621..fcee06ed3469 100644 --- a/drivers/gpu/drm/i915/i915_drv.c +++ b/drivers/gpu/drm/i915/i915_drv.c @@ -2658,18 +2658,12 @@ const struct dev_pm_ops i915_pm_ops = { .runtime_resume = intel_runtime_resume, }; -static const struct vm_operations_struct i915_gem_vm_ops = { - .fault = i915_gem_fault, - .open = drm_gem_vm_open, - .close = drm_gem_vm_close, -}; - static const struct file_operations i915_driver_fops = { .owner = THIS_MODULE, .open = drm_open, .release = drm_release, .unlocked_ioctl = drm_ioctl, - .mmap = drm_gem_mmap, + .mmap = i915_gem_mmap, .poll = drm_poll, .read = drm_read, .compat_ioctl = i915_compat_ioctl, @@ -2758,7 +2752,6 @@ static struct drm_driver driver = { .gem_close_object = i915_gem_close_object, .gem_free_object_unlocked = i915_gem_free_object, - .gem_vm_ops = &i915_gem_vm_ops, .prime_handle_to_fd = drm_gem_prime_handle_to_fd, .prime_fd_to_handle = drm_gem_prime_fd_to_handle, diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 182ed6b46aa5..5a5b90670e16 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -2396,6 +2396,7 @@ int i915_gem_wait_for_idle(struct drm_i915_private *dev_priv, void i915_gem_suspend(struct drm_i915_private *dev_priv); void i915_gem_suspend_late(struct drm_i915_private *dev_priv); void i915_gem_resume(struct drm_i915_private *dev_priv); +int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma); vm_fault_t i915_gem_fault(struct vm_fault *vmf); int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file); diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index 4183b0e10324..ec2a41e1a04d 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -865,7 +865,8 @@ static void __i915_vma_iounmap(struct i915_vma *vma) void i915_vma_revoke_mmap(struct i915_vma *vma) { - struct drm_vma_offset_node *node = &vma->obj->base.vma_node; + struct drm_vma_offset_node *node; + struct i915_mmap_offset *mmo; u64 vma_offset; lockdep_assert_held(&vma->vm->i915->drm.struct_mutex); @@ -877,10 +878,20 @@ void i915_vma_revoke_mmap(struct i915_vma *vma) GEM_BUG_ON(!vma->obj->userfault_count); vma_offset = vma->ggtt_view.partial.offset << PAGE_SHIFT; - unmap_mapping_range(vma->vm->i915->drm.anon_inode->i_mapping, - drm_vma_node_offset_addr(node) + vma_offset, - vma->size, - 1); + + mutex_lock(&vma->obj->mmo_lock); + list_for_each_entry(mmo, &vma->obj->mmap_offsets, offset) { + node = &mmo->vma_node; + if (!drm_mm_node_allocated(&node->vm_node) || + mmo->mmap_type != I915_MMAP_TYPE_GTT) + continue; + + unmap_mapping_range(vma->vm->i915->drm.anon_inode->i_mapping, + drm_vma_node_offset_addr(node) + vma_offset, + vma->size, + 1); + } + mutex_unlock(&vma->obj->mmo_lock); i915_vma_unset_userfault(vma); if (!--vma->obj->userfault_count) From patchwork Fri Aug 9 22:26:36 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 
7bit
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 11087847
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v3 30/37] drm/i915: Introduce DRM_I915_GEM_MMAP_OFFSET
Date: Fri, 9 Aug 2019 23:26:36 +0100
Message-Id: <20190809222643.23142-31-matthew.auld@intel.com>
In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com>
References: <20190809222643.23142-1-matthew.auld@intel.com>
Cc: Abdiel Janulgue , dri-devel@lists.freedesktop.org

From: Abdiel Janulgue

Add a new CPU mmap implementation that allows multiple fault handlers, selected according to the object's backing pages. Note that we multiplex mmap_gtt and mmap_offset through the same ioctl, using the zero-extending behaviour of drm to differentiate between them when we inspect the flags.
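For illustration, userspace would drive the new ioctl roughly as in the sketch below; it assumes libdrm's drmIoctl() helper, a valid GEM handle and object size, and abbreviates error handling:

	/*
	 * Hypothetical usage sketch: request a write-combined mapping via the
	 * new offset ioctl, then mmap() the returned fake offset on the drm fd.
	 */
	struct drm_i915_gem_mmap_offset arg = {
		.handle = handle,
		.flags = I915_MMAP_OFFSET_WC,
	};
	void *ptr;

	if (drmIoctl(fd, DRM_IOCTL_I915_GEM_MMAP_OFFSET, &arg))
		return -errno;

	ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		   fd, arg.offset);
	if (ptr == MAP_FAILED)
		return -errno;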
Signed-off-by: Abdiel Janulgue Signed-off-by: Matthew Auld Cc: Joonas Lahtinen --- drivers/gpu/drm/i915/gem/i915_gem_ioctls.h | 2 ++ drivers/gpu/drm/i915/gem/i915_gem_mman.c | 30 ++++++++++++++++++ .../gpu/drm/i915/gem/i915_gem_object_types.h | 3 ++ drivers/gpu/drm/i915/i915_drv.c | 2 +- drivers/gpu/drm/i915/i915_getparam.c | 1 + include/uapi/drm/i915_drm.h | 31 +++++++++++++++++++ 6 files changed, 68 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ioctls.h b/drivers/gpu/drm/i915/gem/i915_gem_ioctls.h index ddc7f2a52b3e..5abd5b2172f2 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ioctls.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_ioctls.h @@ -30,6 +30,8 @@ int i915_gem_mmap_ioctl(struct drm_device *dev, void *data, struct drm_file *file); int i915_gem_mmap_gtt_ioctl(struct drm_device *dev, void *data, struct drm_file *file); +int i915_gem_mmap_offset_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); int i915_gem_pread_ioctl(struct drm_device *dev, void *data, struct drm_file *file); int i915_gem_pwrite_ioctl(struct drm_device *dev, void *data, diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c index d4a9d59803a7..a62657a1f011 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c @@ -538,12 +538,42 @@ i915_gem_mmap_gtt_ioctl(struct drm_device *dev, void *data, struct drm_file *file) { struct drm_i915_gem_mmap_offset *args = data; + struct drm_i915_private *i915 = to_i915(dev); + + if (args->flags & I915_MMAP_OFFSET_FLAGS) + return i915_gem_mmap_offset_ioctl(dev, data, file); + + if (!HAS_MAPPABLE_APERTURE(i915)) { + DRM_ERROR("No aperture, cannot mmap via legacy GTT\n"); + return -ENODEV; + } return __assign_gem_object_mmap_data(file, args->handle, I915_MMAP_TYPE_GTT, &args->offset); } +int i915_gem_mmap_offset_ioctl(struct drm_device *dev, void *data, + struct drm_file *file) +{ + struct drm_i915_gem_mmap_offset *args = data; + enum i915_mmap_type type; + + if ((args->flags & (I915_MMAP_OFFSET_WC | I915_MMAP_OFFSET_WB)) && + !boot_cpu_has(X86_FEATURE_PAT)) + return -ENODEV; + + if (args->flags & I915_MMAP_OFFSET_WC) + type = I915_MMAP_TYPE_OFFSET_WC; + else if (args->flags & I915_MMAP_OFFSET_WB) + type = I915_MMAP_TYPE_OFFSET_WB; + else if (args->flags & I915_MMAP_OFFSET_UC) + type = I915_MMAP_TYPE_OFFSET_UC; + + return __assign_gem_object_mmap_data(file, args->handle, type, + &args->offset); +} + void i915_mmap_offset_object_release(struct kref *ref) { struct i915_mmap_offset *mmo = container_of(ref, diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index a3745f7d57a1..4ea78d3c92a9 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -64,6 +64,9 @@ struct drm_i915_gem_object_ops { enum i915_mmap_type { I915_MMAP_TYPE_GTT = 0, + I915_MMAP_TYPE_OFFSET_WC, + I915_MMAP_TYPE_OFFSET_WB, + I915_MMAP_TYPE_OFFSET_UC, }; struct i915_mmap_offset { diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c index fcee06ed3469..cf390092c927 100644 --- a/drivers/gpu/drm/i915/i915_drv.c +++ b/drivers/gpu/drm/i915/i915_drv.c @@ -2710,7 +2710,7 @@ static const struct drm_ioctl_desc i915_ioctls[] = { DRM_IOCTL_DEF_DRV(I915_GEM_PREAD, i915_gem_pread_ioctl, DRM_RENDER_ALLOW), DRM_IOCTL_DEF_DRV(I915_GEM_PWRITE, i915_gem_pwrite_ioctl, DRM_RENDER_ALLOW), DRM_IOCTL_DEF_DRV(I915_GEM_MMAP, i915_gem_mmap_ioctl, DRM_RENDER_ALLOW), 
- DRM_IOCTL_DEF_DRV(I915_GEM_MMAP_GTT, i915_gem_mmap_gtt_ioctl, DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(I915_GEM_MMAP_OFFSET, i915_gem_mmap_gtt_ioctl, DRM_RENDER_ALLOW), DRM_IOCTL_DEF_DRV(I915_GEM_SET_DOMAIN, i915_gem_set_domain_ioctl, DRM_RENDER_ALLOW), DRM_IOCTL_DEF_DRV(I915_GEM_SW_FINISH, i915_gem_sw_finish_ioctl, DRM_RENDER_ALLOW), DRM_IOCTL_DEF_DRV(I915_GEM_SET_TILING, i915_gem_set_tiling_ioctl, DRM_RENDER_ALLOW), diff --git a/drivers/gpu/drm/i915/i915_getparam.c b/drivers/gpu/drm/i915/i915_getparam.c index 5d9101376a3d..28120249a250 100644 --- a/drivers/gpu/drm/i915/i915_getparam.c +++ b/drivers/gpu/drm/i915/i915_getparam.c @@ -130,6 +130,7 @@ int i915_getparam_ioctl(struct drm_device *dev, void *data, case I915_PARAM_HAS_EXEC_BATCH_FIRST: case I915_PARAM_HAS_EXEC_FENCE_ARRAY: case I915_PARAM_HAS_EXEC_SUBMIT_FENCE: + case I915_PARAM_MMAP_OFFSET_VERSION: /* For the time being all of these are always true; * if some supported hardware does not have one of these * features this value needs to be provided from diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h index 469dc512cca3..fb84aed10825 100644 --- a/include/uapi/drm/i915_drm.h +++ b/include/uapi/drm/i915_drm.h @@ -359,6 +359,7 @@ typedef struct _drm_i915_sarea { #define DRM_I915_QUERY 0x39 #define DRM_I915_GEM_VM_CREATE 0x3a #define DRM_I915_GEM_VM_DESTROY 0x3b +#define DRM_I915_GEM_MMAP_OFFSET DRM_I915_GEM_MMAP_GTT /* Must be kept compact -- no holes */ #define DRM_IOCTL_I915_INIT DRM_IOW( DRM_COMMAND_BASE + DRM_I915_INIT, drm_i915_init_t) @@ -421,6 +422,7 @@ typedef struct _drm_i915_sarea { #define DRM_IOCTL_I915_QUERY DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_QUERY, struct drm_i915_query) #define DRM_IOCTL_I915_GEM_VM_CREATE DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_VM_CREATE, struct drm_i915_gem_vm_control) #define DRM_IOCTL_I915_GEM_VM_DESTROY DRM_IOW (DRM_COMMAND_BASE + DRM_I915_GEM_VM_DESTROY, struct drm_i915_gem_vm_control) +#define DRM_IOCTL_I915_GEM_MMAP_OFFSET DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_MMAP_OFFSET, struct drm_i915_gem_mmap_offset) /* Allow drivers to submit batchbuffers directly to hardware, relying * on the security mechanisms provided by hardware. @@ -611,6 +613,10 @@ typedef struct drm_i915_irq_wait { * See I915_EXEC_FENCE_OUT and I915_EXEC_FENCE_SUBMIT. */ #define I915_PARAM_HAS_EXEC_SUBMIT_FENCE 53 + +/* Mmap offset ioctl */ +#define I915_PARAM_MMAP_OFFSET_VERSION 54 + /* Must be kept compact -- no holes and well documented */ typedef struct drm_i915_getparam { @@ -786,6 +792,31 @@ struct drm_i915_gem_mmap_gtt { __u64 offset; }; +struct drm_i915_gem_mmap_offset { + /** Handle for the object being mapped. */ + __u32 handle; + __u32 pad; + /** + * Fake offset to use for subsequent mmap call + * + * This is a fixed-size type for 32/64 compatibility. + */ + __u64 offset; + + /** + * Flags for extended behaviour. + * + * It is mandatory that either one of the _WC/_WB flags + * should be passed here. 
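+ * (In practice _UC is also accepted; see the flag definitions below.)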
+ */ + __u64 flags; +#define I915_MMAP_OFFSET_WC (1 << 0) +#define I915_MMAP_OFFSET_WB (1 << 1) +#define I915_MMAP_OFFSET_UC (1 << 2) +#define I915_MMAP_OFFSET_FLAGS \ + (I915_MMAP_OFFSET_WC | I915_MMAP_OFFSET_WB | I915_MMAP_OFFSET_UC) +}; + struct drm_i915_gem_set_domain { /** Handle for the object */ __u32 handle;

From patchwork Fri Aug 9 22:26:37 2019
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 11087849
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v3 31/37] drm/i915/lmem: add helper to get CPU accessible offset
Date: Fri, 9 Aug 2019 23:26:37 +0100
Message-Id: <20190809222643.23142-32-matthew.auld@intel.com>
In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com>
References: <20190809222643.23142-1-matthew.auld@intel.com>
Cc: Abdiel Janulgue , dri-devel@lists.freedesktop.org

From: Abdiel Janulgue

LMEM can be accessed by the CPU through a BAR. The mapping itself should be 1:1.
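Because the mapping is 1:1, the CPU-visible address of any page is simply the region's io_start plus that page's device-local offset. A small sketch of the arithmetic, with invented example values:

	/*
	 * Illustration only: the values here are made up. With a 1:1 BAR
	 * mapping, the CPU physical address of page n is the region's
	 * io_start plus that page's device-local (dma) offset.
	 */
	resource_size_t io_start = 0x4000000000ULL;	/* assumed BAR base */
	dma_addr_t daddr = 0x10000;	/* device-local offset of page n */
	resource_size_t cpu_addr = io_start + daddr;
	unsigned long pfn = cpu_addr >> PAGE_SHIFT;	/* e.g. what a fault
							 * handler would hand to
							 * vmf_insert_pfn() */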
Signed-off-by: Abdiel Janulgue Signed-off-by: Matthew Auld Cc: Joonas Lahtinen --- drivers/gpu/drm/i915/gem/i915_gem_lmem.c | 16 ++++++++++++++++ drivers/gpu/drm/i915/gem/i915_gem_lmem.h | 3 +++ 2 files changed, 19 insertions(+) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c index f00078ac331e..8d0251af5dfc 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c @@ -225,6 +225,22 @@ void __iomem *i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj, return io_mapping_map_wc(&obj->mm.region->iomap, offset, size); } +resource_size_t i915_gem_object_lmem_io_offset(struct drm_i915_gem_object *obj, + unsigned long n) +{ + struct intel_memory_region *mem = obj->mm.region; + dma_addr_t daddr; + + /* + * XXX: It's not a dma address, more a device address or physical + * offset, so we are clearly abusing the semantics of the sg_table + * here, and elsewhere like in the gtt paths. + */ + daddr = i915_gem_object_get_dma_address(obj, n); + + return mem->io_start + daddr; +} + bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj) { struct intel_memory_region *region = obj->mm.region; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h index 31a6462bdbb6..43e6e715eeed 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h @@ -21,6 +21,9 @@ void __iomem * i915_gem_object_lmem_io_map_page_atomic(struct drm_i915_gem_object *obj, unsigned long n); +resource_size_t i915_gem_object_lmem_io_offset(struct drm_i915_gem_object *obj, + unsigned long n); + bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj); struct drm_i915_gem_object * From patchwork Fri Aug 9 22:26:38 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087853 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1D4D613B1 for ; Fri, 9 Aug 2019 22:28:20 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0917721C9A for ; Fri, 9 Aug 2019 22:28:20 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id F16E32223E; Fri, 9 Aug 2019 22:28:19 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 3844921C9A for ; Fri, 9 Aug 2019 22:28:19 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 7CB2B6EF1D; Fri, 9 Aug 2019 22:27:33 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id C5CEA6EF26; Fri, 9 Aug 2019 22:27:30 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with 
ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:30 -0700
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v3 32/37] drm/i915: Add cpu and lmem fault handlers
Date: Fri, 9 Aug 2019 23:26:38 +0100
Message-Id: <20190809222643.23142-33-matthew.auld@intel.com>
In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com>
References: <20190809222643.23142-1-matthew.auld@intel.com>
Cc: Abdiel Janulgue , dri-devel@lists.freedesktop.org

From: Abdiel Janulgue

Add fault handlers that fill in missing pages according to an object's backing storage. Also handle the changes needed to refault pages depending on which fault handler is in use.

Signed-off-by: Abdiel Janulgue
Signed-off-by: Matthew Auld
Cc: Joonas Lahtinen
---
 drivers/gpu/drm/i915/gem/i915_gem_lmem.c | 54 +++++++ drivers/gpu/drm/i915/gem/i915_gem_lmem.h | 3 + drivers/gpu/drm/i915/gem/i915_gem_mman.c | 155 +++++++++++++++++++-- drivers/gpu/drm/i915/gem/i915_gem_object.h | 2 +- drivers/gpu/drm/i915/i915_gem.c | 2 +- 5 files changed, 201 insertions(+), 15 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c index 8d0251af5dfc..2194e2c3bdcd 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c @@ -6,6 +6,7 @@ #include "intel_memory_region.h" #include "gem/i915_gem_region.h" #include "gem/i915_gem_lmem.h" +#include "gt/intel_gt.h" #include "i915_drv.h" static int lmem_pread(struct drm_i915_gem_object *obj, @@ -179,6 +180,59 @@ static int lmem_pwrite(struct drm_i915_gem_object *obj, return ret; } +vm_fault_t i915_gem_fault_lmem(struct vm_fault *vmf) +{ + struct vm_area_struct *area = vmf->vma; + struct i915_mmap_offset *priv = area->vm_private_data; + struct drm_i915_gem_object *obj = priv->obj; + struct drm_device *dev = obj->base.dev; + struct drm_i915_private *i915 = to_i915(dev); + unsigned long size = area->vm_end - area->vm_start; + bool write = area->vm_flags & VM_WRITE; + vm_fault_t vmf_ret; + int i, ret; + + /* Sanity check that we allow writing into this object */ + if (i915_gem_object_is_readonly(obj) && write) + return VM_FAULT_SIGBUS; + + ret = i915_gem_object_pin_pages(obj); + if (ret) + goto err; + + for (i = 0; i < size >> PAGE_SHIFT; i++) { + vmf_ret = vmf_insert_pfn(area, + (unsigned long)area->vm_start + i * PAGE_SIZE, + i915_gem_object_lmem_io_offset(obj, i) >> PAGE_SHIFT); + if (vmf_ret & VM_FAULT_ERROR) { + ret = vm_fault_to_errno(vmf_ret, 0); + goto err; + } + } + + i915_gem_object_unpin_pages(obj); +err: + switch (ret) { + case -EIO: + if (!intel_gt_is_wedged(&i915->gt)) + return VM_FAULT_SIGBUS; + /* fallthrough */ + case -EAGAIN: + case 0: + case -ERESTARTSYS: + case -EINTR: + case -EBUSY: + return VM_FAULT_NOPAGE; + case -ENOMEM: + return VM_FAULT_OOM; + case -ENOSPC: + case -EFAULT: + return VM_FAULT_SIGBUS; + default: + WARN_ONCE(ret, "unhandled error in %s: %i\n", __func__,
ret); + return VM_FAULT_SIGBUS; + } +} const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops = { .flags = I915_GEM_OBJECT_IS_MAPPABLE, diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h index 43e6e715eeed..c3255eb6daa5 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h @@ -7,6 +7,7 @@ #define __I915_GEM_LMEM_H #include +#include struct drm_i915_private; struct drm_i915_gem_object; @@ -24,6 +25,8 @@ i915_gem_object_lmem_io_map_page_atomic(struct drm_i915_gem_object *obj, resource_size_t i915_gem_object_lmem_io_offset(struct drm_i915_gem_object *obj, unsigned long n); +vm_fault_t i915_gem_fault_lmem(struct vm_fault *vmf); + bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj); struct drm_i915_gem_object * diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c index a62657a1f011..304ea578fd30 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c @@ -5,6 +5,7 @@ */ #include +#include #include #include "gt/intel_gt.h" @@ -12,6 +13,7 @@ #include "i915_drv.h" #include "i915_gem_gtt.h" #include "i915_gem_ioctls.h" +#include "i915_gem_lmem.h" #include "i915_gem_object.h" #include "i915_trace.h" #include "i915_vma.h" @@ -371,7 +373,62 @@ vm_fault_t i915_gem_fault(struct vm_fault *vmf) } } -void __i915_gem_object_release_mmap(struct drm_i915_gem_object *obj) +static vm_fault_t i915_gem_fault_cpu(struct vm_fault *vmf) +{ + struct vm_area_struct *area = vmf->vma; + struct i915_mmap_offset *priv = area->vm_private_data; + struct drm_i915_gem_object *obj = priv->obj; + struct drm_device *dev = obj->base.dev; + struct drm_i915_private *dev_priv = to_i915(dev); + vm_fault_t vmf_ret; + unsigned long size = area->vm_end - area->vm_start; + bool write = area->vm_flags & VM_WRITE; + int i, ret; + + /* Sanity check that we allow writing into this object */ + if (i915_gem_object_is_readonly(obj) && write) + return VM_FAULT_SIGBUS; + + ret = i915_gem_object_pin_pages(obj); + if (ret) + goto err; + + for (i = 0; i < size >> PAGE_SHIFT; i++) { + struct page *page = i915_gem_object_get_page(obj, i); + vmf_ret = vmf_insert_pfn(area, + (unsigned long)area->vm_start + i * PAGE_SIZE, + page_to_pfn(page)); + if (vmf_ret & VM_FAULT_ERROR) { + ret = vm_fault_to_errno(vmf_ret, 0); + break; + } + } + + i915_gem_object_unpin_pages(obj); +err: + switch (ret) { + case -EIO: + if (!intel_gt_is_wedged(&dev_priv->gt)) + return VM_FAULT_SIGBUS; + /* fallthrough */ + case -EAGAIN: + case 0: + case -ERESTARTSYS: + case -EINTR: + case -EBUSY: + return VM_FAULT_NOPAGE; + case -ENOMEM: + return VM_FAULT_OOM; + case -ENOSPC: + case -EFAULT: + return VM_FAULT_SIGBUS; + default: + WARN_ONCE(ret, "unhandled error in %s: %i\n", __func__, ret); + return VM_FAULT_SIGBUS; + } +} + +void __i915_gem_object_release_mmap_gtt(struct drm_i915_gem_object *obj) { struct i915_vma *vma; struct i915_mmap_offset *mmo; @@ -380,21 +437,20 @@ void __i915_gem_object_release_mmap(struct drm_i915_gem_object *obj) obj->userfault_count = 0; list_del(&obj->userfault_link); - list_for_each_entry(mmo, &obj->mmap_offsets, offset) - drm_vma_node_unmap(&mmo->vma_node, - obj->base.dev->anon_inode->i_mapping); + + mutex_lock(&obj->mmo_lock); + list_for_each_entry(mmo, &obj->mmap_offsets, offset) { + if (mmo->mmap_type == I915_MMAP_TYPE_GTT) + drm_vma_node_unmap(&mmo->vma_node, + obj->base.dev->anon_inode->i_mapping); + } + mutex_unlock(&obj->mmo_lock); for_each_ggtt_vma(vma, 
obj) i915_vma_unset_userfault(vma); } /** - * i915_gem_object_release_mmap - remove physical page mappings - * @obj: obj in question - * - * Preserve the reservation of the mmapping with the DRM core code, but - * relinquish ownership of the pages back to the system. - * * It is vital that we remove the page mapping if we have mapped a tiled * object through the GTT and then lose the fence register due to * resource pressure. Similarly if the object has been moved out of the @@ -402,7 +458,7 @@ void __i915_gem_object_release_mmap(struct drm_i915_gem_object *obj) * mapping will then trigger a page fault on the next user access, allowing * fixup by i915_gem_fault(). */ -void i915_gem_object_release_mmap(struct drm_i915_gem_object *obj) +static void i915_gem_object_release_mmap_gtt(struct drm_i915_gem_object *obj) { struct drm_i915_private *i915 = to_i915(obj->base.dev); intel_wakeref_t wakeref; @@ -421,7 +477,7 @@ void i915_gem_object_release_mmap(struct drm_i915_gem_object *obj) if (!obj->userfault_count) goto out; - __i915_gem_object_release_mmap(obj); + __i915_gem_object_release_mmap_gtt(obj); /* Ensure that the CPU's PTE are revoked and there are not outstanding * memory transactions from userspace before we return. The TLB @@ -436,6 +492,34 @@ void i915_gem_object_release_mmap(struct drm_i915_gem_object *obj) intel_runtime_pm_put(&i915->runtime_pm, wakeref); } +static void i915_gem_object_release_mmap_offset(struct drm_i915_gem_object *obj) +{ + struct i915_mmap_offset *mmo; + + mutex_lock(&obj->mmo_lock); + list_for_each_entry(mmo, &obj->mmap_offsets, offset) { + if (mmo->mmap_type == I915_MMAP_TYPE_OFFSET_WC || + mmo->mmap_type == I915_MMAP_TYPE_OFFSET_WB || + mmo->mmap_type == I915_MMAP_TYPE_OFFSET_UC) + drm_vma_node_unmap(&mmo->vma_node, + obj->base.dev->anon_inode->i_mapping); + } + mutex_unlock(&obj->mmo_lock); +} + +/** + * i915_gem_object_release_mmap - remove physical page mappings + * @obj: obj in question + * + * Preserve the reservation of the mmapping with the DRM core code, but + * relinquish ownership of the pages back to the system. + */ +void i915_gem_object_release_mmap(struct drm_i915_gem_object *obj) +{ + i915_gem_object_release_mmap_gtt(obj); + i915_gem_object_release_mmap_offset(obj); +} + static void init_mmap_offset(struct drm_i915_gem_object *obj, struct i915_mmap_offset *mmo) { @@ -614,6 +698,42 @@ static const struct vm_operations_struct i915_gem_gtt_vm_ops = { .close = i915_gem_vm_close, }; +static const struct vm_operations_struct i915_gem_cpu_vm_ops = { + .fault = i915_gem_fault_cpu, + .open = i915_gem_vm_open, + .close = i915_gem_vm_close, +}; + +static const struct vm_operations_struct i915_gem_lmem_vm_ops = { + .fault = i915_gem_fault_lmem, + .open = i915_gem_vm_open, + .close = i915_gem_vm_close, +}; + +static void set_vmdata_mmap_offset(struct i915_mmap_offset *mmo, struct vm_area_struct *vma) +{ + switch (mmo->mmap_type) { + case I915_MMAP_TYPE_OFFSET_WC: + vma->vm_page_prot = + pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); + break; + case I915_MMAP_TYPE_OFFSET_WB: + vma->vm_page_prot = vm_get_page_prot(vma->vm_flags); + break; + case I915_MMAP_TYPE_OFFSET_UC: + vma->vm_page_prot = + pgprot_noncached(vm_get_page_prot(vma->vm_flags)); + break; + default: + break; + } + + if (i915_gem_object_is_lmem(mmo->obj)) + vma->vm_ops = &i915_gem_lmem_vm_ops; + else + vma->vm_ops = &i915_gem_cpu_vm_ops; +} + /* This overcomes the limitation in drm_gem_mmap's assignment of a * drm_gem_object as the vma->vm_private_data. 
Since we need to * be able to resolve multiple mmap offsets which could be tied @@ -677,7 +797,16 @@ int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma) vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot); vma->vm_private_data = mmo; - vma->vm_ops = &i915_gem_gtt_vm_ops; + switch (mmo->mmap_type) { + case I915_MMAP_TYPE_OFFSET_WC: + case I915_MMAP_TYPE_OFFSET_WB: + case I915_MMAP_TYPE_OFFSET_UC: + set_vmdata_mmap_offset(mmo, vma); + break; + case I915_MMAP_TYPE_GTT: + vma->vm_ops = &i915_gem_gtt_vm_ops; + break; + } return 0; } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 2bb0c779c850..fd58b9aea180 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -350,7 +350,7 @@ static inline void i915_gem_object_unpin_map(struct drm_i915_gem_object *obj) i915_gem_object_unpin_pages(obj); } -void __i915_gem_object_release_mmap(struct drm_i915_gem_object *obj); +void __i915_gem_object_release_mmap_gtt(struct drm_i915_gem_object *obj); void i915_gem_object_release_mmap(struct drm_i915_gem_object *obj); void diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index af63d1a0af14..5a9bd94b6760 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -871,7 +871,7 @@ void i915_gem_runtime_suspend(struct drm_i915_private *i915) list_for_each_entry_safe(obj, on, &i915->ggtt.userfault_list, userfault_link) - __i915_gem_object_release_mmap(obj); + __i915_gem_object_release_mmap_gtt(obj); /* * The fence will be lost when the device powers down. If any were From patchwork Fri Aug 9 22:26:39 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087855 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 625FD912 for ; Fri, 9 Aug 2019 22:28:21 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 4E22F21C9A for ; Fri, 9 Aug 2019 22:28:21 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 4275C2223E; Fri, 9 Aug 2019 22:28:21 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id D655721C9A for ; Fri, 9 Aug 2019 22:28:20 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 9EABA6EF0C; Fri, 9 Aug 2019 22:27:33 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by gabe.freedesktop.org (Postfix) with ESMTPS id 65DC46EF33; Fri, 9 Aug 2019 22:27:32 +0000 (UTC) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Aug 2019 15:27:32 -0700 X-ExtLoop1: 1 X-IronPort-AV: 
E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927125"
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v3 33/37] drm/i915: cpu-map based dumb buffers
Date: Fri, 9 Aug 2019 23:26:39 +0100
Message-Id: <20190809222643.23142-34-matthew.auld@intel.com>
In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com>
References: <20190809222643.23142-1-matthew.auld@intel.com>
Cc: Abdiel Janulgue , Tvrtko Ursulin , Daniele Ceraolo Spurio , dri-devel@lists.freedesktop.org

From: Abdiel Janulgue

If there is no aperture we can't use map_gtt to map dumb buffers, so we need a cpu-map based path to do it. We still prefer map_gtt on platforms that do have an aperture.

Signed-off-by: Abdiel Janulgue
Cc: Daniele Ceraolo Spurio
Cc: Tvrtko Ursulin
Cc: Matthew Auld
---
 drivers/gpu/drm/i915/gem/i915_gem_mman.c | 18 +++++++++++++++++- .../gpu/drm/i915/gem/i915_gem_object_types.h | 1 + drivers/gpu/drm/i915/i915_drv.c | 2 +- drivers/gpu/drm/i915/i915_drv.h | 2 +- 4 files changed, 20 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c index 304ea578fd30..4fe83e31c1b3 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c @@ -500,7 +500,8 @@ static void i915_gem_object_release_mmap_offset(struct drm_i915_gem_object *obj) list_for_each_entry(mmo, &obj->mmap_offsets, offset) { if (mmo->mmap_type == I915_MMAP_TYPE_OFFSET_WC || mmo->mmap_type == I915_MMAP_TYPE_OFFSET_WB || - mmo->mmap_type == I915_MMAP_TYPE_OFFSET_UC) + mmo->mmap_type == I915_MMAP_TYPE_OFFSET_UC || + mmo->mmap_type == I915_MMAP_TYPE_DUMB_WC) drm_vma_node_unmap(&mmo->vma_node, obj->base.dev->anon_inode->i_mapping); } @@ -602,6 +603,19 @@ __assign_gem_object_mmap_data(struct drm_file *file, return ret; } +int +i915_gem_mmap_dumb(struct drm_file *file, + struct drm_device *dev, + u32 handle, + u64 *offset) +{ + struct drm_i915_private *i915 = dev->dev_private; + enum i915_mmap_type mmap_type = HAS_MAPPABLE_APERTURE(i915) ?
+ I915_MMAP_TYPE_GTT : I915_MMAP_TYPE_DUMB_WC; + + return __assign_gem_object_mmap_data(file, handle, mmap_type, offset); +} + /** * i915_gem_mmap_gtt_ioctl - prepare an object for GTT mmap'ing * @dev: DRM device @@ -714,6 +728,7 @@ static void set_vmdata_mmap_offset(struct i915_mmap_offset *mmo, struct vm_area_ { switch (mmo->mmap_type) { case I915_MMAP_TYPE_OFFSET_WC: + case I915_MMAP_TYPE_DUMB_WC: vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); break; @@ -801,6 +816,7 @@ int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma) case I915_MMAP_TYPE_OFFSET_WC: case I915_MMAP_TYPE_OFFSET_WB: case I915_MMAP_TYPE_OFFSET_UC: + case I915_MMAP_TYPE_DUMB_WC: set_vmdata_mmap_offset(mmo, vma); break; case I915_MMAP_TYPE_GTT: diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index 4ea78d3c92a9..d280267689f9 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -67,6 +67,7 @@ enum i915_mmap_type { I915_MMAP_TYPE_OFFSET_WC, I915_MMAP_TYPE_OFFSET_WB, I915_MMAP_TYPE_OFFSET_UC, + I915_MMAP_TYPE_DUMB_WC, }; struct i915_mmap_offset { diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c index cf390092c927..f6a3daf696f6 100644 --- a/drivers/gpu/drm/i915/i915_drv.c +++ b/drivers/gpu/drm/i915/i915_drv.c @@ -2762,7 +2762,7 @@ static struct drm_driver driver = { .get_scanout_position = i915_get_crtc_scanoutpos, .dumb_create = i915_gem_dumb_create, - .dumb_map_offset = i915_gem_mmap_gtt, + .dumb_map_offset = i915_gem_mmap_dumb, .ioctls = i915_ioctls, .num_ioctls = ARRAY_SIZE(i915_ioctls), .fops = &i915_driver_fops, diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 5a5b90670e16..f93f55947b7c 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -2363,7 +2363,7 @@ i915_mutex_lock_interruptible(struct drm_device *dev) int i915_gem_dumb_create(struct drm_file *file_priv, struct drm_device *dev, struct drm_mode_create_dumb *args); -int i915_gem_mmap_gtt(struct drm_file *file_priv, struct drm_device *dev, +int i915_gem_mmap_dumb(struct drm_file *file_priv, struct drm_device *dev, u32 handle, u64 *offset); int i915_gem_mmap_gtt_version(void); From patchwork Fri Aug 9 22:26:40 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087885 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8C5AE13B1 for ; Fri, 9 Aug 2019 22:28:34 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 78D9A21C9A for ; Fri, 9 Aug 2019 22:28:34 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 6CE0D2223E; Fri, 9 Aug 2019 22:28:34 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id D247E21C9A for ; Fri, 9 Aug 2019 22:28:33 +0000 (UTC) Received: 
from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id ECD4C6EF34; Fri, 9 Aug 2019 22:27:46 +0000 (UTC)
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v3 34/37] drm/i915: support basic object migration
Date: Fri, 9 Aug 2019 23:26:40 +0100
Message-Id: <20190809222643.23142-35-matthew.auld@intel.com>
In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com>
References: <20190809222643.23142-1-matthew.auld@intel.com>
Cc: Abdiel Janulgue , CQ Tang , dri-devel@lists.freedesktop.org

We are going to want to be able to move objects between different regions, such as system memory and local memory. In the future everything should be just another region.
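As a sketch of the intended calling convention (mirroring the selftests added below; error handling trimmed, names are the ones this patch introduces or already uses):

	/*
	 * Sketch, not a drop-in: under struct_mutex, first check that the
	 * object can be moved (idle, unpinned, mmaps revoked), then swap its
	 * backing store over to the target region using a blitter context.
	 */
	mutex_lock(&i915->drm.struct_mutex);

	err = i915_gem_object_prepare_move(obj);
	if (!err)
		err = i915_gem_object_migrate(obj, ce, INTEL_MEMORY_SMEM);

	mutex_unlock(&i915->drm.struct_mutex);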
Signed-off-by: Matthew Auld Signed-off-by: Abdiel Janulgue Signed-off-by: CQ Tang Cc: Joonas Lahtinen Cc: Abdiel Janulgue --- drivers/gpu/drm/i915/gem/i915_gem_object.c | 140 ++++++++++++++++++ drivers/gpu/drm/i915/gem/i915_gem_object.h | 8 + drivers/gpu/drm/i915/gem/i915_gem_pages.c | 2 +- .../drm/i915/selftests/intel_memory_region.c | 129 ++++++++++++++++ 4 files changed, 278 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c index 24f737b00e84..5982aeaaa2e3 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c @@ -28,6 +28,8 @@ #include "i915_gem_clflush.h" #include "i915_gem_context.h" #include "i915_gem_object.h" +#include "i915_gem_object_blt.h" +#include "i915_gem_region.h" #include "i915_globals.h" #include "i915_trace.h" @@ -170,6 +172,144 @@ static void __i915_gem_free_object_rcu(struct rcu_head *head) atomic_dec(&i915->mm.free_count); } + +int i915_gem_object_prepare_move(struct drm_i915_gem_object *obj) +{ + int err; + + lockdep_assert_held(&obj->base.dev->struct_mutex); + + if (obj->mm.madv != I915_MADV_WILLNEED) + return -EINVAL; + + if (i915_gem_object_needs_bit17_swizzle(obj)) + return -EINVAL; + + if (atomic_read(&obj->mm.pages_pin_count) > + atomic_read(&obj->bind_count)) + return -EBUSY; + + if (obj->pin_global) + return -EBUSY; + + i915_gem_object_release_mmap(obj); + + GEM_BUG_ON(obj->mm.mapping); + GEM_BUG_ON(obj->base.filp && mapping_mapped(obj->base.filp->f_mapping)); + + err = i915_gem_object_wait(obj, + I915_WAIT_INTERRUPTIBLE | + I915_WAIT_LOCKED | + I915_WAIT_ALL, + MAX_SCHEDULE_TIMEOUT); + if (err) + return err; + + return i915_gem_object_unbind(obj, + I915_GEM_OBJECT_UNBIND_ACTIVE); +} + +int i915_gem_object_migrate(struct drm_i915_gem_object *obj, + struct intel_context *ce, + enum intel_region_id id) +{ + struct drm_i915_private *i915 = to_i915(obj->base.dev); + struct drm_i915_gem_object *donor; + struct intel_memory_region *mem; + struct sg_table *pages = NULL; + unsigned int page_sizes; + int err = 0; + + lockdep_assert_held(&i915->drm.struct_mutex); + + GEM_BUG_ON(id >= INTEL_MEMORY_UKNOWN); + GEM_BUG_ON(obj->mm.region->id == id); + GEM_BUG_ON(obj->mm.madv != I915_MADV_WILLNEED); + + mem = i915->regions[id]; + + donor = i915_gem_object_create_region(mem, obj->base.size, 0); + if (IS_ERR(donor)) + return PTR_ERR(donor); + + /* Copy backing-pages if we have to */ + if (i915_gem_object_has_pages(obj)) { + err = i915_gem_object_pin_pages(obj); + if (err) + goto err_put_donor; + + err = i915_gem_object_copy_blt(obj, donor, ce); + if (err) + goto err_put_donor; + + i915_gem_object_lock(donor); + err = i915_gem_object_set_to_cpu_domain(donor, false); + i915_gem_object_unlock(donor); + if (err) + goto err_put_donor; + + i915_retire_requests(i915); + + i915_gem_object_unbind(donor, 0); + err = i915_gem_object_unbind(obj, 0); + if (err) + goto err_put_donor; + + mutex_lock(&obj->mm.lock); + + pages = __i915_gem_object_unset_pages(obj); + obj->ops->put_pages(obj, pages); + + mutex_unlock(&obj->mm.lock); + + page_sizes = donor->mm.page_sizes.phys; + pages = __i915_gem_object_unset_pages(donor); + } + + if (obj->ops->release) + obj->ops->release(obj); + + mutex_lock(&obj->mm.lock); + + /* We need still need a little special casing for shmem */ + if (obj->base.filp) + fput(fetch_and_zero(&obj->base.filp)); + else if (donor->base.filp) { + atomic_long_inc(&donor->base.filp->f_count); + obj->base.filp = donor->base.filp; + } + + 
obj->base.size = donor->base.size; + obj->mm.region = mem; + obj->flags = donor->flags; + obj->ops = donor->ops; + obj->cache_level = donor->cache_level; + obj->cache_coherent = donor->cache_coherent; + obj->cache_dirty = donor->cache_dirty; + + list_replace_init(&donor->mm.blocks, &obj->mm.blocks); + + mutex_lock(&mem->obj_lock); + list_add(&obj->mm.region_link, &mem->objects); + mutex_unlock(&mem->obj_lock); + + /* set pages after migrated */ + if (pages) + __i915_gem_object_set_pages(obj, pages, page_sizes); + + mutex_unlock(&obj->mm.lock); + + GEM_BUG_ON(i915_gem_object_has_pages(donor)); + GEM_BUG_ON(i915_gem_object_has_pinned_pages(donor)); + +err_put_donor: + i915_gem_object_put(donor); + if (i915_gem_object_has_pinned_pages(obj)) + i915_gem_object_unpin_pages(obj); + + return err; +} + static void __i915_gem_free_objects(struct drm_i915_private *i915, struct llist_node *freed) { diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index fd58b9aea180..660880a71554 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -40,8 +40,16 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align); void i915_gem_close_object(struct drm_gem_object *gem, struct drm_file *file); void i915_gem_free_object(struct drm_gem_object *obj); +enum intel_region_id; +int i915_gem_object_prepare_move(struct drm_i915_gem_object *obj); +int i915_gem_object_migrate(struct drm_i915_gem_object *obj, + struct intel_context *ce, + enum intel_region_id id); + void i915_gem_flush_free_objects(struct drm_i915_private *i915); +void __i915_gem_object_reset_page_iter(struct drm_i915_gem_object *obj); + struct sg_table * __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj); void i915_gem_object_truncate(struct drm_i915_gem_object *obj); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c index 0b73860deaf8..9e0a7af6fa1d 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c @@ -143,7 +143,7 @@ void i915_gem_object_writeback(struct drm_i915_gem_object *obj) obj->ops->writeback(obj); } -static void __i915_gem_object_reset_page_iter(struct drm_i915_gem_object *obj) +void __i915_gem_object_reset_page_iter(struct drm_i915_gem_object *obj) { struct radix_tree_iter iter; void __rcu **slot; diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c index 4123e81a2bda..2b62e3361a5f 100644 --- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c +++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c @@ -500,6 +500,59 @@ static int igt_lmem_create(void *arg) return err; } +static int igt_smem_create_migrate(void *arg) +{ + struct drm_i915_private *i915 = arg; + struct intel_context *ce = i915->engine[BCS0]->kernel_context; + struct drm_i915_gem_object *obj; + int err; + + /* Switch object backing-store on create */ + obj = i915_gem_object_create_lmem(i915, PAGE_SIZE, 0); + if (IS_ERR(obj)) + return PTR_ERR(obj); + + err = i915_gem_object_migrate(obj, ce, INTEL_MEMORY_SMEM); + if (err) + goto out_put; + + err = i915_gem_object_pin_pages(obj); + if (err) + goto out_put; + + i915_gem_object_unpin_pages(obj); +out_put: + i915_gem_object_put(obj); + + return err; +} + +static int igt_lmem_create_migrate(void *arg) +{ + struct drm_i915_private *i915 = arg; + struct intel_context *ce = i915->engine[BCS0]->kernel_context; + struct drm_i915_gem_object 
*obj; + int err; + + /* Switch object backing-store on create */ + obj = i915_gem_object_create_shmem(i915, PAGE_SIZE); + if (IS_ERR(obj)) + return PTR_ERR(obj); + + err = i915_gem_object_migrate(obj, ce, INTEL_MEMORY_LMEM); + if (err) + goto out_put; + + err = i915_gem_object_pin_pages(obj); + if (err) + goto out_put; + + i915_gem_object_unpin_pages(obj); +out_put: + i915_gem_object_put(obj); + + return err; +} static int igt_lmem_write_gpu(void *arg) { struct drm_i915_private *i915 = arg; @@ -626,6 +679,79 @@ static int igt_lmem_write_cpu(void *arg) return err; } +static int igt_lmem_pages_migrate(void *arg) +{ + struct drm_i915_private *i915 = arg; + struct intel_context *ce = i915->engine[BCS0]->kernel_context; + struct drm_i915_gem_object *obj; + IGT_TIMEOUT(end_time); + I915_RND_STATE(prng); + u32 sz; + int err; + + sz = round_up(prandom_u32_state(&prng) % SZ_32M, PAGE_SIZE); + + obj = i915_gem_object_create_lmem(i915, sz, 0); + if (IS_ERR(obj)) + return PTR_ERR(obj); + + err = i915_gem_object_fill_blt(obj, ce, 0); + if (err) + goto out_put; + + do { + err = i915_gem_object_prepare_move(obj); + if (err) + goto out_put; + + if (i915_gem_object_is_lmem(obj)) { + err = i915_gem_object_migrate(obj, ce, INTEL_MEMORY_SMEM); + if (err) + goto out_put; + + if (i915_gem_object_is_lmem(obj)) { + pr_err("object still backed by lmem\n"); + err = -EINVAL; + } + + if (!list_empty(&obj->mm.blocks)) { + pr_err("object leaking memory region\n"); + err = -EINVAL; + } + + if (!i915_gem_object_has_struct_page(obj)) { + pr_err("object not backed by struct page\n"); + err = -EINVAL; + } + + } else { + err = i915_gem_object_migrate(obj, ce, INTEL_MEMORY_LMEM); + if (err) + goto out_put; + + if (i915_gem_object_has_struct_page(obj)) { + pr_err("object still backed by struct page\n"); + err = -EINVAL; + } + + if (!i915_gem_object_is_lmem(obj)) { + pr_err("object not backed by lmem\n"); + err = -EINVAL; + } + } + + if (!err) + err = i915_gem_object_fill_blt(obj, ce, 0xdeadbeaf); + if (err) + break; + } while (!__igt_timeout(end_time, NULL)); + +out_put: + i915_gem_object_put(obj); + + return err; +} + int intel_memory_region_mock_selftests(void) { static const struct i915_subtest tests[] = { @@ -669,6 +795,9 @@ int intel_memory_region_live_selftests(struct drm_i915_private *i915) SUBTEST(igt_lmem_create), SUBTEST(igt_lmem_write_cpu), SUBTEST(igt_lmem_write_gpu), + SUBTEST(igt_smem_create_migrate), + SUBTEST(igt_lmem_create_migrate), + SUBTEST(igt_lmem_pages_migrate), }; int err; From patchwork Fri Aug 9 22:26:41 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Auld X-Patchwork-Id: 11087883 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EC588912 for ; Fri, 9 Aug 2019 22:28:32 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D894521C9A for ; Fri, 9 Aug 2019 22:28:32 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id CCB7E2223E; Fri, 9 Aug 2019 22:28:32 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with 
Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177])
	(using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested) by mail.wl.linuxfoundation.org
	(Postfix) with ESMTPS id 4703321C9A for ;
	Fri, 9 Aug 2019 22:28:32 +0000 (UTC)
Received: from gabe.freedesktop.org (localhost [127.0.0.1])
	by gabe.freedesktop.org (Postfix) with ESMTP id A30B76EF2D;
	Fri, 9 Aug 2019 22:27:47 +0000 (UTC)
X-Original-To: dri-devel@lists.freedesktop.org
Delivered-To: dri-devel@lists.freedesktop.org
Received: from mga11.intel.com (mga11.intel.com [192.55.52.93])
	by gabe.freedesktop.org (Postfix) with ESMTPS id A28C96EEE4;
	Fri, 9 Aug 2019 22:27:35 +0000 (UTC)
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga007.fm.intel.com ([10.253.24.52])
	by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
	09 Aug 2019 15:27:35 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927133"
Received: from jmath3-mobl1.ger.corp.intel.com (HELO
	mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86])
	by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:34 -0700
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v3 35/37] drm/i915: Introduce GEM_OBJECT_SETPARAM with
	I915_PARAM_MEMORY_REGION
Date: Fri, 9 Aug 2019 23:26:41 +0100
Message-Id: <20190809222643.23142-36-matthew.auld@intel.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com>
References: <20190809222643.23142-1-matthew.auld@intel.com>
MIME-Version: 1.0
X-BeenThere: dri-devel@lists.freedesktop.org
X-Mailman-Version: 2.1.23
Precedence: list
List-Id: Direct Rendering Infrastructure - Development
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Cc: Abdiel Janulgue , dri-devel@lists.freedesktop.org
Errors-To: dri-devel-bounces@lists.freedesktop.org
Sender: "dri-devel"
X-Virus-Scanned: ClamAV using ClamSMTP

From: Abdiel Janulgue

This call specifies the memory region in which an object should be
placed. Note that an object's backing storage should be changed
immediately after it is created, or while it is not yet in use;
otherwise the call will fail on a busy object.
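[Editor's note: a minimal userspace sketch of how this would be driven,
assuming a kernel and uAPI headers with this series applied; the helper
name, drm_fd and handle are illustrative, not part of the patch. The
param value composes the object class (upper 32 bits) with the parameter
id (lower 32 bits), matching the demux in i915_gem_setparam_ioctl() below.]

	#include <stdint.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <drm/i915_drm.h>

	/* Ask the kernel to place @handle in @region_id; a sole entry in
	 * the preference list forces that region.
	 */
	static int set_object_region(int drm_fd, uint32_t handle,
				     uint32_t region_id)
	{
		struct drm_i915_gem_object_param arg;
		uint32_t regions[1] = { region_id };

		memset(&arg, 0, sizeof(arg));
		arg.handle = handle;
		arg.size = 1; /* number of entries in the region list */
		arg.param = I915_OBJECT_PARAM | I915_PARAM_MEMORY_REGION;
		arg.data = (uintptr_t)regions;

		return ioctl(drm_fd, DRM_IOCTL_I915_GEM_OBJECT_SETPARAM, &arg);
	}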
Signed-off-by: Abdiel Janulgue
Cc: Joonas Lahtinen
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c |  17 +++
 drivers/gpu/drm/i915/gem/i915_gem_context.h |   2 +
 drivers/gpu/drm/i915/gem/i915_gem_ioctls.h  |   2 +
 drivers/gpu/drm/i915/gem/i915_gem_object.c  | 115 ++++++++++++++++++++
 drivers/gpu/drm/i915/i915_drv.c             |   2 +-
 include/uapi/drm/i915_drm.h                 |  23 ++++
 6 files changed, 160 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index b407baaf0014..572033ac6e3b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -76,6 +76,7 @@
 #include "i915_globals.h"
 #include "i915_trace.h"
 #include "i915_user_extensions.h"
+#include "i915_gem_ioctls.h"
 
 #define ALL_L3_SLICES(dev) (1 << NUM_L3_SLICES(dev)) - 1
 
@@ -2308,6 +2309,22 @@ int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data,
 	return ret;
 }
 
+int i915_gem_setparam_ioctl(struct drm_device *dev, void *data,
+			    struct drm_file *file)
+{
+	struct drm_i915_gem_context_param *args = data;
+	u32 object_class = upper_32_bits(args->param);
+
+	switch (object_class) {
+	case 0:
+		return i915_gem_context_setparam_ioctl(dev, data, file);
+	case 1:
+		return i915_gem_object_setparam_ioctl(dev, data, file);
+
+	}
+	return -EINVAL;
+}
+
 int i915_gem_context_reset_stats_ioctl(struct drm_device *dev,
				       void *data, struct drm_file *file)
 {
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.h b/drivers/gpu/drm/i915/gem/i915_gem_context.h
index 106e2ccf7a4c..1cfcf1e6bbb9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.h
@@ -157,6 +157,8 @@ int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
				    struct drm_file *file_priv);
 int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data,
				    struct drm_file *file_priv);
+int i915_gem_setparam_ioctl(struct drm_device *dev, void *data,
+			    struct drm_file *file);
 int i915_gem_context_reset_stats_ioctl(struct drm_device *dev, void *data,
				       struct drm_file *file);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ioctls.h b/drivers/gpu/drm/i915/gem/i915_gem_ioctls.h
index 5abd5b2172f2..af7465bceebd 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ioctls.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ioctls.h
@@ -32,6 +32,8 @@ int i915_gem_mmap_gtt_ioctl(struct drm_device *dev, void *data,
			    struct drm_file *file);
 int i915_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
			       struct drm_file *file_priv);
+int i915_gem_object_setparam_ioctl(struct drm_device *dev, void *data,
+				   struct drm_file *file_priv);
 int i915_gem_pread_ioctl(struct drm_device *dev, void *data,
			 struct drm_file *file);
 int i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 5982aeaaa2e3..52ea65f203a1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -506,6 +506,121 @@ int __init i915_global_objects_init(void)
 	return 0;
 }
 
+static enum intel_region_id
+__region_id(u32 region)
+{
+	enum intel_region_id id;
+
+	for (id = 0; id < INTEL_MEMORY_UKNOWN; ++id) {
+		if (intel_region_map[id] == region)
+			return id;
+	}
+
+	return INTEL_MEMORY_UKNOWN;
+}
+
+static int i915_gem_object_region_select(struct drm_i915_private *dev_priv,
+					 struct drm_i915_gem_object_param *args,
+					 struct drm_file *file,
+					 struct drm_i915_gem_object *obj)
+{
+	struct intel_context *ce = dev_priv->engine[BCS0]->kernel_context;
+	u32 __user *uregions = u64_to_user_ptr(args->data);
+	u32 uregions_copy[INTEL_MEMORY_UKNOWN];
+	int i, ret;
+
+	if (args->size > INTEL_MEMORY_UKNOWN)
+		return -EINVAL;
+
+	memset(uregions_copy, 0, sizeof(uregions_copy));
+	for (i = 0; i < args->size; i++) {
+		u32 region;
+
+		ret = get_user(region, uregions);
+		if (ret)
+			return ret;
+
+		uregions_copy[i] = region;
+		++uregions;
+	}
+
+	mutex_lock(&dev_priv->drm.struct_mutex);
+	ret = i915_gem_object_prepare_move(obj);
+	if (ret) {
+		DRM_ERROR("Cannot set memory region, object in use\n");
+		goto err;
+	}
+
+	for (i = 0; i < args->size; i++) {
+		u32 region = uregions_copy[i];
+		enum intel_region_id id = __region_id(region);
+
+		if (id == INTEL_MEMORY_UKNOWN) {
+			ret = -EINVAL;
+			goto err;
+		}
+
+		ret = i915_gem_object_migrate(obj, ce, id);
+		if (!ret) {
+			if (!i915_gem_object_has_pages(obj) &&
+			    MEMORY_TYPE_FROM_REGION(region) == INTEL_LMEM) {
+				/*
+				 * TODO: this should be part of get_pages(),
+				 * when async get_pages arrives
+				 */
+				ret = i915_gem_object_fill_blt(obj, ce, 0);
+				if (ret) {
+					DRM_ERROR("Failed clearing the object\n");
+					goto err;
+				}
+
+				i915_gem_object_lock(obj);
+				ret = i915_gem_object_set_to_cpu_domain(obj, false);
+				i915_gem_object_unlock(obj);
+				if (ret)
+					goto err;
+			}
+			break;
+		}
+	}
+err:
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+	return ret;
+}
+
+int i915_gem_object_setparam_ioctl(struct drm_device *dev, void *data,
+				   struct drm_file *file)
+{
+
+	struct drm_i915_gem_object_param *args = data;
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	struct drm_i915_gem_object *obj;
+	int ret;
+
+	obj = i915_gem_object_lookup(file, args->handle);
+	if (!obj)
+		return -ENOENT;
+
+	switch (lower_32_bits(args->param)) {
+	case I915_PARAM_MEMORY_REGION:
+		ret = i915_gem_object_region_select(dev_priv, args, file, obj);
+		if (ret) {
+			DRM_ERROR("Cannot set memory region, migration failed\n");
+			goto err;
+		}
+
+		break;
+	default:
+		ret = -EINVAL;
+		break;
+	}
+
+err:
+	i915_gem_object_put(obj);
+	return ret;
+}
+
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/huge_gem_object.c"
 #include "selftests/huge_pages.c"
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index f6a3daf696f6..845e80c2acc0 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -2729,7 +2729,7 @@ static const struct drm_ioctl_desc i915_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(I915_GET_RESET_STATS, i915_gem_context_reset_stats_ioctl, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_USERPTR, i915_gem_userptr_ioctl, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_GETPARAM, i915_gem_context_getparam_ioctl, DRM_RENDER_ALLOW),
-	DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_SETPARAM, i915_gem_context_setparam_ioctl, DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_SETPARAM, i915_gem_setparam_ioctl, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_PERF_OPEN, i915_perf_open_ioctl, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_PERF_ADD_CONFIG, i915_perf_add_config_ioctl, DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_PERF_REMOVE_CONFIG, i915_perf_remove_config_ioctl, DRM_RENDER_ALLOW),
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index fb84aed10825..75d79c17e91b 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -360,6 +360,7 @@ typedef struct _drm_i915_sarea {
 #define DRM_I915_GEM_VM_CREATE		0x3a
 #define DRM_I915_GEM_VM_DESTROY		0x3b
 #define DRM_I915_GEM_MMAP_OFFSET	DRM_I915_GEM_MMAP_GTT
+#define DRM_I915_GEM_OBJECT_SETPARAM	DRM_I915_GEM_CONTEXT_SETPARAM
 /* Must be kept compact -- no holes */
 
 #define DRM_IOCTL_I915_INIT		DRM_IOW( DRM_COMMAND_BASE + DRM_I915_INIT, drm_i915_init_t)
@@ -423,6 +424,7 @@ typedef struct _drm_i915_sarea {
 #define DRM_IOCTL_I915_GEM_VM_CREATE	DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_VM_CREATE, struct drm_i915_gem_vm_control)
 #define DRM_IOCTL_I915_GEM_VM_DESTROY	DRM_IOW (DRM_COMMAND_BASE + DRM_I915_GEM_VM_DESTROY, struct drm_i915_gem_vm_control)
 #define DRM_IOCTL_I915_GEM_MMAP_OFFSET	DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_MMAP_OFFSET, struct drm_i915_gem_mmap_offset)
+#define DRM_IOCTL_I915_GEM_OBJECT_SETPARAM	DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_OBJECT_SETPARAM, struct drm_i915_gem_object_param)
 
 /* Allow drivers to submit batchbuffers directly to hardware, relying
  * on the security mechanisms provided by hardware.
@@ -1601,6 +1603,27 @@ struct drm_i915_gem_context_param {
 	__u64 value;
 };
 
+struct drm_i915_gem_object_param {
+	/** Handle for the object */
+	__u32 handle;
+
+	__u32 size;
+
+	/** Set the memory region for the object listed in preference order
+	 * as an array of region ids within data. To force an object
+	 * to a particular memory region, set the region as the sole entry.
+	 *
+	 * Valid region ids are derived from the id field of
+	 * struct drm_i915_memory_region_info.
+	 * See struct drm_i915_query_memory_region_info.
+	 */
+#define I915_OBJECT_PARAM (1ull<<32)
+#define I915_PARAM_MEMORY_REGION 0x1
+	__u64 param;
+
+	__u64 data;
+};
+
 /**
  * Context SSEU programming
  *

From patchwork Fri Aug 9 22:26:42 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 11087879
Return-Path:
Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org
	[172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix)
	with ESMTP id A48AC13B1 for ; Fri, 9 Aug 2019 22:28:29 +0000 (UTC)
Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1])
	by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9189021C9A
	for ; Fri, 9 Aug 2019 22:28:29 +0000 (UTC)
Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486)
	id 8597A2223E; Fri, 9 Aug 2019 22:28:29 +0000 (UTC)
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on
	pdx-wl-mail.web.codeaurora.org
X-Spam-Level:
X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI,
	RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1
Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177])
	(using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested) by mail.wl.linuxfoundation.org
	(Postfix) with ESMTPS id 397FF21C9A for ;
	Fri, 9 Aug 2019 22:28:29 +0000 (UTC)
Received: from gabe.freedesktop.org (localhost [127.0.0.1])
	by gabe.freedesktop.org (Postfix) with ESMTP id F12F66EF35;
	Fri, 9 Aug 2019 22:27:46 +0000 (UTC)
X-Original-To: dri-devel@lists.freedesktop.org
Delivered-To: dri-devel@lists.freedesktop.org
Received: from mga11.intel.com (mga11.intel.com [192.55.52.93])
	by gabe.freedesktop.org (Postfix) with ESMTPS id 0FBB46EEF6;
	Fri, 9 Aug 2019 22:27:37 +0000 (UTC)
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga007.fm.intel.com ([10.253.24.52])
	by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
	09 Aug 2019 15:27:36 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927137"
Received: from jmath3-mobl1.ger.corp.intel.com (HELO
	mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86])
	by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:35 -0700
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v3 36/37] drm/i915/query: Expose memory regions through the
	query uAPI
Date: Fri, 9 Aug 2019 23:26:42 +0100
Message-Id: <20190809222643.23142-37-matthew.auld@intel.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com>
References: <20190809222643.23142-1-matthew.auld@intel.com>
MIME-Version: 1.0
X-BeenThere: dri-devel@lists.freedesktop.org
X-Mailman-Version: 2.1.23
Precedence: list
List-Id: Direct Rendering Infrastructure - Development
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Cc: Abdiel Janulgue , dri-devel@lists.freedesktop.org
Errors-To: dri-devel-bounces@lists.freedesktop.org
Sender: "dri-devel"
X-Virus-Scanned: ClamAV using ClamSMTP

From: Abdiel Janulgue

Returns the available memory region areas supported by the HW.

Signed-off-by: Abdiel Janulgue
Cc: Joonas Lahtinen
---
 drivers/gpu/drm/i915/i915_query.c | 57 +++++++++++++++++++++++++++++++
 include/uapi/drm/i915_drm.h       | 39 +++++++++++++++++++++
 2 files changed, 96 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_query.c b/drivers/gpu/drm/i915/i915_query.c
index ad9240a0817a..69a2a906feef 100644
--- a/drivers/gpu/drm/i915/i915_query.c
+++ b/drivers/gpu/drm/i915/i915_query.c
@@ -142,10 +142,67 @@ query_engine_info(struct drm_i915_private *i915,
 	return len;
 }
 
+static int query_memregion_info(struct drm_i915_private *dev_priv,
+				struct drm_i915_query_item *query_item)
+{
+	struct drm_i915_query_memory_region_info __user *query_ptr =
+		u64_to_user_ptr(query_item->data_ptr);
+	struct drm_i915_memory_region_info __user *info_ptr =
+		&query_ptr->regions[0];
+	struct drm_i915_memory_region_info info = { };
+	struct drm_i915_query_memory_region_info query;
+	u32 total_length;
+	int ret, i;
+
+	if (query_item->flags != 0)
+		return -EINVAL;
+
+	total_length = sizeof(struct drm_i915_query_memory_region_info);
+	for (i = 0; i < ARRAY_SIZE(dev_priv->regions); ++i) {
+		struct intel_memory_region *region = dev_priv->regions[i];
+
+		if (!region)
+			continue;
+
+		total_length += sizeof(struct drm_i915_memory_region_info);
+	}
+
+	ret = copy_query_item(&query, sizeof(query), total_length,
+			      query_item);
+	if (ret != 0)
+		return ret;
+
+	if (query.num_regions || query.rsvd[0] || query.rsvd[1] ||
+	    query.rsvd[2])
+		return -EINVAL;
+
+	for (i = 0; i < ARRAY_SIZE(dev_priv->regions); ++i) {
+		struct intel_memory_region *region = dev_priv->regions[i];
+
+		if (!region)
+			continue;
+
+		info.id = region->id;
+		info.size = resource_size(&region->region);
+
+		if (__copy_to_user(info_ptr, &info, sizeof(info)))
+			return -EFAULT;
+
+		query.num_regions++;
+		info_ptr++;
+	}
+
+	if (__copy_to_user(query_ptr, &query, sizeof(query)))
+		return -EFAULT;
+
+	return total_length;
+}
+
 static int (* const i915_query_funcs[])(struct drm_i915_private *dev_priv,
					struct drm_i915_query_item *query_item) = {
 	query_topology_info,
 	query_engine_info,
+	query_memregion_info,
 };
 
 int i915_query_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 75d79c17e91b..7ef037f58e1b 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -2038,6 +2038,7 @@ struct drm_i915_query_item {
 	__u64 query_id;
 #define DRM_I915_QUERY_TOPOLOGY_INFO	1
 #define DRM_I915_QUERY_ENGINE_INFO	2
+#define DRM_I915_QUERY_MEMREGION_INFO	3
 /* Must be kept compact -- no holes and well documented */
 
 /*
@@ -2177,6 +2178,44 @@ struct drm_i915_query_engine_info {
 	struct drm_i915_engine_info engines[];
 };
 
+struct drm_i915_memory_region_info {
+
+	/** Base type of a region
+	 */
+#define I915_SYSTEM_MEMORY 0
+#define I915_DEVICE_MEMORY 1
+
+	/** The region id is encoded in a layout which makes it possible to
+	 *  retrieve the following information:
+	 *
+	 *  Base type: log2(ID >> 16)
+	 *  Instance: log2(ID & 0xffff)
+	 */
+	__u32 id;
+
+	/** Reserved field. MBZ */
+	__u32 rsvd0;
+
+	/** Unused for now. MBZ */
+	__u64 flags;
+
+	__u64 size;
+
+	/** Reserved fields must be cleared to zero. */
+	__u64 rsvd1[4];
+};
+
+struct drm_i915_query_memory_region_info {
+
+	/** Number of struct drm_i915_memory_region_info structs */
+	__u32 num_regions;
+
+	/** MBZ */
+	__u32 rsvd[3];
+
+	struct drm_i915_memory_region_info regions[];
+};
+
 #if defined(__cplusplus)
 }
 #endif
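[Editor's note: a hedged userspace sketch of consuming this query with the
standard two-pass DRM_IOCTL_I915_QUERY pattern (probe the required length,
allocate zeroed storage since num_regions and rsvd[] must be zero, then
query again). The helper name and drm_fd are illustrative, and the
DRM_I915_QUERY_MEMREGION_INFO id assumes headers with this series applied;
struct drm_i915_query and drm_i915_query_item are existing uAPI. Each
returned id encodes type = log2(id >> 16) and instance = log2(id & 0xffff).]

	#include <stdint.h>
	#include <stdlib.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <drm/i915_drm.h>

	static struct drm_i915_query_memory_region_info *
	query_regions(int drm_fd)
	{
		struct drm_i915_query_memory_region_info *info;
		struct drm_i915_query_item item;
		struct drm_i915_query q;

		memset(&item, 0, sizeof(item));
		item.query_id = DRM_I915_QUERY_MEMREGION_INFO;

		memset(&q, 0, sizeof(q));
		q.num_items = 1;
		q.items_ptr = (uintptr_t)&item;

		/* First pass: kernel fills item.length with required size. */
		if (ioctl(drm_fd, DRM_IOCTL_I915_QUERY, &q) || item.length <= 0)
			return NULL;

		info = calloc(1, item.length); /* MBZ fields stay zero */
		if (!info)
			return NULL;

		/* Second pass: kernel writes num_regions and regions[]. */
		item.data_ptr = (uintptr_t)info;
		if (ioctl(drm_fd, DRM_IOCTL_I915_QUERY, &q)) {
			free(info);
			return NULL;
		}

		return info;
	}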
From patchwork Fri Aug 9 22:26:43 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 11087881
Return-Path:
Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org
	[172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix)
	with ESMTP id 6C33C912 for ; Fri, 9 Aug 2019 22:28:31 +0000 (UTC)
Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1])
	by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 58DB421C9A
	for ; Fri, 9 Aug 2019 22:28:31 +0000 (UTC)
Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486)
	id 4D0F52223E; Fri, 9 Aug 2019 22:28:31 +0000 (UTC)
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on
	pdx-wl-mail.web.codeaurora.org
X-Spam-Level:
X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI,
	RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1
Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177])
	(using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested) by mail.wl.linuxfoundation.org
	(Postfix) with ESMTPS id A7C1D21C9A for ;
	Fri, 9 Aug 2019 22:28:30 +0000 (UTC)
Received: from gabe.freedesktop.org (localhost [127.0.0.1])
	by gabe.freedesktop.org (Postfix) with ESMTP id DDBC76EF32;
	Fri, 9 Aug 2019 22:27:47 +0000 (UTC)
X-Original-To: dri-devel@lists.freedesktop.org
Delivered-To: dri-devel@lists.freedesktop.org
Received: from mga11.intel.com (mga11.intel.com [192.55.52.93])
	by gabe.freedesktop.org (Postfix) with ESMTPS id 9005E6EF2C;
	Fri, 9 Aug 2019 22:27:38 +0000 (UTC)
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga007.fm.intel.com ([10.253.24.52])
	by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
	09 Aug 2019 15:27:38 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.64,366,1559545200"; d="scan'208";a="176927141"
Received: from jmath3-mobl1.ger.corp.intel.com (HELO
	mwahaha-bdw.ger.corp.intel.com) ([10.252.5.86])
	by fmsmga007.fm.intel.com with ESMTP; 09 Aug 2019 15:27:37 -0700
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v3 37/37] HAX drm/i915: add the fake lmem region
Date: Fri, 9 Aug 2019 23:26:43 +0100
Message-Id: <20190809222643.23142-38-matthew.auld@intel.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190809222643.23142-1-matthew.auld@intel.com>
References: <20190809222643.23142-1-matthew.auld@intel.com>
MIME-Version: 1.0
X-BeenThere: dri-devel@lists.freedesktop.org
X-Mailman-Version: 2.1.23
Precedence: list
List-Id: Direct Rendering Infrastructure - Development
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Cc: Abdiel Janulgue , dri-devel@lists.freedesktop.org
Errors-To: dri-devel-bounces@lists.freedesktop.org
Sender: "dri-devel"
X-Virus-Scanned: ClamAV using ClamSMTP

Intended for upstream testing, so that we can still exercise the LMEM
plumbing and the !HAS_MAPPABLE_APERTURE paths. Smoke tested on a Skull
Canyon device.

This works by allocating an intel_memory_region for a reserved portion
of system memory, which we treat like LMEM. For the LMEMBAR we steal
the aperture and 1:1 map it to the stolen region.

To enable, simply set i915_fake_lmem_start= on the kernel cmdline to
the start of the reserved region (see memmap=). The size of the region
we can use is determined by the size of the mappable aperture, so the
size of the reserved region should be >= mappable_end. e.g.

	memmap=2G$16G i915_fake_lmem_start=0x400000000

Signed-off-by: Matthew Auld
Cc: Joonas Lahtinen
Cc: Abdiel Janulgue
---
 arch/x86/kernel/early-quirks.c             | 26 ++++++++
 drivers/gpu/drm/i915/gem/i915_gem_lmem.c   |  3 +
 drivers/gpu/drm/i915/i915_drv.c            |  8 +++
 drivers/gpu/drm/i915/i915_gem_gtt.c        |  3 +
 drivers/gpu/drm/i915/intel_memory_region.h |  4 ++
 drivers/gpu/drm/i915/intel_region_lmem.c   | 69 ++++++++++++++++++++
 drivers/gpu/drm/i915/intel_region_lmem.h   |  5 ++
 include/drm/i915_drm.h                     |  3 +
 8 files changed, 121 insertions(+)

diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
index 6f6b1d04dadf..9b04655e3926 100644
--- a/arch/x86/kernel/early-quirks.c
+++ b/arch/x86/kernel/early-quirks.c
@@ -603,6 +603,32 @@ static void __init intel_graphics_quirks(int num, int slot, int func)
 	}
 }
 
+struct resource intel_graphics_fake_lmem_res __ro_after_init = DEFINE_RES_MEM(0, 0);
+EXPORT_SYMBOL(intel_graphics_fake_lmem_res);
+
+static int __init early_i915_fake_lmem_init(char *s)
+{
+	u64 start;
+	int ret;
+
+	if (*s == '=')
+		s++;
+
+	ret = kstrtoull(s, 16, &start);
+	if (ret)
+		return ret;
+
+	intel_graphics_fake_lmem_res.start = start;
+	intel_graphics_fake_lmem_res.end = SZ_2G; /* Placeholder; depends on aperture size */
+
+	printk(KERN_INFO "Intel graphics fake LMEM starts at %pa\n",
+	       &intel_graphics_fake_lmem_res.start);
+
+	return 0;
+}
+
+early_param("i915_fake_lmem_start", early_i915_fake_lmem_init);
+
 static void __init force_disable_hpet(int num, int slot, int func)
 {
 #ifdef CONFIG_HPET_TIMER
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
index 2194e2c3bdcd..bcdc7fd099af 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
@@ -252,6 +252,7 @@ void __iomem *i915_gem_object_lmem_io_map_page(struct drm_i915_gem_object *obj,
 	resource_size_t offset;
 
 	offset = i915_gem_object_get_dma_address(obj, n);
+	offset -= intel_graphics_fake_lmem_res.start;
 
 	return io_mapping_map_wc(&obj->mm.region->iomap, offset, PAGE_SIZE);
 }
@@ -262,6 +263,7 @@ void __iomem *i915_gem_object_lmem_io_map_page_atomic(struct drm_i915_gem_object
 	resource_size_t offset;
 
 	offset = i915_gem_object_get_dma_address(obj, n);
+	offset -= intel_graphics_fake_lmem_res.start;
 
 	return io_mapping_map_atomic_wc(&obj->mm.region->iomap, offset);
 }
@@ -275,6 +277,7 @@ void __iomem *i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj,
 	GEM_BUG_ON(!(obj->flags & I915_BO_ALLOC_CONTIGUOUS));
 
 	offset = i915_gem_object_get_dma_address(obj, n);
+	offset -= intel_graphics_fake_lmem_res.start;
 
 	return io_mapping_map_wc(&obj->mm.region->iomap, offset,
				 size);
 }
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 845e80c2acc0..f71685a6d49b 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1474,6 +1474,14 @@ int i915_driver_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (!i915_modparams.nuclear_pageflip && match_info->gen < 5)
 		dev_priv->drm.driver_features &= ~DRIVER_ATOMIC;
 
+	/* Check if we support fake LMEM -- enable for live selftests */
+	if (INTEL_GEN(dev_priv) >= 9 && i915_selftest.live &&
+	    intel_graphics_fake_lmem_res.start) {
+		mkwrite_device_info(dev_priv)->memory_regions =
+			REGION_SMEM | REGION_LMEM;
+		GEM_BUG_ON(!HAS_LMEM(dev_priv));
+	}
+
 	ret = pci_enable_device(pdev);
 	if (ret)
 		goto out_fini;
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 0819ac9837dc..355268d85374 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2747,6 +2747,9 @@ int i915_gem_init_memory_regions(struct drm_i915_private *i915)
 		case INTEL_STOLEN:
 			mem = i915_gem_stolen_setup(i915);
 			break;
+		case INTEL_LMEM:
+			mem = intel_setup_fake_lmem(i915);
+			break;
 		}
 
 		if (IS_ERR(mem)) {
diff --git a/drivers/gpu/drm/i915/intel_memory_region.h b/drivers/gpu/drm/i915/intel_memory_region.h
index 340411dcf86b..13b3daec4127 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.h
+++ b/drivers/gpu/drm/i915/intel_memory_region.h
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 
 #include "i915_buddy.h"
 
@@ -69,6 +70,9 @@ struct intel_memory_region {
 	struct io_mapping iomap;
 	struct resource region;
 
+	/* For faking for lmem */
+	struct drm_mm_node fake_mappable;
+
 	struct i915_buddy_mm mm;
 	struct mutex mm_lock;
 
diff --git a/drivers/gpu/drm/i915/intel_region_lmem.c b/drivers/gpu/drm/i915/intel_region_lmem.c
index 7f1543e2759c..b8f671634919 100644
--- a/drivers/gpu/drm/i915/intel_region_lmem.c
+++ b/drivers/gpu/drm/i915/intel_region_lmem.c
@@ -41,9 +41,41 @@ lmem_create_object(struct intel_memory_region *mem,
 	return obj;
 }
 
+static int init_fake_lmem_bar(struct intel_memory_region *mem)
+{
+	struct drm_i915_private *i915 = mem->i915;
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	unsigned long n;
+	int ret;
+
+	mem->fake_mappable.start = 0;
+	mem->fake_mappable.size = resource_size(&mem->region);
+	mem->fake_mappable.color = I915_COLOR_UNEVICTABLE;
+
+	ret = drm_mm_reserve_node(&ggtt->vm.mm, &mem->fake_mappable);
+	if (ret)
+		return ret;
+
+	/* 1:1 map the mappable aperture to our reserved region */
+	for (n = 0; n < mem->fake_mappable.size >> PAGE_SHIFT; ++n) {
+		ggtt->vm.insert_page(&ggtt->vm,
+				     mem->region.start + (n << PAGE_SHIFT),
+				     n << PAGE_SHIFT, I915_CACHE_NONE, 0);
+	}
+
+	return 0;
+}
+
+static void release_fake_lmem_bar(struct intel_memory_region *mem)
+{
+	if (drm_mm_node_allocated(&mem->fake_mappable))
+		drm_mm_remove_node(&mem->fake_mappable);
+}
+
 static void
 region_lmem_release(struct intel_memory_region *mem)
 {
+	release_fake_lmem_bar(mem);
 	io_mapping_fini(&mem->iomap);
 	intel_memory_region_release_buddy(mem);
 }
@@ -53,6 +85,11 @@ region_lmem_init(struct intel_memory_region *mem)
 {
 	int ret;
 
+	if (intel_graphics_fake_lmem_res.start) {
+		ret = init_fake_lmem_bar(mem);
+		GEM_BUG_ON(ret);
+	}
+
 	if (!io_mapping_init_wc(&mem->iomap,
				mem->io_start,
				resource_size(&mem->region)))
@@ -70,3 +107,35 @@ const struct intel_memory_region_ops intel_region_lmem_ops = {
 	.release = region_lmem_release,
 	.create_object = lmem_create_object,
 };
+
+struct intel_memory_region *
+intel_setup_fake_lmem(struct drm_i915_private *i915)
+{
+	struct pci_dev *pdev = i915->drm.pdev;
+	struct intel_memory_region *mem;
+	resource_size_t mappable_end;
+	resource_size_t io_start;
+	resource_size_t start;
+
+	GEM_BUG_ON(HAS_MAPPABLE_APERTURE(i915));
+	GEM_BUG_ON(!intel_graphics_fake_lmem_res.start);
+
+	/* Your mappable aperture belongs to me now! */
+	mappable_end = pci_resource_len(pdev, 2);
+	io_start = pci_resource_start(pdev, 2);
+	start = intel_graphics_fake_lmem_res.start;
+
+	mem = intel_memory_region_create(i915,
+					 start,
+					 mappable_end,
+					 I915_GTT_PAGE_SIZE_4K,
+					 io_start,
+					 &intel_region_lmem_ops);
+	if (!IS_ERR(mem)) {
+		DRM_INFO("Intel graphics fake LMEM: %pR\n", &mem->region);
+		DRM_INFO("Intel graphics fake LMEM IO start: %llx\n",
+			 (u64)mem->io_start);
+	}
+
+	return mem;
+}
diff --git a/drivers/gpu/drm/i915/intel_region_lmem.h b/drivers/gpu/drm/i915/intel_region_lmem.h
index ed2a3bab6443..213def7c7b8a 100644
--- a/drivers/gpu/drm/i915/intel_region_lmem.h
+++ b/drivers/gpu/drm/i915/intel_region_lmem.h
@@ -6,6 +6,11 @@
 #ifndef __INTEL_REGION_LMEM_H
 #define __INTEL_REGION_LMEM_H
 
+struct drm_i915_private;
+
 extern const struct intel_memory_region_ops intel_region_lmem_ops;
 
+struct intel_memory_region *
+intel_setup_fake_lmem(struct drm_i915_private *i915);
+
 #endif /* !__INTEL_REGION_LMEM_H */
diff --git a/include/drm/i915_drm.h b/include/drm/i915_drm.h
index 23274cf92712..4d9dd548963b 100644
--- a/include/drm/i915_drm.h
+++ b/include/drm/i915_drm.h
@@ -39,6 +39,9 @@ bool i915_gpu_turbo_disable(void);
 /* Exported from arch/x86/kernel/early-quirks.c */
 extern struct resource intel_graphics_stolen_res;
 
+/* Exported from arch/x86/kernel/early-quirks.c */
+extern struct resource intel_graphics_fake_lmem_res;
+
 /*
  * The Bridge device's PCI config space has information about the
  * fb aperture size and the amount of pre-reserved memory.
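[Editor's note: to make the fake-LMEM address translation concrete: since
init_fake_lmem_bar() points GGTT aperture page n at region.start plus n
pages, turning a block's DMA address back into an aperture offset is a
plain subtraction, which is exactly what the
`offset -= intel_graphics_fake_lmem_res.start` lines in i915_gem_lmem.c
rely on. A minimal sketch of that invariant (the helper name is
illustrative, not part of the patch):]

	/* For fake LMEM, a DMA address inside the reserved system-memory
	 * region maps to aperture offset (dma_addr - region start); real
	 * LMEM would use the device-local offset directly.
	 */
	static inline resource_size_t
	fake_lmem_aperture_offset(resource_size_t dma_addr)
	{
		return dma_addr - intel_graphics_fake_lmem_res.start;
	}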