From patchwork Fri Oct 16 10:43:44 2020
X-Patchwork-Submitter: Maarten Lankhorst
X-Patchwork-Id: 11841335
From: Maarten Lankhorst
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Oct 2020 12:43:44 +0200
Message-Id: <20201016104444.1492028-2-maarten.lankhorst@linux.intel.com>
In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com>
References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 01/61] drm/i915: Move cmd parser pinning to execbuffer

We need to get rid of allocations in the cmd parser because it needs to be called from a signaling context. First, move all pinning to execbuf, where we already hold all locks.

Allocate jump_whitelist in the execbuffer, and add annotations around intel_engine_cmd_parser() to ensure we only call the command parser without allocating any memory or taking any locks we're not supposed to.

Because i915_gem_object_get_page() may also allocate memory, add a path to i915_gem_object_get_sg() that prevents memory allocations, and walk the sg list manually. It should be similarly fast.

This has the added benefit of being able to catch all memory allocation errors before the point of no return, and return -ENOMEM safely to the execbuf submitter.
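For reference, the signalling-annotation pattern this patch introduces looks roughly like the sketch below. This is an illustration of the pattern only, not the literal diff; the dma_fence_work plumbing around it is elided.

	#include <linux/dma-fence.h>

	static int parse_in_signalling_path(struct eb_parse_work *pw)
	{
		bool cookie;
		int ret;

		/*
		 * Between begin/end, lockdep treats this code as if it were
		 * signalling a dma_fence: GFP_KERNEL allocations and any lock
		 * also held across a fence wait get flagged. Hence all pinning
		 * and allocation has to happen in execbuf before this runs.
		 */
		cookie = dma_fence_begin_signalling();
		ret = intel_engine_cmd_parser(pw->engine, pw->batch,
					      pw->batch_offset,
					      pw->batch_length,
					      pw->shadow, pw->jump_whitelist,
					      pw->shadow_map, pw->batch_map);
		dma_fence_end_signalling(cookie);

		return ret;
	}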
Signed-off-by: Maarten Lankhorst Acked-by: Thomas Hellström --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 74 ++++++++++++- drivers/gpu/drm/i915/gem/i915_gem_object.h | 10 +- drivers/gpu/drm/i915/gem/i915_gem_pages.c | 21 +++- drivers/gpu/drm/i915/gt/intel_ggtt.c | 2 +- drivers/gpu/drm/i915/i915_cmd_parser.c | 104 ++++++++---------- drivers/gpu/drm/i915/i915_drv.h | 7 +- drivers/gpu/drm/i915/i915_memcpy.c | 2 +- drivers/gpu/drm/i915/i915_memcpy.h | 2 +- 8 files changed, 142 insertions(+), 80 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 1904e6e5ea64..a199336792fb 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -27,6 +27,7 @@ #include "i915_sw_fence_work.h" #include "i915_trace.h" #include "i915_user_extensions.h" +#include "i915_memcpy.h" struct eb_vma { struct i915_vma *vma; @@ -2273,24 +2274,45 @@ struct eb_parse_work { struct i915_vma *trampoline; unsigned long batch_offset; unsigned long batch_length; + unsigned long *jump_whitelist; + const void *batch_map; + void *shadow_map; }; static int __eb_parse(struct dma_fence_work *work) { struct eb_parse_work *pw = container_of(work, typeof(*pw), base); + int ret; + bool cookie; - return intel_engine_cmd_parser(pw->engine, - pw->batch, - pw->batch_offset, - pw->batch_length, - pw->shadow, - pw->trampoline); + cookie = dma_fence_begin_signalling(); + ret = intel_engine_cmd_parser(pw->engine, + pw->batch, + pw->batch_offset, + pw->batch_length, + pw->shadow, + pw->jump_whitelist, + pw->shadow_map, + pw->batch_map); + dma_fence_end_signalling(cookie); + + return ret; } static void __eb_parse_release(struct dma_fence_work *work) { struct eb_parse_work *pw = container_of(work, typeof(*pw), base); + if (!IS_ERR_OR_NULL(pw->jump_whitelist)) + kfree(pw->jump_whitelist); + + if (pw->batch_map) + i915_gem_object_unpin_map(pw->batch->obj); + else + i915_gem_object_unpin_pages(pw->batch->obj); + + i915_gem_object_unpin_map(pw->shadow->obj); + if (pw->trampoline) i915_active_release(&pw->trampoline->active); i915_active_release(&pw->shadow->active); @@ -2340,6 +2362,8 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb, struct i915_vma *trampoline) { struct eb_parse_work *pw; + struct drm_i915_gem_object *batch = eb->batch->vma->obj; + bool needs_clflush; int err; GEM_BUG_ON(overflows_type(eb->batch_start_offset, pw->batch_offset)); @@ -2363,6 +2387,34 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb, goto err_shadow; } + pw->shadow_map = i915_gem_object_pin_map(shadow->obj, I915_MAP_FORCE_WB); + if (IS_ERR(pw->shadow_map)) { + err = PTR_ERR(pw->shadow_map); + goto err_trampoline; + } + + needs_clflush = + !(batch->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ); + + pw->batch_map = ERR_PTR(-ENODEV); + if (needs_clflush && i915_has_memcpy_from_wc()) + pw->batch_map = i915_gem_object_pin_map(batch, I915_MAP_WC); + + if (IS_ERR(pw->batch_map)) { + err = i915_gem_object_pin_pages(batch); + if (err) + goto err_unmap_shadow; + pw->batch_map = NULL; + } + + pw->jump_whitelist = + intel_engine_cmd_parser_alloc_jump_whitelist(eb->batch_len, + trampoline); + if (IS_ERR(pw->jump_whitelist)) { + err = PTR_ERR(pw->jump_whitelist); + goto err_unmap_batch; + } + dma_fence_work_init(&pw->base, &eb_parse_ops); pw->engine = eb->engine; @@ -2402,6 +2454,16 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb, dma_fence_work_commit_imm(&pw->base); return err; +err_unmap_batch: + if (pw->batch_map) + 
i915_gem_object_unpin_map(batch); + else + i915_gem_object_unpin_pages(batch); +err_unmap_shadow: + i915_gem_object_unpin_map(shadow->obj); +err_trampoline: + if (trampoline) + i915_active_release(&trampoline->active); err_shadow: i915_active_release(&shadow->active); err_batch: diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index be14486f63a7..99b18ba0c48d 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -275,22 +275,22 @@ struct scatterlist * __i915_gem_object_get_sg(struct drm_i915_gem_object *obj, struct i915_gem_object_page_iter *iter, unsigned int n, - unsigned int *offset); + unsigned int *offset, bool allow_alloc); static inline struct scatterlist * i915_gem_object_get_sg(struct drm_i915_gem_object *obj, unsigned int n, - unsigned int *offset) + unsigned int *offset, bool allow_alloc) { - return __i915_gem_object_get_sg(obj, &obj->mm.get_page, n, offset); + return __i915_gem_object_get_sg(obj, &obj->mm.get_page, n, offset, allow_alloc); } static inline struct scatterlist * i915_gem_object_get_sg_dma(struct drm_i915_gem_object *obj, unsigned int n, - unsigned int *offset) + unsigned int *offset, bool allow_alloc) { - return __i915_gem_object_get_sg(obj, &obj->mm.get_dma_page, n, offset); + return __i915_gem_object_get_sg(obj, &obj->mm.get_dma_page, n, offset, allow_alloc); } struct page * diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c index 256e69f4eb5a..2e89ba5133eb 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c @@ -457,7 +457,8 @@ struct scatterlist * __i915_gem_object_get_sg(struct drm_i915_gem_object *obj, struct i915_gem_object_page_iter *iter, unsigned int n, - unsigned int *offset) + unsigned int *offset, + bool allow_alloc) { const bool dma = iter == &obj->mm.get_dma_page; struct scatterlist *sg; @@ -479,6 +480,9 @@ __i915_gem_object_get_sg(struct drm_i915_gem_object *obj, if (n < READ_ONCE(iter->sg_idx)) goto lookup; + if (!allow_alloc) + goto manual_lookup; + mutex_lock(&iter->lock); /* We prefer to reuse the last sg so that repeated lookup of this @@ -528,7 +532,16 @@ __i915_gem_object_get_sg(struct drm_i915_gem_object *obj, if (unlikely(n < idx)) /* insertion completed by another thread */ goto lookup; - /* In case we failed to insert the entry into the radixtree, we need + goto manual_walk; + +manual_lookup: + idx = 0; + sg = obj->mm.pages->sgl; + count = __sg_page_count(sg); + +manual_walk: + /* + * In case we failed to insert the entry into the radixtree, we need * to look beyond the current sg. 
*/ while (idx + count <= n) { @@ -575,7 +588,7 @@ i915_gem_object_get_page(struct drm_i915_gem_object *obj, unsigned int n) GEM_BUG_ON(!i915_gem_object_has_struct_page(obj)); - sg = i915_gem_object_get_sg(obj, n, &offset); + sg = i915_gem_object_get_sg(obj, n, &offset, true); return nth_page(sg_page(sg), offset); } @@ -601,7 +614,7 @@ i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, struct scatterlist *sg; unsigned int offset; - sg = i915_gem_object_get_sg_dma(obj, n, &offset); + sg = i915_gem_object_get_sg_dma(obj, n, &offset, true); if (len) *len = sg_dma_len(sg) - (offset << PAGE_SHIFT); diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt.c b/drivers/gpu/drm/i915/gt/intel_ggtt.c index cf94525be2c1..60bd2c8ed8b0 100644 --- a/drivers/gpu/drm/i915/gt/intel_ggtt.c +++ b/drivers/gpu/drm/i915/gt/intel_ggtt.c @@ -1383,7 +1383,7 @@ intel_partial_pages(const struct i915_ggtt_view *view, if (ret) goto err_sg_alloc; - iter = i915_gem_object_get_sg_dma(obj, view->partial.offset, &offset); + iter = i915_gem_object_get_sg_dma(obj, view->partial.offset, &offset, true); GEM_BUG_ON(!iter); sg = st->sgl; diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c index 93265951fdbb..8883a7d4964f 100644 --- a/drivers/gpu/drm/i915/i915_cmd_parser.c +++ b/drivers/gpu/drm/i915/i915_cmd_parser.c @@ -1136,38 +1136,19 @@ find_reg(const struct intel_engine_cs *engine, u32 addr) /* Returns a vmap'd pointer to dst_obj, which the caller must unmap */ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj, struct drm_i915_gem_object *src_obj, - unsigned long offset, unsigned long length) + unsigned long offset, unsigned long length, + void *dst, const void *src) { - bool needs_clflush; - void *dst, *src; - int ret; - - dst = i915_gem_object_pin_map(dst_obj, I915_MAP_FORCE_WB); - if (IS_ERR(dst)) - return dst; - - ret = i915_gem_object_pin_pages(src_obj); - if (ret) { - i915_gem_object_unpin_map(dst_obj); - return ERR_PTR(ret); - } - - needs_clflush = + bool needs_clflush = !(src_obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ); - src = ERR_PTR(-ENODEV); - if (needs_clflush && i915_has_memcpy_from_wc()) { - src = i915_gem_object_pin_map(src_obj, I915_MAP_WC); - if (!IS_ERR(src)) { - i915_unaligned_memcpy_from_wc(dst, - src + offset, - length); - i915_gem_object_unpin_map(src_obj); - } - } - if (IS_ERR(src)) { - unsigned long x, n; + if (src) { + GEM_BUG_ON(!needs_clflush); + i915_unaligned_memcpy_from_wc(dst, src + offset, length); + } else { + struct scatterlist *sg; void *ptr; + unsigned int x, sg_ofs; /* * We can avoid clflushing partial cachelines before the write @@ -1183,23 +1164,32 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj, ptr = dst; x = offset_in_page(offset); - for (n = offset >> PAGE_SHIFT; length; n++) { - int len = min(length, PAGE_SIZE - x); - - src = kmap_atomic(i915_gem_object_get_page(src_obj, n)); - if (needs_clflush) - drm_clflush_virt_range(src + x, len); - memcpy(ptr, src + x, len); - kunmap_atomic(src); - - ptr += len; - length -= len; - x = 0; + + sg = i915_gem_object_get_sg(src_obj, offset >> PAGE_SHIFT, &sg_ofs, false); + + while (length) { + unsigned long sg_max = sg->length >> PAGE_SHIFT; + + for (; length && sg_ofs < sg_max; sg_ofs++) { + unsigned long len = min(length, PAGE_SIZE - x); + void *map; + + map = kmap_atomic(nth_page(sg_page(sg), sg_ofs)); + if (needs_clflush) + drm_clflush_virt_range(map + x, len); + memcpy(ptr, map + x, len); + kunmap_atomic(map); + + ptr += len; + length -= len; + x = 0; + } + + 
sg_ofs = 0; + sg = sg_next(sg); } } - i915_gem_object_unpin_pages(src_obj); - /* dst_obj is returned with vmap pinned */ return dst; } @@ -1359,9 +1349,6 @@ static int check_bbstart(u32 *cmd, u32 offset, u32 length, if (target_cmd_index == offset) return 0; - if (IS_ERR(jump_whitelist)) - return PTR_ERR(jump_whitelist); - if (!test_bit(target_cmd_index, jump_whitelist)) { DRM_DEBUG("CMD: BB_START to 0x%llx not a previously executed cmd\n", jump_target); @@ -1371,10 +1358,14 @@ static int check_bbstart(u32 *cmd, u32 offset, u32 length, return 0; } -static unsigned long *alloc_whitelist(u32 batch_length) +unsigned long *intel_engine_cmd_parser_alloc_jump_whitelist(u32 batch_length, + bool trampoline) { unsigned long *jmp; + if (trampoline) + return NULL; + /* * We expect batch_length to be less than 256KiB for known users, * i.e. we need at most an 8KiB bitmap allocation which should be @@ -1417,14 +1408,16 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine, unsigned long batch_offset, unsigned long batch_length, struct i915_vma *shadow, - bool trampoline) + unsigned long *jump_whitelist, + void *shadow_map, + const void *batch_map) { u32 *cmd, *batch_end, offset = 0; struct drm_i915_cmd_descriptor default_desc = noop_desc; const struct drm_i915_cmd_descriptor *desc = &default_desc; - unsigned long *jump_whitelist; u64 batch_addr, shadow_addr; int ret = 0; + bool trampoline = !jump_whitelist; GEM_BUG_ON(!IS_ALIGNED(batch_offset, sizeof(*cmd))); GEM_BUG_ON(!IS_ALIGNED(batch_length, sizeof(*cmd))); @@ -1432,16 +1425,8 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine, batch->size)); GEM_BUG_ON(!batch_length); - cmd = copy_batch(shadow->obj, batch->obj, batch_offset, batch_length); - if (IS_ERR(cmd)) { - DRM_DEBUG("CMD: Failed to copy batch\n"); - return PTR_ERR(cmd); - } - - jump_whitelist = NULL; - if (!trampoline) - /* Defer failure until attempted use */ - jump_whitelist = alloc_whitelist(batch_length); + cmd = copy_batch(shadow->obj, batch->obj, batch_offset, batch_length, + shadow_map, batch_map); shadow_addr = gen8_canonical_addr(shadow->node.start); batch_addr = gen8_canonical_addr(batch->node.start + batch_offset); @@ -1549,9 +1534,6 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine, drm_clflush_virt_range(ptr, (void *)(cmd + 1) - ptr); } - if (!IS_ERR_OR_NULL(jump_whitelist)) - kfree(jump_whitelist); - i915_gem_object_unpin_map(shadow->obj); return ret; } diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 1a5729932c81..7bd7b3e82c45 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -1948,12 +1948,17 @@ const char *i915_cache_level_str(struct drm_i915_private *i915, int type); int i915_cmd_parser_get_version(struct drm_i915_private *dev_priv); void intel_engine_init_cmd_parser(struct intel_engine_cs *engine); void intel_engine_cleanup_cmd_parser(struct intel_engine_cs *engine); +unsigned long *intel_engine_cmd_parser_alloc_jump_whitelist(u32 batch_length, + bool trampoline); + int intel_engine_cmd_parser(struct intel_engine_cs *engine, struct i915_vma *batch, unsigned long batch_offset, unsigned long batch_length, struct i915_vma *shadow, - bool trampoline); + unsigned long *jump_whitelist, + void *shadow_map, + const void *batch_map); #define I915_CMD_PARSER_TRAMPOLINE_SIZE 8 /* intel_device_info.c */ diff --git a/drivers/gpu/drm/i915/i915_memcpy.c b/drivers/gpu/drm/i915/i915_memcpy.c index 7b3b83bd5ab8..1b021a4902de 100644 --- a/drivers/gpu/drm/i915/i915_memcpy.c +++ 
b/drivers/gpu/drm/i915/i915_memcpy.c @@ -135,7 +135,7 @@ bool i915_memcpy_from_wc(void *dst, const void *src, unsigned long len) * accepts that its arguments may not be aligned, but are valid for the * potential 16-byte read past the end. */ -void i915_unaligned_memcpy_from_wc(void *dst, void *src, unsigned long len) +void i915_unaligned_memcpy_from_wc(void *dst, const void *src, unsigned long len) { unsigned long addr; diff --git a/drivers/gpu/drm/i915/i915_memcpy.h b/drivers/gpu/drm/i915/i915_memcpy.h index e36d30edd987..3df063a3293b 100644 --- a/drivers/gpu/drm/i915/i915_memcpy.h +++ b/drivers/gpu/drm/i915/i915_memcpy.h @@ -13,7 +13,7 @@ struct drm_i915_private; void i915_memcpy_init_early(struct drm_i915_private *i915); bool i915_memcpy_from_wc(void *dst, const void *src, unsigned long len); -void i915_unaligned_memcpy_from_wc(void *dst, void *src, unsigned long len); +void i915_unaligned_memcpy_from_wc(void *dst, const void *src, unsigned long len); /* The movntdqa instructions used for memcpy-from-wc require 16-byte alignment, * as well as SSE4.1 support. i915_memcpy_from_wc() will report if it cannot

From patchwork Fri Oct 16 10:43:45 2020
X-Patchwork-Submitter: Maarten Lankhorst
X-Patchwork-Id: 11841329
From: Maarten Lankhorst
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Oct 2020 12:43:45 +0200
Message-Id: <20201016104444.1492028-3-maarten.lankhorst@linux.intel.com>
In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com>
References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 02/61] drm/i915: Add missing -EDEADLK handling to execbuf pinning

i915_vma_pin may fail with -EDEADLK when we start locking
page tables, so ensure we handle this correctly.

Signed-off-by: Maarten Lankhorst
Reviewed-by: Thomas Hellström
---
.../gpu/drm/i915/gem/i915_gem_execbuffer.c | 23 +++++++++++++++----
1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index a199336792fb..0f5efced0b87 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -419,13 +419,14 @@ static u64 eb_pin_flags(const struct drm_i915_gem_exec_object2 *entry, return pin_flags; } -static inline bool +static inline int eb_pin_vma(struct i915_execbuffer *eb, const struct drm_i915_gem_exec_object2 *entry, struct eb_vma *ev) { struct i915_vma *vma = ev->vma; u64 pin_flags; + int err; if (vma->node.size) pin_flags = vma->node.start; @@ -438,16 +439,24 @@ eb_pin_vma(struct i915_execbuffer *eb, /* Attempt to reuse the current location if available */ /* TODO: Add -EDEADLK handling here */ - if (unlikely(i915_vma_pin_ww(vma, &eb->ww, 0, 0, pin_flags))) { + err = i915_vma_pin_ww(vma, &eb->ww, 0, 0, pin_flags); + if (err == -EDEADLK) + return err; + + if (unlikely(err)) { if (entry->flags & EXEC_OBJECT_PINNED) return false; /* Failing that pick any _free_ space if suitable */ - if (unlikely(i915_vma_pin_ww(vma, &eb->ww, + err = i915_vma_pin_ww(vma, &eb->ww, entry->pad_to_size, entry->alignment, eb_pin_flags(entry, ev->flags) | - PIN_USER | PIN_NOEVICT))) + PIN_USER | PIN_NOEVICT); + if (err == -EDEADLK) + return err; + + if (unlikely(err)) return false; } @@ -900,7 +909,11 @@ static int eb_validate_vmas(struct i915_execbuffer *eb) if (err) return err; - if (eb_pin_vma(eb, entry, ev)) { + err = eb_pin_vma(eb, entry, ev); + if (err < 0) + return err; + + if (err > 0) { if (entry->offset != vma->node.start) { entry->offset = vma->node.start | UPDATE; eb->args->flags |= __EXEC_HAS_RELOC;
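In sketch form, the tri-state contract that eb_pin_vma() now has (illustration only, condensed from the hunks above; the slow-path body is elided):

	err = eb_pin_vma(eb, entry, ev);
	if (err < 0)		/* hard failure, e.g. -EDEADLK from the ww lock */
		return err;	/* propagate so the ww dance can back off and retry */
	if (err > 0) {		/* pinned at its previous location */
		if (entry->offset != vma->node.start) {
			entry->offset = vma->node.start | UPDATE;
			eb->args->flags |= __EXEC_HAS_RELOC;
		}
	} else {		/* 0: could not pin; take the slow rebind path */
		/* unreserve and defer to the full bind logic, as before */
	}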
From patchwork Fri Oct 16 10:43:46 2020
X-Patchwork-Submitter: Maarten Lankhorst
X-Patchwork-Id: 11841323
From: Maarten Lankhorst
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström
Date: Fri, 16 Oct 2020 12:43:46 +0200
Message-Id: <20201016104444.1492028-4-maarten.lankhorst@linux.intel.com>
In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com>
References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 03/61] drm/i915: Do not share hwsp across contexts any more, v4.

Instead of sharing pages with breadcrumbs, give each timeline a single page. This means unrelated timelines no longer share locks during command submission. As an additional benefit, seqno wraparound no longer requires i915_vma_pin, which means we no longer need to worry about a potential -EDEADLK at a point where we are ready to submit.

Changes since v1:
- Fix erroneous i915_vma_acquire that should be a i915_vma_release (ickle).
- Extra check for completion in intel_read_hwsp().
Changes since v2:
- Fix inconsistent indent in hwsp_alloc() (kbuild)
- memset entire cacheline to 0.
Changes since v3:
- Do same in intel_timeline_reset_seqno(), and clflush for good measure.

Signed-off-by: Maarten Lankhorst
Reviewed-by: Thomas Hellström #v1
Reported-by: kernel test robot
---
drivers/gpu/drm/i915/gt/intel_gt_types.h | 4 -
drivers/gpu/drm/i915/gt/intel_timeline.c | 384 +++---------------
.../gpu/drm/i915/gt/intel_timeline_types.h | 15 +-
drivers/gpu/drm/i915/gt/selftest_timeline.c | 11 +-
drivers/gpu/drm/i915/i915_request.c | 4 -
drivers/gpu/drm/i915/i915_request.h | 10 -
6 files changed, 61 insertions(+), 367 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h b/drivers/gpu/drm/i915/gt/intel_gt_types.h index 6d39a4a11bf3..7aff8350c364 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_types.h +++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h @@ -39,10 +39,6 @@ struct intel_gt { struct intel_gt_timelines { spinlock_t lock; /* protects active_list */ struct list_head active_list; - - /* Pack multiple timelines' seqnos into the same page */ - spinlock_t hwsp_lock; - struct list_head hwsp_free_list; } timelines; struct intel_gt_requests { diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c index a2f74cefe4c3..07114443c402 100644 --- a/drivers/gpu/drm/i915/gt/intel_timeline.c +++ b/drivers/gpu/drm/i915/gt/intel_timeline.c @@ -12,21 +12,7 @@ #include "intel_ring.h" #include "intel_timeline.h" -#define ptr_set_bit(ptr, bit) ((typeof(ptr))((unsigned long)(ptr) | BIT(bit))) -#define ptr_test_bit(ptr, bit) ((unsigned long)(ptr) & BIT(bit)) - -#define CACHELINE_BITS 6 -#define CACHELINE_FREE CACHELINE_BITS - -struct intel_timeline_hwsp { - struct intel_gt *gt; - struct intel_gt_timelines *gt_timelines; - struct list_head free_link; - struct i915_vma *vma; - u64 free_bitmap; -}; - -static struct i915_vma *__hwsp_alloc(struct intel_gt *gt) +static struct i915_vma *hwsp_alloc(struct intel_gt *gt) { struct drm_i915_private *i915 = gt->i915; struct drm_i915_gem_object *obj; @@ -45,174 +31,26 @@ static struct i915_vma *__hwsp_alloc(struct intel_gt *gt) return vma; } -static struct i915_vma * -hwsp_alloc(struct intel_timeline *timeline, unsigned int
*cacheline) -{ - struct intel_gt_timelines *gt = &timeline->gt->timelines; - struct intel_timeline_hwsp *hwsp; - - BUILD_BUG_ON(BITS_PER_TYPE(u64) * CACHELINE_BYTES > PAGE_SIZE); - - spin_lock_irq(>->hwsp_lock); - - /* hwsp_free_list only contains HWSP that have available cachelines */ - hwsp = list_first_entry_or_null(>->hwsp_free_list, - typeof(*hwsp), free_link); - if (!hwsp) { - struct i915_vma *vma; - - spin_unlock_irq(>->hwsp_lock); - - hwsp = kmalloc(sizeof(*hwsp), GFP_KERNEL); - if (!hwsp) - return ERR_PTR(-ENOMEM); - - vma = __hwsp_alloc(timeline->gt); - if (IS_ERR(vma)) { - kfree(hwsp); - return vma; - } - - GT_TRACE(timeline->gt, "new HWSP allocated\n"); - - vma->private = hwsp; - hwsp->gt = timeline->gt; - hwsp->vma = vma; - hwsp->free_bitmap = ~0ull; - hwsp->gt_timelines = gt; - - spin_lock_irq(>->hwsp_lock); - list_add(&hwsp->free_link, >->hwsp_free_list); - } - - GEM_BUG_ON(!hwsp->free_bitmap); - *cacheline = __ffs64(hwsp->free_bitmap); - hwsp->free_bitmap &= ~BIT_ULL(*cacheline); - if (!hwsp->free_bitmap) - list_del(&hwsp->free_link); - - spin_unlock_irq(>->hwsp_lock); - - GEM_BUG_ON(hwsp->vma->private != hwsp); - return hwsp->vma; -} - -static void __idle_hwsp_free(struct intel_timeline_hwsp *hwsp, int cacheline) -{ - struct intel_gt_timelines *gt = hwsp->gt_timelines; - unsigned long flags; - - spin_lock_irqsave(>->hwsp_lock, flags); - - /* As a cacheline becomes available, publish the HWSP on the freelist */ - if (!hwsp->free_bitmap) - list_add_tail(&hwsp->free_link, >->hwsp_free_list); - - GEM_BUG_ON(cacheline >= BITS_PER_TYPE(hwsp->free_bitmap)); - hwsp->free_bitmap |= BIT_ULL(cacheline); - - /* And if no one is left using it, give the page back to the system */ - if (hwsp->free_bitmap == ~0ull) { - i915_vma_put(hwsp->vma); - list_del(&hwsp->free_link); - kfree(hwsp); - } - - spin_unlock_irqrestore(>->hwsp_lock, flags); -} - -static void __rcu_cacheline_free(struct rcu_head *rcu) -{ - struct intel_timeline_cacheline *cl = - container_of(rcu, typeof(*cl), rcu); - - i915_active_fini(&cl->active); - kfree(cl); -} - -static void __idle_cacheline_free(struct intel_timeline_cacheline *cl) -{ - GEM_BUG_ON(!i915_active_is_idle(&cl->active)); - - i915_gem_object_unpin_map(cl->hwsp->vma->obj); - i915_vma_put(cl->hwsp->vma); - __idle_hwsp_free(cl->hwsp, ptr_unmask_bits(cl->vaddr, CACHELINE_BITS)); - - call_rcu(&cl->rcu, __rcu_cacheline_free); -} - __i915_active_call -static void __cacheline_retire(struct i915_active *active) +static void __timeline_retire(struct i915_active *active) { - struct intel_timeline_cacheline *cl = - container_of(active, typeof(*cl), active); + struct intel_timeline *tl = + container_of(active, typeof(*tl), active); - i915_vma_unpin(cl->hwsp->vma); - if (ptr_test_bit(cl->vaddr, CACHELINE_FREE)) - __idle_cacheline_free(cl); + i915_vma_unpin(tl->hwsp_ggtt); + intel_timeline_put(tl); } -static int __cacheline_active(struct i915_active *active) +static int __timeline_active(struct i915_active *active) { - struct intel_timeline_cacheline *cl = - container_of(active, typeof(*cl), active); + struct intel_timeline *tl = + container_of(active, typeof(*tl), active); - __i915_vma_pin(cl->hwsp->vma); + __i915_vma_pin(tl->hwsp_ggtt); + intel_timeline_get(tl); return 0; } -static struct intel_timeline_cacheline * -cacheline_alloc(struct intel_timeline_hwsp *hwsp, unsigned int cacheline) -{ - struct intel_timeline_cacheline *cl; - void *vaddr; - - GEM_BUG_ON(cacheline >= BIT(CACHELINE_BITS)); - - cl = kmalloc(sizeof(*cl), GFP_KERNEL); - if (!cl) - return 
ERR_PTR(-ENOMEM); - - vaddr = i915_gem_object_pin_map(hwsp->vma->obj, I915_MAP_WB); - if (IS_ERR(vaddr)) { - kfree(cl); - return ERR_CAST(vaddr); - } - - i915_vma_get(hwsp->vma); - cl->hwsp = hwsp; - cl->vaddr = page_pack_bits(vaddr, cacheline); - - i915_active_init(&cl->active, __cacheline_active, __cacheline_retire); - - return cl; -} - -static void cacheline_acquire(struct intel_timeline_cacheline *cl) -{ - if (cl) - i915_active_acquire(&cl->active); -} - -static void cacheline_release(struct intel_timeline_cacheline *cl) -{ - if (cl) - i915_active_release(&cl->active); -} - -static void cacheline_free(struct intel_timeline_cacheline *cl) -{ - if (!i915_active_acquire_if_busy(&cl->active)) { - __idle_cacheline_free(cl); - return; - } - - GEM_BUG_ON(ptr_test_bit(cl->vaddr, CACHELINE_FREE)); - cl->vaddr = ptr_set_bit(cl->vaddr, CACHELINE_FREE); - - i915_active_release(&cl->active); -} - static int intel_timeline_init(struct intel_timeline *timeline, struct intel_gt *gt, struct i915_vma *hwsp, @@ -225,38 +63,25 @@ static int intel_timeline_init(struct intel_timeline *timeline, timeline->gt = gt; - timeline->has_initial_breadcrumb = !hwsp; - timeline->hwsp_cacheline = NULL; - - if (!hwsp) { - struct intel_timeline_cacheline *cl; - unsigned int cacheline; - - hwsp = hwsp_alloc(timeline, &cacheline); + if (hwsp) { + timeline->hwsp_offset = offset; + timeline->hwsp_ggtt = i915_vma_get(hwsp); + } else { + timeline->has_initial_breadcrumb = true; + hwsp = hwsp_alloc(gt); if (IS_ERR(hwsp)) return PTR_ERR(hwsp); - - cl = cacheline_alloc(hwsp->private, cacheline); - if (IS_ERR(cl)) { - __idle_hwsp_free(hwsp->private, cacheline); - return PTR_ERR(cl); - } - - timeline->hwsp_cacheline = cl; - timeline->hwsp_offset = cacheline * CACHELINE_BYTES; - - vaddr = page_mask_bits(cl->vaddr); - } else { - timeline->hwsp_offset = offset; - vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB); - if (IS_ERR(vaddr)) - return PTR_ERR(vaddr); + timeline->hwsp_ggtt = hwsp; } + vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB); + if (IS_ERR(vaddr)) + return PTR_ERR(vaddr); + + timeline->hwsp_map = vaddr; timeline->hwsp_seqno = memset(vaddr + timeline->hwsp_offset, 0, CACHELINE_BYTES); - timeline->hwsp_ggtt = i915_vma_get(hwsp); GEM_BUG_ON(timeline->hwsp_offset >= hwsp->size); timeline->fence_context = dma_fence_context_alloc(1); @@ -267,6 +92,7 @@ static int intel_timeline_init(struct intel_timeline *timeline, INIT_LIST_HEAD(&timeline->requests); i915_syncmap_init(&timeline->sync); + i915_active_init(&timeline->active, __timeline_active, __timeline_retire); return 0; } @@ -277,9 +103,6 @@ void intel_gt_init_timelines(struct intel_gt *gt) spin_lock_init(&timelines->lock); INIT_LIST_HEAD(&timelines->active_list); - - spin_lock_init(&timelines->hwsp_lock); - INIT_LIST_HEAD(&timelines->hwsp_free_list); } static void intel_timeline_fini(struct intel_timeline *timeline) @@ -288,12 +111,10 @@ static void intel_timeline_fini(struct intel_timeline *timeline) GEM_BUG_ON(!list_empty(&timeline->requests)); GEM_BUG_ON(timeline->retire); - if (timeline->hwsp_cacheline) - cacheline_free(timeline->hwsp_cacheline); - else - i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj); + i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj); i915_vma_put(timeline->hwsp_ggtt); + i915_active_fini(&timeline->active); } struct intel_timeline * @@ -340,9 +161,9 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww) GT_TRACE(tl->gt, "timeline:%llx using HWSP offset:%x\n", tl->fence_context, tl->hwsp_offset); - 
cacheline_acquire(tl->hwsp_cacheline); + i915_active_acquire(&tl->active); if (atomic_fetch_inc(&tl->pin_count)) { - cacheline_release(tl->hwsp_cacheline); + i915_active_release(&tl->active); __i915_vma_unpin(tl->hwsp_ggtt); } @@ -353,7 +174,9 @@ void intel_timeline_reset_seqno(const struct intel_timeline *tl) { /* Must be pinned to be writable, and no requests in flight. */ GEM_BUG_ON(!atomic_read(&tl->pin_count)); + memset((u32 *)tl->hwsp_seqno, 0, CACHELINE_BYTES); WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno); + clflush(tl->hwsp_seqno); } void intel_timeline_enter(struct intel_timeline *tl) @@ -429,106 +252,20 @@ static u32 timeline_advance(struct intel_timeline *tl) return tl->seqno += 1 + tl->has_initial_breadcrumb; } -static void timeline_rollback(struct intel_timeline *tl) -{ - tl->seqno -= 1 + tl->has_initial_breadcrumb; -} - static noinline int __intel_timeline_get_seqno(struct intel_timeline *tl, struct i915_request *rq, u32 *seqno) { - struct intel_timeline_cacheline *cl; - unsigned int cacheline; - struct i915_vma *vma; - void *vaddr; - int err; - - might_lock(&tl->gt->ggtt->vm.mutex); - GT_TRACE(tl->gt, "timeline:%llx wrapped\n", tl->fence_context); - - /* - * If there is an outstanding GPU reference to this cacheline, - * such as it being sampled by a HW semaphore on another timeline, - * we cannot wraparound our seqno value (the HW semaphore does - * a strict greater-than-or-equals compare, not i915_seqno_passed). - * So if the cacheline is still busy, we must detach ourselves - * from it and leave it inflight alongside its users. - * - * However, if nobody is watching and we can guarantee that nobody - * will, we could simply reuse the same cacheline. - * - * if (i915_active_request_is_signaled(&tl->last_request) && - * i915_active_is_signaled(&tl->hwsp_cacheline->active)) - * return 0; - * - * That seems unlikely for a busy timeline that needed to wrap in - * the first place, so just replace the cacheline. - */ + tl->hwsp_offset = i915_ggtt_offset(tl->hwsp_ggtt) + + offset_in_page(tl->hwsp_offset + CACHELINE_BYTES); - vma = hwsp_alloc(tl, &cacheline); - if (IS_ERR(vma)) { - err = PTR_ERR(vma); - goto err_rollback; - } - - err = i915_ggtt_pin(vma, NULL, 0, PIN_HIGH); - if (err) { - __idle_hwsp_free(vma->private, cacheline); - goto err_rollback; - } - - cl = cacheline_alloc(vma->private, cacheline); - if (IS_ERR(cl)) { - err = PTR_ERR(cl); - __idle_hwsp_free(vma->private, cacheline); - goto err_unpin; - } - GEM_BUG_ON(cl->hwsp->vma != vma); - - /* - * Attach the old cacheline to the current request, so that we only - * free it after the current request is retired, which ensures that - * all writes into the cacheline from previous requests are complete. 
- */ - err = i915_active_ref(&tl->hwsp_cacheline->active, - tl->fence_context, - &rq->fence); - if (err) - goto err_cacheline; - - cacheline_release(tl->hwsp_cacheline); /* ownership now xfered to rq */ - cacheline_free(tl->hwsp_cacheline); - - i915_vma_unpin(tl->hwsp_ggtt); /* binding kept alive by old cacheline */ - i915_vma_put(tl->hwsp_ggtt); - - tl->hwsp_ggtt = i915_vma_get(vma); - - vaddr = page_mask_bits(cl->vaddr); - tl->hwsp_offset = cacheline * CACHELINE_BYTES; - tl->hwsp_seqno = - memset(vaddr + tl->hwsp_offset, 0, CACHELINE_BYTES); - - tl->hwsp_offset += i915_ggtt_offset(vma); - GT_TRACE(tl->gt, "timeline:%llx using HWSP offset:%x\n", - tl->fence_context, tl->hwsp_offset); - - cacheline_acquire(cl); - tl->hwsp_cacheline = cl; + tl->hwsp_seqno = tl->hwsp_map + offset_in_page(tl->hwsp_offset); + intel_timeline_reset_seqno(tl); *seqno = timeline_advance(tl); GEM_BUG_ON(i915_seqno_passed(*tl->hwsp_seqno, *seqno)); return 0; - -err_cacheline: - cacheline_free(cl); -err_unpin: - i915_vma_unpin(vma); -err_rollback: - timeline_rollback(tl); - return err; } int intel_timeline_get_seqno(struct intel_timeline *tl, @@ -538,53 +275,42 @@ int intel_timeline_get_seqno(struct intel_timeline *tl, *seqno = timeline_advance(tl); /* Replace the HWSP on wraparound for HW semaphores */ - if (unlikely(!*seqno && tl->hwsp_cacheline)) + if (unlikely(!*seqno && tl->has_initial_breadcrumb)) return __intel_timeline_get_seqno(tl, rq, seqno); return 0; } -static int cacheline_ref(struct intel_timeline_cacheline *cl, - struct i915_request *rq) -{ - return i915_active_add_request(&cl->active, rq); -} - int intel_timeline_read_hwsp(struct i915_request *from, struct i915_request *to, u32 *hwsp) { - struct intel_timeline_cacheline *cl; + struct intel_timeline *tl; int err; - GEM_BUG_ON(!rcu_access_pointer(from->hwsp_cacheline)); - rcu_read_lock(); - cl = rcu_dereference(from->hwsp_cacheline); - if (i915_request_completed(from)) /* confirm cacheline is valid */ - goto unlock; - if (unlikely(!i915_active_acquire_if_busy(&cl->active))) - goto unlock; /* seqno wrapped and completed! 
*/ - if (unlikely(i915_request_completed(from))) - goto release; + tl = rcu_dereference(from->timeline); + if (tl && (i915_request_completed(from) || + !i915_active_acquire_if_busy(&tl->active))) + tl = NULL; + + /* ensure we wait on the right request, if not, we completed */ + if (tl && i915_request_completed(from)) { + i915_active_release(&tl->active); + tl = NULL; + } rcu_read_unlock(); - err = cacheline_ref(cl, to); - if (err) - goto out; + if (!tl) + return 1; - *hwsp = i915_ggtt_offset(cl->hwsp->vma) + - ptr_unmask_bits(cl->vaddr, CACHELINE_BITS) * CACHELINE_BYTES; + /* hwsp_offset may wraparound, so use from->hwsp_seqno */ + *hwsp = i915_ggtt_offset(tl->hwsp_ggtt) + + offset_in_page(from->hwsp_seqno); -out: - i915_active_release(&cl->active); + err = i915_active_add_request(&tl->active, to); + i915_active_release(&tl->active); return err; - -release: - i915_active_release(&cl->active); -unlock: - rcu_read_unlock(); - return 1; } void intel_timeline_unpin(struct intel_timeline *tl) @@ -593,8 +319,7 @@ void intel_timeline_unpin(struct intel_timeline *tl) if (!atomic_dec_and_test(&tl->pin_count)) return; - cacheline_release(tl->hwsp_cacheline); - + i915_active_release(&tl->active); __i915_vma_unpin(tl->hwsp_ggtt); } @@ -612,7 +337,6 @@ void intel_gt_fini_timelines(struct intel_gt *gt) struct intel_gt_timelines *timelines = >->timelines; GEM_BUG_ON(!list_empty(&timelines->active_list)); - GEM_BUG_ON(!list_empty(&timelines->hwsp_free_list)); } #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) diff --git a/drivers/gpu/drm/i915/gt/intel_timeline_types.h b/drivers/gpu/drm/i915/gt/intel_timeline_types.h index 02181c5020db..610d593b5bda 100644 --- a/drivers/gpu/drm/i915/gt/intel_timeline_types.h +++ b/drivers/gpu/drm/i915/gt/intel_timeline_types.h @@ -18,7 +18,6 @@ struct i915_vma; struct i915_syncmap; struct intel_gt; -struct intel_timeline_hwsp; struct intel_timeline { u64 fence_context; @@ -45,12 +44,11 @@ struct intel_timeline { atomic_t pin_count; atomic_t active_count; + void *hwsp_map; const u32 *hwsp_seqno; struct i915_vma *hwsp_ggtt; u32 hwsp_offset; - struct intel_timeline_cacheline *hwsp_cacheline; - bool has_initial_breadcrumb; /** @@ -67,6 +65,8 @@ struct intel_timeline { */ struct i915_active_fence last_request; + struct i915_active active; + /** A chain of completed timelines ready for early retirement. 
*/ struct intel_timeline *retire; @@ -88,13 +88,4 @@ struct intel_timeline { struct rcu_head rcu; }; -struct intel_timeline_cacheline { - struct i915_active active; - - struct intel_timeline_hwsp *hwsp; - void *vaddr; - - struct rcu_head rcu; -}; - #endif /* __I915_TIMELINE_TYPES_H__ */ diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c index 19c2cb166e7c..98cd161b3925 100644 --- a/drivers/gpu/drm/i915/gt/selftest_timeline.c +++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c @@ -664,7 +664,7 @@ static int live_hwsp_wrap(void *arg) if (IS_ERR(tl)) return PTR_ERR(tl); - if (!tl->has_initial_breadcrumb || !tl->hwsp_cacheline) + if (!tl->has_initial_breadcrumb) goto out_free; err = intel_timeline_pin(tl, NULL); @@ -780,9 +780,7 @@ static int live_hwsp_rollover_kernel(void *arg) } GEM_BUG_ON(i915_active_fence_isset(&tl->last_request)); - tl->seqno = 0; - timeline_rollback(tl); - timeline_rollback(tl); + tl->seqno = -2u; WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno); for (i = 0; i < ARRAY_SIZE(rq); i++) { @@ -862,11 +860,10 @@ static int live_hwsp_rollover_user(void *arg) goto out; tl = ce->timeline; - if (!tl->has_initial_breadcrumb || !tl->hwsp_cacheline) + if (!tl->has_initial_breadcrumb) goto out; - timeline_rollback(tl); - timeline_rollback(tl); + tl->seqno = -4u; WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno); for (i = 0; i < ARRAY_SIZE(rq); i++) { diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c index 0e813819b041..598fba061bda 100644 --- a/drivers/gpu/drm/i915/i915_request.c +++ b/drivers/gpu/drm/i915/i915_request.c @@ -853,7 +853,6 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp) rq->fence.seqno = seqno; RCU_INIT_POINTER(rq->timeline, tl); - RCU_INIT_POINTER(rq->hwsp_cacheline, tl->hwsp_cacheline); rq->hwsp_seqno = tl->hwsp_seqno; GEM_BUG_ON(i915_request_completed(rq)); @@ -1092,9 +1091,6 @@ emit_semaphore_wait(struct i915_request *to, if (i915_request_has_initial_breadcrumb(to)) goto await_fence; - if (!rcu_access_pointer(from->hwsp_cacheline)) - goto await_fence; - /* * If this or its dependents are waiting on an external fence * that may fail catastrophically, then we want to avoid using diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h index 16b721080195..03ba7c85929c 100644 --- a/drivers/gpu/drm/i915/i915_request.h +++ b/drivers/gpu/drm/i915/i915_request.h @@ -234,16 +234,6 @@ struct i915_request { */ const u32 *hwsp_seqno; - /* - * If we need to access the timeline's seqno for this request in - * another request, we need to keep a read reference to this associated - * cacheline, so that we do not free and recycle it before the foreign - * observers have completed. Hence, we keep a pointer to the cacheline - * inside the timeline's HWSP vma, but it is only valid while this - * request has not completed and guarded by the timeline mutex. 
- */ - struct intel_timeline_cacheline __rcu *hwsp_cacheline; - /** Position in the ring of the start of the request */ u32 head;

From patchwork Fri Oct 16 10:43:47 2020
X-Patchwork-Submitter: Maarten Lankhorst
X-Patchwork-Id: 11841321
From: Maarten Lankhorst
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Oct 2020 12:43:47 +0200
Message-Id: <20201016104444.1492028-5-maarten.lankhorst@linux.intel.com>
In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com>
References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 04/61] drm/i915: Pin timeline map after first timeline pin, v3.

We're starting to require the reservation lock for pinning, so wait until we have that. Update the selftests to handle this correctly, and ensure pin is called in live_hwsp_rollover_user() and mock_hwsp_freelist().

Changes since v1:
- Fix NULL + XX arithmetic, use casts. (kbuild)
Changes since v2:
- Clear entire cacheline when pinning.
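The core of the change is deferring the HWSP map pin to the first intel_timeline_pin(), roughly as below (sketch condensed from the diff that follows; the active-tracking steps and ggtt-pin error unwinding are elided):

	/* Sketch: map the HWSP page lazily on first pin, when the caller
	 * already holds the reservation lock via the ww context. */
	int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww)
	{
		int err;

		if (atomic_add_unless(&tl->pin_count, 1, 0))
			return 0;	/* already pinned, map already set up */

		if (!tl->hwsp_map) {
			err = intel_timeline_pin_map(tl);
			if (err)
				return err;
		}

		err = i915_ggtt_pin(tl->hwsp_ggtt, ww, 0, PIN_HIGH);
		if (err)
			return err;

		/* ... acquire tl->active and bump pin_count as before ... */
		return 0;
	}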
Signed-off-by: Maarten Lankhorst Reported-by: kernel test robot --- drivers/gpu/drm/i915/gt/intel_timeline.c | 39 +++++++++---- drivers/gpu/drm/i915/gt/intel_timeline.h | 2 + drivers/gpu/drm/i915/gt/mock_engine.c | 22 ++++++- drivers/gpu/drm/i915/gt/selftest_timeline.c | 63 +++++++++++---------- drivers/gpu/drm/i915/i915_selftest.h | 2 + 5 files changed, 84 insertions(+), 44 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c index 07114443c402..b5c4eb4232da 100644 --- a/drivers/gpu/drm/i915/gt/intel_timeline.c +++ b/drivers/gpu/drm/i915/gt/intel_timeline.c @@ -51,13 +51,29 @@ static int __timeline_active(struct i915_active *active) return 0; } +I915_SELFTEST_EXPORT int +intel_timeline_pin_map(struct intel_timeline *timeline) +{ + struct drm_i915_gem_object *obj = timeline->hwsp_ggtt->obj; + u32 ofs = offset_in_page(timeline->hwsp_offset); + void *vaddr; + + vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB); + if (IS_ERR(vaddr)) + return PTR_ERR(vaddr); + + timeline->hwsp_map = vaddr; + timeline->hwsp_seqno = memset(vaddr + ofs, 0, CACHELINE_BYTES); + clflush(timeline->hwsp_seqno); + + return 0; +} + static int intel_timeline_init(struct intel_timeline *timeline, struct intel_gt *gt, struct i915_vma *hwsp, unsigned int offset) { - void *vaddr; - kref_init(&timeline->kref); atomic_set(&timeline->pin_count, 0); @@ -73,14 +89,8 @@ static int intel_timeline_init(struct intel_timeline *timeline, return PTR_ERR(hwsp); timeline->hwsp_ggtt = hwsp; } - - vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB); - if (IS_ERR(vaddr)) - return PTR_ERR(vaddr); - - timeline->hwsp_map = vaddr; - timeline->hwsp_seqno = - memset(vaddr + timeline->hwsp_offset, 0, CACHELINE_BYTES); + timeline->hwsp_map = NULL; + timeline->hwsp_seqno = (void *)(long)timeline->hwsp_offset; GEM_BUG_ON(timeline->hwsp_offset >= hwsp->size); @@ -111,7 +121,8 @@ static void intel_timeline_fini(struct intel_timeline *timeline) GEM_BUG_ON(!list_empty(&timeline->requests)); GEM_BUG_ON(timeline->retire); - i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj); + if (timeline->hwsp_map) + i915_gem_object_unpin_map(timeline->hwsp_ggtt->obj); i915_vma_put(timeline->hwsp_ggtt); i915_active_fini(&timeline->active); @@ -151,6 +162,12 @@ int intel_timeline_pin(struct intel_timeline *tl, struct i915_gem_ww_ctx *ww) if (atomic_add_unless(&tl->pin_count, 1, 0)) return 0; + if (!tl->hwsp_map) { + err = intel_timeline_pin_map(tl); + if (err) + return err; + } + err = i915_ggtt_pin(tl->hwsp_ggtt, ww, 0, PIN_HIGH); if (err) return err; diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.h b/drivers/gpu/drm/i915/gt/intel_timeline.h index 9882cd911d8e..1cfdc4679b62 100644 --- a/drivers/gpu/drm/i915/gt/intel_timeline.h +++ b/drivers/gpu/drm/i915/gt/intel_timeline.h @@ -106,4 +106,6 @@ int intel_timeline_read_hwsp(struct i915_request *from, void intel_gt_init_timelines(struct intel_gt *gt); void intel_gt_fini_timelines(struct intel_gt *gt); +I915_SELFTEST_DECLARE(int intel_timeline_pin_map(struct intel_timeline *tl)); + #endif diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c b/drivers/gpu/drm/i915/gt/mock_engine.c index 2f830017c51d..effbac877eec 100644 --- a/drivers/gpu/drm/i915/gt/mock_engine.c +++ b/drivers/gpu/drm/i915/gt/mock_engine.c @@ -32,9 +32,20 @@ #include "mock_engine.h" #include "selftests/mock_request.h" -static void mock_timeline_pin(struct intel_timeline *tl) +static int mock_timeline_pin(struct intel_timeline *tl) { + int err; + + if 
(WARN_ON(!i915_gem_object_trylock(tl->hwsp_ggtt->obj))) + return -EBUSY; + + err = intel_timeline_pin_map(tl); + i915_gem_object_unlock(tl->hwsp_ggtt->obj); + if (err) + return err; + atomic_inc(&tl->pin_count); + return 0; } static void mock_timeline_unpin(struct intel_timeline *tl) @@ -152,6 +163,8 @@ static void mock_context_destroy(struct kref *ref) static int mock_context_alloc(struct intel_context *ce) { + int err; + ce->ring = mock_ring(ce->engine); if (!ce->ring) return -ENOMEM; @@ -162,7 +175,12 @@ static int mock_context_alloc(struct intel_context *ce) return PTR_ERR(ce->timeline); } - mock_timeline_pin(ce->timeline); + err = mock_timeline_pin(ce->timeline); + if (err) { + intel_timeline_put(ce->timeline); + ce->timeline = NULL; + return err; + } return 0; } diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c index 98cd161b3925..6d6092a28e6b 100644 --- a/drivers/gpu/drm/i915/gt/selftest_timeline.c +++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c @@ -33,7 +33,7 @@ static unsigned long hwsp_cacheline(struct intel_timeline *tl) { unsigned long address = (unsigned long)page_address(hwsp_page(tl)); - return (address + tl->hwsp_offset) / CACHELINE_BYTES; + return (address + offset_in_page(tl->hwsp_offset)) / CACHELINE_BYTES; } #define CACHELINES_PER_PAGE (PAGE_SIZE / CACHELINE_BYTES) @@ -57,6 +57,7 @@ static void __mock_hwsp_record(struct mock_hwsp_freelist *state, tl = xchg(&state->history[idx], tl); if (tl) { radix_tree_delete(&state->cachelines, hwsp_cacheline(tl)); + intel_timeline_unpin(tl); intel_timeline_put(tl); } } @@ -76,6 +77,12 @@ static int __mock_hwsp_timeline(struct mock_hwsp_freelist *state, if (IS_ERR(tl)) return PTR_ERR(tl); + err = intel_timeline_pin(tl, NULL); + if (err) { + intel_timeline_put(tl); + return err; + } + cacheline = hwsp_cacheline(tl); err = radix_tree_insert(&state->cachelines, cacheline, tl); if (err) { @@ -83,6 +90,7 @@ static int __mock_hwsp_timeline(struct mock_hwsp_freelist *state, pr_err("HWSP cacheline %lu already used; duplicate allocation!\n", cacheline); } + intel_timeline_unpin(tl); intel_timeline_put(tl); return err; } @@ -450,7 +458,7 @@ static int emit_ggtt_store_dw(struct i915_request *rq, u32 addr, u32 value) } static struct i915_request * -tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value) +checked_tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value) { struct i915_request *rq; int err; @@ -461,6 +469,13 @@ tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value) goto out; } + if (READ_ONCE(*tl->hwsp_seqno) != tl->seqno) { + pr_err("Timeline created with incorrect breadcrumb, found %x, expected %x\n", + *tl->hwsp_seqno, tl->seqno); + intel_timeline_unpin(tl); + return ERR_PTR(-EINVAL); + } + rq = intel_engine_create_kernel_request(engine); if (IS_ERR(rq)) goto out_unpin; @@ -482,25 +497,6 @@ tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 value) return rq; } -static struct intel_timeline * -checked_intel_timeline_create(struct intel_gt *gt) -{ - struct intel_timeline *tl; - - tl = intel_timeline_create(gt); - if (IS_ERR(tl)) - return tl; - - if (READ_ONCE(*tl->hwsp_seqno) != tl->seqno) { - pr_err("Timeline created with incorrect breadcrumb, found %x, expected %x\n", - *tl->hwsp_seqno, tl->seqno); - intel_timeline_put(tl); - return ERR_PTR(-EINVAL); - } - - return tl; -} - static int live_hwsp_engine(void *arg) { #define NUM_TIMELINES 4096 @@ -533,13 +529,13 @@ static int 
live_hwsp_engine(void *arg) struct intel_timeline *tl; struct i915_request *rq; - tl = checked_intel_timeline_create(gt); + tl = intel_timeline_create(gt); if (IS_ERR(tl)) { err = PTR_ERR(tl); break; } - rq = tl_write(tl, engine, count); + rq = checked_tl_write(tl, engine, count); if (IS_ERR(rq)) { intel_timeline_put(tl); err = PTR_ERR(rq); @@ -606,14 +602,14 @@ static int live_hwsp_alternate(void *arg) if (!intel_engine_can_store_dword(engine)) continue; - tl = checked_intel_timeline_create(gt); + tl = intel_timeline_create(gt); if (IS_ERR(tl)) { err = PTR_ERR(tl); goto out; } intel_engine_pm_get(engine); - rq = tl_write(tl, engine, count); + rq = checked_tl_write(tl, engine, count); intel_engine_pm_put(engine); if (IS_ERR(rq)) { intel_timeline_put(tl); @@ -863,6 +859,10 @@ static int live_hwsp_rollover_user(void *arg) if (!tl->has_initial_breadcrumb) goto out; + err = intel_context_pin(ce); + if (err) + goto out; + tl->seqno = -4u; WRITE_ONCE(*(u32 *)tl->hwsp_seqno, tl->seqno); @@ -872,7 +872,7 @@ static int live_hwsp_rollover_user(void *arg) this = intel_context_create_request(ce); if (IS_ERR(this)) { err = PTR_ERR(this); - goto out; + goto out_unpin; } pr_debug("%s: create fence.seqnp:%d\n", @@ -891,17 +891,18 @@ static int live_hwsp_rollover_user(void *arg) if (i915_request_wait(rq[2], 0, HZ / 5) < 0) { pr_err("Wait for timeline wrap timed out!\n"); err = -EIO; - goto out; + goto out_unpin; } for (i = 0; i < ARRAY_SIZE(rq); i++) { if (!i915_request_completed(rq[i])) { pr_err("Pre-wrap request not completed!\n"); err = -EINVAL; - goto out; + goto out_unpin; } } - +out_unpin: + intel_context_unpin(ce); out: for (i = 0; i < ARRAY_SIZE(rq); i++) i915_request_put(rq[i]); @@ -943,13 +944,13 @@ static int live_hwsp_recycle(void *arg) struct intel_timeline *tl; struct i915_request *rq; - tl = checked_intel_timeline_create(gt); + tl = intel_timeline_create(gt); if (IS_ERR(tl)) { err = PTR_ERR(tl); break; } - rq = tl_write(tl, engine, count); + rq = checked_tl_write(tl, engine, count); if (IS_ERR(rq)) { intel_timeline_put(tl); err = PTR_ERR(rq); diff --git a/drivers/gpu/drm/i915/i915_selftest.h b/drivers/gpu/drm/i915/i915_selftest.h index d53d207ab6eb..f54de0499be7 100644 --- a/drivers/gpu/drm/i915/i915_selftest.h +++ b/drivers/gpu/drm/i915/i915_selftest.h @@ -107,6 +107,7 @@ int __i915_subtests(const char *caller, #define I915_SELFTEST_DECLARE(x) x #define I915_SELFTEST_ONLY(x) unlikely(x) +#define I915_SELFTEST_EXPORT #else /* !IS_ENABLED(CONFIG_DRM_I915_SELFTEST) */ @@ -116,6 +117,7 @@ static inline int i915_perf_selftests(struct pci_dev *pdev) { return 0; } #define I915_SELFTEST_DECLARE(x) #define I915_SELFTEST_ONLY(x) 0 +#define I915_SELFTEST_EXPORT static #endif From patchwork Fri Oct 16 10:43:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841345 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0EB36C43457 for ; Fri, 16 Oct 2020 10:45:12 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher 
ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id A1A202084C for ; Fri, 16 Oct 2020 10:45:11 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A1A202084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id B63066EAC2; Fri, 16 Oct 2020 10:44:55 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [IPv6:2a02:2308::216:3eff:fe92:dfa3]) by gabe.freedesktop.org (Postfix) with ESMTPS id 812736EACC for ; Fri, 16 Oct 2020 10:44:49 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:43:48 +0200 Message-Id: <20201016104444.1492028-6-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 05/61] drm/i915: Ensure we hold the object mutex in pin correctly. X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Currently we have a lot of places where we hold the gem object lock but haven't yet been converted to the ww dance. Complain loudly about those places. i915_vma_pin() should be called without the object lock held, so that the ww dance can still take it, while i915_vma_pin_ww() must be called with it held.
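To illustrate the calling convention the new assertions enforce, a minimal sketch (not part of the patch; the function and variable names are placeholders, and the retry loop follows the usual i915_gem_ww_ctx pattern):

static int pin_example(struct i915_vma *vma, struct drm_i915_gem_object *obj)
{
	struct i915_gem_ww_ctx ww;
	int err;

	/* i915_vma_pin_ww() expects the object lock to be held via the ww ctx */
	i915_gem_ww_ctx_init(&ww, true);
retry:
	err = i915_gem_object_lock(obj, &ww);
	if (!err)
		err = i915_vma_pin_ww(vma, &ww, 0, 0, PIN_USER);
	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);
	return err;
}

i915_vma_pin(), by contrast, must be called with the object lock not held, so that a ww dance like the one above can still acquire it; the new lockdep checks warn when either rule is violated.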
Signed-off-by: Maarten Lankhorst --- drivers/gpu/drm/i915/gt/intel_renderstate.c | 2 +- drivers/gpu/drm/i915/i915_vma.c | 11 ++++++++++- drivers/gpu/drm/i915/i915_vma.h | 3 +++ 3 files changed, 14 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_renderstate.c b/drivers/gpu/drm/i915/gt/intel_renderstate.c index ea2a77c7b469..a68e5c23a67c 100644 --- a/drivers/gpu/drm/i915/gt/intel_renderstate.c +++ b/drivers/gpu/drm/i915/gt/intel_renderstate.c @@ -196,7 +196,7 @@ int intel_renderstate_init(struct intel_renderstate *so, if (err) goto err_context; - err = i915_vma_pin(so->vma, 0, 0, PIN_GLOBAL | PIN_HIGH); + err = i915_vma_pin_ww(so->vma, &so->ww, 0, 0, PIN_GLOBAL | PIN_HIGH); if (err) goto err_context; diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index ffb5287e055a..4ead74c5142b 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -863,6 +863,8 @@ int i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww, #ifdef CONFIG_PROVE_LOCKING if (debug_locks && lockdep_is_held(&vma->vm->i915->drm.struct_mutex)) WARN_ON(!ww); + if (debug_locks && ww && vma->resv) + assert_vma_held(vma); #endif BUILD_BUG_ON(PIN_GLOBAL != I915_VMA_GLOBAL_BIND); @@ -1018,8 +1020,15 @@ int i915_ggtt_pin(struct i915_vma *vma, struct i915_gem_ww_ctx *ww, GEM_BUG_ON(!i915_vma_is_ggtt(vma)); +#ifdef CONFIG_LOCKDEP + WARN_ON(!ww && vma->resv && dma_resv_held(vma->resv)); +#endif + do { - err = i915_vma_pin_ww(vma, ww, 0, align, flags | PIN_GLOBAL); + if (ww) + err = i915_vma_pin_ww(vma, ww, 0, align, flags | PIN_GLOBAL); + else + err = i915_vma_pin(vma, 0, align, flags | PIN_GLOBAL); if (err != -ENOSPC) { if (!err) { err = i915_vma_wait_for_bind(vma); diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h index 5b3a3c653454..838bbbeb11cc 100644 --- a/drivers/gpu/drm/i915/i915_vma.h +++ b/drivers/gpu/drm/i915/i915_vma.h @@ -243,6 +243,9 @@ i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww, static inline int __must_check i915_vma_pin(struct i915_vma *vma, u64 size, u64 alignment, u64 flags) { +#ifdef CONFIG_LOCKDEP + WARN_ON_ONCE(vma->resv && dma_resv_held(vma->resv)); +#endif return i915_vma_pin_ww(vma, NULL, size, alignment, flags); } From patchwork Fri Oct 16 10:43:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841463 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D1C32C43457 for ; Fri, 16 Oct 2020 10:45:35 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 74FC9207F7 for ; Fri, 16 Oct 2020 10:45:35 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 74FC9207F7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none 
smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id B39876EB97; Fri, 16 Oct 2020 10:45:01 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 83E4E6EB0C for ; Fri, 16 Oct 2020 10:44:49 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:43:49 +0200 Message-Id: <20201016104444.1492028-7-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 06/61] drm/i915: Add gem object locking to madvise. X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Madvise doesn't need the full ww dance, since it only checks whether pages are bound; a plain object lock is enough. Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/i915_gem.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index bb0c12975f38..30af7e4b71ab 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -1071,10 +1071,14 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data, if (!obj) return -ENOENT; - err = mutex_lock_interruptible(&obj->mm.lock); + err = i915_gem_object_lock_interruptible(obj, NULL); if (err) goto out; + err = mutex_lock_interruptible(&obj->mm.lock); + if (err) + goto out_ww; + if (i915_gem_object_has_pages(obj) && i915_gem_object_is_tiled(obj) && i915->quirks & QUIRK_PIN_SWIZZLED_PAGES) { @@ -1119,6 +1123,8 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data, args->retained = obj->mm.madv != __I915_MADV_PURGED; mutex_unlock(&obj->mm.lock); +out_ww: + i915_gem_object_unlock(obj); out: i915_gem_object_put(obj); return err; From patchwork Fri Oct 16 10:43:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841331 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6CAB8C433E7 for ; Fri, 16 Oct 2020 10:45:06 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id F2D7E207F7 for ; Fri, 16 Oct 2020 10:45:05 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org F2D7E207F7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost
[127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id A98786EAC3; Fri, 16 Oct 2020 10:44:55 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id A542E6EB0E for ; Fri, 16 Oct 2020 10:44:49 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:43:50 +0200 Message-Id: <20201016104444.1492028-8-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 07/61] drm/i915: Move HAS_STRUCT_PAGE to obj->flags X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" We want to remove the changing of ops structure for attaching phys pages, so we need to kill off HAS_STRUCT_PAGE from ops->flags, and put it in the bo. This will remove a potential race of dereferencing the wrong obj->ops without ww mutex held. Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 2 +- drivers/gpu/drm/i915/gem/i915_gem_internal.c | 6 +++--- drivers/gpu/drm/i915/gem/i915_gem_lmem.c | 4 ++-- drivers/gpu/drm/i915/gem/i915_gem_mman.c | 7 +++---- drivers/gpu/drm/i915/gem/i915_gem_object.c | 4 +++- drivers/gpu/drm/i915/gem/i915_gem_object.h | 5 +++-- drivers/gpu/drm/i915/gem/i915_gem_object_types.h | 8 +++++--- drivers/gpu/drm/i915/gem/i915_gem_pages.c | 5 ++--- drivers/gpu/drm/i915/gem/i915_gem_phys.c | 2 ++ drivers/gpu/drm/i915/gem/i915_gem_region.c | 4 +--- drivers/gpu/drm/i915/gem/i915_gem_region.h | 3 +-- drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 8 ++++---- drivers/gpu/drm/i915/gem/i915_gem_stolen.c | 4 ++-- drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 6 +++--- drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c | 4 ++-- drivers/gpu/drm/i915/gem/selftests/huge_pages.c | 10 +++++----- drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c | 11 ++++------- drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c | 12 ++++++++++++ drivers/gpu/drm/i915/gvt/dmabuf.c | 2 +- drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 2 +- drivers/gpu/drm/i915/selftests/mock_region.c | 4 ++-- 21 files changed, 62 insertions(+), 51 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c index 0dd477e56573..131ec53d8521 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c @@ -259,7 +259,7 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev, } drm_gem_private_object_init(dev, &obj->base, dma_buf->size); - i915_gem_object_init(obj, &i915_gem_object_dmabuf_ops, &lock_class); + i915_gem_object_init(obj, &i915_gem_object_dmabuf_ops, &lock_class, 0); obj->base.import_attach = attach; obj->base.resv = dma_buf->resv; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c index ad22f42541bd..21cc40897ca8 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c @@ -138,8 +138,7 @@ static void i915_gem_object_put_pages_internal(struct drm_i915_gem_object *obj, static const struct drm_i915_gem_object_ops 
i915_gem_object_internal_ops = { .name = "i915_gem_object_internal", - .flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE | - I915_GEM_OBJECT_IS_SHRINKABLE, + .flags = I915_GEM_OBJECT_IS_SHRINKABLE, .get_pages = i915_gem_object_get_pages_internal, .put_pages = i915_gem_object_put_pages_internal, }; @@ -178,7 +177,8 @@ i915_gem_object_create_internal(struct drm_i915_private *i915, return ERR_PTR(-ENOMEM); drm_gem_private_object_init(&i915->drm, &obj->base, size); - i915_gem_object_init(obj, &i915_gem_object_internal_ops, &lock_class); + i915_gem_object_init(obj, &i915_gem_object_internal_ops, &lock_class, + I915_BO_ALLOC_STRUCT_PAGE); /* * Mark the object as volatile, such that the pages are marked as diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c index 932ee21e6609..e953965f8263 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c @@ -45,13 +45,13 @@ __i915_gem_lmem_object_create(struct intel_memory_region *mem, return ERR_PTR(-ENOMEM); drm_gem_private_object_init(&i915->drm, &obj->base, size); - i915_gem_object_init(obj, &i915_gem_lmem_obj_ops, &lock_class); + i915_gem_object_init(obj, &i915_gem_lmem_obj_ops, &lock_class, flags); obj->read_domains = I915_GEM_DOMAIN_WC | I915_GEM_DOMAIN_GTT; i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE); - i915_gem_object_init_memory_region(obj, mem, flags); + i915_gem_object_init_memory_region(obj, mem); return obj; } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c index 3d69e51f3e4d..5aa037ca3a41 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c @@ -251,7 +251,7 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf) goto out; iomap = -1; - if (!i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_STRUCT_PAGE)) { + if (!i915_gem_object_has_struct_page(obj)) { iomap = obj->mm.region->iomap.base; iomap -= obj->mm.region->region.start; } @@ -653,9 +653,8 @@ __assign_mmap_offset(struct drm_file *file, } if (mmap_type != I915_MMAP_TYPE_GTT && - !i915_gem_object_type_has(obj, - I915_GEM_OBJECT_HAS_STRUCT_PAGE | - I915_GEM_OBJECT_HAS_IOMEM)) { + !i915_gem_object_has_struct_page(obj) && + !i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM)) { err = -ENODEV; goto out; } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c index 00d24000b5e8..1393988bd5af 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c @@ -60,7 +60,7 @@ void i915_gem_object_free(struct drm_i915_gem_object *obj) void i915_gem_object_init(struct drm_i915_gem_object *obj, const struct drm_i915_gem_object_ops *ops, - struct lock_class_key *key) + struct lock_class_key *key, unsigned flags) { __mutex_init(&obj->mm.lock, ops->name ?: "obj->mm.lock", key); @@ -78,6 +78,8 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj, init_rcu_head(&obj->rcu); obj->ops = ops; + GEM_BUG_ON(flags & ~I915_BO_ALLOC_FLAGS); + obj->flags = flags; obj->mm.madv = I915_MADV_WILLNEED; INIT_RADIX_TREE(&obj->mm.get_page.radix, GFP_KERNEL | __GFP_NOWARN); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 99b18ba0c48d..04c29ed93632 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -23,7 +23,8 @@ void i915_gem_object_free(struct drm_i915_gem_object *obj); void i915_gem_object_init(struct drm_i915_gem_object 
*obj, const struct drm_i915_gem_object_ops *ops, - struct lock_class_key *key); + struct lock_class_key *key, + unsigned alloc_flags); struct drm_i915_gem_object * i915_gem_object_create_shmem(struct drm_i915_private *i915, resource_size_t size); @@ -197,7 +198,7 @@ i915_gem_object_type_has(const struct drm_i915_gem_object *obj, static inline bool i915_gem_object_has_struct_page(const struct drm_i915_gem_object *obj) { - return i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_STRUCT_PAGE); + return obj->flags & I915_BO_ALLOC_STRUCT_PAGE; } static inline bool diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index fedfebf13344..dcdff134ccc2 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -30,7 +30,6 @@ struct i915_lut_handle { struct drm_i915_gem_object_ops { unsigned int flags; -#define I915_GEM_OBJECT_HAS_STRUCT_PAGE BIT(0) #define I915_GEM_OBJECT_HAS_IOMEM BIT(1) #define I915_GEM_OBJECT_IS_SHRINKABLE BIT(2) #define I915_GEM_OBJECT_IS_PROXY BIT(3) @@ -163,8 +162,11 @@ struct drm_i915_gem_object { unsigned long flags; #define I915_BO_ALLOC_CONTIGUOUS BIT(0) #define I915_BO_ALLOC_VOLATILE BIT(1) -#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | I915_BO_ALLOC_VOLATILE) -#define I915_BO_READONLY BIT(2) +#define I915_BO_ALLOC_STRUCT_PAGE BIT(2) +#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | \ + I915_BO_ALLOC_VOLATILE | \ + I915_BO_ALLOC_STRUCT_PAGE) +#define I915_BO_READONLY BIT(3) /* * Is the object to be mapped as read-only to the GPU diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c index 2e89ba5133eb..1c646d5f802b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c @@ -346,13 +346,12 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj, enum i915_map_type type) { enum i915_map_type has_type; - unsigned int flags; bool pinned; void *ptr; int err; - flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE | I915_GEM_OBJECT_HAS_IOMEM; - if (!i915_gem_object_type_has(obj, flags)) + if (!i915_gem_object_has_struct_page(obj) && + !i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM)) return ERR_PTR(-ENXIO); err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c index 28147aab47b9..3b92156b494d 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c @@ -185,6 +185,7 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align) pages = __i915_gem_object_unset_pages(obj); obj->ops = &i915_gem_phys_ops; + obj->flags &= ~I915_BO_ALLOC_STRUCT_PAGE; err = ____i915_gem_object_get_pages(obj); if (err) @@ -203,6 +204,7 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align) err_xfer: obj->ops = &i915_gem_shmem_ops; + obj->flags |= I915_BO_ALLOC_STRUCT_PAGE; if (!IS_ERR_OR_NULL(pages)) { unsigned int sg_page_sizes = i915_sg_page_sizes(pages->sgl); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c index 1515384d7e0e..6a96741253b3 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_region.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c @@ -102,13 +102,11 @@ i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj) } void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj, - struct 
intel_memory_region *mem, - unsigned long flags) + struct intel_memory_region *mem) { INIT_LIST_HEAD(&obj->mm.blocks); obj->mm.region = intel_memory_region_get(mem); - obj->flags |= flags; if (obj->base.size <= mem->min_page_size) obj->flags |= I915_BO_ALLOC_CONTIGUOUS; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.h b/drivers/gpu/drm/i915/gem/i915_gem_region.h index f2ff6f8bff74..ebddc86d78f7 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_region.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_region.h @@ -17,8 +17,7 @@ void i915_gem_object_put_pages_buddy(struct drm_i915_gem_object *obj, struct sg_table *pages); void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj, - struct intel_memory_region *mem, - unsigned long flags); + struct intel_memory_region *mem); void i915_gem_object_release_memory_region(struct drm_i915_gem_object *obj); struct drm_i915_gem_object * diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index 38113d3c0138..1ad4713589da 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -430,8 +430,7 @@ static void shmem_release(struct drm_i915_gem_object *obj) const struct drm_i915_gem_object_ops i915_gem_shmem_ops = { .name = "i915_gem_object_shmem", - .flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE | - I915_GEM_OBJECT_IS_SHRINKABLE, + .flags = I915_GEM_OBJECT_IS_SHRINKABLE, .get_pages = shmem_get_pages, .put_pages = shmem_put_pages, @@ -496,7 +495,8 @@ create_shmem(struct intel_memory_region *mem, mapping_set_gfp_mask(mapping, mask); GEM_BUG_ON(!(mapping_gfp_mask(mapping) & __GFP_RECLAIM)); - i915_gem_object_init(obj, &i915_gem_shmem_ops, &lock_class); + i915_gem_object_init(obj, &i915_gem_shmem_ops, &lock_class, + I915_BO_ALLOC_STRUCT_PAGE); obj->write_domain = I915_GEM_DOMAIN_CPU; obj->read_domains = I915_GEM_DOMAIN_CPU; @@ -520,7 +520,7 @@ create_shmem(struct intel_memory_region *mem, i915_gem_object_set_cache_coherency(obj, cache_level); - i915_gem_object_init_memory_region(obj, mem, 0); + i915_gem_object_init_memory_region(obj, mem); return obj; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c index 0be5e8683337..9a9242b5a99f 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c @@ -586,7 +586,7 @@ __i915_gem_object_create_stolen(struct intel_memory_region *mem, goto err; drm_gem_private_object_init(&mem->i915->drm, &obj->base, stolen->size); - i915_gem_object_init(obj, &i915_gem_object_stolen_ops, &lock_class); + i915_gem_object_init(obj, &i915_gem_object_stolen_ops, &lock_class, 0); obj->stolen = stolen; obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT; @@ -597,7 +597,7 @@ __i915_gem_object_create_stolen(struct intel_memory_region *mem, if (err) goto cleanup; - i915_gem_object_init_memory_region(obj, mem, 0); + i915_gem_object_init_memory_region(obj, mem); return obj; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index 12b30075134a..22008948be58 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -702,8 +702,7 @@ i915_gem_userptr_dmabuf_export(struct drm_i915_gem_object *obj) static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = { .name = "i915_gem_object_userptr", - .flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE | - I915_GEM_OBJECT_IS_SHRINKABLE | + .flags = I915_GEM_OBJECT_IS_SHRINKABLE | I915_GEM_OBJECT_NO_MMAP | 
I915_GEM_OBJECT_ASYNC_CANCEL, .get_pages = i915_gem_userptr_get_pages, @@ -810,7 +809,8 @@ i915_gem_userptr_ioctl(struct drm_device *dev, return -ENOMEM; drm_gem_private_object_init(dev, &obj->base, args->user_size); - i915_gem_object_init(obj, &i915_gem_userptr_ops, &lock_class); + i915_gem_object_init(obj, &i915_gem_userptr_ops, &lock_class, + I915_BO_ALLOC_STRUCT_PAGE); obj->read_domains = I915_GEM_DOMAIN_CPU; obj->write_domain = I915_GEM_DOMAIN_CPU; i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC); diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c index a768ec61e966..dfad86d74dd0 100644 --- a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c +++ b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c @@ -89,7 +89,6 @@ static void huge_put_pages(struct drm_i915_gem_object *obj, static const struct drm_i915_gem_object_ops huge_ops = { .name = "huge-gem", - .flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE, .get_pages = huge_get_pages, .put_pages = huge_put_pages, }; @@ -115,7 +114,8 @@ huge_gem_object(struct drm_i915_private *i915, return ERR_PTR(-ENOMEM); drm_gem_private_object_init(&i915->drm, &obj->base, dma_size); - i915_gem_object_init(obj, &huge_ops, &lock_class); + i915_gem_object_init(obj, &huge_ops, &lock_class, + I915_BO_ALLOC_STRUCT_PAGE); obj->read_domains = I915_GEM_DOMAIN_CPU; obj->write_domain = I915_GEM_DOMAIN_CPU; diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c index 1f35e71429b4..a7d5f7785f32 100644 --- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c +++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c @@ -140,8 +140,7 @@ static void put_huge_pages(struct drm_i915_gem_object *obj, static const struct drm_i915_gem_object_ops huge_page_ops = { .name = "huge-gem", - .flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE | - I915_GEM_OBJECT_IS_SHRINKABLE, + .flags = I915_GEM_OBJECT_IS_SHRINKABLE, .get_pages = get_huge_pages, .put_pages = put_huge_pages, }; @@ -168,7 +167,8 @@ huge_pages_object(struct drm_i915_private *i915, return ERR_PTR(-ENOMEM); drm_gem_private_object_init(&i915->drm, &obj->base, size); - i915_gem_object_init(obj, &huge_page_ops, &lock_class); + i915_gem_object_init(obj, &huge_page_ops, &lock_class, + I915_BO_ALLOC_STRUCT_PAGE); i915_gem_object_set_volatile(obj); @@ -319,9 +319,9 @@ fake_huge_pages_object(struct drm_i915_private *i915, u64 size, bool single) drm_gem_private_object_init(&i915->drm, &obj->base, size); if (single) - i915_gem_object_init(obj, &fake_ops_single, &lock_class); + i915_gem_object_init(obj, &fake_ops_single, &lock_class, 0); else - i915_gem_object_init(obj, &fake_ops, &lock_class); + i915_gem_object_init(obj, &fake_ops, &lock_class, 0); i915_gem_object_set_volatile(obj); diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c index d27d87a678c8..3ac7628f3bc4 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c @@ -834,9 +834,8 @@ static bool can_mmap(struct drm_i915_gem_object *obj, enum i915_mmap_type type) return false; if (type != I915_MMAP_TYPE_GTT && - !i915_gem_object_type_has(obj, - I915_GEM_OBJECT_HAS_STRUCT_PAGE | - I915_GEM_OBJECT_HAS_IOMEM)) + !i915_gem_object_has_struct_page(obj) && + !i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM)) return false; return true; @@ -976,10 +975,8 @@ static const char *repr_mmap_type(enum i915_mmap_type type) static bool 
can_access(const struct drm_i915_gem_object *obj) { - unsigned int flags = - I915_GEM_OBJECT_HAS_STRUCT_PAGE | I915_GEM_OBJECT_HAS_IOMEM; - - return i915_gem_object_type_has(obj, flags); + return i915_gem_object_has_struct_page(obj) || + i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM); } static int __igt_mmap_access(struct drm_i915_private *i915, diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c index 8cee68c6a6dc..fb6a17701310 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c @@ -25,12 +25,24 @@ static int mock_phys_object(void *arg) goto out; } + if (!i915_gem_object_has_struct_page(obj)) { + err = -EINVAL; + pr_err("shmem has no struct page\n"); + goto out_obj; + } + err = i915_gem_object_attach_phys(obj, PAGE_SIZE); if (err) { pr_err("i915_gem_object_attach_phys failed, err=%d\n", err); goto out_obj; } + if (i915_gem_object_has_struct_page(obj)) { + err = -EINVAL; + pr_err("shmem has a struct page\n"); + goto out_obj; + } + if (obj->ops != &i915_gem_phys_ops) { pr_err("i915_gem_object_attach_phys did not create a phys object\n"); err = -EINVAL; diff --git a/drivers/gpu/drm/i915/gvt/dmabuf.c b/drivers/gpu/drm/i915/gvt/dmabuf.c index c3eb3838fe88..d4f883f35b95 100644 --- a/drivers/gpu/drm/i915/gvt/dmabuf.c +++ b/drivers/gpu/drm/i915/gvt/dmabuf.c @@ -218,7 +218,7 @@ static struct drm_i915_gem_object *vgpu_create_gem(struct drm_device *dev, drm_gem_private_object_init(dev, &obj->base, roundup(info->size, PAGE_SIZE)); - i915_gem_object_init(obj, &intel_vgpu_gem_ops, &lock_class); + i915_gem_object_init(obj, &intel_vgpu_gem_ops, &lock_class, 0); i915_gem_object_set_readonly(obj); obj->read_domains = I915_GEM_DOMAIN_GTT; diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c index c53a222e3dec..2cfe99c79034 100644 --- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c @@ -120,7 +120,7 @@ fake_dma_object(struct drm_i915_private *i915, u64 size) goto err; drm_gem_private_object_init(&i915->drm, &obj->base, size); - i915_gem_object_init(obj, &fake_ops, &lock_class); + i915_gem_object_init(obj, &fake_ops, &lock_class, 0); i915_gem_object_set_volatile(obj); diff --git a/drivers/gpu/drm/i915/selftests/mock_region.c b/drivers/gpu/drm/i915/selftests/mock_region.c index 09660f5a0a4c..2a15e525d5b9 100644 --- a/drivers/gpu/drm/i915/selftests/mock_region.c +++ b/drivers/gpu/drm/i915/selftests/mock_region.c @@ -32,13 +32,13 @@ mock_object_create(struct intel_memory_region *mem, return ERR_PTR(-ENOMEM); drm_gem_private_object_init(&i915->drm, &obj->base, size); - i915_gem_object_init(obj, &mock_region_obj_ops, &lock_class); + i915_gem_object_init(obj, &mock_region_obj_ops, &lock_class, flags); obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT; i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE); - i915_gem_object_init_memory_region(obj, mem, flags); + i915_gem_object_init_memory_region(obj, mem); return obj; } From patchwork Fri Oct 16 10:43:51 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841501 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, 
HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 094E0C35268 for ; Fri, 16 Oct 2020 10:45:40 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 96F02207F7 for ; Fri, 16 Oct 2020 10:45:39 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 96F02207F7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 8B8136EC3C; Fri, 16 Oct 2020 10:45:21 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [IPv6:2a02:2308::216:3eff:fe92:dfa3]) by gabe.freedesktop.org (Postfix) with ESMTPS id B43A96EB12 for ; Fri, 16 Oct 2020 10:44:49 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:43:51 +0200 Message-Id: <20201016104444.1492028-9-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 08/61] drm/i915: Rework struct phys attachment handling X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Instead of creating a separate object type, change the shmem type so that the struct page backing can be cleared. This allows us to ensure we never run into a race from exchanging obj->ops with other function pointers.
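The shape of the change, as a condensed sketch (illustrative, not the verbatim kernel code): the shmem callbacks stay installed for the whole lifetime of the object and branch on the struct-page flag, rather than obj->ops being swapped at runtime:

static void shmem_put_pages_sketch(struct drm_i915_gem_object *obj,
				   struct sg_table *pages)
{
	/*
	 * After attach_phys the object is no longer struct-page backed,
	 * so route to the phys variant; obj->ops itself never changes.
	 */
	if (!i915_gem_object_has_struct_page(obj)) {
		i915_gem_object_put_pages_phys(obj, pages);
		return;
	}

	/* ... the normal shmem page release path follows ... */
}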
Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_object.h | 3 + drivers/gpu/drm/i915/gem/i915_gem_phys.c | 91 ++++++++++--------- drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 8 +- .../drm/i915/gem/selftests/i915_gem_phys.c | 6 -- 4 files changed, 56 insertions(+), 52 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 04c29ed93632..e0c1e2817bee 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -38,6 +38,9 @@ void __i915_gem_object_release_shmem(struct drm_i915_gem_object *obj, bool needs_clflush); int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align); +void i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj, + struct sg_table *pages); + void i915_gem_flush_free_objects(struct drm_i915_private *i915); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c index 3b92156b494d..3960c1d9d415 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c @@ -76,6 +76,8 @@ static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj) intel_gt_chipset_flush(&to_i915(obj->base.dev)->gt); + /* We're no longer struct page backed */ + obj->flags &= ~I915_BO_ALLOC_STRUCT_PAGE; __i915_gem_object_set_pages(obj, st, sg->length); return 0; @@ -89,7 +91,7 @@ static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj) return -ENOMEM; } -static void +void i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj, struct sg_table *pages) { @@ -134,83 +136,82 @@ i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj, vaddr, dma); } -static void phys_release(struct drm_i915_gem_object *obj) +static int i915_gem_object_shmem_to_phys(struct drm_i915_gem_object *obj) { - fput(obj->base.filp); -} + struct sg_table *pages; + int err; + + pages = __i915_gem_object_unset_pages(obj); + + err = i915_gem_object_get_pages_phys(obj); + if (err) + goto err_xfer; + + /* Perma-pin (until release) the physical set of pages */ + __i915_gem_object_pin_pages(obj); + + if (!IS_ERR_OR_NULL(pages)) + i915_gem_shmem_ops.put_pages(obj, pages); + + i915_gem_object_release_memory_region(obj); + return 0; -static const struct drm_i915_gem_object_ops i915_gem_phys_ops = { - .name = "i915_gem_object_phys", - .get_pages = i915_gem_object_get_pages_phys, - .put_pages = i915_gem_object_put_pages_phys, +err_xfer: + if (!IS_ERR_OR_NULL(pages)) { + unsigned int sg_page_sizes = i915_sg_page_sizes(pages->sgl); - .release = phys_release, -}; + __i915_gem_object_set_pages(obj, pages, sg_page_sizes); + } + return err; +} int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align) { - struct sg_table *pages; int err; if (align > obj->base.size) return -EINVAL; - if (obj->ops == &i915_gem_phys_ops) - return 0; - if (obj->ops != &i915_gem_shmem_ops) return -EINVAL; + if (!i915_gem_object_has_struct_page(obj)) + return 0; + err = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE); if (err) return err; mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES); + if (unlikely(!i915_gem_object_has_struct_page(obj))) + goto out; + if (obj->mm.madv != I915_MADV_WILLNEED) { err = -EFAULT; - goto err_unlock; + goto out; } if (obj->mm.quirked) { err = -EFAULT; - goto err_unlock; + goto out; } - if (obj->mm.mapping) { + if (obj->mm.mapping || i915_gem_object_has_pinned_pages(obj)) { err = -EBUSY; - goto err_unlock; + 
goto out; } - pages = __i915_gem_object_unset_pages(obj); - - obj->ops = &i915_gem_phys_ops; - obj->flags &= ~I915_BO_ALLOC_STRUCT_PAGE; - - err = ____i915_gem_object_get_pages(obj); - if (err) - goto err_xfer; - - /* Perma-pin (until release) the physical set of pages */ - __i915_gem_object_pin_pages(obj); - - if (!IS_ERR_OR_NULL(pages)) - i915_gem_shmem_ops.put_pages(obj, pages); - - i915_gem_object_release_memory_region(obj); - - mutex_unlock(&obj->mm.lock); - return 0; + if (unlikely(obj->mm.madv != I915_MADV_WILLNEED)) { + drm_dbg(obj->base.dev, + "Attempting to obtain a purgeable object\n"); + err = -EFAULT; + goto out; + } -err_xfer: - obj->ops = &i915_gem_shmem_ops; - obj->flags |= I915_BO_ALLOC_STRUCT_PAGE; - if (!IS_ERR_OR_NULL(pages)) { - unsigned int sg_page_sizes = i915_sg_page_sizes(pages->sgl); + err = i915_gem_object_shmem_to_phys(obj); - __i915_gem_object_set_pages(obj, pages, sg_page_sizes); - } -err_unlock: +out: mutex_unlock(&obj->mm.lock); return err; } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index 1ad4713589da..e0778b3cc0c3 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -303,6 +303,11 @@ shmem_put_pages(struct drm_i915_gem_object *obj, struct sg_table *pages) struct pagevec pvec; struct page *page; + if (unlikely(!i915_gem_object_has_struct_page(obj))) { + i915_gem_object_put_pages_phys(obj, pages); + return; + } + __i915_gem_object_release_shmem(obj, pages, true); i915_gem_gtt_finish_pages(obj, pages); @@ -423,7 +428,8 @@ shmem_pwrite(struct drm_i915_gem_object *obj, static void shmem_release(struct drm_i915_gem_object *obj) { - i915_gem_object_release_memory_region(obj); + if (obj->flags & I915_BO_ALLOC_STRUCT_PAGE) + i915_gem_object_release_memory_region(obj); fput(obj->base.filp); } diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c index fb6a17701310..0cfa082047fe 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c @@ -38,12 +38,6 @@ static int mock_phys_object(void *arg) } if (i915_gem_object_has_struct_page(obj)) { - err = -EINVAL; - pr_err("shmem has a struct page\n"); - goto out_obj; - } - - if (obj->ops != &i915_gem_phys_ops) { pr_err("i915_gem_object_attach_phys did not create a phys object\n"); err = -EINVAL; goto out_obj; From patchwork Fri Oct 16 10:43:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841337 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3DDA9C43467 for ; Fri, 16 Oct 2020 10:45:09 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id D0B012084C for ; Fri, 16 Oct 2020 10:45:08 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D0B012084C Authentication-Results: 
mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C33A26EAC7; Fri, 16 Oct 2020 10:44:55 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id B8C3A6EB13 for ; Fri, 16 Oct 2020 10:44:49 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:43:52 +0200 Message-Id: <20201016104444.1492028-10-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 09/61] drm/i915: Convert i915_gem_object_attach_phys() to ww locking X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Simply add i915_gem_object_lock; we may start passing ww to get_pages() in the future, but that won't be the case here. We override shmem's get_pages() handling by calling i915_gem_object_get_pages_phys(), so no ww is needed. Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_phys.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c index 3960c1d9d415..153de6538378 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c @@ -182,7 +182,13 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align) if (err) return err; - mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES); + err = i915_gem_object_lock_interruptible(obj, NULL); + if (err) + return err; + + err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES); + if (err) + goto err_unlock; if (unlikely(!i915_gem_object_has_struct_page(obj))) goto out; @@ -213,6 +219,8 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align) out: mutex_unlock(&obj->mm.lock); +err_unlock: + i915_gem_object_unlock(obj); return err; } From patchwork Fri Oct 16 10:43:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841371 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EC01EC43467 for ; Fri, 16 Oct 2020 10:45:22 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 81C682084C for ; Fri, 16 Oct 2020 10:45:22 +0000 (UTC) DMARC-Filter: OpenDMARC
Filter v1.3.2 mail.kernel.org 81C682084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C5A686EB20; Fri, 16 Oct 2020 10:44:59 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id C9F736EACC for ; Fri, 16 Oct 2020 10:44:49 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:43:53 +0200 Message-Id: <20201016104444.1492028-11-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 10/61] drm/i915: make lockdep slightly happier about execbuf. X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" As soon as we install fences, we should stop allocating memory in order to prevent any potential deadlocks. This is required later on, when we start adding support for dma-fence annotations. Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 24 ++++++++++++++----- drivers/gpu/drm/i915/i915_active.c | 20 ++++++++-------- drivers/gpu/drm/i915/i915_vma.c | 8 ++++--- drivers/gpu/drm/i915/i915_vma.h | 3 +++ 4 files changed, 36 insertions(+), 19 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 0f5efced0b87..9a44d9a6b5ed 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -49,11 +49,12 @@ enum { #define DBG_FORCE_RELOC 0 /* choose one of the above! 
*/ }; -#define __EXEC_OBJECT_HAS_PIN BIT(31) -#define __EXEC_OBJECT_HAS_FENCE BIT(30) -#define __EXEC_OBJECT_NEEDS_MAP BIT(29) -#define __EXEC_OBJECT_NEEDS_BIAS BIT(28) -#define __EXEC_OBJECT_INTERNAL_FLAGS (~0u << 28) /* all of the above */ +/* __EXEC_OBJECT_NO_RESERVE is BIT(31), defined in i915_vma.h */ +#define __EXEC_OBJECT_HAS_PIN BIT(30) +#define __EXEC_OBJECT_HAS_FENCE BIT(29) +#define __EXEC_OBJECT_NEEDS_MAP BIT(28) +#define __EXEC_OBJECT_NEEDS_BIAS BIT(27) +#define __EXEC_OBJECT_INTERNAL_FLAGS (~0u << 27) /* all of the above + */ #define __EXEC_OBJECT_RESERVED (__EXEC_OBJECT_HAS_PIN | __EXEC_OBJECT_HAS_FENCE) #define __EXEC_HAS_RELOC BIT(31) @@ -929,6 +930,12 @@ static int eb_validate_vmas(struct i915_execbuffer *eb) } } + if (!(ev->flags & EXEC_OBJECT_WRITE)) { + err = dma_resv_reserve_shared(vma->resv, 1); + if (err) + return err; + } + GEM_BUG_ON(drm_mm_node_allocated(&vma->node) && eb_vma_misplaced(&eb->exec[i], vma, ev->flags)); } @@ -2194,7 +2201,8 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb) } if (err == 0) - err = i915_vma_move_to_active(vma, eb->request, flags); + err = i915_vma_move_to_active(vma, eb->request, + flags | __EXEC_OBJECT_NO_RESERVE); } if (unlikely(err)) @@ -2446,6 +2454,10 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb, if (err) goto err_commit; + err = dma_resv_reserve_shared(shadow->resv, 1); + if (err) + goto err_commit; + /* Wait for all writes (and relocs) into the batch to complete */ err = i915_sw_fence_await_reservation(&pw->base.chain, pw->batch->resv, NULL, false, diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c index b0a6522be3d1..2bf1e444dda7 100644 --- a/drivers/gpu/drm/i915/i915_active.c +++ b/drivers/gpu/drm/i915/i915_active.c @@ -296,18 +296,13 @@ static struct active_node *__active_lookup(struct i915_active *ref, u64 idx) static struct i915_active_fence * active_instance(struct i915_active *ref, u64 idx) { - struct active_node *node, *prealloc; + struct active_node *node; struct rb_node **p, *parent; node = __active_lookup(ref, idx); if (likely(node)) return &node->base; - /* Preallocate a replacement, just in case */ - prealloc = kmem_cache_alloc(global.slab_cache, GFP_KERNEL); - if (!prealloc) - return NULL; - spin_lock_irq(&ref->tree_lock); GEM_BUG_ON(i915_active_is_idle(ref)); @@ -317,10 +312,8 @@ active_instance(struct i915_active *ref, u64 idx) parent = *p; node = rb_entry(parent, struct active_node, node); - if (node->timeline == idx) { - kmem_cache_free(global.slab_cache, prealloc); + if (node->timeline == idx) goto out; - } if (node->timeline < idx) p = &parent->rb_right; @@ -328,7 +321,14 @@ active_instance(struct i915_active *ref, u64 idx) p = &parent->rb_left; } - node = prealloc; + /* + * XXX: We should preallocate this before i915_active_ref() is ever + * called, but we cannot call into fs_reclaim() anyway, so use GFP_ATOMIC. 
+ */ + node = kmem_cache_alloc(global.slab_cache, GFP_ATOMIC); + if (!node) + goto out; + __i915_active_fence_init(&node->base, NULL, node_retire); node->ref = ref; node->timeline = idx; diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index 4ead74c5142b..f50250c8685a 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -1245,9 +1245,11 @@ int i915_vma_move_to_active(struct i915_vma *vma, obj->write_domain = I915_GEM_DOMAIN_RENDER; obj->read_domains = 0; } else { - err = dma_resv_reserve_shared(vma->resv, 1); - if (unlikely(err)) - return err; + if (!(flags & __EXEC_OBJECT_NO_RESERVE)) { + err = dma_resv_reserve_shared(vma->resv, 1); + if (unlikely(err)) + return err; + } dma_resv_add_shared_fence(vma->resv, &rq->fence); obj->write_domain = 0; diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h index 838bbbeb11cc..3c951d5428cf 100644 --- a/drivers/gpu/drm/i915/i915_vma.h +++ b/drivers/gpu/drm/i915/i915_vma.h @@ -52,6 +52,9 @@ static inline bool i915_vma_is_active(const struct i915_vma *vma) return !i915_active_is_idle(&vma->active); } +/* do not reserve memory to prevent deadlocks */ +#define __EXEC_OBJECT_NO_RESERVE BIT(31) + int __must_check __i915_vma_move_to_active(struct i915_vma *vma, struct i915_request *rq); int __must_check i915_vma_move_to_active(struct i915_vma *vma, From patchwork Fri Oct 16 10:43:54 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841369 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C2796C4363D for ; Fri, 16 Oct 2020 10:45:20 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 6625B2084C for ; Fri, 16 Oct 2020 10:45:20 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6625B2084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 80EB56EAC6; Fri, 16 Oct 2020 10:44:59 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 059016EB1B for ; Fri, 16 Oct 2020 10:44:50 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:43:54 +0200 Message-Id: <20201016104444.1492028-12-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 11/61] drm/i915: Disable userptr pread/pwrite support. 
X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Userptr should not need the kernel for a userspace memcpy; userspace needs to call memcpy directly. Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- .../gpu/drm/i915/gem/i915_gem_object_types.h | 2 ++ drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 20 +++++++++++++++++++ drivers/gpu/drm/i915/i915_gem.c | 5 +++++ 3 files changed, 27 insertions(+) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index dcdff134ccc2..e84b279bfee6 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -57,6 +57,8 @@ struct drm_i915_gem_object_ops { int (*pwrite)(struct drm_i915_gem_object *obj, const struct drm_i915_gem_pwrite *arg); + int (*pread)(struct drm_i915_gem_object *obj, + const struct drm_i915_gem_pread *arg); int (*dmabuf_export)(struct drm_i915_gem_object *obj); void (*release)(struct drm_i915_gem_object *obj); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index 22008948be58..136a589e5d94 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -700,6 +700,24 @@ i915_gem_userptr_dmabuf_export(struct drm_i915_gem_object *obj) return i915_gem_userptr_init__mmu_notifier(obj, 0); } +static int +i915_gem_userptr_pwrite(struct drm_i915_gem_object *obj, + const struct drm_i915_gem_pwrite *args) +{ + drm_dbg(obj->base.dev, "pwrite to userptr no longer allowed\n"); + + return -EINVAL; +} + +static int +i915_gem_userptr_pread(struct drm_i915_gem_object *obj, + const struct drm_i915_gem_pread *args) +{ + drm_dbg(obj->base.dev, "pread from userptr no longer allowed\n"); + + return -EINVAL; +} + static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = { .name = "i915_gem_object_userptr", .flags = I915_GEM_OBJECT_IS_SHRINKABLE | @@ -708,6 +726,8 @@ static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = { .get_pages = i915_gem_userptr_get_pages, .put_pages = i915_gem_userptr_put_pages, .dmabuf_export = i915_gem_userptr_dmabuf_export, + .pwrite = i915_gem_userptr_pwrite, + .pread = i915_gem_userptr_pread, .release = i915_gem_userptr_release, }; diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 30af7e4b71ab..d349c0b796ec 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -526,6 +526,11 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data, } trace_i915_gem_object_pread(obj, args->offset, args->size); + ret = -ENODEV; + if (obj->ops->pread) + ret = obj->ops->pread(obj, args); + if (ret != -ENODEV) + goto out; ret = i915_gem_object_wait(obj, I915_WAIT_INTERRUPTIBLE, From patchwork Fri Oct 16 10:43:55 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841499 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT
autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 65E2FC2BD0C for ; Fri, 16 Oct 2020 10:45:39 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 055D5207F7 for ; Fri, 16 Oct 2020 10:45:38 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 055D5207F7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id E394E6EB8A; Fri, 16 Oct 2020 10:45:21 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [IPv6:2a02:2308::216:3eff:fe92:dfa3]) by gabe.freedesktop.org (Postfix) with ESMTPS id 11ECC6EB1C for ; Fri, 16 Oct 2020 10:44:50 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:43:55 +0200 Message-Id: <20201016104444.1492028-13-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 12/61] drm/i915: No longer allow exporting userptr through dma-buf X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" It doesn't make sense to export a memory address; when we rework userptr handling we will prevent access to different address spaces through this path anyway, so it is best to disable it explicitly now.
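From userspace, the visible effect is that exporting a userptr object now fails at export time rather than handing out a dma-buf that aliases another address space. A hedged sketch using libdrm (drm_fd and userptr_handle are placeholders):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <xf86drm.h>

static void try_export_userptr(int drm_fd, uint32_t userptr_handle)
{
	int prime_fd;

	/* drmPrimeHandleToFD() wraps DRM_IOCTL_PRIME_HANDLE_TO_FD */
	if (drmPrimeHandleToFD(drm_fd, userptr_handle, DRM_CLOEXEC, &prime_fd))
		fprintf(stderr, "export rejected: %s\n", strerror(errno)); /* now EINVAL */
}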
Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index 136a589e5d94..9c1293c99d88 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -694,10 +694,9 @@ i915_gem_userptr_release(struct drm_i915_gem_object *obj) static int i915_gem_userptr_dmabuf_export(struct drm_i915_gem_object *obj) { - if (obj->userptr.mmu_object) - return 0; + drm_dbg(obj->base.dev, "Exporting userptr no longer allowed\n"); - return i915_gem_userptr_init__mmu_notifier(obj, 0); + return -EINVAL; } static int From patchwork Fri Oct 16 10:43:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841387 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1F921C433E7 for ; Fri, 16 Oct 2020 10:45:27 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id B253C2084C for ; Fri, 16 Oct 2020 10:45:26 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B253C2084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 6DD656EB93; Fri, 16 Oct 2020 10:45:01 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [IPv6:2a02:2308::216:3eff:fe92:dfa3]) by gabe.freedesktop.org (Postfix) with ESMTPS id 1FCFE6EB20 for ; Fri, 16 Oct 2020 10:44:50 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:43:56 +0200 Message-Id: <20201016104444.1492028-14-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 13/61] drm/i915: Reject more ioctls for userptr X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Allow set_domain to fail silently; waiting for idle should be good enough. set_tiling and set_caching are rejected with -ENXIO, as there is no valid reason to allow them.
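To make the -ENXIO behaviour concrete, here is a minimal sketch of the proxy-object guard this patch extends to userptr. The spelling assumes the per-object flag bits live in obj->ops->flags, as they do at this point in the series; the guard function itself is illustrative, not taken from the diff:

/*
 * Illustrative guard, not code from this patch: proxy objects have no
 * backing store the kernel is allowed to mutate, so state-changing
 * ioctls bail out before touching anything.
 */
static inline bool is_proxy_sketch(const struct drm_i915_gem_object *obj)
{
	return obj->ops->flags & I915_GEM_OBJECT_IS_PROXY;
}

static int set_tiling_guard_sketch(struct drm_i915_gem_object *obj)
{
	if (is_proxy_sketch(obj))
		return -ENXIO; /* set_tiling/set_caching reject userptr */

	return 0; /* otherwise proceed with the real state change */
}

set_domain is the one exception: as the diff below shows, it swallows the proxy check for userptr and succeeds after the implicit wait, since waiting for idle is the only useful thing it could do for user memory anyway.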
Signed-off-by: Maarten Lankhorst --- drivers/gpu/drm/i915/display/intel_display.c | 2 +- drivers/gpu/drm/i915/gem/i915_gem_domain.c | 4 +++- drivers/gpu/drm/i915/gem/i915_gem_object.h | 6 ++++++ drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 3 ++- 4 files changed, 12 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c index a02ca7de72de..5690e2ae2366 100644 --- a/drivers/gpu/drm/i915/display/intel_display.c +++ b/drivers/gpu/drm/i915/display/intel_display.c @@ -17365,7 +17365,7 @@ static int intel_user_framebuffer_create_handle(struct drm_framebuffer *fb, struct drm_i915_gem_object *obj = intel_fb_obj(fb); struct drm_i915_private *i915 = to_i915(obj->base.dev); - if (obj->userptr.mm) { + if (i915_gem_object_is_userptr(obj)) { drm_dbg(&i915->drm, "attempting to use a userptr for a framebuffer, denied\n"); return -EINVAL; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c index 7c90a63c273d..43c22648b074 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c @@ -543,7 +543,9 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data, * considered to be outside of any cache domain. */ if (i915_gem_object_is_proxy(obj)) { - err = -ENXIO; + /* silently allow userptr to complete */ + if (!i915_gem_object_is_userptr(obj)) + err = -ENXIO; goto out; } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index e0c1e2817bee..436ff0d4951f 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -528,6 +528,12 @@ void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj, void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj, enum fb_op_origin origin); +static inline bool +i915_gem_object_is_userptr(struct drm_i915_gem_object *obj) +{ + return obj->userptr.mm; +} + static inline void i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj, enum fb_op_origin origin) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index 9c1293c99d88..3fd63fdd7466 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -721,7 +721,8 @@ static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = { .name = "i915_gem_object_userptr", .flags = I915_GEM_OBJECT_IS_SHRINKABLE | I915_GEM_OBJECT_NO_MMAP | - I915_GEM_OBJECT_ASYNC_CANCEL, + I915_GEM_OBJECT_ASYNC_CANCEL | + I915_GEM_OBJECT_IS_PROXY, .get_pages = i915_gem_userptr_get_pages, .put_pages = i915_gem_userptr_put_pages, .dmabuf_export = i915_gem_userptr_dmabuf_export, From patchwork Fri Oct 16 10:43:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841361 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7067EC43467 for ; Fri, 16 Oct 2020 10:45:16 +0000 (UTC) Received: from gabe.freedesktop.org 
(gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 15BB62084C for ; Fri, 16 Oct 2020 10:45:16 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 15BB62084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id B76916EB1C; Fri, 16 Oct 2020 10:44:58 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [IPv6:2a02:2308::216:3eff:fe92:dfa3]) by gabe.freedesktop.org (Postfix) with ESMTPS id 253816EB21 for ; Fri, 16 Oct 2020 10:44:50 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:43:57 +0200 Message-Id: <20201016104444.1492028-15-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 14/61] drm/i915: Reject UNSYNCHRONIZED for userptr X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" We should not allow this any more, as it will break with the new userptr implementation. It could still be made to work, but there's no point in doing so. Signed-off-by: Maarten Lankhorst --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 2 + drivers/gpu/drm/i915/gem/i915_gem_object.h | 4 ++ .../gpu/drm/i915/gem/i915_gem_object_types.h | 2 + drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 64 ++++++------------- drivers/gpu/drm/i915/i915_drv.h | 2 + 5 files changed, 31 insertions(+), 43 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 9a44d9a6b5ed..89d7e7980eae 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -1970,8 +1970,10 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb, err = 0; } +#ifdef CONFIG_MMU_NOTIFIER if (!err) flush_workqueue(eb->i915->mm.userptr_wq); +#endif err_relock: i915_gem_ww_ctx_init(&eb->ww, true); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 436ff0d4951f..a3774e80aedd 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -531,7 +531,11 @@ void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj, static inline bool i915_gem_object_is_userptr(struct drm_i915_gem_object *obj) { +#ifdef CONFIG_MMU_NOTIFIER return obj->userptr.mm; +#else + return false; +#endif } static inline void diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index e84b279bfee6..1f729e63867c 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -289,6 +289,7 @@ struct drm_i915_gem_object { unsigned long *bit_17; union { +#ifdef CONFIG_MMU_NOTIFIER struct i915_gem_userptr { uintptr_t ptr; @@ -296,6 +297,7
@@ struct drm_i915_gem_object { struct i915_mmu_object *mmu_object; struct work_struct *work; } userptr; +#endif unsigned long scratch; u64 encode; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index 3fd63fdd7466..a2b7f6db2f1a 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -15,6 +15,8 @@ #include "i915_gem_object.h" #include "i915_scatterlist.h" +#if defined(CONFIG_MMU_NOTIFIER) + struct i915_mm_struct { struct mm_struct *mm; struct drm_i915_private *i915; @@ -24,7 +26,6 @@ struct i915_mm_struct { struct rcu_work work; }; -#if defined(CONFIG_MMU_NOTIFIER) #include struct i915_mmu_notifier { @@ -217,15 +218,11 @@ i915_mmu_notifier_find(struct i915_mm_struct *mm) } static int -i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj, - unsigned flags) +i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj) { struct i915_mmu_notifier *mn; struct i915_mmu_object *mo; - if (flags & I915_USERPTR_UNSYNCHRONIZED) - return capable(CAP_SYS_ADMIN) ? 0 : -EPERM; - if (GEM_WARN_ON(!obj->userptr.mm)) return -EINVAL; @@ -258,38 +255,6 @@ i915_mmu_notifier_free(struct i915_mmu_notifier *mn, kfree(mn); } -#else - -static void -__i915_gem_userptr_set_active(struct drm_i915_gem_object *obj, bool value) -{ -} - -static void -i915_gem_userptr_release__mmu_notifier(struct drm_i915_gem_object *obj) -{ -} - -static int -i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj, - unsigned flags) -{ - if ((flags & I915_USERPTR_UNSYNCHRONIZED) == 0) - return -ENODEV; - - if (!capable(CAP_SYS_ADMIN)) - return -EPERM; - - return 0; -} - -static void -i915_mmu_notifier_free(struct i915_mmu_notifier *mn, - struct mm_struct *mm) -{ -} - -#endif static struct i915_mm_struct * __i915_mm_struct_find(struct drm_i915_private *i915, struct mm_struct *real) @@ -731,6 +696,8 @@ static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = { .release = i915_gem_userptr_release, }; +#endif + /* * Creates a new mm object that wraps some normal memory from the process * context - user memory. 
@@ -771,12 +738,12 @@ i915_gem_userptr_ioctl(struct drm_device *dev, void *data, struct drm_file *file) { - static struct lock_class_key lock_class; + static struct lock_class_key __maybe_unused lock_class; struct drm_i915_private *dev_priv = to_i915(dev); struct drm_i915_gem_userptr *args = data; - struct drm_i915_gem_object *obj; - int ret; - u32 handle; + struct drm_i915_gem_object __maybe_unused *obj; + int __maybe_unused ret; + u32 __maybe_unused handle; if (!HAS_LLC(dev_priv) && !HAS_SNOOP(dev_priv)) { /* We cannot support coherent userptr objects on hw without @@ -815,6 +782,9 @@ i915_gem_userptr_ioctl(struct drm_device *dev, if (!access_ok((char __user *)(unsigned long)args->user_ptr, args->user_size)) return -EFAULT; + if (args->flags & I915_USERPTR_UNSYNCHRONIZED) + return -ENODEV; + if (args->flags & I915_USERPTR_READ_ONLY) { /* * On almost all of the older hw, we cannot tell the GPU that @@ -824,6 +794,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev, return -ENODEV; } +#ifdef CONFIG_MMU_NOTIFIER obj = i915_gem_object_alloc(); if (obj == NULL) return -ENOMEM; @@ -845,7 +816,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev, */ ret = i915_gem_userptr_init__mm_struct(obj); if (ret == 0) - ret = i915_gem_userptr_init__mmu_notifier(obj, args->flags); + ret = i915_gem_userptr_init__mmu_notifier(obj); if (ret == 0) ret = drm_gem_handle_create(file, &obj->base, &handle); @@ -856,10 +827,14 @@ i915_gem_userptr_ioctl(struct drm_device *dev, args->handle = handle; return 0; +#else + return -ENODEV; +#endif } int i915_gem_init_userptr(struct drm_i915_private *dev_priv) { +#ifdef CONFIG_MMU_NOTIFIER spin_lock_init(&dev_priv->mm_lock); hash_init(dev_priv->mm_structs); @@ -869,11 +844,14 @@ int i915_gem_init_userptr(struct drm_i915_private *dev_priv) 0); if (!dev_priv->mm.userptr_wq) return -ENOMEM; +#endif return 0; } void i915_gem_cleanup_userptr(struct drm_i915_private *dev_priv) { +#ifdef CONFIG_MMU_NOTIFIER destroy_workqueue(dev_priv->mm.userptr_wq); +#endif } diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 7bd7b3e82c45..23db5a5f5fcb 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -589,12 +589,14 @@ struct i915_gem_mm { struct notifier_block vmap_notifier; struct shrinker shrinker; +#ifdef CONFIG_MMU_NOTIFIER /** * Workqueue to fault in userptr pages, flushed by the execbuf * when required but otherwise left to userspace to try again * on EAGAIN. 
 */ struct workqueue_struct *userptr_wq; +#endif /* shrinker accounting, also useful for userland debugging */ u64 shrink_memory; From patchwork Fri Oct 16 10:43:58 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841491 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 30293C35269 for ; Fri, 16 Oct 2020 10:45:41 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id CBCB52084C for ; Fri, 16 Oct 2020 10:45:40 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org CBCB52084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 3F9B26EC38; Fri, 16 Oct 2020 10:45:23 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [IPv6:2a02:2308::216:3eff:fe92:dfa3]) by gabe.freedesktop.org (Postfix) with ESMTPS id C50C96EAC7 for ; Fri, 16 Oct 2020 10:44:50 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:43:58 +0200 Message-Id: <20201016104444.1492028-16-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 15/61] drm/i915: Fix userptr so we do not have to worry about obj->mm.lock, v4. X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Instead of doing what we do currently, which will never work with PROVE_LOCKING, do the same as AMD does, following an approach similar to the relocation slowpath. When all locks are dropped, we acquire the pages for pinning. When the locks are taken, we transfer those pages to the bo in .get_pages(). As a final check before installing the fences, we ensure that the mmu notifier was not called; if it was, we return -EAGAIN to userspace to signal it has to start over. Changes since v1: - Unbinding is done in submit_init only. submit_begin() removed. - MMU_NOTFIER -> MMU_NOTIFIER Changes since v2: - Make i915->mm.notifier a spinlock. Changes since v3: - Add WARN_ON if there are any page references left, should have been 0. - Return 0 on success in submit_init(), bug from spinlock conversion. - Release pvec outside of notifier_lock (Thomas).
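Since the new flow is easiest to follow as code, here is a hedged sketch of the mmu_interval_notifier begin/retry protocol that the submit_init()/submit_done() pair implements. The function and lock names below are illustrative; the mmu_interval_read_begin()/mmu_interval_read_retry(), pin_user_pages_fast() and unpin_user_pages() calls are the real kernel interfaces:

#include <linux/mm.h>
#include <linux/mmu_notifier.h>

/*
 * Illustrative: pin a userptr range without holding any driver locks,
 * then validate the pin against concurrent invalidations before
 * publishing it to the object.
 */
static int sketch_submit_init(struct mmu_interval_notifier *mni,
			      unsigned long uptr, int npages,
			      struct page **pvec, spinlock_t *notifier_lock)
{
	unsigned long seq;
	int pinned = 0, ret;

	/* Sample the sequence count before touching user memory. */
	seq = mmu_interval_read_begin(mni);

	/* Faultable, sleepable pinning, done outside all driver locks. */
	while (pinned < npages) {
		ret = pin_user_pages_fast(uptr + pinned * PAGE_SIZE,
					  npages - pinned, FOLL_WRITE,
					  &pvec[pinned]);
		if (ret < 0)
			goto err_unpin;

		pinned += ret;
	}

	/*
	 * The invalidate callback takes the same lock, so checking the
	 * sequence under it closes the race with concurrent unmaps.
	 */
	spin_lock(notifier_lock);
	if (mmu_interval_read_retry(mni, seq)) {
		spin_unlock(notifier_lock);
		ret = -EAGAIN; /* execbuf drops everything and restarts */
		goto err_unpin;
	}
	/* ...publish pvec and seq to the object while still locked... */
	spin_unlock(notifier_lock);

	return 0;

err_unpin:
	unpin_user_pages(pvec, pinned);
	return ret;
}

submit_done() then repeats the mmu_interval_read_retry() check as the very last step before the fences are installed, which is the -EAGAIN path the commit message describes.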
Signed-off-by: Maarten Lankhorst --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 94 ++- drivers/gpu/drm/i915/gem/i915_gem_object.h | 35 +- .../gpu/drm/i915/gem/i915_gem_object_types.h | 10 +- drivers/gpu/drm/i915/gem/i915_gem_pages.c | 2 +- drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 765 +++++------------- drivers/gpu/drm/i915/i915_drv.h | 9 +- drivers/gpu/drm/i915/i915_gem.c | 5 +- 7 files changed, 334 insertions(+), 586 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 89d7e7980eae..c9db199c4d81 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -52,14 +52,16 @@ enum { /* __EXEC_OBJECT_NO_RESERVE is BIT(31), defined in i915_vma.h */ #define __EXEC_OBJECT_HAS_PIN BIT(30) #define __EXEC_OBJECT_HAS_FENCE BIT(29) -#define __EXEC_OBJECT_NEEDS_MAP BIT(28) -#define __EXEC_OBJECT_NEEDS_BIAS BIT(27) -#define __EXEC_OBJECT_INTERNAL_FLAGS (~0u << 27) /* all of the above + */ +#define __EXEC_OBJECT_USERPTR_INIT BIT(28) +#define __EXEC_OBJECT_NEEDS_MAP BIT(27) +#define __EXEC_OBJECT_NEEDS_BIAS BIT(26) +#define __EXEC_OBJECT_INTERNAL_FLAGS (~0u << 26) /* all of the above + */ #define __EXEC_OBJECT_RESERVED (__EXEC_OBJECT_HAS_PIN | __EXEC_OBJECT_HAS_FENCE) #define __EXEC_HAS_RELOC BIT(31) #define __EXEC_ENGINE_PINNED BIT(30) -#define __EXEC_INTERNAL_FLAGS (~0u << 30) +#define __EXEC_USERPTR_USED BIT(29) +#define __EXEC_INTERNAL_FLAGS (~0u << 29) #define UPDATE PIN_OFFSET_FIXED #define BATCH_OFFSET_BIAS (256*1024) @@ -865,6 +867,19 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb) } eb_add_vma(eb, i, batch, vma); + + if (i915_gem_object_is_userptr(vma->obj)) { + err = i915_gem_object_userptr_submit_init(vma->obj); + if (err) { + if (i + 1 < eb->buffer_count) + eb->vma[i + 1].vma = NULL; + + return err; + } + + eb->vma[i].flags |= __EXEC_OBJECT_USERPTR_INIT; + eb->args->flags |= __EXEC_USERPTR_USED; + } } if (unlikely(eb->batch->flags & EXEC_OBJECT_WRITE)) { @@ -966,7 +981,7 @@ eb_get_vma(const struct i915_execbuffer *eb, unsigned long handle) } } -static void eb_release_vmas(struct i915_execbuffer *eb, bool final) +static void eb_release_vmas(struct i915_execbuffer *eb, bool final, bool release_userptr) { const unsigned int count = eb->buffer_count; unsigned int i; @@ -980,6 +995,11 @@ static void eb_release_vmas(struct i915_execbuffer *eb, bool final) eb_unreserve_vma(ev); + if (release_userptr && ev->flags & __EXEC_OBJECT_USERPTR_INIT) { + ev->flags &= ~__EXEC_OBJECT_USERPTR_INIT; + i915_gem_object_userptr_submit_fini(vma->obj); + } + if (final) i915_vma_put(vma); } @@ -1915,6 +1935,31 @@ static int eb_prefault_relocations(const struct i915_execbuffer *eb) return 0; } +static int eb_reinit_userptr(struct i915_execbuffer *eb) +{ + const unsigned int count = eb->buffer_count; + unsigned int i; + int ret; + + if (likely(!(eb->args->flags & __EXEC_USERPTR_USED))) + return 0; + + for (i = 0; i < count; i++) { + struct eb_vma *ev = &eb->vma[i]; + + if (!i915_gem_object_is_userptr(ev->vma->obj)) + continue; + + ret = i915_gem_object_userptr_submit_init(ev->vma->obj); + if (ret) + return ret; + + ev->flags |= __EXEC_OBJECT_USERPTR_INIT; + } + + return 0; +} + static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb, struct i915_request *rq) { @@ -1929,7 +1974,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb, } /* We may process another execbuffer during the unlock... 
*/ - eb_release_vmas(eb, false); + eb_release_vmas(eb, false, true); i915_gem_ww_ctx_fini(&eb->ww); if (rq) { @@ -1970,10 +2015,8 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb, err = 0; } -#ifdef CONFIG_MMU_NOTIFIER if (!err) - flush_workqueue(eb->i915->mm.userptr_wq); -#endif + err = eb_reinit_userptr(eb); err_relock: i915_gem_ww_ctx_init(&eb->ww, true); @@ -2035,7 +2078,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb, err: if (err == -EDEADLK) { - eb_release_vmas(eb, false); + eb_release_vmas(eb, false, false); err = i915_gem_ww_ctx_backoff(&eb->ww); if (!err) goto repeat_validate; @@ -2132,7 +2175,7 @@ static int eb_relocate_parse(struct i915_execbuffer *eb) err: if (err == -EDEADLK) { - eb_release_vmas(eb, false); + eb_release_vmas(eb, false, false); err = i915_gem_ww_ctx_backoff(&eb->ww); if (!err) goto retry; @@ -2207,6 +2250,30 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb) flags | __EXEC_OBJECT_NO_RESERVE); } +#ifdef CONFIG_MMU_NOTIFIER + if (!err && (eb->args->flags & __EXEC_USERPTR_USED)) { + spin_lock(&eb->i915->mm.notifier_lock); + + /* + * count is always at least 1, otherwise __EXEC_USERPTR_USED + * could not have been set + */ + for (i = count - 1; i; i--) { + struct eb_vma *ev = &eb->vma[i]; + struct drm_i915_gem_object *obj = ev->vma->obj; + + if (!i915_gem_object_is_userptr(obj)) + continue; + + err = i915_gem_object_userptr_submit_done(obj); + if (err) + break; + } + + spin_unlock(&eb->i915->mm.notifier_lock); + } +#endif + if (unlikely(err)) goto err_skip; @@ -3347,7 +3414,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, err = eb_lookup_vmas(&eb); if (err) { - eb_release_vmas(&eb, true); + eb_release_vmas(&eb, true, true); goto err_engine; } @@ -3419,6 +3486,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, trace_i915_request_queue(eb.request, eb.batch_flags); err = eb_submit(&eb, batch); + err_request: i915_request_get(eb.request); eb_request_add(&eb); @@ -3439,7 +3507,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, i915_request_put(eb.request); err_vma: - eb_release_vmas(&eb, true); + eb_release_vmas(&eb, true, true); if (eb.trampoline) i915_vma_unpin(eb.trampoline); WARN_ON(err == -EDEADLK); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index a3774e80aedd..abcce4d285b5 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -33,6 +33,7 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *i915, const void *data, resource_size_t size); extern const struct drm_i915_gem_object_ops i915_gem_shmem_ops; + void __i915_gem_object_release_shmem(struct drm_i915_gem_object *obj, struct sg_table *pages, bool needs_clflush); @@ -222,12 +223,6 @@ i915_gem_object_never_mmap(const struct drm_i915_gem_object *obj) return i915_gem_object_type_has(obj, I915_GEM_OBJECT_NO_MMAP); } -static inline bool -i915_gem_object_needs_async_cancel(const struct drm_i915_gem_object *obj) -{ - return i915_gem_object_type_has(obj, I915_GEM_OBJECT_ASYNC_CANCEL); -} - static inline bool i915_gem_object_is_framebuffer(const struct drm_i915_gem_object *obj) { @@ -528,16 +523,6 @@ void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj, void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj, enum fb_op_origin origin); -static inline bool -i915_gem_object_is_userptr(struct drm_i915_gem_object *obj) -{ -#ifdef CONFIG_MMU_NOTIFIER - return obj->userptr.mm; -#else - return false; 
-#endif -} - static inline void i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj, enum fb_op_origin origin) @@ -554,4 +539,22 @@ i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj, __i915_gem_object_invalidate_frontbuffer(obj, origin); } +#ifdef CONFIG_MMU_NOTIFIER +static inline bool +i915_gem_object_is_userptr(struct drm_i915_gem_object *obj) +{ + return obj->userptr.notifier.mm; +} + +int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj); +int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj); +void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj); +#else +static inline bool i915_gem_object_is_userptr(struct drm_i915_gem_object *obj) { return false; } + +static inline int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; } +static inline int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); return -ENODEV; } +static inline void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj) { GEM_BUG_ON(1); } +#endif + #endif diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index 1f729e63867c..0aa391f5d73c 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -7,6 +7,8 @@ #ifndef __I915_GEM_OBJECT_TYPES_H__ #define __I915_GEM_OBJECT_TYPES_H__ +#include + #include #include @@ -34,7 +36,6 @@ struct drm_i915_gem_object_ops { #define I915_GEM_OBJECT_IS_SHRINKABLE BIT(2) #define I915_GEM_OBJECT_IS_PROXY BIT(3) #define I915_GEM_OBJECT_NO_MMAP BIT(4) -#define I915_GEM_OBJECT_ASYNC_CANCEL BIT(5) /* Interface between the GEM object and its backing storage. * get_pages() is called once prior to the use of the associated set @@ -292,10 +293,11 @@ struct drm_i915_gem_object { #ifdef CONFIG_MMU_NOTIFIER struct i915_gem_userptr { uintptr_t ptr; + unsigned long notifier_seq; - struct i915_mm_struct *mm; - struct i915_mmu_object *mmu_object; - struct work_struct *work; + struct mmu_interval_notifier notifier; + struct page **pvec; + int page_ref; } userptr; #endif diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c index 1c646d5f802b..b81f253f5dc9 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c @@ -225,7 +225,7 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj) * get_pages backends we should be better able to handle the * cancellation of the async task in a more uniform manner. */ - if (!pages && !i915_gem_object_needs_async_cancel(obj)) + if (!pages) pages = ERR_PTR(-EINVAL); if (!IS_ERR(pages)) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index a2b7f6db2f1a..58a426bec2e5 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -2,10 +2,39 @@ * SPDX-License-Identifier: MIT * * Copyright © 2012-2014 Intel Corporation + * + * Based on amdgpu_mn, which bears the following notice: + * + * Copyright 2014 Advanced Micro Devices, Inc. + * All Rights Reserved. 
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the + * "Software"), to deal in the Software without restriction, including + * without limitation the rights to use, copy, modify, merge, publish, + * distribute, sub license, and/or sell copies of the Software, and to + * permit persons to whom the Software is furnished to do so, subject to + * the following conditions: + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE + * USE OR OTHER DEALINGS IN THE SOFTWARE. + * + * The above copyright notice and this permission notice (including the + * next paragraph) shall be included in all copies or substantial portions + * of the Software. + * + */ +/* + * Authors: + * Christian König */ #include -#include #include #include #include @@ -15,374 +44,109 @@ #include "i915_gem_object.h" #include "i915_scatterlist.h" -#if defined(CONFIG_MMU_NOTIFIER) - -struct i915_mm_struct { - struct mm_struct *mm; - struct drm_i915_private *i915; - struct i915_mmu_notifier *mn; - struct hlist_node node; - struct kref kref; - struct rcu_work work; -}; - -#include - -struct i915_mmu_notifier { - spinlock_t lock; - struct hlist_node node; - struct mmu_notifier mn; - struct rb_root_cached objects; - struct i915_mm_struct *mm; -}; - -struct i915_mmu_object { - struct i915_mmu_notifier *mn; - struct drm_i915_gem_object *obj; - struct interval_tree_node it; -}; - -static void add_object(struct i915_mmu_object *mo) -{ - GEM_BUG_ON(!RB_EMPTY_NODE(&mo->it.rb)); - interval_tree_insert(&mo->it, &mo->mn->objects); -} - -static void del_object(struct i915_mmu_object *mo) -{ - if (RB_EMPTY_NODE(&mo->it.rb)) - return; - - interval_tree_remove(&mo->it, &mo->mn->objects); - RB_CLEAR_NODE(&mo->it.rb); -} - -static void -__i915_gem_userptr_set_active(struct drm_i915_gem_object *obj, bool value) -{ - struct i915_mmu_object *mo = obj->userptr.mmu_object; - - /* - * During mm_invalidate_range we need to cancel any userptr that - * overlaps the range being invalidated. Doing so requires the - * struct_mutex, and that risks recursion. In order to cause - * recursion, the user must alias the userptr address space with - * a GTT mmapping (possible with a MAP_FIXED) - then when we have - * to invalidate that mmaping, mm_invalidate_range is called with - * the userptr address *and* the struct_mutex held. To prevent that - * we set a flag under the i915_mmu_notifier spinlock to indicate - * whether this object is valid. - */ - if (!mo) - return; - - spin_lock(&mo->mn->lock); - if (value) - add_object(mo); - else - del_object(mo); - spin_unlock(&mo->mn->lock); -} +#ifdef CONFIG_MMU_NOTIFIER -static int -userptr_mn_invalidate_range_start(struct mmu_notifier *_mn, - const struct mmu_notifier_range *range) +/** + * i915_gem_userptr_invalidate - callback to notify about mm change + * + * @mni: the range (mm) is about to update + * @range: details on the invalidation + * @cur_seq: Value to pass to mmu_interval_set_seq() + * + * Block for operations on BOs to finish and mark pages as accessed and + * potentially dirty. 
+ */ +static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni, + const struct mmu_notifier_range *range, + unsigned long cur_seq) { - struct i915_mmu_notifier *mn = - container_of(_mn, struct i915_mmu_notifier, mn); - struct interval_tree_node *it; - unsigned long end; - int ret = 0; - - if (RB_EMPTY_ROOT(&mn->objects.rb_root)) - return 0; - - /* interval ranges are inclusive, but invalidate range is exclusive */ - end = range->end - 1; - - spin_lock(&mn->lock); - it = interval_tree_iter_first(&mn->objects, range->start, end); - while (it) { - struct drm_i915_gem_object *obj; - - if (!mmu_notifier_range_blockable(range)) { - ret = -EAGAIN; - break; - } + struct drm_i915_gem_object *obj = container_of(mni, struct drm_i915_gem_object, userptr.notifier); + struct drm_i915_private *i915 = to_i915(obj->base.dev); + long r; - /* - * The mmu_object is released late when destroying the - * GEM object so it is entirely possible to gain a - * reference on an object in the process of being freed - * since our serialisation is via the spinlock and not - * the struct_mutex - and consequently use it after it - * is freed and then double free it. To prevent that - * use-after-free we only acquire a reference on the - * object if it is not in the process of being destroyed. - */ - obj = container_of(it, struct i915_mmu_object, it)->obj; - if (!kref_get_unless_zero(&obj->base.refcount)) { - it = interval_tree_iter_next(it, range->start, end); - continue; - } - spin_unlock(&mn->lock); + if (!mmu_notifier_range_blockable(range)) + return false; - ret = i915_gem_object_unbind(obj, - I915_GEM_OBJECT_UNBIND_ACTIVE | - I915_GEM_OBJECT_UNBIND_BARRIER); - if (ret == 0) - ret = __i915_gem_object_put_pages(obj); - i915_gem_object_put(obj); - if (ret) - return ret; + spin_lock(&i915->mm.notifier_lock); - spin_lock(&mn->lock); + mmu_interval_set_seq(mni, cur_seq); - /* - * As we do not (yet) protect the mmu from concurrent insertion - * over this range, there is no guarantee that this search will - * terminate given a pathologic workload. 
- */ - it = interval_tree_iter_first(&mn->objects, range->start, end); - } - spin_unlock(&mn->lock); + spin_unlock(&i915->mm.notifier_lock); - return ret; + /* we will unbind on next submission, still have userptr pins */ + r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false, + MAX_SCHEDULE_TIMEOUT); + if (r <= 0) + drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r); + return true; } -static const struct mmu_notifier_ops i915_gem_userptr_notifier = { - .invalidate_range_start = userptr_mn_invalidate_range_start, +static const struct mmu_interval_notifier_ops i915_gem_userptr_notifier_ops = { + .invalidate = i915_gem_userptr_invalidate, }; -static struct i915_mmu_notifier * -i915_mmu_notifier_create(struct i915_mm_struct *mm) -{ - struct i915_mmu_notifier *mn; - - mn = kmalloc(sizeof(*mn), GFP_KERNEL); - if (mn == NULL) - return ERR_PTR(-ENOMEM); - - spin_lock_init(&mn->lock); - mn->mn.ops = &i915_gem_userptr_notifier; - mn->objects = RB_ROOT_CACHED; - mn->mm = mm; - - return mn; -} - -static void -i915_gem_userptr_release__mmu_notifier(struct drm_i915_gem_object *obj) -{ - struct i915_mmu_object *mo; - - mo = fetch_and_zero(&obj->userptr.mmu_object); - if (!mo) - return; - - spin_lock(&mo->mn->lock); - del_object(mo); - spin_unlock(&mo->mn->lock); - kfree(mo); -} - -static struct i915_mmu_notifier * -i915_mmu_notifier_find(struct i915_mm_struct *mm) -{ - struct i915_mmu_notifier *mn, *old; - int err; - - mn = READ_ONCE(mm->mn); - if (likely(mn)) - return mn; - - mn = i915_mmu_notifier_create(mm); - if (IS_ERR(mn)) - return mn; - - err = mmu_notifier_register(&mn->mn, mm->mm); - if (err) { - kfree(mn); - return ERR_PTR(err); - } - - old = cmpxchg(&mm->mn, NULL, mn); - if (old) { - mmu_notifier_unregister(&mn->mn, mm->mm); - kfree(mn); - mn = old; - } - - return mn; -} - static int i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj) { - struct i915_mmu_notifier *mn; - struct i915_mmu_object *mo; - - if (GEM_WARN_ON(!obj->userptr.mm)) - return -EINVAL; - - mn = i915_mmu_notifier_find(obj->userptr.mm); - if (IS_ERR(mn)) - return PTR_ERR(mn); - - mo = kzalloc(sizeof(*mo), GFP_KERNEL); - if (!mo) - return -ENOMEM; - - mo->mn = mn; - mo->obj = obj; - mo->it.start = obj->userptr.ptr; - mo->it.last = obj->userptr.ptr + obj->base.size - 1; - RB_CLEAR_NODE(&mo->it.rb); - - obj->userptr.mmu_object = mo; - return 0; + return mmu_interval_notifier_insert(&obj->userptr.notifier, current->mm, + obj->userptr.ptr, obj->base.size, + &i915_gem_userptr_notifier_ops); } -static void -i915_mmu_notifier_free(struct i915_mmu_notifier *mn, - struct mm_struct *mm) -{ - if (mn == NULL) - return; - - mmu_notifier_unregister(&mn->mn, mm); - kfree(mn); -} - - -static struct i915_mm_struct * -__i915_mm_struct_find(struct drm_i915_private *i915, struct mm_struct *real) -{ - struct i915_mm_struct *it, *mm = NULL; - - rcu_read_lock(); - hash_for_each_possible_rcu(i915->mm_structs, - it, node, - (unsigned long)real) - if (it->mm == real && kref_get_unless_zero(&it->kref)) { - mm = it; - break; - } - rcu_read_unlock(); - - return mm; -} - -static int -i915_gem_userptr_init__mm_struct(struct drm_i915_gem_object *obj) +static void i915_gem_object_userptr_drop_ref(struct drm_i915_gem_object *obj) { struct drm_i915_private *i915 = to_i915(obj->base.dev); - struct i915_mm_struct *mm, *new; - int ret = 0; - - /* During release of the GEM object we hold the struct_mutex. This - * precludes us from calling mmput() at that time as that may be - * the last reference and so call exit_mmap(). 
exit_mmap() will - * attempt to reap the vma, and if we were holding a GTT mmap - * would then call drm_gem_vm_close() and attempt to reacquire - * the struct mutex. So in order to avoid that recursion, we have - * to defer releasing the mm reference until after we drop the - * struct_mutex, i.e. we need to schedule a worker to do the clean - * up. - */ - mm = __i915_mm_struct_find(i915, current->mm); - if (mm) - goto out; + struct page **pvec = NULL; - new = kmalloc(sizeof(*mm), GFP_KERNEL); - if (!new) - return -ENOMEM; - - kref_init(&new->kref); - new->i915 = to_i915(obj->base.dev); - new->mm = current->mm; - new->mn = NULL; - - spin_lock(&i915->mm_lock); - mm = __i915_mm_struct_find(i915, current->mm); - if (!mm) { - hash_add_rcu(i915->mm_structs, - &new->node, - (unsigned long)new->mm); - mmgrab(current->mm); - mm = new; + spin_lock(&i915->mm.notifier_lock); + if (!--obj->userptr.page_ref) { + pvec = obj->userptr.pvec; + obj->userptr.pvec = NULL; } - spin_unlock(&i915->mm_lock); - if (mm != new) - kfree(new); - -out: - obj->userptr.mm = mm; - return ret; -} - -static void -__i915_mm_struct_free__worker(struct work_struct *work) -{ - struct i915_mm_struct *mm = container_of(work, typeof(*mm), work.work); - - i915_mmu_notifier_free(mm->mn, mm->mm); - mmdrop(mm->mm); - kfree(mm); -} - -static void -__i915_mm_struct_free(struct kref *kref) -{ - struct i915_mm_struct *mm = container_of(kref, typeof(*mm), kref); - - spin_lock(&mm->i915->mm_lock); - hash_del_rcu(&mm->node); - spin_unlock(&mm->i915->mm_lock); - - INIT_RCU_WORK(&mm->work, __i915_mm_struct_free__worker); - queue_rcu_work(system_wq, &mm->work); -} + GEM_BUG_ON(obj->userptr.page_ref < 0); + spin_unlock(&i915->mm.notifier_lock); -static void -i915_gem_userptr_release__mm_struct(struct drm_i915_gem_object *obj) -{ - if (obj->userptr.mm == NULL) - return; + if (pvec) { + const unsigned long num_pages = obj->base.size >> PAGE_SHIFT; - kref_put(&obj->userptr.mm->kref, __i915_mm_struct_free); - obj->userptr.mm = NULL; + unpin_user_pages(pvec, num_pages); + kfree(pvec); + } } -struct get_pages_work { - struct work_struct work; - struct drm_i915_gem_object *obj; - struct task_struct *task; -}; - -static struct sg_table * -__i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj, - struct page **pvec, unsigned long num_pages) +static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj) { + struct drm_i915_private *i915 = to_i915(obj->base.dev); + const unsigned long num_pages = obj->base.size >> PAGE_SHIFT; unsigned int max_segment = i915_sg_segment_size(); struct sg_table *st; unsigned int sg_page_sizes; + struct page **pvec; int ret; st = kmalloc(sizeof(*st), GFP_KERNEL); if (!st) - return ERR_PTR(-ENOMEM); + return -ENOMEM; + + spin_lock(&i915->mm.notifier_lock); + if (GEM_WARN_ON(!obj->userptr.page_ref)) { + spin_unlock(&i915->mm.notifier_lock); + ret = -EFAULT; + goto err_free; + } + + obj->userptr.page_ref++; + pvec = obj->userptr.pvec; + spin_unlock(&i915->mm.notifier_lock); alloc_table: ret = __sg_alloc_table_from_pages(st, pvec, num_pages, 0, num_pages << PAGE_SHIFT, max_segment, GFP_KERNEL); - if (ret) { - kfree(st); - return ERR_PTR(ret); - } + if (ret) + goto err; ret = i915_gem_gtt_prepare_pages(obj, st); if (ret) { @@ -393,203 +157,20 @@ __i915_gem_userptr_alloc_pages(struct drm_i915_gem_object *obj, goto alloc_table; } - kfree(st); - return ERR_PTR(ret); + goto err; } sg_page_sizes = i915_sg_page_sizes(st->sgl); __i915_gem_object_set_pages(obj, st, sg_page_sizes); - return st; -} - -static void 
-__i915_gem_userptr_get_pages_worker(struct work_struct *_work) -{ - struct get_pages_work *work = container_of(_work, typeof(*work), work); - struct drm_i915_gem_object *obj = work->obj; - const unsigned long npages = obj->base.size >> PAGE_SHIFT; - unsigned long pinned; - struct page **pvec; - int ret; - - ret = -ENOMEM; - pinned = 0; - - pvec = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL); - if (pvec != NULL) { - struct mm_struct *mm = obj->userptr.mm->mm; - unsigned int flags = 0; - int locked = 0; - - if (!i915_gem_object_is_readonly(obj)) - flags |= FOLL_WRITE; - - ret = -EFAULT; - if (mmget_not_zero(mm)) { - while (pinned < npages) { - if (!locked) { - mmap_read_lock(mm); - locked = 1; - } - ret = pin_user_pages_remote - (mm, - obj->userptr.ptr + pinned * PAGE_SIZE, - npages - pinned, - flags, - pvec + pinned, NULL, &locked); - if (ret < 0) - break; - - pinned += ret; - } - if (locked) - mmap_read_unlock(mm); - mmput(mm); - } - } - - mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES); - if (obj->userptr.work == &work->work) { - struct sg_table *pages = ERR_PTR(ret); - - if (pinned == npages) { - pages = __i915_gem_userptr_alloc_pages(obj, pvec, - npages); - if (!IS_ERR(pages)) { - pinned = 0; - pages = NULL; - } - } - - obj->userptr.work = ERR_CAST(pages); - if (IS_ERR(pages)) - __i915_gem_userptr_set_active(obj, false); - } - mutex_unlock(&obj->mm.lock); - - unpin_user_pages(pvec, pinned); - kvfree(pvec); - - i915_gem_object_put(obj); - put_task_struct(work->task); - kfree(work); -} - -static struct sg_table * -__i915_gem_userptr_get_pages_schedule(struct drm_i915_gem_object *obj) -{ - struct get_pages_work *work; - - /* Spawn a worker so that we can acquire the - * user pages without holding our mutex. Access - * to the user pages requires mmap_lock, and we have - * a strict lock ordering of mmap_lock, struct_mutex - - * we already hold struct_mutex here and so cannot - * call gup without encountering a lock inversion. - * - * Userspace will keep on repeating the operation - * (thanks to EAGAIN) until either we hit the fast - * path or the worker completes. If the worker is - * cancelled or superseded, the task is still run - * but the results ignored. (This leads to - * complications that we may have a stray object - * refcount that we need to be wary of when - * checking for existing objects during creation.) - * If the worker encounters an error, it reports - * that error back to this function through - * obj->userptr.work = ERR_PTR. - */ - work = kmalloc(sizeof(*work), GFP_KERNEL); - if (work == NULL) - return ERR_PTR(-ENOMEM); - - obj->userptr.work = &work->work; - - work->obj = i915_gem_object_get(obj); - - work->task = current; - get_task_struct(work->task); - - INIT_WORK(&work->work, __i915_gem_userptr_get_pages_worker); - queue_work(to_i915(obj->base.dev)->mm.userptr_wq, &work->work); - - return ERR_PTR(-EAGAIN); -} - -static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj) -{ - const unsigned long num_pages = obj->base.size >> PAGE_SHIFT; - struct mm_struct *mm = obj->userptr.mm->mm; - struct page **pvec; - struct sg_table *pages; - bool active; - int pinned; - unsigned int gup_flags = 0; - - /* If userspace should engineer that these pages are replaced in - * the vma between us binding this page into the GTT and completion - * of rendering... Their loss. If they change the mapping of their - * pages they need to create a new bo to point to the new vma. 
- * - * However, that still leaves open the possibility of the vma - * being copied upon fork. Which falls under the same userspace - * synchronisation issue as a regular bo, except that this time - * the process may not be expecting that a particular piece of - * memory is tied to the GPU. - * - * Fortunately, we can hook into the mmu_notifier in order to - * discard the page references prior to anything nasty happening - * to the vma (discard or cloning) which should prevent the more - * egregious cases from causing harm. - */ - - if (obj->userptr.work) { - /* active flag should still be held for the pending work */ - if (IS_ERR(obj->userptr.work)) - return PTR_ERR(obj->userptr.work); - else - return -EAGAIN; - } - - pvec = NULL; - pinned = 0; - - if (mm == current->mm) { - pvec = kvmalloc_array(num_pages, sizeof(struct page *), - GFP_KERNEL | - __GFP_NORETRY | - __GFP_NOWARN); - if (pvec) { - /* defer to worker if malloc fails */ - if (!i915_gem_object_is_readonly(obj)) - gup_flags |= FOLL_WRITE; - pinned = pin_user_pages_fast_only(obj->userptr.ptr, - num_pages, gup_flags, - pvec); - } - } - - active = false; - if (pinned < 0) { - pages = ERR_PTR(pinned); - pinned = 0; - } else if (pinned < num_pages) { - pages = __i915_gem_userptr_get_pages_schedule(obj); - active = pages == ERR_PTR(-EAGAIN); - } else { - pages = __i915_gem_userptr_alloc_pages(obj, pvec, num_pages); - active = !IS_ERR(pages); - } - if (active) - __i915_gem_userptr_set_active(obj, true); - - if (IS_ERR(pages)) - unpin_user_pages(pvec, pinned); - kvfree(pvec); + return 0; - return PTR_ERR_OR_ZERO(pages); +err: + i915_gem_object_userptr_drop_ref(obj); +err_free: + kfree(st); + return ret; } static void @@ -599,9 +180,6 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj, struct sgt_iter sgt_iter; struct page *page; - /* Cancel any inflight work and force them to restart their gup */ - obj->userptr.work = NULL; - __i915_gem_userptr_set_active(obj, false); if (!pages) return; @@ -641,19 +219,135 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj, } mark_page_accessed(page); - unpin_user_page(page); } obj->mm.dirty = false; sg_free_table(pages); kfree(pages); + + i915_gem_object_userptr_drop_ref(obj); +} + +static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool get_pages) +{ + struct sg_table *pages; + int err; + + err = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE); + if (err) + return err; + + if (GEM_WARN_ON(i915_gem_object_has_pinned_pages(obj))) + return -EBUSY; + + mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES); + + pages = __i915_gem_object_unset_pages(obj); + if (!IS_ERR_OR_NULL(pages)) + i915_gem_userptr_put_pages(obj, pages); + + if (get_pages) + err = ____i915_gem_object_get_pages(obj); + mutex_unlock(&obj->mm.lock); + + return err; +} + +int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj) +{ + struct drm_i915_private *i915 = to_i915(obj->base.dev); + const unsigned long num_pages = obj->base.size >> PAGE_SHIFT; + struct page **pvec; + unsigned int gup_flags = 0; + unsigned long notifier_seq; + int pinned, ret; + + if (obj->userptr.notifier.mm != current->mm) + return -EFAULT; + + ret = i915_gem_object_lock_interruptible(obj, NULL); + if (ret) + return ret; + + /* Make sure userptr is unbound for next attempt, so we don't use stale pages. 
*/ + ret = i915_gem_object_userptr_unbind(obj, false); + i915_gem_object_unlock(obj); + if (ret) + return ret; + + notifier_seq = mmu_interval_read_begin(&obj->userptr.notifier); + + pvec = kvmalloc_array(num_pages, sizeof(struct page *), GFP_KERNEL); + if (!pvec) + return -ENOMEM; + + if (!i915_gem_object_is_readonly(obj)) + gup_flags |= FOLL_WRITE; + + pinned = ret = 0; + while (pinned < num_pages) { + ret = pin_user_pages_fast(obj->userptr.ptr + pinned * PAGE_SIZE, + num_pages - pinned, gup_flags, + &pvec[pinned]); + if (ret < 0) + goto out; + + pinned += ret; + } + ret = 0; + + spin_lock(&i915->mm.notifier_lock); + + if (mmu_interval_read_retry(&obj->userptr.notifier, + !obj->userptr.page_ref ? notifier_seq : + obj->userptr.notifier_seq)) { + ret = -EAGAIN; + goto out_unlock; + } + + if (!obj->userptr.page_ref++) { + obj->userptr.pvec = pvec; + obj->userptr.notifier_seq = notifier_seq; + + pvec = NULL; + } + +out_unlock: + spin_unlock(&i915->mm.notifier_lock); + +out: + if (pvec) { + unpin_user_pages(pvec, pinned); + kvfree(pvec); + } + + return ret; +} + +int i915_gem_object_userptr_submit_done(struct drm_i915_gem_object *obj) +{ + if (mmu_interval_read_retry(&obj->userptr.notifier, + obj->userptr.notifier_seq)) { + /* We collided with the mmu notifier, need to retry */ + + return -EAGAIN; + } + + return 0; +} + +void i915_gem_object_userptr_submit_fini(struct drm_i915_gem_object *obj) +{ + i915_gem_object_userptr_drop_ref(obj); } static void i915_gem_userptr_release(struct drm_i915_gem_object *obj) { - i915_gem_userptr_release__mmu_notifier(obj); - i915_gem_userptr_release__mm_struct(obj); + GEM_WARN_ON(obj->userptr.page_ref); + + mmu_interval_notifier_remove(&obj->userptr.notifier); + obj->userptr.notifier.mm = NULL; } static int @@ -686,7 +380,6 @@ static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = { .name = "i915_gem_object_userptr", .flags = I915_GEM_OBJECT_IS_SHRINKABLE | I915_GEM_OBJECT_NO_MMAP | - I915_GEM_OBJECT_ASYNC_CANCEL | I915_GEM_OBJECT_IS_PROXY, .get_pages = i915_gem_userptr_get_pages, .put_pages = i915_gem_userptr_put_pages, @@ -807,6 +500,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev, i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC); obj->userptr.ptr = args->user_ptr; + obj->userptr.notifier_seq = ULONG_MAX; if (args->flags & I915_USERPTR_READ_ONLY) i915_gem_object_set_readonly(obj); @@ -814,9 +508,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev, * at binding. This means that we need to hook into the mmu_notifier * in order to detect if the mmu is destroyed. 
*/ - ret = i915_gem_userptr_init__mm_struct(obj); - if (ret == 0) - ret = i915_gem_userptr_init__mmu_notifier(obj); + ret = i915_gem_userptr_init__mmu_notifier(obj); if (ret == 0) ret = drm_gem_handle_create(file, &obj->base, &handle); @@ -835,15 +527,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev, int i915_gem_init_userptr(struct drm_i915_private *dev_priv) { #ifdef CONFIG_MMU_NOTIFIER - spin_lock_init(&dev_priv->mm_lock); - hash_init(dev_priv->mm_structs); - - dev_priv->mm.userptr_wq = - alloc_workqueue("i915-userptr-acquire", - WQ_HIGHPRI | WQ_UNBOUND, - 0); - if (!dev_priv->mm.userptr_wq) - return -ENOMEM; + spin_lock_init(&dev_priv->mm.notifier_lock); #endif return 0; @@ -851,7 +535,4 @@ int i915_gem_init_userptr(struct drm_i915_private *dev_priv) void i915_gem_cleanup_userptr(struct drm_i915_private *dev_priv) { -#ifdef CONFIG_MMU_NOTIFIER - destroy_workqueue(dev_priv->mm.userptr_wq); -#endif } diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 23db5a5f5fcb..70280a275dce 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -591,11 +591,10 @@ struct i915_gem_mm { #ifdef CONFIG_MMU_NOTIFIER /** - * Workqueue to fault in userptr pages, flushed by the execbuf - * when required but otherwise left to userspace to try again - * on EAGAIN. + * notifier_lock for mmu notifiers, memory may not be allocated + * while holding this lock. */ - struct workqueue_struct *userptr_wq; + spinlock_t notifier_lock; #endif /* shrinker accounting, also useful for userland debugging */ @@ -978,8 +977,6 @@ struct drm_i915_private { struct i915_ggtt ggtt; /* VM representing the global address space */ struct i915_gem_mm mm; - DECLARE_HASHTABLE(mm_structs, 7); - spinlock_t mm_lock; /* Kernel Modesetting */ diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index d349c0b796ec..e4097201f0e5 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -1183,10 +1183,8 @@ int i915_gem_init(struct drm_i915_private *dev_priv) err_unlock: i915_gem_drain_workqueue(dev_priv); - if (ret != -EIO) { + if (ret != -EIO) intel_uc_cleanup_firmwares(&dev_priv->gt.uc); - i915_gem_cleanup_userptr(dev_priv); - } if (ret == -EIO) { /* @@ -1245,7 +1243,6 @@ void i915_gem_driver_release(struct drm_i915_private *dev_priv) intel_wa_list_free(&dev_priv->gt_wa_list); intel_uc_cleanup_firmwares(&dev_priv->gt.uc); - i915_gem_cleanup_userptr(dev_priv); i915_gem_drain_freed_objects(dev_priv); From patchwork Fri Oct 16 10:43:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841493 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 21C9AC3279D for ; Fri, 16 Oct 2020 10:45:34 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id C1022207F7 for ; Fri, 16 Oct 2020 10:45:33 +0000 (UTC) DMARC-Filter: OpenDMARC Filter 
v1.3.2 mail.kernel.org C1022207F7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 6B1896EB91; Fri, 16 Oct 2020 10:45:01 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [IPv6:2a02:2308::216:3eff:fe92:dfa3]) by gabe.freedesktop.org (Postfix) with ESMTPS id C72936EACA for ; Fri, 16 Oct 2020 10:44:50 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:43:59 +0200 Message-Id: <20201016104444.1492028-17-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 16/61] drm/i915: Flatten obj->mm.lock X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" With userptr fixed, there is no need for all separate lockdep classes now, and we can remove all lockdep tricks used. A trylock in the shrinker is all we need now to flatten the locking hierarchy. Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_object.c | 6 +--- drivers/gpu/drm/i915/gem/i915_gem_object.h | 20 ++---------- drivers/gpu/drm/i915/gem/i915_gem_pages.c | 34 ++++++++++---------- drivers/gpu/drm/i915/gem/i915_gem_phys.c | 2 +- drivers/gpu/drm/i915/gem/i915_gem_shrinker.c | 10 +++--- drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 2 +- 6 files changed, 27 insertions(+), 47 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c index 1393988bd5af..028a556ab1a5 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c @@ -62,7 +62,7 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj, const struct drm_i915_gem_object_ops *ops, struct lock_class_key *key, unsigned flags) { - __mutex_init(&obj->mm.lock, ops->name ?: "obj->mm.lock", key); + mutex_init(&obj->mm.lock); spin_lock_init(&obj->vma.lock); INIT_LIST_HEAD(&obj->vma.list); @@ -86,10 +86,6 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj, mutex_init(&obj->mm.get_page.lock); INIT_RADIX_TREE(&obj->mm.get_dma_page.radix, GFP_KERNEL | __GFP_NOWARN); mutex_init(&obj->mm.get_dma_page.lock); - - if (IS_ENABLED(CONFIG_LOCKDEP) && i915_gem_object_is_shrinkable(obj)) - i915_gem_shrinker_taints_mutex(to_i915(obj->base.dev), - &obj->mm.lock); } /** diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index abcce4d285b5..b7d15a3db10e 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -316,27 +316,10 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj, int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj); int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj); -enum i915_mm_subclass { /* lockdep subclass for obj->mm.lock/struct_mutex */ - I915_MM_NORMAL = 0, - /* - * Only used by struct_mutex, when called "recursively" from - * 
direct-reclaim-esque. Safe because there is only every one - * struct_mutex in the entire system. - */ - I915_MM_SHRINKER = 1, - /* - * Used for obj->mm.lock when allocating pages. Safe because the object - * isn't yet on any LRU, and therefore the shrinker can't deadlock on - * it. As soon as the object has pages, obj->mm.lock nests within - * fs_reclaim. - */ - I915_MM_GET_PAGES = 1, -}; - static inline int __must_check i915_gem_object_pin_pages(struct drm_i915_gem_object *obj) { - might_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES); + might_lock(&obj->mm.lock); if (atomic_inc_not_zero(&obj->mm.pages_pin_count)) return 0; @@ -380,6 +363,7 @@ i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj) } int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj); +int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj); void i915_gem_object_truncate(struct drm_i915_gem_object *obj); void i915_gem_object_writeback(struct drm_i915_gem_object *obj); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c index b81f253f5dc9..00ce88c609f9 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c @@ -111,7 +111,7 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj) { int err; - err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES); + err = mutex_lock_interruptible(&obj->mm.lock); if (err) return err; @@ -195,21 +195,13 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj) return pages; } -int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj) +int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj) { struct sg_table *pages; - int err; if (i915_gem_object_has_pinned_pages(obj)) return -EBUSY; - /* May be called by shrinker from within get_pages() (on another bo) */ - mutex_lock(&obj->mm.lock); - if (unlikely(atomic_read(&obj->mm.pages_pin_count))) { - err = -EBUSY; - goto unlock; - } - i915_gem_object_release_mmap_offset(obj); /* @@ -225,14 +217,22 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj) * get_pages backends we should be better able to handle the * cancellation of the async task in a more uniform manner. 
*/ - if (!pages) - pages = ERR_PTR(-EINVAL); - - if (!IS_ERR(pages)) + if (!IS_ERR_OR_NULL(pages)) obj->ops->put_pages(obj, pages); - err = 0; -unlock: + return 0; +} + +int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj) +{ + int err; + + if (i915_gem_object_has_pinned_pages(obj)) + return -EBUSY; + + /* May be called by shrinker from within get_pages() (on another bo) */ + mutex_lock(&obj->mm.lock); + err = __i915_gem_object_put_pages_locked(obj); mutex_unlock(&obj->mm.lock); return err; @@ -354,7 +354,7 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj, !i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM)) return ERR_PTR(-ENXIO); - err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES); + err = mutex_lock_interruptible(&obj->mm.lock); if (err) return ERR_PTR(err); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c index 153de6538378..4322e35cfe48 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c @@ -186,7 +186,7 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align) if (err) return err; - err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES); + err = mutex_lock_interruptible(&obj->mm.lock); if (err) goto err_unlock; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c index dc8f052a0ffe..afc6e5b4dcf1 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c @@ -48,9 +48,9 @@ static bool unsafe_drop_pages(struct drm_i915_gem_object *obj, flags = I915_GEM_OBJECT_UNBIND_TEST; if (i915_gem_object_unbind(obj, flags) == 0) - __i915_gem_object_put_pages(obj); + return true; - return !i915_gem_object_has_pages(obj); + return false; } static void try_to_writeback(struct drm_i915_gem_object *obj, @@ -199,10 +199,10 @@ i915_gem_shrink(struct drm_i915_private *i915, spin_unlock_irqrestore(&i915->mm.obj_lock, flags); - if (unsafe_drop_pages(obj, shrink)) { + if (unsafe_drop_pages(obj, shrink) && + mutex_trylock(&obj->mm.lock)) { /* May arrive from get_pages on another bo */ - mutex_lock(&obj->mm.lock); - if (!i915_gem_object_has_pages(obj)) { + if (!__i915_gem_object_put_pages_locked(obj)) { try_to_writeback(obj, shrink); count += obj->base.size >> PAGE_SHIFT; } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index 58a426bec2e5..01a9b7306c68 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -240,7 +240,7 @@ static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool if (GEM_WARN_ON(i915_gem_object_has_pinned_pages(obj))) return -EBUSY; - mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES); + mutex_lock(&obj->mm.lock); pages = __i915_gem_object_unset_pages(obj); if (!IS_ERR_OR_NULL(pages)) From patchwork Fri Oct 16 10:44:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841459 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.9 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,UNWANTED_LANGUAGE_BODY,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from 
mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0694FC35257 for ; Fri, 16 Oct 2020 10:45:37 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id AEDD8207F7 for ; Fri, 16 Oct 2020 10:45:36 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org AEDD8207F7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id B82026EB98; Fri, 16 Oct 2020 10:45:01 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id D0DEB6EB19 for ; Fri, 16 Oct 2020 10:44:50 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:00 +0200 Message-Id: <20201016104444.1492028-18-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 17/61] drm/i915: Populate logical context during first pin. X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" This allows us to remove pin_map from state allocation, which saves us a few retry loops. We won't need this until first pin, anyway. 
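As a rough sketch of the deferred-init pattern this adopts (a standalone
userspace model, not the driver code: ctx, init_done and populate() are
stand-ins for intel_context, CONTEXT_INIT_BIT and populate_lr_context()),
only the first pre-pin pays for the populate step:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct ctx {
	bool init_done;		/* plays the role of CONTEXT_INIT_BIT */
	char state[32];		/* plays the role of the mapped lrc state */
};

static void populate(struct ctx *c)
{
	strcpy(c->state, "defaults");	/* one-time copy of default state */
	printf("populate: ran once\n");
}

static int pre_pin(struct ctx *c)
{
	/* __test_and_set_bit(CONTEXT_INIT_BIT, &ce->flags) in the patch */
	if (!c->init_done) {
		c->init_done = true;
		populate(c);
	}
	return 0;
}

int main(void)
{
	struct ctx c = { 0 };

	pre_pin(&c);	/* first pin: populates */
	pre_pin(&c);	/* repin: populate skipped */
	return 0;
}

The design point is that allocation no longer needs a page mapping at all;
the one-time cost moves to first pin, where pre-pin has just mapped the
state object anyway.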
Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gt/intel_context_types.h | 13 ++- drivers/gpu/drm/i915/gt/intel_lrc.c | 107 +++++++++--------- 2 files changed, 62 insertions(+), 58 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index 552cb57a2e8c..bebf52868563 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -64,12 +64,13 @@ struct intel_context { unsigned long flags; #define CONTEXT_BARRIER_BIT 0 #define CONTEXT_ALLOC_BIT 1 -#define CONTEXT_VALID_BIT 2 -#define CONTEXT_CLOSED_BIT 3 -#define CONTEXT_USE_SEMAPHORES 4 -#define CONTEXT_BANNED 5 -#define CONTEXT_FORCE_SINGLE_SUBMISSION 6 -#define CONTEXT_NOPREEMPT 7 +#define CONTEXT_INIT_BIT 2 +#define CONTEXT_VALID_BIT 3 +#define CONTEXT_CLOSED_BIT 4 +#define CONTEXT_USE_SEMAPHORES 5 +#define CONTEXT_BANNED 6 +#define CONTEXT_FORCE_SINGLE_SUBMISSION 7 +#define CONTEXT_NOPREEMPT 8 u32 *lrc_reg_state; union { diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c index 5f5553c7107b..7e256b144c68 100644 --- a/drivers/gpu/drm/i915/gt/intel_lrc.c +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c @@ -3527,9 +3527,39 @@ __execlists_update_reg_state(const struct intel_context *ce, } } +static void populate_lr_context(struct intel_context *ce, + struct intel_engine_cs *engine, + void *vaddr) +{ + bool inhibit = true; + struct drm_i915_gem_object *ctx_obj = ce->state->obj; + + set_redzone(vaddr, engine); + + if (engine->default_state) { + shmem_read(engine->default_state, 0, + vaddr, engine->context_size); + __set_bit(CONTEXT_VALID_BIT, &ce->flags); + inhibit = false; + } + + /* Clear the ppHWSP (inc. per-context counters) */ + memset(vaddr, 0, PAGE_SIZE); + + /* + * The second page of the context object contains some registers which + * must be set up prior to the first execution. 
+ */ + execlists_init_reg_state(vaddr + LRC_STATE_OFFSET, + ce, engine, ce->ring, inhibit); + + __i915_gem_object_flush_map(ctx_obj, 0, engine->context_size); +} + static int -execlists_context_pre_pin(struct intel_context *ce, - struct i915_gem_ww_ctx *ww, void **vaddr) +__execlists_context_pre_pin(struct intel_context *ce, + struct intel_engine_cs *engine, + struct i915_gem_ww_ctx *ww, void **vaddr) { GEM_BUG_ON(!ce->state); GEM_BUG_ON(!i915_vma_is_pinned(ce->state)); @@ -3537,8 +3567,20 @@ execlists_context_pre_pin(struct intel_context *ce, *vaddr = i915_gem_object_pin_map(ce->state->obj, i915_coherent_map_type(ce->engine->i915) | I915_MAP_OVERRIDE); + if (IS_ERR(*vaddr)) + return PTR_ERR(*vaddr); + + if (!__test_and_set_bit(CONTEXT_INIT_BIT, &ce->flags)) + populate_lr_context(ce, engine, *vaddr); + + return 0; +} - return PTR_ERR_OR_ZERO(*vaddr); +static int +execlists_context_pre_pin(struct intel_context *ce, + struct i915_gem_ww_ctx *ww, void **vaddr) +{ + return __execlists_context_pre_pin(ce, ce->engine, ww, vaddr); } static int @@ -5333,45 +5375,6 @@ static void execlists_init_reg_state(u32 *regs, __reset_stop_ring(regs, engine); } -static int -populate_lr_context(struct intel_context *ce, - struct drm_i915_gem_object *ctx_obj, - struct intel_engine_cs *engine, - struct intel_ring *ring) -{ - bool inhibit = true; - void *vaddr; - - vaddr = i915_gem_object_pin_map(ctx_obj, I915_MAP_WB); - if (IS_ERR(vaddr)) { - drm_dbg(&engine->i915->drm, "Could not map object pages!\n"); - return PTR_ERR(vaddr); - } - - set_redzone(vaddr, engine); - - if (engine->default_state) { - shmem_read(engine->default_state, 0, - vaddr, engine->context_size); - __set_bit(CONTEXT_VALID_BIT, &ce->flags); - inhibit = false; - } - - /* Clear the ppHWSP (inc. per-context counters) */ - memset(vaddr, 0, PAGE_SIZE); - - /* - * The second page of the context object contains some registers which - * must be set up prior to the first execution. 
- */ - execlists_init_reg_state(vaddr + LRC_STATE_OFFSET, - ce, engine, ring, inhibit); - - __i915_gem_object_flush_map(ctx_obj, 0, engine->context_size); - i915_gem_object_unpin_map(ctx_obj); - return 0; -} - static struct intel_timeline *pinned_timeline(struct intel_context *ce) { struct intel_timeline *tl = fetch_and_zero(&ce->timeline); @@ -5435,20 +5438,11 @@ static int __execlists_context_alloc(struct intel_context *ce, goto error_deref_obj; } - ret = populate_lr_context(ce, ctx_obj, engine, ring); - if (ret) { - drm_dbg(&engine->i915->drm, - "Failed to populate LRC: %d\n", ret); - goto error_ring_free; - } - ce->ring = ring; ce->state = vma; return 0; -error_ring_free: - intel_ring_put(ring); error_deref_obj: i915_gem_object_put(ctx_obj); return ret; @@ -5526,6 +5520,15 @@ static int virtual_context_alloc(struct intel_context *ce) return __execlists_context_alloc(ce, ve->siblings[0]); } +static int +virtual_context_pre_pin(struct intel_context *ce, + struct i915_gem_ww_ctx *ww, void **vaddr) +{ + struct virtual_engine *ve = container_of(ce, typeof(*ve), context); + + return __execlists_context_pre_pin(ce, ve->siblings[0], ww, vaddr); +} + static int virtual_context_pin(struct intel_context *ce, void *vaddr) { struct virtual_engine *ve = container_of(ce, typeof(*ve), context); @@ -5559,7 +5562,7 @@ static void virtual_context_exit(struct intel_context *ce) static const struct intel_context_ops virtual_context_ops = { .alloc = virtual_context_alloc, - .pre_pin = execlists_context_pre_pin, + .pre_pin = virtual_context_pre_pin, .pin = virtual_context_pin, .unpin = execlists_context_unpin, .post_unpin = execlists_context_post_unpin, From patchwork Fri Oct 16 10:44:01 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841357 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BFD34C43457 for ; Fri, 16 Oct 2020 10:45:15 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 619552084C for ; Fri, 16 Oct 2020 10:45:15 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 619552084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 7CD0D6EB19; Fri, 16 Oct 2020 10:44:58 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [IPv6:2a02:2308::216:3eff:fe92:dfa3]) by gabe.freedesktop.org (Postfix) with ESMTPS id CFE2F6EB13 for ; Fri, 16 Oct 2020 10:44:50 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:01 +0200 Message-Id: <20201016104444.1492028-19-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: 
<20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 18/61] drm/i915: Make ring submission compatible with obj->mm.lock removal, v2. X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Dan Carpenter Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" We map the initial context during first pin. This allows us to remove pin_map from state allocation, which saves us a few retry loops. We won't need this until first pin anyway. intel_ring_submission_setup() is also reworked slightly to do all pinning in a single ww loop. Changes since v1: - Handle -EDEADLK backoff in intel_ring_submission_setup() better. - Handle smatch errors reported by Dan and testbot. Signed-off-by: Maarten Lankhorst Reported-by: kernel test robot Reported-by: Dan Carpenter Reviewed-by: Thomas Hellström --- .../gpu/drm/i915/gt/intel_ring_submission.c | 184 +++++++++++------- 1 file changed, 118 insertions(+), 66 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c index a41b43f445b8..6b280904db43 100644 --- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c @@ -478,6 +478,26 @@ static void ring_context_destroy(struct kref *ref) intel_context_free(ce); } +static int ring_context_init_default_state(struct intel_context *ce, + struct i915_gem_ww_ctx *ww) +{ + struct drm_i915_gem_object *obj = ce->state->obj; + void *vaddr; + + vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB); + if (IS_ERR(vaddr)) + return PTR_ERR(vaddr); + + shmem_read(ce->engine->default_state, 0, + vaddr, ce->engine->context_size); + + i915_gem_object_flush_map(obj); + __i915_gem_object_release_map(obj); + + __set_bit(CONTEXT_VALID_BIT, &ce->flags); + return 0; +} + static int ring_context_pre_pin(struct intel_context *ce, struct i915_gem_ww_ctx *ww, void **unused) @@ -485,6 +505,13 @@ static int ring_context_pre_pin(struct intel_context *ce, struct i915_address_space *vm; int err = 0; + if (ce->engine->default_state && + !test_bit(CONTEXT_VALID_BIT, &ce->flags)) { + err = ring_context_init_default_state(ce, ww); + if (err) + return err; + } + vm = vm_alias(ce->vm); if (vm) err = gen6_ppgtt_pin(i915_vm_to_ppgtt((vm)), ww); @@ -540,22 +567,6 @@ alloc_context_vma(struct intel_engine_cs *engine) if (IS_IVYBRIDGE(i915)) i915_gem_object_set_cache_coherency(obj, I915_CACHE_L3_LLC); - if (engine->default_state) { - void *vaddr; - - vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB); - if (IS_ERR(vaddr)) { - err = PTR_ERR(vaddr); - goto err_obj; - } - - shmem_read(engine->default_state, 0, - vaddr, engine->context_size); - - i915_gem_object_flush_map(obj); - __i915_gem_object_release_map(obj); - } - vma = i915_vma_instance(obj, &engine->gt->ggtt->vm, NULL); if (IS_ERR(vma)) { err = PTR_ERR(vma); @@ -587,8 +598,6 @@ static int ring_context_alloc(struct intel_context *ce) return PTR_ERR(vma); ce->state = vma; - if (engine->default_state) - __set_bit(CONTEXT_VALID_BIT, &ce->flags); } return 0; @@ -1184,37 +1193,15 @@ static int gen7_ctx_switch_bb_setup(struct intel_engine_cs * const engine, return gen7_setup_clear_gpr_bb(engine, vma); } -static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine) +static int 
gen7_ctx_switch_bb_init(struct intel_engine_cs *engine, + struct i915_gem_ww_ctx *ww, + struct i915_vma *vma) { - struct drm_i915_gem_object *obj; - struct i915_vma *vma; - int size; int err; - size = gen7_ctx_switch_bb_setup(engine, NULL /* probe size */); - if (size <= 0) - return size; - - size = ALIGN(size, PAGE_SIZE); - obj = i915_gem_object_create_internal(engine->i915, size); - if (IS_ERR(obj)) - return PTR_ERR(obj); - - vma = i915_vma_instance(obj, engine->gt->vm, NULL); - if (IS_ERR(vma)) { - err = PTR_ERR(vma); - goto err_obj; - } - - vma->private = intel_context_create(engine); /* dummy residuals */ - if (IS_ERR(vma->private)) { - err = PTR_ERR(vma->private); - goto err_obj; - } - - err = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_HIGH); + err = i915_vma_pin_ww(vma, ww, 0, 0, PIN_USER | PIN_HIGH); if (err) - goto err_private; + return err; err = i915_vma_sync(vma); if (err) @@ -1229,17 +1216,53 @@ static int gen7_ctx_switch_bb_init(struct intel_engine_cs *engine) err_unpin: i915_vma_unpin(vma); -err_private: - intel_context_put(vma->private); -err_obj: - i915_gem_object_put(obj); return err; } +static struct i915_vma *gen7_ctx_vma(struct intel_engine_cs *engine) +{ + struct drm_i915_gem_object *obj; + struct i915_vma *vma; + int size, err; + + if (!IS_HASWELL(engine->i915) || engine->class != RENDER_CLASS) + return 0; + + err = gen7_ctx_switch_bb_setup(engine, NULL /* probe size */); + if (err < 0) + return ERR_PTR(err); + if (!err) + return NULL; + + size = ALIGN(err, PAGE_SIZE); + + obj = i915_gem_object_create_internal(engine->i915, size); + if (IS_ERR(obj)) + return ERR_CAST(obj); + + vma = i915_vma_instance(obj, engine->gt->vm, NULL); + if (IS_ERR(vma)) { + i915_gem_object_put(obj); + return ERR_CAST(vma); + } + + vma->private = intel_context_create(engine); /* dummy residuals */ + if (IS_ERR(vma->private)) { + err = PTR_ERR(vma->private); + vma->private = NULL; + i915_gem_object_put(obj); + return ERR_PTR(err); + } + + return vma; +} + int intel_ring_submission_setup(struct intel_engine_cs *engine) { + struct i915_gem_ww_ctx ww; struct intel_timeline *timeline; struct intel_ring *ring; + struct i915_vma *gen7_wa_vma; int err; setup_common(engine); @@ -1270,43 +1293,72 @@ int intel_ring_submission_setup(struct intel_engine_cs *engine) } GEM_BUG_ON(timeline->has_initial_breadcrumb); - err = intel_timeline_pin(timeline, NULL); - if (err) - goto err_timeline; - ring = intel_engine_create_ring(engine, SZ_16K); if (IS_ERR(ring)) { err = PTR_ERR(ring); - goto err_timeline_unpin; + goto err_timeline; } - err = intel_ring_pin(ring, NULL); - if (err) - goto err_ring; - GEM_BUG_ON(engine->legacy.ring); engine->legacy.ring = ring; engine->legacy.timeline = timeline; - GEM_BUG_ON(timeline->hwsp_ggtt != engine->status_page.vma); + gen7_wa_vma = gen7_ctx_vma(engine); + if (IS_ERR(gen7_wa_vma)) { + err = PTR_ERR(gen7_wa_vma); + goto err_ring; + } - if (IS_HASWELL(engine->i915) && engine->class == RENDER_CLASS) { - err = gen7_ctx_switch_bb_init(engine); + i915_gem_ww_ctx_init(&ww, false); + +retry: + err = i915_gem_object_lock(timeline->hwsp_ggtt->obj, &ww); + if (!err && gen7_wa_vma) + err = i915_gem_object_lock(gen7_wa_vma->obj, &ww); + if (!err && engine->legacy.ring->vma->obj) + err = i915_gem_object_lock(engine->legacy.ring->vma->obj, &ww); + if (!err) + err = intel_timeline_pin(timeline, &ww); + if (!err) { + err = intel_ring_pin(ring, &ww); if (err) - goto err_ring_unpin; + intel_timeline_unpin(timeline); } + if (err) + goto out; + + GEM_BUG_ON(timeline->hwsp_ggtt != 
engine->status_page.vma); + + if (gen7_wa_vma) { + err = gen7_ctx_switch_bb_init(engine, &ww, gen7_wa_vma); + if (err) { + intel_ring_unpin(ring); + intel_timeline_unpin(timeline); + } + } + +out: + if (err == -EDEADLK) { + err = i915_gem_ww_ctx_backoff(&ww); + if (!err) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); + if (err) + goto err_gen7_put; /* Finally, take ownership and responsibility for cleanup! */ engine->release = ring_release; return 0; -err_ring_unpin: - intel_ring_unpin(ring); +err_gen7_put: + if (gen7_wa_vma) { + intel_context_put(gen7_wa_vma->private); + i915_gem_object_put(gen7_wa_vma->obj); + } err_ring: intel_ring_put(ring); -err_timeline_unpin: - intel_timeline_unpin(timeline); err_timeline: intel_timeline_put(timeline); err: From patchwork Fri Oct 16 10:44:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841397 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3D10FC433DF for ; Fri, 16 Oct 2020 10:45:31 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id C20E4207F7 for ; Fri, 16 Oct 2020 10:45:30 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C20E4207F7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 0730E6EB8E; Fri, 16 Oct 2020 10:45:01 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [IPv6:2a02:2308::216:3eff:fe92:dfa3]) by gabe.freedesktop.org (Postfix) with ESMTPS id 01D196EABC for ; Fri, 16 Oct 2020 10:44:51 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:02 +0200 Message-Id: <20201016104444.1492028-20-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 19/61] drm/i915: Handle ww locking in init_status_page X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Try to pin to ggtt first, and use a full ww loop to handle eviction correctly. 
Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gt/intel_engine_cs.c | 37 +++++++++++++++-------- 1 file changed, 24 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c index 1985772152bf..66d87ce764e0 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c @@ -615,6 +615,7 @@ static void cleanup_status_page(struct intel_engine_cs *engine) } static int pin_ggtt_status_page(struct intel_engine_cs *engine, + struct i915_gem_ww_ctx *ww, struct i915_vma *vma) { unsigned int flags; @@ -635,12 +636,13 @@ static int pin_ggtt_status_page(struct intel_engine_cs *engine, else flags = PIN_HIGH; - return i915_ggtt_pin(vma, NULL, 0, flags); + return i915_ggtt_pin(vma, ww, 0, flags); } static int init_status_page(struct intel_engine_cs *engine) { struct drm_i915_gem_object *obj; + struct i915_gem_ww_ctx ww; struct i915_vma *vma; void *vaddr; int ret; @@ -664,30 +666,39 @@ static int init_status_page(struct intel_engine_cs *engine) vma = i915_vma_instance(obj, &engine->gt->ggtt->vm, NULL); if (IS_ERR(vma)) { ret = PTR_ERR(vma); - goto err; + goto err_put; } + i915_gem_ww_ctx_init(&ww, true); +retry: + ret = i915_gem_object_lock(obj, &ww); + if (!ret && !HWS_NEEDS_PHYSICAL(engine->i915)) + ret = pin_ggtt_status_page(engine, &ww, vma); + if (ret) + goto err; + vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB); if (IS_ERR(vaddr)) { ret = PTR_ERR(vaddr); - goto err; + goto err_unpin; } engine->status_page.addr = memset(vaddr, 0, PAGE_SIZE); engine->status_page.vma = vma; - if (!HWS_NEEDS_PHYSICAL(engine->i915)) { - ret = pin_ggtt_status_page(engine, vma); - if (ret) - goto err_unpin; - } - - return 0; - err_unpin: - i915_gem_object_unpin_map(obj); + if (ret) + i915_vma_unpin(vma); err: - i915_gem_object_put(obj); + if (ret == -EDEADLK) { + ret = i915_gem_ww_ctx_backoff(&ww); + if (!ret) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); +err_put: + if (ret) + i915_gem_object_put(obj); return ret; } From patchwork Fri Oct 16 10:44:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841355 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 93F69C43467 for ; Fri, 16 Oct 2020 10:45:19 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 3EFBE2084C for ; Fri, 16 Oct 2020 10:45:19 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3EFBE2084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 44F9A6EB18; Fri, 16 Oct 2020 10:44:59 +0000 (UTC) Received: from mblankhorst.nl 
(mblankhorst.nl [IPv6:2a02:2308::216:3eff:fe92:dfa3]) by gabe.freedesktop.org (Postfix) with ESMTPS id 05C6F6EACC for ; Fri, 16 Oct 2020 10:44:51 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:03 +0200 Message-Id: <20201016104444.1492028-21-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 20/61] drm/i915: Rework clflush to work correctly without obj->mm.lock. X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Pin in the caller, not in the work itself. This should also work better for dma-fence annotations. Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_clflush.c | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c index bc0223716906..daf9284ef1f5 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c @@ -27,15 +27,8 @@ static void __do_clflush(struct drm_i915_gem_object *obj) static int clflush_work(struct dma_fence_work *base) { struct clflush *clflush = container_of(base, typeof(*clflush), base); - struct drm_i915_gem_object *obj = clflush->obj; - int err; - err = i915_gem_object_pin_pages(obj); - if (err) - return err; - - __do_clflush(obj); - i915_gem_object_unpin_pages(obj); + __do_clflush(clflush->obj); return 0; } @@ -44,6 +37,7 @@ static void clflush_release(struct dma_fence_work *base) { struct clflush *clflush = container_of(base, typeof(*clflush), base); + i915_gem_object_unpin_pages(clflush->obj); i915_gem_object_put(clflush->obj); } @@ -63,6 +57,11 @@ static struct clflush *clflush_work_create(struct drm_i915_gem_object *obj) if (!clflush) return NULL; + if (__i915_gem_object_get_pages(obj) < 0) { + kfree(clflush); + return NULL; + } + dma_fence_work_init(&clflush->base, &clflush_ops); clflush->obj = i915_gem_object_get(obj); /* obj <-> clflush cycle */ From patchwork Fri Oct 16 10:44:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841467 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 987D6C35266 for ; Fri, 16 Oct 2020 10:45:37 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 4EE6B207F7 for ; Fri, 16 Oct 2020 10:45:37 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4EE6B207F7 Authentication-Results: 
mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 43DCE6EC3A; Fri, 16 Oct 2020 10:45:21 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 14A6E6EB18 for ; Fri, 16 Oct 2020 10:44:51 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:04 +0200 Message-Id: <20201016104444.1492028-22-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 21/61] drm/i915: Pass ww ctx to intel_pin_to_display_plane X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Instead of multiple lockings, lock the object once, and perform the ww dance around attach_phys and pin_pages. Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/display/intel_display.c | 69 ++++++++++++------- drivers/gpu/drm/i915/display/intel_display.h | 2 +- drivers/gpu/drm/i915/display/intel_fbdev.c | 2 +- drivers/gpu/drm/i915/display/intel_overlay.c | 34 +++++++-- drivers/gpu/drm/i915/gem/i915_gem_domain.c | 30 ++------ drivers/gpu/drm/i915/gem/i915_gem_object.h | 1 + drivers/gpu/drm/i915/gem/i915_gem_phys.c | 10 +-- .../drm/i915/gem/selftests/i915_gem_phys.c | 2 + 8 files changed, 86 insertions(+), 64 deletions(-) diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c index 5690e2ae2366..3bd8ed4e8ff4 100644 --- a/drivers/gpu/drm/i915/display/intel_display.c +++ b/drivers/gpu/drm/i915/display/intel_display.c @@ -2232,6 +2232,7 @@ static bool intel_plane_uses_fence(const struct intel_plane_state *plane_state) struct i915_vma * intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb, + bool phys_cursor, const struct i915_ggtt_view *view, bool uses_fence, unsigned long *out_flags) @@ -2240,14 +2241,19 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb, struct drm_i915_private *dev_priv = to_i915(dev); struct drm_i915_gem_object *obj = intel_fb_obj(fb); intel_wakeref_t wakeref; + struct i915_gem_ww_ctx ww; struct i915_vma *vma; unsigned int pinctl; u32 alignment; + int ret; if (drm_WARN_ON(dev, !i915_gem_object_is_framebuffer(obj))) return ERR_PTR(-EINVAL); - alignment = intel_surf_alignment(fb, 0); + if (phys_cursor) + alignment = intel_cursor_alignment(dev_priv); + else + alignment = intel_surf_alignment(fb, 0); if (drm_WARN_ON(dev, alignment && !is_power_of_2(alignment))) return ERR_PTR(-EINVAL); @@ -2282,14 +2288,26 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb, if (HAS_GMCH(dev_priv)) pinctl |= PIN_MAPPABLE; - vma = i915_gem_object_pin_to_display_plane(obj, - alignment, view, pinctl); - if (IS_ERR(vma)) + i915_gem_ww_ctx_init(&ww, true); +retry: + ret = i915_gem_object_lock(obj, &ww); + if (!ret && phys_cursor) + ret = i915_gem_object_attach_phys(obj, alignment); + if (!ret) + ret = i915_gem_object_pin_pages(obj); + if (ret) goto err; - if 
(uses_fence && i915_vma_is_map_and_fenceable(vma)) { - int ret; + if (!ret) { + vma = i915_gem_object_pin_to_display_plane(obj, &ww, alignment, + view, pinctl); + if (IS_ERR(vma)) { + ret = PTR_ERR(vma); + goto err_unpin; + } + } + if (uses_fence && i915_vma_is_map_and_fenceable(vma)) { /* * Install a fence for tiled scan-out. Pre-i965 always needs a * fence, whereas 965+ only requires a fence if using @@ -2310,16 +2328,28 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb, ret = i915_vma_pin_fence(vma); if (ret != 0 && INTEL_GEN(dev_priv) < 4) { i915_gem_object_unpin_from_display_plane(vma); - vma = ERR_PTR(ret); - goto err; + goto err_unpin; } + ret = 0; - if (ret == 0 && vma->fence) + if (vma->fence) *out_flags |= PLANE_HAS_FENCE; } i915_vma_get(vma); + +err_unpin: + i915_gem_object_unpin_pages(obj); err: + if (ret == -EDEADLK) { + ret = i915_gem_ww_ctx_backoff(&ww); + if (!ret) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); + if (ret) + vma = ERR_PTR(ret); + atomic_dec(&dev_priv->gpu_error.pending_fb_pin); intel_runtime_pm_put(&dev_priv->runtime_pm, wakeref); return vma; @@ -16144,19 +16174,11 @@ static int intel_plane_pin_fb(struct intel_plane_state *plane_state) struct drm_i915_private *dev_priv = to_i915(plane->base.dev); struct drm_framebuffer *fb = plane_state->hw.fb; struct i915_vma *vma; + bool phys_cursor = + plane->id == PLANE_CURSOR && + INTEL_INFO(dev_priv)->display.cursor_needs_physical; - if (plane->id == PLANE_CURSOR && - INTEL_INFO(dev_priv)->display.cursor_needs_physical) { - struct drm_i915_gem_object *obj = intel_fb_obj(fb); - const int align = intel_cursor_alignment(dev_priv); - int err; - - err = i915_gem_object_attach_phys(obj, align); - if (err) - return err; - } - - vma = intel_pin_and_fence_fb_obj(fb, + vma = intel_pin_and_fence_fb_obj(fb, phys_cursor, &plane_state->view, intel_plane_uses_fence(plane_state), &plane_state->flags); @@ -16252,13 +16274,8 @@ intel_prepare_plane_fb(struct drm_plane *_plane, if (!obj) return 0; - ret = i915_gem_object_pin_pages(obj); - if (ret) - return ret; ret = intel_plane_pin_fb(new_plane_state); - - i915_gem_object_unpin_pages(obj); if (ret) return ret; diff --git a/drivers/gpu/drm/i915/display/intel_display.h b/drivers/gpu/drm/i915/display/intel_display.h index d10b7c8cde3f..03058d69d15d 100644 --- a/drivers/gpu/drm/i915/display/intel_display.h +++ b/drivers/gpu/drm/i915/display/intel_display.h @@ -551,7 +551,7 @@ void intel_release_load_detect_pipe(struct drm_connector *connector, struct intel_load_detect_pipe *old, struct drm_modeset_acquire_ctx *ctx); struct i915_vma * -intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb, +intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb, bool phys_cursor, const struct i915_ggtt_view *view, bool uses_fence, unsigned long *out_flags); diff --git a/drivers/gpu/drm/i915/display/intel_fbdev.c b/drivers/gpu/drm/i915/display/intel_fbdev.c index 842c04e63214..bdf44e923cc0 100644 --- a/drivers/gpu/drm/i915/display/intel_fbdev.c +++ b/drivers/gpu/drm/i915/display/intel_fbdev.c @@ -211,7 +211,7 @@ static int intelfb_create(struct drm_fb_helper *helper, * This also validates that any existing fb inherited from the * BIOS is suitable for own access. 
*/ - vma = intel_pin_and_fence_fb_obj(&ifbdev->fb->base, + vma = intel_pin_and_fence_fb_obj(&ifbdev->fb->base, false, &view, false, &flags); if (IS_ERR(vma)) { ret = PTR_ERR(vma); diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c index 52b4f6193b4c..9cf634cc7084 100644 --- a/drivers/gpu/drm/i915/display/intel_overlay.c +++ b/drivers/gpu/drm/i915/display/intel_overlay.c @@ -755,6 +755,32 @@ static u32 overlay_cmd_reg(struct drm_intel_overlay_put_image *params) return cmd; } +static struct i915_vma *intel_overlay_pin_fb(struct drm_i915_gem_object *new_bo) +{ + struct i915_gem_ww_ctx ww; + struct i915_vma *vma; + int ret; + + i915_gem_ww_ctx_init(&ww, true); +retry: + ret = i915_gem_object_lock(new_bo, &ww); + if (!ret) { + vma = i915_gem_object_pin_to_display_plane(new_bo, &ww, 0, + NULL, PIN_MAPPABLE); + ret = PTR_ERR_OR_ZERO(vma); + } + if (ret == -EDEADLK) { + ret = i915_gem_ww_ctx_backoff(&ww); + if (!ret) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); + if (ret) + return ERR_PTR(ret); + + return vma; +} + static int intel_overlay_do_put_image(struct intel_overlay *overlay, struct drm_i915_gem_object *new_bo, struct drm_intel_overlay_put_image *params) @@ -776,12 +802,10 @@ static int intel_overlay_do_put_image(struct intel_overlay *overlay, atomic_inc(&dev_priv->gpu_error.pending_fb_pin); - vma = i915_gem_object_pin_to_display_plane(new_bo, - 0, NULL, PIN_MAPPABLE); - if (IS_ERR(vma)) { - ret = PTR_ERR(vma); + vma = intel_overlay_pin_fb(new_bo); + if (IS_ERR(vma)) goto out_pin_section; - } + i915_gem_object_flush_frontbuffer(new_bo, ORIGIN_DIRTYFB); if (!overlay->active) { diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c index 43c22648b074..9adced5a6843 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c @@ -313,12 +313,12 @@ int i915_gem_set_caching_ioctl(struct drm_device *dev, void *data, */ struct i915_vma * i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj, + struct i915_gem_ww_ctx *ww, u32 alignment, const struct i915_ggtt_view *view, unsigned int flags) { struct drm_i915_private *i915 = to_i915(obj->base.dev); - struct i915_gem_ww_ctx ww; struct i915_vma *vma; int ret; @@ -326,11 +326,6 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj, if (HAS_LMEM(i915) && !i915_gem_object_is_lmem(obj)) return ERR_PTR(-EINVAL); - i915_gem_ww_ctx_init(&ww, true); -retry: - ret = i915_gem_object_lock(obj, &ww); - if (ret) - goto err; /* * The display engine is not coherent with the LLC cache on gen6. As * a result, we make sure that the pinning that is about to occur is @@ -345,7 +340,7 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj, HAS_WT(i915) ? 
I915_CACHE_WT : I915_CACHE_NONE); if (ret) - goto err; + return ERR_PTR(ret); /* * As the user may map the buffer once pinned in the display plane @@ -358,32 +353,19 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj, vma = ERR_PTR(-ENOSPC); if ((flags & PIN_MAPPABLE) == 0 && (!view || view->type == I915_GGTT_VIEW_NORMAL)) - vma = i915_gem_object_ggtt_pin_ww(obj, &ww, view, 0, alignment, + vma = i915_gem_object_ggtt_pin_ww(obj, ww, view, 0, alignment, flags | PIN_MAPPABLE | PIN_NONBLOCK); if (IS_ERR(vma) && vma != ERR_PTR(-EDEADLK)) - vma = i915_gem_object_ggtt_pin_ww(obj, &ww, view, 0, + vma = i915_gem_object_ggtt_pin_ww(obj, ww, view, 0, alignment, flags); - if (IS_ERR(vma)) { - ret = PTR_ERR(vma); - goto err; - } + if (IS_ERR(vma)) + return vma; vma->display_alignment = max_t(u64, vma->display_alignment, alignment); i915_gem_object_flush_if_display_locked(obj); -err: - if (ret == -EDEADLK) { - ret = i915_gem_ww_ctx_backoff(&ww); - if (!ret) - goto retry; - } - i915_gem_ww_ctx_fini(&ww); - - if (ret) - return ERR_PTR(ret); - return vma; } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index b7d15a3db10e..d3086a59b5ad 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -466,6 +466,7 @@ int __must_check i915_gem_object_set_to_cpu_domain(struct drm_i915_gem_object *obj, bool write); struct i915_vma * __must_check i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj, + struct i915_gem_ww_ctx *ww, u32 alignment, const struct i915_ggtt_view *view, unsigned int flags); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c index 4322e35cfe48..15d8f8d52cbe 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c @@ -169,6 +169,8 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align) { int err; + assert_object_held(obj); + if (align > obj->base.size) return -EINVAL; @@ -182,13 +184,9 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align) if (err) return err; - err = i915_gem_object_lock_interruptible(obj, NULL); - if (err) - return err; - err = mutex_lock_interruptible(&obj->mm.lock); if (err) - goto err_unlock; + return err; if (unlikely(!i915_gem_object_has_struct_page(obj))) goto out; @@ -219,8 +217,6 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align) out: mutex_unlock(&obj->mm.lock); -err_unlock: - i915_gem_object_unlock(obj); return err; } diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c index 0cfa082047fe..3a6ce87f8b52 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_phys.c @@ -31,7 +31,9 @@ static int mock_phys_object(void *arg) goto out_obj; } + i915_gem_object_lock(obj, NULL); err = i915_gem_object_attach_phys(obj, PAGE_SIZE); + i915_gem_object_unlock(obj); if (err) { pr_err("i915_gem_object_attach_phys failed, err=%d\n", err); goto out_obj; From patchwork Fri Oct 16 10:44:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841333 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, 
HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 126EBC433DF for ; Fri, 16 Oct 2020 10:45:09 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 9D9EF207F7 for ; Fri, 16 Oct 2020 10:45:06 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9D9EF207F7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 6F19C6EABA; Fri, 16 Oct 2020 10:44:56 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 2CE256EB29 for ; Fri, 16 Oct 2020 10:44:51 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:05 +0200 Message-Id: <20201016104444.1492028-23-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 22/61] drm/i915: Add object locking to vm_fault_cpu X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Take a simple lock so we hold ww around (un)pin_pages as needed. 
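A minimal model of the fault-path rule this introduces (stand-in functions
and an illustrative VM_FAULT_NOPAGE value, not the mm API itself): if the
object lock cannot be taken interruptibly, return VM_FAULT_NOPAGE so the
fault is simply retried instead of blocking:

#include <stdio.h>

#define VM_FAULT_NOPAGE 0x100	/* value is illustrative */

/* Stand-in for i915_gem_object_lock_interruptible(): fails when a
 * signal is pending instead of sleeping uninterruptibly. */
static int lock_interruptible(int signal_pending)
{
	return signal_pending ? -512 /* -ERESTARTSYS */ : 0;
}

static unsigned int fault(int signal_pending)
{
	if (lock_interruptible(signal_pending))
		return VM_FAULT_NOPAGE;	/* caller retries the fault */

	/* pin pages, insert PTEs, unpin pages ... */

	/* unlock */
	return 0;
}

int main(void)
{
	printf("uncontended: %#x\n", fault(0));
	printf("interrupted: %#x\n", fault(1));
	return 0;
}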
Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_mman.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c index 5aa037ca3a41..ba8e9ef6943d 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c @@ -246,6 +246,9 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf) area->vm_flags & VM_WRITE)) return VM_FAULT_SIGBUS; + if (i915_gem_object_lock_interruptible(obj, NULL)) + return VM_FAULT_NOPAGE; + err = i915_gem_object_pin_pages(obj); if (err) goto out; @@ -269,6 +272,7 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf) i915_gem_object_unpin_pages(obj); out: + i915_gem_object_unlock(obj); return i915_error_to_vmf_fault(err); } From patchwork Fri Oct 16 10:44:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841495 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9B7CCC35266 for ; Fri, 16 Oct 2020 10:45:43 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 396212084C for ; Fri, 16 Oct 2020 10:45:43 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 396212084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id D8B7C6ED9E; Fri, 16 Oct 2020 10:45:23 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [IPv6:2a02:2308::216:3eff:fe92:dfa3]) by gabe.freedesktop.org (Postfix) with ESMTPS id 208C46EB1A for ; Fri, 16 Oct 2020 10:44:51 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:06 +0200 Message-Id: <20201016104444.1492028-24-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 23/61] drm/i915: Move pinning to inside engine_wa_list_verify() X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" This should be done as part of the ww loop, in order to remove a i915_vma_pin that needs ww held. Now only i915_ggtt_pin() callers remaining. 
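A sketch of the unwind-order consequence, with illustrative stubs only:
once the pin happens inside the ww-locked section, its release needs its
own error label (the new err_vma: in the patch), and both the success and
failure paths fall through it in reverse acquisition order:

#include <stdio.h>

static int pin_vma(void)	{ printf("pin vma\n"); return 0; }
static void unpin_vma(void)	{ printf("unpin vma\n"); }
static int create_request(void)	{ return 0; }
static void put_request(void)	{ printf("put request\n"); }

static int verify(void)
{
	int err;

	err = pin_vma();	/* moved inside the ww-locked section */
	if (err)
		return err;

	err = create_request();
	if (err)
		goto err_vma;	/* new label: the pin gets its own unwind */

	/* ... emit the request, read back, compare ... */
	put_request();
err_vma:
	unpin_vma();		/* released in reverse acquisition order */
	return err;
}

int main(void)
{
	return verify() ? 1 : 0;
}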
Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gt/intel_workarounds.c | 24 ++++++++---------- .../gpu/drm/i915/gt/selftest_workarounds.c | 25 ++++++++++++++++--- 2 files changed, 32 insertions(+), 17 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c index fed9503a7c4e..0f2c09d62322 100644 --- a/drivers/gpu/drm/i915/gt/intel_workarounds.c +++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c @@ -2071,7 +2071,6 @@ create_scratch(struct i915_address_space *vm, int count) struct drm_i915_gem_object *obj; struct i915_vma *vma; unsigned int size; - int err; size = round_up(count * sizeof(u32), PAGE_SIZE); obj = i915_gem_object_create_internal(vm->i915, size); @@ -2082,20 +2081,11 @@ create_scratch(struct i915_address_space *vm, int count) vma = i915_vma_instance(obj, vm, NULL); if (IS_ERR(vma)) { - err = PTR_ERR(vma); - goto err_obj; + i915_gem_object_put(obj); + return vma; } - err = i915_vma_pin(vma, 0, 0, - i915_vma_is_ggtt(vma) ? PIN_GLOBAL : PIN_USER); - if (err) - goto err_obj; - return vma; - -err_obj: - i915_gem_object_put(obj); - return ERR_PTR(err); } struct mcr_range { @@ -2213,10 +2203,15 @@ static int engine_wa_list_verify(struct intel_context *ce, if (err) goto err_pm; + err = i915_vma_pin_ww(vma, &ww, 0, 0, + i915_vma_is_ggtt(vma) ? PIN_GLOBAL : PIN_USER); + if (err) + goto err_unpin; + rq = i915_request_create(ce); if (IS_ERR(rq)) { err = PTR_ERR(rq); - goto err_unpin; + goto err_vma; } err = i915_request_await_object(rq, vma->obj, true); @@ -2257,6 +2252,8 @@ static int engine_wa_list_verify(struct intel_context *ce, err_rq: i915_request_put(rq); +err_vma: + i915_vma_unpin(vma); err_unpin: intel_context_unpin(ce); err_pm: @@ -2267,7 +2264,6 @@ static int engine_wa_list_verify(struct intel_context *ce, } i915_gem_ww_ctx_fini(&ww); intel_engine_pm_put(ce->engine); - i915_vma_unpin(vma); i915_vma_put(vma); return err; } diff --git a/drivers/gpu/drm/i915/gt/selftest_workarounds.c b/drivers/gpu/drm/i915/gt/selftest_workarounds.c index 61a0532d0f3d..810ab026a55e 100644 --- a/drivers/gpu/drm/i915/gt/selftest_workarounds.c +++ b/drivers/gpu/drm/i915/gt/selftest_workarounds.c @@ -386,6 +386,25 @@ static struct i915_vma *create_batch(struct i915_address_space *vm) return ERR_PTR(err); } +static struct i915_vma * +create_scratch_pinned(struct i915_address_space *vm, int count) +{ + struct i915_vma *vma = create_scratch(vm, count); + int err; + + if (IS_ERR(vma)) + return vma; + + err = i915_vma_pin(vma, 0, 0, + i915_vma_is_ggtt(vma) ? 
PIN_GLOBAL : PIN_USER); + if (err) { + i915_vma_put(vma); + return ERR_PTR(err); + } + + return vma; +} + static u32 reg_write(u32 old, u32 new, u32 rsvd) { if (rsvd == 0x0000ffff) { @@ -489,7 +508,7 @@ static int check_dirty_whitelist(struct intel_context *ce) int err = 0, i, v; u32 *cs, *results; - scratch = create_scratch(ce->vm, 2 * ARRAY_SIZE(values) + 1); + scratch = create_scratch_pinned(ce->vm, 2 * ARRAY_SIZE(values) + 1); if (IS_ERR(scratch)) return PTR_ERR(scratch); @@ -1043,7 +1062,7 @@ static int live_isolated_whitelist(void *arg) vm = i915_gem_context_get_vm_rcu(c); - client[i].scratch[0] = create_scratch(vm, 1024); + client[i].scratch[0] = create_scratch_pinned(vm, 1024); if (IS_ERR(client[i].scratch[0])) { err = PTR_ERR(client[i].scratch[0]); i915_vm_put(vm); @@ -1051,7 +1070,7 @@ static int live_isolated_whitelist(void *arg) goto err; } - client[i].scratch[1] = create_scratch(vm, 1024); + client[i].scratch[1] = create_scratch_pinned(vm, 1024); if (IS_ERR(client[i].scratch[1])) { err = PTR_ERR(client[i].scratch[1]); i915_vma_unpin_and_release(&client[i].scratch[0], 0); From patchwork Fri Oct 16 10:44:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841485 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C21E8C35257 for ; Fri, 16 Oct 2020 10:45:41 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 70EF9207F7 for ; Fri, 16 Oct 2020 10:45:41 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 70EF9207F7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id A46EE6EB87; Fri, 16 Oct 2020 10:45:24 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 2E4B36EB2A for ; Fri, 16 Oct 2020 10:44:51 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:07 +0200 Message-Id: <20201016104444.1492028-25-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 24/61] drm/i915: Take reservation lock around i915_vma_pin. X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" We previously complained when ww == NULL. 
This function is now only used in selftests to pin an object, and ww locking is now fixed. Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- .../i915/gem/selftests/i915_gem_coherency.c | 14 +++++-------- drivers/gpu/drm/i915/i915_gem.c | 6 +++++- drivers/gpu/drm/i915/i915_vma.c | 4 +--- drivers/gpu/drm/i915/i915_vma.h | 20 +++++++++++++++---- 4 files changed, 27 insertions(+), 17 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c index 7049a6bbc03d..2e439bb269d6 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c @@ -199,16 +199,14 @@ static int gpu_set(struct context *ctx, unsigned long offset, u32 v) u32 *cs; int err; + vma = i915_gem_object_ggtt_pin(ctx->obj, NULL, 0, 0, 0); + if (IS_ERR(vma)) + return PTR_ERR(vma); + i915_gem_object_lock(ctx->obj, NULL); err = i915_gem_object_set_to_gtt_domain(ctx->obj, true); if (err) - goto out_unlock; - - vma = i915_gem_object_ggtt_pin(ctx->obj, NULL, 0, 0, 0); - if (IS_ERR(vma)) { - err = PTR_ERR(vma); - goto out_unlock; - } + goto out_unpin; rq = intel_engine_create_kernel_request(ctx->engine); if (IS_ERR(rq)) { @@ -248,9 +246,7 @@ static int gpu_set(struct context *ctx, unsigned long offset, u32 v) i915_request_add(rq); out_unpin: i915_vma_unpin(vma); -out_unlock: i915_gem_object_unlock(ctx->obj); - return err; } diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index e4097201f0e5..fe9449e7a02a 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -1036,7 +1036,11 @@ i915_gem_object_ggtt_pin_ww(struct drm_i915_gem_object *obj, return ERR_PTR(ret); } - ret = i915_vma_pin_ww(vma, ww, size, alignment, flags | PIN_GLOBAL); + if (ww) + ret = i915_vma_pin_ww(vma, ww, size, alignment, flags | PIN_GLOBAL); + else + ret = i915_vma_pin(vma, size, alignment, flags | PIN_GLOBAL); + if (ret) return ERR_PTR(ret); diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index f50250c8685a..ed6cf4529d5d 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -861,9 +861,7 @@ int i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww, int err; #ifdef CONFIG_PROVE_LOCKING - if (debug_locks && lockdep_is_held(&vma->vm->i915->drm.struct_mutex)) - WARN_ON(!ww); - if (debug_locks && ww && vma->resv) + if (debug_locks && !WARN_ON(!ww) && vma->resv) assert_vma_held(vma); #endif diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h index 3c951d5428cf..cea6e7b8611b 100644 --- a/drivers/gpu/drm/i915/i915_vma.h +++ b/drivers/gpu/drm/i915/i915_vma.h @@ -246,10 +246,22 @@ i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww, static inline int __must_check i915_vma_pin(struct i915_vma *vma, u64 size, u64 alignment, u64 flags) { -#ifdef CONFIG_LOCKDEP - WARN_ON_ONCE(vma->resv && dma_resv_held(vma->resv)); -#endif - return i915_vma_pin_ww(vma, NULL, size, alignment, flags); + struct i915_gem_ww_ctx ww; + int err; + + i915_gem_ww_ctx_init(&ww, true); +retry: + err = i915_gem_object_lock(vma->obj, &ww); + if (!err) + err = i915_vma_pin_ww(vma, &ww, size, alignment, flags); + if (err == -EDEADLK) { + err = i915_gem_ww_ctx_backoff(&ww); + if (!err) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); + + return err; } int i915_ggtt_pin(struct i915_vma *vma, struct i915_gem_ww_ctx *ww, From patchwork Fri Oct 16 10:44:08 2020 Content-Type: text/plain; 
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841377 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.9 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,UNWANTED_LANGUAGE_BODY,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 82D82C433E7 for ; Fri, 16 Oct 2020 10:45:22 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 1F457207F7 for ; Fri, 16 Oct 2020 10:45:22 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1F457207F7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id E091E6EB22; Fri, 16 Oct 2020 10:44:59 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 48C7B6EB1B for ; Fri, 16 Oct 2020 10:44:51 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:08 +0200 Message-Id: <20201016104444.1492028-26-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 25/61] drm/i915: Make intel_init_workaround_bb more compatible with ww locking. X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Make creation separate from pinning, in order to take the lock only once, and pin the mapping with the lock held. 
Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gt/intel_lrc.c | 43 ++++++++++++++++++++++------- 1 file changed, 33 insertions(+), 10 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c index 7e256b144c68..7e34417e61ea 100644 --- a/drivers/gpu/drm/i915/gt/intel_lrc.c +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c @@ -3974,7 +3974,7 @@ gen10_init_indirectctx_bb(struct intel_engine_cs *engine, u32 *batch) #define CTX_WA_BB_OBJ_SIZE (PAGE_SIZE) -static int lrc_setup_wa_ctx(struct intel_engine_cs *engine) +static int lrc_init_wa_ctx(struct intel_engine_cs *engine) { struct drm_i915_gem_object *obj; struct i915_vma *vma; @@ -3990,10 +3990,6 @@ static int lrc_setup_wa_ctx(struct intel_engine_cs *engine) goto err; } - err = i915_ggtt_pin(vma, NULL, 0, PIN_HIGH); - if (err) - goto err; - engine->wa_ctx.vma = vma; return 0; @@ -4002,9 +3998,16 @@ static int lrc_setup_wa_ctx(struct intel_engine_cs *engine) return err; } -static void lrc_destroy_wa_ctx(struct intel_engine_cs *engine) +static void lrc_destroy_wa_ctx(struct intel_engine_cs *engine, bool unpin) { - i915_vma_unpin_and_release(&engine->wa_ctx.vma, 0); + if (!engine->wa_ctx.vma) + return; + + if (unpin) + i915_vma_unpin(engine->wa_ctx.vma); + + i915_vma_put(engine->wa_ctx.vma); + engine->wa_ctx.vma = NULL; } typedef u32 *(*wa_bb_func_t)(struct intel_engine_cs *engine, u32 *batch); @@ -4016,6 +4019,7 @@ static int intel_init_workaround_bb(struct intel_engine_cs *engine) &wa_ctx->per_ctx }; wa_bb_func_t wa_bb_fn[2]; void *batch, *batch_ptr; + struct i915_gem_ww_ctx ww; unsigned int i; int ret; @@ -4043,13 +4047,21 @@ static int intel_init_workaround_bb(struct intel_engine_cs *engine) return 0; } - ret = lrc_setup_wa_ctx(engine); + ret = lrc_init_wa_ctx(engine); if (ret) { drm_dbg(&engine->i915->drm, "Failed to setup context WA page: %d\n", ret); return ret; } + i915_gem_ww_ctx_init(&ww, true); +retry: + ret = i915_gem_object_lock(wa_ctx->vma->obj, &ww); + if (!ret) + ret = i915_ggtt_pin(wa_ctx->vma, &ww, 0, PIN_HIGH); + if (ret) + goto err; + batch = i915_gem_object_pin_map(wa_ctx->vma->obj, I915_MAP_WB); /* @@ -4073,8 +4085,19 @@ static int intel_init_workaround_bb(struct intel_engine_cs *engine) __i915_gem_object_flush_map(wa_ctx->vma->obj, 0, batch_ptr - batch); __i915_gem_object_release_map(wa_ctx->vma->obj); + + if (ret) + i915_vma_unpin(wa_ctx->vma); + +err: + if (ret == -EDEADLK) { + ret = i915_gem_ww_ctx_backoff(&ww); + if (!ret) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); if (ret) - lrc_destroy_wa_ctx(engine); + lrc_destroy_wa_ctx(engine, false); return ret; } @@ -5149,7 +5172,7 @@ static void execlists_release(struct intel_engine_cs *engine) execlists_shutdown(engine); intel_engine_cleanup_common(engine); - lrc_destroy_wa_ctx(engine); + lrc_destroy_wa_ctx(engine, true); } static void From patchwork Fri Oct 16 10:44:09 2020 Content-Type: text/plain; charset="utf-8" X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841341
From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:09 +0200 Message-Id: <20201016104444.1492028-27-maarten.lankhorst@linux.intel.com> Subject: [Intel-gfx] [PATCH v4 26/61] drm/i915: Make __engine_unpark() compatible with ww locking. Take the ww lock around __engine_unpark(). Because of the many places where rpm is used, I chose the safest option and used a trylock to opportunistically take this lock for __engine_unpark().
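The guarded debug path then looks roughly like this (a sketch of the hunk below; the poison value here is only illustrative, standing in for the actual debug poisoning):

if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM) && ce->state &&
    i915_gem_object_trylock(ce->state->obj)) {
	struct drm_i915_gem_object *obj = ce->state->obj;
	void *map = i915_gem_object_pin_map(obj, i915_coherent_map_type(engine->i915));

	if (!IS_ERR(map)) {
		memset(map, POISON_INUSE, obj->base.size); /* illustrative poison value */
		i915_gem_object_flush_map(obj);
		i915_gem_object_unpin_map(obj);
	}
	i915_gem_object_unlock(obj);
}

If the trylock fails we simply skip the poisoning; it is only a debug aid, and blocking here could deadlock against a concurrent holder of the reservation lock.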
Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gt/intel_engine_pm.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c index f7b2e07e2229..1ab9597a5c70 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c @@ -33,7 +33,8 @@ static int __engine_unpark(struct intel_wakeref *wf) GEM_BUG_ON(test_bit(CONTEXT_VALID_BIT, &ce->flags)); /* First poison the image to verify we never fully trust it */ - if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM) && ce->state) { + if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM) && ce->state && + i915_gem_object_trylock(ce->state->obj)) { struct drm_i915_gem_object *obj = ce->state->obj; int type = i915_coherent_map_type(engine->i915); void *map; @@ -44,6 +45,7 @@ static int __engine_unpark(struct intel_wakeref *wf) i915_gem_object_flush_map(obj); i915_gem_object_unpin_map(obj); } + i915_gem_object_unlock(obj); } ce->ops->reset(ce); From patchwork Fri Oct 16 10:44:10 2020 Content-Type: text/plain; charset="utf-8" X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841465 From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:10 +0200 Message-Id: <20201016104444.1492028-28-maarten.lankhorst@linux.intel.com> Subject: [Intel-gfx] [PATCH v4 27/61] drm/i915: Take obj lock around set_domain ioctl We need to lock the object to move it to the correct domain; add the missing lock.
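The corrected ordering in the ioctl is, in sketch form (unwind labels as in the diff below; the GTT case shown, WC/CPU are analogous):

err = i915_gem_object_lock_interruptible(obj, NULL);
if (err)
	goto out;

err = i915_gem_object_pin_pages(obj); /* now called with the lock held */
if (err)
	goto out_unlock;

err = i915_gem_object_set_to_gtt_domain(obj, write_domain);

i915_gem_object_unpin_pages(obj);
out_unlock:
i915_gem_object_unlock(obj);

if (!err && write_domain)
	i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU);
out:
i915_gem_object_put(obj);

Note that the frontbuffer invalidation is also made conditional on success and happens after dropping the lock.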
Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_domain.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c index 9adced5a6843..0c0a8579f495 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c @@ -531,6 +531,10 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data, goto out; } + err = i915_gem_object_lock_interruptible(obj, NULL); + if (err) + goto out; + /* * Flush and acquire obj->pages so that we are coherent through * direct access in memory with previous cached writes through @@ -542,11 +546,7 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data, */ err = i915_gem_object_pin_pages(obj); if (err) - goto out; - - err = i915_gem_object_lock_interruptible(obj, NULL); - if (err) - goto out_unpin; + goto out_unlock; if (read_domains & I915_GEM_DOMAIN_WC) err = i915_gem_object_set_to_wc_domain(obj, write_domain); @@ -558,13 +558,14 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data, /* And bump the LRU for this access */ i915_gem_object_bump_inactive_ggtt(obj); + i915_gem_object_unpin_pages(obj); + +out_unlock: i915_gem_object_unlock(obj); - if (write_domain) + if (!err && write_domain) i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU); -out_unpin: - i915_gem_object_unpin_pages(obj); out: i915_gem_object_put(obj); return err; From patchwork Fri Oct 16 10:44:11 2020 Content-Type: text/plain; charset="utf-8" X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841395 From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:11 +0200 Message-Id: <20201016104444.1492028-29-maarten.lankhorst@linux.intel.com> Subject: [Intel-gfx] [PATCH v4 28/61] drm/i915: Defer pin calls
in buffer pool until first use by caller. We need to take the obj lock to pin pages, so wait until the callers have done so before making the object unshrinkable. Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 2 + .../gpu/drm/i915/gem/i915_gem_object_blt.c | 6 +++ .../gpu/drm/i915/gt/intel_gt_buffer_pool.c | 47 +++++++++---------- .../gpu/drm/i915/gt/intel_gt_buffer_pool.h | 5 ++ .../drm/i915/gt/intel_gt_buffer_pool_types.h | 1 + 5 files changed, 35 insertions(+), 26 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index c9db199c4d81..da265b04eb76 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -1336,6 +1336,7 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb, err = PTR_ERR(cmd); goto err_pool; } + intel_gt_buffer_pool_mark_used(pool); batch = i915_vma_instance(pool->obj, vma->vm, NULL); if (IS_ERR(batch)) { @@ -2628,6 +2629,7 @@ static int eb_parse(struct i915_execbuffer *eb) err = PTR_ERR(shadow); goto err; } + intel_gt_buffer_pool_mark_used(pool); i915_gem_object_set_readonly(shadow->obj); shadow->private = pool; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c index aee7ad3cc3c6..e0b873c3f46a 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c @@ -54,6 +54,9 @@ struct i915_vma *intel_emit_vma_fill_blt(struct intel_context *ce, if (unlikely(err)) goto out_put; + /* we pinned the pool, mark it as such */ + intel_gt_buffer_pool_mark_used(pool); + cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_WC); if (IS_ERR(cmd)) { err = PTR_ERR(cmd); @@ -276,6 +279,9 @@ struct i915_vma *intel_emit_vma_copy_blt(struct intel_context *ce, if (unlikely(err)) goto out_put; + /* we pinned the pool, mark it as such */ + intel_gt_buffer_pool_mark_used(pool); + cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_WC); if (IS_ERR(cmd)) { err = PTR_ERR(cmd); diff --git a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c index 104cb30e8c13..030759305196 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c +++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c @@ -98,28 +98,6 @@ static void pool_free_work(struct work_struct *wrk) round_jiffies_up_relative(HZ)); } -static int pool_active(struct i915_active *ref) -{ - struct intel_gt_buffer_pool_node *node = - container_of(ref, typeof(*node), active); - struct dma_resv *resv = node->obj->base.resv; - int err; - - if (dma_resv_trylock(resv)) { - dma_resv_add_excl_fence(resv, NULL); - dma_resv_unlock(resv); - } - - err = i915_gem_object_pin_pages(node->obj); - if (err) - return err; - - /* Hide this pinned object from the shrinker until retired */ - i915_gem_object_make_unshrinkable(node->obj); - - return 0; -} - __i915_active_call static void pool_retire(struct i915_active *ref) { @@ -129,10 +107,13 @@ static void pool_retire(struct i915_active *ref) struct list_head *list = bucket_for_size(pool, node->obj->base.size); unsigned long flags; - i915_gem_object_unpin_pages(node->obj); + if
(node->pinned) { + i915_gem_object_unpin_pages(node->obj); - /* Return this object to the shrinker pool */ - i915_gem_object_make_purgeable(node->obj); + /* Return this object to the shrinker pool */ + i915_gem_object_make_purgeable(node->obj); + node->pinned = false; + } GEM_BUG_ON(node->age); spin_lock_irqsave(&pool->lock, flags); @@ -144,6 +125,19 @@ static void pool_retire(struct i915_active *ref) round_jiffies_up_relative(HZ)); } +void intel_gt_buffer_pool_mark_used(struct intel_gt_buffer_pool_node *node) +{ + assert_object_held(node->obj); + + if (node->pinned) + return; + + __i915_gem_object_pin_pages(node->obj); + /* Hide this pinned object from the shrinker until retired */ + i915_gem_object_make_unshrinkable(node->obj); + node->pinned = true; +} + static struct intel_gt_buffer_pool_node * node_create(struct intel_gt_buffer_pool *pool, size_t sz) { @@ -158,7 +152,8 @@ node_create(struct intel_gt_buffer_pool *pool, size_t sz) node->age = 0; node->pool = pool; - i915_active_init(&node->active, pool_active, pool_retire); + node->pinned = false; + i915_active_init(&node->active, NULL, pool_retire); obj = i915_gem_object_create_internal(gt->i915, sz); if (IS_ERR(obj)) { diff --git a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h index 42cbac003e8a..9878ce9a07ab 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h +++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h @@ -17,10 +17,15 @@ struct i915_request; struct intel_gt_buffer_pool_node * intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size); +void intel_gt_buffer_pool_mark_used(struct intel_gt_buffer_pool_node *node); + static inline int intel_gt_buffer_pool_mark_active(struct intel_gt_buffer_pool_node *node, struct i915_request *rq) { + /* did we call mark_used? 
*/ + GEM_WARN_ON(!node->pinned); + return i915_active_add_request(&node->active, rq); } diff --git a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h index bcf1658c9633..0401825e829d 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h +++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h @@ -31,6 +31,7 @@ struct intel_gt_buffer_pool_node { struct rcu_head rcu; }; unsigned long age; + bool pinned; }; #endif /* INTEL_GT_BUFFER_POOL_TYPES_H */ From patchwork Fri Oct 16 10:44:12 2020 Content-Type: text/plain; charset="utf-8" X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841359 From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:12 +0200 Message-Id: <20201016104444.1492028-30-maarten.lankhorst@linux.intel.com> Subject: [Intel-gfx] [PATCH v4 29/61] drm/i915: Fix pread/pwrite to work with new locking rules. We are removing obj->mm.lock, and need to take the reservation lock before we can pin pages. Move the page pinning into the helper, and merge the gtt pwrite/pread preparation and cleanup paths. The fence lock is also removed; it would conflict with the fence annotations, because of memory allocations done when pagefaulting inside copy_*_user.
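The read and write paths then share a single prepare/cleanup pair; a caller sketch (the per-page copy loop is elided, signatures taken from the diff below):

vma = i915_gem_gtt_prepare(obj, &node, write); /* ww lock, set domain, pin */
if (IS_ERR(vma)) {
	ret = PTR_ERR(vma);
	goto out_rpm;
}

/* copy through the GGTT aperture; this may fault on the user pointer
 * and allocate memory, so no fence or object lock may be held here */

i915_gem_gtt_cleanup(obj, &node, vma); /* unpin pages and the vma/node */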
Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/Makefile | 1 - drivers/gpu/drm/i915/gem/i915_gem_fence.c | 95 -------- drivers/gpu/drm/i915/gem/i915_gem_object.h | 5 - drivers/gpu/drm/i915/i915_gem.c | 247 +++++++++++---------- 4 files changed, 133 insertions(+), 215 deletions(-) delete mode 100644 drivers/gpu/drm/i915/gem/i915_gem_fence.c diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile index e5574e506a5c..58d129b5a65a 100644 --- a/drivers/gpu/drm/i915/Makefile +++ b/drivers/gpu/drm/i915/Makefile @@ -134,7 +134,6 @@ gem-y += \ gem/i915_gem_dmabuf.o \ gem/i915_gem_domain.o \ gem/i915_gem_execbuffer.o \ - gem/i915_gem_fence.o \ gem/i915_gem_internal.o \ gem/i915_gem_object.o \ gem/i915_gem_object_blt.o \ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_fence.c b/drivers/gpu/drm/i915/gem/i915_gem_fence.c deleted file mode 100644 index 8ab842c80f99..000000000000 --- a/drivers/gpu/drm/i915/gem/i915_gem_fence.c +++ /dev/null @@ -1,95 +0,0 @@ -/* - * SPDX-License-Identifier: MIT - * - * Copyright © 2019 Intel Corporation - */ - -#include "i915_drv.h" -#include "i915_gem_object.h" - -struct stub_fence { - struct dma_fence dma; - struct i915_sw_fence chain; -}; - -static int __i915_sw_fence_call -stub_notify(struct i915_sw_fence *fence, enum i915_sw_fence_notify state) -{ - struct stub_fence *stub = container_of(fence, typeof(*stub), chain); - - switch (state) { - case FENCE_COMPLETE: - dma_fence_signal(&stub->dma); - break; - - case FENCE_FREE: - dma_fence_put(&stub->dma); - break; - } - - return NOTIFY_DONE; -} - -static const char *stub_driver_name(struct dma_fence *fence) -{ - return DRIVER_NAME; -} - -static const char *stub_timeline_name(struct dma_fence *fence) -{ - return "object"; -} - -static void stub_release(struct dma_fence *fence) -{ - struct stub_fence *stub = container_of(fence, typeof(*stub), dma); - - i915_sw_fence_fini(&stub->chain); - - BUILD_BUG_ON(offsetof(typeof(*stub), dma)); - dma_fence_free(&stub->dma); -} - -static const struct dma_fence_ops stub_fence_ops = { - .get_driver_name = stub_driver_name, - .get_timeline_name = stub_timeline_name, - .release = stub_release, -}; - -struct dma_fence * -i915_gem_object_lock_fence(struct drm_i915_gem_object *obj) -{ - struct stub_fence *stub; - - assert_object_held(obj); - - stub = kmalloc(sizeof(*stub), GFP_KERNEL); - if (!stub) - return NULL; - - i915_sw_fence_init(&stub->chain, stub_notify); - dma_fence_init(&stub->dma, &stub_fence_ops, &stub->chain.wait.lock, - 0, 0); - - if (i915_sw_fence_await_reservation(&stub->chain, - obj->base.resv, NULL, true, - i915_fence_timeout(to_i915(obj->base.dev)), - I915_FENCE_GFP) < 0) - goto err; - - dma_resv_add_excl_fence(obj->base.resv, &stub->dma); - - return &stub->dma; - -err: - stub_release(&stub->dma); - return NULL; -} - -void i915_gem_object_unlock_fence(struct drm_i915_gem_object *obj, - struct dma_fence *fence) -{ - struct stub_fence *stub = container_of(fence, typeof(*stub), dma); - - i915_sw_fence_commit(&stub->chain); -} diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index d3086a59b5ad..9e87a2547b0d 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -157,11 +157,6 @@ static inline void i915_gem_object_unlock(struct drm_i915_gem_object *obj) dma_resv_unlock(obj->base.resv); } -struct dma_fence * -i915_gem_object_lock_fence(struct drm_i915_gem_object *obj); -void i915_gem_object_unlock_fence(struct 
drm_i915_gem_object *obj, - struct dma_fence *fence); - static inline void i915_gem_object_set_readonly(struct drm_i915_gem_object *obj) { diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index fe9449e7a02a..c58ea2490bf4 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -184,23 +184,38 @@ i915_gem_phys_pwrite(struct drm_i915_gem_object *obj, struct drm_i915_gem_pwrite *args, struct drm_file *file) { - void *vaddr = sg_page(obj->mm.pages->sgl) + args->offset; + void *vaddr; char __user *user_data = u64_to_user_ptr(args->data_ptr); + int ret; + + ret = i915_gem_object_lock_interruptible(obj, NULL); + if (ret) + return ret; + ret = i915_gem_object_pin_pages(obj); + i915_gem_object_unlock(obj); + if (ret) + return ret; + + vaddr = sg_page(obj->mm.pages->sgl) + args->offset; /* * We manually control the domain here and pretend that it * remains coherent i.e. in the GTT domain, like shmem_pwrite. */ i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU); - if (copy_from_user(vaddr, user_data, args->size)) - return -EFAULT; + if (copy_from_user(vaddr, user_data, args->size)) { + ret = -EFAULT; + goto err; + } drm_clflush_virt_range(vaddr, args->size); intel_gt_chipset_flush(&to_i915(obj->base.dev)->gt); i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU); - return 0; +err: + i915_gem_object_unpin_pages(obj); + return ret; } static int @@ -330,7 +345,6 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj, { unsigned int needs_clflush; unsigned int idx, offset; - struct dma_fence *fence; char __user *user_data; u64 remain; int ret; @@ -339,19 +353,17 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj, if (ret) return ret; + ret = i915_gem_object_pin_pages(obj); + if (ret) + goto err_unlock; + ret = i915_gem_object_prepare_read(obj, &needs_clflush); - if (ret) { - i915_gem_object_unlock(obj); - return ret; - } + if (ret) + goto err_unpin; - fence = i915_gem_object_lock_fence(obj); i915_gem_object_finish_access(obj); i915_gem_object_unlock(obj); - if (!fence) - return -ENOMEM; - remain = args->size; user_data = u64_to_user_ptr(args->data_ptr); offset = offset_in_page(args->offset); @@ -369,7 +381,13 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj, offset = 0; } - i915_gem_object_unlock_fence(obj, fence); + i915_gem_object_unpin_pages(obj); + return ret; + +err_unpin: + i915_gem_object_unpin_pages(obj); +err_unlock: + i915_gem_object_unlock(obj); return ret; } @@ -397,52 +415,102 @@ gtt_user_read(struct io_mapping *mapping, return unwritten; } -static int -i915_gem_gtt_pread(struct drm_i915_gem_object *obj, - const struct drm_i915_gem_pread *args) +static struct i915_vma *i915_gem_gtt_prepare(struct drm_i915_gem_object *obj, + struct drm_mm_node *node, + bool write) { struct drm_i915_private *i915 = to_i915(obj->base.dev); struct i915_ggtt *ggtt = &i915->ggtt; - intel_wakeref_t wakeref; - struct drm_mm_node node; - struct dma_fence *fence; - void __user *user_data; struct i915_vma *vma; - u64 remain, offset; + struct i915_gem_ww_ctx ww; int ret; - wakeref = intel_runtime_pm_get(&i915->runtime_pm); + i915_gem_ww_ctx_init(&ww, true); +retry: vma = ERR_PTR(-ENODEV); + ret = i915_gem_object_lock(obj, &ww); + if (ret) + goto err_ww; + + ret = i915_gem_object_set_to_gtt_domain(obj, write); + if (ret) + goto err_ww; + if (!i915_gem_object_is_tiled(obj)) - vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, - PIN_MAPPABLE | - PIN_NONBLOCK /* NOWARN */ | - PIN_NOEVICT); - if (!IS_ERR(vma)) { - node.start = i915_ggtt_offset(vma); - 
node.flags = 0; + vma = i915_gem_object_ggtt_pin_ww(obj, &ww, NULL, 0, 0, + PIN_MAPPABLE | + PIN_NONBLOCK /* NOWARN */ | + PIN_NOEVICT); + if (vma == ERR_PTR(-EDEADLK)) { + ret = -EDEADLK; + goto err_ww; + } else if (!IS_ERR(vma)) { + node->start = i915_ggtt_offset(vma); + node->flags = 0; } else { - ret = insert_mappable_node(ggtt, &node, PAGE_SIZE); + ret = insert_mappable_node(ggtt, node, PAGE_SIZE); if (ret) - goto out_rpm; - GEM_BUG_ON(!drm_mm_node_allocated(&node)); + goto err_ww; + GEM_BUG_ON(!drm_mm_node_allocated(node)); + vma = NULL; } - ret = i915_gem_object_lock_interruptible(obj, NULL); - if (ret) - goto out_unpin; - - ret = i915_gem_object_set_to_gtt_domain(obj, false); + ret = i915_gem_object_pin_pages(obj); if (ret) { - i915_gem_object_unlock(obj); - goto out_unpin; + if (drm_mm_node_allocated(node)) { + ggtt->vm.clear_range(&ggtt->vm, node->start, node->size); + remove_mappable_node(ggtt, node); + } else { + i915_vma_unpin(vma); + } } - fence = i915_gem_object_lock_fence(obj); - i915_gem_object_unlock(obj); - if (!fence) { - ret = -ENOMEM; - goto out_unpin; +err_ww: + if (ret == -EDEADLK) { + ret = i915_gem_ww_ctx_backoff(&ww); + if (!ret) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); + + return ret ? ERR_PTR(ret) : vma; +} + +static void i915_gem_gtt_cleanup(struct drm_i915_gem_object *obj, + struct drm_mm_node *node, + struct i915_vma *vma) +{ + struct drm_i915_private *i915 = to_i915(obj->base.dev); + struct i915_ggtt *ggtt = &i915->ggtt; + + i915_gem_object_unpin_pages(obj); + if (drm_mm_node_allocated(node)) { + ggtt->vm.clear_range(&ggtt->vm, node->start, node->size); + remove_mappable_node(ggtt, node); + } else { + i915_vma_unpin(vma); + } +} + +static int +i915_gem_gtt_pread(struct drm_i915_gem_object *obj, + const struct drm_i915_gem_pread *args) +{ + struct drm_i915_private *i915 = to_i915(obj->base.dev); + struct i915_ggtt *ggtt = &i915->ggtt; + intel_wakeref_t wakeref; + struct drm_mm_node node; + void __user *user_data; + struct i915_vma *vma; + u64 remain, offset; + int ret = 0; + + wakeref = intel_runtime_pm_get(&i915->runtime_pm); + + vma = i915_gem_gtt_prepare(obj, &node, false); + if (IS_ERR(vma)) { + ret = PTR_ERR(vma); + goto out_rpm; } user_data = u64_to_user_ptr(args->data_ptr); @@ -479,14 +547,7 @@ i915_gem_gtt_pread(struct drm_i915_gem_object *obj, offset += page_length; } - i915_gem_object_unlock_fence(obj, fence); -out_unpin: - if (drm_mm_node_allocated(&node)) { - ggtt->vm.clear_range(&ggtt->vm, node.start, node.size); - remove_mappable_node(ggtt, &node); - } else { - i915_vma_unpin(vma); - } + i915_gem_gtt_cleanup(obj, &node, vma); out_rpm: intel_runtime_pm_put(&i915->runtime_pm, wakeref); return ret; @@ -538,15 +599,10 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data, if (ret) goto out; - ret = i915_gem_object_pin_pages(obj); - if (ret) - goto out; - ret = i915_gem_shmem_pread(obj, args); if (ret == -EFAULT || ret == -ENODEV) ret = i915_gem_gtt_pread(obj, args); - i915_gem_object_unpin_pages(obj); out: i915_gem_object_put(obj); return ret; @@ -594,11 +650,10 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj, struct intel_runtime_pm *rpm = &i915->runtime_pm; intel_wakeref_t wakeref; struct drm_mm_node node; - struct dma_fence *fence; struct i915_vma *vma; u64 remain, offset; void __user *user_data; - int ret; + int ret = 0; if (i915_gem_object_has_struct_page(obj)) { /* @@ -616,37 +671,10 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj, wakeref = intel_runtime_pm_get(rpm); } - vma = ERR_PTR(-ENODEV); - if 
(!i915_gem_object_is_tiled(obj)) - vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, - PIN_MAPPABLE | - PIN_NONBLOCK /* NOWARN */ | - PIN_NOEVICT); - if (!IS_ERR(vma)) { - node.start = i915_ggtt_offset(vma); - node.flags = 0; - } else { - ret = insert_mappable_node(ggtt, &node, PAGE_SIZE); - if (ret) - goto out_rpm; - GEM_BUG_ON(!drm_mm_node_allocated(&node)); - } - - ret = i915_gem_object_lock_interruptible(obj, NULL); - if (ret) - goto out_unpin; - - ret = i915_gem_object_set_to_gtt_domain(obj, true); - if (ret) { - i915_gem_object_unlock(obj); - goto out_unpin; - } - - fence = i915_gem_object_lock_fence(obj); - i915_gem_object_unlock(obj); - if (!fence) { - ret = -ENOMEM; - goto out_unpin; + vma = i915_gem_gtt_prepare(obj, &node, true); + if (IS_ERR(vma)) { + ret = PTR_ERR(vma); + goto out_rpm; } i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU); @@ -695,14 +723,7 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj, intel_gt_flush_ggtt_writes(ggtt->vm.gt); i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU); - i915_gem_object_unlock_fence(obj, fence); -out_unpin: - if (drm_mm_node_allocated(&node)) { - ggtt->vm.clear_range(&ggtt->vm, node.start, node.size); - remove_mappable_node(ggtt, &node); - } else { - i915_vma_unpin(vma); - } + i915_gem_gtt_cleanup(obj, &node, vma); out_rpm: intel_runtime_pm_put(rpm, wakeref); return ret; @@ -742,7 +763,6 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj, unsigned int partial_cacheline_write; unsigned int needs_clflush; unsigned int offset, idx; - struct dma_fence *fence; void __user *user_data; u64 remain; int ret; @@ -751,19 +771,17 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj, if (ret) return ret; + ret = i915_gem_object_pin_pages(obj); + if (ret) + goto err_unlock; + ret = i915_gem_object_prepare_write(obj, &needs_clflush); - if (ret) { - i915_gem_object_unlock(obj); - return ret; - } + if (ret) + goto err_unpin; - fence = i915_gem_object_lock_fence(obj); i915_gem_object_finish_access(obj); i915_gem_object_unlock(obj); - if (!fence) - return -ENOMEM; - /* If we don't overwrite a cacheline completely we need to be * careful to have up-to-date data by first clflushing. Don't * overcomplicate things and flush the entire patch. 
@@ -791,8 +809,14 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj, } i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU); - i915_gem_object_unlock_fence(obj, fence); + i915_gem_object_unpin_pages(obj); + return ret; + +err_unpin: + i915_gem_object_unpin_pages(obj); +err_unlock: + i915_gem_object_unlock(obj); return ret; } @@ -849,10 +873,6 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data, if (ret) goto err; - ret = i915_gem_object_pin_pages(obj); - if (ret) - goto err; - ret = -EFAULT; /* We can only do the GTT pwrite on untiled buffers, as otherwise * it would end up going through the fenced access, and we'll get @@ -875,7 +895,6 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data, ret = i915_gem_phys_pwrite(obj, args, file); } - i915_gem_object_unpin_pages(obj); err: i915_gem_object_put(obj); return ret; From patchwork Fri Oct 16 10:44:13 2020 Content-Type: text/plain; charset="utf-8" X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841339 From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:13 +0200 Message-Id: <20201016104444.1492028-31-maarten.lankhorst@linux.intel.com> Subject: [Intel-gfx] [PATCH v4 30/61] drm/i915: Fix workarounds selftest, part 1 pin_map needs the ww lock, so ensure we pin both objects before submission.
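For callers outside any ww transaction, the new i915_gem_object_pin_map_unlocked() simply brackets pin_map with the object lock. Inside check_dirty_whitelist() the batch and scratch objects are instead locked and pinned together up front; roughly (a sketch, request construction elided):

i915_gem_ww_ctx_init(&ww, false);
retry:
err = i915_gem_object_lock(scratch->obj, &ww);
if (!err)
	err = i915_gem_object_lock(batch->obj, &ww);
if (!err)
	err = intel_context_pin_ww(ce, &ww);
if (!err) {
	/* both pin_map calls now run under the same ww transaction */
	cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
	if (IS_ERR(cs))
		err = PTR_ERR(cs);
}
if (err == -EDEADLK) {
	err = i915_gem_ww_ctx_backoff(&ww);
	if (!err)
		goto retry;
}
/* ... build and run the request, unpin the maps ... */
i915_gem_ww_ctx_fini(&ww);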
Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_object.h | 3 + drivers/gpu/drm/i915/gem/i915_gem_pages.c | 12 +++ .../gpu/drm/i915/gt/selftest_workarounds.c | 76 ++++++++++++------- 3 files changed, 64 insertions(+), 27 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 9e87a2547b0d..8db84ce09d9f 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -389,6 +389,9 @@ enum i915_map_type { void *__must_check i915_gem_object_pin_map(struct drm_i915_gem_object *obj, enum i915_map_type type); +void *__must_check i915_gem_object_pin_map_unlocked(struct drm_i915_gem_object *obj, + enum i915_map_type type); + void __i915_gem_object_flush_map(struct drm_i915_gem_object *obj, unsigned long offset, unsigned long size); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c index 00ce88c609f9..ef1d5fabd077 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c @@ -409,6 +409,18 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj, goto out_unlock; } +void *i915_gem_object_pin_map_unlocked(struct drm_i915_gem_object *obj, + enum i915_map_type type) +{ + void *ret; + + i915_gem_object_lock(obj, NULL); + ret = i915_gem_object_pin_map(obj, type); + i915_gem_object_unlock(obj); + + return ret; +} + void __i915_gem_object_flush_map(struct drm_i915_gem_object *obj, unsigned long offset, unsigned long size) diff --git a/drivers/gpu/drm/i915/gt/selftest_workarounds.c b/drivers/gpu/drm/i915/gt/selftest_workarounds.c index 810ab026a55e..69da2147ed3b 100644 --- a/drivers/gpu/drm/i915/gt/selftest_workarounds.c +++ b/drivers/gpu/drm/i915/gt/selftest_workarounds.c @@ -111,7 +111,7 @@ read_nonprivs(struct i915_gem_context *ctx, struct intel_engine_cs *engine) i915_gem_object_set_cache_coherency(result, I915_CACHE_LLC); - cs = i915_gem_object_pin_map(result, I915_MAP_WB); + cs = i915_gem_object_pin_map_unlocked(result, I915_MAP_WB); if (IS_ERR(cs)) { err = PTR_ERR(cs); goto err_obj; @@ -217,7 +217,7 @@ static int check_whitelist(struct i915_gem_context *ctx, i915_gem_object_lock(results, NULL); intel_wedge_on_timeout(&wedge, engine->gt, HZ / 5) /* safety net! 
*/ err = i915_gem_object_set_to_cpu_domain(results, false); - i915_gem_object_unlock(results); + if (intel_gt_is_wedged(engine->gt)) err = -EIO; if (err) @@ -245,6 +245,7 @@ static int check_whitelist(struct i915_gem_context *ctx, i915_gem_object_unpin_map(results); out_put: + i915_gem_object_unlock(results); i915_gem_object_put(results); return err; } @@ -520,6 +521,7 @@ static int check_dirty_whitelist(struct intel_context *ce) for (i = 0; i < engine->whitelist.count; i++) { u32 reg = i915_mmio_reg_offset(engine->whitelist.list[i].reg); + struct i915_gem_ww_ctx ww; u64 addr = scratch->node.start; struct i915_request *rq; u32 srm, lrm, rsvd; @@ -535,6 +537,29 @@ static int check_dirty_whitelist(struct intel_context *ce) ro_reg = ro_register(reg); + i915_gem_ww_ctx_init(&ww, false); +retry: + cs = NULL; + err = i915_gem_object_lock(scratch->obj, &ww); + if (!err) + err = i915_gem_object_lock(batch->obj, &ww); + if (!err) + err = intel_context_pin_ww(ce, &ww); + if (err) + goto out; + + cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC); + if (IS_ERR(cs)) { + err = PTR_ERR(cs); + goto out_ctx; + } + + results = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB); + if (IS_ERR(results)) { + err = PTR_ERR(results); + goto out_unmap_batch; + } + /* Clear non priv flags */ reg &= RING_FORCE_TO_NONPRIV_ADDRESS_MASK; @@ -546,12 +571,6 @@ static int check_dirty_whitelist(struct intel_context *ce) pr_debug("%s: Writing garbage to %x\n", engine->name, reg); - cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC); - if (IS_ERR(cs)) { - err = PTR_ERR(cs); - goto out_batch; - } - /* SRM original */ *cs++ = srm; *cs++ = reg; @@ -598,11 +617,12 @@ static int check_dirty_whitelist(struct intel_context *ce) i915_gem_object_flush_map(batch->obj); i915_gem_object_unpin_map(batch->obj); intel_gt_chipset_flush(engine->gt); + cs = NULL; - rq = intel_context_create_request(ce); + rq = i915_request_create(ce); if (IS_ERR(rq)) { err = PTR_ERR(rq); - goto out_batch; + goto out_unmap_scratch; } if (engine->emit_init_breadcrumb) { /* Be nice if we hang */ @@ -611,20 +631,16 @@ static int check_dirty_whitelist(struct intel_context *ce) goto err_request; } - i915_vma_lock(batch); err = i915_request_await_object(rq, batch->obj, false); if (err == 0) err = i915_vma_move_to_active(batch, rq, 0); - i915_vma_unlock(batch); if (err) goto err_request; - i915_vma_lock(scratch); err = i915_request_await_object(rq, scratch->obj, true); if (err == 0) err = i915_vma_move_to_active(scratch, rq, EXEC_OBJECT_WRITE); - i915_vma_unlock(scratch); if (err) goto err_request; @@ -640,13 +656,7 @@ static int check_dirty_whitelist(struct intel_context *ce) pr_err("%s: Futzing %x timedout; cancelling test\n", engine->name, reg); intel_gt_set_wedged(engine->gt); - goto out_batch; - } - - results = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB); - if (IS_ERR(results)) { - err = PTR_ERR(results); - goto out_batch; + goto out_unmap_scratch; } GEM_BUG_ON(values[ARRAY_SIZE(values) - 1] != 0xffffffff); @@ -657,7 +667,7 @@ static int check_dirty_whitelist(struct intel_context *ce) pr_err("%s: Unable to write to whitelisted register %x\n", engine->name, reg); err = -EINVAL; - goto out_unpin; + goto out_unmap_scratch; } } else { rsvd = 0; @@ -723,15 +733,27 @@ static int check_dirty_whitelist(struct intel_context *ce) err = -EINVAL; } -out_unpin: +out_unmap_scratch: i915_gem_object_unpin_map(scratch->obj); +out_unmap_batch: + if (cs) + i915_gem_object_unpin_map(batch->obj); +out_ctx: + intel_context_unpin(ce); +out: + if (err == -EDEADLK) { + err 
= i915_gem_ww_ctx_backoff(&ww); + if (!err) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); if (err) break; } if (igt_flush_test(engine->i915)) err = -EIO; -out_batch: + i915_vma_unpin_and_release(&batch, 0); out_scratch: i915_vma_unpin_and_release(&scratch, 0); @@ -868,7 +890,7 @@ static int scrub_whitelisted_registers(struct i915_gem_context *ctx, if (IS_ERR(batch)) return PTR_ERR(batch); - cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC); + cs = i915_gem_object_pin_map_unlocked(batch->obj, I915_MAP_WC); if (IS_ERR(cs)) { err = PTR_ERR(cs); goto err_batch; @@ -1003,11 +1025,11 @@ check_whitelisted_registers(struct intel_engine_cs *engine, u32 *a, *b; int i, err; - a = i915_gem_object_pin_map(A->obj, I915_MAP_WB); + a = i915_gem_object_pin_map_unlocked(A->obj, I915_MAP_WB); if (IS_ERR(a)) return PTR_ERR(a); - b = i915_gem_object_pin_map(B->obj, I915_MAP_WB); + b = i915_gem_object_pin_map_unlocked(B->obj, I915_MAP_WB); if (IS_ERR(b)) { err = PTR_ERR(b); goto err_a; From patchwork Fri Oct 16 10:44:14 2020 Content-Type: text/plain; charset="utf-8" X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841497 From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:14 +0200 Message-Id: <20201016104444.1492028-32-maarten.lankhorst@linux.intel.com> Cc: Thomas Hellström Subject: [Intel-gfx] [PATCH v4 31/61] drm/i915: Prepare for obj->mm.lock removal From: Thomas Hellström Stolen objects need to take the object lock, and we may call put_pages when the refcount drops to 0; ensure all calls are handled correctly.
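The new assert_object_held_shared() below encodes the rule: once the refcount has dropped to zero nobody else can acquire a reference, so the final put_pages() may run without the lock. The converse holds for a freshly created object, which is why the stolen path below can use a trylock that must succeed; in sketch form (taken from the hunk below):

/* A fresh object is not yet visible to anyone else, so the trylock
 * cannot legitimately fail; WARN and bail out if it somehow does. */
if (WARN_ON(!i915_gem_object_trylock(obj))) {
	err = -EBUSY;
	goto cleanup;
}

err = i915_gem_object_pin_pages(obj); /* now requires the object lock */
if (err) {
	i915_gem_object_unlock(obj);
	goto cleanup;
}

i915_gem_object_init_memory_region(obj, mem);
i915_gem_object_unlock(obj);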
Idea-from: Thomas Hellström Signed-off-by: Maarten Lankhorst Signed-off-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_object.h | 14 ++++++++++++++ drivers/gpu/drm/i915/gem/i915_gem_pages.c | 14 ++++++++++++-- drivers/gpu/drm/i915/gem/i915_gem_stolen.c | 10 +++++++++- 3 files changed, 35 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 8db84ce09d9f..a3a701d849bf 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -112,6 +112,20 @@ i915_gem_object_put(struct drm_i915_gem_object *obj) #define assert_object_held(obj) dma_resv_assert_held((obj)->base.resv) +/* + * If more than one potential simultaneous locker, assert held. + */ +static inline void assert_object_held_shared(struct drm_i915_gem_object *obj) +{ + /* + * Note mm list lookup is protected by + * kref_get_unless_zero(). + */ + if (IS_ENABLED(CONFIG_LOCKDEP) && + kref_read(&obj->base.refcount) > 0) + lockdep_assert_held(&obj->mm.lock); +} + static inline int __i915_gem_object_lock(struct drm_i915_gem_object *obj, struct i915_gem_ww_ctx *ww, bool intr) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c index ef1d5fabd077..429ec652c394 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c @@ -18,7 +18,7 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj, unsigned long supported = INTEL_INFO(i915)->page_sizes; int i; - lockdep_assert_held(&obj->mm.lock); + assert_object_held_shared(obj); if (i915_gem_object_is_volatile(obj)) obj->mm.madv = I915_MADV_DONTNEED; @@ -67,6 +67,7 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj, struct list_head *list; unsigned long flags; + lockdep_assert_held(&obj->mm.lock); spin_lock_irqsave(&i915->mm.obj_lock, flags); i915->mm.shrink_count++; @@ -88,6 +89,8 @@ int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj) struct drm_i915_private *i915 = to_i915(obj->base.dev); int err; + assert_object_held_shared(obj); + if (unlikely(obj->mm.madv != I915_MADV_WILLNEED)) { drm_dbg(&i915->drm, "Attempting to obtain a purgeable object\n"); @@ -115,6 +118,8 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj) if (err) return err; + assert_object_held_shared(obj); + if (unlikely(!i915_gem_object_has_pages(obj))) { GEM_BUG_ON(i915_gem_object_has_pinned_pages(obj)); @@ -142,7 +147,7 @@ void i915_gem_object_truncate(struct drm_i915_gem_object *obj) /* Try to discard unwanted pages */ void i915_gem_object_writeback(struct drm_i915_gem_object *obj) { - lockdep_assert_held(&obj->mm.lock); + assert_object_held_shared(obj); GEM_BUG_ON(i915_gem_object_has_pages(obj)); if (obj->ops->writeback) @@ -175,6 +180,8 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj) { struct sg_table *pages; + assert_object_held_shared(obj); + pages = fetch_and_zero(&obj->mm.pages); if (IS_ERR_OR_NULL(pages)) return pages; @@ -202,6 +209,9 @@ int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj) if (i915_gem_object_has_pinned_pages(obj)) return -EBUSY; + /* May be called by shrinker from within get_pages() (on another bo) */ + assert_object_held_shared(obj); + i915_gem_object_release_mmap_offset(obj); /* diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c index 9a9242b5a99f..1fd287ce86f4 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c +++ 
b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c @@ -593,11 +593,19 @@ __i915_gem_object_create_stolen(struct intel_memory_region *mem, cache_level = HAS_LLC(mem->i915) ? I915_CACHE_LLC : I915_CACHE_NONE; i915_gem_object_set_cache_coherency(obj, cache_level); + if (WARN_ON(!i915_gem_object_trylock(obj))) { + err = -EBUSY; + goto cleanup; + } + err = i915_gem_object_pin_pages(obj); - if (err) + if (err) { + i915_gem_object_unlock(obj); goto cleanup; + } i915_gem_object_init_memory_region(obj, mem); + i915_gem_object_unlock(obj); return obj; From patchwork Fri Oct 16 10:44:15 2020 Content-Type: text/plain; charset="utf-8" X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841383 From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:15 +0200 Message-Id: <20201016104444.1492028-33-maarten.lankhorst@linux.intel.com> Subject: [Intel-gfx] [PATCH v4 32/61] drm/i915: Add igt_spinner_pin() to allow for ww locking around spinner. By default, we assume that it's called inside igt_spinner_create_request() to keep existing selftests working, but allow for manual pinning when passing a ww context.
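A selftest that wants the spinner pinned under its own ww transaction would then do something like this (hypothetical usage sketch; igt_spinner_pin() itself takes the object locks through the passed-in ww context):

struct i915_gem_ww_ctx ww;
int err;

i915_gem_ww_ctx_init(&ww, false);
retry:
err = igt_spinner_pin(spin, ce, &ww); /* locks and pins hws + batch */
if (err == -EDEADLK) {
	err = i915_gem_ww_ctx_backoff(&ww);
	if (!err)
		goto retry;
}
/* ... create and submit the spinning request while still locked ... */
i915_gem_ww_ctx_fini(&ww);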
Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/selftests/igt_spinner.c | 136 ++++++++++++------- drivers/gpu/drm/i915/selftests/igt_spinner.h | 5 + 2 files changed, 95 insertions(+), 46 deletions(-) diff --git a/drivers/gpu/drm/i915/selftests/igt_spinner.c b/drivers/gpu/drm/i915/selftests/igt_spinner.c index ec0ecb4e4ca6..9c461edb0b73 100644 --- a/drivers/gpu/drm/i915/selftests/igt_spinner.c +++ b/drivers/gpu/drm/i915/selftests/igt_spinner.c @@ -11,8 +11,6 @@ int igt_spinner_init(struct igt_spinner *spin, struct intel_gt *gt) { - unsigned int mode; - void *vaddr; int err; memset(spin, 0, sizeof(*spin)); @@ -23,6 +21,7 @@ int igt_spinner_init(struct igt_spinner *spin, struct intel_gt *gt) err = PTR_ERR(spin->hws); goto err; } + i915_gem_object_set_cache_coherency(spin->hws, I915_CACHE_LLC); spin->obj = i915_gem_object_create_internal(gt->i915, PAGE_SIZE); if (IS_ERR(spin->obj)) { @@ -30,34 +29,83 @@ int igt_spinner_init(struct igt_spinner *spin, struct intel_gt *gt) goto err_hws; } - i915_gem_object_set_cache_coherency(spin->hws, I915_CACHE_LLC); - vaddr = i915_gem_object_pin_map(spin->hws, I915_MAP_WB); - if (IS_ERR(vaddr)) { - err = PTR_ERR(vaddr); - goto err_obj; - } - spin->seqno = memset(vaddr, 0xff, PAGE_SIZE); - - mode = i915_coherent_map_type(gt->i915); - vaddr = i915_gem_object_pin_map(spin->obj, mode); - if (IS_ERR(vaddr)) { - err = PTR_ERR(vaddr); - goto err_unpin_hws; - } - spin->batch = vaddr; - return 0; -err_unpin_hws: - i915_gem_object_unpin_map(spin->hws); -err_obj: - i915_gem_object_put(spin->obj); err_hws: i915_gem_object_put(spin->hws); err: return err; } +static void *igt_spinner_pin_obj(struct intel_context *ce, + struct i915_gem_ww_ctx *ww, + struct drm_i915_gem_object *obj, + unsigned int mode, struct i915_vma **vma) +{ + void *vaddr; + int ret; + + *vma = i915_vma_instance(obj, ce->vm, NULL); + if (IS_ERR(*vma)) + return ERR_CAST(*vma); + + ret = i915_gem_object_lock(obj, ww); + if (ret) + return ERR_PTR(ret); + + vaddr = i915_gem_object_pin_map(obj, mode); + + if (!ww) + i915_gem_object_unlock(obj); + + if (IS_ERR(vaddr)) + return vaddr; + + if (ww) + ret = i915_vma_pin_ww(*vma, ww, 0, 0, PIN_USER); + else + ret = i915_vma_pin(*vma, 0, 0, PIN_USER); + + if (ret) { + i915_gem_object_unpin_map(obj); + return ERR_PTR(ret); + } + + return vaddr; +} + +int igt_spinner_pin(struct igt_spinner *spin, + struct intel_context *ce, + struct i915_gem_ww_ctx *ww) +{ + void *vaddr; + + if (spin->ce && WARN_ON(spin->ce != ce)) + return -ENODEV; + spin->ce = ce; + + if (!spin->seqno) { + vaddr = igt_spinner_pin_obj(ce, ww, spin->hws, I915_MAP_WB, &spin->hws_vma); + if (IS_ERR(vaddr)) + return PTR_ERR(vaddr); + + spin->seqno = memset(vaddr, 0xff, PAGE_SIZE); + } + + if (!spin->batch) { + unsigned int mode = + i915_coherent_map_type(spin->gt->i915); + + vaddr = igt_spinner_pin_obj(ce, ww, spin->obj, mode, &spin->batch_vma); + if (IS_ERR(vaddr)) + return PTR_ERR(vaddr); + + spin->batch = vaddr; + } + + return 0; +} + static unsigned int seqno_offset(u64 fence) { return offset_in_page(sizeof(u32) * fence); @@ -102,27 +150,18 @@ igt_spinner_create_request(struct igt_spinner *spin, if (!intel_engine_can_store_dword(ce->engine)) return ERR_PTR(-ENODEV); - vma = i915_vma_instance(spin->obj, ce->vm, NULL); - if (IS_ERR(vma)) - return ERR_CAST(vma); - - hws = i915_vma_instance(spin->hws, ce->vm, NULL); - if (IS_ERR(hws)) - return ERR_CAST(hws); + if (!spin->batch) { + err = igt_spinner_pin(spin, ce, NULL); + if (err) + return ERR_PTR(err); + } - err = 
i915_vma_pin(vma, 0, 0, PIN_USER); - if (err) - return ERR_PTR(err); - - err = i915_vma_pin(hws, 0, 0, PIN_USER); - if (err) - goto unpin_vma; + hws = spin->hws_vma; + vma = spin->batch_vma; rq = intel_context_create_request(ce); - if (IS_ERR(rq)) { - err = PTR_ERR(rq); - goto unpin_hws; - } + if (IS_ERR(rq)) + return ERR_CAST(rq); err = move_to_active(vma, rq, 0); if (err) @@ -185,10 +224,6 @@ igt_spinner_create_request(struct igt_spinner *spin, i915_request_set_error_once(rq, err); i915_request_add(rq); } -unpin_hws: - i915_vma_unpin(hws); -unpin_vma: - i915_vma_unpin(vma); return err ? ERR_PTR(err) : rq; } @@ -202,6 +237,9 @@ hws_seqno(const struct igt_spinner *spin, const struct i915_request *rq) void igt_spinner_end(struct igt_spinner *spin) { + if (!spin->batch) + return; + *spin->batch = MI_BATCH_BUFFER_END; intel_gt_chipset_flush(spin->gt); } @@ -210,10 +248,16 @@ void igt_spinner_fini(struct igt_spinner *spin) { igt_spinner_end(spin); - i915_gem_object_unpin_map(spin->obj); + if (spin->batch) { + i915_vma_unpin(spin->batch_vma); + i915_gem_object_unpin_map(spin->obj); + } i915_gem_object_put(spin->obj); - i915_gem_object_unpin_map(spin->hws); + if (spin->seqno) { + i915_vma_unpin(spin->hws_vma); + i915_gem_object_unpin_map(spin->hws); + } i915_gem_object_put(spin->hws); } diff --git a/drivers/gpu/drm/i915/selftests/igt_spinner.h b/drivers/gpu/drm/i915/selftests/igt_spinner.h index ec62c9ef320b..fbe5b1625b05 100644 --- a/drivers/gpu/drm/i915/selftests/igt_spinner.h +++ b/drivers/gpu/drm/i915/selftests/igt_spinner.h @@ -20,11 +20,16 @@ struct igt_spinner { struct intel_gt *gt; struct drm_i915_gem_object *hws; struct drm_i915_gem_object *obj; + struct intel_context *ce; + struct i915_vma *hws_vma, *batch_vma; u32 *batch; void *seqno; }; int igt_spinner_init(struct igt_spinner *spin, struct intel_gt *gt); +int igt_spinner_pin(struct igt_spinner *spin, + struct intel_context *ce, + struct i915_gem_ww_ctx *ww); void igt_spinner_fini(struct igt_spinner *spin); struct i915_request * From patchwork Fri Oct 16 10:44:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841391 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id ABCBEC43457 for ; Fri, 16 Oct 2020 10:45:26 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 4CED5207F7 for ; Fri, 16 Oct 2020 10:45:26 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4CED5207F7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id CCBED6EB9A; Fri, 16 Oct 2020 10:45:01 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org 
(Postfix) with ESMTPS id 2E3AA6EAC2 for ; Fri, 16 Oct 2020 10:44:52 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:16 +0200 Message-Id: <20201016104444.1492028-34-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 33/61] drm/i915: Add ww locking around vm_access() X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" i915_gem_object_pin_map potentially needs a ww context, so ensure we have one we can revoke. Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_mman.c | 24 ++++++++++++++++++++++-- 1 file changed, 22 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c index ba8e9ef6943d..1361eabea966 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c @@ -421,7 +421,9 @@ vm_access(struct vm_area_struct *area, unsigned long addr, { struct i915_mmap_offset *mmo = area->vm_private_data; struct drm_i915_gem_object *obj = mmo->obj; + struct i915_gem_ww_ctx ww; void *vaddr; + int err = 0; if (i915_gem_object_is_readonly(obj) && write) return -EACCES; @@ -430,10 +432,18 @@ vm_access(struct vm_area_struct *area, unsigned long addr, if (addr >= obj->base.size) return -EINVAL; + i915_gem_ww_ctx_init(&ww, true); +retry: + err = i915_gem_object_lock(obj, &ww); + if (err) + goto out; + /* As this is primarily for debugging, let's focus on simplicity */ vaddr = i915_gem_object_pin_map(obj, I915_MAP_FORCE_WC); - if (IS_ERR(vaddr)) - return PTR_ERR(vaddr); + if (IS_ERR(vaddr)) { + err = PTR_ERR(vaddr); + goto out; + } if (write) { memcpy(vaddr + addr, buf, len); @@ -443,6 +453,16 @@ vm_access(struct vm_area_struct *area, unsigned long addr, } i915_gem_object_unpin_map(obj); +out: + if (err == -EDEADLK) { + err = i915_gem_ww_ctx_backoff(&ww); + if (!err) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); + + if (err) + return err; return len; } From patchwork Fri Oct 16 10:44:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841349 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3CE2AC433DF for ; Fri, 16 Oct 2020 10:45:12 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id B8641207F7 for ; Fri, 16 Oct 2020 10:45:10 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B8641207F7 Authentication-Results: mail.kernel.org; 
dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 189E46EABC; Fri, 16 Oct 2020 10:44:57 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 499F86EAC7 for ; Fri, 16 Oct 2020 10:44:52 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:17 +0200 Message-Id: <20201016104444.1492028-35-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 34/61] drm/i915: Increase ww locking for perf. X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" We need to lock a few more objects, some temporarily, add ww lock where needed. Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/i915_perf.c | 56 ++++++++++++++++++++++++-------- 1 file changed, 43 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c index e94976976571..281af1fdf514 100644 --- a/drivers/gpu/drm/i915/i915_perf.c +++ b/drivers/gpu/drm/i915/i915_perf.c @@ -1579,7 +1579,7 @@ static int alloc_oa_buffer(struct i915_perf_stream *stream) stream->oa_buffer.vma = vma; stream->oa_buffer.vaddr = - i915_gem_object_pin_map(bo, I915_MAP_WB); + i915_gem_object_pin_map_unlocked(bo, I915_MAP_WB); if (IS_ERR(stream->oa_buffer.vaddr)) { ret = PTR_ERR(stream->oa_buffer.vaddr); goto err_unpin; @@ -1632,6 +1632,7 @@ static int alloc_noa_wait(struct i915_perf_stream *stream) const u32 base = stream->engine->mmio_base; #define CS_GPR(x) GEN8_RING_CS_GPR(base, x) u32 *batch, *ts0, *cs, *jump; + struct i915_gem_ww_ctx ww; int ret, i; enum { START_TS, @@ -1649,15 +1650,21 @@ static int alloc_noa_wait(struct i915_perf_stream *stream) return PTR_ERR(bo); } + i915_gem_ww_ctx_init(&ww, true); +retry: + ret = i915_gem_object_lock(bo, &ww); + if (ret) + goto out_ww; + /* * We pin in GGTT because we jump into this buffer now because * multiple OA config BOs will have a jump to this address and it * needs to be fixed during the lifetime of the i915/perf stream. 
*/ - vma = i915_gem_object_ggtt_pin(bo, NULL, 0, 0, PIN_HIGH); + vma = i915_gem_object_ggtt_pin_ww(bo, &ww, NULL, 0, 0, PIN_HIGH); if (IS_ERR(vma)) { ret = PTR_ERR(vma); - goto err_unref; + goto out_ww; } batch = cs = i915_gem_object_pin_map(bo, I915_MAP_WB); @@ -1791,12 +1798,19 @@ static int alloc_noa_wait(struct i915_perf_stream *stream) __i915_gem_object_release_map(bo); stream->noa_wait = vma; - return 0; + goto out_ww; err_unpin: i915_vma_unpin_and_release(&vma, 0); -err_unref: - i915_gem_object_put(bo); +out_ww: + if (ret == -EDEADLK) { + ret = i915_gem_ww_ctx_backoff(&ww); + if (!ret) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); + if (ret) + i915_gem_object_put(bo); return ret; } @@ -1839,6 +1853,7 @@ alloc_oa_config_buffer(struct i915_perf_stream *stream, { struct drm_i915_gem_object *obj; struct i915_oa_config_bo *oa_bo; + struct i915_gem_ww_ctx ww; size_t config_length = 0; u32 *cs; int err; @@ -1859,10 +1874,16 @@ alloc_oa_config_buffer(struct i915_perf_stream *stream, goto err_free; } + i915_gem_ww_ctx_init(&ww, true); +retry: + err = i915_gem_object_lock(obj, &ww); + if (err) + goto out_ww; + cs = i915_gem_object_pin_map(obj, I915_MAP_WB); if (IS_ERR(cs)) { err = PTR_ERR(cs); - goto err_oa_bo; + goto out_ww; } cs = write_cs_mi_lri(cs, @@ -1890,19 +1911,28 @@ alloc_oa_config_buffer(struct i915_perf_stream *stream, NULL); if (IS_ERR(oa_bo->vma)) { err = PTR_ERR(oa_bo->vma); - goto err_oa_bo; + goto out_ww; } oa_bo->oa_config = i915_oa_config_get(oa_config); llist_add(&oa_bo->node, &stream->oa_config_bos); - return oa_bo; +out_ww: + if (err == -EDEADLK) { + err = i915_gem_ww_ctx_backoff(&ww); + if (!err) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); -err_oa_bo: - i915_gem_object_put(obj); + if (err) + i915_gem_object_put(obj); err_free: - kfree(oa_bo); - return ERR_PTR(err); + if (err) { + kfree(oa_bo); + return ERR_PTR(err); + } + return oa_bo; } static struct i915_vma * From patchwork Fri Oct 16 10:44:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841469 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CF999C35271 for ; Fri, 16 Oct 2020 10:45:40 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 79EA9207F7 for ; Fri, 16 Oct 2020 10:45:40 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 79EA9207F7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id B03236ED9C; Fri, 16 Oct 2020 10:45:21 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 742336EABC for ; Fri, 16 Oct 2020 10:44:52 +0000 (UTC) From: Maarten Lankhorst 
To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:18 +0200 Message-Id: <20201016104444.1492028-36-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 35/61] drm/i915: Lock ww in ucode objects correctly X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" In the ucode functions, the calls are made before userspace runs, when debugging via debugfs, or when creating semi-permanent mappings; we can safely use the unlocked versions that do the ww dance for us. Because there is no pin_pages_unlocked yet, add it as a convenience function. This removes possible lockdep splats about the missing resv lock for ucode. Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_object.h | 2 ++ drivers/gpu/drm/i915/gem/i915_gem_pages.c | 20 ++++++++++++++++++++ drivers/gpu/drm/i915/gt/uc/intel_guc.c | 2 +- drivers/gpu/drm/i915/gt/uc/intel_guc_log.c | 4 ++-- drivers/gpu/drm/i915/gt/uc/intel_huc.c | 2 +- drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c | 2 +- 6 files changed, 27 insertions(+), 5 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index a3a701d849bf..e7236224a29c 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -336,6 +336,8 @@ i915_gem_object_pin_pages(struct drm_i915_gem_object *obj) return __i915_gem_object_get_pages(obj); } +int i915_gem_object_pin_pages_unlocked(struct drm_i915_gem_object *obj); + static inline bool i915_gem_object_has_pages(struct drm_i915_gem_object *obj) { diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c index 429ec652c394..81b1b560ad18 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c @@ -136,6 +136,26 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj) return err; } +int i915_gem_object_pin_pages_unlocked(struct drm_i915_gem_object *obj) +{ + struct i915_gem_ww_ctx ww; + int err; + + i915_gem_ww_ctx_init(&ww, true); +retry: + err = i915_gem_object_lock(obj, &ww); + if (!err) + err = i915_gem_object_pin_pages(obj); + + if (err == -EDEADLK) { + err = i915_gem_ww_ctx_backoff(&ww); + if (!err) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); + return err; +} + /* Immediately discard the backing storage */ void i915_gem_object_truncate(struct drm_i915_gem_object *obj) { diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c index e4aaa5f29796..ecdd3b4f1a32 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c @@ -712,7 +712,7 @@ int intel_guc_allocate_and_map_vma(struct intel_guc *guc, u32 size, if (IS_ERR(vma)) return PTR_ERR(vma); - vaddr = i915_gem_object_pin_map(vma->obj, I915_MAP_WB); + vaddr = i915_gem_object_pin_map_unlocked(vma->obj, I915_MAP_WB); if (IS_ERR(vaddr)) { i915_vma_unpin_and_release(&vma, 0); return PTR_ERR(vaddr); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_log.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_log.c index
9bbe8a795cb8..8dc8678e7ab0 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_log.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_log.c @@ -335,7 +335,7 @@ static int guc_log_map(struct intel_guc_log *log) * buffer pages, so that we can directly get the data * (up-to-date) from memory. */ - vaddr = i915_gem_object_pin_map(log->vma->obj, I915_MAP_WC); + vaddr = i915_gem_object_pin_map_unlocked(log->vma->obj, I915_MAP_WC); if (IS_ERR(vaddr)) return PTR_ERR(vaddr); @@ -744,7 +744,7 @@ int intel_guc_log_dump(struct intel_guc_log *log, struct drm_printer *p, if (!obj) return 0; - map = i915_gem_object_pin_map(obj, I915_MAP_WC); + map = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC); if (IS_ERR(map)) { DRM_DEBUG("Failed to pin object\n"); drm_puts(p, "(log data unaccessible)\n"); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.c b/drivers/gpu/drm/i915/gt/uc/intel_huc.c index 65eeb44b397d..2126dd81ac38 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_huc.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.c @@ -82,7 +82,7 @@ static int intel_huc_rsa_data_create(struct intel_huc *huc) if (IS_ERR(vma)) return PTR_ERR(vma); - vaddr = i915_gem_object_pin_map(vma->obj, I915_MAP_WB); + vaddr = i915_gem_object_pin_map_unlocked(vma->obj, I915_MAP_WB); if (IS_ERR(vaddr)) { i915_vma_unpin_and_release(&vma, 0); return PTR_ERR(vaddr); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c index 037bcaf3c8b5..a7aa81a89ede 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c @@ -542,7 +542,7 @@ int intel_uc_fw_init(struct intel_uc_fw *uc_fw) if (!intel_uc_fw_is_available(uc_fw)) return -ENOEXEC; - err = i915_gem_object_pin_pages(uc_fw->obj); + err = i915_gem_object_pin_pages_unlocked(uc_fw->obj); if (err) { DRM_DEBUG_DRIVER("%s fw pin-pages err=%d\n", intel_uc_fw_type_repr(uc_fw->type), err); From patchwork Fri Oct 16 10:44:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841365 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 40CCAC2BD0C for ; Fri, 16 Oct 2020 10:45:20 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id CAF6D2084C for ; Fri, 16 Oct 2020 10:45:19 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org CAF6D2084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 063846EB24; Fri, 16 Oct 2020 10:45:00 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 8C54A6EABE for ; Fri, 16 Oct 2020 10:44:52 +0000 (UTC) From: Maarten Lankhorst To: 
intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:19 +0200 Message-Id: <20201016104444.1492028-37-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 36/61] drm/i915: Add ww locking to dma-buf ops. X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" vmap is using pin_pages, but needs to use ww locking, add pin_pages_unlocked to correctly lock the mapping. Also add ww locking to begin/end cpu access. Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 60 ++++++++++++---------- 1 file changed, 33 insertions(+), 27 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c index 131ec53d8521..dfd483147b73 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c @@ -82,7 +82,7 @@ static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf); void *vaddr; - vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB); + vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB); if (IS_ERR(vaddr)) return PTR_ERR(vaddr); @@ -124,42 +124,48 @@ static int i915_gem_begin_cpu_access(struct dma_buf *dma_buf, enum dma_data_dire { struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf); bool write = (direction == DMA_BIDIRECTIONAL || direction == DMA_TO_DEVICE); + struct i915_gem_ww_ctx ww; int err; - err = i915_gem_object_pin_pages(obj); - if (err) - return err; - - err = i915_gem_object_lock_interruptible(obj, NULL); - if (err) - goto out; - - err = i915_gem_object_set_to_cpu_domain(obj, write); - i915_gem_object_unlock(obj); - -out: - i915_gem_object_unpin_pages(obj); + i915_gem_ww_ctx_init(&ww, true); +retry: + err = i915_gem_object_lock(obj, &ww); + if (!err) + err = i915_gem_object_pin_pages(obj); + if (!err) { + err = i915_gem_object_set_to_cpu_domain(obj, write); + i915_gem_object_unpin_pages(obj); + } + if (err == -EDEADLK) { + err = i915_gem_ww_ctx_backoff(&ww); + if (!err) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); return err; } static int i915_gem_end_cpu_access(struct dma_buf *dma_buf, enum dma_data_direction direction) { struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf); + struct i915_gem_ww_ctx ww; int err; - err = i915_gem_object_pin_pages(obj); - if (err) - return err; - - err = i915_gem_object_lock_interruptible(obj, NULL); - if (err) - goto out; - - err = i915_gem_object_set_to_gtt_domain(obj, false); - i915_gem_object_unlock(obj); - -out: - i915_gem_object_unpin_pages(obj); + i915_gem_ww_ctx_init(&ww, true); +retry: + err = i915_gem_object_lock(obj, &ww); + if (!err) + err = i915_gem_object_pin_pages(obj); + if (!err) { + err = i915_gem_object_set_to_gtt_domain(obj, false); + i915_gem_object_unpin_pages(obj); + } + if (err == -EDEADLK) { + err = i915_gem_ww_ctx_backoff(&ww); + if (!err) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); return err; } From patchwork Fri Oct 16 10:44:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit 
X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841343 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5B9E5C433E7 for ; Fri, 16 Oct 2020 10:45:10 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id D241A207F7 for ; Fri, 16 Oct 2020 10:45:09 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D241A207F7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 154836EB0D; Fri, 16 Oct 2020 10:44:56 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [IPv6:2a02:2308::216:3eff:fe92:dfa3]) by gabe.freedesktop.org (Postfix) with ESMTPS id A1E4B6EAC2 for ; Fri, 16 Oct 2020 10:44:52 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:20 +0200 Message-Id: <20201016104444.1492028-38-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 37/61] drm/i915: Add missing ww lock in intel_dsb_prepare. X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Because of the long lifetime of the mapping, we cannot wrap this in a simple limited ww lock. Just use the unlocked version of pin_map, because we'll likely release the mapping a lot later, in a different thread. 
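(For context on why this can be a one-line change: the _unlocked variants used across this series are thin wrappers that take and drop the object lock around the locked call. A sketch of the expected shape of i915_gem_object_pin_map_unlocked(), by analogy with the pin_pages_unlocked() helper added in patch 35; the actual helper is introduced in an earlier patch not quoted here:)

void *i915_gem_object_pin_map_unlocked(struct drm_i915_gem_object *obj,
				       enum i915_map_type type)
{
	void *ret;

	/* With a NULL ww context the lock cannot return -EDEADLK. */
	i915_gem_object_lock(obj, NULL);
	ret = i915_gem_object_pin_map(obj, type);
	i915_gem_object_unlock(obj);

	return ret;
}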
Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/display/intel_dsb.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/display/intel_dsb.c b/drivers/gpu/drm/i915/display/intel_dsb.c index 566fa72427b3..857126822a88 100644 --- a/drivers/gpu/drm/i915/display/intel_dsb.c +++ b/drivers/gpu/drm/i915/display/intel_dsb.c @@ -293,7 +293,7 @@ void intel_dsb_prepare(struct intel_crtc_state *crtc_state) goto out; } - buf = i915_gem_object_pin_map(vma->obj, I915_MAP_WC); + buf = i915_gem_object_pin_map_unlocked(vma->obj, I915_MAP_WC); if (IS_ERR(buf)) { drm_err(&i915->drm, "Command buffer creation failed\n"); i915_vma_unpin_and_release(&vma, I915_VMA_RELEASE_MAP); From patchwork Fri Oct 16 10:44:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841353 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 34798C433E7 for ; Fri, 16 Oct 2020 10:45:15 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id AEBEC2084C for ; Fri, 16 Oct 2020 10:45:14 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org AEBEC2084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C70096EAC9; Fri, 16 Oct 2020 10:44:57 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id C5D4E6EAC3 for ; Fri, 16 Oct 2020 10:44:52 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:21 +0200 Message-Id: <20201016104444.1492028-39-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 38/61] drm/i915: Fix ww locking in shmem_create_from_object X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Quick fix, just use the unlocked version. 
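(For comparison, the non-quick fix would have been the full ww transaction used elsewhere in this series, e.g. in vm_access() above; a sketch, with obj standing in for the object being mapped:)

	struct i915_gem_ww_ctx ww;
	void *ptr = NULL;
	int err;

	i915_gem_ww_ctx_init(&ww, true);
retry:
	err = i915_gem_object_lock(obj, &ww);
	if (!err) {
		ptr = i915_gem_object_pin_map(obj, I915_MAP_WB);
		if (IS_ERR(ptr))
			err = PTR_ERR(ptr);
	}
	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);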
Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gt/shmem_utils.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/gt/shmem_utils.c b/drivers/gpu/drm/i915/gt/shmem_utils.c index 43c7acbdc79d..8c8dfa41e032 100644 --- a/drivers/gpu/drm/i915/gt/shmem_utils.c +++ b/drivers/gpu/drm/i915/gt/shmem_utils.c @@ -39,7 +39,7 @@ struct file *shmem_create_from_object(struct drm_i915_gem_object *obj) return file; } - ptr = i915_gem_object_pin_map(obj, I915_MAP_WB); + ptr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB); if (IS_ERR(ptr)) return ERR_CAST(ptr); From patchwork Fri Oct 16 10:44:22 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841475 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2BDF8C433E7 for ; Fri, 16 Oct 2020 10:45:38 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id D5434207F7 for ; Fri, 16 Oct 2020 10:45:37 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D5434207F7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id A13066ED9A; Fri, 16 Oct 2020 10:45:21 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id E5DDC6EABE for ; Fri, 16 Oct 2020 10:44:52 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:22 +0200 Message-Id: <20201016104444.1492028-40-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 39/61] drm/i915: Use a single page table lock for each gtt. X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" We may create page table objects on the fly, but we may need to wait with the ww lock held. Instead of waiting on a freed obj lock, ensure we have the same lock for each object to keep -EDEADLK working. This ensures that i915_vma_pin_ww can lock the page tables when required. 
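(To make the invariant concrete: every page-table object allocated for a vm now shares the vm's reservation object, so locking any one of them locks them all. A sketch of how a caller such as i915_vma_pin_ww() relies on this; vm, ww and stash are placeholder names, and both functions appear in the diff below:)

	/* One lock covers every PD/PT object in this address space... */
	err = i915_vm_lock_objects(vm, ww);
	if (err)
		return err;

	/*
	 * ...so pinning the whole stash of freshly allocated page tables
	 * needs no further locking (pin_pt_dma_locked() in the diff below).
	 */
	err = i915_vm_pin_pt_stash(vm, &stash);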
Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gt/intel_ggtt.c | 8 +++++- drivers/gpu/drm/i915/gt/intel_gtt.c | 38 ++++++++++++++++++++++++++- drivers/gpu/drm/i915/gt/intel_gtt.h | 5 ++++ drivers/gpu/drm/i915/gt/intel_ppgtt.c | 3 ++- drivers/gpu/drm/i915/i915_vma.c | 5 ++++ 5 files changed, 56 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt.c b/drivers/gpu/drm/i915/gt/intel_ggtt.c index 60bd2c8ed8b0..17ecaef1834d 100644 --- a/drivers/gpu/drm/i915/gt/intel_ggtt.c +++ b/drivers/gpu/drm/i915/gt/intel_ggtt.c @@ -615,7 +615,9 @@ static int init_aliasing_ppgtt(struct i915_ggtt *ggtt) if (err) goto err_ppgtt; + i915_gem_object_lock(ppgtt->vm.scratch[0], NULL); err = i915_vm_pin_pt_stash(&ppgtt->vm, &stash); + i915_gem_object_unlock(ppgtt->vm.scratch[0]); if (err) goto err_stash; @@ -702,6 +704,7 @@ static void ggtt_cleanup_hw(struct i915_ggtt *ggtt) mutex_unlock(&ggtt->vm.mutex); i915_address_space_fini(&ggtt->vm); + dma_resv_fini(&ggtt->vm.resv); arch_phys_wc_del(ggtt->mtrr); @@ -1078,6 +1081,7 @@ static int ggtt_probe_hw(struct i915_ggtt *ggtt, struct intel_gt *gt) ggtt->vm.gt = gt; ggtt->vm.i915 = i915; ggtt->vm.dma = &i915->drm.pdev->dev; + dma_resv_init(&ggtt->vm.resv); if (INTEL_GEN(i915) <= 5) ret = i915_gmch_probe(ggtt); @@ -1085,8 +1089,10 @@ static int ggtt_probe_hw(struct i915_ggtt *ggtt, struct intel_gt *gt) ret = gen6_gmch_probe(ggtt); else ret = gen8_gmch_probe(ggtt); - if (ret) + if (ret) { + dma_resv_fini(&ggtt->vm.resv); return ret; + } if ((ggtt->vm.total - 1) >> 32) { drm_err(&i915->drm, diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c index 7bfe9072be9a..070d538cdc56 100644 --- a/drivers/gpu/drm/i915/gt/intel_gtt.c +++ b/drivers/gpu/drm/i915/gt/intel_gtt.c @@ -13,16 +13,36 @@ struct drm_i915_gem_object *alloc_pt_dma(struct i915_address_space *vm, int sz) { + struct drm_i915_gem_object *obj; + if (I915_SELFTEST_ONLY(should_fail(&vm->fault_attr, 1))) i915_gem_shrink_all(vm->i915); - return i915_gem_object_create_internal(vm->i915, sz); + obj = i915_gem_object_create_internal(vm->i915, sz); + /* ensure all dma objects have the same reservation class */ + if (!IS_ERR(obj)) + obj->base.resv = &vm->resv; + return obj; } int pin_pt_dma(struct i915_address_space *vm, struct drm_i915_gem_object *obj) { int err; + i915_gem_object_lock(obj, NULL); + err = i915_gem_object_pin_pages(obj); + i915_gem_object_unlock(obj); + if (err) + return err; + + i915_gem_object_make_unshrinkable(obj); + return 0; +} + +int pin_pt_dma_locked(struct i915_address_space *vm, struct drm_i915_gem_object *obj) +{ + int err; + err = i915_gem_object_pin_pages(obj); if (err) return err; @@ -56,6 +76,20 @@ void __i915_vm_close(struct i915_address_space *vm) mutex_unlock(&vm->mutex); } +/* lock the vm into the current ww, if we lock one, we lock all */ +int i915_vm_lock_objects(struct i915_address_space *vm, + struct i915_gem_ww_ctx *ww) +{ + if (vm->scratch[0]->base.resv == &vm->resv) { + return i915_gem_object_lock(vm->scratch[0], ww); + } else { + struct i915_ppgtt *ppgtt = i915_vm_to_ppgtt(vm); + + /* We borrowed the scratch page from ggtt, take the top level object */ + return i915_gem_object_lock(ppgtt->pd->pt.base, ww); + } +} + void i915_address_space_fini(struct i915_address_space *vm) { drm_mm_takedown(&vm->mm); @@ -69,6 +103,7 @@ static void __i915_vm_release(struct work_struct *work) vm->cleanup(vm); i915_address_space_fini(vm); + dma_resv_fini(&vm->resv); kfree(vm); } @@ -98,6 +133,7 @@ void 
i915_address_space_init(struct i915_address_space *vm, int subclass) mutex_init(&vm->mutex); lockdep_set_subclass(&vm->mutex, subclass); i915_gem_shrinker_taints_mutex(vm->i915, &vm->mutex); + dma_resv_init(&vm->resv); GEM_BUG_ON(!vm->total); drm_mm_init(&vm->mm, 0, vm->total); diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h index 8a33940a71f3..16063b2f0119 100644 --- a/drivers/gpu/drm/i915/gt/intel_gtt.h +++ b/drivers/gpu/drm/i915/gt/intel_gtt.h @@ -238,6 +238,7 @@ struct i915_address_space { atomic_t open; struct mutex mutex; /* protects vma and our lists */ + struct dma_resv resv; /* reservation lock for all pd objects, and buffer pool */ #define VM_CLASS_GGTT 0 #define VM_CLASS_PPGTT 1 @@ -346,6 +347,9 @@ struct i915_ppgtt { #define i915_is_ggtt(vm) ((vm)->is_ggtt) +int __must_check +i915_vm_lock_objects(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww); + static inline bool i915_vm_is_4lvl(const struct i915_address_space *vm) { @@ -522,6 +526,7 @@ struct i915_page_directory *alloc_pd(struct i915_address_space *vm); struct i915_page_directory *__alloc_pd(int npde); int pin_pt_dma(struct i915_address_space *vm, struct drm_i915_gem_object *obj); +int pin_pt_dma_locked(struct i915_address_space *vm, struct drm_i915_gem_object *obj); void free_px(struct i915_address_space *vm, struct i915_page_table *pt, int lvl); diff --git a/drivers/gpu/drm/i915/gt/intel_ppgtt.c b/drivers/gpu/drm/i915/gt/intel_ppgtt.c index 46d9aceda64c..f3ac47702aee 100644 --- a/drivers/gpu/drm/i915/gt/intel_ppgtt.c +++ b/drivers/gpu/drm/i915/gt/intel_ppgtt.c @@ -262,7 +262,7 @@ int i915_vm_pin_pt_stash(struct i915_address_space *vm, for (n = 0; n < ARRAY_SIZE(stash->pt); n++) { for (pt = stash->pt[n]; pt; pt = pt->stash) { - err = pin_pt_dma(vm, pt->base); + err = pin_pt_dma_locked(vm, pt->base); if (err) return err; } @@ -304,6 +304,7 @@ void ppgtt_init(struct i915_ppgtt *ppgtt, struct intel_gt *gt) ppgtt->vm.dma = &i915->drm.pdev->dev; ppgtt->vm.total = BIT_ULL(INTEL_INFO(i915)->ppgtt_size); + dma_resv_init(&ppgtt->vm.resv); i915_address_space_init(&ppgtt->vm, VM_CLASS_PPGTT); ppgtt->vm.vma_ops.bind_vma = ppgtt_bind_vma; diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index ed6cf4529d5d..4106b10ac651 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -882,6 +882,11 @@ int i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww, wakeref = intel_runtime_pm_get(&vma->vm->i915->runtime_pm); if (flags & vma->vm->bind_async_flags) { + /* lock VM */ + err = i915_vm_lock_objects(vma->vm, ww); + if (err) + goto err_rpm; + work = i915_vma_work(); if (!work) { err = -ENOMEM; From patchwork Fri Oct 16 10:44:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841481 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4DA26C433DF for ; Fri, 16 Oct 2020 10:45:37 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher 
ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id E92B82084C for ; Fri, 16 Oct 2020 10:45:34 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E92B82084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 9C8296EB1B; Fri, 16 Oct 2020 10:44:59 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [IPv6:2a02:2308::216:3eff:fe92:dfa3]) by gabe.freedesktop.org (Postfix) with ESMTPS id 08D996EABC for ; Fri, 16 Oct 2020 10:44:53 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:23 +0200 Message-Id: <20201016104444.1492028-41-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 40/61] drm/i915/selftests: Prepare huge_pages testcases for obj->mm.lock removal. X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Straightforward conversion, just convert a bunch of calls to unlocked versions. Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- .../gpu/drm/i915/gem/selftests/huge_pages.c | 28 ++++++++++++++----- 1 file changed, 21 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c index a7d5f7785f32..34f248c205ca 100644 --- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c +++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c @@ -568,7 +568,7 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg) goto out_put; } - err = i915_gem_object_pin_pages(obj); + err = i915_gem_object_pin_pages_unlocked(obj); if (err) goto out_put; @@ -632,15 +632,19 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg) break; } + i915_gem_object_lock(obj, NULL); i915_gem_object_unpin_pages(obj); __i915_gem_object_put_pages(obj); + i915_gem_object_unlock(obj); i915_gem_object_put(obj); } return 0; out_unpin: + i915_gem_object_lock(obj, NULL); i915_gem_object_unpin_pages(obj); + i915_gem_object_unlock(obj); out_put: i915_gem_object_put(obj); @@ -654,8 +658,10 @@ static void close_object_list(struct list_head *objects, list_for_each_entry_safe(obj, on, objects, st_link) { list_del(&obj->st_link); + i915_gem_object_lock(obj, NULL); i915_gem_object_unpin_pages(obj); __i915_gem_object_put_pages(obj); + i915_gem_object_unlock(obj); i915_gem_object_put(obj); } } @@ -692,7 +698,7 @@ static int igt_mock_ppgtt_huge_fill(void *arg) break; } - err = i915_gem_object_pin_pages(obj); + err = i915_gem_object_pin_pages_unlocked(obj); if (err) { i915_gem_object_put(obj); break; @@ -868,7 +874,7 @@ static int igt_mock_ppgtt_64K(void *arg) if (IS_ERR(obj)) return PTR_ERR(obj); - err = i915_gem_object_pin_pages(obj); + err = i915_gem_object_pin_pages_unlocked(obj); if (err) goto out_object_put; @@ -922,8 +928,10 @@ static int igt_mock_ppgtt_64K(void *arg) } i915_vma_unpin(vma); + i915_gem_object_lock(obj, 
NULL); i915_gem_object_unpin_pages(obj); __i915_gem_object_put_pages(obj); + i915_gem_object_unlock(obj); i915_gem_object_put(obj); } } @@ -933,7 +941,9 @@ static int igt_mock_ppgtt_64K(void *arg) out_vma_unpin: i915_vma_unpin(vma); out_object_unpin: + i915_gem_object_lock(obj, NULL); i915_gem_object_unpin_pages(obj); + i915_gem_object_unlock(obj); out_object_put: i915_gem_object_put(obj); @@ -1003,7 +1013,7 @@ static int __cpu_check_vmap(struct drm_i915_gem_object *obj, u32 dword, u32 val) if (err) return err; - ptr = i915_gem_object_pin_map(obj, I915_MAP_WC); + ptr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC); if (IS_ERR(ptr)) return PTR_ERR(ptr); @@ -1283,7 +1293,7 @@ static int igt_ppgtt_smoke_huge(void *arg) return err; } - err = i915_gem_object_pin_pages(obj); + err = i915_gem_object_pin_pages_unlocked(obj); if (err) { if (err == -ENXIO || err == -E2BIG) { i915_gem_object_put(obj); @@ -1306,8 +1316,10 @@ static int igt_ppgtt_smoke_huge(void *arg) __func__, size, i); } out_unpin: + i915_gem_object_lock(obj, NULL); i915_gem_object_unpin_pages(obj); __i915_gem_object_put_pages(obj); + i915_gem_object_unlock(obj); out_put: i915_gem_object_put(obj); @@ -1380,7 +1392,7 @@ static int igt_ppgtt_sanity_check(void *arg) return err; } - err = i915_gem_object_pin_pages(obj); + err = i915_gem_object_pin_pages_unlocked(obj); if (err) { i915_gem_object_put(obj); goto out; @@ -1394,8 +1406,10 @@ static int igt_ppgtt_sanity_check(void *arg) err = igt_write_huge(ctx, obj); + i915_gem_object_lock(obj, NULL); i915_gem_object_unpin_pages(obj); __i915_gem_object_put_pages(obj); + i915_gem_object_unlock(obj); i915_gem_object_put(obj); if (err) { @@ -1440,7 +1454,7 @@ static int igt_tmpfs_fallback(void *arg) goto out_restore; } - vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB); + vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB); if (IS_ERR(vaddr)) { err = PTR_ERR(vaddr); goto out_put; From patchwork Fri Oct 16 10:44:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841373 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8DB68C433DF for ; Fri, 16 Oct 2020 10:45:23 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 3078A207F7 for ; Fri, 16 Oct 2020 10:45:23 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3078A207F7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 0BEB46EC37; Fri, 16 Oct 2020 10:45:03 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 27A336EAC2 for ; Fri, 16 Oct 2020 10:44:53 +0000 (UTC) From: Maarten Lankhorst 
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Oct 2020 12:44:24 +0200
Message-Id: <20201016104444.1492028-42-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 41/61] drm/i915/selftests: Prepare client blit for obj->mm.lock removal.

A straightforward conversion: switch a number of calls over to their
unlocked versions.

Signed-off-by: Maarten Lankhorst
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
index 4e36d4897ea6..cc782569765f 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
@@ -47,7 +47,7 @@ static int __igt_client_fill(struct intel_engine_cs *engine)
 		goto err_flush;
 	}

-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto err_put;
@@ -159,7 +159,7 @@ static int prepare_blit(const struct tiled_blits *t,
 	u32 src_pitch, dst_pitch;
 	u32 cmd, *cs;

-	cs = i915_gem_object_pin_map(batch, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(batch, I915_MAP_WC);
 	if (IS_ERR(cs))
 		return PTR_ERR(cs);

@@ -379,7 +379,7 @@ static int verify_buffer(const struct tiled_blits *t,
 	y = i915_prandom_u32_max_state(t->height, prng);
 	p = y * t->width + x;

-	vaddr = i915_gem_object_pin_map(buf->vma->obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(buf->vma->obj, I915_MAP_WC);
 	if (IS_ERR(vaddr))
 		return PTR_ERR(vaddr);

@@ -566,7 +566,7 @@ static int tiled_blits_prepare(struct tiled_blits *t,
 	int err;
 	int i;

-	map = i915_gem_object_pin_map(t->scratch.vma->obj, I915_MAP_WC);
+	map = i915_gem_object_pin_map_unlocked(t->scratch.vma->obj, I915_MAP_WC);
 	if (IS_ERR(map))
 		return PTR_ERR(map);

From patchwork Fri Oct 16 10:44:25 2020
From: Maarten Lankhorst
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Oct 2020 12:44:25 +0200
Message-Id: <20201016104444.1492028-43-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 42/61] drm/i915/selftests: Prepare coherency tests for obj->mm.lock removal.

A straightforward conversion: switch a number of calls over to their
unlocked versions.

Signed-off-by: Maarten Lankhorst
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
index 2e439bb269d6..42aa3c5e0621 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
@@ -159,7 +159,7 @@ static int wc_set(struct context *ctx, unsigned long offset, u32 v)
 	if (err)
 		return err;

-	map = i915_gem_object_pin_map(ctx->obj, I915_MAP_WC);
+	map = i915_gem_object_pin_map_unlocked(ctx->obj, I915_MAP_WC);
 	if (IS_ERR(map))
 		return PTR_ERR(map);

@@ -182,7 +182,7 @@ static int wc_get(struct context *ctx, unsigned long offset, u32 *v)
 	if (err)
 		return err;

-	map = i915_gem_object_pin_map(ctx->obj, I915_MAP_WC);
+	map = i915_gem_object_pin_map_unlocked(ctx->obj, I915_MAP_WC);
 	if (IS_ERR(map))
 		return PTR_ERR(map);

From patchwork Fri Oct 16 10:44:26 2020
From: Maarten Lankhorst
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Oct 2020 12:44:26 +0200
Message-Id: <20201016104444.1492028-44-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 43/61] drm/i915/selftests: Prepare context tests for obj->mm.lock removal.

A straightforward conversion: switch a number of calls over to their
unlocked versions.

Signed-off-by: Maarten Lankhorst
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
index d3f87dc4eda3..5fef592390cb 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
@@ -1094,7 +1094,7 @@ __read_slice_count(struct intel_context *ce,
 	if (ret < 0)
 		return ret;

-	buf = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	buf = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(buf)) {
 		ret = PTR_ERR(buf);
 		return ret;
@@ -1511,7 +1511,7 @@ static int write_to_scratch(struct i915_gem_context *ctx,
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);

-	cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto out;
@@ -1622,7 +1622,7 @@ static int read_from_scratch(struct i915_gem_context *ctx,
 	if (err)
 		goto out_vm;

-	cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto out;
@@ -1658,7 +1658,7 @@ static int read_from_scratch(struct i915_gem_context *ctx,
 	if (err)
 		goto out_vm;

-	cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto out;
@@ -1715,7 +1715,7 @@ static int read_from_scratch(struct i915_gem_context *ctx,
 	if (err)
 		goto out_vm;

-	cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto out_vm;

From patchwork Fri Oct 16 10:44:27 2020
From: Maarten Lankhorst
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Oct 2020 12:44:27 +0200
Message-Id: <20201016104444.1492028-45-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 44/61] drm/i915/selftests: Prepare dma-buf tests for obj->mm.lock removal.

Use pin_pages_unlocked() where we don't hold the object lock.
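For reference, this patch does not show the unlocked wrapper itself; it
presumably follows the same ww acquire/backoff pattern used elsewhere in
this series (compare selftest_tl_pin in the timeline patch further down).
A sketch under that assumption, not the actual helper:

/* Sketch only: assumed shape of the unlocked wrapper, following the
 * ww retry pattern this series uses elsewhere. */
int i915_gem_object_pin_pages_unlocked(struct drm_i915_gem_object *obj)
{
	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, true);
retry:
	err = i915_gem_object_lock(obj, &ww);	/* take the object's dma-resv */
	if (!err)
		err = i915_gem_object_pin_pages(obj);
	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);
	return err;
}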
Signed-off-by: Maarten Lankhorst
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
index b6d43880b0c1..dd74bc09ec88 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
@@ -194,7 +194,7 @@ static int igt_dmabuf_import_ownership(void *arg)

 	dma_buf_put(dmabuf);

-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err) {
 		pr_err("i915_gem_object_pin_pages failed with err=%d\n", err);
 		goto out_obj;

From patchwork Fri Oct 16 10:44:28 2020
From: Maarten Lankhorst
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Oct 2020 12:44:28 +0200
Message-Id: <20201016104444.1492028-46-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 45/61] drm/i915/selftests: Prepare execbuf tests for obj->mm.lock removal.

Also quite simple: a single call needs to use the unlocked version.
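The calling convention the conversion establishes, sketched for
illustration (variable names follow the diff below; 'expected' is an
invented value, not part of the patch):

/* Illustration only: a selftest that holds no locks maps its scratch
 * object via the unlocked helper, checks a result, then unpins. */
u32 *map;

map = i915_gem_object_pin_map_unlocked(scratch, I915_MAP_WC);
if (IS_ERR(map))
	return PTR_ERR(map);

/* ... submit the GPU relocation and wait for it ... */

if (map[0] != expected)		/* 'expected' is hypothetical */
	err = -EINVAL;

i915_gem_object_unpin_map(scratch);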
Signed-off-by: Maarten Lankhorst
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c
index e1d50a5a1477..4df505e4c53a 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c
@@ -116,7 +116,7 @@ static int igt_gpu_reloc(void *arg)
 	if (IS_ERR(scratch))
 		return PTR_ERR(scratch);

-	map = i915_gem_object_pin_map(scratch, I915_MAP_WC);
+	map = i915_gem_object_pin_map_unlocked(scratch, I915_MAP_WC);
 	if (IS_ERR(map)) {
 		err = PTR_ERR(map);
 		goto err_scratch;

From patchwork Fri Oct 16 10:44:29 2020
From: Maarten Lankhorst
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Oct 2020 12:44:29 +0200
Message-Id: <20201016104444.1492028-47-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 46/61] drm/i915/selftests: Prepare mman testcases for obj->mm.lock removal.

Ensure we hold the lock around put_pages, and use the unlocked wrappers
for pinning pages and mappings.
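The put_pages half of that is a three-line pattern; a minimal sketch,
assuming i915_gem_object_lock(obj, NULL) takes the object's dma-resv
without a ww acquire context:

/* Sketch of the locking pattern the diff below applies. */
i915_gem_object_lock(obj, NULL);	/* NULL: no ww acquire context */
__i915_gem_object_put_pages(obj);	/* now called with dma-resv held */
i915_gem_object_unlock(obj);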
Signed-off-by: Maarten Lankhorst
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 3ac7628f3bc4..85fff8bed08c 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -321,7 +321,7 @@ static int igt_partial_tiling(void *arg)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);

-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err) {
 		pr_err("Failed to allocate %u pages (%lu total), err=%d\n",
 		       nreal, obj->base.size / PAGE_SIZE, err);
@@ -458,7 +458,7 @@ static int igt_smoke_tiling(void *arg)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);

-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err) {
 		pr_err("Failed to allocate %u pages (%lu total), err=%d\n",
 		       nreal, obj->base.size / PAGE_SIZE, err);
@@ -797,7 +797,7 @@ static int wc_set(struct drm_i915_gem_object *obj)
 {
 	void *vaddr;

-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(vaddr))
 		return PTR_ERR(vaddr);

@@ -813,7 +813,7 @@ static int wc_check(struct drm_i915_gem_object *obj)
 	void *vaddr;
 	int err = 0;

-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(vaddr))
 		return PTR_ERR(vaddr);

@@ -1315,7 +1315,9 @@ static int __igt_mmap_revoke(struct drm_i915_private *i915,
 	}

 	if (type != I915_MMAP_TYPE_GTT) {
+		i915_gem_object_lock(obj, NULL);
 		__i915_gem_object_put_pages(obj);
+		i915_gem_object_unlock(obj);
 		if (i915_gem_object_has_pages(obj)) {
 			pr_err("Failed to put-pages object!\n");
 			err = -EINVAL;

From patchwork Fri Oct 16 10:44:30 2020
From: Maarten Lankhorst
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Oct 2020 12:44:30 +0200
Message-Id: <20201016104444.1492028-48-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 47/61] drm/i915/selftests: Prepare object tests for obj->mm.lock removal.

Convert a single pin_pages call to use the unlocked version.

Signed-off-by: Maarten Lankhorst
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c
index bf853c40ec65..740ee8086a27 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c
@@ -47,7 +47,7 @@ static int igt_gem_huge(void *arg)
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);

-	err = i915_gem_object_pin_pages(obj);
+	err = i915_gem_object_pin_pages_unlocked(obj);
 	if (err) {
 		pr_err("Failed to allocate %u pages (%lu total), err=%d\n",
 		       nreal, obj->base.size / PAGE_SIZE, err);

From patchwork Fri Oct 16 10:44:31 2020
From: Maarten Lankhorst
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Oct 2020 12:44:31 +0200
Message-Id: <20201016104444.1492028-49-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 48/61] drm/i915/selftests: Prepare object blit tests for obj->mm.lock removal.

Use the unlocked versions where we're not holding the ww lock.

Signed-off-by: Maarten Lankhorst
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c
index 23b6e11bbc3e..ee9496f3d11d 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c
@@ -262,7 +262,7 @@ static int igt_fill_blt_thread(void *arg)
 		goto err_flush;
 	}

-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto err_put;
@@ -380,7 +380,7 @@ static int igt_copy_blt_thread(void *arg)
 		goto err_flush;
 	}

-	vaddr = i915_gem_object_pin_map(src, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map_unlocked(src, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto err_put_src;
@@ -400,7 +400,7 @@ static int igt_copy_blt_thread(void *arg)
 		goto err_put_src;
 	}

-	vaddr = i915_gem_object_pin_map(dst, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map_unlocked(dst, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto err_put_dst;

From patchwork Fri Oct 16 10:44:32 2020
From: Maarten Lankhorst
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Oct 2020 12:44:32 +0200
Message-Id: <20201016104444.1492028-50-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 49/61] drm/i915/selftests: Prepare igt_gem_utils for obj->mm.lock removal

igt_emit_store_dw needs to use the unlocked version, as it's not
holding a lock. This fixes igt_gpu_fill_dw(), which is used by some
other selftests.

Signed-off-by: Maarten Lankhorst
---
 drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c b/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c
index e21b5023ca7d..f4e85b4a347d 100644
--- a/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c
+++ b/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c
@@ -54,7 +54,7 @@ igt_emit_store_dw(struct i915_vma *vma,
 	if (IS_ERR(obj))
 		return ERR_CAST(obj);

-	cmd = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(cmd)) {
 		err = PTR_ERR(cmd);
 		goto err;

From patchwork Fri Oct 16 10:44:33 2020
From: Maarten Lankhorst
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Oct 2020 12:44:33 +0200
Message-Id: <20201016104444.1492028-51-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 50/61] drm/i915/selftests: Prepare context selftest for obj->mm.lock removal

This only needs a single call converted to the unlocked version.

Signed-off-by: Maarten Lankhorst
---
 drivers/gpu/drm/i915/gt/selftest_context.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_context.c b/drivers/gpu/drm/i915/gt/selftest_context.c
index 1f4020e906a8..d9b0ebc938f1 100644
--- a/drivers/gpu/drm/i915/gt/selftest_context.c
+++ b/drivers/gpu/drm/i915/gt/selftest_context.c
@@ -88,8 +88,8 @@ static int __live_context_size(struct intel_engine_cs *engine)
 	if (err)
 		goto err;

-	vaddr = i915_gem_object_pin_map(ce->state->obj,
-					i915_coherent_map_type(engine->i915));
+	vaddr = i915_gem_object_pin_map_unlocked(ce->state->obj,
+						 i915_coherent_map_type(engine->i915));
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		intel_context_unpin(ce);

From patchwork Fri Oct 16 10:44:34 2020
From: Maarten Lankhorst
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Oct 2020 12:44:34 +0200
Message-Id: <20201016104444.1492028-52-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 51/61] drm/i915/selftests: Prepare hangcheck for obj->mm.lock removal

Convert a few calls to use the unlocked versions.
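Both this patch and the previous one pick the mapping mode via
i915_coherent_map_type(). As a reading aid, that helper presumably
reduces to an LLC check, along these lines (a sketch, not quoted from
the driver):

/* Sketch (assumed implementation): use a cacheable WB mapping where the
 * CPU and GPU share the LLC, and WC everywhere else, so that CPU writes
 * stay coherent with what the GPU reads. */
static inline enum i915_map_type
i915_coherent_map_type(struct drm_i915_private *i915)
{
	return HAS_LLC(i915) ? I915_MAP_WB : I915_MAP_WC;
}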
Signed-off-by: Maarten Lankhorst
---
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
index fb5ebf930ab2..e3027cebab5b 100644
--- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
+++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
@@ -80,15 +80,15 @@ static int hang_init(struct hang *h, struct intel_gt *gt)
 	}
 	i915_gem_object_set_cache_coherency(h->hws, I915_CACHE_LLC);

-	vaddr = i915_gem_object_pin_map(h->hws, I915_MAP_WB);
+	vaddr = i915_gem_object_pin_map_unlocked(h->hws, I915_MAP_WB);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto err_obj;
 	}
 	h->seqno = memset(vaddr, 0xff, PAGE_SIZE);

-	vaddr = i915_gem_object_pin_map(h->obj,
-					i915_coherent_map_type(gt->i915));
+	vaddr = i915_gem_object_pin_map_unlocked(h->obj,
+						 i915_coherent_map_type(gt->i915));
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto err_unpin_hws;
@@ -149,7 +149,7 @@ hang_create_request(struct hang *h, struct intel_engine_cs *engine)
 		return ERR_CAST(obj);
 	}

-	vaddr = i915_gem_object_pin_map(obj, i915_coherent_map_type(gt->i915));
+	vaddr = i915_gem_object_pin_map_unlocked(obj, i915_coherent_map_type(gt->i915));
 	if (IS_ERR(vaddr)) {
 		i915_gem_object_put(obj);
 		i915_vm_put(vm);

From patchwork Fri Oct 16 10:44:35 2020
From: Maarten Lankhorst
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Oct 2020 12:44:35 +0200
Message-Id: <20201016104444.1492028-53-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 52/61] drm/i915/selftests: Prepare execlists for obj->mm.lock removal

Convert the normal functions to their unlocked versions where needed.

Signed-off-by: Maarten Lankhorst
---
 drivers/gpu/drm/i915/gt/selftest_lrc.c | 34 +++++++++++++-------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 95d41c01d0e0..124011f6fb51 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -1007,7 +1007,7 @@ static int live_timeslice_preempt(void *arg)
 		goto err_obj;
 	}

-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto err_obj;
@@ -1315,7 +1315,7 @@ static int live_timeslice_queue(void *arg)
 		goto err_obj;
 	}

-	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(vaddr)) {
 		err = PTR_ERR(vaddr);
 		goto err_obj;
@@ -1562,7 +1562,7 @@ static int live_busywait_preempt(void *arg)
 		goto err_ctx_lo;
 	}

-	map = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	map = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(map)) {
 		err = PTR_ERR(map);
 		goto err_obj;
@@ -2678,7 +2678,7 @@ static int create_gang(struct intel_engine_cs *engine,
 	if (err)
 		goto err_obj;

-	cs = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(cs))
 		goto err_obj;
@@ -2960,7 +2960,7 @@ static int live_preempt_gang(void *arg)
 		 * it will terminate the next lowest spinner until there
 		 * are no more spinners and the gang is complete.
 		 */
-		cs = i915_gem_object_pin_map(rq->batch->obj, I915_MAP_WC);
+		cs = i915_gem_object_pin_map_unlocked(rq->batch->obj, I915_MAP_WC);
 		if (!IS_ERR(cs)) {
 			*cs = 0;
 			i915_gem_object_unpin_map(rq->batch->obj);
@@ -3025,7 +3025,7 @@ create_gpr_user(struct intel_engine_cs *engine,
 		return ERR_PTR(err);
 	}

-	cs = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
 	if (IS_ERR(cs)) {
 		i915_vma_put(vma);
 		return ERR_CAST(cs);
@@ -3235,7 +3235,7 @@ static int live_preempt_user(void *arg)
 	if (IS_ERR(global))
 		return PTR_ERR(global);

-	result = i915_gem_object_pin_map(global->obj, I915_MAP_WC);
+	result = i915_gem_object_pin_map_unlocked(global->obj, I915_MAP_WC);
 	if (IS_ERR(result)) {
 		i915_vma_unpin_and_release(&global, 0);
 		return PTR_ERR(result);
@@ -3628,7 +3628,7 @@ static int live_preempt_smoke(void *arg)
 		goto err_free;
 	}

-	cs = i915_gem_object_pin_map(smoke.batch, I915_MAP_WB);
+	cs = i915_gem_object_pin_map_unlocked(smoke.batch, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto err_batch;
@@ -4231,7 +4231,7 @@ static int preserved_virtual_engine(struct intel_gt *gt,
 		goto out_end;
 	}

-	cs = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
+	cs = i915_gem_object_pin_map_unlocked(scratch->obj, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto out_end;
@@ -5259,7 +5259,7 @@ static int __live_lrc_gpr(struct intel_engine_cs *engine,
 		goto err_rq;
 	}

-	cs = i915_gem_object_pin_map(scratch->obj, I915_MAP_WB);
+	cs = i915_gem_object_pin_map_unlocked(scratch->obj, I915_MAP_WB);
 	if (IS_ERR(cs)) {
 		err = PTR_ERR(cs);
 		goto err_rq;
@@ -5553,7 +5553,7 @@ store_context(struct intel_context *ce, struct i915_vma *scratch)
 	if (IS_ERR(batch))
 		return batch;

-	cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(batch->obj, I915_MAP_WC);
 	if (IS_ERR(cs)) {
 		i915_vma_put(batch);
 		return ERR_CAST(cs);
@@ -5717,7 +5717,7 @@ static struct i915_vma *load_context(struct intel_context *ce, u32 poison)
 	if (IS_ERR(batch))
 		return batch;

-	cs = i915_gem_object_pin_map(batch->obj, I915_MAP_WC);
+	cs = i915_gem_object_pin_map_unlocked(batch->obj, I915_MAP_WC);
 	if (IS_ERR(cs)) {
 		i915_vma_put(batch);
 		return ERR_CAST(cs);
@@ -5831,29 +5831,29 @@ static int compare_isolation(struct intel_engine_cs *engine,
 	u32 *defaults;
 	int err = 0;

-	A[0] = i915_gem_object_pin_map(ref[0]->obj, I915_MAP_WC);
+	A[0] = i915_gem_object_pin_map_unlocked(ref[0]->obj, I915_MAP_WC);
 	if (IS_ERR(A[0]))
 		return PTR_ERR(A[0]);

-	A[1] = i915_gem_object_pin_map(ref[1]->obj, I915_MAP_WC);
+	A[1] = i915_gem_object_pin_map_unlocked(ref[1]->obj, I915_MAP_WC);
 	if (IS_ERR(A[1])) {
 		err = PTR_ERR(A[1]);
 		goto err_A0;
 	}

-	B[0] = i915_gem_object_pin_map(result[0]->obj, I915_MAP_WC);
+	B[0] = i915_gem_object_pin_map_unlocked(result[0]->obj, I915_MAP_WC);
 	if (IS_ERR(B[0])) {
 		err = PTR_ERR(B[0]);
 		goto err_A1;
 	}

-	B[1] = i915_gem_object_pin_map(result[1]->obj, I915_MAP_WC);
+	B[1] = i915_gem_object_pin_map_unlocked(result[1]->obj, I915_MAP_WC);
 	if (IS_ERR(B[1])) {
 		err = PTR_ERR(B[1]);
 		goto err_B0;
 	}

-	lrc = i915_gem_object_pin_map(ce->state->obj,
+	lrc = i915_gem_object_pin_map_unlocked(ce->state->obj,
 				      i915_coherent_map_type(engine->i915));
 	if (IS_ERR(lrc)) {
 		err = PTR_ERR(lrc);

From patchwork Fri Oct 16 10:44:36 2020
From: Maarten Lankhorst
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Oct 2020 12:44:36 +0200
Message-Id: <20201016104444.1492028-54-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 53/61] drm/i915/selftests: Prepare mocs tests for obj->mm.lock removal

Use pin_map_unlocked when we're not holding locks.

Signed-off-by: Maarten Lankhorst
---
 drivers/gpu/drm/i915/gt/selftest_mocs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_mocs.c b/drivers/gpu/drm/i915/gt/selftest_mocs.c
index b25eba50c88e..5765c31fa80f 100644
--- a/drivers/gpu/drm/i915/gt/selftest_mocs.c
+++ b/drivers/gpu/drm/i915/gt/selftest_mocs.c
@@ -105,7 +105,7 @@ static int live_mocs_init(struct live_mocs *arg, struct intel_gt *gt)
 	if (IS_ERR(arg->scratch))
 		return PTR_ERR(arg->scratch);

-	arg->vaddr = i915_gem_object_pin_map(arg->scratch->obj, I915_MAP_WB);
+	arg->vaddr = i915_gem_object_pin_map_unlocked(arg->scratch->obj, I915_MAP_WB);
 	if (IS_ERR(arg->vaddr)) {
 		err = PTR_ERR(arg->vaddr);
 		goto err_scratch;

From patchwork Fri Oct 16 10:44:37 2020
From: Maarten Lankhorst
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Oct 2020 12:44:37 +0200
Message-Id: <20201016104444.1492028-55-maarten.lankhorst@linux.intel.com>
Subject: [Intel-gfx] [PATCH v4 54/61] drm/i915/selftests: Prepare ring submission for obj->mm.lock removal

Use unlocked versions when the ww lock is not held.
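Stated as code, the rule this patch (and its siblings) applies might
look like the following sketch; the ww pointer and the surrounding
plumbing are assumed for illustration, not taken from the patch:

/* Illustration only: which pin_map flavour to call depends on whether
 * the caller already holds the object's dma-resv via a ww context. */
if (ww) {
	err = i915_gem_object_lock(obj, ww);	/* lock owned by caller's ctx */
	if (err)
		return err;
	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
} else {
	/* No lock held: let the helper do lock/pin/unlock itself. */
	vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC);
}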
Signed-off-by: Maarten Lankhorst --- drivers/gpu/drm/i915/gt/selftest_ring_submission.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/selftest_ring_submission.c b/drivers/gpu/drm/i915/gt/selftest_ring_submission.c index 3350e7c995bc..99609271c3a7 100644 --- a/drivers/gpu/drm/i915/gt/selftest_ring_submission.c +++ b/drivers/gpu/drm/i915/gt/selftest_ring_submission.c @@ -35,7 +35,7 @@ static struct i915_vma *create_wally(struct intel_engine_cs *engine) return ERR_PTR(err); } - cs = i915_gem_object_pin_map(obj, I915_MAP_WC); + cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC); if (IS_ERR(cs)) { i915_gem_object_put(obj); return ERR_CAST(cs); @@ -212,7 +212,7 @@ static int __live_ctx_switch_wa(struct intel_engine_cs *engine) if (IS_ERR(bb)) return PTR_ERR(bb); - result = i915_gem_object_pin_map(bb->obj, I915_MAP_WC); + result = i915_gem_object_pin_map_unlocked(bb->obj, I915_MAP_WC); if (IS_ERR(result)) { intel_context_put(bb->private); i915_vma_unpin_and_release(&bb, 0); From patchwork Fri Oct 16 10:44:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841487 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.9 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,UNWANTED_LANGUAGE_BODY,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D66E5C3815A for ; Fri, 16 Oct 2020 10:45:44 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 7C6DD2084C for ; Fri, 16 Oct 2020 10:45:44 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7C6DD2084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 9A8366EC3B; Fri, 16 Oct 2020 10:45:23 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [IPv6:2a02:2308::216:3eff:fe92:dfa3]) by gabe.freedesktop.org (Postfix) with ESMTPS id 0FD656EAC4 for ; Fri, 16 Oct 2020 10:44:55 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:38 +0200 Message-Id: <20201016104444.1492028-56-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 55/61] drm/i915/selftests: Prepare timeline tests for obj->mm.lock removal X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" We can no longer call intel_timeline_pin with a null argument, so add a ww loop that locks the 
backing object. Signed-off-by: Maarten Lankhorst --- drivers/gpu/drm/i915/gt/selftest_timeline.c | 26 ++++++++++++++++++--- 1 file changed, 23 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c index 6d6092a28e6b..cd8374780f7c 100644 --- a/drivers/gpu/drm/i915/gt/selftest_timeline.c +++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c @@ -36,6 +36,26 @@ static unsigned long hwsp_cacheline(struct intel_timeline *tl) return (address + offset_in_page(tl->hwsp_offset)) / CACHELINE_BYTES; } +static int selftest_tl_pin(struct intel_timeline *tl) +{ + struct i915_gem_ww_ctx ww; + int err; + + i915_gem_ww_ctx_init(&ww, false); +retry: + err = i915_gem_object_lock(tl->hwsp_ggtt->obj, &ww); + if (!err) + err = intel_timeline_pin(tl, &ww); + + if (err == -EDEADLK) { + err = i915_gem_ww_ctx_backoff(&ww); + if (!err) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); + return err; +} + #define CACHELINES_PER_PAGE (PAGE_SIZE / CACHELINE_BYTES) struct mock_hwsp_freelist { @@ -77,7 +97,7 @@ static int __mock_hwsp_timeline(struct mock_hwsp_freelist *state, if (IS_ERR(tl)) return PTR_ERR(tl); - err = intel_timeline_pin(tl, NULL); + err = selftest_tl_pin(tl); if (err) { intel_timeline_put(tl); return err; @@ -463,7 +483,7 @@ checked_tl_write(struct intel_timeline *tl, struct intel_engine_cs *engine, u32 struct i915_request *rq; int err; - err = intel_timeline_pin(tl, NULL); + err = selftest_tl_pin(tl); if (err) { rq = ERR_PTR(err); goto out; @@ -663,7 +683,7 @@ static int live_hwsp_wrap(void *arg) if (!tl->has_initial_breadcrumb) goto out_free; - err = intel_timeline_pin(tl, NULL); + err = selftest_tl_pin(tl); if (err) goto out_free; From patchwork Fri Oct 16 10:44:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841477 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C8C56C2D0A4 for ; Fri, 16 Oct 2020 10:45:34 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 742952084C for ; Fri, 16 Oct 2020 10:45:34 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 742952084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id AC1966EC35; Fri, 16 Oct 2020 10:45:02 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 2C4ED6EABE for ; Fri, 16 Oct 2020 10:44:55 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:39 +0200 Message-Id: <20201016104444.1492028-57-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: 
<20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 56/61] drm/i915/selftests: Prepare i915_request tests for obj->mm.lock removal X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Straightforward conversion by using unlocked versions. Signed-off-by: Maarten Lankhorst --- drivers/gpu/drm/i915/selftests/i915_request.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c b/drivers/gpu/drm/i915/selftests/i915_request.c index 64bbb8288249..a677e6851573 100644 --- a/drivers/gpu/drm/i915/selftests/i915_request.c +++ b/drivers/gpu/drm/i915/selftests/i915_request.c @@ -619,7 +619,7 @@ static struct i915_vma *empty_batch(struct drm_i915_private *i915) if (IS_ERR(obj)) return ERR_CAST(obj); - cmd = i915_gem_object_pin_map(obj, I915_MAP_WB); + cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB); if (IS_ERR(cmd)) { err = PTR_ERR(cmd); goto err; @@ -781,7 +781,7 @@ static struct i915_vma *recursive_batch(struct drm_i915_private *i915) if (err) goto err; - cmd = i915_gem_object_pin_map(obj, I915_MAP_WC); + cmd = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC); if (IS_ERR(cmd)) { err = PTR_ERR(cmd); goto err; @@ -816,7 +816,7 @@ static int recursive_batch_resolve(struct i915_vma *batch) { u32 *cmd; - cmd = i915_gem_object_pin_map(batch->obj, I915_MAP_WC); + cmd = i915_gem_object_pin_map_unlocked(batch->obj, I915_MAP_WC); if (IS_ERR(cmd)) return PTR_ERR(cmd); @@ -1069,8 +1069,8 @@ static int live_sequential_engines(void *arg) if (!request[idx]) break; - cmd = i915_gem_object_pin_map(request[idx]->batch->obj, - I915_MAP_WC); + cmd = i915_gem_object_pin_map_unlocked(request[idx]->batch->obj, + I915_MAP_WC); if (!IS_ERR(cmd)) { *cmd = MI_BATCH_BUFFER_END; From patchwork Fri Oct 16 10:44:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841367 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 06592C4363A for ; Fri, 16 Oct 2020 10:45:19 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id A28472084C for ; Fri, 16 Oct 2020 10:45:18 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A28472084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id CF5956EB21; Fri, 16 Oct 2020 10:44:58 
+0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 508766EABC for ; Fri, 16 Oct 2020 10:44:55 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:40 +0200 Message-Id: <20201016104444.1492028-58-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 57/61] drm/i915/selftests: Prepare memory region tests for obj->mm.lock removal X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Use the unlocked variants for pin_map and pin_pages, and add lock around unpinning/putting pages. Signed-off-by: Maarten Lankhorst --- .../drm/i915/selftests/intel_memory_region.c | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c index 334b0648e253..ccd4b65a272f 100644 --- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c +++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c @@ -31,10 +31,12 @@ static void close_objects(struct intel_memory_region *mem, struct drm_i915_gem_object *obj, *on; list_for_each_entry_safe(obj, on, objects, st_link) { + i915_gem_object_lock(obj, NULL); if (i915_gem_object_has_pinned_pages(obj)) i915_gem_object_unpin_pages(obj); /* No polluting the memory region between tests */ __i915_gem_object_put_pages(obj); + i915_gem_object_unlock(obj); list_del(&obj->st_link); i915_gem_object_put(obj); } @@ -69,7 +71,7 @@ static int igt_mock_fill(void *arg) break; } - err = i915_gem_object_pin_pages(obj); + err = i915_gem_object_pin_pages_unlocked(obj); if (err) { i915_gem_object_put(obj); break; @@ -109,7 +111,7 @@ igt_object_create(struct intel_memory_region *mem, if (IS_ERR(obj)) return obj; - err = i915_gem_object_pin_pages(obj); + err = i915_gem_object_pin_pages_unlocked(obj); if (err) goto put; @@ -123,8 +125,10 @@ igt_object_create(struct intel_memory_region *mem, static void igt_object_release(struct drm_i915_gem_object *obj) { + i915_gem_object_lock(obj, NULL); i915_gem_object_unpin_pages(obj); __i915_gem_object_put_pages(obj); + i915_gem_object_unlock(obj); list_del(&obj->st_link); i915_gem_object_put(obj); } @@ -280,7 +284,7 @@ static int igt_cpu_check(struct drm_i915_gem_object *obj, u32 dword, u32 val) if (err) return err; - ptr = i915_gem_object_pin_map(obj, I915_MAP_WC); + ptr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC); if (IS_ERR(ptr)) return PTR_ERR(ptr); @@ -385,7 +389,7 @@ static int igt_lmem_create(void *arg) if (IS_ERR(obj)) return PTR_ERR(obj); - err = i915_gem_object_pin_pages(obj); + err = i915_gem_object_pin_pages_unlocked(obj); if (err) goto out_put; @@ -424,7 +428,7 @@ static int igt_lmem_write_gpu(void *arg) goto out_file; } - err = i915_gem_object_pin_pages(obj); + err = i915_gem_object_pin_pages_unlocked(obj); if (err) goto out_put; @@ -496,7 +500,7 @@ static int igt_lmem_write_cpu(void *arg) if (IS_ERR(obj)) return PTR_ERR(obj); - vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC); + vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WC); if 
(IS_ERR(vaddr)) { err = PTR_ERR(vaddr); goto out_put; @@ -600,7 +604,7 @@ create_region_for_mapping(struct intel_memory_region *mr, u64 size, u32 type, return obj; } - addr = i915_gem_object_pin_map(obj, type); + addr = i915_gem_object_pin_map_unlocked(obj, type); if (IS_ERR(addr)) { i915_gem_object_put(obj); if (PTR_ERR(addr) == -ENXIO) From patchwork Fri Oct 16 10:44:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841381 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CC6C7C433DF for ; Fri, 16 Oct 2020 10:45:27 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 6BE9420872 for ; Fri, 16 Oct 2020 10:45:27 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6BE9420872 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id D44266EB8B; Fri, 16 Oct 2020 10:45:00 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [IPv6:2a02:2308::216:3eff:fe92:dfa3]) by gabe.freedesktop.org (Postfix) with ESMTPS id 600F96EAC2 for ; Fri, 16 Oct 2020 10:44:55 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:41 +0200 Message-Id: <20201016104444.1492028-59-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 58/61] drm/i915/selftests: Prepare cs engine tests for obj->mm.lock removal X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Same as other tests, use pin_map_unlocked. 
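For reference, the _unlocked variants used throughout these selftest conversions are thin wrappers that take the object's dma-resv lock around the locked call. A minimal sketch of the assumed wrapper shape (the signature matches the helper added earlier in this series; the body here is illustrative, not the authoritative implementation):

	void *i915_gem_object_pin_map_unlocked(struct drm_i915_gem_object *obj,
					       enum i915_map_type type)
	{
		void *ret;

		/* with obj->mm.lock gone, the dma-resv lock guards the pages */
		i915_gem_object_lock(obj, NULL);
		ret = i915_gem_object_pin_map(obj, type);
		i915_gem_object_unlock(obj);

		return ret;
	}
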
Signed-off-by: Maarten Lankhorst --- drivers/gpu/drm/i915/gt/selftest_engine_cs.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c index 729c3c7b11e2..853d1f02131a 100644 --- a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c @@ -72,7 +72,7 @@ static struct i915_vma *create_empty_batch(struct intel_context *ce) if (IS_ERR(obj)) return ERR_CAST(obj); - cs = i915_gem_object_pin_map(obj, I915_MAP_WB); + cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB); if (IS_ERR(cs)) { err = PTR_ERR(cs); goto err_put; @@ -208,7 +208,7 @@ static struct i915_vma *create_nop_batch(struct intel_context *ce) if (IS_ERR(obj)) return ERR_CAST(obj); - cs = i915_gem_object_pin_map(obj, I915_MAP_WB); + cs = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB); if (IS_ERR(cs)) { err = PTR_ERR(cs); goto err_put; From patchwork Fri Oct 16 10:44:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841385 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 15F2FC2D0A2 for ; Fri, 16 Oct 2020 10:45:30 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id A06A6207F7 for ; Fri, 16 Oct 2020 10:45:29 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A06A6207F7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 150796EB9D; Fri, 16 Oct 2020 10:45:02 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id 842636EABE for ; Fri, 16 Oct 2020 10:44:55 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:42 +0200 Message-Id: <20201016104444.1492028-60-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 59/61] drm/i915/selftests: Prepare gtt tests for obj->mm.lock removal X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" We need to lock the global gtt dma_resv, use i915_vm_lock_objects to handle this correctly. Add ww handling for this where required. 
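The ww conversions below all follow the same transaction shape; as a sketch, distilled from the hunks that follow:

	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, false);
retry:
	err = i915_vm_lock_objects(vm, &ww);
	if (err)
		goto out;

	/* operate on the vm's page-table objects under the ww context */

out:
	if (err == -EDEADLK) {
		/* back off: drop all held locks, wait for the contended one */
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);
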
Add the object lock around unpin/put pages, and use the unlocked versions of pin_pages and pin_map where required. Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 92 ++++++++++++++----- 1 file changed, 67 insertions(+), 25 deletions(-) diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c index 2cfe99c79034..d07dd6780005 100644 --- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c @@ -129,7 +129,7 @@ fake_dma_object(struct drm_i915_private *i915, u64 size) obj->cache_level = I915_CACHE_NONE; /* Preallocate the "backing storage" */ - if (i915_gem_object_pin_pages(obj)) + if (i915_gem_object_pin_pages_unlocked(obj)) goto err_obj; i915_gem_object_unpin_pages(obj); @@ -145,6 +145,7 @@ static int igt_ppgtt_alloc(void *arg) { struct drm_i915_private *dev_priv = arg; struct i915_ppgtt *ppgtt; + struct i915_gem_ww_ctx ww; u64 size, last, limit; int err = 0; @@ -170,6 +171,12 @@ static int igt_ppgtt_alloc(void *arg) limit = totalram_pages() << PAGE_SHIFT; limit = min(ppgtt->vm.total, limit); + i915_gem_ww_ctx_init(&ww, false); +retry: + err = i915_vm_lock_objects(&ppgtt->vm, &ww); + if (err) + goto err_ppgtt_cleanup; + /* Check we can allocate the entire range */ for (size = 4096; size <= limit; size <<= 2) { struct i915_vm_pt_stash stash = {}; @@ -214,6 +221,13 @@ static int igt_ppgtt_alloc(void *arg) } err_ppgtt_cleanup: + if (err == -EDEADLK) { + err = i915_gem_ww_ctx_backoff(&ww); + if (!err) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); + i915_vm_put(&ppgtt->vm); return err; } @@ -275,7 +289,7 @@ static int lowlevel_hole(struct i915_address_space *vm, GEM_BUG_ON(obj->base.size != BIT_ULL(size)); - if (i915_gem_object_pin_pages(obj)) { + if (i915_gem_object_pin_pages_unlocked(obj)) { i915_gem_object_put(obj); kfree(order); break; @@ -296,20 +310,36 @@ static int lowlevel_hole(struct i915_address_space *vm, if (vm->allocate_va_range) { struct i915_vm_pt_stash stash = {}; + struct i915_gem_ww_ctx ww; + int err; + + i915_gem_ww_ctx_init(&ww, false); +retry: + err = i915_vm_lock_objects(vm, &ww); + if (err) + goto alloc_vm_end; + err = -ENOMEM; if (i915_vm_alloc_pt_stash(vm, &stash, BIT_ULL(size))) - break; - - if (i915_vm_pin_pt_stash(vm, &stash)) { - i915_vm_free_pt_stash(vm, &stash); - break; - } + goto alloc_vm_end; - vm->allocate_va_range(vm, &stash, - addr, BIT_ULL(size)); + err = i915_vm_pin_pt_stash(vm, &stash); + if (!err) + vm->allocate_va_range(vm, &stash, + addr, BIT_ULL(size)); i915_vm_free_pt_stash(vm, &stash); +alloc_vm_end: + if (err == -EDEADLK) { + err = i915_gem_ww_ctx_backoff(&ww); + if (!err) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); + + if (err) + break; } mock_vma->pages = obj->mm.pages; @@ -1165,7 +1195,7 @@ static int igt_ggtt_page(void *arg) if (IS_ERR(obj)) return PTR_ERR(obj); - err = i915_gem_object_pin_pages(obj); + err = i915_gem_object_pin_pages_unlocked(obj); if (err) goto out_free; @@ -1332,7 +1362,7 @@ static int igt_gtt_reserve(void *arg) goto out; } - err = i915_gem_object_pin_pages(obj); + err = i915_gem_object_pin_pages_unlocked(obj); if (err) { i915_gem_object_put(obj); goto out; @@ -1384,7 +1414,7 @@ static int igt_gtt_reserve(void *arg) goto out; } - err = i915_gem_object_pin_pages(obj); + err = i915_gem_object_pin_pages_unlocked(obj); if (err) { i915_gem_object_put(obj); goto out; @@ -1548,7 +1578,7 @@ static int igt_gtt_insert(void *arg) goto out; } - 
err = i915_gem_object_pin_pages(obj); + err = i915_gem_object_pin_pages_unlocked(obj); if (err) { i915_gem_object_put(obj); goto out; @@ -1657,7 +1687,7 @@ static int igt_gtt_insert(void *arg) goto out; } - err = i915_gem_object_pin_pages(obj); + err = i915_gem_object_pin_pages_unlocked(obj); if (err) { i915_gem_object_put(obj); goto out; @@ -1828,7 +1858,7 @@ static int igt_cs_tlb(void *arg) goto out_vm; } - batch = i915_gem_object_pin_map(bbe, I915_MAP_WC); + batch = i915_gem_object_pin_map_unlocked(bbe, I915_MAP_WC); if (IS_ERR(batch)) { err = PTR_ERR(batch); goto out_put_bbe; @@ -1844,7 +1874,7 @@ static int igt_cs_tlb(void *arg) } /* Track the execution of each request by writing into different slot */ - batch = i915_gem_object_pin_map(act, I915_MAP_WC); + batch = i915_gem_object_pin_map_unlocked(act, I915_MAP_WC); if (IS_ERR(batch)) { err = PTR_ERR(batch); goto out_put_act; @@ -1891,7 +1921,7 @@ static int igt_cs_tlb(void *arg) goto out_put_out; GEM_BUG_ON(vma->node.start != vm->total - PAGE_SIZE); - result = i915_gem_object_pin_map(out, I915_MAP_WB); + result = i915_gem_object_pin_map_unlocked(out, I915_MAP_WB); if (IS_ERR(result)) { err = PTR_ERR(result); goto out_put_out; @@ -1907,6 +1937,7 @@ static int igt_cs_tlb(void *arg) while (!__igt_timeout(end_time, NULL)) { struct i915_vm_pt_stash stash = {}; struct i915_request *rq; + struct i915_gem_ww_ctx ww; u64 offset; offset = igt_random_offset(&prng, @@ -1925,19 +1956,30 @@ static int igt_cs_tlb(void *arg) if (err) goto end; + i915_gem_ww_ctx_init(&ww, false); +retry: + err = i915_vm_lock_objects(vm, &ww); + if (err) + goto end_ww; + err = i915_vm_alloc_pt_stash(vm, &stash, chunk_size); if (err) - goto end; + goto end_ww; err = i915_vm_pin_pt_stash(vm, &stash); - if (err) { - i915_vm_free_pt_stash(vm, &stash); - goto end; - } - - vm->allocate_va_range(vm, &stash, offset, chunk_size); + if (!err) + vm->allocate_va_range(vm, &stash, offset, chunk_size); i915_vm_free_pt_stash(vm, &stash); +end_ww: + if (err == -EDEADLK) { + err = i915_gem_ww_ctx_backoff(&ww); + if (!err) + goto retry; + } + i915_gem_ww_ctx_fini(&ww); + if (err) + goto end; /* Prime the TLB with the dummy pages */ for (i = 0; i < count; i++) { From patchwork Fri Oct 16 10:44:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841389 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D8C77C43457 for ; Fri, 16 Oct 2020 10:45:28 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 87E42207F7 for ; Fri, 16 Oct 2020 10:45:28 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 87E42207F7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by 
gabe.freedesktop.org (Postfix) with ESMTP id 6BC796EB28; Fri, 16 Oct 2020 10:45:00 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id A30316EABC for ; Fri, 16 Oct 2020 10:44:55 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:43 +0200 Message-Id: <20201016104444.1492028-61-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 60/61] drm/i915: Finally remove obj->mm.lock. X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" With all callers and selftests fixed to use ww locking, we can now finally remove this lock. Signed-off-by: Maarten Lankhorst Reviewed-by: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_object.c | 2 - drivers/gpu/drm/i915/gem/i915_gem_object.h | 5 +-- .../gpu/drm/i915/gem/i915_gem_object_types.h | 1 - drivers/gpu/drm/i915/gem/i915_gem_pages.c | 38 ++++--------------- drivers/gpu/drm/i915/gem/i915_gem_phys.c | 34 ++++------------- drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 2 +- drivers/gpu/drm/i915/gem/i915_gem_shrinker.c | 37 +++++++++++++----- drivers/gpu/drm/i915/gem/i915_gem_shrinker.h | 4 +- drivers/gpu/drm/i915/gem/i915_gem_tiling.c | 2 - drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 3 +- drivers/gpu/drm/i915/i915_debugfs.c | 4 +- drivers/gpu/drm/i915/i915_gem.c | 8 +--- drivers/gpu/drm/i915/i915_gem_gtt.c | 2 +- 13 files changed, 53 insertions(+), 89 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c index 028a556ab1a5..08d806bbf48e 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c @@ -62,8 +62,6 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj, const struct drm_i915_gem_object_ops *ops, struct lock_class_key *key, unsigned flags) { - mutex_init(&obj->mm.lock); - spin_lock_init(&obj->vma.lock); INIT_LIST_HEAD(&obj->vma.list); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index e7236224a29c..f6ccd05010df 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -123,7 +123,7 @@ static inline void assert_object_held_shared(struct drm_i915_gem_object *obj) */ if (IS_ENABLED(CONFIG_LOCKDEP) && kref_read(&obj->base.refcount) > 0) - lockdep_assert_held(&obj->mm.lock); + assert_object_held(obj); } static inline int __i915_gem_object_lock(struct drm_i915_gem_object *obj, @@ -328,7 +328,7 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj); static inline int __must_check i915_gem_object_pin_pages(struct drm_i915_gem_object *obj) { - might_lock(&obj->mm.lock); + assert_object_held(obj); if (atomic_inc_not_zero(&obj->mm.pages_pin_count)) return 0; @@ -374,7 +374,6 @@ i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj) } int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj); -int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj); void i915_gem_object_truncate(struct drm_i915_gem_object *obj); void 
i915_gem_object_writeback(struct drm_i915_gem_object *obj); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index 0aa391f5d73c..6ba8f5abef49 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -209,7 +209,6 @@ struct drm_i915_gem_object { * Protects the pages and their use. Do not use directly, but * instead go through the pin/unpin interfaces. */ - struct mutex lock; atomic_t pages_pin_count; atomic_t shrink_pin; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c index 81b1b560ad18..55ed1b7b06ce 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c @@ -67,7 +67,7 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj, struct list_head *list; unsigned long flags; - lockdep_assert_held(&obj->mm.lock); + assert_object_held(obj); spin_lock_irqsave(&i915->mm.obj_lock, flags); i915->mm.shrink_count++; @@ -114,9 +114,7 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj) { int err; - err = mutex_lock_interruptible(&obj->mm.lock); - if (err) - return err; + assert_object_held(obj); assert_object_held_shared(obj); @@ -125,15 +123,13 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj) err = ____i915_gem_object_get_pages(obj); if (err) - goto unlock; + return err; smp_mb__before_atomic(); } atomic_inc(&obj->mm.pages_pin_count); -unlock: - mutex_unlock(&obj->mm.lock); - return err; + return 0; } int i915_gem_object_pin_pages_unlocked(struct drm_i915_gem_object *obj) @@ -222,7 +218,7 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj) return pages; } -int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj) +int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj) { struct sg_table *pages; @@ -253,21 +249,6 @@ int __i915_gem_object_put_pages_locked(struct drm_i915_gem_object *obj) return 0; } -int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj) -{ - int err; - - if (i915_gem_object_has_pinned_pages(obj)) - return -EBUSY; - - /* May be called by shrinker from within get_pages() (on another bo) */ - mutex_lock(&obj->mm.lock); - err = __i915_gem_object_put_pages_locked(obj); - mutex_unlock(&obj->mm.lock); - - return err; -} - static inline pte_t iomap_pte(resource_size_t base, dma_addr_t offset, pgprot_t prot) @@ -384,9 +365,7 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj, !i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM)) return ERR_PTR(-ENXIO); - err = mutex_lock_interruptible(&obj->mm.lock); - if (err) - return ERR_PTR(err); + assert_object_held(obj); pinned = !(type & I915_MAP_OVERRIDE); type &= ~I915_MAP_OVERRIDE; @@ -428,15 +407,12 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj, obj->mm.mapping = page_pack_bits(ptr, type); } -out_unlock: - mutex_unlock(&obj->mm.lock); return ptr; err_unpin: atomic_dec(&obj->mm.pages_pin_count); err_unlock: - ptr = ERR_PTR(err); - goto out_unlock; + return ERR_PTR(err); } void *i915_gem_object_pin_map_unlocked(struct drm_i915_gem_object *obj, diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c index 15d8f8d52cbe..fee2e1ffba07 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c @@ -184,40 +184,22 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align) if (err) return err; - err = 
mutex_lock_interruptible(&obj->mm.lock); - if (err) - return err; + if (obj->mm.madv != I915_MADV_WILLNEED) + return -EFAULT; - if (unlikely(!i915_gem_object_has_struct_page(obj))) - goto out; + if (obj->mm.quirked) + return -EFAULT; - if (obj->mm.madv != I915_MADV_WILLNEED) { - err = -EFAULT; - goto out; - } - - if (obj->mm.quirked) { - err = -EFAULT; - goto out; - } - - if (obj->mm.mapping || i915_gem_object_has_pinned_pages(obj)) { - err = -EBUSY; - goto out; - } + if (obj->mm.mapping || i915_gem_object_has_pinned_pages(obj)) + return -EBUSY; if (unlikely(obj->mm.madv != I915_MADV_WILLNEED)) { drm_dbg(obj->base.dev, "Attempting to obtain a purgeable object\n"); - err = -EFAULT; - goto out; + return -EFAULT; } - err = i915_gem_object_shmem_to_phys(obj); - -out: - mutex_unlock(&obj->mm.lock); - return err; + return i915_gem_object_shmem_to_phys(obj); } #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index e0778b3cc0c3..5ae09be61c0b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -99,7 +99,7 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj) goto err_sg; } - i915_gem_shrink(i915, 2 * page_count, NULL, *s++); + i915_gem_shrink(NULL, i915, 2 * page_count, NULL, *s++); /* * We've tried hard to allocate the memory by reaping diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c index afc6e5b4dcf1..e42192834c88 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c @@ -93,7 +93,8 @@ static void try_to_writeback(struct drm_i915_gem_object *obj, * The number of pages of backing storage actually released. 
*/ unsigned long -i915_gem_shrink(struct drm_i915_private *i915, +i915_gem_shrink(struct i915_gem_ww_ctx *ww, + struct drm_i915_private *i915, unsigned long target, unsigned long *nr_scanned, unsigned int shrink) @@ -112,6 +113,7 @@ i915_gem_shrink(struct drm_i915_private *i915, intel_wakeref_t wakeref = 0; unsigned long count = 0; unsigned long scanned = 0; + int err; trace_i915_gem_shrink(i915, target, shrink); @@ -199,23 +201,38 @@ i915_gem_shrink(struct drm_i915_private *i915, spin_unlock_irqrestore(&i915->mm.obj_lock, flags); - if (unsafe_drop_pages(obj, shrink) && - mutex_trylock(&obj->mm.lock)) { + err = 0; + if (unsafe_drop_pages(obj, shrink)) { /* May arrive from get_pages on another bo */ - if (!__i915_gem_object_put_pages_locked(obj)) { + if (!ww) { + if (!i915_gem_object_trylock(obj)) + goto skip; + } else { + err = i915_gem_object_lock(obj, ww); + if (err) + goto skip; + } + + if (!__i915_gem_object_put_pages(obj)) { try_to_writeback(obj, shrink); count += obj->base.size >> PAGE_SHIFT; } - mutex_unlock(&obj->mm.lock); + if (!ww) + i915_gem_object_unlock(obj); } scanned += obj->base.size >> PAGE_SHIFT; +skip: i915_gem_object_put(obj); spin_lock_irqsave(&i915->mm.obj_lock, flags); + if (err) + break; } list_splice_tail(&still_in_list, phase->list); spin_unlock_irqrestore(&i915->mm.obj_lock, flags); + if (err) + return err; } if (shrink & I915_SHRINK_BOUND) @@ -246,7 +263,7 @@ unsigned long i915_gem_shrink_all(struct drm_i915_private *i915) unsigned long freed = 0; with_intel_runtime_pm(&i915->runtime_pm, wakeref) { - freed = i915_gem_shrink(i915, -1UL, NULL, + freed = i915_gem_shrink(NULL, i915, -1UL, NULL, I915_SHRINK_BOUND | I915_SHRINK_UNBOUND); } @@ -292,7 +309,7 @@ i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) sc->nr_scanned = 0; - freed = i915_gem_shrink(i915, + freed = i915_gem_shrink(NULL, i915, sc->nr_to_scan, &sc->nr_scanned, I915_SHRINK_BOUND | @@ -301,7 +318,7 @@ i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) intel_wakeref_t wakeref; with_intel_runtime_pm(&i915->runtime_pm, wakeref) { - freed += i915_gem_shrink(i915, + freed += i915_gem_shrink(NULL, i915, sc->nr_to_scan - sc->nr_scanned, &sc->nr_scanned, I915_SHRINK_ACTIVE | @@ -326,7 +343,7 @@ i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr) freed_pages = 0; with_intel_runtime_pm(&i915->runtime_pm, wakeref) - freed_pages += i915_gem_shrink(i915, -1UL, NULL, + freed_pages += i915_gem_shrink(NULL, i915, -1UL, NULL, I915_SHRINK_BOUND | I915_SHRINK_UNBOUND | I915_SHRINK_WRITEBACK); @@ -364,7 +381,7 @@ i915_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr intel_wakeref_t wakeref; with_intel_runtime_pm(&i915->runtime_pm, wakeref) - freed_pages += i915_gem_shrink(i915, -1UL, NULL, + freed_pages += i915_gem_shrink(NULL, i915, -1UL, NULL, I915_SHRINK_BOUND | I915_SHRINK_UNBOUND | I915_SHRINK_VMAPS); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.h b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.h index b397d7785789..8512470f6fd6 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.h @@ -9,10 +9,12 @@ #include struct drm_i915_private; +struct i915_gem_ww_ctx; struct mutex; /* i915_gem_shrinker.c */ -unsigned long i915_gem_shrink(struct drm_i915_private *i915, +unsigned long i915_gem_shrink(struct i915_gem_ww_ctx *ww, + struct drm_i915_private *i915, unsigned long target, unsigned long *nr_scanned, unsigned flags); diff --git 
a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c index ffcaee74a249..4523a14db86e 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c @@ -265,7 +265,6 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj, * pages to prevent them being swapped out and causing corruption * due to the change in swizzling. */ - mutex_lock(&obj->mm.lock); if (i915_gem_object_has_pages(obj) && obj->mm.madv == I915_MADV_WILLNEED && i915->quirks & QUIRK_PIN_SWIZZLED_PAGES) { @@ -280,7 +279,6 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj, obj->mm.quirked = true; } } - mutex_unlock(&obj->mm.lock); spin_lock(&obj->vma.lock); for_each_ggtt_vma(vma, obj) { diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index 01a9b7306c68..8f05b6d90d54 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -240,7 +240,7 @@ static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool if (GEM_WARN_ON(i915_gem_object_has_pinned_pages(obj))) return -EBUSY; - mutex_lock(&obj->mm.lock); + assert_object_held(obj); pages = __i915_gem_object_unset_pages(obj); if (!IS_ERR_OR_NULL(pages)) @@ -248,7 +248,6 @@ static int i915_gem_object_userptr_unbind(struct drm_i915_gem_object *obj, bool if (get_pages) err = ____i915_gem_object_get_pages(obj); - mutex_unlock(&obj->mm.lock); return err; } diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c index ea469168cd44..c5c7f77ee8dd 100644 --- a/drivers/gpu/drm/i915/i915_debugfs.c +++ b/drivers/gpu/drm/i915/i915_debugfs.c @@ -1508,10 +1508,10 @@ i915_drop_caches_set(void *data, u64 val) fs_reclaim_acquire(GFP_KERNEL); if (val & DROP_BOUND) - i915_gem_shrink(i915, LONG_MAX, NULL, I915_SHRINK_BOUND); + i915_gem_shrink(NULL, i915, LONG_MAX, NULL, I915_SHRINK_BOUND); if (val & DROP_UNBOUND) - i915_gem_shrink(i915, LONG_MAX, NULL, I915_SHRINK_UNBOUND); + i915_gem_shrink(NULL, i915, LONG_MAX, NULL, I915_SHRINK_UNBOUND); if (val & DROP_SHRINK_ALL) i915_gem_shrink_all(i915); diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index c58ea2490bf4..5a497576614c 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -1103,10 +1103,6 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data, if (err) goto out; - err = mutex_lock_interruptible(&obj->mm.lock); - if (err) - goto out_ww; - if (i915_gem_object_has_pages(obj) && i915_gem_object_is_tiled(obj) && i915->quirks & QUIRK_PIN_SWIZZLED_PAGES) { @@ -1149,9 +1145,7 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data, i915_gem_object_truncate(obj); args->retained = obj->mm.madv != __I915_MADV_PURGED; - mutex_unlock(&obj->mm.lock); -out_ww: i915_gem_object_unlock(obj); out: i915_gem_object_put(obj); @@ -1332,7 +1326,7 @@ int i915_gem_freeze_late(struct drm_i915_private *i915) wakeref = intel_runtime_pm_get(&i915->runtime_pm); - i915_gem_shrink(i915, -1UL, NULL, ~0); + i915_gem_shrink(NULL, i915, -1UL, NULL, ~0); i915_gem_drain_freed_objects(i915); list_for_each_entry(obj, &i915->mm.shrink_list, mm.link) { diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c index c5ee1567f3d1..729074ee33d4 100644 --- a/drivers/gpu/drm/i915/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c @@ -44,7 +44,7 @@ int i915_gem_gtt_prepare_pages(struct drm_i915_gem_object *obj, * the DMA remapper, 
i915_gem_shrink will return 0. */ GEM_BUG_ON(obj->mm.pages == pages); - } while (i915_gem_shrink(to_i915(obj->base.dev), + } while (i915_gem_shrink(NULL, to_i915(obj->base.dev), obj->base.size >> PAGE_SHIFT, NULL, I915_SHRINK_BOUND | I915_SHRINK_UNBOUND)); From patchwork Fri Oct 16 10:44:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maarten Lankhorst X-Patchwork-Id: 11841479 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2FADAC388CB for ; Fri, 16 Oct 2020 10:45:44 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id D036D2084C for ; Fri, 16 Oct 2020 10:45:43 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D036D2084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=intel-gfx-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id CB8F76EDA1; Fri, 16 Oct 2020 10:45:24 +0000 (UTC) Received: from mblankhorst.nl (mblankhorst.nl [141.105.120.124]) by gabe.freedesktop.org (Postfix) with ESMTPS id CCC7E6EAC9 for ; Fri, 16 Oct 2020 10:44:55 +0000 (UTC) From: Maarten Lankhorst To: intel-gfx@lists.freedesktop.org Date: Fri, 16 Oct 2020 12:44:44 +0200 Message-Id: <20201016104444.1492028-62-maarten.lankhorst@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> References: <20201016104444.1492028-1-maarten.lankhorst@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 61/61] drm/i915: Keep userpointer bindings if seqcount is unchanged, v2. X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Dan Carpenter Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Instead of force unbinding and rebinding every time, we try to check if our notifier seqcount is still correct when pages are bound. This way we only rebind userptr when we need to, and prevent stalls. Changes since v1: - Missing mutex_unlock, reported by kbuild. 
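The fastpath leans on the core mmu_interval_notifier seqcount protocol: take a snapshot with mmu_interval_read_begin() when pinning, later do a cheap unlocked pre-check with mmu_interval_check_retry(), and confirm with mmu_interval_read_retry() under the same driver lock the invalidate callback takes. A simplified sketch (the 'valid' local is illustrative only; the real logic is in the diff below):

	bool valid = false;

	/* unlocked pre-check: cheap, false positives are harmless */
	if (!mmu_interval_check_retry(&obj->userptr.notifier,
				      obj->userptr.notifier_seq)) {
		spin_lock(&i915->mm.notifier_lock);
		/* seqcount unchanged: no invalidation since read_begin() */
		valid = !mmu_interval_read_retry(&obj->userptr.notifier,
						 obj->userptr.notifier_seq);
		spin_unlock(&i915->mm.notifier_lock);
	}
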
Reported-by: kernel test robot Reported-by: Dan Carpenter Signed-off-by: Maarten Lankhorst --- drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 27 ++++++++++++++++++--- 1 file changed, 24 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index 8f05b6d90d54..b3fd5eecf0a1 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -268,12 +268,33 @@ int i915_gem_object_userptr_submit_init(struct drm_i915_gem_object *obj) if (ret) return ret; - /* Make sure userptr is unbound for next attempt, so we don't use stale pages. */ - ret = i915_gem_object_userptr_unbind(obj, false); + /* optimistically try to preserve current pages while unlocked */ + if (i915_gem_object_has_pages(obj) && + !mmu_interval_check_retry(&obj->userptr.notifier, + obj->userptr.notifier_seq)) { + spin_lock(&i915->mm.notifier_lock); + if (obj->userptr.pvec && + !mmu_interval_read_retry(&obj->userptr.notifier, + obj->userptr.notifier_seq)) { + obj->userptr.page_ref++; + + /* We can keep using the current binding, this is the fastpath */ + ret = 1; + } + spin_unlock(&i915->mm.notifier_lock); + } + + if (!ret) { + /* Make sure userptr is unbound for next attempt, so we don't use stale pages. */ + ret = i915_gem_object_userptr_unbind(obj, false); + } i915_gem_object_unlock(obj); - if (ret) + if (ret < 0) return ret; + if (ret > 0) + return 0; + notifier_seq = mmu_interval_read_begin(&obj->userptr.notifier); pvec = kvmalloc_array(num_pages, sizeof(struct page *), GFP_KERNEL);