From patchwork Thu Nov 28 11:34:24 2019
From: Chris Wilson <chris@chris-wilson.co.uk>
To: intel-gfx@lists.freedesktop.org
Date: Thu, 28 Nov 2019 11:34:24 +0000
Message-Id: <20191128113424.3885958-1-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.24.0
X-Patchwork-Id: 11265871
Subject: [Intel-gfx] [CI] drm/i915/gem: Excise the per-batch whitelist from the context

One does not lightly add a new hidden struct_mutex dependency deep
within the execbuf bowels! The immediate suspicion on seeing the
whitelist cached on the context is that it is intended to be preserved
between batches, as the kernel is quite adept at caching small
allocations itself. But no, its sole purpose is to serialise command
submission in order to save a kmalloc on a slow, slow path!

By removing the whitelist dependency from the context, our freedom to
chop the big struct_mutex is greatly augmented.

v2: s/set_bit/__set_bit/ as the whitelist shall never be accessed
concurrently.
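
(Illustrative sketch, not part of the patch.) Condensed from the diff below:
the whitelist becomes a throwaway per-batch bitmap, one bit per dword of the
batch, and the allocator deliberately returns one of three values -- NULL when
the platform submits via the ggtt (where BB_START is rejected outright
anyway), ERR_PTR(-ENOMEM) on a transient allocation failure that is only
reported if a jump actually needs the whitelist, or a zeroed bitmap. The
uses_ggtt parameter stands in for CMDPARSER_USES_GGTT(i915), and check_jump()
is a hypothetical condensation of the whitelist checks in check_bbstart(),
purely to keep the sketch self-contained:

/* Kernel-style sketch of the tri-state whitelist; not a drop-in replacement. */
#include <linux/bitmap.h>
#include <linux/bitops.h>
#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/types.h>

static unsigned long *alloc_whitelist(u32 batch_len, bool uses_ggtt)
{
	unsigned long *jmp;

	/* ggtt-based submission rejects BB_START outright, no bitmap needed */
	if (uses_ggtt)
		return NULL;

	/* One bit per dword; prefer a clean -ENOMEM over waking the oom killer */
	jmp = bitmap_zalloc(DIV_ROUND_UP(batch_len, sizeof(u32)),
			    GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
	if (!jmp)
		return ERR_PTR(-ENOMEM);

	return jmp;
}

/* How a jump target (as a dword index) is validated against the three states */
static int check_jump(const unsigned long *jump_whitelist, u32 index)
{
	if (!jump_whitelist)		/* ggtt submission */
		return -EACCES;
	if (IS_ERR(jump_whitelist))	/* report the deferred -ENOMEM only now */
		return PTR_ERR(jump_whitelist);
	if (!test_bit(index, jump_whitelist))
		return -EINVAL;		/* target was not a previously seen cmd */
	return 0;
}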
Signed-off-by: Chris Wilson
Cc: Tvrtko Ursulin
Cc: Joonas Lahtinen
Cc: Mika Kuoppala
Reviewed-by: Mika Kuoppala
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  5 --
 .../gpu/drm/i915/gem/i915_gem_context_types.h |  7 --
 drivers/gpu/drm/i915/i915_cmd_parser.c        | 67 ++++++++-----------
 3 files changed, 27 insertions(+), 52 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index c94ac838401a..a179e170c936 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -275,8 +275,6 @@ static void i915_gem_context_free(struct i915_gem_context *ctx)
 	free_engines(rcu_access_pointer(ctx->engines));
 	mutex_destroy(&ctx->engines_mutex);
 
-	kfree(ctx->jump_whitelist);
-
 	if (ctx->timeline)
 		intel_timeline_put(ctx->timeline);
 
@@ -584,9 +582,6 @@ __create_context(struct drm_i915_private *i915)
 	for (i = 0; i < ARRAY_SIZE(ctx->hang_timestamp); i++)
 		ctx->hang_timestamp[i] = jiffies - CONTEXT_FAST_HANG_JIFFIES;
 
-	ctx->jump_whitelist = NULL;
-	ctx->jump_whitelist_cmds = 0;
-
 	spin_lock(&i915->gem.contexts.lock);
 	list_add_tail(&ctx->link, &i915->gem.contexts.list);
 	spin_unlock(&i915->gem.contexts.lock);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
index c060bc428f49..69df5459c350 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
@@ -168,13 +168,6 @@ struct i915_gem_context {
 	 */
 	struct radix_tree_root handles_vma;
 
-	/** jump_whitelist: Bit array for tracking cmds during cmdparsing
-	 * Guarded by struct_mutex
-	 */
-	unsigned long *jump_whitelist;
-	/** jump_whitelist_cmds: No of cmd slots available */
-	u32 jump_whitelist_cmds;
-
 	/**
 	 * @name: arbitrary name, used for user debug
 	 *
diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
index f24096e27bef..213ace50cd08 100644
--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
+++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
@@ -1310,13 +1310,14 @@ static int check_bbstart(const struct i915_gem_context *ctx,
 			 u32 *cmd, u32 offset, u32 length,
 			 u32 batch_len,
 			 u64 batch_start,
-			 u64 shadow_batch_start)
+			 u64 shadow_batch_start,
+			 const unsigned long *jump_whitelist)
 {
 	u64 jump_offset, jump_target;
 	u32 target_cmd_offset, target_cmd_index;
 
 	/* For igt compatibility on older platforms */
-	if (CMDPARSER_USES_GGTT(ctx->i915)) {
+	if (!jump_whitelist) {
 		DRM_DEBUG("CMD: Rejecting BB_START for ggtt based submission\n");
 		return -EACCES;
 	}
@@ -1352,10 +1353,10 @@ static int check_bbstart(const struct i915_gem_context *ctx,
 	if (target_cmd_index == offset)
 		return 0;
 
-	if (ctx->jump_whitelist_cmds <= target_cmd_index) {
-		DRM_DEBUG("CMD: Rejecting BB_START - truncated whitelist array\n");
-		return -EINVAL;
-	} else if (!test_bit(target_cmd_index, ctx->jump_whitelist)) {
+	if (IS_ERR(jump_whitelist))
+		return PTR_ERR(jump_whitelist);
+
+	if (!test_bit(target_cmd_index, jump_whitelist)) {
 		DRM_DEBUG("CMD: BB_START to 0x%llx not a previously executed cmd\n",
 			  jump_target);
 		return -EINVAL;
@@ -1364,40 +1365,21 @@ static int check_bbstart(const struct i915_gem_context *ctx,
 	return 0;
 }
 
-static void init_whitelist(struct i915_gem_context *ctx, u32 batch_len)
+static unsigned long *
+alloc_whitelist(struct drm_i915_private *i915, u32 batch_len)
 {
-	const u32 batch_cmds = DIV_ROUND_UP(batch_len, sizeof(u32));
-	const u32 exact_size = BITS_TO_LONGS(batch_cmds);
-	u32 next_size = BITS_TO_LONGS(roundup_pow_of_two(batch_cmds));
-	unsigned long *next_whitelist;
-
-	if (CMDPARSER_USES_GGTT(ctx->i915))
-		return;
-
-	if (batch_cmds <= ctx->jump_whitelist_cmds) {
-		bitmap_zero(ctx->jump_whitelist, batch_cmds);
-		return;
-	}
-
-again:
-	next_whitelist = kcalloc(next_size, sizeof(long), GFP_KERNEL);
-	if (next_whitelist) {
-		kfree(ctx->jump_whitelist);
-		ctx->jump_whitelist = next_whitelist;
-		ctx->jump_whitelist_cmds =
-			next_size * BITS_PER_BYTE * sizeof(long);
-		return;
-	}
+	unsigned long *jmp;
 
-	if (next_size > exact_size) {
-		next_size = exact_size;
-		goto again;
-	}
+	if (CMDPARSER_USES_GGTT(i915))
+		return NULL;
 
-	DRM_DEBUG("CMD: Failed to extend whitelist. BB_START may be disallowed\n");
-	bitmap_zero(ctx->jump_whitelist, ctx->jump_whitelist_cmds);
+	/* Prefer to report transient allocation failure rather than hit oom */
+	jmp = bitmap_zalloc(DIV_ROUND_UP(batch_len, sizeof(u32)),
+			    GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
+	if (!jmp)
+		return ERR_PTR(-ENOMEM);
 
-	return;
+	return jmp;
 }
 
 #define LENGTH_BIAS 2
@@ -1433,6 +1415,7 @@ int intel_engine_cmd_parser(struct i915_gem_context *ctx,
 	struct drm_i915_cmd_descriptor default_desc = noop_desc;
 	const struct drm_i915_cmd_descriptor *desc = &default_desc;
 	bool needs_clflush_after = false;
+	unsigned long *jump_whitelist;
 	int ret = 0;
 
 	cmd = copy_batch(shadow_batch_obj, batch_obj,
@@ -1443,7 +1426,8 @@ int intel_engine_cmd_parser(struct i915_gem_context *ctx,
 		return PTR_ERR(cmd);
 	}
 
-	init_whitelist(ctx, batch_len);
+	/* Defer failure until attempted use */
+	jump_whitelist = alloc_whitelist(ctx->i915, batch_len);
 
 	/*
 	 * We use the batch length as size because the shadow object is as
@@ -1487,15 +1471,16 @@ int intel_engine_cmd_parser(struct i915_gem_context *ctx,
 		if (desc->cmd.value == MI_BATCH_BUFFER_START) {
 			ret = check_bbstart(ctx, cmd, offset, length,
 					    batch_len, batch_start,
-					    shadow_batch_start);
+					    shadow_batch_start,
+					    jump_whitelist);
 
 			if (ret)
 				goto err;
 			break;
 		}
 
-		if (ctx->jump_whitelist_cmds > offset)
-			set_bit(offset, ctx->jump_whitelist);
+		if (!IS_ERR_OR_NULL(jump_whitelist))
+			__set_bit(offset, jump_whitelist);
 
 		cmd += length;
 		offset += length;
@@ -1513,6 +1498,8 @@ int intel_engine_cmd_parser(struct i915_gem_context *ctx,
 	}
 
 err:
+	if (!IS_ERR_OR_NULL(jump_whitelist))
+		kfree(jump_whitelist);
 	i915_gem_object_unpin_map(shadow_batch_obj);
 	return ret;
 }
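
(Again illustrative, not part of the patch.) The v2 note above is why the
marking side can use __set_bit(): the shadow batch is parsed by a single
thread and the bitmap never escapes intel_engine_cmd_parser(), so a plain
non-atomic store suffices and the locked read-modify-write of set_bit() would
be pure overhead. A hypothetical skeleton of the per-batch lifetime, reusing
alloc_whitelist()/check_jump() from the earlier sketch (plus <linux/slab.h>
for kfree()); is_bb_start() and jump_target() are invented stand-ins for the
descriptor checks in the real parser:

/* Hypothetical walk of the whitelist lifetime; not the real parser loop. */
static int parse_batch(const u32 *cmds, u32 ncmds, u32 batch_len, bool uses_ggtt)
{
	unsigned long *jump_whitelist;
	u32 offset;
	int ret = 0;

	/* Defer failure until attempted use */
	jump_whitelist = alloc_whitelist(batch_len, uses_ggtt);

	for (offset = 0; offset < ncmds; offset++) {
		/* stand-in check; the real parser decodes command descriptors */
		if (is_bb_start(cmds[offset])) {
			ret = check_jump(jump_whitelist, jump_target(cmds[offset]));
			break;	/* BB_START ends the scan, as in the patch */
		}

		/* Remember that a valid command started here; plain store, no atomics.
		 * (The real parser advances by each command's decoded length.)
		 */
		if (!IS_ERR_OR_NULL(jump_whitelist))
			__set_bit(offset, jump_whitelist);
	}

	/* Nothing is cached on the context; the bitmap dies with the batch */
	if (!IS_ERR_OR_NULL(jump_whitelist))
		kfree(jump_whitelist);

	return ret;
}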