From patchwork Thu Mar 28 21:07:54 2019
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 10876025
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Thu, 28 Mar 2019 21:07:54 +0000
Message-Id: <20190328210754.18802-1-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.20.1
Subject: [Intel-gfx] [PATCH] drm/i915: Move intel_engine_mask_t around for use by i915_request_types.h

We want to use intel_engine_mask_t inside i915_request.h, which means
extracting it from the general header file mess and placing it inside a
types.h. A knock-on effect is that the compiler now warns about
type-contraction of ALL_ENGINES into intel_engine_mask_t, so prepare
for the worst.
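
For reference, an illustrative sketch (not part of the diff below) of what the
extraction amounts to in intel_engine_types.h, and how callers adapt:

    /*
     * Sketch only: the type and the all-engines mask as introduced by
     * this patch. The explicit cast makes the narrowing of ~0ul to a u8
     * intentional, which is the type-contraction the compiler would
     * otherwise warn about.
     */
    typedef u8 intel_engine_mask_t;
    #define ALL_ENGINES ((intel_engine_mask_t)~0ul)

    /* Callers declare the mask iterator with the same type, e.g.: */
    intel_engine_mask_t tmp;
    for_each_engine_masked(engine, i915, engine_mask, tmp)
        gen3_stop_engine(engine);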
v2: Use intel_engine_mask_t consistently v3: Move I915_NUM_ENGINES to its natural home at the end of the enum Signed-off-by: Chris Wilson Cc: Tvrtko Ursulin Cc: Mika Kuoppala Cc: John Harrison --- drivers/gpu/drm/i915/Makefile | 1 + drivers/gpu/drm/i915/gvt/execlist.c | 11 ++- drivers/gpu/drm/i915/gvt/execlist.h | 2 +- drivers/gpu/drm/i915/gvt/gvt.h | 8 +- drivers/gpu/drm/i915/gvt/handlers.c | 2 +- drivers/gpu/drm/i915/gvt/scheduler.c | 8 +- drivers/gpu/drm/i915/gvt/scheduler.h | 6 +- drivers/gpu/drm/i915/gvt/vgpu.c | 4 +- drivers/gpu/drm/i915/i915_debugfs.c | 2 +- drivers/gpu/drm/i915/i915_drv.h | 1 - drivers/gpu/drm/i915/i915_gem.h | 2 - drivers/gpu/drm/i915/i915_gem_context.c | 6 +- drivers/gpu/drm/i915/i915_gem_context.h | 2 +- drivers/gpu/drm/i915/i915_gem_gtt.h | 2 +- drivers/gpu/drm/i915/i915_gpu_error.c | 9 +- drivers/gpu/drm/i915/i915_gpu_error.h | 2 +- drivers/gpu/drm/i915/i915_reset.c | 43 ++++---- drivers/gpu/drm/i915/i915_reset.h | 9 +- drivers/gpu/drm/i915/i915_scheduler.h | 86 +--------------- drivers/gpu/drm/i915/i915_scheduler_types.h | 98 +++++++++++++++++++ drivers/gpu/drm/i915/i915_timeline.h | 1 + drivers/gpu/drm/i915/i915_timeline_types.h | 3 +- drivers/gpu/drm/i915/intel_device_info.h | 3 +- drivers/gpu/drm/i915/intel_engine_types.h | 12 ++- drivers/gpu/drm/i915/intel_guc_submission.h | 1 + drivers/gpu/drm/i915/intel_hangcheck.c | 2 +- .../gpu/drm/i915/selftests/i915_gem_context.c | 8 +- .../gpu/drm/i915/selftests/igt_gem_utils.c | 40 ++++++++ .../gpu/drm/i915/selftests/intel_hangcheck.c | 3 +- .../test_i915_scheduler_types_standalone.c | 7 ++ 30 files changed, 232 insertions(+), 152 deletions(-) create mode 100644 drivers/gpu/drm/i915/i915_scheduler_types.h create mode 100644 drivers/gpu/drm/i915/selftests/igt_gem_utils.c create mode 100644 drivers/gpu/drm/i915/test_i915_scheduler_types_standalone.c diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile index 60de05f3fa60..1f3e8b145fc0 100644 --- a/drivers/gpu/drm/i915/Makefile +++ b/drivers/gpu/drm/i915/Makefile @@ -61,6 +61,7 @@ i915-$(CONFIG_PERF_EVENTS) += i915_pmu.o i915-$(CONFIG_DRM_I915_WERROR) += \ test_i915_active_types_standalone.o \ test_i915_gem_context_types_standalone.o \ + test_i915_scheduler_types_standalone.o \ test_i915_timeline_types_standalone.o \ test_intel_context_types_standalone.o \ test_intel_engine_types_standalone.o \ diff --git a/drivers/gpu/drm/i915/gvt/execlist.c b/drivers/gpu/drm/i915/gvt/execlist.c index 1a93472cb34e..f21b8fb5b37e 100644 --- a/drivers/gpu/drm/i915/gvt/execlist.c +++ b/drivers/gpu/drm/i915/gvt/execlist.c @@ -526,12 +526,13 @@ static void init_vgpu_execlist(struct intel_vgpu *vgpu, int ring_id) vgpu_vreg(vgpu, ctx_status_ptr_reg) = ctx_status_ptr.dw; } -static void clean_execlist(struct intel_vgpu *vgpu, unsigned long engine_mask) +static void clean_execlist(struct intel_vgpu *vgpu, + intel_engine_mask_t engine_mask) { - unsigned int tmp; struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv; struct intel_engine_cs *engine; struct intel_vgpu_submission *s = &vgpu->submission; + intel_engine_mask_t tmp; for_each_engine_masked(engine, dev_priv, engine_mask, tmp) { kfree(s->ring_scan_buffer[engine->id]); @@ -541,18 +542,18 @@ static void clean_execlist(struct intel_vgpu *vgpu, unsigned long engine_mask) } static void reset_execlist(struct intel_vgpu *vgpu, - unsigned long engine_mask) + intel_engine_mask_t engine_mask) { struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv; struct intel_engine_cs *engine; - unsigned int tmp; + intel_engine_mask_t 
tmp; for_each_engine_masked(engine, dev_priv, engine_mask, tmp) init_vgpu_execlist(vgpu, engine->id); } static int init_execlist(struct intel_vgpu *vgpu, - unsigned long engine_mask) + intel_engine_mask_t engine_mask) { reset_execlist(vgpu, engine_mask); return 0; diff --git a/drivers/gpu/drm/i915/gvt/execlist.h b/drivers/gpu/drm/i915/gvt/execlist.h index 714d709829a2..5ccc2c695848 100644 --- a/drivers/gpu/drm/i915/gvt/execlist.h +++ b/drivers/gpu/drm/i915/gvt/execlist.h @@ -180,6 +180,6 @@ int intel_vgpu_init_execlist(struct intel_vgpu *vgpu); int intel_vgpu_submit_execlist(struct intel_vgpu *vgpu, int ring_id); void intel_vgpu_reset_execlist(struct intel_vgpu *vgpu, - unsigned long engine_mask); + intel_engine_mask_t engine_mask); #endif /*_GVT_EXECLIST_H_*/ diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h index 8bce09de4b82..7a4e1a6387e5 100644 --- a/drivers/gpu/drm/i915/gvt/gvt.h +++ b/drivers/gpu/drm/i915/gvt/gvt.h @@ -144,9 +144,9 @@ enum { struct intel_vgpu_submission_ops { const char *name; - int (*init)(struct intel_vgpu *vgpu, unsigned long engine_mask); - void (*clean)(struct intel_vgpu *vgpu, unsigned long engine_mask); - void (*reset)(struct intel_vgpu *vgpu, unsigned long engine_mask); + int (*init)(struct intel_vgpu *vgpu, intel_engine_mask_t engine_mask); + void (*clean)(struct intel_vgpu *vgpu, intel_engine_mask_t engine_mask); + void (*reset)(struct intel_vgpu *vgpu, intel_engine_mask_t engine_mask); }; struct intel_vgpu_submission { @@ -488,7 +488,7 @@ struct intel_vgpu *intel_gvt_create_vgpu(struct intel_gvt *gvt, void intel_gvt_destroy_vgpu(struct intel_vgpu *vgpu); void intel_gvt_release_vgpu(struct intel_vgpu *vgpu); void intel_gvt_reset_vgpu_locked(struct intel_vgpu *vgpu, bool dmlr, - unsigned int engine_mask); + intel_engine_mask_t engine_mask); void intel_gvt_reset_vgpu(struct intel_vgpu *vgpu); void intel_gvt_activate_vgpu(struct intel_vgpu *vgpu); void intel_gvt_deactivate_vgpu(struct intel_vgpu *vgpu); diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c index dbc749617922..86761b1def1e 100644 --- a/drivers/gpu/drm/i915/gvt/handlers.c +++ b/drivers/gpu/drm/i915/gvt/handlers.c @@ -311,7 +311,7 @@ static int mul_force_wake_write(struct intel_vgpu *vgpu, static int gdrst_mmio_write(struct intel_vgpu *vgpu, unsigned int offset, void *p_data, unsigned int bytes) { - unsigned int engine_mask = 0; + intel_engine_mask_t engine_mask = 0; u32 data; write_vreg(vgpu, offset, p_data, bytes); diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c index bf743b47b59d..9f29cd33bf20 100644 --- a/drivers/gpu/drm/i915/gvt/scheduler.c +++ b/drivers/gpu/drm/i915/gvt/scheduler.c @@ -850,13 +850,13 @@ static void update_guest_context(struct intel_vgpu_workload *workload) } void intel_vgpu_clean_workloads(struct intel_vgpu *vgpu, - unsigned long engine_mask) + intel_engine_mask_t engine_mask) { struct intel_vgpu_submission *s = &vgpu->submission; struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv; struct intel_engine_cs *engine; struct intel_vgpu_workload *pos, *n; - unsigned int tmp; + intel_engine_mask_t tmp; /* free the unsubmited workloads in the queues. 
*/ for_each_engine_masked(engine, dev_priv, engine_mask, tmp) { @@ -1149,7 +1149,7 @@ void intel_vgpu_clean_submission(struct intel_vgpu *vgpu) * */ void intel_vgpu_reset_submission(struct intel_vgpu *vgpu, - unsigned long engine_mask) + intel_engine_mask_t engine_mask) { struct intel_vgpu_submission *s = &vgpu->submission; @@ -1239,7 +1239,7 @@ int intel_vgpu_setup_submission(struct intel_vgpu *vgpu) * */ int intel_vgpu_select_submission_ops(struct intel_vgpu *vgpu, - unsigned long engine_mask, + intel_engine_mask_t engine_mask, unsigned int interface) { struct intel_vgpu_submission *s = &vgpu->submission; diff --git a/drivers/gpu/drm/i915/gvt/scheduler.h b/drivers/gpu/drm/i915/gvt/scheduler.h index 0635b2c4bed7..90c6756f5453 100644 --- a/drivers/gpu/drm/i915/gvt/scheduler.h +++ b/drivers/gpu/drm/i915/gvt/scheduler.h @@ -142,12 +142,12 @@ void intel_gvt_wait_vgpu_idle(struct intel_vgpu *vgpu); int intel_vgpu_setup_submission(struct intel_vgpu *vgpu); void intel_vgpu_reset_submission(struct intel_vgpu *vgpu, - unsigned long engine_mask); + intel_engine_mask_t engine_mask); void intel_vgpu_clean_submission(struct intel_vgpu *vgpu); int intel_vgpu_select_submission_ops(struct intel_vgpu *vgpu, - unsigned long engine_mask, + intel_engine_mask_t engine_mask, unsigned int interface); extern const struct intel_vgpu_submission_ops @@ -160,6 +160,6 @@ intel_vgpu_create_workload(struct intel_vgpu *vgpu, int ring_id, void intel_vgpu_destroy_workload(struct intel_vgpu_workload *workload); void intel_vgpu_clean_workloads(struct intel_vgpu *vgpu, - unsigned long engine_mask); + intel_engine_mask_t engine_mask); #endif diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c index 314e40121e47..44ce3c2b9ac1 100644 --- a/drivers/gpu/drm/i915/gvt/vgpu.c +++ b/drivers/gpu/drm/i915/gvt/vgpu.c @@ -526,11 +526,11 @@ struct intel_vgpu *intel_gvt_create_vgpu(struct intel_gvt *gvt, * GPU engines. For FLR, engine_mask is ignored. */ void intel_gvt_reset_vgpu_locked(struct intel_vgpu *vgpu, bool dmlr, - unsigned int engine_mask) + intel_engine_mask_t engine_mask) { struct intel_gvt *gvt = vgpu->gvt; struct intel_gvt_workload_scheduler *scheduler = &gvt->scheduler; - unsigned int resetting_eng = dmlr ? ALL_ENGINES : engine_mask; + intel_engine_mask_t resetting_eng = dmlr ? 
ALL_ENGINES : engine_mask; gvt_dbg_core("------------------------------------------\n"); gvt_dbg_core("resseting vgpu%d, dmlr %d, engine_mask %08x\n", diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c index 652f65d2e131..deff45d612de 100644 --- a/drivers/gpu/drm/i915/i915_debugfs.c +++ b/drivers/gpu/drm/i915/i915_debugfs.c @@ -2245,7 +2245,7 @@ static int i915_guc_stage_pool(struct seq_file *m, void *data) const struct intel_guc *guc = &dev_priv->guc; struct guc_stage_desc *desc = guc->stage_desc_pool_vaddr; struct intel_guc_client *client = guc->execbuf_client; - unsigned int tmp; + intel_engine_mask_t tmp; int index; if (!USES_GUC_SUBMISSION(dev_priv)) diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index a32bfc7ec5ea..34f3ec61efb0 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -2452,7 +2452,6 @@ static inline unsigned int i915_sg_segment_size(void) #define IS_GEN9_LP(dev_priv) (IS_GEN(dev_priv, 9) && IS_LP(dev_priv)) #define IS_GEN9_BC(dev_priv) (IS_GEN(dev_priv, 9) && !IS_LP(dev_priv)) -#define ALL_ENGINES (~0u) #define HAS_ENGINE(dev_priv, id) (INTEL_INFO(dev_priv)->engine_mask & BIT(id)) #define ENGINE_INSTANCES_MASK(dev_priv, first, count) ({ \ diff --git a/drivers/gpu/drm/i915/i915_gem.h b/drivers/gpu/drm/i915/i915_gem.h index 5c073fe73664..9074eb1e843f 100644 --- a/drivers/gpu/drm/i915/i915_gem.h +++ b/drivers/gpu/drm/i915/i915_gem.h @@ -73,8 +73,6 @@ struct drm_i915_private; #define GEM_TRACE_DUMP_ON(expr) BUILD_BUG_ON_INVALID(expr) #endif -#define I915_NUM_ENGINES 8 - #define I915_GEM_IDLE_TIMEOUT (HZ / 5) void i915_gem_park(struct drm_i915_private *i915); diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c index 662da485e15f..7af2da1e0bf2 100644 --- a/drivers/gpu/drm/i915/i915_gem_context.c +++ b/drivers/gpu/drm/i915/i915_gem_context.c @@ -858,9 +858,9 @@ static void cb_retire(struct i915_active *base) kfree(cb); } -I915_SELFTEST_DECLARE(static unsigned long context_barrier_inject_fault); +I915_SELFTEST_DECLARE(static intel_engine_mask_t context_barrier_inject_fault); static int context_barrier_task(struct i915_gem_context *ctx, - unsigned long engines, + intel_engine_mask_t engines, int (*emit)(struct i915_request *rq, void *data), void (*task)(void *data), void *data) @@ -922,7 +922,7 @@ static int context_barrier_task(struct i915_gem_context *ctx, } int i915_gem_switch_to_kernel_context(struct drm_i915_private *i915, - unsigned long mask) + intel_engine_mask_t mask) { struct intel_engine_cs *engine; diff --git a/drivers/gpu/drm/i915/i915_gem_context.h b/drivers/gpu/drm/i915/i915_gem_context.h index edc6ba3f0288..23dcb01bfd82 100644 --- a/drivers/gpu/drm/i915/i915_gem_context.h +++ b/drivers/gpu/drm/i915/i915_gem_context.h @@ -142,7 +142,7 @@ void i915_gem_context_close(struct drm_file *file); int i915_switch_context(struct i915_request *rq); int i915_gem_switch_to_kernel_context(struct drm_i915_private *i915, - unsigned long engine_mask); + intel_engine_mask_t engine_mask); void i915_gem_context_release(struct kref *ctx_ref); struct i915_gem_context * diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h index 83ded9fc761a..f597f35b109b 100644 --- a/drivers/gpu/drm/i915/i915_gem_gtt.h +++ b/drivers/gpu/drm/i915/i915_gem_gtt.h @@ -390,7 +390,7 @@ struct i915_hw_ppgtt { struct i915_address_space vm; struct kref ref; - unsigned long pd_dirty_engines; + intel_engine_mask_t pd_dirty_engines; union { 
struct i915_pml4 pml4; /* GEN8+ & 48b PPGTT */ struct i915_page_directory_pointer pdp; /* GEN8+ */ diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c index a2a98ccda421..f97f64a440b7 100644 --- a/drivers/gpu/drm/i915/i915_gpu_error.c +++ b/drivers/gpu/drm/i915/i915_gpu_error.c @@ -1093,7 +1093,7 @@ static u32 capture_error_bo(struct drm_i915_error_buffer *err, * It's only a small step better than a random number in its current form. */ static u32 i915_error_generate_code(struct i915_gpu_state *error, - unsigned long engine_mask) + intel_engine_mask_t engine_mask) { /* * IPEHR would be an ideal way to detect errors, as it's the gross @@ -1638,7 +1638,8 @@ static void capture_reg_state(struct i915_gpu_state *error) } static const char * -error_msg(struct i915_gpu_state *error, unsigned long engines, const char *msg) +error_msg(struct i915_gpu_state *error, + intel_engine_mask_t engines, const char *msg) { int len; int i; @@ -1648,7 +1649,7 @@ error_msg(struct i915_gpu_state *error, unsigned long engines, const char *msg) engines &= ~BIT(i); len = scnprintf(error->error_msg, sizeof(error->error_msg), - "GPU HANG: ecode %d:%lx:0x%08x", + "GPU HANG: ecode %d:%x:0x%08x", INTEL_GEN(error->i915), engines, i915_error_generate_code(error, engines)); if (engines) { @@ -1787,7 +1788,7 @@ i915_capture_gpu_state(struct drm_i915_private *i915) * to pick up. */ void i915_capture_error_state(struct drm_i915_private *i915, - unsigned long engine_mask, + intel_engine_mask_t engine_mask, const char *msg) { static bool warned; diff --git a/drivers/gpu/drm/i915/i915_gpu_error.h b/drivers/gpu/drm/i915/i915_gpu_error.h index 302a14240b45..5dc761e85d9d 100644 --- a/drivers/gpu/drm/i915/i915_gpu_error.h +++ b/drivers/gpu/drm/i915/i915_gpu_error.h @@ -263,7 +263,7 @@ void i915_error_printf(struct drm_i915_error_state_buf *e, const char *f, ...); struct i915_gpu_state *i915_capture_gpu_state(struct drm_i915_private *i915); void i915_capture_error_state(struct drm_i915_private *dev_priv, - unsigned long engine_mask, + intel_engine_mask_t engine_mask, const char *error_msg); static inline struct i915_gpu_state * diff --git a/drivers/gpu/drm/i915/i915_reset.c b/drivers/gpu/drm/i915/i915_reset.c index 2f25ed702ba0..ddc403ee8855 100644 --- a/drivers/gpu/drm/i915/i915_reset.c +++ b/drivers/gpu/drm/i915/i915_reset.c @@ -144,15 +144,15 @@ static void gen3_stop_engine(struct intel_engine_cs *engine) } static void i915_stop_engines(struct drm_i915_private *i915, - unsigned int engine_mask) + intel_engine_mask_t engine_mask) { struct intel_engine_cs *engine; - enum intel_engine_id id; + intel_engine_mask_t tmp; if (INTEL_GEN(i915) < 3) return; - for_each_engine_masked(engine, i915, engine_mask, id) + for_each_engine_masked(engine, i915, engine_mask, tmp) gen3_stop_engine(engine); } @@ -165,7 +165,7 @@ static bool i915_in_reset(struct pci_dev *pdev) } static int i915_do_reset(struct drm_i915_private *i915, - unsigned int engine_mask, + intel_engine_mask_t engine_mask, unsigned int retry) { struct pci_dev *pdev = i915->drm.pdev; @@ -194,7 +194,7 @@ static bool g4x_reset_complete(struct pci_dev *pdev) } static int g33_do_reset(struct drm_i915_private *i915, - unsigned int engine_mask, + intel_engine_mask_t engine_mask, unsigned int retry) { struct pci_dev *pdev = i915->drm.pdev; @@ -204,7 +204,7 @@ static int g33_do_reset(struct drm_i915_private *i915, } static int g4x_do_reset(struct drm_i915_private *dev_priv, - unsigned int engine_mask, + intel_engine_mask_t engine_mask, unsigned int 
retry) { struct pci_dev *pdev = dev_priv->drm.pdev; @@ -242,7 +242,7 @@ static int g4x_do_reset(struct drm_i915_private *dev_priv, } static int ironlake_do_reset(struct drm_i915_private *dev_priv, - unsigned int engine_mask, + intel_engine_mask_t engine_mask, unsigned int retry) { struct intel_uncore *uncore = &dev_priv->uncore; @@ -303,7 +303,7 @@ static int gen6_hw_domain_reset(struct drm_i915_private *dev_priv, } static int gen6_reset_engines(struct drm_i915_private *i915, - unsigned int engine_mask, + intel_engine_mask_t engine_mask, unsigned int retry) { struct intel_engine_cs *engine; @@ -319,7 +319,7 @@ static int gen6_reset_engines(struct drm_i915_private *i915, if (engine_mask == ALL_ENGINES) { hw_mask = GEN6_GRDOM_FULL; } else { - unsigned int tmp; + intel_engine_mask_t tmp; hw_mask = 0; for_each_engine_masked(engine, i915, engine_mask, tmp) { @@ -429,7 +429,7 @@ static void gen11_unlock_sfc(struct drm_i915_private *dev_priv, } static int gen11_reset_engines(struct drm_i915_private *i915, - unsigned int engine_mask, + intel_engine_mask_t engine_mask, unsigned int retry) { const u32 hw_engine_mask[] = { @@ -443,7 +443,7 @@ static int gen11_reset_engines(struct drm_i915_private *i915, [VECS1] = GEN11_GRDOM_VECS2, }; struct intel_engine_cs *engine; - unsigned int tmp; + intel_engine_mask_t tmp; u32 hw_mask; int ret; @@ -496,12 +496,12 @@ static void gen8_engine_reset_cancel(struct intel_engine_cs *engine) } static int gen8_reset_engines(struct drm_i915_private *i915, - unsigned int engine_mask, + intel_engine_mask_t engine_mask, unsigned int retry) { struct intel_engine_cs *engine; const bool reset_non_ready = retry >= 1; - unsigned int tmp; + intel_engine_mask_t tmp; int ret; for_each_engine_masked(engine, i915, engine_mask, tmp) { @@ -537,7 +537,7 @@ static int gen8_reset_engines(struct drm_i915_private *i915, } typedef int (*reset_func)(struct drm_i915_private *, - unsigned int engine_mask, + intel_engine_mask_t engine_mask, unsigned int retry); static reset_func intel_get_gpu_reset(struct drm_i915_private *i915) @@ -558,7 +558,8 @@ static reset_func intel_get_gpu_reset(struct drm_i915_private *i915) return NULL; } -int intel_gpu_reset(struct drm_i915_private *i915, unsigned int engine_mask) +int intel_gpu_reset(struct drm_i915_private *i915, + intel_engine_mask_t engine_mask) { const int retries = engine_mask == ALL_ENGINES ? RESET_MAX_RETRIES : 1; reset_func reset; @@ -692,7 +693,8 @@ static void gt_revoke(struct drm_i915_private *i915) revoke_mmaps(i915); } -static int gt_reset(struct drm_i915_private *i915, unsigned int stalled_mask) +static int gt_reset(struct drm_i915_private *i915, + intel_engine_mask_t stalled_mask) { struct intel_engine_cs *engine; enum intel_engine_id id; @@ -951,7 +953,8 @@ bool i915_gem_unset_wedged(struct drm_i915_private *i915) return result; } -static int do_reset(struct drm_i915_private *i915, unsigned int stalled_mask) +static int do_reset(struct drm_i915_private *i915, + intel_engine_mask_t stalled_mask) { int err, i; @@ -986,7 +989,7 @@ static int do_reset(struct drm_i915_private *i915, unsigned int stalled_mask) * - re-init display */ void i915_reset(struct drm_i915_private *i915, - unsigned int stalled_mask, + intel_engine_mask_t stalled_mask, const char *reason) { struct i915_gpu_error *error = &i915->gpu_error; @@ -1233,14 +1236,14 @@ void i915_clear_error_registers(struct drm_i915_private *dev_priv) * of a ring dump etc.). 
*/ void i915_handle_error(struct drm_i915_private *i915, - u32 engine_mask, + intel_engine_mask_t engine_mask, unsigned long flags, const char *fmt, ...) { struct i915_gpu_error *error = &i915->gpu_error; struct intel_engine_cs *engine; intel_wakeref_t wakeref; - unsigned int tmp; + intel_engine_mask_t tmp; char error_msg[80]; char *msg = NULL; diff --git a/drivers/gpu/drm/i915/i915_reset.h b/drivers/gpu/drm/i915/i915_reset.h index 16f2389f656f..86b1ac8116ce 100644 --- a/drivers/gpu/drm/i915/i915_reset.h +++ b/drivers/gpu/drm/i915/i915_reset.h @@ -11,13 +11,15 @@ #include #include +#include "intel_engine_types.h" + struct drm_i915_private; struct intel_engine_cs; struct intel_guc; __printf(4, 5) void i915_handle_error(struct drm_i915_private *i915, - u32 engine_mask, + intel_engine_mask_t engine_mask, unsigned long flags, const char *fmt, ...); #define I915_ERROR_CAPTURE BIT(0) @@ -25,7 +27,7 @@ void i915_handle_error(struct drm_i915_private *i915, void i915_clear_error_registers(struct drm_i915_private *i915); void i915_reset(struct drm_i915_private *i915, - unsigned int stalled_mask, + intel_engine_mask_t stalled_mask, const char *reason); int i915_reset_engine(struct intel_engine_cs *engine, const char *reason); @@ -41,7 +43,8 @@ int i915_terminally_wedged(struct drm_i915_private *i915); bool intel_has_gpu_reset(struct drm_i915_private *i915); bool intel_has_reset_engine(struct drm_i915_private *i915); -int intel_gpu_reset(struct drm_i915_private *i915, u32 engine_mask); +int intel_gpu_reset(struct drm_i915_private *i915, + intel_engine_mask_t engine_mask); int intel_reset_guc(struct drm_i915_private *i915); diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h index 9a1d257f3d6e..07d243acf553 100644 --- a/drivers/gpu/drm/i915/i915_scheduler.h +++ b/drivers/gpu/drm/i915/i915_scheduler.h @@ -8,92 +8,10 @@ #define _I915_SCHEDULER_H_ #include +#include #include -#include - -struct drm_i915_private; -struct i915_request; -struct intel_engine_cs; - -enum { - I915_PRIORITY_MIN = I915_CONTEXT_MIN_USER_PRIORITY - 1, - I915_PRIORITY_NORMAL = I915_CONTEXT_DEFAULT_PRIORITY, - I915_PRIORITY_MAX = I915_CONTEXT_MAX_USER_PRIORITY + 1, - - I915_PRIORITY_INVALID = INT_MIN -}; - -#define I915_USER_PRIORITY_SHIFT 3 -#define I915_USER_PRIORITY(x) ((x) << I915_USER_PRIORITY_SHIFT) - -#define I915_PRIORITY_COUNT BIT(I915_USER_PRIORITY_SHIFT) -#define I915_PRIORITY_MASK (I915_PRIORITY_COUNT - 1) - -#define I915_PRIORITY_WAIT ((u8)BIT(0)) -#define I915_PRIORITY_NEWCLIENT ((u8)BIT(1)) -#define I915_PRIORITY_NOSEMAPHORE ((u8)BIT(2)) - -#define __NO_PREEMPTION (I915_PRIORITY_WAIT) - -struct i915_sched_attr { - /** - * @priority: execution and service priority - * - * All clients are equal, but some are more equal than others! - * - * Requests from a context with a greater (more positive) value of - * @priority will be executed before those with a lower @priority - * value, forming a simple QoS. - * - * The &drm_i915_private.kernel_context is assigned the lowest priority. - */ - int priority; -}; - -/* - * "People assume that time is a strict progression of cause to effect, but - * actually, from a nonlinear, non-subjective viewpoint, it's more like a big - * ball of wibbly-wobbly, timey-wimey ... stuff." -The Doctor, 2015 - * - * Requests exist in a complex web of interdependencies. Each request - * has to wait for some other request to complete before it is ready to be run - * (e.g. 
we have to wait until the pixels have been rendering into a texture - * before we can copy from it). We track the readiness of a request in terms - * of fences, but we also need to keep the dependency tree for the lifetime - * of the request (beyond the life of an individual fence). We use the tree - * at various points to reorder the requests whilst keeping the requests - * in order with respect to their various dependencies. - * - * There is no active component to the "scheduler". As we know the dependency - * DAG of each request, we are able to insert it into a sorted queue when it - * is ready, and are able to reorder its portion of the graph to accommodate - * dynamic priority changes. - */ -struct i915_sched_node { - struct list_head signalers_list; /* those before us, we depend upon */ - struct list_head waiters_list; /* those after us, they depend upon us */ - struct list_head link; - struct i915_sched_attr attr; - unsigned int flags; -#define I915_SCHED_HAS_SEMAPHORE BIT(0) -}; - -struct i915_dependency { - struct i915_sched_node *signaler; - struct list_head signal_link; - struct list_head wait_link; - struct list_head dfs_link; - unsigned long flags; -#define I915_DEPENDENCY_ALLOC BIT(0) -}; - -struct i915_priolist { - struct list_head requests[I915_PRIORITY_COUNT]; - struct rb_node node; - unsigned long used; - int priority; -}; +#include "i915_scheduler_types.h" #define priolist_for_each_request(it, plist, idx) \ for (idx = 0; idx < ARRAY_SIZE((plist)->requests); idx++) \ diff --git a/drivers/gpu/drm/i915/i915_scheduler_types.h b/drivers/gpu/drm/i915/i915_scheduler_types.h new file mode 100644 index 000000000000..5c94b3eb5c81 --- /dev/null +++ b/drivers/gpu/drm/i915/i915_scheduler_types.h @@ -0,0 +1,98 @@ +/* + * SPDX-License-Identifier: MIT + * + * Copyright © 2018 Intel Corporation + */ + +#ifndef _I915_SCHEDULER_TYPES_H_ +#define _I915_SCHEDULER_TYPES_H_ + +#include +#include + +#include + +struct drm_i915_private; +struct i915_request; +struct intel_engine_cs; + +enum { + I915_PRIORITY_MIN = I915_CONTEXT_MIN_USER_PRIORITY - 1, + I915_PRIORITY_NORMAL = I915_CONTEXT_DEFAULT_PRIORITY, + I915_PRIORITY_MAX = I915_CONTEXT_MAX_USER_PRIORITY + 1, + + I915_PRIORITY_INVALID = INT_MIN +}; + +#define I915_USER_PRIORITY_SHIFT 3 +#define I915_USER_PRIORITY(x) ((x) << I915_USER_PRIORITY_SHIFT) + +#define I915_PRIORITY_COUNT BIT(I915_USER_PRIORITY_SHIFT) +#define I915_PRIORITY_MASK (I915_PRIORITY_COUNT - 1) + +#define I915_PRIORITY_WAIT ((u8)BIT(0)) +#define I915_PRIORITY_NEWCLIENT ((u8)BIT(1)) +#define I915_PRIORITY_NOSEMAPHORE ((u8)BIT(2)) + +#define __NO_PREEMPTION (I915_PRIORITY_WAIT) + +struct i915_sched_attr { + /** + * @priority: execution and service priority + * + * All clients are equal, but some are more equal than others! + * + * Requests from a context with a greater (more positive) value of + * @priority will be executed before those with a lower @priority + * value, forming a simple QoS. + * + * The &drm_i915_private.kernel_context is assigned the lowest priority. + */ + int priority; +}; + +/* + * "People assume that time is a strict progression of cause to effect, but + * actually, from a nonlinear, non-subjective viewpoint, it's more like a big + * ball of wibbly-wobbly, timey-wimey ... stuff." -The Doctor, 2015 + * + * Requests exist in a complex web of interdependencies. Each request + * has to wait for some other request to complete before it is ready to be run + * (e.g. 
we have to wait until the pixels have been rendering into a texture + * before we can copy from it). We track the readiness of a request in terms + * of fences, but we also need to keep the dependency tree for the lifetime + * of the request (beyond the life of an individual fence). We use the tree + * at various points to reorder the requests whilst keeping the requests + * in order with respect to their various dependencies. + * + * There is no active component to the "scheduler". As we know the dependency + * DAG of each request, we are able to insert it into a sorted queue when it + * is ready, and are able to reorder its portion of the graph to accommodate + * dynamic priority changes. + */ +struct i915_sched_node { + struct list_head signalers_list; /* those before us, we depend upon */ + struct list_head waiters_list; /* those after us, they depend upon us */ + struct list_head link; + struct i915_sched_attr attr; + unsigned int flags; +#define I915_SCHED_HAS_SEMAPHORE BIT(0) +}; + +struct i915_dependency { + struct i915_sched_node *signaler; + struct list_head signal_link; + struct list_head wait_link; + struct list_head dfs_link; + unsigned long flags; +#define I915_DEPENDENCY_ALLOC BIT(0) +}; + +struct i915_priolist { + struct list_head requests[I915_PRIORITY_COUNT]; + struct rb_node node; + unsigned long used; + int priority; +}; + +#endif /* _I915_SCHEDULER_TYPES_H_ */ diff --git a/drivers/gpu/drm/i915/i915_timeline.h b/drivers/gpu/drm/i915/i915_timeline.h index c1e47a423d85..4ca7f80bdf6d 100644 --- a/drivers/gpu/drm/i915/i915_timeline.h +++ b/drivers/gpu/drm/i915/i915_timeline.h @@ -27,6 +27,7 @@ #include +#include "i915_active.h" #include "i915_syncmap.h" #include "i915_timeline_types.h" diff --git a/drivers/gpu/drm/i915/i915_timeline_types.h b/drivers/gpu/drm/i915/i915_timeline_types.h index 12ba3c573aa0..1f5b55d9ffb5 100644 --- a/drivers/gpu/drm/i915/i915_timeline_types.h +++ b/drivers/gpu/drm/i915/i915_timeline_types.h @@ -9,9 +9,10 @@ #include #include +#include #include -#include "i915_active.h" +#include "i915_active_types.h" struct drm_i915_private; struct i915_vma; diff --git a/drivers/gpu/drm/i915/intel_device_info.h b/drivers/gpu/drm/i915/intel_device_info.h index 7e04b4829aba..26d65c037ea9 100644 --- a/drivers/gpu/drm/i915/intel_device_info.h +++ b/drivers/gpu/drm/i915/intel_device_info.h @@ -27,6 +27,7 @@ #include +#include "intel_engine_types.h" #include "intel_display.h" struct drm_printer; @@ -150,8 +151,6 @@ struct sseu_dev_info { u8 eu_mask[GEN_MAX_SLICES * GEN_MAX_SUBSLICES]; }; -typedef u8 intel_engine_mask_t; - struct intel_device_info { u16 gen_mask; diff --git a/drivers/gpu/drm/i915/intel_engine_types.h b/drivers/gpu/drm/i915/intel_engine_types.h index b3249bf6a65f..facfe42a7cce 100644 --- a/drivers/gpu/drm/i915/intel_engine_types.h +++ b/drivers/gpu/drm/i915/intel_engine_types.h @@ -13,8 +13,10 @@ #include #include +#include "i915_gem.h" +#include "i915_scheduler_types.h" +#include "i915_selftest.h" #include "i915_timeline_types.h" -#include "intel_device_info.h" #include "intel_workarounds_types.h" #include "i915_gem_batch_pool.h" @@ -25,12 +27,16 @@ #define I915_CMD_HASH_ORDER 9 +struct dma_fence; struct drm_i915_reg_table; struct i915_gem_context; struct i915_request; struct i915_sched_attr; struct intel_uncore; +typedef u8 intel_engine_mask_t; +#define ALL_ENGINES ((intel_engine_mask_t)~0ul) + struct intel_hw_status_page { struct i915_vma *vma; u32 *addr; @@ -105,8 +111,10 @@ enum intel_engine_id { VCS3, #define _VCS(n) (VCS0 + (n)) VECS0, - VECS1 + 
VECS1, #define _VECS(n) (VECS0 + (n)) + I915_NUM_ENGINES + }; struct st_preempt_hang { diff --git a/drivers/gpu/drm/i915/intel_guc_submission.h b/drivers/gpu/drm/i915/intel_guc_submission.h index 169c54568340..aa5e6749c925 100644 --- a/drivers/gpu/drm/i915/intel_guc_submission.h +++ b/drivers/gpu/drm/i915/intel_guc_submission.h @@ -29,6 +29,7 @@ #include "i915_gem.h" #include "i915_selftest.h" +#include "intel_engine_types.h" struct drm_i915_private; diff --git a/drivers/gpu/drm/i915/intel_hangcheck.c b/drivers/gpu/drm/i915/intel_hangcheck.c index 59232df11ada..3d51ed1428d4 100644 --- a/drivers/gpu/drm/i915/intel_hangcheck.c +++ b/drivers/gpu/drm/i915/intel_hangcheck.c @@ -221,8 +221,8 @@ static void hangcheck_declare_hang(struct drm_i915_private *i915, unsigned int stuck) { struct intel_engine_cs *engine; + intel_engine_mask_t tmp; char msg[80]; - unsigned int tmp; int len; /* If some rings hung but others were still busy, only diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/selftests/i915_gem_context.c index 45f73b8b4e6d..4e1b6efc6b22 100644 --- a/drivers/gpu/drm/i915/selftests/i915_gem_context.c +++ b/drivers/gpu/drm/i915/selftests/i915_gem_context.c @@ -1594,10 +1594,10 @@ static int igt_vm_isolation(void *arg) } static __maybe_unused const char * -__engine_name(struct drm_i915_private *i915, unsigned int engines) +__engine_name(struct drm_i915_private *i915, intel_engine_mask_t engines) { struct intel_engine_cs *engine; - unsigned int tmp; + intel_engine_mask_t tmp; if (engines == ALL_ENGINES) return "all"; @@ -1610,10 +1610,10 @@ __engine_name(struct drm_i915_private *i915, unsigned int engines) static int __igt_switch_to_kernel_context(struct drm_i915_private *i915, struct i915_gem_context *ctx, - unsigned int engines) + intel_engine_mask_t engines) { struct intel_engine_cs *engine; - unsigned int tmp; + intel_engine_mask_t tmp; int pass; GEM_TRACE("Testing %s\n", __engine_name(i915, engines)); diff --git a/drivers/gpu/drm/i915/selftests/igt_gem_utils.c b/drivers/gpu/drm/i915/selftests/igt_gem_utils.c new file mode 100644 index 000000000000..ab4b7eaae193 --- /dev/null +++ b/drivers/gpu/drm/i915/selftests/igt_gem_utils.c @@ -0,0 +1,40 @@ +/* + * SPDX-License-Identifier: MIT + * + * Copyright © 2018 Intel Corporation + */ + +#include "igt_gem_utils.h" + +#include "../i915_gem_context.h" +#include "../i915_gem_pm.h" +#include "../i915_request.h" +#include "../intel_context.h" + +struct i915_request * +igt_request_alloc(struct i915_gem_context *ctx, struct intel_engine_cs *engine) +{ + struct intel_context *ce; + struct i915_request *rq; + int err; + + /* + * Pinning the contexts may generate requests in order to acquire + * GGTT space, so do this first before we reserve a seqno for + * ourselves. 
+ */ + ce = i915_gem_context_get_engine(ctx, engine->id); + if (IS_ERR(ce)) + return ERR_CAST(ce); + + err = intel_context_pin(ce); + intel_context_put(ce); + if (err) + return ERR_PTR(err); + + rq = i915_request_create(ce); + intel_context_unpin(ce); + + return rq; +} + diff --git a/drivers/gpu/drm/i915/selftests/intel_hangcheck.c b/drivers/gpu/drm/i915/selftests/intel_hangcheck.c index 76b4fa150f2e..050bd1e19e02 100644 --- a/drivers/gpu/drm/i915/selftests/intel_hangcheck.c +++ b/drivers/gpu/drm/i915/selftests/intel_hangcheck.c @@ -1124,7 +1124,8 @@ static int igt_reset_engines(void *arg) return 0; } -static u32 fake_hangcheck(struct drm_i915_private *i915, u32 mask) +static u32 fake_hangcheck(struct drm_i915_private *i915, + intel_engine_mask_t mask) { u32 count = i915_reset_count(&i915->gpu_error); diff --git a/drivers/gpu/drm/i915/test_i915_scheduler_types_standalone.c b/drivers/gpu/drm/i915/test_i915_scheduler_types_standalone.c new file mode 100644 index 000000000000..8afa2c3719fb --- /dev/null +++ b/drivers/gpu/drm/i915/test_i915_scheduler_types_standalone.c @@ -0,0 +1,7 @@ +/* + * SPDX-License-Identifier: MIT + * + * Copyright © 2019 Intel Corporation + */ + +#include "i915_scheduler_types.h"