
[1/2] drm/i915: Split intel_engine allocation and initialisation

Message ID: 20170121145045.27351-1-chris@chris-wilson.co.uk
State: New, archived

Commit Message

Chris Wilson Jan. 21, 2017, 2:50 p.m. UTC
In order to reset the GPU early on in the module load sequence, we need
to allocate the basic engine structs (to populate the mmio offsets etc).
Currently, the engine initialisation allocates both the base struct and
the auxiliary objects, which depend upon state set up quite
late in the load sequence. We split off the allocation callback for
later and allow ourselves to allocate the engine structs themselves
early.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.c        | 19 +++++++-
 drivers/gpu/drm/i915/i915_drv.h        |  3 ++
 drivers/gpu/drm/i915/intel_engine_cs.c | 79 +++++++++++++++++++++++++---------
 drivers/gpu/drm/i915/intel_lrc.h       |  2 -
 4 files changed, 79 insertions(+), 24 deletions(-)
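
In outline, the patch turns engine bring-up into a two-phase sequence:
the early phase only allocates and statically populates the engine
structs, and the heavyweight per-engine init is deferred until the rest
of the driver state exists. A minimal sketch of the resulting call flow
(simplified for illustration; intel_engines_init() is invoked later in
the load sequence, from a call site outside this diff):

	/* Phase 1: from i915_driver_init_early(), before any other
	 * driver state exists. Only allocates the engine structs and
	 * fills in static data such as the mmio offsets, which is
	 * enough to perform an early GPU reset.
	 */
	ret = intel_engines_init_early(dev_priv);
	if (ret)
		return ret;

	/* ... the rest of the load sequence: workqueues, GEM, etc. ... */

	/* Phase 2: once the state the engines depend upon is set up.
	 * Runs the execlists or legacy init callback for each engine,
	 * allocating the auxiliary objects.
	 */
	ret = intel_engines_init(dev_priv);
	if (ret)
		return ret;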

Comments

Joonas Lahtinen Jan. 23, 2017, 9:41 a.m. UTC | #1
On Sat, 2017-01-21 at 14:50 +0000, Chris Wilson wrote:
> In order to reset the GPU early on in the module load sequence, we need
> to allocate the basic engine structs (to populate the mmio offsets etc).
> Currently, the engine initialisation allocates both the base struct and
> the auxiliary objects, which depend upon state set up quite
> late in the load sequence. We split off the allocation callback for
> later and allow ourselves to allocate the engine structs themselves
> early.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> +int intel_engines_init(struct drm_i915_private *dev_priv)
> +{

<SNIP>

> +	for_each_engine(engine, dev_priv, id) {
> +		int (*init)(struct intel_engine_cs *engine) = NULL;
> +
> +		if (!err) {
> +			if (i915.enable_execlists)
> +				init = intel_engines[id].init_execlists;
> +			else
> +				init = intel_engines[id].init_legacy;
> +		}
> +
> +		if (!init || (err = init(engine))) {
> +			kfree(engine);
> +			dev_priv->engine[id] = NULL;
> +			continue;
> +		}
> +
> +		mask |= ENGINE_MASK(id);
> +	}

As discussed on IRC, this loop is broken after the first erroring init.
Looking forward to v2.

Regards, Joonas
Chris Wilson Jan. 23, 2017, 9:57 a.m. UTC | #2
On Mon, Jan 23, 2017 at 11:41:12AM +0200, Joonas Lahtinen wrote:
> On Sat, 2017-01-21 at 14:50 +0000, Chris Wilson wrote:
> > In order to reset the GPU early on in the module load sequence, we need
> > to allocate the basic engine structs (to populate the mmio offsets etc).
> > Currently, the engine initialisation allocates both the base struct and
> > the auxiliary objects, which depend upon state set up quite
> > late in the load sequence. We split off the allocation callback for
> > later and allow ourselves to allocate the engine structs themselves
> > early.
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> 
> <SNIP>
> 
> > +int intel_engines_init(struct drm_i915_private *dev_priv)
> > +{
> 
> <SNIP>
> 
> > +	for_each_engine(engine, dev_priv, id) {
> > +		int (*init)(struct intel_engine_cs *engine) = NULL;
> > +
> > +		if (!err) {
> > +			if (i915.enable_execlists)
> > +				init = intel_engines[id].init_execlists;
> > +			else
> > +				init = intel_engines[id].init_legacy;
> > +		}
> > +
> > +		if (!init || (err = init(engine))) {
> > +			kfree(engine);
> > +			dev_priv->engine[id] = NULL;
> > +			continue;
> > +		}
> > +
> > +		mask |= ENGINE_MASK(id);
> > +	}
> 
> As discussed on IRC, this loop is broken after the first erroring init.

As answered, it is not.

After err is set, the failing engine and all subsequent engines are
freed, and then all previously initialised engines are run through the
cleanup.
-Chris
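
For reference, the subtlety is the declaration-time initialisation of
the function pointer. An annotated restatement of the loop from the
diff below (the comments are added here for explanation, they are not
in the patch):

	for_each_engine(engine, dev_priv, id) {
		/* Re-declared as NULL on every iteration; once err is
		 * set, the !err guard below keeps it NULL for all
		 * remaining engines.
		 */
		int (*init)(struct intel_engine_cs *engine) = NULL;

		if (!err) {
			if (i915.enable_execlists)
				init = intel_engines[id].init_execlists;
			else
				init = intel_engines[id].init_legacy;
		}

		/* Taken for engines with no init callback, for the
		 * engine whose init fails, and for every engine after
		 * a failure: each is freed and its slot cleared.
		 */
		if (!init || (err = init(engine))) {
			kfree(engine);
			dev_priv->engine[id] = NULL;
			continue;
		}

		mask |= ENGINE_MASK(id);
	}

	/* If err is set, the cleanup label then walks
	 * dev_priv->engine[], which now holds only the engines that
	 * were initialised before the failure, and runs the proper
	 * ring cleanup on each.
	 */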
Joonas Lahtinen Jan. 23, 2017, 12:08 p.m. UTC | #3
On Mon, 2017-01-23 at 09:57 +0000, Chris Wilson wrote:
> On Mon, Jan 23, 2017 at 11:41:12AM +0200, Joonas Lahtinen wrote:
> > 
> > On Sat, 2017-01-21 at 14:50 +0000, Chris Wilson wrote:
> > > 
> > > In order to reset the GPU early on in the module load sequence, we need
> > > to allocate the basic engine structs (to populate the mmio offsets etc).
> > > Currently, the engine initialisation allocates both the base struct and
> > > the auxiliary objects, which depend upon state set up quite
> > > late in the load sequence. We split off the allocation callback for
> > > later and allow ourselves to allocate the engine structs themselves
> > > early.
> > > 
> > > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > 
> > <SNIP>
> > 
> > > 
> > > +int intel_engines_init(struct drm_i915_private *dev_priv)
> > > +{
> > 
> > <SNIP>
> > 
> > > 
> > > +	for_each_engine(engine, dev_priv, id) {
> > > +		int (*init)(struct intel_engine_cs *engine) = NULL;
> > > +
> > > +		if (!err) {
> > > +			if (i915.enable_execlists)
> > > +				init = intel_engines[id].init_execlists;
> > > +			else
> > > +				init = intel_engines[id].init_legacy;
> > > +		}
> > > +
> > > +		if (!init || (err = init(engine))) {
> > > +			kfree(engine);
> > > +			dev_priv->engine[id] = NULL;
> > > +			continue;
> > > +		}
> > > +
> > > +		mask |= ENGINE_MASK(id);
> > > +	}
> > 
> > As discussed on IRC, this loop is broken after the first erroring init.
> 
> As answered, it is not.
> 
> After err is set, the failing engine and all subsequent engines are
> freed, and then all previously initialised engines are run through the
> cleanup.

You're correct, I managed to overlook the declaration-time
initialization half a dozen times when going through it.

You could fold the "!init" check into "err" to make my next review
round easier :P (one possible shape is sketched below this message)

Regards, Joonas
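
One possible shape of that suggestion (purely illustrative, not
necessarily what a v2 would do; note that folding the missing-callback
case into err, e.g. via the hypothetical -ENODEV below, would also turn
a silently skipped engine into a hard failure instead of a silent skip):

	for_each_engine(engine, dev_priv, id) {
		int (*init)(struct intel_engine_cs *engine) = NULL;

		if (!err) {
			if (i915.enable_execlists)
				init = intel_engines[id].init_execlists;
			else
				init = intel_engines[id].init_legacy;

			if (init)
				err = init(engine);
			else
				err = -ENODEV; /* hypothetical errno */
		}

		if (err) { /* a single condition for the reader */
			kfree(engine);
			dev_priv->engine[id] = NULL;
			continue;
		}

		mask |= ENGINE_MASK(id);
	}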

Patch

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 4ae69ebe166e..b19ec224831c 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -755,6 +755,15 @@  static int i915_workqueues_init(struct drm_i915_private *dev_priv)
 	return -ENOMEM;
 }
 
+static void i915_engines_cleanup(struct drm_i915_private *i915)
+{
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+
+	for_each_engine(engine, i915, id)
+		kfree(engine);
+}
+
 static void i915_workqueues_cleanup(struct drm_i915_private *dev_priv)
 {
 	destroy_workqueue(dev_priv->hotplug.dp_wq);
@@ -817,12 +826,15 @@  static int i915_driver_init_early(struct drm_i915_private *dev_priv,
 	mutex_init(&dev_priv->pps_mutex);
 
 	intel_uc_init_early(dev_priv);
-
 	i915_memcpy_init_early(dev_priv);
 
+	ret = intel_engines_init_early(dev_priv);
+	if (ret)
+		return ret;
+
 	ret = i915_workqueues_init(dev_priv);
 	if (ret < 0)
-		return ret;
+		goto err_engines;
 
 	ret = intel_gvt_init(dev_priv);
 	if (ret < 0)
@@ -857,6 +869,8 @@  static int i915_driver_init_early(struct drm_i915_private *dev_priv,
 	intel_gvt_cleanup(dev_priv);
 err_workqueues:
 	i915_workqueues_cleanup(dev_priv);
+err_engines:
+	i915_engines_cleanup(dev_priv);
 	return ret;
 }
 
@@ -869,6 +883,7 @@  static void i915_driver_cleanup_early(struct drm_i915_private *dev_priv)
 	i915_perf_fini(dev_priv);
 	i915_gem_load_cleanup(dev_priv);
 	i915_workqueues_cleanup(dev_priv);
+	i915_engines_cleanup(dev_priv);
 }
 
 static int i915_mmio_setup(struct drm_i915_private *dev_priv)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 322e8c99d118..2467faa4a9e4 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2941,6 +2941,9 @@  extern unsigned long i915_gfx_val(struct drm_i915_private *dev_priv);
 extern void i915_update_gfx_val(struct drm_i915_private *dev_priv);
 int vlv_force_gfx_clock(struct drm_i915_private *dev_priv, bool on);
 
+int intel_engines_init_early(struct drm_i915_private *dev_priv);
+int intel_engines_init(struct drm_i915_private *dev_priv);
+
 /* intel_hotplug.c */
 void intel_hpd_irq_handler(struct drm_i915_private *dev_priv,
 			   u32 pin_mask, u32 long_mask);
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index 371acf109e34..77c07e86ec62 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -110,21 +110,20 @@  intel_engine_setup(struct drm_i915_private *dev_priv,
 }
 
 /**
- * intel_engines_init() - allocate, populate and init the Engine Command Streamers
+ * intel_engines_init_early() - allocate the Engine Command Streamers
  * @dev_priv: i915 device private
  *
  * Return: non-zero if the initialization failed.
  */
-int intel_engines_init(struct drm_i915_private *dev_priv)
+int intel_engines_init_early(struct drm_i915_private *dev_priv)
 {
 	struct intel_device_info *device_info = mkwrite_device_info(dev_priv);
 	unsigned int ring_mask = INTEL_INFO(dev_priv)->ring_mask;
 	unsigned int mask = 0;
-	int (*init)(struct intel_engine_cs *engine);
 	struct intel_engine_cs *engine;
 	enum intel_engine_id id;
 	unsigned int i;
-	int ret;
+	int err;
 
 	WARN_ON(ring_mask == 0);
 	WARN_ON(ring_mask &
@@ -134,20 +133,8 @@  int intel_engines_init(struct drm_i915_private *dev_priv)
 		if (!HAS_ENGINE(dev_priv, i))
 			continue;
 
-		if (i915.enable_execlists)
-			init = intel_engines[i].init_execlists;
-		else
-			init = intel_engines[i].init_legacy;
-
-		if (!init)
-			continue;
-
-		ret = intel_engine_setup(dev_priv, i);
-		if (ret)
-			goto cleanup;
-
-		ret = init(dev_priv->engine[i]);
-		if (ret)
+		err = intel_engine_setup(dev_priv, i);
+		if (err)
 			goto cleanup;
 
 		mask |= ENGINE_MASK(i);
@@ -166,14 +153,66 @@  int intel_engines_init(struct drm_i915_private *dev_priv)
 	return 0;
 
 cleanup:
+	for_each_engine(engine, dev_priv, id)
+		kfree(engine);
+	return err;
+}
+
+/**
+ * intel_engines_init() - allocate, populate and init the Engine Command Streamers
+ * @dev_priv: i915 device private
+ *
+ * Return: non-zero if the initialization failed.
+ */
+int intel_engines_init(struct drm_i915_private *dev_priv)
+{
+	struct intel_device_info *device_info = mkwrite_device_info(dev_priv);
+	unsigned int mask = 0;
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+	int err = 0;
+
+	for_each_engine(engine, dev_priv, id) {
+		int (*init)(struct intel_engine_cs *engine) = NULL;
+
+		if (!err) {
+			if (i915.enable_execlists)
+				init = intel_engines[id].init_execlists;
+			else
+				init = intel_engines[id].init_legacy;
+		}
+
+		if (!init || (err = init(engine))) {
+			kfree(engine);
+			dev_priv->engine[id] = NULL;
+			continue;
+		}
+
+		mask |= ENGINE_MASK(id);
+	}
+	if (err)
+		goto cleanup;
+
+	/*
+	 * Catch failures to update intel_engines table when the new engines
+	 * are added to the driver by a warning and disabling the forgotten
+	 * engines.
+	 */
+	if (WARN_ON(mask != INTEL_INFO(dev_priv)->ring_mask))
+		device_info->ring_mask = mask;
+
+	device_info->num_rings = hweight32(mask);
+
+	return 0;
+
+cleanup:
 	for_each_engine(engine, dev_priv, id) {
 		if (i915.enable_execlists)
 			intel_logical_ring_cleanup(engine);
 		else
 			intel_engine_cleanup(engine);
 	}
-
-	return ret;
+	return err;
 }
 
 void intel_engine_init_global_seqno(struct intel_engine_cs *engine, u32 seqno)
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 0c852c024227..c8009c7bfbdd 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -68,8 +68,6 @@  void intel_logical_ring_cleanup(struct intel_engine_cs *engine);
 int logical_render_ring_init(struct intel_engine_cs *engine);
 int logical_xcs_ring_init(struct intel_engine_cs *engine);
 
-int intel_engines_init(struct drm_i915_private *dev_priv);
-
 /* Logical Ring Contexts */
 
 /* One extra page is added before LRC for GuC as shared data */