[v2,1/6] clk: Remove recursion in clk_core_{prepare,enable}()

Message ID: 20190305044936.22267-2-dbasehore@chromium.org
State: New, archived
Series: Coordinated Clks

Commit Message

Derek Basehore March 5, 2019, 4:49 a.m. UTC
From: Stephen Boyd <sboyd@codeaurora.org>

Enabling and preparing clocks can be written quite naturally with
recursion. We start at some point in the tree and recurse up the
tree to find the oldest parent clk that needs to be enabled or
prepared. Then we enable/prepare and return to the caller, going
back to the clk we started at and enabling/preparing along the
way. This also unrolls the recursion in unprepare/disable, which can
simply be done in the order of walking up the clk tree.

The problem is recursion isn't great for kernel code where we
have a limited stack size. Furthermore, we may be calling this
code inside clk_set_rate() which also has recursion in it, so
we're really not looking good if we encounter a tall clk tree.

Let's create a stack instead by looping over the parent chain and
collecting clks of interest. Then the enable/prepare becomes as
simple as iterating over that list and calling enable/prepare on each
entry.
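
Conceptually, the iterative walk looks like this (a minimal sketch of
the pattern only; the real implementation is in the patch below, and
do_prepare() is a hypothetical stand-in for the per-clk work):

static int clk_core_prepare_walk(struct clk_core *core)
{
	LIST_HEAD(head);

	/* Phase 1: walk up the parents, stopping at a prepared clk. */
	while (core) {
		list_add(&core->prepare_list, &head);
		if (core->prepare_count)
			break;
		core = core->parent;
	}

	/*
	 * Phase 2: list_add() prepends, so the oldest parent comes first
	 * and each clk is prepared before its children.
	 */
	list_for_each_entry(core, &head, prepare_list) {
		int ret = do_prepare(core);	/* hypothetical helper */

		if (ret)
			return ret;	/* unwind omitted in this sketch */
	}

	return 0;
}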

Modified version of https://lore.kernel.org/patchwork/patch/814369/
-Fixed kernel warning
-unrolled recursion in unprepare/disable too

Cc: Jerome Brunet <jbrunet@baylibre.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Derek Basehore <dbasehore@chromium.org>
---
 drivers/clk/clk.c | 191 ++++++++++++++++++++++++++--------------------
 1 file changed, 107 insertions(+), 84 deletions(-)

Comments

Stephen Boyd March 5, 2019, 6:49 p.m. UTC | #1
Quoting Derek Basehore (2019-03-04 20:49:31)
> From: Stephen Boyd <sboyd@codeaurora.org>
> 
> Enabling and preparing clocks can be written quite naturally with
> recursion. We start at some point in the tree and recurse up the
> tree to find the oldest parent clk that needs to be enabled or
> prepared. Then we enable/prepare and return to the caller, going
> back to the clk we started at and enabling/preparing along the
> way. This also unroll the recursion in unprepare,disable which can
> just be done in the order of walking up the clk tree.
> 
> The problem is recursion isn't great for kernel code where we
> have a limited stack size. Furthermore, we may be calling this
> code inside clk_set_rate() which also has recursion in it, so
> we're really not looking good if we encounter a tall clk tree.
> 
> Let's create a stack instead by looping over the parent chain and
> collecting clks of interest. Then the enable/prepare becomes as
> simple as iterating over that list and calling enable.
> 
> Modified verison of https://lore.kernel.org/patchwork/patch/814369/
> -Fixed kernel warning
> -unrolled recursion in unprepare/disable too
> 
> Cc: Jerome Brunet <jbrunet@baylibre.com>
> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
> Signed-off-by: Derek Basehore <dbasehore@chromium.org>
> ---

From the original post:

"I have some vague fear that this may not work if a clk op is framework 
reentrant and attempts to call consumer clk APIs from within the clk ops.
If the reentrant call tries to add a clk that's already in the list then
we'll corrupt the list. Ugh."

Do we have this sort of problem here? Or are you certain that we don't
have clks that prepare or enable something that is already in the
process of being prepared or enabled?
Derek Basehore March 6, 2019, 1:35 a.m. UTC | #2
On Tue, Mar 5, 2019 at 10:49 AM Stephen Boyd <sboyd@kernel.org> wrote:
>
> Quoting Derek Basehore (2019-03-04 20:49:31)
> > [commit message snipped]
>
> From the original post:
>
> "I have some vague fear that this may not work if a clk op is framework
> reentrant and attempts to call consumer clk APIs from within the clk ops.
> If the reentrant call tries to add a clk that's already in the list then
> we'll corrupt the list. Ugh."
>
> Do we have this sort of problem here? Or are you certain that we don't
> have clks that prepare or enable something that is already in the
> process of being prepared or enabled?

I can look into whether anything's doing this and add a WARN_ON that
returns an error if we ever hit that case. If this is happening on
some platform, we'd want to correct that anyway.
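
One possible shape for that guard (just a sketch, and it assumes each
node is taken back off the list with list_del_init() at the end of
every walk so that list_empty() on the node is meaningful):

	/* Reject reentrant insertion of a clk already on a walk list. */
	if (WARN_ON(!list_empty(&core->prepare_list)))
		return -EBUSY;
	list_add(&core->prepare_list, &head);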

Derek Basehore March 6, 2019, 4:11 a.m. UTC | #3
On Tue, Mar 5, 2019 at 5:35 PM dbasehore . <dbasehore@chromium.org> wrote:
>
> [earlier discussion snipped]
>
> I can look into whether anything's doing this and add a WARN_ON that
> returns an error if we ever hit that case. If this is happening on
> some platform, we'd want to correct that anyway.
>

Also, if we're ever able to move to another locking scheme (hopefully
soon...), we can make the prepare/enable locks non-reentrant. Then if
anyone recursively calls back into the framework for another
prepare/enable, they will deadlock. I guess that's one way of making
sure no one does that.
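
For context, the prepare lock is already reentrant today via an
owner/refcount pair; roughly:

	static void clk_prepare_lock(void)
	{
		if (!mutex_trylock(&prepare_lock)) {
			if (prepare_owner == current) {
				prepare_refcnt++;	/* reentrant path */
				return;
			}
			mutex_lock(&prepare_lock);
		}
		WARN_ON_ONCE(prepare_owner != NULL);
		WARN_ON_ONCE(prepare_refcnt != 0);
		prepare_owner = current;
		prepare_refcnt = 1;
	}

Dropping the prepare_owner == current fast path is what would turn a
recursive call back into the framework into a deadlock.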

Stephen Boyd March 6, 2019, 9:16 p.m. UTC | #4
Quoting dbasehore . (2019-03-05 20:11:57)
> [earlier discussion snipped]
> >
> > I can look into whether anything's doing this and add a WARN_ON that
> > returns an error if we ever hit that case. If this is happening on
> > some platform, we'd want to correct that anyway.
> >
> 
> Also, if we're ever able to move to another locking scheme (hopefully
> soon...), we can make the prepare/enable locks non-reentrant. Then if
> anyone recursively calls back into the framework for another
> prepare/enable, they will deadlock. I guess that's one way of making
> sure no one does that.
> 

Sure, but we can't regress the system by making prepare and enable
non-reentrant. I was thinking we could write a Coccinelle script that
looks for suspects by matching against a clk_ops structure, picking
out the prepare and enable ops, and searching those functions for
calls to clk_prepare_enable(), clk_prepare(), or clk_enable(). I don't
know if or how it's possible to descend into the call graph from the
clk_ops function to check for clk API calls, so we probably need to
add the WARN_ON to help us find these issues at runtime too.

You can use this cocci script to start poking at it though:

<smpl>
@ prepare_enabler @
identifier func;
identifier hw;
position p;
expression E;
@@
int func@p(struct clk_hw *hw)
{
...
(
clk_prepare_enable(E);
|
clk_prepare(E);
|
clk_enable(E);
)
...
}

@ has_preparer depends on prepare_enabler @
identifier ops;
identifier prepare_enabler.func;
position p;
@@
struct clk_ops ops = {
...,
.prepare = func@p,
...
};

@ has_enabler depends on prepare_enabler @
identifier ops;
identifier prepare_enabler.func;
position p;
@@
struct clk_ops ops = {
...,
.enable = func@p,
...
};

@script:python@
pf << prepare_enabler.p;
@@

coccilib.report.print_report(pf[0],"WARNING something bad called from clk op")

</smpl>
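
For example, with the script saved as clk-reentrant.cocci (the
filename is arbitrary):

	spatch --sp-file clk-reentrant.cocci --dir drivers/clk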

I ran it and found one hit in the davinci clk driver where the driver
manually turns a clk on to ensure a PLL locks. Hopefully that's a
different clk tree than the current one so that it's not an issue.

We already have quite a few grep hits for clk_prepare_enable() in
drivers/clk/ too, but I think those are mostly drivers that haven't
been converted to critical clks and so have setup code that does it by
hand. I suppose it would also be good to dig through all those drivers
and move them to the critical clk flag. For example, here's a patch for
the highbank driver that should definitely be moved to critical clks.

-----8<-----
diff --git a/drivers/clk/clk-highbank.c b/drivers/clk/clk-highbank.c
index 8e4581004695..bd328b0eb243 100644
--- a/drivers/clk/clk-highbank.c
+++ b/drivers/clk/clk-highbank.c
@@ -17,7 +17,6 @@
 #include <linux/kernel.h>
 #include <linux/slab.h>
 #include <linux/err.h>
-#include <linux/clk.h>
 #include <linux/clk-provider.h>
 #include <linux/io.h>
 #include <linux/of.h>
@@ -272,7 +271,7 @@ static const struct clk_ops periclk_ops = {
 	.set_rate = clk_periclk_set_rate,
 };
 
-static __init struct clk *hb_clk_init(struct device_node *node, const struct clk_ops *ops)
+static void __init hb_clk_init(struct device_node *node, const struct clk_ops *ops, unsigned long clkflags)
 {
 	u32 reg;
 	struct hb_clk *hb_clk;
@@ -284,11 +283,11 @@ static __init struct clk *hb_clk_init(struct device_node *node, const struct clk
 
 	rc = of_property_read_u32(node, "reg", &reg);
 	if (WARN_ON(rc))
-		return NULL;
+		return;
 
 	hb_clk = kzalloc(sizeof(*hb_clk), GFP_KERNEL);
 	if (WARN_ON(!hb_clk))
-		return NULL;
+		return;
 
 	/* Map system registers */
 	srnp = of_find_compatible_node(NULL, NULL, "calxeda,hb-sregs");
@@ -301,7 +300,7 @@ static __init struct clk *hb_clk_init(struct device_node *node, const struct clk
 
 	init.name = clk_name;
 	init.ops = ops;
-	init.flags = 0;
+	init.flags = clkflags;
 	parent_name = of_clk_get_parent_name(node, 0);
 	init.parent_names = &parent_name;
 	init.num_parents = 1;
@@ -311,33 +310,31 @@ static __init struct clk *hb_clk_init(struct device_node *node, const struct clk
 	rc = clk_hw_register(NULL, &hb_clk->hw);
 	if (WARN_ON(rc)) {
 		kfree(hb_clk);
-		return NULL;
+		return;
 	}
-	rc = of_clk_add_hw_provider(node, of_clk_hw_simple_get, &hb_clk->hw);
-	return hb_clk->hw.clk;
+	of_clk_add_hw_provider(node, of_clk_hw_simple_get, &hb_clk->hw);
 }
 
 static void __init hb_pll_init(struct device_node *node)
 {
-	hb_clk_init(node, &clk_pll_ops);
+	hb_clk_init(node, &clk_pll_ops, 0);
 }
 CLK_OF_DECLARE(hb_pll, "calxeda,hb-pll-clock", hb_pll_init);
 
 static void __init hb_a9periph_init(struct device_node *node)
 {
-	hb_clk_init(node, &a9periphclk_ops);
+	hb_clk_init(node, &a9periphclk_ops, 0);
 }
 CLK_OF_DECLARE(hb_a9periph, "calxeda,hb-a9periph-clock", hb_a9periph_init);
 
 static void __init hb_a9bus_init(struct device_node *node)
 {
-	struct clk *clk = hb_clk_init(node, &a9bclk_ops);
-	clk_prepare_enable(clk);
+	hb_clk_init(node, &a9bclk_ops, CLK_IS_CRITICAL);
 }
 CLK_OF_DECLARE(hb_a9bus, "calxeda,hb-a9bus-clock", hb_a9bus_init);
 
 static void __init hb_emmc_init(struct device_node *node)
 {
-	hb_clk_init(node, &periclk_ops);
+	hb_clk_init(node, &periclk_ops, 0);
 }
 CLK_OF_DECLARE(hb_emmc, "calxeda,hb-emmc-clock", hb_emmc_init);

Patch

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index d2477a5058ac..94b3ac783d90 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -68,6 +68,8 @@  struct clk_core {
 	struct hlist_head	children;
 	struct hlist_node	child_node;
 	struct hlist_head	clks;
+	struct list_head	prepare_list;
+	struct list_head	enable_list;
 	unsigned int		notifier_count;
 #ifdef CONFIG_DEBUG_FS
 	struct dentry		*dentry;
@@ -677,34 +679,34 @@  static void clk_core_unprepare(struct clk_core *core)
 {
 	lockdep_assert_held(&prepare_lock);
 
-	if (!core)
-		return;
-
-	if (WARN(core->prepare_count == 0,
-	    "%s already unprepared\n", core->name))
-		return;
-
-	if (WARN(core->prepare_count == 1 && core->flags & CLK_IS_CRITICAL,
-	    "Unpreparing critical %s\n", core->name))
-		return;
+	while (core) {
+		if (WARN(core->prepare_count == 0,
+		    "%s already unprepared\n", core->name))
+			return;
 
-	if (core->flags & CLK_SET_RATE_GATE)
-		clk_core_rate_unprotect(core);
+		if (WARN(core->prepare_count == 1 &&
+			 core->flags & CLK_IS_CRITICAL,
+			 "Unpreparing critical %s\n", core->name))
+			return;
 
-	if (--core->prepare_count > 0)
-		return;
+		if (core->flags & CLK_SET_RATE_GATE)
+			clk_core_rate_unprotect(core);
 
-	WARN(core->enable_count > 0, "Unpreparing enabled %s\n", core->name);
+		if (--core->prepare_count > 0)
+			return;
 
-	trace_clk_unprepare(core);
+		WARN(core->enable_count > 0, "Unpreparing enabled %s\n",
+		     core->name);
+		trace_clk_unprepare(core);
 
-	if (core->ops->unprepare)
-		core->ops->unprepare(core->hw);
+		if (core->ops->unprepare)
+			core->ops->unprepare(core->hw);
 
-	clk_pm_runtime_put(core);
+		clk_pm_runtime_put(core);
 
-	trace_clk_unprepare_complete(core);
-	clk_core_unprepare(core->parent);
+		trace_clk_unprepare_complete(core);
+		core = core->parent;
+	}
 }
 
 static void clk_core_unprepare_lock(struct clk_core *core)
@@ -737,49 +739,57 @@  EXPORT_SYMBOL_GPL(clk_unprepare);
 static int clk_core_prepare(struct clk_core *core)
 {
 	int ret = 0;
+	LIST_HEAD(head);
 
 	lockdep_assert_held(&prepare_lock);
 
-	if (!core)
-		return 0;
+	while (core) {
+		list_add(&core->prepare_list, &head);
+		/*
+		 * Stop once we see a clk that is already prepared. Adding a clk
+		 * to the list with a non-zero prepare count (or reaching NULL)
+		 * makes error handling work as implemented.
+		 */
+		if (core->prepare_count)
+			break;
+		core = core->parent;
+	}
 
-	if (core->prepare_count == 0) {
-		ret = clk_pm_runtime_get(core);
-		if (ret)
-			return ret;
+	/* First entry has either a prepare_count of 0 or a NULL parent. */
+	list_for_each_entry(core, &head, prepare_list) {
+		if (core->prepare_count == 0) {
+			ret = clk_pm_runtime_get(core);
+			if (ret)
+				goto unprepare_parent;
 
-		ret = clk_core_prepare(core->parent);
-		if (ret)
-			goto runtime_put;
+			trace_clk_prepare(core);
 
-		trace_clk_prepare(core);
+			if (core->ops->prepare)
+				ret = core->ops->prepare(core->hw);
 
-		if (core->ops->prepare)
-			ret = core->ops->prepare(core->hw);
+			trace_clk_prepare_complete(core);
 
-		trace_clk_prepare_complete(core);
+			if (ret)
+				goto runtime_put;
+		}
+		core->prepare_count++;
 
-		if (ret)
-			goto unprepare;
+		/*
+		 * CLK_SET_RATE_GATE is a special case of clock protection
+		 * Instead of a consumer claiming exclusive rate control, it is
+		 * actually the provider which prevents any consumer from making
+		 * any operation which could result in a rate change or rate
+		 * glitch while the clock is prepared.
+		 */
+		if (core->flags & CLK_SET_RATE_GATE)
+			clk_core_rate_protect(core);
 	}
 
-	core->prepare_count++;
-
-	/*
-	 * CLK_SET_RATE_GATE is a special case of clock protection
-	 * Instead of a consumer claiming exclusive rate control, it is
-	 * actually the provider which prevents any consumer from making any
-	 * operation which could result in a rate change or rate glitch while
-	 * the clock is prepared.
-	 */
-	if (core->flags & CLK_SET_RATE_GATE)
-		clk_core_rate_protect(core);
-
 	return 0;
-unprepare:
-	clk_core_unprepare(core->parent);
 runtime_put:
 	clk_pm_runtime_put(core);
+unprepare_parent:
+	clk_core_unprepare(core->parent);
 	return ret;
 }
 
@@ -819,27 +829,27 @@  static void clk_core_disable(struct clk_core *core)
 {
 	lockdep_assert_held(&enable_lock);
 
-	if (!core)
-		return;
-
-	if (WARN(core->enable_count == 0, "%s already disabled\n", core->name))
-		return;
-
-	if (WARN(core->enable_count == 1 && core->flags & CLK_IS_CRITICAL,
-	    "Disabling critical %s\n", core->name))
-		return;
+	while (core) {
+		if (WARN(core->enable_count == 0, "%s already disabled\n",
+			 core->name))
+			return;
 
-	if (--core->enable_count > 0)
-		return;
+		if (WARN(core->enable_count == 1 &&
+			 core->flags & CLK_IS_CRITICAL,
+			 "Disabling critical %s\n", core->name))
+			return;
 
-	trace_clk_disable_rcuidle(core);
+		if (--core->enable_count > 0)
+			return;
 
-	if (core->ops->disable)
-		core->ops->disable(core->hw);
+		trace_clk_disable_rcuidle(core);
 
-	trace_clk_disable_complete_rcuidle(core);
+		if (core->ops->disable)
+			core->ops->disable(core->hw);
 
-	clk_core_disable(core->parent);
+		trace_clk_disable_complete_rcuidle(core);
+		core = core->parent;
+	}
 }
 
 static void clk_core_disable_lock(struct clk_core *core)
@@ -875,37 +885,48 @@  EXPORT_SYMBOL_GPL(clk_disable);
 static int clk_core_enable(struct clk_core *core)
 {
 	int ret = 0;
+	LIST_HEAD(head);
 
 	lockdep_assert_held(&enable_lock);
 
-	if (!core)
-		return 0;
+	while (core) {
+		if (WARN(core->prepare_count == 0,
+			 "Enabling unprepared %s\n", core->name))
+			return -ESHUTDOWN;
 
-	if (WARN(core->prepare_count == 0,
-	    "Enabling unprepared %s\n", core->name))
-		return -ESHUTDOWN;
-
-	if (core->enable_count == 0) {
-		ret = clk_core_enable(core->parent);
+		list_add(&core->enable_list, &head);
+		/*
+		 * Stop once we see a clk that is already enabled. Adding a clk
+		 * to the list with a non-zero enable count (or reaching NULL)
+		 * makes error handling work as implemented.
+		 */
+		if (core->enable_count)
+			break;
 
-		if (ret)
-			return ret;
+		core = core->parent;
+	}
 
-		trace_clk_enable_rcuidle(core);
+	/* First entry has either an enable_count of 0 or a NULL parent. */
+	list_for_each_entry(core, &head, enable_list) {
+		if (core->enable_count == 0) {
+			trace_clk_enable_rcuidle(core);
 
-		if (core->ops->enable)
-			ret = core->ops->enable(core->hw);
+			if (core->ops->enable)
+				ret = core->ops->enable(core->hw);
 
-		trace_clk_enable_complete_rcuidle(core);
+			trace_clk_enable_complete_rcuidle(core);
 
-		if (ret) {
-			clk_core_disable(core->parent);
-			return ret;
+			if (ret)
+				goto err;
 		}
+
+		core->enable_count++;
 	}
 
-	core->enable_count++;
 	return 0;
+err:
+	clk_core_disable(core->parent);
+	return ret;
 }
 
 static int clk_core_enable_lock(struct clk_core *core)
@@ -3288,6 +3309,8 @@  struct clk *clk_register(struct device *dev, struct clk_hw *hw)
 	core->num_parents = hw->init->num_parents;
 	core->min_rate = 0;
 	core->max_rate = ULONG_MAX;
+	INIT_LIST_HEAD(&core->prepare_list);
+	INIT_LIST_HEAD(&core->enable_list);
 	hw->core = core;
 
 	/* allocate local copy in case parent_names is __initdata */