[1/2] ASoC: max98090: Add master clock handling

Message ID 1400750228-13750-2-git-send-email-tushar.behera@linaro.org (mailing list archive)
State New, archived

Commit Message

Tushar Behera May 22, 2014, 9:17 a.m. UTC
If master clock is provided through device tree, then update
the master clock frequency during set_sysclk.

Documentation has been updated to reflect the change.

Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
---
 .../devicetree/bindings/sound/max98090.txt         |    6 ++++++
 sound/soc/codecs/max98090.c                        |   13 +++++++++++++
 sound/soc/codecs/max98090.h                        |    1 +
 3 files changed, 20 insertions(+)

Comments

Mark Brown May 22, 2014, 10:30 a.m. UTC | #1
On Thu, May 22, 2014 at 02:47:07PM +0530, Tushar Behera wrote:

> +	max98090->mclk = devm_clk_get(codec->dev, "mclk");
> +	if (!IS_ERR(max98090->mclk))
> +		clk_prepare_enable(max98090->mclk);
> +

This doesn't handle deferred probe; we need to at least return an error
if we get -EPROBE_DEFER.

It'd also be better to move the enabling to set_bias_level() if possible
(I don't know if the clock is needed for register access) though less
essential.
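
A deferred-probe-aware version of the lookup might look something like this (an illustrative sketch, not the posted patch; the treatment of errors other than -EPROBE_DEFER is an assumption):

```c
	/* Bail out so probe is retried once the clock provider appears */
	max98090->mclk = devm_clk_get(codec->dev, "mclk");
	if (PTR_ERR(max98090->mclk) == -EPROBE_DEFER)
		return -EPROBE_DEFER;

	/* Any other error (e.g. no clock in DT) means "no MCLK to manage" */
	if (!IS_ERR(max98090->mclk))
		clk_prepare_enable(max98090->mclk);
```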
Stephen Warren May 22, 2014, 3:51 p.m. UTC | #2
On 05/22/2014 03:17 AM, Tushar Behera wrote:
> If master clock is provided through device tree, then update
> the master clock frequency during set_sysclk.
> 
> Documentation has been updated to reflect the change.

> diff --git a/sound/soc/codecs/max98090.c b/sound/soc/codecs/max98090.c

> @@ -1929,6 +1930,11 @@ static int max98090_dai_set_sysclk(struct snd_soc_dai *dai,
>  	if (freq == max98090->sysclk)
>  		return 0;
>  
> +	if (!IS_ERR(max98090->mclk)) {
> +		freq = clk_round_rate(max98090->mclk, freq);
> +		clk_set_rate(max98090->mclk, freq);
> +	}

What are the intended semantics of set_sysclk()?
sound/soc/tegra/tegra_max98090.c assumes that set_sysclk() is a
notification to the CODEC driver to tell it what rate the MCLK input is
set to (the rate is set before calling set_sysclk), whereas the code
above assumes that this function is to tell the CODEC to somehow
configure its input clock to be a particular rate. I have a feeling the
code above might fail on Tegra.
Mark Brown May 22, 2014, 5:34 p.m. UTC | #3
On Thu, May 22, 2014 at 09:51:38AM -0600, Stephen Warren wrote:
> On 05/22/2014 03:17 AM, Tushar Behera wrote:

> > +	if (!IS_ERR(max98090->mclk)) {
> > +		freq = clk_round_rate(max98090->mclk, freq);
> > +		clk_set_rate(max98090->mclk, freq);
> > +	}

> What are the intended semantics of set_sysclk()?
> sound/soc/tegra/tegra_max98090.c assumes that set_sysclk() is a
> notification to the CODEC driver to tell it what rate the MCLK input is
> set to (the rate is set before calling set_sysclk), whereas the code
> above assumes that this function is to tell the CODEC to somehow
> configure its input clock to be a particular rate. I have a feeling the
> code above might fail on Tegra.

It's a bit of both.  Since we've never had a generic clock API (and
still don't really due to everything being a shambles) it's generally
just a notification to the CODEC that it's at the specified rate,
however if the CODEC knows about its dependencies it seems sensible for
it to tell the clock API about what was asked.  Ideally we want to
remove all the ASoC specific clock control and use the clock API.

The Tegra driver will presumably succeed unless someone does the
appropriate clock hookup in DT, at which point clk_set_rate() for the
rate the clock is already at really ought to succeed so things should
continue to work.
Stephen Warren May 22, 2014, 5:50 p.m. UTC | #4
On 05/22/2014 11:34 AM, Mark Brown wrote:
> On Thu, May 22, 2014 at 09:51:38AM -0600, Stephen Warren wrote:
>> On 05/22/2014 03:17 AM, Tushar Behera wrote:
> 
>>> +	if (!IS_ERR(max98090->mclk)) {
>>> +		freq = clk_round_rate(max98090->mclk, freq);
>>> +		clk_set_rate(max98090->mclk, freq);
>>> +	}
> 
>> What are the intended semantics of set_sysclk()?
>> sound/soc/tegra/tegra_max98090.c assumes that set_sysclk() is a
>> notification to the CODEC driver to tell it what rate the MCLK input is
>> set to (the rate is set before calling set_sysclk), whereas the code
>> above assumes that this function is to tell the CODEC to somehow
>> configure its input clock to be a particular rate. I have a feeling the
>> code above might fail on Tegra.
>
> It's a bit of both.  Since we've never had a generic clock API (and
> still don't really due to everything being a shambles) it's generally
> just a notification to the CODEC that it's at the specified rate,
> however if the CODEC knows about its dependencies it seems sensible for
> it to tell the clock API about what was asked.  Ideally we want to
> remove all the ASoC specific clock control and use the clock API.

I think we should nail down exactly what set_sysclk() means. Since it
takes an explicit MCLK clock rate (rather than e.g. sample rate) right
now, surely it's a notification of what the clock /is/, not a request
for the CODEC to set up its input clock. If we expect the CODEC to go to
the clock driver and request an MCLK for itself (e.g. based on sample
rate), surely that should happen from some function besides
set_sysclk(), with different semantics e.g. hw_params().

I'm not convinced that the CODEC can trigger its input clock changes in
general. In Tegra, there needs to be centralized clock management, e.g.
to prevent one audio stream using a 44.1kHz-based rate and another using
a 48kHz-based rate. That's because the main audio PLL can't generate
both sets of frequencies at once. This can't be done in individual CODEC
drivers, since they shouldn't know about the Tegra restrictions. Doing
it in the clock driver in response to the CODEC's request for a specific
input clock, might work, but then the CODEC would have to call into the
clk driver from e.g. hw_params() rather than set_sysclk(). If that's how
it's supposed to work, then this patch is adding code in the wrong
place. If set_sysclk() doesn't get removed from the CODEC API, then
doing all this clock setup logic in the machine driver, as the
tegra_max98090.c machine driver does, seems to make the most sense for now.

> The Tegra driver will presumably succeed unless someone does the
> appropriate clock hookup in DT, at which point clk_set_rate() for the
> rate the clock is already at really ought to succeed so things should
> continue to work.

Ah yes, if we don't add the clock to DT, this code won't do anything, so
nothing will break.
Mark Brown May 22, 2014, 6:19 p.m. UTC | #5
On Thu, May 22, 2014 at 11:50:55AM -0600, Stephen Warren wrote:

> I think we should nail down exactly what set_sysclk() means. Since it
> takes an explicit MCLK clock rate (rather than e.g. sample rate) right
> now, surely it's a notification of what the clock /is/, not a request
> for the CODEC to set up its input clock. If we expect the CODEC to go to

It really should be that from a device model/clock API point of view, and
it certainly is with the patch proposed; the fact that it happens not to
have been is more a product of the poor clock support in Linux than
anything else.

> the clock driver and request an MCLK for itself (e.g. based on sample
> rate), surely that should happen from some function besides
> set_sysclk(), with different semantics e.g. hw_params().

I'm not sure where you see the CODEC going off and deciding the MCLK
rate itself here?  Essentially all that's happening here is that
set_sysclk() is behaving like the clock API does and setting its own
rate to what it was explicitly asked to set, including bouncing that
request up the chain.

The rounding could go if we wanted to be a bit stricter - if the machine
driver is doing a good job it should never change anything - but
otherwise I just can't see what you're concerned about.

> I'm not convinced that the CODEC can trigger its input clock changes in
> general. In Tegra, there needs to be centralized clock management, e.g.
> to prevent one audio stream using a 44.1kHz-based rate and another using
> a 48kHz-based rate. That's because the main audio PLL can't generate
> both sets of frequencies at once. This can't be done in individual CODEC
> drivers, since they shouldn't know about the Tegra restrictions. Doing
> it in the clock driver in response to the CODEC's request for a specific
> input clock, might work, but then the CODEC would have to call into the
> clk driver from e.g. hw_params() rather than set_sysclk(). If that's how
> it's supposed to work, then this patch is adding code in the wrong
> place. If set_sysclk() doesn't get removed from the CODEC API, then
> doing all this clock setup logic in the machine driver, as the
> tegra_max98090.c machine driver does, seems to make the most sense for now.

What you're describing seems to make things worse not better for your
problem?  I'm sorry, I just can't follow what you're concerned about
here at all.  

The current situation is that the CODEC just gets told what it should
run SYSCLK at, all the patch does really is factor out the code to call
clk_set_rate() on a programmable parent clock and look that clock up in
DT.  Both before and after the patch the CODEC driver takes no decisions
on the rate.  If we started doing things in hw_params() the CODEC driver
would have to start taking decisions since it would only get the sample
rate and so on provided.
Stephen Warren May 22, 2014, 10:24 p.m. UTC | #6
On 05/22/2014 12:19 PM, Mark Brown wrote:
> On Thu, May 22, 2014 at 11:50:55AM -0600, Stephen Warren wrote:
> 
>> I think we should nail down exactly what set_sysclk() means. Since it
>> takes an explicit MCLK clock rate (rather than e.g. sample rate) right
>> now, surely it's a notification of what the clock /is/, not a request
>> for the CODEC to set up its input clock. If we expect the CODEC to go to
> 
> It really should be that from a device model/clock API point of view, and
> it certainly is with the patch proposed; the fact that it happens not to
> have been is more a product of the poor clock support in Linux than
> anything else.
> 
>> the clock driver and request an MCLK for itself (e.g. based on sample
>> rate), surely that should happen from some function besides
>> set_sysclk(), with different semantics e.g. hw_params().
> 
> I'm not sure where you see the CODEC going off and deciding the MCLK
> rate itself here?  Essentially all that's happening here is that
> set_sysclk() is behaving like the clock API does and setting its own
> rate to what it was explicitly asked to set, including bouncing that
> request up the chain.

If set_sysclk() is never allowed to do anything other than forward the
value it receives to the clock API, I'm much less concerned, although
still a bit.

My main worry is that this patch opens the door for set_sysclk() to
perform some kind of calculation to determine the MCLK rate. Right now,
this patch doesn't do that, but there's nothing obvious from the code
that no CODEC is allowed to do that. After all, set_sysclk() has a parameter
to indicate *which* clock in the CODEC to set. Some CODEC driver author
might write something where the machine driver tells the CODEC driver
the desired rate of some CODEC-internal PLL, from which set_sysclk()
calculates what it needs for MCLK, and then goes off and requests that
value from the clock API.

Ignoring that, I'm still not sure that the CODEC driver setting the MCLK
rate is appropriate. If we have 1 MCLK output from an SoC, connected to
2 different CODECs, why wouldn't the overall machine driver call
clk_set_rate() on that clock, and then notify the two CODEC drivers of
that rate. Making each CODEC's set_sysclk() call clk_set_rate() on the
same clock with the same value seems redundant, albeit it should work
out fine since they both request the same rate.
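
The machine-driver-centric arrangement described here might be sketched as follows (illustrative only; the DAI handles, the clk_id of 0 and the 12.288MHz rate are placeholders, not taken from any real machine driver):

```c
	/* Machine driver: set the shared MCLK rate once... */
	ret = clk_set_rate(mclk, 12288000);
	if (ret)
		return ret;

	/* ...then notify each CODEC of the rate it is receiving */
	ret = snd_soc_dai_set_sysclk(codec_dai_a, 0, 12288000, SND_SOC_CLOCK_IN);
	if (ret)
		return ret;

	return snd_soc_dai_set_sysclk(codec_dai_b, 0, 12288000, SND_SOC_CLOCK_IN);
```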
Mark Brown May 22, 2014, 11:36 p.m. UTC | #7
On Thu, May 22, 2014 at 04:24:55PM -0600, Stephen Warren wrote:

> My main worry is that this patch opens the door for set_sysclk() to
> perform some kind of calculation to determine the MCLK rate. Right now,
> this patch doesn't do that, but there's nothing obvious from the code
> that no CODEC is allowed to do that. After all, set_sysclk() has a parameter
> to indicate *which* clock in the CODEC to set. Some CODEC driver author
> might write something where the machine driver tells the CODEC driver
> the desired rate of some CODEC-internal PLL, from which set_sysclk()
> calculates what it needs for MCLK, and then goes off and requests that
> value from the clock API.

I really think you're reading too much into this - the set_sysclk() API
isn't any different to the clock API here really and most of the
potential for doing really problematic stuff and defining your clocks to
mean funny things exists anyway.  Of course people could do tasteless
things but that's always going to be a risk and we also want to try to
minimise the amount of redundant code people have to write.

It should be fairly easy to spot substantial abuse since the driver
would need to call clk_set_rate() with a different rate.

> Ignoring that, I'm still not sure that the CODEC driver setting the MCLK
> rate is appropriate. If we have 1 MCLK output from an SoC, connected to
> 2 different CODECs, why wouldn't the overall machine driver call
> clk_set_rate() on that clock, and then notify the two CODEC drivers of
> that rate. Making each CODEC's set_sysclk() call clk_set_rate() on the
> same clock with the same value seems redundant, albeit it should work
> out fine since they both request the same rate.

Right, it should work fine and it's less work for the machine drivers.
The alternative is that every machine driver using a device with a
programmable input has the code to get that clock from the CODEC device
and set it in sync with telling the CODEC about it which is redundant
and makes life harder for generic drivers like simple card (which
obviously can't know the specifics of every CODEC it works with).

It's certainly a more normal thing from a device model and device tree
point of view for the CODEC to be responsible for getting the resource
and it seems natural to extend that to such basic management as setting
the rate.  

If anything I think we want to expose some or all of the CODEC clock trees
to the clock API, sometimes (rarely but not never) the CODEC PLLs and
FLLs are used to supply things outside the audio subsystem if they
happen to be going spare, it's difficult to do that at the minute since
there's no guaranteed API for being a clock provider.  At the minute
we're mostly reimplementing a custom clock API which seems like the
wrong way to be doing things.  That might go towards answering your
concerns since set_sysclk() would end up as a clock API call (or at
least a thin wrapper around one).
Tushar Behera May 23, 2014, 5:35 a.m. UTC | #8
On 22 May 2014 16:00, Mark Brown <broonie@kernel.org> wrote:
> On Thu, May 22, 2014 at 02:47:07PM +0530, Tushar Behera wrote:
>
>> +     max98090->mclk = devm_clk_get(codec->dev, "mclk");
>> +     if (!IS_ERR(max98090->mclk))
>> +             clk_prepare_enable(max98090->mclk);
>> +
>
> This doesn't handle deferred probe; we need to at least return an error
> if we get -EPROBE_DEFER.
>

Ok.

> It'd also be better to move the enabling to set_bias_level() if possible
> (I don't know if the clock is needed for register access) though less
> essential.

I tested with moving clk_enable/clk_disable calls to set_bias_level():
SND_SOC_BIAS_PREPARE. That works well for me. Does it look okay?

@@ -1801,6 +1801,25 @@ static int max98090_set_bias_level(struct snd_soc_codec *codec,
                break;

        case SND_SOC_BIAS_PREPARE:
+               /*
+                * SND_SOC_BIAS_PREPARE is called while preparing for a
+                * transition to ON or away from ON. If current bias_level
+                * is SND_SOC_BIAS_ON, then it is preparing for a transition
+                * away from ON. Disable the clock in that case, otherwise
+                * enable it.
+                */
+               if (!IS_ERR(max98090->mclk)) {
+                       if (codec->dapm.bias_level == SND_SOC_BIAS_ON)
+                               clk_disable(max98090->mclk);
+                       else
+                               clk_enable(max98090->mclk);
+               }
                break;
Mark Brown May 23, 2014, 11:14 a.m. UTC | #9
On Fri, May 23, 2014 at 11:05:17AM +0530, Tushar Behera wrote:

> I tested with moving clk_enable/clk_disable calls to set_bias_level():
> SND_SOC_BIAS_PREPARE. That works well for me. Does it look okay?

> +               if (!IS_ERR(max98090->mclk)) {
> +                       if (codec->dapm.bias_level == SND_SOC_BIAS_ON)
> +                               clk_disable(max98090->mclk);
> +                       else
> +                               clk_enable(max98090->mclk);
> +               }
>                 break;

Should be clk_prepare_enable() and similarly for the disable and you
should check the error codes but yes, that looks good.
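
Folding both suggestions in, the PREPARE case might end up looking like this (a sketch against the 3.15-era DAPM fields; `ret` and the exact error handling are assumptions, not code from the thread):

```c
	case SND_SOC_BIAS_PREPARE:
		/*
		 * Entered both on the way up to ON and on the way back down:
		 * if we are currently at ON we are leaving it, so disable
		 * MCLK; otherwise we are heading towards ON, so enable it.
		 */
		if (!IS_ERR(max98090->mclk)) {
			if (codec->dapm.bias_level == SND_SOC_BIAS_ON) {
				clk_disable_unprepare(max98090->mclk);
			} else {
				ret = clk_prepare_enable(max98090->mclk);
				if (ret)
					return ret;
			}
		}
		break;
```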
Tushar Behera May 23, 2014, 11:36 a.m. UTC | #10
On 23 May 2014 16:44, Mark Brown <broonie@kernel.org> wrote:
> On Fri, May 23, 2014 at 11:05:17AM +0530, Tushar Behera wrote:
>
>> I tested with moving clk_enable/clk_disable calls to set_bias_level():
>> SND_SOC_BIAS_PREPARE. That works well for me. Does it look okay?
>
>> +               if (!IS_ERR(max98090->mclk)) {
>> +                       if (codec->dapm.bias_level == SND_SOC_BIAS_ON)
>> +                               clk_disable(max98090->mclk);
>> +                       else
>> +                               clk_enable(max98090->mclk);
>> +               }
>>                 break;
>
> Should be clk_prepare_enable() and similarly for the disable and you
> should check the error codes but yes, that looks good.

I was planning to keep clk_prepare/clk_unprepare in probe/remove.

We are not doing anything specific with the return codes here, so maybe
I will only add an error message in case enable/disable fails.
Mark Brown May 23, 2014, 11:41 a.m. UTC | #11
On Fri, May 23, 2014 at 05:06:27PM +0530, Tushar Behera wrote:
> On 23 May 2014 16:44, Mark Brown <broonie@kernel.org> wrote:

> > Should be clk_prepare_enable() and similarly for the disable and you
> > should check the error codes but yes, that looks good.

> I was planning to keep clk_prepare/clk_unprepare in probe/remove.

Why - what purpose would it serve to leave the clock prepared but not
enabled?
Tushar Behera May 23, 2014, 12:02 p.m. UTC | #12
On 23 May 2014 17:11, Mark Brown <broonie@kernel.org> wrote:
> On Fri, May 23, 2014 at 05:06:27PM +0530, Tushar Behera wrote:
>> On 23 May 2014 16:44, Mark Brown <broonie@kernel.org> wrote:
>
>> > Should be clk_prepare_enable() and similarly for the disable and you
>> > should check the error codes but yes, that looks good.
>
>> I was planning to keep clk_prepare/clk_unprepare in probe/remove.
>
> Why - what purpose would it serve to leave the clock prepared but not
> enabled?

I was getting a kernel slow-path warning initially while preparing
this code, but I am not able to reproduce that warning now.
Patch

diff --git a/Documentation/devicetree/bindings/sound/max98090.txt b/Documentation/devicetree/bindings/sound/max98090.txt
index e4c8b36..a5e63fa 100644
--- a/Documentation/devicetree/bindings/sound/max98090.txt
+++ b/Documentation/devicetree/bindings/sound/max98090.txt
@@ -10,6 +10,12 @@  Required properties:
 
 - interrupts : The CODEC's interrupt output.
 
+Optional properties:
+
+- clocks: The phandle of the master clock to the CODEC
+
+- clock-names: Should be "mclk"
+
 Pins on the device (for linking into audio routes):
 
   * MIC1
diff --git a/sound/soc/codecs/max98090.c b/sound/soc/codecs/max98090.c
index 9b76f5a..41c61eb 100644
--- a/sound/soc/codecs/max98090.c
+++ b/sound/soc/codecs/max98090.c
@@ -17,6 +17,7 @@ 
 #include <linux/regmap.h>
 #include <linux/slab.h>
 #include <linux/acpi.h>
+#include <linux/clk.h>
 #include <sound/jack.h>
 #include <sound/pcm.h>
 #include <sound/pcm_params.h>
@@ -1929,6 +1930,11 @@  static int max98090_dai_set_sysclk(struct snd_soc_dai *dai,
 	if (freq == max98090->sysclk)
 		return 0;
 
+	if (!IS_ERR(max98090->mclk)) {
+		freq = clk_round_rate(max98090->mclk, freq);
+		clk_set_rate(max98090->mclk, freq);
+	}
+
 	/* Setup clocks for slave mode, and using the PLL
 	 * PSCLK = 0x01 (when master clk is 10MHz to 20MHz)
 	 *		 0x02 (when master clk is 20MHz to 40MHz)..
@@ -2213,6 +2219,10 @@  static int max98090_probe(struct snd_soc_codec *codec)
 
 	dev_dbg(codec->dev, "max98090_probe\n");
 
+	max98090->mclk = devm_clk_get(codec->dev, "mclk");
+	if (!IS_ERR(max98090->mclk))
+		clk_prepare_enable(max98090->mclk);
+
 	max98090->codec = codec;
 
 	/* Reset the codec, the DSP core, and disable all interrupts */
@@ -2308,6 +2318,9 @@  static int max98090_remove(struct snd_soc_codec *codec)
 
 	cancel_delayed_work_sync(&max98090->jack_work);
 
+	if (!IS_ERR(max98090->mclk))
+		clk_disable_unprepare(max98090->mclk);
+
 	return 0;
 }
 
diff --git a/sound/soc/codecs/max98090.h b/sound/soc/codecs/max98090.h
index 5a3c8d0..cf1b606 100644
--- a/sound/soc/codecs/max98090.h
+++ b/sound/soc/codecs/max98090.h
@@ -1524,6 +1524,7 @@  struct max98090_priv {
 	struct snd_soc_codec *codec;
 	enum max98090_type devtype;
 	struct max98090_pdata *pdata;
+	struct clk *mclk;
 	unsigned int sysclk;
 	unsigned int bclk;
 	unsigned int lrclk;