
Applied "ASoC: sti: Add IEC control" to the asoc tree

Message ID E1ZNiBu-0005i8-Ks@finisterre (mailing list archive)
State Not Applicable

Commit Message

Mark Brown Aug. 7, 2015, 2 p.m. UTC
The patch

   ASoC: sti: Add IEC control

has been applied to the asoc tree at

   git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git 

All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix); however, if
problems are discovered then the patch may be dropped or reverted.

You may get further e-mails resulting from automated or manual testing
and review of the tree; please engage with people reporting problems and
send follow-up patches addressing any issues that are reported if needed.

If any updates are required or you are submitting further changes they
should be sent as incremental updates against current git; existing
patches will not be replaced.

Please add any relevant lists and maintainers to the CCs when replying
to this mail.

Thanks,
Mark

From 36cc093520b9a6348292c253d3ec03bb67a84da8 Mon Sep 17 00:00:00 2001
From: Arnaud Pouliquen <arnaud.pouliquen@st.com>
Date: Thu, 16 Jul 2015 11:36:07 +0200
Subject: [PATCH] ASoC: sti: Add IEC control

Add control to configure IEC60958 settings.

Signed-off-by: Arnaud Pouliquen <arnaud.pouliquen@st.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 sound/soc/sti/uniperif_player.c | 77 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 73 insertions(+), 4 deletions(-)

Comments

Arnaud POULIQUEN Sept. 8, 2015, 4:04 p.m. UTC | #1
Hello,

I'm looking at the possibility of offloading atomic operations such as
mixing, decoding and encoding to a co-processor, and collecting the
result on the host side.

Rationale:
- Allow a non-tunneled mode:
     . decoding and mixing on a DSP but A/V sync on the host (Android,
GStreamer)
     . re-encoding after mixing (HDMI)
- Allow retrieving the post-processed PCM stream to play it on USB or
Bluetooth devices.
- Allow adding some processing to the pipe on the host side.
- MIPS partitioning
- ...

Some constraints:
- avoid copies (buffer descriptors)
- dynamic connections for mixing and splitting.

For this I can list two standard driver frameworks: V4L2 and ALSA.
While V4L2 should be OK for encode and decode, it seems designed more
for video than audio; for mixing and processing it is less clear...

ALSA could answer this kind of use case, using the compress API and the
ASoC dynamic PCM mechanism...

As an example, a mixer would be a sound card with several PCM playback
streams and one PCM capture stream.

For the time being I have never seen this kind of implementation...

What is your feeling on the possibility of doing this with ALSA (with
the objective of being upstreamable)?

Otherwise, is there any standard way to do it?

Thanks in advance for your answer.

Br,
Arnaud
Pierre-Louis Bossart Sept. 8, 2015, 4:46 p.m. UTC | #2
On 9/8/15 11:04 AM, Arnaud Pouliquen wrote:
> Hello,
>
> I'm looking at the possibility of offloading atomic operations such as
> mixing, decoding and encoding to a co-processor, and collecting the
> result on the host side.
>
> Rationale:
> - Allow a non-tunneled mode:
>      . decoding and mixing on a DSP but A/V sync on the host (Android,
> GStreamer)
>      . re-encoding after mixing (HDMI)
> - Allow retrieving the post-processed PCM stream to play it on USB or
> Bluetooth devices.
> - Allow adding some processing to the pipe on the host side.
> - MIPS partitioning
> - ...
>
> Some constraints:
> - avoid copies (buffer descriptors)
> - dynamic connections for mixing and splitting.
>
> For this I can list two standard driver frameworks: V4L2 and ALSA.
> While V4L2 should be OK for encode and decode, it seems designed more
> for video than audio; for mixing and processing it is less clear...
>
> ALSA could answer this kind of use case, using the compress API and the
> ASoC dynamic PCM mechanism...
>
> As an example, a mixer would be a sound card with several PCM playback
> streams and one PCM capture stream.
>
> For the time being I have never seen this kind of implementation...
>
> What is your feeling on the possibility of doing this with ALSA (with
> the objective of being upstreamable)?
>
> Otherwise, is there any standard way to do it?
>

The compress API was designed more for offloading and rendering; if
you want to pass the mixed result back to the host you will have to set
up a capture stream using the regular PCM API.
What you are describing is feasible but has issues related to:
- delay control
- DSP scheduling (no real means to process data faster than real time,
as you would want in a data-driven co-processor)
There are also divergent views on the benefits of offloading
intermediate operations to a resource-constrained co-processor; you
might be better off doing everything on the host in terms of power
consumption.
Mark Brown Sept. 8, 2015, 5:32 p.m. UTC | #3
On Tue, Sep 08, 2015 at 11:46:06AM -0500, Pierre-Louis Bossart wrote:

> What you are describing is feasible but has issues related to:
> - delay control
> - DSP scheduling (no real means to process data faster than real-time as you
> would want in a data-driven co-processor)
> There are also divergent views on the benefits of offloading intermediate
> operations to a resource-constrained co-processor, you might be better off
> doing everything on the host in terms of power consumption.

Indeed - there's also a big system complexity hit.  Whether it's worth
considering does depend on how loaded the system is.  Part of the reason
there's no standard way to do it is that the benefits are sufficiently
unclear to be concerning.
Arnaud POULIQUEN Sept. 9, 2015, 8:36 a.m. UTC | #4
On 09/08/2015 07:32 PM, Mark Brown wrote:
> On Tue, Sep 08, 2015 at 11:46:06AM -0500, Pierre-Louis Bossart wrote:
>
>> What you are describing is feasible but has issues related to:
>> - delay control
>> - DSP scheduling (no real means to process data faster than real-time as you
>> would want in a data-driven co-processor)
>> There are also divergent views on the benefits of offloading intermediate
>> operations to a resource-constrained co-processor, you might be better off
>> doing everything on the host in terms of power consumption.
>
> Indeed - there's also a big system complexity hit.  Whether it's worth
> considering does depend on how loaded the system is.  Part of the reason
> there's no standard way to do it is that the benefits are sufficiently
> unclear to be concerning.
>
I fully agree with you for standard use cases. But with the increasing
number of channels and sampling frequencies we are starting to see use
cases that consume more than 1000 MIPS. In these cases systems can
benefit from partitioning, particularly open systems like Android.

The choice between tunneled and non-tunneled mode is then a compromise
between flexibility and efficiency.

I have my answer: there is no standard API for non-tunneled mode.
Thanks
Mark Brown Sept. 9, 2015, 9:56 a.m. UTC | #5
On Wed, Sep 09, 2015 at 10:36:20AM +0200, Arnaud Pouliquen wrote:

> I have my answer: there is no standard API for non-tunneled mode.

Well, the standard thing is what Pierre described - play back and
capture via standard PCMs with routing internally to the card in the
usual fashion.  It's just not something people normally do.

Patch

diff --git a/sound/soc/sti/uniperif_player.c b/sound/soc/sti/uniperif_player.c
index d8df906..f6eefe1 100644
--- a/sound/soc/sti/uniperif_player.c
+++ b/sound/soc/sti/uniperif_player.c
@@ -250,6 +250,7 @@  static void uni_player_set_channel_status(struct uniperif *player,
 	 * sampling frequency. If no sample rate is already specified, then
 	 * set one.
 	 */
+	mutex_lock(&player->ctrl_lock);
 	if (runtime && (player->stream_settings.iec958.status[3]
 					== IEC958_AES3_CON_FS_NOTID)) {
 		switch (runtime->rate) {
@@ -327,6 +328,7 @@  static void uni_player_set_channel_status(struct uniperif *player,
 		player->stream_settings.iec958.status[3 + (n * 4)] << 24;
 		SET_UNIPERIF_CHANNEL_STA_REGN(player, n, status);
 	}
+	mutex_unlock(&player->ctrl_lock);
 
 	/* Update the channel status */
 	if (player->ver < SND_ST_UNIPERIF_VERSION_UNI_PLR_TOP_1_0)
@@ -538,6 +540,63 @@  static int uni_player_prepare_pcm(struct uniperif *player,
 }
 
 /*
+ * ALSA uniperipheral iec958 controls
+ */
+static int  uni_player_ctl_iec958_info(struct snd_kcontrol *kcontrol,
+				       struct snd_ctl_elem_info *uinfo)
+{
+	uinfo->type = SNDRV_CTL_ELEM_TYPE_IEC958;
+	uinfo->count = 1;
+
+	return 0;
+}
+
+static int uni_player_ctl_iec958_get(struct snd_kcontrol *kcontrol,
+				     struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_dai *dai = snd_kcontrol_chip(kcontrol);
+	struct sti_uniperiph_data *priv = snd_soc_dai_get_drvdata(dai);
+	struct uniperif *player = priv->dai_data.uni;
+	struct snd_aes_iec958 *iec958 = &player->stream_settings.iec958;
+
+	mutex_lock(&player->ctrl_lock);
+	ucontrol->value.iec958.status[0] = iec958->status[0];
+	ucontrol->value.iec958.status[1] = iec958->status[1];
+	ucontrol->value.iec958.status[2] = iec958->status[2];
+	ucontrol->value.iec958.status[3] = iec958->status[3];
+	mutex_unlock(&player->ctrl_lock);
+	return 0;
+}
+
+static int uni_player_ctl_iec958_put(struct snd_kcontrol *kcontrol,
+				     struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_dai *dai = snd_kcontrol_chip(kcontrol);
+	struct sti_uniperiph_data *priv = snd_soc_dai_get_drvdata(dai);
+	struct uniperif *player = priv->dai_data.uni;
+	struct snd_aes_iec958 *iec958 =  &player->stream_settings.iec958;
+
+	mutex_lock(&player->ctrl_lock);
+	iec958->status[0] = ucontrol->value.iec958.status[0];
+	iec958->status[1] = ucontrol->value.iec958.status[1];
+	iec958->status[2] = ucontrol->value.iec958.status[2];
+	iec958->status[3] = ucontrol->value.iec958.status[3];
+	mutex_unlock(&player->ctrl_lock);
+
+	uni_player_set_channel_status(player, NULL);
+
+	return 0;
+}
+
+static struct snd_kcontrol_new uni_player_iec958_ctl = {
+	.iface = SNDRV_CTL_ELEM_IFACE_PCM,
+	.name = SNDRV_CTL_NAME_IEC958("", PLAYBACK, DEFAULT),
+	.info = uni_player_ctl_iec958_info,
+	.get = uni_player_ctl_iec958_get,
+	.put = uni_player_ctl_iec958_put,
+};
+
+/*
  * uniperif rate adjustement control
  */
 static int snd_sti_clk_adjustment_info(struct snd_kcontrol *kcontrol,
@@ -559,7 +618,9 @@  static int snd_sti_clk_adjustment_get(struct snd_kcontrol *kcontrol,
 	struct sti_uniperiph_data *priv = snd_soc_dai_get_drvdata(dai);
 	struct uniperif *player = priv->dai_data.uni;
 
+	mutex_lock(&player->ctrl_lock);
 	ucontrol->value.integer.value[0] = player->clk_adj;
+	mutex_unlock(&player->ctrl_lock);
 
 	return 0;
 }
@@ -594,7 +655,12 @@  static struct snd_kcontrol_new uni_player_clk_adj_ctl = {
 	.put = snd_sti_clk_adjustment_put,
 };
 
-static struct snd_kcontrol_new *snd_sti_ctl[] = {
+static struct snd_kcontrol_new *snd_sti_pcm_ctl[] = {
+	&uni_player_clk_adj_ctl,
+};
+
+static struct snd_kcontrol_new *snd_sti_iec_ctl[] = {
+	&uni_player_iec958_ctl,
 	&uni_player_clk_adj_ctl,
 };
 
@@ -1031,10 +1097,13 @@  int uni_player_init(struct platform_device *pdev,
 		player->stream_settings.iec958.status[4] =
 					IEC958_AES4_CON_MAX_WORDLEN_24 |
 					IEC958_AES4_CON_WORDLEN_24_20;
-	}
 
-	player->num_ctrls = ARRAY_SIZE(snd_sti_ctl);
-	player->snd_ctrls = snd_sti_ctl[0];
+		player->num_ctrls = ARRAY_SIZE(snd_sti_iec_ctl);
+		player->snd_ctrls = snd_sti_iec_ctl[0];
+	} else {
+		player->num_ctrls = ARRAY_SIZE(snd_sti_pcm_ctl);
+		player->snd_ctrls = snd_sti_pcm_ctl[0];
+	}
 
 	return 0;
 }