diff mbox series

[1/1] ASoC: soc-dai: export some symbols

Message ID 20220920034545.2820888-2-jason.zhu@rock-chips.com (mailing list archive)
State New, archived
Headers show
Series [1/1] ASoC: soc-dai: export some symbols | expand

Commit Message

Jason Zhu Sept. 20, 2022, 3:45 a.m. UTC
Sometimes we need to keep some DAIs alive while the card is closed, as
with VAD, so these functions must be exported so that drivers can call them.

Signed-off-by: Jason Zhu <jason.zhu@rock-chips.com>
---
 sound/soc/soc-dai.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

Comments

Jason Zhu Sept. 21, 2022, 2:37 a.m. UTC | #1
On 2022/9/20 20:47, Mark Brown wrote:
> On Tue, Sep 20, 2022 at 11:45:45AM +0800, Jason Zhu wrote:
>
>> Sometimes we need to make some dais alive when close the card, like
>> VAD, so these functions must be exported so that they can be called.
> I'm not sure I fully understand the use case here - why wouldn't
> the core know about the audio stream being kept active?  For
> something like VAD I'd expect this to be just working like a
> normal audio path, if there's a DSP consuming the audio stream
> then it'll keep everything open.  If there is a good use case I
> suspect it'll be clearer if you send the users along with this
> patch.

Thanks. For example, we use the VAD (Voice Activity Detection) and PDM
(Pulse Density Modulation) controllers to record sound. While the system
is awake, the PDM records and copies data to DDR memory via DMA. While
the system is asleep, the VAD detects voice from the PDM and copies data
to SRAM (the SRAM is small). If the VAD detects a specific sound, it
wakes the system, which continues recording. No data may be lost in this
process. So we attach the VAD and PDM to the same card, then close the
card and wake the VAD and PDM again when the system goes to sleep.
Like this code:
vad-sound {
	...
	rockchip,cpu = <&pdm0>;
	rockchip,codec = <&es7202>, <&vad>;
	...
};

static int rockchip_vad_enable_cpudai(struct rockchip_vad *vad)
{
	struct snd_soc_dai *cpu_dai;
	struct snd_pcm_substream *substream;
	int ret = 0;

	cpu_dai = vad->cpu_dai;
	substream = vad->substream;

	if (!cpu_dai || !substream)
		return 0;

	pm_runtime_get_sync(cpu_dai->dev);

	if (cpu_dai->driver->ops && cpu_dai->driver->ops->startup &&
	    cpu_dai->driver->ops->trigger) {
		/* Check ->startup before calling it, and don't clobber
		 * its error code with the trigger result. */
		ret = cpu_dai->driver->ops->startup(substream, cpu_dai);
		if (!ret)
			ret = cpu_dai->driver->ops->trigger(substream,
							    SNDRV_PCM_TRIGGER_START,
							    cpu_dai);
	}

	return ret;
}
When the system wakes up, we open the sound card. The data in SRAM is
copied out first and the VAD is closed. Then DMA is used to move data
from the PDM to DDR memory.

Now we would prefer to use the framework code, like:
static int rockchip_vad_enable_cpudai(struct rockchip_vad *vad)
{
	struct snd_soc_dai *cpu_dai;
	struct snd_pcm_substream *substream;
	int ret = 0;

	cpu_dai = vad->cpu_dai;
	substream = vad->substream;

	if (!cpu_dai || !substream)
		return 0;

	pm_runtime_get_sync(cpu_dai->dev);

	ret = snd_soc_dai_startup(cpu_dai, substream);
	if (!ret)
		ret = snd_soc_pcm_dai_prepare(substream);
	if (!ret)
		ret = snd_soc_pcm_dai_trigger(substream,
					      SNDRV_PCM_TRIGGER_START, 0);

	return ret;
}
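Purely as an illustrative counterpart (not part of the posted patch), the
matching teardown path under the same proposal might look like the sketch
below; rockchip_vad_disable_cpudai is a hypothetical name mirroring the
example above, and it assumes the same framework helpers are exported:

```
/* Hypothetical teardown counterpart: stop and shut down the cpu DAI
 * once the card has been reopened and the SRAM data drained. */
static void rockchip_vad_disable_cpudai(struct rockchip_vad *vad)
{
	struct snd_soc_dai *cpu_dai = vad->cpu_dai;
	struct snd_pcm_substream *substream = vad->substream;

	if (!cpu_dai || !substream)
		return;

	snd_soc_pcm_dai_trigger(substream, SNDRV_PCM_TRIGGER_STOP, 0);
	snd_soc_dai_shutdown(cpu_dai, substream, 0);

	/* balance the pm_runtime_get_sync() from the enable path */
	pm_runtime_put_sync(cpu_dai->dev);
}
```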
In this situation, those symbols must be exported.
I look forward to your reply and suggestions.
Mark Brown Sept. 23, 2022, 12:55 p.m. UTC | #2
On Wed, Sep 21, 2022 at 10:37:06AM +0800, Jason Zhu wrote:
> On 2022/9/20 20:47, Mark Brown wrote:
> > On Tue, Sep 20, 2022 at 11:45:45AM +0800, Jason Zhu wrote:

> > > Sometimes we need to make some dais alive when close the card, like
> > > VAD, so these functions must be exported so that they can be called.

> > I'm not sure I fully understand the use case here - why wouldn't
> > the core know about the audio stream being kept active?  For
> > something like VAD I'd expect this to be just working like a
> > normal audio path, if there's a DSP consuming the audio stream
> > then it'll keep everything open.  If there is a good use case I
> > suspect it'll be clearer if you send the users along with this
> > patch.

> Thanks. For example, we use the VAD(Voice Activity Detect) & PDM(
> Pulse Density Modulation) to record sound>. The PDM is used to
> record and copy data to DDR memory by DMA when the system is alive.
> The VAD is used to detect voice from PDM and copy data to sram
> (The sram is small) when the system is sleep. If the VAD detect
> specific sound, wake up the system and continue to record sound.
> The data can not be lost in this process. So we attach VAD & PDM
> in the same card, then close the card and wake up VAD & PDM again
> when the system is goto sleep. Like these code:

This sounds like a very normal thing with a standard audio stream -
other devices have similar VAD stuff without needing to open code access
to the PCM operations?

> When the system is waked up, open the sound card. The data in sram
> is copied firstly and close the vad. Then use the DMA to move data
> to DDR memory from PDM.

Generally things just continue to stream the voice data through the same
VAD stream IIRC - switching just adds complexity here, you don't have to
deal with joining the VAD and regular streams up for one thing.
kernel test robot Sept. 24, 2022, 4:21 a.m. UTC | #3
Hi Jason,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on broonie-sound/for-next]
[also build test ERROR on linus/master v6.0-rc6 next-20220923]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Jason-Zhu/ASoC-soc-dai-export-some-symbols/20220923-164409
base:   https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git for-next
config: x86_64-randconfig-a014
compiler: clang version 14.0.6 (https://github.com/llvm/llvm-project f28c006a5895fc0e329fe15fead81e37457cb1d1)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/aad495f26cfbcfef836cc4eb63f3c48116f3fcee
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Jason-Zhu/ASoC-soc-dai-export-some-symbols/20220923-164409
        git checkout aad495f26cfbcfef836cc4eb63f3c48116f3fcee
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> sound/soc/soc-dai.o: error: local symbol 'soc_dai_trigger' was exported
Jason Zhu Sept. 26, 2022, 1:34 a.m. UTC | #4
On 2022/9/23 20:55, Mark Brown wrote:
> On Wed, Sep 21, 2022 at 10:37:06AM +0800, Jason Zhu wrote:
>> On 2022/9/20 20:47, Mark Brown wrote:
>>> On Tue, Sep 20, 2022 at 11:45:45AM +0800, Jason Zhu wrote:
>>>> Sometimes we need to make some dais alive when close the card, like
>>>> VAD, so these functions must be exported so that they can be called.
>>> I'm not sure I fully understand the use case here - why wouldn't
>>> the core know about the audio stream being kept active?  For
>>> something like VAD I'd expect this to be just working like a
>>> normal audio path, if there's a DSP consuming the audio stream
>>> then it'll keep everything open.  If there is a good use case I
>>> suspect it'll be clearer if you send the users along with this
>>> patch.
>> Thanks. For example, we use the VAD(Voice Activity Detect) & PDM(
>> Pulse Density Modulation) to record sound>. The PDM is used to
>> record and copy data to DDR memory by DMA when the system is alive.
>> The VAD is used to detect voice from PDM and copy data to sram
>> (The sram is small) when the system is sleep. If the VAD detect
>> specific sound, wake up the system and continue to record sound.
>> The data can not be lost in this process. So we attach VAD & PDM
>> in the same card, then close the card and wake up VAD & PDM again
>> when the system is goto sleep. Like these code:
> This sounds like a very normal thing with a standard audio stream -
> other devices have similar VAD stuff without needing to open code access
> to the PCM operations?


At present, Rockchip is the only one handling VAD in this way.

>
>> When the system is waked up, open the sound card. The data in sram
>> is copied firstly and close the vad. Then use the DMA to move data
>> to DDR memory from PDM.
> Generally things just continue to stream the voice data through the same
> VAD stream IIRC - switching just adds complexity here, you don't have to
> deal with joining the VAD and regular streams up for one thing.


Yes, this looks complicated. But our chip's SRAM, which is assigned to
the VAD, may be used by other devices while the system is awake. So we
have to copy the sound data out of SRAM first, then use DDR (SDRAM) to
record sound data.
Pierre-Louis Bossart Sept. 26, 2022, 7:52 a.m. UTC | #5
On 9/26/22 03:34, Jason Zhu wrote:
> 
> On 2022/9/23 20:55, Mark Brown wrote:
>> On Wed, Sep 21, 2022 at 10:37:06AM +0800, Jason Zhu wrote:
>>> On 2022/9/20 20:47, Mark Brown wrote:
>>>> On Tue, Sep 20, 2022 at 11:45:45AM +0800, Jason Zhu wrote:
>>>>> Sometimes we need to make some dais alive when close the card, like
>>>>> VAD, so these functions must be exported so that they can be called.
>>>> I'm not sure I fully understand the use case here - why wouldn't
>>>> the core know about the audio stream being kept active?  For
>>>> something like VAD I'd expect this to be just working like a
>>>> normal audio path, if there's a DSP consuming the audio stream
>>>> then it'll keep everything open.  If there is a good use case I
>>>> suspect it'll be clearer if you send the users along with this
>>>> patch.
>>> Thanks. For example, we use the VAD(Voice Activity Detect) & PDM(
>>> Pulse Density Modulation) to record sound>. The PDM is used to
>>> record and copy data to DDR memory by DMA when the system is alive.
>>> The VAD is used to detect voice from PDM and copy data to sram
>>> (The sram is small) when the system is sleep. If the VAD detect
>>> specific sound, wake up the system and continue to record sound.
>>> The data can not be lost in this process. So we attach VAD & PDM
>>> in the same card, then close the card and wake up VAD & PDM again
>>> when the system is goto sleep. Like these code:
>> This sounds like a very normal thing with a standard audio stream -
>> other devices have similar VAD stuff without needing to open code access
>> to the PCM operations?
> 
> 
> At present, only VAD is handled in this way by Rockchip.
> 
>>
>>> When the system is waked up, open the sound card. The data in sram
>>> is copied firstly and close the vad. Then use the DMA to move data
>>> to DDR memory from PDM.
>> Generally things just continue to stream the voice data through the same
>> VAD stream IIRC - switching just adds complexity here, you don't have to
>> deal with joining the VAD and regular streams up for one thing.
> 
> 
> Yes, this looks complicated. But our chip's sram which is assigned to VAD
> 
> maybe used by other devices when the system is alive.  So we have to copy
> 
> sound data in sram firstly, then use the DDR(SDRAM) to record sound data.

There are other devices that require a copy of the history buffer from
one PCM device and software stitching with the real-time data coming
from another PCM device. It's not ideal, but not uncommon either; even
for upcoming SDCA devices, combining data from two PCM devices will be
an allowed option (with additional control information to help with the
stitching).

That said, the usual practice for exporting symbols is to share
additional patches that show why this was needed. A single patch in
isolation is hard to review.
Mark Brown Sept. 26, 2022, 3:33 p.m. UTC | #6
On Mon, Sep 26, 2022 at 09:52:34AM +0200, Pierre-Louis Bossart wrote:
> On 9/26/22 03:34, Jason Zhu wrote:
> > On 2022/9/23 20:55, Mark Brown wrote:

> >>> The data can not be lost in this process. So we attach VAD & PDM
> >>> in the same card, then close the card and wake up VAD & PDM again
> >>> when the system is goto sleep. Like these code:

> >> This sounds like a very normal thing with a standard audio stream -
> >> other devices have similar VAD stuff without needing to open code access
> >> to the PCM operations?

> > At present, only VAD is handled in this way by Rockchip.

The point here is that other non-Rockchip devices do similar sounding
things?

> >> Generally things just continue to stream the voice data through the same
> >> VAD stream IIRC - switching just adds complexity here, you don't have to
> >> deal with joining the VAD and regular streams up for one thing.

> > Yes, this looks complicated. But our chip's sram which is assigned to VAD
> > 
> > maybe used by other devices when the system is alive.  So we have to copy
> > 
> > sound data in sram firstly, then use the DDR(SDRAM) to record sound data.

> There are other devices that requires a copy of the history buffer from
> one PCM device and a software stitching with the real-time data coming
> from another PCM device. It's not ideal but not uncommon either, even
> for upcoming SDCA devices, combining data from 2 PCM devices will be an
> allowed option (with additional control information to help with the
> stitching).

If this is something that's not uncommon that sounds like an even
stronger reason for not just randomly exporting the symbols and open
coding things in individual drivers outside of framework control.  What
are these other use cases, or is it other instances of the same thing?

TBH this sounds like at least partly a userspace problem rather than a
kernel one, as with other things that tie multiple audio streams
together.
Pierre-Louis Bossart Sept. 26, 2022, 4:07 p.m. UTC | #7
On 9/26/22 17:33, Mark Brown wrote:
> On Mon, Sep 26, 2022 at 09:52:34AM +0200, Pierre-Louis Bossart wrote:
>> On 9/26/22 03:34, Jason Zhu wrote:
>>> On 2022/9/23 20:55, Mark Brown wrote:
> 
>>>>> The data can not be lost in this process. So we attach VAD & PDM
>>>>> in the same card, then close the card and wake up VAD & PDM again
>>>>> when the system is goto sleep. Like these code:
> 
>>>> This sounds like a very normal thing with a standard audio stream -
>>>> other devices have similar VAD stuff without needing to open code access
>>>> to the PCM operations?
> 
>>> At present, only VAD is handled in this way by Rockchip.
> 
> The point here is that other non-Rockchip devices do similar sounding
> things?
> 
>>>> Generally things just continue to stream the voice data through the same
>>>> VAD stream IIRC - switching just adds complexity here, you don't have to
>>>> deal with joining the VAD and regular streams up for one thing.
> 
>>> Yes, this looks complicated. But our chip's sram which is assigned to VAD
>>>
>>> maybe used by other devices when the system is alive.  So we have to copy
>>>
>>> sound data in sram firstly, then use the DDR(SDRAM) to record sound data.
> 
>> There are other devices that requires a copy of the history buffer from
>> one PCM device and a software stitching with the real-time data coming
>> from another PCM device. It's not ideal but not uncommon either, even
>> for upcoming SDCA devices, combining data from 2 PCM devices will be an
>> allowed option (with additional control information to help with the
>> stitching).
> 
> If this is something that's not uncommon that sounds like an even
> stronger reason for not just randomly exporting the symbols and open
> coding things in individual drivers outside of framework control.  What
> are these other use cases, or is it other instances of the same thing?
> 
> TBH this sounds like at least partly a userspace problem rather than a
> kernel one, as with other things that tie multiple audio streams
> together.

I would tend to agree, the stitching can be either handled in DSP
firmware or in user-space. In the first case the kernel would expose a
single PCM to userspace, and in the second there would be two separate
PCM devices. The kernel drivers would typically do nothing other than
deal with moving captured data if/when available.

I also don't get the notion of 'keeping some DAIs alive when closing the
card'; maybe the idea is to redefine what 'D3' means, or to have an
'active standby' power state that doesn't exist today. That would, in
contrast, be something the frameworks know about.
Jason Zhu Sept. 27, 2022, 3:57 a.m. UTC | #8
On 2022/9/26 23:33, Mark Brown wrote:
> On Mon, Sep 26, 2022 at 09:52:34AM +0200, Pierre-Louis Bossart wrote:
>> On 9/26/22 03:34, Jason Zhu wrote:
>>> On 2022/9/23 20:55, Mark Brown wrote:
>>>>> The data can not be lost in this process. So we attach VAD & PDM
>>>>> in the same card, then close the card and wake up VAD & PDM again
>>>>> when the system is goto sleep. Like these code:
>>>> This sounds like a very normal thing with a standard audio stream -
>>>> other devices have similar VAD stuff without needing to open code access
>>>> to the PCM operations?
>>> At present, only VAD is handled in this way by Rockchip.
> The point here is that other non-Rockchip devices do similar sounding
> things?

No. Usually the VAD is integrated in a codec, like the rt5677, and is
linked with a DSP to handle its data. If the DSP detects useful sound,
it sends an IRQ to the system to wake up and record sound. Other
devices, like the K32W041A, detect and analyse sound with the VAD
itself.

>>>> Generally things just continue to stream the voice data through the same
>>>> VAD stream IIRC - switching just adds complexity here, you don't have to
>>>> deal with joining the VAD and regular streams up for one thing.
>>> Yes, this looks complicated. But our chip's sram which is assigned to VAD
>>>
>>> maybe used by other devices when the system is alive.  So we have to copy
>>>
>>> sound data in sram firstly, then use the DDR(SDRAM) to record sound data.
>> There are other devices that requires a copy of the history buffer from
>> one PCM device and a software stitching with the real-time data coming
>> from another PCM device. It's not ideal but not uncommon either, even
>> for upcoming SDCA devices, combining data from 2 PCM devices will be an
>> allowed option (with additional control information to help with the
>> stitching).
> If this is something that's not uncommon that sounds like an even
> stronger reason for not just randomly exporting the symbols and open
> coding things in individual drivers outside of framework control.  What
> are these other use cases, or is it other instances of the same thing?

Maybe in this case: one PDM is used to record sound, and there are two
ways to move its data. The VAD moves data to SRAM when the system is
asleep, and DMA moves data when the system is awake. If we separate this
into two audio streams, then on wakeup we first close the "PDM + VAD"
audio stream and then open the "PDM + DMA" audio stream. This process
may take so long that the PDM FIFO fills up and some data is lost. But
we hope that no data is lost in the whole process, so this must be done
in one audio stream.

> TBH this sounds like at least partly a userspace problem rather than a
> kernel one, as with other things that tie multiple audio streams
> together.

Yes, userspace can tie multiple audio streams together to avoid doing
complicated things in the kernel. That is a good method!
Mark Brown Sept. 28, 2022, 11:52 a.m. UTC | #9
On Tue, Sep 27, 2022 at 11:57:53AM +0800, Jason Zhu wrote:
> 
> On 2022/9/26 23:33, Mark Brown wrote:
> > On Mon, Sep 26, 2022 at 09:52:34AM +0200, Pierre-Louis Bossart wrote:
> > > On 9/26/22 03:34, Jason Zhu wrote:
> > > > On 2022/9/23 20:55, Mark Brown wrote:

> > > > > > The data can not be lost in this process. So we attach VAD & PDM
> > > > > > in the same card, then close the card and wake up VAD & PDM again
> > > > > > when the system is goto sleep. Like these code:

> > > > > This sounds like a very normal thing with a standard audio stream -
> > > > > other devices have similar VAD stuff without needing to open code access
> > > > > to the PCM operations?

> > > > At present, only VAD is handled in this way by Rockchip.

> > The point here is that other non-Rockchip devices do similar sounding
> > things?

> No.  Usually, the vad is integrated in codec, like rt5677, and is linked
> with DSP to
> handle its data. If DSP detects useful sound, send an irq to system to
> wakeup and
> record sound.  Others detect and analysis sound by VAD itself, like
> K32W041A.

What I mean here is that you're missing my point.  The deferring of full
wake word recognition to a secondary algorithm running somewhere else is
a pretty common design.

> > If this is something that's not uncommon that sounds like an even
> > stronger reason for not just randomly exporting the symbols and open
> > coding things in individual drivers outside of framework control.  What
> > are these other use cases, or is it other instances of the same thing?

> Maybe in this case: One PDM is used to record sound, and there is two way
> to move data. Use the VAD to move data to sram when system is sleep and
> use DMA to move data when sytem is alive. If we seperate this in two audio
> streams, we close the "PDM + VAD" audio stream firstly when system is alive
> and open "PDM + DMA" audio stream. This process maybe take long time
> that PDM FIFO will be full and lost some data. But we hope that data will
> not be lost in the whole proces. So these must be done in one audio
> stream.

I'd have expected that any handover be done such that the low power
wake word stream is running concurrently with the main audio stream for
some period of time, possibly until the sync between the two has been
worked out, and that data would be being read out of the wake word
stream while the full stream is starting up.  As you say I'd expect that
otherwise you'll run into trouble with dropouts.  I don't see how doing
that handover would require that we export any symbols though, if there
is any kernel support needed it should be handled in the framework.
Jason Zhu Sept. 29, 2022, 12:52 a.m. UTC | #10
On 2022/9/28 19:52, Mark Brown wrote:
> On Tue, Sep 27, 2022 at 11:57:53AM +0800, Jason Zhu wrote:
>> On 2022/9/26 23:33, Mark Brown wrote:
>>> On Mon, Sep 26, 2022 at 09:52:34AM +0200, Pierre-Louis Bossart wrote:
>>>> On 9/26/22 03:34, Jason Zhu wrote:
>>>>> On 2022/9/23 20:55, Mark Brown wrote:
>>>>>>> The data can not be lost in this process. So we attach VAD & PDM
>>>>>>> in the same card, then close the card and wake up VAD & PDM again
>>>>>>> when the system is goto sleep. Like these code:
>>>>>> This sounds like a very normal thing with a standard audio stream -
>>>>>> other devices have similar VAD stuff without needing to open code access
>>>>>> to the PCM operations?
>>>>> At present, only VAD is handled in this way by Rockchip.
>>> The point here is that other non-Rockchip devices do similar sounding
>>> things?
>> No.  Usually, the vad is integrated in codec, like rt5677, and is linked
>> with DSP to
>> handle its data. If DSP detects useful sound, send an irq to system to
>> wakeup and
>> record sound.  Others detect and analysis sound by VAD itself, like
>> K32W041A.
> What I mean here is that you're missing my point.  The deferring of full
> wake word recognition to a secondary algorithm running somewhere else is
> a pretty common design.
>
>>> If this is something that's not uncommon that sounds like an even
>>> stronger reason for not just randomly exporting the symbols and open
>>> coding things in individual drivers outside of framework control.  What
>>> are these other use cases, or is it other instances of the same thing?
>> Maybe in this case: One PDM is used to record sound, and there is two way
>> to move data. Use the VAD to move data to sram when system is sleep and
>> use DMA to move data when sytem is alive. If we seperate this in two audio
>> streams, we close the "PDM + VAD" audio stream firstly when system is alive
>> and open "PDM + DMA" audio stream. This process maybe take long time
>> that PDM FIFO will be full and lost some data. But we hope that data will
>> not be lost in the whole proces. So these must be done in one audio
>> stream.
> I'd have exepected that any handover be done such that the low power
> wake word stream is running concurrently with the main audio stream for
> some period of time, possibly until the sync between the two has been
> worked out, and that data would be being read out of the wake word
> stream while the full stream is starting up.  As you say I'd expect that
> otherwise you'll run into trouble with dropouts.  I don't see how doing
> that handover would require that we export any symbols though, if there
> is any kernel support needed it should be handled in the framework.
Thank you very much. I will think about how to support it in the framework.

Patch

diff --git a/sound/soc/soc-dai.c b/sound/soc/soc-dai.c
index d530e8c2b77b..75294e830392 100644
--- a/sound/soc/soc-dai.c
+++ b/sound/soc/soc-dai.c
@@ -405,6 +405,7 @@  int snd_soc_dai_hw_params(struct snd_soc_dai *dai,
 end:
 	return soc_dai_ret(dai, ret);
 }
+EXPORT_SYMBOL_GPL(snd_soc_dai_hw_params);
 
 void snd_soc_dai_hw_free(struct snd_soc_dai *dai,
 			 struct snd_pcm_substream *substream,
@@ -420,6 +421,7 @@  void snd_soc_dai_hw_free(struct snd_soc_dai *dai,
 	/* remove marked substream */
 	soc_dai_mark_pop(dai, substream, hw_params);
 }
+EXPORT_SYMBOL_GPL(snd_soc_dai_hw_free);
 
 int snd_soc_dai_startup(struct snd_soc_dai *dai,
 			struct snd_pcm_substream *substream)
@@ -436,6 +438,7 @@  int snd_soc_dai_startup(struct snd_soc_dai *dai,
 
 	return soc_dai_ret(dai, ret);
 }
+EXPORT_SYMBOL_GPL(snd_soc_dai_startup);
 
 void snd_soc_dai_shutdown(struct snd_soc_dai *dai,
 			  struct snd_pcm_substream *substream,
@@ -451,6 +454,7 @@  void snd_soc_dai_shutdown(struct snd_soc_dai *dai,
 	/* remove marked substream */
 	soc_dai_mark_pop(dai, substream, startup);
 }
+EXPORT_SYMBOL_GPL(snd_soc_dai_shutdown);
 
 int snd_soc_dai_compress_new(struct snd_soc_dai *dai,
 			     struct snd_soc_pcm_runtime *rtd, int num)
@@ -556,6 +560,7 @@  int snd_soc_pcm_dai_probe(struct snd_soc_pcm_runtime *rtd, int order)
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(snd_soc_pcm_dai_probe);
 
 int snd_soc_pcm_dai_remove(struct snd_soc_pcm_runtime *rtd, int order)
 {
@@ -578,6 +583,7 @@  int snd_soc_pcm_dai_remove(struct snd_soc_pcm_runtime *rtd, int order)
 
 	return ret;
 }
+EXPORT_SYMBOL_GPL(snd_soc_pcm_dai_remove);
 
 int snd_soc_pcm_dai_new(struct snd_soc_pcm_runtime *rtd)
 {
@@ -594,6 +600,7 @@  int snd_soc_pcm_dai_new(struct snd_soc_pcm_runtime *rtd)
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(snd_soc_pcm_dai_new);
 
 int snd_soc_pcm_dai_prepare(struct snd_pcm_substream *substream)
 {
@@ -612,6 +619,7 @@  int snd_soc_pcm_dai_prepare(struct snd_pcm_substream *substream)
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(snd_soc_pcm_dai_prepare);
 
 static int soc_dai_trigger(struct snd_soc_dai *dai,
 			   struct snd_pcm_substream *substream, int cmd)
@@ -624,6 +632,7 @@  static int soc_dai_trigger(struct snd_soc_dai *dai,
 
 	return soc_dai_ret(dai, ret);
 }
+EXPORT_SYMBOL_GPL(soc_dai_trigger);
 
 int snd_soc_pcm_dai_trigger(struct snd_pcm_substream *substream,
 			    int cmd, int rollback)
@@ -659,6 +668,7 @@  int snd_soc_pcm_dai_trigger(struct snd_pcm_substream *substream,
 
 	return ret;
 }
+EXPORT_SYMBOL_GPL(snd_soc_pcm_dai_trigger);
 
 int snd_soc_pcm_dai_bespoke_trigger(struct snd_pcm_substream *substream,
 				    int cmd)