
fsl_ssi.c: Getting channel slips with fsl_ssi.c in TDM (network) mode.

Message ID CAG5mAdz=TKSmXBOqKfSj0FYfk3c6syBa87OazyGRRF0NU4snXA@mail.gmail.com (mailing list archive)
State Not Applicable

Commit Message

Caleb Crome Oct. 28, 2015, 10:06 p.m. UTC
On Tue, Oct 27, 2015 at 1:11 PM, Nicolin Chen <nicoleotsuka@gmail.com> wrote:
> On Tue, Oct 27, 2015 at 08:13:44AM +0100, Markus Pargmann wrote:
>
>> > So, the dma priority doesn't seem to be the issue.  It's now set in
>> > the device tree, and strangely it's set to priority 0 (the highest)
>> > along with the UARTS.  priority 0 is just the highest in the device
>> > tree -- it gets remapped to priority 3 in the sdma driver.  the DT
>> > exposes only 3 levels of DMA priority, low, medium, and high.  I
>> > created a new level that maps to DMA priority 7 (the highest in the
>> > hardware), but still got the problem.
>> >
>> > So, still something unknown causing dma to miss samples.  must be in
>> > the dma ISR I would assume.  I guess it's time to look into that.
>
>> Cc Nicolin, Fabio, Shawn
>>
>> Perhaps you have an idea about this?
>
> Off the top of my head:
>
> 1) Enable TUE0, TUE1, ROE0, ROE1 to see if there is any IRQ triggered.
Ah, I found that SIER TIE & RIE were not enabled. I enabled them (and
just submitted a patch to the list, which will need to be fixed).

With my 2 patches, the

/sys/kernel/debug/2028000.ssi/stats

file now shows the proper interrupts.

>
> 2) Set the watermarks for both TX and RX to 8 while using burst sizes
>    of 6. It'd be nicer to provisionally set these numbers using hard
>    code than your current change depending on fifo_depth as it might
>    be an odd value.

Ah, it's fascinating that you say this.  fifo_depth is definitely odd:
it's 15, as set in imx6qdl.dtsi:
fsl,fifo-depth = <15>;
But the DMA maxburst is made even later in the code...

Setting the watermark to 8 and maxburst to 8 dramatically reduces the
channel slip rate; in fact, I didn't see a slip for more than 30
minutes of playing.  That's a new record for sure.  But eventually
there was an underrun, and the channels slipped.

Setting watermark to 8 and maxburst to 6 still had some slips,
seemingly more than 8 & 8.

I feel like a monkey randomly typing at my keyboard though.  I don't
know why maxburst=8 worked better.  I get the
feeling that I was just lucky.

There does seem to be a correlation between user-space-reported
underruns and this channel slip, although they are definitely not in a
1:1 ratio: underruns happen without slips and slips happen without
underruns.  The latter is very disturbing because user space has no
idea something is wrong.

My test is simply to run aplay with a 1000-second, 16-channel sound
file and watch the data decoded on my scope.  The sound file has the
channel number encoded as the most significant nibble of each word, and
I do a conditional trigger to make sure the most significant nibble
after the frame sync is '0', i.e. trigger if there is a rising edge on
data within 300ns of the rising edge of fsync.

Here's the patch that has worked the best so far.


>
> 3) Try to enlarge the ALSA period size in the asound.conf or passing
>    parameters when you do the playback/capture so that the number of
>    interrupts from SDMA may reduce.
I checked this earlier and it seemed to help, but didn't solve the
issue.  I will check it again with my latest updates.

-Caleb



>
> You may also see if the reproducibility is somehow reduced or not.
>
> Nicolin

Comments

Nicolin Chen Oct. 29, 2015, 4:53 a.m. UTC | #1
On Wed, Oct 28, 2015 at 03:06:40PM -0700, Caleb Crome wrote:

> > 2) Set the watermarks for both TX and RX to 8 while using burst sizes
> >    of 6. It'd be nicer to provisionally set these numbers using hard
> >    code than your current change depending on fifo_depth as it might
> >    be an odd value.

> Ah, it's fascinating that you say this.  fifo_depth is definitely odd:
> it's 15, as set in imx6qdl.dtsi:

> fsl,fifo-depth = <15>;
> But the DMA maxburst is made even later in the code...

An odd number for the burst size may cause a problem similar to the
channel swapping in two-channel cases, because the number of data
FIFOs is 2 -- an even number. But it seems not to be related to
your problem here.

> Setting the watermark to 8 and maxburst to 8 dramatically reduces the
> channel slip rate; in fact, I didn't see a slip for more than 30
> minutes of playing.  That's a new record for sure.  But eventually
> there was an underrun, and the channels slipped.
> 
> Setting watermark to 8 and maxburst to 6 still had some slips,
> seemingly more than 8 & 8.
> 
> I feel like a monkey randomly typing at my keyboard though.  I don't
> know why maxburst=8 worked better.  I get the
> feeling that I was just lucky.

That's actually another possible root cause -- a performance issue.
burst=8 needs fewer bus transactions than burst=6 does. As you have
a lot of channels compared to the normal 2 channels, you need to feed
the FIFO more frequently. If SDMA does not feed the data before the
input FIFO underruns, a channel swap might happen: in your case, a
channel slip.

> There does seem to be a correlation between user space reported
> underruns and this channel slip, although they definitely are not 1:1

Reported by user space? Are you saying that's an ALSA underrun in
user space, not a hardware underrun reported by the IRQ in the
driver? They are quite different. An ALSA underrun comes from the DMA
buffer underrunning, while the other one results from FIFO feeding
efficiency. For an ALSA underrun, enlarging the playback period size
and period number will ease the problem:

period number = buffer size / period size;

An ALSA underrun may not be accompanied by a hardware underrun, but
they may co-exist.

> ratio:  underruns happen without slips and slips happen without
> underruns.  The latter is very disturbing because user space has no
> idea something is wrong.

> @@ -1260,8 +1260,8 @@ static int fsl_ssi_imx_probe(struct platform_device *pdev,
>          * We have burstsize be "fifo_depth - 2" to match the SSI
>          * watermark setting in fsl_ssi_startup().
>          */
> -       ssi_private->dma_params_tx.maxburst = ssi_private->fifo_depth - 2;
> -       ssi_private->dma_params_rx.maxburst = ssi_private->fifo_depth - 2;
> +       ssi_private->dma_params_tx.maxburst = 8;
> +       ssi_private->dma_params_rx.maxburst = 8;

I am actually thinking about setting a watermark to a larger number.
I forgot how the SDMA script handles this number. But if this burst
size means the overall data count per transaction, it might indicate
that each FIFO only gets half of the burst size due to dual FIFOs.

Therefore, if setting watermark to 8, each FIFO has 7 (15 - 8) space
left, the largest safe burst size could be 14 (7 * 2) actually.

Yes. That's kind of fine-tuning the parameters. And for your case,
you may try a larger number, as the SSI is simultaneously consuming
a large amount of data, even though it sounds risky. But it's worth
trying since you are using the SSI, which only has tight FIFOs, unlike
the ESAI with its 128-deep FIFOs.

Nicolin
Caleb Crome Oct. 29, 2015, 1:44 p.m. UTC | #2
On Wed, Oct 28, 2015 at 9:53 PM, Nicolin Chen <nicoleotsuka@gmail.com> wrote:
> On Wed, Oct 28, 2015 at 03:06:40PM -0700, Caleb Crome wrote:
>
>> > 2) Set the watermarks for both TX and RX to 8 while using burst sizes
>> >    of 6. It'd be nicer to provisionally set these numbers using hard
>> >    code than your current change depending on fifo_depth as it might
>> >    be an odd value.
>
>> Ah, it's fascinating that you say this.  fifo_depth is definitely odd:
>> it's 15, as set in imx6qdl.dtsi:
>
>> fsl,fifo-depth = <15>;
>> But the DMA maxburst is made even later in the code...
>
> An odd number for the burst size may cause a problem similar to the
> channel swapping in two-channel cases, because the number of data
> FIFOs is 2 -- an even number. But it seems not to be related to
> your problem here.
>
>> Setting the watermark to 8 and maxburst to 8 dramatically reduces the
>> channel slip rate; in fact, I didn't see a slip for more than 30
>> minutes of playing.  That's a new record for sure.  But eventually
>> there was an underrun, and the channels slipped.
>>
>> Setting watermark to 8 and maxburst to 6 still had some slips,
>> seemingly more than 8 & 8.
>>
>> I feel like a monkey randomly typing at my keyboard though.  I don't
>> know why maxburst=8 worked better.  I get the
>> feeling that I was just lucky.
>
> That's actually another possible root cause -- a performance issue.
> burst=8 needs fewer bus transactions than burst=6 does. As you have
> a lot of channels compared to the normal 2 channels, you need to feed
> the FIFO more frequently. If SDMA does not feed the data before the
> input FIFO underruns, a channel swap might happen: in your case, a
> channel slip.
>
>> There does seem to be a correlation between user space reported
>> underruns and this channel slip, although they definitely are not 1:1
>
> Reported by user space? Are you saying that's an ALSA underrun in
> user space, not a hardware underrun reported by the IRQ in the
> driver? They are quite different. An ALSA underrun comes from the DMA
> buffer underrunning, while the other one results from FIFO feeding
> efficiency. For an ALSA underrun, enlarging the playback period size
> and period number will ease the problem:
>
> period number = buffer size / period size;
>
> An ALSA underrun may not be accompanied by a hardware underrun, but
> they may co-exist.

Sometimes they happen at the same time.  So, I run aplay, and all is
fine.  Then the user space app will underrun, and then I look at the
scope, and the channels have slipped.  So somehow the start/restart
after the underrun is not always perfect I guess.

Is there any mechanism for the DMA fifo underruns to be reported back
to user space?  There certainly should be, because the consequences
are catastrophic, yet the user space app goes on as if everything is
just great.  This is much, much worse than the underrun that is
reported (a skip in audio is bad but sometimes tolerable; a channel
slip is permanent and absolutely intolerable).

>
>> ratio:  underruns happen without slips and slips happen without
>> underruns.  The latter is very disturbing because user space has no
>> idea something is wrong.
>
>> @@ -1260,8 +1260,8 @@ static int fsl_ssi_imx_probe(struct platform_device *pdev,
>>          * We have burstsize be "fifo_depth - 2" to match the SSI
>>          * watermark setting in fsl_ssi_startup().
>>          */
>> -       ssi_private->dma_params_tx.maxburst = ssi_private->fifo_depth - 2;
>> -       ssi_private->dma_params_rx.maxburst = ssi_private->fifo_depth - 2;
>> +       ssi_private->dma_params_tx.maxburst = 8;
>> +       ssi_private->dma_params_rx.maxburst = 8;
>
> I am actually thinking about setting a watermark to a larger number.
> I forgot how the SDMA script handles this number. But if this burst
> size means the overall data count per transaction, it might indicate
> that each FIFO only gets half of the burst size due to dual FIFOs.
>
> Therefore, if setting watermark to 8, each FIFO has 7 (15 - 8) space
> left, the largest safe burst size could be 14 (7 * 2) actually.

Oh, does this depend on the data size?  I'm using 16-bit data, so I
guess the bursts are measured in 2 byte units?  Does this mean that
the burst size should be dynamically adjusted depending on word size
(I guess done in hw_params)?

>
> Yes. That's kind of fine-tuning the parameters. And for your case,
> you may try a larger number, as the SSI is simultaneously consuming
> a large amount of data, even though it sounds risky. But it's worth
> trying since you are using the SSI, which only has tight FIFOs, unlike
> the ESAI with its 128-deep FIFOs.
>
> Nicolin
Caleb Crome Oct. 29, 2015, 2:55 p.m. UTC | #3
On Thu, Oct 29, 2015 at 6:44 AM, Caleb Crome <caleb@crome.org> wrote:
> On Wed, Oct 28, 2015 at 9:53 PM, Nicolin Chen <nicoleotsuka@gmail.com> wrote:
>>
>> I am actually thinking about setting a watermark to a larger number.
>> I forgot how the SDMA script handles this number. But if this burst
>> size means the overall data count per transaction, it might indicate
>> that each FIFO only gets half of the burst size due to dual FIFOs.
>>
>> Therefore, if setting watermark to 8, each FIFO has 7 (15 - 8) space
>> left, the largest safe burst size could be 14 (7 * 2) actually.
>
> Oh, does this depend on the data size?  I'm using 16-bit data, so I
> guess the bursts are measured in 2 byte units?  Does this mean that
> the burst size should be dynamically adjusted depending on word size
> (I guess done in hw_params)?
>
>> Nicolin

Okay, so wm=8 and maxburst=14 definitely does not work at all.  wm=8,
maxburst=8 works okay, but still not perfect.

I just discovered some new information:

With wm=8 and maxburst=8 (which is my best setting so far), I just
captured a problem at the very start of playing a file, and restarted
enough times to capture it starting wrong:

Instead of the playback starting with

(hex numbers:  my ramp file has first nibble as channel, second nibble as frame)

frame 0:  00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0, f0
frame 1:  01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1, f1

It started with:

frame 0:  00, 00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0
frame 1:  f0, 01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1

So, the transfer started wrong right out of the gate -- with an extra
sample inserted at the beginning. Again, my setup is:
1) use scope to capture the TDM bus.  Trigger on first data change
2) aplay myramp.wav
3) If okay, ctrl-c and goto 2.
4) The capture below shows everything off by 1 sample.

The capture is here:
https://drive.google.com/open?id=0B-KUa9Yf1o7iOXFtWXk2ZXdoUXc

This test definitely reveals that there is a startup issue.  Now for
the $64,000 question: what to do with this knowledge?  I'm quite
unfamiliar with how the DMA works at all.

I'll start poking around the DMA I guess.

Thanks,
  -Caleb
Roberto Fichera Oct. 29, 2015, 3:37 p.m. UTC | #4
On 10/29/2015 03:55 PM, Caleb Crome wrote:
> On Thu, Oct 29, 2015 at 6:44 AM, Caleb Crome <caleb@crome.org> wrote:
>> On Wed, Oct 28, 2015 at 9:53 PM, Nicolin Chen <nicoleotsuka@gmail.com> wrote:
>>> I am actually thinking about setting a watermark to a larger number.
>>> I forgot how the SDMA script handles this number. But if this burst
>>> size means the overall data count per transaction, it might indicate
>>> that each FIFO only gets half of the burst size due to dual FIFOs.
>>>
>>> Therefore, if setting watermark to 8, each FIFO has 7 (15 - 8) space
>>> left, the largest safe burst size could be 14 (7 * 2) actually.
>> Oh, does this depend on the data size?  I'm using 16-bit data, so I
>> guess the bursts are measured in 2 byte units?  Does this mean that
>> the burst size should be dynamically adjusted depending on word size
>> (I guess done in hw_params)?
>>
>>> Nicolin
> Okay, so wm=8 and maxburst=14 definitely does not work at all.  wm=8,
> maxburst=8 works okay, but still not perfect.
>
> I just discovered some new information:
>
> With wm=8 and maxburst=8 (which is my best setting so far), I just
> captured a problem at the very start of playing a file, and restarted
> enough times to capture it starting wrong:
>
> Instead of the playback starting with
>
> (hex numbers:  my ramp file has first nibble as channel, second nibble as frame)
>
> frame 0:  00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0, f0
> frame 1:  01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1, f1
>
> It started with:
>
> frame 0:  00, 00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0
> frame 1:  f0, 01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1
>
> So, the transfer started wrong right out of the gate -- with an extra
> sample inserted at the beginning. Again, my setup is:
> 1) use scope to capture the TDM bus.  Trigger on first data change
> 2) aplay myramp.wav
> 3) If okay, ctrl-c and goto 2.
> 4) The capture below shows everything off by 1 sample.
>
> The capture is here:
> https://drive.google.com/open?id=0B-KUa9Yf1o7iOXFtWXk2ZXdoUXc
>
> This test definitely reveals that there is a startup issue.  Now for
> the $64,000 question: what to do with this knowledge?  I'm quite
> unfamiliar with how the DMA works at all.

In my case, for example, I'm using an i.MX6SX SoC. I've changed fsl_ssi.c to start the SSI
clock generated internally by setting both RDMAE and TDMAE only once I'm pretty sure
that everything has been set up (DMA and callback). Note that I'm not using ALSA, because
my target is to integrate the SSI in TDM network mode with my DAHDI driver for a VoIP app.

Back to the DMA question: in your case it really shouldn't be a problem, since all the DMA
handling is done by the Linux audio framework.

Regarding my SSI problem, I was able to keep the DMA working for a few seconds before
it got stopped and was never retriggered. Currently I have 2 DMA channels, one for TX and
another for RX. I've changed my DTS and updated my fsl_ssi to handle the new clocks; I
guess only the CLK_SPBA has improved my situation. I've also tried to enable both RIE and
TIE to service the ISR, with and without SSI DMA support, but this ends with a full system
freeze. The ISR was never changed in my fsl_ssi.c.

                ssi1: ssi@02028000 {
                    compatible = "fsl,imx6sx-ssi", "fsl,imx21-ssi";
                    reg = <0x02028000 0x4000>;
                    interrupts = <GIC_SPI 46 IRQ_TYPE_LEVEL_HIGH>;
                    clocks = <&clks IMX6SX_CLK_SSI1_IPG>,
                         <&clks IMX6SX_CLK_SSI1>,
--->>>                         <&clks IMX6SX_CLK_SPBA>,
                         <&clks IMX6SX_CLK_SDMA>;
                    clock-names = "ipg", "baud", "dma", "ahb";
                    dmas = <&sdma 37 1 0>, <&sdma 38 1 0>;
                    dma-names = "rx", "tx";


Another thing I'm looking at is the SDMA events (37 and 38), which are reported by the
reference manual as

37 -> SSI1 Receive 0 DMA request
38 -> SSI1 Transmit 0 DMA request

alongside which there are also

35 -> SSI1 Receive 1 DMA request
36 -> SSI1 Transmit 1 DMA request

I actually don't know how the two event types behave from the SDMA point of view.

I'm also considering writing a plain new audio driver, to at least try to use something
which is supposed to work fine with the SSI.

>
> I'll start poking around the DMA I guess.

I guess it's an SSI startup problem.

>
> Thanks,
>   -Caleb
> _______________________________________________
> Alsa-devel mailing list
> Alsa-devel@alsa-project.org
> http://mailman.alsa-project.org/mailman/listinfo/alsa-devel
>
Caleb Crome Oct. 29, 2015, 3:54 p.m. UTC | #5
On Thu, Oct 29, 2015 at 8:37 AM, Roberto Fichera <kernel@tekno-soft.it> wrote:
> On 10/29/2015 03:55 PM, Caleb Crome wrote:
>> On Thu, Oct 29, 2015 at 6:44 AM, Caleb Crome <caleb@crome.org> wrote:
>>> On Wed, Oct 28, 2015 at 9:53 PM, Nicolin Chen <nicoleotsuka@gmail.com> wrote:
>>>> I am actually thinking about setting a watermark to a larger number.
>>>> I forgot how the SDMA script handles this number. But if this burst
>>>> size means the overall data count per transaction, it might indicate
>>>> that each FIFO only gets half of the burst size due to dual FIFOs.
>>>>
>>>> Therefore, if setting watermark to 8, each FIFO has 7 (15 - 8) space
>>>> left, the largest safe burst size could be 14 (7 * 2) actually.
>>> Oh, does this depend on the data size?  I'm using 16-bit data, so I
>>> guess the bursts are measured in 2 byte units?  Does this mean that
>>> the burst size should be dynamically adjusted depending on word size
>>> (I guess done in hw_params)?
>>>
>>>> Nicolin
>> Okay, so wm=8 and maxburst=14 definitely does not work at all.  wm=8,
>> maxburst=8 works okay, but still not perfect.
>>
>> I just discovered some new information:
>>
>> With wm=8 and maxburst=8 (which is my best setting so far), I just
>> captured a problem at the very start of playing a file, and restarted
>> enough times to capture it starting wrong:
>>
>> Instead of the playback starting with
>>
>> (hex numbers:  my ramp file has first nibble as channel, second nibble as frame)
>>
>> frame 0:  00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0, f0
>> frame 1:  01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1, f1
>>
>> It started with:
>>
>> frame 0:  00, 00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0
>> frame 1:  f0, 01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1
>>
>> So, the transfer started wrong right out of the gate -- with an extra
>> sample inserted at the beginning. Again, my setup is:
>> 1) use scope to capture the TDM bus.  Trigger on first data change
>> 2) aplay myramp.wav
>> 3) If okay, ctrl-c and goto 2.
>> 4) The capture below shows everything off by 1 sample.
>>
>> The capture is here:
>> https://drive.google.com/open?id=0B-KUa9Yf1o7iOXFtWXk2ZXdoUXc
>>
>> This test definitely reveals that there is a startup issue.  Now for
>> the $64,000 question: what to do with this knowledge?  I'm quite
>> unfamiliar with how the DMA works at all.
>
> In my case, for example, I'm using an i.MX6SX SoC. I've changed fsl_ssi.c to start the SSI
> clock generated internally by setting both RDMAE and TDMAE only once I'm pretty sure
> that everything has been set up (DMA and callback). Note that I'm not using ALSA, because
> my target is to integrate the SSI in TDM network mode with my DAHDI driver for a VoIP app.
>
> Back to the DMA question: in your case it really shouldn't be a problem, since all the DMA
> handling is done by the Linux audio framework.
>
> Regarding my SSI problem, I was able to keep the DMA working for a few seconds before
> it got stopped and was never retriggered. Currently I have 2 DMA channels, one for TX and
> another for RX. I've changed my DTS and updated my fsl_ssi to handle the new clocks; I
> guess only the CLK_SPBA has improved my situation. I've also tried to enable both RIE and
> TIE to service the ISR, with and without SSI DMA support, but this ends with a full system
> freeze.

I got this system freeze too when enabling RIE and TIE, because the
interrupts TFE1IE, TFE0IE, TDE1IE, TDE0IE are *enabled* at reset
(check ref manual 61.9.5), which I suspect causes a livelock kind of
situation where the ISR is just called infinitely often.  After
disabling those, the system worked okay.  Check out the patch I sent
on the issue yesterday or the day before.
>
> Another thing I'm looking at is the SDMA events (37 and 38), which are reported by the
> reference manual as
>
> 37 -> SSI1 Receive 0 DMA request
> 38 -> SSI1 Transmit 0 DMA request
>
> alongside which there are also
>
> 35 -> SSI1 Receive 1 DMA request
> 36 -> SSI1 Transmit 1 DMA request
>
> I actually don't know how the two event types behave from the SDMA point of view.

Events 35 and 36 are for dual-FIFO mode only, and no current system
(with fsl_ssi.c anyway) uses dual-FIFO mode.  How do I know?  Because
it's definitely broken in fsl_ssi.c.  I was just about to report
that bug.

hint:  fsl_ssi.c:  if (ssi_private->use_dma && !ret && dmas[3] ==
IMX_DMATYPE_SSI_DUAL) {
should read  if (ssi_private->use_dma && !ret && dmas[4] ==
IMX_DMATYPE_SSI_DUAL) {

>
> I'm also considering writing a plain new audio driver, to at least try to use something
> which is supposed to work fine with the SSI.

Yeah, maybe that's the easiest way to go just to get operational.
Start with just the bare minimum ssi driver so you know all the
registers are locked into place the way you like.

-caleb
Roberto Fichera Oct. 29, 2015, 4:02 p.m. UTC | #6
On 10/29/2015 04:54 PM, Caleb Crome wrote:
> On Thu, Oct 29, 2015 at 8:37 AM, Roberto Fichera <kernel@tekno-soft.it> wrote:
>> On 10/29/2015 03:55 PM, Caleb Crome wrote:
>>> On Thu, Oct 29, 2015 at 6:44 AM, Caleb Crome <caleb@crome.org> wrote:
>>>> On Wed, Oct 28, 2015 at 9:53 PM, Nicolin Chen <nicoleotsuka@gmail.com> wrote:
>>>>> I am actually thinking about setting a watermark to a larger number.
>>>>> I forgot how the SDMA script handles this number. But if this burst
>>>>> size means the overall data count per transaction, it might indicate
>>>>> that each FIFO only gets half of the burst size due to dual FIFOs.
>>>>>
>>>>> Therefore, if setting watermark to 8, each FIFO has 7 (15 - 8) space
>>>>> left, the largest safe burst size could be 14 (7 * 2) actually.
>>>> Oh, does this depend on the data size?  I'm using 16-bit data, so I
>>>> guess the bursts are measured in 2 byte units?  Does this mean that
>>>> the burst size should be dynamically adjusted depending on word size
>>>> (I guess done in hw_params)?
>>>>
>>>>> Nicolin
>>> Okay, so wm=8 and maxburst=14 definitely does not work at all.  wm=8,
>>> maxburst=8 works okay, but still not perfect.
>>>
>>> I just discovered some new information:
>>>
>>> With wm=8 and maxburst=8 (which is my best setting so far), I just
>>> captured a problem at the very start of playing a file, and restarted
>>> enough times to capture it starting wrong:
>>>
>>> Instead of the playback starting with
>>>
>>> (hex numbers:  my ramp file has first nibble as channel, second nibble as frame)
>>>
>>> frame 0:  00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0, f0
>>> frame 1:  01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1, f1
>>>
>>> It started with:
>>>
>>> frame 0:  00, 00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0
>>> frame 1:  f0, 01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1
>>>
>>> So, the transfer started wrong right out of the gate -- with an extra
>>> sample inserted at the beginning. Again, my setup is:
>>> 1) use scope to capture the TDM bus.  Trigger on first data change
>>> 2) aplay myramp.wav
>>> 3) If okay, ctrl-c and goto 2.
>>> 4) The capture below shows everything off by 1 sample.
>>>
>>> The capture is here:
>>> https://drive.google.com/open?id=0B-KUa9Yf1o7iOXFtWXk2ZXdoUXc
>>>
>>> This test definitely reveals that there is a startup issue.  Now for
>>> the $64,000 question: what to do with this knowledge?  I'm quite
>>> unfamiliar with how the DMA works at all.
>> In my case, for example, I'm using an i.MX6SX SoC. I've changed fsl_ssi.c to start the SSI
>> clock generated internally by setting both RDMAE and TDMAE only once I'm pretty sure
>> that everything has been set up (DMA and callback). Note that I'm not using ALSA, because
>> my target is to integrate the SSI in TDM network mode with my DAHDI driver for a VoIP app.
>>
>> Back to the DMA question: in your case it really shouldn't be a problem, since all the DMA
>> handling is done by the Linux audio framework.
>>
>> Regarding my SSI problem, I was able to keep the DMA working for a few seconds before
>> it got stopped and was never retriggered. Currently I have 2 DMA channels, one for TX and
>> another for RX. I've changed my DTS and updated my fsl_ssi to handle the new clocks; I
>> guess only the CLK_SPBA has improved my situation. I've also tried to enable both RIE and
>> TIE to service the ISR, with and without SSI DMA support, but this ends with a full system freeze.
> I got this system freeze too when enabling RIE and TIE, because the
> interrupts TFE1IE, TFE0IE, TDE1IE, TDE0IE are *enabled* at reset
> (check ref manual 61.9.5), which I suspect causes a livelock kind of
> situation where the ISR is just called infinitely often.  After
> disabling those, the system worked okay.  Check out the patch I sent
> on the issue yesterday or the day before.

Ooohh!!! Forgot to check this!!! I'm now going to mask them!!!

>
>> Another thing I'm looking at is the SDMA events (37 and 38), which are reported by the
>> reference manual as
>>
>> 37 -> SSI1 Receive 0 DMA request
>> 38 -> SSI1 Transmit 0 DMA request
>>
>> alongside which there are also
>>
>> 35 -> SSI1 Receive 1 DMA request
>> 36 -> SSI1 Transmit 1 DMA request
>>
>> I don't actually know how the two event types behave from the SDMA point of view.
> Events 35 and 36 are for dual-FIFO mode only, and no current system
> (with fsl_ssi.c anyway) uses dual-FIFO mode.  How do I know?  Because
> it's definitely broken in fsl_ssi.c.  I was just about to report
> that bug.

Ah! Thanks! The reference manual is really clear to explain it :-D !

> hint:  fsl_ssi.c:  if (ssi_private->use_dma && !ret && dmas[3] ==
> IMX_DMATYPE_SSI_DUAL) {
> should read  if (ssi_private->use_dma && !ret && dmas[4] ==
> IMX_DMATYPE_SSI_DUAL) {

Yep! I know that piece of code.

>
>> I'm also considering writing a plain new audio driver, to at least try to use something
>> which is supposed to work fine with the SSI.
> Yeah, maybe that's the easiest way to go just to get operational.
> Start with just the bare minimum ssi driver so you know all the
> registers are locked into place the way you like.
>
> -caleb
Caleb Crome Oct. 29, 2015, 4:19 p.m. UTC | #7
On Thu, Oct 29, 2015 at 9:02 AM, Roberto Fichera <kernel@tekno-soft.it> wrote:
> On 10/29/2015 04:54 PM, Caleb Crome wrote:
>> On Thu, Oct 29, 2015 at 8:37 AM, Roberto Fichera <kernel@tekno-soft.it> wrote:
>>> On 10/29/2015 03:55 PM, Caleb Crome wrote:
>>> I don't actually know how the two event types behave from the SDMA point of view.
>> Events 35 and 36 are for dual-FIFO mode only, and no current system
>> (with fsl_ssi.c anyway) uses dual-FIFO mode.  How do I know?  Because
>> it's definitely broken in fsl_ssi.c.  I was just about to report
>> that bug.
>
> Ah! Thanks! The reference manual is really clear to explain it :-D !
>
>> hint:  fsl_ssi.c:  if (ssi_private->use_dma && !ret && dmas[2] ==
>> IMX_DMATYPE_SSI_DUAL) {
>> should read  if (ssi_private->use_dma && !ret && dmas[3] ==
>> IMX_DMATYPE_SSI_DUAL) {

Oops, never mind.  I was looking at that wrong.  It's correct as is.
-Caleb
Roberto Fichera Oct. 29, 2015, 4:34 p.m. UTC | #8
On 10/29/2015 05:02 PM, Roberto Fichera wrote:
> On 10/29/2015 04:54 PM, Caleb Crome wrote:
>> On Thu, Oct 29, 2015 at 8:37 AM, Roberto Fichera <kernel@tekno-soft.it> wrote:
>>> On 10/29/2015 03:55 PM, Caleb Crome wrote:
>>>> On Thu, Oct 29, 2015 at 6:44 AM, Caleb Crome <caleb@crome.org> wrote:
>>>>> On Wed, Oct 28, 2015 at 9:53 PM, Nicolin Chen <nicoleotsuka@gmail.com> wrote:
>>>>>> I am actually thinking about setting a watermark to a larger number.
>>>>>> I forgot how the SDMA script handles this number. But if this burst
>>>>>> size means the overall data count per transaction, it might indicate
>>>>>> that each FIFO only gets half of the burst size due to dual FIFOs.
>>>>>>
>>>>>> Therefore, if setting watermark to 8, each FIFO has 7 (15 - 8) space
>>>>>> left, the largest safe burst size could be 14 (7 * 2) actually.
>>>>> Oh, does this depend on the data size?  I'm using 16-bit data, so I
>>>>> guess the bursts are measured in 2 byte units?  Does this mean that
>>>>> the burst size should be dynamically adjusted depending on word size
>>>>> (I guess done in hw_params)?
>>>>>
>>>>>> Nicolin
>>>> Okay, so wm=8 and maxburst=14 definitely does not work at all.  wm=8,
>>>> maxburst=8 works okay, but still not perfect.
>>>>
>>>> I just discovered some new information:
>>>>
>>>> With wm=8 and maxburst=8 (which is my best setting so far), I just
>>>> captured a problem at the very start of playing a file, and restarted
>>>> enough times to capture it starting wrong:
>>>>
>>>> Instead of the playback starting with
>>>>
>>>> (hex numbers:  my ramp file has first nibble as channel, second nibble as frame)
>>>>
>>>> frame 0:  00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0, f0
>>>> frame 1:  01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1, f1
>>>>
>>>> It started with:
>>>>
>>>> frame 0:  00, 00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0
>>>> frame 1:  f0, 01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1
>>>>
>>>> So, the transfer started wrong right out of the gate -- with an extra
>>>> sample inserted at the beginning. Again, my setup is:
>>>> 1) use scope to capture the TDM bus.  Trigger on first data change
>>>> 2) aplay myramp.wav
>>>> 3) If okay, ctrl-c and goto 2.
>>>> 4) The capture below shows everything off by 1 sample.
>>>>
>>>> The capture is here:
>>>> https://drive.google.com/open?id=0B-KUa9Yf1o7iOXFtWXk2ZXdoUXc
>>>>
>>>> This test definitely reveals that there is a startup issue.  Now for
>>>> the $64,000 question: what to do with this knowledge?  I'm quite
>>>> unfamiliar with how the DMA works at all.
>>> I'm my case for example, I'm using a iMX6SX SoC, I've changed fsl_ssi.c to start the SSI
>>> clock generated internally by setting both RDMAE and TDMAE just once I'm pretty sure
>>> that everything has been setup (DMA and callback). Note that I'm not using alsa because,
>>> my target is to integrate SSI in TDM network mode with my DAHDI driver for VoIP app.
>>>
>>> Back to the DMA question, in your case shouldn't be really a problem since all DMA
>>> stuff is handled by the linux audio framework.
>>>
>>> Regarding my SSI problem, I was able to keep the DMA working for few second once before
>>> it get stopped and never retriggered. Currently I've 2 DMA channel one for TX and another for RX
>>> I've changed my DTS and update my fsl_ssi to handle new clocks, I guess only the CLK_SPBA
>>> has improved my situation. I've also tried to enable both RIE and TIE to service the ISR, with
>>> and without SSI DMA support, but this end with a full system freeze.
>> I got this system freeze too when enabling RIE and TIE because the
>> interrupts TFE1IE, TFE0IE, TDE1IE, TDE0IE are *enabled* at reset.
>> (Check ref manual 61.9.5).   which I suspect was a livelock kind of
>> situation where the ISR is just called infinitely often.  After
>> disabling those, then the system worked okay.  Check out the previous
>> patch I sent on the issue yesterday or the day before.
> Ooohh!!! Forgot to check this!!! I'm now going to mask them!!!

Doesn't work for me! It still freezes the system!  SIER=0x01d005f4

>
>>> Another thing I'm looking is the sdma events (37 and 38) which are reported by the reference
>>> manual to
>>>
>>> 37 -> SSI1 Receive 0 DMA request
>>> 38 -> SSI1 Transmit 0 DMA request
>>>
>>> along that there are also
>>>
>>> 35 -> SSI1 Receive 1 DMA request
>>> 36 -> SSI1 Transmit 1 DMA request
>>>
>>> I don't know actually how the two events types will behaves from the SDMA point of view.
>> The 35 and 36 are for Dual fifo mode only, and no current system (with
>> fsl_ssi.c anyway) uses dual fifo mode.  How do I know?  Because the
>> it's definitely broken in the fsl_ssi.c.  I was just about to report
>> that bug.
> Ah! Thanks! The reference manual is really clear to explain it :-D !
>
>> hint:  fsl_ssi.c:  if (ssi_private->use_dma && !ret && dmas[3] ==
>> IMX_DMATYPE_SSI_DUAL) {
>> should read  if (ssi_private->use_dma && !ret && dmas[4] ==
>> IMX_DMATYPE_SSI_DUAL) {
> Yep! I know such piece of code.
>
>>> I'm also considering to make plain new audio driver to at least try to use something which
>>> is supposed to work fine with SSI.
>> Yeah, maybe that's the easiest way to go just to get operational.
>> Start with just the bare minimum ssi driver so you know all the
>> registers are locked into place the way you like.
>>
>> -caleb
>> _______________________________________________
>> Alsa-devel mailing list
>> Alsa-devel@alsa-project.org
>> http://mailman.alsa-project.org/mailman/listinfo/alsa-devel
>>
Caleb Crome Oct. 29, 2015, 4:39 p.m. UTC | #9
On Thu, Oct 29, 2015 at 9:34 AM, Roberto Fichera <kernel@tekno-soft.it> wrote:
> On 10/29/2015 05:02 PM, Roberto Fichera wrote:
>> On 10/29/2015 04:54 PM, Caleb Crome wrote:
>>> On Thu, Oct 29, 2015 at 8:37 AM, Roberto Fichera <kernel@tekno-soft.it> wrote:
>>>> On 10/29/2015 03:55 PM, Caleb Crome wrote:
>>>>> On Thu, Oct 29, 2015 at 6:44 AM, Caleb Crome <caleb@crome.org> wrote:
>>>>>> On Wed, Oct 28, 2015 at 9:53 PM, Nicolin Chen <nicoleotsuka@gmail.com> wrote:
>>>>>>> I am actually thinking about setting a watermark to a larger number.
>>>>>>> I forgot how the SDMA script handles this number. But if this burst
>>>>>>> size means the overall data count per transaction, it might indicate
>>>>>>> that each FIFO only gets half of the burst size due to dual FIFOs.
>>>>>>>
>>>>>>> Therefore, if setting watermark to 8, each FIFO has 7 (15 - 8) space
>>>>>>> left, the largest safe burst size could be 14 (7 * 2) actually.
>>>>>> Oh, does this depend on the data size?  I'm using 16-bit data, so I
>>>>>> guess the bursts are measured in 2 byte units?  Does this mean that
>>>>>> the burst size should be dynamically adjusted depending on word size
>>>>>> (I guess done in hw_params)?
>>>>>>
>>>>>>> Nicolin
>>>>> Okay, so wm=8 and maxburst=14 definitely does not work at all,.  wm=8,
>>>>> maxburst=8 works okay, but still not perfect.
>>>>>
>>>>> I just discovered some new information:
>>>>>
>>>>> With wm=8 and maxburst=8 (which is my best setting so far), I just
>>>>> captured a problem at the very start of playing a file, and restarted
>>>>> enough times to capture it starting wrong:
>>>>>
>>>>> Instead of the playback starting with
>>>>>
>>>>> (hex numbers:  my ramp file has first nibble as channel, second nibble as frame)
>>>>>
>>>>> frame 0:  00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0, f0
>>>>> frame 1:  01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1, f1
>>>>>
>>>>> It started with:
>>>>>
>>>>> frame 0:  00, 00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0
>>>>> frame 1:  f0, 01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1
>>>>>
>>>>> So, the transfer started wrong right out of the gate -- with an extra
>>>>> sample inserted at the beginning. Again, my setup is:
>>>>> 1) use scope to capture the TDM bus.  Trigger on first data change
>>>>> 2) aplay myramp.wav
>>>>> 3) If okay, ctrl-c and goto 2.
>>>>> 4) The capture below shows everything off by 1 sample.
>>>>>
>>>>> The capture is here:
>>>>> https://drive.google.com/open?id=0B-KUa9Yf1o7iOXFtWXk2ZXdoUXc
>>>>>
>>>>> This test definitely reveals that there is a startup issue.  Now for
>>>>> the $64,000 question: what to do with this knowledge?  I'm quite
>>>>> unfamiliar with how the DMA works at all.
>>>> I'm my case for example, I'm using a iMX6SX SoC, I've changed fsl_ssi.c to start the SSI
>>>> clock generated internally by setting both RDMAE and TDMAE just once I'm pretty sure
>>>> that everything has been setup (DMA and callback). Note that I'm not using alsa because,
>>>> my target is to integrate SSI in TDM network mode with my DAHDI driver for VoIP app.
>>>>
>>>> Back to the DMA question, in your case shouldn't be really a problem since all DMA
>>>> stuff is handled by the linux audio framework.
>>>>
>>>> Regarding my SSI problem, I was able to keep the DMA working for few second once before
>>>> it get stopped and never retriggered. Currently I've 2 DMA channel one for TX and another for RX
>>>> I've changed my DTS and update my fsl_ssi to handle new clocks, I guess only the CLK_SPBA
>>>> has improved my situation. I've also tried to enable both RIE and TIE to service the ISR, with
>>>> and without SSI DMA support, but this end with a full system freeze.
>>> I got this system freeze too when enabling RIE and TIE because the
>>> interrupts TFE1IE, TFE0IE, TDE1IE, TDE0IE are *enabled* at reset.
>>> (Check ref manual 61.9.5).   which I suspect was a livelock kind of
>>> situation where the ISR is just called infinitely often.  After
>>> disabling those, then the system worked okay.  Check out the previous
>>> patch I sent on the issue yesterday or the day before.
>> Ooohh!!! Forgot to check this!!! I'm now going to mask them!!!
>
> Doesn't work for me! Still freeze the system!  SIER=0x01d005f4

You still have many per-frame interrupts enabled: RLSIE, TLSIE,
RFSIE, TFSIE, and so on.  Each of these generates one interrupt per
frame, and not necessarily at the same time, so you could be seeing
four or more interrupts per frame.  Make sure every bit is zero
except the DMA enables and the specific interrupts you actually want.

-C
Roberto Fichera Oct. 29, 2015, 4:59 p.m. UTC | #10
On 10/29/2015 05:39 PM, Caleb Crome wrote:
> On Thu, Oct 29, 2015 at 9:34 AM, Roberto Fichera <kernel@tekno-soft.it> wrote:
>> On 10/29/2015 05:02 PM, Roberto Fichera wrote:
>>> On 10/29/2015 04:54 PM, Caleb Crome wrote:
>>>> On Thu, Oct 29, 2015 at 8:37 AM, Roberto Fichera <kernel@tekno-soft.it> wrote:
>>>>> On 10/29/2015 03:55 PM, Caleb Crome wrote:
>>>>>> On Thu, Oct 29, 2015 at 6:44 AM, Caleb Crome <caleb@crome.org> wrote:
>>>>>>> On Wed, Oct 28, 2015 at 9:53 PM, Nicolin Chen <nicoleotsuka@gmail.com> wrote:
>>>>>>>> I am actually thinking about setting a watermark to a larger number.
>>>>>>>> I forgot how the SDMA script handles this number. But if this burst
>>>>>>>> size means the overall data count per transaction, it might indicate
>>>>>>>> that each FIFO only gets half of the burst size due to dual FIFOs.
>>>>>>>>
>>>>>>>> Therefore, if setting watermark to 8, each FIFO has 7 (15 - 8) space
>>>>>>>> left, the largest safe burst size could be 14 (7 * 2) actually.
>>>>>>> Oh, does this depend on the data size?  I'm using 16-bit data, so I
>>>>>>> guess the bursts are measured in 2 byte units?  Does this mean that
>>>>>>> the burst size should be dynamically adjusted depending on word size
>>>>>>> (I guess done in hw_params)?
>>>>>>>
>>>>>>>> Nicolin
>>>>>> Okay, so wm=8 and maxburst=14 definitely does not work at all,.  wm=8,
>>>>>> maxburst=8 works okay, but still not perfect.
>>>>>>
>>>>>> I just discovered some new information:
>>>>>>
>>>>>> With wm=8 and maxburst=8 (which is my best setting so far), I just
>>>>>> captured a problem at the very start of playing a file, and restarted
>>>>>> enough times to capture it starting wrong:
>>>>>>
>>>>>> Instead of the playback starting with
>>>>>>
>>>>>> (hex numbers:  my ramp file has first nibble as channel, second nibble as frame)
>>>>>>
>>>>>> frame 0:  00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0, f0
>>>>>> frame 1:  01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1, f1
>>>>>>
>>>>>> It started with:
>>>>>>
>>>>>> frame 0:  00, 00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0
>>>>>> frame 1:  f0, 01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1
>>>>>>
>>>>>> So, the transfer started wrong right out of the gate -- with an extra
>>>>>> sample inserted at the beginning. Again, my setup is:
>>>>>> 1) use scope to capture the TDM bus.  Trigger on first data change
>>>>>> 2) aplay myramp.wav
>>>>>> 3) If okay, ctrl-c and goto 2.
>>>>>> 4) The capture below shows everything off by 1 sample.
>>>>>>
>>>>>> The capture is here:
>>>>>> https://drive.google.com/open?id=0B-KUa9Yf1o7iOXFtWXk2ZXdoUXc
>>>>>>
>>>>>> This test definitely reveals that there is a startup issue.  Now for
>>>>>> the $64,000 question: what to do with this knowledge?  I'm quite
>>>>>> unfamiliar with how the DMA works at all.
>>>>> I'm my case for example, I'm using a iMX6SX SoC, I've changed fsl_ssi.c to start the SSI
>>>>> clock generated internally by setting both RDMAE and TDMAE just once I'm pretty sure
>>>>> that everything has been setup (DMA and callback). Note that I'm not using alsa because,
>>>>> my target is to integrate SSI in TDM network mode with my DAHDI driver for VoIP app.
>>>>>
>>>>> Back to the DMA question, in your case shouldn't be really a problem since all DMA
>>>>> stuff is handled by the linux audio framework.
>>>>>
>>>>> Regarding my SSI problem, I was able to keep the DMA working for few second once before
>>>>> it get stopped and never retriggered. Currently I've 2 DMA channel one for TX and another for RX
>>>>> I've changed my DTS and update my fsl_ssi to handle new clocks, I guess only the CLK_SPBA
>>>>> has improved my situation. I've also tried to enable both RIE and TIE to service the ISR, with
>>>>> and without SSI DMA support, but this end with a full system freeze.
>>>> I got this system freeze too when enabling RIE and TIE because the
>>>> interrupts TFE1IE, TFE0IE, TDE1IE, TDE0IE are *enabled* at reset.
>>>> (Check ref manual 61.9.5).   which I suspect was a livelock kind of
>>>> situation where the ISR is just called infinitely often.  After
>>>> disabling those, then the system worked okay.  Check out the previous
>>>> patch I sent on the issue yesterday or the day before.
>>> Ooohh!!! Forgot to check this!!! I'm now going to mask them!!!
>> Doesn't work for me! Still freeze the system!  SIER=0x01d005f4

I thought the same, but setting only RFF0, TFE0, RDMAE, and TDMAE along with RIE and TIE still freezes the system.

> You still have many per-frame interrupts enabled, which is still too
> many enabled.  for example, you have RLSIE, TLSIE, RFSIE, TFSIE, etc.
> These all generate one interrupt per frame, and not necessarily at the
> same time, so you could be having 4 or more interrupts per frame.  Be
> sure they're all zero except for the DMA enable and the specific ones
> you actually want enabled.

Yep! But I still think that the CPU should be able to handle all of them.

>
> -C
Nicolin Chen Oct. 29, 2015, 5:19 p.m. UTC | #11
On Thu, Oct 29, 2015 at 06:44:12AM -0700, Caleb Crome wrote:
> > Reported by user space? Are you saying that's an ALSA underrun in
> > the user space, not a hardware underrun reported by the IRQ in the
> > driver? They are quite different. ALSA underrun comes from the DMA
> > buffer gets underrun while the other one results from FIFO feeding
> > efficiency. For ALSA underrun, enlarging the playback period size
> > and period number will ease the problem:
 
> Sometimes they happen at the same time.  So, I run aplay, and all is

Does 'they' mean ALSA underrun + hardware underrun, or ALSA underrun
+ channel slip? A channel slip caused by an ALSA underrun wouldn't be
quite logical, since the stream should restart via the trigger()
functions in the DAI drivers, IIRC.

> fine.  Then the user space app will underrun, and then I look at the
> scope, and the channels have slipped.  So somehow the start/restart
> after the underrun is not always perfect I guess.

> Is there any mechanism for the DMA fifo underruns to be reported back
> to user space?  There certainly should be, because the consequences

No. The release from the official Freescale tree has a reset procedure
applied on ESAI underrun, but not on SSI; still, I guess you may want
to refer to that.

> are catastrophic, yet the user space app goes on as if everything is
> just great.  Much, much worse than the underrun that is reported (i.e.
> a skip in audio is bad but sometimes tolerable.  A channel slip is
> permanent and absolutely intolerable).
Nicolin Chen Oct. 29, 2015, 11:22 p.m. UTC | #12
On Thu, Oct 29, 2015 at 04:37:35PM +0100, Roberto Fichera wrote:

> Regarding my SSI problem, I was able to keep the DMA working for few second once before
> it get stopped and never retriggered. Currently I've 2 DMA channel one for TX and another for rx

DMA only stops when terminate_all() is executed, or when the FIFO
doesn't reach the watermark so that no new DMA request is issued.

> I've changed my DTS and update my fsl_ssi to handle new clocks, I guess only the CLK_SPBA
> has improved my situation. I've also tried to enable both RIE and TIE to service the ISR, with

Guessing? It'd be weird for SPBA to ease the issue here, as I was told
by the IC team that SSI and SAI in SoloX don't require the SPBA clock, IIRC.

> and without SSI DMA support, but this end with a full system freeze. The ISR was never changed
> in my fsl_ssi.c.

You mentioned that the clock status from the codec chip shows the bit
clock stopping, but now it's related to DMA? I think you should first
figure out where the problem lies, as Caleb's problem is different from yours.

As I mentioned, you may need to confirm whether the bit clock generation
has stopped. DMA surely won't work once the bit clock stops, as the SSI
may no longer consume the data FIFO, so the watermark would never be
reached again.

Nicolin

Patch

diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
index 73778c2..b834f77 100644
--- a/sound/soc/fsl/fsl_ssi.c
+++ b/sound/soc/fsl/fsl_ssi.c
@@ -943,7 +943,7 @@  static int _fsl_ssi_set_dai_fmt(struct device *dev,
         * size.
         */
        if (ssi_private->use_dma)
-               wm = ssi_private->fifo_depth - 2;
+               wm = 8;
        else
                wm = ssi_private->fifo_depth;

@@ -1260,8 +1260,8 @@  static int fsl_ssi_imx_probe(struct platform_device *pdev,
         * We have burstsize be "fifo_depth - 2" to match the SSI
         * watermark setting in fsl_ssi_startup().
         */
-       ssi_private->dma_params_tx.maxburst = ssi_private->fifo_depth - 2;
-       ssi_private->dma_params_rx.maxburst = ssi_private->fifo_depth - 2;
+       ssi_private->dma_params_tx.maxburst = 8;
+       ssi_private->dma_params_rx.maxburst = 8;
        ssi_private->dma_params_tx.addr = ssi_private->ssi_phys + CCSR_SSI_STX0;
        ssi_private->dma_params_rx.addr = ssi_private->ssi_phys + CCSR_SSI_SRX0;