Message ID | 20180529174621.50a9001a@bbrezillon (mailing list archive)
---|---
State | New, archived
On Tue, 29 May 2018 17:46:21 +0200
Boris Brezillon <boris.brezillon@bootlin.com> wrote:

> On Tue, 29 May 2018 18:21:40 +0300
> Eugen Hristev <eugen.hristev@microchip.com> wrote:
>
> > On 29.05.2018 18:15, Boris Brezillon wrote:
> > > On Tue, 29 May 2018 18:01:40 +0300
> > > Eugen Hristev <eugen.hristev@microchip.com> wrote:
> > >
> > >> [...]
> > >>
> > >>> I think you're missing something here. We use the DMA engine in memcpy
> > >>> mode (SRAM -> DRAM), not in device mode (dev -> DRAM or DRAM -> dev).
> > >>> So there's no dmas prop defined in the DT and there should not be.
> > >>>
> > >>> Regards,
> > >>>
> > >>> Boris
> > >>
> > >> Ok, so the memcpy SRAM <-> DRAM will hog the transfer between DRAM and
> > >> LCD if my understanding is correct. That's the DMA that Peter wants to
> > >> disable with his patch?
> > >>
> > >> Then we can try to force the NFC SRAM DMA channels to use just DDR
> > >> port 1 or 2 for memcpy?
> > >
> > > You mean the dmaengine? According to "14.1.3 Master to Slave Access",
> > > that's already the case.
> > >
> > > Only DMAC0 can access the NFC SRAM, and it does so through DMAC0:IF0;
> > > access to the DDR then goes through DDR port 1 (DMAC0:IF1) or port 2
> > > (DMAC0:IF0).
> >
> > If we can make the NFC use port 1 only, then the HLCDC could have two
> > ports as masters 8 & 9, and maybe better bandwidth.
>
> Peter, can you try the following patch?

Actually, that won't work, because all SRAMs are on IF0, and here we use
DMA memcpy to copy things from/to the SRAM to/from the DRAM. I have no
simple solution to force usage of IF1 when accessing the DRAM, but I'm
also not sure that would solve Peter's problem, since forcing the LCDC
to use DDR port 3 did not make things better.

> --->8---
> diff --git a/drivers/dma/at_hdmac_regs.h b/drivers/dma/at_hdmac_regs.h
> index ef3f227ce3e6..2a48e870f292 100644
> --- a/drivers/dma/at_hdmac_regs.h
> +++ b/drivers/dma/at_hdmac_regs.h
> @@ -124,8 +124,8 @@
>  #define ATC_SIF(i)	(0x3 & (i))	/* Src tx done via AHB-Lite Interface i */
>  #define ATC_DIF(i)	((0x3 & (i)) << 4)	/* Dst tx done via AHB-Lite Interface i */
>  /* Specify AHB interfaces */
> -#define AT_DMA_MEM_IF	0	/* interface 0 as memory interface */
> -#define AT_DMA_PER_IF	1	/* interface 1 as peripheral interface */
> +#define AT_DMA_MEM_IF	1	/* interface 1 as memory interface */
> +#define AT_DMA_PER_IF	0	/* interface 0 as peripheral interface */
>
>  #define ATC_SRC_PIP	(0x1 << 8)	/* Source Picture-in-Picture enabled */
>  #define ATC_DST_PIP	(0x1 << 12)	/* Destination Picture-in-Picture enabled */
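A minimal stand-alone illustration of why the define swap cannot split an
SRAM <-> DRAM copy across interfaces, assuming (as Boris describes above)
that the memcpy path programs both the source and destination interface
from the single AT_DMA_MEM_IF define. Only the macros visible in the
patch are real; the surrounding program is a sketch, not driver code:

#include <stdio.h>
#include <stdint.h>

/* Macros copied from drivers/dma/at_hdmac_regs.h, with the patched value */
#define ATC_SIF(i)	(0x3 & (i))		/* Src tx done via AHB-Lite Interface i */
#define ATC_DIF(i)	((0x3 & (i)) << 4)	/* Dst tx done via AHB-Lite Interface i */
#define AT_DMA_MEM_IF	1			/* patched: IF1 as memory interface */

int main(void)
{
	/*
	 * A mem-to-mem descriptor takes both ends from AT_DMA_MEM_IF, so
	 * the patch moves both SIF and DIF to IF1. But the NFC SRAM is
	 * only reachable through DMAC0:IF0, so an SRAM <-> DRAM copy
	 * cannot be split across IF0 and IF1 by one global define.
	 */
	uint32_t ctrlb = ATC_SIF(AT_DMA_MEM_IF) | ATC_DIF(AT_DMA_MEM_IF);

	printf("SIF = IF%u, DIF = IF%u\n",
	       (unsigned)(ctrlb & 0x3), (unsigned)((ctrlb >> 4) & 0x3));
	return 0;
}

Compiled and run, this prints "SIF = IF1, DIF = IF1": both ends of the
copy land on IF1, while the NFC SRAM sits behind IF0.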
Hi again!

I have spent some hours bringing out the old hardware with the 1024x768
panel from underneath the usual piles of junk and layers of dust... and
tried the current kernel on that one. And the display was stable even
when stressing with lots of NAND accesses. *boggle*

Then I remembered that I had lowered the pixel clock from 71.1 MHz to
65 MHz (and reduced the vertical blanking to maintain the refresh rate).
I didn't notice that this fixed the NAND interference, probably because
I ran NAND without DMA at the time? Anyway, if I reset the pixel clock
to 71.1 MHz (without increasing the vertical blanking, just to be nasty)
I can get the artifacts easily. But running with a pixel clock of 65 MHz
is not a problem at all, so we can consider NAND DMA with that panel
solved.

However, now we know that this setup needs relatively little to start
working, and that might be good if we want to see whether other changes
have any effect. I will look into that tomorrow. And we can also get a
grip on the critical bandwidth. But first, answers to some random
questions...

On 2018-05-29 09:25, Eugen Hristev wrote:
> One more thing: what are the actual nand commands which you use when
> you get the glitches? read/write/erase ... ?

Erase seems to be the least sensitive; read and write are worse (and
similar) according to my unscientific observations.

> What happens if you try to minimize the nand access? you also said at
> some point that only *some* nand accesses cause glitches.

These systems will normally not access the NAND, but the displays look
like total crap when it happens. It can happen even when sync()ing small
files, but doesn't happen for every little file. Writing or reading a
large file to/from the NAND invariably triggers the issue.

> Another thing: even if the LCD displays a still image, the DMA still
> feeds data to the LCD, right?

Absolutely. But since we are not playing some large video file (which
could have been stored on the NAND) we typically don't see the problem.
It only turns up in special circumstances. But these circumstances can't
be avoided, and the display looks so freaking ugly when it happens...

On 2018-05-29 17:01, Eugen Hristev wrote:
> Then we can try to force the NFC SRAM DMA channels to use just DDR
> port 1 or 2 for memcpy?

I *think* my "horrid" patch does that. Specifically this line:

+	desc->txd.phys = (desc->txd.phys & ~3) | 1;

On 2018-05-28 18:09, Nicolas Ferre wrote:
> Can you try to make all that you can to maximize the blanking period
> of your screen (some are more tolerant than others according to that).
> By doing so, you would allow the LCD FIFO to recover better after each
> line. You might lose some columns on the side of your display but it
> would give us a good idea of how far we are from getting rid of those
> annoying LCD reset glitches (that are due to underruns on LCD FIFO).

I noticed that the 1024x768 panel is using 24bpp, not 16bpp as I stated
previously. Also, the horizontal blanking is 320 pixels, so a total of
1024+320=1344 pixels/row and a pixel clock of 71.1 MHz yields
18.9 us/row. The needed data during that time is 1024*24 bits, so
1.30 Gbit/s. For the 65 MHz pixel clock, I get 1.19 Gbit/s. Assuming, of
course, that the pixel clock is actually what was requested... What is
the granularity of the pixel clock anyway?

For the bigger 1920x1080 panel, I have a horizontal blanking of 200
pixels and a pixel clock of 144 MHz, so 14.7 us/row -> 2.09 Gbit/s.
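A quick stand-alone check of these numbers (the helper name is made up;
the panel parameters are the ones quoted above):

#include <stdio.h>

/*
 * One row of pixel data (hactive * bpp bits) has to arrive within one
 * row period ((hactive + hblank) / pixclk), or the LCD FIFO underruns.
 */
static void row_bandwidth(const char *name, double hactive, double hblank,
			  double bpp, double pixclk_mhz)
{
	double row_us = (hactive + hblank) / pixclk_mhz; /* us per row */
	double gbps = hactive * bpp / (row_us * 1000.0); /* Gbit/s */

	printf("%-26s %5.1f us/row  %.2f Gbit/s\n", name, row_us, gbps);
}

int main(void)
{
	row_bandwidth("1024x768@24bpp, 71.1 MHz", 1024, 320, 24, 71.1);
	row_bandwidth("1024x768@24bpp, 65 MHz", 1024, 320, 24, 65.0);
	row_bandwidth("1920x1080@16bpp, 144 MHz", 1920, 200, 16, 144.0);
	return 0;
}

It prints 18.9 us/row at 1.30 Gbit/s, 20.7 us/row at 1.19 Gbit/s, and
14.7 us/row at 2.09 Gbit/s, matching the figures above.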
I suspect that no amount of fiddling with blanking is going to get that
anywhere near the needed ~1.25 Gbit/s. Besides, the specs of the panel
say that the maximum horizontal blanking time is 280 pixels. It seems
futile to even try, since the horizontal blanking time is so much
shorter for the larger panel (fewer and faster pixels), and the longer
time wasn't enough for the smaller panel to catch up. But ok, in
combination with something else it might be just enough. Will try
tomorrow...

On 2018-05-28 18:09, Boris Brezillon wrote:
> On Mon, 28 May 2018 17:52:53 +0200 Peter Rosin <peda@axentia.se> wrote:
>> The panels we are using only support one resolution (each), but the
>> issue is there with both 1920x1080@16bpp and 1024x768@8bpp (~60Hz).
>
> Duh! This adds to the weirdness of this issue. I'd thought that by
> dividing the required bandwidth by 2 you would get a reliable setup.

I think I might have misremembered seeing the issue with 1024x768@8bpp.
Sorry. But it *is* there for (the old variant of) 1024x768@24bpp, and
that is still only 60% or so of the bandwidth compared to
1920x1080@16bpp.

Cheers,
Peter
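Plugging the panel's stated maximum blanking into the helper from the
sketch above, row_bandwidth("1920x1080@16bpp, max blank", 1920, 280, 16,
144.0) gives about 15.3 us/row and 2.01 Gbit/s, still roughly 60% above
the ~1.25 Gbit/s that was already marginal for the smaller panel, which
backs up the futility estimate.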
diff --git a/drivers/dma/at_hdmac_regs.h b/drivers/dma/at_hdmac_regs.h
index ef3f227ce3e6..2a48e870f292 100644
--- a/drivers/dma/at_hdmac_regs.h
+++ b/drivers/dma/at_hdmac_regs.h
@@ -124,8 +124,8 @@
 #define ATC_SIF(i)	(0x3 & (i))	/* Src tx done via AHB-Lite Interface i */
 #define ATC_DIF(i)	((0x3 & (i)) << 4)	/* Dst tx done via AHB-Lite Interface i */
 /* Specify AHB interfaces */
-#define AT_DMA_MEM_IF	0	/* interface 0 as memory interface */
-#define AT_DMA_PER_IF	1	/* interface 1 as peripheral interface */
+#define AT_DMA_MEM_IF	1	/* interface 1 as memory interface */
+#define AT_DMA_PER_IF	0	/* interface 0 as peripheral interface */

 #define ATC_SRC_PIP	(0x1 << 8)	/* Source Picture-in-Picture enabled */
 #define ATC_DST_PIP	(0x1 << 12)	/* Destination Picture-in-Picture enabled */
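What Boris describes as lacking a simple solution would amount to
per-descriptor interface selection instead of the single global define.
A hypothetical sketch of that idea (not code from the thread), built
from the ATC_SIF/ATC_DIF macros in the patch, for the SRAM -> DRAM
direction on DMAC0:

#include <stdio.h>
#include <stdint.h>

#define ATC_SIF(i)	(0x3 & (i))		/* Src tx done via AHB-Lite Interface i */
#define ATC_DIF(i)	((0x3 & (i)) << 4)	/* Dst tx done via AHB-Lite Interface i */

int main(void)
{
	/*
	 * Hypothetical per-direction selection: fetch the source via IF0
	 * (the only interface that reaches the NFC SRAM, per "14.1.3
	 * Master to Slave Access") and write the destination via IF1
	 * (DDR port 1), instead of taking both from AT_DMA_MEM_IF.
	 */
	uint32_t ctrlb = ATC_SIF(0) |	/* source: NFC SRAM via DMAC0:IF0 */
			 ATC_DIF(1);	/* destination: DRAM via DMAC0:IF1 */

	printf("SIF = IF%u, DIF = IF%u\n",
	       (unsigned)(ctrlb & 0x3), (unsigned)((ctrlb >> 4) & 0x3));
	return 0;
}

This prints "SIF = IF0, DIF = IF1", i.e. the split that the global
define swap cannot express.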