Message ID: 1487327904-28311-5-git-send-email-fisaksen@baylibre.com (mailing list archive)
State: New, archived
On Friday 17 February 2017 04:08 PM, Frode Isaksen wrote:
> This CPU has VIVT caches, so we need to flush the cache
> for vmalloc'ed buffers, since the address may be aliased
> (the same physical address is used by multiple virtual addresses).
> This fixes errors when mounting and reading/writing UBIFS
> volumes with SPI NOR flash.
>
> Signed-off-by: Frode Isaksen <fisaksen@baylibre.com>
> ---
>  drivers/spi/spi-davinci.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
>
[...]
> @@ -650,6 +651,10 @@ static int davinci_spi_bufs(struct spi_device *spi, struct spi_transfer *t)
>  		dmaengine_slave_config(dspi->dma_rx, &dma_rx_conf);
>  		dmaengine_slave_config(dspi->dma_tx, &dma_tx_conf);
>
> +		if (is_vmalloc_addr(t->rx_buf))
> +			/* VIVT cache: flush since addr. may be aliased */
> +			flush_kernel_vmap_range((void *)t->rx_buf, t->len);
> +
>  		rxdesc = dmaengine_prep_slave_sg(dspi->dma_rx,
>  				t->rx_sg.sgl, t->rx_sg.nents, DMA_DEV_TO_MEM,
>  				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
> @@ -660,7 +665,9 @@ static int davinci_spi_bufs(struct spi_device *spi, struct spi_transfer *t)
>  			/* use rx buffer as dummy tx buffer */
>  			t->tx_sg.sgl = t->rx_sg.sgl;
>  			t->tx_sg.nents = t->rx_sg.nents;
> -		}
> +		} else if (is_vmalloc_addr(t->tx_buf))
> +			/* VIVT cache: flush since addr. may be aliased */
> +			flush_kernel_vmap_range((void *)t->tx_buf, t->len);

SPI core calls dma_unmap_sg(), and that is supposed to flush caches.
If the flush_kernel_vmap_range() call is required here to flush the
actual cache lines, then what do the dma_unmap_* calls in the SPI core
end up flushing? Will they flush a different alias of the same physical
address? If so, isn't that undesired?

[...]
On Fri, Feb 17, 2017 at 04:27:28PM +0530, Vignesh R wrote:
> On Friday 17 February 2017 04:08 PM, Frode Isaksen wrote:
[...]
> SPI core calls dma_unmap_sg(), and that is supposed to flush caches.
> If the flush_kernel_vmap_range() call is required here to flush the
> actual cache lines, then what do the dma_unmap_* calls in the SPI core
> end up flushing?

The DMA API deals with the _kernel_ lowmem mapping. It has no knowledge
of any other aliases in the system. When you have a VIVT cache (as all
old ARM CPUs have), if you access the memory through an alias other
than the kernel lowmem mapping (iow, vmalloc), the DMA API can't help
you.

However, the correct place to use flush_kernel_vmap_range() etc. is not
in drivers - it's supposed to be done in the callers that know that
the memory is aliased.

For full details on these flushing functions, see cachetlb.txt. This
does not remove the requirement to also use the DMA API.

=== cachetlb.txt ===

The final category of APIs is for I/O to deliberately aliased address
ranges inside the kernel. Such aliases are set up by use of the
vmap/vmalloc API. Since kernel I/O goes via physical pages, the I/O
subsystem assumes that the user mapping and kernel offset mapping are
the only aliases. This isn't true for vmap aliases, so anything in
the kernel trying to do I/O to vmap areas must manually manage
coherency. It must do this by flushing the vmap range before doing
I/O and invalidating it after the I/O returns.

  void flush_kernel_vmap_range(void *vaddr, int size)
       flushes the kernel cache for a given virtual address range in
       the vmap area. This is to make sure that any data the kernel
       modified in the vmap range is made visible to the physical
       page. The design is to make this area safe to perform I/O on.
       Note that this API does *not* also flush the offset map alias
       of the area.

  void invalidate_kernel_vmap_range(void *vaddr, int size)
       invalidates the cache for a given virtual address range in
       the vmap area which prevents the processor from making the
       cache stale by speculatively reading data while the I/O was
       occurring to the physical pages. This is only necessary for
       data reads into the vmap area.
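[To make the cachetlb.txt rule above concrete, here is a minimal sketch
of the flush-before-I/O / invalidate-after-I/O bracketing for a possibly
vmalloc'ed buffer. This is an editor's illustration, not code from the
thread; do_io_to_phys_pages() is a hypothetical stand-in for whatever
actually moves the data to/from the physical pages.]

#include <linux/mm.h>		/* is_vmalloc_addr() */
#include <asm/cacheflush.h>	/* flush/invalidate_kernel_vmap_range() */

/* Hypothetical transfer routine operating on the physical pages. */
extern int do_io_to_phys_pages(void *buf, int len);

static int io_on_possibly_vmapped_buf(void *buf, int len)
{
	int ret;

	if (is_vmalloc_addr(buf))
		/* Write back dirty lines of the vmap alias before I/O. */
		flush_kernel_vmap_range(buf, len);

	ret = do_io_to_phys_pages(buf, len);

	if (is_vmalloc_addr(buf))
		/* Discard lines speculatively fetched during the I/O;
		 * only needed when the device wrote to the pages. */
		invalidate_kernel_vmap_range(buf, len);

	return ret;
}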
On 17/02/2017 12:22, Russell King - ARM Linux wrote:
> On Fri, Feb 17, 2017 at 04:27:28PM +0530, Vignesh R wrote:
[...]
> The DMA API deals with the _kernel_ lowmem mapping. It has no knowledge
> of any other aliases in the system. When you have a VIVT cache (as all
> old ARM CPUs have), if you access the memory through an alias other
> than the kernel lowmem mapping (iow, vmalloc), the DMA API can't help
> you.
>
> However, the correct place to use flush_kernel_vmap_range() etc. is not
> in drivers - it's supposed to be done in the callers that know that
> the memory is aliased.

OK, so this should be done in the ubifs layer instead? xfs already does
this, but no other filesystem does.

Thanks,
Frode

[...]
On Fri, Feb 17, 2017 at 12:36:17PM +0100, Frode Isaksen wrote:
> On 17/02/2017 12:22, Russell King - ARM Linux wrote:
> > The DMA API deals with the _kernel_ lowmem mapping. It has no
> > knowledge of any other aliases in the system. When you have a VIVT
> > cache (as all old ARM CPUs have), if you access the memory through
> > an alias other than the kernel lowmem mapping (iow, vmalloc), the
> > DMA API can't help you.
> >
> > However, the correct place to use flush_kernel_vmap_range() etc. is
> > not in drivers - it's supposed to be done in the callers that know
> > that the memory is aliased.
>
> OK, so this should be done in the ubifs layer instead? xfs already does
> this, but no other filesystem does.

These APIs were created when XFS was being used on older ARMs and people
experienced corruption. XFS was the only filesystem driver which wanted
to do this (horrid, imho) DMA to memory that it accessed via a vmalloc
area mapping.

If ubifs is also doing this, it has followed XFS down the same route,
but ignored the need for additional flushing.

The down-side to adding this at the filesystem layer is that you get the
impact whether or not the driver does DMA. However, for XFS that's
absolutely necessary, as block devices will transfer to the kernel
lowmem mapping, which itself will alias with the vmalloc area mapping.

SPI is rather another special case - rather than SPI following the
established mechanism of passing data references via scatterlists or
similar, it also passes them via virtual addresses, which means SPI
can directly access the vmalloc area when performing PIO. This really
makes the problem more complex, because it means that if you do have a
SPI driver that does that, it's going to be reading/writing directly
from vmalloc space.

That's not a problem as long as the data is only accessed via vmalloc
space, but it will definitely go totally wrong if the data is
subsequently mapped into userspace.

The other important thing to realise is that the interfaces in
cachetlb.txt assume that it's the lowmem mapping that will be accessed,
and the IO device will push that data out to physical memory (either via
the DMA API or flush_kernel_dcache_page()). That's not true of SPI, as
it passes virtual addresses around.

So... overall, I'm not sure that this problem is properly solvable given
SPI's insistence on passing virtual addresses and the differences in
this area between SPI and block.

What I'm quite sure about is that adding yet more cache flushing
interfaces for legacy cache types really isn't the way forward.
On 17/02/2017 13:07, Russell King - ARM Linux wrote:
> On Fri, Feb 17, 2017 at 12:36:17PM +0100, Frode Isaksen wrote:
[...]
> SPI is rather another special case - rather than SPI following the
> established mechanism of passing data references via scatterlists or
> similar, it also passes them via virtual addresses, which means SPI
> can directly access the vmalloc area when performing PIO.
[...]
> That's not a problem as long as the data is only accessed via vmalloc
> space, but it will definitely go totally wrong if the data is
> subsequently mapped into userspace.

Thanks a lot for explaining... It often (2 times out of 5) goes wrong
within this simple function, called when mounting the UBIFS filesystem
without, a priori, any userspace interaction:

fs/ubifs/lpt.c:

static int read_ltab(struct ubifs_info *c)
{
	int err;
	void *buf;
	int retry = 1;

	buf = vmalloc(c->ltab_sz);
	if (!buf)
		return -ENOMEM;
	err = ubifs_leb_read(c, c->ltab_lnum, buf, c->ltab_offs,
			     c->ltab_sz, 1);
	if (err)
		goto out;
retry:
	err = unpack_ltab(c, buf);
	if (err && retry--) {
		pr_err("%s: flush cache %p[%d] and retry\n",
		       __func__, buf, c->ltab_sz);
		invalidate_kernel_vmap_range(buf, c->ltab_sz);
		goto retry;
	}
out:
	vfree(buf);
	return err;
}

The retry code is added by me, and with it there is no error when
retrying after invalidating the cache. As you can see, the vmalloc'ed
buffer is allocated and freed within this function.

read_ltab: flush cache c9543000[11] and retry
Ok after this !!

Thanks again,
Frode

> The other important thing to realise is that the interfaces in
> cachetlb.txt assume that it's the lowmem mapping that will be
> accessed, and the IO device will push that data out to physical
> memory (either via the DMA API or flush_kernel_dcache_page()).
> That's not true of SPI, as it passes virtual addresses around.
[...]
> What I'm quite sure about is that adding yet more cache flushing
> interfaces for legacy cache types really isn't the way forward.
On Friday 17 February 2017 05:37 PM, Russell King - ARM Linux wrote:
[...]
> SPI is rather another special case - rather than SPI following the
> established mechanism of passing data references via scatterlists or
> similar, it also passes them via virtual addresses, which means SPI
> can directly access the vmalloc area when performing PIO.
[...]
> So... overall, I'm not sure that this problem is properly solvable
> given SPI's insistence on passing virtual addresses and the
> differences in this area between SPI and block.

I am debugging another issue with UBIFS wherein pages allocated by
vmalloc are in the highmem region, not addressable using 32-bit
addresses, and backed by LPAE. So, a 32-bit DMA cannot access these
buffers at all.
When dma_map_sg() is called to map these pages by spi_map_buf(), the
physical address is just truncated to 32 bits in pfn_to_dma() (as part
of the dma_map_sg() call). This results in random crashes as the DMA
starts accessing random memory during SPI reads.

Given the above problem and also the issue surrounding VIVT caches, I am
thinking of maybe using a pre-allocated, fixed-size bounce buffer to
handle buffers that are not in the lowmem mapping.
I have tried using a 64KB pre-allocated buffer on a TI DRA74 EVM with
QSPI running at 76.8MHz and do not see any significant degradation in
performance with UBIFS, mainly because UBIFS seems to use vmalloc'd
buffers only during the initial preparing and mounting phase and not
during file reads/writes.

So, is it an acceptable solution to modify spi_map_buf() to use a bounce
buffer when the buffer does not belong to the lowmem region?
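[A minimal sketch of the bounce-buffer idea proposed here, assuming a
pre-allocated kmalloc'ed buffer; all names are hypothetical and the real
change would sit inside spi_map_buf(). For writes the data is copied
into the bounce buffer before DMA mapping; for reads it would be copied
back out of the bounce buffer after the transfer completes.]

#include <linux/mm.h>		/* is_vmalloc_addr() */
#include <linux/string.h>	/* memcpy() */

/* Hypothetical: route a TX buffer through a pre-allocated, DMA-safe
 * bounce buffer when it lives in vmalloc space (possibly highmem). */
static const void *spi_bounce_tx_buf(void *bounce_buf, size_t bounce_len,
				     const void *tx_buf, size_t len)
{
	if (!is_vmalloc_addr(tx_buf) || len > bounce_len)
		return tx_buf;	/* lowmem, or too large: map directly */

	memcpy(bounce_buf, tx_buf, len);	/* bounce_buf is kmalloc'ed */
	return bounce_buf;
}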
On 20/02/2017 07:55, Vignesh R wrote:
> On Friday 17 February 2017 05:37 PM, Russell King - ARM Linux wrote:
[...]
> I am debugging another issue with UBIFS wherein pages allocated by
> vmalloc are in the highmem region, not addressable using 32-bit
> addresses, and backed by LPAE. So, a 32-bit DMA cannot access these
> buffers at all.
[...]
> Given the above problem and also the issue surrounding VIVT caches, I
> am thinking of maybe using a pre-allocated, fixed-size bounce buffer
> to handle buffers that are not in the lowmem mapping.
[...]

I am seeing a bug caused by the VIVT cache in the 'read_ltab()'
function. In this function, the vmalloc'ed buffer is of size 11 bytes.
Isn't it better to use kmalloc in this case?

Thanks,
Frode

> So, is it an acceptable solution to modify spi_map_buf() to use a
> bounce buffer when the buffer does not belong to the lowmem region?
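[For reference, a sketch of what the suggested change could look like:
the vmalloc()/vfree() pair in read_ltab(), quoted earlier in the thread,
swapped for kmalloc()/kfree(), with the retry instrumentation dropped.
Whether this is acceptable in general depends on how large c->ltab_sz
can grow.]

static int read_ltab(struct ubifs_info *c)
{
	void *buf;
	int err;

	/* ltab_sz is tiny here (11 bytes in the failing case), so a
	 * kmalloc'ed, lowmem-mapped buffer avoids the vmap alias
	 * entirely. */
	buf = kmalloc(c->ltab_sz, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	err = ubifs_leb_read(c, c->ltab_lnum, buf, c->ltab_offs,
			     c->ltab_sz, 1);
	if (!err)
		err = unpack_ltab(c, buf);

	kfree(buf);
	return err;
}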
On Monday 20 February 2017 02:56 PM, Frode Isaksen wrote:
> On 20/02/2017 07:55, Vignesh R wrote:
[...]
> I am seeing a bug caused by the VIVT cache in the 'read_ltab()'
> function. In this function, the vmalloc'ed buffer is of size 11 bytes.
> Isn't it better to use kmalloc in this case?

read_ltab() isn't the only place where vmalloc() is used. A quick grep
for vmalloc on fs/ubifs/ shows about ~19 occurrences. I guess every
vmalloc() call can potentially allocate memory from highmem and might
potentially cause issues for VIVT and similar aliasing caches.
Fixing just one such case isn't going to help, IMHO.
On 20/02/2017 10:47, Vignesh R wrote:
> On Monday 20 February 2017 02:56 PM, Frode Isaksen wrote:
[...]
> read_ltab() isn't the only place where vmalloc() is used. A quick grep
> for vmalloc on fs/ubifs/ shows about ~19 occurrences. I guess every
> vmalloc() call can potentially allocate memory from highmem and might
> potentially cause issues for VIVT and similar aliasing caches.
> Fixing just one such case isn't going to help, IMHO.

Of course, fixing it in only one place is not going to help. For the
moment there are 3 solutions to the UBIFS DMA problem:
1) Always use a bounce buffer for vmalloc'ed buffers - impacts everyone.
2) Remove the use of vmalloc'ed buffers in UBIFS - is that possible?
3) Flush the cache in UBIFS when reading/writing vmalloc'ed buffers -
   does not fix the highmem case.
I will try to see if option 2) is possible.

Thanks,
Frode
On Monday 20 February 2017 04:04 PM, Frode Isaksen wrote:
> On 20/02/2017 10:47, Vignesh R wrote:
[...]
> Of course, fixing it in only one place is not going to help. For the
> moment there are 3 solutions to the UBIFS DMA problem:
> 1) Always use a bounce buffer for vmalloc'ed buffers - impacts everyone.
> 2) Remove the use of vmalloc'ed buffers in UBIFS - is that possible?

Maybe. But what about mtdblock with JFFS2 on top of it? mtdblock still
uses vmalloc'd buffers.
On 20/02/2017 12:27, Vignesh R wrote:
> On Monday 20 February 2017 04:04 PM, Frode Isaksen wrote:
[...]
>> Of course, fixing it in only one place is not going to help. For the
>> moment there are 3 solutions to the UBIFS DMA problem:
>> 1) Always use a bounce buffer for vmalloc'ed buffers - impacts everyone.
>> 2) Remove the use of vmalloc'ed buffers in UBIFS - is that possible?
> Maybe. But what about mtdblock with JFFS2 on top of it? mtdblock still
> uses vmalloc'd buffers.

That's why I put the cache flush in the SPI driver in the first
attempt..
FYI, I replaced the use of vmalloc with kmalloc in a few places in the
UBIFS layer and it seems to work. The size of the buffers allocated
varies from 11 (?) to 65408 bytes.
Maybe the best place to handle this is in the SPI flash driver(s). These
drivers know that they may handle vmalloc'ed buffers, and they should
assume that the underlying SPI driver may use DMA.

Thanks,
Frode
On Monday 20 February 2017 09:19 PM, Frode Isaksen wrote:
> On 20/02/2017 12:27, Vignesh R wrote:
[...]
> That's why I put the cache flush in the SPI driver in the first
> attempt..
> FYI, I replaced the use of vmalloc with kmalloc in a few places in the
> UBIFS layer and it seems to work. The size of the buffers allocated
> varies from 11 (?) to 65408 bytes.

FYI, here is the discussion on DMA + UBIFS:
http://lists.infradead.org/pipermail/linux-mtd/2016-October/069856.html

> Maybe the best place to handle this is in the SPI flash driver(s).
> These drivers know that they may handle vmalloc'ed buffers, and they
> should assume that the underlying SPI driver may use DMA.

I guess so; the NAND core already seems to use a bounce buffer for
vmalloc'd addresses.
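[The "should we bounce?" decision referenced here boils down to a
predicate like the following sketch. This is an editor's illustration,
not code from the NAND core; the helper name is hypothetical, and
virt_addr_valid() is the usual test for a directly mapped lowmem
address.]

#include <linux/mm.h>	/* is_vmalloc_addr(), virt_addr_valid() */

/* Sketch: should a caller-supplied buffer be copied through a bounce
 * buffer before DMA? True for vmalloc/vmap addresses (no lowmem alias
 * the DMA API can maintain) and for anything else outside lowmem. */
static bool spi_buf_needs_bounce(const void *buf)
{
	return is_vmalloc_addr(buf) || !virt_addr_valid(buf);
}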
diff --git a/drivers/spi/spi-davinci.c b/drivers/spi/spi-davinci.c
index 2632ae0..b69a370 100644
--- a/drivers/spi/spi-davinci.c
+++ b/drivers/spi/spi-davinci.c
@@ -29,6 +29,7 @@
 #include <linux/spi/spi.h>
 #include <linux/spi/spi_bitbang.h>
 #include <linux/slab.h>
+#include <asm/cacheflush.h>
 
 #include <linux/platform_data/spi-davinci.h>
 
@@ -650,6 +651,10 @@ static int davinci_spi_bufs(struct spi_device *spi, struct spi_transfer *t)
 		dmaengine_slave_config(dspi->dma_rx, &dma_rx_conf);
 		dmaengine_slave_config(dspi->dma_tx, &dma_tx_conf);
 
+		if (is_vmalloc_addr(t->rx_buf))
+			/* VIVT cache: flush since addr. may be aliased */
+			flush_kernel_vmap_range((void *)t->rx_buf, t->len);
+
 		rxdesc = dmaengine_prep_slave_sg(dspi->dma_rx,
 				t->rx_sg.sgl, t->rx_sg.nents, DMA_DEV_TO_MEM,
 				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
@@ -660,7 +665,9 @@ static int davinci_spi_bufs(struct spi_device *spi, struct spi_transfer *t)
 			/* use rx buffer as dummy tx buffer */
 			t->tx_sg.sgl = t->rx_sg.sgl;
 			t->tx_sg.nents = t->rx_sg.nents;
-		}
+		} else if (is_vmalloc_addr(t->tx_buf))
+			/* VIVT cache: flush since addr. may be aliased */
+			flush_kernel_vmap_range((void *)t->tx_buf, t->len);
 
 		txdesc = dmaengine_prep_slave_sg(dspi->dma_tx,
 				t->tx_sg.sgl, t->tx_sg.nents, DMA_MEM_TO_DEV,
@@ -699,8 +706,12 @@ static int davinci_spi_bufs(struct spi_device *spi, struct spi_transfer *t)
 	}
 
 	clear_io_bits(dspi->base + SPIINT, SPIINT_MASKALL);
-	if (spicfg->io_type == SPI_IO_TYPE_DMA)
+	if (spicfg->io_type == SPI_IO_TYPE_DMA) {
 		clear_io_bits(dspi->base + SPIINT, SPIINT_DMA_REQ_EN);
+		if (is_vmalloc_addr(t->rx_buf))
+			/* VIVT cache: invalidate since addr. may be aliased */
+			invalidate_kernel_vmap_range((void *)t->rx_buf, t->len);
+	}
 
 	clear_io_bits(dspi->base + SPIGCR1, SPIGCR1_SPIENA_MASK);
 	set_io_bits(dspi->base + SPIGCR1, SPIGCR1_POWERDOWN_MASK);
This CPU has VIVT caches, so we need to flush the cache for vmalloc'ed
buffers, since the address may be aliased (the same physical address is
used by multiple virtual addresses). This fixes errors when mounting and
reading/writing UBIFS volumes with SPI NOR flash.

Signed-off-by: Frode Isaksen <fisaksen@baylibre.com>
---
 drivers/spi/spi-davinci.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)