Message ID | 20230531154836.1366225-17-catalin.marinas@arm.com (mailing list archive) |
---|---|
State | New, archived |
Headers | show |
Series | mm, dma, arm64: Reduce ARCH_KMALLOC_MINALIGN to 8 | expand |
On 5/31/23 17:48, Catalin Marinas wrote:
> If an architecture opted in to DMA bouncing of unaligned kmalloc()
> buffers (ARCH_WANT_KMALLOC_DMA_BOUNCE), reduce the minimum kmalloc()
> cache alignment below cache-line size to ARCH_KMALLOC_MINALIGN.
>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>

Nit below:

> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Robin Murphy <robin.murphy@arm.com>
> ---
>  mm/slab_common.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 7c6475847fdf..fe46459a8b77 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -18,6 +18,7 @@
>  #include <linux/uaccess.h>
>  #include <linux/seq_file.h>
>  #include <linux/dma-mapping.h>
> +#include <linux/swiotlb.h>
>  #include <linux/proc_fs.h>
>  #include <linux/debugfs.h>
>  #include <linux/kasan.h>
> @@ -863,10 +864,19 @@ void __init setup_kmalloc_cache_index_table(void)
>  	}
>  }
>
> +#ifdef CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC
> +static unsigned int __kmalloc_minalign(void)
> +{
> +	if (io_tlb_default_mem.nslabs)
> +		return ARCH_KMALLOC_MINALIGN;
> +	return dma_get_cache_alignment();
> +}
> +#else
>  static unsigned int __kmalloc_minalign(void)
>  {
>  	return dma_get_cache_alignment();
>  }
> +#endif

Should be enough to put the #ifdef around the two lines into a single
implementation of the function?

>  void __init
>  new_kmalloc_cache(int idx, enum kmalloc_cache_type type, slab_flags_t flags)