[v5,15/15] arm64: Enable ARCH_WANT_KMALLOC_DMA_BOUNCE for arm64

Message ID 20230524171904.3967031-16-catalin.marinas@arm.com (mailing list archive)
State New, archived
Series mm, dma, arm64: Reduce ARCH_KMALLOC_MINALIGN to 8

Commit Message

Catalin Marinas May 24, 2023, 5:19 p.m. UTC
With the DMA bouncing of unaligned kmalloc() buffers now in place,
enable it for arm64 to allow the kmalloc-{8,16,32,48,96} caches. In
addition, always create the swiotlb buffer even when the end of RAM is
within the 32-bit physical address range (the swiotlb buffer can still
be disabled on the kernel command line).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/Kconfig   | 1 +
 arch/arm64/mm/init.c | 7 ++++++-
 2 files changed, 7 insertions(+), 1 deletion(-)
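
For reference, the bounce decision this patch enables amounts to the check
sketched below. This is a condensed illustration modelled on the
dma_kmalloc_needs_bounce()/dma_kmalloc_safe() helpers added earlier in the
series; the function name used here is made up for the sketch and details
may differ from the exact v5 hunks.

#include <linux/cache.h>
#include <linux/dma-map-ops.h>
#include <linux/dma-mapping.h>

/* Illustrative only: condenses the series' bounce checks into one helper. */
static inline bool kmalloc_buffer_needs_bounce(struct device *dev, size_t size,
					       enum dma_data_direction dir)
{
	/* Coherent devices and DMA_TO_DEVICE (clean-only maintenance) are safe. */
	if (dev_is_dma_coherent(dev) || dir == DMA_TO_DEVICE)
		return false;

	/* Buffers aligned to the runtime cache line size need no bouncing. */
	if (size >= 2 * ARCH_DMA_MINALIGN ||
	    IS_ALIGNED(size, dma_get_cache_alignment()))
		return false;

	/* Small buffer that may share a cache line: bounce through swiotlb. */
	return true;
}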

Comments

Robin Murphy May 25, 2023, 4:12 p.m. UTC | #1
On 24/05/2023 6:19 pm, Catalin Marinas wrote:
> [...]
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index b1201d25a8a4..af42871431c0 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -120,6 +120,7 @@ config ARM64
>   	select CRC32
>   	select DCACHE_WORD_ACCESS
>   	select DYNAMIC_FTRACE if FUNCTION_TRACER
> +	select DMA_BOUNCE_UNALIGNED_KMALLOC

We may want to give the embedded folks an easier way of turning this 
off, since IIRC one of the reasons for the existing automatic behaviour 
was people not wanting to have to depend on the command line. Things 
with 256MB or so of RAM seem unlikely to get enough memory efficiency 
back from the smaller kmem caches to pay off the SWIOTLB allocation :)

Cheers,
Robin.

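The opt-out Robin suggests could take the shape of a user-visible,
default-on Kconfig option. The sketch below is purely hypothetical and not
part of this series (the series defines DMA_BOUNCE_UNALIGNED_KMALLOC as a
hidden symbol selected by the architecture); the prompt, default and help
text are assumptions.

# Hypothetical illustration only; not the series' actual definition.
config DMA_BOUNCE_UNALIGNED_KMALLOC
	bool "Bounce unaligned kmalloc() buffers through swiotlb"
	depends on SWIOTLB
	default y
	help
	  Allow the kmalloc-{8,16,32,48,96} caches on systems with
	  non-coherent DMA by bouncing small, unaligned buffers through
	  the swiotlb. Say N on memory-constrained systems where the
	  swiotlb reservation costs more than the smaller caches save.
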
Catalin Marinas May 25, 2023, 5:08 p.m. UTC | #2
On Thu, May 25, 2023 at 05:12:37PM +0100, Robin Murphy wrote:
> On 24/05/2023 6:19 pm, Catalin Marinas wrote:
> > [...]
> > +	select DMA_BOUNCE_UNALIGNED_KMALLOC
> 
> We may want to give the embedded folks an easier way of turning this off,
> since IIRC one of the reasons for the existing automatic behaviour was
> people not wanting to have to depend on the command line. Things with 256MB
> or so of RAM seem unlikely to get enough memory efficiency back from the
> smaller kmem caches to pay off the SWIOTLB allocation :)

I thought about this initially and that's why I had two options
(ARCH_WANT_* and this one). But we already select SWIOTLB on arm64, so
for the embedded folks the only option is swiotlb=noforce on the
cmdline, which, in turn, limits the kmalloc caches to kmalloc-64 (or
whatever the cache line size is) irrespective of this new select.
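
Concretely, the boot-time opt-out mentioned above looks like this
(illustrative command line; swiotlb=noforce is the only relevant
parameter, the rest is a made-up example):

    console=ttyAMA0 root=/dev/vda swiotlb=noforce

With it, the bounce buffer is never allocated and the kmalloc minimum
alignment stays at the cache line size, so the smaller caches are not
created.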

Patch

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b1201d25a8a4..af42871431c0 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -120,6 +120,7 @@ config ARM64
 	select CRC32
 	select DCACHE_WORD_ACCESS
 	select DYNAMIC_FTRACE if FUNCTION_TRACER
+	select DMA_BOUNCE_UNALIGNED_KMALLOC
 	select DMA_DIRECT_REMAP
 	select EDAC_SUPPORT
 	select FRAME_POINTER
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 66e70ca47680..3ac2e9d79ce4 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -442,7 +442,12 @@ void __init bootmem_init(void)
  */
 void __init mem_init(void)
 {
-	swiotlb_init(max_pfn > PFN_DOWN(arm64_dma_phys_limit), SWIOTLB_VERBOSE);
+	bool swiotlb = max_pfn > PFN_DOWN(arm64_dma_phys_limit);
+
+	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC))
+		swiotlb = true;
+
+	swiotlb_init(swiotlb, SWIOTLB_VERBOSE);
 
 	/* this will put all unused low memory onto the freelists */
 	memblock_free_all();