arm64: swiotlb: Reduce the default size if no ZONE_DMA bouncing needed

Message ID 20231006153252.3162299-1-catalin.marinas@arm.com (mailing list archive)
State New, archived
Series arm64: swiotlb: Reduce the default size if no ZONE_DMA bouncing needed

Commit Message

Catalin Marinas Oct. 6, 2023, 3:32 p.m. UTC
With CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC enabled, the arm64 kernel still
allocates the default SWIOTLB buffer (64MB) even if ZONE_DMA is disabled
or all the RAM fits into this zone. However, this potentially wastes a
non-negligible amount of memory on platforms with little RAM.

Reduce the SWIOTLB size to 1MB per 1GB of RAM if only needed for
kmalloc() buffer bouncing.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Suggested-by: Ross Burton <ross.burton@arm.com>
Cc: Ross Burton <ross.burton@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/mm/init.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)
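
For reference, a minimal userspace sketch of the sizing rule described in the commit message: round the total RAM up to a whole gigabyte, then take 1/1024 of it, i.e. 1MB of SWIOTLB per 1GB of RAM. This is not kernel code; the ALIGN() and SZ_1G macros are re-implemented for a standalone build, and the RAM figures are made-up examples standing in for memblock_phys_mem_size().

#include <stdio.h>

#define SZ_1G		(1UL << 30)
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	/* Hypothetical total-RAM values standing in for memblock_phys_mem_size(). */
	unsigned long ram[] = { 256UL << 20, 1UL << 30, 3UL << 29, 4UL << 30 };

	for (unsigned int i = 0; i < sizeof(ram) / sizeof(ram[0]); i++) {
		/* 1MB of bounce buffer per (started) 1GB of RAM. */
		unsigned long size = ALIGN(ram[i], SZ_1G) >> 10;

		printf("RAM %4lu MB -> swiotlb %4lu KB\n", ram[i] >> 20, size >> 10);
	}
	return 0;
}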

Comments

Robin Murphy Oct. 6, 2023, 4:25 p.m. UTC | #1
On 2023-10-06 16:32, Catalin Marinas wrote:
> With CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC enabled, the arm64 kernel still
> allocates the default SWIOTLB buffer (64MB) even if ZONE_DMA is disabled
> or all the RAM fits into this zone. However, this potentially wastes a
> non-negligible amount of memory on platforms with little RAM.
> 
> Reduce the SWIOTLB size to 1MB per 1GB of RAM if only needed for
> kmalloc() buffer bouncing.
> 
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Suggested-by: Ross Burton <ross.burton@arm.com>
> Cc: Ross Burton <ross.burton@arm.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>   arch/arm64/mm/init.c | 10 +++++++++-
>   1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 8a0f8604348b..54ee1a4868c2 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -493,8 +493,16 @@ void __init mem_init(void)
>   {
>   	bool swiotlb = max_pfn > PFN_DOWN(arm64_dma_phys_limit);
>   
> -	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC))
> +	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb) {
> +		/*
> +		 * If no bouncing needed for ZONE_DMA, reduce the swiotlb
> +		 * buffer for kmalloc() bouncing to 1MB per 1GB of RAM.
> +		 */
> +		unsigned long size =
> +			ALIGN(memblock_phys_mem_size(), SZ_1G) >> 10;

Hmm, I wondered if DIV_ROUND_UP(memblock_phys_mem_size(), 1024) might be 
any easier, but by the time I've typed it out it's still just as long :)

Either way then,

Reviewed-by: Robin Murphy <robin.murphy@arm.com>

> +		swiotlb_adjust_size(min(swiotlb_size_or_default(), size));
>   		swiotlb = true;
> +	}
>   
>   	swiotlb_init(swiotlb, SWIOTLB_VERBOSE);
>
Catalin Marinas Oct. 6, 2023, 4:45 p.m. UTC | #2
On Fri, Oct 06, 2023 at 05:25:03PM +0100, Robin Murphy wrote:
> On 2023-10-06 16:32, Catalin Marinas wrote:
> > With CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC enabled, the arm64 kernel still
> > allocates the default SWIOTLB buffer (64MB) even if ZONE_DMA is disabled
> > or all the RAM fits into this zone. However, this potentially wastes a
> > non-negligible amount of memory on platforms with little RAM.
> > 
> > Reduce the SWIOTLB size to 1MB per 1GB of RAM if only needed for
> > kmalloc() buffer bouncing.
> > 
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > Suggested-by: Ross Burton <ross.burton@arm.com>
> > Cc: Ross Burton <ross.burton@arm.com>
> > Cc: Robin Murphy <robin.murphy@arm.com>
> > Cc: Will Deacon <will@kernel.org>
> > ---
> >   arch/arm64/mm/init.c | 10 +++++++++-
> >   1 file changed, 9 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> > index 8a0f8604348b..54ee1a4868c2 100644
> > --- a/arch/arm64/mm/init.c
> > +++ b/arch/arm64/mm/init.c
> > @@ -493,8 +493,16 @@ void __init mem_init(void)
> >   {
> >   	bool swiotlb = max_pfn > PFN_DOWN(arm64_dma_phys_limit);
> > -	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC))
> > +	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb) {
> > +		/*
> > +		 * If no bouncing needed for ZONE_DMA, reduce the swiotlb
> > +		 * buffer for kmalloc() bouncing to 1MB per 1GB of RAM.
> > +		 */
> > +		unsigned long size =
> > +			ALIGN(memblock_phys_mem_size(), SZ_1G) >> 10;
> 
> Hmm, I wondered if DIV_ROUND_UP(memblock_phys_mem_size(), 1024) might be any
> easier, but by the time I've typed it out it's still just as long :)

It's not quite the same result, as it's rounded up to the nearest KB rather
than to the nearest MB. But this may work just as well; I just made up this
number really.
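
A standalone sketch of the difference being discussed, with both macros re-implemented for a userspace build and made-up RAM sizes: ALIGN() rounds the RAM up to a whole GB first, so the result moves in 1MB steps, while DIV_ROUND_UP() only rounds the quotient itself up. The two agree when the RAM size is an exact multiple of 1GB.

#include <stdio.h>

#define SZ_1G			(1UL << 30)
#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	/* Hypothetical totals standing in for memblock_phys_mem_size(). */
	unsigned long ram[] = { 512UL << 20, 3UL << 29, 4UL << 30 };

	for (unsigned int i = 0; i < sizeof(ram) / sizeof(ram[0]); i++)
		printf("RAM %4lu MB: ALIGN()>>10 gives %4lu KB, DIV_ROUND_UP() gives %4lu KB\n",
		       ram[i] >> 20,
		       (ALIGN(ram[i], SZ_1G) >> 10) >> 10,
		       DIV_ROUND_UP(ram[i], 1024) >> 10);
	return 0;
}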
Catalin Marinas Oct. 13, 2023, 6:45 p.m. UTC | #3
On Fri, 06 Oct 2023 16:32:52 +0100, Catalin Marinas wrote:
> With CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC enabled, the arm64 kernel still
> allocates the default SWIOTLB buffer (64MB) even if ZONE_DMA is disabled
> or all the RAM fits into this zone. However, this potentially wastes a
> non-negligible amount of memory on platforms with little RAM.
> 
> Reduce the SWIOTLB size to 1MB per 1GB of RAM if only needed for
> kmalloc() buffer bouncing.
> 
> [...]

Applied to arm64 (for-next/misc), thanks!

Also changed the division to DIV_ROUND_UP() as per Robin's suggestion.

[1/1] arm64: swiotlb: Reduce the default size if no ZONE_DMA bouncing needed
      https://git.kernel.org/arm64/c/65033574ade9

Patch

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 8a0f8604348b..54ee1a4868c2 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -493,8 +493,16 @@  void __init mem_init(void)
 {
 	bool swiotlb = max_pfn > PFN_DOWN(arm64_dma_phys_limit);
 
-	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC))
+	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb) {
+		/*
+		 * If no bouncing needed for ZONE_DMA, reduce the swiotlb
+		 * buffer for kmalloc() bouncing to 1MB per 1GB of RAM.
+		 */
+		unsigned long size =
+			ALIGN(memblock_phys_mem_size(), SZ_1G) >> 10;
+		swiotlb_adjust_size(min(swiotlb_size_or_default(), size));
 		swiotlb = true;
+	}
 
 	swiotlb_init(swiotlb, SWIOTLB_VERBOSE);
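
As a follow-up illustration of the min() cap in the hunk above, assuming swiotlb_size_or_default() returns the stock 64MB default mentioned in the commit message: the per-GB scaling only reduces the allocation below roughly 64GB of RAM; above that the default size is kept. A userspace sketch with the macros re-implemented and hypothetical RAM sizes:

#include <stdio.h>

#define SZ_1G		(1UL << 30)
#define SZ_64M		(64UL << 20)
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))
#define MIN(a, b)	((a) < (b) ? (a) : (b))

int main(void)
{
	/* Hypothetical total-RAM values. */
	unsigned long ram[] = { 2UL << 30, 32UL << 30, 64UL << 30, 128UL << 30 };

	for (unsigned int i = 0; i < sizeof(ram) / sizeof(ram[0]); i++) {
		unsigned long scaled = ALIGN(ram[i], SZ_1G) >> 10;
		/* Mirrors min(swiotlb_size_or_default(), size) with a 64MB default. */
		unsigned long used = MIN(SZ_64M, scaled);

		printf("RAM %3lu GB -> scaled %3lu MB, used %2lu MB\n",
		       ram[i] >> 30, scaled >> 20, used >> 20);
	}
	return 0;
}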