
[v6,00/17] mm, dma, arm64: Reduce ARCH_KMALLOC_MINALIGN to 8

Message ID 20230531154836.1366225-1-catalin.marinas@arm.com

Message

Catalin Marinas May 31, 2023, 3:48 p.m. UTC
Hi,

Here's version 6 of the series reducing the kmalloc() minimum alignment
on arm64 to 8 (from 128). There are patches already to do the same for
riscv (pretty straight-forward after this series).

The first 11 patches decouple ARCH_KMALLOC_MINALIGN from
ARCH_DMA_MINALIGN and, for arm64, limit the kmalloc() caches to those
aligned to the run-time probed cache_line_size(). On arm64 we gain the
kmalloc-{64,192} caches.
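
This is also why several of the driver patches switch from
ARCH_KMALLOC_MINALIGN to ARCH_DMA_MINALIGN: with the kmalloc()
alignment potentially as small as 8 bytes, a structure that embeds a
buffer used for non-coherent DMA has to pad it to ARCH_DMA_MINALIGN
explicitly. A minimal, made-up example (the structure and field names
below are illustrative only, not taken from any of the patches):

#include <linux/cache.h>		/* ARCH_DMA_MINALIGN */
#include <linux/types.h>

struct foo_dev_state {			/* hypothetical driver state */
	int		status;
	unsigned long	flags;
	/*
	 * DMA buffer: aligning it to ARCH_DMA_MINALIGN (which also pads
	 * the struct size up to that alignment) guarantees it never
	 * shares a cache line with the CPU-only fields, regardless of
	 * the kmalloc() minimum alignment.
	 */
	u8		rx_buf[64] __aligned(ARCH_DMA_MINALIGN);
};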

The subsequent patches (12 to 17) further reduce the kmalloc() caches to
kmalloc-{8,16,32,96} if the default swiotlb is present by bouncing small
buffers in the DMA API.
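
For example, the core of the bouncing decision looks roughly like the
sketch below (simplified; the exact helper names and where they live in
the dma-mapping code may differ from the actual patches):

#include <linux/dma-map-ops.h>		/* dev_is_dma_coherent() */
#include <linux/dma-mapping.h>		/* dma_get_cache_alignment() */

/* A kmalloc() size is DMA-safe if it cannot share a cache line */
static inline bool dma_kmalloc_size_aligned(size_t size)
{
	return size >= dma_get_cache_alignment() &&
	       IS_ALIGNED(size, dma_get_cache_alignment());
}

/*
 * Only bounce for non-coherent devices and for directions where the CPU
 * caches are invalidated (DMA_FROM_DEVICE, DMA_BIDIRECTIONAL). A
 * DMA_TO_DEVICE transfer only cleans the cache and is always safe.
 */
static inline bool dma_kmalloc_needs_bounce(struct device *dev, size_t size,
					    enum dma_data_direction dir)
{
	if (dir == DMA_TO_DEVICE || dev_is_dma_coherent(dev))
		return false;

	return !dma_kmalloc_size_aligned(size);
}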

Changes since v5:

- Renaming of the sg_* accessors for consistency.

- IIO_DMA_MINALIGN defined to ARCH_DMA_MINALIGN (missed it in previous
  versions).

- Modified Robin's patch 11 to use #ifdef CONFIG_NEED_SG_DMA_FLAGS
  instead of CONFIG_PCI_P2PDMA in scatterlist.h.

- Added the new sg_dma_*_swiotlb() helpers under the same #ifdef as above.
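
For reference, a rough sketch of those helpers (the exact names, flag
values and surrounding code follow the scatterlist.h changes in this
series and may differ in detail):

#ifdef CONFIG_NEED_SG_DMA_FLAGS

#define SG_DMA_SWIOTLB		(1 << 1)	/* flag value illustrative */

/* Mark an SG entry whose DMA address points into the swiotlb buffer */
static inline void sg_dma_mark_swiotlb(struct scatterlist *sg)
{
	sg->dma_flags |= SG_DMA_SWIOTLB;
}

static inline bool sg_dma_is_swiotlb(struct scatterlist *sg)
{
	return sg->dma_flags & SG_DMA_SWIOTLB;
}

#else

/* Without the dedicated flags field these degrade to a no-op / false */
static inline void sg_dma_mark_swiotlb(struct scatterlist *sg) { }
static inline bool sg_dma_is_swiotlb(struct scatterlist *sg)
{
	return false;
}

#endif /* CONFIG_NEED_SG_DMA_FLAGS */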

The updated patches are also available on this branch:

git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux devel/kmalloc-minalign

Thanks.

Catalin Marinas (15):
  mm/slab: Decouple ARCH_KMALLOC_MINALIGN from ARCH_DMA_MINALIGN
  dma: Allow dma_get_cache_alignment() to be overridden by the arch code
  mm/slab: Simplify create_kmalloc_cache() args and make it static
  mm/slab: Limit kmalloc() minimum alignment to
    dma_get_cache_alignment()
  drivers/base: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN
  drivers/gpu: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN
  drivers/usb: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN
  drivers/spi: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN
  dm-crypt: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN
  iio: core: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN
  arm64: Allow kmalloc() caches aligned to the smaller cache_line_size()
  dma-mapping: Force bouncing if the kmalloc() size is not
    cache-line-aligned
  iommu/dma: Force bouncing if the size is not cacheline-aligned
  mm: slab: Reduce the kmalloc() minimum alignment if DMA bouncing
    possible
  arm64: Enable ARCH_WANT_KMALLOC_DMA_BOUNCE for arm64

Robin Murphy (2):
  scatterlist: Add dedicated config for DMA flags
  dma-mapping: Name SG DMA flag helpers consistently

 arch/arm64/Kconfig             |  1 +
 arch/arm64/include/asm/cache.h |  3 ++
 arch/arm64/mm/init.c           |  7 +++-
 drivers/base/devres.c          |  6 ++--
 drivers/gpu/drm/drm_managed.c  |  6 ++--
 drivers/iommu/Kconfig          |  1 +
 drivers/iommu/dma-iommu.c      | 58 ++++++++++++++++++++++++--------
 drivers/iommu/iommu.c          |  2 +-
 drivers/md/dm-crypt.c          |  2 +-
 drivers/pci/Kconfig            |  1 +
 drivers/spi/spidev.c           |  2 +-
 drivers/usb/core/buffer.c      |  8 ++---
 include/linux/dma-map-ops.h    | 61 ++++++++++++++++++++++++++++++++++
 include/linux/dma-mapping.h    |  4 ++-
 include/linux/iio/iio.h        |  2 +-
 include/linux/scatterlist.h    | 60 ++++++++++++++++++++++++++-------
 include/linux/slab.h           | 14 ++++++--
 kernel/dma/Kconfig             |  7 ++++
 kernel/dma/direct.c            |  2 +-
 kernel/dma/direct.h            |  3 +-
 mm/slab.c                      |  6 +---
 mm/slab.h                      |  5 ++-
 mm/slab_common.c               | 46 +++++++++++++++++++------
 23 files changed, 243 insertions(+), 64 deletions(-)

Comments

Isaac Manjarres June 8, 2023, 5:45 a.m. UTC | #1
On Wed, May 31, 2023 at 8:48 AM Catalin Marinas <catalin.marinas@arm.com> wrote:
> Here's version 6 of the series reducing the kmalloc() minimum alignment
> on arm64 to 8 (from 128). There are patches already to do the same for
> riscv (pretty straight-forward after this series).
Thanks, Catalin for getting these patches out. Please add my "Tested-by:" tag
for the series:

Tested-by: Isaac J. Manjarres <isaacmanjarres@google.com>

With the first 11 patches, I observed a reduction of 18.4 MB
in the slab memory footprint on my Pixel 6 device. After applying the
rest of the patches in the series, I observed a total reduction of
26.5 MB in the
slab memory footprint on my device. These are great results!

--Isaac
Ard Biesheuvel June 8, 2023, 8:05 a.m. UTC | #2
On Thu, 8 Jun 2023 at 07:45, Isaac Manjarres <isaacmanjarres@google.com> wrote:
>
> On Wed, May 31, 2023 at 8:48 AM Catalin Marinas <catalin.marinas@arm.com> wrote:
> > Here's version 6 of the series reducing the kmalloc() minimum alignment
> > on arm64 to 8 (from 128). There are patches already to do the same for
> > riscv (pretty straight-forward after this series).
> Thanks, Catalin for getting these patches out. Please add my "Tested-by:" tag
> for the series:
>
> Tested-by: Isaac J. Manjarres <isaacmanjarres@google.com>
>
> With the first 11 patches, I observed a reduction of 18.4 MB
> in the slab memory footprint on my Pixel 6 device. After applying the
> rest of the patches in the series, I observed a total reduction of
> 26.5 MB in the
> slab memory footprint on my device. These are great results!
>

It would also be good to get some insight into how much bouncing is
going on in this case, given that (AFAIK) Pixel 6 uses
non-cache-coherent DMA.
Isaac Manjarres June 8, 2023, 9:29 p.m. UTC | #3
On Thu, Jun 08, 2023 at 10:05:58AM +0200, Ard Biesheuvel wrote:
> On Thu, 8 Jun 2023 at 07:45, Isaac Manjarres <isaacmanjarres@google.com> wrote:
> >
> > On Wed, May 31, 2023 at 8:48 AM Catalin Marinas <catalin.marinas@arm.com> wrote:
> > > Here's version 6 of the series reducing the kmalloc() minimum alignment
> > > on arm64 to 8 (from 128). There are patches already to do the same for
> > > riscv (pretty straight-forward after this series).
> > Thanks, Catalin for getting these patches out. Please add my "Tested-by:" tag
> > for the series:
> >
> > Tested-by: Isaac J. Manjarres <isaacmanjarres@google.com>
> >
> > With the first 11 patches, I observed a reduction of 18.4 MB
> > in the slab memory footprint on my Pixel 6 device. After applying the
> > rest of the patches in the series, I observed a total reduction of
> > 26.5 MB in the
> > slab memory footprint on my device. These are great results!
> >
> 
> It would also be good to get an insight into how much bouncing is
> going on in this case, given that (AFAIK) Pixel 6 uses non-cache
> coherent DMA.

I enabled the "swiotlb_bounced" trace event from the kernel command line
to see if anything was being bounced. It turns out that for Pixel 6
there are non-coherent DMA transfers occurring, but none of the
transfers that are in either the DMA_FROM_DEVICE or
DMA_BIDIRECTIONAL directions are small enough to require bouncing.

--Isaac

P.S. I noticed that the trace_swiotlb_bounced() tracepoint may not be
invoked even though bouncing occurs. For example, in the dma-iommu path,
swiotlb_tbl_map_single() is called when bouncing, instead of
swiotlb_map(), which is what ends up calling trace_swiotlb_bounced().

Would it make sense to move the call to trace_swiotlb_bounced() to
swiotlb_tbl_map_single() since that function is always invoked?
Petr Tesařík June 9, 2023, 8:11 a.m. UTC | #4
On Thu, 8 Jun 2023 14:29:45 -0700
Isaac Manjarres <isaacmanjarres@google.com> wrote:

>[...]
> P.S. I noticed that the trace_swiotlb_bounced() tracepoint may not be
> invoked even though bouncing occurs. For example, in the dma-iommu path,
> swiotlb_tbl_map_single() is called when bouncing, instead of
> swiotlb_map(), which is what ends up calling trace_swiotlb_bounced().
> 
> Would it make sense to move the call to trace_swiotlb_bounced() to
> swiotlb_tbl_map_single() since that function is always invoked?

Definitely, if you ask me. I believe the change was merely forgotten in
commit eb605a5754d0 ("swiotlb: add swiotlb_tbl_map_single library
function").

Let me add the author to Cc. Plus Konrad, who built further on that
commit, may also have an opinion.

Petr T
Isaac Manjarres June 14, 2023, 11:55 p.m. UTC | #5
On Mon, Jun 12, 2023 at 09:47:55AM +0200, Christoph Hellwig wrote:
> On Mon, Jun 12, 2023 at 07:44:46AM +0000, Tomonori Fujita wrote:
> > I cannot recall the patch but from quick look, moving trace_swiotlb_bounced() to
> > swiotlb_tbl_map_single() makes sense.
> 
> Agreed.

There are actually two call sites for trace_swiotlb_bounced():
swiotlb_map() and xen_swiotlb_map_page(). Both those functions
also invoke swiotlb_tbl_map_single(), so moving the call to
trace_swiotlb_bounced() to swiotlb_tbl_map_single() means that
there will be 2 traces per bounce buffering event.

The difference between the two call-sites of trace_swiotlb_bounced()
is that the call in swiotlb_map() uses phys_to_dma() for the device
address, while xen_swiotlb_map_page() uses xen_phys_to_dma().

Would it make sense to move the trace_swiotlb_bounced() call to
swiotlb_tbl_map_single() and then introduce a
swiotlb_tbl_map_single_notrace() function which doesn't do the tracing,
and xen_swiotlb_map_page() can call this?
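
Roughly, the idea would be something like the sketch below (purely
illustrative: swiotlb_tbl_map_single_notrace() is only the name
proposed above, and the argument lists are trimmed compared to the
real functions):

/* the current swiotlb_tbl_map_single() body would move here, untraced */
phys_addr_t swiotlb_tbl_map_single_notrace(struct device *dev,
					   phys_addr_t orig_addr, size_t size,
					   enum dma_data_direction dir)
{
	/* ... bounce slot allocation and copy, as today ... */
	return orig_addr;		/* placeholder */
}

/* traced wrapper, used by everyone except the Xen path */
phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
				   size_t size, enum dma_data_direction dir)
{
	trace_swiotlb_bounced(dev, phys_to_dma(dev, orig_addr), size);
	return swiotlb_tbl_map_single_notrace(dev, orig_addr, size, dir);
}

/*
 * xen_swiotlb_map_page() would keep its own trace_swiotlb_bounced() call
 * (which uses xen_phys_to_dma()) and call the _notrace variant, so each
 * bounce is still traced exactly once.
 */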

--Isaac