[v13,00/14] huge vmalloc mappings

Message ID: 20210317062402.533919-1-npiggin@gmail.com
Message

Nicholas Piggin March 17, 2021, 6:23 a.m. UTC
Important compound page fix thanks to Ding Tianhong. 

Thanks,
Nick

Since v12:
- Use compound pages so it works with remap_vmalloc_range [noticed by Ding]
- Fix debug_vm_pgtable.c compile error.

Since v11:
- ARM compile fix (patch 1)
- debug_vm_pgtable compile fix

Since v10:
- Fixed code style, mostly > 80 column lines, tweaked patch titles, etc.
  [thanks Christoph]
- Made huge vmalloc code and data structure compile away if unselected
  [Christoph]
- Archs only have to provide arch_vmap_p?d_supported for levels they
  implement [Christoph]

Since v9:
- Fixed intermediate build breakage on x86-32 !PAE [thanks Ding]
- Fixed small page fallback case vm_struct double-free [thanks Ding]

Since v8:
- Fixed nommu compile.
- Added Kconfig option help text
- Added VM_NOHUGE which should help archs implement it [suggested by Rick]

Since v7:
- Rebase, added some acks, compile fix
- Removed "order=" from vmallocinfo, it's a bit confusing (nr_pages
  is in small page size for compatibility).
- Added arch_vmap_pmd_supported() test before starting to allocate
  the large page, rather than only testing it when doing the map, to
  avoid unsupported configs trying to allocate huge pages for no
  reason.

Since v6:
- Fixed a false positive warning introduced in patch 2, found by
  kbuild test robot.

Since v5:
- Split arch changes out better and make the constant folding work
- Avoid most of the 80 column wrap, fix a reference to lib/ioremap.c
- Fix compile error on some archs

*** BLURB HERE ***

Nicholas Piggin (14):
  ARM: mm: add missing pud_page define to 2-level page tables
  mm/vmalloc: fix HUGE_VMAP regression by enabling huge pages in
    vmalloc_to_page
  mm: apply_to_pte_range warn and fail if a large pte is encountered
  mm/vmalloc: rename vmap_*_range vmap_pages_*_range
  mm/ioremap: rename ioremap_*_range to vmap_*_range
  mm: HUGE_VMAP arch support cleanup
  powerpc: inline huge vmap supported functions
  arm64: inline huge vmap supported functions
  x86: inline huge vmap supported functions
  mm/vmalloc: provide fallback arch huge vmap support functions
  mm: Move vmap_range from mm/ioremap.c to mm/vmalloc.c
  mm/vmalloc: add vmap_range_noflush variant
  mm/vmalloc: Hugepage vmalloc mappings
  powerpc/64s/radix: Enable huge vmalloc mappings

 .../admin-guide/kernel-parameters.txt         |   2 +
 arch/Kconfig                                  |  11 +
 arch/arm/include/asm/pgtable-3level.h         |   2 -
 arch/arm/include/asm/pgtable.h                |   3 +
 arch/arm64/include/asm/vmalloc.h              |  24 +
 arch/arm64/mm/mmu.c                           |  26 -
 arch/powerpc/Kconfig                          |   1 +
 arch/powerpc/include/asm/vmalloc.h            |  20 +
 arch/powerpc/kernel/module.c                  |  22 +-
 arch/powerpc/mm/book3s64/radix_pgtable.c      |  21 -
 arch/x86/include/asm/vmalloc.h                |  20 +
 arch/x86/mm/ioremap.c                         |  19 -
 arch/x86/mm/pgtable.c                         |  13 -
 include/linux/io.h                            |   9 -
 include/linux/vmalloc.h                       |  46 ++
 init/main.c                                   |   1 -
 mm/debug_vm_pgtable.c                         |   4 +-
 mm/ioremap.c                                  | 225 +-------
 mm/memory.c                                   |  66 ++-
 mm/page_alloc.c                               |   5 +-
 mm/vmalloc.c                                  | 485 +++++++++++++++---
 21 files changed, 621 insertions(+), 404 deletions(-)

Comments

Andrew Morton March 17, 2021, 10:58 p.m. UTC | #1
On Wed, 17 Mar 2021 16:23:48 +1000 Nicholas Piggin <npiggin@gmail.com> wrote:

> 
> *** BLURB HERE ***
> 

That's really not what it means ;)

Could we please get a nice description for the [0/n]?  What's it all
about, what's the benefit, what are potential downsides.

And performance testing results!  Because if it ain't faster, there's
no point in merging it?
Nicholas Piggin March 18, 2021, 3:50 a.m. UTC | #2
Excerpts from Andrew Morton's message of March 18, 2021 8:58 am:
> On Wed, 17 Mar 2021 16:23:48 +1000 Nicholas Piggin <npiggin@gmail.com> wrote:
> 
>> 
>> *** BLURB HERE ***
>> 
> 
> That's really not what it means ;)
 
Sigh, wasn't having a good yesterday.

> Could we please get a nice description for the [0/n]?  What's it all
> about, what's the benefit, what are potential downsides.
>
> And performance testing results!  Because if it ain't faster, there's
> no point in merging it?
> 

It's supposed to have a bit of description in patch 13, and has some
performance results in patch 14. Is it better to put a bigger writeup
in 0? I thought that tends to get lost.

I'll write something here to discuss for now, and can fit it into the 
appropriate place in the series after that.

The kernel virtual mapping layer grew support for mapping memory with >
PAGE_SIZE ptes in commit 0ddab1d2ed664 ("lib/ioremap.c: add huge I/O map
capability interfaces"), and implemented support for using those huge
page mappings with ioremap.

According to the submission, the use-case is mapping very large
non-volatile memory devices, which could be gigabytes or terabytes in size:
https://lore.kernel.org/lkml/1425404664-19675-1-git-send-email-toshi.kani@hp.com/
The benefit is said to be in the overhead of maintaining the mapping,
both in memory and in setup / teardown time. Page table overhead for a
mapping with 4kB pages and 8-byte page table entries is 2GB per TB of
mapping; with 2MB pages that drops to 4MB per TB.
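
To sanity-check that arithmetic (a standalone sketch, assuming 8-byte
page table entries; not code from the series):

    #include <stdio.h>

    int main(void)
    {
            unsigned long long map = 1ULL << 40; /* 1TB mapped    */
            unsigned long long pte = 8;          /* bytes / entry */

            /* 4kB pages: 2^28 entries * 8B = 2GB of page tables  */
            printf("4kB: %lluMB\n", map / (4ULL << 10) * pte >> 20);
            /* 2MB pages: 2^19 entries * 8B = 4MB of page tables  */
            printf("2MB: %lluMB\n", map / (2ULL << 20) * pte >> 20);
            return 0;
    }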

The same huge page vmap infrastructure can be quite easily adapted and
used for mapping vmalloc memory pages without more complexity for arch
or core vmap code. However, unlike ioremap, page table overhead is not
a real problem for vmalloc, so the advantage that justifies this is
performance.
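
To give a sense of how little the arch side needs, a minimal sketch of
the per-level hook's shape (the arch_vmap_pmd_supported() name is from
the series; the body here is illustrative only, a real arch checks CPU
support, e.g. x86 tests the PSE feature):

    /* Illustrative body: a real arch tests whether the CPU can do
     * PMD-sized leaf mappings for this protection. */
    static inline bool arch_vmap_pmd_supported(pgprot_t prot)
    {
            return true;
    }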

Several of the largest structures in the kernel (e.g., vfs and network
hash tables) are allocated with vmalloc on NUMA machines, in order to
distribute access bandwidth over the machine. Mapping these with larger
pages can improve TLB usage significantly; for example, it reduces TLB
misses by nearly 30x on a `git diff` workload on a 2-node POWER9 (59,800
-> 2,100) and reduces CPU cycles by 0.54%, due to the vfs hashes being
allocated with 2MB pages.
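
Callers need no changes to get this; any sufficiently large vmalloc
becomes eligible for PMD-level mappings (hypothetical caller, for
illustration only):

    #include <linux/vmalloc.h>

    /* A 32MB table is >= PMD_SIZE, so with this series it can be
     * transparently backed by 2MB pages; the caller still sees an
     * ordinary vmalloc'ed range and vfree()s it as usual. */
    static void *alloc_big_hash_table(void)
    {
            return vmalloc(32UL << 20);
    }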

[ Other numbers?
  - The difference is even larger in a guest due to more costly TLB 
    misses.
  - Eric Dumazet was keen on the network hash performance possibilities.
  - Other archs? Ding was doing x86 testing. ]

The kernel module allocator also uses vmalloc to map module images even
on non-NUMA machines, which can result in high iTLB pressure on highly
modular distro-type kernels. This series does not implement huge
mappings for modules yet, but it's a step along the way. Rick Edgecombe
was looking at that IIRC.

The per-cpu allocator similarly might be able to take advantage of this.
Also on the todo list.

The disadvantages of this I can see are:
* Memory fragmentation can waste some physical memory because it will 
  attempt to allocate larger pages to fit the required size, rounding up 
  (once the requested size is >= 2MB).
  - I don't see it being a big problem in practice unless some user
    crops up that allocates thousands of 2.5MB ranges. We can tweak
    heuristics a bit there if needed to reduce peak waste.
* Less granular mappings can make the NUMA distribution less balanced.
  - Similar to the above.
  - Could also allocate all major system hashes with one allocation
    up-front and spread them all across the one block, which should help
    overall NUMA distribution and reduce fragmentation waste.
* Callers might expect something about the underlying allocated pages.
  - Tried to keep the appearance of base PAGE_SIZE pages throughout the
    APIs and exposed data structures.
  - Added a VM_NO_HUGE_VMAP flag to hammer troublesome cases with (see
    the sketch below).

Finally, I added a nohugevmalloc boot option to turn it off
(independent of nohugeiomap).
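
For completeness, what the per-caller opt-out could look like (a sketch
only; it assumes the VM_NO_HUGE_VMAP flag from this series and the
current __vmalloc_node_range() signature):

    #include <linux/vmalloc.h>

    /* Force base-page mappings for a caller that makes assumptions
     * about the underlying page tables (sketch, not from the series). */
    static void *alloc_table_nohuge(unsigned long size)
    {
            return __vmalloc_node_range(size, 1, VMALLOC_START,
                            VMALLOC_END, GFP_KERNEL, PAGE_KERNEL,
                            VM_NO_HUGE_VMAP, NUMA_NO_NODE,
                            __builtin_return_address(0));
    }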

Is that helpful?

Thanks,
Nick