[v3,0/3] Speed up boot with faster linear map creation

Message ID 20240412131908.433043-1-ryan.roberts@arm.com

Message

Ryan Roberts April 12, 2024, 1:19 p.m. UTC
Hi All,

It turns out that creating the linear map can take a significant proportion of
the total boot time, especially when rodata=full. Most of that time is spent
waiting on superfluous TLB invalidations and memory barriers. This series
reworks the kernel pgtable generation code to significantly reduce the number
of those TLBIs, ISBs and DSBs. See each patch for details.
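
As a flavour of the "batch barriers" change (patch 2): the dsb/isb pair that
the pgtable setters normally issue per entry can be skipped during the write
loop and issued once at the end. Here is a minimal sketch of that idea; the
__set_pte_nosync() helper matches the rename noted in the changelog below,
but the surrounding loop function is illustrative rather than the actual
mmu.c code:

static inline void __set_pte_nosync(pte_t *ptep, pte_t pte)
{
	WRITE_ONCE(*ptep, pte);		/* no per-entry dsb(ishst)/isb() */
}

static void init_pte_sketch(pte_t *ptep, unsigned long addr,
			    unsigned long end, phys_addr_t phys,
			    pgprot_t prot)
{
	do {
		__set_pte_nosync(ptep, pfn_pte(__phys_to_pfn(phys), prot));
		phys += PAGE_SIZE;
	} while (ptep++, addr += PAGE_SIZE, addr != end);

	/* One barrier pair for the whole run instead of one per entry. */
	dsb(ishst);
	isb();
}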

The table below shows the execution time of map_mem() across a couple of
different systems with different RAM configurations. We measure after applying
each patch and show the improvement relative to base (v6.9-rc2):

               | Apple M2 VM | Ampere Altra| Ampere Altra| Ampere Altra
               | VM, 16G     | VM, 64G     | VM, 256G    | Metal, 512G
---------------|-------------|-------------|-------------|-------------
               |   ms    (%) |   ms    (%) |   ms    (%) |    ms    (%)
---------------|-------------|-------------|-------------|-------------
base           |  168   (0%) | 2198   (0%) | 8644   (0%) | 17447   (0%)
no-cont-remap  |   78 (-53%) |  435 (-80%) | 1723 (-80%) |  3779 (-78%)
batch-barriers |   11 (-93%) |  161 (-93%) |  656 (-92%) |  1654 (-91%)
no-alloc-remap |   10 (-94%) |  104 (-95%) |  438 (-95%) |  1223 (-93%)

This series applies on top of v6.9-rc2. All mm selftests pass. I've compile-
and boot-tested various PAGE_SIZE and VA size configs.

---

Changes since v2 [2]
====================

  - Changed alloc_init_cont_[pte|pmd]() to increment ptep/pmdp itself rather
    than returning the incremented value from init_[pte|pmd]() (per Mark; see
    the sketch after this list)
  - Removed explicit barriers from alloc_init_cont_pte(), relying instead on
    the barriers in pte_clear_fixmap() (per Mark)
  - Significantly simplified the approach to avoiding the fixmap during
    allocation (patch 3 reworked) (per Mark)
  - Dropped patch 4 - not possible with the simplified patch 3, and the ~2%
    improvement didn't warrant the complexity (per Mark)
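
To illustrate the first two points, this is roughly the shape that
alloc_init_cont_pte() takes in v3. It's a simplified sketch using helper
names from arch/arm64/mm/mmu.c and asm/pgtable.h (pte_set_fixmap_offset(),
init_pte(), pte_clear_fixmap()); the actual patch differs in detail (e.g. it
still computes a per-block prot for contiguous mappings):

static void alloc_init_cont_pte_sketch(pmd_t *pmdp, unsigned long addr,
				       unsigned long end, phys_addr_t phys,
				       pgprot_t prot)
{
	/* Map the table through the fixmap once for the whole range. */
	pte_t *ptep = pte_set_fixmap_offset(pmdp, addr);
	unsigned long next;

	do {
		next = pte_cont_addr_end(addr, end);

		/* init_pte() no longer returns an advanced pointer... */
		init_pte(ptep, addr, next, phys, prot);

		/* ...instead the caller advances ptep and phys itself. */
		ptep += pte_index(next) - pte_index(addr);
		phys += next - addr;
	} while (addr = next, addr != end);

	/*
	 * No explicit dsb/isb here: pte_clear_fixmap() provides the
	 * required barriers when it tears down the fixmap mapping.
	 */
	pte_clear_fixmap();
}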


Changes since v1 [1]
====================

  - Added Tested-by tags (thanks to Eric and Itaru)
  - Renamed ___set_pte() -> __set_pte_nosync() (per Ard)
  - Reordered patches (biggest impact & least controversial first)
  - Reordered the alloc/map/unmap functions in mmu.c to aid the reader
  - pte_clear() -> __pte_clear() in clear_fixmap_nosync()
  - Reverted the generic p4d_index() which caused an x86 build error; replaced
    it with an unconditional p4d_index() define under arm64 (see the sketch
    below).
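
For reference, the arm64 fallback mentioned in the last point is just the
usual index calculation; a sketch (the actual define in patch 3 may be
placed or spelled slightly differently):

#define p4d_index(addr)		(((addr) >> P4D_SHIFT) & (PTRS_PER_P4D - 1))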


[1] https://lore.kernel.org/linux-arm-kernel/20240326101448.3453626-1-ryan.roberts@arm.com/
[2] https://lore.kernel.org/linux-arm-kernel/20240404143308.2224141-1-ryan.roberts@arm.com/

Thanks,
Ryan

Ryan Roberts (3):
  arm64: mm: Don't remap pgtables per-cont(pte|pmd) block
  arm64: mm: Batch dsb and isb when populating pgtables
  arm64: mm: Don't remap pgtables for allocate vs populate

 arch/arm64/include/asm/pgtable.h |   9 ++-
 arch/arm64/mm/mmu.c              | 101 +++++++++++++++++--------------
 2 files changed, 65 insertions(+), 45 deletions(-)

--
2.25.1

Comments

Itaru Kitayama April 13, 2024, 12:53 p.m. UTC | #1
On Fri, Apr 12, 2024 at 05:06:41PM +0100, Will Deacon wrote:
> On Fri, 12 Apr 2024 14:19:05 +0100, Ryan Roberts wrote:
> > It turns out that creating the linear map can take a significant proportion of
> > the total boot time, especially when rodata=full. Most of that time is spent
> > waiting on superfluous TLB invalidations and memory barriers. This series
> > reworks the kernel pgtable generation code to significantly reduce the number
> > of those TLBIs, ISBs and DSBs. See each patch for details.
> > 
> > The table below shows the execution time of map_mem() across a couple of
> > different systems with different RAM configurations. We measure after applying
> > each patch and show the improvement relative to base (v6.9-rc2):
> > 
> > [...]
> 
> Applied to arm64 (for-next/mm), thanks!
> 
> [1/3] arm64: mm: Don't remap pgtables per-cont(pte|pmd) block
>       https://git.kernel.org/arm64/c/5c63db59c5f8
> [2/3] arm64: mm: Batch dsb and isb when populating pgtables
>       https://git.kernel.org/arm64/c/1fcb7cea8a5f
> [3/3] arm64: mm: Don't remap pgtables for allocate vs populate
>       https://git.kernel.org/arm64/c/0e9df1c905d8

I confirm this series boots the system on FVP (with my .config and my
buildroot rootfs using Shrinkwrap).

Tested-by: Itaru Kitayama <itaru.kitayama@fujitsu.com>

Thanks,
Itaru.

> 
> Cheers,
> -- 
> Will
> 
> https://fixes.arm64.dev
> https://next.arm64.dev
> https://will.arm64.dev
Mark Rutland April 12, 2024, 2:56 p.m. UTC | #2
On Fri, Apr 12, 2024 at 02:19:05PM +0100, Ryan Roberts wrote:
> Hi All,
> 
> It turns out that creating the linear map can take a significant proportion of
> the total boot time, especially when rodata=full. Most of that time is spent
> waiting on superfluous TLB invalidations and memory barriers. This series
> reworks the kernel pgtable generation code to significantly reduce the number
> of those TLBIs, ISBs and DSBs. See each patch for details.
> 
> The table below shows the execution time of map_mem() across a couple of
> different systems with different RAM configurations. We measure after applying
> each patch and show the improvement relative to base (v6.9-rc2):
> 
>                | Apple M2 VM | Ampere Altra| Ampere Altra| Ampere Altra
>                | VM, 16G     | VM, 64G     | VM, 256G    | Metal, 512G
> ---------------|-------------|-------------|-------------|-------------
>                |   ms    (%) |   ms    (%) |   ms    (%) |    ms    (%)
> ---------------|-------------|-------------|-------------|-------------
> base           |  168   (0%) | 2198   (0%) | 8644   (0%) | 17447   (0%)
> no-cont-remap  |   78 (-53%) |  435 (-80%) | 1723 (-80%) |  3779 (-78%)
> batch-barriers |   11 (-93%) |  161 (-93%) |  656 (-92%) |  1654 (-91%)
> no-alloc-remap |   10 (-94%) |  104 (-95%) |  438 (-95%) |  1223 (-93%)
> 
> This series applies on top of v6.9-rc2. All mm selftests pass. I've compile-
> and boot-tested various PAGE_SIZE and VA size configs.

Nice!

> Ryan Roberts (3):
>   arm64: mm: Don't remap pgtables per-cont(pte|pmd) block
>   arm64: mm: Batch dsb and isb when populating pgtables
>   arm64: mm: Don't remap pgtables for allocate vs populate

For the series:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Catalin, Will, are you happy to pick this up?

Mark.
Ard Biesheuvel April 12, 2024, 3 p.m. UTC | #3
On Fri, 12 Apr 2024 at 15:19, Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> Hi All,
>
> It turns out that creating the linear map can take a significant proportion of
> the total boot time, especially when rodata=full. Most of that time is spent
> waiting on superfluous TLB invalidations and memory barriers. This series
> reworks the kernel pgtable generation code to significantly reduce the number
> of those TLBIs, ISBs and DSBs. See each patch for details.
>
> The table below shows the execution time of map_mem() across a couple of
> different systems with different RAM configurations. We measure after applying
> each patch and show the improvement relative to base (v6.9-rc2):
>
>                | Apple M2 VM | Ampere Altra| Ampere Altra| Ampere Altra
>                | VM, 16G     | VM, 64G     | VM, 256G    | Metal, 512G
> ---------------|-------------|-------------|-------------|-------------
>                |   ms    (%) |   ms    (%) |   ms    (%) |    ms    (%)
> ---------------|-------------|-------------|-------------|-------------
> base           |  168   (0%) | 2198   (0%) | 8644   (0%) | 17447   (0%)
> no-cont-remap  |   78 (-53%) |  435 (-80%) | 1723 (-80%) |  3779 (-78%)
> batch-barriers |   11 (-93%) |  161 (-93%) |  656 (-92%) |  1654 (-91%)
> no-alloc-remap |   10 (-94%) |  104 (-95%) |  438 (-95%) |  1223 (-93%)
>
> This series applies on top of v6.9-rc2. All mm selftests pass. I've compile-
> and boot-tested various PAGE_SIZE and VA size configs.
...
>
> Ryan Roberts (3):
>   arm64: mm: Don't remap pgtables per-cont(pte|pmd) block
>   arm64: mm: Batch dsb and isb when populating pgtables
>   arm64: mm: Don't remap pgtables for allocate vs populate
>

For the series,

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Will Deacon April 12, 2024, 4:06 p.m. UTC | #4
On Fri, 12 Apr 2024 14:19:05 +0100, Ryan Roberts wrote:
> It turns out that creating the linear map can take a significant proportion of
> the total boot time, especially when rodata=full. Most of that time is spent
> waiting on superfluous TLB invalidations and memory barriers. This series
> reworks the kernel pgtable generation code to significantly reduce the number
> of those TLBIs, ISBs and DSBs. See each patch for details.
> 
> The table below shows the execution time of map_mem() across a couple of
> different systems with different RAM configurations. We measure after applying
> each patch and show the improvement relative to base (v6.9-rc2):
> 
> [...]

Applied to arm64 (for-next/mm), thanks!

[1/3] arm64: mm: Don't remap pgtables per-cont(pte|pmd) block
      https://git.kernel.org/arm64/c/5c63db59c5f8
[2/3] arm64: mm: Batch dsb and isb when populating pgtables
      https://git.kernel.org/arm64/c/1fcb7cea8a5f
[3/3] arm64: mm: Don't remap pgtables for allocate vs populate
      https://git.kernel.org/arm64/c/0e9df1c905d8

Cheers,
--
Will

https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev