
[PATCHv14 00/17] Linear Address Masking enabling

Message ID 20230111123736.20025-1-kirill.shutemov@linux.intel.com

Message

Kirill A. Shutemov Jan. 11, 2023, 12:37 p.m. UTC
Linear Address Masking[1] (LAM) modifies the checking that is applied to
64-bit linear addresses, allowing software to use the untranslated
address bits for metadata.
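
For illustration (not part of the original cover letter), here is a minimal
userspace sketch of carrying a 6-bit tag in bits 62:57 of a pointer once
LAM_U57 is enabled; all names below are invented for the example:

#include <stdint.h>

/*
 * Illustrative layout only: with LAM_U57 enabled, hardware ignores bits
 * 62:57 of user pointers, leaving 6 bits for software metadata.
 */
#define EXAMPLE_TAG_SHIFT	57
#define EXAMPLE_TAG_BITS	6
#define EXAMPLE_TAG_MASK	(((1UL << EXAMPLE_TAG_BITS) - 1) << EXAMPLE_TAG_SHIFT)

static inline void *example_tag_pointer(void *p, uint64_t tag)
{
	uint64_t addr = (uint64_t)p & ~EXAMPLE_TAG_MASK;

	return (void *)(addr | ((tag << EXAMPLE_TAG_SHIFT) & EXAMPLE_TAG_MASK));
}

static inline void *example_untag_pointer(void *p)
{
	return (void *)((uint64_t)p & ~EXAMPLE_TAG_MASK);
}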

The capability can be used for an efficient address sanitizer (ASAN)
implementation and for optimizations in JITs and virtual machines.

The patchset brings LAM support for userspace addresses. Only LAM_U57 is
supported at this time.
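
A hedged usage sketch (not from the cover letter): enabling LAM from
userspace via the arch_prctl() interface the series adds.
ARCH_GET_MAX_TAG_BITS is named in the changelog below; the other codes and
their values are taken from the series' uapi changes and should be checked
against the patches:

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef ARCH_GET_UNTAG_MASK
#define ARCH_GET_UNTAG_MASK	0x4001
#define ARCH_ENABLE_TAGGED_ADDR	0x4002
#define ARCH_GET_MAX_TAG_BITS	0x4003
#endif

int main(void)
{
	unsigned long max_bits = 0, untag_mask = 0;

	/* Query how many tag bits the kernel offers (6 with LAM_U57). */
	if (syscall(SYS_arch_prctl, ARCH_GET_MAX_TAG_BITS, &max_bits))
		return 1;

	/* Opt in; per the series this must happen before more threads exist. */
	if (syscall(SYS_arch_prctl, ARCH_ENABLE_TAGGED_ADDR, max_bits))
		return 1;

	if (!syscall(SYS_arch_prctl, ARCH_GET_UNTAG_MASK, &untag_mask))
		printf("untag mask: %#lx\n", untag_mask);

	return 0;
}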

Please review and consider applying.

git://git.kernel.org/pub/scm/linux/kernel/git/kas/linux.git lam

v14:
  - Rework address range check in get_user() and put_user();
  - Introduce CONFIG_ADDRESS_MASKING;
  - Cache untag masking in per-CPU variable;
  - Reject LAM enabling via PTRACE_ARCH_PRCTL;
  - Fix locking around untagged_addr_remote();
  - Fix typo in MM_CONTEXT_* conversion patch;
  - Fix selftest;
v13:
  - Fix race between untagged_addr() and LAM enabling:
    + Do not allow LAM to be enabled after the process has spawned a second
      thread;
    + untagged_addr() untags the address according to the rules of the
      current process;
    + untagged_addr_remote() can be used to untag addresses of a foreign
      process. It requires the mmap lock of the target process to be taken
      (a usage sketch follows the changelog);
v12:
  - Rebased onto tip/x86/mm;
  - Drop VM_WARN_ON() that may produce a false positive on a race between
    context switch and LAM enabling;
  - Adjust comments to explain the possible race;
  - Use READ_ONCE() in mm_lam_cr3_mask();
  - Do not assume &init_mm == mm in initialize_tlbstate_and_flush();
  - Ack by Andy;
v11:
  - Move untag_mask to /proc/$PID/status;
  - s/SVM/SVA/g;
  - static inline arch_pgtable_dma_compat() instead of macros;
  - Replace pasid_valid() with mm_valid_pasid();
  - Acks from Ashok and Jacob (forgot to apply from v9);
v10:
  - Rebased to v6.1-rc1;
  - Add selftest for SVM vs LAM;
v9:
  - Fix race between LAM enabling and the check that KVM memslot addresses
    don't have any tags;
  - Reduce untagged_addr() overhead until the first LAM user;
  - Clarify SVM vs. LAM semantics;
  - Use mmap_lock to serialize LAM enabling;
v8:
  - Drop redundant smp_mb() in prctl_enable_tagged_addr();
  - Cleanup code around build_cr3();
  - Fix commit messages;
  - Selftests updates;
  - Acked/Reviewed/Tested-bys from Alexander and Peter;
v7:
  - Drop redundant smp_mb() in prctl_enable_tagged_addr();
  - Cleanup code around build_cr3();
  - Fix commit message;
  - Fix indentation;
v6:
  - Rebased onto v6.0-rc1
  - LAM_U48 excluded from the patchset. Still available in the git tree;
  - Add ARCH_GET_MAX_TAG_BITS;
  - Fix build without CONFIG_DEBUG_VM;
  - Update comments;
  - Reviewed/Tested-by from Alexander;
v5:
  - Do not use switch_mm() in enable_lam_func()
  - Use mb()/READ_ONCE() pair on LAM enabling;
  - Add self-test by Weihong Zhang;
  - Add comments;
v4:
  - Fix untagged_addr() for LAM_U48;
  - Remove no-threads restriction on LAM enabling;
  - Fix mm_struct access from /proc/$PID/arch_status
  - Fix LAM handling in initialize_tlbstate_and_flush()
  - Pack tlb_state better;
  - Comments and commit messages;
v3:
  - Rebased onto v5.19-rc1
  - Per-process enabling;
  - API overhaul (again);
  - Avoid branches and costly computations in the fast path;
  - LAM_U48 is in an optional patch.
v2:
  - Rebased onto v5.18-rc1
  - New arch_prctl(2)-based API
  - Expose status of LAM (or other thread features) in
    /proc/$PID/arch_status
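
As a conceptual illustration of the v13 rule above (a sketch based on the
series, not a verbatim excerpt; the helper names are invented):

#include <linux/mm_types.h>
#include <linux/mmap_lock.h>
#include <linux/uaccess.h>

/*
 * untagged_addr() is only valid for addresses of the current process,
 * while another process' addresses must go through untagged_addr_remote()
 * with that process' mmap lock held.
 */
static unsigned long example_strip_tag_current(unsigned long addr)
{
	/* Applies the untag mask of current->mm. */
	return untagged_addr(addr);
}

static unsigned long example_strip_tag_remote(struct mm_struct *mm,
					      unsigned long addr)
{
	unsigned long untagged;

	mmap_read_lock(mm);	/* untagged_addr_remote() requires the lock */
	untagged = untagged_addr_remote(mm, addr);
	mmap_read_unlock(mm);

	return untagged;
}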

[1] ISE, Chapter 10. https://cdrdv2.intel.com/v1/dl/getContent/671368
Kirill A. Shutemov (12):
  x86/mm: Rework address range check in get_user() and put_user()
  x86: Allow atomic MM_CONTEXT flags setting
  x86: CPUID and CR3/CR4 flags for Linear Address Masking
  x86/mm: Handle LAM on context switch
  mm: Introduce untagged_addr_remote()
  x86/uaccess: Provide untagged_addr() and remove tags before address
    check
  x86/mm: Provide arch_prctl() interface for LAM
  x86/mm: Reduce untagged_addr() overhead until the first LAM user
  mm: Expose untagging mask in /proc/$PID/status
  iommu/sva: Replace pasid_valid() helper with mm_valid_pasid()
  x86/mm/iommu/sva: Make LAM and SVA mutually exclusive
  selftests/x86/lam: Add test cases for LAM vs thread creation

Weihong Zhang (5):
  selftests/x86/lam: Add malloc and tag-bits test cases for
    linear-address masking
  selftests/x86/lam: Add mmap and SYSCALL test cases for linear-address
    masking
  selftests/x86/lam: Add io_uring test cases for linear-address masking
  selftests/x86/lam: Add inherit test cases for linear-address masking
  selftests/x86/lam: Add ARCH_FORCE_TAGGED_SVA test cases for
    linear-address masking

 arch/arm64/include/asm/mmu_context.h        |    6 +
 arch/sparc/include/asm/mmu_context_64.h     |    6 +
 arch/sparc/include/asm/uaccess_64.h         |    2 +
 arch/x86/Kconfig                            |   11 +
 arch/x86/entry/vsyscall/vsyscall_64.c       |    2 +-
 arch/x86/include/asm/cpufeatures.h          |    1 +
 arch/x86/include/asm/mmu.h                  |   18 +-
 arch/x86/include/asm/mmu_context.h          |   49 +-
 arch/x86/include/asm/processor-flags.h      |    2 +
 arch/x86/include/asm/tlbflush.h             |   48 +-
 arch/x86/include/asm/uaccess.h              |   35 +-
 arch/x86/include/uapi/asm/prctl.h           |    5 +
 arch/x86/include/uapi/asm/processor-flags.h |    6 +
 arch/x86/kernel/process.c                   |    6 +
 arch/x86/kernel/process_64.c                |   70 +-
 arch/x86/kernel/traps.c                     |    6 +-
 arch/x86/lib/getuser.S                      |   83 +-
 arch/x86/lib/putuser.S                      |   54 +-
 arch/x86/mm/init.c                          |    5 +
 arch/x86/mm/tlb.c                           |   53 +-
 drivers/iommu/iommu-sva.c                   |    8 +-
 drivers/vfio/vfio_iommu_type1.c             |    2 +-
 fs/proc/array.c                             |    6 +
 fs/proc/task_mmu.c                          |    9 +-
 include/linux/ioasid.h                      |    9 -
 include/linux/mm.h                          |   11 -
 include/linux/mmu_context.h                 |   14 +
 include/linux/sched/mm.h                    |    8 +-
 include/linux/uaccess.h                     |   22 +
 mm/debug.c                                  |    1 +
 mm/gup.c                                    |    4 +-
 mm/madvise.c                                |    5 +-
 mm/migrate.c                                |   11 +-
 tools/testing/selftests/x86/Makefile        |    2 +-
 tools/testing/selftests/x86/lam.c           | 1241 +++++++++++++++++++
 35 files changed, 1673 insertions(+), 148 deletions(-)
 create mode 100644 tools/testing/selftests/x86/lam.c

Comments

Peter Zijlstra Jan. 18, 2023, 4:49 p.m. UTC | #1
On Wed, Jan 11, 2023 at 03:37:19PM +0300, Kirill A. Shutemov wrote:

> Kirill A. Shutemov (12):
>   x86/mm: Rework address range check in get_user() and put_user()
>   x86: Allow atomic MM_CONTEXT flags setting
>   x86: CPUID and CR3/CR4 flags for Linear Address Masking
>   x86/mm: Handle LAM on context switch
>   mm: Introduce untagged_addr_remote()
>   x86/uaccess: Provide untagged_addr() and remove tags before address
>     check
>   x86/mm: Provide arch_prctl() interface for LAM
>   x86/mm: Reduce untagged_addr() overhead until the first LAM user
>   mm: Expose untagging mask in /proc/$PID/status
>   iommu/sva: Replace pasid_valid() helper with mm_valid_pasid()
>   x86/mm/iommu/sva: Make LAM and SVA mutually exclusive

+- the static_branch() thing,

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>