[0/4] Invalidate secondary IOMMU TLB on permission upgrade

Message ID cover.b4454f7f3d0afbfe1965e8026823cd50a42954b4.1689666760.git-series.apopple@nvidia.com (mailing list archive)

Alistair Popple July 18, 2023, 7:56 a.m. UTC
The main change is to move secondary TLB invalidation mmu notifier
callbacks into the architecture specific TLB flushing functions. This
makes secondary TLB invalidation mostly match CPU invalidation while
still allowing efficient range based invalidations based on the
existing TLB batching code.

==========
Background
==========

The arm64 architecture specifies that TLB permission bits may be cached,
and that the TLB must therefore be invalidated on permission upgrade. For
the CPU this currently happens in the architecture-specific
ptep_set_access_flags() routine.

Secondary TLBs, such as those implemented by the ARM SMMU IOMMU, follow
the CPU architecture specification and may also cache permission bits,
so they require the same TLB invalidations. This may be achieved in one
of two ways.

Some SMMU implementations support broadcast TLB maintenance (BTM). This
snoops CPU TLB invalidates and invalidates any secondary TLB at the same
time as the CPU. However, implementations are not required to support
BTM.

Implementations without BTM rely on mmu notifier callbacks to send
explicit TLB invalidation commands to invalidate the SMMU TLB. Therefore
either generic kernel code or architecture-specific code needs to call
the mmu notifier on permission upgrade.

Currently that doesn't happen, so devices fault indefinitely when
writing to a PTE that was previously read-only, because nothing
invalidates the SMMU TLB.

========
Solution
========

To fix this, the series first renames the .invalidate_range() callback
to .arch_invalidate_secondary_tlbs(), as suggested by Jason and Sean, to
make it clear this callback is only used for secondary TLBs. That was
made possible by Sean's series [1] removing KVM's incorrect usage.

Based on feedback from Jason [2], the proposed solution to the bug is
to move the calls to mmu_notifier_arch_invalidate_secondary_tlbs()
closer to the architecture-specific TLB invalidation code. This
ensures the secondary TLB won't miss invalidations, including the
existing invalidation in the arm64 code that deals with permission
upgrade.

Currently only arm64, PowerPC and x86 have IOMMUs with secondary TLBs
requiring software invalidation, so the notifier is only called on
those architectures. It is also not called for invalidation of kernel
mappings, as no secondary IOMMU implementation can access those and
hence it is not required.

[1] - https://lore.kernel.org/all/20230602011518.787006-1-seanjc@google.com/
[2] - https://lore.kernel.org/linux-mm/ZJMR5bw8l+BbzdJ7@ziepe.ca/

Alistair Popple (4):
  mm_notifiers: Rename invalidate_range notifier
  arm64/smmu: Use TLBI ASID when invalidating entire range
  mmu_notifiers: Call arch_invalidate_secondary_tlbs() when invalidating TLBs
  mmu_notifiers: Don't invalidate secondary TLBs as part of mmu_notifier_invalidate_range_end()

 arch/arm64/include/asm/tlbflush.h               |   5 +-
 arch/powerpc/include/asm/book3s/64/tlbflush.h   |   1 +-
 arch/powerpc/mm/book3s64/radix_hugetlbpage.c    |   1 +-
 arch/powerpc/mm/book3s64/radix_tlb.c            |   6 +-
 arch/x86/mm/tlb.c                               |   3 +-
 drivers/iommu/amd/iommu_v2.c                    |  10 +-
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c |  29 +++--
 drivers/iommu/intel/svm.c                       |   8 +-
 drivers/misc/ocxl/link.c                        |   8 +-
 include/asm-generic/tlb.h                       |   1 +-
 include/linux/mmu_notifier.h                    | 104 ++++-------------
 kernel/events/uprobes.c                         |   2 +-
 mm/huge_memory.c                                |  29 +----
 mm/hugetlb.c                                    |   8 +-
 mm/memory.c                                     |   8 +-
 mm/migrate_device.c                             |   9 +-
 mm/mmu_notifier.c                               |  47 +++-----
 mm/rmap.c                                       |  40 +-------
 18 files changed, 109 insertions(+), 210 deletions(-)

base-commit: fdf0eaf11452d72945af31804e2a1048ee1b574c

Comments

Tian, Kevin July 19, 2023, 3:14 a.m. UTC | #1
> From: Anshuman Khandual <anshuman.khandual@arm.com>
> Sent: Wednesday, July 19, 2023 11:04 AM
> 
> On 7/18/23 13:26, Alistair Popple wrote:
> > [...]
> > Some SMMU implementations implement broadcast TLB maintenance
> > (BTM). This snoops CPU TLB invalidates and will invalidate any
> > secondary TLB at the same time as the CPU. However implementations are
> > not required to implement BTM.
> 
> So, the implementations with BTM do not even need a MMU notifier callback
> for secondary TLB invalidation purpose ? Perhaps mmu_notifier_register()
> could also be skipped for such cases i.e with ARM_SMMU_FEAT_BTM
> enabled ?
> 

Out of curiosity, how does BTM work with a device TLB? Can the SMMU
translate a broadcast TLB invalidation (based on ASID) into a set of
PCI ATS invalidation requests (based on PCI Requester ID and PASID) in
hardware?

If software intervention is required then that might be the reason why
the mmu notifier cannot be skipped. With BTM enabled it just means the
notifier callback can skip the IOTLB invalidation...