[PATCH 0/5] mm/migrate: avoid device private invalidations

Message ID: 20200706222347.32290-1-rcampbell@nvidia.com
Ralph Campbell July 6, 2020, 10:23 p.m. UTC
The goal of this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new MMU notifier
invalidation event type and use it in the device driver to skip
invalidation callbacks coming from migrate_vma_setup(). The device
driver is then also expected to handle device MMU invalidations as part
of the migrate_vma_setup(), migrate_vma_pages(), migrate_vma_finalize()
sequence. Note that this is opt-in: a device driver can simply
invalidate its MMU in the MMU notifier callback and not handle MMU
invalidations in the migration sequence.
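
As an illustration of the opt-in, here is a minimal sketch of the check
a driver's interval notifier callback would make so that its own
migrations do not trigger a device TLB flush. The event and owner field
names (MMU_NOTIFY_MIGRATE, migrate_pgmap_owner) are assumptions here
(patch 3 defines the actual type), and the example_* identifiers are
hypothetical:

static bool example_interval_invalidate(struct mmu_interval_notifier *mni,
				const struct mmu_notifier_range *range,
				unsigned long cur_seq)
{
	struct example_device *edev =
		container_of(mni, struct example_device, notifier);

	/*
	 * Skip invalidations triggered by this device's own migrations;
	 * the driver invalidates the device MMU itself as part of the
	 * migrate_vma_*() sequence instead.
	 */
	if (range->event == MMU_NOTIFY_MIGRATE &&
	    range->migrate_pgmap_owner == edev)
		return true;

	if (mmu_notifier_range_blockable(range))
		mutex_lock(&edev->mutex);
	else if (!mutex_trylock(&edev->mutex))
		return false;

	mmu_interval_set_seq(mni, cur_seq);
	example_device_tlb_invalidate(edev, range->start, range->end);
	mutex_unlock(&edev->mutex);
	return true;
}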

This series is based on linux-5.8.0-rc4 and the patches I sent for
("mm/hmm/nouveau: add PMD system memory mapping")
https://lore.kernel.org/linux-mm/20200701225352.9649-1-rcampbell@nvidia.com
There are no logical dependencies between the two series, but there
would be merge conflicts, which could be resolved by applying this
series before the other one.

Also, this series removes the need for the following two patches I sent:
("mm: fix migrate_vma_setup() src_owner and normal pages")
https://lore.kernel.org/linux-mm/20200622222008.9971-1-rcampbell@nvidia.com
("nouveau: fix mixed normal and device private page migration")
https://lore.kernel.org/lkml/20200622233854.10889-3-rcampbell@nvidia.com

Ralph Campbell (5):
  nouveau: fix storing invalid ptes
  mm/migrate: add a direction parameter to migrate_vma
  mm/notifier: add migration invalidation type
  nouveau/svm: use the new migration invalidation
  mm/hmm/test: use the new migration invalidation

 arch/powerpc/kvm/book3s_hv_uvmem.c            |  2 ++
 drivers/gpu/drm/nouveau/nouveau_dmem.c        | 13 ++++++--
 drivers/gpu/drm/nouveau/nouveau_svm.c         | 10 +++++-
 drivers/gpu/drm/nouveau/nouveau_svm.h         |  1 +
 .../drm/nouveau/nvkm/subdev/mmu/vmmgp100.c    | 13 +++++---
 include/linux/migrate.h                       | 12 +++++--
 include/linux/mmu_notifier.h                  |  7 ++++
 lib/test_hmm.c                                | 33 +++++++++++--------
 mm/migrate.c                                  | 13 ++++++--
 9 files changed, 77 insertions(+), 27 deletions(-)
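
For context, below is a rough sketch of how a driver would drive the
migration itself under this scheme, with the device TLB flushed
explicitly between migrate_vma_setup() and migrate_vma_pages(). The
.dir field stands in for the direction parameter added by patch 2 (its
real name and values may differ), and the example_* helpers are
hypothetical:

static int example_migrate_to_device(struct example_device *edev,
				     struct vm_area_struct *vma,
				     unsigned long start, unsigned long end)
{
	/* For brevity, assumes end - start covers at most 64 pages. */
	unsigned long src_pfns[64], dst_pfns[64];
	struct migrate_vma args = {
		.vma	   = vma,
		.start	   = start,
		.end	   = end,
		.src	   = src_pfns,
		.dst	   = dst_pfns,
		.src_owner = edev,
		/* Direction added by patch 2; the name is a guess. */
		.dir	   = MIGRATE_VMA_FROM_SYSTEM,
	};
	int ret;

	ret = migrate_vma_setup(&args);
	if (ret)
		return ret;

	/* Allocate device pages and copy data, filling args.dst. */
	example_copy_to_device(edev, &args);

	/*
	 * The MMU_NOTIFY_MIGRATE callback was skipped above, so the
	 * device MMU must be invalidated here before the new entries
	 * become visible.
	 */
	example_device_tlb_invalidate(edev, start, end);

	migrate_vma_pages(&args);
	migrate_vma_finalize(&args);
	return 0;
}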

Comments

Bharata B Rao July 8, 2020, 1:18 p.m. UTC
On Mon, Jul 06, 2020 at 03:23:42PM -0700, Ralph Campbell wrote:
> The goal of this series is to avoid device private memory TLB
> invalidations when migrating a range of addresses from system
> memory to device private memory and some of those pages have already
> been migrated. The approach taken is to introduce a new MMU notifier
> invalidation event type and use it in the device driver to skip
> invalidation callbacks coming from migrate_vma_setup(). The device
> driver is then also expected to handle device MMU invalidations as part
> of the migrate_vma_setup(), migrate_vma_pages(), migrate_vma_finalize()
> sequence. Note that this is opt-in: a device driver can simply
> invalidate its MMU in the MMU notifier callback and not handle MMU
> invalidations in the migration sequence.

In the kvmppc secure guest use case:

1. We ensure that we don't issue migrate_vma() calls for pages that have
already been migrated to the device side (which for us is actually secure
memory managed by the Ultravisor firmware).

2. The page table mappings on the device side (secure memory) are managed
transparently to the kernel by the Ultravisor firmware.

Hence I assume that no specific action is required for the kvmppc
use case due to this patchset. In fact, we never registered for these
MMU notifier events.

Regards,
Bharata.