
[v1,3/4] IB/core: P2P DMA for device private pages

Message ID 20241015152348.3055360-4-ymaman@nvidia.com
State RFC
Series GPU Direct RDMA (P2P DMA) for Device Private Pages

Commit Message

Yonatan Maman Oct. 15, 2024, 3:23 p.m. UTC
From: Yonatan Maman <Ymaman@Nvidia.com>

Request Peer-to-Peer (P2P) DMA mappings when calling hmm_range_fault(),
using the capabilities introduced in mm/hmm. Setting
range.default_flags to HMM_PFN_REQ_FAULT | HMM_PFN_REQ_ALLOW_P2P makes
HMM try to establish P2P DMA mappings for device private pages instead
of faulting them back to system memory.

Using P2P DMA avoids the overhead of migrating data between the device
(e.g., a GPU) and system memory, which benefits GPU-centric
applications that use RDMA together with device private pages.
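
For illustration only (not part of the patch), below is a minimal
sketch of how a caller could request such P2P-capable mappings through
hmm_range_fault(). HMM_PFN_REQ_ALLOW_P2P is the request flag added
earlier in this series; the helper name and parameters are invented for
the example, and the usual mmu_interval_read_retry()/-EBUSY retry loop
is omitted for brevity:

/*
 * Minimal sketch, not part of the patch: request that HMM allow P2P
 * DMA mappings for device private pages instead of migrating them.
 * map_range_allow_p2p() and its parameters are hypothetical.
 */
#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

static int map_range_allow_p2p(struct mmu_interval_notifier *notifier,
                               unsigned long start, unsigned long end,
                               unsigned long *pfns, bool writable)
{
        struct hmm_range range = {
                .notifier      = notifier,
                .start         = start,
                .end           = end,
                .hmm_pfns      = pfns,
                /*
                 * Fault pages in, but allow HMM to return a P2P DMA
                 * mapping for device private pages rather than
                 * migrating them back to system memory.
                 */
                .default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_ALLOW_P2P,
        };
        int ret;

        if (writable)
                range.default_flags |= HMM_PFN_REQ_WRITE;

        range.notifier_seq = mmu_interval_read_begin(notifier);

        mmap_read_lock(notifier->mm);
        ret = hmm_range_fault(&range);
        mmap_read_unlock(notifier->mm);

        return ret;
}

This mirrors what the hunk below does inside
ib_umem_odp_map_dma_and_lock(): the existing fault/write-permission
logic is unchanged, and only the ALLOW_P2P bit is added.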

Signed-off-by: Yonatan Maman <Ymaman@Nvidia.com>
Reviewed-by: Gal Shalom <GalShalom@Nvidia.com>
---
 drivers/infiniband/core/umem_odp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Patch

diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index e9fa22d31c23..1f6498d26df4 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -381,7 +381,7 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt,
 	pfn_start_idx = (range.start - ib_umem_start(umem_odp)) >> PAGE_SHIFT;
 	num_pfns = (range.end - range.start) >> PAGE_SHIFT;
 	if (fault) {
-		range.default_flags = HMM_PFN_REQ_FAULT;
+		range.default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_ALLOW_P2P;
 
 		if (access_mask & ODP_WRITE_ALLOWED_BIT)
 			range.default_flags |= HMM_PFN_REQ_WRITE;