Message ID | 20221116081951.32750-5-lizhijian@fujitsu.com
---|---
State | Superseded
Series | RDMA/rxe: Add RDMA FLUSH operation
On Wed, Nov 16, 2022 at 04:19:45PM +0800, Li Zhijian wrote:
>  int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
>  		     int access, struct rxe_mr *mr)
>  {
> @@ -148,16 +157,25 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
>  	num_buf = 0;
>  	map = mr->map;
>  	if (length > 0) {
> -		buf = map[0]->buf;
> +		bool persistent_access = access & IB_ACCESS_FLUSH_PERSISTENT;
>
> +		buf = map[0]->buf;
>  		for_each_sgtable_page (&umem->sgt_append.sgt, &sg_iter, 0) {
> +			struct page *pg = sg_page_iter_page(&sg_iter);
> +
> +			if (persistent_access && !is_pmem_page(pg)) {
> +				pr_debug("Unable to register persistent access to non-pmem device\n");
> +				err = -EINVAL;
> +				goto err_release_umem;

This should use rxe_dbg_mr()

Jason
On 22/11/2022 22:46, Jason Gunthorpe wrote:
> On Wed, Nov 16, 2022 at 04:19:45PM +0800, Li Zhijian wrote:
>>  int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
>>  		     int access, struct rxe_mr *mr)
>>  {
>> @@ -148,16 +157,25 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
>>  	num_buf = 0;
>>  	map = mr->map;
>>  	if (length > 0) {
>> -		buf = map[0]->buf;
>> +		bool persistent_access = access & IB_ACCESS_FLUSH_PERSISTENT;
>>
>> +		buf = map[0]->buf;
>>  		for_each_sgtable_page (&umem->sgt_append.sgt, &sg_iter, 0) {
>> +			struct page *pg = sg_page_iter_page(&sg_iter);
>> +
>> +			if (persistent_access && !is_pmem_page(pg)) {
>> +				pr_debug("Unable to register persistent access to non-pmem device\n");
>> +				err = -EINVAL;
>> +				goto err_release_umem;
>
> This should use rxe_dbg_mr()

Good catch, thanks
Zhijian

> Jason
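For reference, a minimal sketch of the hunk reworked per this review to use rxe_dbg_mr(), which like the other rxe_dbg_*() helpers takes the object being logged as its first argument; the message text is carried over from the patch and the surrounding context is assumed unchanged:

			if (persistent_access && !is_pmem_page(pg)) {
				rxe_dbg_mr(mr, "Unable to register persistent access to non-pmem device\n");
				err = -EINVAL;
				goto err_release_umem;
			}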
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index bc081002bddc..fd423c015be0 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -111,6 +111,15 @@ void rxe_mr_init_dma(int access, struct rxe_mr *mr)
 	mr->ibmr.type = IB_MR_TYPE_DMA;
 }
 
+static bool is_pmem_page(struct page *pg)
+{
+	unsigned long paddr = page_to_phys(pg);
+
+	return REGION_INTERSECTS ==
+	       region_intersects(paddr, PAGE_SIZE, IORESOURCE_MEM,
+				 IORES_DESC_PERSISTENT_MEMORY);
+}
+
 int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
 		     int access, struct rxe_mr *mr)
 {
@@ -148,16 +157,25 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
 	num_buf = 0;
 	map = mr->map;
 	if (length > 0) {
-		buf = map[0]->buf;
+		bool persistent_access = access & IB_ACCESS_FLUSH_PERSISTENT;
 
+		buf = map[0]->buf;
 		for_each_sgtable_page (&umem->sgt_append.sgt, &sg_iter, 0) {
+			struct page *pg = sg_page_iter_page(&sg_iter);
+
+			if (persistent_access && !is_pmem_page(pg)) {
+				pr_debug("Unable to register persistent access to non-pmem device\n");
+				err = -EINVAL;
+				goto err_release_umem;
+			}
+
 			if (num_buf >= RXE_BUF_PER_MAP) {
 				map++;
 				buf = map[0]->buf;
 				num_buf = 0;
 			}
 
-			vaddr = page_address(sg_page_iter_page(&sg_iter));
+			vaddr = page_address(pg);
 			if (!vaddr) {
 				pr_warn("%s: Unable to get virtual address\n",
 					__func__);
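The pmem test hinges on region_intersects(), which classifies a physical range against the iomem resource tree: REGION_INTERSECTS means the range overlaps the requested flags/descriptor and no other defined resource, REGION_DISJOINT means no overlap at all, and REGION_MIXED means the range also touches another resource type. A hedged sketch of a whole-range variant follows; is_pmem_range() is a hypothetical helper, not part of this patch, which deliberately checks page by page inside the existing scatterlist walk instead of making a second pass over the MR:

	/* Hypothetical helper: classify an arbitrary physical range in
	 * one call. REGION_INTERSECTS means the whole range is backed
	 * only by persistent-memory resources; REGION_MIXED (partially
	 * pmem) and REGION_DISJOINT (not pmem at all) both fail.
	 */
	static bool is_pmem_range(resource_size_t paddr, size_t len)
	{
		return region_intersects(paddr, len, IORESOURCE_MEM,
					 IORES_DESC_PERSISTENT_MEMORY) ==
		       REGION_INTERSECTS;
	}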
A memory region can support at most two flush access flags: IB_ACCESS_FLUSH_PERSISTENT and IB_ACCESS_FLUSH_GLOBAL. However, we only allow the user to register the persistent flush flag on a pmem MR, since only pmem has the ability to persist data across power cycles. Registering a persistent access flag on a non-pmem MR will therefore be rejected.

CC: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
---
V6: Minimize pmem checking side effect # Jason
V5: make sure the whole MR is pmem # Bob
V4: set is_pmem more simple
V2: new scheme check is_pmem # Dan
    update commit message, get rid of confusing ib_check_flush_access_flags() # Tom
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)
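For completeness, a hedged userspace sketch of what this policy means for a verbs consumer, assuming rdma-core exposes a matching IBV_ACCESS_FLUSH_PERSISTENT flag for the kernel's IB_ACCESS_FLUSH_PERSISTENT; with this patch applied, the registration is expected to fail with EINVAL when buf is not pmem-backed:

	#include <errno.h>
	#include <stdio.h>
	#include <string.h>
	#include <infiniband/verbs.h>

	/* Try to register buf with persistent-flush access. On an rxe
	 * device with this patch, this should only succeed when buf
	 * lies entirely on persistent memory (e.g. a DAX-mapped file).
	 */
	static struct ibv_mr *reg_persistent_mr(struct ibv_pd *pd,
						void *buf, size_t len)
	{
		struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
					       IBV_ACCESS_LOCAL_WRITE |
					       IBV_ACCESS_REMOTE_WRITE |
					       IBV_ACCESS_FLUSH_PERSISTENT);

		if (!mr)
			fprintf(stderr, "ibv_reg_mr: %s (expected for non-pmem)\n",
				strerror(errno));
		return mr;
	}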