From patchwork Mon Jan 30 03:24:07 2023
X-Patchwork-Submitter: Zhu Yanjun
X-Patchwork-Id: 13120356
X-Patchwork-Delegate: jgg@ziepe.ca
From: Zhu Yanjun
To: mustafa.ismail@intel.com, shiraz.saleem@intel.com, jgg@ziepe.ca,
	leon@kernel.org, linux-rdma@vger.kernel.org
Cc: Zhu Yanjun
Subject: [PATCHv2 for-next 1/1] RDMA/irdma: Add support for dmabuf pin memory regions
Date: Mon, 30 Jan 2023 11:24:07 +0800
Message-Id: <20230130032407.259855-1-yanjun.zhu@intel.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Zhu Yanjun

This is a follow-up to the EFA dmabuf support[1]. The irdma driver does
not currently support on-demand paging (ODP), so habanalabs is used as
the dmabuf exporter and irdma as the importer to allow peer-to-peer
access through libibverbs.

This commit uses ib_umem_dmabuf_get_pinned(), introduced with the EFA
dmabuf support[1]. It allows the driver to get a dmabuf umem which is
pinned and does not require a move_notify callback implementation. The
returned umem is pinned and DMA mapped like standard CPU umems, and is
released through ib_umem_release().

[1] https://lore.kernel.org/lkml/20211007114018.GD2688930@ziepe.ca/t/

Signed-off-by: Zhu Yanjun
---
V1->V2: Thanks to Shiraz Saleem for the many good suggestions. This
	commit is now built on the shared helper functions from the
	refactored irdma_reg_user_mr.
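For illustration only (not part of this patch): a minimal userspace
sketch of how a pinned dmabuf MR would be registered through the
libibverbs ibv_reg_dmabuf_mr() entry point, which lands in the
driver's .reg_user_mr_dmabuf hook added below. It assumes a dmabuf fd
already exported by the peer device; get_exported_dmabuf_fd() is a
hypothetical placeholder and the length/access flags are example
values.

#include <stdio.h>
#include <infiniband/verbs.h>

/*
 * Placeholder: a real application would obtain this fd from the
 * exporting device (e.g. habanalabs), not from a stub.
 */
static int get_exported_dmabuf_fd(void)
{
	return -1;
}

int main(void)
{
	struct ibv_device **dev_list = ibv_get_device_list(NULL);
	struct ibv_context *ctx;
	struct ibv_pd *pd;
	struct ibv_mr *mr;
	size_t len = 4096;	/* example length */

	if (!dev_list || !dev_list[0])
		return 1;

	/* Assume the first device is the irdma importer. */
	ctx = ibv_open_device(dev_list[0]);
	if (!ctx)
		return 1;

	pd = ibv_alloc_pd(ctx);
	if (!pd)
		return 1;

	/* offset 0 into the dmabuf, iova 0, fd from the exporter */
	mr = ibv_reg_dmabuf_mr(pd, 0, len, 0, get_exported_dmabuf_fd(),
			       IBV_ACCESS_LOCAL_WRITE |
			       IBV_ACCESS_REMOTE_READ |
			       IBV_ACCESS_REMOTE_WRITE);
	if (!mr) {
		perror("ibv_reg_dmabuf_mr");
	} else {
		/* ... post work requests using mr->lkey / mr->rkey ... */
		ibv_dereg_mr(mr);
	}

	ibv_dealloc_pd(pd);
	ibv_close_device(ctx);
	ibv_free_device_list(dev_list);
	return 0;
}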
---
 drivers/infiniband/hw/irdma/verbs.c | 43 +++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
index 6982f38596c8..a638861689c2 100644
--- a/drivers/infiniband/hw/irdma/verbs.c
+++ b/drivers/infiniband/hw/irdma/verbs.c
@@ -2977,6 +2977,48 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
 	return ERR_PTR(err);
 }
 
+static struct ib_mr *irdma_reg_user_mr_dmabuf(struct ib_pd *pd, u64 start,
+					      u64 len, u64 virt,
+					      int fd, int access,
+					      struct ib_udata *udata)
+{
+	struct irdma_device *iwdev = to_iwdev(pd->device);
+	struct ib_umem_dmabuf *umem_dmabuf = NULL;
+	struct irdma_mr *iwmr = NULL;
+	int err;
+
+	if (len > iwdev->rf->sc_dev.hw_attrs.max_mr_size)
+		return ERR_PTR(-EINVAL);
+
+	if (udata->inlen < IRDMA_MEM_REG_MIN_REQ_LEN)
+		return ERR_PTR(-EINVAL);
+
+	umem_dmabuf = ib_umem_dmabuf_get_pinned(pd->device, start, len, fd, access);
+	if (IS_ERR(umem_dmabuf)) {
+		err = PTR_ERR(umem_dmabuf);
+		ibdev_dbg(&iwdev->ibdev, "Failed to get dmabuf umem[%d]\n", err);
+		return ERR_PTR(err);
+	}
+
+	iwmr = irdma_alloc_iwmr(&umem_dmabuf->umem, pd, virt, IRDMA_MEMREG_TYPE_MEM);
+	if (IS_ERR(iwmr)) {
+		ib_umem_release(&umem_dmabuf->umem);
+		return (struct ib_mr *)iwmr;
+	}
+
+	err = irdma_reg_user_mr_type_mem(iwmr, access);
+	if (err)
+		goto error;
+
+	return &iwmr->ibmr;
+
+error:
+	irdma_free_iwmr(iwmr);
+	ib_umem_release(&umem_dmabuf->umem);
+
+	return ERR_PTR(err);
+}
+
 /**
  * irdma_reg_phys_mr - register kernel physical memory
  * @pd: ibpd pointer
@@ -4483,6 +4525,7 @@ static const struct ib_device_ops irdma_dev_ops = {
 	.query_port = irdma_query_port,
 	.query_qp = irdma_query_qp,
 	.reg_user_mr = irdma_reg_user_mr,
+	.reg_user_mr_dmabuf = irdma_reg_user_mr_dmabuf,
 	.req_notify_cq = irdma_req_notify_cq,
 	.resize_cq = irdma_resize_cq,
 	INIT_RDMA_OBJ_SIZE(ib_pd, irdma_pd, ibpd),