From patchwork Tue Oct 12 12:09:01 2021
X-Patchwork-Submitter: Gal Pressman
X-Patchwork-Id: 12552345
X-Patchwork-Delegate: jgg@ziepe.ca
From: Gal Pressman
To: Sumit Semwal, Christian König, Doug Ledford, Jason Gunthorpe
CC: Oded Gabbay, Tomer Tayar, Yossi Leybovich, Alexander Matushevsky,
 Leon Romanovsky, Jianxin Xiong, Firas Jahjah, Gal Pressman
Subject: [PATCH for-next 1/3] dma-buf: Fix pin callback comment
Date: Tue, 12 Oct 2021 15:09:01 +0300
Message-ID: <20211012120903.96933-2-galpress@amazon.com>
In-Reply-To: <20211012120903.96933-1-galpress@amazon.com>
References: <20211012120903.96933-1-galpress@amazon.com>
X-Mailing-List: linux-rdma@vger.kernel.org

The pin callback does not necessarily have to move the memory to system
memory; remove that sentence from the comment.
Signed-off-by: Gal Pressman
Reviewed-by: Christian König
---
 include/linux/dma-buf.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index efdc56b9d95f..225e09caeb98 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -86,8 +86,8 @@ struct dma_buf_ops {
	 * @pin:
	 *
	 * This is called by dma_buf_pin() and lets the exporter know that the
-	 * DMA-buf can't be moved any more. The exporter should pin the buffer
-	 * into system memory to make sure it is generally accessible by other
+	 * DMA-buf can't be moved any more. Ideally, the exporter should
+	 * pin the buffer so that it is generally accessible by all
	 * devices.
	 *
	 * This is called with the &dmabuf.resv object locked and is mutual
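The clarified semantics mainly matter to exporters whose buffers live in
device-local memory: @pin only has to freeze the buffer's placement, it
need not migrate it to system memory. A minimal exporter-side sketch of
that contract follows; my_buf, pin_count, and the ops table are
hypothetical illustrations, not part of this series.

#include <linux/dma-buf.h>

struct my_buf {
	unsigned int pin_count;	/* placement is frozen while > 0 */
};

static int my_buf_pin(struct dma_buf_attachment *attach)
{
	struct my_buf *buf = attach->dmabuf->priv;

	/*
	 * Called with dmabuf->resv locked, so a plain increment is safe.
	 * A buffer in device memory (e.g. VRAM) that peers can reach may
	 * simply stay where it is; no migration is required.
	 */
	buf->pin_count++;
	return 0;
}

static void my_buf_unpin(struct dma_buf_attachment *attach)
{
	struct my_buf *buf = attach->dmabuf->priv;

	buf->pin_count--;
}

static const struct dma_buf_ops my_buf_dma_buf_ops = {
	.pin = my_buf_pin,
	.unpin = my_buf_unpin,
	/* .map_dma_buf, .unmap_dma_buf, .release, ... omitted */
};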
From patchwork Tue Oct 12 12:09:02 2021
X-Patchwork-Submitter: Gal Pressman
X-Patchwork-Id: 12552349
X-Patchwork-Delegate: jgg@ziepe.ca
From: Gal Pressman
To: Sumit Semwal, Christian König, Doug Ledford, Jason Gunthorpe
CC: Oded Gabbay, Tomer Tayar, Yossi Leybovich, Alexander Matushevsky,
 Leon Romanovsky, Jianxin Xiong, Firas Jahjah, Gal Pressman
Subject: [PATCH for-next 2/3] RDMA/umem: Allow pinned dmabuf umem usage
Date: Tue, 12 Oct 2021 15:09:02 +0300
Message-ID: <20211012120903.96933-3-galpress@amazon.com>
In-Reply-To: <20211012120903.96933-1-galpress@amazon.com>
References: <20211012120903.96933-1-galpress@amazon.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Introduce ib_umem_dmabuf_get_pinned(), which allows the driver to get a
dmabuf umem that is pinned and does not require a move_notify callback
implementation.

The returned umem is pinned and DMA mapped like standard CPU umems, and
is released through ib_umem_release() (including unpinning and
unmapping).

Signed-off-by: Gal Pressman
---
 drivers/infiniband/core/umem_dmabuf.c | 51 +++++++++++++++++++++++++++
 include/rdma/ib_umem.h                | 12 +++++++
 2 files changed, 63 insertions(+)

diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c
index c6e875619fac..4fd6abc4dd17 100644
--- a/drivers/infiniband/core/umem_dmabuf.c
+++ b/drivers/infiniband/core/umem_dmabuf.c
@@ -164,12 +164,63 @@ struct ib_umem_dmabuf *ib_umem_dmabuf_get(struct ib_device *device,
 }
 EXPORT_SYMBOL(ib_umem_dmabuf_get);
 
+static void
+ib_umem_dmabuf_unsupported_move_notify(struct dma_buf_attachment *attach)
+{
+	struct ib_umem_dmabuf *umem_dmabuf = attach->importer_priv;
+
+	ibdev_warn_ratelimited(umem_dmabuf->umem.ibdev,
+			       "Invalidate callback should not be called when memory is pinned\n");
+}
+
+static struct dma_buf_attach_ops ib_umem_dmabuf_attach_pinned_ops = {
+	.allow_peer2peer = true,
+	.move_notify = ib_umem_dmabuf_unsupported_move_notify,
+};
+
+struct ib_umem_dmabuf *ib_umem_dmabuf_get_pinned(struct ib_device *device,
+						 unsigned long offset,
+						 size_t size, int fd,
+						 int access)
+{
+	struct ib_umem_dmabuf *umem_dmabuf;
+	int err;
+
+	umem_dmabuf = ib_umem_dmabuf_get(device, offset, size, fd, access,
+					 &ib_umem_dmabuf_attach_pinned_ops);
+	if (IS_ERR(umem_dmabuf))
+		return umem_dmabuf;
+
+	dma_resv_lock(umem_dmabuf->attach->dmabuf->resv, NULL);
+	err = dma_buf_pin(umem_dmabuf->attach);
+	if (err)
+		goto err_release;
+	umem_dmabuf->pinned = 1;
+
+	err = ib_umem_dmabuf_map_pages(umem_dmabuf);
+	if (err)
+		goto err_unpin;
+	dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv);
+
+	return umem_dmabuf;
+
+err_unpin:
+	dma_buf_unpin(umem_dmabuf->attach);
+err_release:
+	dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv);
+	ib_umem_release(&umem_dmabuf->umem);
+	return ERR_PTR(err);
+}
+EXPORT_SYMBOL(ib_umem_dmabuf_get_pinned);
+
 void ib_umem_dmabuf_release(struct ib_umem_dmabuf *umem_dmabuf)
 {
 	struct dma_buf *dmabuf = umem_dmabuf->attach->dmabuf;
 
 	dma_resv_lock(dmabuf->resv, NULL);
 	ib_umem_dmabuf_unmap_pages(umem_dmabuf);
+	if (umem_dmabuf->pinned)
+		dma_buf_unpin(umem_dmabuf->attach);
 	dma_resv_unlock(dmabuf->resv);
 
 	dma_buf_detach(dmabuf, umem_dmabuf->attach);
diff --git a/include/rdma/ib_umem.h b/include/rdma/ib_umem.h
index 676c57f5ca80..db9b73e765c3 100644
--- a/include/rdma/ib_umem.h
+++ b/include/rdma/ib_umem.h
@@ -40,6 +40,7 @@ struct ib_umem_dmabuf {
 	unsigned long first_sg_offset;
 	unsigned long last_sg_trim;
 	void *private;
+	u8 pinned : 1;
 };
 
 static inline struct ib_umem_dmabuf *to_ib_umem_dmabuf(struct ib_umem *umem)
@@ -140,6 +141,10 @@ struct ib_umem_dmabuf *ib_umem_dmabuf_get(struct ib_device *device,
 					  unsigned long offset, size_t size,
 					  int fd, int access,
 					  const struct dma_buf_attach_ops *ops);
+struct ib_umem_dmabuf *ib_umem_dmabuf_get_pinned(struct ib_device *device,
+						 unsigned long offset,
+						 size_t size, int fd,
+						 int access);
 int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf);
 void ib_umem_dmabuf_unmap_pages(struct ib_umem_dmabuf *umem_dmabuf);
 void ib_umem_dmabuf_release(struct ib_umem_dmabuf *umem_dmabuf);
@@ -180,6 +185,13 @@ struct ib_umem_dmabuf *ib_umem_dmabuf_get(struct ib_device *device,
 {
 	return ERR_PTR(-EOPNOTSUPP);
 }
+static inline struct ib_umem_dmabuf *
+ib_umem_dmabuf_get_pinned(struct ib_device *device,
+			  unsigned long offset,
+			  size_t size, int fd, int access)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
 static inline int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf)
 {
 	return -EOPNOTSUPP;
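For drivers, the new helper collapses attach, pin, and DMA-map into one
call, and ib_umem_release() undoes all three. A minimal caller sketch,
assuming a hypothetical my_mr structure (my_reg_dmabuf_mr and
my_dereg_dmabuf_mr are illustrative names, not part of the patch):

#include <linux/err.h>
#include <rdma/ib_umem.h>

struct my_mr {
	struct ib_umem *umem;
};

static int my_reg_dmabuf_mr(struct ib_device *ibdev, struct my_mr *mr,
			    u64 start, u64 length, int fd, int access_flags)
{
	struct ib_umem_dmabuf *umem_dmabuf;

	/* Attaches, pins and DMA-maps the dmabuf in a single call;
	 * no move_notify upcall will ever be delivered for this umem.
	 */
	umem_dmabuf = ib_umem_dmabuf_get_pinned(ibdev, start, length, fd,
						access_flags);
	if (IS_ERR(umem_dmabuf))
		return PTR_ERR(umem_dmabuf);

	mr->umem = &umem_dmabuf->umem;
	/* ... program the HW translation tables from mr->umem ... */
	return 0;
}

static void my_dereg_dmabuf_mr(struct my_mr *mr)
{
	/* Unmaps, unpins and detaches the dmabuf as well. */
	ib_umem_release(mr->umem);
}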
From patchwork Tue Oct 12 12:09:03 2021
X-Patchwork-Submitter: Gal Pressman
X-Patchwork-Id: 12552347
X-Patchwork-Delegate: jgg@ziepe.ca
From: Gal Pressman
To: Sumit Semwal, Christian König, Doug Ledford, Jason Gunthorpe
CC: Oded Gabbay, Tomer Tayar, Yossi Leybovich, Alexander Matushevsky,
 Leon Romanovsky, Jianxin Xiong, Firas Jahjah, Gal Pressman
Subject: [PATCH for-next 3/3] RDMA/efa: Add support for dmabuf memory regions
Date: Tue, 12 Oct 2021 15:09:03 +0300
Message-ID: <20211012120903.96933-4-galpress@amazon.com>
In-Reply-To: <20211012120903.96933-1-galpress@amazon.com>
References: <20211012120903.96933-1-galpress@amazon.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Implement a dmabuf importer for the EFA driver. As ODP is not
supported, pinned dmabufs are used to prevent the move_notify callback
from being called.

Signed-off-by: Gal Pressman
---
 drivers/infiniband/hw/efa/efa.h       |   4 +
 drivers/infiniband/hw/efa/efa_main.c  |   1 +
 drivers/infiniband/hw/efa/efa_verbs.c | 127 +++++++++++++++++++-------
 3 files changed, 101 insertions(+), 31 deletions(-)

diff --git a/drivers/infiniband/hw/efa/efa.h b/drivers/infiniband/hw/efa/efa.h
index 2b8ca099b381..407d7c4baa16 100644
--- a/drivers/infiniband/hw/efa/efa.h
+++ b/drivers/infiniband/hw/efa/efa.h
@@ -141,6 +141,10 @@ int efa_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
 struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
 			 u64 virt_addr, int access_flags,
 			 struct ib_udata *udata);
+struct ib_mr *efa_reg_user_mr_dmabuf(struct ib_pd *ibpd, u64 start,
+				     u64 length, u64 virt_addr,
+				     int fd, int access_flags,
+				     struct ib_udata *udata);
 int efa_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata);
 int efa_get_port_immutable(struct ib_device *ibdev, u32 port_num,
 			   struct ib_port_immutable *immutable);
diff --git a/drivers/infiniband/hw/efa/efa_main.c b/drivers/infiniband/hw/efa/efa_main.c
index 203e6ddcacbc..72cd7d952a07 100644
--- a/drivers/infiniband/hw/efa/efa_main.c
+++ b/drivers/infiniband/hw/efa/efa_main.c
@@ -267,6 +267,7 @@ static const struct ib_device_ops efa_dev_ops = {
 	.query_port = efa_query_port,
 	.query_qp = efa_query_qp,
 	.reg_user_mr = efa_reg_mr,
+	.reg_user_mr_dmabuf = efa_reg_user_mr_dmabuf,
 
 	INIT_RDMA_OBJ_SIZE(ib_ah, efa_ah, ibah),
 	INIT_RDMA_OBJ_SIZE(ib_cq, efa_cq, ibcq),
diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
index be6d3ff0f1be..659f5aab3d9e 100644
--- a/drivers/infiniband/hw/efa/efa_verbs.c
+++ b/drivers/infiniband/hw/efa/efa_verbs.c
@@ -3,6 +3,8 @@
  * Copyright 2018-2020 Amazon.com, Inc. or its affiliates. All rights reserved.
  */
 
+#include <linux/dma-buf.h>
+#include <linux/dma-resv.h>
 #include <linux/vmalloc.h>
 #include <linux/log2.h>
 
@@ -1491,26 +1493,18 @@ static int efa_create_pbl(struct efa_dev *dev,
 	return 0;
 }
 
-struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
-			 u64 virt_addr, int access_flags,
-			 struct ib_udata *udata)
+static struct efa_mr *efa_alloc_mr(struct ib_pd *ibpd, int access_flags,
+				   struct ib_udata *udata)
 {
 	struct efa_dev *dev = to_edev(ibpd->device);
-	struct efa_com_reg_mr_params params = {};
-	struct efa_com_reg_mr_result result = {};
-	struct pbl_context pbl;
 	int supp_access_flags;
-	unsigned int pg_sz;
 	struct efa_mr *mr;
-	int inline_size;
-	int err;
 
 	if (udata && udata->inlen &&
 	    !ib_is_udata_cleared(udata, 0, sizeof(udata->inlen))) {
 		ibdev_dbg(&dev->ibdev,
 			  "Incompatible ABI params, udata not cleared\n");
-		err = -EINVAL;
-		goto err_out;
+		return ERR_PTR(-EINVAL);
 	}
 
 	supp_access_flags =
@@ -1522,23 +1516,26 @@ struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
 		ibdev_dbg(&dev->ibdev,
 			  "Unsupported access flags[%#x], supported[%#x]\n",
 			  access_flags, supp_access_flags);
-		err = -EOPNOTSUPP;
-		goto err_out;
+		return ERR_PTR(-EOPNOTSUPP);
 	}
 
 	mr = kzalloc(sizeof(*mr), GFP_KERNEL);
-	if (!mr) {
-		err = -ENOMEM;
-		goto err_out;
-	}
+	if (!mr)
+		return ERR_PTR(-ENOMEM);
 
-	mr->umem = ib_umem_get(ibpd->device, start, length, access_flags);
-	if (IS_ERR(mr->umem)) {
-		err = PTR_ERR(mr->umem);
-		ibdev_dbg(&dev->ibdev,
-			  "Failed to pin and map user space memory[%d]\n", err);
-		goto err_free;
-	}
+	return mr;
+}
+
+static int efa_register_mr(struct ib_pd *ibpd, struct efa_mr *mr, u64 start,
+			   u64 length, u64 virt_addr, int access_flags)
+{
+	struct efa_dev *dev = to_edev(ibpd->device);
+	struct efa_com_reg_mr_params params = {};
+	struct efa_com_reg_mr_result result = {};
+	struct pbl_context pbl;
+	unsigned int pg_sz;
+	int inline_size;
+	int err;
 
 	params.pd = to_epd(ibpd)->pdn;
 	params.iova = virt_addr;
@@ -1549,10 +1546,9 @@ struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
 				       dev->dev_attr.page_size_cap,
 				       virt_addr);
 	if (!pg_sz) {
-		err = -EOPNOTSUPP;
 		ibdev_dbg(&dev->ibdev, "Failed to find a suitable page size in page_size_cap %#llx\n",
 			  dev->dev_attr.page_size_cap);
-		goto err_unmap;
+		return -EOPNOTSUPP;
 	}
 
 	params.page_shift = order_base_2(pg_sz);
@@ -1566,21 +1562,21 @@ struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
 	if (params.page_num <= inline_size) {
 		err = efa_create_inline_pbl(dev, mr, &params);
 		if (err)
-			goto err_unmap;
+			return err;
 
 		err = efa_com_register_mr(&dev->edev, &params, &result);
 		if (err)
-			goto err_unmap;
+			return err;
 	} else {
 		err = efa_create_pbl(dev, &pbl, mr, &params);
 		if (err)
-			goto err_unmap;
+			return err;
 
 		err = efa_com_register_mr(&dev->edev, &params, &result);
 		pbl_destroy(dev, &pbl);
 		if (err)
-			goto err_unmap;
+			return err;
 	}
 
 	mr->ibmr.lkey = result.l_key;
@@ -1588,9 +1584,78 @@ struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
 	mr->ibmr.length = length;
 	ibdev_dbg(&dev->ibdev, "Registered mr[%d]\n", mr->ibmr.lkey);
 
+	return 0;
+}
+
+struct ib_mr *efa_reg_user_mr_dmabuf(struct ib_pd *ibpd, u64 start,
+				     u64 length, u64 virt_addr,
+				     int fd, int access_flags,
+				     struct ib_udata *udata)
+{
+	struct efa_dev *dev = to_edev(ibpd->device);
+	struct ib_umem_dmabuf *umem_dmabuf;
+	struct efa_mr *mr;
+	int err;
+
+	mr = efa_alloc_mr(ibpd, access_flags, udata);
+	if (IS_ERR(mr)) {
+		err = PTR_ERR(mr);
+		goto err_out;
+	}
+
+	umem_dmabuf = ib_umem_dmabuf_get_pinned(ibpd->device, start, length, fd,
+						access_flags);
+	if (IS_ERR(umem_dmabuf)) {
+		err = PTR_ERR(umem_dmabuf);
+		ibdev_dbg(&dev->ibdev, "Failed to get dmabuf umem[%d]\n", err);
+		goto err_free;
+	}
+
+	mr->umem = &umem_dmabuf->umem;
+	err = efa_register_mr(ibpd, mr, start, length, virt_addr, access_flags);
+	if (err)
+		goto err_release;
+
 	return &mr->ibmr;
 
-err_unmap:
+err_release:
+	ib_umem_release(mr->umem);
+err_free:
+	kfree(mr);
+err_out:
+	atomic64_inc(&dev->stats.reg_mr_err);
+	return ERR_PTR(err);
+}
+
+struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
+			 u64 virt_addr, int access_flags,
+			 struct ib_udata *udata)
+{
+	struct efa_dev *dev = to_edev(ibpd->device);
+	struct efa_mr *mr;
+	int err;
+
+	mr = efa_alloc_mr(ibpd, access_flags, udata);
+	if (IS_ERR(mr)) {
+		err = PTR_ERR(mr);
+		goto err_out;
+	}
+
+	mr->umem = ib_umem_get(ibpd->device, start, length, access_flags);
+	if (IS_ERR(mr->umem)) {
+		err = PTR_ERR(mr->umem);
+		ibdev_dbg(&dev->ibdev,
+			  "Failed to pin and map user space memory[%d]\n", err);
+		goto err_free;
+	}
+
+	err = efa_register_mr(ibpd, mr, start, length, virt_addr, access_flags);
+	if (err)
+		goto err_release;
+
+	return &mr->ibmr;
+
+err_release:
 	ib_umem_release(mr->umem);
 err_free:
 	kfree(mr);
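End to end, a userspace consumer would reach this kernel path through
rdma-core's ibv_reg_dmabuf_mr() verb. A minimal sketch, assuming a
rdma-core version that provides ibv_reg_dmabuf_mr (v34 or newer) and a
dmabuf fd obtained from some exporter such as a GPU allocator;
reg_dmabuf and the access-flag choice are illustrative:

#include <stddef.h>
#include <infiniband/verbs.h>

static struct ibv_mr *reg_dmabuf(struct ibv_pd *pd, int dmabuf_fd,
				 size_t length)
{
	/* offset 0 into the dmabuf; iova 0 keeps addressing zero-based */
	return ibv_reg_dmabuf_mr(pd, 0, length, 0, dmabuf_fd,
				 IBV_ACCESS_LOCAL_WRITE |
				 IBV_ACCESS_REMOTE_READ);
}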