From patchwork Thu Jul 27 14:29:54 2023
X-Patchwork-Submitter: Selvin Xavier
X-Patchwork-Id: 13330184
From: Selvin Xavier
To: jgg@ziepe.ca, leon@kernel.org
Cc: linux-rdma@vger.kernel.org, andrew.gospodarek@broadcom.com,
    Saravanan Vajravel, Selvin Xavier
Subject: [PATCH for-next 1/1] RDMA/bnxt_re: Add support for dmabuf pinned
 memory regions
Date: Thu, 27 Jul 2023 07:29:54 -0700
Message-Id: <1690468194-6185-2-git-send-email-selvin.xavier@broadcom.com>
In-Reply-To: <1690468194-6185-1-git-send-email-selvin.xavier@broadcom.com>
References: <1690468194-6185-1-git-send-email-selvin.xavier@broadcom.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Saravanan Vajravel

Support the new verb which indicates dmabuf support. bnxt_re does not
support ODP, so use the pinned version of the dmabuf APIs to enable
bnxt_re devices to work as a dmabuf importer.
Signed-off-by: Saravanan Vajravel
Signed-off-by: Selvin Xavier
---
 drivers/infiniband/hw/bnxt_re/ib_verbs.c | 48 ++++++++++++++++++++++++++------
 drivers/infiniband/hw/bnxt_re/ib_verbs.h |  4 +++
 drivers/infiniband/hw/bnxt_re/main.c     |  1 +
 3 files changed, 44 insertions(+), 9 deletions(-)

diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
index 2b2505a..3c3459d 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -3981,17 +3981,19 @@ int bnxt_re_dealloc_mw(struct ib_mw *ib_mw)
 	return rc;
 }
 
-/* uverbs */
-struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
-				  u64 virt_addr, int mr_access_flags,
-				  struct ib_udata *udata)
+static struct ib_mr *__bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start,
+					   u64 length, u64 virt_addr, int fd,
+					   int mr_access_flags,
+					   struct ib_udata *udata,
+					   bool dmabuf)
 {
 	struct bnxt_re_pd *pd = container_of(ib_pd, struct bnxt_re_pd, ib_pd);
 	struct bnxt_re_dev *rdev = pd->rdev;
+	struct ib_umem_dmabuf *umem_dmabuf;
+	unsigned long page_size;
 	struct bnxt_re_mr *mr;
 	struct ib_umem *umem;
-	unsigned long page_size;
-	int umem_pgs, rc;
+	int umem_pgs, rc = 0;
 	u32 active_mrs;
 
 	if (length > BNXT_RE_MAX_MR_SIZE) {
@@ -4017,9 +4019,21 @@ struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
 	/* The fixed portion of the rkey is the same as the lkey */
 	mr->ib_mr.rkey = mr->qplib_mr.rkey;
 
-	umem = ib_umem_get(&rdev->ibdev, start, length, mr_access_flags);
-	if (IS_ERR(umem)) {
-		ibdev_err(&rdev->ibdev, "Failed to get umem");
+	if (!dmabuf) {
+		umem = ib_umem_get(&rdev->ibdev, start, length, mr_access_flags);
+		if (IS_ERR(umem))
+			rc = PTR_ERR(umem);
+	} else {
+		umem_dmabuf = ib_umem_dmabuf_get_pinned(&rdev->ibdev, start, length,
+							fd, mr_access_flags);
+		if (IS_ERR(umem_dmabuf))
+			rc = PTR_ERR(umem_dmabuf);
+		else
+			umem = &umem_dmabuf->umem;
+	}
+	if (rc) {
+		ibdev_err(&rdev->ibdev, "Failed to get umem dmabuf = %s",
+			  dmabuf ? "true" : "false");
 		rc = -EFAULT;
 		goto free_mrw;
 	}
@@ -4059,6 +4073,22 @@ struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
 	return ERR_PTR(rc);
 }
 
+struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
+				  u64 virt_addr, int mr_access_flags,
+				  struct ib_udata *udata)
+{
+	return __bnxt_re_reg_user_mr(ib_pd, start, length, virt_addr, 0,
+				     mr_access_flags, udata, false);
+}
+
+struct ib_mr *bnxt_re_reg_user_mr_dmabuf(struct ib_pd *ib_pd, u64 start,
+					 u64 length, u64 virt_addr, int fd,
+					 int mr_access_flags,
+					 struct ib_udata *udata)
+{
+	return __bnxt_re_reg_user_mr(ib_pd, start, length, virt_addr, fd,
+				     mr_access_flags, udata, true);
+}
+
 int bnxt_re_alloc_ucontext(struct ib_ucontext *ctx, struct ib_udata *udata)
 {
 	struct ib_device *ibdev = ctx->device;
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.h b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
index f392a09..84715b7 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.h
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
@@ -229,6 +229,10 @@ int bnxt_re_dealloc_mw(struct ib_mw *mw);
 struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 				  u64 virt_addr, int mr_access_flags,
 				  struct ib_udata *udata);
+struct ib_mr *bnxt_re_reg_user_mr_dmabuf(struct ib_pd *ib_pd, u64 start,
+					 u64 length, u64 virt_addr,
+					 int fd, int mr_access_flags,
+					 struct ib_udata *udata);
 int bnxt_re_alloc_ucontext(struct ib_ucontext *ctx, struct ib_udata *udata);
 void bnxt_re_dealloc_ucontext(struct ib_ucontext *context);
 int bnxt_re_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
index 87960ac..c467415 100644
--- a/drivers/infiniband/hw/bnxt_re/main.c
+++ b/drivers/infiniband/hw/bnxt_re/main.c
@@ -862,6 +862,7 @@ static const struct ib_device_ops bnxt_re_dev_ops = {
 	.query_qp = bnxt_re_query_qp,
 	.query_srq = bnxt_re_query_srq,
 	.reg_user_mr = bnxt_re_reg_user_mr,
+	.reg_user_mr_dmabuf = bnxt_re_reg_user_mr_dmabuf,
 	.req_notify_cq = bnxt_re_req_notify_cq,
 	.resize_cq = bnxt_re_resize_cq,
 
 	INIT_RDMA_OBJ_SIZE(ib_ah, bnxt_re_ah, ib_ah),
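The shape of the refactor above — one internal helper whose `dmabuf` flag selects the pinning path, fronted by two thin public wrappers — can be illustrated outside the kernel. The following is a minimal user-space sketch of that pattern only; the `struct mr` type and the `get_umem*()` functions are simplified stand-ins, not the real `ib_umem_get()` / `ib_umem_dmabuf_get_pinned()` kernel APIs.

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for a registered memory region. */
struct mr {
	bool from_dmabuf;	/* which pinning path produced it */
	int fd;			/* dmabuf fd, or -1 for plain user memory */
};

static struct mr mr_pool[8];
static int mr_used;

/* Stand-in for ib_umem_get(): pin plain user memory. */
static struct mr *get_umem(int access_flags)
{
	struct mr *m = &mr_pool[mr_used++];

	(void)access_flags;
	m->from_dmabuf = false;
	m->fd = -1;
	return m;
}

/* Stand-in for ib_umem_dmabuf_get_pinned(): import and pin a dmabuf fd. */
static struct mr *get_umem_dmabuf(int fd, int access_flags)
{
	struct mr *m = &mr_pool[mr_used++];

	(void)access_flags;
	m->from_dmabuf = true;
	m->fd = fd;
	return m;
}

/* Internal helper: only the memory-pinning step differs between paths. */
static struct mr *__reg_user_mr(int fd, int access_flags, bool dmabuf)
{
	if (!dmabuf)
		return get_umem(access_flags);
	return get_umem_dmabuf(fd, access_flags);
}

/* Thin public wrappers, mirroring the two verbs in the patch. */
struct mr *reg_user_mr(int access_flags)
{
	return __reg_user_mr(0, access_flags, false);
}

struct mr *reg_user_mr_dmabuf(int fd, int access_flags)
{
	return __reg_user_mr(fd, access_flags, true);
}
```

The design choice this mirrors: rather than duplicating the whole MR allocation and registration sequence for the new `reg_user_mr_dmabuf` verb, the patch folds both verbs into `__bnxt_re_reg_user_mr()`, so only the `ib_umem_get()` vs. `ib_umem_dmabuf_get_pinned()` step branches on the flag.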