From patchwork Mon Jul 31 08:01:13 2023
X-Patchwork-Submitter: Selvin Xavier
X-Patchwork-Id: 13333994
X-Patchwork-Delegate: jgg@ziepe.ca
From: Selvin Xavier <selvin.xavier@broadcom.com>
To: jgg@ziepe.ca, leon@kernel.org
Cc: linux-rdma@vger.kernel.org, andrew.gospodarek@broadcom.com,
 Saravanan Vajravel, Selvin Xavier
Subject: [PATCH for-next v2 1/1] RDMA/bnxt_re: Add support for dmabuf pinned
 memory regions
Date: Mon, 31 Jul 2023 01:01:13 -0700
Message-Id: <1690790473-25850-2-git-send-email-selvin.xavier@broadcom.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1690790473-25850-1-git-send-email-selvin.xavier@broadcom.com>
References: <1690790473-25850-1-git-send-email-selvin.xavier@broadcom.com>
X-Mailing-List: linux-rdma@vger.kernel.org
From: Saravanan Vajravel

Support the new verb that indicates dmabuf support. bnxt_re does not
support ODP, so use the pinned version of the dmabuf APIs to enable
bnxt_re devices to work as dmabuf importers.

Signed-off-by: Saravanan Vajravel
Signed-off-by: Selvin Xavier
---
 drivers/infiniband/hw/bnxt_re/ib_verbs.c | 83 ++++++++++++++++++++++----------
 drivers/infiniband/hw/bnxt_re/ib_verbs.h |  4 ++
 drivers/infiniband/hw/bnxt_re/main.c     |  1 +
 3 files changed, 62 insertions(+), 26 deletions(-)

diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
index 2b2505a..718572d 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -3981,16 +3981,13 @@ int bnxt_re_dealloc_mw(struct ib_mw *ib_mw)
 	return rc;
 }
 
-/* uverbs */
-struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
-				  u64 virt_addr, int mr_access_flags,
-				  struct ib_udata *udata)
+static struct ib_mr *__bnxt_re_user_reg_mr(struct ib_pd *ib_pd, u64 length, u64 virt_addr,
+					   int mr_access_flags, struct ib_umem *umem)
 {
 	struct bnxt_re_pd *pd = container_of(ib_pd, struct bnxt_re_pd, ib_pd);
 	struct bnxt_re_dev *rdev = pd->rdev;
-	struct bnxt_re_mr *mr;
-	struct ib_umem *umem;
 	unsigned long page_size;
+	struct bnxt_re_mr *mr;
 	int umem_pgs, rc;
 	u32 active_mrs;
 
@@ -4000,6 +3997,12 @@ struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
 		return ERR_PTR(-ENOMEM);
 	}
 
+	page_size = ib_umem_find_best_pgsz(umem, BNXT_RE_PAGE_SIZE_SUPPORTED, virt_addr);
+	if (!page_size) {
+		ibdev_err(&rdev->ibdev, "umem page size unsupported!");
+		return ERR_PTR(-EINVAL);
+	}
+
 	mr = kzalloc(sizeof(*mr), GFP_KERNEL);
 	if (!mr)
 		return ERR_PTR(-ENOMEM);
@@ -4011,36 +4014,23 @@ struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
 	rc = bnxt_qplib_alloc_mrw(&rdev->qplib_res, &mr->qplib_mr);
 	if (rc) {
-		ibdev_err(&rdev->ibdev, "Failed to allocate MR");
+		ibdev_err(&rdev->ibdev, "Failed to allocate MR rc = %d", rc);
+		rc = -EIO;
 		goto free_mr;
 	}
 
 	/* The fixed portion of the rkey is the same as the lkey */
 	mr->ib_mr.rkey = mr->qplib_mr.rkey;
-
-	umem = ib_umem_get(&rdev->ibdev, start, length, mr_access_flags);
-	if (IS_ERR(umem)) {
-		ibdev_err(&rdev->ibdev, "Failed to get umem");
-		rc = -EFAULT;
-		goto free_mrw;
-	}
 	mr->ib_umem = umem;
-
 	mr->qplib_mr.va = virt_addr;
-	page_size = ib_umem_find_best_pgsz(
-		umem, BNXT_RE_PAGE_SIZE_SUPPORTED, virt_addr);
-	if (!page_size) {
-		ibdev_err(&rdev->ibdev, "umem page size unsupported!");
-		rc = -EFAULT;
-		goto free_umem;
-	}
 	mr->qplib_mr.total_size = length;
 
 	umem_pgs = ib_umem_num_dma_blocks(umem, page_size);
 	rc = bnxt_qplib_reg_mr(&rdev->qplib_res, &mr->qplib_mr, umem,
 			       umem_pgs, page_size);
 	if (rc) {
-		ibdev_err(&rdev->ibdev, "Failed to register user MR");
-		goto free_umem;
+		ibdev_err(&rdev->ibdev, "Failed to register user MR - rc = %d\n", rc);
+		rc = -EIO;
+		goto free_mrw;
 	}
 
 	mr->ib_mr.lkey = mr->qplib_mr.lkey;
@@ -4050,8 +4040,7 @@ struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
 	rdev->stats.res.mr_watermark = active_mrs;
 
 	return &mr->ib_mr;
-free_umem:
-	ib_umem_release(umem);
+
 free_mrw:
 	bnxt_qplib_free_mrw(&rdev->qplib_res, &mr->qplib_mr);
 free_mr:
@@ -4059,6 +4048,48 @@ struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
 	return ERR_PTR(rc);
 }
 
+struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
+				  u64 virt_addr, int mr_access_flags,
+				  struct ib_udata *udata)
+{
+	struct bnxt_re_pd *pd = container_of(ib_pd, struct bnxt_re_pd, ib_pd);
+	struct bnxt_re_dev *rdev = pd->rdev;
+	struct ib_umem *umem;
+	struct ib_mr *ib_mr;
+
+	umem = ib_umem_get(&rdev->ibdev, start, length, mr_access_flags);
+	if (IS_ERR(umem))
+		return ERR_CAST(umem);
+
+	ib_mr = __bnxt_re_user_reg_mr(ib_pd, length, virt_addr, mr_access_flags, umem);
+	if (IS_ERR(ib_mr))
+		ib_umem_release(umem);
+	return ib_mr;
+}
+
+struct ib_mr *bnxt_re_reg_user_mr_dmabuf(struct ib_pd *ib_pd, u64 start,
+					 u64 length, u64 virt_addr, int fd,
+					 int mr_access_flags, struct ib_udata *udata)
+{
+	struct bnxt_re_pd *pd = container_of(ib_pd, struct bnxt_re_pd, ib_pd);
+	struct bnxt_re_dev *rdev = pd->rdev;
+	struct ib_umem_dmabuf *umem_dmabuf;
+	struct ib_umem *umem;
+	struct ib_mr *ib_mr;
+
+	umem_dmabuf = ib_umem_dmabuf_get_pinned(&rdev->ibdev, start, length,
+						fd, mr_access_flags);
+	if (IS_ERR(umem_dmabuf))
+		return ERR_CAST(umem_dmabuf);
+
+	umem = &umem_dmabuf->umem;
+
+	ib_mr = __bnxt_re_user_reg_mr(ib_pd, length, virt_addr, mr_access_flags, umem);
+	if (IS_ERR(ib_mr))
+		ib_umem_release(umem);
+	return ib_mr;
+}
+
 int bnxt_re_alloc_ucontext(struct ib_ucontext *ctx, struct ib_udata *udata)
 {
 	struct ib_device *ibdev = ctx->device;
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.h b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
index f392a09..84715b7 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.h
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
@@ -229,6 +229,10 @@ int bnxt_re_dealloc_mw(struct ib_mw *mw);
 struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 				  u64 virt_addr, int mr_access_flags,
 				  struct ib_udata *udata);
+struct ib_mr *bnxt_re_reg_user_mr_dmabuf(struct ib_pd *ib_pd, u64 start,
+					 u64 length, u64 virt_addr,
+					 int fd, int mr_access_flags,
+					 struct ib_udata *udata);
 int bnxt_re_alloc_ucontext(struct ib_ucontext *ctx, struct ib_udata *udata);
 void bnxt_re_dealloc_ucontext(struct ib_ucontext *context);
 int bnxt_re_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
index 87960ac..c467415 100644
--- a/drivers/infiniband/hw/bnxt_re/main.c
+++ b/drivers/infiniband/hw/bnxt_re/main.c
@@ -862,6 +862,7 @@ static const struct ib_device_ops bnxt_re_dev_ops = {
 	.query_qp = bnxt_re_query_qp,
 	.query_srq = bnxt_re_query_srq,
 	.reg_user_mr = bnxt_re_reg_user_mr,
+	.reg_user_mr_dmabuf = bnxt_re_reg_user_mr_dmabuf,
 	.req_notify_cq = bnxt_re_req_notify_cq,
 	.resize_cq = bnxt_re_resize_cq,
 	INIT_RDMA_OBJ_SIZE(ib_ah, bnxt_re_ah, ib_ah),
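
For anyone who wants to exercise the new verb from userspace: the
.reg_user_mr_dmabuf op is reached through rdma-core's
ibv_reg_dmabuf_mr(). A minimal sketch follows (not part of this
patch); how the dma-buf fd is obtained from an exporter such as a
GPU/DRM driver is out of scope here, and the helper name is made up
for the example:

	#include <stdio.h>
	#include <infiniband/verbs.h>

	/*
	 * Hypothetical helper for illustration only: register a pinned,
	 * dmabuf-backed MR on a PD. dmabuf_fd must be a valid dma-buf
	 * file descriptor obtained from some exporter.
	 */
	static struct ibv_mr *register_dmabuf_mr(struct ibv_pd *pd,
						 int dmabuf_fd, size_t length)
	{
		/* offset 0 into the dma-buf; iova 0 keys the MR from 0 */
		struct ibv_mr *mr = ibv_reg_dmabuf_mr(pd, 0, length, 0,
						      dmabuf_fd,
						      IBV_ACCESS_LOCAL_WRITE |
						      IBV_ACCESS_REMOTE_READ |
						      IBV_ACCESS_REMOTE_WRITE);

		if (!mr)	/* returns NULL and sets errno on failure */
			perror("ibv_reg_dmabuf_mr");
		return mr;
	}

Since the driver uses ib_umem_dmabuf_get_pinned(), the exporter's
pages stay pinned for the lifetime of the MR; there is no ODP-style
invalidation path.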