From patchwork Mon Jan 16 19:34:59 2023
X-Patchwork-Submitter: Zhu Yanjun
X-Patchwork-Id: 13102505
From: Zhu Yanjun
To: mustafa.ismail@intel.com, shiraz.saleem@intel.com, jgg@ziepe.ca, leon@kernel.org, linux-rdma@vger.kernel.org
Cc: Zhu Yanjun
Subject: [PATCHv3 for-next 1/4] RDMA/irdma: Split MEM handler into irdma_reg_user_mr_type_mem
Date: Mon, 16 Jan 2023 14:34:59 -0500
Message-Id: <20230116193502.66540-2-yanjun.zhu@intel.com>
In-Reply-To: <20230116193502.66540-1-yanjun.zhu@intel.com>
References: <20230116193502.66540-1-yanjun.zhu@intel.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Zhu Yanjun

The code that handles IRDMA_MEMREG_TYPE_MEM registrations is split out
of irdma_reg_user_mr into a new function, irdma_reg_user_mr_type_mem.
Reviewed-by: Shiraz Saleem
Signed-off-by: Zhu Yanjun
---
 drivers/infiniband/hw/irdma/verbs.c | 82 ++++++++++++++++++-----------
 1 file changed, 50 insertions(+), 32 deletions(-)

diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
index f4674ecf9c8c..45eb2d339802 100644
--- a/drivers/infiniband/hw/irdma/verbs.c
+++ b/drivers/infiniband/hw/irdma/verbs.c
@@ -2745,6 +2745,54 @@ static int irdma_hwreg_mr(struct irdma_device *iwdev, struct irdma_mr *iwmr,
 	return ret;
 }
 
+static int irdma_reg_user_mr_type_mem(struct irdma_mr *iwmr, int access)
+{
+	struct irdma_device *iwdev = to_iwdev(iwmr->ibmr.device);
+	struct irdma_pbl *iwpbl = &iwmr->iwpbl;
+	bool use_pbles;
+	u32 stag;
+	int err;
+
+	use_pbles = iwmr->page_cnt != 1;
+
+	err = irdma_setup_pbles(iwdev->rf, iwmr, use_pbles, false);
+	if (err)
+		return err;
+
+	if (use_pbles) {
+		err = irdma_check_mr_contiguous(&iwpbl->pble_alloc,
+						iwmr->page_size);
+		if (err) {
+			irdma_free_pble(iwdev->rf->pble_rsrc, &iwpbl->pble_alloc);
+			iwpbl->pbl_allocated = false;
+		}
+	}
+
+	stag = irdma_create_stag(iwdev);
+	if (!stag) {
+		err = -ENOMEM;
+		goto free_pble;
+	}
+
+	iwmr->stag = stag;
+	iwmr->ibmr.rkey = stag;
+	iwmr->ibmr.lkey = stag;
+	err = irdma_hwreg_mr(iwdev, iwmr, access);
+	if (err)
+		goto err_hwreg;
+
+	return 0;
+
+err_hwreg:
+	irdma_free_stag(iwdev, stag);
+
+free_pble:
+	if (iwpbl->pble_alloc.level != PBLE_LEVEL_0 && iwpbl->pbl_allocated)
+		irdma_free_pble(iwdev->rf->pble_rsrc, &iwpbl->pble_alloc);
+
+	return err;
+}
+
 /**
  * irdma_reg_user_mr - Register a user memory region
  * @pd: ptr of pd
@@ -2761,12 +2809,11 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
 #define IRDMA_MEM_REG_MIN_REQ_LEN offsetofend(struct irdma_mem_reg_req, sq_pages)
 	struct irdma_device *iwdev = to_iwdev(pd->device);
 	struct irdma_ucontext *ucontext;
-	struct irdma_pble_alloc *palloc;
 	struct irdma_pbl *iwpbl;
 	struct irdma_mr *iwmr;
 	struct ib_umem *region;
 	struct irdma_mem_reg_req req;
-	u32 total, stag = 0;
+	u32 total;
 	u8 shadow_pgcnt = 1;
 	bool use_pbles = false;
 	unsigned long flags;
@@ -2817,7 +2864,6 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
 	}
 	iwmr->len = region->length;
 	iwpbl->user_base = virt;
-	palloc = &iwpbl->pble_alloc;
 	iwmr->type = req.reg_type;
 	iwmr->page_cnt = ib_umem_num_dma_blocks(region, iwmr->page_size);
 
@@ -2863,36 +2909,10 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
 		spin_unlock_irqrestore(&ucontext->cq_reg_mem_list_lock, flags);
 		break;
 	case IRDMA_MEMREG_TYPE_MEM:
-		use_pbles = (iwmr->page_cnt != 1);
-
-		err = irdma_setup_pbles(iwdev->rf, iwmr, use_pbles, false);
+		err = irdma_reg_user_mr_type_mem(iwmr, access);
 		if (err)
 			goto error;
 
-		if (use_pbles) {
-			err = irdma_check_mr_contiguous(palloc,
-							iwmr->page_size);
-			if (err) {
-				irdma_free_pble(iwdev->rf->pble_rsrc, palloc);
-				iwpbl->pbl_allocated = false;
-			}
-		}
-
-		stag = irdma_create_stag(iwdev);
-		if (!stag) {
-			err = -ENOMEM;
-			goto error;
-		}
-
-		iwmr->stag = stag;
-		iwmr->ibmr.rkey = stag;
-		iwmr->ibmr.lkey = stag;
-		err = irdma_hwreg_mr(iwdev, iwmr, access);
-		if (err) {
-			irdma_free_stag(iwdev, stag);
-			goto error;
-		}
-
 		break;
 	default:
 		goto error;
@@ -2903,8 +2923,6 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
 	return &iwmr->ibmr;
 
 error:
-	if (palloc->level != PBLE_LEVEL_0 && iwpbl->pbl_allocated)
-		irdma_free_pble(iwdev->rf->pble_rsrc, palloc);
 	ib_umem_release(region);
 	kfree(iwmr);
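The new helper follows the kernel's usual goto-unwind style: each acquired resource gets a cleanup label, and the error path falls through the labels in reverse acquisition order. Note that the unwind is deliberately asymmetric: the free_pble label frees only when level != PBLE_LEVEL_0 && pbl_allocated, because a failed irdma_check_mr_contiguous has already freed the allocation and cleared the flag. Below is a minimal userspace sketch of the unwind shape only; setup_pbles, create_stag and hwreg are hypothetical stand-ins, not the irdma API.

#include <stdlib.h>

/* Hypothetical stand-ins for irdma_setup_pbles(), irdma_create_stag()
 * and irdma_hwreg_mr(); only the unwind structure mirrors the patch. */
static int setup_pbles(void **p)   { return (*p = malloc(64)) ? 0 : -1; }
static int create_stag(void **s)   { return (*s = malloc(8)) ? 0 : -1; }
static int hwreg(void *p, void *s) { (void)p; (void)s; return 0; }

int reg_mr_type_mem(void)
{
	void *pbles, *stag;
	int err;

	err = setup_pbles(&pbles);	/* resource 1 */
	if (err)
		return err;

	err = create_stag(&stag);	/* resource 2 */
	if (err)
		goto free_pbles;

	err = hwreg(pbles, stag);	/* registration may still fail */
	if (err)
		goto free_stag;

	return 0;			/* success: both resources stay live */

free_stag:				/* unwind in reverse order */
	free(stag);
free_pbles:
	free(pbles);
	return err;
}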
From patchwork Mon Jan 16 19:35:00 2023
X-Patchwork-Submitter: Zhu Yanjun
X-Patchwork-Id: 13102506
From: Zhu Yanjun
To: mustafa.ismail@intel.com, shiraz.saleem@intel.com, jgg@ziepe.ca, leon@kernel.org, linux-rdma@vger.kernel.org
Cc: Zhu Yanjun
Subject: [PATCHv3 for-next 2/4] RDMA/irdma: Split mr alloc and free into new functions
Date: Mon, 16 Jan 2023 14:35:00 -0500
Message-Id: <20230116193502.66540-3-yanjun.zhu@intel.com>
In-Reply-To: <20230116193502.66540-1-yanjun.zhu@intel.com>
References: <20230116193502.66540-1-yanjun.zhu@intel.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Zhu Yanjun

In irdma_reg_user_mr, the MR allocation and free paths will be reused
by the other registration types. As such, they are split into the new
functions irdma_alloc_iwmr and irdma_free_iwmr.
Reviewed-by: Shiraz Saleem
Signed-off-by: Zhu Yanjun
---
 drivers/infiniband/hw/irdma/verbs.c | 74 ++++++++++++++++++-----------
 1 file changed, 46 insertions(+), 28 deletions(-)

diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
index 45eb2d339802..1fc9761beef4 100644
--- a/drivers/infiniband/hw/irdma/verbs.c
+++ b/drivers/infiniband/hw/irdma/verbs.c
@@ -2793,6 +2793,48 @@ static int irdma_reg_user_mr_type_mem(struct irdma_mr *iwmr, int access)
 	return err;
 }
 
+static struct irdma_mr *irdma_alloc_iwmr(struct ib_umem *region,
+					 struct ib_pd *pd, u64 virt,
+					 enum irdma_memreg_type reg_type)
+{
+	struct irdma_device *iwdev = to_iwdev(pd->device);
+	struct irdma_pbl *iwpbl = NULL;
+	struct irdma_mr *iwmr = NULL;
+	unsigned long pgsz_bitmap;
+
+	iwmr = kzalloc(sizeof(*iwmr), GFP_KERNEL);
+	if (!iwmr)
+		return ERR_PTR(-ENOMEM);
+
+	iwpbl = &iwmr->iwpbl;
+	iwpbl->iwmr = iwmr;
+	iwmr->region = region;
+	iwmr->ibmr.pd = pd;
+	iwmr->ibmr.device = pd->device;
+	iwmr->ibmr.iova = virt;
+	iwmr->type = reg_type;
+
+	pgsz_bitmap = (reg_type == IRDMA_MEMREG_TYPE_MEM) ?
+		iwdev->rf->sc_dev.hw_attrs.page_size_cap : PAGE_SIZE;
+
+	iwmr->page_size = ib_umem_find_best_pgsz(region, pgsz_bitmap, virt);
+	if (unlikely(!iwmr->page_size)) {
+		kfree(iwmr);
+		return ERR_PTR(-EOPNOTSUPP);
+	}
+
+	iwmr->len = region->length;
+	iwpbl->user_base = virt;
+	iwmr->page_cnt = ib_umem_num_dma_blocks(region, iwmr->page_size);
+
+	return iwmr;
+}
+
+static void irdma_free_iwmr(struct irdma_mr *iwmr)
+{
+	kfree(iwmr);
+}
+
 /**
  * irdma_reg_user_mr - Register a user memory region
  * @pd: ptr of pd
@@ -2838,34 +2880,13 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
 		return ERR_PTR(-EFAULT);
 	}
 
-	iwmr = kzalloc(sizeof(*iwmr), GFP_KERNEL);
-	if (!iwmr) {
+	iwmr = irdma_alloc_iwmr(region, pd, virt, req.reg_type);
+	if (IS_ERR(iwmr)) {
 		ib_umem_release(region);
-		return ERR_PTR(-ENOMEM);
+		return (struct ib_mr *)iwmr;
 	}
 
 	iwpbl = &iwmr->iwpbl;
-	iwpbl->iwmr = iwmr;
-	iwmr->region = region;
-	iwmr->ibmr.pd = pd;
-	iwmr->ibmr.device = pd->device;
-	iwmr->ibmr.iova = virt;
-	iwmr->page_size = PAGE_SIZE;
-
-	if (req.reg_type == IRDMA_MEMREG_TYPE_MEM) {
-		iwmr->page_size = ib_umem_find_best_pgsz(region,
-							 iwdev->rf->sc_dev.hw_attrs.page_size_cap,
-							 virt);
-		if (unlikely(!iwmr->page_size)) {
-			kfree(iwmr);
-			ib_umem_release(region);
-			return ERR_PTR(-EOPNOTSUPP);
-		}
-	}
-	iwmr->len = region->length;
-	iwpbl->user_base = virt;
-	iwmr->type = req.reg_type;
-	iwmr->page_cnt = ib_umem_num_dma_blocks(region, iwmr->page_size);
 
 	switch (req.reg_type) {
 	case IRDMA_MEMREG_TYPE_QP:
@@ -2918,13 +2939,10 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
 		goto error;
 	}
 
-	iwmr->type = req.reg_type;
-
 	return &iwmr->ibmr;
-
 error:
 	ib_umem_release(region);
-	kfree(iwmr);
+	irdma_free_iwmr(iwmr);
 
 	return ERR_PTR(err);
 }
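irdma_alloc_iwmr reports failure through the kernel's ERR_PTR()/IS_ERR() convention, which is what lets the caller collapse the separate kzalloc and page-size failure branches into a single IS_ERR check plus a cast back to struct ib_mr *. A rough userspace re-implementation of the convention, for illustration only:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_ERRNO	4095

/* An error code is encoded into the pointer value itself, so one return
 * value carries either a valid object or a -errno. */
static inline void *ERR_PTR(long error)      { return (void *)error; }
static inline long PTR_ERR(const void *ptr)  { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

struct mr { int id; };

static struct mr *alloc_mr(int bad_page_size)
{
	struct mr *mr;

	if (bad_page_size)
		return ERR_PTR(-EOPNOTSUPP);	/* as in the pgsz check */
	mr = calloc(1, sizeof(*mr));
	if (!mr)
		return ERR_PTR(-ENOMEM);
	return mr;
}

int main(void)
{
	struct mr *mr = alloc_mr(0);

	if (IS_ERR(mr)) {
		printf("alloc failed: %ld\n", PTR_ERR(mr));
		return 1;
	}
	free(mr);
	return 0;
}

The trick works because the top 4095 values of the kernel address space are never valid pointers, so -errno values can travel through the pointer without ambiguity.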
From patchwork Mon Jan 16 19:35:01 2023
X-Patchwork-Submitter: Zhu Yanjun
X-Patchwork-Id: 13102507
From: Zhu Yanjun
To: mustafa.ismail@intel.com, shiraz.saleem@intel.com, jgg@ziepe.ca, leon@kernel.org, linux-rdma@vger.kernel.org
Cc: Zhu Yanjun
Subject: [PATCHv3 for-next 3/4] RDMA/irdma: Split QP handler into irdma_reg_user_mr_type_qp
Date: Mon, 16 Jan 2023 14:35:01 -0500
Message-Id: <20230116193502.66540-4-yanjun.zhu@intel.com>
In-Reply-To: <20230116193502.66540-1-yanjun.zhu@intel.com>
References: <20230116193502.66540-1-yanjun.zhu@intel.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Zhu Yanjun

Split the code that handles IRDMA_MEMREG_TYPE_QP registrations out of
irdma_reg_user_mr into a new function, irdma_reg_user_mr_type_qp.
Reviewed-by: Shiraz Saleem
Signed-off-by: Zhu Yanjun
---
 drivers/infiniband/hw/irdma/verbs.c | 47 ++++++++++++++++++++---------
 1 file changed, 33 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
index 1fc9761beef4..93a8997d6267 100644
--- a/drivers/infiniband/hw/irdma/verbs.c
+++ b/drivers/infiniband/hw/irdma/verbs.c
@@ -2835,6 +2835,38 @@ static void irdma_free_iwmr(struct irdma_mr *iwmr)
 	kfree(iwmr);
 }
 
+static int irdma_reg_user_mr_type_qp(struct irdma_mem_reg_req req,
+				     struct ib_udata *udata,
+				     struct irdma_mr *iwmr)
+{
+	struct irdma_device *iwdev = to_iwdev(iwmr->ibmr.device);
+	struct irdma_pbl *iwpbl = &iwmr->iwpbl;
+	struct irdma_ucontext *ucontext = NULL;
+	unsigned long flags;
+	bool use_pbles;
+	u32 total;
+	int err;
+
+	total = req.sq_pages + req.rq_pages + 1;
+	if (total > iwmr->page_cnt)
+		return -EINVAL;
+
+	total = req.sq_pages + req.rq_pages;
+	use_pbles = total > 2;
+	err = irdma_handle_q_mem(iwdev, &req, iwpbl, use_pbles);
+	if (err)
+		return err;
+
+	ucontext = rdma_udata_to_drv_context(udata, struct irdma_ucontext,
+					     ibucontext);
+	spin_lock_irqsave(&ucontext->qp_reg_mem_list_lock, flags);
+	list_add_tail(&iwpbl->list, &ucontext->qp_reg_mem_list);
+	iwpbl->on_list = true;
+	spin_unlock_irqrestore(&ucontext->qp_reg_mem_list_lock, flags);
+
+	return 0;
+}
+
 /**
  * irdma_reg_user_mr - Register a user memory region
  * @pd: ptr of pd
@@ -2890,23 +2922,10 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
 
 	switch (req.reg_type) {
 	case IRDMA_MEMREG_TYPE_QP:
-		total = req.sq_pages + req.rq_pages + shadow_pgcnt;
-		if (total > iwmr->page_cnt) {
-			err = -EINVAL;
-			goto error;
-		}
-		total = req.sq_pages + req.rq_pages;
-		use_pbles = (total > 2);
-		err = irdma_handle_q_mem(iwdev, &req, iwpbl, use_pbles);
+		err = irdma_reg_user_mr_type_qp(req, udata, iwmr);
 		if (err)
 			goto error;
 
-		ucontext = rdma_udata_to_drv_context(udata, struct irdma_ucontext,
-						     ibucontext);
-		spin_lock_irqsave(&ucontext->qp_reg_mem_list_lock, flags);
-		list_add_tail(&iwpbl->list, &ucontext->qp_reg_mem_list);
-		iwpbl->on_list = true;
-		spin_unlock_irqrestore(&ucontext->qp_reg_mem_list_lock, flags);
 		break;
 	case IRDMA_MEMREG_TYPE_CQ:
 		if (iwdev->rf->sc_dev.hw_attrs.uk_attrs.feature_flags & IRDMA_FEATURE_CQ_RESIZE)
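The bounds check that moved into irdma_reg_user_mr_type_qp is plain arithmetic: the requested SQ and RQ pages plus one shadow-area page must fit in the pinned MR, and a physical buffer list (PBLE) is only needed once the queues span more than two pages. A small standalone sketch with made-up numbers:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	unsigned int sq_pages = 4, rq_pages = 2;      /* from the user's req */
	unsigned int page_cnt = 8;                    /* pages pinned for the MR */
	unsigned int total = sq_pages + rq_pages + 1; /* +1 shadow-area page */
	bool use_pbles;

	if (total > page_cnt) {
		puts("-EINVAL: SQ + RQ + shadow page exceed the pinned MR");
		return 1;
	}

	use_pbles = sq_pages + rq_pages > 2;  /* need a non-contiguous list? */
	printf("use_pbles = %s\n", use_pbles ? "true" : "false");
	return 0;
}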
From patchwork Mon Jan 16 19:35:02 2023
X-Patchwork-Submitter: Zhu Yanjun
X-Patchwork-Id: 13102508
From: Zhu Yanjun
To: mustafa.ismail@intel.com, shiraz.saleem@intel.com, jgg@ziepe.ca, leon@kernel.org, linux-rdma@vger.kernel.org
Cc: Zhu Yanjun
Subject: [PATCHv3 for-next 4/4] RDMA/irdma: Split CQ handler into irdma_reg_user_mr_type_cq
Date: Mon, 16 Jan 2023 14:35:02 -0500
Message-Id: <20230116193502.66540-5-yanjun.zhu@intel.com>
In-Reply-To: <20230116193502.66540-1-yanjun.zhu@intel.com>
References: <20230116193502.66540-1-yanjun.zhu@intel.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Zhu Yanjun

Split the code that handles IRDMA_MEMREG_TYPE_CQ registrations out of
irdma_reg_user_mr into a new function, irdma_reg_user_mr_type_cq.

Reviewed-by: Shiraz Saleem
Signed-off-by: Zhu Yanjun
---
 drivers/infiniband/hw/irdma/verbs.c | 69 +++++++++++++++++------------
 1 file changed, 40 insertions(+), 29 deletions(-)

diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
index 93a8997d6267..6982f38596c8 100644
--- a/drivers/infiniband/hw/irdma/verbs.c
+++ b/drivers/infiniband/hw/irdma/verbs.c
@@ -2867,6 +2867,40 @@ static int irdma_reg_user_mr_type_qp(struct irdma_mem_reg_req req,
 	return 0;
 }
 
+static int irdma_reg_user_mr_type_cq(struct irdma_mem_reg_req req,
+				     struct ib_udata *udata,
+				     struct irdma_mr *iwmr)
+{
+	struct irdma_device *iwdev = to_iwdev(iwmr->ibmr.device);
+	struct irdma_pbl *iwpbl = &iwmr->iwpbl;
+	struct irdma_ucontext *ucontext = NULL;
+	u8 shadow_pgcnt = 1;
+	unsigned long flags;
+	bool use_pbles;
+	u32 total;
+	int err;
+
+	if (iwdev->rf->sc_dev.hw_attrs.uk_attrs.feature_flags & IRDMA_FEATURE_CQ_RESIZE)
+		shadow_pgcnt = 0;
+	total = req.cq_pages + shadow_pgcnt;
+	if (total > iwmr->page_cnt)
+		return -EINVAL;
+
+	use_pbles = req.cq_pages > 1;
+	err = irdma_handle_q_mem(iwdev, &req, iwpbl, use_pbles);
+	if (err)
+		return err;
+
+	ucontext = rdma_udata_to_drv_context(udata, struct irdma_ucontext,
+					     ibucontext);
+	spin_lock_irqsave(&ucontext->cq_reg_mem_list_lock, flags);
+	list_add_tail(&iwpbl->list, &ucontext->cq_reg_mem_list);
+	iwpbl->on_list = true;
+	spin_unlock_irqrestore(&ucontext->cq_reg_mem_list_lock, flags);
+
+	return 0;
+}
+
 /**
  * irdma_reg_user_mr - Register a user memory region
  * @pd: ptr of pd
@@ -2882,16 +2916,10 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
 {
 #define IRDMA_MEM_REG_MIN_REQ_LEN offsetofend(struct irdma_mem_reg_req, sq_pages)
 	struct irdma_device *iwdev = to_iwdev(pd->device);
-	struct irdma_ucontext *ucontext;
-	struct irdma_pbl *iwpbl;
-	struct irdma_mr *iwmr;
-	struct ib_umem *region;
-	struct irdma_mem_reg_req req;
-	u32 total;
-	u8 shadow_pgcnt = 1;
-	bool use_pbles = false;
-	unsigned long flags;
-	int err = -EINVAL;
+	struct irdma_mem_reg_req req = {};
+	struct ib_umem *region = NULL;
+	struct irdma_mr *iwmr = NULL;
+	int err;
 
 	if (len > iwdev->rf->sc_dev.hw_attrs.max_mr_size)
 		return ERR_PTR(-EINVAL);
@@ -2918,8 +2946,6 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
 		return (struct ib_mr *)iwmr;
 	}
 
-	iwpbl = &iwmr->iwpbl;
-
 	switch (req.reg_type) {
 	case IRDMA_MEMREG_TYPE_QP:
 		err = irdma_reg_user_mr_type_qp(req, udata, iwmr);
@@ -2928,25 +2954,9 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
 		if (err)
 			goto error;
 
 		break;
 	case IRDMA_MEMREG_TYPE_CQ:
-		if (iwdev->rf->sc_dev.hw_attrs.uk_attrs.feature_flags & IRDMA_FEATURE_CQ_RESIZE)
-			shadow_pgcnt = 0;
-		total = req.cq_pages + shadow_pgcnt;
-		if (total > iwmr->page_cnt) {
-			err = -EINVAL;
-			goto error;
-		}
-
-		use_pbles = (req.cq_pages > 1);
-		err = irdma_handle_q_mem(iwdev, &req, iwpbl, use_pbles);
+		err = irdma_reg_user_mr_type_cq(req, udata, iwmr);
 		if (err)
 			goto error;
-
-		ucontext = rdma_udata_to_drv_context(udata, struct irdma_ucontext,
-						     ibucontext);
-		spin_lock_irqsave(&ucontext->cq_reg_mem_list_lock, flags);
-		list_add_tail(&iwpbl->list, &ucontext->cq_reg_mem_list);
-		iwpbl->on_list = true;
-		spin_unlock_irqrestore(&ucontext->cq_reg_mem_list_lock, flags);
 		break;
 	case IRDMA_MEMREG_TYPE_MEM:
 		err = irdma_reg_user_mr_type_mem(iwmr, access);
@@ -2955,6 +2965,7 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
 
 		break;
 	default:
+		err = -EINVAL;
 		goto error;
 	}
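For context on how the refactored paths are reached: an ordinary verbs application arrives at irdma_reg_user_mr with IRDMA_MEMREG_TYPE_MEM via ibv_reg_mr(), while the QP and CQ registration types are used by the provider's userspace library when it registers queue memory. A minimal sketch of the MEM path from userspace, assuming a libibverbs environment with an already-created PD (device and context setup elided):

#include <stdlib.h>
#include <unistd.h>
#include <infiniband/verbs.h>

/* Registers a page-aligned buffer on an existing PD; pd is assumed to
 * come from ibv_alloc_pd() on an irdma-backed device. */
int register_buffer(struct ibv_pd *pd, size_t len)
{
	struct ibv_mr *mr;
	void *buf;

	if (posix_memalign(&buf, (size_t)sysconf(_SC_PAGESIZE), len))
		return -1;

	/* Lands in the driver's reg_user_mr hook; for irdma this is the
	 * IRDMA_MEMREG_TYPE_MEM case handled by irdma_reg_user_mr_type_mem. */
	mr = ibv_reg_mr(pd, buf, len,
			IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ);
	if (!mr) {
		free(buf);
		return -1;
	}

	/* ... post work requests using mr->lkey / mr->rkey ... */

	ibv_dereg_mr(mr);
	free(buf);
	return 0;
}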