From patchwork Wed Jul 21 09:07:04 2021
X-Patchwork-Id: 12390405
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Mark Zhang, Christoph Hellwig
Subject: [PATCH rdma-next v1 1/7] RDMA/mlx5: Delete not-available udata check
Date: Wed, 21 Jul 2021 12:07:04 +0300

From: Leon Romanovsky

XRC_TGT QPs are created through kernel verbs and don't have udata at all.
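For context, a rough illustration added in editing (not part of the patch): XRC_TGT QPs reach the driver through the in-kernel verb, roughly as sketched below, so no udata and therefore no ucmd can exist on that path. The helper name is hypothetical.

/*
 * Illustrative sketch only -- not part of the patch.  XRC_TGT QPs are
 * created through the kernel verb (this is what uverbs itself does for
 * IB_QPT_XRC_TGT), so the driver never receives a struct ib_udata and
 * params->ucmd is always NULL on this path.
 */
static struct ib_qp *example_create_xrc_tgt_qp(struct ib_xrcd *xrcd)
{
	struct ib_qp_init_attr attr = {
		.qp_type = IB_QPT_XRC_TGT,
		.xrcd	 = xrcd,
	};

	/* Kernel verb: there is no udata argument here at all. */
	return ib_create_qp(NULL, &attr);
}
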
Fixes: 6eefa839c4dd ("RDMA/mlx5: Protect from kernel crash if XRC_TGT doesn't have udata")
Fixes: e383085c2425 ("RDMA/mlx5: Set ECE options during QP create")
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/qp.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 18b018f1db83..81e3170a1ae6 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1908,7 +1908,6 @@ static int get_atomic_mode(struct mlx5_ib_dev *dev,
 static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
			     struct mlx5_create_qp_params *params)
 {
-	struct mlx5_ib_create_qp *ucmd = params->ucmd;
	struct ib_qp_init_attr *attr = params->attr;
	u32 uidx = params->uidx;
	struct mlx5_ib_resources *devr = &dev->devr;
@@ -1928,8 +1927,6 @@ static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
	if (!in)
		return -ENOMEM;

-	if (MLX5_CAP_GEN(mdev, ece_support) && ucmd)
-		MLX5_SET(create_qp_in, in, ece, ucmd->ece_options);
	qpc = MLX5_ADDR_OF(create_qp_in, in, qpc);
	MLX5_SET(qpc, qpc, st, MLX5_QP_ST_XRC);

From patchwork Wed Jul 21 09:07:05 2021
X-Patchwork-Id: 12390407
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Mark Zhang, Christoph Hellwig
Subject: [PATCH rdma-next v1 2/7] RDMA/core: Delete duplicated and unreachable code
Date: Wed, 21 Jul 2021 12:07:05 +0300

From: Leon Romanovsky
ib_create_named_qp() is a kernel verb, and no in-kernel users that create XRC_INI QPs exist, so that QP path is unreachable. In addition, delete duplicated assignments of QP attributes from the initialization structure.

Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/core/core_priv.h |  1 +
 drivers/infiniband/core/verbs.c     | 22 ++++------------------
 2 files changed, 5 insertions(+), 18 deletions(-)

diff --git a/drivers/infiniband/core/core_priv.h b/drivers/infiniband/core/core_priv.h
index 5dfa1190e3ea..cc54d74930d6 100644
--- a/drivers/infiniband/core/core_priv.h
+++ b/drivers/infiniband/core/core_priv.h
@@ -342,6 +342,7 @@ _ib_create_qp(struct ib_device *dev, struct ib_pd *pd,
	qp->rwq_ind_tbl = attr->rwq_ind_tbl;
	qp->event_handler = attr->event_handler;
	qp->port = attr->port_num;
+	qp->qp_context = attr->qp_context;

	spin_lock_init(&qp->mr_lock);
	INIT_LIST_HEAD(&qp->rdma_mrs);

diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index 89c6987cb5eb..635642a3ecbc 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -1257,28 +1257,14 @@ struct ib_qp *ib_create_named_qp(struct ib_pd *pd,
		return xrc_qp;
	}

-	qp->event_handler = qp_init_attr->event_handler;
-	qp->qp_context = qp_init_attr->qp_context;
-	if (qp_init_attr->qp_type == IB_QPT_XRC_INI) {
-		qp->recv_cq = NULL;
-		qp->srq = NULL;
-	} else {
-		qp->recv_cq = qp_init_attr->recv_cq;
-		if (qp_init_attr->recv_cq)
-			atomic_inc(&qp_init_attr->recv_cq->usecnt);
-		qp->srq = qp_init_attr->srq;
-		if (qp->srq)
-			atomic_inc(&qp_init_attr->srq->usecnt);
-	}
-
-	qp->send_cq = qp_init_attr->send_cq;
-	qp->xrcd = NULL;
+	if (qp_init_attr->recv_cq)
+		atomic_inc(&qp_init_attr->recv_cq->usecnt);
+	if (qp->srq)
+		atomic_inc(&qp_init_attr->srq->usecnt);

	atomic_inc(&pd->usecnt);
	if (qp_init_attr->send_cq)
		atomic_inc(&qp_init_attr->send_cq->usecnt);
-	if (qp_init_attr->rwq_ind_tbl)
-		atomic_inc(&qp->rwq_ind_tbl->usecnt);

	if (qp_init_attr->cap.max_rdma_ctxs) {
		ret = rdma_rw_init_mrs(qp, qp_init_attr);
From patchwork Wed Jul 21 09:07:06 2021
X-Patchwork-Id: 12390429
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Mark Zhang, Christoph Hellwig
Subject: [PATCH rdma-next v1 3/7] RDMA/core: Remove protection from wrong in-kernel API usage
Date: Wed, 21 Jul 2021 12:07:06 +0300
Message-Id: <82dc7840fa802a17bf80a2f9a07c1433b204269c.1626857976.git.leonro@nvidia.com>

From: Leon Romanovsky

ib_create_named_qp() is a kernel verb that is not used with user-supplied attributes. In that case it is the ULP's responsibility to provide valid QP attributes, and the in-kernel API shouldn't check them, exactly like other functions that don't check device capabilities.

Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/core/verbs.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index 635642a3ecbc..2090f3c9f689 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -1219,16 +1219,6 @@ struct ib_qp *ib_create_named_qp(struct ib_pd *pd,
	struct ib_qp *qp;
	int ret;

-	if (qp_init_attr->rwq_ind_tbl &&
-	    (qp_init_attr->recv_cq ||
-	     qp_init_attr->srq || qp_init_attr->cap.max_recv_wr ||
-	     qp_init_attr->cap.max_recv_sge))
-		return ERR_PTR(-EINVAL);
-
-	if ((qp_init_attr->create_flags & IB_QP_CREATE_INTEGRITY_EN) &&
-	    !(device->attrs.device_cap_flags & IB_DEVICE_INTEGRITY_HANDOVER))
-		return ERR_PTR(-EINVAL);
-
	/*
	 * If the callers is using the RDMA API calculate the resources
	 * needed for the RDMA READ/WRITE operations.
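To make the division of responsibility concrete, here is an editorial sketch (not part of the series): with the checks above removed, an in-kernel ULP is trusted to hand the kernel verb already-valid attributes. The function name and the sizing numbers below are invented for illustration.

/*
 * Illustrative sketch only -- not part of the series.  A typical in-kernel
 * ULP builds a valid ib_qp_init_attr itself and calls the kernel verb.
 */
static struct ib_qp *example_ulp_create_qp(struct ib_pd *pd, struct ib_cq *cq)
{
	struct ib_qp_init_attr attr = {
		.qp_type     = IB_QPT_RC,
		.send_cq     = cq,
		.recv_cq     = cq,
		.sq_sig_type = IB_SIGNAL_REQ_WR,
		.cap = {
			.max_send_wr  = 16,
			.max_recv_wr  = 16,
			.max_send_sge = 1,
			.max_recv_sge = 1,
		},
	};

	/* Kernel verb: the ULP, not the core, is responsible for validity. */
	return ib_create_qp(pd, &attr);
}
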
From patchwork Wed Jul 21 09:07:07 2021
X-Patchwork-Id: 12390409
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Mark Zhang, Christoph Hellwig
Subject: [PATCH rdma-next v1 4/7] RDMA/core: Reorganize create QP low-level functions
Date: Wed, 21 Jul 2021 12:07:07 +0300
Message-Id: <7649b3ed0e8a1b2f05c1bb02b02d0f1af11ab060.1626857976.git.leonro@nvidia.com>

From: Leon Romanovsky

The low-level create QP function grew to be larger than any sensible inline function should be. The inline attribute is not really needed for that function, so implement it as an exported symbol instead.
Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/core_priv.h | 59 ++--------------------- drivers/infiniband/core/verbs.c | 75 +++++++++++++++++++++++++---- include/rdma/ib_verbs.h | 16 ++++-- 3 files changed, 82 insertions(+), 68 deletions(-) diff --git a/drivers/infiniband/core/core_priv.h b/drivers/infiniband/core/core_priv.h index cc54d74930d6..d28ced053222 100644 --- a/drivers/infiniband/core/core_priv.h +++ b/drivers/infiniband/core/core_priv.h @@ -316,61 +316,10 @@ struct ib_device *ib_device_get_by_index(const struct net *net, u32 index); void nldev_init(void); void nldev_exit(void); -static inline struct ib_qp * -_ib_create_qp(struct ib_device *dev, struct ib_pd *pd, - struct ib_qp_init_attr *attr, struct ib_udata *udata, - struct ib_uqp_object *uobj, const char *caller) -{ - struct ib_qp *qp; - int ret; - - if (!dev->ops.create_qp) - return ERR_PTR(-EOPNOTSUPP); - - qp = rdma_zalloc_drv_obj_numa(dev, ib_qp); - if (!qp) - return ERR_PTR(-ENOMEM); - - qp->device = dev; - qp->pd = pd; - qp->uobject = uobj; - qp->real_qp = qp; - - qp->qp_type = attr->qp_type; - qp->rwq_ind_tbl = attr->rwq_ind_tbl; - qp->srq = attr->srq; - qp->rwq_ind_tbl = attr->rwq_ind_tbl; - qp->event_handler = attr->event_handler; - qp->port = attr->port_num; - qp->qp_context = attr->qp_context; - - spin_lock_init(&qp->mr_lock); - INIT_LIST_HEAD(&qp->rdma_mrs); - INIT_LIST_HEAD(&qp->sig_mrs); - - rdma_restrack_new(&qp->res, RDMA_RESTRACK_QP); - WARN_ONCE(!udata && !caller, "Missing kernel QP owner"); - rdma_restrack_set_name(&qp->res, udata ? NULL : caller); - ret = dev->ops.create_qp(qp, attr, udata); - if (ret) - goto err_create; - - /* - * TODO: The mlx4 internally overwrites send_cq and recv_cq. - * Unfortunately, it is not an easy task to fix that driver. - */ - qp->send_cq = attr->send_cq; - qp->recv_cq = attr->recv_cq; - - rdma_restrack_add(&qp->res); - return qp; - -err_create: - rdma_restrack_put(&qp->res); - kfree(qp); - return ERR_PTR(ret); - -} +struct ib_qp *_ib_create_qp(struct ib_device *dev, struct ib_pd *pd, + struct ib_qp_init_attr *attr, + struct ib_udata *udata, struct ib_uqp_object *uobj, + const char *caller); struct rdma_dev_addr; int rdma_resolve_ip_route(struct sockaddr *src_addr, diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index 2090f3c9f689..7ee4daa72934 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -1201,19 +1201,76 @@ static struct ib_qp *create_xrc_qp_user(struct ib_qp *qp, } /** - * ib_create_named_qp - Creates a kernel QP associated with the specified protection - * domain. + * _ib_create_qp - Creates a QP associated with the specified protection domain + * @dev: IB device * @pd: The protection domain associated with the QP. - * @qp_init_attr: A list of initial attributes required to create the + * @attr: A list of initial attributes required to create the * QP. If QP creation succeeds, then the attributes are updated to * the actual capabilities of the created QP. + * @udata: User data + * @uobj: uverbs obect * @caller: caller's build-time module name - * - * NOTE: for user qp use ib_create_qp_user with valid udata! 
*/ -struct ib_qp *ib_create_named_qp(struct ib_pd *pd, - struct ib_qp_init_attr *qp_init_attr, - const char *caller) +struct ib_qp *_ib_create_qp(struct ib_device *dev, struct ib_pd *pd, + struct ib_qp_init_attr *attr, + struct ib_udata *udata, struct ib_uqp_object *uobj, + const char *caller) +{ + struct ib_qp *qp; + int ret; + + if (!dev->ops.create_qp) + return ERR_PTR(-EOPNOTSUPP); + + qp = rdma_zalloc_drv_obj_numa(dev, ib_qp); + if (!qp) + return ERR_PTR(-ENOMEM); + + qp->device = dev; + qp->pd = pd; + qp->uobject = uobj; + qp->real_qp = qp; + + qp->qp_type = attr->qp_type; + qp->rwq_ind_tbl = attr->rwq_ind_tbl; + qp->srq = attr->srq; + qp->rwq_ind_tbl = attr->rwq_ind_tbl; + qp->event_handler = attr->event_handler; + qp->port = attr->port_num; + qp->qp_context = attr->qp_context; + + spin_lock_init(&qp->mr_lock); + INIT_LIST_HEAD(&qp->rdma_mrs); + INIT_LIST_HEAD(&qp->sig_mrs); + + rdma_restrack_new(&qp->res, RDMA_RESTRACK_QP); + WARN_ONCE(!udata && !caller, "Missing kernel QP owner"); + rdma_restrack_set_name(&qp->res, udata ? NULL : caller); + ret = dev->ops.create_qp(qp, attr, udata); + if (ret) + goto err_create; + + /* + * TODO: The mlx4 internally overwrites send_cq and recv_cq. + * Unfortunately, it is not an easy task to fix that driver. + */ + qp->send_cq = attr->send_cq; + qp->recv_cq = attr->recv_cq; + + rdma_restrack_add(&qp->res); + return qp; + +err_create: + rdma_restrack_put(&qp->res); + kfree(qp); + return ERR_PTR(ret); + +} +EXPORT_SYMBOL(_ib_create_qp); + +struct ib_qp *ib_create_qp_kernel(struct ib_pd *pd, + struct ib_qp_init_attr *qp_init_attr, + const char *caller) { struct ib_device *device = pd ? pd->device : qp_init_attr->xrcd->device; struct ib_qp *qp; @@ -1280,7 +1337,7 @@ struct ib_qp *ib_create_named_qp(struct ib_pd *pd, return ERR_PTR(ret); } -EXPORT_SYMBOL(ib_create_named_qp); +EXPORT_SYMBOL(ib_create_qp_kernel); static const struct { int valid; diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 8cd7d1fc719f..4b50d9a3018a 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -3688,13 +3688,21 @@ static inline int ib_post_srq_recv(struct ib_srq *srq, bad_recv_wr ? : &dummy); } -struct ib_qp *ib_create_named_qp(struct ib_pd *pd, - struct ib_qp_init_attr *qp_init_attr, - const char *caller); +struct ib_qp *ib_create_qp_kernel(struct ib_pd *pd, + struct ib_qp_init_attr *qp_init_attr, + const char *caller); +/** + * ib_create_qp - Creates a kernel QP associated with the specific protection + * domain. + * @pd: The protection domain associated with the QP. + * @init_attr: A list of initial attributes required to create the + * QP. If QP creation succeeds, then the attributes are updated to + * the actual capabilities of the created QP. 
+ */ static inline struct ib_qp *ib_create_qp(struct ib_pd *pd, struct ib_qp_init_attr *init_attr) { - return ib_create_named_qp(pd, init_attr, KBUILD_MODNAME); + return ib_create_qp_kernel(pd, init_attr, KBUILD_MODNAME); } /**

From patchwork Wed Jul 21 09:07:08 2021
X-Patchwork-Id: 12390427
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Mark Zhang, Christoph Hellwig
Subject: [PATCH rdma-next v1 5/7] RDMA/core: Configure selinux QP during creation
Date: Wed, 21 Jul 2021 12:07:08 +0300
Message-Id: <4cb55670db663bcd45095dad67a59d7c2324e0b3.1626857976.git.leonro@nvidia.com>

From: Leon Romanovsky

All QP creation flows called ib_create_qp_security(), but each did so differently. That forced exclusion conditions for XRC_TGT, because such a QP already had its selinux configuration call. To fix this, move ib_create_qp_security() into the general QP creation routine.
Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/uverbs_cmd.c | 7 ------- drivers/infiniband/core/uverbs_std_types_qp.c | 6 ------ drivers/infiniband/core/verbs.c | 11 +++++++---- 3 files changed, 7 insertions(+), 17 deletions(-) diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c index 8c8ca7bce3ca..b5153200b8a8 100644 --- a/drivers/infiniband/core/uverbs_cmd.c +++ b/drivers/infiniband/core/uverbs_cmd.c @@ -1447,10 +1447,6 @@ static int create_qp(struct uverbs_attr_bundle *attrs, } if (cmd->qp_type != IB_QPT_XRC_TGT) { - ret = ib_create_qp_security(qp, device); - if (ret) - goto err_cb; - atomic_inc(&pd->usecnt); if (attr.send_cq) atomic_inc(&attr.send_cq->usecnt); @@ -1502,9 +1498,6 @@ static int create_qp(struct uverbs_attr_bundle *attrs, resp.response_length = uverbs_response_length(attrs, sizeof(resp)); return uverbs_response(attrs, &resp, sizeof(resp)); -err_cb: - ib_destroy_qp_user(qp, uverbs_get_cleared_udata(attrs)); - err_put: if (!IS_ERR(xrcd_uobj)) uobj_put_read(xrcd_uobj); diff --git a/drivers/infiniband/core/uverbs_std_types_qp.c b/drivers/infiniband/core/uverbs_std_types_qp.c index c00cfb5ed387..92812f6a21b0 100644 --- a/drivers/infiniband/core/uverbs_std_types_qp.c +++ b/drivers/infiniband/core/uverbs_std_types_qp.c @@ -280,12 +280,6 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QP_CREATE)( obj->uevent.uobject.object = qp; uverbs_finalize_uobj_create(attrs, UVERBS_ATTR_CREATE_QP_HANDLE); - if (attr.qp_type != IB_QPT_XRC_TGT) { - ret = ib_create_qp_security(qp, device); - if (ret) - return ret; - } - set_caps(&attr, &cap, false); ret = uverbs_copy_to_struct_or_zero(attrs, UVERBS_ATTR_CREATE_QP_RESP_CAP, &cap, diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index 7ee4daa72934..9f6f7df55c9a 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -1216,6 +1216,7 @@ struct ib_qp *_ib_create_qp(struct ib_device *dev, struct ib_pd *pd, struct ib_udata *udata, struct ib_uqp_object *uobj, const char *caller) { + struct ib_udata dummy = {}; struct ib_qp *qp; int ret; @@ -1257,9 +1258,15 @@ struct ib_qp *_ib_create_qp(struct ib_device *dev, struct ib_pd *pd, qp->send_cq = attr->send_cq; qp->recv_cq = attr->recv_cq; + ret = ib_create_qp_security(qp, dev); + if (ret) + goto err_security; + rdma_restrack_add(&qp->res); return qp; +err_security: + qp->device->ops.destroy_qp(qp, udata ? 
&dummy : NULL); err_create: rdma_restrack_put(&qp->res); kfree(qp); @@ -1289,10 +1296,6 @@ struct ib_qp *ib_create_qp_kernel(struct ib_pd *pd, return qp; - ret = ib_create_qp_security(qp, device); - if (ret) - goto err; - if (qp_init_attr->qp_type == IB_QPT_XRC_TGT) { struct ib_qp *xrc_qp = create_xrc_qp_user(qp, qp_init_attr);

From patchwork Wed Jul 21 09:07:09 2021
X-Patchwork-Id: 12390425
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Mark Zhang, Christoph Hellwig
Subject: [PATCH rdma-next v1 6/7] RDMA/core: Properly increment and decrement QP usecnts
Date: Wed, 21 Jul 2021 12:07:09 +0300
Message-Id: <6aefda3e2b8151ac49191eb0e20d47cabbeadf00.1626857976.git.leonro@nvidia.com>

From: Leon Romanovsky

The QP usecnts were incremented through the QP attributes structure but decremented through the QP itself. Rely on the ib_create_qp_user() code that initializes all QP parameters before returning to the user, and increment the usecnts exactly the way destroy decrements them.
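To show the intended pairing, a short editorial sketch (not part of the patch; the wrapper name is invented): a creation path increments through the QP it just received, and ib_destroy_qp_user() later decrements through the very same pointers, so the two sides cannot drift apart.

/*
 * Illustrative sketch only -- not part of the patch.  Mirrors the pattern
 * the uverbs handlers adopt in this series; error handling is elided.
 */
static int example_uverbs_style_create(struct ib_device *dev, struct ib_pd *pd,
				       struct ib_qp_init_attr *attr,
				       struct ib_udata *udata,
				       struct ib_uqp_object *uobj)
{
	struct ib_qp *qp = _ib_create_qp(dev, pd, attr, udata, uobj, NULL);

	if (IS_ERR(qp))
		return PTR_ERR(qp);

	ib_qp_usecnt_inc(qp);	/* pd, send_cq, recv_cq, srq, rwq_ind_tbl */
	/* ... later, ib_destroy_qp_user() calls ib_qp_usecnt_dec(qp) ... */
	return 0;
}
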
Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/core_priv.h | 2 + drivers/infiniband/core/uverbs_cmd.c | 13 +--- drivers/infiniband/core/uverbs_std_types_qp.c | 13 +--- drivers/infiniband/core/verbs.c | 60 ++++++++++--------- 4 files changed, 39 insertions(+), 49 deletions(-) diff --git a/drivers/infiniband/core/core_priv.h b/drivers/infiniband/core/core_priv.h index d28ced053222..d8f464b43dbc 100644 --- a/drivers/infiniband/core/core_priv.h +++ b/drivers/infiniband/core/core_priv.h @@ -320,6 +320,8 @@ struct ib_qp *_ib_create_qp(struct ib_device *dev, struct ib_pd *pd, struct ib_qp_init_attr *attr, struct ib_udata *udata, struct ib_uqp_object *uobj, const char *caller); +void ib_qp_usecnt_inc(struct ib_qp *qp); +void ib_qp_usecnt_dec(struct ib_qp *qp); struct rdma_dev_addr; int rdma_resolve_ip_route(struct sockaddr *src_addr, diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c index b5153200b8a8..62cafd768d89 100644 --- a/drivers/infiniband/core/uverbs_cmd.c +++ b/drivers/infiniband/core/uverbs_cmd.c @@ -1445,18 +1445,9 @@ static int create_qp(struct uverbs_attr_bundle *attrs, ret = PTR_ERR(qp); goto err_put; } + ib_qp_usecnt_inc(qp); - if (cmd->qp_type != IB_QPT_XRC_TGT) { - atomic_inc(&pd->usecnt); - if (attr.send_cq) - atomic_inc(&attr.send_cq->usecnt); - if (attr.recv_cq) - atomic_inc(&attr.recv_cq->usecnt); - if (attr.srq) - atomic_inc(&attr.srq->usecnt); - if (ind_tbl) - atomic_inc(&ind_tbl->usecnt); - } else { + if (cmd->qp_type == IB_QPT_XRC_TGT) { /* It is done in _ib_create_qp for other QP types */ qp->uobject = obj; } diff --git a/drivers/infiniband/core/uverbs_std_types_qp.c b/drivers/infiniband/core/uverbs_std_types_qp.c index 92812f6a21b0..a0e734735ba5 100644 --- a/drivers/infiniband/core/uverbs_std_types_qp.c +++ b/drivers/infiniband/core/uverbs_std_types_qp.c @@ -258,18 +258,9 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QP_CREATE)( ret = PTR_ERR(qp); goto err_put; } + ib_qp_usecnt_inc(qp); - if (attr.qp_type != IB_QPT_XRC_TGT) { - atomic_inc(&pd->usecnt); - if (attr.send_cq) - atomic_inc(&attr.send_cq->usecnt); - if (attr.recv_cq) - atomic_inc(&attr.recv_cq->usecnt); - if (attr.srq) - atomic_inc(&attr.srq->usecnt); - if (attr.rwq_ind_tbl) - atomic_inc(&attr.rwq_ind_tbl->usecnt); - } else { + if (attr.qp_type == IB_QPT_XRC_TGT) { obj->uxrcd = container_of(xrcd_uobj, struct ib_uxrcd_object, uobject); atomic_inc(&obj->uxrcd->refcnt); diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index 9f6f7df55c9a..65e344920513 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -1275,6 +1275,36 @@ struct ib_qp *_ib_create_qp(struct ib_device *dev, struct ib_pd *pd, } EXPORT_SYMBOL(_ib_create_qp); +void ib_qp_usecnt_inc(struct ib_qp *qp) +{ + if (qp->pd) + atomic_inc(&qp->pd->usecnt); + if (qp->send_cq) + atomic_inc(&qp->send_cq->usecnt); + if (qp->recv_cq) + atomic_inc(&qp->recv_cq->usecnt); + if (qp->srq) + atomic_inc(&qp->srq->usecnt); + if (qp->rwq_ind_tbl) + atomic_inc(&qp->rwq_ind_tbl->usecnt); +} +EXPORT_SYMBOL(ib_qp_usecnt_inc); + +void ib_qp_usecnt_dec(struct ib_qp *qp) +{ + if (qp->rwq_ind_tbl) + atomic_dec(&qp->rwq_ind_tbl->usecnt); + if (qp->srq) + atomic_dec(&qp->srq->usecnt); + if (qp->recv_cq) + atomic_dec(&qp->recv_cq->usecnt); + if (qp->send_cq) + atomic_dec(&qp->send_cq->usecnt); + if (qp->pd) + atomic_dec(&qp->pd->usecnt); +} +EXPORT_SYMBOL(ib_qp_usecnt_dec); + struct ib_qp *ib_create_qp_kernel(struct ib_pd *pd, struct ib_qp_init_attr *qp_init_attr, const char 
*caller) @@ -1307,14 +1337,7 @@ struct ib_qp *ib_create_qp_kernel(struct ib_pd *pd, return xrc_qp; } - if (qp_init_attr->recv_cq) - atomic_inc(&qp_init_attr->recv_cq->usecnt); - if (qp->srq) - atomic_inc(&qp_init_attr->srq->usecnt); - - atomic_inc(&pd->usecnt); - if (qp_init_attr->send_cq) - atomic_inc(&qp_init_attr->send_cq->usecnt); + ib_qp_usecnt_inc(qp); if (qp_init_attr->cap.max_rdma_ctxs) { ret = rdma_rw_init_mrs(qp, qp_init_attr); @@ -1972,10 +1995,6 @@ int ib_destroy_qp_user(struct ib_qp *qp, struct ib_udata *udata) { const struct ib_gid_attr *alt_path_sgid_attr = qp->alt_path_sgid_attr; const struct ib_gid_attr *av_sgid_attr = qp->av_sgid_attr; - struct ib_pd *pd; - struct ib_cq *scq, *rcq; - struct ib_srq *srq; - struct ib_rwq_ind_table *ind_tbl; struct ib_qp_security *sec; int ret; @@ -1987,11 +2006,6 @@ int ib_destroy_qp_user(struct ib_qp *qp, struct ib_udata *udata) if (qp->real_qp != qp) return __ib_destroy_shared_qp(qp); - pd = qp->pd; - scq = qp->send_cq; - rcq = qp->recv_cq; - srq = qp->srq; - ind_tbl = qp->rwq_ind_tbl; sec = qp->qp_sec; if (sec) ib_destroy_qp_security_begin(sec); @@ -2011,16 +2025,8 @@ int ib_destroy_qp_user(struct ib_qp *qp, struct ib_udata *udata) rdma_put_gid_attr(alt_path_sgid_attr); if (av_sgid_attr) rdma_put_gid_attr(av_sgid_attr); - if (pd) - atomic_dec(&pd->usecnt); - if (scq) - atomic_dec(&scq->usecnt); - if (rcq) - atomic_dec(&rcq->usecnt); - if (srq) - atomic_dec(&srq->usecnt); - if (ind_tbl) - atomic_dec(&ind_tbl->usecnt); + + ib_qp_usecnt_dec(qp); if (sec) ib_destroy_qp_security_end(sec); From patchwork Wed Jul 21 09:07:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 12390431 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-20.5 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1A7B3C636C9 for ; Wed, 21 Jul 2021 09:24:40 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 05B296120A for ; Wed, 21 Jul 2021 09:24:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234924AbhGUImY (ORCPT ); Wed, 21 Jul 2021 04:42:24 -0400 Received: from mail.kernel.org ([198.145.29.99]:56610 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236904AbhGUI1H (ORCPT ); Wed, 21 Jul 2021 04:27:07 -0400 Received: by mail.kernel.org (Postfix) with ESMTPSA id 0F33A6120E; Wed, 21 Jul 2021 09:07:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1626858464; bh=hmBTCuYmdKT0R/A7MnX2mud32dN03kh+Obf31pt3ank=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=QQo+Zb7cetEQOqBOm53Uh77CVnEicfR1sx6QMRjDrva+kgUiEu08neoQaodyHM2o5 xW9YPluvymm61tDeouLPT4vWLyOZMaHxHhwUGiiTYqCG63Jbl6zuRiGAvNV+UOaXJi 19f4RK4wwaNmuhng0K1IRIUCAMryFjvUUU+wY+Qc6caJm+/RXiWVAPzUlAluKchYUv Dkkop/UDQtXwotZT4sLQE/6M0zm9DmoMosOpTDmRSBz5PU6bdHXm0WsOo3DGdoEPLt 4oZo3bXOlohFNnrMwxRfUH3Vx9dBjspeCBE3/gCjmJA1Rw6+bEw6JycpPv4iO9JCNB pN7gf2Fgvsi/g== From: Leon Romanovsky To: Doug Ledford , Jason Gunthorpe 
Cc: Leon Romanovsky , linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Mark Zhang , Christoph Hellwig Subject: [PATCH rdma-next v1 7/7] RDMA/core: Create clean QP creations interface for uverbs Date: Wed, 21 Jul 2021 12:07:10 +0300 Message-Id: X-Mailer: git-send-email 2.31.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Leon Romanovsky Unify create QP creation interface to make clean approach to create XRC_TGT and regular QPs. Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/core_priv.h | 9 +-- drivers/infiniband/core/uverbs_cmd.c | 13 +--- drivers/infiniband/core/uverbs_std_types_qp.c | 10 +-- drivers/infiniband/core/verbs.c | 72 +++++++++++-------- 4 files changed, 52 insertions(+), 52 deletions(-) diff --git a/drivers/infiniband/core/core_priv.h b/drivers/infiniband/core/core_priv.h index d8f464b43dbc..f66f48d860ec 100644 --- a/drivers/infiniband/core/core_priv.h +++ b/drivers/infiniband/core/core_priv.h @@ -316,10 +316,11 @@ struct ib_device *ib_device_get_by_index(const struct net *net, u32 index); void nldev_init(void); void nldev_exit(void); -struct ib_qp *_ib_create_qp(struct ib_device *dev, struct ib_pd *pd, - struct ib_qp_init_attr *attr, - struct ib_udata *udata, struct ib_uqp_object *uobj, - const char *caller); +struct ib_qp *ib_create_qp_user(struct ib_device *dev, struct ib_pd *pd, + struct ib_qp_init_attr *attr, + struct ib_udata *udata, + struct ib_uqp_object *uobj, const char *caller); + void ib_qp_usecnt_inc(struct ib_qp *qp); void ib_qp_usecnt_dec(struct ib_qp *qp); diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c index 62cafd768d89..740e6b2efe0e 100644 --- a/drivers/infiniband/core/uverbs_cmd.c +++ b/drivers/infiniband/core/uverbs_cmd.c @@ -1435,23 +1435,14 @@ static int create_qp(struct uverbs_attr_bundle *attrs, attr.source_qpn = cmd->source_qpn; } - if (cmd->qp_type == IB_QPT_XRC_TGT) - qp = ib_create_qp(pd, &attr); - else - qp = _ib_create_qp(device, pd, &attr, &attrs->driver_udata, obj, - NULL); - + qp = ib_create_qp_user(device, pd, &attr, &attrs->driver_udata, obj, + KBUILD_MODNAME); if (IS_ERR(qp)) { ret = PTR_ERR(qp); goto err_put; } ib_qp_usecnt_inc(qp); - if (cmd->qp_type == IB_QPT_XRC_TGT) { - /* It is done in _ib_create_qp for other QP types */ - qp->uobject = obj; - } - obj->uevent.uobject.object = qp; obj->uevent.event_file = READ_ONCE(attrs->ufile->default_async_file); if (obj->uevent.event_file) diff --git a/drivers/infiniband/core/uverbs_std_types_qp.c b/drivers/infiniband/core/uverbs_std_types_qp.c index a0e734735ba5..dd1075466f61 100644 --- a/drivers/infiniband/core/uverbs_std_types_qp.c +++ b/drivers/infiniband/core/uverbs_std_types_qp.c @@ -248,12 +248,8 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QP_CREATE)( set_caps(&attr, &cap, true); mutex_init(&obj->mcast_lock); - if (attr.qp_type == IB_QPT_XRC_TGT) - qp = ib_create_qp(pd, &attr); - else - qp = _ib_create_qp(device, pd, &attr, &attrs->driver_udata, obj, - NULL); - + qp = ib_create_qp_user(device, pd, &attr, &attrs->driver_udata, obj, + KBUILD_MODNAME); if (IS_ERR(qp)) { ret = PTR_ERR(qp); goto err_put; @@ -264,8 +260,6 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QP_CREATE)( obj->uxrcd = container_of(xrcd_uobj, struct ib_uxrcd_object, uobject); atomic_inc(&obj->uxrcd->refcnt); - /* It is done in _ib_create_qp for other QP types */ - qp->uobject = obj; } obj->uevent.uobject.object = qp; diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c 
index 65e344920513..fa07bdd23104 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -1200,21 +1200,10 @@ static struct ib_qp *create_xrc_qp_user(struct ib_qp *qp, return qp; } -/** - * _ib_create_qp - Creates a QP associated with the specified protection domain - * @dev: IB device - * @pd: The protection domain associated with the QP. - * @attr: A list of initial attributes required to create the - * QP. If QP creation succeeds, then the attributes are updated to - * the actual capabilities of the created QP. - * @udata: User data - * @uobj: uverbs obect - * @caller: caller's build-time module name - */ -struct ib_qp *_ib_create_qp(struct ib_device *dev, struct ib_pd *pd, - struct ib_qp_init_attr *attr, - struct ib_udata *udata, struct ib_uqp_object *uobj, - const char *caller) +static struct ib_qp *create_qp(struct ib_device *dev, struct ib_pd *pd, + struct ib_qp_init_attr *attr, + struct ib_udata *udata, + struct ib_uqp_object *uobj, const char *caller) { struct ib_udata dummy = {}; struct ib_qp *qp; @@ -1273,7 +1262,43 @@ struct ib_qp *_ib_create_qp(struct ib_device *dev, struct ib_pd *pd, return ERR_PTR(ret); } -EXPORT_SYMBOL(_ib_create_qp); + +/** + * ib_create_qp_user - Creates a QP associated with the specified protection + * domain. + * @dev: IB device + * @pd: The protection domain associated with the QP. + * @attr: A list of initial attributes required to create the + * QP. If QP creation succeeds, then the attributes are updated to + * the actual capabilities of the created QP. + * @udata: User data + * @uobj: uverbs obect + * @caller: caller's build-time module name + */ +struct ib_qp *ib_create_qp_user(struct ib_device *dev, struct ib_pd *pd, + struct ib_qp_init_attr *attr, + struct ib_udata *udata, + struct ib_uqp_object *uobj, const char *caller) +{ + struct ib_qp *qp, *xrc_qp; + + if (attr->qp_type == IB_QPT_XRC_TGT) + qp = create_qp(dev, pd, attr, NULL, NULL, caller); + else + qp = create_qp(dev, pd, attr, udata, uobj, NULL); + if (attr->qp_type != IB_QPT_XRC_TGT || IS_ERR(qp)) + return qp; + + xrc_qp = create_xrc_qp_user(qp, attr); + if (IS_ERR(xrc_qp)) { + ib_destroy_qp(qp); + return xrc_qp; + } + + xrc_qp->uobject = uobj; + return xrc_qp; +} +EXPORT_SYMBOL(ib_create_qp_user); void ib_qp_usecnt_inc(struct ib_qp *qp) { @@ -1309,7 +1334,7 @@ struct ib_qp *ib_create_qp_kernel(struct ib_pd *pd, struct ib_qp_init_attr *qp_init_attr, const char *caller) { - struct ib_device *device = pd ? pd->device : qp_init_attr->xrcd->device; + struct ib_device *device = pd->device; struct ib_qp *qp; int ret; @@ -1322,21 +1347,10 @@ struct ib_qp *ib_create_qp_kernel(struct ib_pd *pd, if (qp_init_attr->cap.max_rdma_ctxs) rdma_rw_init_qp(device, qp_init_attr); - qp = _ib_create_qp(device, pd, qp_init_attr, NULL, NULL, caller); + qp = create_qp(device, pd, qp_init_attr, NULL, NULL, caller); if (IS_ERR(qp)) return qp; - if (qp_init_attr->qp_type == IB_QPT_XRC_TGT) { - struct ib_qp *xrc_qp = - create_xrc_qp_user(qp, qp_init_attr); - - if (IS_ERR(xrc_qp)) { - ret = PTR_ERR(xrc_qp); - goto err; - } - return xrc_qp; - } - ib_qp_usecnt_inc(qp); if (qp_init_attr->cap.max_rdma_ctxs) {