From patchwork Sun Oct 14 07:17:24 2018
X-Patchwork-Submitter: Shamir Rabinovitch
X-Patchwork-Id: 10640581
From: Shamir Rabinovitch
To: linux-rdma@vger.kernel.org
Cc: dledford@redhat.com, jgg@ziepe.ca, leon@kernel.org, santosh.shilimkar@oracle.com, shamir.rabinovitch@oracle.com
Subject: [PATCH v2 1/4] IB/{sw,hw}: ib_pd should not be used to get the ib_ucontext
Date: Sun, 14 Oct 2018 10:17:24 +0300
Message-Id: <20181014071727.2271-2-shamir.rabinovitch@oracle.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181014071727.2271-1-shamir.rabinovitch@oracle.com>
References: <20181014071727.2271-1-shamir.rabinovitch@oracle.com>
Prepare the code for the shared ib_x model. A future patch will remove the
ucontext information from ib_pd. Prior patches added the ucontext to
ib_udata and used the udata to convey the ucontext to the core, sw and
driver layers. Stop using the ucontext from ib_pd.

Signed-off-by: Shamir Rabinovitch
---
 drivers/infiniband/hw/bnxt_re/ib_verbs.c | 8 +-
 drivers/infiniband/hw/cxgb3/iwch_provider.c | 5 +-
 drivers/infiniband/hw/cxgb4/mem.c | 3 +-
 drivers/infiniband/hw/cxgb4/qp.c | 4 +-
 drivers/infiniband/hw/i40iw/i40iw_verbs.c | 9 ++-
 drivers/infiniband/hw/mlx4/mr.c | 2 +-
 drivers/infiniband/hw/mlx4/qp.c | 18 +++--
 drivers/infiniband/hw/mlx4/srq.c | 11 ++-
 drivers/infiniband/hw/mlx5/mlx5_ib.h | 3 +-
 drivers/infiniband/hw/mlx5/mr.c | 11 +--
 drivers/infiniband/hw/mlx5/odp.c | 5 +-
 drivers/infiniband/hw/mlx5/qp.c | 80 ++++++++++---------
 drivers/infiniband/hw/mlx5/srq.c | 19 +++--
 drivers/infiniband/hw/mthca/mthca_dev.h | 3 +-
 drivers/infiniband/hw/mthca/mthca_provider.c | 14 ++--
 drivers/infiniband/hw/mthca/mthca_srq.c | 39 +++++----
 drivers/infiniband/hw/nes/nes_verbs.c | 19 +++--
 drivers/infiniband/hw/ocrdma/ocrdma_verbs.c | 2 +-
 drivers/infiniband/hw/qedr/verbs.c | 8 +-
 drivers/infiniband/hw/usnic/usnic_ib_verbs.c | 2 +-
 drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c | 2 +-
 drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c | 9 ++-
 drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c | 2 +-
 drivers/infiniband/sw/rdmavt/mr.c | 2 +-
 drivers/infiniband/sw/rdmavt/qp.c | 7 +-
 drivers/infiniband/sw/rdmavt/srq.c | 2 +-
 drivers/infiniband/sw/rxe/rxe_loc.h | 3 +-
 drivers/infiniband/sw/rxe/rxe_mr.c | 3 +-
 drivers/infiniband/sw/rxe/rxe_qp.c | 5 +-
 drivers/infiniband/sw/rxe/rxe_verbs.c | 4 +-
 30 files changed, 174 insertions(+), 130 deletions(-)

diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
index cceaa6133903..90790d4a5cde 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -730,7 +730,7 @@ struct ib_ah *bnxt_re_create_ah(struct ib_pd *ib_pd, /* Write AVID to shared page.
*/ if (rdma_is_user_pd(ib_pd)) { - struct ib_ucontext *ib_uctx = ib_pd->uobject->context; + struct ib_ucontext *ib_uctx = rdma_udata_context(udata); struct bnxt_re_ucontext *uctx; unsigned long flag; u32 *wrptr; @@ -880,7 +880,7 @@ static int bnxt_re_init_user_qp(struct bnxt_re_dev *rdev, struct bnxt_re_pd *pd, struct bnxt_qplib_qp *qplib_qp = &qp->qplib_qp; struct ib_umem *umem; int bytes = 0; - struct ib_ucontext *context = pd->ib_pd.uobject->context; + struct ib_ucontext *context = rdma_udata_context(udata); struct bnxt_re_ucontext *cntx = container_of(context, struct bnxt_re_ucontext, ib_uctx); @@ -1358,7 +1358,7 @@ static int bnxt_re_init_user_srq(struct bnxt_re_dev *rdev, struct bnxt_qplib_srq *qplib_srq = &srq->qplib_srq; struct ib_umem *umem; int bytes = 0; - struct ib_ucontext *context = pd->ib_pd.uobject->context; + struct ib_ucontext *context = rdma_udata_context(udata); struct bnxt_re_ucontext *cntx = container_of(context, struct bnxt_re_ucontext, ib_uctx); @@ -3585,7 +3585,7 @@ struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length, /* The fixed portion of the rkey is the same as the lkey */ mr->ib_mr.rkey = mr->qplib_mr.rkey; - umem = ib_umem_get(ib_pd->uobject->context, start, length, + umem = ib_umem_get(rdma_udata_context(udata), start, length, mr_access_flags, 0); if (IS_ERR(umem)) { dev_err(rdev_to_dev(rdev), "Failed to get umem"); diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c index 4d1a86999a2a..29eb525c0b2c 100644 --- a/drivers/infiniband/hw/cxgb3/iwch_provider.c +++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c @@ -540,7 +540,8 @@ static struct ib_mr *iwch_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, mhp->rhp = rhp; - mhp->umem = ib_umem_get(pd->uobject->context, start, length, acc, 0); + mhp->umem = ib_umem_get(rdma_udata_context(udata), start, length, acc, + 0); if (IS_ERR(mhp->umem)) { err = PTR_ERR(mhp->umem); kfree(mhp); @@ -837,7 +838,7 @@ static struct ib_qp *iwch_create_qp(struct ib_pd *pd, * Kernel users need more wq space for fastreg WRs which can take * 2 WR fragments. */ - ucontext = pd->uobject ? to_iwch_ucontext(pd->uobject->context) : NULL; + ucontext = udata ? to_iwch_ucontext(rdma_udata_context(udata)) : NULL; if (!ucontext && wqsize < (rqsize + (2 * sqsize))) wqsize = roundup_pow_of_two(rqsize + roundup_pow_of_two(attrs->cap.max_send_wr * 2)); diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c index 30d5521c5c21..43ea52d74843 100644 --- a/drivers/infiniband/hw/cxgb4/mem.c +++ b/drivers/infiniband/hw/cxgb4/mem.c @@ -537,7 +537,8 @@ struct ib_mr *c4iw_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, mhp->rhp = rhp; - mhp->umem = ib_umem_get(pd->uobject->context, start, length, acc, 0); + mhp->umem = ib_umem_get(rdma_udata_context(udata), start, length, acc, + 0); if (IS_ERR(mhp->umem)) goto err_free_skb; diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c index 6103c1a6caec..5797034fbb0a 100644 --- a/drivers/infiniband/hw/cxgb4/qp.c +++ b/drivers/infiniband/hw/cxgb4/qp.c @@ -2163,7 +2163,7 @@ struct ib_qp *c4iw_create_qp(struct ib_pd *pd, struct ib_qp_init_attr *attrs, if (sqsize < 8) sqsize = 8; - ucontext = pd->uobject ? to_c4iw_ucontext(pd->uobject->context) : NULL; + ucontext = udata ? 
to_c4iw_ucontext(rdma_udata_context(udata)) : NULL; qhp = kzalloc(sizeof(*qhp), GFP_KERNEL); if (!qhp) @@ -2713,7 +2713,7 @@ struct ib_srq *c4iw_create_srq(struct ib_pd *pd, struct ib_srq_init_attr *attrs, rqsize = attrs->attr.max_wr + 1; rqsize = roundup_pow_of_two(max_t(u16, rqsize, 16)); - ucontext = pd->uobject ? to_c4iw_ucontext(pd->uobject->context) : NULL; + ucontext = udata ? to_c4iw_ucontext(rdma_udata_context(udata)) : NULL; srq = kzalloc(sizeof(*srq), GFP_KERNEL); if (!srq) diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c index 408a65fc00b3..c55ce272ffdc 100644 --- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c +++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c @@ -674,8 +674,9 @@ static struct ib_qp *i40iw_create_qp(struct ib_pd *ibpd, goto error; } iwqp->ctx_info.qp_compl_ctx = req.user_compl_ctx; + iwqp->user_mode = 1; - ucontext = to_ucontext(ibpd->uobject->context); + ucontext = to_ucontext(rdma_udata_context(udata)); if (req.user_wqe_buffers) { struct i40iw_pbl *iwpbl; @@ -1854,7 +1855,7 @@ static struct ib_mr *i40iw_reg_user_mr(struct ib_pd *pd, if (length > I40IW_MAX_MR_SIZE) return ERR_PTR(-EINVAL); - region = ib_umem_get(pd->uobject->context, start, length, acc, 0); + region = ib_umem_get(rdma_udata_context(udata), start, length, acc, 0); if (IS_ERR(region)) return (struct ib_mr *)region; @@ -1874,7 +1875,7 @@ static struct ib_mr *i40iw_reg_user_mr(struct ib_pd *pd, iwmr->region = region; iwmr->ibmr.pd = pd; iwmr->ibmr.device = pd->device; - ucontext = to_ucontext(pd->uobject->context); + ucontext = to_ucontext(rdma_udata_context(udata)); iwmr->page_size = PAGE_SIZE; iwmr->page_msk = PAGE_MASK; @@ -2095,7 +2096,7 @@ static int i40iw_dereg_mr(struct ib_mr *ib_mr, struct ib_udata *udata) if (rdma_is_user_pd(ibpd)) { struct i40iw_ucontext *ucontext; - ucontext = to_ucontext(ibpd->uobject->context); + ucontext = to_ucontext(rdma_udata_context(udata)); i40iw_del_memlist(iwmr, ucontext); } if (iwpbl->pbl_allocated && iwmr->type != IW_MEMREG_TYPE_QP) diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c index dfd5eee13dbe..0e8363aa49f9 100644 --- a/drivers/infiniband/hw/mlx4/mr.c +++ b/drivers/infiniband/hw/mlx4/mr.c @@ -415,7 +415,7 @@ struct ib_mr *mlx4_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, if (!mr) return ERR_PTR(-ENOMEM); - mr->umem = mlx4_get_umem_mr(pd->uobject->context, start, length, + mr->umem = mlx4_get_umem_mr(rdma_udata_context(udata), start, length, virt_addr, access_flags); if (IS_ERR(mr->umem)) { err = PTR_ERR(mr->umem); diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c index 78e0a430e6be..c7d535decf57 100644 --- a/drivers/infiniband/hw/mlx4/qp.c +++ b/drivers/infiniband/hw/mlx4/qp.c @@ -1015,7 +1015,7 @@ static int create_qp_common(struct mlx4_ib_dev *dev, struct ib_pd *pd, (qp->sq.wqe_cnt << qp->sq.wqe_shift); } - qp->umem = ib_umem_get(pd->uobject->context, + qp->umem = ib_umem_get(rdma_udata_context(udata), (src == MLX4_IB_QP_SRC) ? ucmd.qp.buf_addr : ucmd.wq.buf_addr, qp->buf_size, 0, 0); if (IS_ERR(qp->umem)) { @@ -1035,7 +1035,8 @@ static int create_qp_common(struct mlx4_ib_dev *dev, struct ib_pd *pd, goto err_mtt; if (qp_has_rq(init_attr)) { - err = mlx4_ib_db_map_user(to_mucontext(pd->uobject->context), + err = mlx4_ib_db_map_user( + to_mucontext(rdma_udata_context(udata)), (src == MLX4_IB_QP_SRC) ? 
ucmd.qp.db_addr : ucmd.wq.db_addr, &qp->db); if (err) @@ -1108,8 +1109,8 @@ static int create_qp_common(struct mlx4_ib_dev *dev, struct ib_pd *pd, } } } else if (src == MLX4_IB_RWQ_SRC) { - err = mlx4_ib_alloc_wqn(to_mucontext(pd->uobject->context), qp, - range_size, &qpn); + err = mlx4_ib_alloc_wqn(to_mucontext(rdma_udata_context(udata)), + qp, range_size, &qpn); if (err) goto err_wrid; } else { @@ -1180,8 +1181,9 @@ static int create_qp_common(struct mlx4_ib_dev *dev, struct ib_pd *pd, if (qp->flags & MLX4_IB_QP_NETIF) mlx4_ib_steer_qp_free(dev, qpn, 1); else if (src == MLX4_IB_RWQ_SRC) - mlx4_ib_release_wqn(to_mucontext(pd->uobject->context), - qp, 0); + mlx4_ib_release_wqn( + to_mucontext(rdma_udata_context(udata)), + qp, 0); else mlx4_qp_release_range(dev->dev, qpn, 1); } @@ -1191,7 +1193,9 @@ static int create_qp_common(struct mlx4_ib_dev *dev, struct ib_pd *pd, err_wrid: if (qp->umem) { if (qp_has_rq(init_attr)) - mlx4_ib_db_unmap_user(to_mucontext(pd->uobject->context), &qp->db); + mlx4_ib_db_unmap_user( + to_mucontext(rdma_udata_context(udata)), + &qp->db); } else { kvfree(qp->sq.wrid); kvfree(qp->rq.wrid); diff --git a/drivers/infiniband/hw/mlx4/srq.c b/drivers/infiniband/hw/mlx4/srq.c index b821a0883864..7316e3226b82 100644 --- a/drivers/infiniband/hw/mlx4/srq.c +++ b/drivers/infiniband/hw/mlx4/srq.c @@ -113,7 +113,8 @@ struct ib_srq *mlx4_ib_create_srq(struct ib_pd *pd, goto err_srq; } - srq->umem = ib_umem_get(pd->uobject->context, ucmd.buf_addr, + srq->umem = ib_umem_get(rdma_udata_context(udata), + ucmd.buf_addr, buf_size, 0, 0); if (IS_ERR(srq->umem)) { err = PTR_ERR(srq->umem); @@ -129,8 +130,9 @@ struct ib_srq *mlx4_ib_create_srq(struct ib_pd *pd, if (err) goto err_mtt; - err = mlx4_ib_db_map_user(to_mucontext(pd->uobject->context), - ucmd.db_addr, &srq->db); + err = mlx4_ib_db_map_user( + to_mucontext(rdma_udata_context(udata)), + ucmd.db_addr, &srq->db); if (err) goto err_mtt; } else { @@ -203,7 +205,8 @@ struct ib_srq *mlx4_ib_create_srq(struct ib_pd *pd, err_wrid: if (rdma_is_user_pd(pd)) - mlx4_ib_db_unmap_user(to_mucontext(pd->uobject->context), &srq->db); + mlx4_ib_db_unmap_user(to_mucontext(rdma_udata_context(udata)), + &srq->db); else kvfree(srq->wrid); diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index fe1032ced752..28b9bc1ec13b 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -1075,7 +1075,8 @@ int mlx5_ib_dealloc_mw(struct ib_mw *mw); int mlx5_ib_update_xlt(struct mlx5_ib_mr *mr, u64 idx, int npages, int page_shift, int flags); struct mlx5_ib_mr *mlx5_ib_alloc_implicit_mr(struct mlx5_ib_pd *pd, - int access_flags); + int access_flags, + struct ib_udata *udata); void mlx5_ib_free_implicit_mr(struct mlx5_ib_mr *mr); int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start, u64 length, u64 virt_addr, int access_flags, diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index a91c47ac2948..950447b536ce 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -847,7 +847,7 @@ static int mr_cache_max_order(struct mlx5_ib_dev *dev) static int mr_umem_get(struct ib_pd *pd, u64 start, u64 length, int access_flags, struct ib_umem **umem, int *npages, int *page_shift, int *ncont, - int *order) + int *order, struct ib_udata *udata) { struct mlx5_ib_dev *dev = to_mdev(pd->device); struct ib_umem *u; @@ -855,7 +855,8 @@ static int mr_umem_get(struct ib_pd *pd, u64 start, u64 length, *umem = NULL; - u = 
ib_umem_get(pd->uobject->context, start, length, access_flags, 0); + u = ib_umem_get(rdma_udata_context(udata), start, length, access_flags, + 0); err = PTR_ERR_OR_ZERO(u); if (err) { mlx5_ib_dbg(dev, "umem get failed (%d)\n", err); @@ -1319,7 +1320,7 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, !(dev->odp_caps.general_caps & IB_ODP_SUPPORT_IMPLICIT)) return ERR_PTR(-EINVAL); - mr = mlx5_ib_alloc_implicit_mr(to_mpd(pd), access_flags); + mr = mlx5_ib_alloc_implicit_mr(to_mpd(pd), access_flags, udata); if (IS_ERR(mr)) return ERR_CAST(mr); return &mr->ibmr; @@ -1327,7 +1328,7 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, #endif err = mr_umem_get(pd, start, length, access_flags, &umem, &npages, - &page_shift, &ncont, &order); + &page_shift, &ncont, &order, udata); if (err < 0) return ERR_PTR(err); @@ -1478,7 +1479,7 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start, ib_umem_release(mr->umem); mr->umem = NULL; err = mr_umem_get(pd, addr, len, access_flags, &mr->umem, - &npages, &page_shift, &ncont, &order); + &npages, &page_shift, &ncont, &order, udata); if (err) goto err; } diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c index b04eb6775326..4850c0f4cdad 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -446,9 +446,10 @@ static struct ib_umem_odp *implicit_mr_get_data(struct mlx5_ib_mr *mr, } struct mlx5_ib_mr *mlx5_ib_alloc_implicit_mr(struct mlx5_ib_pd *pd, - int access_flags) + int access_flags, + struct ib_udata *udata) { - struct ib_ucontext *ctx = pd->ibpd.uobject->context; + struct ib_ucontext *ctx = rdma_udata_context(udata); struct mlx5_ib_mr *imr; struct ib_umem *umem; diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c index edfbcfa90555..56628ddf1615 100644 --- a/drivers/infiniband/hw/mlx5/qp.c +++ b/drivers/infiniband/hw/mlx5/qp.c @@ -665,11 +665,11 @@ static int mlx5_ib_umem_get(struct mlx5_ib_dev *dev, unsigned long addr, size_t size, struct ib_umem **umem, int *npages, int *page_shift, int *ncont, - u32 *offset) + u32 *offset, struct ib_udata *udata) { int err; - *umem = ib_umem_get(pd->uobject->context, addr, size, 0, 0); + *umem = ib_umem_get(rdma_udata_context(udata), addr, size, 0, 0); if (IS_ERR(*umem)) { mlx5_ib_dbg(dev, "umem_get failed\n"); return PTR_ERR(*umem); @@ -696,14 +696,14 @@ static int mlx5_ib_umem_get(struct mlx5_ib_dev *dev, } static void destroy_user_rq(struct mlx5_ib_dev *dev, struct ib_pd *pd, - struct mlx5_ib_rwq *rwq) + struct mlx5_ib_rwq *rwq, struct ib_udata *udata) { struct mlx5_ib_ucontext *context; if (rwq->create_flags & MLX5_IB_WQ_FLAGS_DELAY_DROP) atomic_dec(&dev->delay_drop.rqs_cnt); - context = to_mucontext(pd->uobject->context); + context = to_mucontext(rdma_udata_context(udata)); mlx5_ib_db_unmap_user(context, &rwq->db); if (rwq->umem) ib_umem_release(rwq->umem); @@ -711,7 +711,8 @@ static void destroy_user_rq(struct mlx5_ib_dev *dev, struct ib_pd *pd, static int create_user_rq(struct mlx5_ib_dev *dev, struct ib_pd *pd, struct mlx5_ib_rwq *rwq, - struct mlx5_ib_create_wq *ucmd) + struct mlx5_ib_create_wq *ucmd, + struct ib_udata *udata) { struct mlx5_ib_ucontext *context; int page_shift = 0; @@ -723,8 +724,8 @@ static int create_user_rq(struct mlx5_ib_dev *dev, struct ib_pd *pd, if (!ucmd->buf_addr) return -EINVAL; - context = to_mucontext(pd->uobject->context); - rwq->umem = ib_umem_get(pd->uobject->context, ucmd->buf_addr, + context = 
to_mucontext(rdma_udata_context(udata)); + rwq->umem = ib_umem_get(rdma_udata_context(udata), ucmd->buf_addr, rwq->buf_size, 0, 0); if (IS_ERR(rwq->umem)) { mlx5_ib_dbg(dev, "umem_get failed\n"); @@ -797,7 +798,7 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd, return err; } - context = to_mucontext(pd->uobject->context); + context = to_mucontext(rdma_udata_context(udata)); if (ucmd.flags & MLX5_QP_FLAG_BFREG_INDEX) { uar_index = bfregn_to_uar_index(dev, &context->bfregi, ucmd.bfreg_index, true); @@ -836,7 +837,7 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd, err = mlx5_ib_umem_get(dev, pd, ubuffer->buf_addr, ubuffer->buf_size, &ubuffer->umem, &npages, &page_shift, - &ncont, &offset); + &ncont, &offset, udata); if (err) goto err_bfreg; } else { @@ -900,11 +901,12 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd, } static void destroy_qp_user(struct mlx5_ib_dev *dev, struct ib_pd *pd, - struct mlx5_ib_qp *qp, struct mlx5_ib_qp_base *base) + struct mlx5_ib_qp *qp, struct mlx5_ib_qp_base *base, + struct ib_udata *udata) { struct mlx5_ib_ucontext *context; - context = to_mucontext(pd->uobject->context); + context = to_mucontext(rdma_udata_context(udata)); mlx5_ib_db_unmap_user(context, &qp->db); if (base->ubuffer.umem) ib_umem_release(base->ubuffer.umem); @@ -1090,7 +1092,8 @@ static void destroy_flow_rule_vport_sq(struct mlx5_ib_dev *dev, static int create_raw_packet_qp_sq(struct mlx5_ib_dev *dev, struct mlx5_ib_sq *sq, void *qpin, - struct ib_pd *pd) + struct ib_pd *pd, + struct ib_udata *udata) { struct mlx5_ib_ubuffer *ubuffer = &sq->ubuffer; __be64 *pas; @@ -1107,7 +1110,7 @@ static int create_raw_packet_qp_sq(struct mlx5_ib_dev *dev, err = mlx5_ib_umem_get(dev, pd, ubuffer->buf_addr, ubuffer->buf_size, &sq->ubuffer.umem, &npages, &page_shift, - &ncont, &offset); + &ncont, &offset, udata); if (err) return err; @@ -1332,8 +1335,7 @@ static int create_raw_packet_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp, struct mlx5_ib_raw_packet_qp *raw_packet_qp = &qp->raw_packet_qp; struct mlx5_ib_sq *sq = &raw_packet_qp->sq; struct mlx5_ib_rq *rq = &raw_packet_qp->rq; - struct ib_uobject *uobj = pd->uobject; - struct ib_ucontext *ucontext = uobj->context; + struct ib_ucontext *ucontext = rdma_udata_context(udata); struct mlx5_ib_ucontext *mucontext = to_mucontext(ucontext); int err; u32 tdn = mucontext->tdn; @@ -1344,7 +1346,7 @@ static int create_raw_packet_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp, if (err) return err; - err = create_raw_packet_qp_sq(dev, sq, in, pd); + err = create_raw_packet_qp_sq(dev, sq, in, pd, udata); if (err) goto err_destroy_tis; @@ -1448,8 +1450,7 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp, struct ib_qp_init_attr *init_attr, struct ib_udata *udata) { - struct ib_uobject *uobj = pd->uobject; - struct ib_ucontext *ucontext = uobj->context; + struct ib_ucontext *ucontext = rdma_udata_context(udata); struct mlx5_ib_ucontext *mucontext = to_mucontext(ucontext); struct mlx5_ib_create_qp_resp resp = {}; int inlen; @@ -1781,7 +1782,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd, return -EFAULT; } - err = get_qp_user_index(to_mucontext(pd->uobject->context), + err = get_qp_user_index(to_mucontext(rdma_udata_context(udata)), &ucmd, udata->inlen, &uidx); if (err) return err; @@ -2048,7 +2049,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd, err_create: if (qp->create_type == MLX5_QP_USER) - 
destroy_qp_user(dev, pd, qp, base); + destroy_qp_user(dev, pd, qp, base, udata); else if (qp->create_type == MLX5_QP_KERNEL) destroy_qp_kernel(dev, qp); @@ -2159,7 +2160,8 @@ static int modify_raw_packet_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp, const struct mlx5_modify_raw_qp_param *raw_qp_param, u8 lag_tx_affinity); -static void destroy_qp_common(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp) +static void destroy_qp_common(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp, + struct ib_udata *udata) { struct mlx5_ib_cq *send_cq, *recv_cq; struct mlx5_ib_qp_base *base; @@ -2230,7 +2232,7 @@ static void destroy_qp_common(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp) if (qp->create_type == MLX5_QP_KERNEL) destroy_qp_kernel(dev, qp); else if (qp->create_type == MLX5_QP_USER) - destroy_qp_user(dev, &get_pd(qp)->ibpd, qp, base); + destroy_qp_user(dev, &get_pd(qp)->ibpd, qp, base, udata); } static const char *ib_qp_type_str(enum ib_qp_type type) @@ -2268,7 +2270,8 @@ static const char *ib_qp_type_str(enum ib_qp_type type) static struct ib_qp *mlx5_ib_create_dct(struct ib_pd *pd, struct ib_qp_init_attr *attr, - struct mlx5_ib_create_qp *ucmd) + struct mlx5_ib_create_qp *ucmd, + struct ib_udata *udata) { struct mlx5_ib_qp *qp; int err = 0; @@ -2278,7 +2281,7 @@ static struct ib_qp *mlx5_ib_create_dct(struct ib_pd *pd, if (!attr->srq || !attr->recv_cq) return ERR_PTR(-EINVAL); - err = get_qp_user_index(to_mucontext(pd->uobject->context), + err = get_qp_user_index(to_mucontext(rdma_udata_context(udata)), ucmd, sizeof(*ucmd), &uidx); if (err) return ERR_PTR(err); @@ -2366,7 +2369,9 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd, if (!rdma_is_user_pd(pd)) { mlx5_ib_dbg(dev, "Raw Packet QP is not supported for kernel consumers\n"); return ERR_PTR(-EINVAL); - } else if (!to_mucontext(pd->uobject->context)->cqe_version) { + } else if (!to_mucontext( + rdma_udata_context( + udata))->cqe_version) { mlx5_ib_dbg(dev, "Raw Packet QP is only supported for CQE version > 0\n"); return ERR_PTR(-EINVAL); } @@ -2398,7 +2403,7 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd, return ERR_PTR(-EINVAL); } } else { - return mlx5_ib_create_dct(pd, init_attr, &ucmd); + return mlx5_ib_create_dct(pd, init_attr, &ucmd, udata); } } @@ -2500,7 +2505,7 @@ int mlx5_ib_destroy_qp(struct ib_qp *qp, struct ib_udata *udata) if (mqp->qp_sub_type == MLX5_IB_QPT_DCT) return mlx5_ib_destroy_dct(mqp); - destroy_qp_common(dev, mqp); + destroy_qp_common(dev, mqp, udata); kfree(mqp); @@ -3023,13 +3028,14 @@ static int modify_raw_packet_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp, static unsigned int get_tx_affinity(struct mlx5_ib_dev *dev, struct mlx5_ib_pd *pd, struct mlx5_ib_qp_base *qp_base, - u8 port_num) + u8 port_num, + struct ib_udata *udata) { struct mlx5_ib_ucontext *ucontext = NULL; unsigned int tx_port_affinity; - if (pd && pd->ibpd.uobject && pd->ibpd.uobject->context) - ucontext = to_mucontext(pd->ibpd.uobject->context); + if (udata && _rdma_udata_context(udata, false)) + ucontext = to_mucontext(rdma_udata_context(udata)); if (ucontext) { tx_port_affinity = (unsigned int)atomic_add_return( @@ -3054,7 +3060,8 @@ static unsigned int get_tx_affinity(struct mlx5_ib_dev *dev, static int __mlx5_ib_modify_qp(struct ib_qp *ibqp, const struct ib_qp_attr *attr, int attr_mask, enum ib_qp_state cur_state, enum ib_qp_state new_state, - const struct mlx5_ib_modify_qp *ucmd) + const struct mlx5_ib_modify_qp *ucmd, + struct ib_udata *udata) { static const u16 optab[MLX5_QP_NUM_STATE][MLX5_QP_NUM_STATE] = { 
[MLX5_QP_STATE_RST] = { @@ -3145,7 +3152,8 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp, (ibqp->qp_type == IB_QPT_XRC_TGT)) { if (mlx5_lag_is_active(dev->mdev)) { u8 p = mlx5_core_native_port_num(dev->mdev); - tx_affinity = get_tx_affinity(dev, pd, base, p); + tx_affinity = get_tx_affinity(dev, pd, base, p, + udata); context->flags |= cpu_to_be32(tx_affinity << 24); } } @@ -3616,7 +3624,7 @@ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, } err = __mlx5_ib_modify_qp(ibqp, attr, attr_mask, cur_state, - new_state, &ucmd); + new_state, &ucmd, udata); out: mutex_unlock(&qp->mutex); @@ -5584,7 +5592,7 @@ static int prepare_user_rq(struct ib_pd *pd, return err; } - err = create_user_rq(dev, pd, rwq, &ucmd); + err = create_user_rq(dev, pd, rwq, &ucmd, udata); if (err) { mlx5_ib_dbg(dev, "err %d\n", err); if (err) @@ -5648,7 +5656,7 @@ struct ib_wq *mlx5_ib_create_wq(struct ib_pd *pd, err_copy: mlx5_core_destroy_rq_tracked(dev->mdev, &rwq->core_qp); err_user_rq: - destroy_user_rq(dev, pd, rwq); + destroy_user_rq(dev, pd, rwq, udata); err: kfree(rwq); return ERR_PTR(err); @@ -5660,7 +5668,7 @@ int mlx5_ib_destroy_wq(struct ib_wq *wq, struct ib_udata *udata) struct mlx5_ib_rwq *rwq = to_mrwq(wq); mlx5_core_destroy_rq_tracked(dev->mdev, &rwq->core_qp); - destroy_user_rq(dev, wq->pd, rwq); + destroy_user_rq(dev, wq->pd, rwq, udata); kfree(rwq); return 0; diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c index c5eb18972838..1bff2707c447 100644 --- a/drivers/infiniband/hw/mlx5/srq.c +++ b/drivers/infiniband/hw/mlx5/srq.c @@ -102,16 +102,17 @@ static int create_srq_user(struct ib_pd *pd, struct mlx5_ib_srq *srq, return -EINVAL; if (in->type != IB_SRQT_BASIC) { - err = get_srq_user_index(to_mucontext(pd->uobject->context), - &ucmd, udata->inlen, &uidx); + err = get_srq_user_index( + to_mucontext(rdma_udata_context(udata)), + &ucmd, udata->inlen, &uidx); if (err) return err; } srq->wq_sig = !!(ucmd.flags & MLX5_SRQ_FLAG_SIGNATURE); - srq->umem = ib_umem_get(pd->uobject->context, ucmd.buf_addr, buf_size, - 0, 0); + srq->umem = ib_umem_get(rdma_udata_context(udata), ucmd.buf_addr, + buf_size, 0, 0); if (IS_ERR(srq->umem)) { mlx5_ib_dbg(dev, "failed umem get, size %d\n", buf_size); err = PTR_ERR(srq->umem); @@ -135,7 +136,7 @@ static int create_srq_user(struct ib_pd *pd, struct mlx5_ib_srq *srq, mlx5_ib_populate_pas(dev, srq->umem, page_shift, in->pas, 0); - err = mlx5_ib_db_map_user(to_mucontext(pd->uobject->context), + err = mlx5_ib_db_map_user(to_mucontext(rdma_udata_context(udata)), ucmd.db_addr, &srq->db); if (err) { mlx5_ib_dbg(dev, "map doorbell failed\n"); @@ -222,9 +223,11 @@ static int create_srq_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_srq *srq, return err; } -static void destroy_srq_user(struct ib_pd *pd, struct mlx5_ib_srq *srq) +static void destroy_srq_user(struct ib_pd *pd, struct mlx5_ib_srq *srq, + struct ib_udata *udata) { - mlx5_ib_db_unmap_user(to_mucontext(pd->uobject->context), &srq->db); + mlx5_ib_db_unmap_user(to_mucontext(rdma_udata_context(udata)), + &srq->db); ib_umem_release(srq->umem); } @@ -355,7 +358,7 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd, err_usr_kern_srq: if (rdma_is_user_pd(pd)) - destroy_srq_user(pd, srq); + destroy_srq_user(pd, srq, udata); else destroy_srq_kernel(dev, srq); diff --git a/drivers/infiniband/hw/mthca/mthca_dev.h b/drivers/infiniband/hw/mthca/mthca_dev.h index 220a3e4717a3..a13617cb6c44 100644 --- a/drivers/infiniband/hw/mthca/mthca_dev.h +++ 
b/drivers/infiniband/hw/mthca/mthca_dev.h @@ -510,7 +510,8 @@ int mthca_alloc_cq_buf(struct mthca_dev *dev, struct mthca_cq_buf *buf, int nent void mthca_free_cq_buf(struct mthca_dev *dev, struct mthca_cq_buf *buf, int cqe); int mthca_alloc_srq(struct mthca_dev *dev, struct mthca_pd *pd, - struct ib_srq_attr *attr, struct mthca_srq *srq); + struct ib_srq_attr *attr, struct mthca_srq *srq, + struct ib_udata *udata); void mthca_free_srq(struct mthca_dev *dev, struct mthca_srq *srq); int mthca_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr, enum ib_srq_attr_mask attr_mask, struct ib_udata *udata); diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c index e0536325de81..fcb2f940f021 100644 --- a/drivers/infiniband/hw/mthca/mthca_provider.c +++ b/drivers/infiniband/hw/mthca/mthca_provider.c @@ -456,7 +456,7 @@ static struct ib_srq *mthca_create_srq(struct ib_pd *pd, return ERR_PTR(-ENOMEM); if (rdma_is_user_pd(pd)) { - context = to_mucontext(pd->uobject->context); + context = to_mucontext(rdma_udata_context(udata)); if (ib_copy_from_udata(&ucmd, udata, sizeof ucmd)) { err = -EFAULT; @@ -475,7 +475,7 @@ static struct ib_srq *mthca_create_srq(struct ib_pd *pd, } err = mthca_alloc_srq(to_mdev(pd->device), to_mpd(pd), - &init_attr->attr, srq); + &init_attr->attr, srq, udata); if (err && rdma_is_user_pd(pd)) mthca_unmap_user_db(to_mdev(pd->device), &context->uar, @@ -538,7 +538,7 @@ static struct ib_qp *mthca_create_qp(struct ib_pd *pd, return ERR_PTR(-ENOMEM); if (rdma_is_user_pd(pd)) { - context = to_mucontext(pd->uobject->context); + context = to_mucontext(rdma_udata_context(udata)); if (ib_copy_from_udata(&ucmd, udata, sizeof ucmd)) { kfree(qp); @@ -577,7 +577,7 @@ static struct ib_qp *mthca_create_qp(struct ib_pd *pd, &init_attr->cap, qp); if (err && rdma_is_user_pd(pd)) { - context = to_mucontext(pd->uobject->context); + context = to_mucontext(rdma_udata_context(udata)); mthca_unmap_user_db(to_mdev(pd->device), &context->uar, @@ -916,12 +916,12 @@ static struct ib_mr *mthca_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, int write_mtt_size; if (udata->inlen < sizeof ucmd) { - if (!to_mucontext(pd->uobject->context)->reg_mr_warned) { + if (!to_mucontext(rdma_udata_context(udata))->reg_mr_warned) { mthca_warn(dev, "Process '%s' did not pass in MR attrs.\n", current->comm); mthca_warn(dev, " Update libmthca to fix this.\n"); } - ++to_mucontext(pd->uobject->context)->reg_mr_warned; + ++to_mucontext(rdma_udata_context(udata))->reg_mr_warned; ucmd.mr_attrs = 0; } else if (ib_copy_from_udata(&ucmd, udata, sizeof ucmd)) return ERR_PTR(-EFAULT); @@ -930,7 +930,7 @@ static struct ib_mr *mthca_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, if (!mr) return ERR_PTR(-ENOMEM); - mr->umem = ib_umem_get(pd->uobject->context, start, length, acc, + mr->umem = ib_umem_get(rdma_udata_context(udata), start, length, acc, ucmd.mr_attrs & MTHCA_MR_DMASYNC); if (IS_ERR(mr->umem)) { diff --git a/drivers/infiniband/hw/mthca/mthca_srq.c b/drivers/infiniband/hw/mthca/mthca_srq.c index 4e206d4e9230..21e8ea2c7ac8 100644 --- a/drivers/infiniband/hw/mthca/mthca_srq.c +++ b/drivers/infiniband/hw/mthca/mthca_srq.c @@ -92,10 +92,12 @@ static inline int *wqe_to_link(void *wqe) return (int *) (wqe + offsetof(struct mthca_next_seg, imm)); } -static void mthca_tavor_init_srq_context(struct mthca_dev *dev, - struct mthca_pd *pd, - struct mthca_srq *srq, - struct mthca_tavor_srq_context *context) +static void +mthca_tavor_init_srq_context(struct mthca_dev *dev, + 
struct mthca_pd *pd, + struct mthca_srq *srq, + struct mthca_tavor_srq_context *context, + struct ib_udata *udata) { memset(context, 0, sizeof *context); @@ -103,17 +105,21 @@ static void mthca_tavor_init_srq_context(struct mthca_dev *dev, context->state_pd = cpu_to_be32(pd->pd_num); context->lkey = cpu_to_be32(srq->mr.ibmr.lkey); - if (pd->ibpd.uobject) + if (udata) context->uar = - cpu_to_be32(to_mucontext(pd->ibpd.uobject->context)->uar.index); + cpu_to_be32( + to_mucontext( + rdma_udata_context(udata))->uar.index); else context->uar = cpu_to_be32(dev->driver_uar.index); } -static void mthca_arbel_init_srq_context(struct mthca_dev *dev, - struct mthca_pd *pd, - struct mthca_srq *srq, - struct mthca_arbel_srq_context *context) +static void +mthca_arbel_init_srq_context(struct mthca_dev *dev, + struct mthca_pd *pd, + struct mthca_srq *srq, + struct mthca_arbel_srq_context *context, + struct ib_udata *udata) { int logsize, max; @@ -129,9 +135,11 @@ static void mthca_arbel_init_srq_context(struct mthca_dev *dev, context->lkey = cpu_to_be32(srq->mr.ibmr.lkey); context->db_index = cpu_to_be32(srq->db_index); context->logstride_usrpage = cpu_to_be32((srq->wqe_shift - 4) << 29); - if (pd->ibpd.uobject) + if (udata) context->logstride_usrpage |= - cpu_to_be32(to_mucontext(pd->ibpd.uobject->context)->uar.index); + cpu_to_be32( + to_mucontext( + rdma_udata_context(udata))->uar.index); else context->logstride_usrpage |= cpu_to_be32(dev->driver_uar.index); context->eq_pd = cpu_to_be32(MTHCA_EQ_ASYNC << 24 | pd->pd_num); @@ -197,7 +205,8 @@ static int mthca_alloc_srq_buf(struct mthca_dev *dev, struct mthca_pd *pd, } int mthca_alloc_srq(struct mthca_dev *dev, struct mthca_pd *pd, - struct ib_srq_attr *attr, struct mthca_srq *srq) + struct ib_srq_attr *attr, struct mthca_srq *srq, + struct ib_udata *udata) { struct mthca_mailbox *mailbox; int ds; @@ -261,9 +270,9 @@ int mthca_alloc_srq(struct mthca_dev *dev, struct mthca_pd *pd, mutex_init(&srq->mutex); if (mthca_is_memfree(dev)) - mthca_arbel_init_srq_context(dev, pd, srq, mailbox->buf); + mthca_arbel_init_srq_context(dev, pd, srq, mailbox->buf, udata); else - mthca_tavor_init_srq_context(dev, pd, srq, mailbox->buf); + mthca_tavor_init_srq_context(dev, pd, srq, mailbox->buf, udata); err = mthca_SW2HW_SRQ(dev, mailbox, srq->srqn); diff --git a/drivers/infiniband/hw/nes/nes_verbs.c b/drivers/infiniband/hw/nes/nes_verbs.c index 185ad94b0daf..fcac784852f2 100644 --- a/drivers/infiniband/hw/nes/nes_verbs.c +++ b/drivers/infiniband/hw/nes/nes_verbs.c @@ -734,8 +734,8 @@ static int nes_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata) struct nes_device *nesdev = nesvnic->nesdev; struct nes_adapter *nesadapter = nesdev->nesadapter; - if (rdma_is_user_pd(ibpd) && (ibpd->uobject->context)) { - nesucontext = to_nesucontext(ibpd->uobject->context); + if (rdma_is_user_pd(ibpd) && _rdma_udata_context(udata, false)) { + nesucontext = to_nesucontext(rdma_udata_context(udata)); nes_debug(NES_DBG_PD, "Clearing bit %u from allocated doorbells\n", nespd->mmap_db_index); clear_bit(nespd->mmap_db_index, nesucontext->allocated_doorbells); @@ -1068,9 +1068,11 @@ static struct ib_qp *nes_create_qp(struct ib_pd *ibpd, if (req.user_qp_buffer) nesqp->nesuqp_addr = req.user_qp_buffer; if (rdma_is_user_pd(ibpd) && - (ibpd->uobject->context)) { + _rdma_udata_context(udata, false)) { nesqp->user_mode = 1; - nes_ucontext = to_nesucontext(ibpd->uobject->context); + nes_ucontext = + to_nesucontext( + rdma_udata_context(udata)); if (virt_wqs) { err = 1; list_for_each_entry(nespbl, 
&nes_ucontext->qp_reg_mem_list, list) { @@ -1091,7 +1093,9 @@ static struct ib_qp *nes_create_qp(struct ib_pd *ibpd, } } - nes_ucontext = to_nesucontext(ibpd->uobject->context); + nes_ucontext = + to_nesucontext( + rdma_udata_context(udata)); nesqp->mmap_sq_db_index = find_next_zero_bit(nes_ucontext->allocated_wqs, NES_MAX_USER_WQ_REGIONS, nes_ucontext->first_free_wq); @@ -2136,7 +2140,7 @@ static struct ib_mr *nes_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, u8 stag_key; int first_page = 1; - region = ib_umem_get(pd->uobject->context, start, length, acc, 0); + region = ib_umem_get(rdma_udata_context(udata), start, length, acc, 0); if (IS_ERR(region)) { return (struct ib_mr *)region; } @@ -2385,7 +2389,8 @@ static struct ib_mr *nes_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, return ERR_PTR(-ENOMEM); } nesmr->region = region; - nes_ucontext = to_nesucontext(pd->uobject->context); + nes_ucontext = to_nesucontext( + rdma_udata_context(udata)); pbl_depth = region->length >> 12; pbl_depth += (region->length & (4096-1)) ? 1 : 0; nespbl->pbl_size = pbl_depth*sizeof(u64); diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c index b04b42bf9a24..222790c8649d 100644 --- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c +++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c @@ -928,7 +928,7 @@ struct ib_mr *ocrdma_reg_user_mr(struct ib_pd *ibpd, u64 start, u64 len, mr = kzalloc(sizeof(*mr), GFP_KERNEL); if (!mr) return ERR_PTR(status); - mr->umem = ib_umem_get(ibpd->uobject->context, start, len, acc, 0); + mr->umem = ib_umem_get(rdma_udata_context(udata), start, len, acc, 0); if (IS_ERR(mr->umem)) { status = -EFAULT; goto umem_err; diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c index b9c5ca03fbe9..6252e0d59b12 100644 --- a/drivers/infiniband/hw/qedr/verbs.c +++ b/drivers/infiniband/hw/qedr/verbs.c @@ -1470,8 +1470,8 @@ struct ib_srq *qedr_create_srq(struct ib_pd *ibpd, hw_srq->max_wr = init_attr->attr.max_wr; hw_srq->max_sges = init_attr->attr.max_sge; - if (udata && ibpd->uobject && ibpd->uobject->context) { - ib_ctx = ibpd->uobject->context; + if (udata && _rdma_udata_context(udata, false)) { + ib_ctx = rdma_udata_context(udata); if (ib_copy_from_udata(&ureq, udata, sizeof(ureq))) { DP_ERR(dev, @@ -1715,7 +1715,7 @@ static int qedr_create_user_qp(struct qedr_dev *dev, int alloc_and_init = rdma_protocol_roce(&dev->ibdev, 1); int rc = -EINVAL; - ib_ctx = ibpd->uobject->context; + ib_ctx = rdma_udata_context(udata); memset(&ureq, 0, sizeof(ureq)); rc = ib_copy_from_udata(&ureq, udata, sizeof(ureq)); @@ -2730,7 +2730,7 @@ struct ib_mr *qedr_reg_user_mr(struct ib_pd *ibpd, u64 start, u64 len, mr->type = QEDR_MR_USER; - mr->umem = ib_umem_get(ibpd->uobject->context, start, len, acc, 0); + mr->umem = ib_umem_get(rdma_udata_context(udata), start, len, acc, 0); if (IS_ERR(mr->umem)) { rc = -EFAULT; goto err0; diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c index 11945372fee0..7edcfeb39ba6 100644 --- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c +++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c @@ -501,7 +501,7 @@ struct ib_qp *usnic_ib_create_qp(struct ib_pd *pd, usnic_dbg("\n"); - ucontext = to_uucontext(pd->uobject->context); + ucontext = to_uucontext(rdma_udata_context(udata)); us_ibdev = to_usdev(pd->device); if (init_attr->create_flags) diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c index 
adf0478b4756..3e5742f96ebb 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c @@ -126,7 +126,7 @@ struct ib_mr *pvrdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, return ERR_PTR(-EINVAL); } - umem = ib_umem_get(pd->uobject->context, start, + umem = ib_umem_get(rdma_udata_context(udata), start, length, access_flags, 0); if (IS_ERR(umem)) { dev_warn(&dev->pdev->dev, diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c index 36d9ce00527d..c832924f1be5 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c @@ -262,9 +262,10 @@ struct ib_qp *pvrdma_create_qp(struct ib_pd *pd, if (!is_srq) { /* set qp->sq.wqe_cnt, shift, buf_size.. */ - qp->rumem = ib_umem_get(pd->uobject->context, - ucmd.rbuf_addr, - ucmd.rbuf_size, 0, 0); + qp->rumem = ib_umem_get( + rdma_udata_context(udata), + ucmd.rbuf_addr, + ucmd.rbuf_size, 0, 0); if (IS_ERR(qp->rumem)) { ret = PTR_ERR(qp->rumem); goto err_qp; @@ -275,7 +276,7 @@ struct ib_qp *pvrdma_create_qp(struct ib_pd *pd, qp->srq = to_vsrq(init_attr->srq); } - qp->sumem = ib_umem_get(pd->uobject->context, + qp->sumem = ib_umem_get(rdma_udata_context(udata), ucmd.sbuf_addr, ucmd.sbuf_size, 0, 0); if (IS_ERR(qp->sumem)) { diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c index 0b290f6f79dc..e5c17cfe64aa 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c @@ -153,7 +153,7 @@ struct ib_srq *pvrdma_create_srq(struct ib_pd *pd, goto err_srq; } - srq->umem = ib_umem_get(pd->uobject->context, + srq->umem = ib_umem_get(rdma_udata_context(udata), ucmd.buf_addr, ucmd.buf_size, 0, 0); if (IS_ERR(srq->umem)) { diff --git a/drivers/infiniband/sw/rdmavt/mr.c b/drivers/infiniband/sw/rdmavt/mr.c index 49c9541050d4..63a3834f3ca2 100644 --- a/drivers/infiniband/sw/rdmavt/mr.c +++ b/drivers/infiniband/sw/rdmavt/mr.c @@ -388,7 +388,7 @@ struct ib_mr *rvt_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, if (length == 0) return ERR_PTR(-EINVAL); - umem = ib_umem_get(pd->uobject->context, start, length, + umem = ib_umem_get(rdma_udata_context(udata), start, length, mr_access_flags, 0); if (IS_ERR(umem)) return (void *)umem; diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c index 1735deb1a9d4..da5cb26eb4a9 100644 --- a/drivers/infiniband/sw/rdmavt/qp.c +++ b/drivers/infiniband/sw/rdmavt/qp.c @@ -1120,9 +1120,10 @@ struct ib_qp *rvt_create_qp(struct ib_pd *ibpd, } else { u32 s = sizeof(struct rvt_rwq) + qp->r_rq.size * sz; - qp->ip = rvt_create_mmap_info(rdi, s, - ibpd->uobject->context, - qp->r_rq.wq); + qp->ip = rvt_create_mmap_info( + rdi, s, + rdma_udata_context(udata), + qp->r_rq.wq); if (!qp->ip) { ret = ERR_PTR(-ENOMEM); goto bail_qpn; diff --git a/drivers/infiniband/sw/rdmavt/srq.c b/drivers/infiniband/sw/rdmavt/srq.c index 78e06fc456c5..98f211098e02 100644 --- a/drivers/infiniband/sw/rdmavt/srq.c +++ b/drivers/infiniband/sw/rdmavt/srq.c @@ -119,7 +119,7 @@ struct ib_srq *rvt_create_srq(struct ib_pd *ibpd, u32 s = sizeof(struct rvt_rwq) + srq->rq.size * sz; srq->ip = - rvt_create_mmap_info(dev, s, ibpd->uobject->context, + rvt_create_mmap_info(dev, s, rdma_udata_context(udata), srq->rq.wq); if (!srq->ip) { ret = ERR_PTR(-ENOMEM); diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 8e305422adbb..f3bd3df55af3 100644 --- 
a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -157,7 +157,8 @@ int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init); int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd, struct ib_qp_init_attr *init, struct rxe_create_qp_resp __user *uresp, - struct ib_pd *ibpd); + struct ib_pd *ibpd, + struct ib_udata *udata); int rxe_qp_to_init(struct rxe_qp *qp, struct ib_qp_init_attr *init); diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 9d3916b93f23..aa46526288e2 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -171,7 +171,8 @@ int rxe_mem_init_user(struct rxe_pd *pd, u64 start, void *vaddr; int err; - umem = ib_umem_get(pd->ibpd.uobject->context, start, length, access, 0); + umem = ib_umem_get(rdma_udata_context(udata), start, length, access, + 0); if (IS_ERR(umem)) { pr_warn("err %d from rxe_umem_get\n", (int)PTR_ERR(umem)); diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c index 0bcededec2cb..87944a259f04 100644 --- a/drivers/infiniband/sw/rxe/rxe_qp.c +++ b/drivers/infiniband/sw/rxe/rxe_qp.c @@ -336,14 +336,15 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp, int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd, struct ib_qp_init_attr *init, struct rxe_create_qp_resp __user *uresp, - struct ib_pd *ibpd) + struct ib_pd *ibpd, + struct ib_udata *udata) { int err; struct rxe_cq *rcq = to_rcq(init->recv_cq); struct rxe_cq *scq = to_rcq(init->send_cq); struct rxe_srq *srq = init->srq ? to_rsrq(init->srq) : NULL; struct ib_ucontext *context = rdma_is_user_pd(ibpd) ? - ibpd->uobject->context : NULL; + rdma_udata_context(udata) : NULL; rxe_add_ref(pd); rxe_add_ref(rcq); diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 581e6261b9d0..127773c72d73 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -342,7 +342,7 @@ static struct ib_srq *rxe_create_srq(struct ib_pd *ibpd, struct rxe_dev *rxe = to_rdev(ibpd->device); struct rxe_pd *pd = to_rpd(ibpd); struct rxe_srq *srq; - struct ib_ucontext *context = udata ? ibpd->uobject->context : NULL; + struct ib_ucontext *context = udata ? 
rdma_udata_context(udata) : NULL; struct rxe_create_srq_resp __user *uresp = NULL; if (udata) { @@ -498,7 +498,7 @@ static struct ib_qp *rxe_create_qp(struct ib_pd *ibpd, rxe_add_index(qp); - err = rxe_qp_from_init(rxe, qp, pd, init, uresp, ibpd); + err = rxe_qp_from_init(rxe, qp, pd, init, uresp, ibpd, udata); if (err) goto err3;
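For reference before the next patch: every conversion in patch 1/4 funnels
through rdma_udata_context() / _rdma_udata_context(), helpers that an
earlier patch in this series adds around ib_udata. That patch is not part
of this excerpt, so the sketch below is only an inferred shape -- it
assumes the prior patches stash the ib_ucontext pointer in a 'context'
field of struct ib_udata -- and not the actual definition:

/*
 * Inferred sketch of the helpers used by patch 1/4; the real
 * definitions come from an earlier patch in the series and may
 * differ. The 'context' member of struct ib_udata is an assumption.
 */
static inline struct ib_ucontext *
_rdma_udata_context(const struct ib_udata *udata, bool may_be_null)
{
	/* may_be_null=true is for call sites that also serve kernel consumers */
	WARN_ON(!may_be_null && !udata->context);
	return udata->context;
}

#define rdma_udata_context(udata) _rdma_udata_context((udata), false)

The typical conversion above then reads
umem = ib_umem_get(rdma_udata_context(udata), start, length, access, 0);
where the code used to dereference pd->uobject->context, with
rdma_is_user_pd(pd) standing in for the old pd->uobject checks.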
From patchwork Sun Oct 14 07:17:25 2018
X-Patchwork-Submitter: Shamir Rabinovitch
X-Patchwork-Id: 10640577
From: Shamir Rabinovitch
To: linux-rdma@vger.kernel.org
Cc: dledford@redhat.com, jgg@ziepe.ca, leon@kernel.org, santosh.shilimkar@oracle.com, shamir.rabinovitch@oracle.com
Subject: [PATCH v2 2/4] IB/uverbs: uobj_get_obj_read must return ib_uobject
Date: Sun, 14 Oct 2018 10:17:25 +0300
Message-Id: <20181014071727.2271-3-shamir.rabinovitch@oracle.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181014071727.2271-1-shamir.rabinovitch@oracle.com>
References: <20181014071727.2271-1-shamir.rabinovitch@oracle.com>

Prepare the code for the shared ib_x model. uobj_put_obj_read relies on
the ib_x object's uobject pointer. Keeping a single uobject pointer in
the ib_x object does not fit the future shared ib_x model, where each
ib_x object can belong to one or more ib_uobjects. The ib_uobject used
in the macro therefore cannot come from the ib_x object.

Signed-off-by: Shamir Rabinovitch
---
 drivers/infiniband/core/uverbs_cmd.c | 136 ++++++++++++++++++++-------
 include/rdma/uverbs_std_types.h | 4 +-
 2 files changed, 106 insertions(+), 34 deletions(-)
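The macro change itself lands in include/rdma/uverbs_std_types.h (the
second file in the diffstat above; its hunk falls outside this excerpt).
For orientation only, the call sites below behave as if
uobj_get_obj_read() grew an output argument along these lines -- an
assumed sketch in the uverbs style, where uobj_get_read() and the
->object member are used purely for illustration, not the actual hunk:

/*
 * Assumed sketch: uobj_get_obj_read() gains an _uobj output argument
 * so the caller receives the ib_uobject directly instead of
 * recovering it later from the ib_x object's single uobject pointer.
 */
#define uobj_get_obj_read(_object, _type, _id, _ufile, _uobj)          \
({                                                                     \
	_uobj = uobj_get_read(_type, _id, _ufile);                     \
	IS_ERR(_uobj) ? NULL : (struct ib_##_object *)(_uobj)->object; \
})

Holding the ib_uobject at the call site is what will let
uobj_put_obj_read() stop reaching back through the ib_x object once one
ib_x object may belong to more than one ib_uobject.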
diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index f2bb0ebbad69..31b5e733c018 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -665,6 +665,7 @@ ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file, struct ib_mr *mr; int ret; struct ib_device *ib_dev; + struct ib_uobject *pd_uobj; if (out_len < sizeof resp) return -ENOSPC; @@ -688,7 +689,8 @@ ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file, if (IS_ERR(uobj)) return PTR_ERR(uobj); - pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, file); + pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, file, + pd_uobj); if (!pd) { ret = -EINVAL; goto err_free; @@ -757,6 +759,7 @@ ssize_t ib_uverbs_rereg_mr(struct ib_uverbs_file *file, struct ib_pd *old_pd; int ret; struct ib_uobject *uobj; + struct ib_uobject *pd_uobj; if (out_len < sizeof(resp)) return -ENOSPC; @@ -796,7 +799,7 @@ ssize_t ib_uverbs_rereg_mr(struct ib_uverbs_file *file, if (cmd.flags & IB_MR_REREG_PD) { pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, - file); + file, pd_uobj); if (!pd) { ret = -EINVAL; goto put_uobjs; @@ -861,6 +864,7 @@ ssize_t ib_uverbs_alloc_mw(struct ib_uverbs_file *file, struct ib_udata udata; int ret; struct ib_device *ib_dev; + struct ib_uobject *pd_uobj; if (out_len < sizeof(resp)) return -ENOSPC; @@ -872,7 +876,8 @@ ssize_t ib_uverbs_alloc_mw(struct ib_uverbs_file *file, if (IS_ERR(uobj)) return PTR_ERR(uobj); - pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, file); + pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, file, + pd_uobj); if (!pd) { ret = -EINVAL; goto err_free; @@ -1170,6 +1175,7 @@ ssize_t ib_uverbs_resize_cq(struct ib_uverbs_file *file, struct ib_udata udata; struct ib_cq *cq; int ret = -EINVAL; + struct ib_uobject *cq_uobj; if (copy_from_user(&cmd, buf, sizeof cmd)) return -EFAULT; @@ -1179,7 +1185,8 @@ ssize_t ib_uverbs_resize_cq(struct ib_uverbs_file *file, in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), out_len - sizeof(resp), ib_uverbs_get_ucontext(file)); - cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, file, + cq_uobj); if (!cq) return -EINVAL; @@ -1239,11 +1246,13 @@ ssize_t ib_uverbs_poll_cq(struct ib_uverbs_file *file, struct ib_cq *cq; struct ib_wc wc; int ret; + struct ib_uobject *cq_uobj; if (copy_from_user(&cmd, buf, sizeof cmd)) return -EFAULT; - cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, file); + cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, file, + cq_uobj); if (!cq) return -EINVAL; @@ -1285,11 +1294,13 @@ ssize_t ib_uverbs_req_notify_cq(struct ib_uverbs_file *file, { struct ib_uverbs_req_notify_cq cmd; struct ib_cq *cq; + struct ib_uobject *cq_uobj; if (copy_from_user(&cmd, buf, sizeof cmd)) return -EFAULT; - cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, file); + cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, file, + cq_uobj); if (!cq) return -EINVAL; @@ -1355,6 +1366,11 @@ static int create_qp(struct ib_uverbs_file *file, struct ib_rwq_ind_table *ind_tbl = NULL; bool has_sq = true; struct ib_device *ib_dev; + struct ib_uobject *ind_tbl_uobj = NULL; + struct ib_uobject *srq_uobj = NULL; + struct ib_uobject *scq_uobj = NULL; + struct ib_uobject *rcq_uobj = NULL; + struct ib_uobject *pd_uobj = NULL; if (cmd->qp_type == IB_QPT_RAW_PACKET && !capable(CAP_NET_RAW)) return -EPERM; @@ -1372,7 +1388,8 @@ static int create_qp(struct ib_uverbs_file *file, (cmd->comp_mask & IB_UVERBS_CREATE_QP_MASK_IND_TABLE)) { ind_tbl = uobj_get_obj_read(rwq_ind_table, UVERBS_OBJECT_RWQ_IND_TBL, - cmd->rwq_ind_tbl_handle, file); + cmd->rwq_ind_tbl_handle, file, + ind_tbl_uobj); if (!ind_tbl) { ret = -EINVAL; goto err_put; @@ -1418,7 +1435,8 @@ static int create_qp(struct ib_uverbs_file *file, } else { if (cmd->is_srq) { srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, - cmd->srq_handle, file); + cmd->srq_handle, file, + srq_uobj); if (!srq || srq->srq_type == IB_SRQT_XRC) { ret = -EINVAL; goto err_put; @@ -1429,7 +1447,8 @@ static int create_qp(struct ib_uverbs_file *file, if (cmd->recv_cq_handle != cmd->send_cq_handle) { rcq = uobj_get_obj_read( cq, UVERBS_OBJECT_CQ, - cmd->recv_cq_handle, file); + cmd->recv_cq_handle, file, + rcq_uobj); if (!rcq) { ret = -EINVAL; goto err_put; @@ -1440,11 +1459,12 @@ static int create_qp(struct ib_uverbs_file *file, if (has_sq) scq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, - cmd->send_cq_handle, file); + cmd->send_cq_handle, file, + scq_uobj); if (!ind_tbl) rcq = rcq ?: scq; pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd->pd_handle, - file); + file, pd_uobj); if (!pd || (!scq && has_sq)) { ret = -EINVAL; goto err_put; @@ -1829,6 +1849,7 @@ ssize_t ib_uverbs_query_qp(struct ib_uverbs_file *file, struct ib_qp_attr *attr; struct ib_qp_init_attr *init_attr; int ret; + struct ib_uobject *qp_uobj; if (copy_from_user(&cmd, buf, sizeof cmd)) return -EFAULT; @@ -1840,7 +1861,8 @@ ssize_t ib_uverbs_query_qp(struct ib_uverbs_file *file, goto out; } - qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file); + qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file, + qp_uobj); if (!qp) { ret = -EINVAL; goto out; @@ -1940,12 +1962,14 @@ static int modify_qp(struct ib_uverbs_file *file, struct ib_qp_attr *attr; struct ib_qp *qp; int ret; + struct ib_uobject *qp_uobj; attr = kzalloc(sizeof(*attr), GFP_KERNEL); if (!attr) return -ENOMEM; - qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd->base.qp_handle, file); + qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd->base.qp_handle, file, + qp_uobj); if (!qp) { ret = -EINVAL; goto out; @@ -2183,6 +2207,8 @@
ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file, int is_ud; ssize_t ret = -EINVAL; size_t next_size; + struct ib_uobject *qp_uobj; + struct ib_uobject *ah_uobj; if (copy_from_user(&cmd, buf, sizeof cmd)) return -EFAULT; @@ -2198,7 +2224,8 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file, if (!user_wr) return -ENOMEM; - qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file); + qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file, + qp_uobj); if (!qp) goto out; @@ -2235,7 +2262,8 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file, } ud->ah = uobj_get_obj_read(ah, UVERBS_OBJECT_AH, - user_wr->wr.ud.ah, file); + user_wr->wr.ud.ah, file, + ah_uobj); if (!ud->ah) { kfree(ud); ret = -EINVAL; @@ -2459,6 +2487,7 @@ ssize_t ib_uverbs_post_recv(struct ib_uverbs_file *file, const struct ib_recv_wr *bad_wr; struct ib_qp *qp; ssize_t ret = -EINVAL; + struct ib_uobject *qp_uobj; if (copy_from_user(&cmd, buf, sizeof cmd)) return -EFAULT; @@ -2469,7 +2498,8 @@ ssize_t ib_uverbs_post_recv(struct ib_uverbs_file *file, if (IS_ERR(wr)) return PTR_ERR(wr); - qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file); + qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file, + qp_uobj); if (!qp) goto out; @@ -2508,6 +2538,7 @@ ssize_t ib_uverbs_post_srq_recv(struct ib_uverbs_file *file, const struct ib_recv_wr *bad_wr; struct ib_srq *srq; ssize_t ret = -EINVAL; + struct ib_uobject *srq_uobj; if (copy_from_user(&cmd, buf, sizeof cmd)) return -EFAULT; @@ -2518,7 +2549,8 @@ ssize_t ib_uverbs_post_srq_recv(struct ib_uverbs_file *file, if (IS_ERR(wr)) return PTR_ERR(wr); - srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, file); + srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, file, + srq_uobj); if (!srq) goto out; @@ -2561,6 +2593,7 @@ ssize_t ib_uverbs_create_ah(struct ib_uverbs_file *file, int ret; struct ib_udata udata; struct ib_device *ib_dev; + struct ib_uobject *pd_uobj; if (out_len < sizeof resp) return -ENOSPC; @@ -2582,7 +2615,8 @@ ssize_t ib_uverbs_create_ah(struct ib_uverbs_file *file, goto err; } - pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, file); + pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, file, + pd_uobj); if (!pd) { ret = -EINVAL; goto err; @@ -2658,11 +2692,13 @@ ssize_t ib_uverbs_attach_mcast(struct ib_uverbs_file *file, struct ib_uqp_object *obj; struct ib_uverbs_mcast_entry *mcast; int ret; + struct ib_uobject *qp_uobj; if (copy_from_user(&cmd, buf, sizeof cmd)) return -EFAULT; - qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file); + qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file, + qp_uobj); if (!qp) return -EINVAL; @@ -2708,11 +2744,13 @@ ssize_t ib_uverbs_detach_mcast(struct ib_uverbs_file *file, struct ib_uverbs_mcast_entry *mcast; int ret = -EINVAL; bool found = false; + struct ib_uobject *qp_uobj; if (copy_from_user(&cmd, buf, sizeof cmd)) return -EFAULT; - qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file); + qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file, + qp_uobj); if (!qp) return -EINVAL; @@ -2822,6 +2860,9 @@ static int kern_spec_to_ib_spec_action(struct ib_uverbs_file *ufile, union ib_flow_spec *ib_spec, struct ib_uflow_resources *uflow_res) { + struct ib_uobject *flow_act_uobj; + struct ib_uobject *cnt_uobj; + ib_spec->type = kern_spec->type; switch (ib_spec->type) { case IB_FLOW_SPEC_ACTION_TAG: @@ -2846,7 +2887,8 @@ static int kern_spec_to_ib_spec_action(struct ib_uverbs_file *ufile, 
ib_spec->action.act = uobj_get_obj_read(flow_action, UVERBS_OBJECT_FLOW_ACTION, kern_spec->action.handle, - ufile); + ufile, + flow_act_uobj); if (!ib_spec->action.act) return -EINVAL; ib_spec->action.size = @@ -2864,7 +2906,8 @@ static int kern_spec_to_ib_spec_action(struct ib_uverbs_file *ufile, uobj_get_obj_read(counters, UVERBS_OBJECT_COUNTERS, kern_spec->flow_count.handle, - ufile); + ufile, + cnt_uobj); if (!ib_spec->flow_count.counters) return -EINVAL; ib_spec->flow_count.size = @@ -3075,6 +3118,8 @@ int ib_uverbs_ex_create_wq(struct ib_uverbs_file *file, size_t required_cmd_sz; size_t required_resp_len; struct ib_device *ib_dev; + struct ib_uobject *pd_uobj; + struct ib_uobject *cq_uobj; required_cmd_sz = offsetof(typeof(cmd), max_sge) + sizeof(cmd.max_sge); required_resp_len = offsetof(typeof(resp), wqn) + sizeof(resp.wqn); @@ -3102,13 +3147,15 @@ int ib_uverbs_ex_create_wq(struct ib_uverbs_file *file, if (IS_ERR(obj)) return PTR_ERR(obj); - pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, file); + pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, file, + pd_uobj); if (!pd) { err = -EINVAL; goto err_uobj; } - cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, file); + cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, file, + cq_uobj); if (!cq) { err = -EINVAL; goto err_put_pd; @@ -3231,6 +3278,7 @@ int ib_uverbs_ex_modify_wq(struct ib_uverbs_file *file, struct ib_wq_attr wq_attr = {}; size_t required_cmd_sz; int ret; + struct ib_uobject *wq_uobj; required_cmd_sz = offsetof(typeof(cmd), curr_wq_state) + sizeof(cmd.curr_wq_state); if (ucore->inlen < required_cmd_sz) @@ -3251,7 +3299,8 @@ int ib_uverbs_ex_modify_wq(struct ib_uverbs_file *file, if (cmd.attr_mask > (IB_WQ_STATE | IB_WQ_CUR_STATE | IB_WQ_FLAGS)) return -EINVAL; - wq = uobj_get_obj_read(wq, UVERBS_OBJECT_WQ, cmd.wq_handle, file); + wq = uobj_get_obj_read(wq, UVERBS_OBJECT_WQ, cmd.wq_handle, file, + wq_uobj); if (!wq) return -EINVAL; @@ -3290,6 +3339,7 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, size_t required_cmd_sz_header; size_t required_resp_len; struct ib_device *ib_dev; + struct ib_uobject **wqs_uobj = NULL; required_cmd_sz_header = offsetof(typeof(cmd), log_ind_tbl_size) + sizeof(cmd.log_ind_tbl_size); required_resp_len = offsetof(typeof(resp), ind_tbl_num) + sizeof(resp.ind_tbl_num); @@ -3343,10 +3393,17 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, goto err_free; } + wqs_uobj = kcalloc(num_wq_handles, sizeof(*wqs_uobj), GFP_KERNEL); + if (!wqs_uobj) { + err = -ENOMEM; + goto err_free; + } + for (num_read_wqs = 0; num_read_wqs < num_wq_handles; num_read_wqs++) { wq = uobj_get_obj_read(wq, UVERBS_OBJECT_WQ, - wqs_handles[num_read_wqs], file); + wqs_handles[num_read_wqs], file, + wqs_uobj[num_read_wqs]); if (!wq) { err = -EINVAL; goto put_wqs; @@ -3399,6 +3456,8 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, for (j = 0; j < num_read_wqs; j++) uobj_put_obj_read(wqs[j]); + kfree(wqs_uobj); + return uobj_alloc_commit(uobj, 0); err_copy: @@ -3410,6 +3469,7 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, uobj_put_obj_read(wqs[j]); err_free: kfree(wqs_handles); + kfree(wqs_uobj); kfree(wqs); return err; } @@ -3460,6 +3520,7 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, void *ib_spec; int i; struct ib_device *ib_dev; + struct ib_uobject *qp_uobj; if (ucore->inlen < sizeof(cmd)) return -EINVAL; @@ -3521,7 +3582,8 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, 
 		goto err_free_attr;
 	}
-	qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file);
+	qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, file,
+				qp_uobj);
 	if (!qp) {
 		err = -EINVAL;
 		goto err_uobj;
 	}
@@ -3654,6 +3716,8 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file,
 	struct ib_srq_init_attr attr;
 	int ret;
 	struct ib_device *ib_dev;
+	struct ib_uobject *cq_uobj;
+	struct ib_uobject *pd_uobj;
 	obj = (struct ib_usrq_object *)uobj_alloc(UVERBS_OBJECT_SRQ, file,
 						  &ib_dev);
@@ -3683,14 +3747,16 @@
 	if (ib_srq_has_cq(cmd->srq_type)) {
 		attr.ext.cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ,
-						cmd->cq_handle, file);
+						cmd->cq_handle, file,
+						cq_uobj);
 		if (!attr.ext.cq) {
 			ret = -EINVAL;
 			goto err_put_xrcd;
 		}
 	}
-	pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd->pd_handle, file);
+	pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd->pd_handle, file,
+			       pd_uobj);
 	if (!pd) {
 		ret = -EINVAL;
 		goto err_put_cq;
@@ -3850,6 +3916,7 @@ ssize_t ib_uverbs_modify_srq(struct ib_uverbs_file *file,
 	struct ib_srq *srq;
 	struct ib_srq_attr attr;
 	int ret;
+	struct ib_uobject *srq_uobj;
 	if (copy_from_user(&cmd, buf, sizeof cmd))
 		return -EFAULT;
@@ -3857,7 +3924,8 @@
 	ib_uverbs_init_udata(&udata, buf + sizeof cmd, NULL,
 			     in_len - sizeof cmd, out_len,
 			     ib_uverbs_get_ucontext(file));
-	srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, file);
+	srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, file,
+				srq_uobj);
 	if (!srq)
 		return -EINVAL;
@@ -3880,6 +3948,7 @@ ssize_t ib_uverbs_query_srq(struct ib_uverbs_file *file,
 	struct ib_srq_attr attr;
 	struct ib_srq *srq;
 	int ret;
+	struct ib_uobject *srq_uobj;
 	if (out_len < sizeof resp)
 		return -ENOSPC;
@@ -3887,7 +3956,8 @@
 	if (copy_from_user(&cmd, buf, sizeof cmd))
 		return -EFAULT;
-	srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, file);
+	srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, file,
+				srq_uobj);
 	if (!srq)
 		return -EINVAL;
@@ -4073,6 +4143,7 @@ int ib_uverbs_ex_modify_cq(struct ib_uverbs_file *file,
 	struct ib_cq *cq;
 	size_t required_cmd_sz;
 	int ret;
+	struct ib_uobject *cq_uobj;
 	required_cmd_sz = offsetof(typeof(cmd), reserved) +
 			  sizeof(cmd.reserved);
@@ -4095,7 +4166,8 @@
 	if (cmd.attr_mask > IB_CQ_MODERATE)
 		return -EOPNOTSUPP;
-	cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, file);
+	cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, file,
+			       cq_uobj);
 	if (!cq)
 		return -EINVAL;
diff --git a/include/rdma/uverbs_std_types.h b/include/rdma/uverbs_std_types.h
index 3db2802fbc68..21eedc4183f8 100644
--- a/include/rdma/uverbs_std_types.h
+++ b/include/rdma/uverbs_std_types.h
@@ -72,9 +72,9 @@ static inline void *_uobj_get_obj_read(struct ib_uobject *uobj)
 		return NULL;
 	return uobj->object;
 }
-#define uobj_get_obj_read(_object, _type, _id, _ufile) \
+#define uobj_get_obj_read(_object, _type, _id, _ufile, _uobject) \
 	((struct ib_##_object *)_uobj_get_obj_read( \
-		uobj_get_read(_type, _id, _ufile)))
+		(_uobject = uobj_get_read(_type, _id, _ufile))))
 #define uobj_get_write(_type, _id, _ufile) \
 	rdma_lookup_get_uobject(uobj_get_type(_ufile, _type), _ufile, \
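[Editorial note, not part of the patch: a minimal user-space sketch of the reworked lookup macro above. The table-based uobj_get_read_stub() and its parameters are invented for the sketch; the point is only the assignment side effect inside the macro, which hands the caller both the ib_x pointer and the ib_uobject that produced it, so nothing has to be recovered from a back-pointer later.]

#include <stdio.h>

struct ib_uobject { void *object; };
struct ib_pd { int handle; };

/* stand-in for uobj_get_read(): resolve a handle to its ib_uobject */
static struct ib_uobject *uobj_get_read_stub(int id, struct ib_uobject *tbl,
					     int n)
{
	return (id >= 0 && id < n) ? &tbl[id] : NULL;
}

static void *_uobj_get_obj_read(struct ib_uobject *uobj)
{
	if (!uobj)	/* the kernel version also rejects ERR_PTR values */
		return NULL;
	return uobj->object;
}

/* same shape as the patched macro: _uobject is assigned as a side effect */
#define uobj_get_obj_read(_object, _id, _tbl, _n, _uobject)	\
	((struct ib_##_object *)_uobj_get_obj_read(		\
		(_uobject = uobj_get_read_stub(_id, _tbl, _n))))

int main(void)
{
	struct ib_pd pd0 = { .handle = 0 };
	struct ib_uobject tbl[1] = { { .object = &pd0 } };
	struct ib_uobject *pd_uobj = NULL;
	struct ib_pd *pd;

	pd = uobj_get_obj_read(pd, 0, tbl, 1, pd_uobj);
	printf("pd=%p uobject=%p paired=%d\n", (void *)pd, (void *)pd_uobj,
	       pd && pd_uobj && pd_uobj->object == pd);
	return 0;
}

Only the lookup plumbing is mocked here; the cast-plus-assignment shape is exactly what the uverbs_std_types.h hunk introduces.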
From patchwork Sun Oct 14 07:17:26 2018
X-Patchwork-Submitter: Shamir Rabinovitch
X-Patchwork-Id: 10640579
From: Shamir Rabinovitch
To: linux-rdma@vger.kernel.org
Cc: dledford@redhat.com, jgg@ziepe.ca, leon@kernel.org,
 santosh.shilimkar@oracle.com, shamir.rabinovitch@oracle.com
Subject: [PATCH v2 3/4] IB/uverbs: uobj_put_obj_read must not use ib_x uobject pointer
Date: Sun, 14 Oct 2018 10:17:26 +0300
Message-Id: <20181014071727.2271-4-shamir.rabinovitch@oracle.com>
In-Reply-To: <20181014071727.2271-1-shamir.rabinovitch@oracle.com>
References: <20181014071727.2271-1-shamir.rabinovitch@oracle.com>

Prepare the code for the shared ib_x model. uobj_put_obj_read currently
relies on the uobject pointer embedded in the ib_x object. A single such
pointer does not fit the future shared ib_x model, in which each ib_x
object can belong to one or more ib_uobjects. The ib_uobject used in the
macro therefore cannot come from the ib_x object; callers must pass it
explicitly.

Signed-off-by: Shamir Rabinovitch
---
 drivers/infiniband/core/uverbs_cmd.c | 96 ++++++++++++++--------------
 include/rdma/uverbs_std_types.h      | 10 ++-
 2 files changed, 56 insertions(+), 50 deletions(-)

diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index 31b5e733c018..0203c9587502 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -732,7 +732,7 @@ ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file,
 		goto err_copy;
 	}
-	uobj_put_obj_read(pd);
+	uobj_put_obj_read(pd, pd_uobj);
 	return uobj_alloc_commit(uobj, in_len);
@@ -740,7 +740,7 @@
 	ib_dereg_mr(mr);
 err_put:
-	uobj_put_obj_read(pd);
+	uobj_put_obj_read(pd, pd_uobj);
 err_free:
 	uobj_alloc_abort(uobj);
@@ -759,7 +759,7 @@ ssize_t ib_uverbs_rereg_mr(struct ib_uverbs_file *file,
 	struct ib_pd *old_pd;
 	int ret;
 	struct ib_uobject *uobj;
-	struct ib_uobject *pd_uobj;
+	struct ib_uobject *pd_uobj = NULL;
 	if (out_len < sizeof(resp))
 		return -ENOSPC;
@@ -831,7 +831,7 @@
put_uobj_pd:
 	if (cmd.flags & IB_MR_REREG_PD)
-		uobj_put_obj_read(pd);
+		uobj_put_obj_read(pd, pd_uobj);
put_uobjs:
 	uobj_put_write(uobj);
@@ -910,13 +910,13 @@ ssize_t ib_uverbs_alloc_mw(struct ib_uverbs_file *file,
 		goto err_copy;
 	}
-	uobj_put_obj_read(pd);
+	uobj_put_obj_read(pd, pd_uobj);
 	return uobj_alloc_commit(uobj, in_len);
 err_copy:
 	uverbs_dealloc_mw(mw);
 err_put:
-	uobj_put_obj_read(pd);
+	uobj_put_obj_read(pd, pd_uobj);
 err_free:
 	uobj_alloc_abort(uobj);
 	return ret;
@@ -1200,7 +1200,7 @@ ssize_t ib_uverbs_resize_cq(struct ib_uverbs_file *file,
 		ret = -EFAULT;
 out:
-	uobj_put_obj_read(cq);
+	uobj_put_obj_read(cq, cq_uobj);
 	return ret ? ret : in_len;
 }
@@ -1284,7 +1284,7 @@ ssize_t ib_uverbs_poll_cq(struct ib_uverbs_file *file,
 	ret = in_len;
 out_put:
-	uobj_put_obj_read(cq);
+	uobj_put_obj_read(cq, cq_uobj);
 	return ret;
 }
@@ -1307,7 +1307,7 @@ ssize_t ib_uverbs_req_notify_cq(struct ib_uverbs_file *file,
 	ib_req_notify_cq(cq, cmd.solicited_only ?
IB_CQ_SOLICITED : IB_CQ_NEXT_COMP); - uobj_put_obj_read(cq); + uobj_put_obj_read(cq, cq_uobj); return in_len; } @@ -1594,15 +1594,15 @@ static int create_qp(struct ib_uverbs_file *file, } if (pd) - uobj_put_obj_read(pd); + uobj_put_obj_read(pd, pd_uobj); if (scq) - uobj_put_obj_read(scq); + uobj_put_obj_read(scq, scq_uobj); if (rcq && rcq != scq) - uobj_put_obj_read(rcq); + uobj_put_obj_read(rcq, rcq_uobj); if (srq) - uobj_put_obj_read(srq); + uobj_put_obj_read(srq, srq_uobj); if (ind_tbl) - uobj_put_obj_read(ind_tbl); + uobj_put_obj_read(ind_tbl, ind_tbl_uobj); return uobj_alloc_commit(&obj->uevent.uobject, 0); err_cb: @@ -1612,15 +1612,15 @@ static int create_qp(struct ib_uverbs_file *file, if (!IS_ERR(xrcd_uobj)) uobj_put_read(xrcd_uobj); if (pd) - uobj_put_obj_read(pd); + uobj_put_obj_read(pd, pd_uobj); if (scq) - uobj_put_obj_read(scq); + uobj_put_obj_read(scq, scq_uobj); if (rcq && rcq != scq) - uobj_put_obj_read(rcq); + uobj_put_obj_read(rcq, rcq_uobj); if (srq) - uobj_put_obj_read(srq); + uobj_put_obj_read(srq, srq_uobj); if (ind_tbl) - uobj_put_obj_read(ind_tbl); + uobj_put_obj_read(ind_tbl, ind_tbl_uobj); uobj_alloc_abort(&obj->uevent.uobject); return ret; @@ -1870,7 +1870,7 @@ ssize_t ib_uverbs_query_qp(struct ib_uverbs_file *file, ret = ib_query_qp(qp, attr, cmd.attr_mask, init_attr); - uobj_put_obj_read(qp); + uobj_put_obj_read(qp, qp_uobj); if (ret) goto out; @@ -2087,7 +2087,7 @@ static int modify_qp(struct ib_uverbs_file *file, udata); release_qp: - uobj_put_obj_read(qp); + uobj_put_obj_read(qp, qp_uobj); out: kfree(attr); @@ -2369,11 +2369,11 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file, ret = -EFAULT; out_put: - uobj_put_obj_read(qp); + uobj_put_obj_read(qp, qp_uobj); while (wr) { if (is_ud && ud_wr(wr)->ah) - uobj_put_obj_read(ud_wr(wr)->ah); + uobj_put_obj_read(ud_wr(wr)->ah, ah_uobj); next = wr->next; kfree(wr); wr = next; @@ -2506,7 +2506,7 @@ ssize_t ib_uverbs_post_recv(struct ib_uverbs_file *file, resp.bad_wr = 0; ret = qp->device->post_recv(qp->real_qp, wr, &bad_wr); - uobj_put_obj_read(qp); + uobj_put_obj_read(qp, qp_uobj); if (ret) { for (next = wr; next; next = next->next) { ++resp.bad_wr; @@ -2558,7 +2558,7 @@ ssize_t ib_uverbs_post_srq_recv(struct ib_uverbs_file *file, ret = srq->device->post_srq_recv ? srq->device->post_srq_recv(srq, wr, &bad_wr) : -EOPNOTSUPP; - uobj_put_obj_read(srq); + uobj_put_obj_read(srq, srq_uobj); if (ret) for (next = wr; next; next = next->next) { @@ -2657,14 +2657,14 @@ ssize_t ib_uverbs_create_ah(struct ib_uverbs_file *file, goto err_copy; } - uobj_put_obj_read(pd); + uobj_put_obj_read(pd, pd_uobj); return uobj_alloc_commit(uobj, in_len); err_copy: rdma_destroy_ah(ah); err_put: - uobj_put_obj_read(pd); + uobj_put_obj_read(pd, pd_uobj); err: uobj_alloc_abort(uobj); @@ -2729,7 +2729,7 @@ ssize_t ib_uverbs_attach_mcast(struct ib_uverbs_file *file, out_put: mutex_unlock(&obj->mcast_lock); - uobj_put_obj_read(qp); + uobj_put_obj_read(qp, qp_uobj); return ret ? ret : in_len; } @@ -2775,7 +2775,7 @@ ssize_t ib_uverbs_detach_mcast(struct ib_uverbs_file *file, out_put: mutex_unlock(&obj->mcast_lock); - uobj_put_obj_read(qp); + uobj_put_obj_read(qp, qp_uobj); return ret ? 
ret : in_len; } @@ -2896,7 +2896,7 @@ static int kern_spec_to_ib_spec_action(struct ib_uverbs_file *ufile, flow_resources_add(uflow_res, IB_FLOW_SPEC_ACTION_HANDLE, ib_spec->action.act); - uobj_put_obj_read(ib_spec->action.act); + uobj_put_obj_read(ib_spec->action.act, flow_act_uobj); break; case IB_FLOW_SPEC_ACTION_COUNT: if (kern_spec->flow_count.size != @@ -2915,7 +2915,7 @@ static int kern_spec_to_ib_spec_action(struct ib_uverbs_file *ufile, flow_resources_add(uflow_res, IB_FLOW_SPEC_ACTION_COUNT, ib_spec->flow_count.counters); - uobj_put_obj_read(ib_spec->flow_count.counters); + uobj_put_obj_read(ib_spec->flow_count.counters, cnt_uobj); break; default: return -EINVAL; @@ -3207,16 +3207,16 @@ int ib_uverbs_ex_create_wq(struct ib_uverbs_file *file, if (err) goto err_copy; - uobj_put_obj_read(pd); - uobj_put_obj_read(cq); + uobj_put_obj_read(pd, pd_uobj); + uobj_put_obj_read(cq, cq_uobj); return uobj_alloc_commit(&obj->uevent.uobject, 0); err_copy: ib_destroy_wq(wq, uhw); err_put_cq: - uobj_put_obj_read(cq); + uobj_put_obj_read(cq, cq_uobj); err_put_pd: - uobj_put_obj_read(pd); + uobj_put_obj_read(pd, pd_uobj); err_uobj: uobj_alloc_abort(&obj->uevent.uobject); @@ -3316,7 +3316,7 @@ int ib_uverbs_ex_modify_wq(struct ib_uverbs_file *file, } ret = wq->device->modify_wq(wq, &wq_attr, cmd.attr_mask, uhw); out: - uobj_put_obj_read(wq); + uobj_put_obj_read(wq, wq_uobj); return ret; } @@ -3454,7 +3454,7 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, kfree(wqs_handles); for (j = 0; j < num_read_wqs; j++) - uobj_put_obj_read(wqs[j]); + uobj_put_obj_read(wqs[j], wqs_uobj[j]); kfree(wqs_uobj); @@ -3466,7 +3466,7 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, uobj_alloc_abort(uobj); put_wqs: for (j = 0; j < num_read_wqs; j++) - uobj_put_obj_read(wqs[j]); + uobj_put_obj_read(wqs[j], wqs_uobj[j]); err_free: kfree(wqs_handles); kfree(wqs_uobj); @@ -3661,7 +3661,7 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, if (err) goto err_copy; - uobj_put_obj_read(qp); + uobj_put_obj_read(qp, qp_uobj); kfree(flow_attr); if (cmd.flow_attr.num_of_specs) kfree(kern_flow_attr); @@ -3674,7 +3674,7 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, err_free_flow_attr: kfree(flow_attr); err_put: - uobj_put_obj_read(qp); + uobj_put_obj_read(qp, qp_uobj); err_uobj: uobj_alloc_abort(uobj); err_free_attr: @@ -3716,7 +3716,7 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file, struct ib_srq_init_attr attr; int ret; struct ib_device *ib_dev; - struct ib_uobject *cq_uobj; + struct ib_uobject *cq_uobj = NULL; struct ib_uobject *pd_uobj; obj = (struct ib_usrq_object *)uobj_alloc(UVERBS_OBJECT_SRQ, file, @@ -3818,20 +3818,20 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file, uobj_put_read(xrcd_uobj); if (ib_srq_has_cq(cmd->srq_type)) - uobj_put_obj_read(attr.ext.cq); + uobj_put_obj_read(attr.ext.cq, cq_uobj); - uobj_put_obj_read(pd); + uobj_put_obj_read(pd, pd_uobj); return uobj_alloc_commit(&obj->uevent.uobject, 0); err_copy: ib_destroy_srq(srq); err_put: - uobj_put_obj_read(pd); + uobj_put_obj_read(pd, pd_uobj); err_put_cq: if (ib_srq_has_cq(cmd->srq_type)) - uobj_put_obj_read(attr.ext.cq); + uobj_put_obj_read(attr.ext.cq, cq_uobj); err_put_xrcd: if (cmd->srq_type == IB_SRQT_XRC) { @@ -3934,7 +3934,7 @@ ssize_t ib_uverbs_modify_srq(struct ib_uverbs_file *file, ret = srq->device->modify_srq(srq, &attr, cmd.attr_mask, &udata); - uobj_put_obj_read(srq); + uobj_put_obj_read(srq, srq_uobj); return ret ? 
ret : in_len;
 }
@@ -3963,7 +3963,7 @@ ssize_t ib_uverbs_query_srq(struct ib_uverbs_file *file,
 	ret = ib_query_srq(srq, &attr);
-	uobj_put_obj_read(srq);
+	uobj_put_obj_read(srq, srq_uobj);
 	if (ret)
 		return ret;
@@ -4173,7 +4173,7 @@ int ib_uverbs_ex_modify_cq(struct ib_uverbs_file *file,
 	ret = rdma_set_cq_moderation(cq, cmd.attr.cq_count,
 				     cmd.attr.cq_period);
-	uobj_put_obj_read(cq);
+	uobj_put_obj_read(cq, cq_uobj);
 	return ret;
 }
diff --git a/include/rdma/uverbs_std_types.h b/include/rdma/uverbs_std_types.h
index 21eedc4183f8..895706ad9d0c 100644
--- a/include/rdma/uverbs_std_types.h
+++ b/include/rdma/uverbs_std_types.h
@@ -103,8 +103,14 @@ static inline void uobj_put_read(struct ib_uobject *uobj)
 	rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_READ);
 }
-#define uobj_put_obj_read(_obj) \
-	uobj_put_read((_obj)->uobject)
+static inline void uobj_put_obj_read(void *object, struct ib_uobject *uobject)
+{
+	if (WARN_ON(!object) ||
+	    WARN_ON(IS_ERR_OR_NULL(uobject)) ||
+	    WARN_ON(object != uobject->object))
+		return;
+	uobj_put_read(uobject);
+}
 static inline void uobj_put_write(struct ib_uobject *uobj)
 {
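[Editorial note, not part of the patch: a self-contained user-space sketch of the new put-side contract, using simplified stand-ins. WARN_ON()/IS_ERR_OR_NULL() are modelled with plain checks and uobj_put_read() is stubbed. It shows the contract the inline helper enforces: the caller must hand back the same object/uobject pair the get side returned, since the helper no longer chases a pointer stored inside the ib_x object.]

#include <stdio.h>
#include <stddef.h>

struct ib_uobject { void *object; };

static void uobj_put_read_stub(struct ib_uobject *uobj)
{
	/* kernel: rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_READ) */
	(void)uobj;
	puts("read reference dropped");
}

/* mirrors the sanity checks in the new uobj_put_obj_read() */
static void uobj_put_obj_read_sketch(void *object, struct ib_uobject *uobject)
{
	if (!object || !uobject || object != uobject->object) {
		fprintf(stderr, "object/uobject pair mismatch\n");
		return;
	}
	uobj_put_read_stub(uobject);
}

int main(void)
{
	int pd;					/* stands in for an ib_pd */
	struct ib_uobject uobj = { .object = &pd };

	uobj_put_obj_read_sketch(&pd, &uobj);	/* ok: matching pair */
	uobj_put_obj_read_sketch(&pd, NULL);	/* rejected: no uobject */
	return 0;
}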
From patchwork Sun Oct 14 07:17:27 2018
X-Patchwork-Submitter: Shamir Rabinovitch
X-Patchwork-Id: 10640583
From: Shamir Rabinovitch
To: linux-rdma@vger.kernel.org
Cc: dledford@redhat.com, jgg@ziepe.ca, leon@kernel.org,
 santosh.shilimkar@oracle.com, shamir.rabinovitch@oracle.com
Subject: [PATCH v2 4/4] IB/verbs: remove ib_pd uobject pointer
Date: Sun, 14 Oct 2018 10:17:27 +0300
Message-Id: <20181014071727.2271-5-shamir.rabinovitch@oracle.com>
In-Reply-To: <20181014071727.2271-1-shamir.rabinovitch@oracle.com>
References: <20181014071727.2271-1-shamir.rabinovitch@oracle.com>

Prepare the code for the shared ib_pd model. Keeping a uobject pointer in
ib_pd is not correct in that model, where an ib_pd can belong to one or
more uobjects, so no single back-pointer can name the owning uobject.
Prior patches removed the code's dependency on the uobject pointer in
ib_pd; now is a good time to complete the work and remove the pointer
itself.
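[Editorial note, not part of the patch: a toy illustration of why the back-pointer has to go. The structs below are trimmed stand-ins, not the kernel definitions. Once a PD may be shared, two ib_uobjects can wrap the same ib_pd, and a single pd->uobject field could record only one of them.]

#include <stdio.h>

struct ib_uobject { void *object; };
struct ib_pd { int usecnt; };	/* trimmed stand-in; no uobject field */

int main(void)
{
	struct ib_pd pd = { .usecnt = 2 };

	/* shared model: both user objects reference the same PD */
	struct ib_uobject uobj_a = { .object = &pd };
	struct ib_uobject uobj_b = { .object = &pd };

	/* the uobject -> object direction stays well defined ... */
	printf("usecnt=%d, a->object == b->object: %d\n", pd.usecnt,
	       uobj_a.object == uobj_b.object);

	/* ... but an object -> uobject back-pointer would be ambiguous,
	 * which is why ib_pd loses its uobject field and callers carry
	 * the ib_uobject explicitly (patches 1-3 of this series). */
	return 0;
}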
Signed-off-by: Shamir Rabinovitch
---
 drivers/infiniband/core/uverbs_cmd.c | 1 -
 drivers/infiniband/core/verbs.c      | 1 -
 drivers/infiniband/hw/mlx5/main.c    | 1 -
 include/rdma/ib_verbs.h              | 1 -
 4 files changed, 4 deletions(-)

diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index 0203c9587502..db50b3fc357f 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -371,7 +371,6 @@ ssize_t ib_uverbs_alloc_pd(struct ib_uverbs_file *file,
 	}
 	pd->device = ib_dev;
-	pd->uobject = uobj;
 	pd->__internal_mr = NULL;
 	atomic_set(&pd->usecnt, 0);
diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index d972f1dbf8fd..174e683ca739 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -248,7 +248,6 @@ struct ib_pd *__ib_alloc_pd(struct ib_device *device, unsigned int flags,
 		return pd;
 	pd->device = device;
-	pd->uobject = NULL;
 	pd->__internal_mr = NULL;
 	atomic_set(&pd->usecnt, 0);
 	pd->flags = flags;
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 885129678837..31e654f92c59 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -4567,7 +4567,6 @@ static int create_dev_resources(struct mlx5_ib_resources *devr)
 		goto error0;
 	}
 	devr->p0->device = &dev->ib_dev;
-	devr->p0->uobject = NULL;
 	atomic_set(&devr->p0->usecnt, 0);
 	devr->c0 = mlx5_ib_create_cq(&dev->ib_dev, &cq_attr, NULL, NULL);
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index f88838988f4d..6a3c5b5892f0 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -1566,7 +1566,6 @@ struct ib_pd {
 	u32			local_dma_lkey;
 	u32			flags;
 	struct ib_device       *device;
-	struct ib_uobject      *uobject;
 	atomic_t		usecnt; /* count all resources */
 	u32			unsafe_global_rkey