From patchwork Sun Oct 7 08:14:05 2018
X-Patchwork-Submitter: Shamir Rabinovitch
X-Patchwork-Id: 10629403
From: Shamir Rabinovitch
To: linux-rdma@vger.kernel.org
Cc: dledford@redhat.com, jgg@ziepe.ca, leon@kernel.org,
    santosh.shilimkar@oracle.com, shamir.rabinovitch@oracle.com
Subject: [PATCH 3/4] IB/uverbs: uobj_put_obj_read must not use ib_x uobject pointer
Date: Sun, 7 Oct 2018 11:14:05 +0300
Message-Id: <20181007081406.9734-4-shamir.rabinovitch@oracle.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181007081406.9734-1-shamir.rabinovitch@oracle.com>
References: <20181007081406.9734-1-shamir.rabinovitch@oracle.com>

Prepare the code for the shared ib_x model. uobj_put_obj_read currently
relies on the uobject pointer embedded in the ib_x object. Keeping a
single uobject pointer inside the ib_x object does not fit the future
shared ib_x model, in which each ib_x object can belong to one or more
ib_uobjects. The ib_uobject used by the macro therefore cannot come from
the ib_x object itself; the caller must pass it explicitly.

Signed-off-by: Shamir Rabinovitch
---
 drivers/infiniband/core/uverbs_cmd.c | 96 ++++++++++++++--------------
 include/rdma/uverbs_std_types.h      | 10 ++-
 2 files changed, 56 insertions(+), 50 deletions(-)

diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index d121383a9c48..45bd746f03ff 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -734,7 +734,7 @@ ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file,
                 goto err_copy;
         }

-        uobj_put_obj_read(pd);
+        uobj_put_obj_read(pd, pd_uobj);

         return uobj_alloc_commit(uobj, in_len);

@@ -742,7 +742,7 @@ ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file,
         ib_dereg_mr(mr);

 err_put:
-        uobj_put_obj_read(pd);
+        uobj_put_obj_read(pd, pd_uobj);

 err_free:
         uobj_alloc_abort(uobj);
@@ -761,7 +761,7 @@ ssize_t ib_uverbs_rereg_mr(struct ib_uverbs_file *file,
         struct ib_pd *old_pd;
         int ret;
         struct ib_uobject *uobj;
-        struct ib_uobject *pd_uobj;
+        struct ib_uobject *pd_uobj = NULL;

         if (out_len < sizeof(resp))
                 return -ENOSPC;
@@ -833,7 +833,7 @@ ssize_t ib_uverbs_rereg_mr(struct ib_uverbs_file *file,
 put_uobj_pd:
         if (cmd.flags & IB_MR_REREG_PD)
-                uobj_put_obj_read(pd);
+                uobj_put_obj_read(pd, pd_uobj);

 put_uobjs:
         uobj_put_write(uobj);
@@ -912,13 +912,13 @@ ssize_t ib_uverbs_alloc_mw(struct ib_uverbs_file *file,
                 goto err_copy;
         }

-        uobj_put_obj_read(pd);
+        uobj_put_obj_read(pd, pd_uobj);
         return uobj_alloc_commit(uobj, in_len);

 err_copy:
         uverbs_dealloc_mw(mw);
 err_put:
-        uobj_put_obj_read(pd);
+        uobj_put_obj_read(pd, pd_uobj);
 err_free:
         uobj_alloc_abort(uobj);
         return ret;
@@ -1202,7 +1202,7 @@ ssize_t ib_uverbs_resize_cq(struct ib_uverbs_file *file,
                 ret = -EFAULT;

 out:
-        uobj_put_obj_read(cq);
+        uobj_put_obj_read(cq, cq_uobj);

         return ret ? ret : in_len;
 }
@@ -1286,7 +1286,7 @@ ssize_t ib_uverbs_poll_cq(struct ib_uverbs_file *file,
         ret = in_len;

 out_put:
-        uobj_put_obj_read(cq);
+        uobj_put_obj_read(cq, cq_uobj);
         return ret;
 }
@@ -1309,7 +1309,7 @@ ssize_t ib_uverbs_req_notify_cq(struct ib_uverbs_file *file,
         ib_req_notify_cq(cq, cmd.solicited_only ?
                          IB_CQ_SOLICITED : IB_CQ_NEXT_COMP);

-        uobj_put_obj_read(cq);
+        uobj_put_obj_read(cq, cq_uobj);

         return in_len;
 }
@@ -1596,15 +1596,15 @@ static int create_qp(struct ib_uverbs_file *file,
         }

         if (pd)
-                uobj_put_obj_read(pd);
+                uobj_put_obj_read(pd, pd_uobj);
         if (scq)
-                uobj_put_obj_read(scq);
+                uobj_put_obj_read(scq, scq_uobj);
         if (rcq && rcq != scq)
-                uobj_put_obj_read(rcq);
+                uobj_put_obj_read(rcq, rcq_uobj);
         if (srq)
-                uobj_put_obj_read(srq);
+                uobj_put_obj_read(srq, srq_uobj);
         if (ind_tbl)
-                uobj_put_obj_read(ind_tbl);
+                uobj_put_obj_read(ind_tbl, ind_tbl_uobj);

         return uobj_alloc_commit(&obj->uevent.uobject, 0);
 err_cb:
@@ -1614,15 +1614,15 @@ static int create_qp(struct ib_uverbs_file *file,
         if (!IS_ERR(xrcd_uobj))
                 uobj_put_read(xrcd_uobj);
         if (pd)
-                uobj_put_obj_read(pd);
+                uobj_put_obj_read(pd, pd_uobj);
         if (scq)
-                uobj_put_obj_read(scq);
+                uobj_put_obj_read(scq, scq_uobj);
         if (rcq && rcq != scq)
-                uobj_put_obj_read(rcq);
+                uobj_put_obj_read(rcq, rcq_uobj);
         if (srq)
-                uobj_put_obj_read(srq);
+                uobj_put_obj_read(srq, srq_uobj);
         if (ind_tbl)
-                uobj_put_obj_read(ind_tbl);
+                uobj_put_obj_read(ind_tbl, ind_tbl_uobj);

         uobj_alloc_abort(&obj->uevent.uobject);
         return ret;
@@ -1872,7 +1872,7 @@ ssize_t ib_uverbs_query_qp(struct ib_uverbs_file *file,
         ret = ib_query_qp(qp, attr, cmd.attr_mask, init_attr);

-        uobj_put_obj_read(qp);
+        uobj_put_obj_read(qp, qp_uobj);

         if (ret)
                 goto out;
@@ -2089,7 +2089,7 @@ static int modify_qp(struct ib_uverbs_file *file,
                               udata);

 release_qp:
-        uobj_put_obj_read(qp);
+        uobj_put_obj_read(qp, qp_uobj);
 out:
         kfree(attr);
@@ -2371,11 +2371,11 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file,
                         ret = -EFAULT;

 out_put:
-        uobj_put_obj_read(qp);
+        uobj_put_obj_read(qp, qp_uobj);

         while (wr) {
                 if (is_ud && ud_wr(wr)->ah)
-                        uobj_put_obj_read(ud_wr(wr)->ah);
+                        uobj_put_obj_read(ud_wr(wr)->ah, ah_uobj);
                 next = wr->next;
                 kfree(wr);
                 wr = next;
@@ -2508,7 +2508,7 @@ ssize_t ib_uverbs_post_recv(struct ib_uverbs_file *file,
         resp.bad_wr = 0;
         ret = qp->device->post_recv(qp->real_qp, wr, &bad_wr);

-        uobj_put_obj_read(qp);
+        uobj_put_obj_read(qp, qp_uobj);
         if (ret) {
                 for (next = wr; next; next = next->next) {
                         ++resp.bad_wr;
@@ -2560,7 +2560,7 @@ ssize_t ib_uverbs_post_srq_recv(struct ib_uverbs_file *file,
         ret = srq->device->post_srq_recv ?
                 srq->device->post_srq_recv(srq, wr, &bad_wr) :
                 -EOPNOTSUPP;

-        uobj_put_obj_read(srq);
+        uobj_put_obj_read(srq, srq_uobj);

         if (ret)
                 for (next = wr; next; next = next->next) {
@@ -2659,14 +2659,14 @@ ssize_t ib_uverbs_create_ah(struct ib_uverbs_file *file,
                 goto err_copy;
         }

-        uobj_put_obj_read(pd);
+        uobj_put_obj_read(pd, pd_uobj);
         return uobj_alloc_commit(uobj, in_len);

 err_copy:
         rdma_destroy_ah(ah);

 err_put:
-        uobj_put_obj_read(pd);
+        uobj_put_obj_read(pd, pd_uobj);

 err:
         uobj_alloc_abort(uobj);
@@ -2731,7 +2731,7 @@ ssize_t ib_uverbs_attach_mcast(struct ib_uverbs_file *file,
 out_put:
         mutex_unlock(&obj->mcast_lock);
-        uobj_put_obj_read(qp);
+        uobj_put_obj_read(qp, qp_uobj);

         return ret ? ret : in_len;
 }
@@ -2777,7 +2777,7 @@ ssize_t ib_uverbs_detach_mcast(struct ib_uverbs_file *file,
 out_put:
         mutex_unlock(&obj->mcast_lock);
-        uobj_put_obj_read(qp);
+        uobj_put_obj_read(qp, qp_uobj);

         return ret ? ret : in_len;
 }
@@ -2898,7 +2898,7 @@ static int kern_spec_to_ib_spec_action(struct ib_uverbs_file *ufile,
                 flow_resources_add(uflow_res,
                                    IB_FLOW_SPEC_ACTION_HANDLE,
                                    ib_spec->action.act);
-                uobj_put_obj_read(ib_spec->action.act);
+                uobj_put_obj_read(ib_spec->action.act, flow_act_uobj);
                 break;
         case IB_FLOW_SPEC_ACTION_COUNT:
                 if (kern_spec->flow_count.size !=
@@ -2917,7 +2917,7 @@ static int kern_spec_to_ib_spec_action(struct ib_uverbs_file *ufile,
                 flow_resources_add(uflow_res,
                                    IB_FLOW_SPEC_ACTION_COUNT,
                                    ib_spec->flow_count.counters);
-                uobj_put_obj_read(ib_spec->flow_count.counters);
+                uobj_put_obj_read(ib_spec->flow_count.counters, cnt_uobj);
                 break;
         default:
                 return -EINVAL;
@@ -3209,16 +3209,16 @@ int ib_uverbs_ex_create_wq(struct ib_uverbs_file *file,
         if (err)
                 goto err_copy;

-        uobj_put_obj_read(pd);
-        uobj_put_obj_read(cq);
+        uobj_put_obj_read(pd, pd_uobj);
+        uobj_put_obj_read(cq, cq_uobj);
         return uobj_alloc_commit(&obj->uevent.uobject, 0);

 err_copy:
         ib_destroy_wq(wq, uhw);
 err_put_cq:
-        uobj_put_obj_read(cq);
+        uobj_put_obj_read(cq, cq_uobj);
 err_put_pd:
-        uobj_put_obj_read(pd);
+        uobj_put_obj_read(pd, pd_uobj);
 err_uobj:
         uobj_alloc_abort(&obj->uevent.uobject);
@@ -3318,7 +3318,7 @@ int ib_uverbs_ex_modify_wq(struct ib_uverbs_file *file,
         }
         ret = wq->device->modify_wq(wq, &wq_attr, cmd.attr_mask, uhw);
 out:
-        uobj_put_obj_read(wq);
+        uobj_put_obj_read(wq, wq_uobj);
         return ret;
 }
@@ -3456,7 +3456,7 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file,
         kfree(wqs_handles);

         for (j = 0; j < num_read_wqs; j++)
-                uobj_put_obj_read(wqs[j]);
+                uobj_put_obj_read(wqs[j], wqs_uobj[j]);

         kfree(wqs_uobj);
@@ -3468,7 +3468,7 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file,
         uobj_alloc_abort(uobj);
 put_wqs:
         for (j = 0; j < num_read_wqs; j++)
-                uobj_put_obj_read(wqs[j]);
+                uobj_put_obj_read(wqs[j], wqs_uobj[j]);
 err_free:
         kfree(wqs_handles);
         kfree(wqs_uobj);
@@ -3663,7 +3663,7 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file,
         if (err)
                 goto err_copy;

-        uobj_put_obj_read(qp);
+        uobj_put_obj_read(qp, qp_uobj);
         kfree(flow_attr);
         if (cmd.flow_attr.num_of_specs)
                 kfree(kern_flow_attr);
@@ -3676,7 +3676,7 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file,
 err_free_flow_attr:
         kfree(flow_attr);
 err_put:
-        uobj_put_obj_read(qp);
+        uobj_put_obj_read(qp, qp_uobj);
 err_uobj:
         uobj_alloc_abort(uobj);
 err_free_attr:
@@ -3718,7 +3718,7 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file,
         struct ib_srq_init_attr attr;
         int ret;
         struct ib_device *ib_dev;
-        struct ib_uobject *cq_uobj;
+        struct ib_uobject *cq_uobj = NULL;
         struct ib_uobject *pd_uobj;

         obj = (struct ib_usrq_object *)uobj_alloc(UVERBS_OBJECT_SRQ, file,
@@ -3820,20 +3820,20 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file,
                 uobj_put_read(xrcd_uobj);

         if (ib_srq_has_cq(cmd->srq_type))
-                uobj_put_obj_read(attr.ext.cq);
+                uobj_put_obj_read(attr.ext.cq, cq_uobj);

-        uobj_put_obj_read(pd);
+        uobj_put_obj_read(pd, pd_uobj);

         return uobj_alloc_commit(&obj->uevent.uobject, 0);

 err_copy:
         ib_destroy_srq(srq);

 err_put:
-        uobj_put_obj_read(pd);
+        uobj_put_obj_read(pd, pd_uobj);

 err_put_cq:
         if (ib_srq_has_cq(cmd->srq_type))
-                uobj_put_obj_read(attr.ext.cq);
+                uobj_put_obj_read(attr.ext.cq, cq_uobj);

 err_put_xrcd:
         if (cmd->srq_type == IB_SRQT_XRC) {
@@ -3936,7 +3936,7 @@ ssize_t ib_uverbs_modify_srq(struct ib_uverbs_file *file,
         ret = srq->device->modify_srq(srq, &attr, cmd.attr_mask, &udata);

-        uobj_put_obj_read(srq);
+        uobj_put_obj_read(srq, srq_uobj);

         return ret ? ret : in_len;
 }
@@ -3965,7 +3965,7 @@ ssize_t ib_uverbs_query_srq(struct ib_uverbs_file *file,
         ret = ib_query_srq(srq, &attr);

-        uobj_put_obj_read(srq);
+        uobj_put_obj_read(srq, srq_uobj);

         if (ret)
                 return ret;
@@ -4175,7 +4175,7 @@ int ib_uverbs_ex_modify_cq(struct ib_uverbs_file *file,
         ret = rdma_set_cq_moderation(cq, cmd.attr.cq_count, cmd.attr.cq_period);

-        uobj_put_obj_read(cq);
+        uobj_put_obj_read(cq, cq_uobj);

         return ret;
 }
diff --git a/include/rdma/uverbs_std_types.h b/include/rdma/uverbs_std_types.h
index 21eedc4183f8..895706ad9d0c 100644
--- a/include/rdma/uverbs_std_types.h
+++ b/include/rdma/uverbs_std_types.h
@@ -103,8 +103,14 @@ static inline void uobj_put_read(struct ib_uobject *uobj)
         rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_READ);
 }

-#define uobj_put_obj_read(_obj) \
-        uobj_put_read((_obj)->uobject)
+static inline void uobj_put_obj_read(void *object, struct ib_uobject *uobject)
+{
+        if (WARN_ON(!object) ||
+            WARN_ON(IS_ERR_OR_NULL(uobject)) ||
+            WARN_ON(object != uobject->object))
+                return;
+        uobj_put_read(uobject);
+}

 static inline void uobj_put_write(struct ib_uobject *uobj)
 {
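
A note for reviewers: the reasoning in the commit message can be seen in a tiny
stand-alone user-space mock (illustrative only -- the mock_* names below are
invented for this example and are not kernel API; the kernel helper uses
WARN_ON() rather than assert()). Once the same PD may sit behind two
ib_uobjects, pd->uobject can no longer tell a handler which read reference it
took, so the handler has to pass back the uobject it looked up:

/* Stand-alone mock, not kernel code: models why the uobject must be
 * passed explicitly once an object may be shared by several uobjects.
 */
#include <assert.h>
#include <stdio.h>

struct mock_uobject {
        void *object;                   /* what ib_uobject::object points at */
};

struct mock_pd {
        int unused;
};

/* Mirrors the sanity checks of the new uobj_put_obj_read() helper. */
static void mock_put_obj_read(void *object, struct mock_uobject *uobject)
{
        assert(object && uobject);
        assert(object == uobject->object);      /* caller must pass the matching pair */
        printf("released read ref held through uobject %p\n", (void *)uobject);
}

int main(void)
{
        struct mock_pd pd = { 0 };

        /* Shared ib_x model: one pd is the payload of two distinct uobjects. */
        struct mock_uobject uobj_a = { .object = &pd };
        struct mock_uobject uobj_b = { .object = &pd };

        /* The old macro's pd->uobject could name only one of uobj_a/uobj_b;
         * with sharing, each caller releases the uobject it itself looked up.
         */
        mock_put_obj_read(&pd, &uobj_a);
        mock_put_obj_read(&pd, &uobj_b);
        return 0;
}

In the kernel helper a mismatched or missing pair only trips WARN_ON() and the
put is skipped, rather than aborting as assert() does in this mock.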