From patchwork Thu Jul 26 03:40:10 2018
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 10545157
From: Jason Gunthorpe
To: linux-rdma@vger.kernel.org, Leon Romanovsky, "Guy Levi(SW)", Yishai Hadas, "Ruhl, Michael J"
Cc: Jason Gunthorpe
Subject: [PATCH 01/11] IB/uverbs: Remove rdma_explicit_destroy() from the ioctl methods
Date: Wed, 25 Jul 2018 21:40:10 -0600
Message-Id: <20180726034020.5583-2-jgg@ziepe.ca>
In-Reply-To: <20180726034020.5583-1-jgg@ziepe.ca>
References: <20180726034020.5583-1-jgg@ziepe.ca>
List-ID: linux-rdma@vger.kernel.org

The core code will destroy the HW object on behalf of the method; if the
method provides an implementation, it must simply copy data from the
stub uobj into the response. Destroy methods cannot touch the HW object.
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/core/rdma_core.c           |  5 +---
 drivers/infiniband/core/uverbs_ioctl.c        | 26 ++++++++++++++++---
 drivers/infiniband/core/uverbs_std_types_cq.c | 21 +++++----------
 3 files changed, 30 insertions(+), 22 deletions(-)

diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c
index a63844ba841449..9e84ded6d3bee3 100644
--- a/drivers/infiniband/core/rdma_core.c
+++ b/drivers/infiniband/core/rdma_core.c
@@ -924,10 +924,7 @@ int uverbs_finalize_object(struct ib_uobject *uobj,
 		rdma_lookup_put_uobject(uobj, true);
 		break;
 	case UVERBS_ACCESS_DESTROY:
-		if (commit)
-			ret = rdma_remove_commit_uobject(uobj);
-		else
-			rdma_lookup_put_uobject(uobj, true);
+		rdma_lookup_put_uobject(uobj, true);
 		break;
 	case UVERBS_ACCESS_NEW:
 		if (commit)
diff --git a/drivers/infiniband/core/uverbs_ioctl.c b/drivers/infiniband/core/uverbs_ioctl.c
index db7a92ea5dbe87..703710085b5beb 100644
--- a/drivers/infiniband/core/uverbs_ioctl.c
+++ b/drivers/infiniband/core/uverbs_ioctl.c
@@ -51,6 +51,7 @@ static int uverbs_process_attr(struct ib_uverbs_file *ufile,
 			       u16 attr_id,
 			       const struct uverbs_attr_spec_hash *attr_spec_bucket,
 			       struct uverbs_attr_bundle_hash *attr_bundle_h,
+			       struct uverbs_obj_attr **destroy_attr,
 			       struct ib_uverbs_attr __user *uattr_ptr)
 {
 	const struct uverbs_attr_spec *spec;
@@ -143,6 +144,10 @@ static int uverbs_process_attr(struct ib_uverbs_file *ufile,
 		if (uattr->len != 0)
 			return -EINVAL;
 
+		/* specs are allowed to have only one destroy attribute */
+		WARN_ON(spec->u.obj.access == UVERBS_ACCESS_DESTROY &&
+			*destroy_attr);
+
 		o_attr = &e->obj_attr;
 		object = uverbs_get_object(ufile, spec->u.obj.obj_type);
 		if (!object)
@@ -235,6 +240,7 @@ static int uverbs_uattrs_process(struct ib_uverbs_file *ufile,
 				 size_t num_uattrs,
 				 const struct uverbs_method_spec *method,
 				 struct uverbs_attr_bundle *attr_bundle,
+				 struct uverbs_obj_attr **destroy_attr,
 				 struct ib_uverbs_attr __user *uattr_ptr)
 {
 	size_t i;
@@ -268,7 +274,8 @@ static int uverbs_uattrs_process(struct ib_uverbs_file *ufile,
 		attr_spec_bucket = method->attr_buckets[ret];
 		ret = uverbs_process_attr(ufile, uattr, attr_id, attr_spec_bucket,
-					  &attr_bundle->hash[ret], uattr_ptr++);
+					  &attr_bundle->hash[ret], destroy_attr,
+					  uattr_ptr++);
 		if (ret) {
 			uverbs_finalize_attrs(attr_bundle,
 					      method->attr_buckets,
@@ -322,9 +329,11 @@ static int uverbs_handle_method(struct ib_uverbs_attr __user *uattr_ptr,
 	int ret;
 	int finalize_ret;
 	int num_given_buckets;
+	struct uverbs_obj_attr *destroy_attr = NULL;
 
-	num_given_buckets = uverbs_uattrs_process(
-		ufile, uattrs, num_uattrs, method_spec, attr_bundle, uattr_ptr);
+	num_given_buckets =
+		uverbs_uattrs_process(ufile, uattrs, num_uattrs, method_spec,
+				      attr_bundle, &destroy_attr, uattr_ptr);
 	if (num_given_buckets <= 0)
 		return -EINVAL;
 
@@ -333,7 +342,18 @@ static int uverbs_handle_method(struct ib_uverbs_attr __user *uattr_ptr,
 	if (ret)
 		goto cleanup;
 
+	/*
+	 * We destroy the HW object before invoking the handler, handlers do
+	 * not get to manipulate the HW objects.
+	 */
+	if (destroy_attr) {
+		ret = rdma_explicit_destroy(destroy_attr->uobject);
+		if (ret)
+			goto cleanup;
+	}
+
 	ret = method_spec->handler(ibdev, ufile, attr_bundle);
+
 cleanup:
 	finalize_ret = uverbs_finalize_attrs(attr_bundle,
 					     method_spec->attr_buckets,
diff --git a/drivers/infiniband/core/uverbs_std_types_cq.c b/drivers/infiniband/core/uverbs_std_types_cq.c
index c71305fc043332..32930880975e56 100644
--- a/drivers/infiniband/core/uverbs_std_types_cq.c
+++ b/drivers/infiniband/core/uverbs_std_types_cq.c
@@ -176,21 +176,12 @@ static int UVERBS_HANDLER(UVERBS_METHOD_CQ_DESTROY)(struct ib_device *ib_dev,
 {
 	struct ib_uobject *uobj =
 		uverbs_attr_get_uobject(attrs, UVERBS_ATTR_DESTROY_CQ_HANDLE);
-	struct ib_uverbs_destroy_cq_resp resp;
-	struct ib_ucq_object *obj;
-	int ret;
-
-	if (IS_ERR(uobj))
-		return PTR_ERR(uobj);
-
-	obj = container_of(uobj, struct ib_ucq_object, uobject);
-
-	ret = rdma_explicit_destroy(uobj);
-	if (ret)
-		return ret;
-
-	resp.comp_events_reported = obj->comp_events_reported;
-	resp.async_events_reported = obj->async_events_reported;
+	struct ib_ucq_object *obj =
+		container_of(uobj, struct ib_ucq_object, uobject);
+	struct ib_uverbs_destroy_cq_resp resp = {
+		.comp_events_reported = obj->comp_events_reported,
+		.async_events_reported = obj->async_events_reported
+	};
 
 	return uverbs_copy_to(attrs, UVERBS_ATTR_DESTROY_CQ_RESP, &resp,
 			      sizeof(resp));

From patchwork Thu Jul 26 03:40:11 2018
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 10545155
From: Jason Gunthorpe
To: linux-rdma@vger.kernel.org, Leon Romanovsky, "Guy Levi(SW)", Yishai Hadas, "Ruhl, Michael J"
Cc: Jason Gunthorpe
Subject: [PATCH 02/11] IB/uverbs: Make the write path destroy methods use the same flow as ioctl
Date: Wed, 25 Jul 2018 21:40:11 -0600
Message-Id: <20180726034020.5583-3-jgg@ziepe.ca>
In-Reply-To: <20180726034020.5583-1-jgg@ziepe.ca>
References: <20180726034020.5583-1-jgg@ziepe.ca>
List-ID: linux-rdma@vger.kernel.org

The ridiculous dance with uobj_remove_commit() is not needed; the write
path can follow the same flow as ioctl: lock and destroy the HW object,
then use the data left over in the uobject to form the response to
userspace. Two helpers are introduced to make this flow straightforward
for the caller.
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/core/rdma_core.c  | 53 ++++++++++---------
 drivers/infiniband/core/uverbs_cmd.c | 77 ++++++----------------
 include/rdma/uverbs_std_types.h      | 16 ++++--
 include/rdma/uverbs_types.h          |  1 -
 4 files changed, 55 insertions(+), 92 deletions(-)

diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c
index 9e84ded6d3bee3..7db75d784070cc 100644
--- a/drivers/infiniband/core/rdma_core.c
+++ b/drivers/infiniband/core/rdma_core.c
@@ -130,24 +130,44 @@ static int uverbs_try_lock_object(struct ib_uobject *uobj, bool exclusive)
 }
 
 /*
- * Does both rdma_lookup_get_uobject() and rdma_remove_commit_uobject(), then
- * returns success_res on success (negative errno on failure). For use by
- * callers that do not need the uobj.
+ * uobj_get_destroy destroys the HW object and returns a handle to the uobj
+ * with a NULL object pointer. The caller must pair this with
+ * uverbs_put_destroy.
  */
-int __uobj_perform_destroy(const struct uverbs_obj_type *type, u32 id,
-			   struct ib_uverbs_file *ufile, int success_res)
+struct ib_uobject *__uobj_get_destroy(const struct uverbs_obj_type *type,
+				      u32 id, struct ib_uverbs_file *ufile)
 {
 	struct ib_uobject *uobj;
 	int ret;
 
 	uobj = rdma_lookup_get_uobject(type, ufile, id, true);
 	if (IS_ERR(uobj))
-		return PTR_ERR(uobj);
+		return uobj;
 
-	ret = rdma_remove_commit_uobject(uobj);
-	if (ret)
-		return ret;
+	ret = rdma_explicit_destroy(uobj);
+	if (ret) {
+		rdma_lookup_put_uobject(uobj, true);
+		return ERR_PTR(ret);
+	}
+
+	return uobj;
+}
+
+/*
+ * Does both uobj_get_destroy() and uobj_put_destroy(). Returns success_res
+ * on success (negative errno on failure). For use by callers that do not need
+ * the uobj.
+ */
+int __uobj_perform_destroy(const struct uverbs_obj_type *type, u32 id,
+			   struct ib_uverbs_file *ufile, int success_res)
+{
+	struct ib_uobject *uobj;
+
+	uobj = __uobj_get_destroy(type, id, ufile);
+	if (IS_ERR(uobj))
+		return PTR_ERR(uobj);
+
+	rdma_lookup_put_uobject(uobj, true);
 
 	return success_res;
 }
@@ -449,21 +469,6 @@ static int __must_check _rdma_remove_commit_uobject(struct ib_uobject *uobj,
 	return ret;
 }
 
-/* This is called only for user requested DESTROY reasons
- * rdma_lookup_get_uobject(exclusive=true) must have been called to get uobj,
- * and after this returns the corresponding put has been done, and the kref
- * for uobj has been consumed.
- */
-int __must_check rdma_remove_commit_uobject(struct ib_uobject *uobj)
-{
-	int ret;
-
-	ret = rdma_explicit_destroy(uobj);
-	/* Pairs with the lookup_get done by the caller */
-	rdma_lookup_put_uobject(uobj, true);
-	return ret;
-}
-
 int rdma_explicit_destroy(struct ib_uobject *uobject)
 {
 	int ret;
diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index 38d7de3f9b2f90..7ea179b59e4d8d 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -1304,37 +1304,22 @@ ssize_t ib_uverbs_destroy_cq(struct ib_uverbs_file *file,
 	struct ib_uverbs_destroy_cq cmd;
 	struct ib_uverbs_destroy_cq_resp resp;
 	struct ib_uobject *uobj;
-	struct ib_cq *cq;
 	struct ib_ucq_object *obj;
-	int ret = -EINVAL;
 
 	if (copy_from_user(&cmd, buf, sizeof cmd))
 		return -EFAULT;
 
-	uobj = uobj_get_write(UVERBS_OBJECT_CQ, cmd.cq_handle, file);
+	uobj = uobj_get_destroy(UVERBS_OBJECT_CQ, cmd.cq_handle, file);
 	if (IS_ERR(uobj))
 		return PTR_ERR(uobj);
 
-	/*
-	 * Make sure we don't free the memory in remove_commit as we still
-	 * needs the uobject memory to create the response.
-	 */
-	uverbs_uobject_get(uobj);
-	cq = uobj->object;
-	obj = container_of(cq->uobject, struct ib_ucq_object, uobject);
-
+	obj = container_of(uobj, struct ib_ucq_object, uobject);
 	memset(&resp, 0, sizeof(resp));
-
-	ret = uobj_remove_commit(uobj);
-	if (ret) {
-		uverbs_uobject_put(uobj);
-		return ret;
-	}
-
 	resp.comp_events_reported = obj->comp_events_reported;
 	resp.async_events_reported = obj->async_events_reported;
 
-	uverbs_uobject_put(uobj);
+	uobj_put_destroy(uobj);
+
 	if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp))
 		return -EFAULT;
@@ -2104,32 +2089,19 @@ ssize_t ib_uverbs_destroy_qp(struct ib_uverbs_file *file,
 	struct ib_uverbs_destroy_qp_resp resp;
 	struct ib_uobject *uobj;
 	struct ib_uqp_object *obj;
-	int ret = -EINVAL;
 
 	if (copy_from_user(&cmd, buf, sizeof cmd))
 		return -EFAULT;
 
-	memset(&resp, 0, sizeof resp);
-
-	uobj = uobj_get_write(UVERBS_OBJECT_QP, cmd.qp_handle, file);
+	uobj = uobj_get_destroy(UVERBS_OBJECT_QP, cmd.qp_handle, file);
 	if (IS_ERR(uobj))
 		return PTR_ERR(uobj);
 
 	obj = container_of(uobj, struct ib_uqp_object, uevent.uobject);
-	/*
-	 * Make sure we don't free the memory in remove_commit as we still
-	 * needs the uobject memory to create the response.
-	 */
-	uverbs_uobject_get(uobj);
-
-	ret = uobj_remove_commit(uobj);
-	if (ret) {
-		uverbs_uobject_put(uobj);
-		return ret;
-	}
-
+	memset(&resp, 0, sizeof(resp));
 	resp.events_reported = obj->uevent.events_reported;
-	uverbs_uobject_put(uobj);
+
+	uobj_put_destroy(uobj);
 
 	if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp))
 		return -EFAULT;
@@ -3194,22 +3166,14 @@ int ib_uverbs_ex_destroy_wq(struct ib_uverbs_file *file,
 		return -EOPNOTSUPP;
 
 	resp.response_length = required_resp_len;
-	uobj = uobj_get_write(UVERBS_OBJECT_WQ, cmd.wq_handle, file);
+	uobj = uobj_get_destroy(UVERBS_OBJECT_WQ, cmd.wq_handle, file);
 	if (IS_ERR(uobj))
 		return PTR_ERR(uobj);
 
 	obj = container_of(uobj, struct ib_uwq_object, uevent.uobject);
-	/*
-	 * Make sure we don't free the memory in remove_commit as we still
-	 * needs the uobject memory to create the response.
-	 */
-	uverbs_uobject_get(uobj);
-
-	ret = uobj_remove_commit(uobj);
 	resp.events_reported = obj->uevent.events_reported;
-	uverbs_uobject_put(uobj);
-	if (ret)
-		return ret;
+
+	uobj_put_destroy(uobj);
 
 	return ib_copy_to_udata(ucore, &resp, resp.response_length);
 }
@@ -3916,31 +3880,20 @@ ssize_t ib_uverbs_destroy_srq(struct ib_uverbs_file *file,
 	struct ib_uverbs_destroy_srq_resp resp;
 	struct ib_uobject *uobj;
 	struct ib_uevent_object *obj;
-	int ret = -EINVAL;
 
 	if (copy_from_user(&cmd, buf, sizeof cmd))
 		return -EFAULT;
 
-	uobj = uobj_get_write(UVERBS_OBJECT_SRQ, cmd.srq_handle, file);
+	uobj = uobj_get_destroy(UVERBS_OBJECT_SRQ, cmd.srq_handle, file);
 	if (IS_ERR(uobj))
 		return PTR_ERR(uobj);
 
 	obj = container_of(uobj, struct ib_uevent_object, uobject);
-	/*
-	 * Make sure we don't free the memory in remove_commit as we still
-	 * needs the uobject memory to create the response.
-	 */
-	uverbs_uobject_get(uobj);
-
 	memset(&resp, 0, sizeof(resp));
-
-	ret = uobj_remove_commit(uobj);
-	if (ret) {
-		uverbs_uobject_put(uobj);
-		return ret;
-	}
 	resp.events_reported = obj->events_reported;
-	uverbs_uobject_put(uobj);
+
+	uobj_put_destroy(uobj);
+
 	if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof(resp)))
 		return -EFAULT;
diff --git a/include/rdma/uverbs_std_types.h b/include/rdma/uverbs_std_types.h
index 076f085d2dcf66..c2f89e41cbd2d4 100644
--- a/include/rdma/uverbs_std_types.h
+++ b/include/rdma/uverbs_std_types.h
@@ -84,6 +84,17 @@ int __uobj_perform_destroy(const struct uverbs_obj_type *type, u32 id,
 	__uobj_perform_destroy(uobj_get_type(_type), _uobj_check_id(_id), \
 			       _ufile, _success_res)
 
+struct ib_uobject *__uobj_get_destroy(const struct uverbs_obj_type *type,
+				      u32 id, struct ib_uverbs_file *ufile);
+
+#define uobj_get_destroy(_type, _id, _ufile) \
+	__uobj_get_destroy(uobj_get_type(_type), _uobj_check_id(_id), _ufile)
+
+static inline void uobj_put_destroy(struct ib_uobject *uobj)
+{
+	rdma_lookup_put_uobject(uobj, true);
+}
+
 static inline void uobj_put_read(struct ib_uobject *uobj)
 {
 	rdma_lookup_put_uobject(uobj, false);
@@ -97,11 +108,6 @@ static inline void uobj_put_write(struct ib_uobject *uobj)
 	rdma_lookup_put_uobject(uobj, true);
 }
 
-static inline int __must_check uobj_remove_commit(struct ib_uobject *uobj)
-{
-	return rdma_remove_commit_uobject(uobj);
-}
-
 static inline int __must_check uobj_alloc_commit(struct ib_uobject *uobj,
 						 int success_res)
 {
diff --git a/include/rdma/uverbs_types.h b/include/rdma/uverbs_types.h
index cfc50fcdbff63e..8bae28dd2e4f98 100644
--- a/include/rdma/uverbs_types.h
+++ b/include/rdma/uverbs_types.h
@@ -126,7 +126,6 @@ void rdma_lookup_put_uobject(struct ib_uobject *uobj, bool exclusive);
 struct ib_uobject *rdma_alloc_begin_uobject(const struct uverbs_obj_type *type,
 					    struct ib_uverbs_file *ufile);
 void rdma_alloc_abort_uobject(struct ib_uobject *uobj);
-int __must_check rdma_remove_commit_uobject(struct ib_uobject *uobj);
 int __must_check rdma_alloc_commit_uobject(struct ib_uobject *uobj);
 int rdma_explicit_destroy(struct ib_uobject *uobject);

From patchwork Thu Jul 26 03:40:12 2018
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 10545141
From: Jason Gunthorpe
To: linux-rdma@vger.kernel.org, Leon Romanovsky, "Guy Levi(SW)", Yishai Hadas, "Ruhl, Michael J"
Cc: Jason Gunthorpe
Subject: [PATCH 03/11] IB/uverbs: Consolidate uobject destruction
Date: Wed, 25 Jul 2018 21:40:12 -0600
Message-Id: <20180726034020.5583-4-jgg@ziepe.ca>
In-Reply-To: <20180726034020.5583-1-jgg@ziepe.ca>
References: <20180726034020.5583-1-jgg@ziepe.ca>
List-ID: linux-rdma@vger.kernel.org

There are several flows that can destroy a uobject, and each one is
minimized and sprinkled throughout the code base, making it difficult
to understand and very hard to modify the destroy path.

Consolidate all of these into uverbs_destroy_uobject() and call it in
all cases where a uobject has to be destroyed.

This makes one change to the lifecycle: during any abort (e.g. when
alloc_commit is not called) we always call out to alloc_abort, even if
remove_commit needs to be called to delete a HW object.

This also renames RDMA_REMOVE_DURING_CLEANUP to RDMA_REMOVE_ABORT to
clarify its actual usage, and revises some of the comments to reflect
what the life cycle is for the type implementation.
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/core/rdma_core.c | 251 ++++++++++++++--------------
 include/rdma/ib_verbs.h             |   4 +-
 include/rdma/uverbs_types.h         |  70 ++++----
 3 files changed, 157 insertions(+), 168 deletions(-)

diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c
index 7db75d784070cc..aa1d16d87746c3 100644
--- a/drivers/infiniband/core/rdma_core.c
+++ b/drivers/infiniband/core/rdma_core.c
@@ -129,6 +129,95 @@ static int uverbs_try_lock_object(struct ib_uobject *uobj, bool exclusive)
 	return atomic_cmpxchg(&uobj->usecnt, 0, -1) == 0 ? 0 : -EBUSY;
 }
 
+static void assert_uverbs_usecnt(struct ib_uobject *uobj, bool exclusive)
+{
+#ifdef CONFIG_LOCKDEP
+	if (exclusive)
+		WARN_ON(atomic_read(&uobj->usecnt) != -1);
+	else
+		WARN_ON(atomic_read(&uobj->usecnt) <= 0);
+#endif
+}
+
+/*
+ * This must be called with the hw_destroy_rwsem locked (except for
+ * RDMA_REMOVE_ABORT) for read or write; also the uobject itself must be
+ * locked for write.
+ *
+ * Upon return the HW object is guaranteed to be destroyed.
+ *
+ * For RDMA_REMOVE_ABORT, the hw_destroy_rwsem is not required to be held,
+ * however the type's alloc_commit function cannot have been called and the
+ * uobject cannot be on the uobjects_lists.
+ *
+ * For RDMA_REMOVE_DESTROY the caller should be holding a kref (eg via
+ * rdma_lookup_get_uobject) and the object is left in a state where the caller
+ * needs to call rdma_lookup_put_uobject.
+ *
+ * For all other destroy modes this function internally unlocks the uobject
+ * and consumes the kref on the uobj.
+ */
+static int uverbs_destroy_uobject(struct ib_uobject *uobj,
+				  enum rdma_remove_reason reason)
+{
+	struct ib_uverbs_file *ufile = uobj->ufile;
+	unsigned long flags;
+	int ret;
+
+	assert_uverbs_usecnt(uobj, true);
+
+	if (uobj->object) {
+		ret = uobj->type->type_class->remove_commit(uobj, reason);
+		if (ret) {
+			if (ib_is_destroy_retryable(ret, reason, uobj))
+				return ret;
+
+			/* Nothing to be done, dangle the memory and move on */
+			WARN(true,
+			     "ib_uverbs: failed to remove uobject id %d, driver err=%d",
+			     uobj->id, ret);
+		}
+
+		uobj->object = NULL;
+	}
+
+	if (reason == RDMA_REMOVE_ABORT) {
+		WARN_ON(!list_empty(&uobj->list));
+		WARN_ON(!uobj->context);
+		uobj->type->type_class->alloc_abort(uobj);
+	}
+
+	uobj->context = NULL;
+
+	/*
+	 * For DESTROY the usecnt is held write locked, the caller is expected
+	 * to put it unlock and put the object when done with it.
+	 */
+	if (reason != RDMA_REMOVE_DESTROY)
+		atomic_set(&uobj->usecnt, 0);
+
+	if (!list_empty(&uobj->list)) {
+		spin_lock_irqsave(&ufile->uobjects_lock, flags);
+		list_del_init(&uobj->list);
+		spin_unlock_irqrestore(&ufile->uobjects_lock, flags);
+
+		/*
+		 * Pairs with the get in rdma_alloc_commit_uobject(), could
+		 * destroy uobj.
+		 */
+		uverbs_uobject_put(uobj);
+	}
+
+	/*
+	 * When aborting the stack kref remains owned by the core code, and is
+	 * not transferred into the type. Pairs with the get in alloc_uobj
+	 */
+	if (reason == RDMA_REMOVE_ABORT)
+		uverbs_uobject_put(uobj);
+
+	return 0;
+}
+
 /*
  * uobj_get_destroy destroys the HW object and returns a handle to the uobj
  * with a NULL object pointer. The caller must pair this with
@@ -171,6 +260,7 @@ int __uobj_perform_destroy(const struct uverbs_obj_type *type, u32 id,
 	return success_res;
 }
 
+/* alloc_uobj must be undone by uverbs_destroy_uobject() */
 static struct ib_uobject *alloc_uobj(struct ib_uverbs_file *ufile,
 				     const struct uverbs_obj_type *type)
 {
@@ -379,6 +469,16 @@ struct ib_uobject *rdma_alloc_begin_uobject(const struct uverbs_obj_type *type,
 	return type->type_class->alloc_begin(type, ufile);
 }
 
+static void alloc_abort_idr_uobject(struct ib_uobject *uobj)
+{
+	ib_rdmacg_uncharge(&uobj->cg_obj, uobj->context->device,
+			   RDMACG_RESOURCE_HCA_OBJECT);
+
+	spin_lock(&uobj->ufile->idr_lock);
+	idr_remove(&uobj->ufile->idr, uobj->id);
+	spin_unlock(&uobj->ufile->idr_lock);
+}
+
 static int __must_check remove_commit_idr_uobject(struct ib_uobject *uobj,
 						  enum rdma_remove_reason why)
 {
@@ -395,25 +495,19 @@ static int __must_check remove_commit_idr_uobject(struct ib_uobject *uobj,
 	if (ib_is_destroy_retryable(ret, why, uobj))
 		return ret;
 
-	ib_rdmacg_uncharge(&uobj->cg_obj, uobj->context->device,
-			   RDMACG_RESOURCE_HCA_OBJECT);
-
-	spin_lock(&uobj->ufile->idr_lock);
-	idr_remove(&uobj->ufile->idr, uobj->id);
-	spin_unlock(&uobj->ufile->idr_lock);
+	if (why == RDMA_REMOVE_ABORT)
+		return 0;
 
+	alloc_abort_idr_uobject(uobj);
 	/* Matches the kref in alloc_commit_idr_uobject */
 	uverbs_uobject_put(uobj);
 
-	return ret;
+	return 0;
 }
 
 static void alloc_abort_fd_uobject(struct ib_uobject *uobj)
 {
 	put_unused_fd(uobj->id);
-
-	/* Pairs with the kref from alloc_begin_idr_uobject */
-	uverbs_uobject_put(uobj);
 }
 
 static int __must_check remove_commit_fd_uobject(struct ib_uobject *uobj,
@@ -426,47 +520,7 @@ static int __must_check remove_commit_fd_uobject(struct ib_uobject *uobj,
 	if (ib_is_destroy_retryable(ret, why, uobj))
 		return ret;
 
-	if (why == RDMA_REMOVE_DURING_CLEANUP) {
-		alloc_abort_fd_uobject(uobj);
-		return ret;
-	}
-
-	uobj->context = NULL;
-	return ret;
-}
-
-static void assert_uverbs_usecnt(struct ib_uobject *uobj, bool exclusive)
-{
-#ifdef CONFIG_LOCKDEP
-	if (exclusive)
-		WARN_ON(atomic_read(&uobj->usecnt) != -1);
-	else
-		WARN_ON(atomic_read(&uobj->usecnt) <= 0);
-#endif
-}
-
-static int __must_check _rdma_remove_commit_uobject(struct ib_uobject *uobj,
-						    enum rdma_remove_reason why)
-{
-	struct ib_uverbs_file *ufile = uobj->ufile;
-	int ret;
-
-	if (!uobj->object)
-		return 0;
-
-	ret = uobj->type->type_class->remove_commit(uobj, why);
-	if (ib_is_destroy_retryable(ret, why, uobj))
-		return ret;
-
-	uobj->object = NULL;
-
-	spin_lock_irq(&ufile->uobjects_lock);
-	list_del(&uobj->list);
-	spin_unlock_irq(&ufile->uobjects_lock);
-	/* Pairs with the get in rdma_alloc_commit_uobject() */
-	uverbs_uobject_put(uobj);
-
-	return ret;
+	return 0;
 }
 
 int rdma_explicit_destroy(struct ib_uobject *uobject)
@@ -479,8 +533,8 @@ int rdma_explicit_destroy(struct ib_uobject *uobject)
 		WARN(true, "ib_uverbs: Cleanup is running while removing an uobject\n");
 		return 0;
 	}
-	assert_uverbs_usecnt(uobject, true);
-	ret = _rdma_remove_commit_uobject(uobject, RDMA_REMOVE_DESTROY);
+
+	ret = uverbs_destroy_uobject(uobject, RDMA_REMOVE_DESTROY);
 	up_read(&ufile->hw_destroy_rwsem);
 
 	return ret;
@@ -554,24 +608,14 @@ int __must_check rdma_alloc_commit_uobject(struct ib_uobject *uobj)
 	/* Cleanup is running. Calling this should have been impossible */
 	if (!down_read_trylock(&ufile->hw_destroy_rwsem)) {
 		WARN(true, "ib_uverbs: Cleanup is running while allocating an uobject\n");
-		ret = uobj->type->type_class->remove_commit(uobj,
-							    RDMA_REMOVE_DURING_CLEANUP);
-		if (ret)
-			pr_warn("ib_uverbs: cleanup of idr object %d failed\n",
-				uobj->id);
-		return ret;
+		uverbs_destroy_uobject(uobj, RDMA_REMOVE_ABORT);
+		return -EINVAL;
 	}
 
-	assert_uverbs_usecnt(uobj, true);
-
 	/* alloc_commit consumes the uobj kref */
 	ret = uobj->type->type_class->alloc_commit(uobj);
 	if (ret) {
-		if (uobj->type->type_class->remove_commit(
-			    uobj, RDMA_REMOVE_DURING_CLEANUP))
-			pr_warn("ib_uverbs: cleanup of idr object %d failed\n",
-				uobj->id);
-		up_read(&ufile->hw_destroy_rwsem);
+		uverbs_destroy_uobject(uobj, RDMA_REMOVE_ABORT);
 		return ret;
 	}
 
@@ -589,27 +633,14 @@ int __must_check rdma_alloc_commit_uobject(struct ib_uobject *uobj)
 	return 0;
 }
 
-static void alloc_abort_idr_uobject(struct ib_uobject *uobj)
-{
-	ib_rdmacg_uncharge(&uobj->cg_obj, uobj->context->device,
-			   RDMACG_RESOURCE_HCA_OBJECT);
-
-	spin_lock(&uobj->ufile->idr_lock);
-	/* The value of the handle in the IDR is NULL at this point. */
-	idr_remove(&uobj->ufile->idr, uobj->id);
-	spin_unlock(&uobj->ufile->idr_lock);
-
-	/* Pairs with the kref from alloc_begin_idr_uobject */
-	uverbs_uobject_put(uobj);
-}
-
 /*
  * This consumes the kref for uobj. It is up to the caller to unwind the HW
  * object and anything else connected to uobj before calling this.
  */
 void rdma_alloc_abort_uobject(struct ib_uobject *uobj)
 {
-	uobj->type->type_class->alloc_abort(uobj);
+	uobj->object = NULL;
+	uverbs_destroy_uobject(uobj, RDMA_REMOVE_ABORT);
 }
 
 static void lookup_put_idr_uobject(struct ib_uobject *uobj, bool exclusive)
@@ -667,45 +698,23 @@ const struct uverbs_obj_type_class uverbs_idr_class = {
 };
 EXPORT_SYMBOL(uverbs_idr_class);
 
-static void _uverbs_close_fd(struct ib_uobject *uobj)
-{
-	int ret;
-
-	/*
-	 * uobject was already cleaned up, remove_commit_fd_uobject
-	 * sets this
-	 */
-	if (!uobj->context)
-		return;
-
-	/*
-	 * lookup_get_fd_uobject holds the kref on the struct file any time a
-	 * FD uobj is locked, which prevents this release method from being
-	 * invoked. Meaning we can always get the write lock here, or we have
-	 * a kernel bug. If so dangle the pointers and bail.
-	 */
-	ret = uverbs_try_lock_object(uobj, true);
-	if (WARN(ret, "uverbs_close_fd() racing with lookup_get_fd_uobject()"))
-		return;
-
-	ret = _rdma_remove_commit_uobject(uobj, RDMA_REMOVE_CLOSE);
-	if (ret)
-		pr_warn("Unable to clean up uobject file in %s\n", __func__);
-
-	atomic_set(&uobj->usecnt, 0);
-}
-
 void uverbs_close_fd(struct file *f)
 {
 	struct ib_uobject *uobj = f->private_data;
 	struct ib_uverbs_file *ufile = uobj->ufile;
 
 	if (down_read_trylock(&ufile->hw_destroy_rwsem)) {
-		_uverbs_close_fd(uobj);
+		/*
+		 * lookup_get_fd_uobject holds the kref on the struct file any
+		 * time a FD uobj is locked, which prevents this release
+		 * method from being invoked. Meaning we can always get the
+		 * write lock here, or we have a kernel bug.
+ */ + WARN_ON(uverbs_try_lock_object(uobj, true)); + uverbs_destroy_uobject(uobj, RDMA_REMOVE_CLOSE); up_read(&ufile->hw_destroy_rwsem); } - uobj->object = NULL; /* Matches the get in alloc_begin_fd_uobject */ kref_put(&ufile->ref, ib_uverbs_release_file); @@ -783,7 +792,6 @@ static int __uverbs_cleanup_ufile(struct ib_uverbs_file *ufile, { struct ib_uobject *obj, *next_obj; int ret = -EINVAL; - int err = 0; /* * This shouldn't run while executing other commands on this @@ -800,23 +808,8 @@ static int __uverbs_cleanup_ufile(struct ib_uverbs_file *ufile, * racing with a lookup_get. */ WARN_ON(uverbs_try_lock_object(obj, true)); - err = obj->type->type_class->remove_commit(obj, reason); - - if (ib_is_destroy_retryable(err, reason, obj)) { - pr_debug("ib_uverbs: failed to remove uobject id %d err %d\n", - obj->id, err); - atomic_set(&obj->usecnt, 0); - continue; - } - - if (err) - pr_err("ib_uverbs: unable to remove uobject id %d err %d\n", - obj->id, err); - - list_del(&obj->list); - /* Pairs with the get in rdma_alloc_commit_uobject() */ - uverbs_uobject_put(obj); - ret = 0; + if (!uverbs_destroy_uobject(obj, reason)) + ret = 0; } return ret; } diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 42cbf8eabe9d99..7d18e1df052292 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -1466,8 +1466,8 @@ enum rdma_remove_reason { RDMA_REMOVE_CLOSE, /* Driver is being hot-unplugged. 
This call should delete the actual object itself */ RDMA_REMOVE_DRIVER_REMOVE, - /* Context is being cleaned-up, but commit was just completed */ - RDMA_REMOVE_DURING_CLEANUP, + /* uobj is being cleaned-up before being committed */ + RDMA_REMOVE_ABORT, }; struct ib_rdmacg_object { diff --git a/include/rdma/uverbs_types.h b/include/rdma/uverbs_types.h index 8bae28dd2e4f98..875dd8c16ba3a7 100644 --- a/include/rdma/uverbs_types.h +++ b/include/rdma/uverbs_types.h @@ -38,53 +38,49 @@ struct uverbs_obj_type; +/* + * The following sequences are valid: + * Success flow: + * alloc_begin + * alloc_commit + * [..] + * Access flow: + * lookup_get(exclusive=false) & uverbs_try_lock_object + * lookup_put(exclusive=false) via rdma_lookup_put_uobject + * Destruction flow: + * lookup_get(exclusive=true) & uverbs_try_lock_object + * remove_commit + * lookup_put(exclusive=true) via rdma_lookup_put_uobject + * + * Allocate Error flow #1 + * alloc_begin + * alloc_abort + * Allocate Error flow #2 + * alloc_begin + * remove_commit + * alloc_abort + * Allocate Error flow #3 + * alloc_begin + * alloc_commit (fails) + * remove_commit + * alloc_abort + * + * In all cases the caller must hold the ufile kref until alloc_commit or + * alloc_abort returns. + */ struct uverbs_obj_type_class { - /* - * Get an ib_uobject that corresponds to the given id from ucontext, - * These functions could create or destroy objects if required. - * The action will be finalized only when commit, abort or put fops are - * called. - * The flow of the different actions is: - * [alloc]: Starts with alloc_begin. The handlers logic is than - * executed. If the handler is successful, alloc_commit - * is called and the object is inserted to the repository. - * Once alloc_commit completes the object is visible to - * other threads and userspace. - * Otherwise, alloc_abort is called and the object is - * destroyed. - * [lookup]: Starts with lookup_get which fetches and locks the - * object.
After the handler finished using the object, it - * needs to call lookup_put to unlock it. The exclusive - * flag indicates if the object is locked for exclusive - * access. - * [remove]: Starts with lookup_get with exclusive flag set. This - * locks the object for exclusive access. If the handler - * code completed successfully, remove_commit is called - * and the ib_uobject is removed from the context's - * uobjects repository and put. The object itself is - * destroyed as well. Once remove succeeds new krefs to - * the object cannot be acquired by other threads or - * userspace and the hardware driver is removed from the - * object. Other krefs on the object may still exist. - * If the handler code failed, lookup_put should be - * called. This callback is used when the context - * is destroyed as well (process termination, - * reset flow). - */ struct ib_uobject *(*alloc_begin)(const struct uverbs_obj_type *type, struct ib_uverbs_file *ufile); + /* This consumes the kref on uobj */ int (*alloc_commit)(struct ib_uobject *uobj); + /* This does not consume the kref on uobj */ void (*alloc_abort)(struct ib_uobject *uobj); struct ib_uobject *(*lookup_get)(const struct uverbs_obj_type *type, struct ib_uverbs_file *ufile, s64 id, bool exclusive); void (*lookup_put)(struct ib_uobject *uobj, bool exclusive); - /* - * Must be called with the exclusive lock held. If successful uobj is - * invalid on return. 
On failure uobject is left completely - * unchanged - */ + /* This does not consume the kref on uobj */ int __must_check (*remove_commit)(struct ib_uobject *uobj, enum rdma_remove_reason why); u8 needs_kfree_rcu; From patchwork Thu Jul 26 03:40:13 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jason Gunthorpe X-Patchwork-Id: 10545149 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5F3E8139A for ; Thu, 26 Jul 2018 03:40:32 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 43C5D2AC9F for ; Thu, 26 Jul 2018 03:40:32 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 4207D2ACCC; Thu, 26 Jul 2018 03:40:32 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.8 required=2.0 tests=BAYES_00,DKIM_SIGNED, MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI,T_DKIM_INVALID autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5CFAC2ACA5 for ; Thu, 26 Jul 2018 03:40:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726980AbeGZEzO (ORCPT ); Thu, 26 Jul 2018 00:55:14 -0400 Received: from mail-it0-f68.google.com ([209.85.214.68]:38715 "EHLO mail-it0-f68.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726430AbeGZEzO (ORCPT ); Thu, 26 Jul 2018 00:55:14 -0400 Received: by mail-it0-f68.google.com with SMTP id v71-v6so848108itb.3 for ; Wed, 25 Jul 2018 20:40:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ziepe.ca; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; 
bh=lJxZl58JBsZP7gLA8UMuQUME9rot7+R4uSIod9g9NdU=; b=IIvwp0imKte4mqrzRN7obH0ogtaDiUcVQTAjNmo97oRpXDDA2kGxqAoXkgJpwinw4B GiM9T6ihbc6DEInNrsdQofVQ45HI9zAXyVqYb49Ulml33FL4pHCBryu8S9hDr1TEMflS jBcrJWY2cUVcM2cUclT9Ko0WoCXkpqMyKf+YTbgoYJPE/meKlp1kK4awBcOWqhFOmHSD HbgAr8wiDGvi3LOg6Z+E6bdCjdKfzCU3nK+ysMQADzgWWkxZW4lr8/ILgJaT4LAMaZmT JLLIvKcE7twETXQyWzLEV4dSN5P1FhGl1vLYPka9Tr04nbMuh22o85AKyXHCizg0/nH8 H26Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=lJxZl58JBsZP7gLA8UMuQUME9rot7+R4uSIod9g9NdU=; b=BrjfngcgNfuArvZDCRaPQMqb9ZAb5ubCebUHZCXi9ScwEx2F398Lwpmn7IV7QQchEy b2MSbpIf0aBM57wj1rsb12gVsssIMunfQSBYlq/3MQiqsunH0NjlKBopJez++lqTuBHU LTVlkw49IAqLljjvJtbp3wgfs9+FcwC02J0mN874MffeBijgSta+gbAn7khRXU3o6j3z mdzPZ0DJdiIreUrRymG/pMARBg71Xv8LTPXx2i7bKkqvZhu/ywMh8QGH397DThTU63QO iDO5e87Vlsjw7Svjs8uFCoLQofXTyHMG/ugzgae42jMOitXjqU7hKrTzSsSclLevjM6F APNQ== X-Gm-Message-State: AOUpUlGGQu0mpCgawVTIt3eU567n2OgDpmqT6wAR6NMIOGmMIOuhn5/3 x1Vq6Uv/HLHbvqWeNTVLHeISZ584pZ5RDQ== X-Google-Smtp-Source: AAOMgpeD7oU8MEDPs2DTo3jdUH4WlfGGXVgtil6d1WOcFJQzp0ZL0Vs39cnecwaxlA7FM7FW4HbaGg== X-Received: by 2002:a02:238f:: with SMTP id u137-v6mr246826jau.0.1532576428088; Wed, 25 Jul 2018 20:40:28 -0700 (PDT) Received: from ziepe.ca (S010614cc2056d97f.ed.shawcable.net. 
[174.3.196.123]) by smtp.gmail.com with ESMTPSA id e140-v6sm80797iof.50.2018.07.25.20.40.24 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 25 Jul 2018 20:40:24 -0700 (PDT) Received: from jgg by mlx.ziepe.ca with local (Exim 4.86_2) (envelope-from ) id 1fiX8Z-0001WZ-KN; Wed, 25 Jul 2018 21:40:23 -0600 From: Jason Gunthorpe To: linux-rdma@vger.kernel.org, Leon Romanovsky , "Guy Levi(SW)" , Yishai Hadas , "Ruhl, Michael J" Cc: Jason Gunthorpe Subject: [PATCH 04/11] IB/uverbs: Convert 'bool exclusive' into an enum Date: Wed, 25 Jul 2018 21:40:13 -0600 Message-Id: <20180726034020.5583-5-jgg@ziepe.ca> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180726034020.5583-1-jgg@ziepe.ca> References: <20180726034020.5583-1-jgg@ziepe.ca> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Jason Gunthorpe This is more readable, and future patches will need a 3rd lookup type. Signed-off-by: Jason Gunthorpe --- drivers/infiniband/core/rdma_core.c | 94 +++++++++++++++++------------ include/rdma/uverbs_std_types.h | 13 ++-- include/rdma/uverbs_types.h | 16 +++-- 3 files changed, 75 insertions(+), 48 deletions(-) diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c index aa1d16d87746c3..435dbe8ef2a28d 100644 --- a/drivers/infiniband/core/rdma_core.c +++ b/drivers/infiniband/core/rdma_core.c @@ -108,7 +108,8 @@ void uverbs_uobject_put(struct ib_uobject *uobject) kref_put(&uobject->ref, uverbs_uobject_free); } -static int uverbs_try_lock_object(struct ib_uobject *uobj, bool exclusive) +static int uverbs_try_lock_object(struct ib_uobject *uobj, + enum rdma_lookup_mode mode) { /* * When a shared access is required, we use a positive counter. Each @@ -121,21 +122,29 @@ static int uverbs_try_lock_object(struct ib_uobject *uobj, bool exclusive) * concurrently, setting the counter to zero is enough for releasing * this lock. 
*/ - if (!exclusive) + switch (mode) { + case UVERBS_LOOKUP_READ: return __atomic_add_unless(&uobj->usecnt, 1, -1) == -1 ? -EBUSY : 0; - - /* lock is either WRITE or DESTROY - should be exclusive */ - return atomic_cmpxchg(&uobj->usecnt, 0, -1) == 0 ? 0 : -EBUSY; + case UVERBS_LOOKUP_WRITE: + /* lock is either WRITE or DESTROY - should be exclusive */ + return atomic_cmpxchg(&uobj->usecnt, 0, -1) == 0 ? 0 : -EBUSY; + } + return 0; } -static void assert_uverbs_usecnt(struct ib_uobject *uobj, bool exclusive) +static void assert_uverbs_usecnt(struct ib_uobject *uobj, + enum rdma_lookup_mode mode) { #ifdef CONFIG_LOCKDEP - if (exclusive) - WARN_ON(atomic_read(&uobj->usecnt) != -1); - else + switch (mode) { + case UVERBS_LOOKUP_READ: WARN_ON(atomic_read(&uobj->usecnt) <= 0); + break; + case UVERBS_LOOKUP_WRITE: + WARN_ON(atomic_read(&uobj->usecnt) != -1); + break; + } #endif } @@ -164,7 +173,7 @@ static int uverbs_destroy_uobject(struct ib_uobject *uobj, unsigned long flags; int ret; - assert_uverbs_usecnt(uobj, true); + assert_uverbs_usecnt(uobj, UVERBS_LOOKUP_WRITE); if (uobj->object) { ret = uobj->type->type_class->remove_commit(uobj, reason); @@ -229,13 +238,13 @@ struct ib_uobject *__uobj_get_destroy(const struct uverbs_obj_type *type, struct ib_uobject *uobj; int ret; - uobj = rdma_lookup_get_uobject(type, ufile, id, true); + uobj = rdma_lookup_get_uobject(type, ufile, id, UVERBS_LOOKUP_WRITE); if (IS_ERR(uobj)) return uobj; ret = rdma_explicit_destroy(uobj); if (ret) { - rdma_lookup_put_uobject(uobj, true); + rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_WRITE); return ERR_PTR(ret); } @@ -256,7 +265,7 @@ int __uobj_perform_destroy(const struct uverbs_obj_type *type, u32 id, if (IS_ERR(uobj)) return PTR_ERR(uobj); - rdma_lookup_put_uobject(uobj, true); + rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_WRITE); return success_res; } @@ -319,7 +328,8 @@ static int idr_add_uobj(struct ib_uobject *uobj) /* Returns the ib_uobject or an error. 
The caller should check for IS_ERR. */ static struct ib_uobject * lookup_get_idr_uobject(const struct uverbs_obj_type *type, - struct ib_uverbs_file *ufile, s64 id, bool exclusive) + struct ib_uverbs_file *ufile, s64 id, + enum rdma_lookup_mode mode) { struct ib_uobject *uobj; unsigned long idrno = id; @@ -349,9 +359,10 @@ lookup_get_idr_uobject(const struct uverbs_obj_type *type, return uobj; } -static struct ib_uobject *lookup_get_fd_uobject(const struct uverbs_obj_type *type, - struct ib_uverbs_file *ufile, - s64 id, bool exclusive) +static struct ib_uobject * +lookup_get_fd_uobject(const struct uverbs_obj_type *type, + struct ib_uverbs_file *ufile, s64 id, + enum rdma_lookup_mode mode) { struct file *f; struct ib_uobject *uobject; @@ -362,7 +373,7 @@ static struct ib_uobject *lookup_get_fd_uobject(const struct uverbs_obj_type *ty if (fdno != id) return ERR_PTR(-EINVAL); - if (exclusive) + if (mode != UVERBS_LOOKUP_READ) return ERR_PTR(-EOPNOTSUPP); f = fget(fdno); @@ -386,12 +397,12 @@ static struct ib_uobject *lookup_get_fd_uobject(const struct uverbs_obj_type *ty struct ib_uobject *rdma_lookup_get_uobject(const struct uverbs_obj_type *type, struct ib_uverbs_file *ufile, s64 id, - bool exclusive) + enum rdma_lookup_mode mode) { struct ib_uobject *uobj; int ret; - uobj = type->type_class->lookup_get(type, ufile, id, exclusive); + uobj = type->type_class->lookup_get(type, ufile, id, mode); if (IS_ERR(uobj)) return uobj; @@ -400,13 +411,13 @@ struct ib_uobject *rdma_lookup_get_uobject(const struct uverbs_obj_type *type, goto free; } - ret = uverbs_try_lock_object(uobj, exclusive); + ret = uverbs_try_lock_object(uobj, mode); if (ret) goto free; return uobj; free: - uobj->type->type_class->lookup_put(uobj, exclusive); + uobj->type->type_class->lookup_put(uobj, mode); uverbs_uobject_put(uobj); return ERR_PTR(ret); } @@ -643,32 +654,39 @@ void rdma_alloc_abort_uobject(struct ib_uobject *uobj) uverbs_destroy_uobject(uobj, RDMA_REMOVE_ABORT); } -static void 
lookup_put_idr_uobject(struct ib_uobject *uobj, bool exclusive) +static void lookup_put_idr_uobject(struct ib_uobject *uobj, + enum rdma_lookup_mode mode) { } -static void lookup_put_fd_uobject(struct ib_uobject *uobj, bool exclusive) +static void lookup_put_fd_uobject(struct ib_uobject *uobj, + enum rdma_lookup_mode mode) { struct file *filp = uobj->object; - WARN_ON(exclusive); + WARN_ON(mode != UVERBS_LOOKUP_READ); /* This indirectly calls uverbs_close_fd and free the object */ fput(filp); } -void rdma_lookup_put_uobject(struct ib_uobject *uobj, bool exclusive) +void rdma_lookup_put_uobject(struct ib_uobject *uobj, + enum rdma_lookup_mode mode) { - assert_uverbs_usecnt(uobj, exclusive); - uobj->type->type_class->lookup_put(uobj, exclusive); + assert_uverbs_usecnt(uobj, mode); + uobj->type->type_class->lookup_put(uobj, mode); /* * In order to unlock an object, either decrease its usecnt for * read access or zero it in case of exclusive access. See * uverbs_try_lock_object for locking schema information. */ - if (!exclusive) + switch (mode) { + case UVERBS_LOOKUP_READ: atomic_dec(&uobj->usecnt); - else + break; + case UVERBS_LOOKUP_WRITE: atomic_set(&uobj->usecnt, 0); + break; + } /* Pairs with the kref obtained by type->lookup_get */ uverbs_uobject_put(uobj); @@ -710,7 +728,7 @@ void uverbs_close_fd(struct file *f) * method from being invoked. Meaning we can always get the * write lock here, or we have a kernel bug. */ - WARN_ON(uverbs_try_lock_object(uobj, true)); + WARN_ON(uverbs_try_lock_object(uobj, UVERBS_LOOKUP_WRITE)); uverbs_destroy_uobject(uobj, RDMA_REMOVE_CLOSE); up_read(&ufile->hw_destroy_rwsem); } @@ -807,7 +825,7 @@ static int __uverbs_cleanup_ufile(struct ib_uverbs_file *ufile, * if we hit this WARN_ON, that means we are * racing with a lookup_get. 
*/ - WARN_ON(uverbs_try_lock_object(obj, true)); + WARN_ON(uverbs_try_lock_object(obj, UVERBS_LOOKUP_WRITE)); if (!uverbs_destroy_uobject(obj, reason)) ret = 0; } @@ -890,10 +908,12 @@ uverbs_get_uobject_from_file(const struct uverbs_obj_type *type_attrs, { switch (access) { case UVERBS_ACCESS_READ: - return rdma_lookup_get_uobject(type_attrs, ufile, id, false); + return rdma_lookup_get_uobject(type_attrs, ufile, id, + UVERBS_LOOKUP_READ); case UVERBS_ACCESS_DESTROY: case UVERBS_ACCESS_WRITE: - return rdma_lookup_get_uobject(type_attrs, ufile, id, true); + return rdma_lookup_get_uobject(type_attrs, ufile, id, + UVERBS_LOOKUP_WRITE); case UVERBS_ACCESS_NEW: return rdma_alloc_begin_uobject(type_attrs, ufile); default: @@ -916,13 +936,13 @@ int uverbs_finalize_object(struct ib_uobject *uobj, switch (access) { case UVERBS_ACCESS_READ: - rdma_lookup_put_uobject(uobj, false); + rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_READ); break; case UVERBS_ACCESS_WRITE: - rdma_lookup_put_uobject(uobj, true); + rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_WRITE); break; case UVERBS_ACCESS_DESTROY: - rdma_lookup_put_uobject(uobj, true); + rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_WRITE); break; case UVERBS_ACCESS_NEW: if (commit) diff --git a/include/rdma/uverbs_std_types.h b/include/rdma/uverbs_std_types.h index c2f89e41cbd2d4..8c54e1439ba1de 100644 --- a/include/rdma/uverbs_std_types.h +++ b/include/rdma/uverbs_std_types.h @@ -58,11 +58,12 @@ static inline const struct uverbs_object_tree_def *uverbs_default_get_objects(vo #define uobj_get_read(_type, _id, _ufile) \ rdma_lookup_get_uobject(uobj_get_type(_type), _ufile, \ - _uobj_check_id(_id), false) + _uobj_check_id(_id), UVERBS_LOOKUP_READ) #define ufd_get_read(_type, _fdnum, _ufile) \ rdma_lookup_get_uobject(uobj_get_type(_type), _ufile, \ - (_fdnum)*typecheck(s32, _fdnum), false) + (_fdnum)*typecheck(s32, _fdnum), \ + UVERBS_LOOKUP_READ) static inline void *_uobj_get_obj_read(struct ib_uobject *uobj) { @@ -76,7 +77,7 @@ static 
inline void *_uobj_get_obj_read(struct ib_uobject *uobj) #define uobj_get_write(_type, _id, _ufile) \ rdma_lookup_get_uobject(uobj_get_type(_type), _ufile, \ - _uobj_check_id(_id), true) + _uobj_check_id(_id), UVERBS_LOOKUP_WRITE) int __uobj_perform_destroy(const struct uverbs_obj_type *type, u32 id, struct ib_uverbs_file *ufile, int success_res); @@ -92,12 +93,12 @@ struct ib_uobject *__uobj_get_destroy(const struct uverbs_obj_type *type, static inline void uobj_put_destroy(struct ib_uobject *uobj) { - rdma_lookup_put_uobject(uobj, true); + rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_WRITE); } static inline void uobj_put_read(struct ib_uobject *uobj) { - rdma_lookup_put_uobject(uobj, false); + rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_READ); } #define uobj_put_obj_read(_obj) \ @@ -105,7 +106,7 @@ static inline void uobj_put_read(struct ib_uobject *uobj) static inline void uobj_put_write(struct ib_uobject *uobj) { - rdma_lookup_put_uobject(uobj, true); + rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_WRITE); } static inline int __must_check uobj_alloc_commit(struct ib_uobject *uobj, diff --git a/include/rdma/uverbs_types.h b/include/rdma/uverbs_types.h index 875dd8c16ba3a7..0676672dbbb995 100644 --- a/include/rdma/uverbs_types.h +++ b/include/rdma/uverbs_types.h @@ -38,6 +38,11 @@ struct uverbs_obj_type; +enum rdma_lookup_mode { + UVERBS_LOOKUP_READ, + UVERBS_LOOKUP_WRITE, +}; + /* * The following sequences are valid: * Success flow: @@ -78,8 +83,8 @@ struct uverbs_obj_type_class { struct ib_uobject *(*lookup_get)(const struct uverbs_obj_type *type, struct ib_uverbs_file *ufile, s64 id, - bool exclusive); - void (*lookup_put)(struct ib_uobject *uobj, bool exclusive); + enum rdma_lookup_mode mode); + void (*lookup_put)(struct ib_uobject *uobj, enum rdma_lookup_mode mode); /* This does not consume the kref on uobj */ int __must_check (*remove_commit)(struct ib_uobject *uobj, enum rdma_remove_reason why); @@ -116,9 +121,10 @@ struct uverbs_obj_idr_type { }; struct 
ib_uobject *rdma_lookup_get_uobject(const struct uverbs_obj_type *type, - struct ib_uverbs_file *ufile, - s64 id, bool exclusive); -void rdma_lookup_put_uobject(struct ib_uobject *uobj, bool exclusive); + struct ib_uverbs_file *ufile, s64 id, + enum rdma_lookup_mode mode); +void rdma_lookup_put_uobject(struct ib_uobject *uobj, + enum rdma_lookup_mode mode); struct ib_uobject *rdma_alloc_begin_uobject(const struct uverbs_obj_type *type, struct ib_uverbs_file *ufile); void rdma_alloc_abort_uobject(struct ib_uobject *uobj);

From patchwork Thu Jul 26 03:40:14 2018
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 10545137
From: Jason Gunthorpe
To: linux-rdma@vger.kernel.org, Leon Romanovsky , "Guy Levi(SW)" , Yishai Hadas , "Ruhl, Michael J"
Cc: Jason Gunthorpe
Subject: [PATCH 05/11] IB/uverbs: Allow RDMA_REMOVE_DESTROY to work concurrently with disassociate
Date: Wed, 25 Jul 2018 21:40:14 -0600
Message-Id: <20180726034020.5583-6-jgg@ziepe.ca>
In-Reply-To: <20180726034020.5583-1-jgg@ziepe.ca>
References: <20180726034020.5583-1-jgg@ziepe.ca>

After all the recent structural changes this is now straightforward: hoist the hw_destroy_rwsem up out of rdma_explicit_destroy and wrap it around the uobject write lock as well as the destroy. This is necessary as obtaining a write lock concurrently with uverbs_destroy_ufile_hw() will cause malfunction. After this change none of the destroy callbacks require the disassociate_srcu lock to be correct. This requires introducing a new lookup mode, UVERBS_LOOKUP_DESTROY, as the IOCTL interface needs to hold an unlocked kref until all command verification is completed.
Signed-off-by: Jason Gunthorpe --- drivers/infiniband/core/rdma_core.c | 71 ++++++++++++++++++-------- drivers/infiniband/core/rdma_core.h | 2 + drivers/infiniband/core/uverbs_ioctl.c | 7 ++- include/rdma/uverbs_types.h | 7 ++- 4 files changed, 63 insertions(+), 24 deletions(-) diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c index 435dbe8ef2a28d..81d668abe18e45 100644 --- a/drivers/infiniband/core/rdma_core.c +++ b/drivers/infiniband/core/rdma_core.c @@ -127,8 +127,10 @@ static int uverbs_try_lock_object(struct ib_uobject *uobj, return __atomic_add_unless(&uobj->usecnt, 1, -1) == -1 ? -EBUSY : 0; case UVERBS_LOOKUP_WRITE: - /* lock is either WRITE or DESTROY - should be exclusive */ + /* lock is exclusive */ return atomic_cmpxchg(&uobj->usecnt, 0, -1) == 0 ? 0 : -EBUSY; + case UVERBS_LOOKUP_DESTROY: + return 0; } return 0; } @@ -144,6 +146,8 @@ static void assert_uverbs_usecnt(struct ib_uobject *uobj, case UVERBS_LOOKUP_WRITE: WARN_ON(atomic_read(&uobj->usecnt) != -1); break; + case UVERBS_LOOKUP_DESTROY: + break; } #endif } @@ -227,6 +231,35 @@ static int uverbs_destroy_uobject(struct ib_uobject *uobj, return 0; } +/* + * This calls uverbs_destroy_uobject() using the RDMA_REMOVE_DESTROY + * sequence. It should only be used from command callbacks. On success the + * caller must pair this with rdma_lookup_put_uobject(LOOKUP_WRITE). This + * version requires the caller to have already obtained an + * LOOKUP_DESTROY uobject kref. 
+ */ +int uobj_destroy(struct ib_uobject *uobj) +{ + struct ib_uverbs_file *ufile = uobj->ufile; + int ret; + + down_read(&ufile->hw_destroy_rwsem); + + ret = uverbs_try_lock_object(uobj, UVERBS_LOOKUP_WRITE); + if (ret) + goto out_unlock; + + ret = uverbs_destroy_uobject(uobj, RDMA_REMOVE_DESTROY); + if (ret) { + atomic_set(&uobj->usecnt, 0); + goto out_unlock; + } + +out_unlock: + up_read(&ufile->hw_destroy_rwsem); + return ret; +} + /* * uobj_get_destroy destroys the HW object and returns a handle to the uobj * with a NULL object pointer. The caller must pair this with @@ -238,13 +271,13 @@ struct ib_uobject *__uobj_get_destroy(const struct uverbs_obj_type *type, struct ib_uobject *uobj; int ret; - uobj = rdma_lookup_get_uobject(type, ufile, id, UVERBS_LOOKUP_WRITE); + uobj = rdma_lookup_get_uobject(type, ufile, id, UVERBS_LOOKUP_DESTROY); if (IS_ERR(uobj)) return uobj; - ret = rdma_explicit_destroy(uobj); + ret = uobj_destroy(uobj); if (ret) { - rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_WRITE); + rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_DESTROY); return ERR_PTR(ret); } @@ -265,6 +298,11 @@ int __uobj_perform_destroy(const struct uverbs_obj_type *type, u32 id, if (IS_ERR(uobj)) return PTR_ERR(uobj); + /* + * FIXME: After destroy this is not safe. We no longer hold the rwsem + * so disassociation could have completed and unloaded the module that + * backs the uobj->type pointer. + */ rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_WRITE); return success_res; } @@ -534,23 +572,6 @@ static int __must_check remove_commit_fd_uobject(struct ib_uobject *uobj, return 0; } -int rdma_explicit_destroy(struct ib_uobject *uobject) -{ - int ret; - struct ib_uverbs_file *ufile = uobject->ufile; - - /* Cleanup is running. 
Calling this should have been impossible */ - if (!down_read_trylock(&ufile->hw_destroy_rwsem)) { - WARN(true, "ib_uverbs: Cleanup is running while removing an uobject\n"); - return 0; - } - - ret = uverbs_destroy_uobject(uobject, RDMA_REMOVE_DESTROY); - - up_read(&ufile->hw_destroy_rwsem); - return ret; -} - static int alloc_commit_idr_uobject(struct ib_uobject *uobj) { struct ib_uverbs_file *ufile = uobj->ufile; @@ -686,6 +707,8 @@ void rdma_lookup_put_uobject(struct ib_uobject *uobj, case UVERBS_LOOKUP_WRITE: atomic_set(&uobj->usecnt, 0); break; + case UVERBS_LOOKUP_DESTROY: + break; } /* Pairs with the kref obtained by type->lookup_get */ @@ -911,6 +934,9 @@ uverbs_get_uobject_from_file(const struct uverbs_obj_type *type_attrs, return rdma_lookup_get_uobject(type_attrs, ufile, id, UVERBS_LOOKUP_READ); case UVERBS_ACCESS_DESTROY: + /* Actual destruction is done inside uverbs_handle_method */ + return rdma_lookup_get_uobject(type_attrs, ufile, id, + UVERBS_LOOKUP_DESTROY); case UVERBS_ACCESS_WRITE: return rdma_lookup_get_uobject(type_attrs, ufile, id, UVERBS_LOOKUP_WRITE); @@ -942,7 +968,8 @@ int uverbs_finalize_object(struct ib_uobject *uobj, rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_WRITE); break; case UVERBS_ACCESS_DESTROY: - rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_WRITE); + if (uobj) + rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_DESTROY); break; case UVERBS_ACCESS_NEW: if (commit) diff --git a/drivers/infiniband/core/rdma_core.h b/drivers/infiniband/core/rdma_core.h index a736b46d18e34c..e4d8b985c31135 100644 --- a/drivers/infiniband/core/rdma_core.h +++ b/drivers/infiniband/core/rdma_core.h @@ -52,6 +52,8 @@ const struct uverbs_method_spec *uverbs_get_method(const struct uverbs_object_sp void uverbs_destroy_ufile_hw(struct ib_uverbs_file *ufile, enum rdma_remove_reason reason); +int uobj_destroy(struct ib_uobject *uobj); + /* * uverbs_uobject_get is called in order to increase the reference count on * an uobject. 
This is useful when a handler wants to keep the uobject's memory diff --git a/drivers/infiniband/core/uverbs_ioctl.c b/drivers/infiniband/core/uverbs_ioctl.c index 703710085b5beb..204130ee1cbe59 100644 --- a/drivers/infiniband/core/uverbs_ioctl.c +++ b/drivers/infiniband/core/uverbs_ioctl.c @@ -347,13 +347,18 @@ static int uverbs_handle_method(struct ib_uverbs_attr __user *uattr_ptr, * not get to manipulate the HW objects. */ if (destroy_attr) { - ret = rdma_explicit_destroy(destroy_attr->uobject); + ret = uobj_destroy(destroy_attr->uobject); if (ret) goto cleanup; } ret = method_spec->handler(ibdev, ufile, attr_bundle); + if (destroy_attr) { + uobj_put_destroy(destroy_attr->uobject); + destroy_attr->uobject = NULL; + } + cleanup: finalize_ret = uverbs_finalize_attrs(attr_bundle, method_spec->attr_buckets, diff --git a/include/rdma/uverbs_types.h b/include/rdma/uverbs_types.h index 0676672dbbb995..f64f413cecac22 100644 --- a/include/rdma/uverbs_types.h +++ b/include/rdma/uverbs_types.h @@ -41,6 +41,12 @@ struct uverbs_obj_type; enum rdma_lookup_mode { UVERBS_LOOKUP_READ, UVERBS_LOOKUP_WRITE, + /* + * Destroy is like LOOKUP_WRITE, except that the uobject is not + * locked. uobj_destroy is used to convert a LOOKUP_DESTROY lock into + * a LOOKUP_WRITE lock. 
+ */ + UVERBS_LOOKUP_DESTROY, }; /* @@ -129,7 +135,6 @@ struct ib_uobject *rdma_alloc_begin_uobject(const struct uverbs_obj_type *type, struct ib_uverbs_file *ufile); void rdma_alloc_abort_uobject(struct ib_uobject *uobj); int __must_check rdma_alloc_commit_uobject(struct ib_uobject *uobj); -int rdma_explicit_destroy(struct ib_uobject *uobject); struct uverbs_obj_fd_type { /*

From patchwork Thu Jul 26 03:40:15 2018
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 10545139
From: Jason Gunthorpe
To: linux-rdma@vger.kernel.org, Leon Romanovsky, "Guy Levi(SW)", Yishai Hadas, "Ruhl, Michael J"
Cc: Jason Gunthorpe
Subject: [PATCH 06/11] IB/uverbs: Allow uobject allocation to work concurrently with disassociate
Date: Wed, 25 Jul 2018 21:40:15 -0600
Message-Id: <20180726034020.5583-7-jgg@ziepe.ca>
In-Reply-To: <20180726034020.5583-1-jgg@ziepe.ca>
References: <20180726034020.5583-1-jgg@ziepe.ca>

After all the recent structural changes this is now straightforward: hold the hw_destroy_rwsem across the entire uobject creation. We already take this semaphore on the success path, so holding it a bit longer does not change the performance. After this change none of the create callbacks require the disassociate_srcu lock to be correct.

Signed-off-by: Jason Gunthorpe --- drivers/infiniband/core/rdma_core.c | 37 ++++++++++++++++++++--------- 1 file changed, 26 insertions(+), 11 deletions(-) diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c index 81d668abe18e45..95a8110f186fdf 100644 --- a/drivers/infiniband/core/rdma_core.c +++ b/drivers/infiniband/core/rdma_core.c @@ -153,9 +153,8 @@ static void assert_uverbs_usecnt(struct ib_uobject *uobj, } /* - * This must be called with the hw_destroy_rwsem locked (except for - * RDMA_REMOVE_ABORT) for read or write, also The uobject itself must be - * locked for write.
+ * This must be called with the hw_destroy_rwsem locked for read or write, + * also the uobject itself must be locked for write. * * Upon return the HW object is guaranteed to be destroyed. * @@ -177,6 +176,7 @@ static int uverbs_destroy_uobject(struct ib_uobject *uobj, unsigned long flags; int ret; + lockdep_assert_held(&ufile->hw_destroy_rwsem); assert_uverbs_usecnt(uobj, UVERBS_LOOKUP_WRITE); if (uobj->object) { @@ -515,7 +515,22 @@ static struct ib_uobject *alloc_begin_fd_uobject(const struct uverbs_obj_type *t struct ib_uobject *rdma_alloc_begin_uobject(const struct uverbs_obj_type *type, struct ib_uverbs_file *ufile) { - return type->type_class->alloc_begin(type, ufile); + struct ib_uobject *ret; + + /* + * The hw_destroy_rwsem is held across the entire object creation and + * released during rdma_alloc_commit_uobject or + * rdma_alloc_abort_uobject + */ + if (!down_read_trylock(&ufile->hw_destroy_rwsem)) + return ERR_PTR(-EIO); + + ret = type->type_class->alloc_begin(type, ufile); + if (IS_ERR(ret)) { + up_read(&ufile->hw_destroy_rwsem); + return ret; + } + return ret; } static void alloc_abort_idr_uobject(struct ib_uobject *uobj) @@ -637,17 +652,11 @@ int __must_check rdma_alloc_commit_uobject(struct ib_uobject *uobj) struct ib_uverbs_file *ufile = uobj->ufile; int ret; - /* Cleanup is running. 
Calling this should have been impossible */ - if (!down_read_trylock(&ufile->hw_destroy_rwsem)) { - WARN(true, "ib_uverbs: Cleanup is running while allocating an uobject\n"); - uverbs_destroy_uobject(uobj, RDMA_REMOVE_ABORT); - return -EINVAL; - } - /* alloc_commit consumes the uobj kref */ ret = uobj->type->type_class->alloc_commit(uobj); if (ret) { uverbs_destroy_uobject(uobj, RDMA_REMOVE_ABORT); + up_read(&ufile->hw_destroy_rwsem); return ret; } @@ -660,6 +669,7 @@ int __must_check rdma_alloc_commit_uobject(struct ib_uobject *uobj) /* matches atomic_set(-1) in alloc_uobj */ atomic_set(&uobj->usecnt, 0); + /* Matches the down_read in rdma_alloc_begin_uobject */ up_read(&ufile->hw_destroy_rwsem); return 0; @@ -671,8 +681,13 @@ int __must_check rdma_alloc_commit_uobject(struct ib_uobject *uobj) */ void rdma_alloc_abort_uobject(struct ib_uobject *uobj) { + struct ib_uverbs_file *ufile = uobj->ufile; + uobj->object = NULL; uverbs_destroy_uobject(uobj, RDMA_REMOVE_ABORT); + + /* Matches the down_read in rdma_alloc_begin_uobject */ + up_read(&ufile->hw_destroy_rwsem); } static void lookup_put_idr_uobject(struct ib_uobject *uobj, From patchwork Thu Jul 26 03:40:16 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jason Gunthorpe X-Patchwork-Id: 10545143 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7BF5BA639 for ; Thu, 26 Jul 2018 03:40:30 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6A9DA2ACBE for ; Thu, 26 Jul 2018 03:40:30 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 5EEBE2ACCB; Thu, 26 Jul 2018 03:40:30 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.8 
From: Jason Gunthorpe
To: linux-rdma@vger.kernel.org, Leon Romanovsky, "Guy Levi(SW)", Yishai Hadas, "Ruhl, Michael J"
Cc: Jason Gunthorpe
Subject: [PATCH 07/11] IB/uverbs: Lower the test for ongoing disassociation
Date: Wed, 25 Jul 2018 21:40:16 -0600
Message-Id: <20180726034020.5583-8-jgg@ziepe.ca>
In-Reply-To: <20180726034020.5583-1-jgg@ziepe.ca>
References: <20180726034020.5583-1-jgg@ziepe.ca>

Commands that read or write objects can test for an ongoing disassociation during their initial call to rdma_lookup_get_uobject. This directly prevents all of these commands from conflicting with an ongoing disassociation.

Signed-off-by: Jason Gunthorpe --- drivers/infiniband/core/rdma_core.c | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c index 95a8110f186fdf..d4de1fed98f2fb 100644 --- a/drivers/infiniband/core/rdma_core.c +++ b/drivers/infiniband/core/rdma_core.c @@ -449,6 +449,17 @@ struct ib_uobject *rdma_lookup_get_uobject(const struct uverbs_obj_type *type, goto free; } + /* + * If we have been disassociated block every command except for + * DESTROY based commands.
+ */ + if (mode != UVERBS_LOOKUP_DESTROY && + !srcu_dereference(ufile->device->ib_dev, + &ufile->device->disassociate_srcu)) { + ret = -EIO; + goto free; + } + ret = uverbs_try_lock_object(uobj, mode); if (ret) goto free; From patchwork Thu Jul 26 03:40:17 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jason Gunthorpe X-Patchwork-Id: 10545147 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 48685180E for ; Thu, 26 Jul 2018 03:40:32 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 32EE42ACAB for ; Thu, 26 Jul 2018 03:40:32 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 26FEB2ACCE; Thu, 26 Jul 2018 03:40:32 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.8 required=2.0 tests=BAYES_00,DKIM_SIGNED, MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI,T_DKIM_INVALID autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 75D1F2ACAB for ; Thu, 26 Jul 2018 03:40:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1725941AbeGZEzO (ORCPT ); Thu, 26 Jul 2018 00:55:14 -0400 Received: from mail-it0-f65.google.com ([209.85.214.65]:35064 "EHLO mail-it0-f65.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726096AbeGZEzN (ORCPT ); Thu, 26 Jul 2018 00:55:13 -0400 Received: by mail-it0-f65.google.com with SMTP id q20-v6so868215ith.0 for ; Wed, 25 Jul 2018 20:40:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ziepe.ca; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; 
From: Jason Gunthorpe
To: linux-rdma@vger.kernel.org, Leon Romanovsky, "Guy Levi(SW)", Yishai Hadas, "Ruhl, Michael J"
Cc: Jason Gunthorpe
Subject: [PATCH 08/11] IB/uverbs: Do not pass struct ib_device to the write based methods
Date: Wed, 25 Jul 2018 21:40:17 -0600
Message-Id: <20180726034020.5583-9-jgg@ziepe.ca>
In-Reply-To: <20180726034020.5583-1-jgg@ziepe.ca>
References: <20180726034020.5583-1-jgg@ziepe.ca>

This is a step toward getting rid of the global check for disassociation. In this model, the ib_dev is not proven to be valid by the core code and cannot be provided to the method. Instead, every method decides whether it is able to run after disassociation and obtains the ib_dev using one of three approaches:

- Call srcu_dereference on the udevice's ib_dev. As before, this means the method cannot be called after disassociation begins (eg alloc ucontext).
- Retrieve the ib_dev from the ucontext, via ib_uverbs_get_ucontext().
- Retrieve the ib_dev from uobject->object after checking under SRCU whether disassociation has started (eg uobj_get).

Largely, the code is already prepared for this; the main work is to provide an ib_dev after calling uobj_alloc(). The few other places simply use ib_uverbs_get_ucontext() to get the ib_dev. This flexibility will let the next patches allow destroy to operate after disassociation.
Signed-off-by: Jason Gunthorpe --- drivers/infiniband/core/uverbs.h | 2 - drivers/infiniband/core/uverbs_cmd.c | 155 +++++++++++++------------- drivers/infiniband/core/uverbs_main.c | 6 +- include/rdma/uverbs_std_types.h | 12 +- 4 files changed, 89 insertions(+), 86 deletions(-) diff --git a/drivers/infiniband/core/uverbs.h b/drivers/infiniband/core/uverbs.h index cf02b433000c03..5e21cc1f900b9a 100644 --- a/drivers/infiniband/core/uverbs.h +++ b/drivers/infiniband/core/uverbs.h @@ -299,7 +299,6 @@ extern const struct uverbs_object_def UVERBS_OBJECT(UVERBS_OBJECT_COUNTERS); #define IB_UVERBS_DECLARE_CMD(name) \ ssize_t ib_uverbs_##name(struct ib_uverbs_file *file, \ - struct ib_device *ib_dev, \ const char __user *buf, int in_len, \ int out_len) @@ -341,7 +340,6 @@ IB_UVERBS_DECLARE_CMD(close_xrcd); #define IB_UVERBS_DECLARE_EX_CMD(name) \ int ib_uverbs_ex_##name(struct ib_uverbs_file *file, \ - struct ib_device *ib_dev, \ struct ib_udata *ucore, \ struct ib_udata *uhw) diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c index 7ea179b59e4d8d..4552558f020a2b 100644 --- a/drivers/infiniband/core/uverbs_cmd.c +++ b/drivers/infiniband/core/uverbs_cmd.c @@ -66,7 +66,6 @@ _ib_uverbs_lookup_comp_file(s32 fd, struct ib_uverbs_file *ufile) _ib_uverbs_lookup_comp_file((_fd)*typecheck(s32, _fd), _ufile) ssize_t ib_uverbs_get_context(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -76,6 +75,7 @@ ssize_t ib_uverbs_get_context(struct ib_uverbs_file *file, struct ib_ucontext *ucontext; struct file *filp; struct ib_rdmacg_object cg_obj; + struct ib_device *ib_dev; int ret; if (out_len < sizeof resp) @@ -85,6 +85,12 @@ ssize_t ib_uverbs_get_context(struct ib_uverbs_file *file, return -EFAULT; mutex_lock(&file->ucontext_lock); + ib_dev = srcu_dereference(file->device->ib_dev, + &file->device->disassociate_srcu); + if (!ib_dev) { + ret = -EIO; + goto err; + } if (file->ucontext) { ret = 
-EINVAL; @@ -177,11 +183,12 @@ ssize_t ib_uverbs_get_context(struct ib_uverbs_file *file, return ret; } -static void copy_query_dev_fields(struct ib_uverbs_file *file, - struct ib_device *ib_dev, +static void copy_query_dev_fields(struct ib_ucontext *ucontext, struct ib_uverbs_query_device_resp *resp, struct ib_device_attr *attr) { + struct ib_device *ib_dev = ucontext->device; + resp->fw_ver = attr->fw_ver; resp->node_guid = ib_dev->node_guid; resp->sys_image_guid = attr->sys_image_guid; @@ -225,12 +232,16 @@ static void copy_query_dev_fields(struct ib_uverbs_file *file, } ssize_t ib_uverbs_query_device(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { struct ib_uverbs_query_device cmd; struct ib_uverbs_query_device_resp resp; + struct ib_ucontext *ucontext; + + ucontext = ib_uverbs_get_ucontext(file); + if (IS_ERR(ucontext)) + return PTR_ERR(ucontext); if (out_len < sizeof resp) return -ENOSPC; @@ -239,7 +250,7 @@ ssize_t ib_uverbs_query_device(struct ib_uverbs_file *file, return -EFAULT; memset(&resp, 0, sizeof resp); - copy_query_dev_fields(file, ib_dev, &resp, &ib_dev->attrs); + copy_query_dev_fields(ucontext, &resp, &ucontext->device->attrs); if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof resp)) return -EFAULT; @@ -269,7 +280,6 @@ static u32 make_port_cap_flags(const struct ib_port_attr *attr) } ssize_t ib_uverbs_query_port(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -277,6 +287,13 @@ ssize_t ib_uverbs_query_port(struct ib_uverbs_file *file, struct ib_uverbs_query_port_resp resp; struct ib_port_attr attr; int ret; + struct ib_ucontext *ucontext; + struct ib_device *ib_dev; + + ucontext = ib_uverbs_get_ucontext(file); + if (IS_ERR(ucontext)) + return PTR_ERR(ucontext); + ib_dev = ucontext->device; if (out_len < sizeof resp) return -ENOSPC; @@ -328,7 +345,6 @@ ssize_t ib_uverbs_query_port(struct ib_uverbs_file *file, } 
ssize_t ib_uverbs_alloc_pd(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -338,6 +354,7 @@ ssize_t ib_uverbs_alloc_pd(struct ib_uverbs_file *file, struct ib_uobject *uobj; struct ib_pd *pd; int ret; + struct ib_device *ib_dev; if (out_len < sizeof resp) return -ENOSPC; @@ -350,7 +367,7 @@ ssize_t ib_uverbs_alloc_pd(struct ib_uverbs_file *file, in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), out_len - sizeof(resp)); - uobj = uobj_alloc(UVERBS_OBJECT_PD, file); + uobj = uobj_alloc(UVERBS_OBJECT_PD, file, &ib_dev); if (IS_ERR(uobj)) return PTR_ERR(uobj); @@ -387,7 +404,6 @@ ssize_t ib_uverbs_alloc_pd(struct ib_uverbs_file *file, } ssize_t ib_uverbs_dealloc_pd(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -486,7 +502,6 @@ static void xrcd_table_delete(struct ib_uverbs_device *dev, } ssize_t ib_uverbs_open_xrcd(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -499,6 +514,7 @@ ssize_t ib_uverbs_open_xrcd(struct ib_uverbs_file *file, struct inode *inode = NULL; int ret = 0; int new_xrcd = 0; + struct ib_device *ib_dev; if (out_len < sizeof resp) return -ENOSPC; @@ -535,7 +551,8 @@ ssize_t ib_uverbs_open_xrcd(struct ib_uverbs_file *file, } } - obj = (struct ib_uxrcd_object *)uobj_alloc(UVERBS_OBJECT_XRCD, file); + obj = (struct ib_uxrcd_object *)uobj_alloc(UVERBS_OBJECT_XRCD, file, + &ib_dev); if (IS_ERR(obj)) { ret = PTR_ERR(obj); goto err_tree_mutex_unlock; @@ -606,7 +623,6 @@ ssize_t ib_uverbs_open_xrcd(struct ib_uverbs_file *file, } ssize_t ib_uverbs_close_xrcd(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -645,7 +661,6 @@ int ib_uverbs_dealloc_xrcd(struct ib_uobject *uobject, } ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { 
@@ -656,6 +671,7 @@ ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file, struct ib_pd *pd; struct ib_mr *mr; int ret; + struct ib_device *ib_dev; if (out_len < sizeof resp) return -ENOSPC; @@ -675,7 +691,7 @@ ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file, if (ret) return ret; - uobj = uobj_alloc(UVERBS_OBJECT_MR, file); + uobj = uobj_alloc(UVERBS_OBJECT_MR, file, &ib_dev); if (IS_ERR(uobj)) return PTR_ERR(uobj); @@ -737,7 +753,6 @@ ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file, } ssize_t ib_uverbs_rereg_mr(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -829,7 +844,6 @@ ssize_t ib_uverbs_rereg_mr(struct ib_uverbs_file *file, } ssize_t ib_uverbs_dereg_mr(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -843,7 +857,6 @@ ssize_t ib_uverbs_dereg_mr(struct ib_uverbs_file *file, } ssize_t ib_uverbs_alloc_mw(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -854,6 +867,7 @@ ssize_t ib_uverbs_alloc_mw(struct ib_uverbs_file *file, struct ib_mw *mw; struct ib_udata udata; int ret; + struct ib_device *ib_dev; if (out_len < sizeof(resp)) return -ENOSPC; @@ -861,7 +875,7 @@ ssize_t ib_uverbs_alloc_mw(struct ib_uverbs_file *file, if (copy_from_user(&cmd, buf, sizeof(cmd))) return -EFAULT; - uobj = uobj_alloc(UVERBS_OBJECT_MW, file); + uobj = uobj_alloc(UVERBS_OBJECT_MW, file, &ib_dev); if (IS_ERR(uobj)) return PTR_ERR(uobj); @@ -911,7 +925,6 @@ ssize_t ib_uverbs_alloc_mw(struct ib_uverbs_file *file, } ssize_t ib_uverbs_dealloc_mw(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -925,7 +938,6 @@ ssize_t ib_uverbs_dealloc_mw(struct ib_uverbs_file *file, } ssize_t ib_uverbs_create_comp_channel(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -933,6 +945,7 @@ 
ssize_t ib_uverbs_create_comp_channel(struct ib_uverbs_file *file, struct ib_uverbs_create_comp_channel_resp resp; struct ib_uobject *uobj; struct ib_uverbs_completion_event_file *ev_file; + struct ib_device *ib_dev; if (out_len < sizeof resp) return -ENOSPC; @@ -940,7 +953,7 @@ ssize_t ib_uverbs_create_comp_channel(struct ib_uverbs_file *file, if (copy_from_user(&cmd, buf, sizeof cmd)) return -EFAULT; - uobj = uobj_alloc(UVERBS_OBJECT_COMP_CHANNEL, file); + uobj = uobj_alloc(UVERBS_OBJECT_COMP_CHANNEL, file, &ib_dev); if (IS_ERR(uobj)) return PTR_ERR(uobj); @@ -959,7 +972,6 @@ ssize_t ib_uverbs_create_comp_channel(struct ib_uverbs_file *file, } static struct ib_ucq_object *create_cq(struct ib_uverbs_file *file, - struct ib_device *ib_dev, struct ib_udata *ucore, struct ib_udata *uhw, struct ib_uverbs_ex_create_cq *cmd, @@ -977,17 +989,21 @@ static struct ib_ucq_object *create_cq(struct ib_uverbs_file *file, int ret; struct ib_uverbs_ex_create_cq_resp resp; struct ib_cq_init_attr attr = {}; - - if (!ib_dev->create_cq) - return ERR_PTR(-EOPNOTSUPP); + struct ib_device *ib_dev; if (cmd->comp_vector >= file->device->num_comp_vectors) return ERR_PTR(-EINVAL); - obj = (struct ib_ucq_object *)uobj_alloc(UVERBS_OBJECT_CQ, file); + obj = (struct ib_ucq_object *)uobj_alloc(UVERBS_OBJECT_CQ, file, + &ib_dev); if (IS_ERR(obj)) return obj; + if (!ib_dev->create_cq) { + ret = -EOPNOTSUPP; + goto err; + } + if (cmd->comp_channel >= 0) { ev_file = ib_uverbs_lookup_comp_file(cmd->comp_channel, file); if (IS_ERR(ev_file)) { @@ -1066,7 +1082,6 @@ static int ib_uverbs_create_cq_cb(struct ib_uverbs_file *file, } ssize_t ib_uverbs_create_cq(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -1097,7 +1112,7 @@ ssize_t ib_uverbs_create_cq(struct ib_uverbs_file *file, cmd_ex.comp_vector = cmd.comp_vector; cmd_ex.comp_channel = cmd.comp_channel; - obj = create_cq(file, ib_dev, &ucore, &uhw, &cmd_ex, + obj = create_cq(file, &ucore, 
&uhw, &cmd_ex, offsetof(typeof(cmd_ex), comp_channel) + sizeof(cmd.comp_channel), ib_uverbs_create_cq_cb, NULL); @@ -1120,7 +1135,6 @@ static int ib_uverbs_ex_create_cq_cb(struct ib_uverbs_file *file, } int ib_uverbs_ex_create_cq(struct ib_uverbs_file *file, - struct ib_device *ib_dev, struct ib_udata *ucore, struct ib_udata *uhw) { @@ -1146,7 +1160,7 @@ int ib_uverbs_ex_create_cq(struct ib_uverbs_file *file, sizeof(resp.response_length))) return -ENOSPC; - obj = create_cq(file, ib_dev, ucore, uhw, &cmd, + obj = create_cq(file, ucore, uhw, &cmd, min(ucore->inlen, sizeof(cmd)), ib_uverbs_ex_create_cq_cb, NULL); @@ -1154,7 +1168,6 @@ int ib_uverbs_ex_create_cq(struct ib_uverbs_file *file, } ssize_t ib_uverbs_resize_cq(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -1222,7 +1235,6 @@ static int copy_wc_to_user(struct ib_device *ib_dev, void __user *dest, } ssize_t ib_uverbs_poll_cq(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -1253,7 +1265,7 @@ ssize_t ib_uverbs_poll_cq(struct ib_uverbs_file *file, if (!ret) break; - ret = copy_wc_to_user(ib_dev, data_ptr, &wc); + ret = copy_wc_to_user(cq->device, data_ptr, &wc); if (ret) goto out_put; @@ -1274,7 +1286,6 @@ ssize_t ib_uverbs_poll_cq(struct ib_uverbs_file *file, } ssize_t ib_uverbs_req_notify_cq(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -1297,7 +1308,6 @@ ssize_t ib_uverbs_req_notify_cq(struct ib_uverbs_file *file, } ssize_t ib_uverbs_destroy_cq(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -1350,11 +1360,13 @@ static int create_qp(struct ib_uverbs_file *file, int ret; struct ib_rwq_ind_table *ind_tbl = NULL; bool has_sq = true; + struct ib_device *ib_dev; if (cmd->qp_type == IB_QPT_RAW_PACKET && !capable(CAP_NET_RAW)) return -EPERM; - obj = (struct 
ib_uqp_object *)uobj_alloc(UVERBS_OBJECT_QP, file); + obj = (struct ib_uqp_object *)uobj_alloc(UVERBS_OBJECT_QP, file, + &ib_dev); if (IS_ERR(obj)) return PTR_ERR(obj); obj->uxrcd = NULL; @@ -1611,7 +1623,6 @@ static int ib_uverbs_create_qp_cb(struct ib_uverbs_file *file, } ssize_t ib_uverbs_create_qp(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -1672,7 +1683,6 @@ static int ib_uverbs_ex_create_qp_cb(struct ib_uverbs_file *file, } int ib_uverbs_ex_create_qp(struct ib_uverbs_file *file, - struct ib_device *ib_dev, struct ib_udata *ucore, struct ib_udata *uhw) { @@ -1709,7 +1719,6 @@ int ib_uverbs_ex_create_qp(struct ib_uverbs_file *file, } ssize_t ib_uverbs_open_qp(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { struct ib_uverbs_open_qp cmd; @@ -1721,6 +1730,7 @@ ssize_t ib_uverbs_open_qp(struct ib_uverbs_file *file, struct ib_qp *qp; struct ib_qp_open_attr attr; int ret; + struct ib_device *ib_dev; if (out_len < sizeof resp) return -ENOSPC; @@ -1733,7 +1743,8 @@ ssize_t ib_uverbs_open_qp(struct ib_uverbs_file *file, in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), out_len - sizeof(resp)); - obj = (struct ib_uqp_object *)uobj_alloc(UVERBS_OBJECT_QP, file); + obj = (struct ib_uqp_object *)uobj_alloc(UVERBS_OBJECT_QP, file, + &ib_dev); if (IS_ERR(obj)) return PTR_ERR(obj); @@ -1815,7 +1826,6 @@ static void copy_ah_attr_to_uverbs(struct ib_uverbs_qp_dest *uverb_attr, } ssize_t ib_uverbs_query_qp(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -2018,7 +2028,6 @@ static int modify_qp(struct ib_uverbs_file *file, } ssize_t ib_uverbs_modify_qp(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -2045,7 +2054,6 @@ ssize_t ib_uverbs_modify_qp(struct ib_uverbs_file *file, } int ib_uverbs_ex_modify_qp(struct ib_uverbs_file 
*file, - struct ib_device *ib_dev, struct ib_udata *ucore, struct ib_udata *uhw) { @@ -2081,7 +2089,6 @@ int ib_uverbs_ex_modify_qp(struct ib_uverbs_file *file, } ssize_t ib_uverbs_destroy_qp(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -2120,7 +2127,6 @@ static void *alloc_wr(size_t wr_size, __u32 num_sge) } ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -2400,7 +2406,6 @@ static struct ib_recv_wr *ib_uverbs_unmarshall_recv(const char __user *buf, } ssize_t ib_uverbs_post_recv(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -2449,7 +2454,6 @@ ssize_t ib_uverbs_post_recv(struct ib_uverbs_file *file, } ssize_t ib_uverbs_post_srq_recv(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -2498,7 +2502,6 @@ ssize_t ib_uverbs_post_srq_recv(struct ib_uverbs_file *file, } ssize_t ib_uverbs_create_ah(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -2510,6 +2513,7 @@ ssize_t ib_uverbs_create_ah(struct ib_uverbs_file *file, struct rdma_ah_attr attr = {}; int ret; struct ib_udata udata; + struct ib_device *ib_dev; if (out_len < sizeof resp) return -ENOSPC; @@ -2517,18 +2521,20 @@ ssize_t ib_uverbs_create_ah(struct ib_uverbs_file *file, if (copy_from_user(&cmd, buf, sizeof cmd)) return -EFAULT; - if (!rdma_is_port_valid(ib_dev, cmd.attr.port_num)) - return -EINVAL; - ib_uverbs_init_udata(&udata, buf + sizeof(cmd), u64_to_user_ptr(cmd.response) + sizeof(resp), in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), out_len - sizeof(resp)); - uobj = uobj_alloc(UVERBS_OBJECT_AH, file); + uobj = uobj_alloc(UVERBS_OBJECT_AH, file, &ib_dev); if (IS_ERR(uobj)) return PTR_ERR(uobj); + if (!rdma_is_port_valid(ib_dev, cmd.attr.port_num)) { + ret = 
-EINVAL; + goto err; + } + pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, file); if (!pd) { ret = -EINVAL; @@ -2585,7 +2591,6 @@ ssize_t ib_uverbs_create_ah(struct ib_uverbs_file *file, } ssize_t ib_uverbs_destroy_ah(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { struct ib_uverbs_destroy_ah cmd; @@ -2598,7 +2603,6 @@ ssize_t ib_uverbs_destroy_ah(struct ib_uverbs_file *file, } ssize_t ib_uverbs_attach_mcast(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -2648,7 +2652,6 @@ ssize_t ib_uverbs_attach_mcast(struct ib_uverbs_file *file, } ssize_t ib_uverbs_detach_mcast(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -3017,7 +3020,6 @@ static int kern_spec_to_ib_spec(struct ib_uverbs_file *ufile, } int ib_uverbs_ex_create_wq(struct ib_uverbs_file *file, - struct ib_device *ib_dev, struct ib_udata *ucore, struct ib_udata *uhw) { @@ -3031,6 +3033,7 @@ int ib_uverbs_ex_create_wq(struct ib_uverbs_file *file, struct ib_wq_init_attr wq_init_attr = {}; size_t required_cmd_sz; size_t required_resp_len; + struct ib_device *ib_dev; required_cmd_sz = offsetof(typeof(cmd), max_sge) + sizeof(cmd.max_sge); required_resp_len = offsetof(typeof(resp), wqn) + sizeof(resp.wqn); @@ -3053,7 +3056,8 @@ int ib_uverbs_ex_create_wq(struct ib_uverbs_file *file, if (cmd.comp_mask) return -EOPNOTSUPP; - obj = (struct ib_uwq_object *)uobj_alloc(UVERBS_OBJECT_WQ, file); + obj = (struct ib_uwq_object *)uobj_alloc(UVERBS_OBJECT_WQ, file, + &ib_dev); if (IS_ERR(obj)) return PTR_ERR(obj); @@ -3132,7 +3136,6 @@ int ib_uverbs_ex_create_wq(struct ib_uverbs_file *file, } int ib_uverbs_ex_destroy_wq(struct ib_uverbs_file *file, - struct ib_device *ib_dev, struct ib_udata *ucore, struct ib_udata *uhw) { @@ -3179,7 +3182,6 @@ int ib_uverbs_ex_destroy_wq(struct ib_uverbs_file *file, } int 
ib_uverbs_ex_modify_wq(struct ib_uverbs_file *file, - struct ib_device *ib_dev, struct ib_udata *ucore, struct ib_udata *uhw) { @@ -3229,7 +3231,6 @@ int ib_uverbs_ex_modify_wq(struct ib_uverbs_file *file, } int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, - struct ib_device *ib_dev, struct ib_udata *ucore, struct ib_udata *uhw) { @@ -3247,6 +3248,7 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, u32 expected_in_size; size_t required_cmd_sz_header; size_t required_resp_len; + struct ib_device *ib_dev; required_cmd_sz_header = offsetof(typeof(cmd), log_ind_tbl_size) + sizeof(cmd.log_ind_tbl_size); required_resp_len = offsetof(typeof(resp), ind_tbl_num) + sizeof(resp.ind_tbl_num); @@ -3312,7 +3314,7 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, wqs[num_read_wqs] = wq; } - uobj = uobj_alloc(UVERBS_OBJECT_RWQ_IND_TBL, file); + uobj = uobj_alloc(UVERBS_OBJECT_RWQ_IND_TBL, file, &ib_dev); if (IS_ERR(uobj)) { err = PTR_ERR(uobj); goto put_wqs; @@ -3372,7 +3374,6 @@ int ib_uverbs_ex_create_rwq_ind_table(struct ib_uverbs_file *file, } int ib_uverbs_ex_destroy_rwq_ind_table(struct ib_uverbs_file *file, - struct ib_device *ib_dev, struct ib_udata *ucore, struct ib_udata *uhw) { @@ -3402,7 +3403,6 @@ int ib_uverbs_ex_destroy_rwq_ind_table(struct ib_uverbs_file *file, } int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, - struct ib_device *ib_dev, struct ib_udata *ucore, struct ib_udata *uhw) { @@ -3419,6 +3419,7 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, void *kern_spec; void *ib_spec; int i; + struct ib_device *ib_dev; if (ucore->inlen < sizeof(cmd)) return -EINVAL; @@ -3474,7 +3475,7 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, kern_flow_attr = &cmd.flow_attr; } - uobj = uobj_alloc(UVERBS_OBJECT_FLOW, file); + uobj = uobj_alloc(UVERBS_OBJECT_FLOW, file, &ib_dev); if (IS_ERR(uobj)) { err = PTR_ERR(uobj); goto err_free_attr; @@ -3579,7 +3580,6 @@ int 
ib_uverbs_ex_create_flow(struct ib_uverbs_file *file, } int ib_uverbs_ex_destroy_flow(struct ib_uverbs_file *file, - struct ib_device *ib_dev, struct ib_udata *ucore, struct ib_udata *uhw) { @@ -3601,7 +3601,6 @@ int ib_uverbs_ex_destroy_flow(struct ib_uverbs_file *file, } static int __uverbs_create_xsrq(struct ib_uverbs_file *file, - struct ib_device *ib_dev, struct ib_uverbs_create_xsrq *cmd, struct ib_udata *udata) { @@ -3612,8 +3611,10 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file, struct ib_uobject *uninitialized_var(xrcd_uobj); struct ib_srq_init_attr attr; int ret; + struct ib_device *ib_dev; - obj = (struct ib_usrq_object *)uobj_alloc(UVERBS_OBJECT_SRQ, file); + obj = (struct ib_usrq_object *)uobj_alloc(UVERBS_OBJECT_SRQ, file, + &ib_dev); if (IS_ERR(obj)) return PTR_ERR(obj); @@ -3736,7 +3737,6 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file, } ssize_t ib_uverbs_create_srq(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -3766,7 +3766,7 @@ ssize_t ib_uverbs_create_srq(struct ib_uverbs_file *file, in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), out_len - sizeof(resp)); - ret = __uverbs_create_xsrq(file, ib_dev, &xcmd, &udata); + ret = __uverbs_create_xsrq(file, &xcmd, &udata); if (ret) return ret; @@ -3774,7 +3774,6 @@ ssize_t ib_uverbs_create_srq(struct ib_uverbs_file *file, } ssize_t ib_uverbs_create_xsrq(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { struct ib_uverbs_create_xsrq cmd; @@ -3793,7 +3792,7 @@ ssize_t ib_uverbs_create_xsrq(struct ib_uverbs_file *file, in_len - sizeof(cmd) - sizeof(struct ib_uverbs_cmd_hdr), out_len - sizeof(resp)); - ret = __uverbs_create_xsrq(file, ib_dev, &cmd, &udata); + ret = __uverbs_create_xsrq(file, &cmd, &udata); if (ret) return ret; @@ -3801,7 +3800,6 @@ ssize_t ib_uverbs_create_xsrq(struct ib_uverbs_file *file, } ssize_t ib_uverbs_modify_srq(struct 
ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -3832,7 +3830,6 @@ ssize_t ib_uverbs_modify_srq(struct ib_uverbs_file *file, } ssize_t ib_uverbs_query_srq(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -3872,7 +3869,6 @@ ssize_t ib_uverbs_query_srq(struct ib_uverbs_file *file, } ssize_t ib_uverbs_destroy_srq(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) { @@ -3901,15 +3897,21 @@ ssize_t ib_uverbs_destroy_srq(struct ib_uverbs_file *file, } int ib_uverbs_ex_query_device(struct ib_uverbs_file *file, - struct ib_device *ib_dev, struct ib_udata *ucore, struct ib_udata *uhw) { struct ib_uverbs_ex_query_device_resp resp = { {0} }; struct ib_uverbs_ex_query_device cmd; struct ib_device_attr attr = {0}; + struct ib_ucontext *ucontext; + struct ib_device *ib_dev; int err; + ucontext = ib_uverbs_get_ucontext(file); + if (IS_ERR(ucontext)) + return PTR_ERR(ucontext); + ib_dev = ucontext->device; + if (!ib_dev->query_device) return -EOPNOTSUPP; @@ -3935,7 +3937,7 @@ int ib_uverbs_ex_query_device(struct ib_uverbs_file *file, if (err) return err; - copy_query_dev_fields(file, ib_dev, &resp.base, &attr); + copy_query_dev_fields(ucontext, &resp.base, &attr); if (ucore->outlen < resp.response_length + sizeof(resp.odp_caps)) goto end; @@ -4022,7 +4024,6 @@ int ib_uverbs_ex_query_device(struct ib_uverbs_file *file, } int ib_uverbs_ex_modify_cq(struct ib_uverbs_file *file, - struct ib_device *ib_dev, struct ib_udata *ucore, struct ib_udata *uhw) { diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c index 34df04ed142be7..a1e427b2c2a139 100644 --- a/drivers/infiniband/core/uverbs_main.c +++ b/drivers/infiniband/core/uverbs_main.c @@ -75,7 +75,6 @@ static struct class *uverbs_class; static DECLARE_BITMAP(dev_map, IB_UVERBS_MAX_DEVICES); static ssize_t 
(*uverbs_cmd_table[])(struct ib_uverbs_file *file, - struct ib_device *ib_dev, const char __user *buf, int in_len, int out_len) = { [IB_USER_VERBS_CMD_GET_CONTEXT] = ib_uverbs_get_context, @@ -116,7 +115,6 @@ static ssize_t (*uverbs_cmd_table[])(struct ib_uverbs_file *file, }; static int (*uverbs_ex_cmd_table[])(struct ib_uverbs_file *file, - struct ib_device *ib_dev, struct ib_udata *ucore, struct ib_udata *uhw) = { [IB_USER_VERBS_EX_CMD_CREATE_FLOW] = ib_uverbs_ex_create_flow, @@ -774,7 +772,7 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf, buf += sizeof(hdr); if (!extended) { - ret = uverbs_cmd_table[command](file, ib_dev, buf, + ret = uverbs_cmd_table[command](file, buf, hdr.in_words * 4, hdr.out_words * 4); } else { @@ -793,7 +791,7 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf, ex_hdr.provider_in_words * 8, ex_hdr.provider_out_words * 8); - ret = uverbs_ex_cmd_table[command](file, ib_dev, &ucore, &uhw); + ret = uverbs_ex_cmd_table[command](file, &ucore, &uhw); ret = (ret) ? 
: count; } diff --git a/include/rdma/uverbs_std_types.h b/include/rdma/uverbs_std_types.h index 8c54e1439ba1de..64ee2545dd3db3 100644 --- a/include/rdma/uverbs_std_types.h +++ b/include/rdma/uverbs_std_types.h @@ -125,12 +125,18 @@ static inline void uobj_alloc_abort(struct ib_uobject *uobj) } static inline struct ib_uobject *__uobj_alloc(const struct uverbs_obj_type *type, - struct ib_uverbs_file *ufile) + struct ib_uverbs_file *ufile, + struct ib_device **ib_dev) { - return rdma_alloc_begin_uobject(type, ufile); + struct ib_uobject *uobj = rdma_alloc_begin_uobject(type, ufile); + + if (!IS_ERR(uobj)) + *ib_dev = uobj->context->device; + return uobj; } -#define uobj_alloc(_type, _ufile) __uobj_alloc(uobj_get_type(_type), _ufile) +#define uobj_alloc(_type, _ufile, _ib_dev) \ + __uobj_alloc(uobj_get_type(_type), _ufile, _ib_dev) #endif

From patchwork Thu Jul 26 03:40:18 2018
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 10545153
From: Jason Gunthorpe
To: linux-rdma@vger.kernel.org, Leon Romanovsky, "Guy Levi(SW)", Yishai Hadas, "Ruhl, Michael J"
Cc: Jason Gunthorpe
Subject: [PATCH 09/11] IB/uverbs: Do not pass struct ib_device to the ioctl methods
Date: Wed, 25 Jul 2018 21:40:18 -0600
Message-Id: <20180726034020.5583-10-jgg@ziepe.ca>
In-Reply-To: <20180726034020.5583-1-jgg@ziepe.ca>
References: <20180726034020.5583-1-jgg@ziepe.ca>

From: Jason Gunthorpe

This does the same as the patch before, except for ioctl. The rules are
the same, but for the ioctl methods the core code handles setting up the
uobject.

- Retrieve the ib_dev from the uobject->context->device. This is safe
  under ioctl as the core has already done rdma_alloc_begin_uobject and
  so CREATE calls are entirely protected by the rwsem.
- Retrieve the ib_dev from uobject->object
- Call ib_uverbs_get_ucontext()

Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/core/uverbs_ioctl.c        |  2 +-
 drivers/infiniband/core/uverbs_std_types.c    |  3 +-
 .../core/uverbs_std_types_counters.c          | 21 ++++-----
 drivers/infiniband/core/uverbs_std_types_cq.c | 18 ++++----
 drivers/infiniband/core/uverbs_std_types_dm.c | 13 +++---
 .../core/uverbs_std_types_flow_action.c       | 35 +++++++--------
 drivers/infiniband/core/uverbs_std_types_mr.c | 22 +++++-----
 drivers/infiniband/hw/mlx5/devx.c             | 43 +++++++++----------
 drivers/infiniband/hw/mlx5/flow.c             |  8 ++--
 include/rdma/ib_verbs.h                       |  3 +-
 include/rdma/uverbs_ioctl.h                   |  4 +-
 11 files changed, 78 insertions(+), 94 deletions(-)

diff --git a/drivers/infiniband/core/uverbs_ioctl.c b/drivers/infiniband/core/uverbs_ioctl.c index 204130ee1cbe59..68237c372af1f9 100644 --- a/drivers/infiniband/core/uverbs_ioctl.c +++ b/drivers/infiniband/core/uverbs_ioctl.c @@ -352,7 +352,7 @@ static int uverbs_handle_method(struct ib_uverbs_attr __user *uattr_ptr, goto cleanup; } - ret = method_spec->handler(ibdev, ufile, attr_bundle); + ret = method_spec->handler(ufile, attr_bundle); if (destroy_attr) { uobj_put_destroy(destroy_attr->uobject); diff --git a/drivers/infiniband/core/uverbs_std_types.c b/drivers/infiniband/core/uverbs_std_types.c index c1e0492cc78a74..3aa7c7deac749a 100644 --- a/drivers/infiniband/core/uverbs_std_types.c +++ b/drivers/infiniband/core/uverbs_std_types.c @@ -210,8 +210,7 @@ static int uverbs_hot_unplug_completion_event_file(struct ib_uobject *uobj, return 0; }; -int uverbs_destroy_def_handler(struct ib_device *ib_dev, - struct ib_uverbs_file *file, +int uverbs_destroy_def_handler(struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) { return 0; diff --git a/drivers/infiniband/core/uverbs_std_types_counters.c b/drivers/infiniband/core/uverbs_std_types_counters.c index dfe59ad721f635..3889d36b00182c 100644 --- a/drivers/infiniband/core/uverbs_std_types_counters.c +++
b/drivers/infiniband/core/uverbs_std_types_counters.c @@ -47,12 +47,13 @@ static int uverbs_free_counters(struct ib_uobject *uobject, return counters->device->destroy_counters(counters); } -static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_CREATE)(struct ib_device *ib_dev, - struct ib_uverbs_file *file, - struct uverbs_attr_bundle *attrs) +static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_CREATE)( + struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) { + struct ib_uobject *uobj = uverbs_attr_get_uobject( + attrs, UVERBS_ATTR_CREATE_COUNTERS_HANDLE); + struct ib_device *ib_dev = uobj->context->device; struct ib_counters *counters; - struct ib_uobject *uobj; int ret; /* @@ -63,7 +64,6 @@ static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_CREATE)(struct ib_device *ib_de if (!ib_dev->create_counters) return -EOPNOTSUPP; - uobj = uverbs_attr_get_uobject(attrs, UVERBS_ATTR_CREATE_COUNTERS_HANDLE); counters = ib_dev->create_counters(ib_dev, attrs); if (IS_ERR(counters)) { ret = PTR_ERR(counters); @@ -81,9 +81,8 @@ static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_CREATE)(struct ib_device *ib_de return ret; } -static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_READ)(struct ib_device *ib_dev, - struct ib_uverbs_file *file, - struct uverbs_attr_bundle *attrs) +static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_READ)( + struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) { struct ib_counters_read_attr read_attr = {}; const struct uverbs_attr *uattr; @@ -91,7 +90,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_READ)(struct ib_device *ib_dev, uverbs_attr_get_obj(attrs, UVERBS_ATTR_READ_COUNTERS_HANDLE); int ret; - if (!ib_dev->read_counters) + if (!counters->device->read_counters) return -EOPNOTSUPP; if (!atomic_read(&counters->usecnt)) @@ -109,9 +108,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_READ)(struct ib_device *ib_dev, if (!read_attr.counters_buff) return -ENOMEM; - ret = ib_dev->read_counters(counters, - &read_attr, - attrs); + ret = 
counters->device->read_counters(counters, &read_attr, attrs); if (ret) goto err_read; diff --git a/drivers/infiniband/core/uverbs_std_types_cq.c b/drivers/infiniband/core/uverbs_std_types_cq.c index 32930880975e56..3fcde611ca46a4 100644 --- a/drivers/infiniband/core/uverbs_std_types_cq.c +++ b/drivers/infiniband/core/uverbs_std_types_cq.c @@ -57,11 +57,13 @@ static int uverbs_free_cq(struct ib_uobject *uobject, return ret; } -static int UVERBS_HANDLER(UVERBS_METHOD_CQ_CREATE)(struct ib_device *ib_dev, - struct ib_uverbs_file *file, - struct uverbs_attr_bundle *attrs) +static int UVERBS_HANDLER(UVERBS_METHOD_CQ_CREATE)( + struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) { - struct ib_ucq_object *obj; + struct ib_ucq_object *obj = container_of( + uverbs_attr_get_uobject(attrs, UVERBS_ATTR_CREATE_CQ_HANDLE), + typeof(*obj), uobject); + struct ib_device *ib_dev = obj->uobject.context->device; struct ib_udata uhw; int ret; u64 user_handle; @@ -102,9 +104,6 @@ static int UVERBS_HANDLER(UVERBS_METHOD_CQ_CREATE)(struct ib_device *ib_dev, goto err_event_file; } - obj = container_of(uverbs_attr_get_uobject(attrs, - UVERBS_ATTR_CREATE_CQ_HANDLE), - typeof(*obj), uobject); obj->comp_events_reported = 0; obj->async_events_reported = 0; INIT_LIST_HEAD(&obj->comp_list); @@ -170,9 +169,8 @@ DECLARE_UVERBS_NAMED_METHOD( UA_MANDATORY), UVERBS_ATTR_UHW()); -static int UVERBS_HANDLER(UVERBS_METHOD_CQ_DESTROY)(struct ib_device *ib_dev, - struct ib_uverbs_file *file, - struct uverbs_attr_bundle *attrs) +static int UVERBS_HANDLER(UVERBS_METHOD_CQ_DESTROY)( + struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) { struct ib_uobject *uobj = uverbs_attr_get_uobject(attrs, UVERBS_ATTR_DESTROY_CQ_HANDLE); diff --git a/drivers/infiniband/core/uverbs_std_types_dm.c b/drivers/infiniband/core/uverbs_std_types_dm.c index c90efa4b99f4e2..edc3ff7733d47d 100644 --- a/drivers/infiniband/core/uverbs_std_types_dm.c +++ b/drivers/infiniband/core/uverbs_std_types_dm.c @@ -46,12 
+46,15 @@ static int uverbs_free_dm(struct ib_uobject *uobject, return dm->device->dealloc_dm(dm); } -static int UVERBS_HANDLER(UVERBS_METHOD_DM_ALLOC)(struct ib_device *ib_dev, - struct ib_uverbs_file *file, - struct uverbs_attr_bundle *attrs) +static int +UVERBS_HANDLER(UVERBS_METHOD_DM_ALLOC)(struct ib_uverbs_file *file, + struct uverbs_attr_bundle *attrs) { struct ib_dm_alloc_attr attr = {}; - struct ib_uobject *uobj; + struct ib_uobject *uobj = + uverbs_attr_get(attrs, UVERBS_ATTR_ALLOC_DM_HANDLE) + ->obj_attr.uobject; + struct ib_device *ib_dev = uobj->context->device; struct ib_dm *dm; int ret; @@ -68,8 +71,6 @@ static int UVERBS_HANDLER(UVERBS_METHOD_DM_ALLOC)(struct ib_device *ib_dev, if (ret) return ret; - uobj = uverbs_attr_get(attrs, UVERBS_ATTR_ALLOC_DM_HANDLE)->obj_attr.uobject; - dm = ib_dev->alloc_dm(ib_dev, uobj->context, &attr, attrs); if (IS_ERR(dm)) return PTR_ERR(dm); diff --git a/drivers/infiniband/core/uverbs_std_types_flow_action.c b/drivers/infiniband/core/uverbs_std_types_flow_action.c index adb9209c47105e..d8cfafe23bd9cd 100644 --- a/drivers/infiniband/core/uverbs_std_types_flow_action.c +++ b/drivers/infiniband/core/uverbs_std_types_flow_action.c @@ -304,12 +304,13 @@ static int parse_flow_action_esp(struct ib_device *ib_dev, return 0; } -static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_CREATE)(struct ib_device *ib_dev, - struct ib_uverbs_file *file, - struct uverbs_attr_bundle *attrs) +static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_CREATE)( + struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) { + struct ib_uobject *uobj = uverbs_attr_get_uobject( + attrs, UVERBS_ATTR_CREATE_FLOW_ACTION_ESP_HANDLE); + struct ib_device *ib_dev = uobj->context->device; int ret; - struct ib_uobject *uobj; struct ib_flow_action *action; struct ib_flow_action_esp_attr esp_attr = {}; @@ -321,8 +322,6 @@ static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_CREATE)(struct ib_device return ret; /* No need to check as this attribute 
is marked as MANDATORY */ - uobj = uverbs_attr_get_uobject( - attrs, UVERBS_ATTR_CREATE_FLOW_ACTION_ESP_HANDLE); action = ib_dev->create_flow_action_esp(ib_dev, &esp_attr.hdr, attrs); if (IS_ERR(action)) return PTR_ERR(action); @@ -336,32 +335,28 @@ static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_CREATE)(struct ib_device return 0; } -static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_MODIFY)(struct ib_device *ib_dev, - struct ib_uverbs_file *file, - struct uverbs_attr_bundle *attrs) +static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_MODIFY)( + struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) { + struct ib_uobject *uobj = uverbs_attr_get_uobject( + attrs, UVERBS_ATTR_MODIFY_FLOW_ACTION_ESP_HANDLE); + struct ib_flow_action *action = uobj->object; int ret; - struct ib_uobject *uobj; - struct ib_flow_action *action; struct ib_flow_action_esp_attr esp_attr = {}; - if (!ib_dev->modify_flow_action_esp) + if (!action->device->modify_flow_action_esp) return -EOPNOTSUPP; - ret = parse_flow_action_esp(ib_dev, file, attrs, &esp_attr, true); + ret = parse_flow_action_esp(action->device, file, attrs, &esp_attr, + true); if (ret) return ret; - uobj = uverbs_attr_get_uobject( - attrs, UVERBS_ATTR_MODIFY_FLOW_ACTION_ESP_HANDLE); - action = uobj->object; - if (action->type != IB_FLOW_ACTION_ESP) return -EINVAL; - return ib_dev->modify_flow_action_esp(action, - &esp_attr.hdr, - attrs); + return action->device->modify_flow_action_esp(action, &esp_attr.hdr, + attrs); } static const struct uverbs_attr_spec uverbs_flow_action_esp_keymat[] = { diff --git a/drivers/infiniband/core/uverbs_std_types_mr.c b/drivers/infiniband/core/uverbs_std_types_mr.c index c1b9124d611e79..5c8846cb99453e 100644 --- a/drivers/infiniband/core/uverbs_std_types_mr.c +++ b/drivers/infiniband/core/uverbs_std_types_mr.c @@ -39,14 +39,18 @@ static int uverbs_free_mr(struct ib_uobject *uobject, return ib_dereg_mr((struct ib_mr *)uobject->object); } -static int 
UVERBS_HANDLER(UVERBS_METHOD_DM_MR_REG)(struct ib_device *ib_dev, - struct ib_uverbs_file *file, - struct uverbs_attr_bundle *attrs) +static int UVERBS_HANDLER(UVERBS_METHOD_DM_MR_REG)( + struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) { struct ib_dm_mr_attr attr = {}; - struct ib_uobject *uobj; - struct ib_dm *dm; - struct ib_pd *pd; + struct ib_uobject *uobj = + uverbs_attr_get_uobject(attrs, UVERBS_ATTR_REG_DM_MR_HANDLE); + struct ib_dm *dm = + uverbs_attr_get_obj(attrs, UVERBS_ATTR_REG_DM_MR_DM_HANDLE); + struct ib_pd *pd = + uverbs_attr_get_obj(attrs, UVERBS_ATTR_REG_DM_MR_PD_HANDLE); + struct ib_device *ib_dev = pd->device; + struct ib_mr *mr; int ret; @@ -74,12 +78,6 @@ static int UVERBS_HANDLER(UVERBS_METHOD_DM_MR_REG)(struct ib_device *ib_dev, if (ret) return ret; - pd = uverbs_attr_get_obj(attrs, UVERBS_ATTR_REG_DM_MR_PD_HANDLE); - - dm = uverbs_attr_get_obj(attrs, UVERBS_ATTR_REG_DM_MR_DM_HANDLE); - - uobj = uverbs_attr_get(attrs, UVERBS_ATTR_REG_DM_MR_HANDLE)->obj_attr.uobject; - if (attr.offset > dm->length || attr.length > dm->length || attr.length > dm->length - attr.offset) return -EINVAL; diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c index fee800f2fdec9f..f0c3968ecca7a6 100644 --- a/drivers/infiniband/hw/mlx5/devx.c +++ b/drivers/infiniband/hw/mlx5/devx.c @@ -409,11 +409,11 @@ static bool devx_is_general_cmd(void *in) } } -static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_QUERY_EQN)(struct ib_device *ib_dev, - struct ib_uverbs_file *file, - struct uverbs_attr_bundle *attrs) +static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_QUERY_EQN)( + struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs) { - struct mlx5_ib_dev *dev = to_mdev(ib_dev); + struct mlx5_ib_ucontext *c; + struct mlx5_ib_dev *dev; int user_vector; int dev_eqn; unsigned int irqn; @@ -423,6 +423,11 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_QUERY_EQN)(struct ib_device *ib_de MLX5_IB_ATTR_DEVX_QUERY_EQN_USER_VEC)) return -EFAULT; + 
 	c = devx_ufile2uctx(file);
+	if (IS_ERR(c))
+		return PTR_ERR(c);
+	dev = to_mdev(c->ibucontext.device);
+
 	err = mlx5_vector2eqn(dev->mdev, user_vector, &dev_eqn, &irqn);
 	if (err < 0)
 		return err;
@@ -454,9 +459,8 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_QUERY_EQN)(struct ib_device *ib_de
  * of the buggy user for execution (just insert it to the hardware schedule
  * queue or arm its CQ for event generation), no further harm is expected.
  */
-static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_QUERY_UAR)(struct ib_device *ib_dev,
-							 struct ib_uverbs_file *file,
-							 struct uverbs_attr_bundle *attrs)
+static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_QUERY_UAR)(
+	struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs)
 {
 	struct mlx5_ib_ucontext *c;
 	struct mlx5_ib_dev *dev;
@@ -483,9 +487,8 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_QUERY_UAR)(struct ib_device *ib_de
 	return 0;
 }
 
-static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OTHER)(struct ib_device *ib_dev,
-						     struct ib_uverbs_file *file,
-						     struct uverbs_attr_bundle *attrs)
+static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OTHER)(
+	struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs)
 {
 	struct mlx5_ib_ucontext *c;
 	struct mlx5_ib_dev *dev;
@@ -712,9 +715,8 @@ static int devx_obj_cleanup(struct ib_uobject *uobject,
 	return ret;
 }
 
-static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_CREATE)(struct ib_device *ib_dev,
-							  struct ib_uverbs_file *file,
-							  struct uverbs_attr_bundle *attrs)
+static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_CREATE)(
+	struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs)
 {
 	void *cmd_in = uverbs_attr_get_alloced_ptr(attrs, MLX5_IB_ATTR_DEVX_OBJ_CREATE_CMD_IN);
 	int cmd_out_len = uverbs_attr_get_len(attrs,
@@ -769,9 +771,8 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_CREATE)(struct ib_device *ib_d
 	return err;
 }
 
-static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_MODIFY)(struct ib_device *ib_dev,
-							  struct ib_uverbs_file *file,
-							  struct uverbs_attr_bundle *attrs)
+static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_MODIFY)(
+	struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs)
 {
 	void *cmd_in = uverbs_attr_get_alloced_ptr(attrs, MLX5_IB_ATTR_DEVX_OBJ_MODIFY_CMD_IN);
 	int cmd_out_len = uverbs_attr_get_len(attrs,
@@ -811,9 +812,8 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_MODIFY)(struct ib_device *ib_d
 	return err;
 }
 
-static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_QUERY)(struct ib_device *ib_dev,
-							 struct ib_uverbs_file *file,
-							 struct uverbs_attr_bundle *attrs)
+static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_OBJ_QUERY)(
+	struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs)
 {
 	void *cmd_in = uverbs_attr_get_alloced_ptr(attrs, MLX5_IB_ATTR_DEVX_OBJ_QUERY_CMD_IN);
 	int cmd_out_len = uverbs_attr_get_len(attrs,
@@ -926,9 +926,8 @@ static void devx_umem_reg_cmd_build(struct mlx5_ib_dev *dev,
 			 MLX5_IB_MTT_READ);
 }
 
-static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_UMEM_REG)(struct ib_device *ib_dev,
-							struct ib_uverbs_file *file,
-							struct uverbs_attr_bundle *attrs)
+static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_UMEM_REG)(
+	struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs)
 {
 	struct devx_umem_reg_cmd cmd;
 	struct devx_umem *obj;
diff --git a/drivers/infiniband/hw/mlx5/flow.c b/drivers/infiniband/hw/mlx5/flow.c
index ee398a9b5f26b0..1a29f47f836e5a 100644
--- a/drivers/infiniband/hw/mlx5/flow.c
+++ b/drivers/infiniband/hw/mlx5/flow.c
@@ -39,8 +39,7 @@ static const struct uverbs_attr_spec mlx5_ib_flow_type[] = {
 };
 
 static int UVERBS_HANDLER(MLX5_IB_METHOD_CREATE_FLOW)(
-	struct ib_device *ib_dev, struct ib_uverbs_file *file,
-	struct uverbs_attr_bundle *attrs)
+	struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs)
 {
 	struct mlx5_ib_flow_handler *flow_handler;
 	struct mlx5_ib_flow_matcher *fs_matcher;
@@ -109,7 +108,7 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_CREATE_FLOW)(
 	if (IS_ERR(flow_handler))
 		return PTR_ERR(flow_handler);
 
-	ib_set_flow(uobj, &flow_handler->ibflow, qp, ib_dev);
+	ib_set_flow(uobj, &flow_handler->ibflow, qp, &dev->ib_dev);
 	return 0;
 }
 
@@ -129,8 +128,7 @@ static int flow_matcher_cleanup(struct ib_uobject *uobject,
 }
 
 static int UVERBS_HANDLER(MLX5_IB_METHOD_FLOW_MATCHER_CREATE)(
-	struct ib_device *ib_dev, struct ib_uverbs_file *file,
-	struct uverbs_attr_bundle *attrs)
+	struct ib_uverbs_file *file, struct uverbs_attr_bundle *attrs)
 {
 	struct ib_uobject *uobj = uverbs_attr_get_uobject(
 		attrs, MLX5_IB_ATTR_FLOW_MATCHER_CREATE_HANDLE);
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 7d18e1df052292..9a73cc91e1d5b6 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -4169,7 +4169,6 @@ void rdma_roce_rescan_device(struct ib_device *ibdev);
 
 struct ib_ucontext *ib_uverbs_get_ucontext(struct ib_uverbs_file *ufile);
 
-int uverbs_destroy_def_handler(struct ib_device *ib_dev,
-			       struct ib_uverbs_file *file,
+int uverbs_destroy_def_handler(struct ib_uverbs_file *file,
 			       struct uverbs_attr_bundle *attrs);
 
 #endif /* IB_VERBS_H */
diff --git a/include/rdma/uverbs_ioctl.h b/include/rdma/uverbs_ioctl.h
index d16d31d4322d4f..fad668f99d8f56 100644
--- a/include/rdma/uverbs_ioctl.h
+++ b/include/rdma/uverbs_ioctl.h
@@ -128,7 +128,7 @@ struct uverbs_method_spec {
 	u32 flags;
 	size_t num_buckets;
 	size_t num_child_attrs;
-	int (*handler)(struct ib_device *ib_dev, struct ib_uverbs_file *ufile,
+	int (*handler)(struct ib_uverbs_file *ufile,
 		       struct uverbs_attr_bundle *ctx);
 	struct uverbs_attr_spec_hash *attr_buckets[0];
 };
@@ -171,7 +171,7 @@ struct uverbs_method_def {
 	u32 flags;
 	size_t num_attrs;
 	const struct uverbs_attr_def * const (*attrs)[];
-	int (*handler)(struct ib_device *ib_dev, struct ib_uverbs_file *ufile,
+	int (*handler)(struct ib_uverbs_file *ufile,
 		       struct uverbs_attr_bundle *ctx);
 };

From patchwork Thu Jul 26 03:40:19 2018
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 10545145
From: Jason Gunthorpe
To: linux-rdma@vger.kernel.org, Leon Romanovsky, "Guy Levi(SW)", Yishai Hadas, "Ruhl, Michael J"
Cc: Jason Gunthorpe
Subject: [PATCH 10/11] IB/uverbs: Do not block disassociate during write()
Date: Wed, 25 Jul 2018 21:40:19 -0600
Message-Id: <20180726034020.5583-11-jgg@ziepe.ca>
In-Reply-To: <20180726034020.5583-1-jgg@ziepe.ca>
References: <20180726034020.5583-1-jgg@ziepe.ca>

From: Jason Gunthorpe

Now that all the callbacks are safe to run concurrently with disassociation
this test can be eliminated.
The ufile core infrastructure becomes entirely self contained and is not
sensitive to disassociation.

Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/core/uverbs.h      |  3 +++
 drivers/infiniband/core/uverbs_main.c | 20 ++++++++------------
 2 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/drivers/infiniband/core/uverbs.h b/drivers/infiniband/core/uverbs.h
index 5e21cc1f900b9a..0fa32009908c3d 100644
--- a/drivers/infiniband/core/uverbs.h
+++ b/drivers/infiniband/core/uverbs.h
@@ -158,6 +158,9 @@ struct ib_uverbs_file {
 	spinlock_t		uobjects_lock;
 	struct list_head	uobjects;
 
+	u64 uverbs_cmd_mask;
+	u64 uverbs_ex_cmd_mask;
+
 	struct idr		idr;
 	/* spinlock protects write access to idr */
 	spinlock_t		idr_lock;
diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
index a1e427b2c2a139..a3213245aab246 100644
--- a/drivers/infiniband/core/uverbs_main.c
+++ b/drivers/infiniband/core/uverbs_main.c
@@ -646,13 +646,13 @@ struct file *ib_uverbs_alloc_async_event_file(struct ib_uverbs_file *uverbs_file
 	return filp;
 }
 
-static bool verify_command_mask(struct ib_device *ib_dev,
-				u32 command, bool extended)
+static bool verify_command_mask(struct ib_uverbs_file *ufile, u32 command,
+				bool extended)
 {
 	if (!extended)
-		return ib_dev->uverbs_cmd_mask & BIT_ULL(command);
+		return ufile->uverbs_cmd_mask & BIT_ULL(command);
 
-	return ib_dev->uverbs_ex_cmd_mask & BIT_ULL(command);
+	return ufile->uverbs_ex_cmd_mask & BIT_ULL(command);
 }
 
 static bool verify_command_idx(u32 command, bool extended)
@@ -722,7 +722,6 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf,
 {
 	struct ib_uverbs_file *file = filp->private_data;
 	struct ib_uverbs_ex_cmd_hdr ex_hdr;
-	struct ib_device *ib_dev;
 	struct ib_uverbs_cmd_hdr hdr;
 	bool extended;
 	int srcu_key;
@@ -757,14 +756,8 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf,
 		return ret;
 
 	srcu_key = srcu_read_lock(&file->device->disassociate_srcu);
-	ib_dev = srcu_dereference(file->device->ib_dev,
-				  &file->device->disassociate_srcu);
-	if (!ib_dev) {
-		ret = -EIO;
-		goto out;
-	}
-
-	if (!verify_command_mask(ib_dev, command, extended)) {
+	if (!verify_command_mask(file, command, extended)) {
 		ret = -EOPNOTSUPP;
 		goto out;
 	}
@@ -889,6 +882,9 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp)
 	mutex_unlock(&dev->lists_mutex);
 	srcu_read_unlock(&dev->disassociate_srcu, srcu_key);
 
+	file->uverbs_cmd_mask = ib_dev->uverbs_cmd_mask;
+	file->uverbs_ex_cmd_mask = ib_dev->uverbs_ex_cmd_mask;
+
 	return nonseekable_open(inode, filp);
 
 err_module:

From patchwork Thu Jul 26 03:40:20 2018
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 10545151
From: Jason Gunthorpe
To: linux-rdma@vger.kernel.org, Leon Romanovsky, "Guy Levi(SW)", Yishai Hadas, "Ruhl, Michael J"
Cc: Jason Gunthorpe
Subject: [PATCH 11/11] IB/uverbs: Allow all DESTROY commands to succeed after disassociate
Date: Wed, 25 Jul 2018 21:40:20 -0600
Message-Id: <20180726034020.5583-12-jgg@ziepe.ca>
In-Reply-To: <20180726034020.5583-1-jgg@ziepe.ca>
References: <20180726034020.5583-1-jgg@ziepe.ca>

From: Jason Gunthorpe

The disassociate function was broken by design because it failed all
commands. This prevents userspace from calling destroy on a uobject after
it has detected a device fatal error and thus reclaiming the resources in
userspace is prevented.

This fix is now straightforward, when anything destroys a uobject that is
not the user the object remains on the IDR with a NULL context and object
pointer. All lookup locking modes other than DESTROY will fail. When the
user ultimately calls the destroy function it is simply dropped from the
IDR while any related information is returned.
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/core/rdma_core.c   | 66 ++++++++++++++++++++++-----
 drivers/infiniband/core/rdma_core.h   |  3 ++
 drivers/infiniband/core/uverbs_main.c |  7 +--
 include/rdma/uverbs_types.h           |  6 ++-
 4 files changed, 66 insertions(+), 16 deletions(-)

diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c
index d4de1fed98f2fb..4235b9ddc2adaf 100644
--- a/drivers/infiniband/core/rdma_core.c
+++ b/drivers/infiniband/core/rdma_core.c
@@ -180,7 +180,7 @@ static int uverbs_destroy_uobject(struct ib_uobject *uobj,
 	assert_uverbs_usecnt(uobj, UVERBS_LOOKUP_WRITE);
 
 	if (uobj->object) {
-		ret = uobj->type->type_class->remove_commit(uobj, reason);
+		ret = uobj->type->type_class->destroy_hw(uobj, reason);
 		if (ret) {
 			if (ib_is_destroy_retryable(ret, reason, uobj))
 				return ret;
@@ -204,10 +204,13 @@ static int uverbs_destroy_uobject(struct ib_uobject *uobj,
 	/*
 	 * For DESTROY the usecnt is held write locked, the caller is expected
-	 * to put it unlock and put the object when done with it.
+	 * to put it unlock and put the object when done with it. Only DESTROY
+	 * can remove the IDR handle.
 	 */
 	if (reason != RDMA_REMOVE_DESTROY)
 		atomic_set(&uobj->usecnt, 0);
+	else
+		uobj->type->type_class->remove_handle(uobj);
 
 	if (!list_empty(&uobj->list)) {
 		spin_lock_irqsave(&ufile->uobjects_lock, flags);
@@ -554,8 +557,8 @@ static void alloc_abort_idr_uobject(struct ib_uobject *uobj)
 	spin_unlock(&uobj->ufile->idr_lock);
 }
 
-static int __must_check remove_commit_idr_uobject(struct ib_uobject *uobj,
-						  enum rdma_remove_reason why)
+static int __must_check destroy_hw_idr_uobject(struct ib_uobject *uobj,
+					       enum rdma_remove_reason why)
 {
 	const struct uverbs_obj_idr_type *idr_type =
 		container_of(uobj->type, struct uverbs_obj_idr_type,
@@ -573,20 +576,28 @@ static int __must_check remove_commit_idr_uobject(struct ib_uobject *uobj,
 	if (why == RDMA_REMOVE_ABORT)
 		return 0;
 
-	alloc_abort_idr_uobject(uobj);
-	/* Matches the kref in alloc_commit_idr_uobject */
-	uverbs_uobject_put(uobj);
+	ib_rdmacg_uncharge(&uobj->cg_obj, uobj->context->device,
+			   RDMACG_RESOURCE_HCA_OBJECT);
 
 	return 0;
 }
 
+static void remove_handle_idr_uobject(struct ib_uobject *uobj)
+{
+	spin_lock(&uobj->ufile->idr_lock);
+	idr_remove(&uobj->ufile->idr, uobj->id);
+	spin_unlock(&uobj->ufile->idr_lock);
+	/* Matches the kref in alloc_commit_idr_uobject */
+	uverbs_uobject_put(uobj);
+}
+
 static void alloc_abort_fd_uobject(struct ib_uobject *uobj)
 {
 	put_unused_fd(uobj->id);
 }
 
-static int __must_check remove_commit_fd_uobject(struct ib_uobject *uobj,
-						 enum rdma_remove_reason why)
+static int __must_check destroy_hw_fd_uobject(struct ib_uobject *uobj,
+					      enum rdma_remove_reason why)
 {
 	const struct uverbs_obj_fd_type *fd_type =
 		container_of(uobj->type, struct uverbs_obj_fd_type, type);
@@ -598,6 +609,10 @@ static int __must_check remove_commit_fd_uobject(struct ib_uobject *uobj,
 	return 0;
 }
 
+static void remove_handle_fd_uobject(struct ib_uobject *uobj)
+{
+}
+
 static int alloc_commit_idr_uobject(struct ib_uobject *uobj)
 {
 	struct ib_uverbs_file *ufile = uobj->ufile;
@@ -741,13 +756,41 @@ void rdma_lookup_put_uobject(struct ib_uobject *uobj,
 	uverbs_uobject_put(uobj);
 }
 
+void setup_ufile_idr_uobject(struct ib_uverbs_file *ufile)
+{
+	spin_lock_init(&ufile->idr_lock);
+	idr_init(&ufile->idr);
+}
+
+void release_ufile_idr_uobject(struct ib_uverbs_file *ufile)
+{
+	struct ib_uobject *entry;
+	int id;
+
+	/*
+	 * At this point uverbs_cleanup_ufile() is guaranteed to have run, and
+	 * there are no HW objects left, however the IDR is still populated
+	 * with anything that has not been cleaned up by userspace. Since the
+	 * kref on ufile is 0, nothing is allowed to call lookup_get.
+	 *
+	 * This is an optimized equivalent to remove_handle_idr_uobject
+	 */
+	idr_for_each_entry(&ufile->idr, entry, id) {
+		WARN_ON(entry->object);
+		uverbs_uobject_put(entry);
+	}
+
+	idr_destroy(&ufile->idr);
+}
+
 const struct uverbs_obj_type_class uverbs_idr_class = {
 	.alloc_begin = alloc_begin_idr_uobject,
 	.lookup_get = lookup_get_idr_uobject,
 	.alloc_commit = alloc_commit_idr_uobject,
 	.alloc_abort = alloc_abort_idr_uobject,
 	.lookup_put = lookup_put_idr_uobject,
-	.remove_commit = remove_commit_idr_uobject,
+	.destroy_hw = destroy_hw_idr_uobject,
+	.remove_handle = remove_handle_idr_uobject,
 	/*
 	 * When we destroy an object, we first just lock it for WRITE and
 	 * actually DESTROY it in the finalize stage. So, the problematic
@@ -945,7 +988,8 @@ const struct uverbs_obj_type_class uverbs_fd_class = {
 	.alloc_commit = alloc_commit_fd_uobject,
 	.alloc_abort = alloc_abort_fd_uobject,
 	.lookup_put = lookup_put_fd_uobject,
-	.remove_commit = remove_commit_fd_uobject,
+	.destroy_hw = destroy_hw_fd_uobject,
+	.remove_handle = remove_handle_fd_uobject,
 	.needs_kfree_rcu = false,
 };
 EXPORT_SYMBOL(uverbs_fd_class);
diff --git a/drivers/infiniband/core/rdma_core.h b/drivers/infiniband/core/rdma_core.h
index e4d8b985c31135..b2e85ce65b78cb 100644
--- a/drivers/infiniband/core/rdma_core.h
+++ b/drivers/infiniband/core/rdma_core.h
@@ -110,4 +110,7 @@ int uverbs_finalize_object(struct ib_uobject *uobj,
 			   enum uverbs_obj_access access,
 			   bool commit);
 
+void setup_ufile_idr_uobject(struct ib_uverbs_file *ufile);
+void release_ufile_idr_uobject(struct ib_uverbs_file *ufile);
+
 #endif /* RDMA_CORE_H */
diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
index a3213245aab246..6f62146e9738a0 100644
--- a/drivers/infiniband/core/uverbs_main.c
+++ b/drivers/infiniband/core/uverbs_main.c
@@ -253,6 +253,8 @@ void ib_uverbs_release_file(struct kref *ref)
 	struct ib_device *ib_dev;
 	int srcu_key;
 
+	release_ufile_idr_uobject(file);
+
 	srcu_key = srcu_read_lock(&file->device->disassociate_srcu);
 	ib_dev = srcu_dereference(file->device->ib_dev,
 				  &file->device->disassociate_srcu);
@@ -867,8 +869,6 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp)
 	}
 
 	file->device	 = dev;
-	spin_lock_init(&file->idr_lock);
-	idr_init(&file->idr);
 	kref_init(&file->ref);
 	mutex_init(&file->ucontext_lock);
@@ -885,6 +885,8 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp)
 	file->uverbs_cmd_mask = ib_dev->uverbs_cmd_mask;
 	file->uverbs_ex_cmd_mask = ib_dev->uverbs_ex_cmd_mask;
 
+	setup_ufile_idr_uobject(file);
+
 	return nonseekable_open(inode, filp);
 
 err_module:
@@ -904,7 +906,6 @@ static int ib_uverbs_close(struct inode *inode, struct file *filp)
 	struct ib_uverbs_file *file = filp->private_data;
 
 	uverbs_destroy_ufile_hw(file, RDMA_REMOVE_CLOSE);
-	idr_destroy(&file->idr);
 
 	mutex_lock(&file->device->lists_mutex);
 	if (!file->is_closed) {
diff --git a/include/rdma/uverbs_types.h b/include/rdma/uverbs_types.h
index f64f413cecac22..1ab9a85eebd9f2 100644
--- a/include/rdma/uverbs_types.h
+++ b/include/rdma/uverbs_types.h
@@ -61,6 +61,7 @@ enum rdma_lookup_mode {
  * Destruction flow:
  *   lookup_get(exclusive=true) & uverbs_try_lock_object
  *   remove_commit
+ *   remove_handle (optional)
  *   lookup_put(exclusive=true) via rdma_lookup_put_uobject
  *
  * Allocate Error flow #1
@@ -92,8 +93,9 @@ struct uverbs_obj_type_class {
 			   enum rdma_lookup_mode mode);
 	void (*lookup_put)(struct ib_uobject *uobj, enum rdma_lookup_mode mode);
 	/* This does not consume the kref on uobj */
-	int __must_check (*remove_commit)(struct ib_uobject *uobj,
-					  enum rdma_remove_reason why);
+	int __must_check (*destroy_hw)(struct ib_uobject *uobj,
+				       enum rdma_remove_reason why);
+	void (*remove_handle)(struct ib_uobject *uobj);
 	u8 needs_kfree_rcu;
 };