From patchwork Sun Mar 19 15:59:03 2017
X-Patchwork-Submitter: Matan Barak
X-Patchwork-Id: 9632699
From: Matan Barak <matanb@mellanox.com>
To: Doug Ledford
Cc: linux-rdma@vger.kernel.org, Liran Liss, Jason Gunthorpe, Sean Hefty,
 Matan Barak, Leon Romanovsky, Majd Dibbiny, Tal Alon, Yishai Hadas,
 Ira Weiny, Haggai Eran, Christoph Lameter
Subject: [PATCH V2 for-next 5/7] IB/core: Add lock to multicast handlers
Date: Sun, 19 Mar 2017 17:59:03 +0200
Message-Id: <1489939145-125246-6-git-send-email-matanb@mellanox.com>
In-Reply-To: <1489939145-125246-1-git-send-email-matanb@mellanox.com>
References: <1489939145-125246-1-git-send-email-matanb@mellanox.com>

In the old schema, when two handlers used the same object, the second
process was blocked in the kernel until the first finished. The new
schema just returns -EBUSY, which could lead to different
application-visible behaviour between the old schema and the new one.
In most cases, using such handlers concurrently could crash the
process anyway. For example, if thread A destroys a QP while thread B
modifies it, the destruction could happen before the modification; the
modification would then access freed memory and could crash the
process. However, concurrently attaching and detaching a multicast
address on the same QP is safe, so we preserve the original blocking
behaviour there by adding a lock.
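
To make the pattern concrete, below is a minimal userspace sketch (not
part of the patch): it models the same per-object locking with
pthreads and a plain singly linked list instead of the kernel's mutex
and list_head. All names (uqp_object, uqp_attach_mcast,
uqp_detach_mcast, mcast_entry) are invented for illustration. With the
lock, a racing second thread blocks until the first finishes rather
than failing with -EBUSY:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct mcast_entry {
	uint16_t		lid;
	uint8_t			gid[16];
	struct mcast_entry	*next;
};

struct uqp_object {
	pthread_mutex_t		mcast_lock;	/* protects mcast_list */
	struct mcast_entry	*mcast_list;
};

/* Loosely mirrors ib_uverbs_attach_mcast(): a duplicate attach is a
 * no-op success; otherwise a new entry is added under the lock. */
static int uqp_attach_mcast(struct uqp_object *obj,
			    const uint8_t gid[16], uint16_t lid)
{
	struct mcast_entry *e;
	int ret = 0;

	pthread_mutex_lock(&obj->mcast_lock);
	for (e = obj->mcast_list; e; e = e->next)
		if (e->lid == lid && !memcmp(e->gid, gid, 16))
			goto out;	/* already attached */

	e = calloc(1, sizeof(*e));
	if (!e) {
		ret = -1;
		goto out;
	}
	e->lid = lid;
	memcpy(e->gid, gid, 16);
	e->next = obj->mcast_list;
	obj->mcast_list = e;
out:
	pthread_mutex_unlock(&obj->mcast_lock);
	return ret;
}

/* Loosely mirrors ib_uverbs_detach_mcast(): unlink and free the
 * matching entry under the same lock. */
static void uqp_detach_mcast(struct uqp_object *obj,
			     const uint8_t gid[16], uint16_t lid)
{
	struct mcast_entry **p, *e;

	pthread_mutex_lock(&obj->mcast_lock);
	for (p = &obj->mcast_list; (e = *p) != NULL; p = &e->next)
		if (e->lid == lid && !memcmp(e->gid, gid, 16)) {
			*p = e->next;
			free(e);
			break;
		}
	pthread_mutex_unlock(&obj->mcast_lock);
}

static void *worker(void *arg)
{
	struct uqp_object *obj = arg;
	uint8_t gid[16] = { 0xff, 0x12 };	/* arbitrary multicast GID */
	int i;

	/* Both threads hammer the same list; the mutex serializes them,
	 * so neither ever observes a half-updated list. */
	for (i = 0; i < 100000; i++) {
		uqp_attach_mcast(obj, gid, 0xc000);
		uqp_detach_mcast(obj, gid, 0xc000);
	}
	return NULL;
}

int main(void)
{
	struct uqp_object obj = { .mcast_lock = PTHREAD_MUTEX_INITIALIZER };
	pthread_t a, b;

	pthread_create(&a, NULL, worker, &obj);
	pthread_create(&b, NULL, worker, &obj);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("done; list is %s\n", obj.mcast_list ? "non-empty" : "empty");
	return 0;
}

Built with gcc -pthread, the two workers race on the shared list and
the final state stays consistent; removing the lock calls reproduces
the kind of list corruption the per-QP mcast_lock prevents.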
Signed-off-by: Matan Barak
Reviewed-by: Yishai Hadas
---
 drivers/infiniband/core/uverbs.h     | 2 ++
 drivers/infiniband/core/uverbs_cmd.c | 5 +++++
 2 files changed, 7 insertions(+)

diff --git a/drivers/infiniband/core/uverbs.h b/drivers/infiniband/core/uverbs.h
index 8971761..7b10142 100644
--- a/drivers/infiniband/core/uverbs.h
+++ b/drivers/infiniband/core/uverbs.h
@@ -163,6 +163,8 @@ struct ib_usrq_object {
 
 struct ib_uqp_object {
 	struct ib_uevent_object	uevent;
+	/* lock for mcast list */
+	struct mutex		mcast_lock;
 	struct list_head	mcast_list;
 	struct ib_uxrcd_object *uxrcd;
 };
diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index b806266..304a3ec 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -1343,6 +1343,7 @@ static int create_qp(struct ib_uverbs_file *file,
 		return PTR_ERR(obj);
 	obj->uxrcd = NULL;
 	obj->uevent.uobject.user_handle = cmd->user_handle;
+	mutex_init(&obj->mcast_lock);
 
 	if (cmd_sz >= offsetof(typeof(*cmd), rwq_ind_tbl_handle) +
 		      sizeof(cmd->rwq_ind_tbl_handle) &&
@@ -2579,6 +2580,7 @@ ssize_t ib_uverbs_attach_mcast(struct ib_uverbs_file *file,
 	obj = container_of(qp->uobject, struct ib_uqp_object, uevent.uobject);
 
+	mutex_lock(&obj->mcast_lock);
 	list_for_each_entry(mcast, &obj->mcast_list, list)
 		if (cmd.mlid == mcast->lid &&
 		    !memcmp(cmd.gid, mcast->gid.raw, sizeof mcast->gid.raw)) {
@@ -2602,6 +2604,7 @@ ssize_t ib_uverbs_attach_mcast(struct ib_uverbs_file *file,
 		kfree(mcast);
 
 out_put:
+	mutex_unlock(&obj->mcast_lock);
 	uobj_put_obj_read(qp);
 
 	return ret ? ret : in_len;
@@ -2626,6 +2629,7 @@ ssize_t ib_uverbs_detach_mcast(struct ib_uverbs_file *file,
 		return -EINVAL;
 
 	obj = container_of(qp->uobject, struct ib_uqp_object, uevent.uobject);
+	mutex_lock(&obj->mcast_lock);
 
 	ret = ib_detach_mcast(qp, (union ib_gid *) cmd.gid, cmd.mlid);
 	if (ret)
@@ -2640,6 +2644,7 @@ ssize_t ib_uverbs_detach_mcast(struct ib_uverbs_file *file,
 	}
 
 out_put:
+	mutex_unlock(&obj->mcast_lock);
 	uobj_put_obj_read(qp);
 
 	return ret ? ret : in_len;
 }
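
A note on the lock choice (my reading; the patch itself does not spell
this out): these uverbs handlers run in process context, and
ib_attach_mcast()/ib_detach_mcast() dispatch into driver callbacks that
may sleep, so a sleeping mutex, rather than a spinlock, is presumably
the appropriate primitive for serializing mcast_list updates here.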