From patchwork Sun May 17 05:50:57 2015
X-Patchwork-Submitter: Haggai Eran
X-Patchwork-Id: 6422371
From: Haggai Eran
To: Doug Ledford
Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Liran Liss,
    Guy Shapiro, Shachar Raindel, Yotam Kenneth, Haggai Eran,
    Matan Barak, Jason Gunthorpe
Subject: [PATCH v4 for-next 01/12] IB/core: Add rwsem to allow reading device list or client list
Date: Sun, 17 May 2015 08:50:57 +0300
Message-Id: <1431841868-28063-2-git-send-email-haggaie@mellanox.com>
In-Reply-To: <1431841868-28063-1-git-send-email-haggaie@mellanox.com>
References: <1431841868-28063-1-git-send-email-haggaie@mellanox.com>

Currently the RDMA subsystem's device list and client list are protected
by a single mutex. This prevents adding user-facing APIs that iterate
these lists, since using them could cause a deadlock.

This patch solves the problem by adding a read-write semaphore to protect
the lists. Readers no longer need the mutex, and are safe just by
read-locking the semaphore. The ib_register_device, ib_register_client,
ib_unregister_device, and ib_unregister_client functions are modified to
take the semaphore for write around their respective list modifications.

This patch addresses a similar need [1] that was seen in the RoCE v2
patch series.
[1] http://www.spinics.net/lists/linux-rdma/msg24733.html

Cc: Matan Barak
Cc: Jason Gunthorpe
Signed-off-by: Haggai Eran
---
 drivers/infiniband/core/device.c | 24 +++++++++++++++++++-----
 1 file changed, 19 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index b360350a0b20..3a44723c6b9d 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -59,13 +59,17 @@ static LIST_HEAD(device_list);
 static LIST_HEAD(client_list);
 
 /*
- * device_mutex protects access to both device_list and client_list.
- * There's no real point to using multiple locks or something fancier
- * like an rwsem: we always access both lists, and we're always
- * modifying one list or the other list. In any case this is not a
- * hot path so there's no point in trying to optimize.
+ * device_mutex and lists_rwsem protect access to both device_list and
+ * client_list. device_mutex protects writer access by device and client
+ * registration / de-registration. lists_rwsem protects reader access to
+ * these lists. Iterators of these lists must lock it for read, while updates
+ * to the lists must be done with a write lock. A special case is when the
+ * device_mutex is locked. In this case locking the lists for read access is
+ * not necessary, as the device_mutex implies it.
  */
 static DEFINE_MUTEX(device_mutex);
+static DECLARE_RWSEM(lists_rwsem);
+
 
 static int ib_device_check_mandatory(struct ib_device *device)
 {
@@ -311,7 +315,9 @@ int ib_register_device(struct ib_device *device,
 		goto out;
 	}
 
+	down_write(&lists_rwsem);
 	list_add_tail(&device->core_list, &device_list);
+	up_write(&lists_rwsem);
 
 	device->reg_state = IB_DEV_REGISTERED;
 
@@ -347,7 +353,9 @@ void ib_unregister_device(struct ib_device *device)
 		if (client->remove)
 			client->remove(device);
 
+	down_write(&lists_rwsem);
 	list_del(&device->core_list);
+	up_write(&lists_rwsem);
 
 	kfree(device->gid_tbl_len);
 	kfree(device->pkey_tbl_len);
@@ -384,7 +392,10 @@ int ib_register_client(struct ib_client *client)
 
 	mutex_lock(&device_mutex);
 
+	down_write(&lists_rwsem);
 	list_add_tail(&client->list, &client_list);
+	up_write(&lists_rwsem);
+
 	list_for_each_entry(device, &device_list, core_list)
 		if (client->add && !add_client_context(device, client))
 			client->add(device);
@@ -423,7 +434,10 @@ void ib_unregister_client(struct ib_client *client)
 		}
 		spin_unlock_irqrestore(&device->client_data_lock, flags);
 	}
+
+	down_write(&lists_rwsem);
 	list_del(&client->list);
+	up_write(&lists_rwsem);
 
 	mutex_unlock(&device_mutex);
 }
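
For illustration, here is a minimal sketch (not part of this patch) of the
read-side pattern the new semaphore enables. find_device_by_name is a
hypothetical helper, not a function added by this series:

/* Hypothetical example: iterate device_list while holding only
 * lists_rwsem for read, without taking device_mutex.
 */
static struct ib_device *find_device_by_name(const char *name)
{
	struct ib_device *device, *found = NULL;

	down_read(&lists_rwsem);
	list_for_each_entry(device, &device_list, core_list) {
		if (!strcmp(device->name, name)) {
			found = device;
			break;
		}
	}
	up_read(&lists_rwsem);

	/* A real API would need to take a reference on the device (or
	 * otherwise pin it) before up_read(), since the device may be
	 * unregistered as soon as the semaphore is released.
	 */
	return found;
}

The same pattern would apply to client_list. Per the comment above, a
caller that already holds device_mutex may skip the down_read() entirely,
since writers take both locks.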