From patchwork Thu Jul 30 15:33:25 2015
X-Patchwork-Submitter: Matan Barak
X-Patchwork-Id: 6903651
From: Matan Barak
To: Doug Ledford
Cc: Matan Barak, Or Gerlitz, Jason Gunthorpe, linux-rdma@vger.kernel.org,
	Sean Hefty, Somnath Kotur, Moni Shoua, talal@mellanox.com,
	haggaie@mellanox.com
Subject: [PATCH for-next V7 04/10] IB/core: Add rwsem to allow reading device list or client list
Date: Thu, 30 Jul 2015 18:33:25 +0300
Message-Id: <1438270411-17648-5-git-send-email-matanb@mellanox.com>
In-Reply-To: <1438270411-17648-1-git-send-email-matanb@mellanox.com>
References:
 <1438270411-17648-1-git-send-email-matanb@mellanox.com>

From: Haggai Eran

Currently the RDMA subsystem's device list and client list are protected by
a single mutex. This prevents adding user-facing APIs that iterate these
lists, since using them may cause a deadlock.

The patch attempts to solve this problem by adding a read-write semaphore
to protect the lists. Readers now don't need the mutex, and are safe just
by read-locking the semaphore.

The ib_register_device, ib_register_client, ib_unregister_device, and
ib_unregister_client functions are modified to lock the semaphore for
write during their respective list modifications.

Also, in order to make sure client callbacks are called only between add()
and remove() calls, the code is changed to add items to the lists only
after the add() calls, and to remove them from the lists before the
remove() calls.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Haggai Eran
Signed-off-by: Matan Barak
---
 drivers/infiniband/core/device.c | 39 ++++++++++++++++++++++++++++-----------
 1 file changed, 28 insertions(+), 11 deletions(-)

diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index 9567756..f08d438 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -55,17 +55,24 @@ struct ib_client_data {
 struct workqueue_struct *ib_wq;
 EXPORT_SYMBOL_GPL(ib_wq);
 
+/* The device_list and client_list contain devices and clients after their
+ * registration has completed, and the devices and clients are removed
+ * during unregistration.
+ */
 static LIST_HEAD(device_list);
 static LIST_HEAD(client_list);
 
 /*
- * device_mutex protects access to both device_list and client_list.
- * There's no real point to using multiple locks or something fancier
- * like an rwsem: we always access both lists, and we're always
- * modifying one list or the other list. In any case this is not a
- * hot path so there's no point in trying to optimize.
+ * device_mutex and lists_rwsem protect access to both device_list and
+ * client_list. device_mutex protects writer access by device and client
+ * registration / de-registration. lists_rwsem protects reader access to
+ * these lists. Iterators of these lists must lock it for read, while updates
+ * to the lists must be done with a write lock. A special case is when the
+ * device_mutex is locked. In this case locking the lists for read access is
+ * not necessary as the device_mutex implies it.
  */
 static DEFINE_MUTEX(device_mutex);
+static DECLARE_RWSEM(lists_rwsem);
+
 
 static int ib_device_check_mandatory(struct ib_device *device)
 {
@@ -305,8 +312,6 @@ int ib_register_device(struct ib_device *device,
 		goto out;
 	}
 
-	list_add_tail(&device->core_list, &device_list);
-
 	device->reg_state = IB_DEV_REGISTERED;
 
 	{
@@ -317,6 +322,10 @@ int ib_register_device(struct ib_device *device,
 			client->add(device);
 	}
 
+	down_write(&lists_rwsem);
+	list_add_tail(&device->core_list, &device_list);
+	up_write(&lists_rwsem);
+
 out:
 	mutex_unlock(&device_mutex);
 	return ret;
@@ -337,12 +346,14 @@ void ib_unregister_device(struct ib_device *device)
 
 	mutex_lock(&device_mutex);
 
+	down_write(&lists_rwsem);
+	list_del(&device->core_list);
+	up_write(&lists_rwsem);
+
 	list_for_each_entry_reverse(client, &client_list, list)
 		if (client->remove)
 			client->remove(device);
 
-	list_del(&device->core_list);
-
 	mutex_unlock(&device_mutex);
 
 	ib_device_unregister_sysfs(device);
@@ -375,11 +386,14 @@ int ib_register_client(struct ib_client *client)
 
 	mutex_lock(&device_mutex);
 
-	list_add_tail(&client->list, &client_list);
 	list_for_each_entry(device, &device_list, core_list)
 		if (client->add && !add_client_context(device, client))
 			client->add(device);
 
+	down_write(&lists_rwsem);
+	list_add_tail(&client->list, &client_list);
+	up_write(&lists_rwsem);
+
 	mutex_unlock(&device_mutex);
 
 	return 0;
@@ -402,6 +416,10 @@ void ib_unregister_client(struct ib_client *client)
 
 	mutex_lock(&device_mutex);
 
+	down_write(&lists_rwsem);
+	list_del(&client->list);
+	up_write(&lists_rwsem);
+
 	list_for_each_entry(device, &device_list, core_list) {
 		if (client->remove)
 			client->remove(device);
@@ -414,7 +432,6 @@ void ib_unregister_client(struct ib_client *client)
 		}
 		spin_unlock_irqrestore(&device->client_data_lock, flags);
 	}
-	list_del(&client->list);
 
 	mutex_unlock(&device_mutex);
 }