From patchwork Sun Oct 20 06:54:25 2019
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 11200603
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Daniel Jurgens, Parav Pandit
Subject: [PATCH rdma-next 1/3] IB/core: Let IB core distribute cache update events
Date: Sun, 20 Oct 2019 09:54:25 +0300
Message-Id: <20191020065427.8772-2-leon@kernel.org>
In-Reply-To: <20191020065427.8772-1-leon@kernel.org>
References: <20191020065427.8772-1-leon@kernel.org>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Parav Pandit

Currently, when a low-level driver raises Pkey, GID, or port change events,
they are delivered to the registered handlers in the order the handlers were
registered. The IB core and ULPs such as IPoIB are both interested in GID,
LID, and Pkey change events. Since all GID queries done by ULPs are serviced
by the IB core, in the flow below, when a GID change event occurs, the IB
core has not yet updated its GID cache by the time IPoIB queries the GID, so
the IPoIB address is not updated:

mlx5_ib_handle_event()
  ib_dispatch_event()
    ib_cache_event()
      queue_work() -> slow cache update

    [..]
    ipoib_event()
      queue_work()
        [..]
        work handler
          ipoib_ib_dev_flush_light()
            __ipoib_ib_dev_flush()
              ipoib_dev_addr_changed_valid()
                rdma_query_gid() <- Returns old GID, cache not updated.

Hence, let the IB core handle all events that require a cache update first.
Once the cache update work has completed, the IB core distributes the event
to the subscribed clients.
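For readability, here is a condensed sketch of the dispatch path this patch
introduces, pieced together from the hunks below (bodies trimmed and error
paths omitted, so it is an illustration rather than the exact code):

/* device.c: ib_dispatch_event() now routes cache-relevant events through
 * the cache workqueue instead of calling the handlers directly.
 */
void ib_dispatch_event(struct ib_event *event)
{
	if (ib_is_cache_update_event(event))
		ib_enqueue_cache_update_event(event);	/* queue ib_cache_task() */
	else
		ib_dispatch_cache_event_clients(event);	/* notify handlers now */
}

/* cache.c: the work item refreshes the cache first and only then forwards
 * the event to the registered handlers (GID changes are already reported
 * per entry by dispatch_gid_change_event()).
 */
static void ib_cache_task(struct work_struct *_work)
{
	struct ib_update_work *work =
		container_of(_work, struct ib_update_work, work);
	int ret;

	ret = ib_cache_update(work->event.device, work->event.element.port_num,
			      work->enforce_security);
	if (!ret && work->event.event != IB_EVENT_GID_CHANGE)
		ib_dispatch_cache_event_clients(&work->event);
	kfree(work);
}

With this ordering, a client such as IPoIB that calls rdma_query_gid() from
its event handler always sees the already-refreshed cache.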
Fixes: f35faa4ba956 ("IB/core: Simplify ib_query_gid to always refer to cache")
Signed-off-by: Parav Pandit
Reviewed-by: Daniel Jurgens
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/core/cache.c     | 94 +++++++++++++++--------------
 drivers/infiniband/core/core_priv.h |  3 +
 drivers/infiniband/core/device.c    | 26 +++++---
 include/rdma/ib_verbs.h             |  1 -
 4 files changed, 69 insertions(+), 55 deletions(-)

--
2.20.1

diff --git a/drivers/infiniband/core/cache.c b/drivers/infiniband/core/cache.c
index 00fb3eacda19..46dba17b385d 100644
--- a/drivers/infiniband/core/cache.c
+++ b/drivers/infiniband/core/cache.c
@@ -51,9 +51,8 @@ struct ib_pkey_cache {

 struct ib_update_work {
 	struct work_struct work;
-	struct ib_device *device;
-	u8 port_num;
-	bool enforce_security;
+	struct ib_event event;
+	bool enforce_security;
 };

 union ib_gid zgid;
@@ -130,7 +129,7 @@ static void dispatch_gid_change_event(struct ib_device *ib_dev, u8 port)
 	event.element.port_num = port;
 	event.event = IB_EVENT_GID_CHANGE;

-	ib_dispatch_event(&event);
+	ib_dispatch_cache_event_clients(&event);
 }

 static const char * const gid_type_str[] = {
@@ -1387,9 +1386,8 @@ static int config_non_roce_gid_cache(struct ib_device *device,
 	return ret;
 }

-static void ib_cache_update(struct ib_device *device,
-			    u8 port,
-			    bool enforce_security)
+static int
+ib_cache_update(struct ib_device *device, u8 port, bool enforce_security)
 {
 	struct ib_port_attr *tprops = NULL;
 	struct ib_pkey_cache *pkey_cache = NULL, *old_pkey_cache;
@@ -1397,11 +1395,11 @@ static void ib_cache_update(struct ib_device *device,
 	int ret;

 	if (!rdma_is_port_valid(device, port))
-		return;
+		return -EINVAL;

 	tprops = kmalloc(sizeof *tprops, GFP_KERNEL);
 	if (!tprops)
-		return;
+		return -ENOMEM;

 	ret = ib_query_port(device, port, tprops);
 	if (ret) {
@@ -1419,8 +1417,10 @@ static void ib_cache_update(struct ib_device *device,
 		pkey_cache = kmalloc(struct_size(pkey_cache, table,
 						 tprops->pkey_tbl_len),
 				     GFP_KERNEL);
-		if (!pkey_cache)
+		if (!pkey_cache) {
+			ret = -ENOMEM;
 			goto err;
+		}

 		pkey_cache->table_len = tprops->pkey_tbl_len;

@@ -1452,49 +1452,58 @@ static void ib_cache_update(struct ib_device *device,

 	kfree(old_pkey_cache);
 	kfree(tprops);
-	return;
+	return 0;

 err:
 	kfree(pkey_cache);
 	kfree(tprops);
+	return ret;
 }

 static void ib_cache_task(struct work_struct *_work)
 {
 	struct ib_update_work *work =
		container_of(_work, struct ib_update_work, work);
+	int ret;
+
+	ret = ib_cache_update(work->event.device, work->event.element.port_num,
+			      work->enforce_security);
+
+	/* GID event is notified already for individual GID entries by
+	 * dispatch_gid_change_event(). Hence, notify for rest of the
+	 * events.
+	 */
+	if (!ret && work->event.event != IB_EVENT_GID_CHANGE)
+		ib_dispatch_cache_event_clients(&work->event);

-	ib_cache_update(work->device,
-			work->port_num,
-			work->enforce_security);
 	kfree(work);
 }

-static void ib_cache_event(struct ib_event_handler *handler,
-			   struct ib_event *event)
+bool ib_is_cache_update_event(const struct ib_event *event)
+{
+	return (event->event == IB_EVENT_PORT_ERR ||
+		event->event == IB_EVENT_PORT_ACTIVE ||
+		event->event == IB_EVENT_LID_CHANGE ||
+		event->event == IB_EVENT_PKEY_CHANGE ||
+		event->event == IB_EVENT_CLIENT_REREGISTER ||
+		event->event == IB_EVENT_GID_CHANGE);
+}
+
+void ib_enqueue_cache_update_event(const struct ib_event *event)
 {
 	struct ib_update_work *work;

-	if (event->event == IB_EVENT_PORT_ERR ||
-	    event->event == IB_EVENT_PORT_ACTIVE ||
-	    event->event == IB_EVENT_LID_CHANGE ||
-	    event->event == IB_EVENT_PKEY_CHANGE ||
-	    event->event == IB_EVENT_CLIENT_REREGISTER ||
-	    event->event == IB_EVENT_GID_CHANGE) {
-		work = kmalloc(sizeof *work, GFP_ATOMIC);
-		if (work) {
-			INIT_WORK(&work->work, ib_cache_task);
-			work->device = event->device;
-			work->port_num = event->element.port_num;
-			if (event->event == IB_EVENT_PKEY_CHANGE ||
-			    event->event == IB_EVENT_GID_CHANGE)
-				work->enforce_security = true;
-			else
-				work->enforce_security = false;
-
-			queue_work(ib_wq, &work->work);
-		}
-	}
+	work = kzalloc(sizeof(*work), GFP_ATOMIC);
+	if (!work)
+		return;
+
+	INIT_WORK(&work->work, ib_cache_task);
+	work->event = *event;
+	if (event->event == IB_EVENT_PKEY_CHANGE ||
+	    event->event == IB_EVENT_GID_CHANGE)
+		work->enforce_security = true;
+
+	queue_work(ib_wq, &work->work);
 }

 int ib_cache_setup_one(struct ib_device *device)
@@ -1511,9 +1520,6 @@ int ib_cache_setup_one(struct ib_device *device)
 	rdma_for_each_port (device, p)
 		ib_cache_update(device, p, true);

-	INIT_IB_EVENT_HANDLER(&device->cache.event_handler,
-			      device, ib_cache_event);
-	ib_register_event_handler(&device->cache.event_handler);
 	return 0;
 }

@@ -1535,14 +1541,12 @@ void ib_cache_release_one(struct ib_device *device)

 void ib_cache_cleanup_one(struct ib_device *device)
 {
-	/* The cleanup function unregisters the event handler,
-	 * waits for all in-progress workqueue elements and cleans
-	 * up the GID cache. This function should be called after
-	 * the device was removed from the devices list and all
-	 * clients were removed, so the cache exists but is
+	/* The cleanup function waits for all in-progress workqueue
+	 * elements and cleans up the GID cache. This function should be
+	 * called after the device was removed from the devices list and
+	 * all clients were removed, so the cache exists but is
 	 * non-functional and shouldn't be updated anymore.
 	 */
-	ib_unregister_event_handler(&device->cache.event_handler);
 	flush_workqueue(ib_wq);
 	gid_table_cleanup_one(device);
 }
diff --git a/drivers/infiniband/core/core_priv.h b/drivers/infiniband/core/core_priv.h
index 9d07378b5b42..b08018a8cf74 100644
--- a/drivers/infiniband/core/core_priv.h
+++ b/drivers/infiniband/core/core_priv.h
@@ -149,6 +149,9 @@ unsigned long roce_gid_type_mask_support(struct ib_device *ib_dev, u8 port);
 int ib_cache_setup_one(struct ib_device *device);
 void ib_cache_cleanup_one(struct ib_device *device);
 void ib_cache_release_one(struct ib_device *device);
+bool ib_is_cache_update_event(const struct ib_event *event);
+void ib_enqueue_cache_update_event(const struct ib_event *event);
+void ib_dispatch_cache_event_clients(struct ib_event *event);

 #ifdef CONFIG_CGROUP_RDMA
 void ib_device_register_rdmacg(struct ib_device *device);
diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index 2f89c4d64b73..e9ab1289c224 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -1951,15 +1951,7 @@ void ib_unregister_event_handler(struct ib_event_handler *event_handler)
 }
 EXPORT_SYMBOL(ib_unregister_event_handler);

-/**
- * ib_dispatch_event - Dispatch an asynchronous event
- * @event:Event to dispatch
- *
- * Low-level drivers must call ib_dispatch_event() to dispatch the
- * event to all registered event handlers when an asynchronous event
- * occurs.
- */
-void ib_dispatch_event(struct ib_event *event)
+void ib_dispatch_cache_event_clients(struct ib_event *event)
 {
 	unsigned long flags;
 	struct ib_event_handler *handler;
@@ -1971,6 +1963,22 @@ void ib_dispatch_event(struct ib_event *event)

 	spin_unlock_irqrestore(&event->device->event_handler_lock, flags);
 }
+
+/**
+ * ib_dispatch_event - Dispatch an asynchronous event
+ * @event:Event to dispatch
+ *
+ * Low-level drivers must call ib_dispatch_event() to dispatch the
+ * event to all registered event handlers when an asynchronous event
+ * occurs.
+ */
+void ib_dispatch_event(struct ib_event *event)
+{
+	if (ib_is_cache_update_event(event))
+		ib_enqueue_cache_update_event(event);
+	else
+		ib_dispatch_cache_event_clients(event);
+}
 EXPORT_SYMBOL(ib_dispatch_event);

 static int iw_query_port(struct ib_device *device,
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index cca9985b4cbc..1f6d6734f477 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -2148,7 +2148,6 @@ struct ib_port_cache {

 struct ib_cache {
 	rwlock_t lock;
-	struct ib_event_handler event_handler;
 };

 struct ib_port_immutable {

From patchwork Sun Oct 20 06:54:26 2019
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 11200605
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Daniel Jurgens, Parav Pandit
Subject: [PATCH rdma-next 2/3] IB/core: Cut down single member ib_cache structure
Date: Sun, 20 Oct 2019 09:54:26 +0300
Message-Id: <20191020065427.8772-3-leon@kernel.org>
In-Reply-To: <20191020065427.8772-1-leon@kernel.org>
References: <20191020065427.8772-1-leon@kernel.org>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Parav Pandit

Now that the ib_cache structure has only a single member left, merge the
cache lock directly into struct ib_device.
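Viewed side by side, the change simply drops the one-member wrapper; a
condensed before/after sketch (unrelated ib_device fields elided, so this is
an illustration rather than the full structures):

/* Before this patch: the lock is reached through a wrapper struct whose
 * only remaining member (after the previous patch) is the rwlock.
 */
struct ib_cache {
	rwlock_t lock;
};

struct ib_device {
	/* ... */
	struct ib_cache cache;
	/* ... */
};

/* After this patch: the rwlock is embedded directly in ib_device. */
struct ib_device {
	/* ... */
	/* Synchronize GID, Pkey cache entries, subnet prefix, LMC */
	rwlock_t cache_lock;
	/* ... */
};

Accordingly, readers that used to take read_lock_irqsave(&device->cache.lock,
flags) now take read_lock_irqsave(&device->cache_lock, flags), as the hunks
below show.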
Signed-off-by: Parav Pandit
Reviewed-by: Daniel Jurgens
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/core/cache.c | 30 +++++++++++++++---------------
 include/rdma/ib_verbs.h         |  7 ++-----
 2 files changed, 17 insertions(+), 20 deletions(-)

--
2.20.1

diff --git a/drivers/infiniband/core/cache.c b/drivers/infiniband/core/cache.c
index 46dba17b385d..b626ca682004 100644
--- a/drivers/infiniband/core/cache.c
+++ b/drivers/infiniband/core/cache.c
@@ -1039,7 +1039,7 @@ int ib_get_cached_pkey(struct ib_device *device,
 	if (!rdma_is_port_valid(device, port_num))
 		return -EINVAL;

-	read_lock_irqsave(&device->cache.lock, flags);
+	read_lock_irqsave(&device->cache_lock, flags);

 	cache = device->port_data[port_num].cache.pkey;

@@ -1048,7 +1048,7 @@ int ib_get_cached_pkey(struct ib_device *device,
 	else
 		*pkey = cache->table[index];

-	read_unlock_irqrestore(&device->cache.lock, flags);
+	read_unlock_irqrestore(&device->cache_lock, flags);

 	return ret;
 }
@@ -1063,9 +1063,9 @@ int ib_get_cached_subnet_prefix(struct ib_device *device,
 	if (!rdma_is_port_valid(device, port_num))
 		return -EINVAL;

-	read_lock_irqsave(&device->cache.lock, flags);
+	read_lock_irqsave(&device->cache_lock, flags);
 	*sn_pfx = device->port_data[port_num].cache.subnet_prefix;
-	read_unlock_irqrestore(&device->cache.lock, flags);
+	read_unlock_irqrestore(&device->cache_lock, flags);

 	return 0;
 }
@@ -1085,7 +1085,7 @@ int ib_find_cached_pkey(struct ib_device *device,
 	if (!rdma_is_port_valid(device, port_num))
 		return -EINVAL;

-	read_lock_irqsave(&device->cache.lock, flags);
+	read_lock_irqsave(&device->cache_lock, flags);

 	cache = device->port_data[port_num].cache.pkey;

@@ -1106,7 +1106,7 @@ int ib_find_cached_pkey(struct ib_device *device,
 			ret = 0;
 		}

-	read_unlock_irqrestore(&device->cache.lock, flags);
+	read_unlock_irqrestore(&device->cache_lock, flags);

 	return ret;
 }
@@ -1125,7 +1125,7 @@ int ib_find_exact_cached_pkey(struct ib_device *device,
 	if (!rdma_is_port_valid(device, port_num))
 		return -EINVAL;

-	read_lock_irqsave(&device->cache.lock, flags);
+	read_lock_irqsave(&device->cache_lock, flags);

 	cache = device->port_data[port_num].cache.pkey;

@@ -1138,7 +1138,7 @@ int ib_find_exact_cached_pkey(struct ib_device *device,
 			break;
 		}

-	read_unlock_irqrestore(&device->cache.lock, flags);
+	read_unlock_irqrestore(&device->cache_lock, flags);

 	return ret;
 }
@@ -1154,9 +1154,9 @@ int ib_get_cached_lmc(struct ib_device *device,
 	if (!rdma_is_port_valid(device, port_num))
 		return -EINVAL;

-	read_lock_irqsave(&device->cache.lock, flags);
+	read_lock_irqsave(&device->cache_lock, flags);
 	*lmc = device->port_data[port_num].cache.lmc;
-	read_unlock_irqrestore(&device->cache.lock, flags);
+	read_unlock_irqrestore(&device->cache_lock, flags);

 	return ret;
 }
@@ -1172,9 +1172,9 @@ int ib_get_cached_port_state(struct ib_device *device,
 	if (!rdma_is_port_valid(device, port_num))
 		return -EINVAL;

-	read_lock_irqsave(&device->cache.lock, flags);
+	read_lock_irqsave(&device->cache_lock, flags);
 	*port_state = device->port_data[port_num].cache.port_state;
-	read_unlock_irqrestore(&device->cache.lock, flags);
+	read_unlock_irqrestore(&device->cache_lock, flags);

 	return ret;
 }
@@ -1434,7 +1434,7 @@ ib_cache_update(struct ib_device *device, u8 port, bool enforce_security)
 		}
 	}

-	write_lock_irq(&device->cache.lock);
+	write_lock_irq(&device->cache_lock);

 	old_pkey_cache = device->port_data[port].cache.pkey;

@@ -1443,7 +1443,7 @@ ib_cache_update(struct ib_device *device, u8 port, bool enforce_security)
 	device->port_data[port].cache.port_state = tprops->state;
 	device->port_data[port].cache.subnet_prefix = tprops->subnet_prefix;
-	write_unlock_irq(&device->cache.lock);
+	write_unlock_irq(&device->cache_lock);

 	if (enforce_security)
 		ib_security_cache_change(device,
@@ -1511,7 +1511,7 @@ int ib_cache_setup_one(struct ib_device *device)
 	unsigned int p;
 	int err;

-	rwlock_init(&device->cache.lock);
+	rwlock_init(&device->cache_lock);

 	err = gid_table_setup_one(device);
 	if (err)
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 1f6d6734f477..adff05eade2c 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -2146,10 +2146,6 @@ struct ib_port_cache {
 	enum ib_port_state port_state;
 };

-struct ib_cache {
-	rwlock_t lock;
-};
-
 struct ib_port_immutable {
 	int pkey_tbl_len;
 	int gid_tbl_len;
@@ -2609,7 +2605,8 @@ struct ib_device {
 	struct xarray client_data;
 	struct mutex unregistration_lock;

-	struct ib_cache cache;
+	/* Synchronize GID, Pkey cache entries, subnet prefix, LMC */
+	rwlock_t cache_lock;
 	/**
 	 * port_data is indexed by port number
 	 */

From patchwork Sun Oct 20 06:54:27 2019
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 11200607
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Daniel Jurgens, Parav Pandit
Subject: [PATCH rdma-next 3/3] IB/core: Do not notify GID change event of an unregistered device
Date: Sun, 20 Oct 2019 09:54:27 +0300
Message-Id: <20191020065427.8772-4-leon@kernel.org>
In-Reply-To: <20191020065427.8772-1-leon@kernel.org>
References: <20191020065427.8772-1-leon@kernel.org>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Parav Pandit

When an IB device is
undergoing unregistration, its GID cache is cleaned up after all clients
have been unregistered, in the flow below:

__ib_unregister_device()
  disable_device();
  ib_cache_cleanup_one()
    gid_table_cleanup_one()
      cleanup_gid_table_port()

There is no point in generating a GID change event at this stage: the device
has no active clients and is already in the unregistered state.

Signed-off-by: Parav Pandit
Reviewed-by: Daniel Jurgens
Reviewed-by: Leon Romanovsky
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/core/cache.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

--
2.20.1

diff --git a/drivers/infiniband/core/cache.c b/drivers/infiniband/core/cache.c
index b626ca682004..53d8313e8309 100644
--- a/drivers/infiniband/core/cache.c
+++ b/drivers/infiniband/core/cache.c
@@ -818,22 +818,16 @@ static void cleanup_gid_table_port(struct ib_device *ib_dev, u8 port,
 				   struct ib_gid_table *table)
 {
 	int i;
-	bool deleted = false;

 	if (!table)
 		return;

 	mutex_lock(&table->lock);
 	for (i = 0; i < table->sz; ++i) {
-		if (is_gid_entry_valid(table->data_vec[i])) {
+		if (is_gid_entry_valid(table->data_vec[i]))
 			del_gid(ib_dev, port, table, i);
-			deleted = true;
-		}
 	}
 	mutex_unlock(&table->lock);
-
-	if (deleted)
-		dispatch_gid_change_event(ib_dev, port);
 }

 void ib_cache_gid_set_default_gid(struct ib_device *ib_dev, u8 port,
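For context, the resulting helper boils down to the following (condensed from
the hunk above; only the event dispatch is gone, the per-entry deletion is
unchanged):

static void cleanup_gid_table_port(struct ib_device *ib_dev, u8 port,
				   struct ib_gid_table *table)
{
	int i;

	if (!table)
		return;

	mutex_lock(&table->lock);
	for (i = 0; i < table->sz; ++i) {
		if (is_gid_entry_valid(table->data_vec[i]))
			del_gid(ib_dev, port, table, i);
	}
	mutex_unlock(&table->lock);
	/* No dispatch_gid_change_event() here: by the time
	 * ib_cache_cleanup_one() runs, disable_device() has already
	 * removed every client, so nobody is left to consume the event.
	 */
}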