From patchwork Wed Feb 13 04:12:48 2019
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 10809107
X-Patchwork-Delegate: jgg@ziepe.ca
From: Jason Gunthorpe
To: linux-rdma@vger.kernel.org
Cc: Jason Gunthorpe
Subject: [PATCH 02/10] RDMA/device: Consolidate ib_device per_port data into one place
Date: Tue, 12 Feb 2019 21:12:48 -0700
Message-Id: <20190213041256.22437-3-jgg@ziepe.ca>
In-Reply-To: <20190213041256.22437-1-jgg@ziepe.ca>
References: <20190213041256.22437-1-jgg@ziepe.ca>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Jason Gunthorpe

There is no reason to have three allocations of per-port data. Combine
them together and make the lifetime of all the per-port data match the
struct ib_device.

Following patches will require more port-specific data; now there is a
good place to put it.
Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/core/cache.c    |  4 +-
 drivers/infiniband/core/device.c   | 70 +++++++++-------------
 drivers/infiniband/core/security.c | 24 +++++-----
 include/rdma/ib_verbs.h            | 74 +++++++++++++++++-------------
 4 files changed, 78 insertions(+), 94 deletions(-)

diff --git a/drivers/infiniband/core/cache.c b/drivers/infiniband/core/cache.c
index 3d137d8381a944..9d0e8aca741a6d 100644
--- a/drivers/infiniband/core/cache.c
+++ b/drivers/infiniband/core/cache.c
@@ -881,8 +881,8 @@ static int _gid_table_setup_one(struct ib_device *ib_dev)
 	for (port = 0; port < ib_dev->phys_port_cnt; port++) {
 		u8 rdma_port = port + rdma_start_port(ib_dev);
 
-		table = alloc_gid_table(
-			ib_dev->port_immutable[rdma_port].gid_tbl_len);
+		table = alloc_gid_table(
+			ib_dev->port_data[rdma_port].immutable.gid_tbl_len);
 		if (!table)
 			goto rollback_table_setup;
 
diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index d3bd09a6b53891..58591408bb1b35 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -292,8 +292,7 @@ static void ib_device_release(struct device *device)
 	WARN_ON(refcount_read(&dev->refcount));
 	ib_cache_release_one(dev);
 	ib_security_release_port_pkey_list(dev);
-	kfree(dev->port_pkey_list);
-	kfree(dev->port_immutable);
+	kfree(dev->port_data);
 	xa_destroy(&dev->client_data);
 	kfree(dev);
 }
@@ -461,27 +460,31 @@ static int verify_immutable(const struct ib_device *dev, u8 port)
 			    rdma_max_mad_size(dev, port) != 0);
 }
 
-static int read_port_immutable(struct ib_device *device)
+static int setup_port_data(struct ib_device *device)
 {
 	unsigned int port;
 	int ret;
 
-	/**
-	 * device->port_immutable is indexed directly by the port number to make
+	/*
+	 * device->port_data is indexed directly by the port number to make
 	 * access to this data as efficient as possible.
 	 *
-	 * Therefore port_immutable is declared as a 1 based array with
-	 * potential empty slots at the beginning.
+	 * Therefore port_data is declared as a 1 based array with potential
+	 * empty slots at the beginning.
 	 */
-	device->port_immutable =
-		kcalloc(rdma_end_port(device) + 1,
-			sizeof(*device->port_immutable), GFP_KERNEL);
-	if (!device->port_immutable)
+	device->port_data = kcalloc(rdma_end_port(device) + 1,
+				    sizeof(*device->port_data), GFP_KERNEL);
+	if (!device->port_data)
 		return -ENOMEM;
 
 	rdma_for_each_port (device, port) {
-		ret = device->ops.get_port_immutable(
-			device, port, &device->port_immutable[port]);
+		struct ib_port_data *pdata = &device->port_data[port];
+
+		spin_lock_init(&pdata->pkey_list_lock);
+		INIT_LIST_HEAD(&pdata->pkey_list);
+
+		ret = device->ops.get_port_immutable(device, port,
+						     &pdata->immutable);
 		if (ret)
 			return ret;
 
@@ -500,30 +503,6 @@ void ib_get_device_fw_str(struct ib_device *dev, char *str)
 }
 EXPORT_SYMBOL(ib_get_device_fw_str);
 
-static int setup_port_pkey_list(struct ib_device *device)
-{
-	int i;
-
-	/**
-	 * device->port_pkey_list is indexed directly by the port number,
-	 * Therefore it is declared as a 1 based array with potential empty
-	 * slots at the beginning.
-	 */
-	device->port_pkey_list = kcalloc(rdma_end_port(device) + 1,
-					 sizeof(*device->port_pkey_list),
-					 GFP_KERNEL);
-
-	if (!device->port_pkey_list)
-		return -ENOMEM;
-
-	for (i = 0; i < (rdma_end_port(device) + 1); i++) {
-		spin_lock_init(&device->port_pkey_list[i].list_lock);
-		INIT_LIST_HEAD(&device->port_pkey_list[i].pkey_list);
-	}
-
-	return 0;
-}
-
 static void ib_policy_change_task(struct work_struct *work)
 {
 	struct ib_device *dev;
@@ -661,10 +640,9 @@ static int setup_device(struct ib_device *device)
 	if (ret)
 		return ret;
 
-	ret = read_port_immutable(device);
+	ret = setup_port_data(device);
 	if (ret) {
-		dev_warn(&device->dev,
-			 "Couldn't create per port immutable data\n");
+		dev_warn(&device->dev, "Couldn't create per-port data\n");
 		return ret;
 	}
@@ -676,12 +654,6 @@ static int setup_device(struct ib_device *device)
 		return ret;
 	}
 
-	ret = setup_port_pkey_list(device);
-	if (ret) {
-		dev_warn(&device->dev, "Couldn't create per port_pkey_list\n");
-		return ret;
-	}
-
 	return 0;
 }
@@ -1207,7 +1179,8 @@ int ib_find_gid(struct ib_device *device, union ib_gid *gid,
 		if (!rdma_protocol_ib(device, port))
 			continue;
 
-		for (i = 0; i < device->port_immutable[port].gid_tbl_len; ++i) {
+		for (i = 0; i < device->port_data[port].immutable.gid_tbl_len;
+		     ++i) {
 			ret = rdma_query_gid(device, port, i, &tmp_gid);
 			if (ret)
 				return ret;
@@ -1239,7 +1212,8 @@ int ib_find_pkey(struct ib_device *device,
 	u16 tmp_pkey;
 	int partial_ix = -1;
 
-	for (i = 0; i < device->port_immutable[port_num].pkey_tbl_len; ++i) {
+	for (i = 0; i < device->port_data[port_num].immutable.pkey_tbl_len;
+	     ++i) {
 		ret = ib_query_pkey(device, port_num, i, &tmp_pkey);
 		if (ret)
 			return ret;
diff --git a/drivers/infiniband/core/security.c b/drivers/infiniband/core/security.c
index 492702b836003c..1ab423b19f778f 100644
--- a/drivers/infiniband/core/security.c
+++ b/drivers/infiniband/core/security.c
@@ -49,16 +49,15 @@ static struct pkey_index_qp_list *get_pkey_idx_qp_list(struct ib_port_pkey *pp)
 	struct pkey_index_qp_list *tmp_pkey;
 	struct ib_device *dev = pp->sec->dev;
 
-	spin_lock(&dev->port_pkey_list[pp->port_num].list_lock);
-	list_for_each_entry(tmp_pkey,
-			    &dev->port_pkey_list[pp->port_num].pkey_list,
-			    pkey_index_list) {
+	spin_lock(&dev->port_data[pp->port_num].pkey_list_lock);
+	list_for_each_entry (tmp_pkey, &dev->port_data[pp->port_num].pkey_list,
+			     pkey_index_list) {
 		if (tmp_pkey->pkey_index == pp->pkey_index) {
 			pkey = tmp_pkey;
 			break;
 		}
 	}
-	spin_unlock(&dev->port_pkey_list[pp->port_num].list_lock);
+	spin_unlock(&dev->port_data[pp->port_num].pkey_list_lock);
 
 	return pkey;
 }
@@ -263,12 +262,12 @@ static int port_pkey_list_insert(struct ib_port_pkey *pp)
 		if (!pkey)
 			return -ENOMEM;
 
-		spin_lock(&dev->port_pkey_list[port_num].list_lock);
+		spin_lock(&dev->port_data[port_num].pkey_list_lock);
 		/* Check for the PKey again.  A racing process may
 		 * have created it.
 		 */
 		list_for_each_entry(tmp_pkey,
-				    &dev->port_pkey_list[port_num].pkey_list,
+				    &dev->port_data[port_num].pkey_list,
 				    pkey_index_list) {
 			if (tmp_pkey->pkey_index == pp->pkey_index) {
 				kfree(pkey);
@@ -283,9 +282,9 @@ static int port_pkey_list_insert(struct ib_port_pkey *pp)
 			spin_lock_init(&pkey->qp_list_lock);
 			INIT_LIST_HEAD(&pkey->qp_list);
 			list_add(&pkey->pkey_index_list,
-				 &dev->port_pkey_list[port_num].pkey_list);
+				 &dev->port_data[port_num].pkey_list);
 		}
-		spin_unlock(&dev->port_pkey_list[port_num].list_lock);
+		spin_unlock(&dev->port_data[port_num].pkey_list_lock);
 	}
 
 	spin_lock(&pkey->qp_list_lock);
@@ -551,9 +550,8 @@ void ib_security_cache_change(struct ib_device *device,
 {
 	struct pkey_index_qp_list *pkey;
 
-	list_for_each_entry(pkey,
-			    &device->port_pkey_list[port_num].pkey_list,
-			    pkey_index_list) {
+	list_for_each_entry (pkey, &device->port_data[port_num].pkey_list,
+			     pkey_index_list) {
 		check_pkey_qps(pkey,
 			       device,
 			       port_num,
@@ -569,7 +567,7 @@ void ib_security_release_port_pkey_list(struct ib_device *device)
 
 	rdma_for_each_port (device, i) {
 		list_for_each_entry_safe(pkey, tmp_pkey,
-					 &device->port_pkey_list[i].pkey_list,
+					 &device->port_data[i].pkey_list,
 					 pkey_index_list) {
 			list_del(&pkey->pkey_index_list);
 			kfree(pkey);
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 99a4868f4d9c58..4225cd9eb6f840 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -2198,6 +2198,13 @@ struct ib_port_immutable {
 	u32 max_mad_size;
 };
 
+struct ib_port_data {
+	struct ib_port_immutable immutable;
+
+	spinlock_t pkey_list_lock;
+	struct list_head pkey_list;
+};
+
 /* rdma netdev type - specifies protocol type */
 enum rdma_netdev_t {
 	RDMA_NETDEV_OPA_VNIC,
@@ -2243,12 +2250,6 @@ struct rdma_netdev_alloc_params {
 					   struct net_device *netdev, void *param);
 };
 
-struct ib_port_pkey_list {
-	/* Lock to hold while modifying the list. */
-	spinlock_t list_lock;
-	struct list_head pkey_list;
-};
-
 struct ib_counters {
 	struct ib_device	*device;
 	struct ib_uobject	*uobject;
@@ -2547,14 +2548,12 @@ struct ib_device {
 	struct ib_cache               cache;
 	/**
-	 * port_immutable is indexed by port number
+	 * port_data is indexed by port number
 	 */
-	struct ib_port_immutable     *port_immutable;
+	struct ib_port_data *port_data;
 
 	int			      num_comp_vectors;
 
-	struct ib_port_pkey_list     *port_pkey_list;
-
 	struct iw_cm_verbs	     *iwcm;
 
 	struct module               *owner;
@@ -2861,34 +2860,38 @@ static inline int rdma_is_port_valid(const struct ib_device *device,
 static inline bool rdma_is_grh_required(const struct ib_device *device,
 					u8 port_num)
 {
-	return device->port_immutable[port_num].core_cap_flags &
-	       RDMA_CORE_PORT_IB_GRH_REQUIRED;
+	return device->port_data[port_num].immutable.core_cap_flags &
+	       RDMA_CORE_PORT_IB_GRH_REQUIRED;
 }
 
 static inline bool rdma_protocol_ib(const struct ib_device *device, u8 port_num)
 {
-	return device->port_immutable[port_num].core_cap_flags & RDMA_CORE_CAP_PROT_IB;
+	return device->port_data[port_num].immutable.core_cap_flags &
+	       RDMA_CORE_CAP_PROT_IB;
 }
 
 static inline bool rdma_protocol_roce(const struct ib_device *device, u8 port_num)
 {
-	return device->port_immutable[port_num].core_cap_flags &
-		(RDMA_CORE_CAP_PROT_ROCE | RDMA_CORE_CAP_PROT_ROCE_UDP_ENCAP);
+	return device->port_data[port_num].immutable.core_cap_flags &
+	       (RDMA_CORE_CAP_PROT_ROCE | RDMA_CORE_CAP_PROT_ROCE_UDP_ENCAP);
 }
 
 static inline bool rdma_protocol_roce_udp_encap(const struct ib_device *device, u8 port_num)
 {
-	return device->port_immutable[port_num].core_cap_flags & RDMA_CORE_CAP_PROT_ROCE_UDP_ENCAP;
+	return device->port_data[port_num].immutable.core_cap_flags &
+	       RDMA_CORE_CAP_PROT_ROCE_UDP_ENCAP;
 }
 
 static inline bool rdma_protocol_roce_eth_encap(const struct ib_device *device, u8 port_num)
 {
-	return device->port_immutable[port_num].core_cap_flags & RDMA_CORE_CAP_PROT_ROCE;
+	return device->port_data[port_num].immutable.core_cap_flags &
+	       RDMA_CORE_CAP_PROT_ROCE;
 }
 
 static inline bool rdma_protocol_iwarp(const struct ib_device *device, u8 port_num)
 {
-	return device->port_immutable[port_num].core_cap_flags & RDMA_CORE_CAP_PROT_IWARP;
+	return device->port_data[port_num].immutable.core_cap_flags &
+	       RDMA_CORE_CAP_PROT_IWARP;
 }
 
 static inline bool rdma_ib_or_roce(const struct ib_device *device, u8 port_num)
@@ -2899,12 +2902,14 @@ static inline bool rdma_ib_or_roce(const struct ib_device *device, u8 port_num)
 
 static inline bool rdma_protocol_raw_packet(const struct ib_device *device, u8 port_num)
 {
-	return device->port_immutable[port_num].core_cap_flags & RDMA_CORE_CAP_PROT_RAW_PACKET;
+	return device->port_data[port_num].immutable.core_cap_flags &
+	       RDMA_CORE_CAP_PROT_RAW_PACKET;
 }
 
 static inline bool rdma_protocol_usnic(const struct ib_device *device, u8 port_num)
 {
-	return device->port_immutable[port_num].core_cap_flags & RDMA_CORE_CAP_PROT_USNIC;
+	return device->port_data[port_num].immutable.core_cap_flags &
+	       RDMA_CORE_CAP_PROT_USNIC;
 }
 
 /**
@@ -2921,7 +2926,8 @@ static inline bool rdma_protocol_usnic(const struct ib_device *device, u8 port_n
  */
 static inline bool rdma_cap_ib_mad(const struct ib_device *device, u8 port_num)
 {
-	return device->port_immutable[port_num].core_cap_flags & RDMA_CORE_CAP_IB_MAD;
+	return device->port_data[port_num].immutable.core_cap_flags &
+	       RDMA_CORE_CAP_IB_MAD;
 }
 
 /**
@@ -2945,8 +2951,8 @@ static inline bool rdma_cap_ib_mad(const struct ib_device *device, u8 port_num)
  */
 static inline bool rdma_cap_opa_mad(struct ib_device *device, u8 port_num)
 {
-	return (device->port_immutable[port_num].core_cap_flags & RDMA_CORE_CAP_OPA_MAD)
-		== RDMA_CORE_CAP_OPA_MAD;
+	return (device->port_data[port_num].immutable.core_cap_flags &
+		RDMA_CORE_CAP_OPA_MAD) == RDMA_CORE_CAP_OPA_MAD;
 }
 
 /**
@@ -2971,7 +2977,8 @@ static inline bool rdma_cap_opa_mad(struct ib_device *device, u8 port_num)
  */
 static inline bool rdma_cap_ib_smi(const struct ib_device *device, u8 port_num)
 {
-	return device->port_immutable[port_num].core_cap_flags & RDMA_CORE_CAP_IB_SMI;
+	return device->port_data[port_num].immutable.core_cap_flags &
+	       RDMA_CORE_CAP_IB_SMI;
 }
 
 /**
@@ -2991,7 +2998,8 @@ static inline bool rdma_cap_ib_smi(const struct ib_device *device, u8 port_num)
  */
 static inline bool rdma_cap_ib_cm(const struct ib_device *device, u8 port_num)
 {
-	return device->port_immutable[port_num].core_cap_flags & RDMA_CORE_CAP_IB_CM;
+	return device->port_data[port_num].immutable.core_cap_flags &
+	       RDMA_CORE_CAP_IB_CM;
 }
 
 /**
@@ -3008,7 +3016,8 @@ static inline bool rdma_cap_ib_cm(const struct ib_device *device, u8 port_num)
  */
 static inline bool rdma_cap_iw_cm(const struct ib_device *device, u8 port_num)
 {
-	return device->port_immutable[port_num].core_cap_flags & RDMA_CORE_CAP_IW_CM;
+	return device->port_data[port_num].immutable.core_cap_flags &
+	       RDMA_CORE_CAP_IW_CM;
 }
 
 /**
@@ -3028,7 +3037,8 @@ static inline bool rdma_cap_iw_cm(const struct ib_device *device, u8 port_num)
  */
 static inline bool rdma_cap_ib_sa(const struct ib_device *device, u8 port_num)
 {
-	return device->port_immutable[port_num].core_cap_flags & RDMA_CORE_CAP_IB_SA;
+	return device->port_data[port_num].immutable.core_cap_flags &
+	       RDMA_CORE_CAP_IB_SA;
 }
 
 /**
@@ -3068,7 +3078,8 @@ static inline bool rdma_cap_ib_mcast(const struct ib_device *device, u8 port_num
  */
 static inline bool rdma_cap_af_ib(const struct ib_device *device, u8 port_num)
 {
-	return device->port_immutable[port_num].core_cap_flags & RDMA_CORE_CAP_AF_IB;
+	return device->port_data[port_num].immutable.core_cap_flags &
+	       RDMA_CORE_CAP_AF_IB;
 }
 
 /**
@@ -3089,7 +3100,8 @@ static inline bool rdma_cap_af_ib(const struct ib_device *device, u8 port_num)
  */
 static inline bool rdma_cap_eth_ah(const struct ib_device *device, u8 port_num)
 {
-	return device->port_immutable[port_num].core_cap_flags & RDMA_CORE_CAP_ETH_AH;
+	return device->port_data[port_num].immutable.core_cap_flags &
+	       RDMA_CORE_CAP_ETH_AH;
 }
 
 /**
@@ -3103,7 +3115,7 @@ static inline bool rdma_cap_eth_ah(const struct ib_device *device, u8 port_num)
  */
 static inline bool rdma_cap_opa_ah(struct ib_device *device, u8 port_num)
 {
-	return (device->port_immutable[port_num].core_cap_flags &
+	return (device->port_data[port_num].immutable.core_cap_flags &
 		RDMA_CORE_CAP_OPA_AH) == RDMA_CORE_CAP_OPA_AH;
 }
 
/**
@@ -3121,7 +3133,7 @@ static inline bool rdma_cap_opa_ah(struct ib_device *device, u8 port_num)
  */
 static inline size_t rdma_max_mad_size(const struct ib_device *device, u8 port_num)
 {
-	return device->port_immutable[port_num].max_mad_size;
+	return device->port_data[port_num].immutable.max_mad_size;
 }
 
 /**