From patchwork Wed Sep 1 00:08:04 2021
From: Sukadev Bhattiprolu
To: netdev@vger.kernel.org
Cc: Brian King, cforno12@linux.ibm.com, Dany Madden, Rick Lindsley
Subject: [PATCH net-next 1/9] ibmvnic: Consolidate code in replenish_rx_pool()
Date: Tue, 31 Aug 2021 17:08:04 -0700
Message-Id: <20210901000812.120968-2-sukadev@linux.ibm.com>
In-Reply-To: <20210901000812.120968-1-sukadev@linux.ibm.com>

For better readability, consolidate related code in replenish_rx_pool()
and add some comments.

Signed-off-by: Sukadev Bhattiprolu
Reviewed-by: Dany Madden
---
 drivers/net/ethernet/ibm/ibmvnic.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index a775c69e4fd7..e8b1231be485 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -371,6 +371,8 @@ static void replenish_rx_pool(struct ibmvnic_adapter *adapter,
 		}
 
 		index = pool->free_map[pool->next_free];
+		pool->free_map[pool->next_free] = IBMVNIC_INVALID_MAP;
+		pool->next_free = (pool->next_free + 1) % pool->size;
 
 		if (pool->rx_buff[index].skb)
 			dev_err(dev, "Inconsistent free_map!\n");
@@ -380,14 +382,15 @@ static void replenish_rx_pool(struct ibmvnic_adapter *adapter,
 		dst = pool->long_term_buff.buff + offset;
 		memset(dst, 0, pool->buff_size);
 		dma_addr = pool->long_term_buff.addr + offset;
-		pool->rx_buff[index].data = dst;
 
-		pool->free_map[pool->next_free] = IBMVNIC_INVALID_MAP;
+		/* add the skb to an rx_buff in the pool */
+		pool->rx_buff[index].data = dst;
 		pool->rx_buff[index].dma = dma_addr;
 		pool->rx_buff[index].skb = skb;
 		pool->rx_buff[index].pool_index = pool->index;
 		pool->rx_buff[index].size = pool->buff_size;
 
+		/* queue the rx_buff for the next send_subcrq_indirect */
 		sub_crq = &ind_bufp->indir_arr[ind_bufp->index++];
 		memset(sub_crq, 0, sizeof(*sub_crq));
 		sub_crq->rx_add.first = IBMVNIC_CRQ_CMD;
@@ -405,7 +408,8 @@ static void replenish_rx_pool(struct ibmvnic_adapter *adapter,
 		shift = 8;
 #endif
 		sub_crq->rx_add.len = cpu_to_be32(pool->buff_size << shift);
-		pool->next_free = (pool->next_free + 1) % pool->size;
+
+		/* if send_subcrq_indirect queue is full, flush to VIOS */
 		if (ind_bufp->index == IBMVNIC_MAX_IND_DESCS ||
 		    i == count - 1) {
 			lpar_rc =
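The free_map bookkeeping that this patch groups together can be modeled in
isolation. The short userspace sketch below is only an approximation of the
idea (the pool size and the INVALID_MAP constant are illustrative, not the
driver's): take the next free buffer index, mark the slot consumed, and
advance next_free circularly.

	#include <stdio.h>

	#define POOL_SIZE   8          /* illustrative; the driver uses pool->size */
	#define INVALID_MAP (-1)       /* stands in for IBMVNIC_INVALID_MAP        */

	static int free_map[POOL_SIZE];
	static int next_free;

	/* Take the next free buffer index, consume the slot, advance circularly. */
	static int take_free_index(void)
	{
		int index = free_map[next_free];

		free_map[next_free] = INVALID_MAP;
		next_free = (next_free + 1) % POOL_SIZE;
		return index;
	}

	int main(void)
	{
		for (int i = 0; i < POOL_SIZE; i++)
			free_map[i] = i;

		printf("first two indices: %d %d\n",
		       take_free_index(), take_free_index());
		return 0;
	}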
From patchwork Wed Sep 1 00:08:05 2021
From: Sukadev Bhattiprolu
To: netdev@vger.kernel.org
Cc: Brian King, cforno12@linux.ibm.com, Dany Madden, Rick Lindsley
Subject: [PATCH net-next 2/9] ibmvnic: Fix up some comments and messages
Date: Tue, 31 Aug 2021 17:08:05 -0700
Message-Id: <20210901000812.120968-3-sukadev@linux.ibm.com>
In-Reply-To: <20210901000812.120968-1-sukadev@linux.ibm.com>
Add/update some comments/function headers and fix up some messages.

Signed-off-by: Sukadev Bhattiprolu
Reviewed-by: Dany Madden
Reported-by: kernel test robot
---
 drivers/net/ethernet/ibm/ibmvnic.c | 40 +++++++++++++++++++++++++-----
 1 file changed, 34 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index e8b1231be485..911315b10731 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -243,14 +243,13 @@ static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
 
 	rc = ibmvnic_wait_for_completion(adapter, &adapter->fw_done, 10000);
 	if (rc) {
-		dev_err(dev,
-			"Long term map request aborted or timed out,rc = %d\n",
+		dev_err(dev, "LTB map request aborted or timed out, rc = %d\n",
 			rc);
 		goto out;
 	}
 
 	if (adapter->fw_done_rc) {
-		dev_err(dev, "Couldn't map long term buffer,rc = %d\n",
+		dev_err(dev, "Couldn't map LTB, rc = %d\n",
 			adapter->fw_done_rc);
 		rc = -1;
 		goto out;
@@ -281,7 +280,9 @@ static void free_long_term_buff(struct ibmvnic_adapter *adapter,
 	    adapter->reset_reason != VNIC_RESET_MOBILITY &&
 	    adapter->reset_reason != VNIC_RESET_TIMEOUT)
 		send_request_unmap(adapter, ltb->map_id);
+
 	dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
+
 	ltb->buff = NULL;
 	ltb->map_id = 0;
 }
@@ -574,6 +575,10 @@ static int reset_rx_pools(struct ibmvnic_adapter *adapter)
 	return 0;
 }
 
+/**
+ * Release any rx_pools attached to @adapter.
+ * Safe to call this multiple times - even if no pools are attached.
+ */
 static void release_rx_pools(struct ibmvnic_adapter *adapter)
 {
 	struct ibmvnic_rx_pool *rx_pool;
@@ -628,6 +633,9 @@ static int init_rx_pools(struct net_device *netdev)
 		return -1;
 	}
 
+	/* Set num_active_rx_pools early. If we fail below after partial
+	 * allocation, release_rx_pools() will know how many to look for.
+	 */
 	adapter->num_active_rx_pools = rxadd_subcrqs;
 
 	for (i = 0; i < rxadd_subcrqs; i++) {
@@ -646,6 +654,7 @@
 		rx_pool->free_map = kcalloc(rx_pool->size, sizeof(int),
 					    GFP_KERNEL);
 		if (!rx_pool->free_map) {
+			dev_err(dev, "Couldn't alloc free_map %d\n", i);
 			release_rx_pools(adapter);
 			return -1;
 		}
@@ -739,10 +748,17 @@ static void release_one_tx_pool(struct ibmvnic_adapter *adapter,
 	free_long_term_buff(adapter, &tx_pool->long_term_buff);
 }
 
+/**
+ * Release any tx and tso pools attached to @adapter.
+ * Safe to call this multiple times - even if no pools are attached.
+ */
 static void release_tx_pools(struct ibmvnic_adapter *adapter)
 {
 	int i;
 
+	/* init_tx_pools() ensures that ->tx_pool and ->tso_pool are
+	 * both NULL or both non-NULL. So we only need to check one.
+	 */
 	if (!adapter->tx_pool)
 		return;
 
@@ -793,6 +809,7 @@ static int init_one_tx_pool(struct net_device *netdev,
 static int init_tx_pools(struct net_device *netdev)
 {
 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+	struct device *dev = &adapter->vdev->dev;
 	int tx_subcrqs;
 	u64 buff_size;
 	int i, rc;
@@ -805,17 +822,27 @@ static int init_tx_pools(struct net_device *netdev)
 
 	adapter->tso_pool = kcalloc(tx_subcrqs,
 				    sizeof(struct ibmvnic_tx_pool), GFP_KERNEL);
+	/* To simplify release_tx_pools() ensure that ->tx_pool and
+	 * ->tso_pool are either both NULL or both non-NULL.
+	 */
 	if (!adapter->tso_pool) {
 		kfree(adapter->tx_pool);
 		adapter->tx_pool = NULL;
 		return -1;
 	}
 
+	/* Set num_active_tx_pools early. If we fail below after partial
+	 * allocation, release_tx_pools() will know how many to look for.
+	 */
 	adapter->num_active_tx_pools = tx_subcrqs;
 
 	for (i = 0; i < tx_subcrqs; i++) {
 		buff_size = adapter->req_mtu + VLAN_HLEN;
 		buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
+
+		dev_dbg(dev, "Init tx pool %d [%llu, %llu]\n",
+			i, adapter->req_tx_entries_per_subcrq, buff_size);
+
 		rc = init_one_tx_pool(netdev, &adapter->tx_pool[i],
 				      adapter->req_tx_entries_per_subcrq,
 				      buff_size);
@@ -4774,9 +4801,10 @@ static void handle_query_map_rsp(union ibmvnic_crq *crq,
 		dev_err(dev, "Error %ld in QUERY_MAP_RSP\n", rc);
 		return;
 	}
-	netdev_dbg(netdev, "page_size = %d\ntot_pages = %d\nfree_pages = %d\n",
-		   crq->query_map_rsp.page_size, crq->query_map_rsp.tot_pages,
-		   crq->query_map_rsp.free_pages);
+	netdev_dbg(netdev, "page_size = %d\ntot_pages = %u\nfree_pages = %u\n",
+		   crq->query_map_rsp.page_size,
+		   __be32_to_cpu(crq->query_map_rsp.tot_pages),
+		   __be32_to_cpu(crq->query_map_rsp.free_pages));
 }
 
 static void handle_query_cap_rsp(union ibmvnic_crq *crq,
From patchwork Wed Sep 1 00:08:06 2021
From: Sukadev Bhattiprolu
To: netdev@vger.kernel.org
Cc: Brian King, cforno12@linux.ibm.com, Dany Madden, Rick Lindsley
Subject: [PATCH net-next 3/9] ibmvnic: Use/rename local vars in init_rx_pools
Date: Tue, 31 Aug 2021 17:08:06 -0700
Message-Id: <20210901000812.120968-4-sukadev@linux.ibm.com>
In-Reply-To: <20210901000812.120968-1-sukadev@linux.ibm.com>

To make the code more readable, use/rename some local variables.
Basically, we have a set of num_pools pools. Each pool has pool_size
buffers, and each buffer is buff_size bytes. Since pool_size is a bit
ambiguous (size in bytes or in buffers?), add a comment in the header
file to make it explicit.
Signed-off-by: Sukadev Bhattiprolu
Reviewed-by: Dany Madden
---
 drivers/net/ethernet/ibm/ibmvnic.c | 17 +++++++++--------
 drivers/net/ethernet/ibm/ibmvnic.h |  2 +-
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 911315b10731..a611bd3f2539 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -618,14 +618,16 @@ static int init_rx_pools(struct net_device *netdev)
 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
 	struct device *dev = &adapter->vdev->dev;
 	struct ibmvnic_rx_pool *rx_pool;
-	int rxadd_subcrqs;
+	u64 num_pools;
+	u64 pool_size;	/* # of buffers in one pool */
 	u64 buff_size;
 	int i, j;
 
-	rxadd_subcrqs = adapter->num_active_rx_scrqs;
+	num_pools = adapter->num_active_rx_scrqs;
+	pool_size = adapter->req_rx_add_entries_per_subcrq;
 	buff_size = adapter->cur_rx_buf_sz;
 
-	adapter->rx_pool = kcalloc(rxadd_subcrqs,
+	adapter->rx_pool = kcalloc(num_pools,
 				   sizeof(struct ibmvnic_rx_pool),
 				   GFP_KERNEL);
 	if (!adapter->rx_pool) {
@@ -636,17 +638,16 @@ static int init_rx_pools(struct net_device *netdev)
 	/* Set num_active_rx_pools early. If we fail below after partial
 	 * allocation, release_rx_pools() will know how many to look for.
 	 */
-	adapter->num_active_rx_pools = rxadd_subcrqs;
+	adapter->num_active_rx_pools = num_pools;
 
-	for (i = 0; i < rxadd_subcrqs; i++) {
+	for (i = 0; i < num_pools; i++) {
 		rx_pool = &adapter->rx_pool[i];
 
 		netdev_dbg(adapter->netdev,
 			   "Initializing rx_pool[%d], %lld buffs, %lld bytes each\n",
-			   i, adapter->req_rx_add_entries_per_subcrq,
-			   buff_size);
+			   i, pool_size, buff_size);
 
-		rx_pool->size = adapter->req_rx_add_entries_per_subcrq;
+		rx_pool->size = pool_size;
 		rx_pool->index = i;
 		rx_pool->buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
 		rx_pool->active = 1;
diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
index 22df602323bc..5652566818fb 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.h
+++ b/drivers/net/ethernet/ibm/ibmvnic.h
@@ -827,7 +827,7 @@ struct ibmvnic_rx_buff {
 
 struct ibmvnic_rx_pool {
 	struct ibmvnic_rx_buff *rx_buff;
-	int size;
+	int size;			/* # of buffers in the pool */
 	int index;
 	int buff_size;
 	atomic_t available;
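To make the num_pools / pool_size / buff_size relationship from the commit
message concrete, here is a minimal userspace sketch. The numbers are made
up; only the arithmetic matters (each pool is backed by one long term
buffer of pool_size * buff_size bytes):

	#include <stdio.h>

	int main(void)
	{
		unsigned long long num_pools = 4;     /* one pool per rx sub-CRQ (illustrative) */
		unsigned long long pool_size = 512;   /* buffers per pool                       */
		unsigned long long buff_size = 9088;  /* bytes per buffer, already aligned      */

		/* one long term buffer (LTB) backs each pool */
		unsigned long long ltb_bytes = pool_size * buff_size;

		printf("%llu pools x %llu buffers x %llu bytes\n",
		       num_pools, pool_size, buff_size);
		printf("LTB per pool: %llu bytes, total: %llu bytes\n",
		       ltb_bytes, num_pools * ltb_bytes);
		return 0;
	}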
From patchwork Wed Sep 1 00:08:07 2021
From: Sukadev Bhattiprolu
To: netdev@vger.kernel.org
Cc: Brian King, cforno12@linux.ibm.com, Dany Madden, Rick Lindsley
Subject: [PATCH net-next 4/9] ibmvnic: Use/rename local vars in init_tx_pools
Date: Tue, 31 Aug 2021 17:08:07 -0700
Message-Id: <20210901000812.120968-5-sukadev@linux.ibm.com>
In-Reply-To: <20210901000812.120968-1-sukadev@linux.ibm.com>

Use/rename local variables in init_tx_pools() for consistency with
init_rx_pools() and for readability.
Also add some comments.

Signed-off-by: Sukadev Bhattiprolu
Reviewed-by: Dany Madden
---
 drivers/net/ethernet/ibm/ibmvnic.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index a611bd3f2539..4c6739b250df 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -777,31 +777,31 @@ static void release_tx_pools(struct ibmvnic_adapter *adapter)
 
 static int init_one_tx_pool(struct net_device *netdev,
 			    struct ibmvnic_tx_pool *tx_pool,
-			    int num_entries, int buf_size)
+			    int pool_size, int buf_size)
 {
 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
 	int i;
 
-	tx_pool->tx_buff = kcalloc(num_entries,
+	tx_pool->tx_buff = kcalloc(pool_size,
 				   sizeof(struct ibmvnic_tx_buff),
 				   GFP_KERNEL);
 	if (!tx_pool->tx_buff)
 		return -1;
 
 	if (alloc_long_term_buff(adapter, &tx_pool->long_term_buff,
-				 num_entries * buf_size))
+				 pool_size * buf_size))
 		return -1;
 
-	tx_pool->free_map = kcalloc(num_entries, sizeof(int), GFP_KERNEL);
+	tx_pool->free_map = kcalloc(pool_size, sizeof(int), GFP_KERNEL);
 	if (!tx_pool->free_map)
 		return -1;
 
-	for (i = 0; i < num_entries; i++)
+	for (i = 0; i < pool_size; i++)
 		tx_pool->free_map[i] = i;
 
 	tx_pool->consumer_index = 0;
 	tx_pool->producer_index = 0;
-	tx_pool->num_buffers = num_entries;
+	tx_pool->num_buffers = pool_size;
 	tx_pool->buf_size = buf_size;
 
 	return 0;
@@ -811,17 +811,20 @@ static int init_tx_pools(struct net_device *netdev)
 {
 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
 	struct device *dev = &adapter->vdev->dev;
-	int tx_subcrqs;
+	int num_pools;
+	u64 pool_size;	/* # of buffers in pool */
 	u64 buff_size;
 	int i, rc;
 
-	tx_subcrqs = adapter->num_active_tx_scrqs;
-	adapter->tx_pool = kcalloc(tx_subcrqs,
+	pool_size = adapter->req_tx_entries_per_subcrq;
+	num_pools = adapter->num_active_tx_scrqs;
+
+	adapter->tx_pool = kcalloc(num_pools,
 				   sizeof(struct ibmvnic_tx_pool), GFP_KERNEL);
 	if (!adapter->tx_pool)
 		return -1;
 
-	adapter->tso_pool = kcalloc(tx_subcrqs,
+	adapter->tso_pool = kcalloc(num_pools,
 				    sizeof(struct ibmvnic_tx_pool), GFP_KERNEL);
 	/* To simplify release_tx_pools() ensure that ->tx_pool and
 	 * ->tso_pool are either both NULL or both non-NULL.
@@ -835,9 +838,9 @@ static int init_tx_pools(struct net_device *netdev)
 	/* Set num_active_tx_pools early. If we fail below after partial
 	 * allocation, release_tx_pools() will know how many to look for.
 	 */
-	adapter->num_active_tx_pools = tx_subcrqs;
+	adapter->num_active_tx_pools = num_pools;
 
-	for (i = 0; i < tx_subcrqs; i++) {
+	for (i = 0; i < num_pools; i++) {
 		buff_size = adapter->req_mtu + VLAN_HLEN;
 		buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
 
@@ -845,8 +848,7 @@ static int init_tx_pools(struct net_device *netdev)
 			i, adapter->req_tx_entries_per_subcrq, buff_size);
 
 		rc = init_one_tx_pool(netdev, &adapter->tx_pool[i],
-				      adapter->req_tx_entries_per_subcrq,
-				      buff_size);
+				      pool_size, buff_size);
 		if (rc) {
 			release_tx_pools(adapter);
 			return rc;
From patchwork Wed Sep 1 00:08:08 2021
From: Sukadev Bhattiprolu
To: netdev@vger.kernel.org
Cc: Brian King, cforno12@linux.ibm.com, Dany Madden, Rick Lindsley
Subject: [PATCH net-next 5/9] ibmvnic: init_tx_pools move loop-invariant code out
Date: Tue, 31 Aug 2021 17:08:08 -0700
Message-Id: <20210901000812.120968-6-sukadev@linux.ibm.com>
In-Reply-To: <20210901000812.120968-1-sukadev@linux.ibm.com>

In init_tx_pools() move some loop-invariant code out of the loop.

Signed-off-by: Sukadev Bhattiprolu
Reviewed-by: Dany Madden
---
 drivers/net/ethernet/ibm/ibmvnic.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 4c6739b250df..8894afdb3cb3 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -839,11 +839,10 @@ static int init_tx_pools(struct net_device *netdev)
 	 * allocation, release_tx_pools() will know how many to look for.
 	 */
 	adapter->num_active_tx_pools = num_pools;
+	buff_size = adapter->req_mtu + VLAN_HLEN;
+	buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
 
 	for (i = 0; i < num_pools; i++) {
-		buff_size = adapter->req_mtu + VLAN_HLEN;
-		buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
-
 		dev_dbg(dev, "Init tx pool %d [%llu, %llu]\n",
 			i, adapter->req_tx_entries_per_subcrq, buff_size);
 
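A minimal sketch of the hoisting done above, assuming a local ALIGN() macro
that mirrors the kernel's and purely illustrative values for req_mtu,
VLAN_HLEN and L1_CACHE_BYTES: buff_size depends only on the MTU, so it can
be computed once before the loop.

	#include <stdio.h>

	#define ALIGN(x, a)     (((x) + (a) - 1) & ~((unsigned long long)(a) - 1))
	#define VLAN_HLEN       4        /* illustrative */
	#define L1_CACHE_BYTES  128      /* illustrative */

	int main(void)
	{
		unsigned long long req_mtu = 9000;
		unsigned long long buff_size;
		int num_pools = 4;

		/* loop-invariant: computed once instead of once per pool */
		buff_size = ALIGN(req_mtu + VLAN_HLEN, L1_CACHE_BYTES);

		for (int i = 0; i < num_pools; i++)
			printf("tx pool %d: buff_size %llu\n", i, buff_size);
		return 0;
	}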
From patchwork Wed Sep 1 00:08:09 2021
From: Sukadev Bhattiprolu
To: netdev@vger.kernel.org
Cc: Brian King, cforno12@linux.ibm.com, Dany Madden, Rick Lindsley
Subject: [PATCH net-next 6/9] ibmvnic: Use bitmap for LTB map_ids
Date: Tue, 31 Aug 2021 17:08:09 -0700
Message-Id: <20210901000812.120968-7-sukadev@linux.ibm.com>
In-Reply-To: <20210901000812.120968-1-sukadev@linux.ibm.com>

In a follow-on patch, we will reuse long term buffers when possible.
When doing so, we have to be careful to properly assign map ids. We can
no longer assign them sequentially because a lower map id may be
available, and we could wrap at 255 and collide with an in-use map id.

Instead, use a bitmap to track active map ids and to find a free map id.
We don't need to take locks here since the map_id only changes during
reset, and at that time only the reset worker thread should be using the
adapter.

Noticed this when analyzing an error Dany Madden ran into with the
patch set.
Reported-by: Dany Madden
Signed-off-by: Sukadev Bhattiprolu
Reviewed-by: Dany Madden
---
 drivers/net/ethernet/ibm/ibmvnic.c | 12 ++++++++----
 drivers/net/ethernet/ibm/ibmvnic.h |  3 ++-
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 8894afdb3cb3..30153a8bb5ec 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -228,8 +228,9 @@ static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
 		dev_err(dev, "Couldn't alloc long term buffer\n");
 		return -ENOMEM;
 	}
-	ltb->map_id = adapter->map_id;
-	adapter->map_id++;
+	ltb->map_id = find_first_zero_bit(adapter->map_ids,
+					  MAX_MAP_ID);
+	bitmap_set(adapter->map_ids, ltb->map_id, 1);
 
 	mutex_lock(&adapter->fw_lock);
 	adapter->fw_done_rc = 0;
@@ -284,6 +285,8 @@ static void free_long_term_buff(struct ibmvnic_adapter *adapter,
 
 	dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
 
 	ltb->buff = NULL;
+	/* mark this map_id free */
+	bitmap_clear(adapter->map_ids, ltb->map_id, 1);
 	ltb->map_id = 0;
 }
@@ -1231,8 +1234,6 @@ static int init_resources(struct ibmvnic_adapter *adapter)
 		return rc;
 	}
 
-	adapter->map_id = 1;
-
 	rc = init_napi(adapter);
 	if (rc)
 		return rc;
@@ -5553,6 +5554,9 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
 	adapter->vdev = dev;
 	adapter->netdev = netdev;
 	adapter->login_pending = false;
+	memset(&adapter->map_ids, 0, sizeof(adapter->map_ids));
+	/* map_ids start at 1, so ensure map_id 0 is always "in-use" */
+	bitmap_set(adapter->map_ids, 0, 1);
 
 	ether_addr_copy(adapter->mac_addr, mac_addr_p);
 	ether_addr_copy(netdev->dev_addr, adapter->mac_addr);
diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
index 5652566818fb..e97f1aa98c05 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.h
+++ b/drivers/net/ethernet/ibm/ibmvnic.h
@@ -979,7 +979,8 @@ struct ibmvnic_adapter {
 	u64 opt_tx_entries_per_subcrq;
 	u64 opt_rxba_entries_per_subcrq;
 	__be64 tx_rx_desc_req;
-	u8 map_id;
+#define MAX_MAP_ID 255
+	DECLARE_BITMAP(map_ids, MAX_MAP_ID);
 	u32 num_active_rx_scrqs;
 	u32 num_active_rx_pools;
 	u32 num_active_rx_napi;
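The allocation policy described in the commit message can be modeled
without the kernel bitmap helpers. The userspace sketch below uses a plain
bool array in place of find_first_zero_bit()/bitmap_set()/bitmap_clear(),
with MAX_MAP_ID mirroring the limit added to ibmvnic.h; it only illustrates
why a freed low id can be handed out again without wrapping a counter.

	#include <stdbool.h>
	#include <stdio.h>

	#define MAX_MAP_ID 255

	static bool map_ids[MAX_MAP_ID];        /* true = id in use */

	static int alloc_map_id(void)
	{
		for (int id = 0; id < MAX_MAP_ID; id++) {
			if (!map_ids[id]) {     /* find_first_zero_bit() */
				map_ids[id] = true;  /* bitmap_set()     */
				return id;
			}
		}
		return -1;                      /* all ids in use */
	}

	static void free_map_id(int id)
	{
		map_ids[id] = false;            /* bitmap_clear() */
	}

	int main(void)
	{
		map_ids[0] = true;              /* id 0 stays reserved, as in the probe path */

		int a = alloc_map_id();         /* 1 */
		int b = alloc_map_id();         /* 2 */

		free_map_id(a);
		printf("next id after freeing %d: %d (b is still %d)\n",
		       a, alloc_map_id(), b);
		return 0;
	}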
From patchwork Wed Sep 1 00:08:10 2021
From: Sukadev Bhattiprolu
To: netdev@vger.kernel.org
Cc: Brian King, cforno12@linux.ibm.com, Dany Madden, Rick Lindsley
Subject: [PATCH net-next 7/9] ibmvnic: Reuse LTB when possible
Date: Tue, 31 Aug 2021 17:08:10 -0700
Message-Id: <20210901000812.120968-8-sukadev@linux.ibm.com>
In-Reply-To: <20210901000812.120968-1-sukadev@linux.ibm.com>

Reuse the long term buffer during a reset as long as its size has not
changed. If the size has changed, free it and allocate a new one of the
appropriate size.
When we do this, alloc_long_term_buff() and reset_long_term_buff() become
identical. Drop reset_long_term_buff().

Signed-off-by: Sukadev Bhattiprolu
Reviewed-by: Dany Madden
---
 drivers/net/ethernet/ibm/ibmvnic.c | 122 ++++++++++++++---------------
 1 file changed, 59 insertions(+), 63 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 30153a8bb5ec..1bb5996c4313 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -108,6 +108,8 @@ static int init_crq_queue(struct ibmvnic_adapter *adapter);
 static int send_query_phys_parms(struct ibmvnic_adapter *adapter);
 static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
 					 struct ibmvnic_sub_crq_queue *tx_scrq);
+static void free_long_term_buff(struct ibmvnic_adapter *adapter,
+				struct ibmvnic_long_term_buff *ltb);
 
 struct ibmvnic_stat {
 	char name[ETH_GSTRING_LEN];
@@ -214,23 +216,62 @@ static int ibmvnic_wait_for_completion(struct ibmvnic_adapter *adapter,
 	return -ETIMEDOUT;
 }
 
+/**
+ * Reuse long term buffer unless size has changed.
+ */
+static bool reuse_ltb(struct ibmvnic_long_term_buff *ltb, int size)
+{
+	return (ltb->buff && ltb->size == size);
+}
+
+/**
+ * Allocate a long term buffer of the specified size and notify VIOS.
+ *
+ * If the given @ltb already has the correct size, reuse it. Otherwise if
+ * its non-NULL, free it. Then allocate a new one of the correct size.
+ * Notify the VIOS either way since we may now be working with a new VIOS.
+ *
+ * Allocating larger chunks of memory during resets, specially LPM or under
+ * low memory situations can cause resets to fail/timeout and for LPAR to
+ * lose connectivity. So hold onto the LTB even if we fail to communicate
+ * with the VIOS and reuse it on next open. Free LTB when adapter is closed.
+ */
 static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
 				struct ibmvnic_long_term_buff *ltb, int size)
 {
 	struct device *dev = &adapter->vdev->dev;
 	int rc;
 
-	ltb->size = size;
-	ltb->buff = dma_alloc_coherent(dev, ltb->size, &ltb->addr,
-				       GFP_KERNEL);
+	if (!reuse_ltb(ltb, size)) {
+		dev_dbg(dev,
+			"LTB size changed from 0x%llx to 0x%x, reallocating\n",
+			ltb->size, size);
+		free_long_term_buff(adapter, ltb);
+	}
 
-	if (!ltb->buff) {
-		dev_err(dev, "Couldn't alloc long term buffer\n");
-		return -ENOMEM;
+	if (ltb->buff) {
+		dev_dbg(dev, "Reusing LTB [map %d, size 0x%llx]\n",
+			ltb->map_id, ltb->size);
+	} else {
+		ltb->buff = dma_alloc_coherent(dev, size, &ltb->addr,
+					       GFP_KERNEL);
+		if (!ltb->buff) {
+			dev_err(dev, "Couldn't alloc long term buffer\n");
+			return -ENOMEM;
+		}
+		ltb->size = size;
+
+		ltb->map_id = find_first_zero_bit(adapter->map_ids,
+						  MAX_MAP_ID);
+		bitmap_set(adapter->map_ids, ltb->map_id, 1);
+
+		dev_dbg(dev,
+			"Allocated new LTB [map %d, size 0x%llx]\n",
+			ltb->map_id, ltb->size);
 	}
-	ltb->map_id = find_first_zero_bit(adapter->map_ids,
-					  MAX_MAP_ID);
-	bitmap_set(adapter->map_ids, ltb->map_id, 1);
+
+	/* Ensure ltb is zeroed - specially when reusing it. */
+	memset(ltb->buff, 0, ltb->size);
 
 	mutex_lock(&adapter->fw_lock);
 	adapter->fw_done_rc = 0;
@@ -257,10 +298,7 @@ static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
 	}
 	rc = 0;
 out:
-	if (rc) {
-		dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
-		ltb->buff = NULL;
-	}
+	/* don't free LTB on communication error - see function header */
 	mutex_unlock(&adapter->fw_lock);
 	return rc;
 }
@@ -290,43 +328,6 @@ static void free_long_term_buff(struct ibmvnic_adapter *adapter,
 	ltb->map_id = 0;
 }
 
-static int reset_long_term_buff(struct ibmvnic_adapter *adapter,
-				struct ibmvnic_long_term_buff *ltb)
-{
-	struct device *dev = &adapter->vdev->dev;
-	int rc;
-
-	memset(ltb->buff, 0, ltb->size);
-
-	mutex_lock(&adapter->fw_lock);
-	adapter->fw_done_rc = 0;
-
-	reinit_completion(&adapter->fw_done);
-	rc = send_request_map(adapter, ltb->addr, ltb->size, ltb->map_id);
-	if (rc) {
-		mutex_unlock(&adapter->fw_lock);
-		return rc;
-	}
-
-	rc = ibmvnic_wait_for_completion(adapter, &adapter->fw_done, 10000);
-	if (rc) {
-		dev_info(dev,
-			 "Reset failed, long term map request timed out or aborted\n");
-		mutex_unlock(&adapter->fw_lock);
-		return rc;
-	}
-
-	if (adapter->fw_done_rc) {
-		dev_info(dev,
-			 "Reset failed, attempting to free and reallocate buffer\n");
-		free_long_term_buff(adapter, ltb);
-		mutex_unlock(&adapter->fw_lock);
-		return alloc_long_term_buff(adapter, ltb, ltb->size);
-	}
-	mutex_unlock(&adapter->fw_lock);
-	return 0;
-}
-
 static void deactivate_rx_pools(struct ibmvnic_adapter *adapter)
 {
 	int i;
@@ -548,18 +549,10 @@ static int reset_rx_pools(struct ibmvnic_adapter *adapter)
 
 		netdev_dbg(adapter->netdev, "Re-setting rx_pool[%d]\n", i);
 
-		if (rx_pool->buff_size != buff_size) {
-			free_long_term_buff(adapter, &rx_pool->long_term_buff);
-			rx_pool->buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
-			rc = alloc_long_term_buff(adapter,
-						  &rx_pool->long_term_buff,
-						  rx_pool->size *
-						  rx_pool->buff_size);
-		} else {
-			rc = reset_long_term_buff(adapter,
-						  &rx_pool->long_term_buff);
-		}
-
+		rx_pool->buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
+		rc = alloc_long_term_buff(adapter,
+					  &rx_pool->long_term_buff,
+					  rx_pool->size * rx_pool->buff_size);
 		if (rc)
 			return rc;
 
@@ -692,9 +685,12 @@ static int init_rx_pools(struct net_device *netdev)
 
 static int reset_one_tx_pool(struct ibmvnic_adapter *adapter,
 			     struct ibmvnic_tx_pool *tx_pool)
 {
+	struct ibmvnic_long_term_buff *ltb;
 	int rc, i;
 
-	rc = reset_long_term_buff(adapter, &tx_pool->long_term_buff);
+	ltb = &tx_pool->long_term_buff;
+
+	rc = alloc_long_term_buff(adapter, ltb, ltb->size);
 	if (rc)
 		return rc;
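A minimal userspace sketch of the reuse rule above, with malloc() standing
in for dma_alloc_coherent() and the VIOS (re)map step omitted; the struct
and helper names are simplified stand-ins, not the driver's:

	#include <stdbool.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	struct ltb {
		void *buff;
		int   size;
	};

	/* Reuse the buffer only if it exists and the requested size is unchanged. */
	static bool reuse_ltb(const struct ltb *ltb, int size)
	{
		return ltb->buff && ltb->size == size;
	}

	static int alloc_ltb(struct ltb *ltb, int size)
	{
		if (!reuse_ltb(ltb, size)) {    /* size changed: free and start over */
			free(ltb->buff);
			ltb->buff = NULL;
		}

		if (!ltb->buff) {
			ltb->buff = malloc(size);  /* dma_alloc_coherent() in the driver */
			if (!ltb->buff)
				return -1;
			ltb->size = size;
		}

		memset(ltb->buff, 0, ltb->size);   /* zero it even when reusing */
		return 0;                          /* the driver would notify the VIOS here */
	}

	int main(void)
	{
		struct ltb ltb = { 0 };

		alloc_ltb(&ltb, 4096);             /* allocates   */
		alloc_ltb(&ltb, 4096);             /* reuses      */
		alloc_ltb(&ltb, 8192);             /* reallocates */
		printf("final size: %d\n", ltb.size);
		free(ltb.buff);
		return 0;
	}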
From patchwork Wed Sep 1 00:08:11 2021
From: Sukadev Bhattiprolu
To: netdev@vger.kernel.org
Cc: Brian King, cforno12@linux.ibm.com, Dany Madden, Rick Lindsley
Subject: [PATCH net-next 8/9] ibmvnic: Reuse rx pools when possible
Date: Tue, 31 Aug 2021 17:08:11 -0700
Message-Id: <20210901000812.120968-9-sukadev@linux.ibm.com>
In-Reply-To: <20210901000812.120968-1-sukadev@linux.ibm.com>

Rather than releasing the rx pools and reallocating them on every
reset, reuse the rx pools unless the pool parameters (number of pools,
size of each pool, or size of each buffer in a pool) have changed. If
the pool parameters did change, release the old pools (if any) and
allocate new ones.

Specifically, release rx pools if:
	- the adapter is being removed,
	- pool parameters change during reset,
	- we encounter an error when opening the adapter in response to a
	  user request (in ibmvnic_open()).

and don't release them:
	- in __ibmvnic_close() or
	- on errors in __ibmvnic_open()

in the hope that we can reuse them on the next reset.

With these changes, reset_rx_pools() can be dropped because its
optimization is now included in init_rx_pools() itself.

cleanup_rx_pools() releases all the skbs associated with the pool and is
called from ibmvnic_cleanup(), which is called on every reset. Since we
want to reuse skbs across resets, move cleanup_rx_pools() out of
ibmvnic_cleanup() and call it only when the user closes the adapter.

Add two new adapter fields, ->prev_rx_buf_sz and ->prev_rx_pool_size, to
keep track of the previous values and use them to decide whether to
reuse or reallocate the pools.

Signed-off-by: Sukadev Bhattiprolu
Reviewed-by: Dany Madden
---
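As a quick illustration of the reuse test described above, here is an
editor's condensed, stand-alone sketch (not driver code): the struct and
helper below are hypothetical stand-ins for the handful of
ibmvnic_adapter fields involved and for reuse_rx_pools(). It builds with
any C99 compiler.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the relevant adapter fields. */
    struct rx_pool_geometry {
        unsigned long num_pools;   /* number of rx pools/queues */
        unsigned long pool_size;   /* buffers per pool */
        unsigned long buff_size;   /* size of each buffer */
    };

    /* Reuse only if pools exist, the buffer size is unchanged, and the
     * old geometry is either identical to the requested one or still
     * within the allowed [min, max] range.
     */
    static bool can_reuse_rx_pools(const struct rx_pool_geometry *prev,
                                   const struct rx_pool_geometry *next,
                                   unsigned long min_pools,
                                   unsigned long max_pools,
                                   unsigned long min_size,
                                   unsigned long max_size)
    {
        if (!prev->num_pools)                   /* nothing allocated yet */
            return false;
        if (prev->buff_size != next->buff_size) /* buffer size must match */
            return false;
        if (prev->num_pools == next->num_pools &&
            prev->pool_size == next->pool_size)
            return true;
        return prev->num_pools >= min_pools && prev->num_pools <= max_pools &&
               prev->pool_size >= min_size && prev->pool_size <= max_size;
    }

    int main(void)
    {
        struct rx_pool_geometry prev = { 8, 512, 2048 };
        struct rx_pool_geometry next = { 8, 512, 2048 };

        printf("reuse: %d\n",
               can_reuse_rx_pools(&prev, &next, 1, 16, 64, 4096));
        return 0;
    }

Note that, as in the patch below, a change in the number of pools or the
pool size alone does not force a reallocation as long as the previous
values are still within the advertised limits.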
 drivers/net/ethernet/ibm/ibmvnic.c | 183 +++++++++++++++++++----------
 drivers/net/ethernet/ibm/ibmvnic.h |   3 +
 2 files changed, 122 insertions(+), 64 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 1bb5996c4313..ebd525b6fc87 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -368,20 +368,27 @@ static void replenish_rx_pool(struct ibmvnic_adapter *adapter,
 	 * be 0.
 	 */
 	for (i = ind_bufp->index; i < count; ++i) {
-		skb = netdev_alloc_skb(adapter->netdev, pool->buff_size);
+		index = pool->free_map[pool->next_free];
+
+		/* We maybe reusing the skb from earlier resets. Allocate
+		 * only if necessary. But since the LTB may have changed
+		 * during reset (see init_rx_pools()), update LTB below
+		 * even if reusing skb.
+		 */
+		skb = pool->rx_buff[index].skb;
 		if (!skb) {
-			dev_err(dev, "Couldn't replenish rx buff\n");
-			adapter->replenish_no_mem++;
-			break;
+			skb = netdev_alloc_skb(adapter->netdev,
+					       pool->buff_size);
+			if (!skb) {
+				dev_err(dev, "Couldn't replenish rx buff\n");
+				adapter->replenish_no_mem++;
+				break;
+			}
 		}

-		index = pool->free_map[pool->next_free];
 		pool->free_map[pool->next_free] = IBMVNIC_INVALID_MAP;
 		pool->next_free = (pool->next_free + 1) % pool->size;

-		if (pool->rx_buff[index].skb)
-			dev_err(dev, "Inconsistent free_map!\n");
-
 		/* Copy the skb to the long term mapped DMA buffer */
 		offset = index * pool->buff_size;
 		dst = pool->long_term_buff.buff + offset;
@@ -532,45 +539,6 @@ static int init_stats_token(struct ibmvnic_adapter *adapter)
 	return 0;
 }

-static int reset_rx_pools(struct ibmvnic_adapter *adapter)
-{
-	struct ibmvnic_rx_pool *rx_pool;
-	u64 buff_size;
-	int rx_scrqs;
-	int i, j, rc;
-
-	if (!adapter->rx_pool)
-		return -1;
-
-	buff_size = adapter->cur_rx_buf_sz;
-	rx_scrqs = adapter->num_active_rx_pools;
-	for (i = 0; i < rx_scrqs; i++) {
-		rx_pool = &adapter->rx_pool[i];
-
-		netdev_dbg(adapter->netdev, "Re-setting rx_pool[%d]\n", i);
-
-		rx_pool->buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
-		rc = alloc_long_term_buff(adapter,
-					  &rx_pool->long_term_buff,
-					  rx_pool->size * rx_pool->buff_size);
-		if (rc)
-			return rc;
-
-		for (j = 0; j < rx_pool->size; j++)
-			rx_pool->free_map[j] = j;
-
-		memset(rx_pool->rx_buff, 0,
-		       rx_pool->size * sizeof(struct ibmvnic_rx_buff));
-
-		atomic_set(&rx_pool->available, 0);
-		rx_pool->next_alloc = 0;
-		rx_pool->next_free = 0;
-		rx_pool->active = 1;
-	}
-
-	return 0;
-}
-
 /**
  * Release any rx_pools attached to @adapter.
  * Safe to call this multiple times - even if no pools are attached.
@@ -589,6 +557,7 @@ static void release_rx_pools(struct ibmvnic_adapter *adapter)
 		netdev_dbg(adapter->netdev, "Releasing rx_pool[%d]\n", i);

 		kfree(rx_pool->free_map);
+
 		free_long_term_buff(adapter, &rx_pool->long_term_buff);

 		if (!rx_pool->rx_buff)
@@ -607,8 +576,53 @@ static void release_rx_pools(struct ibmvnic_adapter *adapter)
 	kfree(adapter->rx_pool);
 	adapter->rx_pool = NULL;
 	adapter->num_active_rx_pools = 0;
+	adapter->prev_rx_pool_size = 0;
+}
+
+/**
+ * Return true if we can reuse the existing rx pools.
+ * NOTE: This assumes that all pools have the same number of buffers
+ * which is the case currently. If that changes, we must fix this.
+ */
+static bool reuse_rx_pools(struct ibmvnic_adapter *adapter)
+{
+	u64 old_num_pools, new_num_pools;
+	u64 old_pool_size, new_pool_size;
+	u64 old_buff_size, new_buff_size;
+
+	if (!adapter->rx_pool)
+		return false;
+
+	old_num_pools = adapter->num_active_rx_pools;
+	new_num_pools = adapter->req_rx_queues;
+
+	old_pool_size = adapter->prev_rx_pool_size;
+	new_pool_size = adapter->req_rx_add_entries_per_subcrq;
+
+	old_buff_size = adapter->prev_rx_buf_sz;
+	new_buff_size = adapter->cur_rx_buf_sz;
+
+	/* Require buff size to be exactly same for now */
+	if (old_buff_size != new_buff_size)
+		return false;
+
+	if (old_num_pools == new_num_pools && old_pool_size == new_pool_size)
+		return true;
+
+	if (old_num_pools < adapter->min_rx_queues ||
+	    old_num_pools > adapter->max_rx_queues ||
+	    old_pool_size < adapter->min_rx_add_entries_per_subcrq ||
+	    old_pool_size > adapter->max_rx_add_entries_per_subcrq)
+		return false;
+
+	return true;
 }

+/**
+ * Initialize the set of receiver pools in the adapter. Reuse existing
+ * pools if possible. Otherwise allocate a new set of pools before
+ * initializing them.
+ */
 static int init_rx_pools(struct net_device *netdev)
 {
 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
@@ -619,10 +633,18 @@ static int init_rx_pools(struct net_device *netdev)
 	u64 buff_size;
 	int i, j;

-	num_pools = adapter->num_active_rx_scrqs;
 	pool_size = adapter->req_rx_add_entries_per_subcrq;
+	num_pools = adapter->req_rx_queues;
 	buff_size = adapter->cur_rx_buf_sz;

+	if (reuse_rx_pools(adapter)) {
+		dev_dbg(dev, "Reusing rx pools\n");
+		goto update_ltb;
+	}
+
+	/* Allocate/populate the pools. */
+	release_rx_pools(adapter);
+
 	adapter->rx_pool = kcalloc(num_pools,
 				   sizeof(struct ibmvnic_rx_pool),
 				   GFP_KERNEL);
@@ -646,14 +668,12 @@ static int init_rx_pools(struct net_device *netdev)
 		rx_pool->size = pool_size;
 		rx_pool->index = i;
 		rx_pool->buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
-		rx_pool->active = 1;

 		rx_pool->free_map = kcalloc(rx_pool->size, sizeof(int),
 					    GFP_KERNEL);
 		if (!rx_pool->free_map) {
 			dev_err(dev, "Couldn't alloc free_map %d\n", i);
-			release_rx_pools(adapter);
-			return -1;
+			goto out_release;
 		}

 		rx_pool->rx_buff = kcalloc(rx_pool->size,
@@ -661,25 +681,58 @@ static int init_rx_pools(struct net_device *netdev)
 					   GFP_KERNEL);
 		if (!rx_pool->rx_buff) {
 			dev_err(dev, "Couldn't alloc rx buffers\n");
-			release_rx_pools(adapter);
-			return -1;
+			goto out_release;
 		}
+	}
+
+	adapter->prev_rx_pool_size = pool_size;
+	adapter->prev_rx_buf_sz = adapter->cur_rx_buf_sz;
+
+update_ltb:
+	for (i = 0; i < num_pools; i++) {
+		rx_pool = &adapter->rx_pool[i];
+		dev_dbg(dev, "Updating LTB for rx pool %d [%d, %d]\n",
+			i, rx_pool->size, rx_pool->buff_size);

 		if (alloc_long_term_buff(adapter, &rx_pool->long_term_buff,
-					 rx_pool->size * rx_pool->buff_size)) {
-			release_rx_pools(adapter);
-			return -1;
-		}
+					 rx_pool->size * rx_pool->buff_size))
+			goto out;
+
+		for (j = 0; j < rx_pool->size; ++j) {
+			struct ibmvnic_rx_buff *rx_buff;

-		for (j = 0; j < rx_pool->size; ++j)
 			rx_pool->free_map[j] = j;

+			/* NOTE: Don't clear rx_buff->skb here - will leak
+			 * memory! replenish_rx_pool() will reuse skbs or
+			 * allocate as necessary.
+			 */
+			rx_buff = &rx_pool->rx_buff[j];
+			rx_buff->dma = 0;
+			rx_buff->data = 0;
+			rx_buff->size = 0;
+			rx_buff->pool_index = 0;
+		}
+
+		/* Mark pool "empty" so replenish_rx_pools() will
+		 * update the LTB info for each buffer
+		 */
 		atomic_set(&rx_pool->available, 0);
 		rx_pool->next_alloc = 0;
 		rx_pool->next_free = 0;
+		/* replenish_rx_pool() may have called deactivate_rx_pools()
+		 * on failover. Ensure pool is active now.
+		 */
+		rx_pool->active = 1;
 	}
-
 	return 0;
+out_release:
+	release_rx_pools(adapter);
+out:
+	/* We failed to allocate one or more LTBs or map them on the VIOS.
+	 * Hold onto the pools and any LTBs that we did allocate/map.
+	 */
+	return -1;
 }

 static int reset_one_tx_pool(struct ibmvnic_adapter *adapter,
@@ -1053,7 +1106,6 @@ static void release_resources(struct ibmvnic_adapter *adapter)
 	release_vpd_data(adapter);

 	release_tx_pools(adapter);
-	release_rx_pools(adapter);
 	release_napi(adapter);
 	release_login_buffer(adapter);
@@ -1326,6 +1378,7 @@ static int ibmvnic_open(struct net_device *netdev)
 		if (rc) {
 			netdev_err(netdev, "failed to initialize resources\n");
 			release_resources(adapter);
+			release_rx_pools(adapter);
 			goto out;
 		}
 	}
@@ -1455,7 +1508,6 @@ static void ibmvnic_cleanup(struct net_device *netdev)
 	ibmvnic_napi_disable(adapter);
 	ibmvnic_disable_irqs(adapter);
-	clean_rx_pools(adapter);
 	clean_tx_pools(adapter);
 }
@@ -1490,6 +1542,7 @@ static int ibmvnic_close(struct net_device *netdev)

 	rc = __ibmvnic_close(netdev);
 	ibmvnic_cleanup(netdev);
+	clean_rx_pools(adapter);

 	return rc;
 }
@@ -2218,7 +2271,6 @@ static int do_reset(struct ibmvnic_adapter *adapter,
 		    !adapter->rx_pool || !adapter->tso_pool ||
 		    !adapter->tx_pool) {
-			release_rx_pools(adapter);
 			release_tx_pools(adapter);
 			release_napi(adapter);
 			release_vpd_data(adapter);
@@ -2235,9 +2287,10 @@ static int do_reset(struct ibmvnic_adapter *adapter,
 				goto out;
 			}

-			rc = reset_rx_pools(adapter);
+			rc = init_rx_pools(netdev);
 			if (rc) {
-				netdev_dbg(adapter->netdev, "reset rx pools failed (%d)\n",
+				netdev_dbg(netdev,
+					   "init rx pools failed (%d)\n",
 					   rc);
 				goto out;
 			}
@@ -5573,6 +5626,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
 	init_completion(&adapter->reset_done);
 	init_completion(&adapter->stats_done);
 	clear_bit(0, &adapter->resetting);
+	adapter->prev_rx_buf_sz = 0;

 	init_success = false;
 	do {
@@ -5673,6 +5727,7 @@ static void ibmvnic_remove(struct vio_dev *dev)
 	unregister_netdevice(netdev);

 	release_resources(adapter);
+	release_rx_pools(adapter);
 	release_sub_crqs(adapter, 1);
 	release_crq_queue(adapter);

diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
index e97f1aa98c05..b73a1b812368 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.h
+++ b/drivers/net/ethernet/ibm/ibmvnic.h
@@ -986,7 +986,10 @@ struct ibmvnic_adapter {
 	u32 num_active_rx_napi;
 	u32 num_active_tx_scrqs;
 	u32 num_active_tx_pools;
+
+	u32 prev_rx_pool_size;
 	u32 cur_rx_buf_sz;
+	u32 prev_rx_buf_sz;

 	struct tasklet_struct tasklet;
 	enum vnic_state state;
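One more editor's illustration before the companion tx patch: the skb
reuse that the patch above introduces in replenish_rx_pool(). A slot's
buffer is allocated only when the slot is empty, and the copy into the
long term buffer is refreshed unconditionally because the LTB itself may
have been remapped during the reset. This is a stand-alone sketch under
those assumptions, not driver code: malloc() stands in for
netdev_alloc_skb() and a flat array stands in for the LTB.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define POOL_SIZE 4
    #define BUFF_SIZE 64

    struct toy_pool {
        void *bufs[POOL_SIZE];                    /* "skbs" kept across resets */
        unsigned char ltb[POOL_SIZE * BUFF_SIZE]; /* stand-in for the LTB */
    };

    static int replenish(struct toy_pool *pool)
    {
        for (int i = 0; i < POOL_SIZE; i++) {
            if (!pool->bufs[i]) {            /* allocate only if needed */
                pool->bufs[i] = malloc(BUFF_SIZE);
                if (!pool->bufs[i])
                    return -1;
                memset(pool->bufs[i], 0, BUFF_SIZE);
            }
            /* Always refresh the copy into the LTB slot, since the LTB
             * may have been reallocated during the reset.
             */
            memcpy(&pool->ltb[i * BUFF_SIZE], pool->bufs[i], BUFF_SIZE);
        }
        return 0;
    }

    int main(void)
    {
        static struct toy_pool pool;

        replenish(&pool);   /* first pass allocates the buffers */
        replenish(&pool);   /* second pass reuses the same buffers */
        printf("buffers kept across both passes\n");
        return 0;
    }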
From patchwork Wed Sep 1 00:08:12 2021
X-Patchwork-Submitter: Sukadev Bhattiprolu
X-Patchwork-Id: 12467941
X-Patchwork-Delegate: kuba@kernel.org
From: Sukadev Bhattiprolu
To: netdev@vger.kernel.org
Cc: Brian King, cforno12@linux.ibm.com, Dany Madden, Rick Lindsley
Subject: [PATCH net-next 9/9] ibmvnic: Reuse tx pools when possible
Date: Tue, 31 Aug 2021 17:08:12 -0700
Message-Id: <20210901000812.120968-10-sukadev@linux.ibm.com>
In-Reply-To: <20210901000812.120968-1-sukadev@linux.ibm.com>
References: <20210901000812.120968-1-sukadev@linux.ibm.com>

Rather than releasing the tx pools on every close and
reallocating them on open, reuse the tx pools unless the pool parameters
(number of pools, size of each pool, or size of each buffer in a pool)
have changed. If the pool parameters did change, release the old pools
(if any) and allocate new ones.

Specifically, release tx pools if:
	- the adapter is being removed,
	- pool parameters change during reset,
	- we encounter an error when opening the adapter in response to a
	  user request (in ibmvnic_open()).

and don't release them:
	- in __ibmvnic_close() or
	- on errors in __ibmvnic_open()

in the hope that we can reuse them during this or the next reset.

With these changes, reset_tx_pools() can be dropped because its
optimization is now included in init_tx_pools() itself.

cleanup_tx_pools() releases all the skbs associated with the pool and is
called from ibmvnic_cleanup(), which is called on every reset. Since we
want to reuse skbs across resets, move cleanup_tx_pools() out of
ibmvnic_cleanup() and call it only when the user closes the adapter.

Add two new adapter fields, ->prev_mtu and ->prev_tx_pool_size, to track
the previous values and use them to decide whether to reuse or
reallocate the pools.

Signed-off-by: Sukadev Bhattiprolu
Reviewed-by: Dany Madden
---
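A condensed view of the control flow this patch gives init_tx_pools(),
as an editor's sketch rather than the driver code itself. The helpers
are hypothetical stubs standing in for reuse_tx_pools(),
release_tx_pools(), the pool allocation loop, and the
alloc_long_term_buff() calls: pools are rebuilt only when the geometry
changed, the LTBs are re-registered with the VIOS on every pass, and a
failed LTB mapping leaves any already-built pools in place.

    #include <stdbool.h>
    #include <stdio.h>

    static bool pools_reusable;     /* stands in for reuse_tx_pools()    */
    static bool ltb_map_ok = true;  /* outcome of alloc_long_term_buff() */

    static void release_pools(void)
    {
        printf("release old tx pools\n");
    }

    static int allocate_pools(void)
    {
        printf("allocate and populate new tx pools\n");
        return 0;
    }

    static int remap_ltbs(void)
    {
        printf("re-register LTBs with the VIOS\n");
        return ltb_map_ok ? 0 : -1;
    }

    static int init_tx_pools_sketch(void)
    {
        if (!pools_reusable) {
            release_pools();
            if (allocate_pools()) {
                release_pools();    /* failed mid-allocation */
                return -1;
            }
        }
        /* Done whether the pools were reused or rebuilt. On failure the
         * pools (and any LTBs already mapped) are kept for a later retry.
         */
        return remap_ltbs();
    }

    int main(void)
    {
        pools_reusable = false;
        printf("first open: %d\n", init_tx_pools_sketch());
        pools_reusable = true;
        printf("reset, same geometry: %d\n", init_tx_pools_sketch());
        return 0;
    }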
 drivers/net/ethernet/ibm/ibmvnic.c | 201 +++++++++++++++++++----------
 drivers/net/ethernet/ibm/ibmvnic.h |   2 +
 2 files changed, 133 insertions(+), 70 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index ebd525b6fc87..8c422a717e88 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -735,53 +735,6 @@ static int init_rx_pools(struct net_device *netdev)
 	return -1;
 }

-static int reset_one_tx_pool(struct ibmvnic_adapter *adapter,
-			     struct ibmvnic_tx_pool *tx_pool)
-{
-	struct ibmvnic_long_term_buff *ltb;
-	int rc, i;
-
-	ltb = &tx_pool->long_term_buff;
-
-	rc = alloc_long_term_buff(adapter, ltb, ltb->size);
-	if (rc)
-		return rc;
-
-	memset(tx_pool->tx_buff, 0,
-	       tx_pool->num_buffers *
-	       sizeof(struct ibmvnic_tx_buff));
-
-	for (i = 0; i < tx_pool->num_buffers; i++)
-		tx_pool->free_map[i] = i;
-
-	tx_pool->consumer_index = 0;
-	tx_pool->producer_index = 0;
-
-	return 0;
-}
-
-static int reset_tx_pools(struct ibmvnic_adapter *adapter)
-{
-	int tx_scrqs;
-	int i, rc;
-
-	if (!adapter->tx_pool)
-		return -1;
-
-	tx_scrqs = adapter->num_active_tx_pools;
-	for (i = 0; i < tx_scrqs; i++) {
-		ibmvnic_tx_scrq_clean_buffer(adapter, adapter->tx_scrq[i]);
-		rc = reset_one_tx_pool(adapter, &adapter->tso_pool[i]);
-		if (rc)
-			return rc;
-		rc = reset_one_tx_pool(adapter, &adapter->tx_pool[i]);
-		if (rc)
-			return rc;
-	}
-
-	return 0;
-}
-
 static void release_vpd_data(struct ibmvnic_adapter *adapter)
 {
 	if (!adapter->vpd)
@@ -825,13 +778,13 @@ static void release_tx_pools(struct ibmvnic_adapter *adapter)
 	kfree(adapter->tso_pool);
 	adapter->tso_pool = NULL;
 	adapter->num_active_tx_pools = 0;
+	adapter->prev_tx_pool_size = 0;
 }

 static int init_one_tx_pool(struct net_device *netdev,
 			    struct ibmvnic_tx_pool *tx_pool,
 			    int pool_size, int buf_size)
 {
-	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
 	int i;

 	tx_pool->tx_buff = kcalloc(pool_size,
@@ -840,13 +793,12 @@ static int init_one_tx_pool(struct net_device *netdev,
 	if (!tx_pool->tx_buff)
 		return -1;

-	if (alloc_long_term_buff(adapter, &tx_pool->long_term_buff,
-				 pool_size * buf_size))
-		return -1;
-
 	tx_pool->free_map = kcalloc(pool_size, sizeof(int), GFP_KERNEL);
-	if (!tx_pool->free_map)
+	if (!tx_pool->free_map) {
+		kfree(tx_pool->tx_buff);
+		tx_pool->tx_buff = NULL;
 		return -1;
+	}

 	for (i = 0; i < pool_size; i++)
 		tx_pool->free_map[i] = i;
@@ -859,6 +811,48 @@ static int init_one_tx_pool(struct net_device *netdev,
 	return 0;
 }

+/**
+ * Return true if we can reuse the existing tx pools, false otherwise
+ * NOTE: This assumes that all pools have the same number of buffers
+ * which is the case currently. If that changes, we must fix this.
+ */
+static bool reuse_tx_pools(struct ibmvnic_adapter *adapter)
+{
+	u64 old_num_pools, new_num_pools;
+	u64 old_pool_size, new_pool_size;
+	u64 old_mtu, new_mtu;
+
+	if (!adapter->tx_pool)
+		return false;
+
+	old_num_pools = adapter->num_active_tx_pools;
+	new_num_pools = adapter->num_active_tx_scrqs;
+	old_pool_size = adapter->prev_tx_pool_size;
+	new_pool_size = adapter->req_tx_entries_per_subcrq;
+	old_mtu = adapter->prev_mtu;
+	new_mtu = adapter->req_mtu;
+
+	/* Require MTU to be exactly same to reuse pools for now */
+	if (old_mtu != new_mtu)
+		return false;
+
+	if (old_num_pools == new_num_pools && old_pool_size == new_pool_size)
+		return true;
+
+	if (old_num_pools < adapter->min_tx_queues ||
+	    old_num_pools > adapter->max_tx_queues ||
+	    old_pool_size < adapter->min_tx_entries_per_subcrq ||
+	    old_pool_size > adapter->max_tx_entries_per_subcrq)
+		return false;
+
+	return true;
+}
+
+/**
+ * Initialize the set of transmit pools in the adapter. Reuse existing
+ * pools if possible. Otherwise allocate a new set of pools before
+ * initializing them.
+ */
 static int init_tx_pools(struct net_device *netdev)
 {
 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
@@ -866,7 +860,21 @@ static int init_tx_pools(struct net_device *netdev)
 	int num_pools;
 	u64 pool_size;		/* # of buffers in pool */
 	u64 buff_size;
-	int i, rc;
+	int i, j, rc;
+
+	num_pools = adapter->req_tx_queues;
+
+	/* We must notify the VIOS about the LTB on all resets - but we only
+	 * need to alloc/populate pools if either the number of buffers or
+	 * size of each buffer in the pool has changed.
+	 */
+	if (reuse_tx_pools(adapter)) {
+		netdev_dbg(netdev, "Reusing tx pools\n");
+		goto update_ltb;
+	}
+
+	/* Allocate/populate the pools. */
+	release_tx_pools(adapter);

 	pool_size = adapter->req_tx_entries_per_subcrq;
 	num_pools = adapter->num_active_tx_scrqs;
@@ -891,6 +899,7 @@ static int init_tx_pools(struct net_device *netdev)
 	 * allocation, release_tx_pools() will know how many to look for.
 	 */
 	adapter->num_active_tx_pools = num_pools;
+
 	buff_size = adapter->req_mtu + VLAN_HLEN;
 	buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
@@ -900,21 +909,73 @@ static int init_tx_pools(struct net_device *netdev)

 		rc = init_one_tx_pool(netdev, &adapter->tx_pool[i],
 				      pool_size, buff_size);
-		if (rc) {
-			release_tx_pools(adapter);
-			return rc;
-		}
+		if (rc)
+			goto out_release;

 		rc = init_one_tx_pool(netdev, &adapter->tso_pool[i],
 				      IBMVNIC_TSO_BUFS,
 				      IBMVNIC_TSO_BUF_SZ);
-		if (rc) {
-			release_tx_pools(adapter);
-			return rc;
-		}
+		if (rc)
+			goto out_release;
+	}
+
+	adapter->prev_tx_pool_size = pool_size;
+	adapter->prev_mtu = adapter->req_mtu;
+
+update_ltb:
+	/* NOTE: All tx_pools have the same number of buffers (which is
+	 * same as pool_size). All tso_pools have IBMVNIC_TSO_BUFS
+	 * buffers (see calls init_one_tx_pool() for these).
+	 * For consistency, we use tx_pool->num_buffers and
+	 * tso_pool->num_buffers below.
+	 */
+	rc = -1;
+	for (i = 0; i < num_pools; i++) {
+		struct ibmvnic_tx_pool *tso_pool;
+		struct ibmvnic_tx_pool *tx_pool;
+		u32 ltb_size;
+
+		tx_pool = &adapter->tx_pool[i];
+		ltb_size = tx_pool->num_buffers * tx_pool->buf_size;
+		if (alloc_long_term_buff(adapter, &tx_pool->long_term_buff,
+					 ltb_size))
+			goto out;
+
+		dev_dbg(dev, "Updated LTB for tx pool %d [%p, %d, %d]\n",
+			i, tx_pool->long_term_buff.buff,
+			tx_pool->num_buffers, tx_pool->buf_size);
+
+		tx_pool->consumer_index = 0;
+		tx_pool->producer_index = 0;
+
+		for (j = 0; j < tx_pool->num_buffers; j++)
+			tx_pool->free_map[j] = j;
+
+		tso_pool = &adapter->tso_pool[i];
+		ltb_size = tso_pool->num_buffers * tso_pool->buf_size;
+		if (alloc_long_term_buff(adapter, &tso_pool->long_term_buff,
+					 ltb_size))
+			goto out;
+
+		dev_dbg(dev, "Updated LTB for tso pool %d [%p, %d, %d]\n",
+			i, tso_pool->long_term_buff.buff,
+			tso_pool->num_buffers, tso_pool->buf_size);
+
+		tso_pool->consumer_index = 0;
+		tso_pool->producer_index = 0;
+
+		for (j = 0; j < tso_pool->num_buffers; j++)
+			tso_pool->free_map[j] = j;
 	}

 	return 0;
+out_release:
+	release_tx_pools(adapter);
+out:
+	/* We failed to allocate one or more LTBs or map them on the VIOS.
+	 * Hold onto the pools and any LTBs that we did allocate/map.
+	 */
+	return rc;
 }

 static void ibmvnic_napi_enable(struct ibmvnic_adapter *adapter)
@@ -1105,8 +1166,6 @@ static void release_resources(struct ibmvnic_adapter *adapter)
 {
 	release_vpd_data(adapter);

-	release_tx_pools(adapter);
-
 	release_napi(adapter);
 	release_login_buffer(adapter);
 	release_login_rsp_buffer(adapter);
@@ -1379,6 +1438,7 @@ static int ibmvnic_open(struct net_device *netdev)
 			netdev_err(netdev, "failed to initialize resources\n");
 			release_resources(adapter);
 			release_rx_pools(adapter);
+			release_tx_pools(adapter);
 			goto out;
 		}
 	}
@@ -1507,8 +1567,6 @@ static void ibmvnic_cleanup(struct net_device *netdev)

 	ibmvnic_napi_disable(adapter);
 	ibmvnic_disable_irqs(adapter);
-
-	clean_tx_pools(adapter);
 }

 static int __ibmvnic_close(struct net_device *netdev)
@@ -1543,6 +1601,7 @@ static int ibmvnic_close(struct net_device *netdev)
 	rc = __ibmvnic_close(netdev);
 	ibmvnic_cleanup(netdev);
 	clean_rx_pools(adapter);
+	clean_tx_pools(adapter);

 	return rc;
 }
@@ -2119,9 +2178,9 @@ static const char *reset_reason_to_string(enum ibmvnic_reset_reason reason)
 static int do_reset(struct ibmvnic_adapter *adapter,
 		    struct ibmvnic_rwi *rwi, u32 reset_state)
 {
+	struct net_device *netdev = adapter->netdev;
 	u64 old_num_rx_queues, old_num_tx_queues;
 	u64 old_num_rx_slots, old_num_tx_slots;
-	struct net_device *netdev = adapter->netdev;
 	int rc;

 	netdev_dbg(adapter->netdev,
@@ -2271,7 +2330,6 @@ static int do_reset(struct ibmvnic_adapter *adapter,
 		    !adapter->rx_pool || !adapter->tso_pool ||
 		    !adapter->tx_pool) {
-			release_tx_pools(adapter);
 			release_napi(adapter);
 			release_vpd_data(adapter);
@@ -2280,9 +2338,10 @@ static int do_reset(struct ibmvnic_adapter *adapter,
 				goto out;
 			}
 		} else {
-			rc = reset_tx_pools(adapter);
+			rc = init_tx_pools(netdev);
 			if (rc) {
-				netdev_dbg(adapter->netdev, "reset tx pools failed (%d)\n",
+				netdev_dbg(netdev,
+					   "init tx pools failed (%d)\n",
 					   rc);
 				goto out;
 			}
@@ -5627,6 +5686,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
 	init_completion(&adapter->stats_done);
 	clear_bit(0, &adapter->resetting);
 	adapter->prev_rx_buf_sz = 0;
+	adapter->prev_mtu = 0;

 	init_success = false;
 	do {
@@ -5728,6 +5788,7 @@ static void ibmvnic_remove(struct vio_dev *dev)

 	release_resources(adapter);
 	release_rx_pools(adapter);
+	release_tx_pools(adapter);
 	release_sub_crqs(adapter, 1);
 	release_crq_queue(adapter);

diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
index b73a1b812368..b8e42f67d897 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.h
+++ b/drivers/net/ethernet/ibm/ibmvnic.h
@@ -967,6 +967,7 @@ struct ibmvnic_adapter {
 	u64 min_mtu;
 	u64 max_mtu;
 	u64 req_mtu;
+	u64 prev_mtu;
 	u64 max_multicast_filters;
 	u64 vlan_header_insertion;
 	u64 rx_vlan_header_insertion;
@@ -988,6 +989,7 @@ struct ibmvnic_adapter {
 	u32 num_active_tx_pools;

 	u32 prev_rx_pool_size;
+	u32 prev_tx_pool_size;
 	u32 cur_rx_buf_sz;
 	u32 prev_rx_buf_sz;