
[net-next-internal,1/1] ibmvnic: Drop IBMVNIC_BUFFER_HLEN

Message ID 20220219060938.1297423-1-sukadev@linux.ibm.com (mailing list archive)
State Not Applicable
Delegated to: Netdev Maintainers
Series [net-next-internal,1/1] ibmvnic: Drop IBMVNIC_BUFFER_HLEN

Checks

Context Check Description
netdev/tree_selection success Clearly marked for net-next
netdev/apply fail Patch does not apply to net-next

Commit Message

Sukadev Bhattiprolu Feb. 19, 2022, 6:09 a.m. UTC
Commit c26eba03e407 ("ibmvnic: Update reset infrastructure to support
tunable parameters") (Oct 2017) defined IBMVNIC_BUFFER_HLEN (500) and
used it in the computation of max_entries, but it is not clear what the
intention was. If the macro were needed by the protocol or elsewhere,
it would show up in the allocation/alignment of the rx buffers, but it
is not used there.

It also confuses the notion of how many packets can fit in an LTB. For
instance, with a 16MB LTB we should be able to fit about 1846 jumbo
frames of 9088 bytes*. But these 500 bytes make it appear that we can
only fit 16MB / 9514, or 1763, packets in the LTB. Even that computation
is inconsistent, as the 9514 size does not account for VLAN_HLEN or the
128-byte alignment.

*MTU to LTB buffer size:

	Header: ETH_HLEN + VLAN_HLEN = 14 + 4 = 18.
	Alignment: 128 (L1_CACHE_BYTES)

	9000 MTU + Header = 9018. Align to 128: 9088.
	1500 MTU + Header = 1518. Align to 128: 1536.
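
A minimal userspace sketch of this arithmetic (the constants mirror
ETH_HLEN, VLAN_HLEN and L1_CACHE_BYTES, assumed to be 128 here, rather
than coming from kernel headers; req_mtu is assumed to already include
ETH_HLEN, as the driver uses it):

	#include <stdio.h>

	#define ETH_HLEN	14
	#define VLAN_HLEN	4
	#define L1_CACHE_BYTES	128
	#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))
	#define MAX_LTB_SIZE	(36 << 20)	/* IBMVNIC_MAX_LTB_SIZE below */

	int main(void)
	{
		int mtus[] = { 1500, 9000 }, i;

		for (i = 0; i < 2; i++) {
			/* req_mtu in the driver already includes ETH_HLEN */
			int req_mtu = mtus[i] + ETH_HLEN;
			int buff_size = ALIGN(req_mtu + VLAN_HLEN, L1_CACHE_BYTES);

			printf("MTU %d -> buffer %d, max_entries %d\n",
			       mtus[i], buff_size, MAX_LTB_SIZE / buff_size);
		}
		return 0;
	}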

If we adjust this computation and drop the IBMVNIC_BUFFER_HLEN, we can
use a 6MB MAX_ONE_LTB_SIZE and be a bit more efficient when changing
MTU sizes. The 6MB size exactly fits the 4K buffers of normal MTU
(1536 bytes each). With jumbo frames we need a set of 6 LTBs of 6MB
each for the 4K buffers of 9088 bytes. For 32 rx and tx queues (16
each) we need 192 LTBs, and for the 16 TSO pools we need another 16
LTBs, for a total of 208 LTBs, so we stay within the limit.
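
A rough sketch of that mapping count, with the values from this commit
message hard-coded and DIV_ROUND_UP reproduced locally:

	#include <stdio.h>

	#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

	int main(void)
	{
		long one_ltb = 6L << 20;	/* MAX_ONE_LTB_SIZE */
		long pool = 4096L * 9088;	/* 4K jumbo-frame buffers per pool */
		long per_pool = DIV_ROUND_UP(pool, one_ltb);
		long total = 32 * per_pool + 16;	/* 16 rx + 16 tx pools, plus 16 TSO pools */

		printf("%ld LTBs per pool, %ld mappings in total\n",
		       per_pool, total);
		return 0;
	}

This prints 6 LTBs per pool and 208 mappings in total.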

Using a 6MB LTB rather than an 8MB one lets us try to reuse the first
LTB when switching MTU sizes (if we apply the patch at

https://lists.linux.ibm.com/mailinglists/pipermail/ibmvnic-review/2021-December/000155.html).

The smaller LTB also further reduces the chance of an allocation failure
during low memory conditions while remaining well within the limit of 255
LTBs per vnic.

Do we need IBMVNIC_BUFFER_HLEN?

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
---
 drivers/net/ethernet/ibm/ibmvnic.c | 14 ++++++++------
 drivers/net/ethernet/ibm/ibmvnic.h | 18 ++++++++----------
 2 files changed, 16 insertions(+), 16 deletions(-)

Patch

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 2e5bf27e3e3d..c496f4de1757 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -4116,6 +4116,7 @@  static void send_request_cap(struct ibmvnic_adapter *adapter, int retry)
 	struct device *dev = &adapter->vdev->dev;
 	union ibmvnic_crq crq;
 	int max_entries;
+	int buff_size;
 	int cap_reqs;
 
 	/* We send out 6 or 7 REQUEST_CAPABILITY CRQs below (depending on
@@ -4153,16 +4154,17 @@  static void send_request_cap(struct ibmvnic_adapter *adapter, int retry)
 			adapter->desired.rx_entries =
 					adapter->max_rx_add_entries_per_subcrq;
 
-		max_entries = IBMVNIC_MAX_LTB_SIZE /
-			      (adapter->req_mtu + IBMVNIC_BUFFER_HLEN);
+		buff_size = adapter->req_mtu + VLAN_HLEN;
+		buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
+		max_entries = IBMVNIC_MAX_LTB_SIZE / buff_size;
 
-		if ((adapter->req_mtu + IBMVNIC_BUFFER_HLEN) *
-			adapter->desired.tx_entries > IBMVNIC_MAX_LTB_SIZE) {
+		if ((buff_size * adapter->desired.tx_entries) >
+		    IBMVNIC_MAX_LTB_SIZE) {
 			adapter->desired.tx_entries = max_entries;
 		}
 
-		if ((adapter->req_mtu + IBMVNIC_BUFFER_HLEN) *
-			adapter->desired.rx_entries > IBMVNIC_MAX_LTB_SIZE) {
+		if ((buff_size * adapter->desired.rx_entries) >
+		    IBMVNIC_MAX_LTB_SIZE) {
 			adapter->desired.rx_entries = max_entries;
 		}
 
diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
index 8d92ce5a6c5f..1b124f884311 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.h
+++ b/drivers/net/ethernet/ibm/ibmvnic.h
@@ -45,32 +45,30 @@ 
  * are of fixed length (IBMVNIC_TSO_BUF_SZ * IBMVNIC_TSO_BUFS) of 4MB.
  *
  * The Rx and Tx pools can have upto 4096 buffers. The max size of these
- * buffers is about 9588 (for jumbo frames, including IBMVNIC_BUFFER_HLEN).
- * So, setting the MAX_LTB_SIZE for a pool to 4096 * 9588 ~= 38MB.
+ * buffers is about 9088 (for jumbo frames). So, setting the MAX_LTB_SIZE
+ * for a pool to 4096 * 9088 ~= 36MB.
  *
  * There is a trade-off in setting MAX_ONE_LTB_SIZE. If it is large, the
  * allocation of the LTB can fail when system is low in memory. If its too
  * small, we would need several mappings for each of the Rx/Tx/TSO pools,
  * but there is a limit of 255 mappings per vnic in the VNIC protocol.
  *
- * Setting MAX_ONE_LTB_SIZE to 8MB. This requires 5 LTBs per Rx and Tx
- * pool for the MAX_LTB_SIZE of 38MB (above) and 1 LTB per TSO pool for
- * the 4MB. Thus the 16 Rx and Tx queues require 32 * 5 = 160 plus 16 for
- * the TSO pools for a total of 176 LTB mappings.
+ * Setting MAX_ONE_LTB_SIZE to 6MB. This requires 6 LTBs per Rx and Tx
+ * pool for the MAX_LTB_SIZE of 36MB (above) and 1 LTB per TSO pool for
+ * the 4MB. Thus the 16 Rx and Tx queues require 32 * 6 = 192 plus 16 for
+ * the TSO pools for a total of 208 LTB mappings.
  *
  * The max size of a single LTB is also limited by the MAX_ORDER in the
  * kernel. Add a compile time check for that.
  */
-#define IBMVNIC_MAX_ONE_LTB_SIZE	(8 << 20)
-#define IBMVNIC_MAX_LTB_SIZE		(38 << 20)
+#define IBMVNIC_MAX_ONE_LTB_SIZE	(6 << 20)
+#define IBMVNIC_MAX_LTB_SIZE		(36 << 20)
 #define IBMVNIC_ONE_LTB_SIZE_LIMIT	((1 << (MAX_ORDER - 1)) * PAGE_SIZE)
 
 #if IBMVNIC_MAX_ONE_LTB_SIZE > IBMVNIC_ONE_LTB_SIZE_LIMIT
 #error
 #endif
 
-#define IBMVNIC_BUFFER_HLEN		500
-
 #define IBMVNIC_RESET_DELAY 100
 
 static const char ibmvnic_priv_flags[][ETH_GSTRING_LEN] = {