Message ID | 20240801212340.132607-8-nnac123@linux.ibm.com (mailing list archive)
---|---
State | Changes Requested
Delegated to | Netdev Maintainers
Series | ibmvnic RR performance improvements
On Thu, 1 Aug 2024 16:23:40 -0500 Nick Child wrote:
> This extra precaution (requesting header info when the backing device
> may not use it) comes at the cost of performance (using direct vs
> indirect hcalls has a 30% delta in small packet RR transaction rate).

What's "small" in this case? Non-GSO, or also less than MTU?
On 8/2/24 19:15, Jakub Kicinski wrote:
> On Thu, 1 Aug 2024 16:23:40 -0500 Nick Child wrote:
>> This extra precaution (requesting header info when the backing device
>> may not use it) comes at the cost of performance (using direct vs
>> indirect hcalls has a 30% delta in small packet RR transaction rate).
>
> What's "small" in this case? Non-GSO, or also less than MTU?

I suppose "non-GSO" is the proper term. If a packet is non-GSO then we
are able to use the direct hcall. On the other hand, if a packet is GSO
then indirect must be used; we do not have the option of direct vs
indirect.
On Mon, 5 Aug 2024 08:52:57 -0500 Nick Child wrote:
> On 8/2/24 19:15, Jakub Kicinski wrote:
>> On Thu, 1 Aug 2024 16:23:40 -0500 Nick Child wrote:
>>> This extra precaution (requesting header info when the backing device
>>> may not use it) comes at the cost of performance (using direct vs
>>> indirect hcalls has a 30% delta in small packet RR transaction rate).
>>
>> What's "small" in this case? Non-GSO, or also less than MTU?
>
> I suppose "non-GSO" is the proper term. If a packet is non-GSO then we
> are able to use the direct hcall. On the other hand, if a packet is GSO
> then indirect must be used; we do not have the option of direct vs
> indirect.

It'd be great to add more exact analysis to the commit message.
Presumably the change is most likely to cause trouble in combination
with large non-GSO frames. Could you measure the perf impact when TSO
is disabled and MTU is 9k?
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 05c0d68c3efa..1990d518f247 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -2406,6 +2406,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 	unsigned long lpar_rc;
 	union sub_crq tx_crq;
 	unsigned int offset;
+	bool use_scrq_send_direct = false;
 	int num_entries = 1;
 	unsigned char *dst;
 	int bufidx = 0;
@@ -2465,6 +2466,18 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 		memset(dst, 0, tx_pool->buf_size);
 	data_dma_addr = ltb->addr + offset;
 
+	/* if we are going to send_subcrq_direct this then we need to
+	 * update the checksum before copying the data into ltb. Essentially
+	 * these packets force disable CSO so that we can guarantee that
+	 * FW does not need header info and we can send direct.
+	 */
+	if (!skb_is_gso(skb) && !ind_bufp->index && !netdev_xmit_more()) {
+		use_scrq_send_direct = true;
+		if (skb->ip_summed == CHECKSUM_PARTIAL &&
+		    skb_checksum_help(skb))
+			use_scrq_send_direct = false;
+	}
+
 	if (skb_shinfo(skb)->nr_frags) {
 		int cur, i;
 
@@ -2546,11 +2559,13 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 		tx_crq.v1.flags1 |= IBMVNIC_TX_LSO;
 		tx_crq.v1.mss = cpu_to_be16(skb_shinfo(skb)->gso_size);
 		hdrs += 2;
-	} else if (!ind_bufp->index && !netdev_xmit_more()) {
-		ind_bufp->indir_arr[0] = tx_crq;
+	} else if (use_scrq_send_direct) {
+		/* See above comment, CSO disabled with direct xmit */
+		tx_crq.v1.flags1 &= ~(IBMVNIC_TX_CHKSUM_OFFLOAD);
 		ind_bufp->index = 1;
 		tx_buff->num_entries = 1;
 		netdev_tx_sent_queue(txq, skb->len);
+		ind_bufp->indir_arr[0] = tx_crq;
 		lpar_rc = ibmvnic_tx_scrq_flush(adapter, tx_scrq, false);
 		if (lpar_rc != H_SUCCESS)
 			goto tx_err;
During initialization with the vnic server, a bitstring is communicated
to the client regarding header info needed during CSO (see "VNIC
Capabilities" in PAPR). Most of the time, to be safe, the vnic server
requests header info for CSO. When header info is needed, multiple TX
descriptors are required per skb; this limits the driver to using
send_subcrq_indirect instead of send_subcrq_direct.

Previously, the vnic server's request for header info was ignored. This
allowed the use of send_subcrq_direct. Transmissions were successful
because the bitstring returned by the vnic server is very broad and
overly cautious. It was observed that mlx backing devices could actually
transmit and handle CSO packets without the vnic server receiving header
info (despite the fact that the bitstring requested it).

This extra precaution (requesting header info when the backing device
may not use it) comes at the cost of performance (using direct vs
indirect hcalls has a 30% delta in small packet RR transaction rate).
So the vnic server team has been asked to try to make the bitstring
more exact.

In the meantime, disable CSO when it is possible to use the skb in the
send_subcrq_direct path. In other words, calculate the checksum before
handing the packet to FW when the packet is not segmented and xmit_more
is false.

If the bitstring ever specifies that CSO does not require headers
(dependent on VIOS vnic server changes), then this patch should be
removed and replaced with one that checks the bitstring before using
send_subcrq_direct.

Signed-off-by: Nick Child <nnac123@linux.ibm.com>
---
 drivers/net/ethernet/ibm/ibmvnic.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)