From patchwork Wed Aug 14 17:33:04 2024
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 13763781
X-Patchwork-Delegate: kuba@kernel.org
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com,
	edumazet@google.com, netdev@vger.kernel.org
Cc: Michal Kubiak, anthony.l.nguyen@intel.com,
	aleksander.lobakin@intel.com, przemyslaw.kitszel@intel.com,
	joshua.a.hay@intel.com, nex.sw.ncis.osdt.itp.upstreaming@intel.com,
	stable@vger.kernel.org
Subject: [PATCH net-next 7/9] idpf: fix netdev Tx queue stop/wake
Date: Wed, 14 Aug 2024 10:33:04 -0700
Message-ID: <20240814173309.4166149-8-anthony.l.nguyen@intel.com>
X-Mailer: git-send-email 2.42.0
In-Reply-To: <20240814173309.4166149-1-anthony.l.nguyen@intel.com>
References: <20240814173309.4166149-1-anthony.l.nguyen@intel.com>

From: Michal Kubiak

netif_txq_maybe_stop() returns -1, 0, or 1, while
idpf_tx_maybe_stop_common() says it returns 0 or -EBUSY. As a result,
Tx queue timeout warnings are sometimes triggered even though the
queue is empty or there is at least enough space to restart it.

Make idpf_tx_maybe_stop_common() inline and have it return true or
false, handling the return value of netif_txq_maybe_stop() properly.
Use the correct goto in idpf_tx_maybe_stop_splitq() to avoid stopping
the queue or incrementing the q_busy counter twice.

Fixes: 6818c4d5b3c2 ("idpf: add splitq start_xmit")
Fixes: a5ab9ee0df0b ("idpf: add singleq start_xmit and napi poll")
Cc: stable@vger.kernel.org # 6.7+
Signed-off-by: Michal Kubiak
Reviewed-by: Przemek Kitszel
Signed-off-by: Alexander Lobakin
Signed-off-by: Tony Nguyen
---
 .../ethernet/intel/idpf/idpf_singleq_txrx.c |  4 +++
 drivers/net/ethernet/intel/idpf/idpf_txrx.c | 35 +++++--------------
 drivers/net/ethernet/intel/idpf/idpf_txrx.h |  9 ++++-
 3 files changed, 21 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
index 947d3ff9677c..5ba360abbe66 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
@@ -375,6 +375,10 @@ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
 				      IDPF_TX_DESCS_FOR_CTX)) {
 		idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
 
+		u64_stats_update_begin(&tx_q->stats_sync);
+		u64_stats_inc(&tx_q->q_stats.q_busy);
+		u64_stats_update_end(&tx_q->stats_sync);
+
 		return NETDEV_TX_BUSY;
 	}
 
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index c1df4341e9ab..60f875decd8a 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -2128,29 +2128,6 @@ void idpf_tx_splitq_build_flow_desc(union idpf_tx_flex_desc *desc,
 	desc->flow.qw1.compl_tag = cpu_to_le16(params->compl_tag);
 }
 
-/**
- * idpf_tx_maybe_stop_common - 1st level check for common Tx stop conditions
- * @tx_q: the queue to be checked
- * @size: number of descriptors we want to assure is available
- *
- * Returns 0 if stop is not needed
- */
-int idpf_tx_maybe_stop_common(struct idpf_tx_queue *tx_q, unsigned int size)
-{
-	struct netdev_queue *nq;
-
-	if (likely(IDPF_DESC_UNUSED(tx_q) >= size))
-		return 0;
-
-	u64_stats_update_begin(&tx_q->stats_sync);
-	u64_stats_inc(&tx_q->q_stats.q_busy);
-	u64_stats_update_end(&tx_q->stats_sync);
-
-	nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
-
-	return netif_txq_maybe_stop(nq, IDPF_DESC_UNUSED(tx_q), size, size);
-}
-
 /**
  * idpf_tx_maybe_stop_splitq - 1st level check for Tx splitq stop conditions
  * @tx_q: the queue to be checked
@@ -2162,7 +2139,7 @@ static int idpf_tx_maybe_stop_splitq(struct idpf_tx_queue *tx_q,
 				     unsigned int descs_needed)
 {
 	if (idpf_tx_maybe_stop_common(tx_q, descs_needed))
-		goto splitq_stop;
+		goto out;
 
 	/* If there are too many outstanding completions expected on the
 	 * completion queue, stop the TX queue to give the device some time to
@@ -2181,10 +2158,12 @@ static int idpf_tx_maybe_stop_splitq(struct idpf_tx_queue *tx_q,
 	return 0;
 
 splitq_stop:
+	netif_stop_subqueue(tx_q->netdev, tx_q->idx);
+
+out:
 	u64_stats_update_begin(&tx_q->stats_sync);
 	u64_stats_inc(&tx_q->q_stats.q_busy);
 	u64_stats_update_end(&tx_q->stats_sync);
-	netif_stop_subqueue(tx_q->netdev, tx_q->idx);
 
 	return -EBUSY;
 }
@@ -2207,7 +2186,11 @@ void idpf_tx_buf_hw_update(struct idpf_tx_queue *tx_q, u32 val,
 	nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
 	tx_q->next_to_use = val;
 
-	idpf_tx_maybe_stop_common(tx_q, IDPF_TX_DESC_NEEDED);
+	if (idpf_tx_maybe_stop_common(tx_q, IDPF_TX_DESC_NEEDED)) {
+		u64_stats_update_begin(&tx_q->stats_sync);
+		u64_stats_inc(&tx_q->q_stats.q_busy);
+		u64_stats_update_end(&tx_q->stats_sync);
+	}
 
 	/* Force memory writes to complete before letting h/w
 	 * know there are new descriptors to fetch.  (Only
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 2478f71adb95..df3574ac58c2 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -1020,7 +1020,6 @@ void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb,
 			   struct idpf_tx_buf *first, u16 ring_idx);
 unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq,
 					 struct sk_buff *skb);
-int idpf_tx_maybe_stop_common(struct idpf_tx_queue *tx_q, unsigned int size);
 void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue);
 netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
 				  struct idpf_tx_queue *tx_q);
@@ -1029,4 +1028,12 @@ bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_rx_queue *rxq,
 				      u16 cleaned_count);
 int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off);
 
+static inline bool idpf_tx_maybe_stop_common(struct idpf_tx_queue *tx_q,
+					     u32 needed)
+{
+	return !netif_subqueue_maybe_stop(tx_q->netdev, tx_q->idx,
+					  IDPF_DESC_UNUSED(tx_q),
+					  needed, needed);
+}
+
 #endif /* !_IDPF_TXRX_H_ */
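
A note on the boolean conversion in the new inline helper:
netif_txq_maybe_stop() is tri-state. Per its kernel-doc, 0 means the
queue was stopped, 1 means it was left enabled, and -1 means it was
stopped but immediately re-woken because descriptors freed up in the
meantime. Only the 0 case should map to "busy"; the old code returned
the raw value, so the truthy 1 and -1 cases (both of which mean there
is room) also sent the caller down the stop path, which is what
produced the spurious Tx timeouts. The standalone sketch below
illustrates that inversion in plain userspace C. It is not kernel code
and not part of the patch, and the return convention is assumed from
the helper's documentation:

	#include <stdbool.h>
	#include <stdio.h>

	int main(void)
	{
		/* Assumed netif_txq_maybe_stop() results:
		 *  0 -> queue was stopped
		 *  1 -> queue was left enabled
		 * -1 -> queue was stopped, then re-woken
		 */
		int tristate[] = { 0, 1, -1 };

		for (int i = 0; i < 3; i++) {
			int ret = tristate[i];

			/* Old idpf code: the raw value was treated as a
			 * 0/-EBUSY error code, so any non-zero result
			 * meant "stop the queue".
			 */
			bool old_stop = (ret != 0);

			/* Fixed code, !netif_subqueue_maybe_stop(...):
			 * only "queue was stopped" reports busy.
			 */
			bool new_stop = !ret;

			printf("ret=%2d old_stop=%d new_stop=%d\n",
			       ret, (int)old_stop, (int)new_stop);
		}

		return 0;
	}

The mapping comes out inverted for all three values, which is why the
fix both flips the helper to return a bool and lets the
netif_subqueue_maybe_stop() wrapper perform the actual stop/restart
instead of calling netif_stop_subqueue() a second time.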