From patchwork Thu Apr 11 11:44:31 2024
X-Patchwork-Submitter: Paul Barker
X-Patchwork-Id: 13625795
X-Patchwork-Delegate: kuba@kernel.org
From: Paul Barker
To: Sergey Shtylyov, "David S. Miller", Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Niklas Söderlund, Geert Uytterhoeven
Cc: Paul Barker, netdev@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH net 2/4] net: ravb: Allow RX loop to move past DMA mapping errors
Date: Thu, 11 Apr 2024 12:44:31 +0100
Message-Id: <20240411114434.26186-3-paul.barker.ct@bp.renesas.com>
In-Reply-To: <20240411114434.26186-1-paul.barker.ct@bp.renesas.com>
References: <20240411114434.26186-1-paul.barker.ct@bp.renesas.com>

The RX loops in ravb_rx_gbeth() and ravb_rx_rcar() skip to the next loop
iteration if a zero-length descriptor is seen (indicating a DMA mapping
error). However, the current RX descriptor index `priv->cur_rx[q]` was
only incremented at the end of the loop body, so it was not advanced when
we skipped to the next iteration. This caused the loop to keep seeing the
same zero-length descriptor instead of moving on to the next descriptor.
As the loop counter `i` still increments, the loop would eventually
terminate, so there is no risk of being stuck here forever - but we
should still fix this to avoid wasting cycles.

To fix this, the RX descriptor index is incremented at the top of the
loop, in the for statement itself.
The assignments of `entry` and `desc` are brought into the loop to avoid
the need for duplication.

Fixes: d8b48911fd24 ("ravb: fix ring memory allocation")
Signed-off-by: Paul Barker
Reviewed-by: Sergey Shtylyov
---
 drivers/net/ethernet/renesas/ravb_main.c | 25 ++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
index 70f2900648d4..028ab5c6aaf7 100644
--- a/drivers/net/ethernet/renesas/ravb_main.c
+++ b/drivers/net/ethernet/renesas/ravb_main.c
@@ -775,12 +775,15 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q)
 	int limit;
 	int i;
 
-	entry = priv->cur_rx[q] % priv->num_rx_ring[q];
 	limit = priv->dirty_rx[q] + priv->num_rx_ring[q] - priv->cur_rx[q];
 	stats = &priv->stats[q];
 
-	desc = &priv->rx_ring[q].desc[entry];
-	for (i = 0; i < limit && rx_packets < *quota && desc->die_dt != DT_FEMPTY; i++) {
+	for (i = 0; i < limit && rx_packets < *quota; i++, priv->cur_rx[q]++) {
+		entry = priv->cur_rx[q] % priv->num_rx_ring[q];
+		desc = &priv->rx_ring[q].desc[entry];
+		if (desc->die_dt == DT_FEMPTY)
+			break;
+
 		/* Descriptor type must be checked before all other reads */
 		dma_rmb();
 		desc_status = desc->msc;
@@ -848,9 +851,6 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q)
 				break;
 			}
 		}
-
-		entry = (++priv->cur_rx[q]) % priv->num_rx_ring[q];
-		desc = &priv->rx_ring[q].desc[entry];
 	}
 
 	/* Refill the RX ring buffers. */
@@ -891,7 +891,6 @@ static bool ravb_rx_rcar(struct net_device *ndev, int *quota, int q)
 {
 	struct ravb_private *priv = netdev_priv(ndev);
 	const struct ravb_hw_info *info = priv->info;
-	int entry = priv->cur_rx[q] % priv->num_rx_ring[q];
 	struct net_device_stats *stats = &priv->stats[q];
 	struct ravb_ex_rx_desc *desc;
 	struct sk_buff *skb;
@@ -900,12 +899,17 @@ static bool ravb_rx_rcar(struct net_device *ndev, int *quota, int q)
 	int rx_packets = 0;
 	u8 desc_status;
 	u16 pkt_len;
+	int entry;
 	int limit;
 	int i;
 
 	limit = priv->dirty_rx[q] + priv->num_rx_ring[q] - priv->cur_rx[q];
-	desc = &priv->rx_ring[q].ex_desc[entry];
-	for (i = 0; i < limit && rx_packets < *quota && desc->die_dt != DT_FEMPTY; i++) {
+	for (i = 0; i < limit && rx_packets < *quota; i++, priv->cur_rx[q]++) {
+		entry = priv->cur_rx[q] % priv->num_rx_ring[q];
+		desc = &priv->rx_ring[q].ex_desc[entry];
+		if (desc->die_dt == DT_FEMPTY)
+			break;
+
 		/* Descriptor type must be checked before all other reads */
 		dma_rmb();
 		desc_status = desc->msc;
@@ -959,9 +963,6 @@ static bool ravb_rx_rcar(struct net_device *ndev, int *quota, int q)
 		rx_packets++;
 		stats->rx_bytes += pkt_len;
 		}
-
-		entry = (++priv->cur_rx[q]) % priv->num_rx_ring[q];
-		desc = &priv->rx_ring[q].ex_desc[entry];
 	}
 
 	/* Refill the RX ring buffers. */
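For readers skimming the diff, the structural change can be illustrated with a
minimal standalone C sketch. This is illustrative only, not the driver code:
the ring array, its contents and the names (ring, cur_rx, RING_SIZE) are made
up; the point is that the index advance lives in the for statement, as in the
patched loops, so skipping an entry can never revisit the same slot.

#include <stdio.h>

#define RING_SIZE 8

int main(void)
{
	/* 0 marks a "bad" slot, standing in for a zero-length descriptor. */
	int ring[RING_SIZE] = { 5, 0, 7, 0, 3, 9, 0, 4 };
	unsigned int cur_rx = 0;
	int i;

	/*
	 * The ring index advances in the for statement itself, so a
	 * `continue` (or any early skip) in the body still makes progress.
	 */
	for (i = 0; i < RING_SIZE; i++, cur_rx++) {
		unsigned int entry = cur_rx % RING_SIZE;

		if (ring[entry] == 0)
			continue;	/* skip bad slot; cur_rx still advances */

		printf("processed slot %u (len %d)\n", entry, ring[entry]);
	}

	return 0;
}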