From patchwork Mon Mar 4 11:08:53 2024
X-Patchwork-Submitter: Niklas Söderlund
X-Patchwork-Id: 13580442
X-Patchwork-Delegate: kuba@kernel.org
From: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
To: Sergey Shtylyov, "David S. Miller", Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Biju Das, Claudiu Beznea, Yoshihiro Shimoda,
	netdev@vger.kernel.org
Cc: linux-renesas-soc@vger.kernel.org, Niklas Söderlund, Paul Barker
Subject: [net-next,v3 1/6] ravb: Group descriptor types used in Rx ring
Date: Mon, 4 Mar 2024 12:08:53 +0100
Message-ID: <20240304110858.117100-2-niklas.soderlund+renesas@ragnatech.se>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240304110858.117100-1-niklas.soderlund+renesas@ragnatech.se>
References: <20240304110858.117100-1-niklas.soderlund+renesas@ragnatech.se>

The Rx ring can be made up of either normal or extended descriptors, but
not a mix of the two at the same time. Make this explicit by grouping the
two variables in an rx_ring union.

Extending the storage for normal descriptors from a single queue to
NUM_RX_QUEUE queues has no practical effect, but it aids readability: the
code that uses it already piggybacks on other members of struct
ravb_private that are arrays of max length NUM_RX_QUEUE, e.g.
rx_desc_dma. This will also make further refactoring easier.

While at it, rename the normal descriptor Rx ring to make it clear it is
not strictly tied to the GbEthernet E-MAC IP found in RZ/G2L; normal
descriptors could be used on R-Car SoCs too.

Signed-off-by: Niklas Söderlund
Reviewed-by: Paul Barker
Reviewed-by: Sergey Shtylyov
---
* Changes since v2
- Fix spelling in commit message.
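For readers skimming the series, a minimal self-contained sketch of the
union-per-queue layout this patch introduces. The descriptor structs and
the queue count below are stand-ins (the real ravb_rx_desc,
ravb_ex_rx_desc and NUM_RX_QUEUE are defined in ravb.h); only the shape
of the rx_ring member mirrors the patch:

/* Stand-in descriptor layouts; real definitions live in ravb.h. */
struct ravb_rx_desc { unsigned int stand_in; };	/* normal */
struct ravb_ex_rx_desc { unsigned int stand_in; };	/* extended */

#define NUM_RX_QUEUE 2	/* stand-in queue count */

struct example_private {
	/* A ring is built from either normal or extended descriptors,
	 * never both at once, so the two pointers can share storage
	 * per queue.
	 */
	union {
		struct ravb_rx_desc *desc;
		struct ravb_ex_rx_desc *ex_desc;
	} rx_ring[NUM_RX_QUEUE];
};

int main(void)
{
	struct example_private priv = { 0 };

	/* GbEth code paths would use .desc, R-Car paths .ex_desc. */
	priv.rx_ring[0].desc = 0;
	return 0;
}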
---
 drivers/net/ethernet/renesas/ravb.h      |  6 ++-
 drivers/net/ethernet/renesas/ravb_main.c | 57 ++++++++++++------------
 2 files changed, 33 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h
index 35e642fc4b2a..aecc98282c7e 100644
--- a/drivers/net/ethernet/renesas/ravb.h
+++ b/drivers/net/ethernet/renesas/ravb.h
@@ -1092,8 +1092,10 @@ struct ravb_private {
 	struct ravb_desc *desc_bat;
 	dma_addr_t rx_desc_dma[NUM_RX_QUEUE];
 	dma_addr_t tx_desc_dma[NUM_TX_QUEUE];
-	struct ravb_rx_desc *gbeth_rx_ring;
-	struct ravb_ex_rx_desc *rx_ring[NUM_RX_QUEUE];
+	union {
+		struct ravb_rx_desc *desc;
+		struct ravb_ex_rx_desc *ex_desc;
+	} rx_ring[NUM_RX_QUEUE];
 	struct ravb_tx_desc *tx_ring[NUM_TX_QUEUE];
 	void *tx_align[NUM_TX_QUEUE];
 	struct sk_buff *rx_1st_skb;
diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
index f9fb772b05c7..c25a80f4d3b9 100644
--- a/drivers/net/ethernet/renesas/ravb_main.c
+++ b/drivers/net/ethernet/renesas/ravb_main.c
@@ -241,11 +241,11 @@ static void ravb_rx_ring_free_gbeth(struct net_device *ndev, int q)
 	unsigned int ring_size;
 	unsigned int i;
 
-	if (!priv->gbeth_rx_ring)
+	if (!priv->rx_ring[q].desc)
 		return;
 
 	for (i = 0; i < priv->num_rx_ring[q]; i++) {
-		struct ravb_rx_desc *desc = &priv->gbeth_rx_ring[i];
+		struct ravb_rx_desc *desc = &priv->rx_ring[q].desc[i];
 
 		if (!dma_mapping_error(ndev->dev.parent,
 				       le32_to_cpu(desc->dptr)))
@@ -255,9 +255,9 @@ static void ravb_rx_ring_free_gbeth(struct net_device *ndev, int q)
 				 DMA_FROM_DEVICE);
 	}
 	ring_size = sizeof(struct ravb_rx_desc) * (priv->num_rx_ring[q] + 1);
-	dma_free_coherent(ndev->dev.parent, ring_size, priv->gbeth_rx_ring,
+	dma_free_coherent(ndev->dev.parent, ring_size, priv->rx_ring[q].desc,
 			  priv->rx_desc_dma[q]);
-	priv->gbeth_rx_ring = NULL;
+	priv->rx_ring[q].desc = NULL;
 }
 
 static void ravb_rx_ring_free_rcar(struct net_device *ndev, int q)
@@ -266,11 +266,11 @@ static void ravb_rx_ring_free_rcar(struct net_device *ndev, int q)
 	unsigned int ring_size;
 	unsigned int i;
 
-	if (!priv->rx_ring[q])
+	if (!priv->rx_ring[q].ex_desc)
 		return;
 
 	for (i = 0; i < priv->num_rx_ring[q]; i++) {
-		struct ravb_ex_rx_desc *desc = &priv->rx_ring[q][i];
+		struct ravb_ex_rx_desc *desc = &priv->rx_ring[q].ex_desc[i];
 
 		if (!dma_mapping_error(ndev->dev.parent,
 				       le32_to_cpu(desc->dptr)))
@@ -281,9 +281,9 @@ static void ravb_rx_ring_free_rcar(struct net_device *ndev, int q)
 	}
 	ring_size = sizeof(struct ravb_ex_rx_desc) *
 		    (priv->num_rx_ring[q] + 1);
-	dma_free_coherent(ndev->dev.parent, ring_size, priv->rx_ring[q],
+	dma_free_coherent(ndev->dev.parent, ring_size, priv->rx_ring[q].ex_desc,
 			  priv->rx_desc_dma[q]);
-	priv->rx_ring[q] = NULL;
+	priv->rx_ring[q].ex_desc = NULL;
 }
 
 /* Free skb's and DMA buffers for Ethernet AVB */
@@ -335,11 +335,11 @@ static void ravb_rx_ring_format_gbeth(struct net_device *ndev, int q)
 	unsigned int i;
 
 	rx_ring_size = sizeof(*rx_desc) * priv->num_rx_ring[q];
-	memset(priv->gbeth_rx_ring, 0, rx_ring_size);
+	memset(priv->rx_ring[q].desc, 0, rx_ring_size);
 	/* Build RX ring buffer */
 	for (i = 0; i < priv->num_rx_ring[q]; i++) {
 		/* RX descriptor */
-		rx_desc = &priv->gbeth_rx_ring[i];
+		rx_desc = &priv->rx_ring[q].desc[i];
 		rx_desc->ds_cc = cpu_to_le16(GBETH_RX_DESC_DATA_SIZE);
 		dma_addr = dma_map_single(ndev->dev.parent, priv->rx_skb[q][i]->data,
 					  GBETH_RX_BUFF_MAX,
@@ -352,7 +352,7 @@ static void ravb_rx_ring_format_gbeth(struct net_device *ndev, int q)
 		rx_desc->dptr = cpu_to_le32(dma_addr);
 		rx_desc->die_dt = DT_FEMPTY;
 	}
-	rx_desc = &priv->gbeth_rx_ring[i];
+	rx_desc = &priv->rx_ring[q].desc[i];
 	rx_desc->dptr = cpu_to_le32((u32)priv->rx_desc_dma[q]);
 	rx_desc->die_dt = DT_LINKFIX; /* type */
 }
@@ -365,11 +365,11 @@ static void ravb_rx_ring_format_rcar(struct net_device *ndev, int q)
 	dma_addr_t dma_addr;
 	unsigned int i;
 
-	memset(priv->rx_ring[q], 0, rx_ring_size);
+	memset(priv->rx_ring[q].ex_desc, 0, rx_ring_size);
 	/* Build RX ring buffer */
 	for (i = 0; i < priv->num_rx_ring[q]; i++) {
 		/* RX descriptor */
-		rx_desc = &priv->rx_ring[q][i];
+		rx_desc = &priv->rx_ring[q].ex_desc[i];
 		rx_desc->ds_cc = cpu_to_le16(RX_BUF_SZ);
 		dma_addr = dma_map_single(ndev->dev.parent, priv->rx_skb[q][i]->data,
 					  RX_BUF_SZ,
@@ -382,7 +382,7 @@ static void ravb_rx_ring_format_rcar(struct net_device *ndev, int q)
 		rx_desc->dptr = cpu_to_le32(dma_addr);
 		rx_desc->die_dt = DT_FEMPTY;
 	}
-	rx_desc = &priv->rx_ring[q][i];
+	rx_desc = &priv->rx_ring[q].ex_desc[i];
 	rx_desc->dptr = cpu_to_le32((u32)priv->rx_desc_dma[q]);
 	rx_desc->die_dt = DT_LINKFIX; /* type */
 }
@@ -437,10 +437,10 @@ static void *ravb_alloc_rx_desc_gbeth(struct net_device *ndev, int q)
 
 	ring_size = sizeof(struct ravb_rx_desc) * (priv->num_rx_ring[q] + 1);
 
-	priv->gbeth_rx_ring = dma_alloc_coherent(ndev->dev.parent, ring_size,
-						 &priv->rx_desc_dma[q],
-						 GFP_KERNEL);
-	return priv->gbeth_rx_ring;
+	priv->rx_ring[q].desc = dma_alloc_coherent(ndev->dev.parent, ring_size,
+						   &priv->rx_desc_dma[q],
+						   GFP_KERNEL);
+	return priv->rx_ring[q].desc;
 }
 
 static void *ravb_alloc_rx_desc_rcar(struct net_device *ndev, int q)
@@ -450,10 +450,11 @@ static void *ravb_alloc_rx_desc_rcar(struct net_device *ndev, int q)
 
 	ring_size = sizeof(struct ravb_ex_rx_desc) * (priv->num_rx_ring[q] + 1);
 
-	priv->rx_ring[q] = dma_alloc_coherent(ndev->dev.parent, ring_size,
-					      &priv->rx_desc_dma[q],
-					      GFP_KERNEL);
-	return priv->rx_ring[q];
+	priv->rx_ring[q].ex_desc = dma_alloc_coherent(ndev->dev.parent,
+						      ring_size,
+						      &priv->rx_desc_dma[q],
+						      GFP_KERNEL);
+	return priv->rx_ring[q].ex_desc;
 }
 
 /* Init skb and descriptor buffer for Ethernet AVB */
@@ -830,7 +831,7 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q)
 	limit = priv->dirty_rx[q] + priv->num_rx_ring[q] - priv->cur_rx[q];
 	stats = &priv->stats[q];
 
-	desc = &priv->gbeth_rx_ring[entry];
+	desc = &priv->rx_ring[q].desc[entry];
 	for (i = 0; i < limit && rx_packets < *quota && desc->die_dt != DT_FEMPTY; i++) {
 		/* Descriptor type must be checked before all other reads */
 		dma_rmb();
@@ -901,13 +902,13 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q)
 		}
 
 		entry = (++priv->cur_rx[q]) % priv->num_rx_ring[q];
-		desc = &priv->gbeth_rx_ring[entry];
+		desc = &priv->rx_ring[q].desc[entry];
 	}
 
 	/* Refill the RX ring buffers. */
 	for (; priv->cur_rx[q] - priv->dirty_rx[q] > 0; priv->dirty_rx[q]++) {
 		entry = priv->dirty_rx[q] % priv->num_rx_ring[q];
-		desc = &priv->gbeth_rx_ring[entry];
+		desc = &priv->rx_ring[q].desc[entry];
 		desc->ds_cc = cpu_to_le16(GBETH_RX_DESC_DATA_SIZE);
 
 		if (!priv->rx_skb[q][entry]) {
@@ -957,7 +958,7 @@ static bool ravb_rx_rcar(struct net_device *ndev, int *quota, int q)
 	boguscnt = min(boguscnt, *quota);
 	limit = boguscnt;
 
-	desc = &priv->rx_ring[q][entry];
+	desc = &priv->rx_ring[q].ex_desc[entry];
 	while (desc->die_dt != DT_FEMPTY) {
 		/* Descriptor type must be checked before all other reads */
 		dma_rmb();
@@ -1017,13 +1018,13 @@ static bool ravb_rx_rcar(struct net_device *ndev, int *quota, int q)
 		}
 
 		entry = (++priv->cur_rx[q]) % priv->num_rx_ring[q];
-		desc = &priv->rx_ring[q][entry];
+		desc = &priv->rx_ring[q].ex_desc[entry];
 	}
 
 	/* Refill the RX ring buffers. */
 	for (; priv->cur_rx[q] - priv->dirty_rx[q] > 0; priv->dirty_rx[q]++) {
 		entry = priv->dirty_rx[q] % priv->num_rx_ring[q];
-		desc = &priv->rx_ring[q][entry];
+		desc = &priv->rx_ring[q].ex_desc[entry];
 		desc->ds_cc = cpu_to_le16(RX_BUF_SZ);
 
 		if (!priv->rx_skb[q][entry]) {
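A closing note on the commit message's point that carrying NUM_RX_QUEUE
normal-descriptor rings "has no practical effect": since the two ring
pointers now share a union, each queue slot is still one pointer wide,
so struct ravb_private in fact shrinks by the one gbeth_rx_ring pointer
it used to carry. A hypothetical compile-time check of that size
relation (not part of the patch; assumes the usual ABI where all object
pointers are the same width):

/* Forward declarations suffice to form the pointer types. */
struct ravb_rx_desc;
struct ravb_ex_rx_desc;

union rx_ring_slot {
	struct ravb_rx_desc *desc;
	struct ravb_ex_rx_desc *ex_desc;
};

/* Each queue slot stays a single pointer wide despite offering two
 * typed views of the same ring memory.
 */
_Static_assert(sizeof(union rx_ring_slot) == sizeof(struct ravb_rx_desc *),
	       "rx_ring slot is one pointer wide");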