From patchwork Tue Feb 27 01:40:09 2024
X-Patchwork-Submitter: Niklas Söderlund
X-Patchwork-Id: 13573103
X-Patchwork-Delegate: kuba@kernel.org
From: Niklas Söderlund
To: Sergey Shtylyov, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Biju Das, Claudiu Beznea, Yoshihiro Shimoda,
    netdev@vger.kernel.org
Cc: linux-renesas-soc@vger.kernel.org, Niklas Söderlund
Subject: [net-next 1/6] ravb: Group descriptor types used in Rx ring
Date: Tue, 27 Feb 2024 02:40:09 +0100
Message-ID: <20240227014014.44855-2-niklas.soderlund+renesas@ragnatech.se>
In-Reply-To: <20240227014014.44855-1-niklas.soderlund+renesas@ragnatech.se>
References: <20240227014014.44855-1-niklas.soderlund+renesas@ragnatech.se>

The Rx ring can be made up of either normal or extended descriptors, never
a mix of the two at the same time. Make this explicit by grouping the two
pointers in an rx_ring union.

Extending the storage for normal descriptors from a single queue to
NUM_RX_QUEUE queues has no practical effect, but it aids readability: the
code that uses it already piggybacks on other members of struct
ravb_private that are arrays of max length NUM_RX_QUEUE, e.g. rx_desc_dma.
It will also make further refactoring easier.

While at it, rename the normal descriptor Rx ring to make it clear it is
not strictly related to the GbEthernet E-MAC IP found in RZ/G2L; normal
descriptors could be used on R-Car SoCs too.
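For reference, the data-structure change at the core of this patch,
condensed from the diff below (member names and array length exactly as in
the patch):

	/* Before: two independent pointers, only one ever used per SoC. */
	struct ravb_rx_desc *gbeth_rx_ring;
	struct ravb_ex_rx_desc *rx_ring[NUM_RX_QUEUE];

	/* After: one union per Rx queue; a SoC uses either .desc (normal
	 * descriptors, e.g. GbEth on RZ/G2L) or .ex_desc (extended
	 * descriptors, R-Car), never both at once.
	 */
	union {
		struct ravb_rx_desc *desc;
		struct ravb_ex_rx_desc *ex_desc;
	} rx_ring[NUM_RX_QUEUE];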
Signed-off-by: Niklas Söderlund Reviewed-by: Paul Barker --- drivers/net/ethernet/renesas/ravb.h | 6 ++- drivers/net/ethernet/renesas/ravb_main.c | 57 ++++++++++++------------ 2 files changed, 33 insertions(+), 30 deletions(-) diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h index 35e642fc4b2a..aecc98282c7e 100644 --- a/drivers/net/ethernet/renesas/ravb.h +++ b/drivers/net/ethernet/renesas/ravb.h @@ -1092,8 +1092,10 @@ struct ravb_private { struct ravb_desc *desc_bat; dma_addr_t rx_desc_dma[NUM_RX_QUEUE]; dma_addr_t tx_desc_dma[NUM_TX_QUEUE]; - struct ravb_rx_desc *gbeth_rx_ring; - struct ravb_ex_rx_desc *rx_ring[NUM_RX_QUEUE]; + union { + struct ravb_rx_desc *desc; + struct ravb_ex_rx_desc *ex_desc; + } rx_ring[NUM_RX_QUEUE]; struct ravb_tx_desc *tx_ring[NUM_TX_QUEUE]; void *tx_align[NUM_TX_QUEUE]; struct sk_buff *rx_1st_skb; diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c index f9fb772b05c7..c25a80f4d3b9 100644 --- a/drivers/net/ethernet/renesas/ravb_main.c +++ b/drivers/net/ethernet/renesas/ravb_main.c @@ -241,11 +241,11 @@ static void ravb_rx_ring_free_gbeth(struct net_device *ndev, int q) unsigned int ring_size; unsigned int i; - if (!priv->gbeth_rx_ring) + if (!priv->rx_ring[q].desc) return; for (i = 0; i < priv->num_rx_ring[q]; i++) { - struct ravb_rx_desc *desc = &priv->gbeth_rx_ring[i]; + struct ravb_rx_desc *desc = &priv->rx_ring[q].desc[i]; if (!dma_mapping_error(ndev->dev.parent, le32_to_cpu(desc->dptr))) @@ -255,9 +255,9 @@ static void ravb_rx_ring_free_gbeth(struct net_device *ndev, int q) DMA_FROM_DEVICE); } ring_size = sizeof(struct ravb_rx_desc) * (priv->num_rx_ring[q] + 1); - dma_free_coherent(ndev->dev.parent, ring_size, priv->gbeth_rx_ring, + dma_free_coherent(ndev->dev.parent, ring_size, priv->rx_ring[q].desc, priv->rx_desc_dma[q]); - priv->gbeth_rx_ring = NULL; + priv->rx_ring[q].desc = NULL; } static void ravb_rx_ring_free_rcar(struct net_device *ndev, int q) @@ -266,11 +266,11 @@ static void ravb_rx_ring_free_rcar(struct net_device *ndev, int q) unsigned int ring_size; unsigned int i; - if (!priv->rx_ring[q]) + if (!priv->rx_ring[q].ex_desc) return; for (i = 0; i < priv->num_rx_ring[q]; i++) { - struct ravb_ex_rx_desc *desc = &priv->rx_ring[q][i]; + struct ravb_ex_rx_desc *desc = &priv->rx_ring[q].ex_desc[i]; if (!dma_mapping_error(ndev->dev.parent, le32_to_cpu(desc->dptr))) @@ -281,9 +281,9 @@ static void ravb_rx_ring_free_rcar(struct net_device *ndev, int q) } ring_size = sizeof(struct ravb_ex_rx_desc) * (priv->num_rx_ring[q] + 1); - dma_free_coherent(ndev->dev.parent, ring_size, priv->rx_ring[q], + dma_free_coherent(ndev->dev.parent, ring_size, priv->rx_ring[q].ex_desc, priv->rx_desc_dma[q]); - priv->rx_ring[q] = NULL; + priv->rx_ring[q].ex_desc = NULL; } /* Free skb's and DMA buffers for Ethernet AVB */ @@ -335,11 +335,11 @@ static void ravb_rx_ring_format_gbeth(struct net_device *ndev, int q) unsigned int i; rx_ring_size = sizeof(*rx_desc) * priv->num_rx_ring[q]; - memset(priv->gbeth_rx_ring, 0, rx_ring_size); + memset(priv->rx_ring[q].desc, 0, rx_ring_size); /* Build RX ring buffer */ for (i = 0; i < priv->num_rx_ring[q]; i++) { /* RX descriptor */ - rx_desc = &priv->gbeth_rx_ring[i]; + rx_desc = &priv->rx_ring[q].desc[i]; rx_desc->ds_cc = cpu_to_le16(GBETH_RX_DESC_DATA_SIZE); dma_addr = dma_map_single(ndev->dev.parent, priv->rx_skb[q][i]->data, GBETH_RX_BUFF_MAX, @@ -352,7 +352,7 @@ static void ravb_rx_ring_format_gbeth(struct net_device *ndev, int q) rx_desc->dptr = 
cpu_to_le32(dma_addr); rx_desc->die_dt = DT_FEMPTY; } - rx_desc = &priv->gbeth_rx_ring[i]; + rx_desc = &priv->rx_ring[q].desc[i]; rx_desc->dptr = cpu_to_le32((u32)priv->rx_desc_dma[q]); rx_desc->die_dt = DT_LINKFIX; /* type */ } @@ -365,11 +365,11 @@ static void ravb_rx_ring_format_rcar(struct net_device *ndev, int q) dma_addr_t dma_addr; unsigned int i; - memset(priv->rx_ring[q], 0, rx_ring_size); + memset(priv->rx_ring[q].ex_desc, 0, rx_ring_size); /* Build RX ring buffer */ for (i = 0; i < priv->num_rx_ring[q]; i++) { /* RX descriptor */ - rx_desc = &priv->rx_ring[q][i]; + rx_desc = &priv->rx_ring[q].ex_desc[i]; rx_desc->ds_cc = cpu_to_le16(RX_BUF_SZ); dma_addr = dma_map_single(ndev->dev.parent, priv->rx_skb[q][i]->data, RX_BUF_SZ, @@ -382,7 +382,7 @@ static void ravb_rx_ring_format_rcar(struct net_device *ndev, int q) rx_desc->dptr = cpu_to_le32(dma_addr); rx_desc->die_dt = DT_FEMPTY; } - rx_desc = &priv->rx_ring[q][i]; + rx_desc = &priv->rx_ring[q].ex_desc[i]; rx_desc->dptr = cpu_to_le32((u32)priv->rx_desc_dma[q]); rx_desc->die_dt = DT_LINKFIX; /* type */ } @@ -437,10 +437,10 @@ static void *ravb_alloc_rx_desc_gbeth(struct net_device *ndev, int q) ring_size = sizeof(struct ravb_rx_desc) * (priv->num_rx_ring[q] + 1); - priv->gbeth_rx_ring = dma_alloc_coherent(ndev->dev.parent, ring_size, - &priv->rx_desc_dma[q], - GFP_KERNEL); - return priv->gbeth_rx_ring; + priv->rx_ring[q].desc = dma_alloc_coherent(ndev->dev.parent, ring_size, + &priv->rx_desc_dma[q], + GFP_KERNEL); + return priv->rx_ring[q].desc; } static void *ravb_alloc_rx_desc_rcar(struct net_device *ndev, int q) @@ -450,10 +450,11 @@ static void *ravb_alloc_rx_desc_rcar(struct net_device *ndev, int q) ring_size = sizeof(struct ravb_ex_rx_desc) * (priv->num_rx_ring[q] + 1); - priv->rx_ring[q] = dma_alloc_coherent(ndev->dev.parent, ring_size, - &priv->rx_desc_dma[q], - GFP_KERNEL); - return priv->rx_ring[q]; + priv->rx_ring[q].ex_desc = dma_alloc_coherent(ndev->dev.parent, + ring_size, + &priv->rx_desc_dma[q], + GFP_KERNEL); + return priv->rx_ring[q].ex_desc; } /* Init skb and descriptor buffer for Ethernet AVB */ @@ -830,7 +831,7 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q) limit = priv->dirty_rx[q] + priv->num_rx_ring[q] - priv->cur_rx[q]; stats = &priv->stats[q]; - desc = &priv->gbeth_rx_ring[entry]; + desc = &priv->rx_ring[q].desc[entry]; for (i = 0; i < limit && rx_packets < *quota && desc->die_dt != DT_FEMPTY; i++) { /* Descriptor type must be checked before all other reads */ dma_rmb(); @@ -901,13 +902,13 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q) } entry = (++priv->cur_rx[q]) % priv->num_rx_ring[q]; - desc = &priv->gbeth_rx_ring[entry]; + desc = &priv->rx_ring[q].desc[entry]; } /* Refill the RX ring buffers. 
*/ for (; priv->cur_rx[q] - priv->dirty_rx[q] > 0; priv->dirty_rx[q]++) { entry = priv->dirty_rx[q] % priv->num_rx_ring[q]; - desc = &priv->gbeth_rx_ring[entry]; + desc = &priv->rx_ring[q].desc[entry]; desc->ds_cc = cpu_to_le16(GBETH_RX_DESC_DATA_SIZE); if (!priv->rx_skb[q][entry]) { @@ -957,7 +958,7 @@ static bool ravb_rx_rcar(struct net_device *ndev, int *quota, int q) boguscnt = min(boguscnt, *quota); limit = boguscnt; - desc = &priv->rx_ring[q][entry]; + desc = &priv->rx_ring[q].ex_desc[entry]; while (desc->die_dt != DT_FEMPTY) { /* Descriptor type must be checked before all other reads */ dma_rmb(); @@ -1017,13 +1018,13 @@ static bool ravb_rx_rcar(struct net_device *ndev, int *quota, int q) } entry = (++priv->cur_rx[q]) % priv->num_rx_ring[q]; - desc = &priv->rx_ring[q][entry]; + desc = &priv->rx_ring[q].ex_desc[entry]; } /* Refill the RX ring buffers. */ for (; priv->cur_rx[q] - priv->dirty_rx[q] > 0; priv->dirty_rx[q]++) { entry = priv->dirty_rx[q] % priv->num_rx_ring[q]; - desc = &priv->rx_ring[q][entry]; + desc = &priv->rx_ring[q].ex_desc[entry]; desc->ds_cc = cpu_to_le16(RX_BUF_SZ); if (!priv->rx_skb[q][entry]) {

From patchwork Tue Feb 27 01:40:10 2024
X-Patchwork-Submitter: Niklas Söderlund
X-Patchwork-Id: 13573104
X-Patchwork-Delegate: kuba@kernel.org
From: Niklas Söderlund
To: Sergey Shtylyov, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Biju Das, Claudiu Beznea, Yoshihiro Shimoda,
    netdev@vger.kernel.org
Cc: linux-renesas-soc@vger.kernel.org, Niklas Söderlund
Subject: [net-next 2/6] ravb: Make it clear the information relates to maximum frame size
Date: Tue, 27 Feb 2024 02:40:10 +0100
Message-ID: <20240227014014.44855-3-niklas.soderlund+renesas@ragnatech.se>
In-Reply-To: <20240227014014.44855-1-niklas.soderlund+renesas@ragnatech.se>
References: <20240227014014.44855-1-niklas.soderlund+renesas@ragnatech.se>

The struct member rx_max_buf_size was added before split descriptor
support was added. It is unclear whether the value describes the full skb
frame buffer or the data descriptor buffer, which can be combined into a
single skb. Rename it to make it clear it refers to the maximum frame size
and can cover multiple descriptors.
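As a quick check of what the renamed value feeds into, the probe path in
the diff below derives the MTU limit from it; the Ethernet header, VLAN
and FCS sizes are the standard kernel constants and are written out here
only to make the arithmetic concrete:

	/* e.g. the R-Car family, where rx_max_frame_size = SZ_2K: */
	max_mtu = rx_max_frame_size - (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN)
	        = 2048 - (14 + 4 + 4)
	        = 2026 bytes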
Signed-off-by: Niklas Söderlund Reviewed-by: Paul Barker --- drivers/net/ethernet/renesas/ravb.h | 2 +- drivers/net/ethernet/renesas/ravb_main.c | 10 +++++----- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h index aecc98282c7e..7f9e8b2c012a 100644 --- a/drivers/net/ethernet/renesas/ravb.h +++ b/drivers/net/ethernet/renesas/ravb.h @@ -1059,7 +1059,7 @@ struct ravb_hw_info { int stats_len; size_t max_rx_len; u32 tccr_mask; - u32 rx_max_buf_size; + u32 rx_max_frame_size; unsigned aligned_tx: 1; /* hardware features */ diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c index c25a80f4d3b9..3c59e2c317c7 100644 --- a/drivers/net/ethernet/renesas/ravb_main.c +++ b/drivers/net/ethernet/renesas/ravb_main.c @@ -2684,7 +2684,7 @@ static const struct ravb_hw_info ravb_gen3_hw_info = { .stats_len = ARRAY_SIZE(ravb_gstrings_stats), .max_rx_len = RX_BUF_SZ + RAVB_ALIGN - 1, .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3, - .rx_max_buf_size = SZ_2K, + .rx_max_frame_size = SZ_2K, .internal_delay = 1, .tx_counters = 1, .multi_irqs = 1, @@ -2710,7 +2710,7 @@ static const struct ravb_hw_info ravb_gen2_hw_info = { .stats_len = ARRAY_SIZE(ravb_gstrings_stats), .max_rx_len = RX_BUF_SZ + RAVB_ALIGN - 1, .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3, - .rx_max_buf_size = SZ_2K, + .rx_max_frame_size = SZ_2K, .aligned_tx = 1, .gptp = 1, .nc_queues = 1, @@ -2733,7 +2733,7 @@ static const struct ravb_hw_info ravb_rzv2m_hw_info = { .stats_len = ARRAY_SIZE(ravb_gstrings_stats), .max_rx_len = RX_BUF_SZ + RAVB_ALIGN - 1, .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3, - .rx_max_buf_size = SZ_2K, + .rx_max_frame_size = SZ_2K, .multi_irqs = 1, .err_mgmt_irqs = 1, .gptp = 1, @@ -2758,7 +2758,7 @@ static const struct ravb_hw_info gbeth_hw_info = { .stats_len = ARRAY_SIZE(ravb_gstrings_stats_gbeth), .max_rx_len = ALIGN(GBETH_RX_BUFF_MAX, RAVB_ALIGN), .tccr_mask = TCCR_TSRQ0, - .rx_max_buf_size = SZ_8K, + .rx_max_frame_size = SZ_8K, .aligned_tx = 1, .tx_counters = 1, .carrier_counters = 1, @@ -2967,7 +2967,7 @@ static int ravb_probe(struct platform_device *pdev) priv->avb_link_active_low = of_property_read_bool(np, "renesas,ether-link-active-low"); - ndev->max_mtu = info->rx_max_buf_size - (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN); + ndev->max_mtu = info->rx_max_frame_size - (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN); ndev->min_mtu = ETH_MIN_MTU; /* FIXME: R-Car Gen2 has 4byte alignment restriction for tx buffer From patchwork Tue Feb 27 01:40:11 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Niklas_S=C3=B6derlund?= X-Patchwork-Id: 13573105 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-wm1-f46.google.com (mail-wm1-f46.google.com [209.85.128.46]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DBC1D63A9 for ; Tue, 27 Feb 2024 01:42:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.46 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1708998122; cv=none; b=EIb+vxNKouY6Y2TrqgjHBOyRhzYuXP/rv1NFgrrPzIACJcM70/c3h4OmWC+WemWqbCzuPlzFBJ3tpjeYRUY2XcA6q3g3QLgeLM1HmF7KD9ex6t1bLBMjswh1ZSlVg6NZVj1tul7pEBi+lGpdLNtDqS29+tbac49Yx0ywFrH/YF8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; 
From: Niklas Söderlund
To: Sergey Shtylyov, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Biju Das, Claudiu Beznea, Yoshihiro Shimoda,
    netdev@vger.kernel.org
Cc: linux-renesas-soc@vger.kernel.org, Niklas Söderlund
Subject: [net-next 3/6] ravb: Create helper to allocate skb and align it
Date: Tue, 27 Feb 2024 02:40:11 +0100
Message-ID: <20240227014014.44855-4-niklas.soderlund+renesas@ragnatech.se>
In-Reply-To: <20240227014014.44855-1-niklas.soderlund+renesas@ragnatech.se>
References: <20240227014014.44855-1-niklas.soderlund+renesas@ragnatech.se>

The RAVB device requires the SKB data to be aligned to 128 bytes. The
alignment is achieved by allocating an skb 128 bytes larger than the
maximum frame size supported by the device and adjusting the headroom to
fit the requirement. This code has been refactored a few times and small
issues have crept in along the way. The issues are not harmful but prevent
merging parts of the Rx code, which has been split into two
implementations since the addition of RZ/G2L support, a device that
supports larger frame sizes.

This change removes the need for the duplicated and somewhat inaccurate
hardware alignment constraints stored in the hardware information struct
by creating a helper that handles both the allocation of an skb and the
alignment of its data.

For the R-Car class of devices the maximum frame size is 4K and each
descriptor is limited to 2K of data. The current implementation does not
support split descriptors, which limits the frame size to 2K. The current
hardware information, however, records the descriptor size as just under
2K due to a poor understanding of the device when larger MTUs were added.

For the RZ/G2L device the maximum frame size is 8K and each descriptor is
limited to 4K of data. The current hardware information records this
correctly, but it gets the alignment constraint wrong: it only aligns the
buffer to 128 bytes, it does not also extend it by 128 bytes to allow the
full frame to be stored. This works because the RZ/G2L device supports
split descriptors, allocates each skb at 8K, and aligns each 4K descriptor
within this space.
Signed-off-by: Niklas Söderlund --- drivers/net/ethernet/renesas/ravb.h | 1 - drivers/net/ethernet/renesas/ravb_main.c | 41 +++++++++++++----------- 2 files changed, 22 insertions(+), 20 deletions(-) diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h index 7f9e8b2c012a..751bb29cd488 100644 --- a/drivers/net/ethernet/renesas/ravb.h +++ b/drivers/net/ethernet/renesas/ravb.h @@ -1057,7 +1057,6 @@ struct ravb_hw_info { netdev_features_t net_hw_features; netdev_features_t net_features; int stats_len; - size_t max_rx_len; u32 tccr_mask; u32 rx_max_frame_size; unsigned aligned_tx: 1; diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c index 3c59e2c317c7..6e39d498936f 100644 --- a/drivers/net/ethernet/renesas/ravb_main.c +++ b/drivers/net/ethernet/renesas/ravb_main.c @@ -113,12 +113,21 @@ static void ravb_set_rate_rcar(struct net_device *ndev) } } -static void ravb_set_buffer_align(struct sk_buff *skb) +static struct sk_buff * +ravb_alloc_skb(struct net_device *ndev, const struct ravb_hw_info *info) { - u32 reserve = (unsigned long)skb->data & (RAVB_ALIGN - 1); + struct sk_buff *skb; + u32 reserve; + skb = netdev_alloc_skb(ndev, info->rx_max_frame_size + RAVB_ALIGN - 1); + if (!skb) + return NULL; + + reserve = (unsigned long)skb->data & (RAVB_ALIGN - 1); if (reserve) skb_reserve(skb, RAVB_ALIGN - reserve); + + return skb; } /* Get MAC address from the MAC address registers @@ -251,7 +260,7 @@ static void ravb_rx_ring_free_gbeth(struct net_device *ndev, int q) le32_to_cpu(desc->dptr))) dma_unmap_single(ndev->dev.parent, le32_to_cpu(desc->dptr), - GBETH_RX_BUFF_MAX, + priv->info->rx_max_frame_size, DMA_FROM_DEVICE); } ring_size = sizeof(struct ravb_rx_desc) * (priv->num_rx_ring[q] + 1); @@ -276,7 +285,7 @@ static void ravb_rx_ring_free_rcar(struct net_device *ndev, int q) le32_to_cpu(desc->dptr))) dma_unmap_single(ndev->dev.parent, le32_to_cpu(desc->dptr), - RX_BUF_SZ, + priv->info->rx_max_frame_size, DMA_FROM_DEVICE); } ring_size = sizeof(struct ravb_ex_rx_desc) * @@ -342,7 +351,7 @@ static void ravb_rx_ring_format_gbeth(struct net_device *ndev, int q) rx_desc = &priv->rx_ring[q].desc[i]; rx_desc->ds_cc = cpu_to_le16(GBETH_RX_DESC_DATA_SIZE); dma_addr = dma_map_single(ndev->dev.parent, priv->rx_skb[q][i]->data, - GBETH_RX_BUFF_MAX, + priv->info->rx_max_frame_size, DMA_FROM_DEVICE); /* We just set the data size to 0 for a failed mapping which * should prevent DMA from happening... @@ -372,7 +381,7 @@ static void ravb_rx_ring_format_rcar(struct net_device *ndev, int q) rx_desc = &priv->rx_ring[q].ex_desc[i]; rx_desc->ds_cc = cpu_to_le16(RX_BUF_SZ); dma_addr = dma_map_single(ndev->dev.parent, priv->rx_skb[q][i]->data, - RX_BUF_SZ, + priv->info->rx_max_frame_size, DMA_FROM_DEVICE); /* We just set the data size to 0 for a failed mapping which * should prevent DMA from happening... 
@@ -476,10 +485,9 @@ static int ravb_ring_init(struct net_device *ndev, int q) goto error; for (i = 0; i < priv->num_rx_ring[q]; i++) { - skb = __netdev_alloc_skb(ndev, info->max_rx_len, GFP_KERNEL); + skb = ravb_alloc_skb(ndev, info); if (!skb) goto error; - ravb_set_buffer_align(skb); priv->rx_skb[q][i] = skb; } @@ -805,7 +813,8 @@ static struct sk_buff *ravb_get_skb_gbeth(struct net_device *ndev, int entry, skb = priv->rx_skb[RAVB_BE][entry]; priv->rx_skb[RAVB_BE][entry] = NULL; dma_unmap_single(ndev->dev.parent, le32_to_cpu(desc->dptr), - ALIGN(GBETH_RX_BUFF_MAX, 16), DMA_FROM_DEVICE); + ALIGN(priv->info->rx_max_frame_size, 16), + DMA_FROM_DEVICE); return skb; } @@ -912,13 +921,12 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q) desc->ds_cc = cpu_to_le16(GBETH_RX_DESC_DATA_SIZE); if (!priv->rx_skb[q][entry]) { - skb = netdev_alloc_skb(ndev, info->max_rx_len); + skb = ravb_alloc_skb(ndev, info); if (!skb) break; - ravb_set_buffer_align(skb); dma_addr = dma_map_single(ndev->dev.parent, skb->data, - GBETH_RX_BUFF_MAX, + priv->info->rx_max_frame_size, DMA_FROM_DEVICE); skb_checksum_none_assert(skb); /* We just set the data size to 0 for a failed mapping @@ -992,7 +1000,7 @@ static bool ravb_rx_rcar(struct net_device *ndev, int *quota, int q) skb = priv->rx_skb[q][entry]; priv->rx_skb[q][entry] = NULL; dma_unmap_single(ndev->dev.parent, le32_to_cpu(desc->dptr), - RX_BUF_SZ, + priv->info->rx_max_frame_size, DMA_FROM_DEVICE); get_ts &= (q == RAVB_NC) ? RAVB_RXTSTAMP_TYPE_V2_L2_EVENT : @@ -1028,10 +1036,9 @@ static bool ravb_rx_rcar(struct net_device *ndev, int *quota, int q) desc->ds_cc = cpu_to_le16(RX_BUF_SZ); if (!priv->rx_skb[q][entry]) { - skb = netdev_alloc_skb(ndev, info->max_rx_len); + skb = ravb_alloc_skb(ndev, info); if (!skb) break; /* Better luck next round. 
*/ - ravb_set_buffer_align(skb); dma_addr = dma_map_single(ndev->dev.parent, skb->data, le16_to_cpu(desc->ds_cc), DMA_FROM_DEVICE); @@ -2682,7 +2689,6 @@ static const struct ravb_hw_info ravb_gen3_hw_info = { .net_hw_features = NETIF_F_RXCSUM, .net_features = NETIF_F_RXCSUM, .stats_len = ARRAY_SIZE(ravb_gstrings_stats), - .max_rx_len = RX_BUF_SZ + RAVB_ALIGN - 1, .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3, .rx_max_frame_size = SZ_2K, .internal_delay = 1, @@ -2708,7 +2714,6 @@ static const struct ravb_hw_info ravb_gen2_hw_info = { .net_hw_features = NETIF_F_RXCSUM, .net_features = NETIF_F_RXCSUM, .stats_len = ARRAY_SIZE(ravb_gstrings_stats), - .max_rx_len = RX_BUF_SZ + RAVB_ALIGN - 1, .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3, .rx_max_frame_size = SZ_2K, .aligned_tx = 1, @@ -2731,7 +2736,6 @@ static const struct ravb_hw_info ravb_rzv2m_hw_info = { .net_hw_features = NETIF_F_RXCSUM, .net_features = NETIF_F_RXCSUM, .stats_len = ARRAY_SIZE(ravb_gstrings_stats), - .max_rx_len = RX_BUF_SZ + RAVB_ALIGN - 1, .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3, .rx_max_frame_size = SZ_2K, .multi_irqs = 1, @@ -2756,7 +2760,6 @@ static const struct ravb_hw_info gbeth_hw_info = { .net_hw_features = NETIF_F_RXCSUM | NETIF_F_HW_CSUM, .net_features = NETIF_F_RXCSUM | NETIF_F_HW_CSUM, .stats_len = ARRAY_SIZE(ravb_gstrings_stats_gbeth), - .max_rx_len = ALIGN(GBETH_RX_BUFF_MAX, RAVB_ALIGN), .tccr_mask = TCCR_TSRQ0, .rx_max_frame_size = SZ_8K, .aligned_tx = 1, From patchwork Tue Feb 27 01:40:12 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Niklas_S=C3=B6derlund?= X-Patchwork-Id: 13573106 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-wm1-f53.google.com (mail-wm1-f53.google.com [209.85.128.53]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 927096AD7 for ; Tue, 27 Feb 2024 01:42:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.53 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1708998123; cv=none; b=EeIvFAgTmNJbuelzyaqkd9c6J0lDVmBJysDv5L9MIIkBFrIUYeVd1RSoDgTpTp712Nl73loQEKa44FpJQ5/HkhFkccZ8DqSxKIzagnNw9OB2qkDaQNwv6tHgpM1A2tyFCtkfxaCkZ8w5dAEoTGPgoiEGCJEiFgMiTgDwqX9VfEY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1708998123; c=relaxed/simple; bh=DmiYehH02JX9eScG9j64ZSxMCrWI7n4JEDZAyxQWx5s=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=cuHXZTdxSaAUTgcjTm/H2hz8KTUri13CQgt/rh0oA1hiCBk1wwFYi07vft0MajfpJb3P3AvRNJe9JX4nNaqctHFbLpZi58q3VR6v4swSxNeTWn1buqKtbRbi6HNiZUXPYfxKGPtY6m2HSfg3A6BgcBlUXxWu6yLA1NqlkaPCMDA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=ragnatech.se; spf=pass smtp.mailfrom=ragnatech.se; dkim=pass (2048-bit key) header.d=ragnatech.se header.i=@ragnatech.se header.b=bpAxC3G8; arc=none smtp.client-ip=209.85.128.53 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=ragnatech.se Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=ragnatech.se Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=ragnatech.se header.i=@ragnatech.se header.b="bpAxC3G8" Received: by mail-wm1-f53.google.com with SMTP id 5b1f17b1804b1-412a57832fcso11479685e9.1 for ; Mon, 26 
From: Niklas Söderlund
To: Sergey Shtylyov, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Biju Das, Claudiu Beznea, Yoshihiro Shimoda,
    netdev@vger.kernel.org
Cc: linux-renesas-soc@vger.kernel.org, Niklas Söderlund
Subject: [net-next 4/6] ravb: Use the max frame size from hardware info for RZ/G2L
Date: Tue, 27 Feb 2024 02:40:12 +0100
Message-ID: <20240227014014.44855-5-niklas.soderlund+renesas@ragnatech.se>
In-Reply-To: <20240227014014.44855-1-niklas.soderlund+renesas@ragnatech.se>
References: <20240227014014.44855-1-niklas.soderlund+renesas@ragnatech.se>

Remove the define describing the RZ/G2L maximum frame size and only use
the information in the hardware information struct. This will make it
easier to merge the R-Car and RZ/G2L code paths. There is no functional
change, as both the define and the maximum frame length in the hardware
information are set to 8K.
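To see why there is no functional change, the two GbEth register writes
touched by the diff below produce the same values before and after, since
the removed GBETH_RX_BUFF_MAX and rx_max_frame_size are both 8192:

	RFLR: rx_max_frame_size + ETH_FCS_LEN = 8192 + 4 = 8196   (unchanged)
	RTC:  0x7ffc0000 | rx_max_frame_size  = 0x7ffc2000        (unchanged)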
Signed-off-by: Niklas Söderlund Reviewed-by: Paul Barker --- drivers/net/ethernet/renesas/ravb.h | 1 - drivers/net/ethernet/renesas/ravb_main.c | 5 +++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h index 751bb29cd488..7fa60fccb6ea 100644 --- a/drivers/net/ethernet/renesas/ravb.h +++ b/drivers/net/ethernet/renesas/ravb.h @@ -1017,7 +1017,6 @@ enum CSR2_BIT { #define RX_BUF_SZ (2048 - ETH_FCS_LEN + sizeof(__sum16)) -#define GBETH_RX_BUFF_MAX 8192 #define GBETH_RX_DESC_DATA_SIZE 4080 struct ravb_tstamp_skb { diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c index 6e39d498936f..b309ca23f5b6 100644 --- a/drivers/net/ethernet/renesas/ravb_main.c +++ b/drivers/net/ethernet/renesas/ravb_main.c @@ -566,7 +566,7 @@ static void ravb_emac_init_gbeth(struct net_device *ndev) } /* Receive frame limit set register */ - ravb_write(ndev, GBETH_RX_BUFF_MAX + ETH_FCS_LEN, RFLR); + ravb_write(ndev, priv->info->rx_max_frame_size + ETH_FCS_LEN, RFLR); /* EMAC Mode: PAUSE prohibition; Duplex; TX; RX; CRC Pass Through */ ravb_write(ndev, ECMR_ZPF | ((priv->duplex > 0) ? ECMR_DM : 0) | @@ -627,6 +627,7 @@ static void ravb_emac_init(struct net_device *ndev) static int ravb_dmac_init_gbeth(struct net_device *ndev) { + struct ravb_private *priv = netdev_priv(ndev); int error; error = ravb_ring_init(ndev, RAVB_BE); @@ -640,7 +641,7 @@ static int ravb_dmac_init_gbeth(struct net_device *ndev) ravb_write(ndev, 0x60000000, RCR); /* Set Max Frame Length (RTC) */ - ravb_write(ndev, 0x7ffc0000 | GBETH_RX_BUFF_MAX, RTC); + ravb_write(ndev, 0x7ffc0000 | priv->info->rx_max_frame_size, RTC); /* Set FIFO size */ ravb_write(ndev, 0x00222200, TGC); From patchwork Tue Feb 27 01:40:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Niklas_S=C3=B6derlund?= X-Patchwork-Id: 13573107 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-wm1-f52.google.com (mail-wm1-f52.google.com [209.85.128.52]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7983A53A7 for ; Tue, 27 Feb 2024 01:42:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.52 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1708998124; cv=none; b=JQ1aPwhI5LLZoe3fOtuVyhUxg7febRsg2oN79jz7kmqpdXwdGP222zOEnulZmCiJllZtt6oNyPkoBCQDmRjH5BLhEhPOC3IZSh0G6r263DeVTpryhpJbI8p3Gbi7In4lr7zKnptxK8zq/zgKD+NyRNML4bx7fkW+nwA9nwi4als= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1708998124; c=relaxed/simple; bh=w/DjwGPj1HTTyzs29L1w+qCOuUvbUtis+M6MqzHRx2w=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=FToaX0R0yK5IQkiVToajNwiIyDMSiv69AP2wBeDTHmEO8/lxrOf6IP8+RrbxKd6hixOSk/tyqxZWEV501QqdErxsUwnsMswkQlKFfBQF/a81DYbKH6IFFZjA+jaOmFWfMiiUp7uIkm86Y7ZIBnQOMAMvASGVvRzV95/xty8Nbg0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=ragnatech.se; spf=pass smtp.mailfrom=ragnatech.se; dkim=pass (2048-bit key) header.d=ragnatech.se header.i=@ragnatech.se header.b=Qcd76C1i; arc=none smtp.client-ip=209.85.128.52 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=ragnatech.se Authentication-Results: smtp.subspace.kernel.org; spf=pass 
From: Niklas Söderlund
To: Sergey Shtylyov, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Biju Das, Claudiu Beznea, Yoshihiro Shimoda,
    netdev@vger.kernel.org
Cc: linux-renesas-soc@vger.kernel.org, Niklas Söderlund
Subject: [net-next 5/6] ravb: Move maximum Rx descriptor data usage to info struct
Date: Tue, 27 Feb 2024 02:40:13 +0100
Message-ID: <20240227014014.44855-6-niklas.soderlund+renesas@ragnatech.se>
In-Reply-To: <20240227014014.44855-1-niklas.soderlund+renesas@ragnatech.se>
References: <20240227014014.44855-1-niklas.soderlund+renesas@ragnatech.se>

To make it possible to merge the R-Car and RZ/G2L code paths, move the
maximum usable size of a single Rx descriptor data slice into the hardware
information struct instead of using two different defines in the two code
paths.
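The per-SoC values moved into the info struct are the same numbers that
previously sat behind the two defines; written out:

	R-Car / RZ/V2M: rx_max_desc_use = 2048 - ETH_FCS_LEN + sizeof(__sum16)
	                                = 2048 - 4 + 2 = 2046  (old RX_BUF_SZ)
	GbEth (RZ/G2L): rx_max_desc_use = 4080        (old GBETH_RX_DESC_DATA_SIZE)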
Signed-off-by: Niklas Söderlund --- drivers/net/ethernet/renesas/ravb.h | 5 +---- drivers/net/ethernet/renesas/ravb_main.c | 12 ++++++++---- 2 files changed, 9 insertions(+), 8 deletions(-) diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h index 7fa60fccb6ea..b12b379baf5a 100644 --- a/drivers/net/ethernet/renesas/ravb.h +++ b/drivers/net/ethernet/renesas/ravb.h @@ -1015,10 +1015,6 @@ enum CSR2_BIT { #define NUM_RX_QUEUE 2 #define NUM_TX_QUEUE 2 -#define RX_BUF_SZ (2048 - ETH_FCS_LEN + sizeof(__sum16)) - -#define GBETH_RX_DESC_DATA_SIZE 4080 - struct ravb_tstamp_skb { struct list_head list; struct sk_buff *skb; @@ -1058,6 +1054,7 @@ struct ravb_hw_info { int stats_len; u32 tccr_mask; u32 rx_max_frame_size; + u32 rx_max_desc_use; unsigned aligned_tx: 1; /* hardware features */ diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c index b309ca23f5b6..dee51a78cf36 100644 --- a/drivers/net/ethernet/renesas/ravb_main.c +++ b/drivers/net/ethernet/renesas/ravb_main.c @@ -349,7 +349,7 @@ static void ravb_rx_ring_format_gbeth(struct net_device *ndev, int q) for (i = 0; i < priv->num_rx_ring[q]; i++) { /* RX descriptor */ rx_desc = &priv->rx_ring[q].desc[i]; - rx_desc->ds_cc = cpu_to_le16(GBETH_RX_DESC_DATA_SIZE); + rx_desc->ds_cc = cpu_to_le16(priv->info->rx_max_desc_use); dma_addr = dma_map_single(ndev->dev.parent, priv->rx_skb[q][i]->data, priv->info->rx_max_frame_size, DMA_FROM_DEVICE); @@ -379,7 +379,7 @@ static void ravb_rx_ring_format_rcar(struct net_device *ndev, int q) for (i = 0; i < priv->num_rx_ring[q]; i++) { /* RX descriptor */ rx_desc = &priv->rx_ring[q].ex_desc[i]; - rx_desc->ds_cc = cpu_to_le16(RX_BUF_SZ); + rx_desc->ds_cc = cpu_to_le16(priv->info->rx_max_desc_use); dma_addr = dma_map_single(ndev->dev.parent, priv->rx_skb[q][i]->data, priv->info->rx_max_frame_size, DMA_FROM_DEVICE); @@ -919,7 +919,7 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q) for (; priv->cur_rx[q] - priv->dirty_rx[q] > 0; priv->dirty_rx[q]++) { entry = priv->dirty_rx[q] % priv->num_rx_ring[q]; desc = &priv->rx_ring[q].desc[entry]; - desc->ds_cc = cpu_to_le16(GBETH_RX_DESC_DATA_SIZE); + desc->ds_cc = cpu_to_le16(priv->info->rx_max_desc_use); if (!priv->rx_skb[q][entry]) { skb = ravb_alloc_skb(ndev, info); @@ -1034,7 +1034,7 @@ static bool ravb_rx_rcar(struct net_device *ndev, int *quota, int q) for (; priv->cur_rx[q] - priv->dirty_rx[q] > 0; priv->dirty_rx[q]++) { entry = priv->dirty_rx[q] % priv->num_rx_ring[q]; desc = &priv->rx_ring[q].ex_desc[entry]; - desc->ds_cc = cpu_to_le16(RX_BUF_SZ); + desc->ds_cc = cpu_to_le16(priv->info->rx_max_desc_use); if (!priv->rx_skb[q][entry]) { skb = ravb_alloc_skb(ndev, info); @@ -2692,6 +2692,7 @@ static const struct ravb_hw_info ravb_gen3_hw_info = { .stats_len = ARRAY_SIZE(ravb_gstrings_stats), .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3, .rx_max_frame_size = SZ_2K, + .rx_max_desc_use = 2048 - ETH_FCS_LEN + sizeof(__sum16), .internal_delay = 1, .tx_counters = 1, .multi_irqs = 1, @@ -2717,6 +2718,7 @@ static const struct ravb_hw_info ravb_gen2_hw_info = { .stats_len = ARRAY_SIZE(ravb_gstrings_stats), .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3, .rx_max_frame_size = SZ_2K, + .rx_max_desc_use = 2048 - ETH_FCS_LEN + sizeof(__sum16), .aligned_tx = 1, .gptp = 1, .nc_queues = 1, @@ -2739,6 +2741,7 @@ static const struct ravb_hw_info ravb_rzv2m_hw_info = { .stats_len = ARRAY_SIZE(ravb_gstrings_stats), .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | 
TCCR_TSRQ2 | TCCR_TSRQ3, .rx_max_frame_size = SZ_2K, + .rx_max_desc_use = 2048 - ETH_FCS_LEN + sizeof(__sum16), .multi_irqs = 1, .err_mgmt_irqs = 1, .gptp = 1, @@ -2763,6 +2766,7 @@ static const struct ravb_hw_info gbeth_hw_info = { .stats_len = ARRAY_SIZE(ravb_gstrings_stats_gbeth), .tccr_mask = TCCR_TSRQ0, .rx_max_frame_size = SZ_8K, + .rx_max_desc_use = 4080, .aligned_tx = 1, .tx_counters = 1, .carrier_counters = 1,

From patchwork Tue Feb 27 01:40:14 2024
X-Patchwork-Submitter: Niklas Söderlund
X-Patchwork-Id: 13573108
X-Patchwork-Delegate: kuba@kernel.org
From: Niklas Söderlund
To: Sergey Shtylyov, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Biju Das, Claudiu Beznea, Yoshihiro Shimoda,
    netdev@vger.kernel.org
Cc: linux-renesas-soc@vger.kernel.org, Niklas Söderlund
Subject: [net-next 6/6] ravb: Unify Rx ring maintenance code paths
Date: Tue, 27 Feb 2024 02:40:14 +0100
Message-ID: <20240227014014.44855-7-niklas.soderlund+renesas@ragnatech.se>
In-Reply-To: <20240227014014.44855-1-niklas.soderlund+renesas@ragnatech.se>
References: <20240227014014.44855-1-niklas.soderlund+renesas@ragnatech.se>

The R-Car and RZ/G2L Rx code paths were split into two separate
implementations when support for RZ/G2L was added, because R-Car uses the
extended descriptor format while RZ/G2L uses normal descriptors. This has
led to a duplication of the Rx logic, with the only difference being the
Rx descriptor type used.

The implementation however neglects to take into account that extended
descriptors are normal descriptors with additional metadata at the end to
carry hardware timestamp information. The hardware timestamp information
is only consumed in the R-Car Rx loop, and all the maintenance code around
the Rx ring can be shared between the two implementations if the
difference in descriptor length is carefully considered.

This change merges the two implementations of Rx ring maintenance by
adding a method to access both types of descriptors as normal descriptors.
This covers all the fields needed for Rx ring maintenance; the only
difference between using normal and extended descriptors is the size of
the memory region to allocate/free and the step size between descriptors
in the ring.
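The unification hinges on the small accessor added in the diff below,
which walks the ring in steps of the per-SoC descriptor size instead of
relying on the pointer's static type. The per-SoC values of rx_desc_size
are presumably the sizes of the two descriptor structs; the hunks that
initialise them fall outside this excerpt.

	static struct ravb_rx_desc *
	ravb_rx_get_desc(struct ravb_private *priv, unsigned int q,
			 unsigned int i)
	{
		/* rx_ring[q].raw points at the start of the ring regardless of
		 * descriptor type; stepping by rx_desc_size yields the i-th
		 * descriptor for both the normal and the extended layout.
		 */
		return priv->rx_ring[q].raw + priv->info->rx_desc_size * i;
	}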
Signed-off-by: Niklas Söderlund
---
 drivers/net/ethernet/renesas/ravb.h      |   5 +-
 drivers/net/ethernet/renesas/ravb_main.c | 132 ++++++-----------------
 2 files changed, 32 insertions(+), 105 deletions(-)

diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h
index b12b379baf5a..b48935ec7e28 100644
--- a/drivers/net/ethernet/renesas/ravb.h
+++ b/drivers/net/ethernet/renesas/ravb.h
@@ -1039,9 +1039,6 @@ struct ravb_ptp {
 };
 
 struct ravb_hw_info {
-	void (*rx_ring_free)(struct net_device *ndev, int q);
-	void (*rx_ring_format)(struct net_device *ndev, int q);
-	void *(*alloc_rx_desc)(struct net_device *ndev, int q);
 	bool (*receive)(struct net_device *ndev, int *quota, int q);
 	void (*set_rate)(struct net_device *ndev);
 	int (*set_feature)(struct net_device *ndev, netdev_features_t features);
@@ -1055,6 +1052,7 @@ struct ravb_hw_info {
 	u32 tccr_mask;
 	u32 rx_max_frame_size;
 	u32 rx_max_desc_use;
+	u32 rx_desc_size;
 	unsigned aligned_tx: 1;
 
 	/* hardware features */
@@ -1090,6 +1088,7 @@ struct ravb_private {
 	union {
 		struct ravb_rx_desc *desc;
 		struct ravb_ex_rx_desc *ex_desc;
+		void *raw;
 	} rx_ring[NUM_RX_QUEUE];
 	struct ravb_tx_desc *tx_ring[NUM_TX_QUEUE];
 	void *tx_align[NUM_TX_QUEUE];
diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
index dee51a78cf36..2702455b6cc6 100644
--- a/drivers/net/ethernet/renesas/ravb_main.c
+++ b/drivers/net/ethernet/renesas/ravb_main.c
@@ -200,6 +200,13 @@ static const struct mdiobb_ops bb_ops = {
 	.get_mdio_data = ravb_get_mdio_data,
 };
 
+static struct ravb_rx_desc *
+ravb_rx_get_desc(struct ravb_private *priv, unsigned int q,
+		 unsigned int i)
+{
+	return priv->rx_ring[q].raw + priv->info->rx_desc_size * i;
+}
+
 /* Free TX skb function for AVB-IP */
 static int ravb_tx_free(struct net_device *ndev, int q, bool free_txed_only)
 {
@@ -244,17 +251,17 @@ static int ravb_tx_free(struct net_device *ndev, int q, bool free_txed_only)
 	return free_num;
 }
 
-static void ravb_rx_ring_free_gbeth(struct net_device *ndev, int q)
+static void ravb_rx_ring_free(struct net_device *ndev, int q)
 {
 	struct ravb_private *priv = netdev_priv(ndev);
 	unsigned int ring_size;
 	unsigned int i;
 
-	if (!priv->rx_ring[q].desc)
+	if (!priv->rx_ring[q].raw)
 		return;
 
 	for (i = 0; i < priv->num_rx_ring[q]; i++) {
-		struct ravb_rx_desc *desc = &priv->rx_ring[q].desc[i];
+		struct ravb_rx_desc *desc = ravb_rx_get_desc(priv, q, i);
 
 		if (!dma_mapping_error(ndev->dev.parent,
 				       le32_to_cpu(desc->dptr)))
@@ -263,48 +270,21 @@ static void ravb_rx_ring_free_gbeth(struct net_device *ndev, int q)
 					 priv->info->rx_max_frame_size,
 					 DMA_FROM_DEVICE);
 	}
-	ring_size = sizeof(struct ravb_rx_desc) * (priv->num_rx_ring[q] + 1);
-	dma_free_coherent(ndev->dev.parent, ring_size, priv->rx_ring[q].desc,
+	ring_size = priv->info->rx_desc_size * (priv->num_rx_ring[q] + 1);
+	dma_free_coherent(ndev->dev.parent, ring_size, priv->rx_ring[q].raw,
 			  priv->rx_desc_dma[q]);
-	priv->rx_ring[q].desc = NULL;
-}
-
-static void ravb_rx_ring_free_rcar(struct net_device *ndev, int q)
-{
-	struct ravb_private *priv = netdev_priv(ndev);
-	unsigned int ring_size;
-	unsigned int i;
-
-	if (!priv->rx_ring[q].ex_desc)
-		return;
-
-	for (i = 0; i < priv->num_rx_ring[q]; i++) {
-		struct ravb_ex_rx_desc *desc = &priv->rx_ring[q].ex_desc[i];
-
-		if (!dma_mapping_error(ndev->dev.parent,
-				       le32_to_cpu(desc->dptr)))
-			dma_unmap_single(ndev->dev.parent,
-					 le32_to_cpu(desc->dptr),
-					 priv->info->rx_max_frame_size,
-					 DMA_FROM_DEVICE);
-	}
-	ring_size = sizeof(struct ravb_ex_rx_desc) *
-		    (priv->num_rx_ring[q] + 1);
-	dma_free_coherent(ndev->dev.parent, ring_size, priv->rx_ring[q].ex_desc,
-			  priv->rx_desc_dma[q]);
-	priv->rx_ring[q].ex_desc = NULL;
+	priv->rx_ring[q].raw = NULL;
 }
 
 /* Free skb's and DMA buffers for Ethernet AVB */
 static void ravb_ring_free(struct net_device *ndev, int q)
 {
 	struct ravb_private *priv = netdev_priv(ndev);
-	const struct ravb_hw_info *info = priv->info;
 	unsigned int num_tx_desc = priv->num_tx_desc;
 	unsigned int ring_size;
 	unsigned int i;
 
-	info->rx_ring_free(ndev, q);
+	ravb_rx_ring_free(ndev, q);
 
 	if (priv->tx_ring[q]) {
 		ravb_tx_free(ndev, q, false);
@@ -335,7 +315,7 @@ static void ravb_ring_free(struct net_device *ndev, int q)
 	priv->tx_skb[q] = NULL;
 }
 
-static void ravb_rx_ring_format_gbeth(struct net_device *ndev, int q)
+static void ravb_rx_ring_format(struct net_device *ndev, int q)
 {
 	struct ravb_private *priv = netdev_priv(ndev);
 	struct ravb_rx_desc *rx_desc;
@@ -344,11 +324,11 @@ static void ravb_rx_ring_format_gbeth(struct net_device *ndev, int q)
 	unsigned int i;
 
 	rx_ring_size = sizeof(*rx_desc) * priv->num_rx_ring[q];
-	memset(priv->rx_ring[q].desc, 0, rx_ring_size);
+	memset(priv->rx_ring[q].raw, 0, rx_ring_size);
 	/* Build RX ring buffer */
 	for (i = 0; i < priv->num_rx_ring[q]; i++) {
 		/* RX descriptor */
-		rx_desc = &priv->rx_ring[q].desc[i];
+		rx_desc = ravb_rx_get_desc(priv, q, i);
 		rx_desc->ds_cc = cpu_to_le16(priv->info->rx_max_desc_use);
 		dma_addr = dma_map_single(ndev->dev.parent, priv->rx_skb[q][i]->data,
 					  priv->info->rx_max_frame_size,
@@ -361,37 +341,7 @@ static void ravb_rx_ring_format_gbeth(struct net_device *ndev, int q)
 		rx_desc->dptr = cpu_to_le32(dma_addr);
 		rx_desc->die_dt = DT_FEMPTY;
 	}
-	rx_desc = &priv->rx_ring[q].desc[i];
-	rx_desc->dptr = cpu_to_le32((u32)priv->rx_desc_dma[q]);
-	rx_desc->die_dt = DT_LINKFIX; /* type */
-}
-
-static void ravb_rx_ring_format_rcar(struct net_device *ndev, int q)
-{
-	struct ravb_private *priv = netdev_priv(ndev);
-	struct ravb_ex_rx_desc *rx_desc;
-	unsigned int rx_ring_size = sizeof(*rx_desc) * priv->num_rx_ring[q];
-	dma_addr_t dma_addr;
-	unsigned int i;
-
-	memset(priv->rx_ring[q].ex_desc, 0, rx_ring_size);
-	/* Build RX ring buffer */
-	for (i = 0; i < priv->num_rx_ring[q]; i++) {
-		/* RX descriptor */
-		rx_desc = &priv->rx_ring[q].ex_desc[i];
-		rx_desc->ds_cc = cpu_to_le16(priv->info->rx_max_desc_use);
-		dma_addr = dma_map_single(ndev->dev.parent, priv->rx_skb[q][i]->data,
-					  priv->info->rx_max_frame_size,
-					  DMA_FROM_DEVICE);
-		/* We just set the data size to 0 for a failed mapping which
-		 * should prevent DMA from happening...
-		 */
-		if (dma_mapping_error(ndev->dev.parent, dma_addr))
-			rx_desc->ds_cc = cpu_to_le16(0);
-		rx_desc->dptr = cpu_to_le32(dma_addr);
-		rx_desc->die_dt = DT_FEMPTY;
-	}
-	rx_desc = &priv->rx_ring[q].ex_desc[i];
+	rx_desc = ravb_rx_get_desc(priv, q, i);
 	rx_desc->dptr = cpu_to_le32((u32)priv->rx_desc_dma[q]);
 	rx_desc->die_dt = DT_LINKFIX; /* type */
 }
@@ -400,7 +350,6 @@ static void ravb_rx_ring_format_rcar(struct net_device *ndev, int q)
 static void ravb_ring_format(struct net_device *ndev, int q)
 {
 	struct ravb_private *priv = netdev_priv(ndev);
-	const struct ravb_hw_info *info = priv->info;
 	unsigned int num_tx_desc = priv->num_tx_desc;
 	struct ravb_tx_desc *tx_desc;
 	struct ravb_desc *desc;
@@ -413,7 +362,7 @@ static void ravb_ring_format(struct net_device *ndev, int q)
 	priv->dirty_rx[q] = 0;
 	priv->dirty_tx[q] = 0;
 
-	info->rx_ring_format(ndev, q);
+	ravb_rx_ring_format(ndev, q);
 
 	memset(priv->tx_ring[q], 0, tx_ring_size);
 	/* Build TX ring buffer */
@@ -439,31 +388,18 @@ static void ravb_ring_format(struct net_device *ndev, int q)
 	desc->dptr = cpu_to_le32((u32)priv->tx_desc_dma[q]);
 }
 
-static void *ravb_alloc_rx_desc_gbeth(struct net_device *ndev, int q)
+static void *ravb_alloc_rx_desc(struct net_device *ndev, int q)
 {
 	struct ravb_private *priv = netdev_priv(ndev);
 	unsigned int ring_size;
 
-	ring_size = sizeof(struct ravb_rx_desc) * (priv->num_rx_ring[q] + 1);
+	ring_size = priv->info->rx_desc_size * (priv->num_rx_ring[q] + 1);
 
-	priv->rx_ring[q].desc = dma_alloc_coherent(ndev->dev.parent, ring_size,
-						   &priv->rx_desc_dma[q],
-						   GFP_KERNEL);
-	return priv->rx_ring[q].desc;
-}
+	priv->rx_ring[q].raw = dma_alloc_coherent(ndev->dev.parent, ring_size,
+						  &priv->rx_desc_dma[q],
+						  GFP_KERNEL);
 
-static void *ravb_alloc_rx_desc_rcar(struct net_device *ndev, int q)
-{
-	struct ravb_private *priv = netdev_priv(ndev);
-	unsigned int ring_size;
-
-	ring_size = sizeof(struct ravb_ex_rx_desc) * (priv->num_rx_ring[q] + 1);
-
-	priv->rx_ring[q].ex_desc = dma_alloc_coherent(ndev->dev.parent,
-						      ring_size,
-						      &priv->rx_desc_dma[q],
-						      GFP_KERNEL);
-	return priv->rx_ring[q].ex_desc;
+	return priv->rx_ring[q].raw;
 }
 
 /* Init skb and descriptor buffer for Ethernet AVB */
@@ -500,7 +436,7 @@ static int ravb_ring_init(struct net_device *ndev, int q)
 	}
 
 	/* Allocate all RX descriptors. */
-	if (!info->alloc_rx_desc(ndev, q))
+	if (!ravb_alloc_rx_desc(ndev, q))
 		goto error;
 
 	priv->dirty_rx[q] = 0;
@@ -2677,9 +2613,6 @@ static int ravb_mdio_release(struct ravb_private *priv)
 }
 
 static const struct ravb_hw_info ravb_gen3_hw_info = {
-	.rx_ring_free = ravb_rx_ring_free_rcar,
-	.rx_ring_format = ravb_rx_ring_format_rcar,
-	.alloc_rx_desc = ravb_alloc_rx_desc_rcar,
 	.receive = ravb_rx_rcar,
 	.set_rate = ravb_set_rate_rcar,
 	.set_feature = ravb_set_features_rcar,
@@ -2693,6 +2626,7 @@ static const struct ravb_hw_info ravb_gen3_hw_info = {
 	.tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3,
 	.rx_max_frame_size = SZ_2K,
 	.rx_max_desc_use = 2048 - ETH_FCS_LEN + sizeof(__sum16),
+	.rx_desc_size = sizeof(struct ravb_ex_rx_desc),
 	.internal_delay = 1,
 	.tx_counters = 1,
 	.multi_irqs = 1,
@@ -2703,9 +2637,6 @@ static const struct ravb_hw_info ravb_gen3_hw_info = {
 };
 
 static const struct ravb_hw_info ravb_gen2_hw_info = {
-	.rx_ring_free = ravb_rx_ring_free_rcar,
-	.rx_ring_format = ravb_rx_ring_format_rcar,
-	.alloc_rx_desc = ravb_alloc_rx_desc_rcar,
 	.receive = ravb_rx_rcar,
 	.set_rate = ravb_set_rate_rcar,
 	.set_feature = ravb_set_features_rcar,
@@ -2719,6 +2650,7 @@ static const struct ravb_hw_info ravb_gen2_hw_info = {
 	.tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3,
 	.rx_max_frame_size = SZ_2K,
 	.rx_max_desc_use = 2048 - ETH_FCS_LEN + sizeof(__sum16),
+	.rx_desc_size = sizeof(struct ravb_ex_rx_desc),
 	.aligned_tx = 1,
 	.gptp = 1,
 	.nc_queues = 1,
@@ -2726,9 +2658,6 @@ static const struct ravb_hw_info ravb_gen2_hw_info = {
 };
 
 static const struct ravb_hw_info ravb_rzv2m_hw_info = {
-	.rx_ring_free = ravb_rx_ring_free_rcar,
-	.rx_ring_format = ravb_rx_ring_format_rcar,
-	.alloc_rx_desc = ravb_alloc_rx_desc_rcar,
 	.receive = ravb_rx_rcar,
 	.set_rate = ravb_set_rate_rcar,
 	.set_feature = ravb_set_features_rcar,
@@ -2742,6 +2671,7 @@ static const struct ravb_hw_info ravb_rzv2m_hw_info = {
 	.tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3,
 	.rx_max_frame_size = SZ_2K,
 	.rx_max_desc_use = 2048 - ETH_FCS_LEN + sizeof(__sum16),
+	.rx_desc_size = sizeof(struct ravb_ex_rx_desc),
 	.multi_irqs = 1,
 	.err_mgmt_irqs = 1,
 	.gptp = 1,
@@ -2751,9 +2681,6 @@ static const struct ravb_hw_info ravb_rzv2m_hw_info = {
 };
 
 static const struct ravb_hw_info gbeth_hw_info = {
-	.rx_ring_free = ravb_rx_ring_free_gbeth,
-	.rx_ring_format = ravb_rx_ring_format_gbeth,
-	.alloc_rx_desc = ravb_alloc_rx_desc_gbeth,
 	.receive = ravb_rx_gbeth,
 	.set_rate = ravb_set_rate_gbeth,
 	.set_feature = ravb_set_features_gbeth,
@@ -2767,6 +2694,7 @@ static const struct ravb_hw_info gbeth_hw_info = {
 	.tccr_mask = TCCR_TSRQ0,
 	.rx_max_frame_size = SZ_8K,
 	.rx_max_desc_use = 4080,
+	.rx_desc_size = sizeof(struct ravb_rx_desc),
 	.aligned_tx = 1,
 	.tx_counters = 1,
 	.carrier_counters = 1,