From patchwork Mon Aug 7 06:56:09 2023
X-Patchwork-Submitter: Furong Xu <0x1207@gmail.com>
X-Patchwork-Id: 13343146
X-Patchwork-Delegate: kuba@kernel.org
From: Furong Xu <0x1207@gmail.com>
To: "David S. Miller", Alexandre Torgue, Jose Abreu, Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Maxime Coquelin, Joao Pinto
Miller" , Alexandre Torgue , Jose Abreu , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Maxime Coquelin , Joao Pinto Cc: netdev@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, xfr@outlook.com, rock.xu@nio.com, Furong Xu <0x1207@gmail.com> Subject: [PATCH] net: stmmac: xgmac: RX queue routing configuration Date: Mon, 7 Aug 2023 14:56:09 +0800 Message-Id: <20230807065609.1096076-1-0x1207@gmail.com> X-Mailer: git-send-email 2.34.1 Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-1.8 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_ENVFROM_END_DIGIT, FREEMAIL_FROM,RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: kuba@kernel.org Commit abe80fdc6ee6 ("net: stmmac: RX queue routing configuration") introduced RX queue routing to DWMAC4 core. This patch extend the support to XGMAC2 core. Signed-off-by: Furong Xu <0x1207@gmail.com> --- .../net/ethernet/stmicro/stmmac/dwxgmac2.h | 14 +++++++ .../ethernet/stmicro/stmmac/dwxgmac2_core.c | 37 ++++++++++++++++++- 2 files changed, 49 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h index 1913385df685..a2498da7406b 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h +++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h @@ -74,8 +74,22 @@ #define XGMAC_RXQEN(x) GENMASK((x) * 2 + 1, (x) * 2) #define XGMAC_RXQEN_SHIFT(x) ((x) * 2) #define XGMAC_RXQ_CTRL1 0x000000a4 +#define XGMAC_AVCPQ GENMASK(31, 28) +#define XGMAC_AVCPQ_SHIFT 28 +#define XGMAC_PTPQ GENMASK(27, 24) +#define XGMAC_PTPQ_SHIFT 24 +#define XGMAC_TACPQE BIT(23) +#define XGMAC_TACPQE_SHIFT 23 +#define XGMAC_DCBCPQ GENMASK(19, 16) +#define XGMAC_DCBCPQ_SHIFT 16 +#define XGMAC_MCBCQEN BIT(15) +#define XGMAC_MCBCQEN_SHIFT 15 +#define XGMAC_MCBCQ GENMASK(11, 8) +#define XGMAC_MCBCQ_SHIFT 8 #define XGMAC_RQ GENMASK(7, 4) #define XGMAC_RQ_SHIFT 4 +#define XGMAC_UPQ GENMASK(3, 0) +#define XGMAC_UPQ_SHIFT 0 #define XGMAC_RXQ_CTRL2 0x000000a8 #define XGMAC_RXQ_CTRL3 0x000000ac #define XGMAC_PSRQ(x) GENMASK((x) * 8 + 7, (x) * 8) diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c index a0c2ef8bb0ac..24918d95f612 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c @@ -127,6 +127,39 @@ static void dwxgmac2_tx_queue_prio(struct mac_device_info *hw, u32 prio, writel(value, ioaddr + reg); } +static void dwxgmac2_rx_queue_routing(struct mac_device_info *hw, + u8 packet, u32 queue) +{ + void __iomem *ioaddr = hw->pcsr; + u32 value; + + static const struct stmmac_rx_routing dwxgmac2_route_possibilities[] = { + { XGMAC_AVCPQ, XGMAC_AVCPQ_SHIFT }, + { XGMAC_PTPQ, XGMAC_PTPQ_SHIFT }, + { XGMAC_DCBCPQ, XGMAC_DCBCPQ_SHIFT }, + { XGMAC_UPQ, XGMAC_UPQ_SHIFT }, + { XGMAC_MCBCQ, XGMAC_MCBCQ_SHIFT }, + }; + + value = readl(ioaddr + XGMAC_RXQ_CTRL1); + + /* routing configuration */ + value &= ~dwxgmac2_route_possibilities[packet - 1].reg_mask; + value |= (queue << dwxgmac2_route_possibilities[packet-1].reg_shift) & + dwxgmac2_route_possibilities[packet - 1].reg_mask; + + /* some packets require extra ops */ + if (packet == PACKET_AVCPQ) { 
+                value &= ~XGMAC_TACPQE;
+                value |= 0x1 << XGMAC_TACPQE_SHIFT;
+        } else if (packet == PACKET_MCBCQ) {
+                value &= ~XGMAC_MCBCQEN;
+                value |= 0x1 << XGMAC_MCBCQEN_SHIFT;
+        }
+
+        writel(value, ioaddr + XGMAC_RXQ_CTRL1);
+}
+
 static void dwxgmac2_prog_mtl_rx_algorithms(struct mac_device_info *hw,
                                             u32 rx_alg)
 {
@@ -1463,7 +1496,7 @@ const struct stmmac_ops dwxgmac210_ops = {
 	.rx_queue_enable = dwxgmac2_rx_queue_enable,
 	.rx_queue_prio = dwxgmac2_rx_queue_prio,
 	.tx_queue_prio = dwxgmac2_tx_queue_prio,
-	.rx_queue_routing = NULL,
+	.rx_queue_routing = dwxgmac2_rx_queue_routing,
 	.prog_mtl_rx_algorithms = dwxgmac2_prog_mtl_rx_algorithms,
 	.prog_mtl_tx_algorithms = dwxgmac2_prog_mtl_tx_algorithms,
 	.set_mtl_tx_queue_weight = dwxgmac2_set_mtl_tx_queue_weight,
@@ -1524,7 +1557,7 @@ const struct stmmac_ops dwxlgmac2_ops = {
 	.rx_queue_enable = dwxlgmac2_rx_queue_enable,
 	.rx_queue_prio = dwxgmac2_rx_queue_prio,
 	.tx_queue_prio = dwxgmac2_tx_queue_prio,
-	.rx_queue_routing = NULL,
+	.rx_queue_routing = dwxgmac2_rx_queue_routing,
 	.prog_mtl_rx_algorithms = dwxgmac2_prog_mtl_rx_algorithms,
 	.prog_mtl_tx_algorithms = dwxgmac2_prog_mtl_tx_algorithms,
 	.set_mtl_tx_queue_weight = dwxgmac2_set_mtl_tx_queue_weight,
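
Note for reviewers (illustration only, not part of the patch): once the
rx_queue_routing() callback is populated for XGMAC2, routing is driven by the
same platform data that DWMAC4 users already rely on. Below is a minimal
sketch, assuming the existing struct plat_stmmacenet_data / struct
stmmac_rxq_cfg fields and the stmmac_rx_routing enum from
include/linux/stmmac.h; the helper name is hypothetical.

#include <linux/stmmac.h>

/* Hypothetical glue-driver helper, shown only to illustrate how the new
 * callback gets exercised: the stmmac core reads pkt_route for each RX
 * queue and invokes rx_queue_routing() while configuring the MAC.
 */
static void example_setup_rx_routing(struct plat_stmmacenet_data *plat)
{
        /* use two RX queues and steer PTP packets to RX queue 1 */
        plat->rx_queues_to_use = 2;
        plat->rx_queues_cfg[1].pkt_route = PACKET_PTPQ;
}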