From patchwork Fri Mar 21 00:29:05 2025
X-Patchwork-Submitter: Harshitha Ramamurthy
X-Patchwork-Id: 14024719
X-Patchwork-Delegate: kuba@kernel.org
Date: Fri, 21 Mar 2025 00:29:05 +0000
Message-ID: <20250321002910.1343422-2-hramamurthy@google.com>
In-Reply-To: <20250321002910.1343422-1-hramamurthy@google.com>
Subject: [PATCH net-next 1/6] gve: remove xdp_xsk_done and xdp_xsk_wakeup statistics
From: Harshitha Ramamurthy
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, hramamurthy@google.com, andrew+netdev@lunn.ch,
    davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, ast@kernel.org, daniel@iogearbox.net,
    hawk@kernel.org, john.fastabend@gmail.com, pkaligineedi@google.com,
    willemb@google.com, ziweixiao@google.com, joshwash@google.com,
    horms@kernel.org, shailend@google.com, bcf@google.com,
    linux-kernel@vger.kernel.org, bpf@vger.kernel.org

From: Joshua Washington

These statistics pollute the hotpath and do not have any real-world use
or meaning.

Reviewed-by: Willem de Bruijn
Signed-off-by: Joshua Washington
Signed-off-by: Harshitha Ramamurthy
---
 drivers/net/ethernet/google/gve/gve.h         | 2 --
 drivers/net/ethernet/google/gve/gve_ethtool.c | 8 +++-----
 drivers/net/ethernet/google/gve/gve_tx.c      | 8 ++------
 3 files changed, 5 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 483c43bab3a9..2064e592dfdd 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -613,8 +613,6 @@ struct gve_tx_ring {
 	dma_addr_t complq_bus_dqo; /* dma address of the dqo.compl_ring */
 	struct u64_stats_sync statss; /* sync stats for 32bit archs */
 	struct xsk_buff_pool *xsk_pool;
-	u32 xdp_xsk_wakeup;
-	u32 xdp_xsk_done;
 	u64 xdp_xsk_sent;
 	u64 xdp_xmit;
 	u64 xdp_xmit_errors;
diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
index a572f1e05934..bc59b5b4235a 100644
--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
+++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
@@ -63,8 +63,8 @@ static const char gve_gstrings_rx_stats[][ETH_GSTRING_LEN] = {
 static const char gve_gstrings_tx_stats[][ETH_GSTRING_LEN] = {
 	"tx_posted_desc[%u]", "tx_completed_desc[%u]", "tx_consumed_desc[%u]", "tx_bytes[%u]",
 	"tx_wake[%u]", "tx_stop[%u]", "tx_event_counter[%u]",
-	"tx_dma_mapping_error[%u]", "tx_xsk_wakeup[%u]",
-	"tx_xsk_done[%u]", "tx_xsk_sent[%u]", "tx_xdp_xmit[%u]", "tx_xdp_xmit_errors[%u]"
+	"tx_dma_mapping_error[%u]",
+	"tx_xsk_sent[%u]", "tx_xdp_xmit[%u]", "tx_xdp_xmit_errors[%u]"
 };
 
 static const char gve_gstrings_adminq_stats[][ETH_GSTRING_LEN] = {
@@ -417,9 +417,7 @@ gve_get_ethtool_stats(struct net_device *netdev,
 				data[i++] = value;
 			}
 		}
-		/* XDP xsk counters */
-		data[i++] = tx->xdp_xsk_wakeup;
-		data[i++] = tx->xdp_xsk_done;
+		/* XDP counters */
 		do {
 			start = u64_stats_fetch_begin(&priv->tx[ring].statss);
 			data[i] = tx->xdp_xsk_sent;
diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index 4350ebd9c2bd..c8c067e18059 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -959,14 +959,10 @@ static int gve_xsk_tx(struct gve_priv *priv, struct gve_tx_ring *tx,
 
 	spin_lock(&tx->xdp_lock);
 	while (sent < budget) {
-		if (!gve_can_tx(tx, GVE_TX_START_THRESH))
+		if (!gve_can_tx(tx, GVE_TX_START_THRESH) ||
+		    !xsk_tx_peek_desc(tx->xsk_pool, &desc))
 			goto out;
 
-		if (!xsk_tx_peek_desc(tx->xsk_pool, &desc)) {
-			tx->xdp_xsk_done = tx->xdp_xsk_wakeup;
-			goto out;
-		}
-
 		data = xsk_buff_raw_get_data(tx->xsk_pool, desc.addr);
 		nsegs = gve_tx_fill_xdp(priv, tx, data, desc.len, NULL, true);
 		tx->req += nsegs;

From patchwork Fri Mar 21 00:29:06 2025
X-Patchwork-Submitter: Harshitha Ramamurthy
X-Patchwork-Id: 14024720
X-Patchwork-Delegate: kuba@kernel.org
Date: Fri, 21 Mar 2025 00:29:06 +0000
Message-ID: <20250321002910.1343422-3-hramamurthy@google.com>
In-Reply-To: <20250321002910.1343422-1-hramamurthy@google.com>
Subject: [PATCH net-next 2/6] gve: introduce config-based allocation for XDP
From: Harshitha Ramamurthy
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, hramamurthy@google.com, andrew+netdev@lunn.ch,
    davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, ast@kernel.org, daniel@iogearbox.net,
    hawk@kernel.org, john.fastabend@gmail.com, pkaligineedi@google.com,
    willemb@google.com, ziweixiao@google.com, joshwash@google.com,
    horms@kernel.org, shailend@google.com, bcf@google.com,
    linux-kernel@vger.kernel.org, bpf@vger.kernel.org

From: Joshua Washington

An earlier patch series[1] introduced RX/TX ring allocation configuration
structs which contain the metadata used to allocate and configure new RX
and TX rings. This led to a much cleaner and safer allocation pattern in
which queue resources are not deallocated until the new queue resources
have been successfully allocated.

Migrate the XDP allocation path to the same pattern so that a single
allocation path exists, instead of relying on XDP-specific allocation
methods. Those extra methods duplicate many existing behaviors and are
prone to error when configuration changes unrelated to XDP occur.
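To make the pattern concrete, here is a minimal, self-contained sketch of
the "allocate new before freeing old" idea. The names (ring_cfg, rings_adjust
and so on) are illustrative only and are not the driver's actual symbols.

#include <stdlib.h>

struct ring_cfg {
	int num_rings;		/* includes any XDP rings */
	int ring_size;
};

struct rings {
	struct ring_cfg cfg;
	int *slots;		/* stand-in for real descriptor memory */
};

static struct rings *rings_alloc(const struct ring_cfg *cfg)
{
	struct rings *r = calloc(1, sizeof(*r));

	if (!r)
		return NULL;
	r->cfg = *cfg;
	r->slots = calloc((size_t)cfg->num_rings * cfg->ring_size,
			  sizeof(*r->slots));
	if (!r->slots) {
		free(r);
		return NULL;
	}
	return r;
}

static void rings_free(struct rings *r)
{
	if (r)
		free(r->slots);
	free(r);
}

/* Allocate the replacement resources first; the old ones are only torn
 * down after allocation has succeeded, so a failed reconfiguration
 * (e.g. toggling an XDP program) leaves the previous rings intact.
 */
int rings_adjust(struct rings **cur, const struct ring_cfg *new_cfg)
{
	struct rings *fresh = rings_alloc(new_cfg);

	if (!fresh)
		return -1;
	rings_free(*cur);
	*cur = fresh;
	return 0;
}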
Link: https://lore.kernel.org/netdev/20240122182632.1102721-1-shailend@google.com/ [1] Reviewed-by: Praveen Kaligineedi Reviewed-by: Willem de Bruijn Signed-off-by: Joshua Washington Signed-off-by: Harshitha Ramamurthy --- drivers/net/ethernet/google/gve/gve.h | 56 ++-- drivers/net/ethernet/google/gve/gve_ethtool.c | 19 +- drivers/net/ethernet/google/gve/gve_main.c | 261 ++++-------------- drivers/net/ethernet/google/gve/gve_rx.c | 6 +- drivers/net/ethernet/google/gve/gve_rx_dqo.c | 6 +- drivers/net/ethernet/google/gve/gve_tx.c | 33 +-- drivers/net/ethernet/google/gve/gve_tx_dqo.c | 31 +-- 7 files changed, 118 insertions(+), 294 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index 2064e592dfdd..e5cc3fada9c9 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -631,10 +631,17 @@ struct gve_notify_block { u32 irq; }; -/* Tracks allowed and current queue settings */ -struct gve_queue_config { +/* Tracks allowed and current rx queue settings */ +struct gve_rx_queue_config { u16 max_queues; - u16 num_queues; /* current */ + u16 num_queues; +}; + +/* Tracks allowed and current tx queue settings */ +struct gve_tx_queue_config { + u16 max_queues; + u16 num_queues; /* number of TX queues, excluding XDP queues */ + u16 num_xdp_queues; }; /* Tracks the available and used qpl IDs */ @@ -658,11 +665,11 @@ struct gve_ptype_lut { /* Parameters for allocating resources for tx queues */ struct gve_tx_alloc_rings_cfg { - struct gve_queue_config *qcfg; + struct gve_tx_queue_config *qcfg; + + u16 num_xdp_rings; u16 ring_size; - u16 start_idx; - u16 num_rings; bool raw_addressing; /* Allocated resources are returned here */ @@ -672,8 +679,8 @@ struct gve_tx_alloc_rings_cfg { /* Parameters for allocating resources for rx queues */ struct gve_rx_alloc_rings_cfg { /* tx config is also needed to determine QPL ids */ - struct gve_queue_config *qcfg; - struct gve_queue_config *qcfg_tx; + struct gve_rx_queue_config *qcfg_rx; + struct gve_tx_queue_config *qcfg_tx; u16 ring_size; u16 packet_buffer_size; @@ -764,9 +771,8 @@ struct gve_priv { u32 rx_copybreak; /* copy packets smaller than this */ u16 default_num_queues; /* default num queues to set up */ - u16 num_xdp_queues; - struct gve_queue_config tx_cfg; - struct gve_queue_config rx_cfg; + struct gve_tx_queue_config tx_cfg; + struct gve_rx_queue_config rx_cfg; u32 num_ntfy_blks; /* spilt between TX and RX so must be even */ struct gve_registers __iomem *reg_bar0; /* see gve_register.h */ @@ -1039,27 +1045,16 @@ static inline bool gve_is_qpl(struct gve_priv *priv) } /* Returns the number of tx queue page lists */ -static inline u32 gve_num_tx_qpls(const struct gve_queue_config *tx_cfg, - int num_xdp_queues, +static inline u32 gve_num_tx_qpls(const struct gve_tx_queue_config *tx_cfg, bool is_qpl) { if (!is_qpl) return 0; - return tx_cfg->num_queues + num_xdp_queues; -} - -/* Returns the number of XDP tx queue page lists - */ -static inline u32 gve_num_xdp_qpls(struct gve_priv *priv) -{ - if (priv->queue_format != GVE_GQI_QPL_FORMAT) - return 0; - - return priv->num_xdp_queues; + return tx_cfg->num_queues + tx_cfg->num_xdp_queues; } /* Returns the number of rx queue page lists */ -static inline u32 gve_num_rx_qpls(const struct gve_queue_config *rx_cfg, +static inline u32 gve_num_rx_qpls(const struct gve_rx_queue_config *rx_cfg, bool is_qpl) { if (!is_qpl) @@ -1077,7 +1072,8 @@ static inline u32 gve_rx_qpl_id(struct gve_priv *priv, int rx_qid) return 
priv->tx_cfg.max_queues + rx_qid; } -static inline u32 gve_get_rx_qpl_id(const struct gve_queue_config *tx_cfg, int rx_qid) +static inline u32 gve_get_rx_qpl_id(const struct gve_tx_queue_config *tx_cfg, + int rx_qid) { return tx_cfg->max_queues + rx_qid; } @@ -1087,7 +1083,7 @@ static inline u32 gve_tx_start_qpl_id(struct gve_priv *priv) return gve_tx_qpl_id(priv, 0); } -static inline u32 gve_rx_start_qpl_id(const struct gve_queue_config *tx_cfg) +static inline u32 gve_rx_start_qpl_id(const struct gve_tx_queue_config *tx_cfg) { return gve_get_rx_qpl_id(tx_cfg, 0); } @@ -1118,7 +1114,7 @@ static inline bool gve_is_gqi(struct gve_priv *priv) static inline u32 gve_num_tx_queues(struct gve_priv *priv) { - return priv->tx_cfg.num_queues + priv->num_xdp_queues; + return priv->tx_cfg.num_queues + priv->tx_cfg.num_xdp_queues; } static inline u32 gve_xdp_tx_queue_id(struct gve_priv *priv, u32 queue_id) @@ -1234,8 +1230,8 @@ int gve_adjust_config(struct gve_priv *priv, struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, struct gve_rx_alloc_rings_cfg *rx_alloc_cfg); int gve_adjust_queues(struct gve_priv *priv, - struct gve_queue_config new_rx_config, - struct gve_queue_config new_tx_config, + struct gve_rx_queue_config new_rx_config, + struct gve_tx_queue_config new_tx_config, bool reset_rss); /* flow steering rule */ int gve_get_flow_rule_entry(struct gve_priv *priv, struct ethtool_rxnfc *cmd); diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c index bc59b5b4235a..a862031ba5d1 100644 --- a/drivers/net/ethernet/google/gve/gve_ethtool.c +++ b/drivers/net/ethernet/google/gve/gve_ethtool.c @@ -475,8 +475,8 @@ static int gve_set_channels(struct net_device *netdev, struct ethtool_channels *cmd) { struct gve_priv *priv = netdev_priv(netdev); - struct gve_queue_config new_tx_cfg = priv->tx_cfg; - struct gve_queue_config new_rx_cfg = priv->rx_cfg; + struct gve_tx_queue_config new_tx_cfg = priv->tx_cfg; + struct gve_rx_queue_config new_rx_cfg = priv->rx_cfg; struct ethtool_channels old_settings; int new_tx = cmd->tx_count; int new_rx = cmd->rx_count; @@ -491,10 +491,17 @@ static int gve_set_channels(struct net_device *netdev, if (!new_rx || !new_tx) return -EINVAL; - if (priv->num_xdp_queues && - (new_tx != new_rx || (2 * new_tx > priv->tx_cfg.max_queues))) { - dev_err(&priv->pdev->dev, "XDP load failed: The number of configured RX queues should be equal to the number of configured TX queues and the number of configured RX/TX queues should be less than or equal to half the maximum number of RX/TX queues"); - return -EINVAL; + if (priv->xdp_prog) { + if (new_tx != new_rx || + (2 * new_tx > priv->tx_cfg.max_queues)) { + dev_err(&priv->pdev->dev, "The number of configured RX queues should be equal to the number of configured TX queues and the number of configured RX/TX queues should be less than or equal to half the maximum number of RX/TX queues when XDP program is installed"); + return -EINVAL; + } + + /* One XDP TX queue per RX queue. 
*/ + new_tx_cfg.num_xdp_queues = new_rx; + } else { + new_tx_cfg.num_xdp_queues = 0; } if (new_rx != priv->rx_cfg.num_queues && diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 6dcdcaf518f4..354f526a9238 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -795,30 +795,13 @@ static struct gve_queue_page_list *gve_rx_get_qpl(struct gve_priv *priv, int idx return rx->dqo.qpl; } -static int gve_register_xdp_qpls(struct gve_priv *priv) -{ - int start_id; - int err; - int i; - - start_id = gve_xdp_tx_start_queue_id(priv); - for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) { - err = gve_register_qpl(priv, gve_tx_get_qpl(priv, i)); - /* This failure will trigger a reset - no need to clean up */ - if (err) - return err; - } - return 0; -} - static int gve_register_qpls(struct gve_priv *priv) { int num_tx_qpls, num_rx_qpls; int err; int i; - num_tx_qpls = gve_num_tx_qpls(&priv->tx_cfg, gve_num_xdp_qpls(priv), - gve_is_qpl(priv)); + num_tx_qpls = gve_num_tx_qpls(&priv->tx_cfg, gve_is_qpl(priv)); num_rx_qpls = gve_num_rx_qpls(&priv->rx_cfg, gve_is_qpl(priv)); for (i = 0; i < num_tx_qpls; i++) { @@ -836,30 +819,13 @@ static int gve_register_qpls(struct gve_priv *priv) return 0; } -static int gve_unregister_xdp_qpls(struct gve_priv *priv) -{ - int start_id; - int err; - int i; - - start_id = gve_xdp_tx_start_queue_id(priv); - for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) { - err = gve_unregister_qpl(priv, gve_tx_get_qpl(priv, i)); - /* This failure will trigger a reset - no need to clean */ - if (err) - return err; - } - return 0; -} - static int gve_unregister_qpls(struct gve_priv *priv) { int num_tx_qpls, num_rx_qpls; int err; int i; - num_tx_qpls = gve_num_tx_qpls(&priv->tx_cfg, gve_num_xdp_qpls(priv), - gve_is_qpl(priv)); + num_tx_qpls = gve_num_tx_qpls(&priv->tx_cfg, gve_is_qpl(priv)); num_rx_qpls = gve_num_rx_qpls(&priv->rx_cfg, gve_is_qpl(priv)); for (i = 0; i < num_tx_qpls; i++) { @@ -878,27 +844,6 @@ static int gve_unregister_qpls(struct gve_priv *priv) return 0; } -static int gve_create_xdp_rings(struct gve_priv *priv) -{ - int err; - - err = gve_adminq_create_tx_queues(priv, - gve_xdp_tx_start_queue_id(priv), - priv->num_xdp_queues); - if (err) { - netif_err(priv, drv, priv->dev, "failed to create %d XDP tx queues\n", - priv->num_xdp_queues); - /* This failure will trigger a reset - no need to clean - * up - */ - return err; - } - netif_dbg(priv, drv, priv->dev, "created %d XDP tx queues\n", - priv->num_xdp_queues); - - return 0; -} - static int gve_create_rings(struct gve_priv *priv) { int num_tx_queues = gve_num_tx_queues(priv); @@ -954,7 +899,7 @@ static void init_xdp_sync_stats(struct gve_priv *priv) int i; /* Init stats */ - for (i = start_id; i < start_id + priv->num_xdp_queues; i++) { + for (i = start_id; i < start_id + priv->tx_cfg.num_xdp_queues; i++) { int ntfy_idx = gve_tx_idx_to_ntfy(priv, i); u64_stats_init(&priv->tx[i].statss); @@ -979,24 +924,21 @@ static void gve_init_sync_stats(struct gve_priv *priv) static void gve_tx_get_curr_alloc_cfg(struct gve_priv *priv, struct gve_tx_alloc_rings_cfg *cfg) { - int num_xdp_queues = priv->xdp_prog ? 
priv->rx_cfg.num_queues : 0; - cfg->qcfg = &priv->tx_cfg; cfg->raw_addressing = !gve_is_qpl(priv); cfg->ring_size = priv->tx_desc_cnt; - cfg->start_idx = 0; - cfg->num_rings = priv->tx_cfg.num_queues + num_xdp_queues; + cfg->num_xdp_rings = cfg->qcfg->num_xdp_queues; cfg->tx = priv->tx; } -static void gve_tx_stop_rings(struct gve_priv *priv, int start_id, int num_rings) +static void gve_tx_stop_rings(struct gve_priv *priv, int num_rings) { int i; if (!priv->tx) return; - for (i = start_id; i < start_id + num_rings; i++) { + for (i = 0; i < num_rings; i++) { if (gve_is_gqi(priv)) gve_tx_stop_ring_gqi(priv, i); else @@ -1004,12 +946,11 @@ static void gve_tx_stop_rings(struct gve_priv *priv, int start_id, int num_rings } } -static void gve_tx_start_rings(struct gve_priv *priv, int start_id, - int num_rings) +static void gve_tx_start_rings(struct gve_priv *priv, int num_rings) { int i; - for (i = start_id; i < start_id + num_rings; i++) { + for (i = 0; i < num_rings; i++) { if (gve_is_gqi(priv)) gve_tx_start_ring_gqi(priv, i); else @@ -1017,28 +958,6 @@ static void gve_tx_start_rings(struct gve_priv *priv, int start_id, } } -static int gve_alloc_xdp_rings(struct gve_priv *priv) -{ - struct gve_tx_alloc_rings_cfg cfg = {0}; - int err = 0; - - if (!priv->num_xdp_queues) - return 0; - - gve_tx_get_curr_alloc_cfg(priv, &cfg); - cfg.start_idx = gve_xdp_tx_start_queue_id(priv); - cfg.num_rings = priv->num_xdp_queues; - - err = gve_tx_alloc_rings_gqi(priv, &cfg); - if (err) - return err; - - gve_tx_start_rings(priv, cfg.start_idx, cfg.num_rings); - init_xdp_sync_stats(priv); - - return 0; -} - static int gve_queues_mem_alloc(struct gve_priv *priv, struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) @@ -1069,26 +988,6 @@ static int gve_queues_mem_alloc(struct gve_priv *priv, return err; } -static int gve_destroy_xdp_rings(struct gve_priv *priv) -{ - int start_id; - int err; - - start_id = gve_xdp_tx_start_queue_id(priv); - err = gve_adminq_destroy_tx_queues(priv, - start_id, - priv->num_xdp_queues); - if (err) { - netif_err(priv, drv, priv->dev, - "failed to destroy XDP queues\n"); - /* This failure will trigger a reset - no need to clean up */ - return err; - } - netif_dbg(priv, drv, priv->dev, "destroyed XDP queues\n"); - - return 0; -} - static int gve_destroy_rings(struct gve_priv *priv) { int num_tx_queues = gve_num_tx_queues(priv); @@ -1113,20 +1012,6 @@ static int gve_destroy_rings(struct gve_priv *priv) return 0; } -static void gve_free_xdp_rings(struct gve_priv *priv) -{ - struct gve_tx_alloc_rings_cfg cfg = {0}; - - gve_tx_get_curr_alloc_cfg(priv, &cfg); - cfg.start_idx = gve_xdp_tx_start_queue_id(priv); - cfg.num_rings = priv->num_xdp_queues; - - if (priv->tx) { - gve_tx_stop_rings(priv, cfg.start_idx, cfg.num_rings); - gve_tx_free_rings_gqi(priv, &cfg); - } -} - static void gve_queues_mem_free(struct gve_priv *priv, struct gve_tx_alloc_rings_cfg *tx_cfg, struct gve_rx_alloc_rings_cfg *rx_cfg) @@ -1253,7 +1138,7 @@ static int gve_reg_xdp_info(struct gve_priv *priv, struct net_device *dev) int i, j; u32 tx_qid; - if (!priv->num_xdp_queues) + if (!priv->tx_cfg.num_xdp_queues) return 0; for (i = 0; i < priv->rx_cfg.num_queues; i++) { @@ -1283,7 +1168,7 @@ static int gve_reg_xdp_info(struct gve_priv *priv, struct net_device *dev) } } - for (i = 0; i < priv->num_xdp_queues; i++) { + for (i = 0; i < priv->tx_cfg.num_xdp_queues; i++) { tx_qid = gve_xdp_tx_queue_id(priv, i); priv->tx[tx_qid].xsk_pool = xsk_get_pool_from_qid(dev, i); } @@ -1304,7 +1189,7 @@ 
static void gve_unreg_xdp_info(struct gve_priv *priv) { int i, tx_qid; - if (!priv->num_xdp_queues) + if (!priv->tx_cfg.num_xdp_queues || !priv->rx || !priv->tx) return; for (i = 0; i < priv->rx_cfg.num_queues; i++) { @@ -1317,7 +1202,7 @@ static void gve_unreg_xdp_info(struct gve_priv *priv) } } - for (i = 0; i < priv->num_xdp_queues; i++) { + for (i = 0; i < priv->tx_cfg.num_xdp_queues; i++) { tx_qid = gve_xdp_tx_queue_id(priv, i); priv->tx[tx_qid].xsk_pool = NULL; } @@ -1334,7 +1219,7 @@ static void gve_drain_page_cache(struct gve_priv *priv) static void gve_rx_get_curr_alloc_cfg(struct gve_priv *priv, struct gve_rx_alloc_rings_cfg *cfg) { - cfg->qcfg = &priv->rx_cfg; + cfg->qcfg_rx = &priv->rx_cfg; cfg->qcfg_tx = &priv->tx_cfg; cfg->raw_addressing = !gve_is_qpl(priv); cfg->enable_header_split = priv->header_split_enabled; @@ -1415,17 +1300,13 @@ static int gve_queues_start(struct gve_priv *priv, /* Record new configs into priv */ priv->tx_cfg = *tx_alloc_cfg->qcfg; - priv->rx_cfg = *rx_alloc_cfg->qcfg; + priv->tx_cfg.num_xdp_queues = tx_alloc_cfg->num_xdp_rings; + priv->rx_cfg = *rx_alloc_cfg->qcfg_rx; priv->tx_desc_cnt = tx_alloc_cfg->ring_size; priv->rx_desc_cnt = rx_alloc_cfg->ring_size; - if (priv->xdp_prog) - priv->num_xdp_queues = priv->rx_cfg.num_queues; - else - priv->num_xdp_queues = 0; - - gve_tx_start_rings(priv, 0, tx_alloc_cfg->num_rings); - gve_rx_start_rings(priv, rx_alloc_cfg->qcfg->num_queues); + gve_tx_start_rings(priv, gve_num_tx_queues(priv)); + gve_rx_start_rings(priv, rx_alloc_cfg->qcfg_rx->num_queues); gve_init_sync_stats(priv); err = netif_set_real_num_tx_queues(dev, priv->tx_cfg.num_queues); @@ -1477,7 +1358,7 @@ static int gve_queues_start(struct gve_priv *priv, /* return the original error */ return err; stop_and_free_rings: - gve_tx_stop_rings(priv, 0, gve_num_tx_queues(priv)); + gve_tx_stop_rings(priv, gve_num_tx_queues(priv)); gve_rx_stop_rings(priv, priv->rx_cfg.num_queues); gve_queues_mem_remove(priv); return err; @@ -1526,7 +1407,7 @@ static int gve_queues_stop(struct gve_priv *priv) gve_unreg_xdp_info(priv); - gve_tx_stop_rings(priv, 0, gve_num_tx_queues(priv)); + gve_tx_stop_rings(priv, gve_num_tx_queues(priv)); gve_rx_stop_rings(priv, priv->rx_cfg.num_queues); priv->interface_down_cnt++; @@ -1556,56 +1437,6 @@ static int gve_close(struct net_device *dev) return 0; } -static int gve_remove_xdp_queues(struct gve_priv *priv) -{ - int err; - - err = gve_destroy_xdp_rings(priv); - if (err) - return err; - - err = gve_unregister_xdp_qpls(priv); - if (err) - return err; - - gve_unreg_xdp_info(priv); - gve_free_xdp_rings(priv); - - priv->num_xdp_queues = 0; - return 0; -} - -static int gve_add_xdp_queues(struct gve_priv *priv) -{ - int err; - - priv->num_xdp_queues = priv->rx_cfg.num_queues; - - err = gve_alloc_xdp_rings(priv); - if (err) - goto err; - - err = gve_reg_xdp_info(priv, priv->dev); - if (err) - goto free_xdp_rings; - - err = gve_register_xdp_qpls(priv); - if (err) - goto free_xdp_rings; - - err = gve_create_xdp_rings(priv); - if (err) - goto free_xdp_rings; - - return 0; - -free_xdp_rings: - gve_free_xdp_rings(priv); -err: - priv->num_xdp_queues = 0; - return err; -} - static void gve_handle_link_status(struct gve_priv *priv, bool link_status) { if (!gve_get_napi_enabled(priv)) @@ -1623,6 +1454,18 @@ static void gve_handle_link_status(struct gve_priv *priv, bool link_status) } } +static int gve_configure_rings_xdp(struct gve_priv *priv, + u16 num_xdp_rings) +{ + struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; + struct gve_rx_alloc_rings_cfg 
rx_alloc_cfg = {0}; + + gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg); + tx_alloc_cfg.num_xdp_rings = num_xdp_rings; + + return gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg); +} + static int gve_set_xdp(struct gve_priv *priv, struct bpf_prog *prog, struct netlink_ext_ack *extack) { @@ -1635,29 +1478,26 @@ static int gve_set_xdp(struct gve_priv *priv, struct bpf_prog *prog, WRITE_ONCE(priv->xdp_prog, prog); if (old_prog) bpf_prog_put(old_prog); + + /* Update priv XDP queue configuration */ + priv->tx_cfg.num_xdp_queues = priv->xdp_prog ? + priv->rx_cfg.num_queues : 0; return 0; } - gve_turndown(priv); - if (!old_prog && prog) { - // Allocate XDP TX queues if an XDP program is - // being installed - err = gve_add_xdp_queues(priv); - if (err) - goto out; - } else if (old_prog && !prog) { - // Remove XDP TX queues if an XDP program is - // being uninstalled - err = gve_remove_xdp_queues(priv); - if (err) - goto out; - } + if (!old_prog && prog) + err = gve_configure_rings_xdp(priv, priv->rx_cfg.num_queues); + else if (old_prog && !prog) + err = gve_configure_rings_xdp(priv, 0); + + if (err) + goto out; + WRITE_ONCE(priv->xdp_prog, prog); if (old_prog) bpf_prog_put(old_prog); out: - gve_turnup(priv); status = ioread32be(&priv->reg_bar0->device_status); gve_handle_link_status(priv, GVE_DEVICE_STATUS_LINK_STATUS_MASK & status); return err; @@ -1908,13 +1748,12 @@ int gve_adjust_config(struct gve_priv *priv, } int gve_adjust_queues(struct gve_priv *priv, - struct gve_queue_config new_rx_config, - struct gve_queue_config new_tx_config, + struct gve_rx_queue_config new_rx_config, + struct gve_tx_queue_config new_tx_config, bool reset_rss) { struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; - int num_xdp_queues; int err; gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg); @@ -1922,13 +1761,8 @@ int gve_adjust_queues(struct gve_priv *priv, /* Relay the new config from ethtool */ tx_alloc_cfg.qcfg = &new_tx_config; rx_alloc_cfg.qcfg_tx = &new_tx_config; - rx_alloc_cfg.qcfg = &new_rx_config; + rx_alloc_cfg.qcfg_rx = &new_rx_config; rx_alloc_cfg.reset_rss = reset_rss; - tx_alloc_cfg.num_rings = new_tx_config.num_queues; - - /* Add dedicated XDP TX queues if enabled. */ - num_xdp_queues = priv->xdp_prog ? 
new_rx_config.num_queues : 0; - tx_alloc_cfg.num_rings += num_xdp_queues; if (netif_running(priv->dev)) { err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg); @@ -2056,7 +1890,7 @@ static void gve_turnup(struct gve_priv *priv) napi_schedule(&block->napi); } - if (priv->num_xdp_queues && gve_supports_xdp_xmit(priv)) + if (priv->tx_cfg.num_xdp_queues && gve_supports_xdp_xmit(priv)) xdp_features_set_redirect_target(priv->dev, false); gve_set_napi_enabled(priv); @@ -2412,6 +2246,7 @@ static int gve_init_priv(struct gve_priv *priv, bool skip_describe_device) priv->rx_cfg.num_queues = min_t(int, priv->default_num_queues, priv->rx_cfg.num_queues); } + priv->tx_cfg.num_xdp_queues = 0; dev_info(&priv->pdev->dev, "TX queues %d, RX queues %d\n", priv->tx_cfg.num_queues, priv->rx_cfg.num_queues); diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c index acb73d4d0de6..7b774cc510cc 100644 --- a/drivers/net/ethernet/google/gve/gve_rx.c +++ b/drivers/net/ethernet/google/gve/gve_rx.c @@ -385,12 +385,12 @@ int gve_rx_alloc_rings_gqi(struct gve_priv *priv, int err = 0; int i, j; - rx = kvcalloc(cfg->qcfg->max_queues, sizeof(struct gve_rx_ring), + rx = kvcalloc(cfg->qcfg_rx->max_queues, sizeof(struct gve_rx_ring), GFP_KERNEL); if (!rx) return -ENOMEM; - for (i = 0; i < cfg->qcfg->num_queues; i++) { + for (i = 0; i < cfg->qcfg_rx->num_queues; i++) { err = gve_rx_alloc_ring_gqi(priv, cfg, &rx[i], i); if (err) { netif_err(priv, drv, priv->dev, @@ -419,7 +419,7 @@ void gve_rx_free_rings_gqi(struct gve_priv *priv, if (!rx) return; - for (i = 0; i < cfg->qcfg->num_queues; i++) + for (i = 0; i < cfg->qcfg_rx->num_queues; i++) gve_rx_free_ring_gqi(priv, &rx[i], cfg); kvfree(rx); diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c index 856ade0c209f..dcdad6d09bf3 100644 --- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c +++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c @@ -299,12 +299,12 @@ int gve_rx_alloc_rings_dqo(struct gve_priv *priv, int err; int i; - rx = kvcalloc(cfg->qcfg->max_queues, sizeof(struct gve_rx_ring), + rx = kvcalloc(cfg->qcfg_rx->max_queues, sizeof(struct gve_rx_ring), GFP_KERNEL); if (!rx) return -ENOMEM; - for (i = 0; i < cfg->qcfg->num_queues; i++) { + for (i = 0; i < cfg->qcfg_rx->num_queues; i++) { err = gve_rx_alloc_ring_dqo(priv, cfg, &rx[i], i); if (err) { netif_err(priv, drv, priv->dev, @@ -333,7 +333,7 @@ void gve_rx_free_rings_dqo(struct gve_priv *priv, if (!rx) return; - for (i = 0; i < cfg->qcfg->num_queues; i++) + for (i = 0; i < cfg->qcfg_rx->num_queues; i++) gve_rx_free_ring_dqo(priv, &rx[i], cfg); kvfree(rx); diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c index c8c067e18059..1b40bf0c811a 100644 --- a/drivers/net/ethernet/google/gve/gve_tx.c +++ b/drivers/net/ethernet/google/gve/gve_tx.c @@ -334,27 +334,23 @@ int gve_tx_alloc_rings_gqi(struct gve_priv *priv, struct gve_tx_alloc_rings_cfg *cfg) { struct gve_tx_ring *tx = cfg->tx; + int total_queues; int err = 0; int i, j; - if (cfg->start_idx + cfg->num_rings > cfg->qcfg->max_queues) { + total_queues = cfg->qcfg->num_queues + cfg->num_xdp_rings; + if (total_queues > cfg->qcfg->max_queues) { netif_err(priv, drv, priv->dev, "Cannot alloc more than the max num of Tx rings\n"); return -EINVAL; } - if (cfg->start_idx == 0) { - tx = kvcalloc(cfg->qcfg->max_queues, sizeof(struct gve_tx_ring), - GFP_KERNEL); - if (!tx) - return -ENOMEM; - } else if (!tx) { - netif_err(priv, drv, 
priv->dev, - "Cannot alloc tx rings from a nonzero start idx without tx array\n"); - return -EINVAL; - } + tx = kvcalloc(cfg->qcfg->max_queues, sizeof(struct gve_tx_ring), + GFP_KERNEL); + if (!tx) + return -ENOMEM; - for (i = cfg->start_idx; i < cfg->start_idx + cfg->num_rings; i++) { + for (i = 0; i < total_queues; i++) { err = gve_tx_alloc_ring_gqi(priv, cfg, &tx[i], i); if (err) { netif_err(priv, drv, priv->dev, @@ -370,8 +366,7 @@ int gve_tx_alloc_rings_gqi(struct gve_priv *priv, cleanup: for (j = 0; j < i; j++) gve_tx_free_ring_gqi(priv, &tx[j], cfg); - if (cfg->start_idx == 0) - kvfree(tx); + kvfree(tx); return err; } @@ -384,13 +379,11 @@ void gve_tx_free_rings_gqi(struct gve_priv *priv, if (!tx) return; - for (i = cfg->start_idx; i < cfg->start_idx + cfg->num_rings; i++) + for (i = 0; i < cfg->qcfg->num_queues + cfg->qcfg->num_xdp_queues; i++) gve_tx_free_ring_gqi(priv, &tx[i], cfg); - if (cfg->start_idx == 0) { - kvfree(tx); - cfg->tx = NULL; - } + kvfree(tx); + cfg->tx = NULL; } /* gve_tx_avail - Calculates the number of slots available in the ring @@ -844,7 +837,7 @@ int gve_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, return -ENETDOWN; qid = gve_xdp_tx_queue_id(priv, - smp_processor_id() % priv->num_xdp_queues); + smp_processor_id() % priv->tx_cfg.num_xdp_queues); tx = &priv->tx[qid]; diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c index 394debc62268..2eba868d8037 100644 --- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c +++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c @@ -379,27 +379,23 @@ int gve_tx_alloc_rings_dqo(struct gve_priv *priv, struct gve_tx_alloc_rings_cfg *cfg) { struct gve_tx_ring *tx = cfg->tx; + int total_queues; int err = 0; int i, j; - if (cfg->start_idx + cfg->num_rings > cfg->qcfg->max_queues) { + total_queues = cfg->qcfg->num_queues + cfg->num_xdp_rings; + if (total_queues > cfg->qcfg->max_queues) { netif_err(priv, drv, priv->dev, "Cannot alloc more than the max num of Tx rings\n"); return -EINVAL; } - if (cfg->start_idx == 0) { - tx = kvcalloc(cfg->qcfg->max_queues, sizeof(struct gve_tx_ring), - GFP_KERNEL); - if (!tx) - return -ENOMEM; - } else if (!tx) { - netif_err(priv, drv, priv->dev, - "Cannot alloc tx rings from a nonzero start idx without tx array\n"); - return -EINVAL; - } + tx = kvcalloc(cfg->qcfg->max_queues, sizeof(struct gve_tx_ring), + GFP_KERNEL); + if (!tx) + return -ENOMEM; - for (i = cfg->start_idx; i < cfg->start_idx + cfg->num_rings; i++) { + for (i = 0; i < total_queues; i++) { err = gve_tx_alloc_ring_dqo(priv, cfg, &tx[i], i); if (err) { netif_err(priv, drv, priv->dev, @@ -415,8 +411,7 @@ int gve_tx_alloc_rings_dqo(struct gve_priv *priv, err: for (j = 0; j < i; j++) gve_tx_free_ring_dqo(priv, &tx[j], cfg); - if (cfg->start_idx == 0) - kvfree(tx); + kvfree(tx); return err; } @@ -429,13 +424,11 @@ void gve_tx_free_rings_dqo(struct gve_priv *priv, if (!tx) return; - for (i = cfg->start_idx; i < cfg->start_idx + cfg->num_rings; i++) + for (i = 0; i < cfg->qcfg->num_queues + cfg->qcfg->num_xdp_queues; i++) gve_tx_free_ring_dqo(priv, &tx[i], cfg); - if (cfg->start_idx == 0) { - kvfree(tx); - cfg->tx = NULL; - } + kvfree(tx); + cfg->tx = NULL; } /* Returns the number of slots available in the ring */ From patchwork Fri Mar 21 00:29:07 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Harshitha Ramamurthy X-Patchwork-Id: 14024721 X-Patchwork-Delegate: kuba@kernel.org Received: from 
Date: Fri, 21 Mar 2025 00:29:07 +0000
Message-ID: <20250321002910.1343422-4-hramamurthy@google.com>
In-Reply-To: <20250321002910.1343422-1-hramamurthy@google.com>
Subject: [PATCH net-next 3/6] gve: update GQ RX to use buf_size
From: Harshitha Ramamurthy
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, hramamurthy@google.com, andrew+netdev@lunn.ch,
    davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, ast@kernel.org, daniel@iogearbox.net,
    hawk@kernel.org, john.fastabend@gmail.com, pkaligineedi@google.com,
    willemb@google.com, ziweixiao@google.com, joshwash@google.com,
    horms@kernel.org, shailend@google.com, bcf@google.com,
    linux-kernel@vger.kernel.org, bpf@vger.kernel.org

From: Joshua Washington

Commit ebdfae0d377b ("gve: adopt page pool for DQ RDA mode") introduced a
buf_size field to the gve_rx_slot_page_info struct, which can be used in
the datapath to take the place of the packet_buffer_size field, as it
will already be hot in the cache due to its extensive use.

Using the buf_size field in the datapath frees up the packet_buffer_size
field in the GQ-specific RX cacheline to be generalized for GQ and DQ (in
the next patch), as there is currently no common packet buffer size field
between the two queue formats.
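As a rough illustration of the locality argument (illustrative types only,
not the driver's structs): the buffer size now travels with the per-buffer
state that the datapath dereferences for every packet, instead of being
fetched from a ring-level field.

struct buf_info {
	void *addr;		/* touched for every packet */
	unsigned int offset;	/* touched for every packet */
	unsigned int buf_size;	/* now lives beside the fields above */
};

/* The ring-level packet_buffer_size no longer needs to be read per packet. */
static inline unsigned int frag_len(const struct buf_info *b,
				    unsigned int pkt_len)
{
	return pkt_len < b->buf_size ? pkt_len : b->buf_size;
}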
Reviewed-by: Willem de Bruijn
Signed-off-by: Joshua Washington
Signed-off-by: Harshitha Ramamurthy
---
 drivers/net/ethernet/google/gve/gve_rx.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 7b774cc510cc..9d444e723fcd 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -141,12 +141,15 @@ void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
 	netif_dbg(priv, drv, priv->dev, "freed rx ring %d\n", idx);
 }
 
-static void gve_setup_rx_buffer(struct gve_rx_slot_page_info *page_info,
-				dma_addr_t addr, struct page *page, __be64 *slot_addr)
+static void gve_setup_rx_buffer(struct gve_rx_ring *rx,
+				struct gve_rx_slot_page_info *page_info,
+				dma_addr_t addr, struct page *page,
+				__be64 *slot_addr)
 {
 	page_info->page = page;
 	page_info->page_offset = 0;
 	page_info->page_address = page_address(page);
+	page_info->buf_size = rx->packet_buffer_size;
 	*slot_addr = cpu_to_be64(addr);
 	/* The page already has 1 ref */
 	page_ref_add(page, INT_MAX - 1);
@@ -171,7 +174,7 @@ static int gve_rx_alloc_buffer(struct gve_priv *priv, struct device *dev,
 		return err;
 	}
 
-	gve_setup_rx_buffer(page_info, dma, page, &data_slot->addr);
+	gve_setup_rx_buffer(rx, page_info, dma, page, &data_slot->addr);
 	return 0;
 }
 
@@ -199,7 +202,8 @@ static int gve_rx_prefill_pages(struct gve_rx_ring *rx,
 			struct page *page = rx->data.qpl->pages[i];
 			dma_addr_t addr = i * PAGE_SIZE;
 
-			gve_setup_rx_buffer(&rx->data.page_info[i], addr, page,
+			gve_setup_rx_buffer(rx, &rx->data.page_info[i], addr,
+					    page,
 					    &rx->data.data_ring[i].qpl_offset);
 			continue;
 		}
@@ -222,6 +226,7 @@ static int gve_rx_prefill_pages(struct gve_rx_ring *rx,
 		rx->qpl_copy_pool[j].page = page;
 		rx->qpl_copy_pool[j].page_offset = 0;
 		rx->qpl_copy_pool[j].page_address = page_address(page);
+		rx->qpl_copy_pool[j].buf_size = rx->packet_buffer_size;
 
 		/* The page already has 1 ref. */
 		page_ref_add(page, INT_MAX - 1);
@@ -283,6 +288,7 @@ int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
 
 	rx->gve = priv;
 	rx->q_num = idx;
+	rx->packet_buffer_size = GVE_DEFAULT_RX_BUFFER_SIZE;
 
 	rx->mask = slots - 1;
 	rx->data.raw_addressing = cfg->raw_addressing;
@@ -351,7 +357,6 @@ int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
 	rx->db_threshold = slots / 2;
 	gve_rx_init_ring_state_gqi(rx);
 
-	rx->packet_buffer_size = GVE_DEFAULT_RX_BUFFER_SIZE;
 	gve_rx_ctx_clear(&rx->ctx);
 
 	return 0;
@@ -590,7 +595,7 @@ static struct sk_buff *gve_rx_copy_to_pool(struct gve_rx_ring *rx,
 		copy_page_info->pad = page_info->pad;
 
 		skb = gve_rx_add_frags(napi, copy_page_info,
-				       rx->packet_buffer_size, len, ctx);
+				       copy_page_info->buf_size, len, ctx);
 		if (unlikely(!skb))
 			return NULL;
 
@@ -630,7 +635,8 @@ gve_rx_qpl(struct device *dev, struct net_device *netdev,
 	 * device.
 	 */
 	if (page_info->can_flip) {
-		skb = gve_rx_add_frags(napi, page_info, rx->packet_buffer_size, len, ctx);
+		skb = gve_rx_add_frags(napi, page_info, page_info->buf_size,
+				       len, ctx);
 		/* No point in recycling if we didn't get the skb */
 		if (skb) {
 			/* Make sure that the page isn't freed. */
@@ -680,7 +686,7 @@ static struct sk_buff *gve_rx_skb(struct gve_priv *priv, struct gve_rx_ring *rx,
 		skb = gve_rx_raw_addressing(&priv->pdev->dev, netdev,
 					    page_info, len, napi,
 					    data_slot,
-					    rx->packet_buffer_size, ctx);
+					    page_info->buf_size, ctx);
 	} else {
 		skb = gve_rx_qpl(&priv->pdev->dev, netdev, rx,
 				 page_info, len, napi, data_slot);
@@ -855,7 +861,7 @@ static void gve_rx(struct gve_rx_ring *rx, netdev_features_t feat,
 		void *old_data;
 		int xdp_act;
 
-		xdp_init_buff(&xdp, rx->packet_buffer_size, &rx->xdp_rxq);
+		xdp_init_buff(&xdp, page_info->buf_size, &rx->xdp_rxq);
 		xdp_prepare_buff(&xdp, page_info->page_address +
 				 page_info->page_offset, GVE_RX_PAD,
 				 len, false);

From patchwork Fri Mar 21 00:29:08 2025
X-Patchwork-Submitter: Harshitha Ramamurthy
X-Patchwork-Id: 14024722
X-Patchwork-Delegate: kuba@kernel.org
Date: Fri, 21 Mar 2025 00:29:08 +0000
Message-ID: <20250321002910.1343422-5-hramamurthy@google.com>
In-Reply-To: <20250321002910.1343422-1-hramamurthy@google.com>
Subject: [PATCH net-next 4/6] gve: merge packet buffer size fields
From: Harshitha Ramamurthy
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, hramamurthy@google.com, andrew+netdev@lunn.ch,
    davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, ast@kernel.org, daniel@iogearbox.net,
    hawk@kernel.org, john.fastabend@gmail.com, pkaligineedi@google.com,
    willemb@google.com, ziweixiao@google.com, joshwash@google.com,
    horms@kernel.org, shailend@google.com, bcf@google.com,
    linux-kernel@vger.kernel.org, bpf@vger.kernel.org

From: Joshua Washington

The data_buffer_size_dqo field in gve_priv and the packet_buffer_size
field in gve_rx_ring theoretically have the same meaning, but they are
defined in two different places and used in two separate contexts. There
is no good reason for this, so this change merges those fields into the
packet_buffer_size field in the RX ring.

This change also introduces a packet_buffer_size field to struct
gve_rx_queue_config to account for cases where queues are not allocated,
such as when the interface is down.
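Roughly, the intended relationship after the merge (the names below are
illustrative, not the actual gve structures): the queue config carries the
one authoritative buffer size, and each ring takes a copy when it is
allocated.

struct rx_queue_config {
	unsigned short max_queues;
	unsigned short num_queues;
	unsigned short packet_buffer_size;	/* valid even with no rings allocated */
};

struct rx_ring {
	unsigned short packet_buffer_size;	/* copied from the config at alloc time */
};

static void rx_ring_init(struct rx_ring *rx, const struct rx_queue_config *cfg)
{
	rx->packet_buffer_size = cfg->packet_buffer_size;
}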
Reviewed-by: Willem de Bruijn Signed-off-by: Joshua Washington Signed-off-by: Harshitha Ramamurthy --- drivers/net/ethernet/google/gve/gve.h | 4 ++-- drivers/net/ethernet/google/gve/gve_adminq.c | 4 +--- drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c | 7 +++---- drivers/net/ethernet/google/gve/gve_ethtool.c | 3 +-- drivers/net/ethernet/google/gve/gve_main.c | 8 +++----- drivers/net/ethernet/google/gve/gve_rx.c | 2 +- drivers/net/ethernet/google/gve/gve_rx_dqo.c | 1 + 7 files changed, 12 insertions(+), 17 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index e5cc3fada9c9..9895541eddae 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -227,6 +227,7 @@ struct gve_rx_cnts { /* Contains datapath state used to represent an RX queue. */ struct gve_rx_ring { struct gve_priv *gve; + u16 packet_buffer_size; union { /* GQI fields */ struct { @@ -235,7 +236,6 @@ struct gve_rx_ring { /* threshold for posting new buffs and descs */ u32 db_threshold; - u16 packet_buffer_size; u32 qpl_copy_pool_mask; u32 qpl_copy_pool_head; @@ -635,6 +635,7 @@ struct gve_notify_block { struct gve_rx_queue_config { u16 max_queues; u16 num_queues; + u16 packet_buffer_size; }; /* Tracks allowed and current tx queue settings */ @@ -842,7 +843,6 @@ struct gve_priv { struct gve_ptype_lut *ptype_lut_dqo; /* Must be a power of two. */ - u16 data_buffer_size_dqo; u16 max_rx_buffer_size; /* device limit */ enum gve_queue_format queue_format; diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c index be7a423e5ab9..3e8fc33cc11f 100644 --- a/drivers/net/ethernet/google/gve/gve_adminq.c +++ b/drivers/net/ethernet/google/gve/gve_adminq.c @@ -731,6 +731,7 @@ static void gve_adminq_get_create_rx_queue_cmd(struct gve_priv *priv, .ntfy_id = cpu_to_be32(rx->ntfy_id), .queue_resources_addr = cpu_to_be64(rx->q_resources_bus), .rx_ring_size = cpu_to_be16(priv->rx_desc_cnt), + .packet_buffer_size = cpu_to_be16(rx->packet_buffer_size), }; if (gve_is_gqi(priv)) { @@ -743,7 +744,6 @@ static void gve_adminq_get_create_rx_queue_cmd(struct gve_priv *priv, cpu_to_be64(rx->data.data_bus); cmd->create_rx_queue.index = cpu_to_be32(queue_index); cmd->create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id); - cmd->create_rx_queue.packet_buffer_size = cpu_to_be16(rx->packet_buffer_size); } else { u32 qpl_id = 0; @@ -756,8 +756,6 @@ static void gve_adminq_get_create_rx_queue_cmd(struct gve_priv *priv, cpu_to_be64(rx->dqo.complq.bus); cmd->create_rx_queue.rx_data_ring_addr = cpu_to_be64(rx->dqo.bufq.bus); - cmd->create_rx_queue.packet_buffer_size = - cpu_to_be16(priv->data_buffer_size_dqo); cmd->create_rx_queue.rx_buff_ring_size = cpu_to_be16(priv->rx_desc_cnt); cmd->create_rx_queue.enable_rsc = diff --git a/drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c b/drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c index af84cb88f828..f9824664d04c 100644 --- a/drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c +++ b/drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c @@ -139,7 +139,7 @@ int gve_alloc_qpl_page_dqo(struct gve_rx_ring *rx, buf_state->page_info.page_offset = 0; buf_state->page_info.page_address = page_address(buf_state->page_info.page); - buf_state->page_info.buf_size = priv->data_buffer_size_dqo; + buf_state->page_info.buf_size = rx->packet_buffer_size; buf_state->last_single_ref_offset = 0; /* The page already has 1 ref. 
*/ @@ -162,7 +162,7 @@ void gve_free_qpl_page_dqo(struct gve_rx_buf_state_dqo *buf_state) void gve_try_recycle_buf(struct gve_priv *priv, struct gve_rx_ring *rx, struct gve_rx_buf_state_dqo *buf_state) { - const u16 data_buffer_size = priv->data_buffer_size_dqo; + const u16 data_buffer_size = rx->packet_buffer_size; int pagecount; /* Can't reuse if we only fit one buffer per page */ @@ -217,10 +217,9 @@ void gve_free_to_page_pool(struct gve_rx_ring *rx, static int gve_alloc_from_page_pool(struct gve_rx_ring *rx, struct gve_rx_buf_state_dqo *buf_state) { - struct gve_priv *priv = rx->gve; netmem_ref netmem; - buf_state->page_info.buf_size = priv->data_buffer_size_dqo; + buf_state->page_info.buf_size = rx->packet_buffer_size; netmem = page_pool_alloc_netmem(rx->dqo.page_pool, &buf_state->page_info.page_offset, &buf_state->page_info.buf_size, diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c index a862031ba5d1..31a21ccf4863 100644 --- a/drivers/net/ethernet/google/gve/gve_ethtool.c +++ b/drivers/net/ethernet/google/gve/gve_ethtool.c @@ -647,8 +647,7 @@ static int gve_set_tunable(struct net_device *netdev, switch (etuna->id) { case ETHTOOL_RX_COPYBREAK: { - u32 max_copybreak = gve_is_gqi(priv) ? - GVE_DEFAULT_RX_BUFFER_SIZE : priv->data_buffer_size_dqo; + u32 max_copybreak = priv->rx_cfg.packet_buffer_size; len = *(u32 *)value; if (len > max_copybreak) diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 354f526a9238..20aabbe0e518 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -1224,9 +1224,7 @@ static void gve_rx_get_curr_alloc_cfg(struct gve_priv *priv, cfg->raw_addressing = !gve_is_qpl(priv); cfg->enable_header_split = priv->header_split_enabled; cfg->ring_size = priv->rx_desc_cnt; - cfg->packet_buffer_size = gve_is_gqi(priv) ? 
- GVE_DEFAULT_RX_BUFFER_SIZE : - priv->data_buffer_size_dqo; + cfg->packet_buffer_size = priv->rx_cfg.packet_buffer_size; cfg->rx = priv->rx; } @@ -1331,7 +1329,7 @@ static int gve_queues_start(struct gve_priv *priv, goto reset; priv->header_split_enabled = rx_alloc_cfg->enable_header_split; - priv->data_buffer_size_dqo = rx_alloc_cfg->packet_buffer_size; + priv->rx_cfg.packet_buffer_size = rx_alloc_cfg->packet_buffer_size; err = gve_create_rings(priv); if (err) @@ -2627,7 +2625,7 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent) priv->service_task_flags = 0x0; priv->state_flags = 0x0; priv->ethtool_flags = 0x0; - priv->data_buffer_size_dqo = GVE_DEFAULT_RX_BUFFER_SIZE; + priv->rx_cfg.packet_buffer_size = GVE_DEFAULT_RX_BUFFER_SIZE; priv->max_rx_buffer_size = GVE_DEFAULT_RX_BUFFER_SIZE; gve_set_probe_in_progress(priv); diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c index 9d444e723fcd..90e875c1832f 100644 --- a/drivers/net/ethernet/google/gve/gve_rx.c +++ b/drivers/net/ethernet/google/gve/gve_rx.c @@ -288,7 +288,7 @@ int gve_rx_alloc_ring_gqi(struct gve_priv *priv, rx->gve = priv; rx->q_num = idx; - rx->packet_buffer_size = GVE_DEFAULT_RX_BUFFER_SIZE; + rx->packet_buffer_size = cfg->packet_buffer_size; rx->mask = slots - 1; rx->data.raw_addressing = cfg->raw_addressing; diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c index dcdad6d09bf3..5fbcf93a54e0 100644 --- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c +++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c @@ -223,6 +223,7 @@ int gve_rx_alloc_ring_dqo(struct gve_priv *priv, memset(rx, 0, sizeof(*rx)); rx->gve = priv; rx->q_num = idx; + rx->packet_buffer_size = cfg->packet_buffer_size; rx->dqo.num_buf_states = cfg->raw_addressing ? 
buffer_queue_slots : gve_get_rx_pages_per_qpl_dqo(cfg->ring_size);

From patchwork Fri Mar 21 00:29:09 2025
X-Patchwork-Submitter: Harshitha Ramamurthy
X-Patchwork-Id: 14024723
X-Patchwork-Delegate: kuba@kernel.org
Date: Fri, 21 Mar 2025 00:29:09 +0000
In-Reply-To: <20250321002910.1343422-1-hramamurthy@google.com>
References: <20250321002910.1343422-1-hramamurthy@google.com>
Message-ID: <20250321002910.1343422-6-hramamurthy@google.com>
Subject: [PATCH net-next 5/6] gve: update XDP allocation path to support RX buffer posting
From: Harshitha Ramamurthy
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, hramamurthy@google.com, andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, ast@kernel.org, daniel@iogearbox.net, hawk@kernel.org, john.fastabend@gmail.com, pkaligineedi@google.com, willemb@google.com, ziweixiao@google.com, joshwash@google.com, horms@kernel.org, shailend@google.com, bcf@google.com, linux-kernel@vger.kernel.org, bpf@vger.kernel.org

From: Joshua Washington

In order to support installing an XDP program on DQ, RX buffers need to be posted using 4K buffers, which are larger than the default packet buffer size of 2K. The extra space is needed to accommodate the headroom and tailroom that accompany the data portion of an XDP buffer. Continuing to use 2K buffers would mean that the packet buffer size for the NIC would have to be restricted to 2048 - 320 - 256 = 1472B. However, this is problematic for two reasons: first, 1472 is not a packet buffer size accepted by GVE; second, at least 1474B of buffer space is needed to accommodate an MTU of 1460, which is the default on GCP. As such, we allocate 4K buffers and post a 2K section of those buffers (offset by the XDP headroom) to the NIC for DMA, to avoid a potential extra copy. Because the GQ-QPL datapath requires copies regardless, this change was not needed to support XDP in that case.

To capture this subtlety, a new field, packet_buffer_truesize, has been added to the RX ring struct to represent the size of the allocated buffer, while packet_buffer_size has been left to represent the portion of the buffer posted to the NIC.
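For illustration only (not driver code), a small stand-alone sketch of the sizing arithmetic above that motivates the 4K truesize / 2K posted-section split; the constant names are illustrative and mirror the values quoted in the text:

/* Sketch only: reproduces the buffer-sizing arithmetic from the commit
 * message. Constant names here are illustrative, not the driver's. */
#include <stdio.h>

#define DEFAULT_BUF_SIZE   2048  /* default GVE packet buffer */
#define XDP_BUF_TRUESIZE   4096  /* per-buffer allocation with XDP on DQ */
#define XDP_HEADROOM        256  /* XDP_PACKET_HEADROOM */
#define SKB_TAILROOM        320  /* approx. struct skb_shared_info tail */
#define ETH_HDR_LEN          14  /* Ethernet header */
#define DEFAULT_MTU        1460  /* GCP default */

int main(void)
{
	/* Keeping 2K buffers would leave only this much room for the frame: */
	int usable = DEFAULT_BUF_SIZE - XDP_HEADROOM - SKB_TAILROOM;

	printf("2K buffer leaves %d B; %d B needed for MTU %d\n",
	       usable, DEFAULT_MTU + ETH_HDR_LEN, DEFAULT_MTU);

	/* Instead, a 4K buffer is allocated and a 2K section, offset by the
	 * XDP headroom, is what gets posted to the NIC for DMA: */
	printf("post %d B at offset %d within a %d B buffer\n",
	       DEFAULT_BUF_SIZE, XDP_HEADROOM, XDP_BUF_TRUESIZE);
	return 0;
}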
Reviewed-by: Willem de Bruijn Signed-off-by: Praveen Kaligineedi Signed-off-by: Joshua Washington Signed-off-by: Harshitha Ramamurthy --- drivers/net/ethernet/google/gve/gve.h | 12 ++++++++-- .../ethernet/google/gve/gve_buffer_mgmt_dqo.c | 17 +++++++++----- drivers/net/ethernet/google/gve/gve_main.c | 19 +++++++++++++--- drivers/net/ethernet/google/gve/gve_rx_dqo.c | 22 ++++++++++++++----- 4 files changed, 53 insertions(+), 17 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index 9895541eddae..2fab38c8ee78 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -59,6 +59,8 @@ #define GVE_MAX_RX_BUFFER_SIZE 4096 +#define GVE_XDP_RX_BUFFER_SIZE_DQO 4096 + #define GVE_DEFAULT_RX_BUFFER_OFFSET 2048 #define GVE_PAGE_POOL_SIZE_MULTIPLIER 4 @@ -227,7 +229,11 @@ struct gve_rx_cnts { /* Contains datapath state used to represent an RX queue. */ struct gve_rx_ring { struct gve_priv *gve; - u16 packet_buffer_size; + + u16 packet_buffer_size; /* Size of buffer posted to NIC */ + u16 packet_buffer_truesize; /* Total size of RX buffer */ + u16 rx_headroom; + union { /* GQI fields */ struct { @@ -688,6 +694,7 @@ struct gve_rx_alloc_rings_cfg { bool raw_addressing; bool enable_header_split; bool reset_rss; + bool xdp; /* Allocated resources are returned here */ struct gve_rx_ring *rx; @@ -1218,7 +1225,8 @@ void gve_free_buffer(struct gve_rx_ring *rx, struct gve_rx_buf_state_dqo *buf_state); int gve_alloc_buffer(struct gve_rx_ring *rx, struct gve_rx_desc_dqo *desc); struct page_pool *gve_rx_create_page_pool(struct gve_priv *priv, - struct gve_rx_ring *rx); + struct gve_rx_ring *rx, + bool xdp); /* Reset */ void gve_schedule_reset(struct gve_priv *priv); diff --git a/drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c b/drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c index f9824664d04c..a71883e1d920 100644 --- a/drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c +++ b/drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c @@ -139,7 +139,8 @@ int gve_alloc_qpl_page_dqo(struct gve_rx_ring *rx, buf_state->page_info.page_offset = 0; buf_state->page_info.page_address = page_address(buf_state->page_info.page); - buf_state->page_info.buf_size = rx->packet_buffer_size; + buf_state->page_info.buf_size = rx->packet_buffer_truesize; + buf_state->page_info.pad = rx->rx_headroom; buf_state->last_single_ref_offset = 0; /* The page already has 1 ref. 
*/ @@ -162,7 +163,7 @@ void gve_free_qpl_page_dqo(struct gve_rx_buf_state_dqo *buf_state) void gve_try_recycle_buf(struct gve_priv *priv, struct gve_rx_ring *rx, struct gve_rx_buf_state_dqo *buf_state) { - const u16 data_buffer_size = rx->packet_buffer_size; + const u16 data_buffer_size = rx->packet_buffer_truesize; int pagecount; /* Can't reuse if we only fit one buffer per page */ @@ -219,7 +220,7 @@ static int gve_alloc_from_page_pool(struct gve_rx_ring *rx, { netmem_ref netmem; - buf_state->page_info.buf_size = rx->packet_buffer_size; + buf_state->page_info.buf_size = rx->packet_buffer_truesize; netmem = page_pool_alloc_netmem(rx->dqo.page_pool, &buf_state->page_info.page_offset, &buf_state->page_info.buf_size, @@ -231,12 +232,14 @@ static int gve_alloc_from_page_pool(struct gve_rx_ring *rx, buf_state->page_info.netmem = netmem; buf_state->page_info.page_address = netmem_address(netmem); buf_state->addr = page_pool_get_dma_addr_netmem(netmem); + buf_state->page_info.pad = rx->dqo.page_pool->p.offset; return 0; } struct page_pool *gve_rx_create_page_pool(struct gve_priv *priv, - struct gve_rx_ring *rx) + struct gve_rx_ring *rx, + bool xdp) { u32 ntfy_id = gve_rx_idx_to_ntfy(priv, rx->q_num); struct page_pool_params pp = { @@ -247,7 +250,8 @@ struct page_pool *gve_rx_create_page_pool(struct gve_priv *priv, .netdev = priv->dev, .napi = &priv->ntfy_blocks[ntfy_id].napi, .max_len = PAGE_SIZE, - .dma_dir = DMA_FROM_DEVICE, + .dma_dir = xdp ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE, + .offset = xdp ? XDP_PACKET_HEADROOM : 0, }; return page_pool_create(&pp); @@ -301,7 +305,8 @@ int gve_alloc_buffer(struct gve_rx_ring *rx, struct gve_rx_desc_dqo *desc) } desc->buf_id = cpu_to_le16(buf_state - rx->dqo.buf_states); desc->buf_addr = cpu_to_le64(buf_state->addr + - buf_state->page_info.page_offset); + buf_state->page_info.page_offset + + buf_state->page_info.pad); return 0; diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 20aabbe0e518..cb2f9978f45e 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -1149,8 +1149,14 @@ static int gve_reg_xdp_info(struct gve_priv *priv, struct net_device *dev) napi->napi_id); if (err) goto err; - err = xdp_rxq_info_reg_mem_model(&rx->xdp_rxq, - MEM_TYPE_PAGE_SHARED, NULL); + if (gve_is_qpl(priv)) + err = xdp_rxq_info_reg_mem_model(&rx->xdp_rxq, + MEM_TYPE_PAGE_SHARED, + NULL); + else + err = xdp_rxq_info_reg_mem_model(&rx->xdp_rxq, + MEM_TYPE_PAGE_POOL, + rx->dqo.page_pool); if (err) goto err; rx->xsk_pool = xsk_get_pool_from_qid(dev, i); @@ -1226,6 +1232,7 @@ static void gve_rx_get_curr_alloc_cfg(struct gve_priv *priv, cfg->ring_size = priv->rx_desc_cnt; cfg->packet_buffer_size = priv->rx_cfg.packet_buffer_size; cfg->rx = priv->rx; + cfg->xdp = !!cfg->qcfg_tx->num_xdp_queues; } void gve_get_curr_alloc_cfgs(struct gve_priv *priv, @@ -1461,6 +1468,7 @@ static int gve_configure_rings_xdp(struct gve_priv *priv, gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg); tx_alloc_cfg.num_xdp_rings = num_xdp_rings; + rx_alloc_cfg.xdp = !!num_xdp_rings; return gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg); } @@ -1629,6 +1637,7 @@ static int gve_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags) static int verify_xdp_configuration(struct net_device *dev) { struct gve_priv *priv = netdev_priv(dev); + u16 max_xdp_mtu; if (dev->features & NETIF_F_LRO) { netdev_warn(dev, "XDP is not supported when LRO is on.\n"); @@ -1641,7 +1650,11 @@ static int 
verify_xdp_configuration(struct net_device *dev) return -EOPNOTSUPP; } - if (dev->mtu > GVE_DEFAULT_RX_BUFFER_SIZE - sizeof(struct ethhdr) - GVE_RX_PAD) { + max_xdp_mtu = priv->rx_cfg.packet_buffer_size - sizeof(struct ethhdr); + if (priv->queue_format == GVE_GQI_QPL_FORMAT) + max_xdp_mtu -= GVE_RX_PAD; + + if (dev->mtu > max_xdp_mtu) { netdev_warn(dev, "XDP is not supported for mtu %d.\n", dev->mtu); return -EOPNOTSUPP; diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c index 5fbcf93a54e0..2edf3c632cbd 100644 --- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c +++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c @@ -225,6 +225,14 @@ int gve_rx_alloc_ring_dqo(struct gve_priv *priv, rx->q_num = idx; rx->packet_buffer_size = cfg->packet_buffer_size; + if (cfg->xdp) { + rx->packet_buffer_truesize = GVE_XDP_RX_BUFFER_SIZE_DQO; + rx->rx_headroom = XDP_PACKET_HEADROOM; + } else { + rx->packet_buffer_truesize = rx->packet_buffer_size; + rx->rx_headroom = 0; + } + rx->dqo.num_buf_states = cfg->raw_addressing ? buffer_queue_slots : gve_get_rx_pages_per_qpl_dqo(cfg->ring_size); rx->dqo.buf_states = kvcalloc(rx->dqo.num_buf_states, @@ -254,7 +262,7 @@ int gve_rx_alloc_ring_dqo(struct gve_priv *priv, goto err; if (cfg->raw_addressing) { - pool = gve_rx_create_page_pool(priv, rx); + pool = gve_rx_create_page_pool(priv, rx, cfg->xdp); if (IS_ERR(pool)) goto err; @@ -484,14 +492,15 @@ static void gve_skb_add_rx_frag(struct gve_rx_ring *rx, if (rx->dqo.page_pool) { skb_add_rx_frag_netmem(rx->ctx.skb_tail, num_frags, buf_state->page_info.netmem, - buf_state->page_info.page_offset, - buf_len, + buf_state->page_info.page_offset + + buf_state->page_info.pad, buf_len, buf_state->page_info.buf_size); } else { skb_add_rx_frag(rx->ctx.skb_tail, num_frags, buf_state->page_info.page, - buf_state->page_info.page_offset, - buf_len, buf_state->page_info.buf_size); + buf_state->page_info.page_offset + + buf_state->page_info.pad, buf_len, + buf_state->page_info.buf_size); } } @@ -611,7 +620,8 @@ static int gve_rx_dqo(struct napi_struct *napi, struct gve_rx_ring *rx, /* Sync the portion of dma buffer for CPU to read. */ dma_sync_single_range_for_cpu(&priv->pdev->dev, buf_state->addr, - buf_state->page_info.page_offset, + buf_state->page_info.page_offset + + buf_state->page_info.pad, buf_len, DMA_FROM_DEVICE); /* Append to current skb if one exists. 
*/

From patchwork Fri Mar 21 00:29:10 2025
X-Patchwork-Submitter: Harshitha Ramamurthy
X-Patchwork-Id: 14024724
X-Patchwork-Delegate: kuba@kernel.org
Date: Fri, 21 Mar 2025 00:29:10 +0000
In-Reply-To: <20250321002910.1343422-1-hramamurthy@google.com>
References: <20250321002910.1343422-1-hramamurthy@google.com>
Message-ID: <20250321002910.1343422-7-hramamurthy@google.com>
Subject: [PATCH net-next 6/6] gve: add XDP DROP and PASS support for DQ
From: Harshitha Ramamurthy
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, hramamurthy@google.com, andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, ast@kernel.org, daniel@iogearbox.net, hawk@kernel.org, john.fastabend@gmail.com, pkaligineedi@google.com, willemb@google.com, ziweixiao@google.com, joshwash@google.com, horms@kernel.org, shailend@google.com, bcf@google.com, linux-kernel@vger.kernel.org, bpf@vger.kernel.org

From: Joshua Washington

This patch adds support for running XDP programs on DQ, along with rudimentary processing for the XDP_DROP and XDP_PASS actions. These actions require very limited driver functionality to process an XDP buffer, so currently, if the XDP action is not XDP_PASS, the packet is dropped and stats are updated.

Reviewed-by: Willem de Bruijn Signed-off-by: Praveen Kaligineedi Signed-off-by: Joshua Washington Signed-off-by: Harshitha Ramamurthy --- drivers/net/ethernet/google/gve/gve_rx_dqo.c | 52 ++++++++++++++++++++ 1 file changed, 52 insertions(+) diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c index 2edf3c632cbd..deb869eb38e0 100644 --- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c +++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c @@ -546,6 +546,29 @@ static int gve_rx_append_frags(struct napi_struct *napi, return 0; } +static void gve_xdp_done_dqo(struct gve_priv *priv, struct gve_rx_ring *rx, + struct xdp_buff *xdp, struct bpf_prog *xprog, + int xdp_act, + struct gve_rx_buf_state_dqo *buf_state) +{ + u64_stats_update_begin(&rx->statss); + switch (xdp_act) { + case XDP_ABORTED: + case XDP_DROP: + default: + rx->xdp_actions[xdp_act]++; + break; + case XDP_TX: + rx->xdp_tx_errors++; + break; + case XDP_REDIRECT: + rx->xdp_redirect_errors++; + break; + } + u64_stats_update_end(&rx->statss); + gve_free_buffer(rx, buf_state); +} + /* Returns 0 if descriptor is completed successfully. * Returns -EINVAL if descriptor is invalid. * Returns -ENOMEM if data cannot be copied to skb.
@@ -560,6 +583,7 @@ static int gve_rx_dqo(struct napi_struct *napi, struct gve_rx_ring *rx, const bool hsplit = compl_desc->split_header; struct gve_rx_buf_state_dqo *buf_state; struct gve_priv *priv = rx->gve; + struct bpf_prog *xprog; u16 buf_len; u16 hdr_len; @@ -633,6 +657,34 @@ static int gve_rx_dqo(struct napi_struct *napi, struct gve_rx_ring *rx, return 0; } + xprog = READ_ONCE(priv->xdp_prog); + if (xprog) { + struct xdp_buff xdp; + void *old_data; + int xdp_act; + + xdp_init_buff(&xdp, buf_state->page_info.buf_size, + &rx->xdp_rxq); + xdp_prepare_buff(&xdp, + buf_state->page_info.page_address + + buf_state->page_info.page_offset, + buf_state->page_info.pad, + buf_len, false); + old_data = xdp.data; + xdp_act = bpf_prog_run_xdp(xprog, &xdp); + buf_state->page_info.pad += xdp.data - old_data; + buf_len = xdp.data_end - xdp.data; + if (xdp_act != XDP_PASS) { + gve_xdp_done_dqo(priv, rx, &xdp, xprog, xdp_act, + buf_state); + return 0; + } + + u64_stats_update_begin(&rx->statss); + rx->xdp_actions[XDP_PASS]++; + u64_stats_update_end(&rx->statss); + } + if (eop && buf_len <= priv->rx_copybreak) { rx->ctx.skb_head = gve_rx_copy(priv->dev, napi, &buf_state->page_info, buf_len);