From patchwork Wed Dec 18 13:34:11 2024
X-Patchwork-Submitter: Praveen Kaligineedi
X-Patchwork-Id: 13913704
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 18 Dec 2024 05:34:11 -0800
Message-ID: <20241218133415.3759501-2-pkaligineedi@google.com>
In-Reply-To: <20241218133415.3759501-1-pkaligineedi@google.com>
References: <20241218133415.3759501-1-pkaligineedi@google.com>
Subject: [PATCH net 1/5] gve: clean XDP queues in gve_tx_stop_ring_gqi
From: Praveen Kaligineedi
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, shailend@google.com, willemb@google.com,
    andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
    kuba@kernel.org, pabeni@redhat.com, ast@kernel.org, daniel@iogearbox.net,
    hawk@kernel.org, john.fastabend@gmail.com, horms@kernel.org,
    hramamurthy@google.com, joshwash@google.com, ziweixiao@google.com,
    linux-kernel@vger.kernel.org, bpf@vger.kernel.org, stable@vger.kernel.org,
    Praveen Kaligineedi

From: Joshua Washington

When stopping XDP TX rings, the XDP clean function needs to be called to
clean out the entire queue, similar to what happens in the normal TX queue
case. Otherwise, the FIFO won't be cleared correctly, and xsk_tx_completed
won't be reported.
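The check introduced below relies on gve's TX ring numbering, in which the
dedicated XDP TX rings are appended after the priv->tx_cfg.num_queues regular
TX rings. As a rough, standalone illustration (hypothetical types and helpers,
not driver code), the sketch below shows why comparing q_num against the
regular TX queue count picks the right cleanup routine:

/* Standalone sketch, hypothetical names: regular TX rings occupy indices
 * [0, num_tx_queues) and dedicated XDP TX rings follow them, so an index
 * below num_tx_queues identifies a regular ring.
 */
#include <stdbool.h>
#include <stdio.h>

struct example_tx_ring {
	int q_num;		/* global TX ring index */
};

static bool example_ring_is_xdp(const struct example_tx_ring *ring,
				int num_tx_queues)
{
	return ring->q_num >= num_tx_queues;
}

static void example_stop_ring(const struct example_tx_ring *ring,
			      int num_tx_queues)
{
	if (!example_ring_is_xdp(ring, num_tx_queues))
		printf("ring %d: regular TX cleanup (gve_clean_tx_done path)\n",
		       ring->q_num);
	else
		printf("ring %d: XDP TX cleanup (gve_clean_xdp_done path)\n",
		       ring->q_num);
}

int main(void)
{
	int num_tx_queues = 4;	/* 4 regular TX rings, XDP rings start at 4 */
	struct example_tx_ring regular = { .q_num = 1 };
	struct example_tx_ring xdp = { .q_num = 5 };

	example_stop_ring(&regular, num_tx_queues);
	example_stop_ring(&xdp, num_tx_queues);
	return 0;
}

With that layout, a single stop path such as gve_tx_stop_ring_gqi can serve
both kinds of rings.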
Fixes: 75eaae158b1b ("gve: Add XDP DROP and TX support for GQI-QPL format")
Cc: stable@vger.kernel.org
Signed-off-by: Joshua Washington
Signed-off-by: Praveen Kaligineedi
Reviewed-by: Praveen Kaligineedi
Reviewed-by: Willem de Bruijn
---
 drivers/net/ethernet/google/gve/gve_tx.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index e7fb7d6d283d..83ad278ec91f 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -206,7 +206,10 @@ void gve_tx_stop_ring_gqi(struct gve_priv *priv, int idx)
 		return;
 
 	gve_remove_napi(priv, ntfy_idx);
-	gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
+	if (tx->q_num < priv->tx_cfg.num_queues)
+		gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
+	else
+		gve_clean_xdp_done(priv, tx, priv->tx_desc_cnt);
 	netdev_tx_reset_queue(tx->netdev_txq);
 	gve_tx_remove_from_block(priv, idx);
 }

From patchwork Wed Dec 18 13:34:12 2024
X-Patchwork-Submitter: Praveen Kaligineedi
X-Patchwork-Id: 13913705
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 18 Dec 2024 05:34:12 -0800
Message-ID: <20241218133415.3759501-3-pkaligineedi@google.com>
In-Reply-To: <20241218133415.3759501-1-pkaligineedi@google.com>
References: <20241218133415.3759501-1-pkaligineedi@google.com>
Subject: [PATCH net 2/5] gve: guard XDP xmit NDO on existence of xdp queues
From: Praveen Kaligineedi
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, shailend@google.com, willemb@google.com,
    andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
    kuba@kernel.org, pabeni@redhat.com, ast@kernel.org, daniel@iogearbox.net,
    hawk@kernel.org, john.fastabend@gmail.com, horms@kernel.org,
    hramamurthy@google.com, joshwash@google.com, ziweixiao@google.com,
    linux-kernel@vger.kernel.org, bpf@vger.kernel.org, stable@vger.kernel.org,
    Praveen Kaligineedi

From: Joshua Washington

In GVE, dedicated XDP queues only exist when an XDP program is installed
and the interface is up. As such, the NDO XDP XMIT callback should return
early if either of these conditions is false.

When no XDP program is loaded, priv->num_xdp_queues is 0, which can cause
a divide-by-zero error. When the interface is down, num_xdp_queues remains
untouched to persist the XDP queue count for the next interface up, but
the TX pointer itself would be NULL.

The XDP xmit callback also needs to synchronize with a device
transitioning from open to close. This synchronization will happen via the
GVE_PRIV_FLAGS_NAPI_ENABLED bit along with a synchronize_net() call, which
waits for any RCU critical sections at call-time to complete.
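As a rough illustration of the guard ordering described above (a standalone
sketch with hypothetical names, not the driver code), checking for a loaded
program and for the NAPI-enabled flag before computing cpu % num_xdp_queues
avoids the modulo-by-zero and rejects transmission while the device is going
down:

/* Standalone sketch, hypothetical names: guard ordering for an XDP xmit
 * path. Checking for a loaded program and an enabled device before the
 * cpu % num_xdp_queues step avoids the modulo-by-zero described above.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct example_priv {
	bool xdp_prog_loaded;	/* stands in for priv->xdp_prog */
	bool napi_enabled;	/* stands in for GVE_PRIV_FLAGS_NAPI_ENABLED */
	int num_xdp_queues;	/* 0 when no program is loaded */
};

static int example_xdp_xmit(const struct example_priv *priv, int cpu)
{
	if (!priv->xdp_prog_loaded)
		return -EINVAL;		/* no XDP queues exist at all */
	if (!priv->napi_enabled)
		return -ENETDOWN;	/* device is down or closing */

	/* Only reached when XDP queues exist, so the modulo is safe. */
	return cpu % priv->num_xdp_queues;
}

int main(void)
{
	struct example_priv closing = { .xdp_prog_loaded = true,
					.napi_enabled = false,
					.num_xdp_queues = 4 };
	struct example_priv open = { .xdp_prog_loaded = true,
				     .napi_enabled = true,
				     .num_xdp_queues = 4 };

	printf("closing: %d\n", example_xdp_xmit(&closing, 7));	/* -ENETDOWN */
	printf("open: queue %d\n", example_xdp_xmit(&open, 7));	/* 3 */
	return 0;
}

In the driver itself, gve_turndown clears the enabled flag and then calls
synchronize_net(), so xmit calls that already passed the flag check can drain
before the queues are torn down.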
Fixes: 39a7f4aa3e4a ("gve: Add XDP REDIRECT support for GQI-QPL format")
Cc: stable@vger.kernel.org
Signed-off-by: Joshua Washington
Signed-off-by: Praveen Kaligineedi
Reviewed-by: Praveen Kaligineedi
Reviewed-by: Shailend Chand
Reviewed-by: Willem de Bruijn
---
 drivers/net/ethernet/google/gve/gve_main.c | 3 +++
 drivers/net/ethernet/google/gve/gve_tx.c   | 5 ++++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index e171ca248f9a..5d7b0cc59959 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -1899,6 +1899,9 @@ static void gve_turndown(struct gve_priv *priv)
 
 	gve_clear_napi_enabled(priv);
 	gve_clear_report_stats(priv);
+
+	/* Make sure that all traffic is finished processing. */
+	synchronize_net();
 }
 
 static void gve_turnup(struct gve_priv *priv)
diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index 83ad278ec91f..852f8c7e39d2 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -837,9 +837,12 @@ int gve_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 	struct gve_tx_ring *tx;
 	int i, err = 0, qid;
 
-	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK) || !priv->xdp_prog)
 		return -EINVAL;
 
+	if (!gve_get_napi_enabled(priv))
+		return -ENETDOWN;
+
 	qid = gve_xdp_tx_queue_id(priv,
 				  smp_processor_id() % priv->num_xdp_queues);

From patchwork Wed Dec 18 13:34:13 2024
X-Patchwork-Submitter: Praveen Kaligineedi
X-Patchwork-Id: 13913706
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 18 Dec 2024 05:34:13 -0800
Message-ID: <20241218133415.3759501-4-pkaligineedi@google.com>
In-Reply-To: <20241218133415.3759501-1-pkaligineedi@google.com>
References: <20241218133415.3759501-1-pkaligineedi@google.com>
Subject: [PATCH net 3/5] gve: guard XSK operations on the existence of queues
From: Praveen Kaligineedi
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, shailend@google.com, willemb@google.com,
    andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
    kuba@kernel.org, pabeni@redhat.com, ast@kernel.org, daniel@iogearbox.net,
    hawk@kernel.org, john.fastabend@gmail.com, horms@kernel.org,
    hramamurthy@google.com, joshwash@google.com, ziweixiao@google.com,
    linux-kernel@vger.kernel.org, bpf@vger.kernel.org, stable@vger.kernel.org,
    Praveen Kaligineedi

From: Joshua Washington

This patch predicates the enabling and disabling of XSK pools on the
existence of queues. As it stands, if the interface is down, disabling or
enabling XSK pools would result in a crash, as the RX queue pointer would
be NULL. XSK pool registration will occur as part of the next interface
up.

Similarly, xsk_wakeup needs to be guarded against queues disappearing
while the function is executing, so a check against the
GVE_PRIV_FLAGS_NAPI_ENABLED flag is added to synchronize with the
disabling of the bit and the synchronize_net() in gve_turndown.
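The failure mode described above can be sketched in isolation (hypothetical
names, not the driver code): the per-queue state only exists while the
interface is up, so the XSK pool hooks must return early instead of
dereferencing a NULL queue array:

/* Standalone sketch, hypothetical names: the RX queue array only exists
 * while the interface is up, so the XSK pool hooks bail out early rather
 * than dereferencing a NULL pointer; registration happens on the next up.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct example_rx_queue {
	void *xsk_pool;
};

struct example_priv {
	bool running;			/* stands in for netif_running() */
	bool xdp_prog_loaded;
	struct example_rx_queue *rx;	/* NULL while the interface is down */
};

static int example_xsk_pool_enable(struct example_priv *priv, int qid)
{
	static int registered_pool;	/* placeholder pool object */

	/* Defer registration to the next interface up if either is false. */
	if (!priv->xdp_prog_loaded || !priv->running)
		return 0;

	priv->rx[qid].xsk_pool = &registered_pool;
	return 0;
}

int main(void)
{
	struct example_priv down = {
		.running = false,
		.xdp_prog_loaded = true,
		.rx = NULL,		/* queues freed while down */
	};

	/* Without the guard this would crash on the NULL queue array. */
	printf("enable while down -> %d\n", example_xsk_pool_enable(&down, 0));
	return 0;
}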
Fixes: fd8e40321a12 ("gve: Add AF_XDP zero-copy support for GQI-QPL format")
Cc: stable@vger.kernel.org
Signed-off-by: Joshua Washington
Signed-off-by: Praveen Kaligineedi
Reviewed-by: Praveen Kaligineedi
Reviewed-by: Shailend Chand
Reviewed-by: Willem de Bruijn
Reviewed-by: Larysa Zaremba
---
 drivers/net/ethernet/google/gve/gve_main.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 5d7b0cc59959..e4e8ff4f9f80 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -1623,8 +1623,8 @@ static int gve_xsk_pool_enable(struct net_device *dev,
 	if (err)
 		return err;
 
-	/* If XDP prog is not installed, return */
-	if (!priv->xdp_prog)
+	/* If XDP prog is not installed or interface is down, return. */
+	if (!priv->xdp_prog || !netif_running(dev))
 		return 0;
 
 	rx = &priv->rx[qid];
@@ -1669,21 +1669,16 @@ static int gve_xsk_pool_disable(struct net_device *dev,
 	if (qid >= priv->rx_cfg.num_queues)
 		return -EINVAL;
 
-	/* If XDP prog is not installed, unmap DMA and return */
-	if (!priv->xdp_prog)
+	/* If XDP prog is not installed or interface is down, unmap DMA and
+	 * return.
+	 */
+	if (!priv->xdp_prog || !netif_running(dev))
 		goto done;
 
-	tx_qid = gve_xdp_tx_queue_id(priv, qid);
-	if (!netif_running(dev)) {
-		priv->rx[qid].xsk_pool = NULL;
-		xdp_rxq_info_unreg(&priv->rx[qid].xsk_rxq);
-		priv->tx[tx_qid].xsk_pool = NULL;
-		goto done;
-	}
-
 	napi_rx = &priv->ntfy_blocks[priv->rx[qid].ntfy_id].napi;
 	napi_disable(napi_rx); /* make sure current rx poll is done */
 
+	tx_qid = gve_xdp_tx_queue_id(priv, qid);
 	napi_tx = &priv->ntfy_blocks[priv->tx[tx_qid].ntfy_id].napi;
 	napi_disable(napi_tx); /* make sure current tx poll is done */
@@ -1711,6 +1706,9 @@ static int gve_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags)
 	struct gve_priv *priv = netdev_priv(dev);
 	int tx_queue_id = gve_xdp_tx_queue_id(priv, queue_id);
 
+	if (!gve_get_napi_enabled(priv))
+		return -ENETDOWN;
+
 	if (queue_id >= priv->rx_cfg.num_queues || !priv->xdp_prog)
 		return -EINVAL;

From patchwork Wed Dec 18 13:34:14 2024
X-Patchwork-Submitter: Praveen Kaligineedi
X-Patchwork-Id: 13913707
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 18 Dec 2024 05:34:14 -0800
Message-ID: <20241218133415.3759501-5-pkaligineedi@google.com>
In-Reply-To: <20241218133415.3759501-1-pkaligineedi@google.com>
References: <20241218133415.3759501-1-pkaligineedi@google.com>
Subject: [PATCH net 4/5] gve: process XSK TX descriptors as part of RX NAPI
From: Praveen Kaligineedi
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, shailend@google.com,
    willemb@google.com, andrew+netdev@lunn.ch, davem@davemloft.net,
    edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, ast@kernel.org,
    daniel@iogearbox.net, hawk@kernel.org, john.fastabend@gmail.com,
    horms@kernel.org, hramamurthy@google.com, joshwash@google.com,
    ziweixiao@google.com, linux-kernel@vger.kernel.org, bpf@vger.kernel.org,
    stable@vger.kernel.org, Praveen Kaligineedi

From: Joshua Washington

When busy polling is enabled, xsk_sendmsg for AF_XDP zero copy marks the
NAPI ID corresponding to the memory pool allocated for the socket. In GVE,
this NAPI ID will never correspond to a NAPI ID of one of the dedicated
XDP TX queues registered with the umem because XDP TX is not set up to
share a NAPI with a corresponding RX queue.

This patch moves XSK TX descriptor processing from the TX NAPI to the RX
NAPI, and the gve_xsk_wakeup callback is updated to use the RX NAPI
instead of the TX NAPI accordingly. The branch on whether the wakeup is
for TX is removed, as the NAPI poll should be invoked whether the wakeup
is for TX or for RX.
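A standalone sketch of the re-poll decision this patch moves into the RX NAPI
(hypothetical names, not the driver code): RX and XSK TX are polled together,
and the larger of the two work counts is compared against the budget to decide
whether to reschedule:

/* Standalone sketch, hypothetical names: XSK TX work is polled from the
 * RX NAPI, and the larger of the RX and XSK TX work counts is compared
 * against the budget to decide whether to reschedule the poll.
 */
#include <stdbool.h>
#include <stdio.h>

static int example_rx_poll(int budget)
{
	(void)budget;
	return 10;			/* stub: some RX work done */
}

static int example_xsk_tx_poll(int budget)
{
	return budget;			/* stub: XSK TX exhausted the budget */
}

static int example_max(int a, int b)
{
	return a > b ? a : b;
}

static bool example_rx_napi(int budget, bool xdp_prog_loaded)
{
	int work_done = example_rx_poll(budget);

	if (xdp_prog_loaded)
		work_done = example_max(work_done,
					example_xsk_tx_poll(budget));

	/* Re-poll when either direction used up the whole budget. */
	return work_done == budget;
}

int main(void)
{
	printf("reschedule: %s\n", example_rx_napi(64, true) ? "yes" : "no");
	return 0;
}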
Fixes: fd8e40321a12 ("gve: Add AF_XDP zero-copy support for GQI-QPL format")
Cc: stable@vger.kernel.org
Signed-off-by: Praveen Kaligineedi
Signed-off-by: Joshua Washington
Reviewed-by: Willem de Bruijn
---
 drivers/net/ethernet/google/gve/gve.h      |  1 +
 drivers/net/ethernet/google/gve/gve_main.c |  8 +++++
 drivers/net/ethernet/google/gve/gve_tx.c   | 36 +++++++++++++---------
 3 files changed, 31 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index dd92949bb214..8167cc5fb0df 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -1140,6 +1140,7 @@ int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
 void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid);
 bool gve_tx_poll(struct gve_notify_block *block, int budget);
 bool gve_xdp_poll(struct gve_notify_block *block, int budget);
+int gve_xsk_tx_poll(struct gve_notify_block *block, int budget);
 int gve_tx_alloc_rings_gqi(struct gve_priv *priv,
 			   struct gve_tx_alloc_rings_cfg *cfg);
 void gve_tx_free_rings_gqi(struct gve_priv *priv,
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index e4e8ff4f9f80..5cab7b88610f 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -333,6 +333,14 @@ int gve_napi_poll(struct napi_struct *napi, int budget)
 
 	if (block->rx) {
 		work_done = gve_rx_poll(block, budget);
+
+		/* Poll XSK TX as part of RX NAPI. Setup re-poll based on max of
+		 * TX and RX work done.
+		 */
+		if (priv->xdp_prog)
+			work_done = max_t(int, work_done,
+					  gve_xsk_tx_poll(block, budget));
+
 		reschedule |= work_done == budget;
 	}
diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index 852f8c7e39d2..4350ebd9c2bd 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -981,33 +981,41 @@ static int gve_xsk_tx(struct gve_priv *priv, struct gve_tx_ring *tx,
 	return sent;
 }
 
+int gve_xsk_tx_poll(struct gve_notify_block *rx_block, int budget)
+{
+	struct gve_rx_ring *rx = rx_block->rx;
+	struct gve_priv *priv = rx->gve;
+	struct gve_tx_ring *tx;
+	int sent = 0;
+
+	tx = &priv->tx[gve_xdp_tx_queue_id(priv, rx->q_num)];
+	if (tx->xsk_pool) {
+		sent = gve_xsk_tx(priv, tx, budget);
+
+		u64_stats_update_begin(&tx->statss);
+		tx->xdp_xsk_sent += sent;
+		u64_stats_update_end(&tx->statss);
+		if (xsk_uses_need_wakeup(tx->xsk_pool))
+			xsk_set_tx_need_wakeup(tx->xsk_pool);
+	}
+
+	return sent;
+}
+
 bool gve_xdp_poll(struct gve_notify_block *block, int budget)
 {
 	struct gve_priv *priv = block->priv;
 	struct gve_tx_ring *tx = block->tx;
 	u32 nic_done;
-	bool repoll;
 	u32 to_do;
 
 	/* Find out how much work there is to be done */
 	nic_done = gve_tx_load_event_counter(priv, tx);
 	to_do = min_t(u32, (nic_done - tx->done), budget);
 	gve_clean_xdp_done(priv, tx, to_do);
-	repoll = nic_done != tx->done;
-
-	if (tx->xsk_pool) {
-		int sent = gve_xsk_tx(priv, tx, budget);
-
-		u64_stats_update_begin(&tx->statss);
-		tx->xdp_xsk_sent += sent;
-		u64_stats_update_end(&tx->statss);
-		repoll |= (sent == budget);
-		if (xsk_uses_need_wakeup(tx->xsk_pool))
-			xsk_set_tx_need_wakeup(tx->xsk_pool);
-	}
 
 	/* If we still have work we want to repoll */
-	return repoll;
+	return nic_done != tx->done;
 }
 
 bool gve_tx_poll(struct gve_notify_block *block, int budget)

From patchwork Wed Dec 18 13:34:15 2024
X-Patchwork-Submitter: Praveen Kaligineedi
X-Patchwork-Id: 13913708
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 18 Dec 2024 05:34:15 -0800
Message-ID: <20241218133415.3759501-6-pkaligineedi@google.com>
In-Reply-To: <20241218133415.3759501-1-pkaligineedi@google.com>
References: <20241218133415.3759501-1-pkaligineedi@google.com>
Subject: [PATCH net 5/5] gve: fix XDP allocation path in edge cases
From: Praveen Kaligineedi
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, shailend@google.com, willemb@google.com,
    andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
    kuba@kernel.org, pabeni@redhat.com, ast@kernel.org, daniel@iogearbox.net,
    hawk@kernel.org, john.fastabend@gmail.com, horms@kernel.org,
    hramamurthy@google.com, joshwash@google.com, ziweixiao@google.com,
    linux-kernel@vger.kernel.org, bpf@vger.kernel.org, stable@vger.kernel.org,
    Praveen Kaligineedi

From: Joshua Washington

This patch fixes a number of consistency issues in the queue allocation
path related to XDP.
As it stands, the number of allocated XDP queues changes in three
different scenarios:
1) Adding an XDP program while the interface is up via
   gve_add_xdp_queues
2) Removing an XDP program while the interface is up via
   gve_remove_xdp_queues
3) After queues have been allocated and the old queue memory has been
   removed in gve_queues_start

However, gve_(add|remove)_xdp_queues can only be called while the
interface is up, and the number of queues stored in priv isn't updated
until _after_ XDP queues have been allocated in the normal queue
allocation path. Together, this means that if an XDP program is added
while the interface is down, XDP queues won't be added until the _second_
if_up, not the first.

Given the expectation that the number of XDP queues is equal to the
number of RX queues, scenario (3) has another problematic implication.
When changing the number of queues while an XDP program is loaded, the
number of XDP queues must be updated as well, as there is logic in the
driver (gve_xdp_tx_queue_id()) which relies on every RX queue having a
corresponding XDP TX queue. However, the number of XDP queues stored in
priv would not be updated until _after_ a close/open, leading to a
mismatch between the number of XDP queues reported and the number of XDP
queues which actually exist after the queue count update completes.

This patch remedies these issues by doing the following:
1) The allocation config getter function is set up to retrieve the
   _expected_ number of XDP queues to allocate instead of relying on the
   value stored in `priv`, which is only updated once the queues have
   been allocated.
2) When adjusting queues, XDP queues are adjusted to match the number of
   RX queues when XDP is enabled. This only works when the queues are
   live, so part (1) of the fix must still be available in the case that
   queues are adjusted when there is an XDP program and the interface is
   down.
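The ring-count rule behind point (1) can be shown with a standalone sketch
(hypothetical names, not the driver code): the expected number of TX rings is
the configured TX queue count plus one dedicated XDP TX queue per RX queue
whenever a program is loaded, independent of whether the interface is
currently up:

/* Standalone sketch, hypothetical names: expected TX ring count used when
 * building the allocation config. Dedicated XDP TX queues mirror the RX
 * queue count whenever an XDP program is loaded.
 */
#include <stdbool.h>
#include <stdio.h>

static int example_expected_tx_rings(int num_tx_queues, int num_rx_queues,
				     bool xdp_prog_loaded)
{
	int num_xdp_queues = xdp_prog_loaded ? num_rx_queues : 0;

	return num_tx_queues + num_xdp_queues;
}

int main(void)
{
	/* Queue count change from 8 to 16 while an XDP program is loaded. */
	printf("8+8 with XDP:   %d rings\n",
	       example_expected_tx_rings(8, 8, true));		/* 16 */
	printf("16+16 with XDP: %d rings\n",
	       example_expected_tx_rings(16, 16, true));	/* 32 */
	printf("16 without XDP: %d rings\n",
	       example_expected_tx_rings(16, 16, false));	/* 16 */
	return 0;
}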
Fixes: 5f08cd3d6423 ("gve: Alloc before freeing when adjusting queues")
Cc: stable@vger.kernel.org
Signed-off-by: Joshua Washington
Signed-off-by: Praveen Kaligineedi
Reviewed-by: Praveen Kaligineedi
Reviewed-by: Shailend Chand
Reviewed-by: Willem de Bruijn
---
 drivers/net/ethernet/google/gve/gve_main.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 5cab7b88610f..09fb7f16f73e 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -930,11 +930,13 @@ static void gve_init_sync_stats(struct gve_priv *priv)
 static void gve_tx_get_curr_alloc_cfg(struct gve_priv *priv,
 				      struct gve_tx_alloc_rings_cfg *cfg)
 {
+	int num_xdp_queues = priv->xdp_prog ? priv->rx_cfg.num_queues : 0;
+
 	cfg->qcfg = &priv->tx_cfg;
 	cfg->raw_addressing = !gve_is_qpl(priv);
 	cfg->ring_size = priv->tx_desc_cnt;
 	cfg->start_idx = 0;
-	cfg->num_rings = gve_num_tx_queues(priv);
+	cfg->num_rings = priv->tx_cfg.num_queues + num_xdp_queues;
 	cfg->tx = priv->tx;
 }
@@ -1843,6 +1845,7 @@ int gve_adjust_queues(struct gve_priv *priv,
 {
 	struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0};
 	struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0};
+	int num_xdp_queues;
 	int err;
 
 	gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg);
@@ -1853,6 +1856,10 @@ int gve_adjust_queues(struct gve_priv *priv,
 	rx_alloc_cfg.qcfg = &new_rx_config;
 	tx_alloc_cfg.num_rings = new_tx_config.num_queues;
 
+	/* Add dedicated XDP TX queues if enabled. */
+	num_xdp_queues = priv->xdp_prog ? new_rx_config.num_queues : 0;
+	tx_alloc_cfg.num_rings += num_xdp_queues;
+
 	if (netif_running(priv->dev)) {
 		err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg);
 		return err;