From patchwork Tue Apr 30 23:14:10 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13650017
X-Patchwork-Delegate: kuba@kernel.org
dG0Iy0mVBbCCOdXmWeqQrNHxAhRr8GdQf2xKnFqYsBJD9QAhMg5h8TeJoFKzOC8B6iKl2aJTivE WtvwuwBBMGaEHczS6xY6cKpB8lkxFFrYNsAegJBF6HIvY= X-Google-Smtp-Source: AGHT+IE4/8Ok2uWNDlTbdDpD46w3TCSp/6bkhgIEeFrltdtr/VLk4si7h+tb6g5gnYzwgrAXm8WddqPE5vlcdw== X-Received: from shailendkvm.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:2648]) (user=shailend job=sendgmr) by 2002:a05:6a00:9392:b0:6f3:f447:57e1 with SMTP id ka18-20020a056a00939200b006f3f44757e1mr46865pfb.1.1714518875684; Tue, 30 Apr 2024 16:14:35 -0700 (PDT) Date: Tue, 30 Apr 2024 23:14:10 +0000 In-Reply-To: <20240430231420.699177-1-shailend@google.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240430231420.699177-1-shailend@google.com> X-Mailer: git-send-email 2.45.0.rc0.197.gbae5840b3b-goog Message-ID: <20240430231420.699177-2-shailend@google.com> Subject: [PATCH net-next 01/10] queue_api: define queue api From: Shailend Chand To: netdev@vger.kernel.org Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com, hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org, pabeni@redhat.com, pkaligineedi@google.com, willemb@google.com, Shailend Chand X-Patchwork-Delegate: kuba@kernel.org From: Mina Almasry This API enables the net stack to reset the queues used for devmem TCP. Signed-off-by: Mina Almasry Signed-off-by: Shailend Chand --- include/linux/netdevice.h | 3 +++ include/net/netdev_queues.h | 31 +++++++++++++++++++++++++++++++ 2 files changed, 34 insertions(+) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index f849e7d110ed..6a58ec73c5e8 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -1957,6 +1957,7 @@ enum netdev_reg_state { * @sysfs_rx_queue_group: Space for optional per-rx queue attributes * @rtnl_link_ops: Rtnl_link_ops * @stat_ops: Optional ops for queue-aware statistics + * @queue_mgmt_ops: Optional ops for queue management * * @gso_max_size: Maximum size of generic segmentation offload * @tso_max_size: Device (as in HW) limit on the max TSO request size @@ -2340,6 +2341,8 @@ struct net_device { const struct netdev_stat_ops *stat_ops; + const struct netdev_queue_mgmt_ops *queue_mgmt_ops; + /* for setting kernel sock attribute on TCP connection setup */ #define GSO_MAX_SEGS 65535u #define GSO_LEGACY_MAX_SIZE 65536u diff --git a/include/net/netdev_queues.h b/include/net/netdev_queues.h index c7ac4539eafc..04bb1318c6cc 100644 --- a/include/net/netdev_queues.h +++ b/include/net/netdev_queues.h @@ -87,6 +87,37 @@ struct netdev_stat_ops { struct netdev_queue_stats_tx *tx); }; +/** + * struct netdev_queue_mgmt_ops - netdev ops for queue management + * + * queue_mem_size: Size of the struct that describes a queue's memory. + * + * @ndo_queue_mem_alloc: Allocate memory for an RX queue at the specified index. + * The new memory is written at the specified address. + * + * @ndo_queue_mem_free: Free memory from an RX queue. + * + * @ndo_queue_start: Start an RX queue with the specified memory and at the + * specified index. + * + * @ndo_queue_stop: Stop the RX queue at the specified index. The stopped + * queue's memory is written at the specified address. 
+ */
+struct netdev_queue_mgmt_ops {
+	size_t	ndo_queue_mem_size;
+	int	(*ndo_queue_mem_alloc)(struct net_device *dev,
+				       void *per_queue_mem,
+				       int idx);
+	void	(*ndo_queue_mem_free)(struct net_device *dev,
+				      void *per_queue_mem);
+	int	(*ndo_queue_start)(struct net_device *dev,
+				   void *per_queue_mem,
+				   int idx);
+	int	(*ndo_queue_stop)(struct net_device *dev,
+				   void *per_queue_mem,
+				   int idx);
+};
+
 /**
  * DOC: Lockless queue stopping / waking helpers.
  *

From patchwork Tue Apr 30 23:14:11 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13650018
X-Patchwork-Delegate: kuba@kernel.org
Date: Tue, 30 Apr 2024 23:14:11 +0000
In-Reply-To: <20240430231420.699177-1-shailend@google.com>
Message-ID: <20240430231420.699177-3-shailend@google.com>
Subject: [PATCH net-next 02/10] gve: Make the GQ RX free queue funcs idempotent
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com, hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org, pabeni@redhat.com, pkaligineedi@google.com, willemb@google.com, Shailend Chand

Although this is not fixing any existing double free bug, making these functions idempotent allows for a simpler implementation of future ndo hooks that act on a single queue.
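The pattern applied throughout the patch is to guard each release with a check that the resource is still allocated and to clear the pointer once it has been freed, so a second call becomes a no-op. A minimal sketch of the idea, using illustrative names rather than the actual gve structures:

#include <linux/device.h>
#include <linux/dma-mapping.h>

struct example_ring {
	void *desc;		/* CPU address of the descriptor ring */
	dma_addr_t bus;		/* DMA address of the descriptor ring */
	size_t size;		/* size of the ring in bytes */
};

/* Free the ring at most once: a repeat call finds desc == NULL and returns. */
static void example_free_ring(struct device *dev, struct example_ring *ring)
{
	if (!ring->desc)
		return;

	dma_free_coherent(dev, ring->size, ring->desc, ring->bus);
	ring->desc = NULL;
}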
Tested-by: Mina Almasry Reviewed-by: Praveen Kaligineedi Reviewed-by: Harshitha Ramamurthy Signed-off-by: Shailend Chand --- drivers/net/ethernet/google/gve/gve_rx.c | 29 ++++++++++++++++-------- 1 file changed, 19 insertions(+), 10 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c index 9b56e89c4f43..0a3f88170411 100644 --- a/drivers/net/ethernet/google/gve/gve_rx.c +++ b/drivers/net/ethernet/google/gve/gve_rx.c @@ -30,6 +30,9 @@ static void gve_rx_unfill_pages(struct gve_priv *priv, u32 slots = rx->mask + 1; int i; + if (!rx->data.page_info) + return; + if (rx->data.raw_addressing) { for (i = 0; i < slots; i++) gve_rx_free_buffer(&priv->pdev->dev, &rx->data.page_info[i], @@ -69,20 +72,26 @@ static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx, int idx = rx->q_num; size_t bytes; - bytes = sizeof(struct gve_rx_desc) * cfg->ring_size; - dma_free_coherent(dev, bytes, rx->desc.desc_ring, rx->desc.bus); - rx->desc.desc_ring = NULL; + if (rx->desc.desc_ring) { + bytes = sizeof(struct gve_rx_desc) * cfg->ring_size; + dma_free_coherent(dev, bytes, rx->desc.desc_ring, rx->desc.bus); + rx->desc.desc_ring = NULL; + } - dma_free_coherent(dev, sizeof(*rx->q_resources), - rx->q_resources, rx->q_resources_bus); - rx->q_resources = NULL; + if (rx->q_resources) { + dma_free_coherent(dev, sizeof(*rx->q_resources), + rx->q_resources, rx->q_resources_bus); + rx->q_resources = NULL; + } gve_rx_unfill_pages(priv, rx, cfg); - bytes = sizeof(*rx->data.data_ring) * slots; - dma_free_coherent(dev, bytes, rx->data.data_ring, - rx->data.data_bus); - rx->data.data_ring = NULL; + if (rx->data.data_ring) { + bytes = sizeof(*rx->data.data_ring) * slots; + dma_free_coherent(dev, bytes, rx->data.data_ring, + rx->data.data_bus); + rx->data.data_ring = NULL; + } kvfree(rx->qpl_copy_pool); rx->qpl_copy_pool = NULL; From patchwork Tue Apr 30 23:14:12 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shailend Chand X-Patchwork-Id: 13650019 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-yb1-f201.google.com (mail-yb1-f201.google.com [209.85.219.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0DA981C0DE0 for ; Tue, 30 Apr 2024 23:14:39 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.219.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714518881; cv=none; b=Sq2/7dF7Eh+wfg6d4+h9QnBTg6J25o2lKJA56uEFdG6/PooOCKNezYhJTX3ms39Bc2WtkDhOKNT905+bD+6xHIOWKjPUgkxCGs5B0UVNgQahehMy//b8vvuTYn2293bNeeIzDAYflpn7e1HlxjlLK6jBIv7eCDY5uZWThY5GDPY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714518881; c=relaxed/simple; bh=t+qKrNnLF+bPrbrbobIxv7gRNUfNlahiEGqwRfBYFY8=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=oK7d/IFhLe3GHWdDQSh9f6+pgHG4qwkyYUf7UbibSGUv4TJbDaCHX1obj564kboF42jyT5QuHrnZxKi0jzEi2Rc0lhZpMSAZI2h2uxu2vwWH8Aa6BJJ7YbT6Pw5GmgYtgq8g9A8reEd+OUxhBjSyisa8lc+C753QxQnQbQJIk8o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--shailend.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=3g51WJGf; arc=none smtp.client-ip=209.85.219.201 Authentication-Results: smtp.subspace.kernel.org; 
Date: Tue, 30 Apr 2024 23:14:12 +0000
In-Reply-To: <20240430231420.699177-1-shailend@google.com>
Message-ID: <20240430231420.699177-4-shailend@google.com>
Subject: [PATCH net-next 03/10] gve: Add adminq funcs to add/remove a single Rx queue
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com, hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org, pabeni@redhat.com, pkaligineedi@google.com, willemb@google.com, Shailend Chand

This allows for implementing future ndo hooks that act on a single queue.
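The new *_single_* helpers reuse the existing command-building code but execute the admin queue command right away instead of batching it, which is what a hook that operates on one queue needs. A hypothetical caller of the two functions added below might look like this sketch (the surrounding helper is illustrative and not part of this patch):

/* Hypothetical helper: tear down and recreate one Rx queue on the device. */
static int example_restart_rx_queue(struct gve_priv *priv, u32 queue_index)
{
	int err;

	err = gve_adminq_destroy_single_rx_queue(priv, queue_index);
	if (err)
		return err;

	/* ... reset and re-post this queue's ring memory here ... */

	return gve_adminq_create_single_rx_queue(priv, queue_index);
}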
Tested-by: Mina Almasry Reviewed-by: Praveen Kaligineedi Reviewed-by: Harshitha Ramamurthy Signed-off-by: Shailend Chand --- drivers/net/ethernet/google/gve/gve_adminq.c | 79 ++++++++++++++------ drivers/net/ethernet/google/gve/gve_adminq.h | 2 + 2 files changed, 58 insertions(+), 23 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c index b2b619aa2310..1b066c92d812 100644 --- a/drivers/net/ethernet/google/gve/gve_adminq.c +++ b/drivers/net/ethernet/google/gve/gve_adminq.c @@ -630,14 +630,15 @@ int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_que return gve_adminq_kick_and_wait(priv); } -static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index) +static void gve_adminq_get_create_rx_queue_cmd(struct gve_priv *priv, + union gve_adminq_command *cmd, + u32 queue_index) { struct gve_rx_ring *rx = &priv->rx[queue_index]; - union gve_adminq_command cmd; - memset(&cmd, 0, sizeof(cmd)); - cmd.opcode = cpu_to_be32(GVE_ADMINQ_CREATE_RX_QUEUE); - cmd.create_rx_queue = (struct gve_adminq_create_rx_queue) { + memset(cmd, 0, sizeof(*cmd)); + cmd->opcode = cpu_to_be32(GVE_ADMINQ_CREATE_RX_QUEUE); + cmd->create_rx_queue = (struct gve_adminq_create_rx_queue) { .queue_id = cpu_to_be32(queue_index), .ntfy_id = cpu_to_be32(rx->ntfy_id), .queue_resources_addr = cpu_to_be64(rx->q_resources_bus), @@ -648,13 +649,13 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index) u32 qpl_id = priv->queue_format == GVE_GQI_RDA_FORMAT ? GVE_RAW_ADDRESSING_QPL_ID : rx->data.qpl->id; - cmd.create_rx_queue.rx_desc_ring_addr = + cmd->create_rx_queue.rx_desc_ring_addr = cpu_to_be64(rx->desc.bus), - cmd.create_rx_queue.rx_data_ring_addr = + cmd->create_rx_queue.rx_data_ring_addr = cpu_to_be64(rx->data.data_bus), - cmd.create_rx_queue.index = cpu_to_be32(queue_index); - cmd.create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id); - cmd.create_rx_queue.packet_buffer_size = cpu_to_be16(rx->packet_buffer_size); + cmd->create_rx_queue.index = cpu_to_be32(queue_index); + cmd->create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id); + cmd->create_rx_queue.packet_buffer_size = cpu_to_be16(rx->packet_buffer_size); } else { u32 qpl_id = 0; @@ -662,25 +663,39 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index) qpl_id = GVE_RAW_ADDRESSING_QPL_ID; else qpl_id = rx->dqo.qpl->id; - cmd.create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id); - cmd.create_rx_queue.rx_desc_ring_addr = + cmd->create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id); + cmd->create_rx_queue.rx_desc_ring_addr = cpu_to_be64(rx->dqo.complq.bus); - cmd.create_rx_queue.rx_data_ring_addr = + cmd->create_rx_queue.rx_data_ring_addr = cpu_to_be64(rx->dqo.bufq.bus); - cmd.create_rx_queue.packet_buffer_size = + cmd->create_rx_queue.packet_buffer_size = cpu_to_be16(priv->data_buffer_size_dqo); - cmd.create_rx_queue.rx_buff_ring_size = + cmd->create_rx_queue.rx_buff_ring_size = cpu_to_be16(priv->rx_desc_cnt); - cmd.create_rx_queue.enable_rsc = + cmd->create_rx_queue.enable_rsc = !!(priv->dev->features & NETIF_F_LRO); if (priv->header_split_enabled) - cmd.create_rx_queue.header_buffer_size = + cmd->create_rx_queue.header_buffer_size = cpu_to_be16(priv->header_buf_size); } +} +static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index) +{ + union gve_adminq_command cmd; + + gve_adminq_get_create_rx_queue_cmd(priv, &cmd, queue_index); return gve_adminq_issue_cmd(priv, &cmd); } +int 
gve_adminq_create_single_rx_queue(struct gve_priv *priv, u32 queue_index) +{ + union gve_adminq_command cmd; + + gve_adminq_get_create_rx_queue_cmd(priv, &cmd, queue_index); + return gve_adminq_execute_cmd(priv, &cmd); +} + int gve_adminq_create_rx_queues(struct gve_priv *priv, u32 num_queues) { int err; @@ -727,17 +742,22 @@ int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_qu return gve_adminq_kick_and_wait(priv); } +static void gve_adminq_make_destroy_rx_queue_cmd(union gve_adminq_command *cmd, + u32 queue_index) +{ + memset(cmd, 0, sizeof(*cmd)); + cmd->opcode = cpu_to_be32(GVE_ADMINQ_DESTROY_RX_QUEUE); + cmd->destroy_rx_queue = (struct gve_adminq_destroy_rx_queue) { + .queue_id = cpu_to_be32(queue_index), + }; +} + static int gve_adminq_destroy_rx_queue(struct gve_priv *priv, u32 queue_index) { union gve_adminq_command cmd; int err; - memset(&cmd, 0, sizeof(cmd)); - cmd.opcode = cpu_to_be32(GVE_ADMINQ_DESTROY_RX_QUEUE); - cmd.destroy_rx_queue = (struct gve_adminq_destroy_rx_queue) { - .queue_id = cpu_to_be32(queue_index), - }; - + gve_adminq_make_destroy_rx_queue_cmd(&cmd, queue_index); err = gve_adminq_issue_cmd(priv, &cmd); if (err) return err; @@ -745,6 +765,19 @@ static int gve_adminq_destroy_rx_queue(struct gve_priv *priv, u32 queue_index) return 0; } +int gve_adminq_destroy_single_rx_queue(struct gve_priv *priv, u32 queue_index) +{ + union gve_adminq_command cmd; + int err; + + gve_adminq_make_destroy_rx_queue_cmd(&cmd, queue_index); + err = gve_adminq_execute_cmd(priv, &cmd); + if (err) + return err; + + return 0; +} + int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 num_queues) { int err; diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h index beedf2353847..e64f0dbe744d 100644 --- a/drivers/net/ethernet/google/gve/gve_adminq.h +++ b/drivers/net/ethernet/google/gve/gve_adminq.h @@ -451,7 +451,9 @@ int gve_adminq_configure_device_resources(struct gve_priv *priv, int gve_adminq_deconfigure_device_resources(struct gve_priv *priv); int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues); int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues); +int gve_adminq_create_single_rx_queue(struct gve_priv *priv, u32 queue_index); int gve_adminq_create_rx_queues(struct gve_priv *priv, u32 num_queues); +int gve_adminq_destroy_single_rx_queue(struct gve_priv *priv, u32 queue_index); int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 queue_id); int gve_adminq_register_page_list(struct gve_priv *priv, struct gve_queue_page_list *qpl); From patchwork Tue Apr 30 23:14:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shailend Chand X-Patchwork-Id: 13650020 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-yb1-f201.google.com (mail-yb1-f201.google.com [209.85.219.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A6BBC1BF6EC for ; Tue, 30 Apr 2024 23:14:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.219.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714518883; cv=none; b=tmdnKOLzxiYvAyGLPyD4epOG7oxahHeg25WTJDTSN8VmvjgftL24aKUp02JSAD9E2JPwBtDHTx027pMoi8KHRyZQmiCX7Yz5USxRNhEwHvhFdQaFye+FAE0SZs4/gx9fct7HY9i1c64bexG2qXJ3VrfaOXnqlKTaVn94gmhMqDQ= ARC-Message-Signature: i=1; 
Date: Tue, 30 Apr 2024 23:14:13 +0000
In-Reply-To: <20240430231420.699177-1-shailend@google.com>
Message-ID: <20240430231420.699177-5-shailend@google.com>
Subject: [PATCH net-next 04/10]
gve: Make gve_turn(up|down) ignore stopped queues From: Shailend Chand To: netdev@vger.kernel.org Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com, hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org, pabeni@redhat.com, pkaligineedi@google.com, willemb@google.com, Shailend Chand X-Patchwork-Delegate: kuba@kernel.org Currently the queues are either all live or all dead, toggling from one state to the other via the ndo open and stop hooks. The future addition of single-queue ndo hooks changes this, and thus gve_turnup and gve_turndown should evolve to account for a state where some queues are live and some aren't. Tested-by: Mina Almasry Reviewed-by: Praveen Kaligineedi Reviewed-by: Harshitha Ramamurthy Signed-off-by: Shailend Chand --- drivers/net/ethernet/google/gve/gve_main.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 61039e3dd2bb..469a914c71d6 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -1937,12 +1937,16 @@ static void gve_turndown(struct gve_priv *priv) int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx); struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx]; + if (!gve_tx_was_added_to_block(priv, idx)) + continue; napi_disable(&block->napi); } for (idx = 0; idx < priv->rx_cfg.num_queues; idx++) { int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx); struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx]; + if (!gve_rx_was_added_to_block(priv, idx)) + continue; napi_disable(&block->napi); } @@ -1965,6 +1969,9 @@ static void gve_turnup(struct gve_priv *priv) int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx); struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx]; + if (!gve_tx_was_added_to_block(priv, idx)) + continue; + napi_enable(&block->napi); if (gve_is_gqi(priv)) { iowrite32be(0, gve_irq_doorbell(priv, block)); @@ -1977,6 +1984,9 @@ static void gve_turnup(struct gve_priv *priv) int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx); struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx]; + if (!gve_rx_was_added_to_block(priv, idx)) + continue; + napi_enable(&block->napi); if (gve_is_gqi(priv)) { iowrite32be(0, gve_irq_doorbell(priv, block)); From patchwork Tue Apr 30 23:14:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shailend Chand X-Patchwork-Id: 13650021 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-yw1-f201.google.com (mail-yw1-f201.google.com [209.85.128.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2E4421BF6F1 for ; Tue, 30 Apr 2024 23:14:43 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714518884; cv=none; b=TxuiJPmYaSYxKfNxTB9RJAOQQmdFmkmW4HPjH2X+xHEduHEI6WGylIzmnSX8SHXRH6MCsVbhKpJcbGpZSrKF4PwY71N1CY4g0AcjFSrhR0hpzJxj0iojo1tDVOBrR1Fy9HRMR8I9GbC4AsZiD4IajalDj0U2Ia6hIeLesmGo7J0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714518884; c=relaxed/simple; bh=q/x8uhyZ94hlLMU9upc5dvBwc1c1qlWMRkNRfbt16WY=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; 
Date: Tue, 30 Apr 2024 23:14:14 +0000
In-Reply-To: <20240430231420.699177-1-shailend@google.com>
Message-ID: <20240430231420.699177-6-shailend@google.com>
Subject: [PATCH net-next 05/10] gve: Make gve_turnup work for nonempty queues
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com, hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org,
pabeni@redhat.com, pkaligineedi@google.com, willemb@google.com, Shailend Chand X-Patchwork-Delegate: kuba@kernel.org gVNIC has a requirement that all queues have to be quiesced before any queue is operated on (created or destroyed). To enable the implementation of future ndo hooks that work on a single queue, we need to evolve gve_turnup to account for queues already having some unprocessed descriptors in the ring. Say rxq 4 is being stopped and started via the queue api. Due to gve's requirement of quiescence, queues 0 through 3 are not processing their rings while queue 4 is being toggled. Once they are made live, these queues need to be poked to cause them to check their rings for descriptors that were written during their brief period of quiescence. Tested-by: Mina Almasry Reviewed-by: Praveen Kaligineedi Reviewed-by: Harshitha Ramamurthy Signed-off-by: Shailend Chand --- drivers/net/ethernet/google/gve/gve_main.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 469a914c71d6..ef902b72b9a9 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -1979,6 +1979,13 @@ static void gve_turnup(struct gve_priv *priv) gve_set_itr_coalesce_usecs_dqo(priv, block, priv->tx_coalesce_usecs); } + + /* Any descs written by the NIC before this barrier will be + * handled by the one-off napi schedule below. Whereas any + * descs after the barrier will generate interrupts. + */ + mb(); + napi_schedule(&block->napi); } for (idx = 0; idx < priv->rx_cfg.num_queues; idx++) { int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx); @@ -1994,6 +2001,13 @@ static void gve_turnup(struct gve_priv *priv) gve_set_itr_coalesce_usecs_dqo(priv, block, priv->rx_coalesce_usecs); } + + /* Any descs written by the NIC before this barrier will be + * handled by the one-off napi schedule below. Whereas any + * descs after the barrier will generate interrupts. 
+ */ + mb(); + napi_schedule(&block->napi); } gve_set_napi_enabled(priv); From patchwork Tue Apr 30 23:14:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shailend Chand X-Patchwork-Id: 13650022 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-yb1-f201.google.com (mail-yb1-f201.google.com [209.85.219.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B48F31C230A for ; Tue, 30 Apr 2024 23:14:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.219.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714518886; cv=none; b=qXRTOrfHv3dfQSGjShpV9mz+LzdoshA5XemfEKeWTD5z0QL1rM/b1s99j4sWDWabnv8sX9fxyisRzdkxsnWKoGJwJG8LPOmT/ykCMObt0VFTW/RAjH0QR6jtz98rCfVxVhCW36RaYOd9xCQXaHQoK+N7HsSq7hKtbRtipZTE/kw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714518886; c=relaxed/simple; bh=Kcwo9z19w2X4PJZvh7/+SUOYEVJ9O743aJGBsQl493Q=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=KHE2iz0Sq1G9f7Ny2qxYhgfczAEa76YMpqmyLimH1oswzEgL5m/rLc0FUzGPWUjAFzriQ04D+jZYHwVlHJZHoyewLN4b0guQEcPKnCL0RwcxpjJgqYRP1yCI4OLrEFcVX49yZuCYLaM5q8lgf1OJTCThInCQUSL+y7I3k3pGNR4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--shailend.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=2QXevQ32; arc=none smtp.client-ip=209.85.219.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--shailend.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="2QXevQ32" Received: by mail-yb1-f201.google.com with SMTP id 3f1490d57ef6-de617c7649dso2012148276.0 for ; Tue, 30 Apr 2024 16:14:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1714518884; x=1715123684; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=JIWEL1Ar1WKeaXWtcbiWbbyFnhxykfVzID7b3E5pD1Y=; b=2QXevQ3296fpHclMT7dzULr0cqAB8hWncMOi1LY2dqsayNBoHo5bce3vO6CG1aaMz8 UY3nTGWwKWn7fTIr7TX8nLBLo+8NaVONw28uLz9LI6UxT8r+UiEBMLQmyyFR2wmZ4MWT xXbSKMaN9cVPb63ljIZLxLm1xIEV0pkVEx5rvw0Ctux1Im016GKUUARA/vBj8FnwlL2N joqOgbbgiwHCdsbcwqaHtqftWTWldJhoxztjxhfaR6NCs6nGEhdbcjYl0IkGIMZiEMdR Q5qNOn2PBC/z3MDoc+DTHNHH7L9Z9p5QPMGwbZ1Xar8wCXJU1rnzFIaq2++1fl+tk8OS QsTA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1714518884; x=1715123684; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=JIWEL1Ar1WKeaXWtcbiWbbyFnhxykfVzID7b3E5pD1Y=; b=o4vxEaUPisSGCay639+LgjJG1Fbi6TOrDW8HXmddXfHp03opFhmoK7Y/g9G2o+TOfi 6ffkF7TVLMhnJzTct/m7mrbTNivmfmWHwKTjtB35AeKgnzylofmSh7f3ubHmOCfQiYXd 984JhKoJxsBw5hFSlzPO22Gv29kX2uRmCmhG7nDZPc05OrzVhGEtR2w2BVyyWJij+fzz vtQDxWX+SxMOWycJuLOhDLyEdhptrWl1CVPRxZeiyepLX8Dyfl7UHlgnKJ1MslRZ/P/j vn4WbZaIy5MvC/0xJ1+XmcUnJaWd812DZA61pLIRw75ss1zFDstvJYwGuKniEUckCDdg 3qAQ== X-Gm-Message-State: AOJu0YwNpPNThRrirB5xLMgji9eVh3RYfziOjbnYldpgxASKt2N/Ekmp 
Date: Tue, 30 Apr 2024 23:14:15 +0000
In-Reply-To: <20240430231420.699177-1-shailend@google.com>
Message-ID: <20240430231420.699177-7-shailend@google.com>
Subject: [PATCH net-next 06/10] gve: Avoid rescheduling napi if on wrong cpu
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com, hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org, pabeni@redhat.com, pkaligineedi@google.com, willemb@google.com, Shailend Chand

To make per-queue ndo hooks possible, gve_turnup was changed in a previous patch to account for queues that already have unprocessed descriptors: it does a one-off napi_schedule to handle them. If consistently high traffic persists in the immediate aftermath, the poll routine for a queue can get "stuck" on the cpu on which the ndo hooks ran, instead of the cpu its irq has affinity with. The situation is made worse by the fact that the ndo hooks for all the queues are invoked on the same cpu, potentially leaving all the napi poll routines on that one cpu. A self-correcting mechanism in the poll method itself solves this problem.
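Reduced to a sketch, the check is: before returning the full budget to ask for a reschedule, test whether the current cpu is in the irq's effective affinity mask; if it is not, complete NAPI instead, so that the re-armed interrupt pulls the poll routine back onto its home cpu. Assuming the irq number was stashed when the interrupt was requested (as the patch below does), the core of the test is roughly:

#include <linux/irq.h>
#include <linux/smp.h>

static bool example_napi_on_home_cpu(unsigned int irq)
{
	const struct cpumask *aff_mask = irq_get_effective_affinity_mask(irq);

	/* No affinity information available: assume the current cpu is fine. */
	if (!aff_mask)
		return true;

	return cpumask_test_cpu(smp_processor_id(), aff_mask);
}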
Tested-by: Mina Almasry Reviewed-by: Praveen Kaligineedi Reviewed-by: Harshitha Ramamurthy Signed-off-by: Shailend Chand --- drivers/net/ethernet/google/gve/gve.h | 1 + drivers/net/ethernet/google/gve/gve_main.c | 33 ++++++++++++++++++++-- 2 files changed, 32 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index 53b5244dc7bc..f27a6d5fbecf 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -610,6 +610,7 @@ struct gve_notify_block { struct gve_priv *priv; struct gve_tx_ring *tx; /* tx rings on this block */ struct gve_rx_ring *rx; /* rx rings on this block */ + u32 irq; }; /* Tracks allowed and current queue settings */ diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index ef902b72b9a9..79b7a677ec0b 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include #include @@ -253,6 +254,18 @@ static irqreturn_t gve_intr_dqo(int irq, void *arg) return IRQ_HANDLED; } +static int gve_is_napi_on_home_cpu(struct gve_priv *priv, u32 irq) +{ + int cpu_curr = smp_processor_id(); + const struct cpumask *aff_mask; + + aff_mask = irq_get_effective_affinity_mask(irq); + if (unlikely(!aff_mask)) + return 1; + + return cpumask_test_cpu(cpu_curr, aff_mask); +} + int gve_napi_poll(struct napi_struct *napi, int budget) { struct gve_notify_block *block; @@ -322,8 +335,21 @@ int gve_napi_poll_dqo(struct napi_struct *napi, int budget) reschedule |= work_done == budget; } - if (reschedule) - return budget; + if (reschedule) { + /* Reschedule by returning budget only if already on the correct + * cpu. + */ + if (likely(gve_is_napi_on_home_cpu(priv, block->irq))) + return budget; + + /* If not on the cpu with which this queue's irq has affinity + * with, we avoid rescheduling napi and arm the irq instead so + * that napi gets rescheduled back eventually onto the right + * cpu. + */ + if (work_done == budget) + work_done--; + } if (likely(napi_complete_done(napi, work_done))) { /* Enable interrupts again. 
@@ -428,6 +454,7 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv) "Failed to receive msix vector %d\n", i); goto abort_with_some_ntfy_blocks; } + block->irq = priv->msix_vectors[msix_idx].vector; irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector, get_cpu_mask(i % active_cpus)); block->irq_db_index = &priv->irq_db_indices[i].index; @@ -441,6 +468,7 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv) irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector, NULL); free_irq(priv->msix_vectors[msix_idx].vector, block); + block->irq = 0; } kvfree(priv->ntfy_blocks); priv->ntfy_blocks = NULL; @@ -474,6 +502,7 @@ static void gve_free_notify_blocks(struct gve_priv *priv) irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector, NULL); free_irq(priv->msix_vectors[msix_idx].vector, block); + block->irq = 0; } free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv); kvfree(priv->ntfy_blocks); From patchwork Tue Apr 30 23:14:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shailend Chand X-Patchwork-Id: 13650023 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-yw1-f202.google.com (mail-yw1-f202.google.com [209.85.128.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 607C51C2314 for ; Tue, 30 Apr 2024 23:14:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714518888; cv=none; b=e0k+MZXBJSfWaIDub+6//dROYOD/gpYhGbZGHhsycRNWsUS9GeBYsEMIfsuaGu7+ufcTinHn+8Pf+MbdtUgQIWT/zUdtUFEjxI28n2c727hYXsOFwBudw8FU7FX3lyQVWGgCpO1OJl+B02iAwJbqdIC3B/dA66RCMTVx3Xd3f0w= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714518888; c=relaxed/simple; bh=nqU7lgRnFZ/X0PNcsH7nolu4PE5U2oVnLJQWPNzmGVY=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=TJY4jUe3YffwoC1BEdLOiAKLqLVwB5LkUGjqG6HzXURlGr7aJG0phDqi5Xtd7DPuwnFhb8FPu8PWa61UCduA7R/IrLJNWcBtGGdZ6YJFbCdskMLZTNocdbls/sREVKrQoznrglm3bGFPRjEOiqEfzf8zaK5OHAAsHvqexYRBxi0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--shailend.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=nbJzfE3h; arc=none smtp.client-ip=209.85.128.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--shailend.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="nbJzfE3h" Received: by mail-yw1-f202.google.com with SMTP id 00721157ae682-61be75e21fdso31895557b3.0 for ; Tue, 30 Apr 2024 16:14:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1714518885; x=1715123685; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=X1h4LMrzCT+Amis4VKG8AAFYoLhhUQWunAXrHJrR4lI=; b=nbJzfE3hoWnblFI+vePyEpt2GekhRZkqOZnPeD2BDbqHokYTv+XmSbAI7EJkjQ0J89 m5pMQDbUK9UnIdr0yauWsrgo0PsGhzhGB+4pXmCDdLy9OfLPfxxPRMpd0pW6CuQBbtJK VOpoCRHNTBMsj2s8djIIEnEI8GH8jGDt1p1i29QR/vGGpubRDJYJY6hRYldClaBz81GL 
Date: Tue, 30 Apr 2024 23:14:16 +0000
In-Reply-To: <20240430231420.699177-1-shailend@google.com>
Message-ID: <20240430231420.699177-8-shailend@google.com>
Subject: [PATCH net-next 07/10] gve: Reset Rx ring state in the ring-stop funcs
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com, hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org, pabeni@redhat.com, pkaligineedi@google.com, willemb@google.com, Shailend Chand

This does not fix any existing bug. In anticipation of the ndo queue api hooks that alloc/free/start/stop a single Rx queue, the already existing per-queue stop functions are being made more robust. Specifically for this use case:

rx_queue_n.stop() + rx_queue_n.start()

Note that this is not the use case being used in devmem tcp (the first place these new ndo hooks would be used).
There the usecase is: new_queue.alloc() + old_queue.stop() + new_queue.start() + old_queue.free() Tested-by: Mina Almasry Reviewed-by: Praveen Kaligineedi Reviewed-by: Harshitha Ramamurthy Signed-off-by: Shailend Chand --- drivers/net/ethernet/google/gve/gve_rx.c | 48 +++++++-- drivers/net/ethernet/google/gve/gve_rx_dqo.c | 102 +++++++++++++++---- 2 files changed, 120 insertions(+), 30 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c index 0a3f88170411..79c1d8f63621 100644 --- a/drivers/net/ethernet/google/gve/gve_rx.c +++ b/drivers/net/ethernet/google/gve/gve_rx.c @@ -53,6 +53,41 @@ static void gve_rx_unfill_pages(struct gve_priv *priv, rx->data.page_info = NULL; } +static void gve_rx_ctx_clear(struct gve_rx_ctx *ctx) +{ + ctx->skb_head = NULL; + ctx->skb_tail = NULL; + ctx->total_size = 0; + ctx->frag_cnt = 0; + ctx->drop_pkt = false; +} + +static void gve_rx_init_ring_state_gqi(struct gve_rx_ring *rx) +{ + rx->desc.seqno = 1; + rx->cnt = 0; + gve_rx_ctx_clear(&rx->ctx); +} + +static void gve_rx_reset_ring_gqi(struct gve_priv *priv, int idx) +{ + struct gve_rx_ring *rx = &priv->rx[idx]; + const u32 slots = priv->rx_desc_cnt; + size_t size; + + /* Reset desc ring */ + if (rx->desc.desc_ring) { + size = slots * sizeof(rx->desc.desc_ring[0]); + memset(rx->desc.desc_ring, 0, size); + } + + /* Reset q_resources */ + if (rx->q_resources) + memset(rx->q_resources, 0, sizeof(*rx->q_resources)); + + gve_rx_init_ring_state_gqi(rx); +} + void gve_rx_stop_ring_gqi(struct gve_priv *priv, int idx) { int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx); @@ -62,6 +97,7 @@ void gve_rx_stop_ring_gqi(struct gve_priv *priv, int idx) gve_remove_napi(priv, ntfy_idx); gve_rx_remove_from_block(priv, idx); + gve_rx_reset_ring_gqi(priv, idx); } static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx, @@ -222,15 +258,6 @@ static int gve_rx_prefill_pages(struct gve_rx_ring *rx, return err; } -static void gve_rx_ctx_clear(struct gve_rx_ctx *ctx) -{ - ctx->skb_head = NULL; - ctx->skb_tail = NULL; - ctx->total_size = 0; - ctx->frag_cnt = 0; - ctx->drop_pkt = false; -} - void gve_rx_start_ring_gqi(struct gve_priv *priv, int idx) { int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx); @@ -309,9 +336,8 @@ static int gve_rx_alloc_ring_gqi(struct gve_priv *priv, err = -ENOMEM; goto abort_with_q_resources; } - rx->cnt = 0; rx->db_threshold = slots / 2; - rx->desc.seqno = 1; + gve_rx_init_ring_state_gqi(rx); rx->packet_buffer_size = GVE_DEFAULT_RX_BUFFER_SIZE; gve_rx_ctx_clear(&rx->ctx); diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c index 53fd2d87233f..7c2980c212f4 100644 --- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c +++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c @@ -211,6 +211,82 @@ static void gve_rx_free_hdr_bufs(struct gve_priv *priv, struct gve_rx_ring *rx) } } +static void gve_rx_init_ring_state_dqo(struct gve_rx_ring *rx, + const u32 buffer_queue_slots, + const u32 completion_queue_slots) +{ + int i; + + /* Set buffer queue state */ + rx->dqo.bufq.mask = buffer_queue_slots - 1; + rx->dqo.bufq.head = 0; + rx->dqo.bufq.tail = 0; + + /* Set completion queue state */ + rx->dqo.complq.num_free_slots = completion_queue_slots; + rx->dqo.complq.mask = completion_queue_slots - 1; + rx->dqo.complq.cur_gen_bit = 0; + rx->dqo.complq.head = 0; + + /* Set RX SKB context */ + rx->ctx.skb_head = NULL; + rx->ctx.skb_tail = NULL; + + /* Set up linked list of buffer IDs */ + if (rx->dqo.buf_states) { + 
for (i = 0; i < rx->dqo.num_buf_states - 1; i++) + rx->dqo.buf_states[i].next = i + 1; + rx->dqo.buf_states[rx->dqo.num_buf_states - 1].next = -1; + } + + rx->dqo.free_buf_states = 0; + rx->dqo.recycled_buf_states.head = -1; + rx->dqo.recycled_buf_states.tail = -1; + rx->dqo.used_buf_states.head = -1; + rx->dqo.used_buf_states.tail = -1; +} + +static void gve_rx_reset_ring_dqo(struct gve_priv *priv, int idx) +{ + struct gve_rx_ring *rx = &priv->rx[idx]; + size_t size; + int i; + + const u32 buffer_queue_slots = priv->rx_desc_cnt; + const u32 completion_queue_slots = priv->rx_desc_cnt; + + /* Reset buffer queue */ + if (rx->dqo.bufq.desc_ring) { + size = sizeof(rx->dqo.bufq.desc_ring[0]) * + buffer_queue_slots; + memset(rx->dqo.bufq.desc_ring, 0, size); + } + + /* Reset completion queue */ + if (rx->dqo.complq.desc_ring) { + size = sizeof(rx->dqo.complq.desc_ring[0]) * + completion_queue_slots; + memset(rx->dqo.complq.desc_ring, 0, size); + } + + /* Reset q_resources */ + if (rx->q_resources) + memset(rx->q_resources, 0, sizeof(*rx->q_resources)); + + /* Reset buf states */ + if (rx->dqo.buf_states) { + for (i = 0; i < rx->dqo.num_buf_states; i++) { + struct gve_rx_buf_state_dqo *bs = &rx->dqo.buf_states[i]; + + if (bs->page_info.page) + gve_free_page_dqo(priv, bs, !rx->dqo.qpl); + } + } + + gve_rx_init_ring_state_dqo(rx, buffer_queue_slots, + completion_queue_slots); +} + void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx) { int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx); @@ -220,6 +296,7 @@ void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx) gve_remove_napi(priv, ntfy_idx); gve_rx_remove_from_block(priv, idx); + gve_rx_reset_ring_dqo(priv, idx); } static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx, @@ -273,10 +350,10 @@ static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx, netif_dbg(priv, drv, priv->dev, "freed rx ring %d\n", idx); } -static int gve_rx_alloc_hdr_bufs(struct gve_priv *priv, struct gve_rx_ring *rx) +static int gve_rx_alloc_hdr_bufs(struct gve_priv *priv, struct gve_rx_ring *rx, + const u32 buf_count) { struct device *hdev = &priv->pdev->dev; - int buf_count = rx->dqo.bufq.mask + 1; rx->dqo.hdr_bufs.data = dma_alloc_coherent(hdev, priv->header_buf_size * buf_count, &rx->dqo.hdr_bufs.addr, GFP_KERNEL); @@ -301,7 +378,6 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv, { struct device *hdev = &priv->pdev->dev; size_t size; - int i; const u32 buffer_queue_slots = cfg->ring_size; const u32 completion_queue_slots = cfg->ring_size; @@ -311,11 +387,6 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv, memset(rx, 0, sizeof(*rx)); rx->gve = priv; rx->q_num = idx; - rx->dqo.bufq.mask = buffer_queue_slots - 1; - rx->dqo.complq.num_free_slots = completion_queue_slots; - rx->dqo.complq.mask = completion_queue_slots - 1; - rx->ctx.skb_head = NULL; - rx->ctx.skb_tail = NULL; rx->dqo.num_buf_states = cfg->raw_addressing ? 
min_t(s16, S16_MAX, buffer_queue_slots * 4) : @@ -328,19 +399,9 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv, /* Allocate header buffers for header-split */ if (cfg->enable_header_split) - if (gve_rx_alloc_hdr_bufs(priv, rx)) + if (gve_rx_alloc_hdr_bufs(priv, rx, buffer_queue_slots)) goto err; - /* Set up linked list of buffer IDs */ - for (i = 0; i < rx->dqo.num_buf_states - 1; i++) - rx->dqo.buf_states[i].next = i + 1; - - rx->dqo.buf_states[rx->dqo.num_buf_states - 1].next = -1; - rx->dqo.recycled_buf_states.head = -1; - rx->dqo.recycled_buf_states.tail = -1; - rx->dqo.used_buf_states.head = -1; - rx->dqo.used_buf_states.tail = -1; - /* Allocate RX completion queue */ size = sizeof(rx->dqo.complq.desc_ring[0]) * completion_queue_slots; @@ -368,6 +429,9 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv, if (!rx->q_resources) goto err; + gve_rx_init_ring_state_dqo(rx, buffer_queue_slots, + completion_queue_slots); + return 0; err: From patchwork Tue Apr 30 23:14:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shailend Chand X-Patchwork-Id: 13650024 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-yb1-f202.google.com (mail-yb1-f202.google.com [209.85.219.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C6A401C230A for ; Tue, 30 Apr 2024 23:14:47 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.219.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714518889; cv=none; b=rMKxBxY6yW+/JARoGssAGGj9/ekbYxx31Ja5znjlyzI8uDEb7w99CAm7pg1mqQZovZERCVL8r/UdAVq7zXMPn31iRXhbDaNWVA7+shyXZZXvByn54O+0Xa03WvwlNuSdBFC1djo3ft2flM0KspSeN6q+ucUQ8dff6Wjt1v21dMI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714518889; c=relaxed/simple; bh=X9Zc8NXIFIstzXRVqTVsQiZQZAU23Z1+OY0PhoNWBcQ=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=Tf/b0MGxr2IsCZvghNH9C3pKWm25jTiwLGuTLIpDqza/dvt4T/8wBXK2UHhTEaYWNU5Iq1JLcLTJcUbDAv80h4gfHA8pKJ4q3WAeLqJwsl9ymIrb1NnLxCzPALQCoSXJPEZlEbi04KHdYQxOQGjv9J6bIUFZ17BnazHq2RVKA2o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--shailend.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=z+ZcGwZ+; arc=none smtp.client-ip=209.85.219.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--shailend.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="z+ZcGwZ+" Received: by mail-yb1-f202.google.com with SMTP id 3f1490d57ef6-de54ccab44aso13017733276.3 for ; Tue, 30 Apr 2024 16:14:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1714518887; x=1715123687; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=oG58J/JR5E42dJlgQeWSVgudbHg6jj2MpjFcGUtw42k=; b=z+ZcGwZ+kxdBVfscC80ssF1z+TBAW0AWuOx8YKIuftICuTibdHi+EsQ7FePg7LkbDo n9ekhMtaaGaGb+xYvc1zQnGaL+ceeGRkHzzWKgbLg4HnI/SDwtsh+1VB5yzvHiTBgJJN 
Date: Tue, 30 Apr 2024 23:14:17 +0000 In-Reply-To: <20240430231420.699177-1-shailend@google.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240430231420.699177-1-shailend@google.com> X-Mailer: git-send-email 2.45.0.rc0.197.gbae5840b3b-goog Message-ID: <20240430231420.699177-9-shailend@google.com> Subject: [PATCH net-next 08/10] gve: Account for stopped queues when reading NIC stats From: Shailend Chand To: netdev@vger.kernel.org Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com, hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org, pabeni@redhat.com, pkaligineedi@google.com, willemb@google.com, Shailend Chand X-Patchwork-Delegate: kuba@kernel.org We now account for the fact that the NIC might send us stats for a subset of queues. Without this change, gve_get_ethtool_stats might make an invalid access on the priv->stats_report->stats array.
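The fix below boils down to two defensive steps: every queue id starts out mapped to -1, only ids that actually appear (and are in range) in the NIC's report get a real index, and the later stats walk skips any ring whose entry is still -1 instead of indexing the report out of bounds. The following standalone C sketch illustrates that scheme only; the struct, macro names, and sample values (nic_report_entry, NUM_RXQS, the counters) are illustrative stand-ins, not the driver's definitions.

#include <stdio.h>

#define NUM_RXQS         4
#define NIC_RX_STATS_NUM 2

struct nic_report_entry {
	int queue_id;                     /* which rx queue this block belongs to */
	long stats[NIC_RX_STATS_NUM];     /* per-queue counters from the NIC */
};

int main(void)
{
	/* The NIC reported stats for only two of the four queues:
	 * queues 1 and 3 are stopped and absent from the report.
	 */
	struct nic_report_entry report[] = {
		{ .queue_id = 0, .stats = { 100, 1 } },
		{ .queue_id = 2, .stats = { 250, 0 } },
	};
	int report_len = sizeof(report) / sizeof(report[0]);
	int rx_qid_to_stats_idx[NUM_RXQS];
	int ring, i;

	/* Pre-fill the map with -1 so queues missing from the report are detectable. */
	for (ring = 0; ring < NUM_RXQS; ring++)
		rx_qid_to_stats_idx[ring] = -1;

	/* Preprocess: map queue id -> index into the report, rejecting
	 * out-of-range ids instead of using them as array indices.
	 */
	for (i = 0; i < report_len; i++) {
		int qid = report[i].queue_id;

		if (qid < 0 || qid >= NUM_RXQS) {
			fprintf(stderr, "Invalid rxq id in NIC stats\n");
			continue;
		}
		rx_qid_to_stats_idx[qid] = i;
	}

	/* Walk all rings: a ring whose map entry is still -1 has its NIC
	 * stats skipped rather than read from a bogus offset.
	 */
	for (ring = 0; ring < NUM_RXQS; ring++) {
		int idx = rx_qid_to_stats_idx[ring];

		if (idx < 0) {
			printf("rxq %d: stopped, skipping NIC stats\n", ring);
			continue;
		}
		printf("rxq %d: stat0=%ld stat1=%ld\n", ring,
		       report[idx].stats[0], report[idx].stats[1]);
	}
	return 0;
}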
Tested-by: Mina Almasry Reviewed-by: Praveen Kaligineedi Reviewed-by: Harshitha Ramamurthy Signed-off-by: Shailend Chand --- drivers/net/ethernet/google/gve/gve_ethtool.c | 41 ++++++++++++++++--- 1 file changed, 35 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c index bd7632eed776..a606670a9a39 100644 --- a/drivers/net/ethernet/google/gve/gve_ethtool.c +++ b/drivers/net/ethernet/google/gve/gve_ethtool.c @@ -8,6 +8,7 @@ #include "gve.h" #include "gve_adminq.h" #include "gve_dqo.h" +#include "gve_utils.h" static void gve_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *info) @@ -165,6 +166,8 @@ gve_get_ethtool_stats(struct net_device *netdev, struct stats *report_stats; int *rx_qid_to_stats_idx; int *tx_qid_to_stats_idx; + int num_stopped_rxqs = 0; + int num_stopped_txqs = 0; struct gve_priv *priv; bool skip_nic_stats; unsigned int start; @@ -181,12 +184,23 @@ gve_get_ethtool_stats(struct net_device *netdev, sizeof(int), GFP_KERNEL); if (!rx_qid_to_stats_idx) return; + for (ring = 0; ring < priv->rx_cfg.num_queues; ring++) { + rx_qid_to_stats_idx[ring] = -1; + if (!gve_rx_was_added_to_block(priv, ring)) + num_stopped_rxqs++; + } tx_qid_to_stats_idx = kmalloc_array(num_tx_queues, sizeof(int), GFP_KERNEL); if (!tx_qid_to_stats_idx) { kfree(rx_qid_to_stats_idx); return; } + for (ring = 0; ring < num_tx_queues; ring++) { + tx_qid_to_stats_idx[ring] = -1; + if (!gve_tx_was_added_to_block(priv, ring)) + num_stopped_txqs++; + } + for (rx_pkts = 0, rx_bytes = 0, rx_hsplit_pkt = 0, rx_skb_alloc_fail = 0, rx_buf_alloc_fail = 0, rx_desc_err_dropped_pkt = 0, rx_hsplit_unsplit_pkt = 0, @@ -260,7 +274,13 @@ gve_get_ethtool_stats(struct net_device *netdev, /* For rx cross-reporting stats, start from nic rx stats in report */ base_stats_idx = GVE_TX_STATS_REPORT_NUM * num_tx_queues + GVE_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues; - max_stats_idx = NIC_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues + + /* The boundary between driver stats and NIC stats shifts if there are + * stopped queues. 
+ */ + base_stats_idx += NIC_RX_STATS_REPORT_NUM * num_stopped_rxqs + + NIC_TX_STATS_REPORT_NUM * num_stopped_txqs; + max_stats_idx = NIC_RX_STATS_REPORT_NUM * + (priv->rx_cfg.num_queues - num_stopped_rxqs) + base_stats_idx; /* Preprocess the stats report for rx, map queue id to start index */ skip_nic_stats = false; @@ -274,6 +294,10 @@ gve_get_ethtool_stats(struct net_device *netdev, skip_nic_stats = true; break; } + if (queue_id < 0 || queue_id >= priv->rx_cfg.num_queues) { + net_err_ratelimited("Invalid rxq id in NIC stats\n"); + continue; + } rx_qid_to_stats_idx[queue_id] = stats_idx; } /* walk RX rings */ @@ -308,11 +332,11 @@ gve_get_ethtool_stats(struct net_device *netdev, data[i++] = rx->rx_copybreak_pkt; data[i++] = rx->rx_copied_pkt; /* stats from NIC */ - if (skip_nic_stats) { + stats_idx = rx_qid_to_stats_idx[ring]; + if (skip_nic_stats || stats_idx < 0) { /* skip NIC rx stats */ i += NIC_RX_STATS_REPORT_NUM; } else { - stats_idx = rx_qid_to_stats_idx[ring]; for (j = 0; j < NIC_RX_STATS_REPORT_NUM; j++) { u64 value = be64_to_cpu(report_stats[stats_idx + j].value); @@ -338,7 +362,8 @@ gve_get_ethtool_stats(struct net_device *netdev, /* For tx cross-reporting stats, start from nic tx stats in report */ base_stats_idx = max_stats_idx; - max_stats_idx = NIC_TX_STATS_REPORT_NUM * num_tx_queues + + max_stats_idx = NIC_TX_STATS_REPORT_NUM * + (num_tx_queues - num_stopped_txqs) + max_stats_idx; /* Preprocess the stats report for tx, map queue id to start index */ skip_nic_stats = false; @@ -352,6 +377,10 @@ gve_get_ethtool_stats(struct net_device *netdev, skip_nic_stats = true; break; } + if (queue_id < 0 || queue_id >= num_tx_queues) { + net_err_ratelimited("Invalid txq id in NIC stats\n"); + continue; + } tx_qid_to_stats_idx[queue_id] = stats_idx; } /* walk TX rings */ @@ -383,11 +412,11 @@ gve_get_ethtool_stats(struct net_device *netdev, data[i++] = gve_tx_load_event_counter(priv, tx); data[i++] = tx->dma_mapping_error; /* stats from NIC */ - if (skip_nic_stats) { + stats_idx = tx_qid_to_stats_idx[ring]; + if (skip_nic_stats || stats_idx < 0) { /* skip NIC tx stats */ i += NIC_TX_STATS_REPORT_NUM; } else { - stats_idx = tx_qid_to_stats_idx[ring]; for (j = 0; j < NIC_TX_STATS_REPORT_NUM; j++) { u64 value = be64_to_cpu(report_stats[stats_idx + j].value); From patchwork Tue Apr 30 23:14:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shailend Chand X-Patchwork-Id: 13650025 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-yb1-f202.google.com (mail-yb1-f202.google.com [209.85.219.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 75F101C2330 for ; Tue, 30 Apr 2024 23:14:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.219.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714518892; cv=none; b=ddWerutdclXzYPHR20Bmdc9EE9ez1O+5yjXl9tRMH01Kz5mY/ZzXbT2W1QP3eutbqQCg92J5gYkwR+FovEYX2p6webmfzn6kFeE9uIxOQfjnc7ZKbMEpbymrdxdchnD3Kr/+NvrbBwpKa7qGvEWB5o/3tZG4RuPOZgk+OjjrUao= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714518892; c=relaxed/simple; bh=j+J+XAdezRJei86XSm9MfyCJ9QMf9JhIx5OWOZ2j9aI=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; 
Date: Tue, 30 Apr 2024 23:14:18 +0000 In-Reply-To: <20240430231420.699177-1-shailend@google.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240430231420.699177-1-shailend@google.com> X-Mailer: git-send-email 2.45.0.rc0.197.gbae5840b3b-goog Message-ID: <20240430231420.699177-10-shailend@google.com> Subject: [PATCH net-next 09/10] gve: Alloc and free QPLs with the rings From: Shailend Chand To: netdev@vger.kernel.org Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com, hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org,
pabeni@redhat.com, pkaligineedi@google.com, willemb@google.com, Shailend Chand X-Patchwork-Delegate: kuba@kernel.org Every tx and rx ring has its own queue-page-list (QPL) that serves as the bounce buffer. Previously we were allocating QPLs for all queues before the queues themselves were allocated and later associating a QPL with a queue. This is avoidable complexity: it is much more natural for each queue to allocate and free its own QPL. Moreover, the advent of new queue-manipulating ndo hooks make it hard to keep things as is: we would need to transfer a QPL from an old queue to a new queue, and that is unpleasant. Tested-by: Mina Almasry Reviewed-by: Praveen Kaligineedi Reviewed-by: Harshitha Ramamurthy Signed-off-by: Shailend Chand --- drivers/net/ethernet/google/gve/gve.h | 30 +- drivers/net/ethernet/google/gve/gve_ethtool.c | 7 +- drivers/net/ethernet/google/gve/gve_main.c | 338 +++++------------- drivers/net/ethernet/google/gve/gve_rx.c | 41 ++- drivers/net/ethernet/google/gve/gve_rx_dqo.c | 23 +- drivers/net/ethernet/google/gve/gve_tx.c | 33 +- drivers/net/ethernet/google/gve/gve_tx_dqo.c | 23 +- 7 files changed, 169 insertions(+), 326 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index f27a6d5fbecf..9e0a433c991c 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -638,26 +638,10 @@ struct gve_ptype_lut { struct gve_ptype ptypes[GVE_NUM_PTYPES]; }; -/* Parameters for allocating queue page lists */ -struct gve_qpls_alloc_cfg { - struct gve_queue_config *tx_cfg; - struct gve_queue_config *rx_cfg; - - u16 num_xdp_queues; - bool raw_addressing; - bool is_gqi; - - /* Allocated resources are returned here */ - struct gve_queue_page_list *qpls; -}; - /* Parameters for allocating resources for tx queues */ struct gve_tx_alloc_rings_cfg { struct gve_queue_config *qcfg; - /* qpls must already be allocated */ - struct gve_queue_page_list *qpls; - u16 ring_size; u16 start_idx; u16 num_rings; @@ -673,9 +657,6 @@ struct gve_rx_alloc_rings_cfg { struct gve_queue_config *qcfg; struct gve_queue_config *qcfg_tx; - /* qpls must already be allocated */ - struct gve_queue_page_list *qpls; - u16 ring_size; u16 packet_buffer_size; bool raw_addressing; @@ -701,7 +682,6 @@ struct gve_priv { struct net_device *dev; struct gve_tx_ring *tx; /* array of tx_cfg.num_queues */ struct gve_rx_ring *rx; /* array of rx_cfg.num_queues */ - struct gve_queue_page_list *qpls; /* array of num qpls */ struct gve_notify_block *ntfy_blocks; /* array of num_ntfy_blks */ struct gve_irq_db *irq_db_indices; /* array of num_ntfy_blks */ dma_addr_t irq_db_indices_bus; @@ -1025,7 +1005,6 @@ static inline u32 gve_rx_qpl_id(struct gve_priv *priv, int rx_qid) return priv->tx_cfg.max_queues + rx_qid; } -/* Returns the index into priv->qpls where a certain rx queue's QPL resides */ static inline u32 gve_get_rx_qpl_id(const struct gve_queue_config *tx_cfg, int rx_qid) { return tx_cfg->max_queues + rx_qid; @@ -1036,7 +1015,6 @@ static inline u32 gve_tx_start_qpl_id(struct gve_priv *priv) return gve_tx_qpl_id(priv, 0); } -/* Returns the index into priv->qpls where the first rx queue's QPL resides */ static inline u32 gve_rx_start_qpl_id(const struct gve_queue_config *tx_cfg) { return gve_get_rx_qpl_id(tx_cfg, 0); @@ -1090,6 +1068,12 @@ int gve_alloc_page(struct gve_priv *priv, struct device *dev, enum dma_data_direction, gfp_t gfp_flags); void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma, enum 
dma_data_direction); +/* qpls */ +struct gve_queue_page_list *gve_alloc_queue_page_list(struct gve_priv *priv, + u32 id, int pages); +void gve_free_queue_page_list(struct gve_priv *priv, + struct gve_queue_page_list *qpl, + u32 id); /* tx handling */ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev); int gve_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, @@ -1126,11 +1110,9 @@ int gve_set_hsplit_config(struct gve_priv *priv, u8 tcp_data_split); void gve_schedule_reset(struct gve_priv *priv); int gve_reset(struct gve_priv *priv, bool attempt_teardown); void gve_get_curr_alloc_cfgs(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *qpls_alloc_cfg, struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, struct gve_rx_alloc_rings_cfg *rx_alloc_cfg); int gve_adjust_config(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *qpls_alloc_cfg, struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, struct gve_rx_alloc_rings_cfg *rx_alloc_cfg); int gve_adjust_queues(struct gve_priv *priv, diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c index a606670a9a39..156b7e128b53 100644 --- a/drivers/net/ethernet/google/gve/gve_ethtool.c +++ b/drivers/net/ethernet/google/gve/gve_ethtool.c @@ -538,20 +538,17 @@ static int gve_adjust_ring_sizes(struct gve_priv *priv, { struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; - struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0}; int err; /* get current queue configuration */ - gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg); /* copy over the new ring_size from ethtool */ tx_alloc_cfg.ring_size = new_tx_desc_cnt; rx_alloc_cfg.ring_size = new_rx_desc_cnt; if (netif_running(priv->dev)) { - err = gve_adjust_config(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg); if (err) return err; } diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 79b7a677ec0b..65adab0f5171 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -611,37 +611,36 @@ static void gve_teardown_device_resources(struct gve_priv *priv) gve_clear_device_resources_ok(priv); } -static int gve_unregister_qpl(struct gve_priv *priv, u32 i) +static int gve_unregister_qpl(struct gve_priv *priv, + struct gve_queue_page_list *qpl) { int err; - err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id); + if (!qpl) + return 0; + + err = gve_adminq_unregister_page_list(priv, qpl->id); if (err) { netif_err(priv, drv, priv->dev, "Failed to unregister queue page list %d\n", - priv->qpls[i].id); + qpl->id); return err; } - priv->num_registered_pages -= priv->qpls[i].num_entries; + priv->num_registered_pages -= qpl->num_entries; return 0; } -static int gve_register_qpl(struct gve_priv *priv, u32 i) +static int gve_register_qpl(struct gve_priv *priv, + struct gve_queue_page_list *qpl) { - int num_rx_qpls; int pages; int err; - /* Rx QPLs succeed Tx QPLs in the priv->qpls array. 
*/ - num_rx_qpls = gve_num_rx_qpls(&priv->rx_cfg, gve_is_qpl(priv)); - if (i >= gve_rx_start_qpl_id(&priv->tx_cfg) + num_rx_qpls) { - netif_err(priv, drv, priv->dev, - "Cannot register nonexisting QPL at index %d\n", i); - return -EINVAL; - } + if (!qpl) + return 0; - pages = priv->qpls[i].num_entries; + pages = qpl->num_entries; if (pages + priv->num_registered_pages > priv->max_registered_pages) { netif_err(priv, drv, priv->dev, @@ -651,14 +650,11 @@ static int gve_register_qpl(struct gve_priv *priv, u32 i) return -EINVAL; } - err = gve_adminq_register_page_list(priv, &priv->qpls[i]); + err = gve_adminq_register_page_list(priv, qpl); if (err) { netif_err(priv, drv, priv->dev, "failed to register queue page list %d\n", - priv->qpls[i].id); - /* This failure will trigger a reset - no need to clean - * up - */ + qpl->id); return err; } @@ -666,6 +662,26 @@ static int gve_register_qpl(struct gve_priv *priv, u32 i) return 0; } +static struct gve_queue_page_list *gve_tx_get_qpl(struct gve_priv *priv, int idx) +{ + struct gve_tx_ring *tx = &priv->tx[idx]; + + if (gve_is_gqi(priv)) + return tx->tx_fifo.qpl; + else + return tx->dqo.qpl; +} + +static struct gve_queue_page_list *gve_rx_get_qpl(struct gve_priv *priv, int idx) +{ + struct gve_rx_ring *rx = &priv->rx[idx]; + + if (gve_is_gqi(priv)) + return rx->data.qpl; + else + return rx->dqo.qpl; +} + static int gve_register_xdp_qpls(struct gve_priv *priv) { int start_id; @@ -674,7 +690,7 @@ static int gve_register_xdp_qpls(struct gve_priv *priv) start_id = gve_xdp_tx_start_queue_id(priv); for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) { - err = gve_register_qpl(priv, i); + err = gve_register_qpl(priv, gve_tx_get_qpl(priv, i)); /* This failure will trigger a reset - no need to clean up */ if (err) return err; @@ -685,7 +701,6 @@ static int gve_register_xdp_qpls(struct gve_priv *priv) static int gve_register_qpls(struct gve_priv *priv) { int num_tx_qpls, num_rx_qpls; - int start_id; int err; int i; @@ -694,15 +709,13 @@ static int gve_register_qpls(struct gve_priv *priv) num_rx_qpls = gve_num_rx_qpls(&priv->rx_cfg, gve_is_qpl(priv)); for (i = 0; i < num_tx_qpls; i++) { - err = gve_register_qpl(priv, i); + err = gve_register_qpl(priv, gve_tx_get_qpl(priv, i)); if (err) return err; } - /* there might be a gap between the tx and rx qpl ids */ - start_id = gve_rx_start_qpl_id(&priv->tx_cfg); for (i = 0; i < num_rx_qpls; i++) { - err = gve_register_qpl(priv, start_id + i); + err = gve_register_qpl(priv, gve_rx_get_qpl(priv, i)); if (err) return err; } @@ -718,7 +731,7 @@ static int gve_unregister_xdp_qpls(struct gve_priv *priv) start_id = gve_xdp_tx_start_queue_id(priv); for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) { - err = gve_unregister_qpl(priv, i); + err = gve_unregister_qpl(priv, gve_tx_get_qpl(priv, i)); /* This failure will trigger a reset - no need to clean */ if (err) return err; @@ -729,7 +742,6 @@ static int gve_unregister_xdp_qpls(struct gve_priv *priv) static int gve_unregister_qpls(struct gve_priv *priv) { int num_tx_qpls, num_rx_qpls; - int start_id; int err; int i; @@ -738,15 +750,14 @@ static int gve_unregister_qpls(struct gve_priv *priv) num_rx_qpls = gve_num_rx_qpls(&priv->rx_cfg, gve_is_qpl(priv)); for (i = 0; i < num_tx_qpls; i++) { - err = gve_unregister_qpl(priv, i); + err = gve_unregister_qpl(priv, gve_tx_get_qpl(priv, i)); /* This failure will trigger a reset - no need to clean */ if (err) return err; } - start_id = gve_rx_start_qpl_id(&priv->tx_cfg); for (i = 0; i < num_rx_qpls; i++) { - err = 
gve_unregister_qpl(priv, start_id + i); + err = gve_unregister_qpl(priv, gve_rx_get_qpl(priv, i)); /* This failure will trigger a reset - no need to clean */ if (err) return err; @@ -857,7 +868,6 @@ static void gve_tx_get_curr_alloc_cfg(struct gve_priv *priv, { cfg->qcfg = &priv->tx_cfg; cfg->raw_addressing = !gve_is_qpl(priv); - cfg->qpls = priv->qpls; cfg->ring_size = priv->tx_desc_cnt; cfg->start_idx = 0; cfg->num_rings = gve_num_tx_queues(priv); @@ -914,9 +924,9 @@ static int gve_alloc_xdp_rings(struct gve_priv *priv) return 0; } -static int gve_alloc_rings(struct gve_priv *priv, - struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, - struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) +static int gve_queues_mem_alloc(struct gve_priv *priv, + struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, + struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) { int err; @@ -1002,9 +1012,9 @@ static void gve_free_xdp_rings(struct gve_priv *priv) } } -static void gve_free_rings(struct gve_priv *priv, - struct gve_tx_alloc_rings_cfg *tx_cfg, - struct gve_rx_alloc_rings_cfg *rx_cfg) +static void gve_queues_mem_free(struct gve_priv *priv, + struct gve_tx_alloc_rings_cfg *tx_cfg, + struct gve_rx_alloc_rings_cfg *rx_cfg) { if (gve_is_gqi(priv)) { gve_tx_free_rings_gqi(priv, tx_cfg); @@ -1033,35 +1043,41 @@ int gve_alloc_page(struct gve_priv *priv, struct device *dev, return 0; } -static int gve_alloc_queue_page_list(struct gve_priv *priv, - struct gve_queue_page_list *qpl, - u32 id, int pages) +struct gve_queue_page_list *gve_alloc_queue_page_list(struct gve_priv *priv, + u32 id, int pages) { + struct gve_queue_page_list *qpl; int err; int i; + qpl = kvzalloc(sizeof(*qpl), GFP_KERNEL); + if (!qpl) + return NULL; + qpl->id = id; qpl->num_entries = 0; qpl->pages = kvcalloc(pages, sizeof(*qpl->pages), GFP_KERNEL); - /* caller handles clean up */ if (!qpl->pages) - return -ENOMEM; + goto abort; + qpl->page_buses = kvcalloc(pages, sizeof(*qpl->page_buses), GFP_KERNEL); - /* caller handles clean up */ if (!qpl->page_buses) - return -ENOMEM; + goto abort; for (i = 0; i < pages; i++) { err = gve_alloc_page(priv, &priv->pdev->dev, &qpl->pages[i], &qpl->page_buses[i], gve_qpl_dma_dir(priv, id), GFP_KERNEL); - /* caller handles clean up */ if (err) - return -ENOMEM; + goto abort; qpl->num_entries++; } - return 0; + return qpl; + +abort: + gve_free_queue_page_list(priv, qpl, id); + return NULL; } void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma, @@ -1073,14 +1089,16 @@ void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma, put_page(page); } -static void gve_free_queue_page_list(struct gve_priv *priv, - struct gve_queue_page_list *qpl, - int id) +void gve_free_queue_page_list(struct gve_priv *priv, + struct gve_queue_page_list *qpl, + u32 id) { int i; - if (!qpl->pages) + if (!qpl) return; + if (!qpl->pages) + goto free_qpl; if (!qpl->page_buses) goto free_pages; @@ -1093,109 +1111,8 @@ static void gve_free_queue_page_list(struct gve_priv *priv, free_pages: kvfree(qpl->pages); qpl->pages = NULL; -} - -static void gve_free_n_qpls(struct gve_priv *priv, - struct gve_queue_page_list *qpls, - int start_id, - int num_qpls) -{ - int i; - - for (i = start_id; i < start_id + num_qpls; i++) - gve_free_queue_page_list(priv, &qpls[i], i); -} - -static int gve_alloc_n_qpls(struct gve_priv *priv, - struct gve_queue_page_list *qpls, - int page_count, - int start_id, - int num_qpls) -{ - int err; - int i; - - for (i = start_id; i < start_id + num_qpls; i++) { - err = gve_alloc_queue_page_list(priv, &qpls[i], i, 
page_count); - if (err) - goto free_qpls; - } - - return 0; - -free_qpls: - /* Must include the failing QPL too for gve_alloc_queue_page_list fails - * without cleaning up. - */ - gve_free_n_qpls(priv, qpls, start_id, i - start_id + 1); - return err; -} - -static int gve_alloc_qpls(struct gve_priv *priv, struct gve_qpls_alloc_cfg *cfg, - struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) -{ - int max_queues = cfg->tx_cfg->max_queues + cfg->rx_cfg->max_queues; - int rx_start_id, tx_num_qpls, rx_num_qpls; - struct gve_queue_page_list *qpls; - u32 page_count; - int err; - - if (cfg->raw_addressing) - return 0; - - qpls = kvcalloc(max_queues, sizeof(*qpls), GFP_KERNEL); - if (!qpls) - return -ENOMEM; - - /* Allocate TX QPLs */ - page_count = priv->tx_pages_per_qpl; - tx_num_qpls = gve_num_tx_qpls(cfg->tx_cfg, cfg->num_xdp_queues, - gve_is_qpl(priv)); - err = gve_alloc_n_qpls(priv, qpls, page_count, 0, tx_num_qpls); - if (err) - goto free_qpl_array; - - /* Allocate RX QPLs */ - rx_start_id = gve_rx_start_qpl_id(cfg->tx_cfg); - /* For GQI_QPL number of pages allocated have 1:1 relationship with - * number of descriptors. For DQO, number of pages required are - * more than descriptors (because of out of order completions). - * Set it to twice the number of descriptors. - */ - if (cfg->is_gqi) - page_count = rx_alloc_cfg->ring_size; - else - page_count = gve_get_rx_pages_per_qpl_dqo(rx_alloc_cfg->ring_size); - rx_num_qpls = gve_num_rx_qpls(cfg->rx_cfg, gve_is_qpl(priv)); - err = gve_alloc_n_qpls(priv, qpls, page_count, rx_start_id, rx_num_qpls); - if (err) - goto free_tx_qpls; - - cfg->qpls = qpls; - return 0; - -free_tx_qpls: - gve_free_n_qpls(priv, qpls, 0, tx_num_qpls); -free_qpl_array: - kvfree(qpls); - return err; -} - -static void gve_free_qpls(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *cfg) -{ - int max_queues = cfg->tx_cfg->max_queues + cfg->rx_cfg->max_queues; - struct gve_queue_page_list *qpls = cfg->qpls; - int i; - - if (!qpls) - return; - - for (i = 0; i < max_queues; i++) - gve_free_queue_page_list(priv, &qpls[i], i); - - kvfree(qpls); - cfg->qpls = NULL; +free_qpl: + kvfree(qpl); } /* Use this to schedule a reset when the device is capable of continuing @@ -1299,17 +1216,6 @@ static void gve_drain_page_cache(struct gve_priv *priv) page_frag_cache_drain(&priv->rx[i].page_cache); } -static void gve_qpls_get_curr_alloc_cfg(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *cfg) -{ - cfg->raw_addressing = !gve_is_qpl(priv); - cfg->is_gqi = gve_is_gqi(priv); - cfg->num_xdp_queues = priv->num_xdp_queues; - cfg->tx_cfg = &priv->tx_cfg; - cfg->rx_cfg = &priv->rx_cfg; - cfg->qpls = priv->qpls; -} - static void gve_rx_get_curr_alloc_cfg(struct gve_priv *priv, struct gve_rx_alloc_rings_cfg *cfg) { @@ -1317,7 +1223,6 @@ static void gve_rx_get_curr_alloc_cfg(struct gve_priv *priv, cfg->qcfg_tx = &priv->tx_cfg; cfg->raw_addressing = !gve_is_qpl(priv); cfg->enable_header_split = priv->header_split_enabled; - cfg->qpls = priv->qpls; cfg->ring_size = priv->rx_desc_cnt; cfg->packet_buffer_size = gve_is_gqi(priv) ? 
GVE_DEFAULT_RX_BUFFER_SIZE : @@ -1326,11 +1231,9 @@ static void gve_rx_get_curr_alloc_cfg(struct gve_priv *priv, } void gve_get_curr_alloc_cfgs(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *qpls_alloc_cfg, struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) { - gve_qpls_get_curr_alloc_cfg(priv, qpls_alloc_cfg); gve_tx_get_curr_alloc_cfg(priv, tx_alloc_cfg); gve_rx_get_curr_alloc_cfg(priv, rx_alloc_cfg); } @@ -1362,53 +1265,13 @@ static void gve_rx_stop_rings(struct gve_priv *priv, int num_rings) } } -static void gve_queues_mem_free(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *qpls_alloc_cfg, - struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, - struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) -{ - gve_free_rings(priv, tx_alloc_cfg, rx_alloc_cfg); - gve_free_qpls(priv, qpls_alloc_cfg); -} - -static int gve_queues_mem_alloc(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *qpls_alloc_cfg, - struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, - struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) -{ - int err; - - err = gve_alloc_qpls(priv, qpls_alloc_cfg, rx_alloc_cfg); - if (err) { - netif_err(priv, drv, priv->dev, "Failed to alloc QPLs\n"); - return err; - } - tx_alloc_cfg->qpls = qpls_alloc_cfg->qpls; - rx_alloc_cfg->qpls = qpls_alloc_cfg->qpls; - err = gve_alloc_rings(priv, tx_alloc_cfg, rx_alloc_cfg); - if (err) { - netif_err(priv, drv, priv->dev, "Failed to alloc rings\n"); - goto free_qpls; - } - - return 0; - -free_qpls: - gve_free_qpls(priv, qpls_alloc_cfg); - return err; -} - static void gve_queues_mem_remove(struct gve_priv *priv) { struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; - struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0}; - gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); - gve_queues_mem_free(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); - priv->qpls = NULL; + gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg); + gve_queues_mem_free(priv, &tx_alloc_cfg, &rx_alloc_cfg); priv->tx = NULL; priv->rx = NULL; } @@ -1417,7 +1280,6 @@ static void gve_queues_mem_remove(struct gve_priv *priv) * No memory is allocated. Passed-in memory is freed on errors. */ static int gve_queues_start(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *qpls_alloc_cfg, struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) { @@ -1425,7 +1287,6 @@ static int gve_queues_start(struct gve_priv *priv, int err; /* Record new resources into priv */ - priv->qpls = qpls_alloc_cfg->qpls; priv->tx = tx_alloc_cfg->tx; priv->rx = rx_alloc_cfg->rx; @@ -1497,23 +1358,19 @@ static int gve_open(struct net_device *dev) { struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; - struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0}; struct gve_priv *priv = netdev_priv(dev); int err; - gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg); - err = gve_queues_mem_alloc(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + err = gve_queues_mem_alloc(priv, &tx_alloc_cfg, &rx_alloc_cfg); if (err) return err; /* No need to free on error: ownership of resources is lost after * calling gve_queues_start. 
*/ - err = gve_queues_start(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + err = gve_queues_start(priv, &tx_alloc_cfg, &rx_alloc_cfg); if (err) return err; @@ -1588,7 +1445,6 @@ static int gve_remove_xdp_queues(struct gve_priv *priv) gve_unreg_xdp_info(priv); gve_free_xdp_rings(priv); - gve_free_n_qpls(priv, priv->qpls, qpl_start_id, gve_num_xdp_qpls(priv)); priv->num_xdp_queues = 0; return 0; } @@ -1601,14 +1457,9 @@ static int gve_add_xdp_queues(struct gve_priv *priv) priv->num_xdp_queues = priv->rx_cfg.num_queues; start_id = gve_xdp_tx_start_queue_id(priv); - err = gve_alloc_n_qpls(priv, priv->qpls, priv->tx_pages_per_qpl, - start_id, gve_num_xdp_qpls(priv)); - if (err) - goto err; - err = gve_alloc_xdp_rings(priv); if (err) - goto free_xdp_qpls; + goto err; err = gve_reg_xdp_info(priv, priv->dev); if (err) @@ -1626,8 +1477,6 @@ static int gve_add_xdp_queues(struct gve_priv *priv) free_xdp_rings: gve_free_xdp_rings(priv); -free_xdp_qpls: - gve_free_n_qpls(priv, priv->qpls, start_id, gve_num_xdp_qpls(priv)); err: priv->num_xdp_queues = 0; return err; @@ -1878,15 +1727,13 @@ static int gve_xdp(struct net_device *dev, struct netdev_bpf *xdp) } int gve_adjust_config(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *qpls_alloc_cfg, struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) { int err; /* Allocate resources for the new confiugration */ - err = gve_queues_mem_alloc(priv, qpls_alloc_cfg, - tx_alloc_cfg, rx_alloc_cfg); + err = gve_queues_mem_alloc(priv, tx_alloc_cfg, rx_alloc_cfg); if (err) { netif_err(priv, drv, priv->dev, "Adjust config failed to alloc new queues"); @@ -1898,14 +1745,12 @@ int gve_adjust_config(struct gve_priv *priv, if (err) { netif_err(priv, drv, priv->dev, "Adjust config failed to close old queues"); - gve_queues_mem_free(priv, qpls_alloc_cfg, - tx_alloc_cfg, rx_alloc_cfg); + gve_queues_mem_free(priv, tx_alloc_cfg, rx_alloc_cfg); return err; } /* Bring the device back up again with the new resources. */ - err = gve_queues_start(priv, qpls_alloc_cfg, - tx_alloc_cfg, rx_alloc_cfg); + err = gve_queues_start(priv, tx_alloc_cfg, rx_alloc_cfg); if (err) { netif_err(priv, drv, priv->dev, "Adjust config failed to start new queues, !!! DISABLING ALL QUEUES !!!\n"); @@ -1925,23 +1770,18 @@ int gve_adjust_queues(struct gve_priv *priv, { struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; - struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0}; int err; - gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg); /* Relay the new config from ethtool */ - qpls_alloc_cfg.tx_cfg = &new_tx_config; tx_alloc_cfg.qcfg = &new_tx_config; rx_alloc_cfg.qcfg_tx = &new_tx_config; - qpls_alloc_cfg.rx_cfg = &new_rx_config; rx_alloc_cfg.qcfg = &new_rx_config; tx_alloc_cfg.num_rings = new_tx_config.num_queues; if (netif_carrier_ok(priv->dev)) { - err = gve_adjust_config(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg); return err; } /* Set the config for the next up. 
*/ @@ -2106,7 +1946,6 @@ int gve_set_hsplit_config(struct gve_priv *priv, u8 tcp_data_split) { struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; - struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0}; bool enable_hdr_split; int err = 0; @@ -2126,15 +1965,13 @@ int gve_set_hsplit_config(struct gve_priv *priv, u8 tcp_data_split) if (enable_hdr_split == priv->header_split_enabled) return 0; - gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg); rx_alloc_cfg.enable_header_split = enable_hdr_split; rx_alloc_cfg.packet_buffer_size = gve_get_pkt_buf_size(priv, enable_hdr_split); if (netif_running(priv->dev)) - err = gve_adjust_config(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg); return err; } @@ -2144,18 +1981,15 @@ static int gve_set_features(struct net_device *netdev, const netdev_features_t orig_features = netdev->features; struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; - struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0}; struct gve_priv *priv = netdev_priv(netdev); int err; - gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg); if ((netdev->features & NETIF_F_LRO) != (features & NETIF_F_LRO)) { netdev->features ^= NETIF_F_LRO; if (netif_carrier_ok(netdev)) { - err = gve_adjust_config(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg); if (err) { /* Revert the change on error. */ netdev->features = orig_features; diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c index 79c1d8f63621..fa45ab184297 100644 --- a/drivers/net/ethernet/google/gve/gve_rx.c +++ b/drivers/net/ethernet/google/gve/gve_rx.c @@ -41,7 +41,6 @@ static void gve_rx_unfill_pages(struct gve_priv *priv, for (i = 0; i < slots; i++) page_ref_sub(rx->data.page_info[i].page, rx->data.page_info[i].pagecnt_bias - 1); - rx->data.qpl = NULL; for (i = 0; i < rx->qpl_copy_pool_mask + 1; i++) { page_ref_sub(rx->qpl_copy_pool[i].page, @@ -107,6 +106,7 @@ static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx, u32 slots = rx->mask + 1; int idx = rx->q_num; size_t bytes; + u32 qpl_id; if (rx->desc.desc_ring) { bytes = sizeof(struct gve_rx_desc) * cfg->ring_size; @@ -132,6 +132,12 @@ static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx, kvfree(rx->qpl_copy_pool); rx->qpl_copy_pool = NULL; + if (rx->data.qpl) { + qpl_id = gve_get_rx_qpl_id(cfg->qcfg_tx, idx); + gve_free_queue_page_list(priv, rx->data.qpl, qpl_id); + rx->data.qpl = NULL; + } + netif_dbg(priv, drv, priv->dev, "freed rx ring %d\n", idx); } @@ -188,12 +194,6 @@ static int gve_rx_prefill_pages(struct gve_rx_ring *rx, if (!rx->data.page_info) return -ENOMEM; - if (!rx->data.raw_addressing) { - u32 qpl_id = gve_get_rx_qpl_id(cfg->qcfg_tx, rx->q_num); - - rx->data.qpl = &cfg->qpls[qpl_id]; - } - for (i = 0; i < slots; i++) { if (!rx->data.raw_addressing) { struct page *page = rx->data.qpl->pages[i]; @@ -246,8 +246,6 @@ static int gve_rx_prefill_pages(struct gve_rx_ring *rx, page_ref_sub(rx->data.page_info[i].page, rx->data.page_info[i].pagecnt_bias - 1); - rx->data.qpl = NULL; - return err; alloc_err_rda: @@ -274,6 +272,8 @@ static int gve_rx_alloc_ring_gqi(struct gve_priv *priv, struct 
device *hdev = &priv->pdev->dev; u32 slots = cfg->ring_size; int filled_pages; + int qpl_page_cnt; + u32 qpl_id = 0; size_t bytes; int err; @@ -306,10 +306,20 @@ static int gve_rx_alloc_ring_gqi(struct gve_priv *priv, goto abort_with_slots; } + if (!rx->data.raw_addressing) { + qpl_id = gve_get_rx_qpl_id(cfg->qcfg_tx, rx->q_num); + qpl_page_cnt = cfg->ring_size; + + rx->data.qpl = gve_alloc_queue_page_list(priv, qpl_id, + qpl_page_cnt); + if (!rx->data.qpl) + goto abort_with_copy_pool; + } + filled_pages = gve_rx_prefill_pages(rx, cfg); if (filled_pages < 0) { err = -ENOMEM; - goto abort_with_copy_pool; + goto abort_with_qpl; } rx->fill_cnt = filled_pages; /* Ensure data ring slots (packet buffers) are visible. */ @@ -350,6 +360,11 @@ static int gve_rx_alloc_ring_gqi(struct gve_priv *priv, rx->q_resources = NULL; abort_filled: gve_rx_unfill_pages(priv, rx, cfg); +abort_with_qpl: + if (!rx->data.raw_addressing) { + gve_free_queue_page_list(priv, rx->data.qpl, qpl_id); + rx->data.qpl = NULL; + } abort_with_copy_pool: kvfree(rx->qpl_copy_pool); rx->qpl_copy_pool = NULL; @@ -368,12 +383,6 @@ int gve_rx_alloc_rings_gqi(struct gve_priv *priv, int err = 0; int i, j; - if (!cfg->raw_addressing && !cfg->qpls) { - netif_err(priv, drv, priv->dev, - "Cannot alloc QPL ring before allocing QPLs\n"); - return -EINVAL; - } - rx = kvcalloc(cfg->qcfg->max_queues, sizeof(struct gve_rx_ring), GFP_KERNEL); if (!rx) diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c index 7c2980c212f4..4ea8ecc3b2d5 100644 --- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c +++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c @@ -307,6 +307,7 @@ static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx, size_t buffer_queue_slots; int idx = rx->q_num; size_t size; + u32 qpl_id; int i; completion_queue_slots = rx->dqo.complq.mask + 1; @@ -325,7 +326,11 @@ static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx, gve_free_page_dqo(priv, bs, !rx->dqo.qpl); } - rx->dqo.qpl = NULL; + if (rx->dqo.qpl) { + qpl_id = gve_get_rx_qpl_id(cfg->qcfg_tx, rx->q_num); + gve_free_queue_page_list(priv, rx->dqo.qpl, qpl_id); + rx->dqo.qpl = NULL; + } if (rx->dqo.bufq.desc_ring) { size = sizeof(rx->dqo.bufq.desc_ring[0]) * buffer_queue_slots; @@ -377,7 +382,9 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv, int idx) { struct device *hdev = &priv->pdev->dev; + int qpl_page_cnt; size_t size; + u32 qpl_id; const u32 buffer_queue_slots = cfg->ring_size; const u32 completion_queue_slots = cfg->ring_size; @@ -418,9 +425,13 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv, goto err; if (!cfg->raw_addressing) { - u32 qpl_id = gve_get_rx_qpl_id(cfg->qcfg_tx, rx->q_num); + qpl_id = gve_get_rx_qpl_id(cfg->qcfg_tx, rx->q_num); + qpl_page_cnt = gve_get_rx_pages_per_qpl_dqo(cfg->ring_size); - rx->dqo.qpl = &cfg->qpls[qpl_id]; + rx->dqo.qpl = gve_alloc_queue_page_list(priv, qpl_id, + qpl_page_cnt); + if (!rx->dqo.qpl) + goto err; rx->dqo.next_qpl_page_idx = 0; } @@ -454,12 +465,6 @@ int gve_rx_alloc_rings_dqo(struct gve_priv *priv, int err; int i; - if (!cfg->raw_addressing && !cfg->qpls) { - netif_err(priv, drv, priv->dev, - "Cannot alloc QPL ring before allocing QPLs\n"); - return -EINVAL; - } - rx = kvcalloc(cfg->qcfg->max_queues, sizeof(struct gve_rx_ring), GFP_KERNEL); if (!rx) diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c index f805700d67e7..24a64ec1073e 100644 --- 
a/drivers/net/ethernet/google/gve/gve_tx.c +++ b/drivers/net/ethernet/google/gve/gve_tx.c @@ -216,6 +216,7 @@ static void gve_tx_free_ring_gqi(struct gve_priv *priv, struct gve_tx_ring *tx, struct device *hdev = &priv->pdev->dev; int idx = tx->q_num; size_t bytes; + u32 qpl_id; u32 slots; slots = tx->mask + 1; @@ -223,8 +224,12 @@ static void gve_tx_free_ring_gqi(struct gve_priv *priv, struct gve_tx_ring *tx, tx->q_resources, tx->q_resources_bus); tx->q_resources = NULL; - if (!tx->raw_addressing) { - gve_tx_fifo_release(priv, &tx->tx_fifo); + if (tx->tx_fifo.qpl) { + if (tx->tx_fifo.base) + gve_tx_fifo_release(priv, &tx->tx_fifo); + + qpl_id = gve_tx_qpl_id(priv, tx->q_num); + gve_free_queue_page_list(priv, tx->tx_fifo.qpl, qpl_id); tx->tx_fifo.qpl = NULL; } @@ -255,6 +260,8 @@ static int gve_tx_alloc_ring_gqi(struct gve_priv *priv, int idx) { struct device *hdev = &priv->pdev->dev; + int qpl_page_cnt; + u32 qpl_id = 0; size_t bytes; /* Make sure everything is zeroed to start */ @@ -279,12 +286,17 @@ static int gve_tx_alloc_ring_gqi(struct gve_priv *priv, tx->raw_addressing = cfg->raw_addressing; tx->dev = hdev; if (!tx->raw_addressing) { - u32 qpl_id = gve_tx_qpl_id(priv, tx->q_num); + qpl_id = gve_tx_qpl_id(priv, tx->q_num); + qpl_page_cnt = priv->tx_pages_per_qpl; + + tx->tx_fifo.qpl = gve_alloc_queue_page_list(priv, qpl_id, + qpl_page_cnt); + if (!tx->tx_fifo.qpl) + goto abort_with_desc; - tx->tx_fifo.qpl = &cfg->qpls[qpl_id]; /* map Tx FIFO */ if (gve_tx_fifo_init(priv, &tx->tx_fifo)) - goto abort_with_desc; + goto abort_with_qpl; } tx->q_resources = @@ -300,6 +312,11 @@ static int gve_tx_alloc_ring_gqi(struct gve_priv *priv, abort_with_fifo: if (!tx->raw_addressing) gve_tx_fifo_release(priv, &tx->tx_fifo); +abort_with_qpl: + if (!tx->raw_addressing) { + gve_free_queue_page_list(priv, tx->tx_fifo.qpl, qpl_id); + tx->tx_fifo.qpl = NULL; + } abort_with_desc: dma_free_coherent(hdev, bytes, tx->desc, tx->bus); tx->desc = NULL; @@ -316,12 +333,6 @@ int gve_tx_alloc_rings_gqi(struct gve_priv *priv, int err = 0; int i, j; - if (!cfg->raw_addressing && !cfg->qpls) { - netif_err(priv, drv, priv->dev, - "Cannot alloc QPL ring before allocing QPLs\n"); - return -EINVAL; - } - if (cfg->start_idx + cfg->num_rings > cfg->qcfg->max_queues) { netif_err(priv, drv, priv->dev, "Cannot alloc more than the max num of Tx rings\n"); diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c index 3d825e406c4b..fe1b26a4d736 100644 --- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c +++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c @@ -209,6 +209,7 @@ static void gve_tx_free_ring_dqo(struct gve_priv *priv, struct gve_tx_ring *tx, struct device *hdev = &priv->pdev->dev; int idx = tx->q_num; size_t bytes; + u32 qpl_id; if (tx->q_resources) { dma_free_coherent(hdev, sizeof(*tx->q_resources), @@ -236,7 +237,11 @@ static void gve_tx_free_ring_dqo(struct gve_priv *priv, struct gve_tx_ring *tx, kvfree(tx->dqo.tx_qpl_buf_next); tx->dqo.tx_qpl_buf_next = NULL; - tx->dqo.qpl = NULL; + if (tx->dqo.qpl) { + qpl_id = gve_tx_qpl_id(priv, tx->q_num); + gve_free_queue_page_list(priv, tx->dqo.qpl, qpl_id); + tx->dqo.qpl = NULL; + } netif_dbg(priv, drv, priv->dev, "freed tx queue %d\n", idx); } @@ -282,7 +287,9 @@ static int gve_tx_alloc_ring_dqo(struct gve_priv *priv, { struct device *hdev = &priv->pdev->dev; int num_pending_packets; + int qpl_page_cnt; size_t bytes; + u32 qpl_id; int i; memset(tx, 0, sizeof(*tx)); @@ -349,9 +356,13 @@ static int gve_tx_alloc_ring_dqo(struct 
gve_priv *priv, goto err; if (!cfg->raw_addressing) { - u32 qpl_id = gve_tx_qpl_id(priv, tx->q_num); + qpl_id = gve_tx_qpl_id(priv, tx->q_num); + qpl_page_cnt = priv->tx_pages_per_qpl; - tx->dqo.qpl = &cfg->qpls[qpl_id]; + tx->dqo.qpl = gve_alloc_queue_page_list(priv, qpl_id, + qpl_page_cnt); + if (!tx->dqo.qpl) + goto err; if (gve_tx_qpl_buf_init(tx)) goto err; @@ -371,12 +382,6 @@ int gve_tx_alloc_rings_dqo(struct gve_priv *priv, int err = 0; int i, j; - if (!cfg->raw_addressing && !cfg->qpls) { - netif_err(priv, drv, priv->dev, - "Cannot alloc QPL ring before allocing QPLs\n"); - return -EINVAL; - } - if (cfg->start_idx + cfg->num_rings > cfg->qcfg->max_queues) { netif_err(priv, drv, priv->dev, "Cannot alloc more than the max num of Tx rings\n"); From patchwork Tue Apr 30 23:14:19 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Shailend Chand X-Patchwork-Id: 13650026 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-yw1-f201.google.com (mail-yw1-f201.google.com [209.85.128.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2CF181C0DD6 for ; Tue, 30 Apr 2024 23:14:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714518892; cv=none; b=c1HyZxFo8kqCmadqBNl6m1jMbNtyimSidBMiGxT/YuUb8iQStD8ZKs58KuaydWDV+QIrugBnHrNGWtBUK0hs9q9FWDgTC3Abq8wabuskh3DyEuv0Qz8fx1/rEVAk72vYD58G8pm8J5Pbnskunip5BC8C9RylCBtIl2tuSEpThVo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714518892; c=relaxed/simple; bh=sGt8lpzuoKugog68emobZt7Gm+OFWNmsA5Wl/JisnCE=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=GZi73kW6elc4HH3emFcSUoqWy319jet9wNgAdbzglDGGUJOGfiai76B6TOXpEDq9h4zHIQpj3Dhf5u2pyrtf5visQ9nwrS3X+H5SOKvOdNZ4tRsJzRtxV6boODHSQFG7EkOeM6U6VXV6yQ3+jgNFH3fGdGC5co0xShL9EqjO/bM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--shailend.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=ZH4SRH5F; arc=none smtp.client-ip=209.85.128.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--shailend.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="ZH4SRH5F" Received: by mail-yw1-f201.google.com with SMTP id 00721157ae682-61bbd6578f9so63355297b3.1 for ; Tue, 30 Apr 2024 16:14:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1714518890; x=1715123690; darn=vger.kernel.org; h=content-transfer-encoding:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:from:to:cc:subject:date:message-id :reply-to; bh=WB3ZS9Ij//X6pRvwFQpFltP77jWP/bicI9Gj5o8Mrik=; b=ZH4SRH5FzouYfvU4crh4heBojOlROWRB8TDLQSo3jaXU1iTVhMAGUmNNakdxoqGbkN BBdzFILl7JYX8ADBp53DnNsIufYqNWf1P8I+/lmrHVEPnTN4UFFgqbnKN+24m/nAT1UK YDSsXbeFnEB0S0hekXj6kipj5NbCH96wyqNpCPOSaylMXu4g9rUcvIpC8yHtGQxpxZmV +w0fm6cyAzn+U68UcF7tVKZhFhE8Yr9xWdkEae5/KgODi3RITcvHLValN7BfXjGc2QWY doEcS3Y9r3DKLcw3KNGaqd/W3wXmVRx4DSXvnzkbga7171WEJIgFsCxoc2ulu2RDYyuF S8+g== X-Google-DKIM-Signature: 
Date: Tue, 30 Apr 2024 23:14:19 +0000 In-Reply-To: <20240430231420.699177-1-shailend@google.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240430231420.699177-1-shailend@google.com> X-Mailer: git-send-email 2.45.0.rc0.197.gbae5840b3b-goog Message-ID: <20240430231420.699177-11-shailend@google.com> Subject: [PATCH net-next 10/10] gve: Implement queue api From: Shailend Chand To: netdev@vger.kernel.org Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com, hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org, pabeni@redhat.com, pkaligineedi@google.com, willemb@google.com, Shailend Chand X-Patchwork-Delegate: kuba@kernel.org The new netdev queue api is implemented for gve.
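For context on how these hooks fit together, here is a small standalone C sketch (not kernel code) of the call order the series is built around: new_queue.alloc(), then old_queue.stop(), then new_queue.start(), then old_queue.free(). The struct, callback names, and error handling below are simplified stand-ins for the netdev_queue_mgmt_ops interface, not its actual signatures; the real callbacks also take a struct net_device and operate on per-queue memory sized by ndo_queue_mem_size.

#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-ins for the per-queue management callbacks. */
struct queue_mgmt_ops {
	size_t mem_size;                             /* like ndo_queue_mem_size  */
	int  (*mem_alloc)(void *per_q_mem, int idx); /* like ndo_queue_mem_alloc */
	void (*mem_free)(void *per_q_mem);           /* like ndo_queue_mem_free  */
	int  (*start)(void *per_q_mem, int idx);     /* like ndo_queue_start     */
	int  (*stop)(void *per_q_mem, int idx);      /* like ndo_queue_stop      */
};

/* Restart one RX queue using the alloc -> stop -> start -> free order:
 * the replacement's memory exists before the old queue is touched, so a
 * failed allocation never leaves the queue down.
 */
static int restart_rx_queue(const struct queue_mgmt_ops *ops, int idx)
{
	void *new_mem = calloc(1, ops->mem_size);
	void *old_mem = calloc(1, ops->mem_size);
	int err = -1;

	if (!new_mem || !old_mem)
		goto out;

	err = ops->mem_alloc(new_mem, idx);	/* new_queue.alloc() */
	if (err)
		goto out;

	err = ops->stop(old_mem, idx);		/* old_queue.stop()  */
	if (err) {
		ops->mem_free(new_mem);		/* give back the unused new memory */
		goto out;
	}

	err = ops->start(new_mem, idx);		/* new_queue.start() */
	ops->mem_free(old_mem);			/* old_queue.free()  */
out:
	free(new_mem);
	free(old_mem);
	return err;
}

/* Tiny callbacks so the sketch compiles and runs. */
static int  demo_alloc(void *m, int idx) { (void)m; printf("alloc q%d\n", idx); return 0; }
static void demo_free(void *m)           { (void)m; }
static int  demo_start(void *m, int idx) { (void)m; printf("start q%d\n", idx); return 0; }
static int  demo_stop(void *m, int idx)  { (void)m; printf("stop q%d\n", idx);  return 0; }

int main(void)
{
	struct queue_mgmt_ops ops = {
		.mem_size  = 64,
		.mem_alloc = demo_alloc,
		.mem_free  = demo_free,
		.start     = demo_start,
		.stop      = demo_stop,
	};

	return restart_rx_queue(&ops, 0) ? 1 : 0;
}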
Tested-by: Mina Almasry
Reviewed-by: Mina Almasry
Reviewed-by: Praveen Kaligineedi
Reviewed-by: Harshitha Ramamurthy
Signed-off-by: Shailend Chand
---
 drivers/net/ethernet/google/gve/gve.h        |   6 +
 drivers/net/ethernet/google/gve/gve_dqo.h    |   6 +
 drivers/net/ethernet/google/gve/gve_main.c   | 177 +++++++++++++++++--
 drivers/net/ethernet/google/gve/gve_rx.c     |  12 +-
 drivers/net/ethernet/google/gve/gve_rx_dqo.c |  12 +-
 5 files changed, 189 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 9e0a433c991c..ae1e21c9b0a5 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -1096,6 +1096,12 @@ bool gve_tx_clean_pending(struct gve_priv *priv, struct gve_tx_ring *tx);
 void gve_rx_write_doorbell(struct gve_priv *priv, struct gve_rx_ring *rx);
 int gve_rx_poll(struct gve_notify_block *block, int budget);
 bool gve_rx_work_pending(struct gve_rx_ring *rx);
+int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
+			  struct gve_rx_alloc_rings_cfg *cfg,
+			  struct gve_rx_ring *rx,
+			  int idx);
+void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
+			  struct gve_rx_alloc_rings_cfg *cfg);
 int gve_rx_alloc_rings(struct gve_priv *priv);
 int gve_rx_alloc_rings_gqi(struct gve_priv *priv,
 			   struct gve_rx_alloc_rings_cfg *cfg);
diff --git a/drivers/net/ethernet/google/gve/gve_dqo.h b/drivers/net/ethernet/google/gve/gve_dqo.h
index b81584829c40..e83773fb891f 100644
--- a/drivers/net/ethernet/google/gve/gve_dqo.h
+++ b/drivers/net/ethernet/google/gve/gve_dqo.h
@@ -44,6 +44,12 @@ void gve_tx_free_rings_dqo(struct gve_priv *priv,
 			   struct gve_tx_alloc_rings_cfg *cfg);
 void gve_tx_start_ring_dqo(struct gve_priv *priv, int idx);
 void gve_tx_stop_ring_dqo(struct gve_priv *priv, int idx);
+int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
+			  struct gve_rx_alloc_rings_cfg *cfg,
+			  struct gve_rx_ring *rx,
+			  int idx);
+void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
+			  struct gve_rx_alloc_rings_cfg *cfg);
 int gve_rx_alloc_rings_dqo(struct gve_priv *priv,
 			   struct gve_rx_alloc_rings_cfg *cfg);
 void gve_rx_free_rings_dqo(struct gve_priv *priv,
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 65adab0f5171..2f18910456b2 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include "gve.h"
@@ -1238,16 +1239,28 @@ void gve_get_curr_alloc_cfgs(struct gve_priv *priv,
 	gve_rx_get_curr_alloc_cfg(priv, rx_alloc_cfg);
 }
 
+static void gve_rx_start_ring(struct gve_priv *priv, int i)
+{
+	if (gve_is_gqi(priv))
+		gve_rx_start_ring_gqi(priv, i);
+	else
+		gve_rx_start_ring_dqo(priv, i);
+}
+
 static void gve_rx_start_rings(struct gve_priv *priv, int num_rings)
 {
 	int i;
 
-	for (i = 0; i < num_rings; i++) {
-		if (gve_is_gqi(priv))
-			gve_rx_start_ring_gqi(priv, i);
-		else
-			gve_rx_start_ring_dqo(priv, i);
-	}
+	for (i = 0; i < num_rings; i++)
+		gve_rx_start_ring(priv, i);
+}
+
+static void gve_rx_stop_ring(struct gve_priv *priv, int i)
+{
+	if (gve_is_gqi(priv))
+		gve_rx_stop_ring_gqi(priv, i);
+	else
+		gve_rx_stop_ring_dqo(priv, i);
 }
 
 static void gve_rx_stop_rings(struct gve_priv *priv, int num_rings)
@@ -1257,12 +1270,8 @@ static void gve_rx_stop_rings(struct gve_priv *priv, int num_rings)
 	if (!priv->rx)
 		return;
 
-	for (i = 0; i < num_rings; i++) {
-		if (gve_is_gqi(priv))
-			gve_rx_stop_ring_gqi(priv, i);
-		else
-			gve_rx_stop_ring_dqo(priv, i);
-	}
+	for (i = 0; i < num_rings; i++)
+		gve_rx_stop_ring(priv, i);
 }
 
 static void gve_queues_mem_remove(struct gve_priv *priv)
@@ -1882,6 +1891,15 @@ static void gve_turnup(struct gve_priv *priv)
 	gve_set_napi_enabled(priv);
 }
 
+static void gve_turnup_and_check_status(struct gve_priv *priv)
+{
+	u32 status;
+
+	gve_turnup(priv);
+	status = ioread32be(&priv->reg_bar0->device_status);
+	gve_handle_link_status(priv, GVE_DEVICE_STATUS_LINK_STATUS_MASK & status);
+}
+
 static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
 {
 	struct gve_notify_block *block;
@@ -2328,6 +2346,140 @@ static void gve_write_version(u8 __iomem *driver_version_register)
 	writeb('\n', driver_version_register);
 }
 
+static int gve_rx_queue_stop(struct net_device *dev, void *per_q_mem, int idx)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	struct gve_rx_ring *gve_per_q_mem;
+	int err;
+
+	if (!priv->rx)
+		return -EAGAIN;
+
+	/* Destroying queue 0 while other queues exist is not supported in DQO */
+	if (!gve_is_gqi(priv) && idx == 0)
+		return -ERANGE;
+
+	/* Single-queue destruction requires quiescence on all queues */
+	gve_turndown(priv);
+
+	/* This failure will trigger a reset - no need to clean up */
+	err = gve_adminq_destroy_single_rx_queue(priv, idx);
+	if (err)
+		return err;
+
+	if (gve_is_qpl(priv)) {
+		/* This failure will trigger a reset - no need to clean up */
+		err = gve_unregister_qpl(priv, gve_rx_get_qpl(priv, idx));
+		if (err)
+			return err;
+	}
+
+	gve_rx_stop_ring(priv, idx);
+
+	/* Turn the unstopped queues back up */
+	gve_turnup_and_check_status(priv);
+
+	gve_per_q_mem = (struct gve_rx_ring *)per_q_mem;
+	*gve_per_q_mem = priv->rx[idx];
+	memset(&priv->rx[idx], 0, sizeof(priv->rx[idx]));
+	return 0;
+}
+
+static void gve_rx_queue_mem_free(struct net_device *dev, void *per_q_mem)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	struct gve_rx_alloc_rings_cfg cfg = {0};
+	struct gve_rx_ring *gve_per_q_mem;
+
+	gve_per_q_mem = (struct gve_rx_ring *)per_q_mem;
+	gve_rx_get_curr_alloc_cfg(priv, &cfg);
+
+	if (gve_is_gqi(priv))
+		gve_rx_free_ring_gqi(priv, gve_per_q_mem, &cfg);
+	else
+		gve_rx_free_ring_dqo(priv, gve_per_q_mem, &cfg);
+}
+
+static int gve_rx_queue_mem_alloc(struct net_device *dev, void *per_q_mem,
+				  int idx)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	struct gve_rx_alloc_rings_cfg cfg = {0};
+	struct gve_rx_ring *gve_per_q_mem;
+	int err;
+
+	if (!priv->rx)
+		return -EAGAIN;
+
+	gve_per_q_mem = (struct gve_rx_ring *)per_q_mem;
+	gve_rx_get_curr_alloc_cfg(priv, &cfg);
+
+	if (gve_is_gqi(priv))
+		err = gve_rx_alloc_ring_gqi(priv, &cfg, gve_per_q_mem, idx);
+	else
+		err = gve_rx_alloc_ring_dqo(priv, &cfg, gve_per_q_mem, idx);
+
+	return err;
+}
+
+static int gve_rx_queue_start(struct net_device *dev, void *per_q_mem, int idx)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	struct gve_rx_ring *gve_per_q_mem;
+	int err;
+
+	if (!priv->rx)
+		return -EAGAIN;
+
+	gve_per_q_mem = (struct gve_rx_ring *)per_q_mem;
+	priv->rx[idx] = *gve_per_q_mem;
+
+	/* Single-queue creation requires quiescence on all queues */
+	gve_turndown(priv);
+
+	gve_rx_start_ring(priv, idx);
+
+	if (gve_is_qpl(priv)) {
+		/* This failure will trigger a reset - no need to clean up */
+		err = gve_register_qpl(priv, gve_rx_get_qpl(priv, idx));
+		if (err)
+			goto abort;
+	}
+
+	/* This failure will trigger a reset - no need to clean up */
+	err = gve_adminq_create_single_rx_queue(priv, idx);
+	if (err)
+		goto abort;
+
+	if (gve_is_gqi(priv))
+		gve_rx_write_doorbell(priv, &priv->rx[idx]);
+	else
+		gve_rx_post_buffers_dqo(&priv->rx[idx]);
+
+	/* Turn the unstopped queues back up */
+	gve_turnup_and_check_status(priv);
+	return 0;
+
+abort:
+	gve_rx_stop_ring(priv, idx);
+
+	/* All failures in this func result in a reset, by clearing the struct
+	 * at idx, we prevent a double free when that reset runs. The reset,
+	 * which needs the rtnl lock, will not run till this func returns and
+	 * its caller gives up the lock.
+	 */
+	memset(&priv->rx[idx], 0, sizeof(priv->rx[idx]));
+	return err;
+}
+
+static const struct netdev_queue_mgmt_ops gve_queue_mgmt_ops = {
+	.ndo_queue_mem_size = sizeof(struct gve_rx_ring),
+	.ndo_queue_mem_alloc = gve_rx_queue_mem_alloc,
+	.ndo_queue_mem_free = gve_rx_queue_mem_free,
+	.ndo_queue_start = gve_rx_queue_start,
+	.ndo_queue_stop = gve_rx_queue_stop,
+};
+
 static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
 	int max_tx_queues, max_rx_queues;
@@ -2382,6 +2534,7 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	pci_set_drvdata(pdev, dev);
 	dev->ethtool_ops = &gve_ethtool_ops;
 	dev->netdev_ops = &gve_netdev_ops;
+	dev->queue_mgmt_ops = &gve_queue_mgmt_ops;
 
 	/* Set default and supported features.
 	 *
diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index fa45ab184297..68f64ebb0e27 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -99,8 +99,8 @@ void gve_rx_stop_ring_gqi(struct gve_priv *priv, int idx)
 	gve_rx_reset_ring_gqi(priv, idx);
 }
 
-static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
-				 struct gve_rx_alloc_rings_cfg *cfg)
+void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
+			  struct gve_rx_alloc_rings_cfg *cfg)
 {
 	struct device *dev = &priv->pdev->dev;
 	u32 slots = rx->mask + 1;
@@ -264,10 +264,10 @@ void gve_rx_start_ring_gqi(struct gve_priv *priv, int idx)
 	gve_add_napi(priv, ntfy_idx, gve_napi_poll);
 }
 
-static int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
-				 struct gve_rx_alloc_rings_cfg *cfg,
-				 struct gve_rx_ring *rx,
-				 int idx)
+int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
+			  struct gve_rx_alloc_rings_cfg *cfg,
+			  struct gve_rx_ring *rx,
+			  int idx)
 {
 	struct device *hdev = &priv->pdev->dev;
 	u32 slots = cfg->ring_size;
diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
index 4ea8ecc3b2d5..c1c912de59c7 100644
--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
@@ -299,8 +299,8 @@ void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx)
 	gve_rx_reset_ring_dqo(priv, idx);
 }
 
-static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
-				 struct gve_rx_alloc_rings_cfg *cfg)
+void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
+			  struct gve_rx_alloc_rings_cfg *cfg)
 {
 	struct device *hdev = &priv->pdev->dev;
 	size_t completion_queue_slots;
@@ -376,10 +376,10 @@ void gve_rx_start_ring_dqo(struct gve_priv *priv, int idx)
 	gve_add_napi(priv, ntfy_idx, gve_napi_poll_dqo);
 }
 
-static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
-				 struct gve_rx_alloc_rings_cfg *cfg,
-				 struct gve_rx_ring *rx,
-				 int idx)
+int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
+			  struct gve_rx_alloc_rings_cfg *cfg,
+			  struct gve_rx_ring *rx,
+			  int idx)
 {
 	struct device *hdev = &priv->pdev->dev;
 	int qpl_page_cnt;