From patchwork Wed May 1 23:25:40 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13651182
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 1 May 2024 23:25:40 +0000
In-Reply-To: <20240501232549.1327174-1-shailend@google.com>
References: <20240501232549.1327174-1-shailend@google.com>
Message-ID: <20240501232549.1327174-2-shailend@google.com>
Subject: [PATCH net-next v2 01/10] queue_api: define queue api
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com,
    hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org,
    pabeni@redhat.com, pkaligineedi@google.com, rushilg@google.com,
    willemb@google.com, ziweixiao@google.com, Shailend Chand

From: Mina Almasry

This API enables the net stack to reset the queues used for devmem TCP.

Signed-off-by: Mina Almasry
Signed-off-by: Shailend Chand
---
 include/linux/netdevice.h | 3 +++
 include/net/netdev_queues.h | 31 +++++++++++++++++++++++++++++++
 2 files changed, 34 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index f849e7d110ed..6a58ec73c5e8 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1957,6 +1957,7 @@ enum netdev_reg_state {
 *	@sysfs_rx_queue_group:	Space for optional per-rx queue attributes
 *	@rtnl_link_ops:	Rtnl_link_ops
 *	@stat_ops:	Optional ops for queue-aware statistics
+ *	@queue_mgmt_ops:	Optional ops for queue management
 *
 *	@gso_max_size:	Maximum size of generic segmentation offload
 *	@tso_max_size:	Device (as in HW) limit on the max TSO request size
@@ -2340,6 +2341,8 @@ struct net_device {
 
 	const struct netdev_stat_ops *stat_ops;
 
+	const struct netdev_queue_mgmt_ops *queue_mgmt_ops;
+
 	/* for setting kernel sock attribute on TCP connection setup */
 #define GSO_MAX_SEGS		65535u
 #define GSO_LEGACY_MAX_SIZE	65536u
diff --git a/include/net/netdev_queues.h b/include/net/netdev_queues.h
index c7ac4539eafc..e7b84f018cee 100644
--- a/include/net/netdev_queues.h
+++ b/include/net/netdev_queues.h
@@ -87,6 +87,37 @@ struct netdev_stat_ops {
 			       struct netdev_queue_stats_tx *tx);
 };
 
+/**
+ * struct netdev_queue_mgmt_ops - netdev ops for queue management
+ *
+ * @ndo_queue_mem_size: Size of the struct that describes a queue's memory.
+ *
+ * @ndo_queue_mem_alloc: Allocate memory for an RX queue at the specified index.
+ *			 The new memory is written at the specified address.
+ *
+ * @ndo_queue_mem_free:	Free memory from an RX queue.
+ *
+ * @ndo_queue_start:	Start an RX queue with the specified memory and at the
+ *			specified index.
+ *
+ * @ndo_queue_stop:	Stop the RX queue at the specified index. The stopped
+ *			queue's memory is written at the specified address.
+ */
+struct netdev_queue_mgmt_ops {
+	size_t			ndo_queue_mem_size;
+	int			(*ndo_queue_mem_alloc)(struct net_device *dev,
+						       void *per_queue_mem,
+						       int idx);
+	void			(*ndo_queue_mem_free)(struct net_device *dev,
+						      void *per_queue_mem);
+	int			(*ndo_queue_start)(struct net_device *dev,
+						   void *per_queue_mem,
+						   int idx);
+	int			(*ndo_queue_stop)(struct net_device *dev,
+						   void *per_queue_mem,
+						   int idx);
+};
+
 /**
  * DOC: Lockless queue stopping / waking helpers.
  *
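The driver-side implementations of these ops come later in the series; the core-side consumer is the devmem TCP work referenced above. The sketch below is illustrative only and is not part of this patch: the helper name netdev_queue_restart and its error-handling policy are assumptions made for illustration. It shows how a core caller could use the five ops to swap the memory backing RX queue 'idx'.

/* Illustrative sketch only -- not part of this patch. A hypothetical core
 * helper that uses netdev_queue_mgmt_ops to replace the memory backing RX
 * queue 'idx'. Error handling is simplified; a real helper would attempt to
 * restart the queue on its old memory if the swap fails partway.
 */
static int netdev_queue_restart(struct net_device *dev, int idx)
{
	const struct netdev_queue_mgmt_ops *ops = dev->queue_mgmt_ops;
	void *new_mem, *old_mem;
	int err;

	if (!ops)
		return -EOPNOTSUPP;

	new_mem = kvzalloc(ops->ndo_queue_mem_size, GFP_KERNEL);
	old_mem = kvzalloc(ops->ndo_queue_mem_size, GFP_KERNEL);
	if (!new_mem || !old_mem) {
		err = -ENOMEM;
		goto out;
	}

	/* Allocate the new incarnation of the queue's memory. */
	err = ops->ndo_queue_mem_alloc(dev, new_mem, idx);
	if (err)
		goto out;

	/* Stop the live queue; a description of its memory lands in old_mem. */
	err = ops->ndo_queue_stop(dev, old_mem, idx);
	if (err)
		goto out_free_new;

	/* Bring the queue back up on the newly allocated memory. */
	err = ops->ndo_queue_start(dev, new_mem, idx);
	if (err)
		goto out_free_new;

	ops->ndo_queue_mem_free(dev, old_mem);
	goto out;

out_free_new:
	ops->ndo_queue_mem_free(dev, new_mem);
out:
	kvfree(new_mem);
	kvfree(old_mem);
	return err;
}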
From patchwork Wed May 1 23:25:41 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13651183
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 1 May 2024 23:25:41 +0000
In-Reply-To: <20240501232549.1327174-1-shailend@google.com>
References: <20240501232549.1327174-1-shailend@google.com>
Message-ID: <20240501232549.1327174-3-shailend@google.com>
Subject: [PATCH net-next v2 02/10] gve: Make the GQ RX free queue funcs idempotent
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com,
    hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org,
    pabeni@redhat.com, pkaligineedi@google.com, rushilg@google.com,
    willemb@google.com, ziweixiao@google.com, Shailend Chand

Although this is not fixing any existing double free bug, making these
functions idempotent allows for a simpler implementation of future ndo
hooks that act on a single queue.
Tested-by: Mina Almasry
Reviewed-by: Praveen Kaligineedi
Reviewed-by: Harshitha Ramamurthy
Signed-off-by: Shailend Chand
---
 drivers/net/ethernet/google/gve/gve_rx.c | 29 ++++++++++++++++--------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 9b56e89c4f43..0a3f88170411 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -30,6 +30,9 @@ static void gve_rx_unfill_pages(struct gve_priv *priv,
 	u32 slots = rx->mask + 1;
 	int i;
 
+	if (!rx->data.page_info)
+		return;
+
 	if (rx->data.raw_addressing) {
 		for (i = 0; i < slots; i++)
 			gve_rx_free_buffer(&priv->pdev->dev, &rx->data.page_info[i],
@@ -69,20 +72,26 @@ static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
 	int idx = rx->q_num;
 	size_t bytes;
 
-	bytes = sizeof(struct gve_rx_desc) * cfg->ring_size;
-	dma_free_coherent(dev, bytes, rx->desc.desc_ring, rx->desc.bus);
-	rx->desc.desc_ring = NULL;
+	if (rx->desc.desc_ring) {
+		bytes = sizeof(struct gve_rx_desc) * cfg->ring_size;
+		dma_free_coherent(dev, bytes, rx->desc.desc_ring, rx->desc.bus);
+		rx->desc.desc_ring = NULL;
+	}
 
-	dma_free_coherent(dev, sizeof(*rx->q_resources),
-			  rx->q_resources, rx->q_resources_bus);
-	rx->q_resources = NULL;
+	if (rx->q_resources) {
+		dma_free_coherent(dev, sizeof(*rx->q_resources),
+				  rx->q_resources, rx->q_resources_bus);
+		rx->q_resources = NULL;
+	}
 
 	gve_rx_unfill_pages(priv, rx, cfg);
 
-	bytes = sizeof(*rx->data.data_ring) * slots;
-	dma_free_coherent(dev, bytes, rx->data.data_ring,
-			  rx->data.data_bus);
-	rx->data.data_ring = NULL;
+	if (rx->data.data_ring) {
+		bytes = sizeof(*rx->data.data_ring) * slots;
+		dma_free_coherent(dev, bytes, rx->data.data_ring,
+				  rx->data.data_bus);
+		rx->data.data_ring = NULL;
+	}
 
 	kvfree(rx->qpl_copy_pool);
 	rx->qpl_copy_pool = NULL;
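The pattern that makes these functions safe to call more than once is worth stating on its own. The fragment below is a generic sketch with hypothetical types, not code taken from the driver: every free is guarded by the pointer it releases and the pointer is cleared afterwards, so a second call degenerates to a no-op.

/* Generic sketch of the idempotent-teardown pattern used in this patch. */
struct example_ring {
	void *descs;
	size_t desc_bytes;
	dma_addr_t bus;
};

static void example_free_ring(struct device *dev, struct example_ring *ring)
{
	if (ring->descs) {
		dma_free_coherent(dev, ring->desc_bytes, ring->descs, ring->bus);
		ring->descs = NULL;	/* a second call now skips the free */
	}
}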
From patchwork Wed May 1 23:25:42 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13651184
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 1 May 2024 23:25:42 +0000
In-Reply-To: <20240501232549.1327174-1-shailend@google.com>
References: <20240501232549.1327174-1-shailend@google.com>
Message-ID: <20240501232549.1327174-4-shailend@google.com>
Subject: [PATCH net-next v2 03/10] gve: Add adminq funcs to add/remove a single Rx queue
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com,
    hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org,
    pabeni@redhat.com, pkaligineedi@google.com, rushilg@google.com,
    willemb@google.com, ziweixiao@google.com, Shailend Chand

This allows for implementing future ndo hooks that act on a single queue.
Tested-by: Mina Almasry
Reviewed-by: Praveen Kaligineedi
Reviewed-by: Harshitha Ramamurthy
Signed-off-by: Shailend Chand
---
 drivers/net/ethernet/google/gve/gve_adminq.c | 79 +++++++++++++-------
 drivers/net/ethernet/google/gve/gve_adminq.h |  2 +
 2 files changed, 54 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index b2b619aa2310..3df8243680d9 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -630,14 +630,15 @@ int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_que
 	return gve_adminq_kick_and_wait(priv);
 }
 
-static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
+static void gve_adminq_get_create_rx_queue_cmd(struct gve_priv *priv,
+					       union gve_adminq_command *cmd,
+					       u32 queue_index)
 {
 	struct gve_rx_ring *rx = &priv->rx[queue_index];
-	union gve_adminq_command cmd;
 
-	memset(&cmd, 0, sizeof(cmd));
-	cmd.opcode = cpu_to_be32(GVE_ADMINQ_CREATE_RX_QUEUE);
-	cmd.create_rx_queue = (struct gve_adminq_create_rx_queue) {
+	memset(cmd, 0, sizeof(*cmd));
+	cmd->opcode = cpu_to_be32(GVE_ADMINQ_CREATE_RX_QUEUE);
+	cmd->create_rx_queue = (struct gve_adminq_create_rx_queue) {
 		.queue_id = cpu_to_be32(queue_index),
 		.ntfy_id = cpu_to_be32(rx->ntfy_id),
 		.queue_resources_addr = cpu_to_be64(rx->q_resources_bus),
@@ -648,13 +649,13 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
 		u32 qpl_id = priv->queue_format == GVE_GQI_RDA_FORMAT ?
 			GVE_RAW_ADDRESSING_QPL_ID : rx->data.qpl->id;
 
-		cmd.create_rx_queue.rx_desc_ring_addr =
+		cmd->create_rx_queue.rx_desc_ring_addr =
 			cpu_to_be64(rx->desc.bus),
-		cmd.create_rx_queue.rx_data_ring_addr =
+		cmd->create_rx_queue.rx_data_ring_addr =
 			cpu_to_be64(rx->data.data_bus),
-		cmd.create_rx_queue.index = cpu_to_be32(queue_index);
-		cmd.create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
-		cmd.create_rx_queue.packet_buffer_size = cpu_to_be16(rx->packet_buffer_size);
+		cmd->create_rx_queue.index = cpu_to_be32(queue_index);
+		cmd->create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
+		cmd->create_rx_queue.packet_buffer_size = cpu_to_be16(rx->packet_buffer_size);
 	} else {
 		u32 qpl_id = 0;
 
@@ -662,25 +663,40 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
 			qpl_id = GVE_RAW_ADDRESSING_QPL_ID;
 		else
 			qpl_id = rx->dqo.qpl->id;
-		cmd.create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
-		cmd.create_rx_queue.rx_desc_ring_addr =
+		cmd->create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
+		cmd->create_rx_queue.rx_desc_ring_addr =
 			cpu_to_be64(rx->dqo.complq.bus);
-		cmd.create_rx_queue.rx_data_ring_addr =
+		cmd->create_rx_queue.rx_data_ring_addr =
 			cpu_to_be64(rx->dqo.bufq.bus);
-		cmd.create_rx_queue.packet_buffer_size =
+		cmd->create_rx_queue.packet_buffer_size =
 			cpu_to_be16(priv->data_buffer_size_dqo);
-		cmd.create_rx_queue.rx_buff_ring_size =
+		cmd->create_rx_queue.rx_buff_ring_size =
 			cpu_to_be16(priv->rx_desc_cnt);
-		cmd.create_rx_queue.enable_rsc =
+		cmd->create_rx_queue.enable_rsc =
 			!!(priv->dev->features & NETIF_F_LRO);
 		if (priv->header_split_enabled)
-			cmd.create_rx_queue.header_buffer_size =
+			cmd->create_rx_queue.header_buffer_size =
 				cpu_to_be16(priv->header_buf_size);
 	}
+}
+
+static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
+{
+	union gve_adminq_command cmd;
 
+	gve_adminq_get_create_rx_queue_cmd(priv, &cmd, queue_index);
 	return gve_adminq_issue_cmd(priv, &cmd);
 }
 
+/* Unlike gve_adminq_create_rx_queue, this actually rings the doorbell */
+int gve_adminq_create_single_rx_queue(struct gve_priv *priv, u32 queue_index)
+{
+	union gve_adminq_command cmd;
+
+	gve_adminq_get_create_rx_queue_cmd(priv, &cmd, queue_index);
+	return gve_adminq_execute_cmd(priv, &cmd);
+}
+
 int gve_adminq_create_rx_queues(struct gve_priv *priv, u32 num_queues)
 {
 	int err;
@@ -727,22 +743,31 @@ int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_qu
 	return gve_adminq_kick_and_wait(priv);
 }
 
+static void gve_adminq_make_destroy_rx_queue_cmd(union gve_adminq_command *cmd,
+						 u32 queue_index)
+{
+	memset(cmd, 0, sizeof(*cmd));
+	cmd->opcode = cpu_to_be32(GVE_ADMINQ_DESTROY_RX_QUEUE);
+	cmd->destroy_rx_queue = (struct gve_adminq_destroy_rx_queue) {
+		.queue_id = cpu_to_be32(queue_index),
+	};
+}
+
 static int gve_adminq_destroy_rx_queue(struct gve_priv *priv, u32 queue_index)
 {
 	union gve_adminq_command cmd;
-	int err;
 
-	memset(&cmd, 0, sizeof(cmd));
-	cmd.opcode = cpu_to_be32(GVE_ADMINQ_DESTROY_RX_QUEUE);
-	cmd.destroy_rx_queue = (struct gve_adminq_destroy_rx_queue) {
-		.queue_id = cpu_to_be32(queue_index),
-	};
+	gve_adminq_make_destroy_rx_queue_cmd(&cmd, queue_index);
+	return gve_adminq_issue_cmd(priv, &cmd);
+}
 
-	err = gve_adminq_issue_cmd(priv, &cmd);
-	if (err)
-		return err;
+/* Unlike gve_adminq_destroy_rx_queue, this actually rings the doorbell */
+int gve_adminq_destroy_single_rx_queue(struct gve_priv *priv, u32 queue_index)
+{
+	union gve_adminq_command cmd;
 
-	return 0;
+	gve_adminq_make_destroy_rx_queue_cmd(&cmd, queue_index);
+	return gve_adminq_execute_cmd(priv, &cmd);
 }
 
 int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 num_queues)
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h
index beedf2353847..e64f0dbe744d 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.h
+++ b/drivers/net/ethernet/google/gve/gve_adminq.h
@@ -451,7 +451,9 @@ int gve_adminq_configure_device_resources(struct gve_priv *priv,
 int gve_adminq_deconfigure_device_resources(struct gve_priv *priv);
 int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues);
 int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues);
+int gve_adminq_create_single_rx_queue(struct gve_priv *priv, u32 queue_index);
 int gve_adminq_create_rx_queues(struct gve_priv *priv, u32 num_queues);
+int gve_adminq_destroy_single_rx_queue(struct gve_priv *priv, u32 queue_index);
 int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 queue_id);
 int gve_adminq_register_page_list(struct gve_priv *priv,
 				  struct gve_queue_page_list *qpl);
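The split between the new *_single_* helpers and the existing plural ones mirrors the adminq's two submission modes: gve_adminq_issue_cmd() only queues a command for a later kick, while gve_adminq_execute_cmd() rings the doorbell and waits for completion. The fragment below is an illustrative sketch of how a per-queue path could drive the new helpers; the wrapper function shown is hypothetical and not part of this patch.

/* Illustrative sketch only: a hypothetical per-queue recreate built on the
 * new single-queue adminq helpers. Each call issues its command and rings
 * the doorbell on its own, unlike the batched create/destroy paths.
 */
static int gve_example_recreate_rx_queue(struct gve_priv *priv, u32 idx)
{
	int err;

	err = gve_adminq_destroy_single_rx_queue(priv, idx);
	if (err)
		return err;

	/* ...reset and re-post the ring memory for queue 'idx' here... */

	return gve_adminq_create_single_rx_queue(priv, idx);
}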
From patchwork Wed May 1 23:25:43 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13651185
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 1 May 2024 23:25:43 +0000
In-Reply-To: <20240501232549.1327174-1-shailend@google.com>
References: <20240501232549.1327174-1-shailend@google.com>
Message-ID: <20240501232549.1327174-5-shailend@google.com>
Subject: [PATCH net-next v2 04/10] gve: Make gve_turn(up|down) ignore stopped queues
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com,
    hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org,
    pabeni@redhat.com, pkaligineedi@google.com, rushilg@google.com,
    willemb@google.com, ziweixiao@google.com, Shailend Chand

Currently the queues are either all live or all dead, toggling from one
state to the other via the ndo open and stop hooks. The future addition of
single-queue ndo hooks changes this, and thus gve_turnup and gve_turndown
should evolve to account for a state where some queues are live and some
aren't.

Tested-by: Mina Almasry
Reviewed-by: Praveen Kaligineedi
Reviewed-by: Harshitha Ramamurthy
Signed-off-by: Shailend Chand
---
 drivers/net/ethernet/google/gve/gve_main.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 61039e3dd2bb..469a914c71d6 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -1937,12 +1937,16 @@ static void gve_turndown(struct gve_priv *priv)
 		int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx);
 		struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
 
+		if (!gve_tx_was_added_to_block(priv, idx))
+			continue;
 		napi_disable(&block->napi);
 	}
 	for (idx = 0; idx < priv->rx_cfg.num_queues; idx++) {
 		int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
 		struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
 
+		if (!gve_rx_was_added_to_block(priv, idx))
+			continue;
 		napi_disable(&block->napi);
 	}
 
@@ -1965,6 +1969,9 @@ static void gve_turnup(struct gve_priv *priv)
 		int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx);
 		struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
 
+		if (!gve_tx_was_added_to_block(priv, idx))
+			continue;
+
 		napi_enable(&block->napi);
 		if (gve_is_gqi(priv)) {
 			iowrite32be(0, gve_irq_doorbell(priv, block));
@@ -1977,6 +1984,9 @@ static void gve_turnup(struct gve_priv *priv)
 		int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
 		struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
 
+		if (!gve_rx_was_added_to_block(priv, idx))
+			continue;
+
 		napi_enable(&block->napi);
 		if (gve_is_gqi(priv)) {
 			iowrite32be(0, gve_irq_doorbell(priv, block));
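The guards matter once a single queue can be stopped while the rest of the interface stays up. The comment sketch below is illustrative only, and the per-queue hook it names is hypothetical at this point in the series; it spells out the situation the new checks protect against.

/* Illustrative only: the situation the new checks guard against.
 *
 *   gve_rx_queue_stop(dev, mem, i);  // hypothetical per-queue hook: queue i's
 *                                    // napi is disabled and the queue is
 *                                    // removed from its notify block
 *   ...
 *   gve_turndown(priv);              // must skip queue i, whose napi is
 *                                    // already disabled and which was never
 *                                    // re-added to its block
 *   gve_turnup(priv);                // likewise must not napi_enable() a
 *                                    // queue that is still stopped
 */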
From patchwork Wed May 1 23:25:44 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13651186
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 1 May 2024 23:25:44 +0000
In-Reply-To: <20240501232549.1327174-1-shailend@google.com>
References: <20240501232549.1327174-1-shailend@google.com>
Message-ID: <20240501232549.1327174-6-shailend@google.com>
Subject: [PATCH net-next v2 05/10] gve: Make gve_turnup work for nonempty queues
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com,
    hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org,
    pabeni@redhat.com, pkaligineedi@google.com, rushilg@google.com,
    willemb@google.com, ziweixiao@google.com, Shailend Chand
gVNIC has a requirement that all queues have to be quiesced before any
queue is operated on (created or destroyed). To enable the implementation
of future ndo hooks that work on a single queue, we need to evolve
gve_turnup to account for queues already having some unprocessed
descriptors in the ring.

Say rxq 4 is being stopped and started via the queue api. Due to gve's
requirement of quiescence, queues 0 through 3 are not processing their
rings while queue 4 is being toggled. Once they are made live, these
queues need to be poked to cause them to check their rings for descriptors
that were written during their brief period of quiescence.

Tested-by: Mina Almasry
Reviewed-by: Praveen Kaligineedi
Reviewed-by: Harshitha Ramamurthy
Signed-off-by: Shailend Chand
---
 drivers/net/ethernet/google/gve/gve_main.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 469a914c71d6..ef902b72b9a9 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -1979,6 +1979,13 @@ static void gve_turnup(struct gve_priv *priv)
 			gve_set_itr_coalesce_usecs_dqo(priv, block,
 						       priv->tx_coalesce_usecs);
 		}
+
+		/* Any descs written by the NIC before this barrier will be
+		 * handled by the one-off napi schedule below. Whereas any
+		 * descs after the barrier will generate interrupts.
+		 */
+		mb();
+		napi_schedule(&block->napi);
 	}
 	for (idx = 0; idx < priv->rx_cfg.num_queues; idx++) {
 		int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
@@ -1994,6 +2001,13 @@ static void gve_turnup(struct gve_priv *priv)
 			gve_set_itr_coalesce_usecs_dqo(priv, block,
 						       priv->rx_coalesce_usecs);
 		}
+
+		/* Any descs written by the NIC before this barrier will be
+		 * handled by the one-off napi schedule below. Whereas any
+		 * descs after the barrier will generate interrupts.
+		 */
+		mb();
+		napi_schedule(&block->napi);
 	}
 
 	gve_set_napi_enabled(priv);
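The comment added in both hunks compresses an ordering argument that is easy to miss. Restated below as an illustrative annotation that mirrors the hunk rather than adding any new driver code:

/* Restatement of the ordering argument (illustrative only):
 *
 *   napi_enable(...); irq unmasked;   // from the lines just above the hunk
 *   mb();                             // descriptors the NIC posted before this
 *                                     // point may predate the irq being armed
 *                                     // and so may never raise an interrupt
 *   napi_schedule(&block->napi);      // the one-off poll drains those descs;
 *                                     // descs posted after the barrier will
 *                                     // generate interrupts of their own
 */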
From patchwork Wed May 1 23:25:45 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13651187
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 1 May 2024 23:25:45 +0000
In-Reply-To: <20240501232549.1327174-1-shailend@google.com>
References: <20240501232549.1327174-1-shailend@google.com>
Message-ID: <20240501232549.1327174-7-shailend@google.com>
Subject: [PATCH net-next v2 06/10] gve: Avoid rescheduling napi if on wrong cpu
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com,
    hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org,
    pabeni@redhat.com, pkaligineedi@google.com, rushilg@google.com,
    willemb@google.com, ziweixiao@google.com, Shailend Chand

In order to make possible the implementation of per-queue ndo hooks,
gve_turnup was changed in a previous patch to account for queues already
having some unprocessed descriptors: it does a one-off napi_schedule to
handle them. If consistently high traffic persists in the immediate
aftermath of this, the poll routine for a queue can get "stuck" on the cpu
on which the ndo hooks ran, instead of the cpu its irq has affinity with.

This situation is exacerbated by the fact that the ndo hooks for all the
queues are invoked on the same cpu, potentially causing all the napi poll
routines to end up residing on the same cpu.

A self-correcting mechanism in the poll method itself solves this problem.
Tested-by: Mina Almasry
Reviewed-by: Praveen Kaligineedi
Reviewed-by: Harshitha Ramamurthy
Signed-off-by: Shailend Chand
---
 drivers/net/ethernet/google/gve/gve.h      |  1 +
 drivers/net/ethernet/google/gve/gve_main.c | 33 ++++++++++++++++++++--
 2 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 53b5244dc7bc..f27a6d5fbecf 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -610,6 +610,7 @@ struct gve_notify_block {
 	struct gve_priv *priv;
 	struct gve_tx_ring *tx; /* tx rings on this block */
 	struct gve_rx_ring *rx; /* rx rings on this block */
+	u32 irq;
 };
 
 /* Tracks allowed and current queue settings */
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index ef902b72b9a9..79b7a677ec0b 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -253,6 +254,18 @@ static irqreturn_t gve_intr_dqo(int irq, void *arg)
 	return IRQ_HANDLED;
 }
 
+static int gve_is_napi_on_home_cpu(struct gve_priv *priv, u32 irq)
+{
+	int cpu_curr = smp_processor_id();
+	const struct cpumask *aff_mask;
+
+	aff_mask = irq_get_effective_affinity_mask(irq);
+	if (unlikely(!aff_mask))
+		return 1;
+
+	return cpumask_test_cpu(cpu_curr, aff_mask);
+}
+
 int gve_napi_poll(struct napi_struct *napi, int budget)
 {
 	struct gve_notify_block *block;
@@ -322,8 +335,21 @@ int gve_napi_poll_dqo(struct napi_struct *napi, int budget)
 		reschedule |= work_done == budget;
 	}
 
-	if (reschedule)
-		return budget;
+	if (reschedule) {
+		/* Reschedule by returning budget only if already on the correct
+		 * cpu.
+		 */
+		if (likely(gve_is_napi_on_home_cpu(priv, block->irq)))
+			return budget;
+
+		/* If not on the cpu with which this queue's irq has affinity
+		 * with, we avoid rescheduling napi and arm the irq instead so
+		 * that napi gets rescheduled back eventually onto the right
+		 * cpu.
+		 */
+		if (work_done == budget)
+			work_done--;
+	}
 
 	if (likely(napi_complete_done(napi, work_done))) {
 		/* Enable interrupts again.
@@ -428,6 +454,7 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv)
 				  "Failed to receive msix vector %d\n", i);
 			goto abort_with_some_ntfy_blocks;
 		}
+		block->irq = priv->msix_vectors[msix_idx].vector;
 		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
 				      get_cpu_mask(i % active_cpus));
 		block->irq_db_index = &priv->irq_db_indices[i].index;
@@ -441,6 +468,7 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv)
 		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
 				      NULL);
 		free_irq(priv->msix_vectors[msix_idx].vector, block);
+		block->irq = 0;
 	}
 	kvfree(priv->ntfy_blocks);
 	priv->ntfy_blocks = NULL;
@@ -474,6 +502,7 @@ static void gve_free_notify_blocks(struct gve_priv *priv)
 		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
 				      NULL);
 		free_irq(priv->msix_vectors[msix_idx].vector, block);
+		block->irq = 0;
 	}
 	free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
 	kvfree(priv->ntfy_blocks);
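What the two exit paths in gve_napi_poll_dqo do to the placement of the next poll is summarized below; this is an illustrative annotation that mirrors the logic above rather than new driver code.

/* Illustrative summary of the two exit paths added in this patch:
 *
 *   return budget;                        // napi stays scheduled and keeps
 *                                         // polling on the current cpu; only
 *                                         // taken when that cpu is in the
 *                                         // irq's effective affinity mask
 *
 *   napi_complete_done(napi, work_done);  // with work_done forced below
 *                                         // budget, polling completes and the
 *                                         // irq is re-armed, so the next poll
 *                                         // is kicked off from the irq's
 *                                         // home cpu
 */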
From patchwork Wed May 1 23:25:46 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13651188
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 1 May 2024 23:25:46 +0000
In-Reply-To: <20240501232549.1327174-1-shailend@google.com>
References: <20240501232549.1327174-1-shailend@google.com>
Message-ID: <20240501232549.1327174-8-shailend@google.com>
Subject: [PATCH net-next v2 07/10] gve: Reset Rx ring state in the ring-stop funcs
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com,
    hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org,
    pabeni@redhat.com, pkaligineedi@google.com, rushilg@google.com,
    willemb@google.com, ziweixiao@google.com, Shailend Chand

This does not fix any existing bug. In anticipation of the ndo queue api
hooks that alloc/free/start/stop a single Rx queue, the already existing
per-queue stop functions are being made more robust. Specifically for this
use case:

  rx_queue_n.stop() + rx_queue_n.start()

Note that this is not the use case being used in devmem tcp (the first
place these new ndo hooks would be used).
There the use case is:

  new_queue.alloc() + old_queue.stop() + new_queue.start() + old_queue.free()

Tested-by: Mina Almasry
Reviewed-by: Praveen Kaligineedi
Reviewed-by: Harshitha Ramamurthy
Signed-off-by: Shailend Chand
---
 drivers/net/ethernet/google/gve/gve_rx.c     |  48 +++++++--
 drivers/net/ethernet/google/gve/gve_rx_dqo.c | 102 +++++++++++++++----
 2 files changed, 120 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 0a3f88170411..79c1d8f63621 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -53,6 +53,41 @@ static void gve_rx_unfill_pages(struct gve_priv *priv,
 	rx->data.page_info = NULL;
 }
 
+static void gve_rx_ctx_clear(struct gve_rx_ctx *ctx)
+{
+	ctx->skb_head = NULL;
+	ctx->skb_tail = NULL;
+	ctx->total_size = 0;
+	ctx->frag_cnt = 0;
+	ctx->drop_pkt = false;
+}
+
+static void gve_rx_init_ring_state_gqi(struct gve_rx_ring *rx)
+{
+	rx->desc.seqno = 1;
+	rx->cnt = 0;
+	gve_rx_ctx_clear(&rx->ctx);
+}
+
+static void gve_rx_reset_ring_gqi(struct gve_priv *priv, int idx)
+{
+	struct gve_rx_ring *rx = &priv->rx[idx];
+	const u32 slots = priv->rx_desc_cnt;
+	size_t size;
+
+	/* Reset desc ring */
+	if (rx->desc.desc_ring) {
+		size = slots * sizeof(rx->desc.desc_ring[0]);
+		memset(rx->desc.desc_ring, 0, size);
+	}
+
+	/* Reset q_resources */
+	if (rx->q_resources)
+		memset(rx->q_resources, 0, sizeof(*rx->q_resources));
+
+	gve_rx_init_ring_state_gqi(rx);
+}
+
 void gve_rx_stop_ring_gqi(struct gve_priv *priv, int idx)
 {
 	int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
@@ -62,6 +97,7 @@ void gve_rx_stop_ring_gqi(struct gve_priv *priv, int idx)
 
 	gve_remove_napi(priv, ntfy_idx);
 	gve_rx_remove_from_block(priv, idx);
+	gve_rx_reset_ring_gqi(priv, idx);
 }
 
 static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
@@ -222,15 +258,6 @@ static int gve_rx_prefill_pages(struct gve_rx_ring *rx,
 	return err;
 }
 
-static void gve_rx_ctx_clear(struct gve_rx_ctx *ctx)
-{
-	ctx->skb_head = NULL;
-	ctx->skb_tail = NULL;
-	ctx->total_size = 0;
-	ctx->frag_cnt = 0;
-	ctx->drop_pkt = false;
-}
-
 void gve_rx_start_ring_gqi(struct gve_priv *priv, int idx)
 {
 	int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
@@ -309,9 +336,8 @@ static int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
 		err = -ENOMEM;
 		goto abort_with_q_resources;
 	}
-	rx->cnt = 0;
 	rx->db_threshold = slots / 2;
-	rx->desc.seqno = 1;
+	gve_rx_init_ring_state_gqi(rx);
 
 	rx->packet_buffer_size = GVE_DEFAULT_RX_BUFFER_SIZE;
 	gve_rx_ctx_clear(&rx->ctx);
 
diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
index 53fd2d87233f..7c2980c212f4 100644
--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
@@ -211,6 +211,82 @@ static void gve_rx_free_hdr_bufs(struct gve_priv *priv, struct gve_rx_ring *rx)
 	}
 }
 
+static void gve_rx_init_ring_state_dqo(struct gve_rx_ring *rx,
+				       const u32 buffer_queue_slots,
+				       const u32 completion_queue_slots)
+{
+	int i;
+
+	/* Set buffer queue state */
+	rx->dqo.bufq.mask = buffer_queue_slots - 1;
+	rx->dqo.bufq.head = 0;
+	rx->dqo.bufq.tail = 0;
+
+	/* Set completion queue state */
+	rx->dqo.complq.num_free_slots = completion_queue_slots;
+	rx->dqo.complq.mask = completion_queue_slots - 1;
+	rx->dqo.complq.cur_gen_bit = 0;
+	rx->dqo.complq.head = 0;
+
+	/* Set RX SKB context */
+	rx->ctx.skb_head = NULL;
+	rx->ctx.skb_tail = NULL;
+
+	/* Set up linked list of buffer IDs */
+	if (rx->dqo.buf_states) {
+		for (i = 0; i < rx->dqo.num_buf_states - 1; i++)
+			rx->dqo.buf_states[i].next = i + 1;
+		rx->dqo.buf_states[rx->dqo.num_buf_states - 1].next = -1;
+	}
+
+	rx->dqo.free_buf_states = 0;
+	rx->dqo.recycled_buf_states.head = -1;
+	rx->dqo.recycled_buf_states.tail = -1;
+	rx->dqo.used_buf_states.head = -1;
+	rx->dqo.used_buf_states.tail = -1;
+}
+
+static void gve_rx_reset_ring_dqo(struct gve_priv *priv, int idx)
+{
+	struct gve_rx_ring *rx = &priv->rx[idx];
+	size_t size;
+	int i;
+
+	const u32 buffer_queue_slots = priv->rx_desc_cnt;
+	const u32 completion_queue_slots = priv->rx_desc_cnt;
+
+	/* Reset buffer queue */
+	if (rx->dqo.bufq.desc_ring) {
+		size = sizeof(rx->dqo.bufq.desc_ring[0]) *
+			buffer_queue_slots;
+		memset(rx->dqo.bufq.desc_ring, 0, size);
+	}
+
+	/* Reset completion queue */
+	if (rx->dqo.complq.desc_ring) {
+		size = sizeof(rx->dqo.complq.desc_ring[0]) *
+			completion_queue_slots;
+		memset(rx->dqo.complq.desc_ring, 0, size);
+	}
+
+	/* Reset q_resources */
+	if (rx->q_resources)
+		memset(rx->q_resources, 0, sizeof(*rx->q_resources));
+
+	/* Reset buf states */
+	if (rx->dqo.buf_states) {
+		for (i = 0; i < rx->dqo.num_buf_states; i++) {
+			struct gve_rx_buf_state_dqo *bs = &rx->dqo.buf_states[i];
+
+			if (bs->page_info.page)
+				gve_free_page_dqo(priv, bs, !rx->dqo.qpl);
+		}
+	}
+
+	gve_rx_init_ring_state_dqo(rx, buffer_queue_slots,
+				   completion_queue_slots);
+}
+
 void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx)
 {
 	int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
@@ -220,6 +296,7 @@ void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx)
 
 	gve_remove_napi(priv, ntfy_idx);
 	gve_rx_remove_from_block(priv, idx);
+	gve_rx_reset_ring_dqo(priv, idx);
 }
 
 static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
@@ -273,10 +350,10 @@ static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
 
 	netif_dbg(priv, drv, priv->dev, "freed rx ring %d\n", idx);
 }
 
-static int gve_rx_alloc_hdr_bufs(struct gve_priv *priv, struct gve_rx_ring *rx)
+static int gve_rx_alloc_hdr_bufs(struct gve_priv *priv, struct gve_rx_ring *rx,
+				 const u32 buf_count)
 {
 	struct device *hdev = &priv->pdev->dev;
-	int buf_count = rx->dqo.bufq.mask + 1;
 
 	rx->dqo.hdr_bufs.data = dma_alloc_coherent(hdev, priv->header_buf_size * buf_count,
 						   &rx->dqo.hdr_bufs.addr, GFP_KERNEL);
@@ -301,7 +378,6 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
 {
 	struct device *hdev = &priv->pdev->dev;
 	size_t size;
-	int i;
 
 	const u32 buffer_queue_slots = cfg->ring_size;
 	const u32 completion_queue_slots = cfg->ring_size;
@@ -311,11 +387,6 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
 	memset(rx, 0, sizeof(*rx));
 	rx->gve = priv;
 	rx->q_num = idx;
-	rx->dqo.bufq.mask = buffer_queue_slots - 1;
-	rx->dqo.complq.num_free_slots = completion_queue_slots;
-	rx->dqo.complq.mask = completion_queue_slots - 1;
-	rx->ctx.skb_head = NULL;
-	rx->ctx.skb_tail = NULL;
 
 	rx->dqo.num_buf_states = cfg->raw_addressing ?
min_t(s16, S16_MAX, buffer_queue_slots * 4) : @@ -328,19 +399,9 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv, /* Allocate header buffers for header-split */ if (cfg->enable_header_split) - if (gve_rx_alloc_hdr_bufs(priv, rx)) + if (gve_rx_alloc_hdr_bufs(priv, rx, buffer_queue_slots)) goto err; - /* Set up linked list of buffer IDs */ - for (i = 0; i < rx->dqo.num_buf_states - 1; i++) - rx->dqo.buf_states[i].next = i + 1; - - rx->dqo.buf_states[rx->dqo.num_buf_states - 1].next = -1; - rx->dqo.recycled_buf_states.head = -1; - rx->dqo.recycled_buf_states.tail = -1; - rx->dqo.used_buf_states.head = -1; - rx->dqo.used_buf_states.tail = -1; - /* Allocate RX completion queue */ size = sizeof(rx->dqo.complq.desc_ring[0]) * completion_queue_slots; @@ -368,6 +429,9 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv, if (!rx->q_resources) goto err; + gve_rx_init_ring_state_dqo(rx, buffer_queue_slots, + completion_queue_slots); + return 0; err:
From patchwork Wed May 1 23:25:47 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shailend Chand X-Patchwork-Id: 13651189 X-Patchwork-Delegate: kuba@kernel.org Date: Wed, 1 May 2024 23:25:47 +0000 In-Reply-To: <20240501232549.1327174-1-shailend@google.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240501232549.1327174-1-shailend@google.com> X-Mailer: git-send-email 2.45.0.rc0.197.gbae5840b3b-goog Message-ID: <20240501232549.1327174-9-shailend@google.com> Subject: [PATCH net-next v2 08/10] gve: Account for stopped queues when reading NIC stats From: Shailend Chand To: netdev@vger.kernel.org Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com, hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org, pabeni@redhat.com, pkaligineedi@google.com, rushilg@google.com, willemb@google.com, ziweixiao@google.com, Shailend Chand X-Patchwork-Delegate: kuba@kernel.org
We now account for the fact that the NIC might send us stats for a subset of queues. Without this change, gve_get_ethtool_stats might make an invalid access on the priv->stats_report->stats array.
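The defensive pattern the diff below adds can be summarized in a short, self-contained sketch: every slot of the qid-to-stats-index table starts at -1, queues that were never brought up are counted as stopped, and queue ids read back from the device report are bounds-checked before they are used as an index. The types, array sizes and sample values here are simplified stand-ins for the driver's real report parsing, not code from the patch.

/* Hedged sketch of the qid -> stats-index mapping; simplified stand-ins
 * for priv->stats_report parsing and gve_rx_was_added_to_block().
 */
#include <stdio.h>

#define NUM_RXQS 4

struct report_entry {
	int queue_id;   /* queue id as reported by the NIC */
	int stats_idx;  /* start index of that queue's stats in the report */
};

int main(void)
{
	/* Assume queue 2 is stopped, so the NIC reports only three queues. */
	int queue_is_running[NUM_RXQS] = { 1, 1, 0, 1 };
	struct report_entry report[] = {
		{ .queue_id = 0, .stats_idx = 0 },
		{ .queue_id = 1, .stats_idx = 2 },
		{ .queue_id = 3, .stats_idx = 4 },
	};
	int rx_qid_to_stats_idx[NUM_RXQS];
	int num_stopped_rxqs = 0;
	int i;

	/* Default every slot to -1 and count stopped queues, as the patch does. */
	for (i = 0; i < NUM_RXQS; i++) {
		rx_qid_to_stats_idx[i] = -1;
		if (!queue_is_running[i])
			num_stopped_rxqs++;
	}

	/* Bounds-check ids coming from the device before using them. */
	for (i = 0; i < (int)(sizeof(report) / sizeof(report[0])); i++) {
		int qid = report[i].queue_id;

		if (qid < 0 || qid >= NUM_RXQS)
			continue; /* invalid queue id in NIC stats: skip it */
		rx_qid_to_stats_idx[qid] = report[i].stats_idx;
	}

	for (i = 0; i < NUM_RXQS; i++)
		printf("rxq %d -> stats idx %d\n", i, rx_qid_to_stats_idx[i]);
	printf("stopped rx queues: %d\n", num_stopped_rxqs);
	return 0;
}

With this layout, a ring whose slot is still -1 simply has its NIC stats skipped instead of indexing past the end of the report, which is the invalid access the commit message describes.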
Tested-by: Mina Almasry Reviewed-by: Praveen Kaligineedi Reviewed-by: Harshitha Ramamurthy Signed-off-by: Shailend Chand --- drivers/net/ethernet/google/gve/gve_ethtool.c | 41 ++++++++++++++++--- 1 file changed, 35 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c index bd7632eed776..a606670a9a39 100644 --- a/drivers/net/ethernet/google/gve/gve_ethtool.c +++ b/drivers/net/ethernet/google/gve/gve_ethtool.c @@ -8,6 +8,7 @@ #include "gve.h" #include "gve_adminq.h" #include "gve_dqo.h" +#include "gve_utils.h" static void gve_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *info) @@ -165,6 +166,8 @@ gve_get_ethtool_stats(struct net_device *netdev, struct stats *report_stats; int *rx_qid_to_stats_idx; int *tx_qid_to_stats_idx; + int num_stopped_rxqs = 0; + int num_stopped_txqs = 0; struct gve_priv *priv; bool skip_nic_stats; unsigned int start; @@ -181,12 +184,23 @@ gve_get_ethtool_stats(struct net_device *netdev, sizeof(int), GFP_KERNEL); if (!rx_qid_to_stats_idx) return; + for (ring = 0; ring < priv->rx_cfg.num_queues; ring++) { + rx_qid_to_stats_idx[ring] = -1; + if (!gve_rx_was_added_to_block(priv, ring)) + num_stopped_rxqs++; + } tx_qid_to_stats_idx = kmalloc_array(num_tx_queues, sizeof(int), GFP_KERNEL); if (!tx_qid_to_stats_idx) { kfree(rx_qid_to_stats_idx); return; } + for (ring = 0; ring < num_tx_queues; ring++) { + tx_qid_to_stats_idx[ring] = -1; + if (!gve_tx_was_added_to_block(priv, ring)) + num_stopped_txqs++; + } + for (rx_pkts = 0, rx_bytes = 0, rx_hsplit_pkt = 0, rx_skb_alloc_fail = 0, rx_buf_alloc_fail = 0, rx_desc_err_dropped_pkt = 0, rx_hsplit_unsplit_pkt = 0, @@ -260,7 +274,13 @@ gve_get_ethtool_stats(struct net_device *netdev, /* For rx cross-reporting stats, start from nic rx stats in report */ base_stats_idx = GVE_TX_STATS_REPORT_NUM * num_tx_queues + GVE_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues; - max_stats_idx = NIC_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues + + /* The boundary between driver stats and NIC stats shifts if there are + * stopped queues. 
+ */ + base_stats_idx += NIC_RX_STATS_REPORT_NUM * num_stopped_rxqs + + NIC_TX_STATS_REPORT_NUM * num_stopped_txqs; + max_stats_idx = NIC_RX_STATS_REPORT_NUM * + (priv->rx_cfg.num_queues - num_stopped_rxqs) + base_stats_idx; /* Preprocess the stats report for rx, map queue id to start index */ skip_nic_stats = false; @@ -274,6 +294,10 @@ gve_get_ethtool_stats(struct net_device *netdev, skip_nic_stats = true; break; } + if (queue_id < 0 || queue_id >= priv->rx_cfg.num_queues) { + net_err_ratelimited("Invalid rxq id in NIC stats\n"); + continue; + } rx_qid_to_stats_idx[queue_id] = stats_idx; } /* walk RX rings */ @@ -308,11 +332,11 @@ gve_get_ethtool_stats(struct net_device *netdev, data[i++] = rx->rx_copybreak_pkt; data[i++] = rx->rx_copied_pkt; /* stats from NIC */ - if (skip_nic_stats) { + stats_idx = rx_qid_to_stats_idx[ring]; + if (skip_nic_stats || stats_idx < 0) { /* skip NIC rx stats */ i += NIC_RX_STATS_REPORT_NUM; } else { - stats_idx = rx_qid_to_stats_idx[ring]; for (j = 0; j < NIC_RX_STATS_REPORT_NUM; j++) { u64 value = be64_to_cpu(report_stats[stats_idx + j].value); @@ -338,7 +362,8 @@ gve_get_ethtool_stats(struct net_device *netdev, /* For tx cross-reporting stats, start from nic tx stats in report */ base_stats_idx = max_stats_idx; - max_stats_idx = NIC_TX_STATS_REPORT_NUM * num_tx_queues + + max_stats_idx = NIC_TX_STATS_REPORT_NUM * + (num_tx_queues - num_stopped_txqs) + max_stats_idx; /* Preprocess the stats report for tx, map queue id to start index */ skip_nic_stats = false; @@ -352,6 +377,10 @@ gve_get_ethtool_stats(struct net_device *netdev, skip_nic_stats = true; break; } + if (queue_id < 0 || queue_id >= num_tx_queues) { + net_err_ratelimited("Invalid txq id in NIC stats\n"); + continue; + } tx_qid_to_stats_idx[queue_id] = stats_idx; } /* walk TX rings */ @@ -383,11 +412,11 @@ gve_get_ethtool_stats(struct net_device *netdev, data[i++] = gve_tx_load_event_counter(priv, tx); data[i++] = tx->dma_mapping_error; /* stats from NIC */ - if (skip_nic_stats) { + stats_idx = tx_qid_to_stats_idx[ring]; + if (skip_nic_stats || stats_idx < 0) { /* skip NIC tx stats */ i += NIC_TX_STATS_REPORT_NUM; } else { - stats_idx = tx_qid_to_stats_idx[ring]; for (j = 0; j < NIC_TX_STATS_REPORT_NUM; j++) { u64 value = be64_to_cpu(report_stats[stats_idx + j].value); From patchwork Wed May 1 23:25:48 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shailend Chand X-Patchwork-Id: 13651190 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-yw1-f201.google.com (mail-yw1-f201.google.com [209.85.128.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C3C72168B1F for ; Wed, 1 May 2024 23:26:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714605975; cv=none; b=MhHjK3NxDX1C297wonyBxqmK8KrBqayPbvNUNf2W8vA3pDx/+cZ6G5wZf8yO/AWSeyF11opvnE469b0F0VNDEm5CzFrhc42lpx+Z8ltHhxfMJsOjo2iA3waLleH8P8Lvay2J6DsIedsJfA1MA3JvdqmvaOWoJEfv4jMbFCw8E0E= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714605975; c=relaxed/simple; bh=/joToHSomABW+06vWRtBrMH4B6SnFVzxzlzpu9HmI18=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; 
Date: Wed, 1 May 2024 23:25:48 +0000 In-Reply-To: <20240501232549.1327174-1-shailend@google.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240501232549.1327174-1-shailend@google.com> X-Mailer: git-send-email 2.45.0.rc0.197.gbae5840b3b-goog Message-ID: <20240501232549.1327174-10-shailend@google.com> Subject: [PATCH net-next v2 09/10] gve: Alloc and free QPLs with the rings From: Shailend Chand To: netdev@vger.kernel.org Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com, hramamurthy@google.com, jeroendb@google.com,
kuba@kernel.org, pabeni@redhat.com, pkaligineedi@google.com, rushilg@google.com, willemb@google.com, ziweixiao@google.com, Shailend Chand X-Patchwork-Delegate: kuba@kernel.org Every tx and rx ring has its own queue-page-list (QPL) that serves as the bounce buffer. Previously we were allocating QPLs for all queues before the queues themselves were allocated and later associating a QPL with a queue. This is avoidable complexity: it is much more natural for each queue to allocate and free its own QPL. Moreover, the advent of new queue-manipulating ndo hooks make it hard to keep things as is: we would need to transfer a QPL from an old queue to a new queue, and that is unpleasant. Tested-by: Mina Almasry Reviewed-by: Praveen Kaligineedi Reviewed-by: Harshitha Ramamurthy Signed-off-by: Shailend Chand --- drivers/net/ethernet/google/gve/gve.h | 30 +- drivers/net/ethernet/google/gve/gve_ethtool.c | 7 +- drivers/net/ethernet/google/gve/gve_main.c | 343 +++++------------- drivers/net/ethernet/google/gve/gve_rx.c | 43 ++- drivers/net/ethernet/google/gve/gve_rx_dqo.c | 23 +- drivers/net/ethernet/google/gve/gve_tx.c | 33 +- drivers/net/ethernet/google/gve/gve_tx_dqo.c | 23 +- 7 files changed, 171 insertions(+), 331 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index f27a6d5fbecf..9e0a433c991c 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -638,26 +638,10 @@ struct gve_ptype_lut { struct gve_ptype ptypes[GVE_NUM_PTYPES]; }; -/* Parameters for allocating queue page lists */ -struct gve_qpls_alloc_cfg { - struct gve_queue_config *tx_cfg; - struct gve_queue_config *rx_cfg; - - u16 num_xdp_queues; - bool raw_addressing; - bool is_gqi; - - /* Allocated resources are returned here */ - struct gve_queue_page_list *qpls; -}; - /* Parameters for allocating resources for tx queues */ struct gve_tx_alloc_rings_cfg { struct gve_queue_config *qcfg; - /* qpls must already be allocated */ - struct gve_queue_page_list *qpls; - u16 ring_size; u16 start_idx; u16 num_rings; @@ -673,9 +657,6 @@ struct gve_rx_alloc_rings_cfg { struct gve_queue_config *qcfg; struct gve_queue_config *qcfg_tx; - /* qpls must already be allocated */ - struct gve_queue_page_list *qpls; - u16 ring_size; u16 packet_buffer_size; bool raw_addressing; @@ -701,7 +682,6 @@ struct gve_priv { struct net_device *dev; struct gve_tx_ring *tx; /* array of tx_cfg.num_queues */ struct gve_rx_ring *rx; /* array of rx_cfg.num_queues */ - struct gve_queue_page_list *qpls; /* array of num qpls */ struct gve_notify_block *ntfy_blocks; /* array of num_ntfy_blks */ struct gve_irq_db *irq_db_indices; /* array of num_ntfy_blks */ dma_addr_t irq_db_indices_bus; @@ -1025,7 +1005,6 @@ static inline u32 gve_rx_qpl_id(struct gve_priv *priv, int rx_qid) return priv->tx_cfg.max_queues + rx_qid; } -/* Returns the index into priv->qpls where a certain rx queue's QPL resides */ static inline u32 gve_get_rx_qpl_id(const struct gve_queue_config *tx_cfg, int rx_qid) { return tx_cfg->max_queues + rx_qid; @@ -1036,7 +1015,6 @@ static inline u32 gve_tx_start_qpl_id(struct gve_priv *priv) return gve_tx_qpl_id(priv, 0); } -/* Returns the index into priv->qpls where the first rx queue's QPL resides */ static inline u32 gve_rx_start_qpl_id(const struct gve_queue_config *tx_cfg) { return gve_get_rx_qpl_id(tx_cfg, 0); @@ -1090,6 +1068,12 @@ int gve_alloc_page(struct gve_priv *priv, struct device *dev, enum dma_data_direction, gfp_t gfp_flags); void gve_free_page(struct device *dev, 
struct page *page, dma_addr_t dma, enum dma_data_direction); +/* qpls */ +struct gve_queue_page_list *gve_alloc_queue_page_list(struct gve_priv *priv, + u32 id, int pages); +void gve_free_queue_page_list(struct gve_priv *priv, + struct gve_queue_page_list *qpl, + u32 id); /* tx handling */ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev); int gve_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, @@ -1126,11 +1110,9 @@ int gve_set_hsplit_config(struct gve_priv *priv, u8 tcp_data_split); void gve_schedule_reset(struct gve_priv *priv); int gve_reset(struct gve_priv *priv, bool attempt_teardown); void gve_get_curr_alloc_cfgs(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *qpls_alloc_cfg, struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, struct gve_rx_alloc_rings_cfg *rx_alloc_cfg); int gve_adjust_config(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *qpls_alloc_cfg, struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, struct gve_rx_alloc_rings_cfg *rx_alloc_cfg); int gve_adjust_queues(struct gve_priv *priv, diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c index a606670a9a39..156b7e128b53 100644 --- a/drivers/net/ethernet/google/gve/gve_ethtool.c +++ b/drivers/net/ethernet/google/gve/gve_ethtool.c @@ -538,20 +538,17 @@ static int gve_adjust_ring_sizes(struct gve_priv *priv, { struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; - struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0}; int err; /* get current queue configuration */ - gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg); /* copy over the new ring_size from ethtool */ tx_alloc_cfg.ring_size = new_tx_desc_cnt; rx_alloc_cfg.ring_size = new_rx_desc_cnt; if (netif_running(priv->dev)) { - err = gve_adjust_config(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg); if (err) return err; } diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 79b7a677ec0b..e22ac764ec4f 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -611,37 +611,36 @@ static void gve_teardown_device_resources(struct gve_priv *priv) gve_clear_device_resources_ok(priv); } -static int gve_unregister_qpl(struct gve_priv *priv, u32 i) +static int gve_unregister_qpl(struct gve_priv *priv, + struct gve_queue_page_list *qpl) { int err; - err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id); + if (!qpl) + return 0; + + err = gve_adminq_unregister_page_list(priv, qpl->id); if (err) { netif_err(priv, drv, priv->dev, "Failed to unregister queue page list %d\n", - priv->qpls[i].id); + qpl->id); return err; } - priv->num_registered_pages -= priv->qpls[i].num_entries; + priv->num_registered_pages -= qpl->num_entries; return 0; } -static int gve_register_qpl(struct gve_priv *priv, u32 i) +static int gve_register_qpl(struct gve_priv *priv, + struct gve_queue_page_list *qpl) { - int num_rx_qpls; int pages; int err; - /* Rx QPLs succeed Tx QPLs in the priv->qpls array. 
*/ - num_rx_qpls = gve_num_rx_qpls(&priv->rx_cfg, gve_is_qpl(priv)); - if (i >= gve_rx_start_qpl_id(&priv->tx_cfg) + num_rx_qpls) { - netif_err(priv, drv, priv->dev, - "Cannot register nonexisting QPL at index %d\n", i); - return -EINVAL; - } + if (!qpl) + return 0; - pages = priv->qpls[i].num_entries; + pages = qpl->num_entries; if (pages + priv->num_registered_pages > priv->max_registered_pages) { netif_err(priv, drv, priv->dev, @@ -651,14 +650,11 @@ static int gve_register_qpl(struct gve_priv *priv, u32 i) return -EINVAL; } - err = gve_adminq_register_page_list(priv, &priv->qpls[i]); + err = gve_adminq_register_page_list(priv, qpl); if (err) { netif_err(priv, drv, priv->dev, "failed to register queue page list %d\n", - priv->qpls[i].id); - /* This failure will trigger a reset - no need to clean - * up - */ + qpl->id); return err; } @@ -666,6 +662,26 @@ static int gve_register_qpl(struct gve_priv *priv, u32 i) return 0; } +static struct gve_queue_page_list *gve_tx_get_qpl(struct gve_priv *priv, int idx) +{ + struct gve_tx_ring *tx = &priv->tx[idx]; + + if (gve_is_gqi(priv)) + return tx->tx_fifo.qpl; + else + return tx->dqo.qpl; +} + +static struct gve_queue_page_list *gve_rx_get_qpl(struct gve_priv *priv, int idx) +{ + struct gve_rx_ring *rx = &priv->rx[idx]; + + if (gve_is_gqi(priv)) + return rx->data.qpl; + else + return rx->dqo.qpl; +} + static int gve_register_xdp_qpls(struct gve_priv *priv) { int start_id; @@ -674,7 +690,7 @@ static int gve_register_xdp_qpls(struct gve_priv *priv) start_id = gve_xdp_tx_start_queue_id(priv); for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) { - err = gve_register_qpl(priv, i); + err = gve_register_qpl(priv, gve_tx_get_qpl(priv, i)); /* This failure will trigger a reset - no need to clean up */ if (err) return err; @@ -685,7 +701,6 @@ static int gve_register_xdp_qpls(struct gve_priv *priv) static int gve_register_qpls(struct gve_priv *priv) { int num_tx_qpls, num_rx_qpls; - int start_id; int err; int i; @@ -694,15 +709,13 @@ static int gve_register_qpls(struct gve_priv *priv) num_rx_qpls = gve_num_rx_qpls(&priv->rx_cfg, gve_is_qpl(priv)); for (i = 0; i < num_tx_qpls; i++) { - err = gve_register_qpl(priv, i); + err = gve_register_qpl(priv, gve_tx_get_qpl(priv, i)); if (err) return err; } - /* there might be a gap between the tx and rx qpl ids */ - start_id = gve_rx_start_qpl_id(&priv->tx_cfg); for (i = 0; i < num_rx_qpls; i++) { - err = gve_register_qpl(priv, start_id + i); + err = gve_register_qpl(priv, gve_rx_get_qpl(priv, i)); if (err) return err; } @@ -718,7 +731,7 @@ static int gve_unregister_xdp_qpls(struct gve_priv *priv) start_id = gve_xdp_tx_start_queue_id(priv); for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) { - err = gve_unregister_qpl(priv, i); + err = gve_unregister_qpl(priv, gve_tx_get_qpl(priv, i)); /* This failure will trigger a reset - no need to clean */ if (err) return err; @@ -729,7 +742,6 @@ static int gve_unregister_xdp_qpls(struct gve_priv *priv) static int gve_unregister_qpls(struct gve_priv *priv) { int num_tx_qpls, num_rx_qpls; - int start_id; int err; int i; @@ -738,15 +750,14 @@ static int gve_unregister_qpls(struct gve_priv *priv) num_rx_qpls = gve_num_rx_qpls(&priv->rx_cfg, gve_is_qpl(priv)); for (i = 0; i < num_tx_qpls; i++) { - err = gve_unregister_qpl(priv, i); + err = gve_unregister_qpl(priv, gve_tx_get_qpl(priv, i)); /* This failure will trigger a reset - no need to clean */ if (err) return err; } - start_id = gve_rx_start_qpl_id(&priv->tx_cfg); for (i = 0; i < num_rx_qpls; i++) { - err = 
gve_unregister_qpl(priv, start_id + i); + err = gve_unregister_qpl(priv, gve_rx_get_qpl(priv, i)); /* This failure will trigger a reset - no need to clean */ if (err) return err; @@ -857,7 +868,6 @@ static void gve_tx_get_curr_alloc_cfg(struct gve_priv *priv, { cfg->qcfg = &priv->tx_cfg; cfg->raw_addressing = !gve_is_qpl(priv); - cfg->qpls = priv->qpls; cfg->ring_size = priv->tx_desc_cnt; cfg->start_idx = 0; cfg->num_rings = gve_num_tx_queues(priv); @@ -914,9 +924,9 @@ static int gve_alloc_xdp_rings(struct gve_priv *priv) return 0; } -static int gve_alloc_rings(struct gve_priv *priv, - struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, - struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) +static int gve_queues_mem_alloc(struct gve_priv *priv, + struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, + struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) { int err; @@ -1002,9 +1012,9 @@ static void gve_free_xdp_rings(struct gve_priv *priv) } } -static void gve_free_rings(struct gve_priv *priv, - struct gve_tx_alloc_rings_cfg *tx_cfg, - struct gve_rx_alloc_rings_cfg *rx_cfg) +static void gve_queues_mem_free(struct gve_priv *priv, + struct gve_tx_alloc_rings_cfg *tx_cfg, + struct gve_rx_alloc_rings_cfg *rx_cfg) { if (gve_is_gqi(priv)) { gve_tx_free_rings_gqi(priv, tx_cfg); @@ -1033,35 +1043,41 @@ int gve_alloc_page(struct gve_priv *priv, struct device *dev, return 0; } -static int gve_alloc_queue_page_list(struct gve_priv *priv, - struct gve_queue_page_list *qpl, - u32 id, int pages) +struct gve_queue_page_list *gve_alloc_queue_page_list(struct gve_priv *priv, + u32 id, int pages) { + struct gve_queue_page_list *qpl; int err; int i; + qpl = kvzalloc(sizeof(*qpl), GFP_KERNEL); + if (!qpl) + return NULL; + qpl->id = id; qpl->num_entries = 0; qpl->pages = kvcalloc(pages, sizeof(*qpl->pages), GFP_KERNEL); - /* caller handles clean up */ if (!qpl->pages) - return -ENOMEM; + goto abort; + qpl->page_buses = kvcalloc(pages, sizeof(*qpl->page_buses), GFP_KERNEL); - /* caller handles clean up */ if (!qpl->page_buses) - return -ENOMEM; + goto abort; for (i = 0; i < pages; i++) { err = gve_alloc_page(priv, &priv->pdev->dev, &qpl->pages[i], &qpl->page_buses[i], gve_qpl_dma_dir(priv, id), GFP_KERNEL); - /* caller handles clean up */ if (err) - return -ENOMEM; + goto abort; qpl->num_entries++; } - return 0; + return qpl; + +abort: + gve_free_queue_page_list(priv, qpl, id); + return NULL; } void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma, @@ -1073,14 +1089,16 @@ void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma, put_page(page); } -static void gve_free_queue_page_list(struct gve_priv *priv, - struct gve_queue_page_list *qpl, - int id) +void gve_free_queue_page_list(struct gve_priv *priv, + struct gve_queue_page_list *qpl, + u32 id) { int i; - if (!qpl->pages) + if (!qpl) return; + if (!qpl->pages) + goto free_qpl; if (!qpl->page_buses) goto free_pages; @@ -1093,109 +1111,8 @@ static void gve_free_queue_page_list(struct gve_priv *priv, free_pages: kvfree(qpl->pages); qpl->pages = NULL; -} - -static void gve_free_n_qpls(struct gve_priv *priv, - struct gve_queue_page_list *qpls, - int start_id, - int num_qpls) -{ - int i; - - for (i = start_id; i < start_id + num_qpls; i++) - gve_free_queue_page_list(priv, &qpls[i], i); -} - -static int gve_alloc_n_qpls(struct gve_priv *priv, - struct gve_queue_page_list *qpls, - int page_count, - int start_id, - int num_qpls) -{ - int err; - int i; - - for (i = start_id; i < start_id + num_qpls; i++) { - err = gve_alloc_queue_page_list(priv, &qpls[i], i, 
page_count); - if (err) - goto free_qpls; - } - - return 0; - -free_qpls: - /* Must include the failing QPL too for gve_alloc_queue_page_list fails - * without cleaning up. - */ - gve_free_n_qpls(priv, qpls, start_id, i - start_id + 1); - return err; -} - -static int gve_alloc_qpls(struct gve_priv *priv, struct gve_qpls_alloc_cfg *cfg, - struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) -{ - int max_queues = cfg->tx_cfg->max_queues + cfg->rx_cfg->max_queues; - int rx_start_id, tx_num_qpls, rx_num_qpls; - struct gve_queue_page_list *qpls; - u32 page_count; - int err; - - if (cfg->raw_addressing) - return 0; - - qpls = kvcalloc(max_queues, sizeof(*qpls), GFP_KERNEL); - if (!qpls) - return -ENOMEM; - - /* Allocate TX QPLs */ - page_count = priv->tx_pages_per_qpl; - tx_num_qpls = gve_num_tx_qpls(cfg->tx_cfg, cfg->num_xdp_queues, - gve_is_qpl(priv)); - err = gve_alloc_n_qpls(priv, qpls, page_count, 0, tx_num_qpls); - if (err) - goto free_qpl_array; - - /* Allocate RX QPLs */ - rx_start_id = gve_rx_start_qpl_id(cfg->tx_cfg); - /* For GQI_QPL number of pages allocated have 1:1 relationship with - * number of descriptors. For DQO, number of pages required are - * more than descriptors (because of out of order completions). - * Set it to twice the number of descriptors. - */ - if (cfg->is_gqi) - page_count = rx_alloc_cfg->ring_size; - else - page_count = gve_get_rx_pages_per_qpl_dqo(rx_alloc_cfg->ring_size); - rx_num_qpls = gve_num_rx_qpls(cfg->rx_cfg, gve_is_qpl(priv)); - err = gve_alloc_n_qpls(priv, qpls, page_count, rx_start_id, rx_num_qpls); - if (err) - goto free_tx_qpls; - - cfg->qpls = qpls; - return 0; - -free_tx_qpls: - gve_free_n_qpls(priv, qpls, 0, tx_num_qpls); -free_qpl_array: - kvfree(qpls); - return err; -} - -static void gve_free_qpls(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *cfg) -{ - int max_queues = cfg->tx_cfg->max_queues + cfg->rx_cfg->max_queues; - struct gve_queue_page_list *qpls = cfg->qpls; - int i; - - if (!qpls) - return; - - for (i = 0; i < max_queues; i++) - gve_free_queue_page_list(priv, &qpls[i], i); - - kvfree(qpls); - cfg->qpls = NULL; +free_qpl: + kvfree(qpl); } /* Use this to schedule a reset when the device is capable of continuing @@ -1299,17 +1216,6 @@ static void gve_drain_page_cache(struct gve_priv *priv) page_frag_cache_drain(&priv->rx[i].page_cache); } -static void gve_qpls_get_curr_alloc_cfg(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *cfg) -{ - cfg->raw_addressing = !gve_is_qpl(priv); - cfg->is_gqi = gve_is_gqi(priv); - cfg->num_xdp_queues = priv->num_xdp_queues; - cfg->tx_cfg = &priv->tx_cfg; - cfg->rx_cfg = &priv->rx_cfg; - cfg->qpls = priv->qpls; -} - static void gve_rx_get_curr_alloc_cfg(struct gve_priv *priv, struct gve_rx_alloc_rings_cfg *cfg) { @@ -1317,7 +1223,6 @@ static void gve_rx_get_curr_alloc_cfg(struct gve_priv *priv, cfg->qcfg_tx = &priv->tx_cfg; cfg->raw_addressing = !gve_is_qpl(priv); cfg->enable_header_split = priv->header_split_enabled; - cfg->qpls = priv->qpls; cfg->ring_size = priv->rx_desc_cnt; cfg->packet_buffer_size = gve_is_gqi(priv) ? 
GVE_DEFAULT_RX_BUFFER_SIZE : @@ -1326,11 +1231,9 @@ static void gve_rx_get_curr_alloc_cfg(struct gve_priv *priv, } void gve_get_curr_alloc_cfgs(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *qpls_alloc_cfg, struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) { - gve_qpls_get_curr_alloc_cfg(priv, qpls_alloc_cfg); gve_tx_get_curr_alloc_cfg(priv, tx_alloc_cfg); gve_rx_get_curr_alloc_cfg(priv, rx_alloc_cfg); } @@ -1362,53 +1265,13 @@ static void gve_rx_stop_rings(struct gve_priv *priv, int num_rings) } } -static void gve_queues_mem_free(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *qpls_alloc_cfg, - struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, - struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) -{ - gve_free_rings(priv, tx_alloc_cfg, rx_alloc_cfg); - gve_free_qpls(priv, qpls_alloc_cfg); -} - -static int gve_queues_mem_alloc(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *qpls_alloc_cfg, - struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, - struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) -{ - int err; - - err = gve_alloc_qpls(priv, qpls_alloc_cfg, rx_alloc_cfg); - if (err) { - netif_err(priv, drv, priv->dev, "Failed to alloc QPLs\n"); - return err; - } - tx_alloc_cfg->qpls = qpls_alloc_cfg->qpls; - rx_alloc_cfg->qpls = qpls_alloc_cfg->qpls; - err = gve_alloc_rings(priv, tx_alloc_cfg, rx_alloc_cfg); - if (err) { - netif_err(priv, drv, priv->dev, "Failed to alloc rings\n"); - goto free_qpls; - } - - return 0; - -free_qpls: - gve_free_qpls(priv, qpls_alloc_cfg); - return err; -} - static void gve_queues_mem_remove(struct gve_priv *priv) { struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; - struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0}; - gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); - gve_queues_mem_free(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); - priv->qpls = NULL; + gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg); + gve_queues_mem_free(priv, &tx_alloc_cfg, &rx_alloc_cfg); priv->tx = NULL; priv->rx = NULL; } @@ -1417,7 +1280,6 @@ static void gve_queues_mem_remove(struct gve_priv *priv) * No memory is allocated. Passed-in memory is freed on errors. */ static int gve_queues_start(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *qpls_alloc_cfg, struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) { @@ -1425,7 +1287,6 @@ static int gve_queues_start(struct gve_priv *priv, int err; /* Record new resources into priv */ - priv->qpls = qpls_alloc_cfg->qpls; priv->tx = tx_alloc_cfg->tx; priv->rx = rx_alloc_cfg->rx; @@ -1497,23 +1358,19 @@ static int gve_open(struct net_device *dev) { struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; - struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0}; struct gve_priv *priv = netdev_priv(dev); int err; - gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg); - err = gve_queues_mem_alloc(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + err = gve_queues_mem_alloc(priv, &tx_alloc_cfg, &rx_alloc_cfg); if (err) return err; /* No need to free on error: ownership of resources is lost after * calling gve_queues_start. 
*/ - err = gve_queues_start(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + err = gve_queues_start(priv, &tx_alloc_cfg, &rx_alloc_cfg); if (err) return err; @@ -1572,11 +1429,8 @@ static int gve_close(struct net_device *dev) static int gve_remove_xdp_queues(struct gve_priv *priv) { - int qpl_start_id; int err; - qpl_start_id = gve_xdp_tx_start_queue_id(priv); - err = gve_destroy_xdp_rings(priv); if (err) return err; @@ -1588,27 +1442,19 @@ static int gve_remove_xdp_queues(struct gve_priv *priv) gve_unreg_xdp_info(priv); gve_free_xdp_rings(priv); - gve_free_n_qpls(priv, priv->qpls, qpl_start_id, gve_num_xdp_qpls(priv)); priv->num_xdp_queues = 0; return 0; } static int gve_add_xdp_queues(struct gve_priv *priv) { - int start_id; int err; priv->num_xdp_queues = priv->rx_cfg.num_queues; - start_id = gve_xdp_tx_start_queue_id(priv); - err = gve_alloc_n_qpls(priv, priv->qpls, priv->tx_pages_per_qpl, - start_id, gve_num_xdp_qpls(priv)); - if (err) - goto err; - err = gve_alloc_xdp_rings(priv); if (err) - goto free_xdp_qpls; + goto err; err = gve_reg_xdp_info(priv, priv->dev); if (err) @@ -1626,8 +1472,6 @@ static int gve_add_xdp_queues(struct gve_priv *priv) free_xdp_rings: gve_free_xdp_rings(priv); -free_xdp_qpls: - gve_free_n_qpls(priv, priv->qpls, start_id, gve_num_xdp_qpls(priv)); err: priv->num_xdp_queues = 0; return err; @@ -1878,15 +1722,13 @@ static int gve_xdp(struct net_device *dev, struct netdev_bpf *xdp) } int gve_adjust_config(struct gve_priv *priv, - struct gve_qpls_alloc_cfg *qpls_alloc_cfg, struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) { int err; /* Allocate resources for the new confiugration */ - err = gve_queues_mem_alloc(priv, qpls_alloc_cfg, - tx_alloc_cfg, rx_alloc_cfg); + err = gve_queues_mem_alloc(priv, tx_alloc_cfg, rx_alloc_cfg); if (err) { netif_err(priv, drv, priv->dev, "Adjust config failed to alloc new queues"); @@ -1898,14 +1740,12 @@ int gve_adjust_config(struct gve_priv *priv, if (err) { netif_err(priv, drv, priv->dev, "Adjust config failed to close old queues"); - gve_queues_mem_free(priv, qpls_alloc_cfg, - tx_alloc_cfg, rx_alloc_cfg); + gve_queues_mem_free(priv, tx_alloc_cfg, rx_alloc_cfg); return err; } /* Bring the device back up again with the new resources. */ - err = gve_queues_start(priv, qpls_alloc_cfg, - tx_alloc_cfg, rx_alloc_cfg); + err = gve_queues_start(priv, tx_alloc_cfg, rx_alloc_cfg); if (err) { netif_err(priv, drv, priv->dev, "Adjust config failed to start new queues, !!! DISABLING ALL QUEUES !!!\n"); @@ -1925,23 +1765,18 @@ int gve_adjust_queues(struct gve_priv *priv, { struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; - struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0}; int err; - gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg); /* Relay the new config from ethtool */ - qpls_alloc_cfg.tx_cfg = &new_tx_config; tx_alloc_cfg.qcfg = &new_tx_config; rx_alloc_cfg.qcfg_tx = &new_tx_config; - qpls_alloc_cfg.rx_cfg = &new_rx_config; rx_alloc_cfg.qcfg = &new_rx_config; tx_alloc_cfg.num_rings = new_tx_config.num_queues; if (netif_carrier_ok(priv->dev)) { - err = gve_adjust_config(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg); return err; } /* Set the config for the next up. 
*/ @@ -2106,7 +1941,6 @@ int gve_set_hsplit_config(struct gve_priv *priv, u8 tcp_data_split) { struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; - struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0}; bool enable_hdr_split; int err = 0; @@ -2126,15 +1960,13 @@ int gve_set_hsplit_config(struct gve_priv *priv, u8 tcp_data_split) if (enable_hdr_split == priv->header_split_enabled) return 0; - gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg); rx_alloc_cfg.enable_header_split = enable_hdr_split; rx_alloc_cfg.packet_buffer_size = gve_get_pkt_buf_size(priv, enable_hdr_split); if (netif_running(priv->dev)) - err = gve_adjust_config(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg); return err; } @@ -2144,18 +1976,15 @@ static int gve_set_features(struct net_device *netdev, const netdev_features_t orig_features = netdev->features; struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; - struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0}; struct gve_priv *priv = netdev_priv(netdev); int err; - gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg); if ((netdev->features & NETIF_F_LRO) != (features & NETIF_F_LRO)) { netdev->features ^= NETIF_F_LRO; if (netif_carrier_ok(netdev)) { - err = gve_adjust_config(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); + err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg); if (err) { /* Revert the change on error. */ netdev->features = orig_features; diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c index 79c1d8f63621..70aca2f2c8c3 100644 --- a/drivers/net/ethernet/google/gve/gve_rx.c +++ b/drivers/net/ethernet/google/gve/gve_rx.c @@ -41,7 +41,6 @@ static void gve_rx_unfill_pages(struct gve_priv *priv, for (i = 0; i < slots; i++) page_ref_sub(rx->data.page_info[i].page, rx->data.page_info[i].pagecnt_bias - 1); - rx->data.qpl = NULL; for (i = 0; i < rx->qpl_copy_pool_mask + 1; i++) { page_ref_sub(rx->qpl_copy_pool[i].page, @@ -107,6 +106,7 @@ static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx, u32 slots = rx->mask + 1; int idx = rx->q_num; size_t bytes; + u32 qpl_id; if (rx->desc.desc_ring) { bytes = sizeof(struct gve_rx_desc) * cfg->ring_size; @@ -132,6 +132,12 @@ static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx, kvfree(rx->qpl_copy_pool); rx->qpl_copy_pool = NULL; + if (rx->data.qpl) { + qpl_id = gve_get_rx_qpl_id(cfg->qcfg_tx, idx); + gve_free_queue_page_list(priv, rx->data.qpl, qpl_id); + rx->data.qpl = NULL; + } + netif_dbg(priv, drv, priv->dev, "freed rx ring %d\n", idx); } @@ -188,12 +194,6 @@ static int gve_rx_prefill_pages(struct gve_rx_ring *rx, if (!rx->data.page_info) return -ENOMEM; - if (!rx->data.raw_addressing) { - u32 qpl_id = gve_get_rx_qpl_id(cfg->qcfg_tx, rx->q_num); - - rx->data.qpl = &cfg->qpls[qpl_id]; - } - for (i = 0; i < slots; i++) { if (!rx->data.raw_addressing) { struct page *page = rx->data.qpl->pages[i]; @@ -246,8 +246,6 @@ static int gve_rx_prefill_pages(struct gve_rx_ring *rx, page_ref_sub(rx->data.page_info[i].page, rx->data.page_info[i].pagecnt_bias - 1); - rx->data.qpl = NULL; - return err; alloc_err_rda: @@ -274,6 +272,8 @@ static int gve_rx_alloc_ring_gqi(struct gve_priv *priv, struct 
device *hdev = &priv->pdev->dev; u32 slots = cfg->ring_size; int filled_pages; + int qpl_page_cnt; + u32 qpl_id = 0; size_t bytes; int err; @@ -306,10 +306,22 @@ static int gve_rx_alloc_ring_gqi(struct gve_priv *priv, goto abort_with_slots; } + if (!rx->data.raw_addressing) { + qpl_id = gve_get_rx_qpl_id(cfg->qcfg_tx, rx->q_num); + qpl_page_cnt = cfg->ring_size; + + rx->data.qpl = gve_alloc_queue_page_list(priv, qpl_id, + qpl_page_cnt); + if (!rx->data.qpl) { + err = -ENOMEM; + goto abort_with_copy_pool; + } + } + filled_pages = gve_rx_prefill_pages(rx, cfg); if (filled_pages < 0) { err = -ENOMEM; - goto abort_with_copy_pool; + goto abort_with_qpl; } rx->fill_cnt = filled_pages; /* Ensure data ring slots (packet buffers) are visible. */ @@ -350,6 +362,11 @@ static int gve_rx_alloc_ring_gqi(struct gve_priv *priv, rx->q_resources = NULL; abort_filled: gve_rx_unfill_pages(priv, rx, cfg); +abort_with_qpl: + if (!rx->data.raw_addressing) { + gve_free_queue_page_list(priv, rx->data.qpl, qpl_id); + rx->data.qpl = NULL; + } abort_with_copy_pool: kvfree(rx->qpl_copy_pool); rx->qpl_copy_pool = NULL; @@ -368,12 +385,6 @@ int gve_rx_alloc_rings_gqi(struct gve_priv *priv, int err = 0; int i, j; - if (!cfg->raw_addressing && !cfg->qpls) { - netif_err(priv, drv, priv->dev, - "Cannot alloc QPL ring before allocing QPLs\n"); - return -EINVAL; - } - rx = kvcalloc(cfg->qcfg->max_queues, sizeof(struct gve_rx_ring), GFP_KERNEL); if (!rx) diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c index 7c2980c212f4..4ea8ecc3b2d5 100644 --- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c +++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c @@ -307,6 +307,7 @@ static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx, size_t buffer_queue_slots; int idx = rx->q_num; size_t size; + u32 qpl_id; int i; completion_queue_slots = rx->dqo.complq.mask + 1; @@ -325,7 +326,11 @@ static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx, gve_free_page_dqo(priv, bs, !rx->dqo.qpl); } - rx->dqo.qpl = NULL; + if (rx->dqo.qpl) { + qpl_id = gve_get_rx_qpl_id(cfg->qcfg_tx, rx->q_num); + gve_free_queue_page_list(priv, rx->dqo.qpl, qpl_id); + rx->dqo.qpl = NULL; + } if (rx->dqo.bufq.desc_ring) { size = sizeof(rx->dqo.bufq.desc_ring[0]) * buffer_queue_slots; @@ -377,7 +382,9 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv, int idx) { struct device *hdev = &priv->pdev->dev; + int qpl_page_cnt; size_t size; + u32 qpl_id; const u32 buffer_queue_slots = cfg->ring_size; const u32 completion_queue_slots = cfg->ring_size; @@ -418,9 +425,13 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv, goto err; if (!cfg->raw_addressing) { - u32 qpl_id = gve_get_rx_qpl_id(cfg->qcfg_tx, rx->q_num); + qpl_id = gve_get_rx_qpl_id(cfg->qcfg_tx, rx->q_num); + qpl_page_cnt = gve_get_rx_pages_per_qpl_dqo(cfg->ring_size); - rx->dqo.qpl = &cfg->qpls[qpl_id]; + rx->dqo.qpl = gve_alloc_queue_page_list(priv, qpl_id, + qpl_page_cnt); + if (!rx->dqo.qpl) + goto err; rx->dqo.next_qpl_page_idx = 0; } @@ -454,12 +465,6 @@ int gve_rx_alloc_rings_dqo(struct gve_priv *priv, int err; int i; - if (!cfg->raw_addressing && !cfg->qpls) { - netif_err(priv, drv, priv->dev, - "Cannot alloc QPL ring before allocing QPLs\n"); - return -EINVAL; - } - rx = kvcalloc(cfg->qcfg->max_queues, sizeof(struct gve_rx_ring), GFP_KERNEL); if (!rx) diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c index f805700d67e7..24a64ec1073e 100644 --- 
a/drivers/net/ethernet/google/gve/gve_tx.c +++ b/drivers/net/ethernet/google/gve/gve_tx.c @@ -216,6 +216,7 @@ static void gve_tx_free_ring_gqi(struct gve_priv *priv, struct gve_tx_ring *tx, struct device *hdev = &priv->pdev->dev; int idx = tx->q_num; size_t bytes; + u32 qpl_id; u32 slots; slots = tx->mask + 1; @@ -223,8 +224,12 @@ static void gve_tx_free_ring_gqi(struct gve_priv *priv, struct gve_tx_ring *tx, tx->q_resources, tx->q_resources_bus); tx->q_resources = NULL; - if (!tx->raw_addressing) { - gve_tx_fifo_release(priv, &tx->tx_fifo); + if (tx->tx_fifo.qpl) { + if (tx->tx_fifo.base) + gve_tx_fifo_release(priv, &tx->tx_fifo); + + qpl_id = gve_tx_qpl_id(priv, tx->q_num); + gve_free_queue_page_list(priv, tx->tx_fifo.qpl, qpl_id); tx->tx_fifo.qpl = NULL; } @@ -255,6 +260,8 @@ static int gve_tx_alloc_ring_gqi(struct gve_priv *priv, int idx) { struct device *hdev = &priv->pdev->dev; + int qpl_page_cnt; + u32 qpl_id = 0; size_t bytes; /* Make sure everything is zeroed to start */ @@ -279,12 +286,17 @@ static int gve_tx_alloc_ring_gqi(struct gve_priv *priv, tx->raw_addressing = cfg->raw_addressing; tx->dev = hdev; if (!tx->raw_addressing) { - u32 qpl_id = gve_tx_qpl_id(priv, tx->q_num); + qpl_id = gve_tx_qpl_id(priv, tx->q_num); + qpl_page_cnt = priv->tx_pages_per_qpl; + + tx->tx_fifo.qpl = gve_alloc_queue_page_list(priv, qpl_id, + qpl_page_cnt); + if (!tx->tx_fifo.qpl) + goto abort_with_desc; - tx->tx_fifo.qpl = &cfg->qpls[qpl_id]; /* map Tx FIFO */ if (gve_tx_fifo_init(priv, &tx->tx_fifo)) - goto abort_with_desc; + goto abort_with_qpl; } tx->q_resources = @@ -300,6 +312,11 @@ static int gve_tx_alloc_ring_gqi(struct gve_priv *priv, abort_with_fifo: if (!tx->raw_addressing) gve_tx_fifo_release(priv, &tx->tx_fifo); +abort_with_qpl: + if (!tx->raw_addressing) { + gve_free_queue_page_list(priv, tx->tx_fifo.qpl, qpl_id); + tx->tx_fifo.qpl = NULL; + } abort_with_desc: dma_free_coherent(hdev, bytes, tx->desc, tx->bus); tx->desc = NULL; @@ -316,12 +333,6 @@ int gve_tx_alloc_rings_gqi(struct gve_priv *priv, int err = 0; int i, j; - if (!cfg->raw_addressing && !cfg->qpls) { - netif_err(priv, drv, priv->dev, - "Cannot alloc QPL ring before allocing QPLs\n"); - return -EINVAL; - } - if (cfg->start_idx + cfg->num_rings > cfg->qcfg->max_queues) { netif_err(priv, drv, priv->dev, "Cannot alloc more than the max num of Tx rings\n"); diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c index 3d825e406c4b..fe1b26a4d736 100644 --- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c +++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c @@ -209,6 +209,7 @@ static void gve_tx_free_ring_dqo(struct gve_priv *priv, struct gve_tx_ring *tx, struct device *hdev = &priv->pdev->dev; int idx = tx->q_num; size_t bytes; + u32 qpl_id; if (tx->q_resources) { dma_free_coherent(hdev, sizeof(*tx->q_resources), @@ -236,7 +237,11 @@ static void gve_tx_free_ring_dqo(struct gve_priv *priv, struct gve_tx_ring *tx, kvfree(tx->dqo.tx_qpl_buf_next); tx->dqo.tx_qpl_buf_next = NULL; - tx->dqo.qpl = NULL; + if (tx->dqo.qpl) { + qpl_id = gve_tx_qpl_id(priv, tx->q_num); + gve_free_queue_page_list(priv, tx->dqo.qpl, qpl_id); + tx->dqo.qpl = NULL; + } netif_dbg(priv, drv, priv->dev, "freed tx queue %d\n", idx); } @@ -282,7 +287,9 @@ static int gve_tx_alloc_ring_dqo(struct gve_priv *priv, { struct device *hdev = &priv->pdev->dev; int num_pending_packets; + int qpl_page_cnt; size_t bytes; + u32 qpl_id; int i; memset(tx, 0, sizeof(*tx)); @@ -349,9 +356,13 @@ static int gve_tx_alloc_ring_dqo(struct 
gve_priv *priv, goto err; if (!cfg->raw_addressing) { - u32 qpl_id = gve_tx_qpl_id(priv, tx->q_num); + qpl_id = gve_tx_qpl_id(priv, tx->q_num); + qpl_page_cnt = priv->tx_pages_per_qpl; - tx->dqo.qpl = &cfg->qpls[qpl_id]; + tx->dqo.qpl = gve_alloc_queue_page_list(priv, qpl_id, + qpl_page_cnt); + if (!tx->dqo.qpl) + goto err; if (gve_tx_qpl_buf_init(tx)) goto err; @@ -371,12 +382,6 @@ int gve_tx_alloc_rings_dqo(struct gve_priv *priv, int err = 0; int i, j; - if (!cfg->raw_addressing && !cfg->qpls) { - netif_err(priv, drv, priv->dev, - "Cannot alloc QPL ring before allocing QPLs\n"); - return -EINVAL; - } - if (cfg->start_idx + cfg->num_rings > cfg->qcfg->max_queues) { netif_err(priv, drv, priv->dev, "Cannot alloc more than the max num of Tx rings\n");
From patchwork Wed May 1 23:25:49 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Shailend Chand X-Patchwork-Id: 13651191 X-Patchwork-Delegate: kuba@kernel.org Date: Wed, 1 May 2024 23:25:49 +0000 In-Reply-To: <20240501232549.1327174-1-shailend@google.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240501232549.1327174-1-shailend@google.com> X-Mailer: git-send-email 2.45.0.rc0.197.gbae5840b3b-goog Message-ID: <20240501232549.1327174-11-shailend@google.com> Subject: [PATCH net-next v2 10/10] gve: Implement queue api From: Shailend Chand To: netdev@vger.kernel.org Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com, hramamurthy@google.com, jeroendb@google.com, kuba@kernel.org, pabeni@redhat.com, pkaligineedi@google.com, rushilg@google.com, willemb@google.com, ziweixiao@google.com, Shailend Chand X-Patchwork-Delegate: kuba@kernel.org
The new netdev queue api is implemented for gve.
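Concretely, the hooks added in the diff below are meant to be driven in the ordering described earlier in the series: new_queue.alloc() + old_queue.stop() + new_queue.start() + old_queue.free(). The following is a rough user-space sketch of that ordering under stated assumptions: struct queue_mgmt_ops, q_state, restart_queue() and the stub_* callbacks are illustrative stand-ins, not the kernel's netdev_queue_mgmt_ops or the gve implementation.

/* Hedged sketch of the alloc -> stop -> start -> free ordering the
 * per-queue hooks support; all names here are illustrative stand-ins.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct q_state { int idx; int generation; };

struct queue_mgmt_ops {
	size_t	mem_size;                                /* size of one queue's memory blob */
	int	(*queue_mem_alloc)(void *per_q_mem, int idx);
	void	(*queue_mem_free)(void *per_q_mem);
	int	(*queue_start)(void *per_q_mem, int idx);
	int	(*queue_stop)(void *per_q_mem, int idx); /* stopped queue's memory written here */
};

static struct q_state live[4];	/* stands in for the driver's per-queue array */

static int stub_mem_alloc(void *mem, int idx)
{
	struct q_state *q = mem;

	q->idx = idx;
	q->generation = live[idx].generation + 1;
	return 0;
}

static void stub_mem_free(void *mem) { memset(mem, 0, sizeof(struct q_state)); }

static int stub_start(void *mem, int idx) { live[idx] = *(struct q_state *)mem; return 0; }

static int stub_stop(void *mem, int idx)
{
	*(struct q_state *)mem = live[idx];	/* hand the old memory back to the caller */
	memset(&live[idx], 0, sizeof(live[idx]));
	return 0;
}

/* Restart queue `idx` on freshly allocated memory, in the order the
 * series describes: new.alloc, old.stop, new.start, old.free.
 */
static int restart_queue(const struct queue_mgmt_ops *ops, int idx)
{
	void *new_mem = calloc(1, ops->mem_size);
	void *old_mem = calloc(1, ops->mem_size);
	int err = -1;

	if (!new_mem || !old_mem)
		goto out;
	err = ops->queue_mem_alloc(new_mem, idx);
	if (err)
		goto out;
	err = ops->queue_stop(old_mem, idx);
	if (err)
		goto free_new;
	err = ops->queue_start(new_mem, idx);
	if (err)
		goto free_new;	/* a real driver would escalate to a device reset here */
	ops->queue_mem_free(old_mem);
	goto out;
free_new:
	ops->queue_mem_free(new_mem);
out:
	free(new_mem);
	free(old_mem);
	return err;
}

int main(void)
{
	const struct queue_mgmt_ops ops = {
		.mem_size = sizeof(struct q_state),
		.queue_mem_alloc = stub_mem_alloc,
		.queue_mem_free = stub_mem_free,
		.queue_start = stub_start,
		.queue_stop = stub_stop,
	};

	if (restart_queue(&ops, 2))
		return 1;
	printf("queue 2 restarted, generation %d\n", live[2].generation);
	return 0;
}

The queue memory is staged through caller-provided blobs, so a failed start can fall back to freeing the newly allocated memory; in the driver itself a failure in the start path escalates to a reset instead, which is why the gve hook clears priv->rx[idx] before returning an error, as noted in the diff below.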
Tested-by: Mina Almasry
Reviewed-by: Mina Almasry
Reviewed-by: Praveen Kaligineedi
Reviewed-by: Harshitha Ramamurthy
Signed-off-by: Shailend Chand
---
 drivers/net/ethernet/google/gve/gve.h        |   6 +
 drivers/net/ethernet/google/gve/gve_dqo.h    |   6 +
 drivers/net/ethernet/google/gve/gve_main.c   | 177 +++++++++++++++++--
 drivers/net/ethernet/google/gve/gve_rx.c     |  12 +-
 drivers/net/ethernet/google/gve/gve_rx_dqo.c |  12 +-
 5 files changed, 189 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 9e0a433c991c..ae1e21c9b0a5 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -1096,6 +1096,12 @@ bool gve_tx_clean_pending(struct gve_priv *priv, struct gve_tx_ring *tx);
 void gve_rx_write_doorbell(struct gve_priv *priv, struct gve_rx_ring *rx);
 int gve_rx_poll(struct gve_notify_block *block, int budget);
 bool gve_rx_work_pending(struct gve_rx_ring *rx);
+int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
+                          struct gve_rx_alloc_rings_cfg *cfg,
+                          struct gve_rx_ring *rx,
+                          int idx);
+void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
+                          struct gve_rx_alloc_rings_cfg *cfg);
 int gve_rx_alloc_rings(struct gve_priv *priv);
 int gve_rx_alloc_rings_gqi(struct gve_priv *priv,
                            struct gve_rx_alloc_rings_cfg *cfg);
diff --git a/drivers/net/ethernet/google/gve/gve_dqo.h b/drivers/net/ethernet/google/gve/gve_dqo.h
index b81584829c40..e83773fb891f 100644
--- a/drivers/net/ethernet/google/gve/gve_dqo.h
+++ b/drivers/net/ethernet/google/gve/gve_dqo.h
@@ -44,6 +44,12 @@ void gve_tx_free_rings_dqo(struct gve_priv *priv,
                            struct gve_tx_alloc_rings_cfg *cfg);
 void gve_tx_start_ring_dqo(struct gve_priv *priv, int idx);
 void gve_tx_stop_ring_dqo(struct gve_priv *priv, int idx);
+int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
+                          struct gve_rx_alloc_rings_cfg *cfg,
+                          struct gve_rx_ring *rx,
+                          int idx);
+void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
+                          struct gve_rx_alloc_rings_cfg *cfg);
 int gve_rx_alloc_rings_dqo(struct gve_priv *priv,
                            struct gve_rx_alloc_rings_cfg *cfg);
 void gve_rx_free_rings_dqo(struct gve_priv *priv,
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index e22ac764ec4f..cabf7d4bcecb 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include <net/netdev_queues.h>
 #include
 #include
 #include "gve.h"
@@ -1238,16 +1239,28 @@ void gve_get_curr_alloc_cfgs(struct gve_priv *priv,
         gve_rx_get_curr_alloc_cfg(priv, rx_alloc_cfg);
 }
 
+static void gve_rx_start_ring(struct gve_priv *priv, int i)
+{
+        if (gve_is_gqi(priv))
+                gve_rx_start_ring_gqi(priv, i);
+        else
+                gve_rx_start_ring_dqo(priv, i);
+}
+
 static void gve_rx_start_rings(struct gve_priv *priv, int num_rings)
 {
         int i;
 
-        for (i = 0; i < num_rings; i++) {
-                if (gve_is_gqi(priv))
-                        gve_rx_start_ring_gqi(priv, i);
-                else
-                        gve_rx_start_ring_dqo(priv, i);
-        }
+        for (i = 0; i < num_rings; i++)
+                gve_rx_start_ring(priv, i);
+}
+
+static void gve_rx_stop_ring(struct gve_priv *priv, int i)
+{
+        if (gve_is_gqi(priv))
+                gve_rx_stop_ring_gqi(priv, i);
+        else
+                gve_rx_stop_ring_dqo(priv, i);
 }
 
 static void gve_rx_stop_rings(struct gve_priv *priv, int num_rings)
@@ -1257,12 +1270,8 @@ static void gve_rx_stop_rings(struct gve_priv *priv, int num_rings)
         if (!priv->rx)
                 return;
 
-        for (i = 0; i < num_rings; i++) {
-                if (gve_is_gqi(priv))
-                        gve_rx_stop_ring_gqi(priv, i);
-                else
-                        gve_rx_stop_ring_dqo(priv, i);
-        }
+        for (i = 0; i < num_rings; i++)
+                gve_rx_stop_ring(priv, i);
 }
 
 static void gve_queues_mem_remove(struct gve_priv *priv)
@@ -1877,6 +1886,15 @@ static void gve_turnup(struct gve_priv *priv)
         gve_set_napi_enabled(priv);
 }
 
+static void gve_turnup_and_check_status(struct gve_priv *priv)
+{
+        u32 status;
+
+        gve_turnup(priv);
+        status = ioread32be(&priv->reg_bar0->device_status);
+        gve_handle_link_status(priv, GVE_DEVICE_STATUS_LINK_STATUS_MASK & status);
+}
+
 static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
 {
         struct gve_notify_block *block;
@@ -2323,6 +2341,140 @@ static void gve_write_version(u8 __iomem *driver_version_register)
         writeb('\n', driver_version_register);
 }
 
+static int gve_rx_queue_stop(struct net_device *dev, void *per_q_mem, int idx)
+{
+        struct gve_priv *priv = netdev_priv(dev);
+        struct gve_rx_ring *gve_per_q_mem;
+        int err;
+
+        if (!priv->rx)
+                return -EAGAIN;
+
+        /* Destroying queue 0 while other queues exist is not supported in DQO */
+        if (!gve_is_gqi(priv) && idx == 0)
+                return -ERANGE;
+
+        /* Single-queue destruction requires quiescence on all queues */
+        gve_turndown(priv);
+
+        /* This failure will trigger a reset - no need to clean up */
+        err = gve_adminq_destroy_single_rx_queue(priv, idx);
+        if (err)
+                return err;
+
+        if (gve_is_qpl(priv)) {
+                /* This failure will trigger a reset - no need to clean up */
+                err = gve_unregister_qpl(priv, gve_rx_get_qpl(priv, idx));
+                if (err)
+                        return err;
+        }
+
+        gve_rx_stop_ring(priv, idx);
+
+        /* Turn the unstopped queues back up */
+        gve_turnup_and_check_status(priv);
+
+        gve_per_q_mem = (struct gve_rx_ring *)per_q_mem;
+        *gve_per_q_mem = priv->rx[idx];
+        memset(&priv->rx[idx], 0, sizeof(priv->rx[idx]));
+        return 0;
+}
+
+static void gve_rx_queue_mem_free(struct net_device *dev, void *per_q_mem)
+{
+        struct gve_priv *priv = netdev_priv(dev);
+        struct gve_rx_alloc_rings_cfg cfg = {0};
+        struct gve_rx_ring *gve_per_q_mem;
+
+        gve_per_q_mem = (struct gve_rx_ring *)per_q_mem;
+        gve_rx_get_curr_alloc_cfg(priv, &cfg);
+
+        if (gve_is_gqi(priv))
+                gve_rx_free_ring_gqi(priv, gve_per_q_mem, &cfg);
+        else
+                gve_rx_free_ring_dqo(priv, gve_per_q_mem, &cfg);
+}
+
+static int gve_rx_queue_mem_alloc(struct net_device *dev, void *per_q_mem,
+                                  int idx)
+{
+        struct gve_priv *priv = netdev_priv(dev);
+        struct gve_rx_alloc_rings_cfg cfg = {0};
+        struct gve_rx_ring *gve_per_q_mem;
+        int err;
+
+        if (!priv->rx)
+                return -EAGAIN;
+
+        gve_per_q_mem = (struct gve_rx_ring *)per_q_mem;
+        gve_rx_get_curr_alloc_cfg(priv, &cfg);
+
+        if (gve_is_gqi(priv))
+                err = gve_rx_alloc_ring_gqi(priv, &cfg, gve_per_q_mem, idx);
+        else
+                err = gve_rx_alloc_ring_dqo(priv, &cfg, gve_per_q_mem, idx);
+
+        return err;
+}
+
+static int gve_rx_queue_start(struct net_device *dev, void *per_q_mem, int idx)
+{
+        struct gve_priv *priv = netdev_priv(dev);
+        struct gve_rx_ring *gve_per_q_mem;
+        int err;
+
+        if (!priv->rx)
+                return -EAGAIN;
+
+        gve_per_q_mem = (struct gve_rx_ring *)per_q_mem;
+        priv->rx[idx] = *gve_per_q_mem;
+
+        /* Single-queue creation requires quiescence on all queues */
+        gve_turndown(priv);
+
+        gve_rx_start_ring(priv, idx);
+
+        if (gve_is_qpl(priv)) {
+                /* This failure will trigger a reset - no need to clean up */
+                err = gve_register_qpl(priv, gve_rx_get_qpl(priv, idx));
+                if (err)
+                        goto abort;
+        }
+
+        /* This failure will trigger a reset - no need to clean up */
+        err = gve_adminq_create_single_rx_queue(priv, idx);
+        if (err)
+                goto abort;
+
+        if (gve_is_gqi(priv))
+                gve_rx_write_doorbell(priv, &priv->rx[idx]);
+        else
+                gve_rx_post_buffers_dqo(&priv->rx[idx]);
+
+        /* Turn the unstopped queues back up */
+        gve_turnup_and_check_status(priv);
+        return 0;
+
+abort:
+        gve_rx_stop_ring(priv, idx);
+
+        /* All failures in this func result in a reset, by clearing the struct
+         * at idx, we prevent a double free when that reset runs. The reset,
+         * which needs the rtnl lock, will not run till this func returns and
+         * its caller gives up the lock.
+         */
+        memset(&priv->rx[idx], 0, sizeof(priv->rx[idx]));
+        return err;
+}
+
+static const struct netdev_queue_mgmt_ops gve_queue_mgmt_ops = {
+        .ndo_queue_mem_size     =       sizeof(struct gve_rx_ring),
+        .ndo_queue_mem_alloc    =       gve_rx_queue_mem_alloc,
+        .ndo_queue_mem_free     =       gve_rx_queue_mem_free,
+        .ndo_queue_start        =       gve_rx_queue_start,
+        .ndo_queue_stop         =       gve_rx_queue_stop,
+};
+
 static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
         int max_tx_queues, max_rx_queues;
@@ -2377,6 +2529,7 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
         pci_set_drvdata(pdev, dev);
         dev->ethtool_ops = &gve_ethtool_ops;
         dev->netdev_ops = &gve_netdev_ops;
+        dev->queue_mgmt_ops = &gve_queue_mgmt_ops;
 
         /* Set default and supported features.
          *
diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 70aca2f2c8c3..acb73d4d0de6 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -99,8 +99,8 @@ void gve_rx_stop_ring_gqi(struct gve_priv *priv, int idx)
         gve_rx_reset_ring_gqi(priv, idx);
 }
 
-static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
-                                 struct gve_rx_alloc_rings_cfg *cfg)
+void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
+                          struct gve_rx_alloc_rings_cfg *cfg)
 {
         struct device *dev = &priv->pdev->dev;
         u32 slots = rx->mask + 1;
@@ -264,10 +264,10 @@ void gve_rx_start_ring_gqi(struct gve_priv *priv, int idx)
         gve_add_napi(priv, ntfy_idx, gve_napi_poll);
 }
 
-static int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
-                                 struct gve_rx_alloc_rings_cfg *cfg,
-                                 struct gve_rx_ring *rx,
-                                 int idx)
+int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
+                          struct gve_rx_alloc_rings_cfg *cfg,
+                          struct gve_rx_ring *rx,
+                          int idx)
 {
         struct device *hdev = &priv->pdev->dev;
         u32 slots = cfg->ring_size;
diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
index 4ea8ecc3b2d5..c1c912de59c7 100644
--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
@@ -299,8 +299,8 @@ void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx)
         gve_rx_reset_ring_dqo(priv, idx);
 }
 
-static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
-                                 struct gve_rx_alloc_rings_cfg *cfg)
+void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
+                          struct gve_rx_alloc_rings_cfg *cfg)
 {
         struct device *hdev = &priv->pdev->dev;
         size_t completion_queue_slots;
@@ -376,10 +376,10 @@ void gve_rx_start_ring_dqo(struct gve_priv *priv, int idx)
         gve_add_napi(priv, ntfy_idx, gve_napi_poll_dqo);
 }
 
-static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
-                                 struct gve_rx_alloc_rings_cfg *cfg,
-                                 struct gve_rx_ring *rx,
-                                 int idx)
+int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
+                          struct gve_rx_alloc_rings_cfg *cfg,
+                          struct gve_rx_ring *rx,
+                          int idx)
 {
         struct device *hdev = &priv->pdev->dev;
         int qpl_page_cnt;