From patchwork Mon Apr 1 23:45:26 2024
X-Patchwork-Submitter: Harshitha Ramamurthy
X-Patchwork-Id: 13613130
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 1 Apr 2024 23:45:26 +0000
In-Reply-To: <20240401234530.3101900-1-hramamurthy@google.com>
References: <20240401234530.3101900-1-hramamurthy@google.com>
Message-ID: <20240401234530.3101900-2-hramamurthy@google.com>
Subject: [PATCH net-next 1/5] gve: simplify setting descriptor count defaults
From: Harshitha Ramamurthy
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, pkaligineedi@google.com, shailend@google.com,
    davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, willemb@google.com, rushilg@google.com,
    jfraker@google.com, linux-kernel@vger.kernel.org,
    Harshitha Ramamurthy

Combine gve_set_desc_cnt and gve_set_desc_cnt_dqo into one function
which sets the descriptor counts after checking the queue format.
Neither the old functions nor the new combined function can ever
return an error, so make the new function void and remove the goto on
error. Also rename the new function to gve_set_default_desc_cnt to
make its intent clearer.
Reviewed-by: Praveen Kaligineedi
Reviewed-by: Willem de Bruijn
Signed-off-by: Harshitha Ramamurthy
---
 drivers/net/ethernet/google/gve/gve_adminq.c | 44 +++++++-------------
 1 file changed, 15 insertions(+), 29 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index ae12ac38e18b..50affa11a59c 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -745,31 +745,19 @@ int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 num_queues)
 	return gve_adminq_kick_and_wait(priv);
 }
 
-static int gve_set_desc_cnt(struct gve_priv *priv,
-			    struct gve_device_descriptor *descriptor)
+static void gve_set_default_desc_cnt(struct gve_priv *priv,
+				     const struct gve_device_descriptor *descriptor,
+				     const struct gve_device_option_dqo_rda *dev_op_dqo_rda)
 {
 	priv->tx_desc_cnt = be16_to_cpu(descriptor->tx_queue_entries);
 	priv->rx_desc_cnt = be16_to_cpu(descriptor->rx_queue_entries);
-	return 0;
-}
-
-static int
-gve_set_desc_cnt_dqo(struct gve_priv *priv,
-		     const struct gve_device_descriptor *descriptor,
-		     const struct gve_device_option_dqo_rda *dev_op_dqo_rda)
-{
-	priv->tx_desc_cnt = be16_to_cpu(descriptor->tx_queue_entries);
-	priv->rx_desc_cnt = be16_to_cpu(descriptor->rx_queue_entries);
-
-	if (priv->queue_format == GVE_DQO_QPL_FORMAT)
-		return 0;
-
-	priv->options_dqo_rda.tx_comp_ring_entries =
-		be16_to_cpu(dev_op_dqo_rda->tx_comp_ring_entries);
-	priv->options_dqo_rda.rx_buff_ring_entries =
-		be16_to_cpu(dev_op_dqo_rda->rx_buff_ring_entries);
-	return 0;
+
+	if (priv->queue_format == GVE_DQO_RDA_FORMAT) {
+		priv->options_dqo_rda.tx_comp_ring_entries =
+			be16_to_cpu(dev_op_dqo_rda->tx_comp_ring_entries);
+		priv->options_dqo_rda.rx_buff_ring_entries =
+			be16_to_cpu(dev_op_dqo_rda->rx_buff_ring_entries);
+	}
 }
 
 static void gve_enable_supported_features(struct gve_priv *priv,
@@ -888,15 +876,13 @@ int gve_adminq_describe_device(struct gve_priv *priv)
 			 "Driver is running with GQI QPL queue format.\n");
 	}
 
-	if (gve_is_gqi(priv)) {
-		err = gve_set_desc_cnt(priv, descriptor);
-	} else {
-		/* DQO supports LRO. */
+
+	/* set default descriptor counts */
+	gve_set_default_desc_cnt(priv, descriptor, dev_op_dqo_rda);
+
+	/* DQO supports LRO. */
+	if (!gve_is_gqi(priv))
 		priv->dev->hw_features |= NETIF_F_LRO;
-		err = gve_set_desc_cnt_dqo(priv, descriptor, dev_op_dqo_rda);
-	}
-	if (err)
-		goto free_device_descriptor;
 
 	priv->max_registered_pages =
 		be64_to_cpu(descriptor->max_registered_pages);
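For readers skimming the hunks above, the caller-side effect in
gve_adminq_describe_device() condenses to the following before/after
sketch; this is an illustrative reduction of the diff, not additional
driver code:

    /* Before: two format-specific setters whose int return was always
     * 0, leaving dead error handling in the caller.
     */
    if (gve_is_gqi(priv))
        err = gve_set_desc_cnt(priv, descriptor);
    else
        err = gve_set_desc_cnt_dqo(priv, descriptor, dev_op_dqo_rda);
    if (err)
        goto free_device_descriptor;

    /* After: a single void helper that checks the queue format itself. */
    gve_set_default_desc_cnt(priv, descriptor, dev_op_dqo_rda);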
From patchwork Mon Apr 1 23:45:27 2024
X-Patchwork-Submitter: Harshitha Ramamurthy
X-Patchwork-Id: 13613131
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 1 Apr 2024 23:45:27 +0000
In-Reply-To: <20240401234530.3101900-1-hramamurthy@google.com>
References: <20240401234530.3101900-1-hramamurthy@google.com>
Message-ID: <20240401234530.3101900-3-hramamurthy@google.com>
Subject: [PATCH net-next 2/5] gve: make the completion and buffer ring size equal for DQO
From: Harshitha Ramamurthy
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, pkaligineedi@google.com, shailend@google.com,
    davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, willemb@google.com, rushilg@google.com,
    jfraker@google.com, linux-kernel@vger.kernel.org,
    Harshitha Ramamurthy

For the DQO queue format, the gve driver stores two ring sizes for both
TX and RX - one for the completion queue ring and one for the data
buffer ring. This was meant to enable asymmetric sizes for these two
rings, but that is not supported. Make both fields reference the same
single variable.

This change renders reading the supported TX completion ring size and
RX buffer ring size for DQO from the device useless, so change those
fields to reserved and remove the related code.

Reviewed-by: Praveen Kaligineedi
Reviewed-by: Willem de Bruijn
Signed-off-by: Harshitha Ramamurthy
---
 drivers/net/ethernet/google/gve/gve.h        |  6 ---
 drivers/net/ethernet/google/gve/gve_adminq.c | 40 +++++---------------
 drivers/net/ethernet/google/gve/gve_adminq.h |  3 +-
 drivers/net/ethernet/google/gve/gve_rx_dqo.c |  3 +-
 drivers/net/ethernet/google/gve/gve_tx_dqo.c |  4 +-
 5 files changed, 13 insertions(+), 43 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 4814c96d5fe7..f009f7b3e68b 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -621,11 +621,6 @@ struct gve_qpl_config {
 	unsigned long *qpl_id_map; /* bitmap of used qpl ids */
 };
 
-struct gve_options_dqo_rda {
-	u16 tx_comp_ring_entries; /* number of tx_comp descriptors */
-	u16 rx_buff_ring_entries; /* number of rx_buff descriptors */
-};
-
 struct gve_irq_db {
 	__be32 index;
 } ____cacheline_aligned;
@@ -792,7 +787,6 @@ struct gve_priv {
 	u64 link_speed;
 	bool up_before_suspend; /* True if dev was up before suspend */
 
-	struct gve_options_dqo_rda options_dqo_rda;
 	struct gve_ptype_lut *ptype_lut_dqo;
 
 	/* Must be a power of two. */
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index 50affa11a59c..2ff9327ec056 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -565,6 +565,7 @@ static int gve_adminq_create_tx_queue(struct gve_priv *priv, u32 queue_index)
 			cpu_to_be64(tx->q_resources_bus),
 		.tx_ring_addr = cpu_to_be64(tx->bus),
 		.ntfy_id = cpu_to_be32(tx->ntfy_id),
+		.tx_ring_size = cpu_to_be16(priv->tx_desc_cnt),
 	};
 
 	if (gve_is_gqi(priv)) {
@@ -573,24 +574,17 @@ static int gve_adminq_create_tx_queue(struct gve_priv *priv, u32 queue_index)
 
 		cmd.create_tx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
 	} else {
-		u16 comp_ring_size;
 		u32 qpl_id = 0;
 
-		if (priv->queue_format == GVE_DQO_RDA_FORMAT) {
+		if (priv->queue_format == GVE_DQO_RDA_FORMAT)
 			qpl_id = GVE_RAW_ADDRESSING_QPL_ID;
-			comp_ring_size =
-				priv->options_dqo_rda.tx_comp_ring_entries;
-		} else {
+		else
 			qpl_id = tx->dqo.qpl->id;
-			comp_ring_size = priv->tx_desc_cnt;
-		}
 
 		cmd.create_tx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
-		cmd.create_tx_queue.tx_ring_size =
-			cpu_to_be16(priv->tx_desc_cnt);
 		cmd.create_tx_queue.tx_comp_ring_addr =
 			cpu_to_be64(tx->complq_bus_dqo);
 		cmd.create_tx_queue.tx_comp_ring_size =
-			cpu_to_be16(comp_ring_size);
+			cpu_to_be16(priv->tx_desc_cnt);
 	}
 
 	return gve_adminq_issue_cmd(priv, &cmd);
@@ -621,6 +615,7 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
 		.queue_id = cpu_to_be32(queue_index),
 		.ntfy_id = cpu_to_be32(rx->ntfy_id),
 		.queue_resources_addr = cpu_to_be64(rx->q_resources_bus),
+		.rx_ring_size = cpu_to_be16(priv->rx_desc_cnt),
 	};
 
 	if (gve_is_gqi(priv)) {
@@ -635,20 +630,13 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
 		cmd.create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
 		cmd.create_rx_queue.packet_buffer_size = cpu_to_be16(rx->packet_buffer_size);
 	} else {
-		u16 rx_buff_ring_entries;
 		u32 qpl_id = 0;
 
-		if (priv->queue_format == GVE_DQO_RDA_FORMAT) {
+		if (priv->queue_format == GVE_DQO_RDA_FORMAT)
 			qpl_id = GVE_RAW_ADDRESSING_QPL_ID;
-			rx_buff_ring_entries =
-				priv->options_dqo_rda.rx_buff_ring_entries;
-		} else {
+		else
 			qpl_id = rx->dqo.qpl->id;
-			rx_buff_ring_entries = priv->rx_desc_cnt;
-		}
 
 		cmd.create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
-		cmd.create_rx_queue.rx_ring_size =
-			cpu_to_be16(priv->rx_desc_cnt);
 		cmd.create_rx_queue.rx_desc_ring_addr =
 			cpu_to_be64(rx->dqo.complq.bus);
 		cmd.create_rx_queue.rx_data_ring_addr =
@@ -656,7 +644,7 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
 		cmd.create_rx_queue.packet_buffer_size =
 			cpu_to_be16(priv->data_buffer_size_dqo);
 		cmd.create_rx_queue.rx_buff_ring_size =
-			cpu_to_be16(rx_buff_ring_entries);
+			cpu_to_be16(priv->rx_desc_cnt);
 		cmd.create_rx_queue.enable_rsc =
 			!!(priv->dev->features & NETIF_F_LRO);
 		if (priv->header_split_enabled)
@@ -746,18 +734,10 @@ int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 num_queues)
 }
 
 static void gve_set_default_desc_cnt(struct gve_priv *priv,
-				     const struct gve_device_descriptor *descriptor,
-				     const struct gve_device_option_dqo_rda *dev_op_dqo_rda)
+				     const struct gve_device_descriptor *descriptor)
 {
 	priv->tx_desc_cnt = be16_to_cpu(descriptor->tx_queue_entries);
 	priv->rx_desc_cnt = be16_to_cpu(descriptor->rx_queue_entries);
-
-	if (priv->queue_format == GVE_DQO_RDA_FORMAT) {
-		priv->options_dqo_rda.tx_comp_ring_entries =
-			be16_to_cpu(dev_op_dqo_rda->tx_comp_ring_entries);
-		priv->options_dqo_rda.rx_buff_ring_entries =
-			be16_to_cpu(dev_op_dqo_rda->rx_buff_ring_entries);
-	}
 }
 
 static void gve_enable_supported_features(struct gve_priv *priv,
@@ -878,7 +858,7 @@ int gve_adminq_describe_device(struct gve_priv *priv)
 	}
 
 	/* set default descriptor counts */
-	gve_set_default_desc_cnt(priv, descriptor, dev_op_dqo_rda);
+	gve_set_default_desc_cnt(priv, descriptor);
 
 	/* DQO supports LRO. */
 	if (!gve_is_gqi(priv))
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h
index 5ac972e45ff8..3ff2028a7472 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.h
+++ b/drivers/net/ethernet/google/gve/gve_adminq.h
@@ -103,8 +103,7 @@ static_assert(sizeof(struct gve_device_option_gqi_qpl) == 4);
 
 struct gve_device_option_dqo_rda {
 	__be32 supported_features_mask;
-	__be16 tx_comp_ring_entries;
-	__be16 rx_buff_ring_entries;
+	__be32 reserved;
 };
 
 static_assert(sizeof(struct gve_device_option_dqo_rda) == 8);
diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
index 8e8071308aeb..7c2ab1edfcb2 100644
--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
@@ -305,8 +305,7 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
 	size_t size;
 	int i;
 
-	const u32 buffer_queue_slots = cfg->raw_addressing ?
-		priv->options_dqo_rda.rx_buff_ring_entries : cfg->ring_size;
+	const u32 buffer_queue_slots = cfg->ring_size;
 	const u32 completion_queue_slots = cfg->ring_size;
 
 	netif_dbg(priv, drv, priv->dev, "allocating rx ring DQO\n");
diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
index bc34b6cd3a3e..70f29b90a982 100644
--- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
@@ -295,9 +295,7 @@ static int gve_tx_alloc_ring_dqo(struct gve_priv *priv,
 
 	/* Queue sizes must be a power of 2 */
 	tx->mask = cfg->ring_size - 1;
-	tx->dqo.complq_mask = priv->queue_format == GVE_DQO_RDA_FORMAT ?
-		priv->options_dqo_rda.tx_comp_ring_entries - 1 :
-		tx->mask;
+	tx->dqo.complq_mask = tx->mask;
 
 	/* The max number of pending packets determines the maximum number of
 	 * descriptors which maybe written to the completion queue.
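In short, after this patch the DQO create-queue admin commands always
advertise symmetric sizes (condensed from the hunks above, using only
names that appear in the diff):

    cmd.create_tx_queue.tx_ring_size      = cpu_to_be16(priv->tx_desc_cnt);
    cmd.create_tx_queue.tx_comp_ring_size = cpu_to_be16(priv->tx_desc_cnt);
    cmd.create_rx_queue.rx_ring_size      = cpu_to_be16(priv->rx_desc_cnt);
    cmd.create_rx_queue.rx_buff_ring_size = cpu_to_be16(priv->rx_desc_cnt);

and, since both rings now share one size, tx->dqo.complq_mask in
gve_tx_alloc_ring_dqo() collapses to tx->mask.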
From patchwork Mon Apr 1 23:45:28 2024
X-Patchwork-Submitter: Harshitha Ramamurthy
X-Patchwork-Id: 13613132
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 1 Apr 2024 23:45:28 +0000
In-Reply-To: <20240401234530.3101900-1-hramamurthy@google.com>
References: <20240401234530.3101900-1-hramamurthy@google.com>
Message-ID: <20240401234530.3101900-4-hramamurthy@google.com>
Subject: [PATCH net-next 3/5] gve: set page count for RX QPL for GQI and DQO queue formats
From: Harshitha Ramamurthy
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, pkaligineedi@google.com, shailend@google.com,
    davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, willemb@google.com, rushilg@google.com,
    jfraker@google.com, linux-kernel@vger.kernel.org,
    Harshitha Ramamurthy

Fulfill the requirement that for GQI the number of pages per RX QPL
equals the ring size, by setting it to the ring size. Because of this
change, the rx_data_slot_cnt and rx_pages_per_qpl fields stored in the
priv structure are no longer needed, so remove their usage.

For DQO, the number of pages per RX QPL must exceed the ring size to
account for out-of-order completions, so set it to twice the RX ring
size.

Reviewed-by: Praveen Kaligineedi
Reviewed-by: Willem de Bruijn
Signed-off-by: Harshitha Ramamurthy
---
 drivers/net/ethernet/google/gve/gve.h        | 11 ++++++++---
 drivers/net/ethernet/google/gve/gve_adminq.c | 11 -----------
 drivers/net/ethernet/google/gve/gve_main.c   | 14 +++++++++-----
 drivers/net/ethernet/google/gve/gve_rx.c     |  2 +-
 drivers/net/ethernet/google/gve/gve_rx_dqo.c |  4 ++--
 5 files changed, 20 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index f009f7b3e68b..693d4b7d818b 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -63,7 +63,6 @@
 #define GVE_DEFAULT_HEADER_BUFFER_SIZE 128
 
 #define DQO_QPL_DEFAULT_TX_PAGES 512
-#define DQO_QPL_DEFAULT_RX_PAGES 2048
 
 /* Maximum TSO size supported on DQO */
 #define GVE_DQO_TX_MAX	0x3FFFF
@@ -714,8 +713,6 @@ struct gve_priv {
 	u16 tx_desc_cnt; /* num desc per ring */
 	u16 rx_desc_cnt; /* num desc per ring */
 	u16 tx_pages_per_qpl; /* Suggested number of pages per qpl for TX queues by NIC */
-	u16 rx_pages_per_qpl; /* Suggested number of pages per qpl for RX queues by NIC */
-	u16 rx_data_slot_cnt; /* rx buffer length */
 	u64 max_registered_pages;
 	u64 num_registered_pages; /* num pages registered with NIC */
 	struct bpf_prog *xdp_prog; /* XDP BPF program */
@@ -1038,6 +1035,14 @@ static inline u32 gve_rx_start_qpl_id(const struct gve_queue_config *tx_cfg)
 	return gve_get_rx_qpl_id(tx_cfg, 0);
 }
 
+static inline u32 gve_get_rx_pages_per_qpl_dqo(u32 rx_desc_cnt)
+{
+	/* For DQO, page count should be more than ring size for
+	 * out-of-order completions. Set it to two times of ring size.
+	 */
+	return 2 * rx_desc_cnt;
+}
+
 /* Returns a pointer to the next available tx qpl in the list of qpls */
 static inline
 struct gve_queue_page_list *gve_assign_tx_qpl(struct gve_tx_alloc_rings_cfg *cfg,
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index 2ff9327ec056..faeff20cd370 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -764,12 +764,8 @@ static void gve_enable_supported_features(struct gve_priv *priv,
 	if (dev_op_dqo_qpl) {
 		priv->tx_pages_per_qpl =
 			be16_to_cpu(dev_op_dqo_qpl->tx_pages_per_qpl);
-		priv->rx_pages_per_qpl =
-			be16_to_cpu(dev_op_dqo_qpl->rx_pages_per_qpl);
 		if (priv->tx_pages_per_qpl == 0)
 			priv->tx_pages_per_qpl = DQO_QPL_DEFAULT_TX_PAGES;
-		if (priv->rx_pages_per_qpl == 0)
-			priv->rx_pages_per_qpl = DQO_QPL_DEFAULT_RX_PAGES;
 	}
 
 	if (dev_op_buffer_sizes &&
@@ -878,13 +874,6 @@ int gve_adminq_describe_device(struct gve_priv *priv)
 	mac = descriptor->mac;
 	dev_info(&priv->pdev->dev, "MAC addr: %pM\n", mac);
 	priv->tx_pages_per_qpl = be16_to_cpu(descriptor->tx_pages_per_qpl);
-	priv->rx_data_slot_cnt = be16_to_cpu(descriptor->rx_pages_per_qpl);
-
-	if (gve_is_gqi(priv) && priv->rx_data_slot_cnt < priv->rx_desc_cnt) {
-		dev_err(&priv->pdev->dev, "rx_data_slot_cnt cannot be smaller than rx_desc_cnt, setting rx_desc_cnt down to %d.\n",
-			priv->rx_data_slot_cnt);
-		priv->rx_desc_cnt = priv->rx_data_slot_cnt;
-	}
 	priv->default_num_queues = be16_to_cpu(descriptor->default_num_queues);
 
 	gve_enable_supported_features(priv, supported_features_mask,
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 166bd827a6d7..470447c0490f 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -1103,13 +1103,13 @@ static int gve_alloc_n_qpls(struct gve_priv *priv,
 	return err;
 }
 
-static int gve_alloc_qpls(struct gve_priv *priv,
-			  struct gve_qpls_alloc_cfg *cfg)
+static int gve_alloc_qpls(struct gve_priv *priv, struct gve_qpls_alloc_cfg *cfg,
+			  struct gve_rx_alloc_rings_cfg *rx_alloc_cfg)
 {
 	int max_queues = cfg->tx_cfg->max_queues + cfg->rx_cfg->max_queues;
 	int rx_start_id, tx_num_qpls, rx_num_qpls;
 	struct gve_queue_page_list *qpls;
-	int page_count;
+	u32 page_count;
 	int err;
 
 	if (cfg->raw_addressing)
@@ -1141,8 +1141,12 @@ static int gve_alloc_qpls(struct gve_priv *priv,
 	/* For GQI_QPL number of pages allocated have 1:1 relationship with
 	 * number of descriptors. For DQO, number of pages required are
 	 * more than descriptors (because of out of order completions).
+	 * Set it to twice the number of descriptors.
 	 */
-	page_count = cfg->is_gqi ?
-		     priv->rx_data_slot_cnt : priv->rx_pages_per_qpl;
+	if (cfg->is_gqi)
+		page_count = rx_alloc_cfg->ring_size;
+	else
+		page_count = gve_get_rx_pages_per_qpl_dqo(rx_alloc_cfg->ring_size);
 	rx_num_qpls = gve_num_rx_qpls(cfg->rx_cfg, gve_is_qpl(priv));
 	err = gve_alloc_n_qpls(priv, qpls, page_count, rx_start_id, rx_num_qpls);
 	if (err)
@@ -1363,7 +1367,7 @@ static int gve_queues_mem_alloc(struct gve_priv *priv,
 {
 	int err;
 
-	err = gve_alloc_qpls(priv, qpls_alloc_cfg);
+	err = gve_alloc_qpls(priv, qpls_alloc_cfg, rx_alloc_cfg);
 	if (err) {
 		netif_err(priv, drv, priv->dev, "Failed to alloc QPLs\n");
 		return err;
diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 20f5a9e7fae9..cd727e55ae0f 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -240,7 +240,7 @@ static int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
 				 int idx)
 {
 	struct device *hdev = &priv->pdev->dev;
-	u32 slots = priv->rx_data_slot_cnt;
+	u32 slots = cfg->ring_size;
 	int filled_pages;
 	size_t bytes;
 	int err;
diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
index 7c2ab1edfcb2..15108407b54f 100644
--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
@@ -178,7 +178,7 @@ static int gve_alloc_page_dqo(struct gve_rx_ring *rx,
 		return err;
 	} else {
 		idx = rx->dqo.next_qpl_page_idx;
-		if (idx >= priv->rx_pages_per_qpl) {
+		if (idx >= gve_get_rx_pages_per_qpl_dqo(priv->rx_desc_cnt)) {
 			net_err_ratelimited("%s: Out of QPL pages\n",
 					    priv->dev->name);
 			return -ENOMEM;
@@ -321,7 +321,7 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
 
 	rx->dqo.num_buf_states = cfg->raw_addressing ?
 				 min_t(s16, S16_MAX, buffer_queue_slots * 4) :
-				 priv->rx_pages_per_qpl;
+				 gve_get_rx_pages_per_qpl_dqo(cfg->ring_size);
 	rx->dqo.buf_states = kvcalloc(rx->dqo.num_buf_states,
 				      sizeof(rx->dqo.buf_states[0]),
 				      GFP_KERNEL);
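A worked example of the new QPL sizing rule (the ring size of 1024 is
chosen here for arithmetic illustration only): a GQI RX ring with 1024
descriptors now gets exactly 1024 QPL pages (1:1), while a DQO RX ring
with 1024 descriptors gets gve_get_rx_pages_per_qpl_dqo(1024) =
2 * 1024 = 2048 pages to absorb out-of-order completions - the same
value as the removed DQO_QPL_DEFAULT_RX_PAGES constant.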
From patchwork Mon Apr 1 23:45:29 2024
X-Patchwork-Submitter: Harshitha Ramamurthy
X-Patchwork-Id: 13613133
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 1 Apr 2024 23:45:29 +0000
In-Reply-To: <20240401234530.3101900-1-hramamurthy@google.com>
References: <20240401234530.3101900-1-hramamurthy@google.com>
Message-ID: <20240401234530.3101900-5-hramamurthy@google.com>
Subject: [PATCH net-next 4/5] gve: add support to read ring size ranges from the device
From: Harshitha Ramamurthy
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, pkaligineedi@google.com, shailend@google.com,
    davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, willemb@google.com, rushilg@google.com,
    jfraker@google.com, linux-kernel@vger.kernel.org,
    Harshitha Ramamurthy

Add support to read the ring size change capability and the min and max
descriptor counts from the device, and store them in the driver.
Also accommodate a special case where, depending on its version, the
device does not provide a minimum ring size. In that case, rely on
default values for the minimums.

Reviewed-by: Praveen Kaligineedi
Reviewed-by: Willem de Bruijn
Signed-off-by: Harshitha Ramamurthy
---
 drivers/net/ethernet/google/gve/gve.h        | 10 +++
 drivers/net/ethernet/google/gve/gve_adminq.c | 71 +++++++++++++++++---
 drivers/net/ethernet/google/gve/gve_adminq.h | 45 ++++++++-----
 3 files changed, 102 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 693d4b7d818b..669cacdae4f4 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -50,6 +50,10 @@
 /* PTYPEs are always 10 bits. */
 #define GVE_NUM_PTYPES	1024
 
+/* Default minimum ring size */
+#define GVE_DEFAULT_MIN_TX_RING_SIZE 256
+#define GVE_DEFAULT_MIN_RX_RING_SIZE 512
+
 #define GVE_DEFAULT_RX_BUFFER_SIZE 2048
 
 #define GVE_MAX_RX_BUFFER_SIZE 4096
@@ -712,6 +716,12 @@ struct gve_priv {
 	u16 num_event_counters;
 	u16 tx_desc_cnt; /* num desc per ring */
 	u16 rx_desc_cnt; /* num desc per ring */
+	u16 max_tx_desc_cnt;
+	u16 max_rx_desc_cnt;
+	u16 min_tx_desc_cnt;
+	u16 min_rx_desc_cnt;
+	bool modify_ring_size_enabled;
+	bool default_min_ring_size;
 	u16 tx_pages_per_qpl; /* Suggested number of pages per qpl for TX queues by NIC */
 	u64 max_registered_pages;
 	u64 num_registered_pages; /* num pages registered with NIC */
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index faeff20cd370..b2b619aa2310 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -32,6 +32,8 @@ struct gve_device_option *gve_get_next_option(struct gve_device_descriptor *desc
 	return option_end > descriptor_end ? NULL : (struct gve_device_option *)option_end;
 }
 
+#define GVE_DEVICE_OPTION_NO_MIN_RING_SIZE	8
+
 static
 void gve_parse_device_option(struct gve_priv *priv,
 			     struct gve_device_descriptor *device_descriptor,
@@ -41,7 +43,8 @@ void gve_parse_device_option(struct gve_priv *priv,
 			     struct gve_device_option_dqo_rda **dev_op_dqo_rda,
 			     struct gve_device_option_jumbo_frames **dev_op_jumbo_frames,
 			     struct gve_device_option_dqo_qpl **dev_op_dqo_qpl,
-			     struct gve_device_option_buffer_sizes **dev_op_buffer_sizes)
+			     struct gve_device_option_buffer_sizes **dev_op_buffer_sizes,
+			     struct gve_device_option_modify_ring **dev_op_modify_ring)
 {
 	u32 req_feat_mask = be32_to_cpu(option->required_features_mask);
 	u16 option_length = be16_to_cpu(option->option_length);
@@ -165,6 +168,27 @@ void gve_parse_device_option(struct gve_priv *priv,
 				 "Buffer Sizes");
 		*dev_op_buffer_sizes = (void *)(option + 1);
 		break;
+	case GVE_DEV_OPT_ID_MODIFY_RING:
+		if (option_length < GVE_DEVICE_OPTION_NO_MIN_RING_SIZE ||
+		    req_feat_mask != GVE_DEV_OPT_REQ_FEAT_MASK_MODIFY_RING) {
+			dev_warn(&priv->pdev->dev, GVE_DEVICE_OPTION_ERROR_FMT,
+				 "Modify Ring", (int)sizeof(**dev_op_modify_ring),
+				 GVE_DEV_OPT_REQ_FEAT_MASK_MODIFY_RING,
+				 option_length, req_feat_mask);
+			break;
+		}
+
+		if (option_length > sizeof(**dev_op_modify_ring)) {
+			dev_warn(&priv->pdev->dev,
+				 GVE_DEVICE_OPTION_TOO_BIG_FMT, "Modify Ring");
+		}
+
+		*dev_op_modify_ring = (void *)(option + 1);
+
+		/* device has not provided min ring size */
+		if (option_length == GVE_DEVICE_OPTION_NO_MIN_RING_SIZE)
+			priv->default_min_ring_size = true;
+		break;
 	default:
 		/* If we don't recognize the option just continue
 		 * without doing anything.
@@ -183,7 +207,8 @@ gve_process_device_options(struct gve_priv *priv,
 			   struct gve_device_option_dqo_rda **dev_op_dqo_rda,
 			   struct gve_device_option_jumbo_frames **dev_op_jumbo_frames,
 			   struct gve_device_option_dqo_qpl **dev_op_dqo_qpl,
-			   struct gve_device_option_buffer_sizes **dev_op_buffer_sizes)
+			   struct gve_device_option_buffer_sizes **dev_op_buffer_sizes,
+			   struct gve_device_option_modify_ring **dev_op_modify_ring)
 {
 	const int num_options = be16_to_cpu(descriptor->num_device_options);
 	struct gve_device_option *dev_opt;
@@ -204,7 +229,8 @@ gve_process_device_options(struct gve_priv *priv,
 		gve_parse_device_option(priv, descriptor, dev_opt,
 					dev_op_gqi_rda, dev_op_gqi_qpl,
 					dev_op_dqo_rda, dev_op_jumbo_frames,
-					dev_op_dqo_qpl, dev_op_buffer_sizes);
+					dev_op_dqo_qpl, dev_op_buffer_sizes,
+					dev_op_modify_ring);
 		dev_opt = next_opt;
 	}
 
@@ -738,6 +764,12 @@ static void gve_set_default_desc_cnt(struct gve_priv *priv,
 {
 	priv->tx_desc_cnt = be16_to_cpu(descriptor->tx_queue_entries);
 	priv->rx_desc_cnt = be16_to_cpu(descriptor->rx_queue_entries);
+
+	/* set default ranges */
+	priv->max_tx_desc_cnt = priv->tx_desc_cnt;
+	priv->max_rx_desc_cnt = priv->rx_desc_cnt;
+	priv->min_tx_desc_cnt = priv->tx_desc_cnt;
+	priv->min_rx_desc_cnt = priv->rx_desc_cnt;
 }
 
 static void gve_enable_supported_features(struct gve_priv *priv,
@@ -747,7 +779,9 @@ static void gve_enable_supported_features(struct gve_priv *priv,
 				  const struct gve_device_option_dqo_qpl
 				  *dev_op_dqo_qpl,
 				  const struct gve_device_option_buffer_sizes
-				  *dev_op_buffer_sizes)
+				  *dev_op_buffer_sizes,
+				  const struct gve_device_option_modify_ring
+				  *dev_op_modify_ring)
 {
 	/* Before control reaches this point, the page-size-capped max MTU from
 	 * the gve_device_descriptor field has already been stored in
@@ -778,12 +812,33 @@ static void gve_enable_supported_features(struct gve_priv *priv,
 			 "BUFFER SIZES device option enabled with max_rx_buffer_size of %u, header_buf_size of %u.\n",
 			 priv->max_rx_buffer_size, priv->header_buf_size);
 	}
+
+	/* Read and store ring size ranges given by device */
+	if (dev_op_modify_ring &&
+	    (supported_features_mask & GVE_SUP_MODIFY_RING_MASK)) {
+		priv->modify_ring_size_enabled = true;
+
+		/* max ring size for DQO QPL should not be overwritten because of device limit */
+		if (priv->queue_format != GVE_DQO_QPL_FORMAT) {
+			priv->max_rx_desc_cnt = be16_to_cpu(dev_op_modify_ring->max_rx_ring_size);
+			priv->max_tx_desc_cnt = be16_to_cpu(dev_op_modify_ring->max_tx_ring_size);
+		}
+		if (priv->default_min_ring_size) {
+			/* If device hasn't provided minimums, use default minimums */
+			priv->min_tx_desc_cnt = GVE_DEFAULT_MIN_TX_RING_SIZE;
+			priv->min_rx_desc_cnt = GVE_DEFAULT_MIN_RX_RING_SIZE;
+		} else {
+			priv->min_rx_desc_cnt = be16_to_cpu(dev_op_modify_ring->min_rx_ring_size);
+			priv->min_tx_desc_cnt = be16_to_cpu(dev_op_modify_ring->min_tx_ring_size);
+		}
+	}
 }
 
 int gve_adminq_describe_device(struct gve_priv *priv)
 {
 	struct gve_device_option_buffer_sizes *dev_op_buffer_sizes = NULL;
 	struct gve_device_option_jumbo_frames *dev_op_jumbo_frames = NULL;
+	struct gve_device_option_modify_ring *dev_op_modify_ring = NULL;
 	struct gve_device_option_gqi_rda *dev_op_gqi_rda = NULL;
 	struct gve_device_option_gqi_qpl *dev_op_gqi_qpl = NULL;
 	struct gve_device_option_dqo_rda *dev_op_dqo_rda = NULL;
@@ -815,9 +870,9 @@ int gve_adminq_describe_device(struct gve_priv *priv)
 
 	err = gve_process_device_options(priv, descriptor, &dev_op_gqi_rda,
 					 &dev_op_gqi_qpl, &dev_op_dqo_rda,
-					 &dev_op_jumbo_frames,
-					 &dev_op_dqo_qpl,
-					 &dev_op_buffer_sizes);
+					 &dev_op_jumbo_frames, &dev_op_dqo_qpl,
+					 &dev_op_buffer_sizes,
+					 &dev_op_modify_ring);
 	if (err)
 		goto free_device_descriptor;
 
@@ -878,7 +933,7 @@ int gve_adminq_describe_device(struct gve_priv *priv)
 
 	gve_enable_supported_features(priv, supported_features_mask,
 				      dev_op_jumbo_frames, dev_op_dqo_qpl,
-				      dev_op_buffer_sizes);
+				      dev_op_buffer_sizes, dev_op_modify_ring);
 
 free_device_descriptor:
 	dma_pool_free(priv->adminq_pool, descriptor, descriptor_bus);
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h
index 3ff2028a7472..beedf2353847 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.h
+++ b/drivers/net/ethernet/google/gve/gve_adminq.h
@@ -133,6 +133,16 @@ struct gve_device_option_buffer_sizes {
 
 static_assert(sizeof(struct gve_device_option_buffer_sizes) == 8);
 
+struct gve_device_option_modify_ring {
+	__be32 supported_featured_mask;
+	__be16 max_rx_ring_size;
+	__be16 max_tx_ring_size;
+	__be16 min_rx_ring_size;
+	__be16 min_tx_ring_size;
+};
+
+static_assert(sizeof(struct gve_device_option_modify_ring) == 12);
+
 /* Terminology:
  *
  * RDA - Raw DMA Addressing - Buffers associated with SKBs are directly DMA
@@ -142,28 +152,31 @@ static_assert(sizeof(struct gve_device_option_buffer_sizes) == 8);
  *       the device for read/write and data is copied from/to SKBs.
  */
 enum gve_dev_opt_id {
-	GVE_DEV_OPT_ID_GQI_RAW_ADDRESSING = 0x1,
-	GVE_DEV_OPT_ID_GQI_RDA = 0x2,
-	GVE_DEV_OPT_ID_GQI_QPL = 0x3,
-	GVE_DEV_OPT_ID_DQO_RDA = 0x4,
-	GVE_DEV_OPT_ID_DQO_QPL = 0x7,
-	GVE_DEV_OPT_ID_JUMBO_FRAMES = 0x8,
-	GVE_DEV_OPT_ID_BUFFER_SIZES = 0xa,
+	GVE_DEV_OPT_ID_GQI_RAW_ADDRESSING	= 0x1,
+	GVE_DEV_OPT_ID_GQI_RDA			= 0x2,
+	GVE_DEV_OPT_ID_GQI_QPL			= 0x3,
+	GVE_DEV_OPT_ID_DQO_RDA			= 0x4,
+	GVE_DEV_OPT_ID_MODIFY_RING		= 0x6,
+	GVE_DEV_OPT_ID_DQO_QPL			= 0x7,
+	GVE_DEV_OPT_ID_JUMBO_FRAMES		= 0x8,
+	GVE_DEV_OPT_ID_BUFFER_SIZES		= 0xa,
 };
 
 enum gve_dev_opt_req_feat_mask {
-	GVE_DEV_OPT_REQ_FEAT_MASK_GQI_RAW_ADDRESSING = 0x0,
-	GVE_DEV_OPT_REQ_FEAT_MASK_GQI_RDA = 0x0,
-	GVE_DEV_OPT_REQ_FEAT_MASK_GQI_QPL = 0x0,
-	GVE_DEV_OPT_REQ_FEAT_MASK_DQO_RDA = 0x0,
-	GVE_DEV_OPT_REQ_FEAT_MASK_JUMBO_FRAMES = 0x0,
-	GVE_DEV_OPT_REQ_FEAT_MASK_DQO_QPL = 0x0,
-	GVE_DEV_OPT_REQ_FEAT_MASK_BUFFER_SIZES = 0x0,
+	GVE_DEV_OPT_REQ_FEAT_MASK_GQI_RAW_ADDRESSING	= 0x0,
+	GVE_DEV_OPT_REQ_FEAT_MASK_GQI_RDA		= 0x0,
+	GVE_DEV_OPT_REQ_FEAT_MASK_GQI_QPL		= 0x0,
+	GVE_DEV_OPT_REQ_FEAT_MASK_DQO_RDA		= 0x0,
+	GVE_DEV_OPT_REQ_FEAT_MASK_JUMBO_FRAMES		= 0x0,
+	GVE_DEV_OPT_REQ_FEAT_MASK_DQO_QPL		= 0x0,
+	GVE_DEV_OPT_REQ_FEAT_MASK_BUFFER_SIZES		= 0x0,
+	GVE_DEV_OPT_REQ_FEAT_MASK_MODIFY_RING		= 0x0,
 };
 
 enum gve_sup_feature_mask {
-	GVE_SUP_JUMBO_FRAMES_MASK = 1 << 2,
-	GVE_SUP_BUFFER_SIZES_MASK = 1 << 4,
+	GVE_SUP_MODIFY_RING_MASK	= 1 << 0,
+	GVE_SUP_JUMBO_FRAMES_MASK	= 1 << 2,
+	GVE_SUP_BUFFER_SIZES_MASK	= 1 << 4,
 };
 
 #define GVE_DEV_OPT_LEN_GQI_RAW_ADDRESSING 0x0
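The 8-byte special case falls out of the option layout declared in
gve_adminq.h above: a full gve_device_option_modify_ring is 4 bytes
(supported_featured_mask) + 2 + 2 (max RX/TX ring sizes) + 2 + 2
(min RX/TX ring sizes) = 12 bytes, while an older device that reports
only maximums sends just 4 + 2 + 2 = 8 bytes - hence
GVE_DEVICE_OPTION_NO_MIN_RING_SIZE == 8 and the fallback to
GVE_DEFAULT_MIN_TX_RING_SIZE / GVE_DEFAULT_MIN_RX_RING_SIZE.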
From patchwork Mon Apr 1 23:45:30 2024
X-Patchwork-Submitter: Harshitha Ramamurthy
X-Patchwork-Id: 13613134
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 1 Apr 2024 23:45:30 +0000
In-Reply-To: <20240401234530.3101900-1-hramamurthy@google.com>
References: <20240401234530.3101900-1-hramamurthy@google.com>
Message-ID: <20240401234530.3101900-6-hramamurthy@google.com>
Subject: [PATCH net-next 5/5] gve: add support to change ring size via ethtool
From: Harshitha Ramamurthy
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, pkaligineedi@google.com, shailend@google.com,
    davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, willemb@google.com, rushilg@google.com,
    jfraker@google.com, linux-kernel@vger.kernel.org,
    Harshitha Ramamurthy

Allow the user to change the ring size via ethtool if supported by the
device. The driver relies on the ring size ranges queried from the
device to validate the ring sizes requested by the user.

Reviewed-by: Praveen Kaligineedi
Reviewed-by: Willem de Bruijn
Signed-off-by: Harshitha Ramamurthy
---
 drivers/net/ethernet/google/gve/gve.h         |  8 ++
 drivers/net/ethernet/google/gve/gve_ethtool.c | 85 +++++++++++++++++--
 drivers/net/ethernet/google/gve/gve_main.c    | 16 ++--
 3 files changed, 95 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 669cacdae4f4..e97633b68e25 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -1159,6 +1159,14 @@ int gve_set_hsplit_config(struct gve_priv *priv, u8 tcp_data_split);
 /* Reset */
 void gve_schedule_reset(struct gve_priv *priv);
 int gve_reset(struct gve_priv *priv, bool attempt_teardown);
+void gve_get_curr_alloc_cfgs(struct gve_priv *priv,
+			     struct gve_qpls_alloc_cfg *qpls_alloc_cfg,
+			     struct gve_tx_alloc_rings_cfg *tx_alloc_cfg,
+			     struct gve_rx_alloc_rings_cfg *rx_alloc_cfg);
+int gve_adjust_config(struct gve_priv *priv,
+		      struct gve_qpls_alloc_cfg *qpls_alloc_cfg,
+		      struct gve_tx_alloc_rings_cfg *tx_alloc_cfg,
+		      struct gve_rx_alloc_rings_cfg *rx_alloc_cfg);
 int gve_adjust_queues(struct gve_priv *priv,
 		      struct gve_queue_config new_rx_config,
 		      struct gve_queue_config new_tx_config);
diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
index dbe05402d40b..815dead31673 100644
--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
+++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
@@ -490,8 +490,8 @@ static void gve_get_ringparam(struct net_device *netdev,
 {
 	struct gve_priv *priv = netdev_priv(netdev);
 
-	cmd->rx_max_pending = priv->rx_desc_cnt;
-	cmd->tx_max_pending = priv->tx_desc_cnt;
+	cmd->rx_max_pending = priv->max_rx_desc_cnt;
+	cmd->tx_max_pending = priv->max_tx_desc_cnt;
 	cmd->rx_pending = priv->rx_desc_cnt;
 	cmd->tx_pending = priv->tx_desc_cnt;
 
@@ -503,20 +503,93 @@ static void gve_get_ringparam(struct net_device *netdev,
 		kernel_cmd->tcp_data_split = ETHTOOL_TCP_DATA_SPLIT_DISABLED;
 }
 
+static int gve_adjust_ring_sizes(struct gve_priv *priv,
+				 u16 new_tx_desc_cnt,
+				 u16 new_rx_desc_cnt)
+{
+	struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0};
+	struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0};
+	struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0};
+	struct gve_qpl_config new_qpl_cfg;
+	int err;
+
+	/* get current queue configuration */
+	gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg,
+				&tx_alloc_cfg, &rx_alloc_cfg);
+
+	/* copy over the new ring_size from ethtool */
+	tx_alloc_cfg.ring_size = new_tx_desc_cnt;
+	rx_alloc_cfg.ring_size = new_rx_desc_cnt;
+
+	/* qpl_cfg is not read-only, it contains a map that gets updated as
+	 * rings are allocated, which is why we cannot use the yet unreleased
+	 * one in priv.
+	 */
+	qpls_alloc_cfg.qpl_cfg = &new_qpl_cfg;
+	tx_alloc_cfg.qpl_cfg = &new_qpl_cfg;
+	rx_alloc_cfg.qpl_cfg = &new_qpl_cfg;
+
+	if (netif_running(priv->dev)) {
+		err = gve_adjust_config(priv, &qpls_alloc_cfg,
+					&tx_alloc_cfg, &rx_alloc_cfg);
+		if (err)
+			return err;
+	}
+
+	/* Set new ring_size for the next up */
+	priv->tx_desc_cnt = new_tx_desc_cnt;
+	priv->rx_desc_cnt = new_rx_desc_cnt;
+
+	return 0;
+}
+
+static int gve_validate_req_ring_size(struct gve_priv *priv, u16 new_tx_desc_cnt,
+				      u16 new_rx_desc_cnt)
+{
+	/* check for valid range */
+	if (new_tx_desc_cnt < priv->min_tx_desc_cnt ||
+	    new_tx_desc_cnt > priv->max_tx_desc_cnt ||
+	    new_rx_desc_cnt < priv->min_rx_desc_cnt ||
+	    new_rx_desc_cnt > priv->max_rx_desc_cnt) {
+		dev_err(&priv->pdev->dev, "Requested descriptor count out of range\n");
+		return -EINVAL;
+	}
+
+	if (!is_power_of_2(new_tx_desc_cnt) || !is_power_of_2(new_rx_desc_cnt)) {
+		dev_err(&priv->pdev->dev, "Requested descriptor count has to be a power of 2\n");
+		return -EINVAL;
+	}
+	return 0;
+}
+
 static int gve_set_ringparam(struct net_device *netdev,
 			     struct ethtool_ringparam *cmd,
 			     struct kernel_ethtool_ringparam *kernel_cmd,
 			     struct netlink_ext_ack *extack)
 {
 	struct gve_priv *priv = netdev_priv(netdev);
+	u16 new_tx_cnt, new_rx_cnt;
+	int err;
+
+	err = gve_set_hsplit_config(priv, kernel_cmd->tcp_data_split);
+	if (err)
+		return err;
 
-	if (priv->tx_desc_cnt != cmd->tx_pending ||
-	    priv->rx_desc_cnt != cmd->rx_pending) {
-		dev_info(&priv->pdev->dev, "Modify ring size is not supported.\n");
+	if (cmd->tx_pending == priv->tx_desc_cnt && cmd->rx_pending == priv->rx_desc_cnt)
+		return 0;
+
+	if (!priv->modify_ring_size_enabled) {
+		dev_err(&priv->pdev->dev, "Modify ring size is not supported.\n");
 		return -EOPNOTSUPP;
 	}
 
-	return gve_set_hsplit_config(priv, kernel_cmd->tcp_data_split);
+	new_tx_cnt = cmd->tx_pending;
+	new_rx_cnt = cmd->rx_pending;
+
+	if (gve_validate_req_ring_size(priv, new_tx_cnt, new_rx_cnt))
+		return -EINVAL;
+
+	return gve_adjust_ring_sizes(priv, new_tx_cnt, new_rx_cnt);
 }
 
 static int gve_user_reset(struct net_device *netdev, u32 *flags)
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 470447c0490f..a515e5af843c 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -1314,10 +1314,10 @@ static void gve_rx_get_curr_alloc_cfg(struct gve_priv *priv,
 	cfg->rx = priv->rx;
 }
 
-static void gve_get_curr_alloc_cfgs(struct gve_priv *priv,
-				    struct gve_qpls_alloc_cfg *qpls_alloc_cfg,
-				    struct gve_tx_alloc_rings_cfg *tx_alloc_cfg,
-				    struct gve_rx_alloc_rings_cfg *rx_alloc_cfg)
+void gve_get_curr_alloc_cfgs(struct gve_priv *priv,
+			     struct gve_qpls_alloc_cfg *qpls_alloc_cfg,
+			     struct gve_tx_alloc_rings_cfg *tx_alloc_cfg,
+			     struct gve_rx_alloc_rings_cfg *rx_alloc_cfg)
 {
 	gve_qpls_get_curr_alloc_cfg(priv, qpls_alloc_cfg);
 	gve_tx_get_curr_alloc_cfg(priv, tx_alloc_cfg);
@@ -1867,10 +1867,10 @@ static int gve_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 	}
 }
 
-static int gve_adjust_config(struct gve_priv *priv,
-			     struct gve_qpls_alloc_cfg *qpls_alloc_cfg,
-			     struct gve_tx_alloc_rings_cfg *tx_alloc_cfg,
-			     struct gve_rx_alloc_rings_cfg *rx_alloc_cfg)
+int gve_adjust_config(struct gve_priv *priv,
+		      struct gve_qpls_alloc_cfg *qpls_alloc_cfg,
+		      struct gve_tx_alloc_rings_cfg *tx_alloc_cfg,
+		      struct gve_rx_alloc_rings_cfg *rx_alloc_cfg)
 {
 	int err;
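With the series applied, ring sizes are controlled through the
standard ethtool ring interface; per gve_validate_req_ring_size()
above, requested counts must be powers of 2 and fall within the
device-reported [min, max] range. For example (the interface name is
illustrative):

    # query current and device-supported maximum ring sizes
    ethtool -g eth0

    # request new TX/RX descriptor counts
    ethtool -G eth0 tx 512 rx 1024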