From patchwork Thu Dec 16 00:46:45 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12679763
Date: Wed, 15 Dec 2021 16:46:45 -0800
In-Reply-To: <20211216004652.1021911-1-jeroendb@google.com>
Message-Id: <20211216004652.1021911-2-jeroendb@google.com>
References: <20211216004652.1021911-1-jeroendb@google.com>
Subject: [PATCH net-next 1/8] gve: Correct order of processing device options
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Jeroen de Borst

The legacy raw
addressing device option was processed before the new RDA queue format
option. This caused the supported features mask, which is provided only
on the RDA queue format option, not to be set. This disabled jumbo-frame
support when using raw addressing.

Fixes: 255489f5b33c ("gve: Add a jumbo-frame device option")
Signed-off-by: Jeroen de Borst
---
 drivers/net/ethernet/google/gve/gve_adminq.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index 83ae56c310d3..326b56b49216 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -738,10 +738,7 @@ int gve_adminq_describe_device(struct gve_priv *priv)
	 * is not set to GqiRda, choose the queue format in a priority order:
	 * DqoRda, GqiRda, GqiQpl. Use GqiQpl as default.
	 */
-	if (priv->queue_format == GVE_GQI_RDA_FORMAT) {
-		dev_info(&priv->pdev->dev,
-			 "Driver is running with GQI RDA queue format.\n");
-	} else if (dev_op_dqo_rda) {
+	if (dev_op_dqo_rda) {
 		priv->queue_format = GVE_DQO_RDA_FORMAT;
 		dev_info(&priv->pdev->dev,
			 "Driver is running with DQO RDA queue format.\n");
@@ -753,6 +750,9 @@ int gve_adminq_describe_device(struct gve_priv *priv)
			 "Driver is running with GQI RDA queue format.\n");
 		supported_features_mask =
			be32_to_cpu(dev_op_gqi_rda->supported_features_mask);
+	} else if (priv->queue_format == GVE_GQI_RDA_FORMAT) {
+		dev_info(&priv->pdev->dev,
+			 "Driver is running with GQI RDA queue format.\n");
 	} else {
 		priv->queue_format = GVE_GQI_QPL_FORMAT;
 		if (dev_op_gqi_qpl)
From patchwork Thu Dec 16 00:46:46 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12679767
Date: Wed, 15 Dec 2021 16:46:46 -0800
In-Reply-To: <20211216004652.1021911-1-jeroendb@google.com>
Message-Id: <20211216004652.1021911-3-jeroendb@google.com>
References: <20211216004652.1021911-1-jeroendb@google.com>
Subject: [PATCH net-next 2/8] gve: Move the irq db indexes out of the ntfy block struct
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Catherine Sullivan, David Awogbemila

From: Catherine Sullivan

Giving the device access to other kernel structs is not ideal. Move
the indexes into their own array and just keep pointers to them in
the ntfy block struct.

Signed-off-by: Catherine Sullivan
Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve.h        | 13 ++++---
 drivers/net/ethernet/google/gve/gve_adminq.c |  2 +-
 drivers/net/ethernet/google/gve/gve_dqo.h    |  2 +-
 drivers/net/ethernet/google/gve/gve_main.c   | 36 ++++++++++++++------
 4 files changed, 36 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index b719f72281c4..b6bd8f679127 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -441,13 +441,13 @@ struct gve_tx_ring {
  * associated with that irq.
  */
 struct gve_notify_block {
-	__be32 irq_db_index; /* idx into Bar2 - set by device, must be 1st */
+	__be32 *irq_db_index; /* pointer to idx into Bar2 */
 	char name[IFNAMSIZ + 16]; /* name registered with the kernel */
 	struct napi_struct napi; /* kernel napi struct for this block */
 	struct gve_priv *priv;
 	struct gve_tx_ring *tx; /* tx rings on this block */
 	struct gve_rx_ring *rx; /* rx rings on this block */
-} ____cacheline_aligned;
+};

 /* Tracks allowed and current queue settings */
 struct gve_queue_config {
@@ -466,6 +466,10 @@ struct gve_options_dqo_rda {
 	u16 rx_buff_ring_entries; /* number of rx_buff descriptors */
 };

+struct gve_irq_db {
+	__be32 index;
+} ____cacheline_aligned;
+
 struct gve_ptype {
 	u8 l3_type; /* `gve_l3_type` in gve_adminq.h */
 	u8 l4_type; /* `gve_l4_type` in gve_adminq.h */
@@ -492,7 +496,8 @@ struct gve_priv {
 	struct gve_rx_ring *rx; /* array of rx_cfg.num_queues */
 	struct gve_queue_page_list *qpls; /* array of num qpls */
 	struct gve_notify_block *ntfy_blocks; /* array of num_ntfy_blks */
-	dma_addr_t ntfy_block_bus;
+	struct gve_irq_db *irq_db_indices; /* array of num_ntfy_blks */
+	dma_addr_t irq_db_indices_bus;
 	struct msix_entry *msix_vectors; /* array of num_ntfy_blks + 1 */
 	char mgmt_msix_name[IFNAMSIZ + 16];
 	u32 mgmt_msix_idx;
@@ -733,7 +738,7 @@ static inline void gve_clear_report_stats(struct gve_priv *priv)
 static inline __be32 __iomem *gve_irq_doorbell(struct gve_priv *priv,
					       struct gve_notify_block *block)
 {
-	return &priv->db_bar2[be32_to_cpu(block->irq_db_index)];
+	return &priv->db_bar2[be32_to_cpu(*block->irq_db_index)];
 }

 /* Returns the index into ntfy_blocks of the given tx ring's block
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index 326b56b49216..2ad7f57f7e5b 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -462,7 +462,7 @@ int gve_adminq_configure_device_resources(struct gve_priv *priv,
 		.num_counters = cpu_to_be32(num_counters),
 		.irq_db_addr = cpu_to_be64(db_array_bus_addr),
 		.num_irq_dbs = cpu_to_be32(num_ntfy_blks),
-		.irq_db_stride = cpu_to_be32(sizeof(priv->ntfy_blocks[0])),
+		.irq_db_stride = cpu_to_be32(sizeof(*priv->irq_db_indices)),
 		.ntfy_blk_msix_base_idx =
			cpu_to_be32(GVE_NTFY_BLK_BASE_MSIX_IDX),
 		.queue_format = priv->queue_format,
diff --git a/drivers/net/ethernet/google/gve/gve_dqo.h b/drivers/net/ethernet/google/gve/gve_dqo.h
index 836042364124..b2e2fb015693 100644
--- a/drivers/net/ethernet/google/gve/gve_dqo.h
+++ b/drivers/net/ethernet/google/gve/gve_dqo.h
@@ -73,7 +73,7 @@ static inline void
 gve_write_irq_doorbell_dqo(const struct gve_priv *priv,
			   const struct gve_notify_block *block, u32 val)
 {
-	u32 index = be32_to_cpu(block->irq_db_index);
+	u32 index = be32_to_cpu(*block->irq_db_index);

 	iowrite32(val, &priv->db_bar2[index]);
 }
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 59b66f679e46..348b4cfc4a12 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -334,15 +334,23 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv)
 		dev_err(&priv->pdev->dev, "Did not receive management vector.\n");
 		goto abort_with_msix_enabled;
 	}
-	priv->ntfy_blocks =
+	priv->irq_db_indices =
 		dma_alloc_coherent(&priv->pdev->dev,
				   priv->num_ntfy_blks *
-				   sizeof(*priv->ntfy_blocks),
-				   &priv->ntfy_block_bus, GFP_KERNEL);
-	if (!priv->ntfy_blocks) {
+				   sizeof(*priv->irq_db_indices),
+				   &priv->irq_db_indices_bus, GFP_KERNEL);
+	if (!priv->irq_db_indices) {
 		err = -ENOMEM;
 		goto abort_with_mgmt_vector;
 	}
+
+	priv->ntfy_blocks = kvzalloc(priv->num_ntfy_blks *
+				     sizeof(*priv->ntfy_blocks), GFP_KERNEL);
+	if (!priv->ntfy_blocks) {
+		err = -ENOMEM;
+		goto abort_with_irq_db_indices;
+	}
+
 	/* Setup the other blocks - the first n-1 vectors */
 	for (i = 0; i < priv->num_ntfy_blks; i++) {
 		struct gve_notify_block *block = &priv->ntfy_blocks[i];
@@ -361,6 +369,7 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv)
 		}
 		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
				      get_cpu_mask(i % active_cpus));
+		block->irq_db_index = &priv->irq_db_indices[i].index;
 	}
 	return 0;
 abort_with_some_ntfy_blocks:
@@ -372,10 +381,13 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv)
			NULL);
 		free_irq(priv->msix_vectors[msix_idx].vector, block);
 	}
-	dma_free_coherent(&priv->pdev->dev, priv->num_ntfy_blks *
-			  sizeof(*priv->ntfy_blocks),
-			  priv->ntfy_blocks, priv->ntfy_block_bus);
+	kvfree(priv->ntfy_blocks);
 	priv->ntfy_blocks = NULL;
+abort_with_irq_db_indices:
+	dma_free_coherent(&priv->pdev->dev, priv->num_ntfy_blks *
+			  sizeof(*priv->irq_db_indices),
+			  priv->irq_db_indices, priv->irq_db_indices_bus);
+	priv->irq_db_indices = NULL;
 abort_with_mgmt_vector:
 	free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
 abort_with_msix_enabled:
@@ -403,10 +415,12 @@ static void gve_free_notify_blocks(struct gve_priv *priv)
 		free_irq(priv->msix_vectors[msix_idx].vector, block);
 	}
 	free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
-	dma_free_coherent(&priv->pdev->dev,
-			  priv->num_ntfy_blks * sizeof(*priv->ntfy_blocks),
-			  priv->ntfy_blocks, priv->ntfy_block_bus);
+	kvfree(priv->ntfy_blocks);
 	priv->ntfy_blocks = NULL;
+	dma_free_coherent(&priv->pdev->dev, priv->num_ntfy_blks *
+			  sizeof(*priv->irq_db_indices),
+			  priv->irq_db_indices, priv->irq_db_indices_bus);
+	priv->irq_db_indices = NULL;
 	pci_disable_msix(priv->pdev);
 	kvfree(priv->msix_vectors);
 	priv->msix_vectors = NULL;
@@ -428,7 +442,7 @@ static int gve_setup_device_resources(struct gve_priv *priv)
 	err = gve_adminq_configure_device_resources(priv,
						    priv->counter_array_bus,
						    priv->num_event_counters,
-						    priv->ntfy_block_bus,
+						    priv->irq_db_indices_bus,
						    priv->num_ntfy_blks);
 	if (unlikely(err)) {
 		dev_err(&priv->pdev->dev,
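A condensed sketch of the split this patch establishes, for readers skimming the diff (names mirror the patch; error handling and unrelated setup are omitted, and the fragment is illustrative rather than compile-ready): only the cacheline-aligned gve_irq_db entries live in DMA-coherent memory and are described to the device, while the notify blocks stay in ordinary kernel memory and simply point at their entry.

/* Illustrative fragment, not from the patch: the device only ever learns
 * the bus address and stride of irq_db_indices, so it can DMA doorbell
 * indexes without touching the notify blocks that hold napi/priv pointers.
 */
priv->irq_db_indices = dma_alloc_coherent(&priv->pdev->dev,
                                          priv->num_ntfy_blks *
                                          sizeof(*priv->irq_db_indices),
                                          &priv->irq_db_indices_bus,
                                          GFP_KERNEL);
priv->ntfy_blocks = kvzalloc(priv->num_ntfy_blks *
                             sizeof(*priv->ntfy_blocks), GFP_KERNEL);

for (i = 0; i < priv->num_ntfy_blks; i++)
        priv->ntfy_blocks[i].irq_db_index = &priv->irq_db_indices[i].index;

/* The admin queue then reports priv->irq_db_indices_bus as the doorbell
 * index array base and sizeof(*priv->irq_db_indices) as its stride.
 */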
From patchwork Thu Dec 16 00:46:47 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12679765
Date: Wed, 15 Dec 2021 16:46:47 -0800
In-Reply-To: <20211216004652.1021911-1-jeroendb@google.com>
Message-Id: <20211216004652.1021911-4-jeroendb@google.com>
References: <20211216004652.1021911-1-jeroendb@google.com>
Subject: [PATCH net-next 3/8] gve: Update gve_free_queue_page_list signature
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Catherine Sullivan

From: Catherine Sullivan

The id field should be a u32 not a signed int.
Signed-off-by: Catherine Sullivan
---
 drivers/net/ethernet/google/gve/gve_main.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 348b4cfc4a12..086424518ecc 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -831,8 +831,7 @@ void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma,
 	put_page(page);
 }

-static void gve_free_queue_page_list(struct gve_priv *priv,
-				     int id)
+static void gve_free_queue_page_list(struct gve_priv *priv, u32 id)
 {
 	struct gve_queue_page_list *qpl = &priv->qpls[id];
 	int i;

From patchwork Thu Dec 16 00:46:48 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12679769
Date: Wed, 15 Dec 2021 16:46:48 -0800
In-Reply-To: <20211216004652.1021911-1-jeroendb@google.com>
Message-Id: <20211216004652.1021911-5-jeroendb@google.com>
References: <20211216004652.1021911-1-jeroendb@google.com>
Subject: [PATCH net-next 4/8] gve: remove memory barrier around seqno
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Catherine Sullivan

From: Catherine Sullivan

No longer needed after we introduced the barrier in gve_napi_poll.

Signed-off-by: Catherine Sullivan
---
 drivers/net/ethernet/google/gve/gve_rx.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 3d04b5aff331..9ddcc497f48e 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -639,8 +639,6 @@ bool gve_rx_work_pending(struct gve_rx_ring *rx)
 	desc = rx->desc.desc_ring + next_idx;
 	flags_seq = desc->flags_seq;

-	/* Make sure we have synchronized the seq no with the device */
-	smp_rmb();
 	return (GVE_SEQNO(flags_seq) == rx->desc.seqno);
 }
From patchwork Thu Dec 16 00:46:49 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12679771
Date: Wed, 15 Dec 2021 16:46:49 -0800
In-Reply-To: <20211216004652.1021911-1-jeroendb@google.com>
Message-Id: <20211216004652.1021911-6-jeroendb@google.com>
References: <20211216004652.1021911-1-jeroendb@google.com>
Subject: [PATCH net-next 5/8] gve: Add optional metadata descriptor type GVE_TXD_MTD
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Willem de Bruijn, David Awogbemila

From: Willem de Bruijn

Allow drivers to pass metadata along with packet data to the device.
Introduce a new metadata descriptor type

* GVE_TXD_MTD

This descriptor is optional. If present, it immediately follows the
packet descriptor and precedes the segment descriptor. This descriptor
may be repeated. Multiple metadata descriptors may follow. There are no
immediate uses for this; it is for future proofing. At present devices
allow only 1 MTD descriptor.

The lower four bits of the type_flags field encode GVE_TXD_MTD.
The upper four bits of the type_flags field encode a *sub*type.

Introduce one such metadata descriptor subtype

* GVE_MTD_SUBTYPE_PATH

This shares path information with the device for network failure
discovery and robust response:

Linux derives ipv6 flowlabel and ECMP multipath from sk->sk_txhash,
and updates this field on error with sk_rethink_txhash. Allow the host
stack to do the same. Pass the tx_hash value if set. Also communicate
whether the path hash is set, or more exactly, what its type is. Define
two common types

  GVE_MTD_PATH_HASH_NONE
  GVE_MTD_PATH_HASH_L4

Concrete examples of error conditions that are resolved are mentioned
in the commits that add sk_rethink_txhash calls, such as commit
7788174e8726 ("tcp: change IPv6 flow-label upon receiving spurious
retransmission").

Experimental results mirror what the theory suggests: where IPv6
FlowLabel is included in path selection (e.g., LAG/ECMP), flowlabel
rotation on TCP timeout avoids the vast majority of TCP disconnects
that would otherwise have occurred during link failures in long-haul
backbones, when an alternative path is available.

Rotation can be applied to various bad connection signals, such as
timeouts and spurious retransmissions. In aggregate, such flow level
signals can help locate network issues.

Define initial common states:

  GVE_MTD_PATH_STATE_DEFAULT
  GVE_MTD_PATH_STATE_TIMEOUT
  GVE_MTD_PATH_STATE_CONGESTION
  GVE_MTD_PATH_STATE_RETRANSMIT

Signed-off-by: Willem de Bruijn
Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve.h      |  1 +
 drivers/net/ethernet/google/gve/gve_desc.h | 20 ++++++
 drivers/net/ethernet/google/gve/gve_tx.c   | 73 ++++++++++++++------
 3 files changed, 74 insertions(+), 20 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index b6bd8f679127..ed43b8ece5a2 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -229,6 +229,7 @@ struct gve_rx_ring {
 /* A TX desc ring entry */
 union gve_tx_desc {
 	struct gve_tx_pkt_desc pkt; /* first desc for a packet */
+	struct gve_tx_mtd_desc mtd; /* optional metadata descriptor */
 	struct gve_tx_seg_desc seg; /* subsequent descs for a packet */
 };

diff --git a/drivers/net/ethernet/google/gve/gve_desc.h b/drivers/net/ethernet/google/gve/gve_desc.h
index 4d225a18d8ce..f4ae9e19b844 100644
--- a/drivers/net/ethernet/google/gve/gve_desc.h
+++ b/drivers/net/ethernet/google/gve/gve_desc.h
@@ -33,6 +33,14 @@ struct gve_tx_pkt_desc {
 	__be64 seg_addr;  /* Base address (see note) of this segment */
 } __packed;

+struct gve_tx_mtd_desc {
+	u8      type_flags;     /* type is lower 4 bits, subtype upper */
+	u8      path_state;     /* state is lower 4 bits, hash type upper */
+	__be16  reserved0;
+	__be32  path_hash;
+	__be64  reserved1;
+} __packed;
+
 struct gve_tx_seg_desc {
 	u8	type_flags;	/* type is lower 4 bits, flags upper */
 	u8	l3_offset;	/* TSO: 2 byte units to start of IPH */
@@ -46,6 +54,7 @@ struct gve_tx_seg_desc {
 #define	GVE_TXD_STD		(0x0 << 4) /* Std with Host Address */
 #define	GVE_TXD_TSO		(0x1 << 4) /* TSO with Host Address */
 #define	GVE_TXD_SEG		(0x2 << 4) /* Seg with Host Address */
+#define	GVE_TXD_MTD		(0x3 << 4) /* Metadata */

 /* GVE Transmit Descriptor Flags for Std Pkts */
 #define	GVE_TXF_L4CSUM	BIT(0)	/* Need csum offload */
@@ -54,6 +63,17 @@ struct gve_tx_seg_desc {
 /* GVE Transmit Descriptor Flags for TSO Segs */
 #define	GVE_TXSF_IPV6	BIT(1)	/* IPv6 TSO */

+/* GVE Transmit Descriptor Options for MTD Segs */
+#define GVE_MTD_SUBTYPE_PATH		0
+
+#define GVE_MTD_PATH_STATE_DEFAULT	0
+#define GVE_MTD_PATH_STATE_TIMEOUT	1
+#define GVE_MTD_PATH_STATE_CONGESTION	2
+#define GVE_MTD_PATH_STATE_RETRANSMIT	3
+
+#define GVE_MTD_PATH_HASH_NONE		(0x0 << 4)
+#define GVE_MTD_PATH_HASH_L4		(0x1 << 4)
+
 /* GVE Receive Packet Descriptor */
 /* The start of an ethernet packet comes 2 bytes into the rx buffer.
  * gVNIC adds this padding so that both the DMA and the L3/4 protocol header
diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index a9cb241fedf4..4888bf05fbed 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -296,11 +296,14 @@ static inline int gve_skb_fifo_bytes_required(struct gve_tx_ring *tx,
 	return bytes;
 }

-/* The most descriptors we could need is MAX_SKB_FRAGS + 3 : 1 for each skb frag,
- * +1 for the skb linear portion, +1 for when tcp hdr needs to be in separate descriptor,
- * and +1 if the payload wraps to the beginning of the FIFO.
+/* The most descriptors we could need is MAX_SKB_FRAGS + 4 :
+ * 1 for each skb frag
+ * 1 for the skb linear portion
+ * 1 for when tcp hdr needs to be in separate descriptor
+ * 1 if the payload wraps to the beginning of the FIFO
+ * 1 for metadata descriptor
  */
-#define MAX_TX_DESC_NEEDED	(MAX_SKB_FRAGS + 3)
+#define MAX_TX_DESC_NEEDED	(MAX_SKB_FRAGS + 4)
 static void gve_tx_unmap_buf(struct device *dev, struct gve_tx_buffer_state *info)
 {
 	if (info->skb) {
@@ -395,6 +398,19 @@ static void gve_tx_fill_pkt_desc(union gve_tx_desc *pkt_desc,
 	pkt_desc->pkt.seg_addr = cpu_to_be64(addr);
 }

+static void gve_tx_fill_mtd_desc(union gve_tx_desc *mtd_desc,
+				 struct sk_buff *skb)
+{
+	BUILD_BUG_ON(sizeof(mtd_desc->mtd) != sizeof(mtd_desc->pkt));
+
+	mtd_desc->mtd.type_flags = GVE_TXD_MTD | GVE_MTD_SUBTYPE_PATH;
+	mtd_desc->mtd.path_state = GVE_MTD_PATH_STATE_DEFAULT |
+				   GVE_MTD_PATH_HASH_L4;
+	mtd_desc->mtd.path_hash = cpu_to_be32(skb->hash);
+	mtd_desc->mtd.reserved0 = 0;
+	mtd_desc->mtd.reserved1 = 0;
+}
+
 static void gve_tx_fill_seg_desc(union gve_tx_desc *seg_desc,
				 struct sk_buff *skb, bool is_gso,
				 u16 len, u64 addr)
@@ -426,6 +442,7 @@ static int gve_tx_add_skb_copy(struct gve_priv *priv, struct gve_tx_ring *tx, st
 	int pad_bytes, hlen, hdr_nfrags, payload_nfrags, l4_hdr_offset;
 	union gve_tx_desc *pkt_desc, *seg_desc;
 	struct gve_tx_buffer_state *info;
+	int mtd_desc_nr = !!skb->l4_hash;
 	bool is_gso = skb_is_gso(skb);
 	u32 idx = tx->req & tx->mask;
 	int payload_iov = 2;
@@ -457,7 +474,7 @@ static int gve_tx_add_skb_copy(struct gve_priv *priv, struct gve_tx_ring *tx, st
					   &info->iov[payload_iov]);

 	gve_tx_fill_pkt_desc(pkt_desc, skb, is_gso, l4_hdr_offset,
-			     1 + payload_nfrags, hlen,
+			     1 + mtd_desc_nr + payload_nfrags, hlen,
			     info->iov[hdr_nfrags - 1].iov_offset);

 	skb_copy_bits(skb, 0,
@@ -468,8 +485,13 @@ static int gve_tx_add_skb_copy(struct gve_priv *priv, struct gve_tx_ring *tx, st
		       info->iov[hdr_nfrags - 1].iov_len);
 	copy_offset = hlen;

+	if (mtd_desc_nr) {
+		next_idx = (tx->req + 1) & tx->mask;
+		gve_tx_fill_mtd_desc(&tx->desc[next_idx], skb);
+	}
+
 	for (i = payload_iov; i < payload_nfrags + payload_iov; i++) {
-		next_idx = (tx->req + 1 + i - payload_iov) & tx->mask;
+		next_idx = (tx->req + 1 + mtd_desc_nr + i - payload_iov) & tx->mask;
 		seg_desc = &tx->desc[next_idx];

 		gve_tx_fill_seg_desc(seg_desc, skb, is_gso,
@@ -485,16 +507,17 @@ static int gve_tx_add_skb_copy(struct gve_priv *priv, struct gve_tx_ring *tx, st
 		copy_offset += info->iov[i].iov_len;
 	}

-	return 1 + payload_nfrags;
+	return 1 + mtd_desc_nr + payload_nfrags;
 }

 static int gve_tx_add_skb_no_copy(struct gve_priv *priv, struct gve_tx_ring *tx,
				  struct sk_buff *skb)
 {
 	const struct skb_shared_info *shinfo = skb_shinfo(skb);
-	int hlen, payload_nfrags, l4_hdr_offset;
-	union gve_tx_desc *pkt_desc, *seg_desc;
+	int hlen, num_descriptors, l4_hdr_offset;
+	union gve_tx_desc *pkt_desc, *mtd_desc, *seg_desc;
 	struct gve_tx_buffer_state *info;
+	int mtd_desc_nr = !!skb->l4_hash;
 	bool is_gso = skb_is_gso(skb);
 	u32 idx = tx->req & tx->mask;
 	u64 addr;
@@ -523,23 +546,30 @@ static int gve_tx_add_skb_no_copy(struct gve_priv *priv, struct gve_tx_ring *tx,
 	dma_unmap_len_set(info, len, len);
 	dma_unmap_addr_set(info, dma, addr);

-	payload_nfrags = shinfo->nr_frags;
+	num_descriptors = 1 + shinfo->nr_frags;
+	if (hlen < len)
+		num_descriptors++;
+	if (mtd_desc_nr)
+		num_descriptors++;
+
+	gve_tx_fill_pkt_desc(pkt_desc, skb, is_gso, l4_hdr_offset,
+			     num_descriptors, hlen, addr);
+
+	if (mtd_desc_nr) {
+		idx = (idx + 1) & tx->mask;
+		mtd_desc = &tx->desc[idx];
+		gve_tx_fill_mtd_desc(mtd_desc, skb);
+	}
+
 	if (hlen < len) {
 		/* For gso the rest of the linear portion of the skb needs to
		 * be in its own descriptor.
		 */
-		payload_nfrags++;
-		gve_tx_fill_pkt_desc(pkt_desc, skb, is_gso, l4_hdr_offset,
-				     1 + payload_nfrags, hlen, addr);
-
 		len -= hlen;
 		addr += hlen;
-		idx = (tx->req + 1) & tx->mask;
+		idx = (idx + 1) & tx->mask;
 		seg_desc = &tx->desc[idx];
 		gve_tx_fill_seg_desc(seg_desc, skb, is_gso, len, addr);
-	} else {
-		gve_tx_fill_pkt_desc(pkt_desc, skb, is_gso, l4_hdr_offset,
-				     1 + payload_nfrags, hlen, addr);
 	}

 	for (i = 0; i < shinfo->nr_frags; i++) {
@@ -560,11 +590,14 @@ static int gve_tx_add_skb_no_copy(struct gve_priv *priv, struct gve_tx_ring *tx,
 		gve_tx_fill_seg_desc(seg_desc, skb, is_gso, len, addr);
 	}

-	return 1 + payload_nfrags;
+	return num_descriptors;

 unmap_drop:
-	i += (payload_nfrags == shinfo->nr_frags ? 1 : 2);
+	i += num_descriptors - shinfo->nr_frags;
 	while (i--) {
+		/* Skip metadata descriptor, if set */
+		if (i == 1 && mtd_desc_nr == 1)
+			continue;
 		idx--;
 		gve_tx_unmap_buf(tx->dev, &tx->info[idx & tx->mask]);
 	}
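A worked example of the encoding above (standalone userspace C, not part of the patch; the constants are copied from the gve_desc.h hunk and the printed values follow directly from them):

/* Shows the two metadata descriptor bytes produced for an skb that
 * carries an L4 hash, mirroring gve_tx_fill_mtd_desc() above.
 */
#include <stdio.h>
#include <stdint.h>

#define GVE_TXD_MTD                (0x3 << 4) /* Metadata */
#define GVE_MTD_SUBTYPE_PATH       0
#define GVE_MTD_PATH_STATE_DEFAULT 0
#define GVE_MTD_PATH_HASH_L4       (0x1 << 4)

int main(void)
{
        uint8_t type_flags = GVE_TXD_MTD | GVE_MTD_SUBTYPE_PATH;
        uint8_t path_state = GVE_MTD_PATH_STATE_DEFAULT | GVE_MTD_PATH_HASH_L4;

        /* Prints: type_flags=0x30 path_state=0x10 */
        printf("type_flags=0x%02x path_state=0x%02x\n",
               (unsigned)type_flags, (unsigned)path_state);
        return 0;
}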
From patchwork Thu Dec 16 00:46:50 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12679773
Date: Wed, 15 Dec 2021 16:46:50 -0800
In-Reply-To: <20211216004652.1021911-1-jeroendb@google.com>
Message-Id: <20211216004652.1021911-7-jeroendb@google.com>
References: <20211216004652.1021911-1-jeroendb@google.com>
Subject: [PATCH net-next 6/8] gve: Implement suspend/resume/shutdown
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Catherine Sullivan, David Awogbemila

From: Catherine Sullivan

Add support for suspend, resume and shutdown.

Signed-off-by: Catherine Sullivan
Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve.h      |  3 ++
 drivers/net/ethernet/google/gve/gve_main.c | 57 ++++++++++++++++++++++
 2 files changed, 60 insertions(+)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index ed43b8ece5a2..950dff787269 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -557,6 +557,8 @@ struct gve_priv {
 	u32 page_alloc_fail; /* count of page alloc fails */
 	u32 dma_mapping_error; /* count of dma mapping errors */
 	u32 stats_report_trigger_cnt; /* count of device-requested stats-reports since last reset */
+	u32 suspend_cnt; /* count of times suspended */
+	u32 resume_cnt; /* count of times resumed */
 	struct workqueue_struct *gve_wq;
 	struct work_struct service_task;
 	struct work_struct stats_report_task;
@@ -573,6 +575,7 @@ struct gve_priv {
 	/* Gvnic device link speed from hypervisor.
	 */
 	u64 link_speed;
+	bool up_before_suspend; /* True if dev was up before suspend */

 	struct gve_options_dqo_rda options_dqo_rda;
 	struct gve_ptype_lut *ptype_lut_dqo;
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 086424518ecc..e5456187b3f2 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -1676,6 +1676,58 @@ static void gve_remove(struct pci_dev *pdev)
 	pci_disable_device(pdev);
 }

+static void gve_shutdown(struct pci_dev *pdev)
+{
+	struct net_device *netdev = pci_get_drvdata(pdev);
+	struct gve_priv *priv = netdev_priv(netdev);
+	bool was_up = netif_carrier_ok(priv->dev);
+
+	rtnl_lock();
+	if (was_up && gve_close(priv->dev)) {
+		/* If the dev was up, attempt to close, if close fails, reset */
+		gve_reset_and_teardown(priv, was_up);
+	} else {
+		/* If the dev wasn't up or close worked, finish tearing down */
+		gve_teardown_priv_resources(priv);
+	}
+	rtnl_unlock();
+}
+
+#ifdef CONFIG_PM
+static int gve_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+	struct net_device *netdev = pci_get_drvdata(pdev);
+	struct gve_priv *priv = netdev_priv(netdev);
+	bool was_up = netif_carrier_ok(priv->dev);
+
+	priv->suspend_cnt++;
+	rtnl_lock();
+	if (was_up && gve_close(priv->dev)) {
+		/* If the dev was up, attempt to close, if close fails, reset */
+		gve_reset_and_teardown(priv, was_up);
+	} else {
+		/* If the dev wasn't up or close worked, finish tearing down */
+		gve_teardown_priv_resources(priv);
+	}
+	priv->up_before_suspend = was_up;
+	rtnl_unlock();
+	return 0;
+}
+
+static int gve_resume(struct pci_dev *pdev)
+{
+	struct net_device *netdev = pci_get_drvdata(pdev);
+	struct gve_priv *priv = netdev_priv(netdev);
+	int err;
+
+	priv->resume_cnt++;
+	rtnl_lock();
+	err = gve_reset_recovery(priv, priv->up_before_suspend);
+	rtnl_unlock();
+	return err;
+}
+#endif /* CONFIG_PM */
+
 static const struct pci_device_id gve_id_table[] = {
 	{ PCI_DEVICE(PCI_VENDOR_ID_GOOGLE, PCI_DEV_ID_GVNIC) },
 	{ }
@@ -1686,6 +1738,11 @@ static struct pci_driver gvnic_driver = {
 	.id_table = gve_id_table,
 	.probe = gve_probe,
 	.remove = gve_remove,
+	.shutdown = gve_shutdown,
+#ifdef CONFIG_PM
+	.suspend = gve_suspend,
+	.resume = gve_resume,
+#endif
 };

 module_pci_driver(gvnic_driver);
From patchwork Thu Dec 16 00:46:51 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12679777
Date: Wed, 15 Dec 2021 16:46:51 -0800
In-Reply-To: <20211216004652.1021911-1-jeroendb@google.com>
Message-Id: <20211216004652.1021911-8-jeroendb@google.com>
References: <20211216004652.1021911-1-jeroendb@google.com>
Subject: [PATCH net-next 7/8] gve: Add consumed counts to ethtool stats
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Jordan Kim, Jeroen de Borst

From: Jordan Kim

Being able to see how many descriptors are in use is helpful when
diagnosing certain issues.
Signed-off-by: Jeroen de Borst
Signed-off-by: Jordan Kim
---
 drivers/net/ethernet/google/gve/gve_ethtool.c | 21 +++++++++++--------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
index fd2d2c705391..e0815bb031e9 100644
--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
+++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
@@ -42,7 +42,7 @@ static const char gve_gstrings_main_stats[][ETH_GSTRING_LEN] = {
 };

 static const char gve_gstrings_rx_stats[][ETH_GSTRING_LEN] = {
-	"rx_posted_desc[%u]", "rx_completed_desc[%u]", "rx_bytes[%u]",
+	"rx_posted_desc[%u]", "rx_completed_desc[%u]", "rx_consumed_desc[%u]", "rx_bytes[%u]",
 	"rx_cont_packet_cnt[%u]", "rx_frag_flip_cnt[%u]", "rx_frag_copy_cnt[%u]",
 	"rx_dropped_pkt[%u]", "rx_copybreak_pkt[%u]", "rx_copied_pkt[%u]",
 	"rx_queue_drop_cnt[%u]", "rx_no_buffers_posted[%u]",
@@ -50,7 +50,7 @@ static const char gve_gstrings_rx_stats[][ETH_GSTRING_LEN] = {
 };

 static const char gve_gstrings_tx_stats[][ETH_GSTRING_LEN] = {
-	"tx_posted_desc[%u]", "tx_completed_desc[%u]", "tx_bytes[%u]",
+	"tx_posted_desc[%u]", "tx_completed_desc[%u]", "tx_consumed_desc[%u]", "tx_bytes[%u]",
 	"tx_wake[%u]", "tx_stop[%u]", "tx_event_counter[%u]",
 	"tx_dma_mapping_error[%u]",
 };
@@ -139,10 +139,11 @@ static void
 gve_get_ethtool_stats(struct net_device *netdev,
		      struct ethtool_stats *stats, u64 *data)
 {
-	u64 tmp_rx_pkts, tmp_rx_bytes, tmp_rx_skb_alloc_fail, tmp_rx_buf_alloc_fail,
-	    tmp_rx_desc_err_dropped_pkt, tmp_tx_pkts, tmp_tx_bytes;
+	u64 tmp_rx_pkts, tmp_rx_bytes, tmp_rx_skb_alloc_fail,
+	    tmp_rx_buf_alloc_fail, tmp_rx_desc_err_dropped_pkt,
+	    tmp_tx_pkts, tmp_tx_bytes;
 	u64 rx_buf_alloc_fail, rx_desc_err_dropped_pkt, rx_pkts,
-	    rx_skb_alloc_fail, rx_bytes, tx_pkts, tx_bytes;
+	    rx_skb_alloc_fail, rx_bytes, tx_pkts, tx_bytes, tx_dropped;
 	int stats_idx, base_stats_idx, max_stats_idx;
 	struct stats *report_stats;
 	int *rx_qid_to_stats_idx;
@@ -191,7 +192,7 @@ gve_get_ethtool_stats(struct net_device *netdev,
			rx_desc_err_dropped_pkt += tmp_rx_desc_err_dropped_pkt;
 		}
 	}
-	for (tx_pkts = 0, tx_bytes = 0, ring = 0;
+	for (tx_pkts = 0, tx_bytes = 0, tx_dropped = 0, ring = 0;
	     ring < priv->tx_cfg.num_queues; ring++) {
 		if (priv->tx) {
			do {
@@ -203,6 +204,7 @@ gve_get_ethtool_stats(struct net_device *netdev,
						   start));
			tx_pkts += tmp_tx_pkts;
			tx_bytes += tmp_tx_bytes;
+			tx_dropped += priv->tx[ring].dropped_pkt;
 		}
 	}

@@ -214,9 +216,7 @@ gve_get_ethtool_stats(struct net_device *netdev,
 	/* total rx dropped packets */
 	data[i++] = rx_skb_alloc_fail + rx_buf_alloc_fail +
		    rx_desc_err_dropped_pkt;
-	/* Skip tx_dropped */
-	i++;
-
+	data[i++] = tx_dropped;
 	data[i++] = priv->tx_timeo_cnt;
 	data[i++] = rx_skb_alloc_fail;
 	data[i++] = rx_buf_alloc_fail;
@@ -255,6 +255,7 @@ gve_get_ethtool_stats(struct net_device *netdev,

 		data[i++] = rx->fill_cnt;
 		data[i++] = rx->cnt;
+		data[i++] = rx->fill_cnt - rx->cnt;
 		do {
			start =
				u64_stats_fetch_begin(&priv->rx[ring].statss);
@@ -318,12 +319,14 @@ gve_get_ethtool_stats(struct net_device *netdev,
 		if (gve_is_gqi(priv)) {
			data[i++] = tx->req;
			data[i++] = tx->done;
+			data[i++] = tx->req - tx->done;
 		} else {
			/* DQO doesn't currently support
			 * posted/completed descriptor counts;
			 */
			data[i++] = 0;
			data[i++] = 0;
+			data[i++] = tx->dqo_tx.tail - tx->dqo_tx.head;
 		}
 		do {
			start =
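A quick standalone illustration of what the new counters report (made-up counter values, not from the patch; it simply mirrors the subtractions visible in the diff above):

/* rx_consumed_desc = fill_cnt - cnt: rx descriptors posted to the ring but
 * not yet processed; tx_consumed_desc = req - done: tx descriptors handed
 * to the NIC that have not completed yet.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint32_t rx_fill_cnt = 1024, rx_cnt = 1000; /* posted vs. completed rx */
        uint32_t tx_req = 512, tx_done = 500;       /* posted vs. completed tx */

        /* Prints: rx_consumed_desc=24 tx_consumed_desc=12 */
        printf("rx_consumed_desc=%u tx_consumed_desc=%u\n",
               rx_fill_cnt - rx_cnt, tx_req - tx_done);
        return 0;
}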
From patchwork Thu Dec 16 00:46:52 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12679775
Date: Wed, 15 Dec 2021 16:46:52 -0800
In-Reply-To: <20211216004652.1021911-1-jeroendb@google.com>
Message-Id: <20211216004652.1021911-9-jeroendb@google.com>
References: <20211216004652.1021911-1-jeroendb@google.com>
Subject: [PATCH net-next 8/8] gve: Add tx|rx-coalesce-usec for DQO
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Tao Liu, Jeroen de Borst

From: Tao Liu

Adding ethtool support for changing rx-coalesce-usec and tx-coalesce-usec
when using the DQO queue format.

Signed-off-by: Tao Liu
Signed-off-by: Jeroen de Borst
---
 drivers/net/ethernet/google/gve/gve.h         |  4 ++
 drivers/net/ethernet/google/gve/gve_dqo.h     | 22 +++++--
 drivers/net/ethernet/google/gve/gve_ethtool.c | 61 +++++++++++++++++++
 drivers/net/ethernet/google/gve/gve_main.c    | 15 +++--
 4 files changed, 91 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 950dff787269..5f5d4f7aa813 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -584,6 +584,10 @@ struct gve_priv {
 	int data_buffer_size_dqo;

 	enum gve_queue_format queue_format;
+
+	/* Interrupt coalescing settings */
+	u32 tx_coalesce_usecs;
+	u32 rx_coalesce_usecs;
 };

 enum gve_service_task_flags_bit {
diff --git a/drivers/net/ethernet/google/gve/gve_dqo.h b/drivers/net/ethernet/google/gve/gve_dqo.h
index b2e2fb015693..1eb4d5fd8561 100644
--- a/drivers/net/ethernet/google/gve/gve_dqo.h
+++ b/drivers/net/ethernet/google/gve/gve_dqo.h
@@ -18,6 +18,7 @@

 #define GVE_TX_IRQ_RATELIMIT_US_DQO 50
 #define GVE_RX_IRQ_RATELIMIT_US_DQO 20
+#define GVE_MAX_ITR_INTERVAL_DQO (GVE_ITR_INTERVAL_DQO_MASK * 2)

 /* Timeout in seconds to wait for a reinjection completion after receiving
  * its corresponding miss completion.
@@ -54,17 +55,17 @@ gve_tx_put_doorbell_dqo(const struct gve_priv *priv,
 }

 /* Builds register value to write to DQO IRQ doorbell to enable with specified
- * ratelimit.
+ * ITR interval.
  */
-static inline u32 gve_set_itr_ratelimit_dqo(u32 ratelimit_us)
+static inline u32 gve_setup_itr_interval_dqo(u32 interval_us)
 {
 	u32 result = GVE_ITR_ENABLE_BIT_DQO;

 	/* Interval has 2us granularity. */
-	ratelimit_us >>= 1;
+	interval_us >>= 1;

-	ratelimit_us &= GVE_ITR_INTERVAL_DQO_MASK;
-	result |= (ratelimit_us << GVE_ITR_INTERVAL_DQO_SHIFT);
+	interval_us &= GVE_ITR_INTERVAL_DQO_MASK;
+	result |= (interval_us << GVE_ITR_INTERVAL_DQO_SHIFT);
 	return result;
 }

@@ -78,4 +79,15 @@ gve_write_irq_doorbell_dqo(const struct gve_priv *priv,
 	iowrite32(val, &priv->db_bar2[index]);
 }

+/* Sets interrupt throttling interval and enables interrupt
+ * by writing to IRQ doorbell.
+ */
+static inline void
+gve_set_itr_coalesce_usecs_dqo(struct gve_priv *priv,
+			       struct gve_notify_block *block,
+			       u32 usecs)
+{
+	gve_write_irq_doorbell_dqo(priv, block,
+				   gve_setup_itr_interval_dqo(usecs));
+}
 #endif /* _GVE_DQO_H_ */
diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
index e0815bb031e9..50b384910c83 100644
--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
+++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
@@ -8,6 +8,7 @@
 #include
 #include "gve.h"
 #include "gve_adminq.h"
+#include "gve_dqo.h"

 static void gve_get_drvinfo(struct net_device *netdev,
			    struct ethtool_drvinfo *info)
@@ -540,7 +541,65 @@ static int gve_get_link_ksettings(struct net_device *netdev,
 	return err;
 }

+static int gve_get_coalesce(struct net_device *netdev,
+			    struct ethtool_coalesce *ec,
+			    struct kernel_ethtool_coalesce *kernel_ec,
+			    struct netlink_ext_ack *extack)
+{
+	struct gve_priv *priv = netdev_priv(netdev);
+
+	if (gve_is_gqi(priv))
+		return -EOPNOTSUPP;
+	ec->tx_coalesce_usecs = priv->tx_coalesce_usecs;
+	ec->rx_coalesce_usecs = priv->rx_coalesce_usecs;
+
+	return 0;
+}
+
+static int gve_set_coalesce(struct net_device *netdev,
+			    struct ethtool_coalesce *ec,
+			    struct kernel_ethtool_coalesce *kernel_ec,
+			    struct netlink_ext_ack *extack)
+{
+	struct gve_priv *priv = netdev_priv(netdev);
+	u32 tx_usecs_orig = priv->tx_coalesce_usecs;
+	u32 rx_usecs_orig = priv->rx_coalesce_usecs;
+	int idx;
+
+	if (gve_is_gqi(priv))
+		return -EOPNOTSUPP;
+
+	if (ec->tx_coalesce_usecs > GVE_MAX_ITR_INTERVAL_DQO ||
+	    ec->rx_coalesce_usecs > GVE_MAX_ITR_INTERVAL_DQO)
+		return -EINVAL;
+	priv->tx_coalesce_usecs = ec->tx_coalesce_usecs;
+	priv->rx_coalesce_usecs = ec->rx_coalesce_usecs;
+
+	if (tx_usecs_orig != priv->tx_coalesce_usecs) {
+		for (idx = 0; idx < priv->tx_cfg.num_queues; idx++) {
+			int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx);
+			struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
+
+			gve_set_itr_coalesce_usecs_dqo(priv, block,
+						       priv->tx_coalesce_usecs);
+		}
+	}
+
+	if (rx_usecs_orig != priv->rx_coalesce_usecs) {
+		for (idx = 0; idx < priv->rx_cfg.num_queues; idx++) {
+			int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
+			struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
+
+			gve_set_itr_coalesce_usecs_dqo(priv, block,
+						       priv->rx_coalesce_usecs);
+		}
+	}
+
+	return 0;
+}
+
 const struct ethtool_ops gve_ethtool_ops = {
+	.supported_coalesce_params = ETHTOOL_COALESCE_USECS,
 	.get_drvinfo = gve_get_drvinfo,
 	.get_strings = gve_get_strings,
 	.get_sset_count = gve_get_sset_count,
@@ -550,6 +609,8 @@ const struct ethtool_ops gve_ethtool_ops = {
 	.set_channels = gve_set_channels,
 	.get_channels = gve_get_channels,
 	.get_link = ethtool_op_get_link,
+	.get_coalesce = gve_get_coalesce,
+	.set_coalesce = gve_set_coalesce,
 	.get_ringparam = gve_get_ringparam,
 	.reset = gve_user_reset,
 	.get_tunable = gve_get_tunable,
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index e5456187b3f2..f7f65c4bf993 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -1113,9 +1113,8 @@ static void gve_turnup(struct gve_priv *priv)
 		if (gve_is_gqi(priv)) {
			iowrite32be(0, gve_irq_doorbell(priv, block));
 		} else {
-			u32 val = gve_set_itr_ratelimit_dqo(GVE_TX_IRQ_RATELIMIT_US_DQO);
-
-			gve_write_irq_doorbell_dqo(priv, block, val);
+			gve_set_itr_coalesce_usecs_dqo(priv, block,
+						       priv->tx_coalesce_usecs);
 		}
 	}
 	for (idx = 0; idx < priv->rx_cfg.num_queues; idx++) {
@@ -1126,9 +1125,8 @@ static void gve_turnup(struct gve_priv *priv)
 		if (gve_is_gqi(priv)) {
			iowrite32be(0, gve_irq_doorbell(priv, block));
 		} else {
-			u32 val = gve_set_itr_ratelimit_dqo(GVE_RX_IRQ_RATELIMIT_US_DQO);
-
-			gve_write_irq_doorbell_dqo(priv, block, val);
+			gve_set_itr_coalesce_usecs_dqo(priv, block,
+						       priv->rx_coalesce_usecs);
 		}
 	}

@@ -1425,6 +1423,11 @@ static int gve_init_priv(struct gve_priv *priv, bool skip_describe_device)
 	dev_info(&priv->pdev->dev, "Max TX queues %d, Max RX queues %d\n",
		 priv->tx_cfg.max_queues, priv->rx_cfg.max_queues);

+	if (!gve_is_gqi(priv)) {
+		priv->tx_coalesce_usecs = GVE_TX_IRQ_RATELIMIT_US_DQO;
+		priv->rx_coalesce_usecs = GVE_RX_IRQ_RATELIMIT_US_DQO;
+	}
+
 setup_device:
 	err = gve_setup_device_resources(priv);
 	if (!err)
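Finally, a standalone sketch of the 2 us-granularity conversion that gve_setup_itr_interval_dqo() performs. The enable bit, mask and shift below are stand-in values (the real GVE_ITR_* constants are defined elsewhere in the driver and are not quoted in this series), so only the arithmetic should be read as authoritative.

#include <stdio.h>
#include <stdint.h>

/* Stand-ins for GVE_ITR_ENABLE_BIT_DQO / GVE_ITR_INTERVAL_DQO_MASK / _SHIFT. */
#define ITR_ENABLE_BIT (1u << 31)
#define ITR_MASK       0x3ffu
#define ITR_SHIFT      16

/* Mirrors gve_setup_itr_interval_dqo(): the interval has 2 us granularity. */
static uint32_t setup_itr_interval(uint32_t interval_us)
{
        uint32_t result = ITR_ENABLE_BIT;

        interval_us >>= 1;
        interval_us &= ITR_MASK;
        result |= interval_us << ITR_SHIFT;
        return result;
}

int main(void)
{
        /* e.g. "ethtool -C <dev> tx-usecs 50" ends up as an interval field of 25 */
        printf("doorbell word for 50 us: 0x%08x\n", setup_itr_interval(50));
        return 0;
}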