From patchwork Thu Oct 7 16:25:28 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12542309
X-Patchwork-Delegate: kuba@kernel.org
Date: Thu, 7 Oct 2021 09:25:28 -0700
Message-Id: <20211007162534.1502578-1-jeroendb@google.com>
Subject: [PATCH net-next 1/7] gve: Switch to use napi_complete_done
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Yangchun Fu,
    Catherine Sullivan, David Awogbemila
List-ID: netdev@vger.kernel.org

From: Yangchun Fu

Use napi_complete_done to allow for the use of gro_flush_timeout.
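The control-flow change this patch makes can be modeled in plain user-space C. This is only an illustrative sketch: `struct poll_model`, `napi_poll()` and the `busy_polling` flag are stand-ins for the kernel's NAPI machinery, not gve code. The key point is that `napi_complete_done()` takes the actual `work_done` count (enabling `gro_flush_timeout` heuristics) and returns false when busy polling owns the queue, in which case the driver must not unmask its IRQ.

```c
#include <assert.h>
#include <stdbool.h>

struct poll_model {
    bool irq_unmasked; /* stands in for writing GVE_IRQ_ACK to the doorbell */
    int reported;      /* value returned to the NAPI core */
};

/* Simplified model of the reworked gve_napi_poll():
 * - budget exhausted -> stay scheduled, report the full budget;
 * - otherwise "complete" with the real work_done count, and only
 *   unmask the IRQ when busy polling does not own the queue.
 */
static struct poll_model napi_poll(int work_done, int budget, bool busy_polling)
{
    struct poll_model m = { .irq_unmasked = false, .reported = 0 };

    if (work_done == budget) {   /* budget exhausted: reschedule */
        m.reported = budget;
        return m;
    }
    /* models napi_complete_done(napi, work_done) returning true/false */
    if (!busy_polling)
        m.irq_unmasked = true;   /* ack + unmask the doorbell */
    m.reported = work_done;
    return m;
}
```

With `napi_complete()` the driver could only report "done or not done"; threading `work_done` through, as above, is what lets the core apply its flush/busy-poll policies.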
Fixes: f5cedc84a30d2 ("gve: Add transmit and receive support")
Signed-off-by: Yangchun Fu
Signed-off-by: Catherine Sullivan
Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve.h      |  5 ++-
 drivers/net/ethernet/google/gve/gve_main.c | 38 +++++++++++++---------
 drivers/net/ethernet/google/gve/gve_rx.c   | 37 +++++++++++----------
 3 files changed, 43 insertions(+), 37 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 85bf825606e8..59c525800e5d 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -825,11 +825,10 @@ __be32 gve_tx_load_event_counter(struct gve_priv *priv,
                                 struct gve_tx_ring *tx);
 /* rx handling */
 void gve_rx_write_doorbell(struct gve_priv *priv, struct gve_rx_ring *rx);
-bool gve_rx_poll(struct gve_notify_block *block, int budget);
+int gve_rx_poll(struct gve_notify_block *block, int budget);
+bool gve_rx_work_pending(struct gve_rx_ring *rx);
 int gve_rx_alloc_rings(struct gve_priv *priv);
 void gve_rx_free_rings_gqi(struct gve_priv *priv);
-bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
-                       netdev_features_t feat);
 /* Reset */
 void gve_schedule_reset(struct gve_priv *priv);
 int gve_reset(struct gve_priv *priv, bool attempt_teardown);
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index cd9df68cc01e..388262c61b8d 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -181,34 +181,40 @@ static int gve_napi_poll(struct napi_struct *napi, int budget)
        __be32 __iomem *irq_doorbell;
        bool reschedule = false;
        struct gve_priv *priv;
+       int work_done = 0;

        block = container_of(napi, struct gve_notify_block, napi);
        priv = block->priv;

        if (block->tx)
                reschedule |= gve_tx_poll(block, budget);
-       if (block->rx)
-               reschedule |= gve_rx_poll(block, budget);
+       if (block->rx) {
+               work_done = gve_rx_poll(block, budget);
+               reschedule |= work_done == budget;
+       }

        if (reschedule)
                return budget;

-       napi_complete(napi);
-       irq_doorbell = gve_irq_doorbell(priv, block);
-       iowrite32be(GVE_IRQ_ACK | GVE_IRQ_EVENT, irq_doorbell);
+       /* Complete processing - don't unmask irq if busy polling is enabled */
+       if (likely(napi_complete_done(napi, work_done))) {
+               irq_doorbell = gve_irq_doorbell(priv, block);
+               iowrite32be(GVE_IRQ_ACK | GVE_IRQ_EVENT, irq_doorbell);

-       /* Double check we have no extra work.
-        * Ensure unmask synchronizes with checking for work.
-        */
-       mb();
-       if (block->tx)
-               reschedule |= gve_tx_poll(block, -1);
-       if (block->rx)
-               reschedule |= gve_rx_poll(block, -1);
-       if (reschedule && napi_reschedule(napi))
-               iowrite32be(GVE_IRQ_MASK, irq_doorbell);
+               /* Double check we have no extra work.
+                * Ensure unmask synchronizes with checking for work.
+                */
+               mb();

-       return 0;
+               if (block->tx)
+                       reschedule |= gve_tx_poll(block, -1);
+               if (block->rx)
+                       reschedule |= gve_rx_work_pending(block->rx);
+
+               if (reschedule && napi_reschedule(napi))
+                       iowrite32be(GVE_IRQ_MASK, irq_doorbell);
+       }
+       return work_done;
 }

 static int gve_napi_poll_dqo(struct napi_struct *napi, int budget)
diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index bb8261368250..bb9fc456416b 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -450,7 +450,7 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc,
        return true;
 }

-static bool gve_rx_work_pending(struct gve_rx_ring *rx)
+bool gve_rx_work_pending(struct gve_rx_ring *rx)
 {
        struct gve_rx_desc *desc;
        __be16 flags_seq;
@@ -518,8 +518,8 @@ static bool gve_rx_refill_buffers(struct gve_priv *priv, struct gve_rx_ring *rx)
        return true;
 }

-bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
-                      netdev_features_t feat)
+int gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
+                     netdev_features_t feat)
 {
        struct gve_priv *priv = rx->gve;
        u32 work_done = 0, packets = 0;
@@ -553,13 +553,15 @@ bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
        }

        if (!work_done && rx->fill_cnt - cnt > rx->db_threshold)
-               return false;
+               return 0;

-       u64_stats_update_begin(&rx->statss);
-       rx->rpackets += packets;
-       rx->rbytes += bytes;
-       u64_stats_update_end(&rx->statss);
-       rx->cnt = cnt;
+       if (work_done) {
+               u64_stats_update_begin(&rx->statss);
+               rx->rpackets += packets;
+               rx->rbytes += bytes;
+               u64_stats_update_end(&rx->statss);
+               rx->cnt = cnt;
+       }

        /* restock ring slots */
        if (!rx->data.raw_addressing) {
@@ -570,26 +572,26 @@ bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
                 * falls below a threshold.
                 */
                if (!gve_rx_refill_buffers(priv, rx))
-                       return false;
+                       return 0;

                /* If we were not able to completely refill buffers, we'll want
                 * to schedule this queue for work again to refill buffers.
                 */
                if (rx->fill_cnt - cnt <= rx->db_threshold) {
                        gve_rx_write_doorbell(priv, rx);
-                       return true;
+                       return budget;
                }
        }

        gve_rx_write_doorbell(priv, rx);
-       return gve_rx_work_pending(rx);
+       return work_done;
 }

-bool gve_rx_poll(struct gve_notify_block *block, int budget)
+int gve_rx_poll(struct gve_notify_block *block, int budget)
 {
        struct gve_rx_ring *rx = block->rx;
        netdev_features_t feat;
-       bool repoll = false;
+       int work_done = 0;

        feat = block->napi.dev->features;

@@ -598,8 +600,7 @@ bool gve_rx_poll(struct gve_notify_block *block, int budget)
                budget = INT_MAX;

        if (budget > 0)
-               repoll |= gve_clean_rx_done(rx, budget, feat);
-       else
-               repoll |= gve_rx_work_pending(rx);
-       return repoll;
+               work_done = gve_clean_rx_done(rx, budget, feat);
+
+       return work_done;
 }

From patchwork Thu Oct 7 16:25:29 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12542311
Date: Thu, 7 Oct 2021 09:25:29 -0700
In-Reply-To: <20211007162534.1502578-1-jeroendb@google.com>
Message-Id: <20211007162534.1502578-2-jeroendb@google.com>
References: <20211007162534.1502578-1-jeroendb@google.com>
Subject: [PATCH net-next 2/7] gve: Add rx buffer pagecnt bias
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Catherine Sullivan,
    Yanchun Fu, Nathan Lewis, David Awogbemila

From: Catherine Sullivan

Add a pagecnt bias field to the rx buffer info struct to eliminate the need
to increment the atomic page ref count on every pass through the rx hotpath.
Also prefetch two packet pages ahead.
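The pagecnt-bias trick in this patch can be sketched as a small user-space model (illustrative only; `struct page_model` and these helpers stand in for the kernel's `page_ref_*` API and the gve helpers, they are not the driver code). Instead of an atomic `get_page()` per received packet, the driver takes a large batch of page references once and pays them back with a cheap driver-local counter; a page is recyclable exactly when its refcount equals the remaining bias.

```c
#include <assert.h>
#include <limits.h>

struct page_model {
    int refcount;      /* models the atomic page refcount */
    int pagecnt_bias;  /* driver-local counter, no atomics needed */
};

/* Models gve_setup_rx_buffer(): the freshly allocated page has 1 ref,
 * so page_ref_add(page, INT_MAX - 1) brings it to INT_MAX == bias. */
static void setup_rx_buffer(struct page_model *p)
{
    p->refcount = 1;
    p->refcount += INT_MAX - 1;
    p->pagecnt_bias = INT_MAX;
}

/* Handing the buffer to an skb now only decrements the local bias
 * (models gve_dec_pagecnt_bias() replacing get_page()). */
static void dec_pagecnt_bias(struct page_model *p)
{
    p->pagecnt_bias--;
}

/* Models the network stack eventually freeing the skb's page ref. */
static void skb_free_ref(struct page_model *p)
{
    p->refcount--;
}

/* Models gve_rx_can_recycle_buffer(): 1 = reusable, 0 = still in use. */
static int can_recycle_buffer(const struct page_model *p)
{
    return p->refcount == p->pagecnt_bias;
}
```

The recycling test changes from `page_count(page) == 1` to `page_count(page) == pagecnt_bias`, which stays correct as the bias drains, while the hot path touches no atomic at all.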
Fixes: ede3fcf5ec67f ("gve: Add support for raw addressing to the rx path")
Signed-off-by: Yanchun Fu
Signed-off-by: Nathan Lewis
Signed-off-by: Catherine Sullivan
Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve_rx.c | 52 +++++++++++++++++-------
 1 file changed, 37 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index bb9fc456416b..ecf5a396290b 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -16,19 +16,23 @@ static void gve_rx_free_buffer(struct device *dev,
        dma_addr_t dma = (dma_addr_t)(be64_to_cpu(data_slot->addr) &
                                      GVE_DATA_SLOT_ADDR_PAGE_MASK);

+       page_ref_sub(page_info->page, page_info->pagecnt_bias - 1);
        gve_free_page(dev, page_info->page, dma, DMA_FROM_DEVICE);
 }

 static void gve_rx_unfill_pages(struct gve_priv *priv, struct gve_rx_ring *rx)
 {
-       if (rx->data.raw_addressing) {
-               u32 slots = rx->mask + 1;
-               int i;
+       u32 slots = rx->mask + 1;
+       int i;

+       if (rx->data.raw_addressing) {
                for (i = 0; i < slots; i++)
                        gve_rx_free_buffer(&priv->pdev->dev, &rx->data.page_info[i],
                                           &rx->data.data_ring[i]);
        } else {
+               for (i = 0; i < slots; i++)
+                       page_ref_sub(rx->data.page_info[i].page,
+                                    rx->data.page_info[i].pagecnt_bias - 1);
                gve_unassign_qpl(priv, rx->data.qpl->id);
                rx->data.qpl = NULL;
        }
@@ -69,6 +73,9 @@ static void gve_setup_rx_buffer(struct gve_rx_slot_page_info *page_info,
        page_info->page_offset = 0;
        page_info->page_address = page_address(page);
        *slot_addr = cpu_to_be64(addr);
+       /* The page already has 1 ref */
+       page_ref_add(page, INT_MAX - 1);
+       page_info->pagecnt_bias = INT_MAX;
 }

 static int gve_rx_alloc_buffer(struct gve_priv *priv, struct device *dev,
@@ -293,17 +300,18 @@ static bool gve_rx_can_flip_buffers(struct net_device *netdev)
                ? netdev->mtu + GVE_RX_PAD + ETH_HLEN <= PAGE_SIZE / 2 : false;
 }

-static int gve_rx_can_recycle_buffer(struct page *page)
+static int gve_rx_can_recycle_buffer(struct gve_rx_slot_page_info *page_info)
 {
-       int pagecount = page_count(page);
+       int pagecount = page_count(page_info->page);

        /* This page is not being used by any SKBs - reuse */
-       if (pagecount == 1)
+       if (pagecount == page_info->pagecnt_bias)
                return 1;
        /* This page is still being used by an SKB - we can't reuse */
-       else if (pagecount >= 2)
+       else if (pagecount > page_info->pagecnt_bias)
                return 0;
-       WARN(pagecount < 1, "Pagecount should never be < 1");
+       WARN(pagecount < page_info->pagecnt_bias,
+            "Pagecount should never be less than the bias.");
        return -1;
 }

@@ -319,11 +327,11 @@ gve_rx_raw_addressing(struct device *dev, struct net_device *netdev,
        if (!skb)
                return NULL;

-       /* Optimistically stop the kernel from freeing the page by increasing
-        * the page bias. We will check the refcount in refill to determine if
-        * we need to alloc a new page.
+       /* Optimistically stop the kernel from freeing the page.
+        * We will check again in refill to determine if we need to alloc a
+        * new page.
         */
-       get_page(page_info->page);
+       gve_dec_pagecnt_bias(page_info);

        return skb;
 }
@@ -346,7 +354,7 @@ gve_rx_qpl(struct device *dev, struct net_device *netdev,
                /* No point in recycling if we didn't get the skb */
                if (skb) {
                        /* Make sure that the page isn't freed. */
-                       get_page(page_info->page);
+                       gve_dec_pagecnt_bias(page_info);
                        gve_rx_flip_buff(page_info, &data_slot->qpl_offset);
                }
        } else {
@@ -370,8 +378,18 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc,
        union gve_rx_data_slot *data_slot;
        struct sk_buff *skb = NULL;
        dma_addr_t page_bus;
+       void *va;
        u16 len;

+       /* Prefetch two packet pages ahead, we will need it soon. */
+       page_info = &rx->data.page_info[(idx + 2) & rx->mask];
+       va = page_info->page_address + GVE_RX_PAD +
+               page_info->page_offset;
+
+       prefetch(page_info->page); /* Kernel page struct. */
+       prefetch(va);              /* Packet header. */
+       prefetch(va + 64);         /* Next cacheline too. */
+
        /* drop this packet */
        if (unlikely(rx_desc->flags_seq & GVE_RXF_ERR)) {
                u64_stats_update_begin(&rx->statss);
@@ -402,7 +420,7 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc,
                        int recycle = 0;

                        if (can_flip) {
-                               recycle = gve_rx_can_recycle_buffer(page_info->page);
+                               recycle = gve_rx_can_recycle_buffer(page_info);
                                if (recycle < 0) {
                                        if (!rx->data.raw_addressing)
                                                gve_schedule_reset(priv);
@@ -493,7 +511,7 @@ static bool gve_rx_refill_buffers(struct gve_priv *priv, struct gve_rx_ring *rx)
                         * owns half the page it is impossible to tell which half. Either
                         * the whole page is free or it needs to be replaced.
                         */
-                       int recycle = gve_rx_can_recycle_buffer(page_info->page);
+                       int recycle = gve_rx_can_recycle_buffer(page_info);

                        if (recycle < 0) {
                                if (!rx->data.raw_addressing)
@@ -540,6 +558,10 @@ int gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
                          "[%d] seqno=%d rx->desc.seqno=%d\n",
                          rx->q_num, GVE_SEQNO(desc->flags_seq),
                          rx->desc.seqno);
+
+               /* prefetch two descriptors ahead */
+               prefetch(rx->desc.desc_ring + ((cnt + 2) & rx->mask));
+
                dropped = !gve_rx(rx, desc, feat, idx);
                if (!dropped) {
                        bytes += be16_to_cpu(desc->len) - GVE_RX_PAD;

From patchwork Thu Oct 7 16:25:30 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12542313
Date: Thu, 7 Oct 2021 09:25:30 -0700
In-Reply-To: <20211007162534.1502578-1-jeroendb@google.com>
Message-Id: <20211007162534.1502578-3-jeroendb@google.com>
References: <20211007162534.1502578-1-jeroendb@google.com>
Subject: [PATCH net-next 3/7] gve: Do lazy cleanup in TX path
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Tao Liu, Catherine Sullivan

From: Tao Liu

When the TX queue is full, attempt to process enough TX completions
to avoid stalling the queue.

Fixes: f5cedc84a30d2 ("gve: Add transmit and receive support")
Signed-off-by: Tao Liu
Signed-off-by: Catherine Sullivan
---
 drivers/net/ethernet/google/gve/gve.h         |  9 +-
 drivers/net/ethernet/google/gve/gve_ethtool.c |  3 +-
 drivers/net/ethernet/google/gve/gve_main.c    |  6 +-
 drivers/net/ethernet/google/gve/gve_tx.c      | 94 +++++++++++--------
 4 files changed, 62 insertions(+), 50 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 59c525800e5d..003b30b91c6d 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -341,8 +341,8 @@ struct gve_tx_ring {
        union {
                /* GQI fields */
                struct {
-                       /* NIC tail pointer */
-                       __be32 last_nic_done;
+                       /* Spinlock for when cleanup in progress */
+                       spinlock_t clean_lock;
                };

                /* DQO fields.
                 */
@@ -821,8 +821,9 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev);
 bool gve_tx_poll(struct gve_notify_block *block, int budget);
 int gve_tx_alloc_rings(struct gve_priv *priv);
 void gve_tx_free_rings_gqi(struct gve_priv *priv);
-__be32 gve_tx_load_event_counter(struct gve_priv *priv,
-                                struct gve_tx_ring *tx);
+u32 gve_tx_load_event_counter(struct gve_priv *priv,
+                             struct gve_tx_ring *tx);
+bool gve_tx_clean_pending(struct gve_priv *priv, struct gve_tx_ring *tx);
 /* rx handling */
 void gve_rx_write_doorbell(struct gve_priv *priv, struct gve_rx_ring *rx);
 int gve_rx_poll(struct gve_notify_block *block, int budget);
diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
index 716e6240305d..618a3e1d858e 100644
--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
+++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
@@ -330,8 +330,7 @@ gve_get_ethtool_stats(struct net_device *netdev,
                        data[i++] = tmp_tx_bytes;
                        data[i++] = tx->wake_queue;
                        data[i++] = tx->stop_queue;
-                       data[i++] = be32_to_cpu(gve_tx_load_event_counter(priv,
-                                                                         tx));
+                       data[i++] = gve_tx_load_event_counter(priv, tx);
                        data[i++] = tx->dma_mapping_error;
                        /* stats from NIC */
                        if (skip_nic_stats) {
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 388262c61b8d..74e35a87ec38 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -201,13 +201,13 @@ static int gve_napi_poll(struct napi_struct *napi, int budget)
                irq_doorbell = gve_irq_doorbell(priv, block);
                iowrite32be(GVE_IRQ_ACK | GVE_IRQ_EVENT, irq_doorbell);

-               /* Double check we have no extra work.
-                * Ensure unmask synchronizes with checking for work.
+               /* Ensure IRQ ACK is visible before we check pending work.
+                * If queue had issued updates, it would be truly visible.
                 */
                mb();

                if (block->tx)
-                       reschedule |= gve_tx_poll(block, -1);
+                       reschedule |= gve_tx_clean_pending(priv, block->tx);
                if (block->rx)
                        reschedule |= gve_rx_work_pending(block->rx);

diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index 9922ce46a635..a9cb241fedf4 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -144,7 +144,7 @@ static void gve_tx_free_ring(struct gve_priv *priv, int idx)
        gve_tx_remove_from_block(priv, idx);
        slots = tx->mask + 1;
-       gve_clean_tx_done(priv, tx, tx->req, false);
+       gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
        netdev_tx_reset_queue(tx->netdev_txq);
        dma_free_coherent(hdev, sizeof(*tx->q_resources),
@@ -176,6 +176,7 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
        /* Make sure everything is zeroed to start */
        memset(tx, 0, sizeof(*tx));
+       spin_lock_init(&tx->clean_lock);
        tx->q_num = idx;
        tx->mask = slots - 1;
@@ -328,10 +329,16 @@ static inline bool gve_can_tx(struct gve_tx_ring *tx, int bytes_required)
        return (gve_tx_avail(tx) >= MAX_TX_DESC_NEEDED && can_alloc);
 }

+static_assert(NAPI_POLL_WEIGHT >= MAX_TX_DESC_NEEDED);
+
 /* Stops the queue if the skb cannot be transmitted. */
-static int gve_maybe_stop_tx(struct gve_tx_ring *tx, struct sk_buff *skb)
+static int gve_maybe_stop_tx(struct gve_priv *priv, struct gve_tx_ring *tx,
+                            struct sk_buff *skb)
 {
        int bytes_required = 0;
+       u32 nic_done;
+       u32 to_do;
+       int ret;

        if (!tx->raw_addressing)
                bytes_required = gve_skb_fifo_bytes_required(tx, skb);
@@ -339,29 +346,28 @@ static int gve_maybe_stop_tx(struct gve_tx_ring *tx, struct sk_buff *skb)
        if (likely(gve_can_tx(tx, bytes_required)))
                return 0;

-       /* No space, so stop the queue */
-       tx->stop_queue++;
-       netif_tx_stop_queue(tx->netdev_txq);
-       smp_mb(); /* sync with restarting queue in gve_clean_tx_done() */
-
-       /* Now check for resources again, in case gve_clean_tx_done() freed
-        * resources after we checked and we stopped the queue after
-        * gve_clean_tx_done() checked.
-        *
-        * gve_maybe_stop_tx()            gve_clean_tx_done()
-        *   nsegs/can_alloc test failed
-        *                                  gve_tx_free_fifo()
-        *                                  if (tx queue stopped)
-        *                                    netif_tx_queue_wake()
-        *   netif_tx_stop_queue()
-        *   Need to check again for space here!
-        */
-       if (likely(!gve_can_tx(tx, bytes_required)))
-               return -EBUSY;
+       ret = -EBUSY;
+       spin_lock(&tx->clean_lock);
+       nic_done = gve_tx_load_event_counter(priv, tx);
+       to_do = nic_done - tx->done;

-       netif_tx_start_queue(tx->netdev_txq);
-       tx->wake_queue++;
-       return 0;
+       /* Only try to clean if there is hope for TX */
+       if (to_do + gve_tx_avail(tx) >= MAX_TX_DESC_NEEDED) {
+               if (to_do > 0) {
+                       to_do = min_t(u32, to_do, NAPI_POLL_WEIGHT);
+                       gve_clean_tx_done(priv, tx, to_do, false);
+               }
+               if (likely(gve_can_tx(tx, bytes_required)))
+                       ret = 0;
+       }
+       if (ret) {
+               /* No space, so stop the queue */
+               tx->stop_queue++;
+               netif_tx_stop_queue(tx->netdev_txq);
+       }
+       spin_unlock(&tx->clean_lock);
+
+       return ret;
 }

 static void gve_tx_fill_pkt_desc(union gve_tx_desc *pkt_desc,
@@ -576,7 +582,7 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev)
        WARN(skb_get_queue_mapping(skb) >= priv->tx_cfg.num_queues,
             "skb queue index out of range");
        tx = &priv->tx[skb_get_queue_mapping(skb)];
-       if (unlikely(gve_maybe_stop_tx(tx, skb))) {
+       if (unlikely(gve_maybe_stop_tx(priv, tx, skb))) {
                /* We need to ring the txq doorbell -- we have stopped the Tx
                 * queue for want of resources, but prior calls to gve_tx()
                 * may have added descriptors without ringing the doorbell.
@@ -672,19 +678,19 @@ static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
        return pkts;
 }

-__be32 gve_tx_load_event_counter(struct gve_priv *priv,
-                                struct gve_tx_ring *tx)
+u32 gve_tx_load_event_counter(struct gve_priv *priv,
+                             struct gve_tx_ring *tx)
 {
-       u32 counter_index = be32_to_cpu((tx->q_resources->counter_index));
+       u32 counter_index = be32_to_cpu(tx->q_resources->counter_index);
+       __be32 counter = READ_ONCE(priv->counter_array[counter_index]);

-       return READ_ONCE(priv->counter_array[counter_index]);
+       return be32_to_cpu(counter);
 }

 bool gve_tx_poll(struct gve_notify_block *block, int budget)
 {
        struct gve_priv *priv = block->priv;
        struct gve_tx_ring *tx = block->tx;
-       bool repoll = false;
        u32 nic_done;
        u32 to_do;

@@ -692,17 +698,23 @@ bool gve_tx_poll(struct gve_notify_block *block, int budget)
        if (budget == 0)
                budget = INT_MAX;

+       /* In TX path, it may try to clean completed pkts in order to xmit,
+        * to avoid cleaning conflict, use spin_lock(), it yields better
+        * concurrency between xmit/clean than netif's lock.
+        */
+       spin_lock(&tx->clean_lock);
        /* Find out how much work there is to be done */
-       tx->last_nic_done = gve_tx_load_event_counter(priv, tx);
-       nic_done = be32_to_cpu(tx->last_nic_done);
-       if (budget > 0) {
-               /* Do as much work as we have that the budget will
-                * allow
-                */
-               to_do = min_t(u32, (nic_done - tx->done), budget);
-               gve_clean_tx_done(priv, tx, to_do, true);
-       }
+       nic_done = gve_tx_load_event_counter(priv, tx);
+       to_do = min_t(u32, (nic_done - tx->done), budget);
+       gve_clean_tx_done(priv, tx, to_do, true);
+       spin_unlock(&tx->clean_lock);
        /* If we still have work we want to repoll */
-       repoll |= (nic_done != tx->done);
-       return repoll;
+       return nic_done != tx->done;
+}
+
+bool gve_tx_clean_pending(struct gve_priv *priv, struct gve_tx_ring *tx)
+{
+       u32 nic_done = gve_tx_load_event_counter(priv, tx);
+
+       return nic_done != tx->done;
 }

From patchwork Thu Oct 7 16:25:31 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12542315
lindbergh.monkeyblade.net (Postfix) with ESMTPS id 78D90C061768 for ; Thu, 7 Oct 2021 09:25:44 -0700 (PDT) Received: by mail-pl1-x64a.google.com with SMTP id w4-20020a1709029a8400b00138e222b06aso3462387plp.12 for ; Thu, 07 Oct 2021 09:25:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=zHxeeZD1jZwO/Usfb2NsWkAnbNZnwg7n8Dn9pOEtrEk=; b=lxM1eWtkvZZt6Mqczx7qeWa1aKJY2GpgnCFsyxJ0nycAY0ZlWcvjUu6LJzXvfDjRc0 +GBMxCbmD01f0Nz5gkQ2u2QXRN3vuThx2ijGFBc1Qspx6nzmm26gJY+TeNSKncvvKTd6 ID75ZHX68Lnj3Sl23Yt2RrdyNv13UsEYxRJYrjW05cIINAUPcDk/PZX+irXs9LEAv3Dn 4zqFTN6WzlZ9FLdYvPottm4Y7Hy7K9JKms83e00jlTasOPUkv2srzmHdfjxwJTYWmzls CfOT8bcASDqN6prpsjVbT8qLJcgwZ3RlMhlxS0vDZKD/BvM0vEtYhpiDoCvZRUz/BmTt Gd6Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=zHxeeZD1jZwO/Usfb2NsWkAnbNZnwg7n8Dn9pOEtrEk=; b=u/u3YsH92YpirnXgDdTpHtNhbnA8Eydqv7podraM3exmQRuVgvTs/1DmlrMgf3F4rq t4JuzHySTDubOQNB+IMhziLZV+tZP/Uxv5u+N8ZVhVqcZ4+JBIhrfi8B7pzH9axzGwHP xjtTQTf8kO9pFb6PRMWLy6jI8H8S5wvBBgCw9FsWMgMbOoxyPC4UTl81SSRJl4wSrfsb OkTFZdJUZIm5DVK+CIgxZyrlyOiiORFrAtJfvt1pnDaMpUcK26+QLBDkUnwUJrWsuPP+ HGj26DHps6Ee+PAZThllewnQd7ljen1ONdA2dTdlXlwzsLLCRxJwXdRg3iDbA3GMwzFu UYug== X-Gm-Message-State: AOAM530zYGHXchBCbZNshDbBxmyQK/6hJ0x31ms0jCA1gKoveivvKdt7 FWFREuofw/VakETnDukADt1z+NLZOKJlANHmLI9lIsQJ22Qkm1teca4AL4LWN173MgEx9zQ3u8p GK6NN1LwdnboUZlexmz5LaH8P//+Zu02Kvik18IxrRpGRxVh05htKmeM+4BmV/gMJZgE= X-Google-Smtp-Source: ABdhPJxQqIEOxPfYTHSkpp3Mk4beyZrLznIRiSKrykAYD4sVCacDp69hifydVBECan3gdZ33+yspeWXMiEUqVg== X-Received: from jeroendb.sea.corp.google.com ([2620:15c:100:202:fe55:7411:11ac:c2a7]) (user=jeroendb job=sendgmr) by 2002:a17:902:a414:b0:13e:45cd:1939 with SMTP id p20-20020a170902a41400b0013e45cd1939mr4924319plq.54.1633623943851; Thu, 07 Oct 2021 09:25:43 -0700 (PDT) 
Date: Thu, 7 Oct 2021 09:25:31 -0700
In-Reply-To: <20211007162534.1502578-1-jeroendb@google.com>
Message-Id: <20211007162534.1502578-4-jeroendb@google.com>
References: <20211007162534.1502578-1-jeroendb@google.com>
Subject: [PATCH net-next 4/7] gve: Recover from queue stall due to missed IRQ
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, John Fraker, David Awogbemila

From: John Fraker

Don't always reset the driver on a TX timeout. Attempt to recover by
kicking the queue in case an IRQ was missed.

Fixes: 9e5f7d26a4c08 ("gve: Add workqueue and reset support")
Signed-off-by: John Fraker
Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve.h        |  4 +-
 drivers/net/ethernet/google/gve/gve_adminq.h |  1 +
 drivers/net/ethernet/google/gve/gve_main.c   | 48 +++++++++++++++++++-
 3 files changed, 51 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 003b30b91c6d..b8d46adb9c1a 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -30,7 +30,7 @@
 #define GVE_MIN_MSIX 3

 /* Numbers of gve tx/rx stats in stats report. */
-#define GVE_TX_STATS_REPORT_NUM	5
+#define GVE_TX_STATS_REPORT_NUM	6
 #define GVE_RX_STATS_REPORT_NUM	2

 /* Interval to schedule a stats report update, 20000ms. */
@@ -413,7 +413,9 @@ struct gve_tx_ring {
 	u32 q_num ____cacheline_aligned; /* queue idx */
 	u32 stop_queue; /* count of queue stops */
 	u32 wake_queue; /* count of queue wakes */
+	u32 queue_timeout; /* count of queue timeouts */
 	u32 ntfy_id; /* notification block index */
+	u32 last_kick_msec; /* Last time the queue was kicked */
 	dma_addr_t bus; /* dma address of the descr ring */
 	dma_addr_t q_resources_bus; /* dma address of the queue resources */
 	dma_addr_t complq_bus_dqo; /* dma address of the dqo.compl_ring */
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h
index 47c3d8f313fc..3953f6f7a427 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.h
+++ b/drivers/net/ethernet/google/gve/gve_adminq.h
@@ -270,6 +270,7 @@ enum gve_stat_names {
 	TX_LAST_COMPLETION_PROCESSED	= 5,
 	RX_NEXT_EXPECTED_SEQUENCE	= 6,
 	RX_BUFFERS_POSTED		= 7,
+	TX_TIMEOUT_CNT			= 8,
 	// stats from NIC
 	RX_QUEUE_DROP_CNT		= 65,
 	RX_NO_BUFFERS_POSTED		= 66,
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 74e35a87ec38..d969040deab6 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -24,6 +24,9 @@
 #define GVE_VERSION		"1.0.0"
 #define GVE_VERSION_PREFIX	"GVE-"

+// Minimum amount of time between queue kicks in msec (10 seconds)
+#define MIN_TX_TIMEOUT_GAP (1000 * 10)
+
 const char gve_version_str[] = GVE_VERSION;
 static const char gve_version_prefix[] = GVE_VERSION_PREFIX;

@@ -1109,9 +1112,47 @@ static void gve_turnup(struct gve_priv *priv)

 static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
 {
-	struct gve_priv *priv = netdev_priv(dev);
+	struct gve_notify_block *block;
+	struct gve_tx_ring *tx = NULL;
+	struct gve_priv *priv;
+	u32 last_nic_done;
+	u32 current_time;
+	u32 ntfy_idx;
+
+	netdev_info(dev, "Timeout on tx queue, %d", txqueue);
+	priv = netdev_priv(dev);
+	if (txqueue > priv->tx_cfg.num_queues)
+		goto reset;
+
+	ntfy_idx = gve_tx_idx_to_ntfy(priv, txqueue);
+	if (ntfy_idx > priv->num_ntfy_blks)
+		goto reset;
+
+	block = &priv->ntfy_blocks[ntfy_idx];
+	tx = block->tx;
+	current_time = jiffies_to_msecs(jiffies);
+	if (tx->last_kick_msec + MIN_TX_TIMEOUT_GAP > current_time)
+		goto reset;
+
+	/* Check to see if there are missed completions, which will allow us to
+	 * kick the queue.
+	 */
+	last_nic_done = gve_tx_load_event_counter(priv, tx);
+	if (last_nic_done - tx->done) {
+		netdev_info(dev, "Kicking queue %d", txqueue);
+		iowrite32be(GVE_IRQ_MASK, gve_irq_doorbell(priv, block));
+		napi_schedule(&block->napi);
+		tx->last_kick_msec = current_time;
+		goto out;
+	} // Else reset.
+
+reset:
 	gve_schedule_reset(priv);
+
+out:
+	if (tx)
+		tx->queue_timeout++;
 	priv->tx_timeo_cnt++;
 }

@@ -1239,6 +1280,11 @@ void gve_handle_report_stats(struct gve_priv *priv)
 				.value = cpu_to_be64(last_completion),
 				.queue_id = cpu_to_be32(idx),
 			};
+			stats[stats_idx++] = (struct stats) {
+				.stat_name = cpu_to_be32(TX_TIMEOUT_CNT),
+				.value = cpu_to_be64(priv->tx[idx].queue_timeout),
+				.queue_id = cpu_to_be32(idx),
+			};
 		}
 	}
 	/* rx stats */

From patchwork Thu Oct 7 16:25:32 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12542317
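The kick-or-reset decision in gve_tx_timeout() above condenses to a small policy: within the rate-limit window, or when the device reports no unprocessed completions, reset; otherwise re-arm the IRQ and kick the queue. The sketch below is a user-space rendering under stated assumptions — `tx_timeout_action` and its enum are illustrative, only `MIN_TX_TIMEOUT_GAP` comes from the patch, and the kernel side effects (iowrite32be, napi_schedule) are elided:

```c
#include <assert.h>
#include <stdint.h>

#define MIN_TX_TIMEOUT_GAP (1000 * 10) /* msec between kicks, from the patch */

enum timeout_action { ACTION_KICK_QUEUE, ACTION_RESET };

/* Illustrative policy function: the caller supplies the current time in
 * msec, the last time this queue was kicked, and the two completion
 * counters (device-side nic_done, host-side tx_done). */
static enum timeout_action tx_timeout_action(uint32_t now_msec,
					     uint32_t last_kick_msec,
					     uint32_t nic_done,
					     uint32_t tx_done)
{
	/* Rate-limit kicks: inside the gap, fall back to a full reset. */
	if (last_kick_msec + MIN_TX_TIMEOUT_GAP > now_msec)
		return ACTION_RESET;

	/* Missed completions mean a kick (re-arm IRQ + napi_schedule) may
	 * recover the queue without a device reset. */
	if (nic_done - tx_done)
		return ACTION_KICK_QUEUE;

	return ACTION_RESET;
}
```

A kick is attempted at most once per `MIN_TX_TIMEOUT_GAP`, so a queue that keeps timing out still converges to a reset.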
Date: Thu, 7 Oct 2021 09:25:32 -0700
In-Reply-To: <20211007162534.1502578-1-jeroendb@google.com>
Message-Id: <20211007162534.1502578-5-jeroendb@google.com>
References: <20211007162534.1502578-1-jeroendb@google.com>
Subject: [PATCH net-next 5/7] gve: Add netif_set_xps_queue call
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Catherine Sullivan, David Awogbemila

From: Catherine Sullivan

Configure XPS when adding tx queues to the notification blocks.

Fixes: dbdaa67540512 ("gve: Move some static functions to a common file")
Signed-off-by: Catherine Sullivan
Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve_utils.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/ethernet/google/gve/gve_utils.c b/drivers/net/ethernet/google/gve/gve_utils.c
index 93f3dcbeeea9..45ff7a9ab5f9 100644
--- a/drivers/net/ethernet/google/gve/gve_utils.c
+++ b/drivers/net/ethernet/google/gve/gve_utils.c
@@ -18,12 +18,16 @@ void gve_tx_remove_from_block(struct gve_priv *priv, int queue_idx)

 void gve_tx_add_to_block(struct gve_priv *priv, int queue_idx)
 {
+	unsigned int active_cpus = min_t(int, priv->num_ntfy_blks / 2,
+					 num_online_cpus());
 	int ntfy_idx = gve_tx_idx_to_ntfy(priv, queue_idx);
 	struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
 	struct gve_tx_ring *tx = &priv->tx[queue_idx];

 	block->tx = tx;
 	tx->ntfy_id = ntfy_idx;
+	netif_set_xps_queue(priv->dev, get_cpu_mask(ntfy_idx % active_cpus),
+			    queue_idx);
 }

 void gve_rx_remove_from_block(struct gve_priv *priv, int queue_idx)

From patchwork Thu Oct 7 16:25:33 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12542319
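The XPS mapping installed by gve_tx_add_to_block() above spreads TX queues round-robin over the CPUs that can service TX notification blocks (half of the blocks are TX in the driver's layout). A standalone sketch of just that index arithmetic, with illustrative names and assuming at least one active CPU:

```c
#include <assert.h>

static int min_int(int a, int b)
{
	return a < b ? a : b;
}

/* Returns the CPU whose mask would be handed to netif_set_xps_queue() for
 * the given notification-block index.  Assumes active_cpus >= 1. */
static int xps_cpu_for_ntfy(int ntfy_idx, int num_ntfy_blks, int online_cpus)
{
	/* Half the notification blocks serve TX, so spread over at most
	 * num_ntfy_blks / 2 CPUs, capped by how many are online. */
	int active_cpus = min_int(num_ntfy_blks / 2, online_cpus);

	return ntfy_idx % active_cpus;
}
```

With 32 notification blocks and 8 online CPUs, queues cycle over CPUs 0-7; with only 8 blocks on a 64-CPU machine, over CPUs 0-3.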
Date: Thu, 7 Oct 2021 09:25:33 -0700
In-Reply-To: <20211007162534.1502578-1-jeroendb@google.com>
Message-Id: <20211007162534.1502578-6-jeroendb@google.com>
References: <20211007162534.1502578-1-jeroendb@google.com>
Subject: [PATCH net-next 6/7] gve: Allow pageflips on larger pages
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Jordan Kim, Jeroen de Borst

From: Jordan Kim

Half pages are only used for packets that are small enough. This change
allows that to also apply on systems with pages larger than 4 KB.
Fixes: 02b0e0c18ba75 ("gve: Rx Buffer Recycling")
Signed-off-by: Jordan Kim
Signed-off-by: Jeroen de Borst
---
 drivers/net/ethernet/google/gve/gve_rx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index ecf5a396290b..c6e95e1409a9 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -296,7 +296,7 @@ static void gve_rx_flip_buff(struct gve_rx_slot_page_info *page_info, __be64 *slot_addr)

 static bool gve_rx_can_flip_buffers(struct net_device *netdev)
 {
-	return PAGE_SIZE == 4096
+	return PAGE_SIZE >= 4096
 		? netdev->mtu + GVE_RX_PAD + ETH_HLEN <= PAGE_SIZE / 2 : false;
 }

From patchwork Thu Oct 7 16:25:34 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12542321
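The predicate changed by the patch above is easy to exercise in isolation. In this sketch `page_size` is a parameter so both page sizes can be compared; GVE_RX_PAD and ETH_HLEN are taken to be 2 and 14 as in the driver and kernel headers (an assumption worth checking against the exact tree):

```c
#include <assert.h>
#include <stdbool.h>

#define ETH_HLEN   14 /* Ethernet header length, from <linux/if_ether.h> */
#define GVE_RX_PAD  2 /* RX padding, from the gve driver headers */

/* New form of gve_rx_can_flip_buffers(): any page of at least 4 KB may be
 * split in half, provided a padded frame fits in half a page. */
static bool can_flip_buffers(unsigned long page_size, unsigned long mtu)
{
	return page_size >= 4096
		? mtu + GVE_RX_PAD + ETH_HLEN <= page_size / 2 : false;
}
```

On a 64 KB-page system with a 1500-byte MTU, the old `PAGE_SIZE == 4096` check always returned false, so buffers were never flipped; the `>=` form enables recycling there while still refusing when half a page cannot hold the frame.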
Date: Thu, 7 Oct 2021 09:25:34 -0700
In-Reply-To: <20211007162534.1502578-1-jeroendb@google.com>
Message-Id: <20211007162534.1502578-7-jeroendb@google.com>
References: <20211007162534.1502578-1-jeroendb@google.com>
Subject: [PATCH net-next 7/7] gve: Track RX buffer allocation failures
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Catherine Sullivan, Jeroen de Borst

From: Catherine Sullivan

The rx_buf_alloc_fail counter wasn't getting updated.

Fixes: 433e274b8f7b0 ("gve: Add stats for gve.")
Signed-off-by: Catherine Sullivan
Signed-off-by: Jeroen de Borst
---
 drivers/net/ethernet/google/gve/gve_rx.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index c6e95e1409a9..69f6db9ffdfc 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -526,8 +526,13 @@ static bool gve_rx_refill_buffers(struct gve_priv *priv, struct gve_rx_ring *rx)
 				gve_rx_free_buffer(dev, page_info, data_slot);
 				page_info->page = NULL;
-				if (gve_rx_alloc_buffer(priv, dev, page_info, data_slot))
+				if (gve_rx_alloc_buffer(priv, dev, page_info,
+							data_slot)) {
+					u64_stats_update_begin(&rx->statss);
+					rx->rx_buf_alloc_fail++;
+					u64_stats_update_end(&rx->statss);
 					break;
+				}
 			}
 		}
 		fill_cnt++;
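The control flow added above — count the failure, then break out of the refill loop — can be modeled in user space. The names below are illustrative, and the kernel's u64_stats_update_begin/end seqcount protection is reduced to a plain counter; `alloc_ok` stands in for gve_rx_alloc_buffer() succeeding or failing per slot:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct rx_ring_stats {
	uint64_t rx_buf_alloc_fail;
};

/* Returns how many slots were refilled; on the first allocation failure the
 * counter is bumped (previously it was never updated) and the loop stops
 * early so the remaining slots can be retried on a later pass. */
static unsigned int refill_buffers(struct rx_ring_stats *stats,
				   const bool *alloc_ok, unsigned int slots)
{
	unsigned int fill_cnt = 0;

	for (unsigned int i = 0; i < slots; i++) {
		if (!alloc_ok[i]) {
			stats->rx_buf_alloc_fail++;
			break;
		}
		fill_cnt++;
	}
	return fill_cnt;
}

/* Convenience wrapper for demonstration: failure count after one pass. */
static uint64_t failures_after_refill(const bool *alloc_ok, unsigned int slots)
{
	struct rx_ring_stats stats = { 0 };

	refill_buffers(&stats, alloc_ok, slots);
	return stats.rx_buf_alloc_fail;
}
```

The early break mirrors the patch: once an allocation fails there is little point posting further slots, and the per-ring counter now makes the failure visible in the stats report.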