From patchwork Mon Oct 11 15:36:44 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12550351
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 11 Oct 2021 08:36:44 -0700
In-Reply-To: <20211011153650.1982904-1-jeroendb@google.com>
Message-Id: <20211011153650.1982904-2-jeroendb@google.com>
Subject: [PATCH net-next v2 1/7] gve: Switch to use napi_complete_done
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Yangchun Fu, Catherine Sullivan, David Awogbemila

From: Yangchun Fu

Use napi_complete_done to allow for the use of gro_flush_timeout.
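[Editorial sketch, not part of the patch.] The poll-handler contract this patch adopts can be modeled in plain userspace C: a handler reports how much work it actually did, and only re-arms interrupts (the `napi_complete_done()` + doorbell-unmask step) when it finished under budget. The `sim_*` names below are invented for illustration; this is a model of the semantics, not driver code.

```c
#include <assert.h>
#include <stdbool.h>

struct sim_queue {
	int pending;     /* packets waiting in the ring */
	bool irq_masked; /* interrupts stay masked while polling continues */
};

/* Mirrors gve_napi_poll()'s new "return work_done" shape: returning the
 * full budget means "poll me again"; anything less means completion ran
 * and the IRQ was unmasked. */
static int sim_napi_poll(struct sim_queue *q, int budget)
{
	int work_done = q->pending < budget ? q->pending : budget;

	q->pending -= work_done;
	if (work_done == budget)
		return budget;     /* stay in polling mode, IRQ still masked */
	q->irq_masked = false;     /* napi_complete_done() + doorbell unmask */
	return work_done;
}
```

Returning the exact work done (rather than a bare reschedule flag) is what lets the core honor `gro_flush_timeout` and busy polling, since it can now tell a fully-consumed budget apart from a partially-used one.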
Fixes: f5cedc84a30d2 ("gve: Add transmit and receive support") Signed-off-by: Yangchun Fu Signed-off-by: Catherine Sullivan Signed-off-by: David Awogbemila --- drivers/net/ethernet/google/gve/gve.h | 5 ++- drivers/net/ethernet/google/gve/gve_main.c | 38 +++++++++++++--------- drivers/net/ethernet/google/gve/gve_rx.c | 37 +++++++++++---------- 3 files changed, 43 insertions(+), 37 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index 2f93ed470590..4abd53bdde73 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -825,11 +825,10 @@ __be32 gve_tx_load_event_counter(struct gve_priv *priv, struct gve_tx_ring *tx); /* rx handling */ void gve_rx_write_doorbell(struct gve_priv *priv, struct gve_rx_ring *rx); -bool gve_rx_poll(struct gve_notify_block *block, int budget); +int gve_rx_poll(struct gve_notify_block *block, int budget); +bool gve_rx_work_pending(struct gve_rx_ring *rx); int gve_rx_alloc_rings(struct gve_priv *priv); void gve_rx_free_rings_gqi(struct gve_priv *priv); -bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget, - netdev_features_t feat); /* Reset */ void gve_schedule_reset(struct gve_priv *priv); int gve_reset(struct gve_priv *priv, bool attempt_teardown); diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 5b5dcaaeed7f..b41679ab0dbe 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -192,34 +192,40 @@ static int gve_napi_poll(struct napi_struct *napi, int budget) __be32 __iomem *irq_doorbell; bool reschedule = false; struct gve_priv *priv; + int work_done = 0; block = container_of(napi, struct gve_notify_block, napi); priv = block->priv; if (block->tx) reschedule |= gve_tx_poll(block, budget); - if (block->rx) - reschedule |= gve_rx_poll(block, budget); + if (block->rx) { + work_done = gve_rx_poll(block, budget); + reschedule |= work_done 
== budget; + } if (reschedule) return budget; - napi_complete(napi); - irq_doorbell = gve_irq_doorbell(priv, block); - iowrite32be(GVE_IRQ_ACK | GVE_IRQ_EVENT, irq_doorbell); + /* Complete processing - don't unmask irq if busy polling is enabled */ + if (likely(napi_complete_done(napi, work_done))) { + irq_doorbell = gve_irq_doorbell(priv, block); + iowrite32be(GVE_IRQ_ACK | GVE_IRQ_EVENT, irq_doorbell); - /* Double check we have no extra work. - * Ensure unmask synchronizes with checking for work. - */ - mb(); - if (block->tx) - reschedule |= gve_tx_poll(block, -1); - if (block->rx) - reschedule |= gve_rx_poll(block, -1); - if (reschedule && napi_reschedule(napi)) - iowrite32be(GVE_IRQ_MASK, irq_doorbell); + /* Double check we have no extra work. + * Ensure unmask synchronizes with checking for work. + */ + mb(); - return 0; + if (block->tx) + reschedule |= gve_tx_poll(block, -1); + if (block->rx) + reschedule |= gve_rx_work_pending(block->rx); + + if (reschedule && napi_reschedule(napi)) + iowrite32be(GVE_IRQ_MASK, irq_doorbell); + } + return work_done; } static int gve_napi_poll_dqo(struct napi_struct *napi, int budget) diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c index 94941d4e4744..3347879a4a5d 100644 --- a/drivers/net/ethernet/google/gve/gve_rx.c +++ b/drivers/net/ethernet/google/gve/gve_rx.c @@ -456,7 +456,7 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, return true; } -static bool gve_rx_work_pending(struct gve_rx_ring *rx) +bool gve_rx_work_pending(struct gve_rx_ring *rx) { struct gve_rx_desc *desc; __be16 flags_seq; @@ -524,8 +524,8 @@ static bool gve_rx_refill_buffers(struct gve_priv *priv, struct gve_rx_ring *rx) return true; } -bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget, - netdev_features_t feat) +static int gve_clean_rx_done(struct gve_rx_ring *rx, int budget, + netdev_features_t feat) { struct gve_priv *priv = rx->gve; u32 work_done = 0, packets = 0; @@ 
-559,13 +559,15 @@ bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget, } if (!work_done && rx->fill_cnt - cnt > rx->db_threshold) - return false; + return 0; - u64_stats_update_begin(&rx->statss); - rx->rpackets += packets; - rx->rbytes += bytes; - u64_stats_update_end(&rx->statss); - rx->cnt = cnt; + if (work_done) { + u64_stats_update_begin(&rx->statss); + rx->rpackets += packets; + rx->rbytes += bytes; + u64_stats_update_end(&rx->statss); + rx->cnt = cnt; + } /* restock ring slots */ if (!rx->data.raw_addressing) { @@ -576,26 +578,26 @@ bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget, * falls below a threshold. */ if (!gve_rx_refill_buffers(priv, rx)) - return false; + return 0; /* If we were not able to completely refill buffers, we'll want * to schedule this queue for work again to refill buffers. */ if (rx->fill_cnt - cnt <= rx->db_threshold) { gve_rx_write_doorbell(priv, rx); - return true; + return budget; } } gve_rx_write_doorbell(priv, rx); - return gve_rx_work_pending(rx); + return work_done; } -bool gve_rx_poll(struct gve_notify_block *block, int budget) +int gve_rx_poll(struct gve_notify_block *block, int budget) { struct gve_rx_ring *rx = block->rx; netdev_features_t feat; - bool repoll = false; + int work_done = 0; feat = block->napi.dev->features; @@ -604,8 +606,7 @@ bool gve_rx_poll(struct gve_notify_block *block, int budget) budget = INT_MAX; if (budget > 0) - repoll |= gve_clean_rx_done(rx, budget, feat); - else - repoll |= gve_rx_work_pending(rx); - return repoll; + work_done = gve_clean_rx_done(rx, budget, feat); + + return work_done; } From patchwork Mon Oct 11 15:36:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeroen de Borst X-Patchwork-Id: 12550353 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org 
Date: Mon, 11 Oct 2021 08:36:45 -0700
In-Reply-To: <20211011153650.1982904-1-jeroendb@google.com>
Message-Id: <20211011153650.1982904-3-jeroendb@google.com>
Subject: [PATCH net-next v2 2/7] gve: Add rx buffer pagecnt bias
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Catherine Sullivan, Yanchun Fu, Nathan Lewis, David Awogbemila

From: Catherine Sullivan

Add a pagecnt bias field to the rx buffer info struct to eliminate the need to increment the atomic page ref count on every pass in the rx hotpath. Also prefetch two packet pages ahead.
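[Editorial sketch, not part of the patch.] The recycle decision in the diff below reduces to comparing the kernel's page refcount against the driver's local bias: the driver front-loads `INT_MAX` references at setup and "hands one out" per SKB by decrementing the bias, so the page is free exactly when refcount equals bias. A standalone model (the helper name `can_recycle` is hypothetical):

```c
#include <assert.h>

/* Models gve_rx_can_recycle_buffer() after this patch: instead of
 * comparing page_count() against the constant 1, compare it against the
 * per-buffer bias, so no atomic get_page() is needed per packet. */
static int can_recycle(int page_refcount, int pagecnt_bias)
{
	if (page_refcount == pagecnt_bias)
		return 1;  /* no SKB holds the page: safe to reuse */
	if (page_refcount > pagecnt_bias)
		return 0;  /* an SKB still references the page */
	return -1;         /* refcount fell below the bias: accounting bug */
}
```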
Fixes: ede3fcf5ec67f ("gve: Add support for raw addressing to the rx path") Signed-off-by: Yanchun Fu Signed-off-by: Nathan Lewis Signed-off-by: Catherine Sullivan Signed-off-by: David Awogbemila --- drivers/net/ethernet/google/gve/gve_rx.c | 52 +++++++++++++++++------- 1 file changed, 37 insertions(+), 15 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c index 3347879a4a5d..41b21b527470 100644 --- a/drivers/net/ethernet/google/gve/gve_rx.c +++ b/drivers/net/ethernet/google/gve/gve_rx.c @@ -16,19 +16,23 @@ static void gve_rx_free_buffer(struct device *dev, dma_addr_t dma = (dma_addr_t)(be64_to_cpu(data_slot->addr) & GVE_DATA_SLOT_ADDR_PAGE_MASK); + page_ref_sub(page_info->page, page_info->pagecnt_bias - 1); gve_free_page(dev, page_info->page, dma, DMA_FROM_DEVICE); } static void gve_rx_unfill_pages(struct gve_priv *priv, struct gve_rx_ring *rx) { - if (rx->data.raw_addressing) { - u32 slots = rx->mask + 1; - int i; + u32 slots = rx->mask + 1; + int i; + if (rx->data.raw_addressing) { for (i = 0; i < slots; i++) gve_rx_free_buffer(&priv->pdev->dev, &rx->data.page_info[i], &rx->data.data_ring[i]); } else { + for (i = 0; i < slots; i++) + page_ref_sub(rx->data.page_info[i].page, + rx->data.page_info[i].pagecnt_bias - 1); gve_unassign_qpl(priv, rx->data.qpl->id); rx->data.qpl = NULL; } @@ -69,6 +73,9 @@ static void gve_setup_rx_buffer(struct gve_rx_slot_page_info *page_info, page_info->page_offset = 0; page_info->page_address = page_address(page); *slot_addr = cpu_to_be64(addr); + /* The page already has 1 ref */ + page_ref_add(page, INT_MAX - 1); + page_info->pagecnt_bias = INT_MAX; } static int gve_rx_alloc_buffer(struct gve_priv *priv, struct device *dev, @@ -299,17 +306,18 @@ static bool gve_rx_can_flip_buffers(struct net_device *netdev) ? 
netdev->mtu + GVE_RX_PAD + ETH_HLEN <= PAGE_SIZE / 2 : false; } -static int gve_rx_can_recycle_buffer(struct page *page) +static int gve_rx_can_recycle_buffer(struct gve_rx_slot_page_info *page_info) { - int pagecount = page_count(page); + int pagecount = page_count(page_info->page); /* This page is not being used by any SKBs - reuse */ - if (pagecount == 1) + if (pagecount == page_info->pagecnt_bias) return 1; /* This page is still being used by an SKB - we can't reuse */ - else if (pagecount >= 2) + else if (pagecount > page_info->pagecnt_bias) return 0; - WARN(pagecount < 1, "Pagecount should never be < 1"); + WARN(pagecount < page_info->pagecnt_bias, + "Pagecount should never be less than the bias."); return -1; } @@ -325,11 +333,11 @@ gve_rx_raw_addressing(struct device *dev, struct net_device *netdev, if (!skb) return NULL; - /* Optimistically stop the kernel from freeing the page by increasing - * the page bias. We will check the refcount in refill to determine if - * we need to alloc a new page. + /* Optimistically stop the kernel from freeing the page. + * We will check again in refill to determine if we need to alloc a + * new page. */ - get_page(page_info->page); + gve_dec_pagecnt_bias(page_info); return skb; } @@ -352,7 +360,7 @@ gve_rx_qpl(struct device *dev, struct net_device *netdev, /* No point in recycling if we didn't get the skb */ if (skb) { /* Make sure that the page isn't freed. */ - get_page(page_info->page); + gve_dec_pagecnt_bias(page_info); gve_rx_flip_buff(page_info, &data_slot->qpl_offset); } } else { @@ -376,8 +384,18 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, union gve_rx_data_slot *data_slot; struct sk_buff *skb = NULL; dma_addr_t page_bus; + void *va; u16 len; + /* Prefetch two packet pages ahead, we will need it soon. */ + page_info = &rx->data.page_info[(idx + 2) & rx->mask]; + va = page_info->page_address + GVE_RX_PAD + + page_info->page_offset; + + prefetch(page_info->page); /* Kernel page struct. 
*/ + prefetch(va); /* Packet header. */ + prefetch(va + 64); /* Next cacheline too. */ + /* drop this packet */ if (unlikely(rx_desc->flags_seq & GVE_RXF_ERR)) { u64_stats_update_begin(&rx->statss); @@ -408,7 +426,7 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, int recycle = 0; if (can_flip) { - recycle = gve_rx_can_recycle_buffer(page_info->page); + recycle = gve_rx_can_recycle_buffer(page_info); if (recycle < 0) { if (!rx->data.raw_addressing) gve_schedule_reset(priv); @@ -499,7 +517,7 @@ static bool gve_rx_refill_buffers(struct gve_priv *priv, struct gve_rx_ring *rx) * owns half the page it is impossible to tell which half. Either * the whole page is free or it needs to be replaced. */ - int recycle = gve_rx_can_recycle_buffer(page_info->page); + int recycle = gve_rx_can_recycle_buffer(page_info); if (recycle < 0) { if (!rx->data.raw_addressing) @@ -546,6 +564,10 @@ static int gve_clean_rx_done(struct gve_rx_ring *rx, int budget, "[%d] seqno=%d rx->desc.seqno=%d\n", rx->q_num, GVE_SEQNO(desc->flags_seq), rx->desc.seqno); + + /* prefetch two descriptors ahead */ + prefetch(rx->desc.desc_ring + ((cnt + 2) & rx->mask)); + dropped = !gve_rx(rx, desc, feat, idx); if (!dropped) { bytes += be16_to_cpu(desc->len) - GVE_RX_PAD; From patchwork Mon Oct 11 15:36:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeroen de Borst X-Patchwork-Id: 12550355 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C5E72C433EF for ; Mon, 11 Oct 2021 15:37:07 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A3F4C60231 for ; Mon, 11 Oct 2021 15:37:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by 
Date: Mon, 11 Oct 2021 08:36:46 -0700
In-Reply-To: <20211011153650.1982904-1-jeroendb@google.com>
Message-Id: <20211011153650.1982904-4-jeroendb@google.com>
Subject: [PATCH net-next v2 3/7] gve: Do lazy cleanup in TX path
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Tao Liu, Catherine Sullivan

From: Tao Liu

When the TX queue is full, attempt to process enough TX completions to avoid stalling the queue. Fixes: f5cedc84a30d2 ("gve: Add transmit and receive support") Signed-off-by: Tao Liu Signed-off-by: Catherine Sullivan --- drivers/net/ethernet/google/gve/gve.h | 9 +- drivers/net/ethernet/google/gve/gve_ethtool.c | 3 +- drivers/net/ethernet/google/gve/gve_main.c | 6 +- drivers/net/ethernet/google/gve/gve_tx.c | 94 +++++++++++-------- 4 files changed, 62 insertions(+), 50 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index 4abd53bdde73..3de561e22659 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -341,8 +341,8 @@ struct gve_tx_ring { union { /* GQI fields */ struct { - /* NIC tail pointer */ - __be32 last_nic_done; + /* Spinlock for when cleanup in progress */ + spinlock_t clean_lock; }; /* DQO fields.
*/ @@ -821,8 +821,9 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev); bool gve_tx_poll(struct gve_notify_block *block, int budget); int gve_tx_alloc_rings(struct gve_priv *priv); void gve_tx_free_rings_gqi(struct gve_priv *priv); -__be32 gve_tx_load_event_counter(struct gve_priv *priv, - struct gve_tx_ring *tx); +u32 gve_tx_load_event_counter(struct gve_priv *priv, + struct gve_tx_ring *tx); +bool gve_tx_clean_pending(struct gve_priv *priv, struct gve_tx_ring *tx); /* rx handling */ void gve_rx_write_doorbell(struct gve_priv *priv, struct gve_rx_ring *rx); int gve_rx_poll(struct gve_notify_block *block, int budget); diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c index 716e6240305d..618a3e1d858e 100644 --- a/drivers/net/ethernet/google/gve/gve_ethtool.c +++ b/drivers/net/ethernet/google/gve/gve_ethtool.c @@ -330,8 +330,7 @@ gve_get_ethtool_stats(struct net_device *netdev, data[i++] = tmp_tx_bytes; data[i++] = tx->wake_queue; data[i++] = tx->stop_queue; - data[i++] = be32_to_cpu(gve_tx_load_event_counter(priv, - tx)); + data[i++] = gve_tx_load_event_counter(priv, tx); data[i++] = tx->dma_mapping_error; /* stats from NIC */ if (skip_nic_stats) { diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index b41679ab0dbe..b6805ad2011b 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -212,13 +212,13 @@ static int gve_napi_poll(struct napi_struct *napi, int budget) irq_doorbell = gve_irq_doorbell(priv, block); iowrite32be(GVE_IRQ_ACK | GVE_IRQ_EVENT, irq_doorbell); - /* Double check we have no extra work. - * Ensure unmask synchronizes with checking for work. + /* Ensure IRQ ACK is visible before we check pending work. + * If queue had issued updates, it would be truly visible. 
*/ mb(); if (block->tx) - reschedule |= gve_tx_poll(block, -1); + reschedule |= gve_tx_clean_pending(priv, block->tx); if (block->rx) reschedule |= gve_rx_work_pending(block->rx); diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c index 9922ce46a635..a9cb241fedf4 100644 --- a/drivers/net/ethernet/google/gve/gve_tx.c +++ b/drivers/net/ethernet/google/gve/gve_tx.c @@ -144,7 +144,7 @@ static void gve_tx_free_ring(struct gve_priv *priv, int idx) gve_tx_remove_from_block(priv, idx); slots = tx->mask + 1; - gve_clean_tx_done(priv, tx, tx->req, false); + gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false); netdev_tx_reset_queue(tx->netdev_txq); dma_free_coherent(hdev, sizeof(*tx->q_resources), @@ -176,6 +176,7 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx) /* Make sure everything is zeroed to start */ memset(tx, 0, sizeof(*tx)); + spin_lock_init(&tx->clean_lock); tx->q_num = idx; tx->mask = slots - 1; @@ -328,10 +329,16 @@ static inline bool gve_can_tx(struct gve_tx_ring *tx, int bytes_required) return (gve_tx_avail(tx) >= MAX_TX_DESC_NEEDED && can_alloc); } +static_assert(NAPI_POLL_WEIGHT >= MAX_TX_DESC_NEEDED); + /* Stops the queue if the skb cannot be transmitted. 
*/ -static int gve_maybe_stop_tx(struct gve_tx_ring *tx, struct sk_buff *skb) +static int gve_maybe_stop_tx(struct gve_priv *priv, struct gve_tx_ring *tx, + struct sk_buff *skb) { int bytes_required = 0; + u32 nic_done; + u32 to_do; + int ret; if (!tx->raw_addressing) bytes_required = gve_skb_fifo_bytes_required(tx, skb); @@ -339,29 +346,28 @@ static int gve_maybe_stop_tx(struct gve_tx_ring *tx, struct sk_buff *skb) if (likely(gve_can_tx(tx, bytes_required))) return 0; - /* No space, so stop the queue */ - tx->stop_queue++; - netif_tx_stop_queue(tx->netdev_txq); - smp_mb(); /* sync with restarting queue in gve_clean_tx_done() */ - - /* Now check for resources again, in case gve_clean_tx_done() freed - * resources after we checked and we stopped the queue after - * gve_clean_tx_done() checked. - * - * gve_maybe_stop_tx() gve_clean_tx_done() - * nsegs/can_alloc test failed - * gve_tx_free_fifo() - * if (tx queue stopped) - * netif_tx_queue_wake() - * netif_tx_stop_queue() - * Need to check again for space here! 
- */ - if (likely(!gve_can_tx(tx, bytes_required))) - return -EBUSY; + ret = -EBUSY; + spin_lock(&tx->clean_lock); + nic_done = gve_tx_load_event_counter(priv, tx); + to_do = nic_done - tx->done; - netif_tx_start_queue(tx->netdev_txq); - tx->wake_queue++; - return 0; + /* Only try to clean if there is hope for TX */ + if (to_do + gve_tx_avail(tx) >= MAX_TX_DESC_NEEDED) { + if (to_do > 0) { + to_do = min_t(u32, to_do, NAPI_POLL_WEIGHT); + gve_clean_tx_done(priv, tx, to_do, false); + } + if (likely(gve_can_tx(tx, bytes_required))) + ret = 0; + } + if (ret) { + /* No space, so stop the queue */ + tx->stop_queue++; + netif_tx_stop_queue(tx->netdev_txq); + } + spin_unlock(&tx->clean_lock); + + return ret; } static void gve_tx_fill_pkt_desc(union gve_tx_desc *pkt_desc, @@ -576,7 +582,7 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev) WARN(skb_get_queue_mapping(skb) >= priv->tx_cfg.num_queues, "skb queue index out of range"); tx = &priv->tx[skb_get_queue_mapping(skb)]; - if (unlikely(gve_maybe_stop_tx(tx, skb))) { + if (unlikely(gve_maybe_stop_tx(priv, tx, skb))) { /* We need to ring the txq doorbell -- we have stopped the Tx * queue for want of resources, but prior calls to gve_tx() * may have added descriptors without ringing the doorbell. 
@@ -672,19 +678,19 @@ static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx, return pkts; } -__be32 gve_tx_load_event_counter(struct gve_priv *priv, - struct gve_tx_ring *tx) +u32 gve_tx_load_event_counter(struct gve_priv *priv, + struct gve_tx_ring *tx) { - u32 counter_index = be32_to_cpu((tx->q_resources->counter_index)); + u32 counter_index = be32_to_cpu(tx->q_resources->counter_index); + __be32 counter = READ_ONCE(priv->counter_array[counter_index]); - return READ_ONCE(priv->counter_array[counter_index]); + return be32_to_cpu(counter); } bool gve_tx_poll(struct gve_notify_block *block, int budget) { struct gve_priv *priv = block->priv; struct gve_tx_ring *tx = block->tx; - bool repoll = false; u32 nic_done; u32 to_do; @@ -692,17 +698,23 @@ bool gve_tx_poll(struct gve_notify_block *block, int budget) if (budget == 0) budget = INT_MAX; + /* In TX path, it may try to clean completed pkts in order to xmit, + * to avoid cleaning conflict, use spin_lock(), it yields better + * concurrency between xmit/clean than netif's lock. 
+ */ + spin_lock(&tx->clean_lock); /* Find out how much work there is to be done */ - tx->last_nic_done = gve_tx_load_event_counter(priv, tx); - nic_done = be32_to_cpu(tx->last_nic_done); - if (budget > 0) { - /* Do as much work as we have that the budget will - * allow - */ - to_do = min_t(u32, (nic_done - tx->done), budget); - gve_clean_tx_done(priv, tx, to_do, true); - } + nic_done = gve_tx_load_event_counter(priv, tx); + to_do = min_t(u32, (nic_done - tx->done), budget); + gve_clean_tx_done(priv, tx, to_do, true); + spin_unlock(&tx->clean_lock); /* If we still have work we want to repoll */ - repoll |= (nic_done != tx->done); - return repoll; + return nic_done != tx->done; +} + +bool gve_tx_clean_pending(struct gve_priv *priv, struct gve_tx_ring *tx) +{ + u32 nic_done = gve_tx_load_event_counter(priv, tx); + + return nic_done != tx->done; } From patchwork Mon Oct 11 15:36:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeroen de Borst X-Patchwork-Id: 12550357 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DD731C433F5 for ; Mon, 11 Oct 2021 15:37:12 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BC9B260231 for ; Mon, 11 Oct 2021 15:37:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242961AbhJKPjL (ORCPT ); Mon, 11 Oct 2021 11:39:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60786 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242833AbhJKPjC (ORCPT ); Mon, 11 Oct 2021 11:39:02 -0400 Received: from mail-pf1-x449.google.com (mail-pf1-x449.google.com [IPv6:2607:f8b0:4864:20::449]) by 
Date: Mon, 11 Oct 2021 08:36:47 -0700
In-Reply-To: <20211011153650.1982904-1-jeroendb@google.com>
Message-Id: <20211011153650.1982904-5-jeroendb@google.com>
References: <20211011153650.1982904-1-jeroendb@google.com>
Subject: [PATCH net-next v2 4/7] gve: Recover from queue stall due to missed IRQ
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, John Fraker, David Awogbemila

From: John Fraker

Don't always reset the driver on a TX timeout. Attempt to recover by
kicking the queue in case an IRQ was missed.

Fixes: 9e5f7d26a4c08 ("gve: Add workqueue and reset support")
Signed-off-by: John Fraker
Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve.h        |  4 +-
 drivers/net/ethernet/google/gve/gve_adminq.h |  1 +
 drivers/net/ethernet/google/gve/gve_main.c   | 48 +++++++++++++++++++-
 3 files changed, 51 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 3de561e22659..51ed8fe71d2d 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -30,7 +30,7 @@
 #define GVE_MIN_MSIX 3

 /* Numbers of gve tx/rx stats in stats report. */
-#define GVE_TX_STATS_REPORT_NUM	5
+#define GVE_TX_STATS_REPORT_NUM	6
 #define GVE_RX_STATS_REPORT_NUM	2

 /* Interval to schedule a stats report update, 20000ms.
 */
@@ -413,7 +413,9 @@ struct gve_tx_ring {
 	u32 q_num ____cacheline_aligned; /* queue idx */
 	u32 stop_queue; /* count of queue stops */
 	u32 wake_queue; /* count of queue wakes */
+	u32 queue_timeout; /* count of queue timeouts */
 	u32 ntfy_id; /* notification block index */
+	u32 last_kick_msec; /* Last time the queue was kicked */
 	dma_addr_t bus; /* dma address of the descr ring */
 	dma_addr_t q_resources_bus; /* dma address of the queue resources */
 	dma_addr_t complq_bus_dqo; /* dma address of the dqo.compl_ring */
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h
index 47c3d8f313fc..3953f6f7a427 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.h
+++ b/drivers/net/ethernet/google/gve/gve_adminq.h
@@ -270,6 +270,7 @@ enum gve_stat_names {
 	TX_LAST_COMPLETION_PROCESSED	= 5,
 	RX_NEXT_EXPECTED_SEQUENCE	= 6,
 	RX_BUFFERS_POSTED		= 7,
+	TX_TIMEOUT_CNT			= 8,
 	// stats from NIC
 	RX_QUEUE_DROP_CNT		= 65,
 	RX_NO_BUFFERS_POSTED		= 66,
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index b6805ad2011b..7647cd05b1d2 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -24,6 +24,9 @@
 #define GVE_VERSION		"1.0.0"
 #define GVE_VERSION_PREFIX	"GVE-"

+// Minimum amount of time between queue kicks in msec (10 seconds)
+#define MIN_TX_TIMEOUT_GAP (1000 * 10)
+
 const char gve_version_str[] = GVE_VERSION;
 static const char gve_version_prefix[] = GVE_VERSION_PREFIX;

@@ -1121,9 +1124,47 @@ static void gve_turnup(struct gve_priv *priv)

 static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
 {
-	struct gve_priv *priv = netdev_priv(dev);
+	struct gve_notify_block *block;
+	struct gve_tx_ring *tx = NULL;
+	struct gve_priv *priv;
+	u32 last_nic_done;
+	u32 current_time;
+	u32 ntfy_idx;
+
+	netdev_info(dev, "Timeout on tx queue, %d", txqueue);
+	priv = netdev_priv(dev);
+	if (txqueue > priv->tx_cfg.num_queues)
+		goto reset;
+
+	ntfy_idx = gve_tx_idx_to_ntfy(priv, txqueue);
+	if (ntfy_idx > priv->num_ntfy_blks)
+		goto reset;
+
+	block = &priv->ntfy_blocks[ntfy_idx];
+	tx = block->tx;
+	current_time = jiffies_to_msecs(jiffies);
+	if (tx->last_kick_msec + MIN_TX_TIMEOUT_GAP > current_time)
+		goto reset;
+
+	/* Check to see if there are missed completions, which will allow us to
+	 * kick the queue.
+	 */
+	last_nic_done = gve_tx_load_event_counter(priv, tx);
+	if (last_nic_done - tx->done) {
+		netdev_info(dev, "Kicking queue %d", txqueue);
+		iowrite32be(GVE_IRQ_MASK, gve_irq_doorbell(priv, block));
+		napi_schedule(&block->napi);
+		tx->last_kick_msec = current_time;
+		goto out;
+	} // Else reset.
+
+reset:
 	gve_schedule_reset(priv);
+
+out:
+	if (tx)
+		tx->queue_timeout++;
 	priv->tx_timeo_cnt++;
 }

@@ -1252,6 +1293,11 @@ void gve_handle_report_stats(struct gve_priv *priv)
 				.value = cpu_to_be64(last_completion),
 				.queue_id = cpu_to_be32(idx),
 			};
+			stats[stats_idx++] = (struct stats) {
+				.stat_name = cpu_to_be32(TX_TIMEOUT_CNT),
+				.value = cpu_to_be64(priv->tx[idx].queue_timeout),
+				.queue_id = cpu_to_be32(idx),
+			};
 		}
 	}
 	/* rx stats */

From patchwork Mon Oct 11 15:36:48 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12550359
X-Patchwork-Delegate: kuba@kernel.org
"EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243270AbhJKPjJ (ORCPT ); Mon, 11 Oct 2021 11:39:09 -0400 Received: from mail-pg1-x54a.google.com (mail-pg1-x54a.google.com [IPv6:2607:f8b0:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 819B3C061764 for ; Mon, 11 Oct 2021 08:37:03 -0700 (PDT) Received: by mail-pg1-x54a.google.com with SMTP id 15-20020a630d4f000000b00287c5b3f77bso7115114pgn.11 for ; Mon, 11 Oct 2021 08:37:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=hdWeng6qPmeYcD52MAWN7GI/aHqi0RCXV2CYrCVhuf8=; b=KcRlt8vUAygaa4AFu03/QWKBYJaWAzOVOoFOSE/1xWfSLf8gXWRTNjKmagqgTNNtTn c8YOWS9hBUwuSnE0ejno++EJLYLRFcXuU8tuDG3D+8LbjXSPxN3EIwCt5YnMuHDYqR0e n31bNmu02nBLYEzGXzTCxnG5nmnoUKB0Qi4FpHKm0RL5wgcTp7uVPLxeEZMO9N777hfg MaRapawtMnTWrG2fkX8U8DJzWUIIcw14YBmxpA+SqHoisYiDZp9R4fOD9uOOPjPGj7R2 /k0xyE71Cm3GE098cf9HNMQE25+dZjITRpRvy5LbKSdcdf8HhR8YISvLSvZhw69Bw68g 6owQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=hdWeng6qPmeYcD52MAWN7GI/aHqi0RCXV2CYrCVhuf8=; b=ufe+9opcZ3MqQYfBtzinEH5OI4wdcv519zp5P0T/Cma3ULEMHgug2A0XdRtYhbzpvF K5mll7ROdWK2AApWUCjhE3aCYZb0EQVIz2KXANQE1kikSlXFE6Zkseb7VQHeA/qSF8UG U8d23s0w9tzZueU2fDtKAp6z8wObDDxbErGcXAcYgGBZht87mtsBD/gm4khfpuCVgcoG MhwejEAeaTcEUywI/MyQnIbbTS/i2bUhGsaIROAfuo8SfoBJe3cYoGusNqGVFdYMbnYv eSGCSxHwmcX2JNuLclr3vbWUuN/Vu5M10HMYlMCy0OTlmPmOXOoBo1su2kprHYKuleb/ mmUA== X-Gm-Message-State: AOAM533owLQcEv28WKNqUfaQBJqOyW+XywBDXmZWnz0FBd9ElHKVBeWb 3MdsFVF5DpyG3ycUI/guqGe9pcGAeP+VIvePCvV7jrz71avauouI5rIL5jP3eiZWYcsXBOpb5CO LZuG/OiHdEQuPFqcH/FwRUZPleI9MdvlQchUQkWQSSI+UrTBbetcB0iwEhY4t2XYTcak= X-Google-Smtp-Source: ABdhPJxz6eytCy0lzKtp6C2KgvzJzx29bLahcyWT3vQL3WdWlZeWAm1iZ1zhcYj8gW7is0JxyFo5TXLQeTnujg== X-Received: from 
Date: Mon, 11 Oct 2021 08:36:48 -0700
In-Reply-To: <20211011153650.1982904-1-jeroendb@google.com>
Message-Id: <20211011153650.1982904-6-jeroendb@google.com>
References: <20211011153650.1982904-1-jeroendb@google.com>
Subject: [PATCH net-next v2 5/7] gve: Add netif_set_xps_queue call
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Catherine Sullivan, David Awogbemila

From: Catherine Sullivan

Configure XPS when adding tx queues to the notification blocks.

Fixes: dbdaa67540512 ("gve: Move some static functions to a common file")
Signed-off-by: Catherine Sullivan
Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve_utils.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/ethernet/google/gve/gve_utils.c b/drivers/net/ethernet/google/gve/gve_utils.c
index 93f3dcbeeea9..45ff7a9ab5f9 100644
--- a/drivers/net/ethernet/google/gve/gve_utils.c
+++ b/drivers/net/ethernet/google/gve/gve_utils.c
@@ -18,12 +18,16 @@ void gve_tx_remove_from_block(struct gve_priv *priv, int queue_idx)

 void gve_tx_add_to_block(struct gve_priv *priv, int queue_idx)
 {
+	unsigned int active_cpus = min_t(int, priv->num_ntfy_blks / 2,
+					 num_online_cpus());
 	int ntfy_idx = gve_tx_idx_to_ntfy(priv, queue_idx);
 	struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
 	struct gve_tx_ring *tx = &priv->tx[queue_idx];

 	block->tx = tx;
 	tx->ntfy_id = ntfy_idx;
+	netif_set_xps_queue(priv->dev, get_cpu_mask(ntfy_idx % active_cpus),
+			    queue_idx);
 }

 void gve_rx_remove_from_block(struct gve_priv *priv, int queue_idx)
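As an aside on what the hunk above computes: here is a minimal userspace sketch of the resulting TX-queue-to-CPU steering. Half of the notification blocks serve TX, so the number of "active" CPUs is capped at num_ntfy_blks / 2 and at the online CPU count, and each queue is mapped to its notification block index modulo that. The function and parameter names below are illustrative, not driver symbols.

```c
#include <assert.h>

/* Illustrative sketch (not driver code) of the TX queue -> CPU mapping
 * that gve_tx_add_to_block() establishes via netif_set_xps_queue().
 */
unsigned int xps_cpu_for_queue(unsigned int ntfy_idx,
			       unsigned int num_ntfy_blks,
			       unsigned int online_cpus)
{
	unsigned int active = num_ntfy_blks / 2;

	if (online_cpus < active)	/* min_t(int, blks / 2, online) */
		active = online_cpus;
	if (active == 0)		/* guard for the sketch only */
		active = 1;
	return ntfy_idx % active;
}
```

With 8 notification blocks and 16 online CPUs, 4 CPUs are active, so TX notification block 5 steers to CPU 1; on a 2-CPU machine the same block steers to CPU 1 as well, since the online count is the tighter cap.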
From patchwork Mon Oct 11 15:36:49 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12550361
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 11 Oct 2021 08:36:49 -0700
In-Reply-To: <20211011153650.1982904-1-jeroendb@google.com>
Message-Id: <20211011153650.1982904-7-jeroendb@google.com>
References: <20211011153650.1982904-1-jeroendb@google.com>
Subject: [PATCH net-next v2 6/7] gve: Allow pageflips on larger pages
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Jordan Kim, Jeroen de Borst

From: Jordan Kim

Half pages are only used for packets that are small enough to fit in
them. This change extends that behavior to systems with pages larger
than 4 KB.
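As a sketch of the eligibility check this patch relaxes: a buffer can be "flipped" between page halves only when an MTU-sized frame plus padding fits in half a page. The constants mirror GVE_RX_PAD (2) and ETH_HLEN (14) from the driver and kernel headers, and page_size is a parameter here instead of the PAGE_SIZE constant; the standalone helper form is an assumption for illustration.

```c
#include <assert.h>
#include <stdbool.h>

#define GVE_RX_PAD 2	/* driver value, assumed here */
#define ETH_HLEN   14	/* kernel value */

/* Illustrative sketch (not driver code) of gve_rx_can_flip_buffers()
 * after this patch.
 */
bool rx_can_flip_buffers(unsigned long page_size, unsigned int mtu)
{
	/* The old code required page_size == 4096 exactly; the patch
	 * relaxes the test to >=, enabling recycling on 16K/64K pages.
	 */
	if (page_size < 4096)
		return false;
	return mtu + GVE_RX_PAD + ETH_HLEN <= page_size / 2;
}
```

So a 1500-byte MTU still flips on 4 KB pages, and on 64 KB pages even a 9000-byte jumbo MTU now qualifies, where the old equality test ruled those systems out entirely.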
Fixes: 02b0e0c18ba75 ("gve: Rx Buffer Recycling")
Signed-off-by: Jordan Kim
Signed-off-by: Jeroen de Borst
---
 drivers/net/ethernet/google/gve/gve_rx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 41b21b527470..98ba981cd534 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -302,7 +302,7 @@ static void gve_rx_flip_buff(struct gve_rx_slot_page_info *page_info, __be64 *sl

 static bool gve_rx_can_flip_buffers(struct net_device *netdev)
 {
-	return PAGE_SIZE == 4096
+	return PAGE_SIZE >= 4096
 		? netdev->mtu + GVE_RX_PAD + ETH_HLEN <= PAGE_SIZE / 2
 		: false;
 }

From patchwork Mon Oct 11 15:36:50 2021
X-Patchwork-Submitter: Jeroen de Borst
X-Patchwork-Id: 12550363
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 11 Oct 2021 08:36:50 -0700
In-Reply-To: <20211011153650.1982904-1-jeroendb@google.com>
Message-Id: <20211011153650.1982904-8-jeroendb@google.com>
References:
<20211011153650.1982904-1-jeroendb@google.com>
Subject: [PATCH net-next v2 7/7] gve: Track RX buffer allocation failures
From: Jeroen de Borst
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Catherine Sullivan, Jeroen de Borst

From: Catherine Sullivan

The rx_buf_alloc_fail counter wasn't getting updated.

Fixes: 433e274b8f7b0 ("gve: Add stats for gve.")
Signed-off-by: Catherine Sullivan
Signed-off-by: Jeroen de Borst
---
 drivers/net/ethernet/google/gve/gve_rx.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 98ba981cd534..95bc4d8a1811 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -532,8 +532,13 @@ static bool gve_rx_refill_buffers(struct gve_priv *priv, struct gve_rx_ring *rx)
 				gve_rx_free_buffer(dev, page_info, data_slot);
 				page_info->page = NULL;
-				if (gve_rx_alloc_buffer(priv, dev, page_info, data_slot))
+				if (gve_rx_alloc_buffer(priv, dev, page_info,
+							data_slot)) {
+					u64_stats_update_begin(&rx->statss);
+					rx->rx_buf_alloc_fail++;
+					u64_stats_update_end(&rx->statss);
 					break;
+				}
 			}
 		}
 		fill_cnt++;
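A minimal userspace sketch of the refill bookkeeping the hunk above introduces: a failed buffer allocation now bumps the per-ring failure counter before the loop breaks, instead of being silently dropped. The types, names, and injected allocator below are illustrative assumptions; the real driver additionally guards the counter with u64_stats_update_begin/end for safe 64-bit reads.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for the ring's stats block. */
struct rx_ring_stats {
	unsigned long long rx_buf_alloc_fail;
};

/* Try to post up to 'want' buffers; returns how many were posted.
 * On the first allocation failure, count it and bail out so a later
 * pass can retry (mirrors the break in gve_rx_refill_buffers()).
 */
size_t refill_buffers(struct rx_ring_stats *stats, size_t want,
		      int (*alloc_buf)(void))
{
	size_t posted = 0;

	while (posted < want) {
		if (alloc_buf() != 0) {		/* allocation failed */
			stats->rx_buf_alloc_fail++;
			break;
		}
		posted++;
	}
	return posted;
}

/* Test helper: succeed three times, then fail. */
static int calls;
int flaky_alloc(void)
{
	return ++calls > 3 ? -1 : 0;
}
```

With the flaky allocator, a refill pass posts three buffers, records exactly one failure, and stops, which is the observable behavior the patch fixes (before it, the counter stayed at zero).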