From patchwork Mon Oct 14 10:58:36 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ryan Roberts <ryan.roberts@arm.com>
X-Patchwork-Id: 13834697
From: Ryan Roberts <ryan.roberts@arm.com>
Miller" , Andrew Morton , Anshuman Khandual , Ard Biesheuvel , Catalin Marinas , David Hildenbrand , Eric Dumazet , Greg Marsden , Ivan Ivanov , Jakub Kicinski , Kalesh Singh , Marc Zyngier , Mark Rutland , Matthias Brugger , Miroslav Benes , Paolo Abeni , Will Deacon Cc: Ryan Roberts , bpf@vger.kernel.org, intel-wired-lan@lists.osuosl.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org Subject: [RFC PATCH v1 29/57] net: igb: Remove PAGE_SIZE compile-time constant assumption Date: Mon, 14 Oct 2024 11:58:36 +0100 Message-ID: <20241014105912.3207374-29-ryan.roberts@arm.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20241014105912.3207374-1-ryan.roberts@arm.com> References: <20241014105514.3206191-1-ryan.roberts@arm.com> <20241014105912.3207374-1-ryan.roberts@arm.com> MIME-Version: 1.0 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: EA681180018 X-Stat-Signature: dwh6jxhs6ibadxum9w68qun81ftoprt8 X-Rspam-User: X-HE-Tag: 1728903655-544683 X-HE-Meta: U2FsdGVkX1+qyP7Gs9lE42kJpqK21GQfLTk4cer9EMQUOt7fRmVXngYCT78BdpxAbpI/BM7SbifhL7QVk+kWSKJD71KKj/tXfDTy2ly2qpVsQjOIlTzYL8K0bic366k6S6fRt5N6K7QU2QsBYlo5gmRtnBYqd2WFqi1WEdTnr++Ww3jnMuPIrV+2hqdX4iMLlrpy53PvPvsqDsMCvZQrHdX+HCKMH+X0P3IvTIC4c2sjWi/eeDdUC2Zsd3ZgM3qp+IFKBQDzA4wFeI2qjuI2su/1KYsYj6P2NjbpyhaiD4nkEHt4kZR0e+Yr69wYI9QG2X3hLXUAT2txmCkbbEVbfxtHposYAeehpfR5QdSkLOpDPTcNBKJsx8hI8/nXvOzN2ZMrE7HyZw81anRO+6druS7BLfXAF3X/uCyXBmdkiYuof+ZdSQXciJ5c0AAF1QSBicg7qOko+wuWFSc2B0bvEDf1lTQw7uTNtCII1Ft5ybrhe4l4LNwqF4aJpzoNhbRYzltvLDtwSc56SH+OnzagAK0SSGJPzVwiOk0OAeB7iGOH3I8PPrP6wP30ogdU3oVNg6XQuSwRrRWweN2ED5veX2TzjNSnZp6OPU0R3EgOUA59OSWUL14JDKHmcFdO+pClf+n37cwpvUsaPg6U8wKnZ3j0yXXzfQJQyGg2JztEV1rigqX+nx/sdREjSwTMudUmVX++YWF8F3g3YLgml91T99cs/1xehk0lXCqag/1Y3LUUh+GDtaXaCV8CIJMCL/fpb/g9sbf8bdM+M+OrCK4v4tZWP+z74C+c5Y2icO5DaHNHrYkYiSpFf2TxD9qOagQuW2WBerEwzQIm9w0ileBDFLfEp/ISSJlAjANgk5HHAFYbH9BxXdGbgRhOnrD2pw8DCuCBpIwgPlgSqrHJ8I0I6HydIB1iuNFsiC8hreXJ3N8og7twLuVEcjlOr+AfDbluOdmeTz4QXJ44x2RU1il t0RTYZks Wu7J4ARsci1hoh/FOyJOOm/tdF+GxQ4bPPbtrPwUlHdz0jd939lpAB8r4WJ7Cinf9KbMuwL4WmiTS9e3Vs8yMikIhd41sf/irPorLREhGB3eQi9CaIMEOsBdqK9/FlGwll0Ubh9Wr8AAptmJG3O2np8a9+DLw3EiRJ3/aBlIXzEI4Xs5iDarN1mal+1xOrWTn1W2aeDh7+2FmVcNtkiL7Z/rH4EPC58kxr0Jb X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: To prepare for supporting boot-time page size selection, refactor code to remove assumptions about PAGE_SIZE being compile-time constant. Code intended to be equivalent when compile-time page size is active. Convert CPP conditionals to C conditionals. The compiler will dead code strip when doing a compile-time page size build, for the same end effect. But this will also work with boot-time page size builds. Signed-off-by: Ryan Roberts --- ***NOTE*** Any confused maintainers may want to read the cover note here for context: https://lore.kernel.org/all/20241014105514.3206191-1-ryan.roberts@arm.com/ drivers/net/ethernet/intel/igb/igb.h | 25 ++-- drivers/net/ethernet/intel/igb/igb_main.c | 149 +++++++++++----------- 2 files changed, 82 insertions(+), 92 deletions(-) diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h index 3c2dc7bdebb50..04aeebcd363b3 100644 --- a/drivers/net/ethernet/intel/igb/igb.h +++ b/drivers/net/ethernet/intel/igb/igb.h @@ -158,7 +158,6 @@ struct vf_mac_filter { * up negative. In these cases we should fall back to the 3K * buffers. 
diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
index 3c2dc7bdebb50..04aeebcd363b3 100644
--- a/drivers/net/ethernet/intel/igb/igb.h
+++ b/drivers/net/ethernet/intel/igb/igb.h
@@ -158,7 +158,6 @@ struct vf_mac_filter {
  * up negative. In these cases we should fall back to the 3K
  * buffers.
  */
-#if (PAGE_SIZE < 8192)
 #define IGB_MAX_FRAME_BUILD_SKB (IGB_RXBUFFER_1536 - NET_IP_ALIGN)
 #define IGB_2K_TOO_SMALL_WITH_PADDING \
 ((NET_SKB_PAD + IGB_TS_HDR_LEN + IGB_RXBUFFER_1536) > SKB_WITH_OVERHEAD(IGB_RXBUFFER_2048))
@@ -177,6 +176,9 @@ static inline int igb_skb_pad(void)
 {
 	int rx_buf_len;
 
+	if (PAGE_SIZE >= 8192)
+		return NET_SKB_PAD + NET_IP_ALIGN;
+
 	/* If a 2K buffer cannot handle a standard Ethernet frame then
 	 * optimize padding for a 3K buffer instead of a 1.5K buffer.
 	 *
@@ -196,9 +198,6 @@ static inline int igb_skb_pad(void)
 }
 
 #define IGB_SKB_PAD	igb_skb_pad()
-#else
-#define IGB_SKB_PAD	(NET_SKB_PAD + NET_IP_ALIGN)
-#endif
 
 /* How many Rx Buffers do we bundle into one write to the hardware ? */
 #define IGB_RX_BUFFER_WRITE	16 /* Must be power of 2 */
@@ -280,7 +279,7 @@ struct igb_tx_buffer {
 struct igb_rx_buffer {
 	dma_addr_t dma;
 	struct page *page;
-#if (BITS_PER_LONG > 32) || (PAGE_SIZE >= 65536)
+#if (BITS_PER_LONG > 32) || (PAGE_SIZE_MAX >= 65536)
 	__u32 page_offset;
 #else
 	__u16 page_offset;
@@ -403,22 +402,20 @@ enum e1000_ring_flags_t {
 
 static inline unsigned int igb_rx_bufsz(struct igb_ring *ring)
 {
-#if (PAGE_SIZE < 8192)
-	if (ring_uses_large_buffer(ring))
-		return IGB_RXBUFFER_3072;
+	if (PAGE_SIZE < 8192) {
+		if (ring_uses_large_buffer(ring))
+			return IGB_RXBUFFER_3072;
 
-	if (ring_uses_build_skb(ring))
-		return IGB_MAX_FRAME_BUILD_SKB;
-#endif
+		if (ring_uses_build_skb(ring))
+			return IGB_MAX_FRAME_BUILD_SKB;
+	}
 	return IGB_RXBUFFER_2048;
 }
 
 static inline unsigned int igb_rx_pg_order(struct igb_ring *ring)
 {
-#if (PAGE_SIZE < 8192)
-	if (ring_uses_large_buffer(ring))
+	if (PAGE_SIZE < 8192 && ring_uses_large_buffer(ring))
 		return 1;
-#endif
 	return 0;
 }
 
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 1ef4cb871452a..4f2c53dece1a2 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -4797,9 +4797,7 @@ void igb_configure_rx_ring(struct igb_adapter *adapter,
 static void igb_set_rx_buffer_len(struct igb_adapter *adapter,
 				  struct igb_ring *rx_ring)
 {
-#if (PAGE_SIZE < 8192)
 	struct e1000_hw *hw = &adapter->hw;
-#endif
 
 	/* set build_skb and buffer size flags */
 	clear_ring_build_skb_enabled(rx_ring);
@@ -4810,12 +4808,11 @@ static void igb_set_rx_buffer_len(struct igb_adapter *adapter,
 
 	set_ring_build_skb_enabled(rx_ring);
 
-#if (PAGE_SIZE < 8192)
-	if (adapter->max_frame_size > IGB_MAX_FRAME_BUILD_SKB ||
+	if (PAGE_SIZE < 8192 &&
+	    (adapter->max_frame_size > IGB_MAX_FRAME_BUILD_SKB ||
 	     IGB_2K_TOO_SMALL_WITH_PADDING ||
-	    rd32(E1000_RCTL) & E1000_RCTL_SBP)
+	     rd32(E1000_RCTL) & E1000_RCTL_SBP))
 		set_ring_uses_large_buffer(rx_ring);
-#endif
 }
 
 /**
@@ -5314,12 +5311,10 @@ static void igb_set_rx_mode(struct net_device *netdev)
 					  E1000_RCTL_VFE);
 	wr32(E1000_RCTL, rctl);
 
-#if (PAGE_SIZE < 8192)
-	if (!adapter->vfs_allocated_count) {
+	if (PAGE_SIZE < 8192 && !adapter->vfs_allocated_count) {
 		if (adapter->max_frame_size <= IGB_MAX_FRAME_BUILD_SKB)
 			rlpml = IGB_MAX_FRAME_BUILD_SKB;
 	}
-#endif
 	wr32(E1000_RLPML, rlpml);
 
 	/* In order to support SR-IOV and eventually VMDq it is necessary to set
@@ -5338,11 +5333,10 @@ static void igb_set_rx_mode(struct net_device *netdev)
 
 	/* enable Rx jumbo frames, restrict as needed to support build_skb */
 	vmolr &= ~E1000_VMOLR_RLPML_MASK;
-#if (PAGE_SIZE < 8192)
-	if (adapter->max_frame_size <= IGB_MAX_FRAME_BUILD_SKB)
+	if (PAGE_SIZE < 8192 &&
+	    adapter->max_frame_size <= IGB_MAX_FRAME_BUILD_SKB)
 		vmolr |= IGB_MAX_FRAME_BUILD_SKB;
 	else
-#endif
 		vmolr |= MAX_JUMBO_FRAME_SIZE;
 	vmolr |= E1000_VMOLR_LPE;
 
@@ -8435,17 +8429,17 @@ static bool igb_can_reuse_rx_page(struct igb_rx_buffer *rx_buffer,
 	if (!dev_page_is_reusable(page))
 		return false;
 
-#if (PAGE_SIZE < 8192)
-	/* if we are only owner of page we can reuse it */
-	if (unlikely((rx_buf_pgcnt - pagecnt_bias) > 1))
-		return false;
-#else
+	if (PAGE_SIZE < 8192) {
+		/* if we are only owner of page we can reuse it */
+		if (unlikely((rx_buf_pgcnt - pagecnt_bias) > 1))
+			return false;
+	} else {
 #define IGB_LAST_OFFSET \
 	(SKB_WITH_OVERHEAD(PAGE_SIZE) - IGB_RXBUFFER_2048)
 
-	if (rx_buffer->page_offset > IGB_LAST_OFFSET)
-		return false;
-#endif
+		if (rx_buffer->page_offset > IGB_LAST_OFFSET)
+			return false;
+	}
 
 	/* If we have drained the page fragment pool we need to update
 	 * the pagecnt_bias and page count so that we fully restock the
@@ -8473,20 +8467,22 @@ static void igb_add_rx_frag(struct igb_ring *rx_ring,
 			    struct sk_buff *skb,
 			    unsigned int size)
 {
-#if (PAGE_SIZE < 8192)
-	unsigned int truesize = igb_rx_pg_size(rx_ring) / 2;
-#else
-	unsigned int truesize = ring_uses_build_skb(rx_ring) ?
-				SKB_DATA_ALIGN(IGB_SKB_PAD + size) :
-				SKB_DATA_ALIGN(size);
-#endif
+	unsigned int truesize;
+
+	if (PAGE_SIZE < 8192)
+		truesize = igb_rx_pg_size(rx_ring) / 2;
+	else
+		truesize = ring_uses_build_skb(rx_ring) ?
+				SKB_DATA_ALIGN(IGB_SKB_PAD + size) :
+				SKB_DATA_ALIGN(size);
+
 	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
 			rx_buffer->page_offset, size, truesize);
-#if (PAGE_SIZE < 8192)
-	rx_buffer->page_offset ^= truesize;
-#else
-	rx_buffer->page_offset += truesize;
-#endif
+
+	if (PAGE_SIZE < 8192)
+		rx_buffer->page_offset ^= truesize;
+	else
+		rx_buffer->page_offset += truesize;
 }
 
 static struct sk_buff *igb_construct_skb(struct igb_ring *rx_ring,
@@ -8494,16 +8490,16 @@ static struct sk_buff *igb_construct_skb(struct igb_ring *rx_ring,
 					 struct xdp_buff *xdp,
 					 ktime_t timestamp)
 {
-#if (PAGE_SIZE < 8192)
-	unsigned int truesize = igb_rx_pg_size(rx_ring) / 2;
-#else
-	unsigned int truesize = SKB_DATA_ALIGN(xdp->data_end -
-					       xdp->data_hard_start);
-#endif
 	unsigned int size = xdp->data_end - xdp->data;
+	unsigned int truesize;
 	unsigned int headlen;
 	struct sk_buff *skb;
 
+	if (PAGE_SIZE < 8192)
+		truesize = igb_rx_pg_size(rx_ring) / 2;
+	else
+		truesize = SKB_DATA_ALIGN(xdp->data_end - xdp->data_hard_start);
+
 	/* prefetch first cache line of first page */
 	net_prefetch(xdp->data);
 
@@ -8529,11 +8525,10 @@ static struct sk_buff *igb_construct_skb(struct igb_ring *rx_ring,
 		skb_add_rx_frag(skb, 0, rx_buffer->page,
 				(xdp->data + headlen) - page_address(rx_buffer->page),
 				size, truesize);
-#if (PAGE_SIZE < 8192)
-		rx_buffer->page_offset ^= truesize;
-#else
-		rx_buffer->page_offset += truesize;
-#endif
+		if (PAGE_SIZE < 8192)
+			rx_buffer->page_offset ^= truesize;
+		else
+			rx_buffer->page_offset += truesize;
 	} else {
 		rx_buffer->pagecnt_bias++;
 	}
@@ -8546,16 +8541,17 @@ static struct sk_buff *igb_build_skb(struct igb_ring *rx_ring,
 				     struct xdp_buff *xdp,
 				     ktime_t timestamp)
 {
-#if (PAGE_SIZE < 8192)
-	unsigned int truesize = igb_rx_pg_size(rx_ring) / 2;
-#else
-	unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) +
-				SKB_DATA_ALIGN(xdp->data_end -
-					       xdp->data_hard_start);
-#endif
 	unsigned int metasize = xdp->data - xdp->data_meta;
+	unsigned int truesize;
 	struct sk_buff *skb;
 
+	if (PAGE_SIZE < 8192)
+		truesize = igb_rx_pg_size(rx_ring) / 2;
+	else
+		truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) +
+			   SKB_DATA_ALIGN(xdp->data_end -
+					  xdp->data_hard_start);
+
 	/* prefetch first cache line of first page */
 	net_prefetch(xdp->data_meta);
 
@@ -8575,11 +8571,10 @@ static struct sk_buff *igb_build_skb(struct igb_ring *rx_ring,
 		skb_hwtstamps(skb)->hwtstamp = timestamp;
 
 	/* update buffer offset */
-#if (PAGE_SIZE < 8192)
-	rx_buffer->page_offset ^= truesize;
-#else
-	rx_buffer->page_offset += truesize;
-#endif
+	if (PAGE_SIZE < 8192)
+		rx_buffer->page_offset ^= truesize;
+	else
+		rx_buffer->page_offset += truesize;
 
 	return skb;
 }
@@ -8634,14 +8629,14 @@ static unsigned int igb_rx_frame_truesize(struct igb_ring *rx_ring,
 {
 	unsigned int truesize;
 
-#if (PAGE_SIZE < 8192)
-	truesize = igb_rx_pg_size(rx_ring) / 2; /* Must be power-of-2 */
-#else
-	truesize = ring_uses_build_skb(rx_ring) ?
-		SKB_DATA_ALIGN(IGB_SKB_PAD + size) +
-		SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) :
-		SKB_DATA_ALIGN(size);
-#endif
+	if (PAGE_SIZE < 8192)
+		truesize = igb_rx_pg_size(rx_ring) / 2; /* Must be power-of-2 */
+	else
+		truesize = ring_uses_build_skb(rx_ring) ?
+			SKB_DATA_ALIGN(IGB_SKB_PAD + size) +
+			SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) :
+			SKB_DATA_ALIGN(size);
+
 	return truesize;
 }
 
@@ -8650,11 +8645,11 @@ static void igb_rx_buffer_flip(struct igb_ring *rx_ring,
 			       unsigned int size)
 {
 	unsigned int truesize = igb_rx_frame_truesize(rx_ring, size);
-#if (PAGE_SIZE < 8192)
-	rx_buffer->page_offset ^= truesize;
-#else
-	rx_buffer->page_offset += truesize;
-#endif
+
+	if (PAGE_SIZE < 8192)
+		rx_buffer->page_offset ^= truesize;
+	else
+		rx_buffer->page_offset += truesize;
 }
 
 static inline void igb_rx_checksum(struct igb_ring *ring,
@@ -8825,12 +8820,12 @@ static struct igb_rx_buffer *igb_get_rx_buffer(struct igb_ring *rx_ring,
 	struct igb_rx_buffer *rx_buffer;
 
 	rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean];
-	*rx_buf_pgcnt =
-#if (PAGE_SIZE < 8192)
-		page_count(rx_buffer->page);
-#else
-		0;
-#endif
+
+	if (PAGE_SIZE < 8192)
+		*rx_buf_pgcnt = page_count(rx_buffer->page);
+	else
+		*rx_buf_pgcnt = 0;
+
 	prefetchw(rx_buffer->page);
 
 	/* we are reusing so sync this buffer for CPU use */
@@ -8881,9 +8876,8 @@ static int igb_clean_rx_irq(struct igb_q_vector *q_vector, const int budget)
 	int rx_buf_pgcnt;
 
 	/* Frame size depend on rx_ring setup when PAGE_SIZE=4K */
-#if (PAGE_SIZE < 8192)
-	frame_sz = igb_rx_frame_truesize(rx_ring, 0);
-#endif
+	if (PAGE_SIZE < 8192)
+		frame_sz = igb_rx_frame_truesize(rx_ring, 0);
 	xdp_init_buff(&xdp, frame_sz, &rx_ring->xdp_rxq);
 
 	while (likely(total_packets < budget)) {
@@ -8932,10 +8926,9 @@ static int igb_clean_rx_irq(struct igb_q_vector *q_vector, const int budget)
 		xdp_prepare_buff(&xdp, hard_start, offset, size, true);
 		xdp_buff_clear_frags_flag(&xdp);
-#if (PAGE_SIZE > 4096)
 		/* At larger PAGE_SIZE, frame_sz depend on len size */
-		xdp.frame_sz = igb_rx_frame_truesize(rx_ring, size);
-#endif
+		if (PAGE_SIZE > 4096)
+			xdp.frame_sz = igb_rx_frame_truesize(rx_ring, size);
 		skb = igb_run_xdp(adapter, rx_ring, &xdp);
 	}
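
One conditional above deliberately stays in the preprocessor: the width
of page_offset in struct igb_rx_buffer. Struct layout must be fixed at
compile time, so the field is sized against PAGE_SIZE_MAX (assumed here
to be the compile-time upper bound that this series places on the
boot-time selected page size) rather than against the runtime
PAGE_SIZE. A minimal sketch of the reasoning, using the field names
from the hunk above:

struct igb_rx_buffer {
	dma_addr_t dma;
	struct page *page;
/* Size the field for the worst case: if the largest selectable page
 * size is >= 64K, an offset within a page can exceed what __u16 holds.
 */
#if (BITS_PER_LONG > 32) || (PAGE_SIZE_MAX >= 65536)
	__u32 page_offset;
#else
	__u16 page_offset;
#endif
	__u16 pagecnt_bias;
};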