From patchwork Mon Apr 19 12:53:06 2021
X-Patchwork-Id: 12211773
Date: Mon, 19 Apr 2021 12:53:06 +0000
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Eric Dumazet,
    Wei Wang, Cong Wang, Taehee Yoo, Björn Töpel, "Michael S. Tsirkin",
    Alexander Lobakin, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Reply-To: Alexander Lobakin
Subject: [PATCH v2 net] gro: fix napi_gro_frags() Fast GRO breakage due to
    IP alignment check
Message-ID: <20210419125258.5969-1-alobakin@pm.me>
X-Mailing-List: netdev@vger.kernel.org

Commit 38ec4944b593 ("gro: ensure frag0 meets IP header alignment")
did the right thing, but missed the fact that the napi_gro_frags()
logic calls skb_gro_reset_offset() *before* pulling the Ethernet
header into the skb linear space.

Because of that, the introduced check for the frag0 address being
aligned to 4 always fails for it, as the Ethernet header is 14 bytes
long and, with NET_IP_ALIGN, the start of frag0 is not aligned to 4.

Fix this by adding an @nhoff argument to skb_gro_reset_offset() which
tells whether an IP header is placed right at the start of frag0 or
not. This restores Fast GRO for napi_gro_frags(), which became very
slow after the mentioned commit, and preserves the introduced check
to avoid silent unaligned accesses.

Changes from v1 [0]:
 - inline the tiny skb_gro_reset_offset() to let the code be optimized
   more efficiently (esp. for the !NET_IP_ALIGN case) (Eric);
 - pull in Reviewed-by from Eric.

[0] https://lore.kernel.org/netdev/20210418114200.5839-1-alobakin@pm.me
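To make the arithmetic concrete, below is an illustrative, self-contained
userspace sketch (not part of the patch) of the old and new frag0 checks.
It assumes the usual NET_IP_ALIGN == 2 and a 14-byte Ethernet header, and
models frag0 starting at the Ethernet header as in the napi_gro_frags()
path: the old check sees an offset of 2 and always fails, while the new
one tests the network header offset (2 + 14 == 16) and passes; with
nhoff == 0 it behaves exactly as before.

/* Illustrative only -- not from the kernel tree. A userspace model of
 * the old and new frag0 alignment checks, assuming NET_IP_ALIGN == 2
 * and a 14-byte Ethernet header.
 */
#include <stdbool.h>
#include <stdio.h>

#define NET_IP_ALIGN	2
#define ETH_HLEN	14

/* Old check: frag0 itself must be 4-byte aligned */
static bool fast_gro_old(unsigned int frag0_off)
{
	return !NET_IP_ALIGN || !(frag0_off & 3);
}

/* New check: the network header at frag0 + nhoff must be 4-byte aligned */
static bool fast_gro_new(unsigned int frag0_off, unsigned int nhoff)
{
	return !NET_IP_ALIGN || !((frag0_off + nhoff) & 3);
}

int main(void)
{
	/* napi_gro_frags(): frag0 starts at the Ethernet header, which
	 * drivers offset by NET_IP_ALIGN so that the IP header behind
	 * it ends up 4-byte aligned.
	 */
	unsigned int frag0_off = NET_IP_ALIGN;

	printf("old: %d\n", fast_gro_old(frag0_off));            /* 0 - Fast GRO lost */
	printf("new: %d\n", fast_gro_new(frag0_off, ETH_HLEN));  /* 1 - Fast GRO kept */

	/* nhoff == 0 (napi_gro_receive() path): new check == old check */
	printf("rx:  %d\n", fast_gro_new(frag0_off, 0));         /* 0 */

	return 0;
}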
Fixes: 38ec4944b593 ("gro: ensure frag0 meets IP header alignment")
Reviewed-by: Eric Dumazet
Signed-off-by: Alexander Lobakin
---
 net/core/dev.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

--
2.31.1

diff --git a/net/core/dev.c b/net/core/dev.c
index 1f79b9aa9a3f..15fe36332fb8 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5914,7 +5914,7 @@ static struct list_head *gro_list_prepare(struct napi_struct *napi,
 	return head;
 }
 
-static void skb_gro_reset_offset(struct sk_buff *skb)
+static inline void skb_gro_reset_offset(struct sk_buff *skb, u32 nhoff)
 {
 	const struct skb_shared_info *pinfo = skb_shinfo(skb);
 	const skb_frag_t *frag0 = &pinfo->frags[0];
@@ -5925,7 +5925,7 @@ static void skb_gro_reset_offset(struct sk_buff *skb)
 
 	if (!skb_headlen(skb) && pinfo->nr_frags &&
 	    !PageHighMem(skb_frag_page(frag0)) &&
-	    (!NET_IP_ALIGN || !(skb_frag_off(frag0) & 3))) {
+	    (!NET_IP_ALIGN || !((skb_frag_off(frag0) + nhoff) & 3))) {
 		NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0);
 		NAPI_GRO_CB(skb)->frag0_len = min_t(unsigned int,
 						    skb_frag_size(frag0),
@@ -6143,7 +6143,7 @@ gro_result_t napi_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
 	skb_mark_napi_id(skb, napi);
 	trace_napi_gro_receive_entry(skb);
 
-	skb_gro_reset_offset(skb);
+	skb_gro_reset_offset(skb, 0);
 
 	ret = napi_skb_finish(napi, skb, dev_gro_receive(napi, skb));
 	trace_napi_gro_receive_exit(ret);
@@ -6232,7 +6232,7 @@ static struct sk_buff *napi_frags_skb(struct napi_struct *napi)
 	napi->skb = NULL;
 
 	skb_reset_mac_header(skb);
-	skb_gro_reset_offset(skb);
+	skb_gro_reset_offset(skb, hlen);
 
 	if (unlikely(skb_gro_header_hard(skb, hlen))) {
 		eth = skb_gro_header_slow(skb, hlen, 0);
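A note on the two call sites above, based on my reading of the
surrounding code (the hunks are truncated here, so this is not shown in
the diff itself): napi_gro_receive() passes nhoff = 0 because, when
frag0 is usable on that path, it already starts at the network header,
so the check is unchanged there; napi_frags_skb() passes hlen, which as
far as I can tell is the Ethernet header length, so what gets verified
is the alignment of the IP header that follows the not-yet-pulled
Ethernet header rather than the alignment of frag0 itself.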